AI for governance

BCS: AI for Governance and Governance of AI (20-Jan-2021)

BCS, The Chartered Institute for IT (more commonly known simply as the British Computer Society) strikes me as *the* ultimate underdog in the landscape of British educational and science societies. It’s an entirely subjective view, but entities like The Royal Institution, The Royal Society, the British Academy, The Turing Institute, and the British Library are essentially household names in the UK. They have pedigree, they are venerated, they are super-exclusive, money-can’t-buy brands – and rightly so. Even the enthusiast-led British Interplanetary Society (one of my beloved institutions) enjoys well-earned respect in both scientific and entrepreneurial circles, whereas BCS seems to be taken for granted and somewhat invisible outside professional circles, which is a shame. I first came across BCS by chance a couple of years ago, and have since added it to my personal “Tier 1” of science events.

Financial services are probably *the* non-tech industry most frequently appearing in BCS presentations, which isn’t surprising given the amount of crossover between finance and tech (“we are a tech company with a banking license” sort of thing).

One thing BCS does exceptionally well is interdisciplinarity, which is something I am personally very passionate about. In its many events BCS goes well beyond technology and discusses topics such as the environment (their event on the climate aspects of cloud computing was *amazing*!), diversity and inclusion, crime, society, even history (their 20-May-2021 event on the historiography of the history of computing was very narrowly beaten by receiving my first Covid jab as the best thing to have happened to me on that day). Another thing BCS does exceedingly well is balance general-public-level accessibility with really specialist and in-depth content (something RI and BIS also do very well, while, in my personal opinion, RS or Turing… not so well). BCS has the major advantage of “strength in numbers”: its membership is so broad that it can recruit specialist speakers to present on any topic. The speakers tend to be practitioners (at least in the events I attended), and more often than not they are fascinating individuals, which oftentimes turns a presentation into a treat.

The “AI for Governance and Governance of AI” event definitely belongs to the “treat” category thanks to the speaker, Mike Small, who – with 40+ years of IT experience – definitely knew what he was talking about. He started off with a very brief intro to AI (which, frankly, beats anything I’ve seen online for conciseness and clarity). Moving on to governance, Mike very clearly explained the difference between using AI in governance functions (i.e. mostly Compliance / General Counsel and tangential areas) and governance *of* AI, i.e. the completely new (and in many organisations as-yet-nonexistent) framework for managing AI systems and ensuring their compliant and ethical functioning. While the former is becoming increasingly understood and appreciated conceptually (implementation, I would say, varies greatly between organisations), the latter is far behind, as I can attest from observation. The default thinking seems to be that AI is just another new technology, and should be approached purely as a technological means to a (business) end. The challenge is that, with its many unique considerations, AI (Machine Learning, to be exact) can be an opaque means to an end, which is incompatible (if not downright non-compliant) with prudential and / or regulatory expectations. AI is not the first technology to require an interdisciplinary / holistic governance perspective in the corporate setting: cloud outsourcing has been an increasingly tightly regulated technology in financial services since around 2018.

The eight unique AI governance challenges singled out by Mike are:

  • Explainability;
  • Data privacy;
  • Data bias;
  • Lifecycle management (aka model or algorithm management);
  • Culture and ethics;
  • Human involvement;
  • Adversarial attacks;
  • Internal risk management (this one, in my view, may not necessarily belong here, as risk management is a function, not a risk per se).

The list, as well as the comparison of global AI frameworks that followed, was what really grabbed me about the presentation (in the positive sense), because of its themes-based approach to the governance of AI (which happens to be one of my academic research interests). The list of AI regulatory guidances, best practices, consultations, and most recently regulations proper has been growing rapidly for at least two years now (and that is excluding ethics, where guidances started a couple of years earlier and currently far outnumber anything regulation-related). Some of them come from government bodies (e.g. the European Commission), others from regulators (e.g. the ICO), others from industry associations (e.g. IOSCO). I have reviewed many of them, and they all contribute meaningful ideas and / or perspectives, but they’re quite difficult to compare side-by-side because of how new and non-standardised the area is. Mike Small is definitely onto something by extracting comparable, broad themes from long and complex guidances. I suspect that the next step will be for policy analysts, academics, the industry, and lastly the regulators themselves to conduct analyses similar to Mike’s, identifying themes that can be universally agreed upon (for example model management – that strikes me as rather uncontroversial) and those where lines of political / ideological / economic divide are being drawn (e.g. data localisation, or the handling of personal data in general).

One thing’s for sure: regulatory standards (be they soft or hard law) for the governance of AI are beginning to emerge, and there are many interesting developments ahead in the foreseeable future. It’s better to start paying attention at this early stage than to play a painful and expensive game of catch-up in a couple of years.

You can replay the complete presentation here. 

Human expertise in the age of AI

Frank Pasquale “Human expertise in the age of AI” Cambridge talk 26-Nov-2020

Frank Pasquale is a Professor of Law at the Brooklyn Law School and is one of the few “brand name” scholars in the nascent field of AI governance and regulation (Lilian Edwards and Luciano Floridi are other names that come to mind).

I had the pleasure of attending his presentation for the Trust and Technology Initiative at the University of Cambridge back in Nov-2020. The presentation was tied to Pasquale’s forthcoming book, “New Laws of Robotics: Defending Human Expertise in the Age of AI”.

Professor Pasquale opened his presentation by listing what he describes as “paradigm cases for rapid automation” (areas where AI and / or automation have already made substantial inroads, or are very likely to do so in the near future), such as manufacturing, logistics, and agriculture, as well as some I personally disagree with: transport and mining (1). He argues for AI complementing rather than replacing humans as the key to advancing the technology in many disciplines (as well as advancing those disciplines themselves) – a view I fully concur with.

Frank Pasquale (photo: Stifterverband, CC BY 3.0, via Wikimedia Commons)

He then moved on to the critical – though largely overlooked – distinction between governance *of* artificial intelligence and governance *by* artificial intelligence (the latter being obviously more of a concern, whilst the former has until recently been an afterthought, or a non-thought). He remarked that the push for researchers to increasingly determine policy is not technical, but political. It prioritises researchers over subject matter experts, which is not necessarily a good thing (n.b. I cannot say I have witnessed that push in financial services, but perhaps in other industries it *is* happening?)

Prof. Pasquale identifies three possible ways forward:

  1. AI developed and used above / instead of domain experts (“meta-expertise”);
  2. AI and professionals melt into something new (“melting pot”);
  3. AI and professionals maintain their distinctiveness (“peaceable kingdom”).

In conclusion, Pasquale proposes his own new laws of robotics:

  1. Complementarity: Intelligence Augmentation (IA) over Artificial Intelligence (AI) in professions;
  2. Robots and AI should not fake humanity;
  3. Cooperation: no arms races;
  4. Attribution of ownership, control, and accountability to humans.

Pasquale’s presentation and views resonated with me strongly because I arrived at similar conclusions not through academic research, but through observation, particularly in the financial services and legal services industries. Pasquale is one of the relatively few voices who temper some of the (over)enthusiasm regarding how much AI will be able to do for us in the very near future (think: fully autonomous vehicles), as well as some of the doom and gloom regarding how badly AI will upend / disrupt our lives (think: “35% of current jobs in the UK are at high risk of computerisation”). I find it very interesting that for a couple of years now we’ve had all kinds of business leaders, thought leaders, consultants etc. express all kinds of extreme visions of the AI-powered future, but hardly any common-sense, middle-ground views. Despite AI evolving at breakneck speed, it seems that our visions and projections of it evolve more slowly. The acknowledgment that fully autonomous vehicles are proving more challenging and taking longer to develop than anticipated only a few years back has been muted, to say the least. Despite the frightening prognoses regarding unemployment, it has actually been at record lows for years now in the UK, even during the pandemic (2)(3). [Speaking from closer professional proximity: paralegals were one profession singled out as being under immediate existential threat from AI – and I am not aware of that materialising in any way. On the contrary, pay competition for junior lawyers in the UK has recently reached record heights (4)(5).]

It is obviously incredibly challenging to keep up with a technology developing faster than just about anything else in the history of mankind – particularly for those of us outside the technology industry. Regulators and policy makers (to a lesser extent also lawyers and legal scholars) have in recent years been somewhat on the back foot in the face of the rapid development of AI and its applications. However, thanks to some fresh perspectives from people like Prof. Pasquale, this seems to be turning around. Self-regulation (which, as financial services proved in 2008, sometimes fails spectacularly) and abstract, existential, high-level discussions are being replaced with concrete, versatile proposals for policy and regulation, which focus on industries, use cases, and outcomes rather than the details and nuances of the underlying technology.



(1) Plus the unique case of AI in healthcare. While Pasquale adds healthcare to the “rapid paradigm shift” list, the pandemic-era evidence raises doubts over this.





The Polish AI landscape

Wed 29-Apr-2020

The countries most synonymous with “AI powerhouse” are without a doubt the US and China. Both have economies of scale, resources, and strategic (not just business) interests in being at the forefront of AI. The EU as a whole would probably come third, although there is always a degree of subjectivity in these rankings[i]. The UK would probably come next (owing to Demis Hassabis and DeepMind, as well as thriving scientific and academic communities). In any case, it’s rather unlikely that Poland would be listed in the top tier. Poland is known for being an ideal place to set up corporate back- or (less frequently) middle-office functions: much cheaper than Western Europe, with a huge pool of well-educated talent, in the same time zone as the rest of the EU. A great alternative (or complement) to setting up a campus in India, but not exactly a major player in AI research and entrepreneurship. Plus, Poland and its young democracy (dating back to 1989) are currently going through a bit of a social, identity, and political rough patch. Not usually a catalyst or enabler of cutting-edge technology.

And despite all that (and despite being a mid-sized country at best, with 38 million people, and ranking #70 globally in GDP per capita in 2018 out of 239 countries and territories[ii]), for some mysterious reason Poland still made it to #15 globally in AI (using the total number of companies as a metric) according to the China Academy of Information and Communications Technology (CAICT) Data Research Centre Global Artificial Intelligence Industry Data Report[iii], as kindly translated[iv] by my fellow academic Jeffrey Ding of Oxford University (whose ChinAI newsletter is brilliant – I encourage everyone to subscribe and read). I found this news so unexpected that it was the inspiration behind the entire post below.

The recent (2019) Map of the Polish AI from the Digital Poland Foundation reveals a vibrant, entrepreneurial ecosystem with a number of interesting characteristics. The official Polish AI Development Policy 2019 – 2027, released around the same time by a multidisciplinary team working across a number of government ministries, paints a picture of impressive ambitions, though experts have questioned their realism.

The Polish AI scene is very young (50% of the 160 organisations polled introduced AI-based services in 2017 or 2018, the most recent years in the survey). Warsaw (unsurprisingly) steals the top spot, with 85% of all companies being located in one of the 6 major metropolitan areas. The companies tend to be small: only 22% have more than 50 people; 59% have 20 or fewer. And let’s not conflate company headcount with AI teams proper – over 50% of the companies surveyed have AI teams of 5 employees or fewer. Shortage of talent is a truly global theme in AI (a framing I personally don’t fully agree with – companies with the resources to offer competitive packages [sometimes affectionately referred to as “basketball player salaries”] have no shortage of candidates; whether this level of pay is justifiable [the very short-lived bonanza for iOS app developers circa 2008 comes to mind] and fair to the smaller players is a different matter). The additional challenge in Poland is that Polish salaries simply cannot compete with what is on offer within a 3-hour flight – many talented computer scientists are naturally tempted to move to Berlin, Paris, London, or other major European AI hubs, where there are more opportunities, more developed AI ecosystems, and much, much better money to be made.

What stands out is the ultra-close connection between the business and academic communities. While the same is the case in most countries seriously developing AI, some of those countries are home to global tech corporates whose financial resources, and thus R&D capabilities, give them the luxury of developing on their own, on par with (if not ahead of) leading research institutions. These corporates’ resources also enable them to poach world-class talent (e.g. Google hiring John Martinis to lead its quantum computing efforts [he has since left…], Facebook appointing Yann LeCun as head of AI research, or Google Cloud poaching [albeit briefly] Fei-Fei Li as its Chief Scientist of AI/ML). In Poland this does not apply – the country does not have any large (or even mid-sized) home-grown innovative tech firms. The ultra-close connection between business and academia is a logical consequence of these factors – plus, in a 38-million-strong country with relatively few major cities serving as business and academic hubs, the entire ecosystem simply can’t be very populous.

The start-up scene might in part be constrained by the limited amount of available funding (anecdotally, the angel investor / VC scene in Poland is very modest). However, the Digital Poland report states:

Categorically, as experts point out, the main barrier to the development of the Polish AI sector is not the absence of funding or expertise but rather a very limited demand for solutions based on AI.

My personal contacts echo this conclusion – they are not that worried about funding. Anecdotally, there is a huge pool of state grants (NCBR) with limited competition for them (although post-COVID-19 they may all but evaporate).

Multiple experts cited by Digital Poland list domestic demand as the primary concern. According to the survey, potential local clients simply do not understand the technology well enough to realise how it can benefit them (41% of responses in a multiple-choice questionnaire – the single highest cause; [client] staff not understanding AI had its own mention at 23%, and [managers] not understanding AI came in at 22%).

The AI market in Poland is focused on more commercial products (Big Data, sales, analytics) rather than cutting-edge innovative research. This is understandable – in an ecosystem of limited size with very limited local demand, the start-ups’ decision to develop more established, monetisable applications which can be sold to a broad pool of global clients is a reasonable business strategy.

One side-conclusion I found really interesting is that there’s quite a vibrant conference and meetup scene given how nascent and “unsolidified” the AI ecosystem is.

The Polish AI Policy document is an interesting complement to the Digital Poland report. While the latter is a thoroughly researched snapshot of the Polish AI market right here, right now (2019 to be exact), the policy document is more of a mission statement – a mission of impressive ambitions. I always support bold, ambitious, and audacious thinking – but experience has taught me to curb my enthusiasm as far as Polish policy-making is concerned. The grand visions for 2019 – 2027 come without even a draft of a roadmap. The document is also, unfortunately, quite pompous and vacuous at times.

The report is rightly concerned about the impact on jobs, concluding that more jobs are expected to be created than lost, and that some of this surplus should benefit Poland. One characteristic of the Polish economy is that it (still) has a substantial number of state-owned enterprises in key industries (banking, petrochemicals, insurance, mining and metallurgy, civil aviation, defence), which are among the largest in their industries on a national scale. These companies have the size and valid business cases for AI, yet they don’t seem ready (from education and risk-appetite perspectives) to commit to it. State-level policy could provide the nudge (if not an outright push) towards AI and emerging technologies, yet, unfortunately, that is not happening.

The report rightly acknowledges the skills gap, as well as some issues on the education side (dwindling PhD rates, and a (still!) relatively low level of interest in AI among Polish students, as measured by thesis subject choices). The quality of Polish universities merits its own article (its own research, in fact). On the one hand, anecdotal and first-hand experience leads me to believe that Polish computer scientists are absolutely top-notch; on the other, the university rankings are… unforgiving (there are literally two Polish universities on the QS Global 500 list for 2020, at positions #338 and #349[v]).

Last but not least, a couple of Polish AI companies I like (selection entirely subjective):

  • Sigmoidal – AI business/management consultancy.
  • – AI-aided sales and customer relationship management (CRM) solutions.
  • – behavioural biometrics solutions.

Disclaimer: I have no affiliations with any of the abovementioned companies.

[i] Are we looking at corporate research spending? Government funding/grants for academia? Absolute amounts or % of GDP? How reliable are the figures and how consistent are they between different states? etc. etc.

[ii] Source: World Bank (

[iii] You can read the original Chinese version here:

[iv] Jeff’s English translation can be found here:


UCL Digital Ethics Forum: Translating Algorithm Ethics into Engineering Practice

Tue 04-Feb-2020

On Tue 04-Feb-2020 my fellow academics at UCL held a workshop on algorithmic ethics. It was organised by Emre Kazim and Adriano Koshiyama, two incandescently brilliant post-docs from UCL. The broader group is run by Prof. Philip Treleaven, who is a living legend in academic circles and an indefatigable innovator with an entrepreneurial streak.

Algorithmic ethics is a relatively new concept. It’s very similar to AI ethics (a much better-known concept), with the difference being that not all algorithms are AI (meaning that algorithmic ethics is a slightly broader term). Personally, I think that when most academics or practitioners say “algorithmic ethics” they really mean “ethics of complex, networked computer systems”.

The problem with algorithmic ethics doesn’t start with them being ignored. It starts with them being rather difficult to define. Ethics is a bit like art – fairly subjective and hard to pin down. Off the top of our heads we can probably think of cases of (hopefully unintentional) discrimination against job applicants on the basis of their gender (Amazon), varying loan and credit card limits offered to men and women within the same household[i] (Apple / Goldman), or online premium delivery services more likely to be offered to white residents than black[ii] (Amazon again). And then there’s the racist soap dispenser[iii] (unattributed).

These examples – deliberately broad, unfortunate and absurd in equal measure – show how easy it is to “weaponise” technology without any explicit intention of doing so (I assume that none of the entities above intentionally designed their algorithms to be discriminatory). Most (if not all) of the algorithms above were AIs which trained themselves off a vast training dataset, or optimised a business problem without sufficient checks and balances in the system.

With all of the above, most of us will just know that they were unethical. But if we were to go from an intuitive to a more explicit understanding of algorithmic ethics, what would it encompass exactly? Rather than try to reinvent ethics, I will revert to trusted sources: one is the Alan Turing Institute’s “Understanding artificial intelligence ethics and safety”[iv], and the other is a 2019 paper, “Artificial intelligence: the global landscape of ethics guidelines”[v], co-authored by Dr. Marcello Ienca of ETH Zurich, whom I had the pleasure of meeting in person at the Kinds of Intelligence conference in Cambridge in 2019. The latter is a meta-analysis of 84 AI ethics guidelines published by various governmental, academic, think-tank, and private entities. My pick of the big-ticket items would be:

  • Equality and fairness (absence of bias and discrimination)
  • Accountability
  • Transparency and explicability
  • Benevolence and safety (safety of operation and of outcomes)

There is an obvious fifth – privacy – but I have slightly mixed feelings about throwing it into the mix with the abovementioned considerations. It’s not that privacy doesn’t matter (it matters greatly), but it’s not as unique to AI as the above. Privacy is a universal right and consideration, and doesn’t (in my view) feed and map to AI as directly as, for example, fairness and transparency.

Depending on the context and application, the above will apply in different proportions. Fairness will be critical in employment, the provision of credit, or criminal justice, but I won’t really care about it inside a self-driving car (or a self-piloting plane – they’re coming!) – there I will care mostly about my safety. Privacy will be critical in the medical domain, but will not apply to trading algorithms in finance.

The list above contains (mostly humanistic) concepts and values. The real challenge (in my view) is two-fold:

  1. Defining them in a more analytical way.
  2. Subsequently “operationalising” them into real-world applications (both in public and private sectors).

The first speaker of the day, Dr. Luca Oneto from the University of Genoa, presented mostly in reference to point #1 above. He talked about his team’s work on formulating fairness in a quantitative manner (basically “an equation for fairness”). While the formula was mathematically a bit above my pay grade, the idea itself was very clear, and I was sold on it instantly. If fairness can be calculated, with all (or as much as possible) ambiguity removed from the process, then the result will not only be objective, but also comparable across different applications. At the same time, it didn’t take long for some doubts to set in (although I’m not sure to what extent they were original – they were heavily inspired by some of the points raised by Prof. Kate Crawford in her Royal Society lecture, which I covered here). In essence, measuring fairness seems doable when we can clearly define what constitutes a fair outcome – which, in many cases in real life, we cannot. Let’s take two examples close to my heart: fairness in recruitment, and the Oscars.

With my first degree being from a not-so-highly-ranked university, I know for a fact that I have been auto-rejected by several employers – so (un)fairness in recruitment is something I feel strongly about. But let’s assume the rank of one’s university is a decent proxy for one’s skills, and focus on gender representation instead. What *should* the fair representation of women be in typically male-dominated environments such as finance or tech? It is well documented that women drop out of STEM careers at a high rate – around 40% of them – and widely debated as to why[vi][vii]. The explanations range from the “hegemonic and masculine culture of engineering” to the challenges of combining work and childcare disproportionately affecting new mothers. What would be the fair outcome in tech recruitment, then? A percentage representation of women in line with the present-day average? A mandatory affirmative-action-like quota? (If so, who would determine the fairness of the quota, and how?) 50/50 (with a small allowance for non-binary individuals)?

And what about additional attributes of potential (non-explicit) discrimination, such as race or nationality? The 2020 Oscars provided a good case study. There were no women nominated in the Best Director category (a category which historically has been close to 100% male, with exactly one female winner – Kathryn Bigelow for “The Hurt Locker” – five female nominees, zero black winners, and six black nominees), and only one black person across all the major categories combined (Cynthia Erivo for “Harriet”). Stephen King caused outrage with his tweet about how diversity should be a non-consideration – only quality (he later graciously explained that this was not yet the case today[viii]). Then the South Korean “Parasite” took the Best Picture gong – the first time in Academy Awards history that the top honour went to a foreign-language film. My question is: what exactly would be fair at the Oscars? If it was proportional representation, then some 40% of the Oscars should be awarded to Chinese movies and another 40% to Indian ones, with the remainder split among European, British, American, Latin American, and other international productions. Would that be fair? Should a special quota be reserved for American movies, given that the Oscars and the Academy are American institutions? Whose taste are the Oscars meant to represent, and how can we measure the fairness of that representation?

All these thoughts flashed through my mind as I stared (somewhat blankly, I admit) at Dr. Oneto’s formulae. The formulae are a great idea, but determining the distributions to measure fairness against is… much more of a challenge.
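Dr. Oneto’s actual formulation was well beyond what I could reproduce here, but to make the idea of “an equation for fairness” concrete, here is a minimal sketch of one of the simplest quantitative fairness metrics, the demographic parity gap. To be clear: the metric choice and function name are my own illustration, not anything from the talk.

```python
from collections import defaultdict

def demographic_parity_gap(outcomes, groups):
    """Spread between the highest and lowest positive-outcome rates
    across groups; 0.0 means every group receives positive outcomes
    at the same rate under this (deliberately simple) definition."""
    totals, positives = defaultdict(int), defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += int(outcome)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical hiring outcomes (1 = offer made) for two groups:
gap = demographic_parity_gap([1, 0, 1, 1], ["a", "a", "b", "b"])  # 1.0 - 0.5 = 0.5
```

Note that the hard part, exactly as argued above, is not computing the gap but deciding which baseline distribution a gap of zero should be measured against.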

The second speaker, Prof. Yvonne Rogers of UCL, tackled AI transparency and explainability. She covered the familiar topics of AIs being black boxes and the need for explanations in important areas of life (such as recruitment or loan decisions). Her go-to example was AI software scrutinising the facial expressions of candidates during the recruitment process based on unverified science (as upsetting as that is, it’s nothing compared to the fellas at Faception, who declare they can identify whether somebody is a terrorist by looking at their face). While my favourite approach towards explainable AI – counterfactuals – was not mentioned explicitly, it was definitely there in spirit. Overall it was a really good presentation on a topic I’m quite familiar with.

The third speaker, Prof. David Barber of UCL, talked about privacy in AI systems. In his talk he strongly criticised present-day approaches to data handling and ownership (hardly surprising…). He presented an up-and-coming concept called “randomised response”. Its aim is described succinctly in his paper[ix] as “to develop a strategy for machine learning driven by the requirement that private data should be shared as little as possible and that no-one can be trusted with an individual’s data, neither a data collector/aggregator, nor the machine learner that tries to fit a model”. It was a presentation I should have been interested in – and yet I wasn’t. I think it’s because in my industry (investment management) privacy in AI is less of a concern than it would be in recruitment or medicine. Besides, IBM sold me on homomorphic encryption during their 2019 event, so I was somewhat less interested in a solution that (if I understood correctly) “noisifies” part of the personal data in order to make it untraceable, as opposed to homomorphic encryption’s complete, proper encryption.
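Prof. Barber’s exact protocol is in his paper, but the classic survey-statistics idea behind randomised response is simple enough to sketch in a few lines: each respondent tells the truth only with some probability, so no individual answer can be trusted (hence the deniability), yet the aggregate can still be de-noised. The parameter names and the 75/25 split below are my own illustrative choices, not from the talk.

```python
import random

def randomised_response(true_answer: bool, p_truth: float = 0.75) -> bool:
    """Report the true answer with probability p_truth; otherwise
    report a fair coin flip. Any single response is deniable,
    because it may well be pure noise."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

def estimate_true_rate(responses, p_truth: float = 0.75) -> float:
    """Invert E[observed] = p_truth * true_rate + (1 - p_truth) * 0.5
    to recover an unbiased estimate of the true 'yes' rate."""
    observed = sum(responses) / len(responses)
    return (observed - (1 - p_truth) * 0.5) / p_truth

random.seed(42)
# 10,000 respondents, 30% of whom truthfully hold a sensitive attribute:
noisy = [randomised_response(i < 3000) for i in range(10000)]
est = estimate_true_rate(noisy)  # close to 0.30, without any trustworthy individual answer
```

The contrast with homomorphic encryption is then easy to see: here the protection comes from injected noise and aggregation, not from cryptography.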

In the only presentation from a business perspective, Pete Rai from Cisco talked about his company’s experiences with broadly defined digital ethics. It was a very useful counterpoint to the at times slightly too philosophical or theoretical academic presentations that preceded it. Like many others, though, I’m not sure to what extent it really related to digital ethics or AI ethics – I think it was more about corporate ethics and conduct. That didn’t make the presentation any less interesting, but I think it inadvertently showed how broad and ambiguous an area digital ethics can be – it means very different things to different people, which doesn’t always help push the conversation forward.

The event was part of a series, so it’s quite regrettable that I hadn’t heard of it before. But that’s just a London thing – one may put all the work and research into trying to stay in the loop of relevant, meaningful, interesting events – and some great events will slip under the radar nonetheless. There are some seriously fuzzy, irrational forces at play here.

Looking forward to the next series!

The Alan Turing Institute presents Lilian Edwards: “Regulating Unreality”


On Thursday 11-Jul-2019 the Alan Turing Institute served up a real treat in the form of a lecture by Professor Lilian Edwards[1]. To paraphrase Sonny Bono, Lilian Edwards is not just a professor, she’s an experience. Besides being a prolific scholar at the bleeding edge of law and regulation, she is one of the most engaging and charismatic speakers I have ever met. I first heard Prof. Edwards present at one of the New Scientist Instant Expert events (of which I’m a big fan, btw), and I have been her fan ever since.

After hearing a comprehensive and at times provocative lecture on legal and social aspects of AI and robotics (twice!) in 2017, in 2019 Prof. Edwards focused on something even more cutting edge: social and legal aspects of deepfakes.

Delivered in the stunning Brutalist surroundings of the Barbican and hosted by Professor Adrian Weller, the lecture started by revisiting the first well-known deepfake: Gal Gadot’s face photorealistically composited onto the body of an adult film actress in a 2017 gif from Reddit.

Most things on the Internet start with (or end with, or lead to) porn – it’s simply the way it is. However, developing technologies which allow access to, streaming, or sharing of images of real consenting adults engaging in enjoyable, consensual activities is one thing (the keyword, in my personal opinion, being “consent”) – deepfaking explicit images of celebrities or anyone else is vulgar and invasive (try to imagine photos of your mom, daughter, or even yourself being digitally, photorealistically “pornified”, and then think how it would make you feel). As awful as that is, deployment of deepfake technology in politics is something else entirely. Deepfakes are likely the new fake news, albeit taken to a new level: a seamless audio-visual distortion of reality.

Prof. Edwards reminded everyone that image manipulation was deployed in politics in the pre-deepfake era – using very simple techniques, often to very successful effect: the Nancy Pelosi video slowed down to give the appearance of her slurring her words and seeming drunk, and the White House video of CNN reporter Jim Acosta. Furthermore, she pointed out that deepfakes are not necessarily the end of the road: they are just one (disturbing) element of seamlessly generated/falsified “synthetic reality”.

Among the many threats (“use cases”) posed by deepfakes, the ones that resonated with me the strongest were:

  • Deepfakes are not just about presenting something fake as something real, they’re also about discrediting something real as fake.
  • Plenty of potential uses in both civil and criminal smears.
  • Deepfakes create plausible deniability for anything and everything.

Moving on to legal and social considerations, Prof. Edwards looked at a plethora of existing and proposed legal and technological solutions across the US, UK, and EU, expertly pointing out their shortcomings and/or unfeasibility. What moved me the most was the similarity (at least in terms of underlying concept and intent) between deepfakes and revenge porn, which I find absolutely and utterly disgusting. Another consideration was the question of one, objective, canonical reality (particularly online): while it wasn’t raised in great detail (we didn’t take the quantum physics route), it resonated strongly with me. Lastly (in true Lilian Edwards cutting-edge/provocative style) there was the question of whether reality should be considered a human right.

In terms of big, open questions, I think two stand out particularly prominently:

  • What should be the strategy: ex ante prevention or post factum sanctions?
  • Whom to prosecute: the maker, the distributor, the social media platform, the search engine?

Overall, it was a fantastic, thoroughly researched and brilliantly delivered lecture (and at GBP 4.60 it wasn’t just the hottest ticket in town, but very likely the cheapest one too). You can watch the complete recording on Turing’s YouTube channel. You can hear me around the 01:10:00 mark, raising my concerns about deepfake-driven detachment from reality among younger generations.


Polish AI Manifesto

Polish academic community releases Manifesto for Polish AI


The big press release of the season (possibly of the year as well, even though we’re less than halfway through) is without a doubt the European Commission’s Ethical Guidelines for Trustworthy AI1 (published in Apr-2019). Most developed states have their own AI strategies. That list includes my home country, Poland, which released an extensive AI strategy document in Nov-2018. The document (which I may review separately on another occasion) is not a strategy proper; it’s more of a summary of key considerations in formulating a full-blown strategy.

It is likely that the lack of an explicit strategy (and lack of explicit commitment to funding) led the Polish academic community to publish their own Manifesto for Polish AI.

Poland may not seem like an obvious location to launch an AI (or any other tech) startup, but anecdotal evidence suggests otherwise. There are pools of funds (including state subsidies and grants) available to tech entrepreneurs and a relatively low number of entrepreneurs competing for those funds. The most popular explanations for this are the state’s attempts to stimulate the digital economy and to stem (maybe even reverse) the pervasive brain drain, which started when Poland joined the EU in 2004.

Entrepreneurship is one thing, but research is something different altogether – at least in Poland. In the absence of home-grown innovation leaders the likes of Amazon or DeepMind, research is almost entirely confined to academic institutions. Reliant entirely on state funding, Polish academia has always been underfunded: in 2018 Polish GERD (gross domestic expenditure on research and development) was 1.03% of GDP, compared to the EU average of 2.07%. In all fairness, the growth of GERD in Poland has been rapid (from 0.56% in 2007) compared to the EU (1.77% in 2007), but the current expenditure is still barely half of the EU average (not to mention the global outliers: Israel and South Korea both at 4.2%, Sweden at 3.25%, Japan at 3.14%).

Between flourishing tech entrepreneurship (18 companies on Deloitte’s FAST 50 Central and Eastern Europe list are Polish5 – though, arguably, none of them are cutting-edge technology powerhouses like Darktrace or Onfido, both 2018 laureates of the FAST 50 UK list6), widely respected tech talent, and a nascent AI startup scene (sigmoidal, growbots), Polish academics clearly felt a little left out – or wanted to make sure they wouldn’t be.

Consequently, in early 2019 Polish academics from Poznan University of Technology’s Institute of Computing Science published the Manifesto for Polish AI7, which has since been signed by over 300 leading academics, researchers, and business leaders.

The manifesto is compact. Below is an abridged summary:

Considering that:

  • AI innovations can yield particularly high economic returns, as evidenced by the fact that AI is currently a primary focus of Venture Capital firms.
  • The significance of AI for economic growth, social development, and defence is recognised by world leaders.
  • Innovative economies worldwide are based on a strong commitment to science and tertiary education. The world’s most innovative countries, such as South Korea, Israel, Scandinavia, the USA, and increasingly China, also happen to contribute the highest proportions of their GDPs to R&D and education.
  • Polish academia has significant potential in the field of AI.
  • A barrier to the growth of innovative start-ups in Poland is not just lack of capital, but also, if not mainly, too low a number of start-ups themselves.
  • Innovative start-ups worldwide are developed mostly within academic ecosystems.
  • There is an insufficient number of IT specialists in Poland, and even more so of AI specialists.

We are calling on the decision makers to develop a strategy for growth of key branches of R&D and tertiary education (in particular AI), and to take decisive actions to fulfil that strategy:

  • Severalfold increase in expenditure on primary research and implementation of AI to reach parity with the most innovative countries.
  • Growth and integration of R&D teams working in the field of AI and improvement of their collaboration with the industry.
  • Ensuring PhDs and other academics doing AI research receive grants and remuneration comparable to those in the (AI) industry.
  • Additional funding for opening new AI university courses and broadening the intake into the existing ones.
  • Funding for entrepreneurship centres (modelled on the Martin Trust Center for MIT Entrepreneurship) on Polish university campuses, offering students and employees entrepreneurship education, seed funding, co-working spaces etc.

Funds committed to the above will be a great investment, which will pay for itself many times over in the form of greater innovation, a higher number of AI experts on the job market, and in effect faster economic and social growth, and better defence.

There is nothing controversial in the manifesto – in fact, probably most academics worldwide would sign a copy with “Poland” replaced with their respective country name. It may be a little idealistic / unrealistic (reaching R&D parity as a % of GDP with the global leaders is… ambitious), but that doesn’t diminish its merit. I for one would be ecstatic to see Poland committing 3% or 4% of GDP to R&D. Separately, it’s really nice to see that a compelling argument can be presented on literally one page. Less is almost always more.

Royal Society - Kate Crawford: You and AI: machine learning and bias (aka Just an engineer: the politics of AI)

Royal Society, Tue 17-Jul-2018

The lecture was one of eight in the broad “You and AI” series organised by the Royal Society (and, sadly, the only one I attended). This particular lecture was delivered by an AI grandee: Microsoft’s Principal Researcher and NYU’s Distinguished Research Professor Kate Crawford.

The lecture was hosted by Microsoft’s Chris Bishop, who himself gave a very interesting and interactive talk on AI and machine learning at the Royal Institution back in 2016 (it is also posted on RI’s YouTube channel here).

What made Prof. Crawford’s presentation stand out among many others (AI is a hot topic, and there are multiple events and meetups on it literally every day) was that it didn’t just focus on the technology in isolation, but right from the outset framed it within the broader context of ethics, politics, biases, and accountability. One might think there’s nothing particularly original about that, but there is. Much of the conversation to date has focused on the technology in a sort of clinical sense, rarely referencing the broader context (with occasional exceptions for concerns about the future of employment, which is understandable, given it’s one of the top white-collar anxieties of this day and age). I think that in talking about the dangers of being “seduced by the potential of AI”, she really hit the nail on the head.

The lecture started with Prof. Crawford attempting to define AI, which makes more sense than it might seem: just like art, AI means different things to different people:

  • Technical approaches. She rightly pointed out that what is commonly referred to as AI is a mix of a number of technologies, such as machine learning, pattern recognition, and optimization (echoing a very similar mention by UNSW’s Prof. Toby Walsh in one of the New Scientist Instant Expert workshops in London last year; also – of literally *all* the people – Henry Kissinger (*the* Henry Kissinger) made the same observation not long ago in his Atlantic article: “ultimately, the term artificial intelligence may be a misnomer. To be sure, these machines can solve complex, seemingly abstract problems that had previously yielded only to human cognition. But what they do uniquely is not thinking as heretofore conceived and experienced.”). She also restated what many enthusiasts seem to overlook: AI techniques are in no way intelligent in the way humans are intelligent. The use of “I” in “AI” may be somewhat misleading.
  • Social practices: the decision makers in the field, and how their backgrounds, views, and biases shape the applications of AI in society.
  • Industrial infrastructure. Using a basic Alexa interaction as an example, Prof. Crawford stated that – contrary to popular belief in a thriving AI start-up scene – only a few tech giants have the resources to provide and maintain it.

She took AI outside of its purely technical capacity and made very clear exactly how political – and potentially politicised – AI can be, and that we don’t think enough about the social context and the ethics of AI. She illustrated that point by pulling up multiple data sets used to train machine learning algorithms and making very clear how supposedly benign and neutral data is in fact unrepresentative and highly skewed (along the “usual suspect” lines of race, sex, sexism, representativeness, societal / gender roles).

Prof. Crawford talked about the dangers of putting distance between AI engineering and the true human cost of its consequences, including biases reinforced by poor-quality training data, against the backdrop of an ongoing resurgence of totalitarian views and movements. On a more positive note, she mentioned the emergence of a “new industry” of fairness in machine learning – at the same time asking who will actually define fairness and equality, and how. She discussed three approaches she had herself researched (improving accuracy, scrubbing to neutral, mirroring demographics), pointing out that we still need to determine what exactly we define as neutral and / or representative.

She mentioned feedback loops more than once, in the sense of reinforcing stereotypes in the real world, with systems and algorithms becoming more inscrutable (“black box”) and disguising very real political and social considerations and implications as purely technical, which they aren’t. She quoted Brian Brackeen, the (African-American) CEO of the US facial recognition company Kairos, who stated that his company would not sell its systems to law enforcement as they are “morally corrupt” (you can read his op-ed in its entirety on TechCrunch).

As a regular on the London tech / popular science events scene, I found it very interesting to pick out some very current and very relevant references to speeches given by other esteemed speakers (whether these references were witting or unwitting, I don’t know). In one of them, Prof. Crawford addressed the fresh topic of the “geopolitics of AI”, which was introduced and covered in detail by Evgeny Morozov in his Nesta FutureFest presentation (titled “the geopolitics of AI”… – you can watch it here); in another, she mentioned the importance of ongoing conversation just as Mariana Mazzucato talks about the hijacking of the economic narrative(s) by vested corporate and political interests in her new book (“the value of everything”) and the accompanying event at the British Library (which you can watch here). Lastly, Crawford’s open questions (which, in my view, could have been raised a little more prominently) about the use of black-box algorithms without a clear path of appeal in the broadly defined criminal justice system echoed the research of Prof. Lilian Edwards of Strathclyde University into legal aspects of robotics and AI.

On the iconoclastic side, I found it unwittingly ironic that a presentation about democratising AI (both the technology and the debate around it) and concerns about smaller players being crowded out by acquisitive FAANGs was delivered by a Microsoft employee at an event series hosted by Google.

You can watch the entire presentation (67 minutes) here. For those interested in Royal Society’s report on machine learning (referenced in the opening speech by Prof. Bishop), you can find it here.


Nick Bostrom "superintelligence" book review


Thu 24-Oct-2018

There are few things less fashionable than reading a book that was all the rage two years prior. One might as well not bother – the time for all the casual watercooler / dinner party mentions is gone, and the world has moved on. However, despite the tape delay caused by “life”, and with all social credit devalued, I decided to make an effort and reach for it nonetheless.

In terms of content, there’s a lot in it – and I mean *a lot*. Regardless of discipline, in non-fiction a lot can be a great thing, but also challenging (one can only process, let alone absorb, so much). However, a lot in what is essentially a techno-existential divagation is like… really a lot.

For starters, Bostrom deserves credit for defining the titular superintelligence as “a system that is at least as fast as a human mind and vastly qualitatively smarter”, and for consistently using the correct terminology. The term “AI” – as is increasingly called out these days (“Ultimately, the term artificial intelligence may be a misnomer. To be sure, these machines can solve complex, seemingly abstract problems that had previously yielded only to human cognition. But what they do uniquely is not thinking as heretofore conceived and experienced.”) – is routinely misused and applied to machine-learning and / or statistical systems which are all about pattern recognition, statistical discrimination, and accurate predictions; none of which are close to AI / AGI (artificial general intelligence) proper (ironically, as I’m writing this Microsoft’s AI commercial ft. Common – which conflates AI, VR, AR, and IoT and throws them all into one supermixed bag – is playing in the background; and btw, Common? Whatever made Satya Nadella choose a vaguely recognizable actor when he could choose and afford a great actor? Or a scientist for that matter?).

Anyway, back to “superintelligence”. One of the most recurrent and profound themes in Bostrom’s book is that we cannot predict nor comprehend how an agent orders of magnitude smarter than the smartest of humans would reason; what goals it might have; how it might go about reaching them. It’s kind of a tautology (“we cannot comprehend the incomprehensible; we cannot predict the unpredictable”), but it still makes one think.

Separately – and it gave me pause many times as I was reading – the word “privilege” is used a lot these days: male privilege, white privilege, Western privilege, straight privilege etc. etc. These privileges are bad, but I think most of us humans – inequalities notwithstanding – are used to and quite fond of the homo sapiens privilege of being the smartest (to our knowledge…) species on Earth. “Smart” may not be the most important attribute for everyone (some people bank on their looks, others on humour, others still on personality, others on love), but I think few people don’t like to think of themselves as broadly smart. Some – myself included – choose to bank most of ourselves, our lives, our sense of self, our self-esteem on being smart / clever / educated. What happens when the best, smartest, sharpest, wittiest possible version of me becomes available as an EPROM chip or a download? If ever / whenever the time comes when my intellect becomes vastly inferior to an artificial one, how will I be able to live? What will drive me? I don’t want to be kept as some superintelligence’s ******* pet…

The number of ideas, technologies, and considerations listed in Bostrom’s book is quite staggering. He’s also not shy to think big, really big (computronium, colonizing the Hubble volume, lastly – what if we’re all living in a simulation in the first place?) – and I love it (the Asimov-loving kid inside me loves it too). Separately though… Bostrom seems quite confident that the first superintelligence would end up colonizing and transforming the observable Universe (there would be no 2nd superintelligence… and even if there were, there is only one Universe we know of for sure). However – as far as our still rather basic civilisation can observe – the universe is neither colonized nor transformed (unless we are all living in a simulation, in which case it may well be). Has the path not been taken before…? To be the first (or only) civilisation in the history of the universe capable of developing AI sounds like being really, really lucky… almost *too* lucky. Then again, it may be a case of trying to comprehend the incomprehensible and predict the unpredictable.

It may not be the best-written book ever, but the guy did his homework and knows his stuff. Separately, the author deserves credit for not looking at AI in a technological silo, but broadly: from neuroscience, through politics, all the way to philosophy and ethics. For someone who’s a big believer in the future being more interdisciplinary (i.e. myself), that’s confirmation that having wide and diverse interests is worthwhile.

Reaching for hit books is a little bit like reaching for hit albums (regardless of the genre) – sometimes great ones deservedly become hits, and sometimes substance-less **** ones undeservedly become hits all the same. However, despite attaining recognition similar to “4 hour week” or “blink!”, “superintelligence” does actually have substance – and plenty of it. Along with substance comes a certain challenge in reading and following, which makes me wonder how many people who bought the book bought it to read it, and how many merely bought it to be seen reading it in public (much like this). The substance of “superintelligence” can actually be really overwhelming – not just in terms of the mind-boggling nature of its content (although that too), but in sheer volume. In his efforts to cram in as much substance as possible, Bostrom forgot that in order for a book to be great it needs to be – has to be – well written. “Superintelligence” is a lot of things, but well written it, unfortunately, is not. The first hundred pages are particularly tough to get through – the author could have easily trimmed them to 30–40, making the material more concise and comprehensible to a regular reader (it is, after all, meant to be a “popular science” book, not Principia Mathematica – Joe Average should be able to follow it). Yuval Noah Harari’s “homo deus” is a great example of a book that’s got substance but reads really, really well (on an unrelated note: YNH is a much better auteur than he is a public speaker). Nassim Taleb’s “black swan” is another (though Nassim is all about Nassim more than Kanye is all about Kanye – it does get old real quick).

On top of that, AI occupies a unique place in the zeitgeist. On one hand, the FT (“Artificial intelligence: winter is coming”) rightly points out that “We have not moved a byte forward in understanding human intelligence. We have much faster computers, thanks to Moore’s law, but the underlying algorithms are mostly identical to those that powered machines 40 years ago. Instead, we have creatively rebranded those algorithms. Good old-fashioned “data” has suddenly become “big”. And 1970s-vintage neural networks have started to provide the mysterious phenomenon of “deep learning”.”; on the other, the Alan Turing Institute notes that “artificial intelligence manages to sit at the peak of ‘inflated expectations’ on Gartner’s technology hype curve whilst simultaneously being underestimated in other assessments”.

Consequently, in the end, I was struck by a peculiar dissonance: on one hand, reading the book (which is measured and balanced, it’s not unabashed evangelising) one might get the impression that the titular superintelligence really is inevitable – that it’s a matter of “when” rather than “if” (with the entire focus being on “how”), and that the likelihood of this becoming an existential threat to humanity is substantial. Then, around page 230, Bostrom gives the readers a bit of a cold shower by making them realise that it’s essentially impossible to express human values (such as “happiness”) in code. And then I’m left agitated (the good way) and confused (also the good way): inevitable or impossible? Which one is it?


PS. Not as an alternative, but as a condensed and very well written compendium, I cannot recommend this waitbutwhy classic enough.


Nesta FutureFest: Will my job exist in 2030?

Fri 06-Jul-2018

Hosted by Nesta’s Eliza Easton and Jed Cinnamon, the “Will my job exist in 2030?” session took place at high noon on what was probably the hottest day of the year, in a modest-sized auditorium under a glass roof.

To say that it was hot would be an understatement: it was boiling, it was a sauna, it was Dubai. And still, the auditorium was *packed*. I know why I braved it, and I suspect everyone else in there braved it for the same reason: as white-collar professionals we are mortified at the prospect of being rendered obsolete by an algo, and we want to find out:

  • How likely it is exactly
  • What (if anything) we can do about it

While the presentation echoed Nesta’s report bearing the same title, it was entirely self-contained and, as far as I could tell, told from a slightly different angle than the report.

Part of the presentation was delivered from the perspective of educating and training today’s middle- and high-school students (so only tangentially relevant to grown-ups), though the skills and competencies listed as enhancing modern-day teenagers’ employability were definitely good to know for everyone (along the lines of: do I have this? Could I plausibly argue that I have this? What can I still do in order to have this? How can I rephrase my resume in order to say I have this?). The list included:

  • Judgement
  • Decision making
  • Complex problem solving
  • Fluidity of ideas
  • Collaborative problem solving
  • Creative problem solving
  • Resilience
  • Critical thinking

I think that list alone made it worth attending the event, because the skills and competencies listed above are indeed in the Venn-diagram sweet spot where the analytical, creative, interpersonal, and imaginative overlap; in short, those (at least in 2018) appear to be the abilities relatively most difficult to automate.

Separate consideration was given to interdisciplinarity. The hosts mentioned how important it is to hone creative and artistic skills alongside the modern STEM curriculum, as well as to remove barriers between different, previously siloed disciplines (Finland’s success was used as an example). This point resonated with me quite strongly, because only a few years ago my own interdisciplinary approach towards career development (defined as pursuing a relatively wide variety of roles based on how interesting they appeared rather than how closely they aligned with my direct experience) earned rather limited support and understanding from my London Business School colleagues, most of whom had spent their professional careers within one specialism (equity sales, credit derivs etc.) and were all about climbing the ranks of higher and better-paid positions within their respective specialisms.

There was a somewhat fresh (and refreshingly sane) angle on the nemesis-du-jour topic of automation. While the authors echoed the general ennui about many jobs currently done by humans being automated away, they thought their conclusions through more thoroughly than the prevailing “unskilled jobs will all go, and many skilled jobs as well” and pointed out that some of the lower-skilled jobs are not only relatively safe, but also likely to grow (the examples listed were agriculture and construction; I would also place all types of non-specialised carer jobs in the same category – basically jobs requiring bodily mobility and dexterity and / or emotional connection).

Another original observation was that while there is a continuity of certain occupations and job titles, the day-to-day work itself – and the skillset it requires – may have very little in common over the years (the example given was the typesetter: a modern-day InDesign user vs. a heavy machinery operator from a couple of decades ago; finance automatically comes to mind, where many roles now have much more to do with technology than with any “pure” finance).

The final point of relevance to me was raised during the Q&A (by myself…): given my personal experiences with some employers being more open towards the concept of lifetime education than others (with a few being actively hostile), I wanted to know whether businesses are beginning to recognise the value of ongoing education, learning, and training of their staff, as opposed to the entrenched view of seeing any upskilling as a distraction and / or a threat (basically a variation on: “they don’t need this for their current job… they will want more money and leave”). I was quite happy to hear that there is indeed a certain pivot happening (albeit not very fast) as we speak, and businesses are beginning to see the (commercial) added value of their employees gaining more skills.

Nesta FutureFest 2018

Fri 06 – Sat 07-Jul-2018

The first weekend of July saw the 4th edition of Nesta FutureFest. For those of you who don’t know, Nesta is the UK government’s agency for innovation (one of the very few government bodies I feel isn’t wasting my taxpayer contributions), and FutureFest is its sorta-annual (taking place about once every 18 months) two-day festival.

The event consists of talks on a number of stages + smaller accompanying events and presentations. The closest reference point to FutureFest I can think of is New Scientist Live, but NS Live is mostly “pure” science: whether it’s quantum physics, CERN, DNA memory storage, or nuclear fusion, it’s mostly just talks on some exciting discoveries, ideas, and inventions. It is thought-provoking but largely uncontroversial and apolitical.

FutureFest is a little different. While the focus is firmly on the future (as the name would suggest), it is broader than science and technology and also considers social, urban, artistic, and even philosophical and religious angles. This holistic scope makes the event really unique.

The 2018 event was held in the Tobacco Dock in London over one of the hottest weekends of the year, which, given that most of the building is covered with a glass roof, made attending some of the jam-packed talks (and the vast majority of them were jam-packed…) something of a challenge – but trust me, it was well worth it. Plus, the venue itself is really nice (I believe some of the Wired magazine’s events are held there as well).

The Nesta team did a brilliant job booking diverse headline speakers to deliver talks across a very broad spectrum of subjects I’d describe as not just “the future”, but more so “us humans and our future”.

Just to give you a little taste of the diversity of the 2018 talks, here are titles of the ones I attended:

  • The geopolitics of AI
  • Will my job exist in 2030?
  • 2027: When the post-work era begins
  • Digital workers of the world, unite?
  • How blockchain can, literally, save the world
  • The tangle of mind and matter
  • Future humans: augmented selves
  • Let there be bytes

(I’ll expand on a couple of these in greater detail in separate posts.)

In parallel, there was a “meet the author” stage, where I resisted buying even more books I may never get a chance to read (which I regretted afterwards), and where I finally managed to talk to my idol Dr. Julia Shaw and caught up with the inimitable and very, very candid Ruby Wax.

What struck me as surprising was that of the two days, Friday was definitely busier and more packed with attendees. I mean… I had to take a day off work to attend – was everyone else there on business? Or was it the World Cup quarterfinal (England vs. Sweden – we won) on Saturday? In any case, I showed up on both days and had an absolute blast. It was intense, and towards Saturday evening my brain was definitely overflowing, but it was absolutely worth it. I can’t wait for the next one.