
The Polish AI landscape

Wed 29-Apr-2020

The countries most synonymous with “AI powerhouses” are without a doubt the US and China. Both have economies of scale, resources, and strategic (not just business) interests in being at the forefront of AI. The EU as a whole would probably come third, although there is always a degree of subjectivity in these rankings [i]. The UK would probably come next (owing to Demis Hassabis and DeepMind, as well as thriving scientific and academic communities). In any case, it’s rather unlikely that Poland would be listed in the top tier. Poland is known for being an ideal place to set up corporate back-office (or, less frequently, middle-office) functions: much cheaper than Western Europe, with a huge pool of well-educated talent, in the same time zone as the rest of the EU. A great alternative (or complement) to setting up a campus in India, but not exactly a major player in AI research and entrepreneurship. Plus, Poland and its young democracy (dating back to 1989) are currently going through a bit of a social, identity, and political rough patch. Not usually a catalyst or enabler of cutting-edge technology.

And despite all that (and despite being a mid-sized country at best, with 38 million people, and ranking #70 globally in GDP per capita in 2018 out of 239 countries and territories [ii]), for some mysterious reason Poland still made it to #15 globally in AI (using the total number of companies as a metric), according to the China Academy of Information and Communications Technology (CAICT) Data Research Centre Global Artificial Intelligence Industry Data Report [iii], as kindly translated [iv] by my fellow academic Jeffrey Ding from Oxford University (whose ChinAI newsletter is brilliant – I encourage everyone to subscribe and read it). I found this news so unexpected that it was the inspiration behind the entire post below.

The recent (2019) Map of the Polish AI from the Digital Poland Foundation reveals a vibrant, entrepreneurial ecosystem with a number of interesting characteristics. The official Polish AI Development Policy 2019–2027, released around the same time by a multidisciplinary team working across a number of government ministries, paints a picture of impressive ambitions, though experts have questioned their realism.

The Polish AI scene is very young: 50% of the 160 organisations polled introduced AI-based services in 2017 or 2018, the most recent years in the survey. Warsaw (unsurprisingly) steals the top spot, with 85% of all companies located in one of the 6 major metropolitan areas. The companies tend to be small: only 22% have more than 50 people, and 59% have 20 or fewer. Let’s not conflate company headcount with AI teams proper – over 50% of the companies surveyed have AI teams of 5 employees or fewer. Shortage of talent is a truly global theme in AI (one I personally don’t fully agree with – companies with the resources to offer competitive packages [sometimes affectionately referred to as “basketball player salaries”] have no shortage of candidates; whether this level of pay is justifiable [the very short-lived bonanza for iOS app developers circa 2008 comes to mind] and fair to the smaller players is a different matter). The additional challenge in Poland is that Polish salaries simply cannot compete with what is on offer within a 3-hour flight – many talented computer scientists are naturally tempted to move to Berlin, Paris, London, or other major European AI hubs, where there are more opportunities, more developed AI ecosystems, and much, much better money to be made.

What stands out is the ultra-close connection between the business and academic communities. While the same is true in most countries seriously developing AI, some of them are home to global tech corporates whose financial resources, and thus R&D capabilities, give them the luxury of developing on their own, on par with (if not ahead of) leading research institutions. These corporates’ resources also enable them to poach world-class talent (e.g. Google hiring John Martinis to lead their quantum computer efforts [who has since left…], Facebook appointing Yann LeCun as head of AI research, or Google Cloud poaching [albeit briefly] Fei-Fei Li as their Chief Scientist of AI/ML). In Poland this does not apply – the country has no large (or even mid-size) home-grown innovative tech firms. The ultra-close connection between business and academia is a logical consequence of these factors – plus, in a 38-million-strong country with relatively few major cities serving as business and academic hubs, the entire ecosystem simply can’t be very populous.

The start-up scene might in part be constrained by the limited amount of available funding (anecdotally, the angel investor / VC scene in Poland is very modest). However, the Digital Poland report states:

Categorically, as experts point out, the main barrier to the development of the Polish AI sector is not the absence of funding or expertise but rather a very limited demand for solutions based on AI.

My personal contacts echo this conclusion – they are not that worried about funding. Anecdotally, there is a huge pool of state grants (NCBR) with limited competition for them (although post-COVID-19 they may all but evaporate).

Multiple experts cited by Digital Poland all list domestic demand as the primary concern. According to the survey, potential local clients simply do not understand the technology well enough to realise how it can benefit them (41% of responses in a multiple-choice questionnaire – the single highest cause; [client] staff not understanding AI had its own mention at 23%, and [managers] not understanding AI came in at 22%).

The AI market in Poland is focused on more commercial products (Big Data analytics, sales) rather than cutting-edge innovative research. This is understandable – in an ecosystem of limited size with very limited local demand, the start-ups’ decision to develop more established, monetisable applications which can be sold to a broad pool of global clients is a reasonable business strategy.

One side-conclusion I found really interesting is that there’s quite a vibrant conference and meetup scene given how nascent and “unsolidified” the AI ecosystem is.

The Polish AI Policy document is an interesting complement to the Digital Poland report. While the latter is a thoroughly researched snapshot of the Polish AI market right here, right now (2019, to be exact), the policy document is more of a mission statement – a mission of impressive ambitions. I always support bold, ambitious, and audacious thinking, but experience has taught me to curb my enthusiasm as far as Polish policy-making is concerned. The grand visions for 2019–2027 come without even a draft of a roadmap. The document is also, unfortunately, quite pompous and vacuous at times.

The report is rightly concerned about AI’s impact on jobs, concluding that more jobs are expected to be created than lost, and that some of this surplus should benefit Poland. One characteristic of the Polish economy is that it (still) has a substantial number of state-owned enterprises in key industries (banking, petrochemicals, insurance, mining and metallurgy, civil aviation, defence), which are among the largest in their industries on a national scale. Those companies have the size and the valid business cases for AI, yet they don’t seem ready (from education and risk-appetite perspectives) to commit to it. State-level policy could provide the nudge (if not an outright push) towards AI and emerging technologies, yet, unfortunately, that is not happening.

The report rightly acknowledges the skills gap, as well as some issues on the education side (dwindling PhD rates; a relatively low (still!) level of interest in AI among Polish students, as measured by thesis subject choices). The quality of Polish universities merits its own article (its own research, in fact). On one hand, anecdotal and first-hand experience leads me to believe that Polish computer scientists are absolutely top-notch; on the other, the university rankings are… unforgiving (there are literally two Polish universities on the QS Global 500 list for 2020, at positions #338 and #349 [v]).

Last but not least, a couple of Polish AI companies I like (selection entirely subjective):

  • Sigmoidal – AI business/management consultancy.
  • – AI-aided sales and customer relationship management (CRM) solutions.
  • – behavioural biometrics solutions.

Disclaimer: I have no affiliations with any of the abovementioned companies.

[i] Are we looking at corporate research spending? Government funding/grants for academia? Absolute amounts or % of GDP? How reliable are the figures and how consistent are they between different states? etc. etc.

[ii] Source: World Bank (

[iii] You can read the original Chinese version here:

[iv] Jeff’s English translation can be found here:



Nerd Nite London – AI to the rescue! How Artificial Intelligence can help combat money laundering


In April 2020, at the height of the UK lockdown, I had the pleasure of being one of three presenters at an online edition of Nerd Nite London. Nerd Nite is a wildly popular global meetup series with multiple regional chapters. Each chapter is run by volunteers, and the proceeds from ticket sales (after costs) go to local charities. In this sense, lockdown did us an odd favour: normally Nerd Nites are organised in pubs, so there is a venue rental cost. This time the venue was our living rooms, so pretty much all the money went to a local foodbank.

I had the pleasure of presenting on one of the topics close to my heart (and mind!), which is the potential for AI to dramatically improve anti-money laundering efforts in financial organisations. You can find the complete recording below.



UCL Digital Ethics Forum: Translating Algorithm Ethics into Engineering Practice

Tue 04-Feb-2020

On Tue 04-Feb-2020 my fellow academics at UCL held a workshop on algorithmic ethics. It was organised by Emre Kazim and Adriano Koshiyama, two incandescently brilliant post-docs from UCL. The broader group is run by Prof. Philip Treleaven, who is a living legend in academic circles and an indefatigable innovator with an entrepreneurial streak.

Algorithmic ethics is a relatively new concept. It is very similar to AI ethics (a much better-known concept), with the difference that not all algorithms are AI (meaning algorithmic ethics is a slightly broader term). Personally, I think that when most academics or practitioners say “algorithmic ethics” they really mean “ethics of complex, networked computer systems”.

The problem with algorithmic ethics doesn’t start with them being ignored. It starts with them being rather difficult to define. Ethics are a bit like art – fairly subjective and challenging to pin down. Off the top of our heads we can probably think of cases of (hopefully unintentional) discrimination against job applicants on the basis of their gender (Amazon), varying loan and credit card limits offered to men and women within the same household [i] (Apple / Goldman), or online premium delivery services more likely to be offered to white residents than black ones [ii] (Amazon again). And then there’s the racist soap dispenser [iii] (unattributed).

These examples – deliberately broad, unfortunate and absurd in equal measure – show how easy it is to “weaponise” technology without any explicit intention of doing so (I assume that none of the entities above intentionally designed their algorithms to be discriminatory). Most (if not all) of the algorithms above were AIs which trained themselves on a vast training dataset, or optimised a business problem without sufficient checks and balances in the system.

Most of us will just know that all of the above were unethical. But if we were to go from an intuitive to a more explicit understanding of algorithmic ethics, what would it encompass exactly? Rather than try to reinvent ethics, I will revert to trusted sources: one is the Alan Turing Institute’s “understanding artificial intelligence ethics and safety” [iv], and the other is a 2019 paper, “artificial intelligence: the global landscape of ethics guidelines” [v], co-authored by Dr. Marcello Ienca from ETH Zurich, whom I had the pleasure of meeting in person at the Kinds of Intelligence conference in Cambridge in 2019. The latter is a meta-analysis of 84 AI ethics guidelines published by various governmental, academic, think-tank, and private entities. My pick of the big-ticket items would be:

  • Equality and fairness (absence of bias and discrimination)
  • Accountability
  • Transparency and explicability
  • Benevolence and safety (safety of operation and of outcomes)

There is an obvious fifth – privacy – but I have slightly mixed feelings about throwing it in the mix with the abovementioned considerations. It’s not that privacy doesn’t matter (it matters greatly), but it’s not as unique to AI as the above. Privacy is a universal right and consideration, and doesn’t (in my view) map onto AI as directly as, for example, fairness and transparency.

Depending on the context and application, the above will apply in different proportions. Fairness will be critical in employment, provision of credit, or criminal justice, but I won’t really care about it inside a self-driving car (or a self-piloting plane – they’re coming!) – then I will care mostly about my safety. Privacy will be critical in the medical domain, but it will not apply to trading algorithms in finance.

The list above contains (mostly humanistic) concepts and values. The real challenge (in my view) is two-fold:

  1. Defining them in a more analytical way.
  2. Subsequently “operationalising” them into real-world applications (both in public and private sectors).

The first speaker of the day, Dr. Luca Oneto from the University of Genoa, presented mostly in reference to point #1 above. He talked about his team’s work on formulating fairness in a quantitative manner (basically “an equation for fairness”). While the formula itself was mathematically a bit above my paygrade, the idea was very clear, and I was sold on it instantly. If fairness can be calculated, with all (or as much as possible) ambiguity removed from the process, then the result will be not only objective, but also comparable across different applications. At the same time, it didn’t take long for some doubts to set in (although I’m not sure to what extent they were original – they were heavily inspired by some of the points raised by Prof. Kate Crawford in her Royal Society lecture, which I covered here). In essence, measuring fairness seems doable when we can clearly define what constitutes a fair outcome – which, in many cases in real life, we cannot. Let’s take two examples close to my heart: fairness in recruitment and the Oscars.
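To make the idea of “calculating fairness” concrete, here is a toy sketch of one of the simplest quantitative fairness notions, demographic parity (the gap in positive-outcome rates between two groups). This is my own illustration, not Dr. Oneto’s actual formulation, and the hiring data is invented:

```python
def demographic_parity_gap(outcomes, groups):
    """Absolute difference in positive-outcome rates between two groups.

    outcomes: list of 0/1 decisions (e.g. 1 = candidate hired)
    groups:   list of group labels (exactly two distinct labels),
              same length as outcomes
    """
    rates = {}
    for g in set(groups):
        decisions = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(decisions) / len(decisions)
    a, b = sorted(rates)          # assumes exactly two groups
    return abs(rates[a] - rates[b])

# Hypothetical data: 8 applicants, group A hired at 75%, group B at 25%
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(outcomes, groups))  # → 0.5
```

The arithmetic is trivial – which is exactly the point of the doubts above: the formula tells you the gap is 0.5, but it cannot tell you what gap would count as fair.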

With my first degree being from a not-so-highly-ranked university, I know for a fact that I have been auto-rejected by several employers – so (un)fairness in recruitment is something I feel strongly about. But let’s assume the rank of one’s university is a decent proxy for their skills, and let’s focus on gender representation. What *should be* the fair representation of women in typically male-dominated environments such as finance or tech? It is well documented that women drop out of STEM careers at a high rate – around 40% of them – and widely debated why [vi] [vii]. The explanations range from the “hegemonic and masculine culture of engineering” to the challenges of combining work and childcare disproportionately affecting new mothers. What would be the fair outcome in tech recruitment then? A % representation of women in line with the present-day average? A mandatory affirmative-action-like quota? (If so, who would determine the fairness of the quota, and how?) 50/50 (with a small allowance for non-binary individuals)?

And what about additional attributes of potential (non-explicit) discrimination, such as race or nationality? The 2020 Oscars provided a good case study. There were no women nominated in the Best Director category (a category which historically has been close to 100% male, with exactly one female winner, Kathryn Bigelow for “the hurt locker”, five female nominees, zero black winners, and six black nominees), and only one black person across all the major categories combined (Cynthia Erivo for “Harriet”). Stephen King caused outrage with his tweet about how diversity should be a non-consideration – only quality (he later graciously explained that this was not yet the case today [viii]). Then the South Korean “parasite” took the Best Picture gong – the first time in Academy Awards history that the top honour went to a foreign-language film. My question is: what exactly would be fair at the Oscars? If it were proportional representation, then some 40% of the Oscars should be awarded to Chinese movies and another 40% to Indian ones, with the remainder split among European, British, American, Latin, and other international productions. Would that be fair? Should a special quota be reserved for American movies, given that the Oscars and the Academy are American institutions? Whose taste are the Oscars meant to represent, and how can we measure the fairness of that representation?

All these thoughts flashed through my mind as I stared (somewhat blankly, I admit) at Dr. Oneto’s formulae. The formulae are a great idea, but determining the distributions to measure fairness against… that is much more of a challenge.

The second speaker, Prof. Yvonne Rogers of UCL, tackled AI transparency and explainability. She covered the familiar topics of AIs being black boxes and the need for explanations in important areas of life (such as recruitment or loan decisions). Her go-to example was AI software scrutinising the facial expressions of candidates during the recruitment process based on unverified science (as upsetting as that is, it’s nothing compared to the fellas at Faception, who declare they can identify whether somebody is a terrorist by looking at their face). While my favourite approach to explainable AI, counterfactuals, was not mentioned explicitly, it was definitely there in spirit. Overall it was a really good presentation on a topic I’m quite familiar with.

The third speaker, Prof. David Barber of UCL, talked about privacy in AI systems. In his talk, he strongly criticised present-day approaches to data handling and ownership (hardly surprisingly…). He presented an up-and-coming concept called “randomised response”. Its aim is described succinctly in his paper [ix] as “to develop a strategy for machine learning driven by the requirement that private data should be shared as little as possible and that no-one can be trusted with an individual’s data, neither a data collector/aggregator, nor the machine learner that tries to fit a model”. It was a presentation I should have been interested in – and yet I wasn’t. I think it’s because in my industry (investment management) privacy in AI is less of a concern than it would be in recruitment or medicine. Besides, IBM sold me on homomorphic encryption during their 2019 event, so I was somewhat less interested in a solution that (if I understood correctly) “noisifies” part of the personal data in order to make it untraceable, as opposed to homomorphic encryption’s complete, proper encryption.
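The “noisifying” intuition can be illustrated with the classic survey version of randomised response (the Warner-style mechanism; I am not claiming this is Prof. Barber’s exact construction). Each respondent tells the truth only with some probability, so no individual answer can be trusted, yet the aggregate is recoverable:

```python
import random

def randomised_response(true_answer, p_truth=0.5):
    """Report the true answer with probability p_truth, otherwise a fair coin flip."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

def estimate_true_rate(responses, p_truth=0.5):
    """Invert the noise: P(yes) = p_truth * p + (1 - p_truth) * 0.5, solve for p."""
    observed = sum(responses) / len(responses)
    return (observed - (1 - p_truth) * 0.5) / p_truth

random.seed(42)
# Hypothetical population: 100,000 people, 30% truthfully hold a sensitive attribute
population = [i < 30_000 for i in range(100_000)]
responses = [randomised_response(a) for a in population]
print(round(estimate_true_rate(responses), 3))  # close to 0.3, within sampling noise
```

Any single “yes” is deniable (it may be the coin talking), yet the population-level 30% rate comes straight back out – the trade-off being statistical noise rather than the exactness that homomorphic encryption promises.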

In the only presentation from a business perspective, Pete Rai from Cisco talked about his company’s experiences with broadly defined digital ethics. It was a very useful counterpoint to the at-times slightly too philosophical or theoretical academic presentations that preceded it. Like many others, though, I’m not sure to what extent it really related to digital ethics or AI ethics – I think it was more about corporate ethics and conduct. That didn’t make the presentation any less interesting, but it inadvertently showed how broad and ambiguous an area digital ethics can be – it means very different things to different people, which doesn’t always help push the conversation forward.

The event was part of a series, so it’s quite regrettable that I had not heard of it before. But that’s just a London thing – one may put all the work and research into trying to stay in the loop of relevant, meaningful, interesting events, and some great events will slip under the radar nonetheless. There are some seriously fuzzy, irrational forces at play here.

Looking forward to the next series!


Royal Institution “Quantum in the City”

Sat 16-Nov-2019

On Sat 16-Nov-2019 the Royal Institution served its science-hungry patrons a real treat: a half-day quantum technologies showcase titled “Quantum in the City: the shape of things to come”. The overarching concept was to present what living in the “quantum city” of the future might look like.

It was organised with the participation of UK National Quantum Technologies Programme and ran a day after a big industry event at the QE2 Centre.

Weekend events at the RI usually differ from the standard evening lectures in that they are longer and cover one area in more depth. This one was no exception: in addition to a 1.5hr panel discussion, there was an extensive technology showcase across the 1st floor of the RI building, with no fewer than 20 exhibitors, most of them from academia or university spin-off companies.

One of the chapters from Nassim Taleb’s “skin in the game” (full disclosure: I haven’t read the whole book; I only read the abridged chapter when it appeared in my news feed on, of all places, Facebook[1]) describes a social group he (with his usual Kanye charm) calls the “Intellectual Yet Idiot”. I tick pretty much all the boxes in that description (except the “comfort of his suburban home with 2-car garage” – try “precarious comfort of his Qatari-owned 2-bed rental“), but none more than “has mentioned quantum mechanics at least twice in the past five years in conversations that had nothing to do with physics”. Guilty as charged, that’s me. The context in which I mention quantum mechanics, physics, and technologies in conversations is usually the same – I don’t understand them. I understand one or two of the basic concepts, but I still completely don’t get how the computing power of a quantum computer doubles with each new qubit, what quantum (let alone quantum-safe) encryption is, or why the observer makes all the difference (and what does “observer” even mean?! A conscious observer?!).
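In fairness to myself, the “doubling” part is at least easy to state, even if the physics behind it isn’t: an n-qubit register is described by 2^n complex amplitudes, so every added qubit doubles the size of the state vector a classical machine would need just to write the state down. A sketch of that bookkeeping (mine, not the panellists’):

```python
# An n-qubit register is described by 2**n complex amplitudes, so each
# added qubit doubles the classical storage needed to represent its state.
def state_vector_size(n_qubits):
    return 2 ** n_qubits

for n in [1, 2, 10, 50]:
    print(n, state_vector_size(n))
# At 50 qubits the state vector already has ~10**15 amplitudes -
# far beyond what a classical simulator can hold in memory.
```

That exponential growth in what must be tracked classically is (as I understand it) the origin of the “doubling computing power” shorthand.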

Consequently, I keep going to different quantum lectures and presentations, in order to actually understand what this stuff’s about. I basically hope that if I hear it for the n-th time, something in my brain will click. It was that hope that sent me to the RI in November of 2019. Plus, I was really keen to see practical applications of quantum technology.

The discussion panel was great. The panellists were:

  • Miles Padgett, Principal Investigator for the QuantIC Hub
  • Kai Bongs, Director, UK Quantum Technology Hub for Sensors and Metrology (I previously attended Kai’s presentation on quantum sensors at New Scientist Live)
  • Dominic O’Brien, Co-Director, NQIT (UK Quantum Technology Hub for Networked Quantum Information Technologies)
  • Tim Spiller, Director, UK Quantum Technology Hub for Quantum Communications Technologies

The discussion revolved around current and future applications of quantum technologies. Like everyone, I know of quantum computers (I even saw IBM’s one during their Think!2019 event) and quantum encryption. I have a basic awareness of quantum sensors (from Kai’s talk at New Scientist Live in 2019 or 2018) and of some ambitious plans for quantum-technology-based medical imaging (“quantum doppelgangers”, if I recall correctly – I heard of those during a Science Museum Lates event on quantum). Paul Davies mentioned quantum biology in his own RI lecture “what is life”, as did Prof. Jim al-Khalili in some interview – but that’s about it.

Fundamentally, though, my understanding was that quantum technologies are only beginning to emerge in academic and/or industrial settings. It was genuine news to me that existing technologies (chief among them semiconductors and transistors – which is to say basically all of modern technology and the Internet – as well as lasers and MRI scanners) rely on the effects of quantum mechanics and are referred to as “quantum 1.0”. The cutting-edge technologies emerging these days are “quantum 2.0”.

Imaging was a prominent use case for quantum technologies, across a number of fields: medical (endoscopy, brain imaging for dementia research), environmental, construction (what’s underneath the soil), industrial (seeing through dirty water or unclear air).

Quantum computing and encryption were also discussed at length. With quantum computing, we’re on the cusp of doing practically useful things at a much lower cost (in energy and time) than traditional computing (NB the Google experiment was a test problem, not a real problem). In some use cases, quantum computing may be orders of magnitude cheaper in terms of energy consumption than conventional computing; in others, the saving will be minimal (an interesting comment – I had assumed that quantum computers would generate orders-of-magnitude energy and time savings across the board). In terms of encryption, the experts at the RI repeated almost verbatim what Ian Levy from NCSC / GCHQ said at the quantum computing panel at the Science Museum a few weeks prior: currently all our communications are encrypted and therefore assumed more or less safe. However, it is theoretically possible for an actor to store encrypted communications today and decrypt them using quantum technology in the future. Work is underway to develop mathematical models for quantum-safe encryption.

Work is also starting on the standardisation of quantum technologies to ensure their portability.

The panellists also discussed at length the research and investment landscape of quantum technologies across the UK. They noted that the UK was the first country in the world to come up with a national programme of academic–industry partnership and funding for quantum technology research. The US and their programme have (allegedly!) pretty much copied the British blueprint. To date, distributed and committed funds total close to GBP 1bn. That’s a decent level of funding, but partly because various groups and laboratories had already been set up and funded from other sources; if the GBP 1bn were to fund everything from scratch, it might not be sufficient. Currently, a substantial part of UK quantum research funding (it varies by group and programme) comes from the EU. Brexit is an obvious concern.

Separately, there is an acute talent shortage in engineering in general, and even more so in quantum technology. Big tech companies are in a strong position to compete for talent because they can offer great salaries and interesting careers.

Speaking of quantum talent, the rooms of the RI were filled with the country’s (and likely the world’s) best and brightest in the field. Twenty exhibitors presented their projects, all of which were applications-based rather than pure research. Some were proofs of concept (PoCs), some were prototypes, and some were in between. A handful of exhibitors stood out, based on my subjective and oft-biased judgement:

  • Underwater 3D imaging, ultra-thin endoscope, and a camera looking around corners (all from QuantIC: UK tech hub for quantum enhanced imaging) were all practical examples of advanced imaging applications.
  • Trapped Ion Quantum Computer (University of Sussex). The technological details are a little above my paygrade, but apparently different engineering approaches towards quantum computing lend themselves differently to scaling. The researchers in Sussex use microwave technology, which differs from existing mainstream approaches and can be quite promising. I have had a soft spot and very high regard for the Sussex lab ever since I met its head, the fabulously brilliant and eccentric Winfried Hensinger when he presented at one of New Scientist Instant Expert events.
  • Quantum Money (University of Cambridge) was the only project related to my line of work and a slightly exotic one even in the weird and wonderful world of quantum technologies. S-Money, as it’s called, is at the intersection of quantum theory and theory of relativity, and could enable unhackable identification as well as lag-free transacting – on Earth and beyond. And they say the finance industry lacks vision…

In summary, the RI event was nothing short of awesome. I don’t know whether I got anywhere beyond the “Intellectual yet Idiot” point on the scale of quantum expertise, but I can live with that. I learned of new applications of quantum technologies, and I met some incandescently brilliant people; couldn’t really ask for much more.

[1] Fuller disclosure: I only ever read Nassim’s “black swan”, and I consider it to be a genuinely great book. I bought “fooled by randomness” and “antifragile” with the intention of reading them some day (meaning never). Still, if I mention the titles with sufficient conviction, most people usually assume I read them end-to-end. I don’t correct them.


London Business School Energy Club presents: the renewables revolution

Thu 26-Sep-2019

As an LBS alumn (or is it “alumnus”? I never know…) I am part of a very busy e-mail distribution list connecting tens of thousands of LBS grads worldwide. LBS, its clubs, alumni networks etc. regularly organise events, and I make an active effort to attend one at least every couple of months. I went to “the business of sustainability” a couple of months ago, so the upcoming “the renewables revolution”, organised by the LBS Energy Club (and sponsored by PwC), was an easy choice.

Renewable energy is not a controversial topic in its own right (unless you’re a climate change denier or part of the fossil fuel lobby, especially on the coal side). It’s controversial along the lines of disruption of powerful, established, entrenched industries (mostly mining and petrochemicals), and also along the lines of disruption of life(style) as we know it. Most of us in the West (the proverbial First World, even if it doesn’t feel like one very often) want to live green, sustainable, environmentally friendly lifestyles… as long as the toughest environmental sacrifice is ditching a BMW / Merc / Lexus for a Tesla and swapping paper tissues for bamboo-based ones (obviously I am projecting here, but I don’t think I’m that far off the mark). We Westerners (if not “we mankind”, quoting Taryn Manning’s character from “hustle and flow”) love to consume, love the ever-expanding choices, love all the conveniences we can afford – the prospect of cutting down on hot water, not being able to go on overseas holidays once or twice a year, or not replacing our mobiles whenever we feel like it, is an unpleasant one. Renewables, with their dependency on weather (wind, solar) and generally less abundant (or at least less easily and immediately abundant) output, are an unpleasant reminder that the time of abundance (when, quoting Michael Caine’s character from “Interstellar”, “every day felt like Christmas”) might be coming to an end.

Furthermore, even for a vaguely educated Westerner like myself, renewables are a source of certain cognitive dissonance. On one hand we have several consecutive hottest years on record, floods, wildfires, disrupted weather patterns, environmental migrants, the prospect of ice-free Arctic ocean, Extinction Rebellion etc. – on the other hand we have seemingly very upbeat news like “Britain goes week without coal power for first time since industrial revolution”, “Fossil fuels produce less than half of UK electricity for first time”, or “Renewable electricity overtakes fossil fuels in the UK for first time”. So in the end, I don’t know whether we’re turning the corner as we speak, or not.

There is no shortage of credible statistics out there – but it’s quite a challenge for a non-energy expert to reconcile them. According to BP, renewables (i.e. solar, wind, and other renewables) accounted for approx. 9.3% of global electricity generation in 2018 (25% if we add hydroelectric). Then, as per the World Bank (spreadsheets with underlying data from Renewable Energy), in 2016 all renewables accounted for approx. 11% of global energy generation (35% if we add hydroelectric). Then, as per the IEA, in 2018 renewables accounted for a measly 2% of total energy production (rising to 12% if we add biomass and waste, and to 15% if we add hydro). Much of the spread comes down to whether we measure electricity generation or total primary energy, and whether hydro and biomass are counted as “renewables”.

2% looks tragic, 9.3% looks poor, 25% or 35% looks at least vaguely promising – but no matter which set of stats we choose, fossil fuels still account for the vast majority of global energy generation (and demand is constantly rising). Consequently, my anxiety remains well justified. It was the reason I went to the event in the first place – to find out what the future holds.

The panellists were:

  • Equinor, Head of Corporate Financing & Analysis, Anca Jalba
  • Glennmont Partners, Founding Partner, Scott Lawrence
  • Globeleq, Head of Renewables, Paolo de Michelis
  • Camco, Managing Director, Geoff Sinclair

The panellists made a wide range of observations, reflecting their diverse geographical focus and the nature of their companies. You will find a summary below, coupled with my personal observations and comments. I have intentionally anonymized the speakers’ comments.

One of the panellists remarked that over the last decade the cost of 1MW of solar panels went from EUR 6-8m to EUR 3.5m to EUR 240k, and that at the same time ESG went from being a niche area in investment management to being very much at its core (which I can echo from my own observations). At the same time, according to research, in order to meet the Paris Accord targets, by 2050 50% of global energy will need to come from renewables. So no matter which of the abovementioned sets of statistics we choose, we’re globally nowhere near 50%.

The above comments are probably fairly well known – they sort of go without saying. However, the speakers made a whole lot of more targeted observations.

The concept of distributed renewables (individual households generating their own electricity, mostly using solar panels on their roofs, and feeding the surplus into the power grid) was mentioned. This is being encouraged by some governments, and the speakers noted that governments are the key players in reshaping the energy landscape. They were also quite candid about there being a lot of rent-seeking behaviour in the (established) energy sector (esp. utility companies). Given the size and influence of the utility sector, it is fairly understandable that they may have mixed feelings towards activities that may effectively undercut them. At the same time, one would hope that at least some of them see the changes coming, and accept their necessity and inevitability by adapting rather than opposing. Interestingly, emerging markets where energy infrastructure and power generation are not very reliable were mentioned as an opportunity for off-grid renewables.

We were also reminded that electricity generation is just part of the energy mix. It’s a massive part, of course, but there is also automotive transport, aviation, and shipping – all of which consume vast amounts of energy, with very few low-carbon or no-carbon options. Electric vehicles are a promising start (not without their own issues though: cobalt mining), but aviation and shipping do not currently have viable non-fossil-fuel-based options (except perhaps biofuels, but I doubt there is enough arable land in the whole world to plant enough biofuel-generating crops to feed the demands of aviation and shipping).

The need for a (truly) global carbon tax was also raised. I think (using tax havens as a reference) it may be challenging to implement, but, unlike corporate domicile and taxation, energy generation is generally local, so if governments taxed emissions physically produced by utility companies within their borders, that could be more feasible. Then again, it could be quite disruptive and thus politically challenging (think the fight around coal mining in the US or the gilets jaunes in France as examples).

On the technical side, intermittency risk is a big factor in renewables, and energy storage is not there yet on an industrial scale. It is a huge investment opportunity.

In terms of new sources of renewable energy, floating offshore wind farms were mentioned as the potential next big thing, even though they are not currently commercially viable. My question about the panellists’ views on the feasibility of fusion power was met with scepticism.

In terms of investment opportunities, one of the speakers (prompted by my question) mentioned that climate change adaptation is also one. This echoes exactly what Mariana Mazzucato said at the British Library event some time ago (see my post “Mariana Mazzucato: the value of everything” for reference), so there might be something there. More broadly, there seemed to be a consensus among the speakers that once subsidies disappear, only investors with large balance sheets and portfolios of projects will be in a position to compete, given the capital-intensive nature of energy infrastructure.

I ended by asking a question about the inevitability and scale of impact of climate change on the world as we know it and on our lifestyles. I didn’t get a very concrete reply other than that there *will be* impact, and adaptation will be essential. It didn’t lift my spirits, but I don’t think I was expecting a different answer. In the end, it looks like renewables are currently more of an evolution than a revolution. Evolution is better than nothing; it might just not be enough.

The Alan Turing Institute presents Lilian Edwards “regulating unreality”


On Thursday 11-Jul-2019 the Alan Turing Institute served up a real treat in the form of a lecture by Professor Lilian Edwards1. To paraphrase Sonny Bono, Lilian Edwards is not just a professor, she’s an experience. Besides being a prolific scholar at the bleeding edge of law and regulation, she is one of the most engaging and charismatic speakers I have ever met. I first heard Prof. Edwards present at one of the New Scientist Instant Expert events (of which I’m a big fan, btw), and I have been her fan ever since.

After hearing a comprehensive and at times provocative lecture on legal and social aspects of AI and robotics (twice!) in 2017, in 2019 Prof. Edwards focused on something even more cutting edge: social and legal aspects of deepfakes.

Delivered in the stunning Brutalist surroundings of the Barbican and hosted by Professor Adrian Weller, the lecture started by revisiting the first well-known deepfake: Gal Gadot’s face photorealistically composited onto the body of an adult film actress in a 2017 gif from Reddit.

Most things on the Internet start with (or end with, or lead to) porn – it’s simply the way it is. However, developing technologies which allow access to, streaming, or sharing of images of real consenting adults engaging in enjoyable, consensual activities is one thing (the keyword being, in my personal opinion, “consent”) – deepfaking explicit images of celebrities or anyone else is vulgar and invasive (try to imagine photos of your mom, daughter, or even yourself being digitally, photorealistically “pornified”, and then think how it would make you feel). As awful as that is, the deployment of deepfake technology in politics is something else entirely. Deepfakes are likely the new fake news, albeit taken to a new level: a seamless audio-visual distortion of reality.

Prof. Edwards reminded everyone that image manipulation was deployed in politics in the pre-deepfake era too – using very simple techniques, often to very successful effect: the Nancy Pelosi video slowed down to give the appearance of her slurring her words and seeming drunk, and the White House video of CNN reporter Jim Acosta. Furthermore, she pointed out that deepfakes are not necessarily the end of the road: they are just one (disturbing) element of seamlessly generated/falsified “synthetic reality”.

Among the many threats (“use cases”) posed by deepfakes, the ones that resonated with me most strongly were:

  • Deepfakes are not just about presenting something fake as something real, they’re also about discrediting something real as fake.
  • Plenty of potential uses in both civil and criminal smears.
  • Deepfakes create plausible deniability for anything and everything.

Moving on to legal and social considerations, Prof. Edwards looked at a plethora of existing and proposed legal and technological solutions across the US, UK, and EU, expertly pointing out their shortcomings and/or unfeasibility. What moved me the most was the similarity (at least in terms of underlying concept and intent) between deepfakes and revenge porn, which I find absolutely and utterly disgusting. Another consideration was the question of one objective, canonical reality (particularly online): while it wasn’t raised in great detail (we didn’t take the quantum physics route), it resonated strongly with me. Lastly (in true Lilian Edwards cutting-edge/provocative style) there was the question of whether reality should be considered a human right.

In terms of big, open questions, I think two stand out particularly prominently:

  • What should be the strategy: ex ante prevention or post factum sanctions?
  • Whom to prosecute: the maker, the distributor, the social media platform, the search engine?

Overall, it was a fantastic, thoroughly researched and brilliantly delivered lecture (and at GBP 4.60 it wasn’t just the hottest ticket in town, but very likely the cheapest one too). You can watch the complete recording on Turing’s YouTube channel. You can hear me around the 01:10:00 mark, raising my concerns about deepfake-driven detachment from reality among younger generations.


Leonardo at the British Library


A museum exhibition dating back half of a millennium may not immediately seem like an organic fit in a cutting-edge finance and technology blog. However, this isn’t just any exhibition we’re talking about – this is a Leonardo da Vinci exhibition at the British Library. The quality and quantity of scientific and academic content of the works on display were orders of magnitude greater than any technology discussed throughout the rest of my blog.

With 3 codices on display together for the first time, the exhibition makes for a riveting experience (and also a chilly one – the exhibition room is kept at 17 degrees Celsius in order to preserve the manuscripts). Each and every one of the pages on display (“codex” is really just a fancy term for “notebook”) presents an insight or thought that would make a worthy life’s work for most non-genius folk.

Facing Leonardo’s work is an experience comparable to seeing “2001: a space odyssey” for the first time – it’s not exactly religious, not really spiritual (I don’t like this word anyway), but transcendental still. There is, of course, a huge degree of subjectivity (my mom was not exactly riveted by “2001:…”), but facing this calibre of genius up close was deeply moving. Right before my eyes, separated from me only with a thin sheet of glass, were the notes of one of the finest polymathic minds in the history of mankind.

The exhibition showcases 3 of Leonardo’s codices (out of the total of 11 that his works were broken into):

  • Codex Arundel: 283 sheets of notes on various subjects, including mechanics and geometry (from British Library’s own collection)
  • Codex Forster: notes on a variety of scientific topics including hydraulics, weights, and geometry (from the V&A Museum in London)
  • Codex Leicester: 36 sheets on a variety of topics including astronomy, geology, and the flow of water (one of the topics Leonardo seemed to have a particular fascination with). Codex Leicester is the property of Bill Gates, who purchased it in 1994 for USD 30.8 mln* (so close to a million per sheet) and allows it to be exhibited worldwide.

Leonardo’s famous mirror writing just adds to his mystique – what made him write this way? Is it just because he was left-handed? (seems a bit contrived an explanation). Fun fact: the multiplication table (up to 10×10) on one of the sheets is written left-to-right.

Even on a purely aesthetic level, Leonardo’s work is a delight. The beige sheets – almost all of them richly illustrated – have artistic value of their own. The drawings may not all be quite of Salvator Mundi quality, but they are spectacular nonetheless. Even the handwriting itself is stylish; I mean, just look:

Codex Leicester
Codex Arundel
Work by Leonardo da Vinci

The realisation that Leonardo made these notes and drawings himself was pretty mindblowing in its own right. It’s almost like Leonardo was there himself.

The stylish-to-a-fault exhibition** doesn’t even present the complete contents of the 3 codices – just selected sheets, and many of these sheets contain enough to be an accomplished Renaissance academic’s life’s work. So that’s just a fraction of his scientific work, all of which is on top of 2*** of the world’s most famous paintings: the Mona Lisa and the Last Supper. To me that realisation – coming up close with it – was the most moving and transfixing experience of the entire exhibition: how did he do it? Was Leonardo just some sort of statistical inevitability (out of a large enough population a supergenius polymath of his calibre simply *had to* emerge)? What did it feel like to be him? Was the famously private Leonardo aware of the extent of his own genius and talent?

All these questions (and many more) were tackled by Oxford University’s Professor Martin Kemp during an accompanying event on Tue 06-Aug-2019. In an interdisciplinary talk (which lasted well over an hour but felt like a moment) he showed the audience how to find beauty in Leonardo’s scientific work and how to find scientific accuracy in Leonardo’s art.

Prof. Kemp said that Leonardo exhibited “almost pathological attention to detail” – something I’m a big fan of myself. He also mentioned that everything Leonardo ever did (though I’m unsure whether that included paintings) was an act of analysis – “how does the mechanism work?” was apparently one of the questions driving Leonardo throughout his life and work. Professor Kemp added that Leonardo admired the perfection of the structure of things and believed that a great design had an element of inevitability; consequently, Leonardo was much more of a geometric than an arithmetic mind. He set himself impossible, “unfinishable” tasks (something I can relate to), was famously private (something I can’t relate to at all), but he did apparently like dirty humour (something I can relate to).

Prof. Kemp mentioned that for Leonardo science and fantasy went hand in hand. It wasn’t perhaps entirely uncommon at the time (though it would more likely have been science and religion and/or theology), but still, it seems just so naturally and organically fitting to his work, where groundbreaking science and art worked hand-in-hand.

Fun fact: Prof. Kemp is of a certain age (77 years young to be exact), yet when he was on stage discussing Leonardo, he was beaming with the enthusiasm and energy of someone much younger. It was quite incredible. I hope that I will one day find that one thing that will give me as much joy and fulfilment.

* Normally, I’m rather unimpressed with 8- or 9-figure sums paid for works of art (it’s pure excess to me), but I have to say, for Leonardo’s codex I’m willing to make an exception: that was money well spent.

** Fittingly, the exhibition is sponsored and styled by Pininfarina, the Italian automotive design house behind some of the world’s most stylish sports cars.

*** If we count Salvator Mundi.

Polish AI Manifesto

Polish academic community releases Manifesto for Polish AI


The big press release of the season (possibly of the year as well, even though we’re less than halfway through) is without a doubt the European Commission’s Ethics Guidelines for Trustworthy AI1 (published in Apr-2019). Most developed states have their own AI strategies. That list includes my home country, Poland, which released an extensive AI strategy document in Nov-2018. The document (which I may review separately on another occasion) is not a strategy proper – it’s more a summary of key considerations in formulating a full-blown strategy.

It is likely that the lack of an explicit strategy (and the lack of explicit commitment to funding) led the Polish academic community to publish their own Manifesto for Polish AI.

Poland may not seem like an obvious location to launch an AI (or any other tech) startup, but anecdotal evidence suggests otherwise. There are pools of funds (including state subsidies and grants) available to tech entrepreneurs and a relatively low number of entrepreneurs competing for those funds. The most popular explanations for this are the state’s attempts to stimulate the digital economy and to stem (maybe even reverse) the pervasive brain drain that started when Poland joined the EU in 2004.

Entrepreneurship is one thing, but research is something different altogether – at least in Poland. In the absence of home-grown innovation leaders the likes of Amazon or DeepMind, research is almost entirely confined to academic institutions. Reliant entirely on state funding, Polish academia has always been underfunded: in 2018 Polish GERD (gross domestic expenditure on research and development) was 1.03%, compared to the EU average of 2.07%2 3. In all fairness, the growth of GERD in Poland has been rapid (from 0.56% in 2007) compared to the EU (1.77% in 2007), but the current expenditure is still barely half the EU average (not to mention global outliers: Israel and South Korea, both at 4.2%; Sweden at 3.25%; Japan at 3.14%4).

Between flourishing tech entrepreneurship (18 companies on Deloitte’s FAST 50 Central and Eastern Europe list are Polish5 – though, arguably, none of them are cutting-edge technology powerhouses like Darktrace or Onfido, both 2018 laureates of the FAST 50 UK list6), widely respected tech talent, and a nascent AI startup scene (Sigmoidal, Growbots), Polish academics clearly felt a little left out – or wanted to make sure they wouldn’t be.

Consequently, in early 2019 Polish academics from Poznan University of Technology’s Institute of Computing Science published the Manifesto for Polish AI7, which has since been signed by over 300 leading academics, researchers, and business leaders.

The manifesto is compact. Below is an abridged summary:

Considering that:

  • AI innovations can yield particularly high economic returns, as evidenced by the fact that AI is currently a primary focus of Venture Capital firms.
  • The significance of AI for economic growth, social development, and defence is recognised by world leaders.
  • Innovative economies worldwide are based on a strong commitment to science and tertiary education. The world’s most innovative countries such as South Korea, Israel, Scandinavia, the USA, and increasingly China also happen to contribute the highest proportions of their GDPs to R&D and education.
  • Polish academia has significant potential in the field of AI.
  • A barrier to the growth of innovative start-ups in Poland is not just a lack of capital, but also, if not mainly, too low a number of start-ups themselves.
  • Innovative start-ups worldwide are developed mostly within academic ecosystems.
  • There is an insufficient number of IT specialists in Poland, and even more so of AI specialists.

We are calling on the decision makers to develop a strategy for growth of key branches of R&D and tertiary education (in particular AI), and to take decisive actions to fulfil that strategy:

  • Severalfold increase in expenditure on primary research and implementation of AI to reach parity with the most innovative countries.
  • Growth and integration of R&D teams working in the field of AI and improvement of their collaboration with the industry.
  • Ensuring PhDs and other academics doing AI research receive grants and remuneration comparable to those in the (AI) industry.
  • Additional funding for opening new AI university courses and broadening the intake into the existing ones.
  • Funding for entrepreneurship centres (modelled on the Martin Trust Center for MIT Entrepreneurship) on Polish university campuses, offering students and employees entrepreneurship education, seed funding, co-working spaces etc.

Funds committed to the above will be a great investment, which will pay for itself many times over in the form of greater innovation, a higher number of AI experts on the job market, and in effect faster economic and social growth, and better defence.

There is nothing controversial in the manifesto – in fact, probably most academics worldwide would sign a copy with “Poland” replaced with their respective country’s name. It may be a little idealistic/unrealistic (reaching R&D parity as % of GDP with global leaders is… ambitious), but that doesn’t diminish its merit. I for one would be ecstatic to see Poland committing 3% or 4% of GDP to R&D. Separately, it’s really nice to see that a compelling argument can be presented on literally one page. Less is almost always more.

Natural Language Generation

Natural Language Generation (NLG) is coming to asset management

Sun 17-Mar-2019

Natural Language Processing (NLP) is a domain of artificial intelligence (AI) focused on, well, processing normal, everyday language (written or spoken). It is used by digital assistants such as Siri or Google, smart speakers such as Google Home or Alexa, and countless chatbots and helplines all around the world (“in your own words, please state the reason for your call…”). The idea is to simplify and humanize human-computer interaction, making it more natural and free-flowing. It is also meant to generate substantial operational efficiencies for service providers, allowing their AIs to provide services that were previously either unavailable (a human-powered equivalent of Siri – not an option) or costly (human-powered chats and helplines).

Natural Language Generation (NLG) is an up-and-coming twin of NLP. Again, the name is rather self-explanatory – NLG is all about AI generating text indistinguishable from what could be written by a human author. It has been slowly (and somewhat discreetly) taking off in journalism for a couple of years now.1 2 3

NLG is far less known and less widely deployed in financial services (and elsewhere), but given the potential for operational efficiencies (AI can instantly, and at close to zero cost, produce text which would otherwise take humans much more time, and non-negligible cost, to prepare) it makes an instant and strong business case. There are areas within asset management whose primary (if not sole) purpose is the preparation of standardised reports and summaries: attribution reports, performance reports, risk reports, or periodical fund/market updates. Some of these are so rote and rules-based that they make natural candidates for automation (attribution, performance, perhaps risk). Fund updates and the like are much more open and free-flowing, but even they are rules- and template-driven.
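To make the “rote and rules-based” point concrete, here is a deliberately minimal sketch of how template-driven commentary generation might look for an attribution report. All the function names, field names, and thresholds are my own hypothetical illustrations – commercial NLG systems are vastly more sophisticated than this – but the underlying idea (map numbers to qualifiers, fill a template) is the same.

```python
# Toy template-driven NLG for a performance attribution commentary.
# All names and thresholds are hypothetical illustrations.

def describe(value: float) -> str:
    """Map an active return figure (in %) to a plain-English qualifier."""
    if value >= 2.0:
        return "strongly outperformed"
    if value > 0.0:
        return "modestly outperformed"
    if value == 0.0:
        return "performed in line with"
    return "underperformed"

def attribution_commentary(fund: str, benchmark: str,
                           sector_effects: dict) -> str:
    """Turn per-sector active returns (in %) into a short commentary."""
    total = sum(sector_effects.values())
    best = max(sector_effects, key=sector_effects.get)
    worst = min(sector_effects, key=sector_effects.get)
    return (
        f"{fund} {describe(total)} {benchmark} by {abs(total):.2f}%. "
        f"The largest positive contribution came from {best} "
        f"({sector_effects[best]:+.2f}%), while {worst} detracted "
        f"({sector_effects[worst]:+.2f}%)."
    )

print(attribution_commentary(
    "Example Growth Fund", "the MSCI World index",
    {"Technology": 1.4, "Energy": -0.6, "Healthcare": 0.3},
))
```

An analyst would still review the output and the underlying figures – the point is that the first draft costs essentially nothing to produce.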

AI replacing humans is an obvious recipe for controversy, but perhaps it is not the right framing of the situation: rather than consider AI as a *replacement*, perhaps it would be much better for everyone to consider it a *complement* or even more simply: a tool. You will still need an analyst to review those attribution reports and check the figures, and you will still need an analyst to review those fund updates. And with the time saved on compiling the report, the analyst can move on to doing something more analytical, productive, and value-adding. At least that’s the idea (QuantumBlack, an analytics consultancy and part of McKinsey, calls this “augmented intelligence” and did some research in this field which they shared during a Royal Institution event in 2018. You can watch the recording of the entire event here – the key slide is at 16:44. There is some additional reading on Medium here and here).

Some early adoption stories are beginning to pop up in the media: SocGen and Schroders (who, with their start-up hub, are quite proactive about staying close to the cutting edge of tech in investment management) are implementing systems for writing automated portfolio commentaries4. No doubt there will be more.

Disclaimer: this post was written by a human.

United states of crypto crazy


Sat 02-Mar-2019

Some assorted reflections on cryptocurrencies a little more than a year since the peak (Dec-2017). Hindsight is always 20/20, but I never was on the crypto bandwagon (and there are timestamped records to prove it…), so I believe I have the right to a little bit of “told you so” smugness.

A number of friends and colleagues have in recent days mentioned the FCA cryptocurrency assets consultation paper, which made me reflect. That the FCA is on top of fintech developments is in itself great; regulators haven’t historically always been known for being ahead of the curve, but in recent years there has been marked improvement (nb. the FCA isn’t the only regulator proactively looking into cryptocurrencies – regulators in many jurisdictions, including the USA, Germany, France, China, Australia, Japan, and the EU (ESMA), have published guidelines, consultation papers, or cautions pertaining to investing in coins and tokens).

Reading the FCA paper I recalled an article in Wired magazine (UK edition) published more or less exactly a year ago, at a time when bitcoin was only beginning the precipitous slide off its all-time peak of nearly USD 20,000 (which happened in Dec-2017), all things crypto were still the hottest topic in fintech, utilities and services were meant to become better and less centralized, and nothing could have possibly gone wrong. And there was plenty of money being thrown at crypto. PLEN-TY.

While the article was measured and not too hype’y, it still struck me as a little less critical than I’d expect from Wired. But that in itself is probably a reflection of the time it was written in: it was such a frenzied and insane period that even measured journalism would still reflect a little bit of that insanity – it had to (my favourite quote: “<<We had all the money we needed to build the software,>> CEO Brendan Blumer told me. <<All the money that comes from the token sale will be’s profit.>>” – I mean, that level of crazy puts the “AAA” CDOs of the aughts to shame).

One thing that stood out factually in the article is that coins and tokens were referenced synonymously, when they shouldn’t be. I would never have picked up on it had it not been for a very useful session at Clifford Chance in Jun-2018, and the difference is useful to know: while the entire ecosystem is ultra-fluid (as you’d expect, given that it’s entirely digital), coins are generally a medium of exchange native to a given chain and do not represent any claims or assets, while tokens tend to represent claims against the issuer or rights of some sort. So it’s really not the same thing, with tokens falling quite closely under the definition of a security.

What was symbolic to me in this “what a difference a year makes” story is that *the* crypto investor extraordinaire featured in the Wired story, Brock Pierce, has since been the subject of a super-scathing exposé by John Oliver (part of an entire episode-length scathing exposé of crypto in general and bitcoin in particular) and the 2 ventures he’s been associated with are yet to revolutionise the world (I’m not saying they can’t or won’t – I’m saying they haven’t as yet), while the other, seemingly much more measured crypto venture from the same article, Dovu, appears to still be out there, but (see above) is yet to deliver anything I would want to use.

More broadly, one can’t help but notice that despite the hype, the interest, the obscene amounts of money, and genuinely innovative technology, there hasn’t yet been a genuine game-changing disruption use case; JP Morgan and its blockchain-based cross-border payment project may be one exception, and the Japanese project to improve the efficiency of the power grid may be another, but even those are still pilots / POCs – definitely not verified success stories (at least not yet).