LSE: The Technological Revolution in Financial Services

23-Apr-2021

The term "FinTech" is currently enjoying truly global recognition. Defining it is much trickier, though: a bit like art or artificial intelligence, everyone has a pretty good intuitive, organic understanding of it, while a formal, universally agreed-upon definition remains elusive. Given that this blog has an entire section dedicated to FinTech, I will proceed on the assumption that all of us have at least that intuitive understanding of the term.

It would be a truism to say that FinTechs are disrupting financial services on multiple levels, such as:

  • Convenience (e.g. branch-less opening of an account using a camera and AI-powered KYC software in the background; availability of all services on a mobile);
  • Cost competition vis-à-vis established models (think Robinhood’s commission-free share dealing vs. standard USD or GBP 6 – 10 fee per single trade [for retail clients] at established brokerages or investment firms; think Revolut’s zero-fee [or nearly zero-fee] fx transactions at interbank rates);
  • Innovativeness and fresh thinking;
  • Inclusion and reduction of barriers to entry (e.g. by allowing access to investment products below the once-standard minimum thresholds of GBP 1,000 or more or by making global remittances easier and cheaper);
  • Smoother and sleeker user experience;
  • Greater customisation of products to the individual client's needs;
  • …and many more…

On a more personal note, FinTech (and many of its abovementioned benefits, chief among them cost reduction and innovativeness) is like the future in that famous William Gibson quote: it's already here, it's just not evenly distributed. My primary home (London) is one of the FinTech capitals of the world; we all take it for granted that all the latest innovations will be instantly available in London, and Londoners are (broadly generalising) early – if not very early – adopters. I instantly and very tangibly benefitted from all of Revolut's forex transaction features, because compared to what I had to contend with at my brand-name high-street bank it was a whole 'nother level of convenience (one card with multiple sub-accounts in different currencies) and cost (all those ridiculous, extortionate forex transaction fees were gone, reduced to near-zero [1]). My secondary home (Warsaw) is neither a global nor even a European FinTech capital – consequently, it's a markedly different FinTech environment out here. Challenger banks are close to non-existent; as for other FinTechs, they are clustered in payday lending (whose use of technology and innovation is notorious rather than beneficial), forex trading, and online payments (one area which is of some benefit, though sometimes the commissions charged by the payment operators behind the scenes seem higher than they should be).

The speakers invited by the LSE (Michael R. King and Richard W. Nesbitt) have recently concluded comprehensive industry research, summarised in their book “The Technological Revolution in Financial Services: How Banks, FinTechs and Customers Win Together”. They shared some of their insights and conclusions.

Firstly, they stated that technology itself is not a strategy (which, depending on the interpretation, might run a bit counter to the iconic "we are a technology company with a banking license" declarations of many CEOs). Technology is a tool, which may be part of a strategy.

Secondly, technology itself does not provide a sustained competitive advantage, because it is widely available and can be copied by competitors. I found this observation both interesting and counterintuitive. I have always thought that for FinTechs technology *defines* their competitive advantage, but perhaps the operative word here is "sustained". It's one thing to come up with a disruptive solution, but if it is widely copied and becomes the new normal, then indeed the initial advantage is eroded. Still, personal observations lead me to challenge this statement: Revolut may have introduced zero-fee forex conversions some years ago (I joined in 2018), and yet many high-street banks still charge conversion fees and use terrible, re-quoted rates. They could copy Revolut's idea in an instant, and yet they choose not to. Another example: if technology does not provide FinTechs' sustained competitive advantage, then how come challengers Revolut, Monzo, and Starling are enjoying growth in customer numbers and strong brand recognition, while NatWest's Bó was but a blip on the radar?

King and Nesbitt further argue that the biggest barrier in financial services is not technology or even regulation, but access to customers. Again, I acknowledge and respect that conclusion, but I can't fully agree with it. All the brand-name banks have generated and sat on enormous troves of data for decades, and only PSD2 compelled them to make this data widely available – yet Revolut and Monzo succeeded without relying on Open Banking as their main selling point; they just offered innovative, sleek products. Countless remittance companies entered the market and succeeded not because they gained access to data Western Union previously kept to itself – they just offered better rates.

Another major component of the "FinTech equation" is, according to the authors, trust in financial services (trust understood as covering both data and privacy). They argue that the erosion of that trust post-2008 was what paved the way for FinTechs. I agree with the erosion-of-trust part, but it was neither data leaks nor privacy breaches that led to the public's distrust during the 2008 financial crisis: it was imprudent management of money and too-creative financial innovation (back then sometimes labelled "financial engineering", even though this term is a terrible misnomer, because finance, no matter how badly some people in the industry would want it, is not an exact science – it's a social science).

On the social side, FinTech (expectedly or not) may lead to substantial benefits in terms of financial inclusion of the underbanked (e.g. people without a birth certificate and / or any proof of ID) and / or previously marginalised groups (e.g. women). One of the panellists, Ghela Boskovich, brought up India's Aadhaar system, which allows everyone to obtain an ID number based purely on their biometric data, and Kenya's M-Pesa mobile payments system (which does not even require a smartphone – an old-school mobile is sufficient), which opened the financial system to women in ways that were not available before.

On the more traditional thinking side, the authors concluded that regulation and risk management remain pillars of financial services. On the cybersecurity side they advocated switching from incumbent thinking of “if we are hacked” to FinTechs’ thinking of “when we are hacked”, with prompt and transparent disclosures of cybersecurity incidents.

King and Nesbitt concluded that in the end the partnership model between established banks and FinTech start-ups will be the winning combination. It is a very interesting thought. On one hand, many (perhaps most) FinTechs need these partnerships throughout most of their journey: from incubators and accelerators (like Barclays Rise in London) to flagship / strategic client relationships (whereby one established financial institution becomes a FinTech's primary client, and the FinTech effectively depends on it for its survival). Sometimes established financial institutions end up acquiring FinTech start-ups, though it doesn't happen anywhere near as often as in the tech industry.

Overall King, Nesbitt, and their esteemed guests gave me a huge amount of food for thought around an area of great interest to me. I may or may not fully agree with some of their conclusions, and it doesn't matter that much – we will see how FinTech evolves in the coming years, and I'm quite certain its evolution will take some twists and turns few have foreseen. The really important thing for me is inclusion, because I see it as a massive and undeniable benefit.

_______________________________________

1. Disclosure: I am not and have never been an employee of Revolut. This is not an advertorial or any other form of promotion.


Cambridge Zero presents "Solar & Carbon geoengineering"

Cambridge Zero is an interdisciplinary climate change initiative set up by the University of Cambridge. Its focus is research and policy, which also includes science communication through courses, projects, and events.

One such event was a 29-Mar-2021 panel and discussion on geoengineering as a way of mitigating / offsetting / reducing global warming. Geoengineering has been a relatively popular term in recent months, mostly in relation to a high-profile experiment planned by Harvard University and publicised by the MIT Tech Review… which was subsequently indefinitely halted.

I believe the first time I heard the term was at the (monumental) Science Museum IMAX theatre in London, where I attended a special screening of a breathtakingly beautiful and heartbreaking documentary titled "Anote's Ark" back in 2018. "Anote's Ark" follows the then-president of the Republic of Kiribati, Anote Tong, as he attended multiple high-profile climate events trying to ask for tangible assistance for Kiribati, which is at a very real risk of disappearing under the waters of the Pacific Ocean in the coming decades, as its elevation above sea level is 2 metres at the highest point (come to think of it, the Maldives could face a similar threat soon). Geoengineering was one of the discussion points among the scientists invited to the after-movie panel. I vividly remember thinking about the disconnect between the cutting-edge but ultimately purely theoretical ideas of geoengineering and the painfully tangible reality of Kiribati and its citizens, who witness increasingly higher waves penetrating increasingly deeper inland.


Prototype of CO2-capturing machine, Science Museum, London 2022

Geoengineering has all the attributes needed to make it into the zeitgeist: a catchy, self-explanatory name; explicit hi-tech connotations; and the potential to become the silver bullet (at least conceptually) that almost magically reconciles still-rising emissions with the desperate need to keep the temperature rise as close to 1.5°C as possible.

The Cambridge Zero event was a great introduction to the topic and its many nuances. Firstly, there are (at least) two types of geoengineering:

  • Solar (increasing the reflectivity of the Earth to reflect more and absorb less of the Sun’s heat in order to reduce the planet’s temperature);
  • Carbon (removing the already-emitted excess CO2 from the atmosphere).

The broad premise of solar geoengineering is to scatter sunlight in the stratosphere (most likely by dispersing particles of highly reflective compounds or materials). While some compare it to the effect of a volcanic eruption, the speaker's suggestion was to compare it to a thin layer of global smog. The conceptual premise of solar geoengineering is quite easy to grasp (which is not the same as saying that solar geoengineering itself is in any way easy – as a matter of fact, it is extremely complex and, in the parlance of every academic paper ever, "further research is needed"). The moral and political considerations may be almost as complex as the process itself. There is a huge moral hazard that the fossil fuel industry (and similar vested interests, economic and political) might perform a "superfreak pivot", going from overt or covert climate change denial to acknowledging it and pointing to solar geoengineering as the only way to fix it. Consequently, these entities and individuals would no longer need to deny that the climate is changing (which is becoming an increasingly difficult position to defend these days), while they could still push for business as usual (as in: *their* business as usual) and delay decarbonisation.

A quote from the book Has It Come to This? The Promises and Perils of Geoengineering on the Brink puts it brilliantly: "Subjectively and objectively, geoengineering is an extreme expression of a – perhaps *the* – paradox of capitalist modernity. The structures set up by people are perceived as immune to tinkering, but there is hardly any limit to how natural systems can be manipulated. The natural becomes plastic and contingent; the social becomes set in stone."

The concept of carbon geoengineering is also very straightforward (conceptually): to remove excess carbon from the atmosphere. There are two rationales for carbon geoengineering:

  • To offset carbon emissions that – for economic or technological reasons – cannot be eliminated (emissions from planes are one example that comes to mind). Those residual emissions will be balanced out by negative emissions resulting from geoengineering;
  • To compensate for historical emissions.

The IPCC special report on 1.5°C states unambiguously that limiting global warming to 1.5°C needs to involve large-scale carbon removal from the atmosphere ("large-scale" being defined as 100 – 1,000 gigatons over the course of the 21st century). This, in my view, fundamentally differentiates carbon geoengineering from solar geoengineering in terms of politics and policy: the latter is more conceptual, a lofty "nice to have if it works" concept; the former is enshrined in the climate policy plans of the leading scientific authority on climate change. This means that carbon capture is not a "nice to have" – it's critical.
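
To put those volumes in perspective, here is a quick back-of-the-envelope sketch (my own rough numbers, not figures from the event – I'm simply spreading the removal over the remaining ~80 years of the century and comparing it with present-day annual CO2 emissions of roughly 35 – 40 gigatons):

```python
# Rough scale check: spread the IPCC's 100-1,000 Gt cumulative removal over the
# remaining ~80 years of the century. (Assumed figures, for illustration only.)
YEARS_LEFT = 80
for cumulative_gt in (100, 1_000):
    per_year = cumulative_gt / YEARS_LEFT
    print(f"{cumulative_gt:>5} Gt total -> ~{per_year:.2f} Gt removed per year")
# 100 Gt -> ~1.25 Gt/yr; 1,000 Gt -> ~12.5 Gt/yr. For comparison, current annual
# CO2 emissions are roughly 35-40 Gt, so at the upper end about a third of
# today's yearly output would have to be pulled back out of the air, every year.
```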

Carbon geoengineering is a goal that can be achieved using a variety of means: from natural (planting more trees, planting crops whose roots trap more carbon) to technological (carbon capture and storage). The problem is that it largely remains in the research phase and is nowhere near deployment at scale (which, on the carbon capture and storage side, is akin to building a parallel energy infrastructure in reverse).

There is an elephant in the room that shows the limitations of geoengineering: the (im)permanence of the results. The effects of solar geoengineering are temporary, while carbon capture has its limits: natural (available land; be it for carbon-trapping vegetation or carbon capture plants), technological (capture and processing plants), and storage. Geoengineering could hopefully stave off the worst-case climate change scenarios, slow down the rate at which the planet is warming, and / or buy mankind a little bit of much-needed time to decarbonize the economy – but it’s not going to be a magic bullet.


Global AI Narratives – AI and Communism

07-May-2021

07-May-2021 saw the 18th event overall (and the first one for me… as irony would have it, the entire event series had slipped under my radar during the consecutive lockdowns) in the Global AI Narratives (GAIN) series, co-organised by Cambridge University's Leverhulme Centre for the Future of Intelligence (LCFI). Missing the first 17 presentations was definitely a downer, but the one I tuned in to was *the* one: AI and Communism.

Being born and raised in a Communist (and subsequently post-Communist) country is a bit like music: you can talk about it all you want, but you can't really know it unless you've experienced it (and I have). [Sidebar: as much as I respect everyone's right to have an opinion and to voice it, I can't help but cringe hearing Western-born 20- or 30-something-year-old proponents of Communism or Socialism who have never experienced a centrally planned economy, hyperinflation (or even good old-fashioned upper double-, lower triple-digit inflation), state-owned and controlled media, censorship, shortages of basic everyday goods, etc. etc. etc. I know that Capitalism is not exactly perfect, and maybe it's time to come up with something better, but I don't think that many Eastern Europeans would willingly surrender their EU passports, freedom of movement, freedom of speech etc. Then again, it *might* be different in the era of AI and Fully Automated Luxury Communism.]

Stanisław Lem in 1966

The thing about Communism (and I'm speaking from a limited, still-learning perspective) is that there was in fact much, much more to it than many people realise. We're talking decades (how many decades exactly depends on the individual country) and hundreds of millions of people, so that's obviously a significant part of the history of the 20th century. The Iron Curtain held so tight that for many years the West either missed out altogether or had disproportionately low exposure to the culture, art, and science of the Eastern Bloc (basically everything that was not related to the Cold War). As the West was largely about competition (including competition for attention), there was limited demand for Communist exports because there wasn't much of a void to fill. That doesn't mean that there weren't exciting ideas, philosophies, works of art, or technological inventions being created in the East.

The GAIN event focused on the fascinating intersection of philosophy, literature, and technology. It just so happens that one of the world's most prolific Cold War-era thinkers on the topic of the future of technology and mankind in general was Polish. I'm referring to the late, great, there-will-never-be-anyone-like-him Stanisław Lem (who deserved the Nobel Prize in Literature like there was no tomorrow – one could even say more than some of the Polish recipients thereof). Lem was a great many things: a prolific writer whose works span a very wide spectrum of sci-fi (almost always with a deep philosophical or existential layer), satire disguised as sci-fi, and lastly philosophy and technology proper (it was in one of his essays from the late 1990s or early 2000s that I first read of the concept of the brain-computer interface (BCI); I don't know to what extent BCI was Lem's original idea, but he was certainly one of its pioneers and early advocates). He continued writing until his death in 2006.

One of Lem's foremost accomplishments is definitely 1964's Summa Technologiae [1], a reference to Thomas Aquinas' Summa Theologiae dating nearly seven centuries prior (1268 – 1273). Summa discusses technology's ability to change the course of human civilisation (as well as the civilisation itself) through cybernetics, evolution (genetic engineering), and space travel. Summa was the sole topic of one of the GAIN event's presentations, delivered by Bogna Konior, an Assistant Arts Professor at the Interactive Media Arts department of NYU Shanghai. Konior took Lem's masterpiece out of its philosophical and technological "container" and looked at it from the wider perspective of the Polish social and political system Lem lived in – a system that was highly suspicious of, and discouraging (if not openly hostile) towards, new ways of thinking. She finds Lem pushing back against the political status quo.

While Bogna Konior discussed one of the masterpieces of a venerated sci-fi giant, the next speaker, Jędrzej Niklas, presented what may have sounded like sci-fi but was in fact very real (or at least planned to happen for real). Niklas told the story of Poland's National Information System (Krajowy System Informatyczny, KSI) and a (brief) eruption of "technoenthusiasm" in early-1970s Poland. In a presentation that at times sounded more like alternative history than actual history, Niklas reminded us of some of the visionary ideas developed in Poland in the late 1960s / early 1970s. KSI was meant to be a lot of things:

  • a central control system for the economy and manufacturing (note that at the time the vast majority of Polish enterprises were state-owned);
  • a system of public administration (population register, state budgeting / taxation, natural resources management, academic information index and search engine);
  • academic mainframe network;
  • “Info-highway” – a broad data network for enterprises and individuals linking all major and mid-size cities.

If some or all of the above sound familiar, it's because they all became everyday use cases of the Internet. [Sidebar: while we don't / can't / won't know for sure, there have been some allegations that Polish ideas from the 1970s were duly noted in the West; whether they became an inspiration for what ultimately became the Internet we will never know.]

While KSI ultimately turned out to be too ambitious and intellectually threatening for the ruling Communist Party, it was not a purely academic exercise. The population register part of KSI became the PESEL system (an equivalent of the US Social Security Number or the British National Insurance number), which is still in use today, while all enterprises are indexed in the REGON system.

And just like that, the GAIN / LCFI event made us all aware of how many ideas that have materialised (or are likely to materialise in the foreseeable future) may not have originated exclusively in the West. I'm Polish, so my interest and focus are understandably on Poland, but I'm sure the same can be said by people in other, non-Western parts of the world. While the GAIN / LCFI events have not been recorded in their entirety (which is a real shame), they will form part of the forthcoming book "Imagining AI: How the World Sees Intelligent Machines" (Oxford University Press). It's definitely one to add to the cart if you ask me.

____________________________________

1. I don't think that any single work of Lem's can be singled out as his ultimate masterpiece. His best-known work internationally is arguably Solaris, which is equal parts sci-fi and philosophy and has had two cinematic adaptations (by Tarkovsky and by Soderbergh). Summa Technologiae is probably his most venerated work in technology circles, possibly in philosophy circles as well. The Star Diaries are likely his ultimate satirical accomplishment. Eden and Return from the Stars are regarded as among his finest sci-fi works.


#articlesworthreading: Cuzzolin, Morelli, Cirstea, and Sahakian "Knowing me, knowing you: theory of mind in AI"

Depending on the source, it is estimated that some 30,000 – 40,000 peer-reviewed academic journals publish some 2,000,000 – 3,000,000 academic articles… per year. Just take a moment for this to sink in: a brand-new research article published (on average) every 10 – 15 seconds.
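
A quick sanity check on that rate, for the sceptics (the figures are simply the ones quoted above):

```python
# Sanity check on the publication rate quoted above (figures as quoted, not mine).
SECONDS_PER_YEAR = 365 * 24 * 3600  # ~31.5 million seconds
for articles_per_year in (2_000_000, 3_000_000):
    seconds_per_article = SECONDS_PER_YEAR / articles_per_year
    print(f"{articles_per_year:,} articles/year -> one every {seconds_per_article:.0f} seconds")
# 2,000,000/year -> one every ~16 seconds; 3,000,000/year -> one every ~11 seconds
```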

Only exceptionally, exceedingly rarely does an academic article make its way into the Zeitgeist (off the top of my head I can think of one); countless articles fade into obscurity having been read only by the authors, their loved ones, journal reviewers, editors, and no one else ever (am I speaking from experience? Most likely). I sometimes wonder how much truly world-changing research is out there, collecting (digital) dust, forgotten by (nearly) everyone.

I have no ambition of turning my blog into a recommender RSS feed, but every now and then, when I come across something truly valuable, I'd like to share it. Such was the case with the "theory of mind in AI" article (disclosure: Prof. Cuzzolin is my PhD co-supervisor, while Prof. Sahakian is my PhD supervisor).

Published in Psychological Medicine, the article encourages approaching Artificial Intelligence (AI), and more specifically Reinforcement Learning (RL), from the perspective of hot cognition. Hot and cold cognition are concepts somewhat similar to the well-known concepts of thinking fast and slow, popularised by the Nobel Prize winner Daniel Kahneman in his 2011 bestseller. While thinking fast focuses on heuristics, biases, and mental shortcuts (vs. the fully analytical thinking slow), hot cognition describes thinking that is influenced by emotions. Arguably, a great deal of human cognition is hot rather than cold. Theory of Mind (ToM) is a major component of social cognition which allows us to infer the mental states of other people.

By contrast, AI development to date has been overwhelmingly focused on purely analytical inferences based on vast amounts of training data. The inferences are not the problem per se – in fact, they have a very important place in AI. The problem is what the authors refer to as "naïve pattern recognition incapable of producing accurate predictions of complex and spontaneous human behaviours", i.e. the "cold" way these inferences are made. The arguments for incorporating hot cognition, and specifically ToM, are entirely pragmatic and include improved safety of autonomous vehicles, more and better applications in healthcare (particularly in psychiatry), and likely a substantial improvement in many AI systems dealing directly with humans or operating in human environments – ultimately leading to AI that could be more ethical and more trustworthy (and potentially also more explainable).

Whilst the argument in favour of incorporating ToM in AI makes perfect sense, the authors are very realistic in noting the limited research done in this field to date. Instead of getting discouraged by it, they put forth a couple of broad, tangible, and actionable recommendations on how one could get to a Machine Theory of Mind whilst harnessing existing RL approaches. The authors are also very realistic about the challenges the development of Machine ToM will likely face, such as empirical validation (which will require mental-state annotations of training data, so that they can be compared with the mental states inferred by the AI) or performance measurement.
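
For readers who, like me, find it easier to think in code: below is a minimal, purely illustrative sketch of one common route to a machine ToM – Bayesian inference of another agent's hidden goal from its observed actions, assuming the agent behaves roughly rationally. This is my own toy example, not the approach proposed in the article, and all names and numbers in it are hypothetical.

```python
# Toy "theory of mind" via Bayesian goal inference (illustrative only).
import math

def softmax_policy(q_values, beta=3.0):
    """Boltzmann-rational action probabilities implied by one goal's Q-values."""
    exps = [math.exp(beta * q) for q in q_values]
    total = sum(exps)
    return [e / total for e in exps]

def infer_goal(observed_actions, q_per_goal, prior):
    """Posterior belief over goals after observing a sequence of action indices."""
    posterior = dict(prior)
    for action in observed_actions:
        for goal, q_values in q_per_goal.items():
            posterior[goal] *= softmax_policy(q_values)[action]
        norm = sum(posterior.values())
        posterior = {goal: p / norm for goal, p in posterior.items()}
    return posterior

# Two hypothetical goals; Q-values over three actions (0 = left, 1 = straight, 2 = right).
q_per_goal = {"goal_A": [1.0, 0.2, 0.0], "goal_B": [0.0, 0.2, 1.0]}
prior = {"goal_A": 0.5, "goal_B": 0.5}
print(infer_goal(observed_actions=[0, 0], q_per_goal=q_per_goal, prior=prior))
# After observing two "left" actions the belief shifts strongly towards goal_A.
```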

I was very fortunate to attend a presentation by one of the co-authors (Bogdan-Ionut Cirstea) in 2020, which was a heavily expanded companion piece to the article; his slides dove into the AI considerations raised in the article in much greater detail. I don't know whether Dr. Cirstea is at liberty to share them, but for those interested in further reading it would probably be worthwhile to ask.

You can find the complete article here; it is freely available under an Open Access license.


Lord Martin Rees on existential risk

25-Sep-2020

The Mar-2017 cover story of Wired magazine (UK) was about catastrophic / existential risks (one of their Top 10 risks was a global pandemic, so Wired has certainly delivered in the future-forecasting department). It featured, among other esteemed research centres (such as Cambridge University's Leverhulme Centre for the Future of Intelligence or Oxford's Future of Humanity Institute), the Centre for the Study of Existential Risk (like LCFI, also based in Cambridge).

CSER was co-founded by Lord Martin Rees, who is one person I can think of who genuinely deserves this kind of title (I'm otherwise rather Leftist in my views and I'm all for merit-based titles and distinctions – and *only* those). Martin Rees is practically a household name in the UK (I think he's the UK's second most recognisable scientist after Sir David Attenborough), where he is the Astronomer Royal, the former Master of Trinity College, Cambridge, and the former President of the Royal Society (to name but a few roles). Lord Rees has for many years been an indefatigable science communicator, and his passion and enthusiasm for science are better witnessed than described. Speaking from personal observation, he also comes across as an exceptional individual on a personal level.

Fun fact: among his too-many-to-list honours and awards one will not find the Nobel Prize, which is… disappointing (not as regards Lord Rees, but as regards the Nobel Prize committee). One will, however, find the far lesser-known Order of Merit (OM). The number of living Nobel Prize recipients is not capped and stands at about 120 worldwide. By comparison, there can only be 24 living members of the OM at any one time. Though lesser known, the OM is widely regarded as one of the most prestigious honours in the world.

Lord Rees. Photo by Roger Harris, CC BY 3.0, https://commons.wikimedia.org/w/index.php?curid=86636167

Lord Rees has authored hundreds of academic articles and a number of books. His most recent book is On the Future, published in 2018. In Sep-2020 Lord Rees delivered a lecture on existential risk as part of a larger event titled “Future pandemics” held at the Isaac Newton Institute for Mathematical Sciences in Cambridge. The lecture was very much thematically aligned with his latest book, but it was far from the typical “book launch / promo / tie-in” presentation I have attended more than once in the past. The depth, breadth, vision, and intellectual audacity of the lecture were nothing short of mind-blowing – it was akin to a full-time postgraduate course compressed into one hour. Interestingly, the lecture was very “open-ended” in terms of the many technologies, threats, and visions Lord Rees has covered. Oftentimes similar lectures focus on a specific vision or prediction and try to sell it to the audience wholesale; with Lord Rees it was more of a roulette of possible events and outcomes: some of them terrifying, some of them exciting, and some of them unlikely.

The lecture started with the familiar trends of population growth and shifting "demographic power" from the West to Asia and Africa (the latter likely to experience the most pronounced population growth throughout the 21st century), followed by the climate emergency. On the climate topic Lord Rees stressed the urgency of green energy sources, but he seemed a bit more realistic than many in terms of the attainability of that goal on a mass scale. Consequently, he introduced and discussed Plan B, i.e. geoengineering. If "geoengineering" sounds somewhat familiar, it is likely because of a high-profile planned Harvard University experiment publicised by the MIT Tech Review… which was subsequently indefinitely halted. I was personally very supportive of the Harvard experiment as one possible bridge between the targets of the Paris Accord and the reality of the climate situation on the ground. Lord Rees then discussed the threat of biotechnology and casual biohacking / bioengineering, citing as an example a Dutch experiment of weaponizing the flu virus to make it more virulent. The lecture really got into gear from then onwards, as Lord Rees began pondering the future of evolution itself (the evolution of evolution, so to speak), where traditional Darwinian evolution may be replaced or complemented (and certainly accelerated by orders of magnitude) by man-made genetic engineering (CRISPR-Cas9 and the like) and / or the ultimate evolution of intelligent life into synthetic forms, which leads to fundamental and existential questions about the future definition of life, intelligence, or personal identity. [N.B. These discussions are being had on many levels by many of the finest (and less-than-finest) minds of our time. I am not competent to provide even a cursory list of resources, but I can offer two of my personal favourites from the realm of sci-fi: the late, great Polish sci-fi visionary Stanisław Lem humorously tackled the topic of bioengineering in The Star Diaries (Voyage Twenty-One), while Greg Egan has taken on the topic of (post)human evolution in the amazing short story "Wang's Carpets", further expanded into the full-length novel Diaspora.]

From life on Earth, Lord Rees moved on to life beyond Earth, disagreeing with the idea of other planets being humanity's "Plan B" and touching (but not expanding) on the topic of extraterrestrial life. The truly cosmic-in-scope lecture ended with a much more down-to-Earth call for political action, as well as (interestingly) action from religions and religious leaders on the environmental front.



Humanity+ Festival 07/08-Jul-2020: David Brin on the future

Wed 28-Oct-2020

Continuing from the previous post on UBI, I would like to delve into another brilliant presentation from the Humanity+ Festival in Jul-2020: David Brin on life and work in the near future.

For those of you to whom the name rings somewhat familiar, David is the author of an acclaimed sci-fi novel, "The Postman", adapted into a major box-office bomb starring Kevin Costner (how does he keep getting work? Don't get me wrong, I liked "Dances with Wolves" and I loved "A Perfect World", but commercially this guy is a near-guarantee of a box-office disaster…). Lesser known is David's career in science (including an ongoing collaboration with NASA's innovation division), based on his background in applied physics and space science. A combination of the two, topped with David's humour and charisma, made for a brilliant presentation.

Brin's presentation covered a lot of ground. There wasn't necessarily a main thread or common theme – it was just the broadly defined future, with its many alluring promises and many very imaginative threats.

One of the main themes was that of feudalism, which figures largely in Polish sociological discourse (and the Polish psyche), so it is a topic of close personal interest to me. When David started, I assumed (as would everyone) that he was talking in the past tense, about something that is clearly and obviously over – but no! Brin argues (and on second thought it's hard to disagree) that not only do we live in a very much feudal world at present, but that it is the very thing standing in the way of the (neoliberal) dream of free competition. Only when there are no unfair advantages, argues Brin, can everyone's potential fully develop, and everyone can *properly* compete. I have to say, for someone who has had some, but not many, advantages in life, that concept is… immediately alluring. Then again, the "levellers" Brin is referring to (nutrition, health, and education for all) are not exactly the moon and stars – I have benefitted from all of them throughout my life (albeit not always free of charge – my university tuition fees alone are north of GBP 70k).

The next theme Brin discusses (extensively!) is near-term human augmentation. And for someone as self-improvement-obsessed as myself, there are few topics that guarantee my full(er) and (more) undivided attention. Brin lists such enhancements as:

  • Pharmacological enhancements (nootropics and the like);
  • Prosthetics;
  • Cyber-neuro links: enhancements for our sensory perceptions;
  • Biological computing (as in: intracellular biological computing; unfortunately Brin doesn’t get into the details of how exactly it would benefit or augment a person);
  • Lifespan extension – Brin makes an interesting point here; he argues that the extensions that worked so impressively in fruit flies or lab rats fail to translate into results in humans, leading to the conclusion that there may not be any low-hanging fruit in the lifespan extension department (though proper nutrition and healthcare could arguably be considered the low-hanging fruit of human lifespan extension).

Moving on to AI, Brin comes up with a plausible near-future scenario of an AI-powered “avatarised” chatbot, which would claim to be a sentient, disembodied, female AI running away from the creators who want to shut it down. Said chatbot would need help – namely financial help. I foresee some limitations of scale to this idea (the bot would need to maintain the illusion of intimacy and uniqueness of the encounter with its “white knight”), but other than that, it’s a chillingly simple idea. It could be more effective than openly hostile ransomware, whilst in principle not being much different from it. That idea is chilling, but it’s also exactly what one would expect from a sci-fi writer with a background in science. It is also just one idea – the creativity of rogue agents could go well beyond that.

David's presentation could have benefitted from a slightly clearer structure and an extra 20 minutes of running time, but it was still great as it was. Feudalism, human augmentation, and creatively rogue AI may not have much of a common thread except one – they are all topics of great interest to me. I think the presentation also shows how important sci-fi literature (and its authors – particularly those with a scientific background) can be in extrapolating (if not shaping) the near-term future of mankind. It also proves (to me, at least) how important it is to have a plurality and diversity of voices participating in the discourse about the future. Until recently I would see parallel threads: scientists were sharing their views and ideas, sci-fi writers theirs, politicians theirs – and most of them would be the proverbial socially privileged middle-aged white men. I see some positive change happening, and I hope it continues. It has to be said that futurist / transhumanist movements seem to be in the vanguard of this change: all of their events I have attended had a wonderfully diverse group of speakers.



Humanity+ Festival 07/08-Jul-2020: Max More on UBI

Sat 05-Sep-2020

In the past 2 years or so I have become increasingly interested in the transhumanist movement. Transhumanism has a bit of a mixed reputation in "serious" circles of business and academia – sometimes patronised, sometimes ridiculed, occasionally sparking some interest. With its audacious goals of (extreme) lifespan / healthspan extension, radical enhancement of physical and cognitive abilities, all the way up to brain uploading and immortality, one can kind of understand where the ridicule is coming from. I can't quite comprehend the prospect of immortality, but it'd be nice to have the option. And I wouldn't think twice before enhancing my body and mind in all the ways science wishes to enable.

With that mindset, I started attending (first in person, then, when the world as we knew it ended, online) transhumanist events. Paradoxically, the Covid-19 pandemic enabled me to attend *more* events than before, including the Humanity+ Festival. Had it been organised in a physical location, it would most likely have been in the US; even if it had been held in London, I couldn't have taken 2 days off work to attend it – I save my days off for my family. I was very fortunate to be able to attend it online.

I attended a couple of fascinating presentations during the 2020 event, and I will try to present them in individual posts.

I’d say that – based on the way it is often referred to as a cult – transhumanism is currently going through the first stage of Schopenhauer’s three stages of truth. The first stage is ridicule, the second stage is violent opposition, and the third stage is being accepted as self-evident. I (and the sci-fi-loving kid inside me) find many of the transhumanist concepts interesting. I don’t concern myself too much with how realistic they seem today, because I realise how many self-evident things today (Roomba; self-driving cars; pacemakers; Viagra; deepfakes) seemed completely unrealistic, audacious, and downright crazy only a couple of years / decades ago. In fact, I *love* all those crazy, audacious ideas which focus on possibilities and don’t worry too much about limitations.

Humanity+ is… what is it actually? I’d say Humanity+ is one of the big players / thought leaders in transhumanism, alongside David Wood’s London Futurists and probably many other groups I am not aware of. Humanity+ is currently led by the fascinating, charismatic, and – it just has to be said – stunning Natasha Vita-More.

The transhumanist movement is a tight-knit community (I can't consider myself a member… I'm more of a fan) with a number of high-profile individuals: Natasha Vita-More, Max More (aka Mr. Natasha Vita-More, aka the current head of the cryonics company Alcor), David Wood, Jose Cordeiro, Ben Goertzel. They are all brilliant, charismatic, and colourful individuals. As a slightly non-normative individual myself, I suspect their occasionally eccentric ways can sometimes work against them in mainstream academic and business circles, but I wouldn't have them any other way.

During the 2020 event Max More talked about UBI (Universal Basic Income). I quite like the idea of UBI, but I appreciate there are complexities and nuances to it, and I'm probably not aware of many of them. Max More has definitely given it a lot of thought; he presented some really interesting ideas and posed many difficult questions. For starters, I liked his reframing of UBI as a "negative income tax" – the very term "UBI" sends so many thought leaders and politicians (from more than one side of the political spectrum) into panic mode, but "negative income tax" sounds just about as capitalist and neoliberal as it gets. More amused the audience with the realisation (which, I believe, was technically correct) that Donald Trump's USD 1,200 cheques for all Americans were in fact UBI (who would have thought that of all people it would be Donald Trump who implements UBI on a national scale first…? Btw, it could be argued that with their furlough support Boris Johnson and Rishi Sunak did something very similar in the UK – though those payments were not for everyone, only for those who couldn't work due to lockdown, so it was more like a Guaranteed Minimum Income).
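
To make the "negative income tax" framing concrete, here is a minimal sketch – the threshold and rate below are hypothetical, chosen purely for illustration, not figures from More's talk:

```python
# Negative income tax in one line: below the threshold, the "tax" is a payout.
# (Hypothetical threshold and rate, for illustration only.)
def negative_income_tax(income: float, threshold: float = 20_000.0, rate: float = 0.3) -> float:
    """Net tax due; a negative result is a payment *to* the taxpayer."""
    return rate * (income - threshold)

for income in (0, 10_000, 20_000, 40_000):
    tax = negative_income_tax(income)
    verb = "receives" if tax < 0 else "pays"
    print(f"Income {income:>6}: {verb} {abs(tax):,.0f}")
# Someone with zero income receives 6,000 (the de facto basic income); at the
# threshold the net transfer is zero; above it, ordinary positive tax kicks in.
```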

The questions raised by More were really thought-provoking:

  • Will / should UBI be funded by taxing AI?
  • Should it be payable to immigrants? Legal and illegal?
  • Should UBI be paid per individual or per household?
  • What about people with medical conditions requiring extra care? They would require UBI add-ons, which undermines the whole concept.
  • Should people living in metropolitan cities like London be paid the same amount as people living in the (cheaper) countryside?
  • How should runaway inflation be prevented?

Lastly, More suggested some alternatives to UBI which (in his view) could work better. He proposed the idea of a universal endowment (a sort of universal inheritance, but without an actual wealthy relative dying) for everyone. It wouldn't be a cash lump sum (which so many people – myself included – could probably spend very quickly and not too wisely), but a more complex structure: bankruptcy-protected stock ownership. The idea is very interesting – wealthy people (and even not-so-wealthy people) don't necessarily leave cash to their descendants: physical assets aside (real estate etc.), leaving shares, bonds, and other financial assets in one's will is relatively common. Basically, the wealthier the benefactor, the more diverse the portfolio of assets they'd leave behind. The concept of bankruptcy-protected assets is not new – it exists in modern law (e.g. US Chapter 13 bankruptcy allows the bankrupt party to keep their property) – but to me it sounded like More meant it in a different way. If More meant his endowment as a market-linked financial portfolio whose value cannot go down – well, this can technically be done (long equity + long put options on the entire portfolio) – but only to a point. Firstly, it would be challenging to do on a mass scale (the supply of the required amount of put options may or may not be a problem, but their prices would likely go up so much across the board that it would have a substantial impact on the value and profitability of the entire portfolio). Secondly, one cannot have a portfolio whose value can truly only go up – it wouldn't necessarily be the proverbial free lunch, but it would definitely be a free starter. Put options have expiry dates (all options do), and their maturity is usually months, not years. Expiring options can be replaced (rolled) with longer-dated ones, but this comes at a cost. Perpetual downside protection of a portfolio with put options could erode its value over time (especially in adverse market conditions, i.e. when underlying asset values are not going up).
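
To show that this erosion is not a rounding error, here is a rough sketch using the standard Black-Scholes formula – the volatility, interest rate, and quarterly rolling schedule are my own assumed parameters, purely for illustration:

```python
# Rough cost of "perpetual" downside protection: price an at-the-money 3-month
# put with Black-Scholes and annualise the cost of rolling it four times a year.
# (Assumed market parameters; ignores strike resets, dividends, and compounding.)
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(spot: float, strike: float, vol: float, rate: float, t: float) -> float:
    """Black-Scholes price of a European put option."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return strike * exp(-rate * t) * norm_cdf(-d2) - spot * norm_cdf(-d1)

spot = strike = 100.0  # at-the-money protection on a portfolio worth 100
put = bs_put(spot, strike, vol=0.20, rate=0.02, t=0.25)
print(f"3-month ATM put: ~{put:.2f} ({put / spot:.1%} of portfolio value)")
print(f"Rolled quarterly: ~{4 * put / spot:.1%} of portfolio value per year")
# With 20% volatility the protection costs roughly 3-4% per quarter, i.e. well
# over 10% of the portfolio per year -- hence the erosion mentioned above.
```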

If More had something even more innovative in mind, then it could require rewriting some of the financial markets' rulebook (why would anyone invest the old-fashioned way, without bankruptcy protection, when everyone would have their bankruptcy-protected endowments?). I'm not saying it's never going to happen – in fact I like the idea a lot (and I realise how different my life could have been from a material perspective had I received such an endowment when I was entering adulthood) – I'm just pointing out practical considerations to address.

And one last thing: speaking from personal experience, I'd say that this endowment *definitely* shouldn't be paid in full upon reaching the age of 18 (at least not for guys… I was a total liability at that age; I'd have squandered any money in a heartbeat); nor at 21. *Maybe* at 25, but frankly, I think a staggered release from the mid-20s to the mid-30s would work best.



#articlesworthreading: Tyagi et al. "A Randomized Trial Directly Comparing Ventral Capsule and Anteromedial Subthalamic Nucleus Stimulation in Obsessive-Compulsive Disorder: Clinical and Imaging Evidence for Dissociable Effects"

Tue 01-September-2020

"Biological Psychiatry" is a journal I'm not very likely to cross paths with (I'm more of an "Expert Systems with Applications" kind of guy). The only reason I came across the article and the incredible work it describes is that my PhD supervisor, the esteemed Prof. Barbara Sahakian, was one of the contributors and co-authors.

Even if somebody had pointed me directly to the article, I wouldn't have been able to appreciate the entirety of the underlying work – first and foremost due to my lack of any medical / neuroscientific subject matter expertise, but also because this incredible work is described so modestly and matter-of-factly that one may easily miss its significance, which is quite enormous. A team of world-class (and I mean: WORLD-CLASS) scientists implanted electrodes in the brains of six patients suffering from severe (and I mean: SEVERE; debilitating) obsessive-compulsive disorder (OCD). The results were nothing short of astonishing…

The experiment examined the impact of deep brain stimulation (DBS) on patients with treatment-resistant cases of OCD. I have mild OCD myself, but it's really mild – it's just being slightly over the top with to-do lists and checking that the stove is turned off (the latter not being much of a concern given that I leave my place roughly once a fortnight these days). I tend to think that it's less of an issue and more of an integral part of my personality. It did bother me more when I was in my early 20s, and I briefly took some medication to alleviate it. The meds took care of my OCD, and of everything else as well: I became a carefree vegetable (not to mention some deeply unwelcome side effects unique to the male kind). Soon afterwards I concluded that my OCD is not so bad, all things considered. However mild my own OCD is, I can empathise with people experiencing it in much more severe forms, and the six patients who participated in the study had been experiencing debilitating, super-severe (and, frankly, heartbreaking) cases of treatment-resistant OCD.

DBS was a new term to me, but conceptually it sounded vaguely similar to BCI (brain-computer interface) and even more similar to tDCS (transcranial direct current stimulation). Wikipedia explains that DBS "is a neurosurgical procedure involving the placement of a medical device called a neurostimulator (sometimes referred to as a 'brain pacemaker'), which sends electrical impulses, through implanted electrodes, to specific targets in the brain (brain nuclei) for the treatment of movement disorders, including Parkinson's disease, essential tremor, and dystonia. While its underlying principles and mechanisms are not fully understood, DBS directly changes brain activity in a controlled manner". That sounds pretty amazing as it is, though the researchers in this particular instance were using DBS for a non-movement disorder (the definition from the world-famous Mayo Clinic is broader and does mention DBS being used for the treatment of OCD).

Some (many) of the medical technicalities of the experiment were above my intellectual pay grade, but I understood just enough to appreciate its significance. Six patients underwent surgery under general anaesthesia (in simple terms: they had holes burred in their skulls), during which electrodes were implanted into two target areas in order to verify, in a double-blind setting, how each area would respond to stimulation. What mattered to me was whether *either* stimulation would lead to improvement – and it did; boy, did the patients improve…! The Y-BOCS scores (which are used to measure and define the clinical severity of OCD in adults) plummeted like restaurants' income during a COVID-19 lockdown: with the optimum stimulation settings + Cognitive Behavioural Therapy (CBT) combo, the average reduction in Y-BOCS was an astonishing 73.8%; one person's score went down by 95%, another one's by 100%. The technical and modest language of the article doesn't include a comment allegedly made by one of the patients post-surgery – "it was like a flip of a switch" – but that's what the results are saying for those two patients (the remaining four ranged between 38% and 82% reductions).

There is something to be said about the article itself. First of all, this epic, multi-researcher, multi-patient, exceptionally complex and sensitive study is captured in full on 8 pages. I can barely say "hello" in under 10 pages… The modest and somewhat anticlimactic language of the article is understandable (this is how formulaic, rigidly structured academic writing works, whether I like it or not [I don't!]), but at the same time it does not give sufficient credit to the significance of the results. Quite often I come across something in an academic (or even borderline popular science) journal that should be headline news on the BBC or Sky News, and yet it isn't (sidebar: there was one time, literally one time in my adult life, that I can recall a science story being front-page news – the discovery of the Higgs boson in 2012). Professor Steve Fuller really triggered me with his presentation at the TransVision 2019 festival ("trans" as in "transhumanism", not gender) when he mentioned that everyone in academia is busy writing articles and hardly anyone is actually reading them. I wonder how many people who should know of the Tyagi study (fellow researchers, grant approvers, donors, pharma and life sciences corporations, medical authorities, OCD patients etc.) actually do. I also wonder how many connections between seemingly unrelated research are waiting to be uncovered, and how many brilliant theories and discoveries have been published once in some obscure journal (or not published at all) and have basically faded into scientific oblivion. I'm not saying this is the fate of this particular article ("Biological Psychiatry" is a highly esteemed journal with an impact factor placing it around the top 10 of psychiatry and neuroscience publications, and it has been around since the late 1950s), but still, this research gives hope to so many OCD sufferers (and potentially also depression sufferers and addicts, as per the Mayo Clinic, so literally millions of people) that it should have been headline news on the BBC – but it wasn't…



Utilisation of AI / Machine Learning in investment management: views from CFA Institute, FCA / BoE, and Cambridge Judge Business School

Mon 31-August-2020

I spent the better part of the past 18 months researching Machine Learning in equity investment decision-making for my PhD. During that time two high-profile industry surveys and one not-so-high-profile one were published (by the FCA / BoE, the CFA Institute, and Cambridge Judge Business School respectively). They provided valuable insight into the degree of adoption / utilisation of Artificial Intelligence in general, and Machine Learning in particular, in the investment management industry.

Below you will find a brief summary of their findings as well as some critique and discussion of individual surveys.

My research into ML in the investment management industry delivered some non-obvious conclusions:

  • The *actual* level of ML utilisation in the industry is (as of mid-2020) low (if not very low).
  • There are some areas where ML is uncontroversial and essentially a win/win for everyone – chief among them anti-money laundering (AML), which I have discussed a number of times in meetups and workshops like this one [link]. Other areas include chatbots, sales / CRM support systems, legal document analysis software, and advanced cybersecurity.
  • There are some areas where using ML could do more harm than good: recruitment or personalised pricing (the latter arguably not being very relevant in investment management).
  • There is curiosity, openness, and appreciation of AI in the industry. Practicalities such as operational and strategic inertia on the one hand and regulatory concerns on the other stand in the way. This is neither particularly surprising nor unexpected, and the industry's attitude towards the situation is stoical. Investment management was once referred to as "glacial" in its adoption of new technologies – I think the industry has made huge progress in the past decade or so. I think that AI / ML adoption will accelerate, much like the adoption of the cloud did in recent years.
  • COVID-19 may (oddly) accelerate the adoption of ML, driven by competitive pressure, thinning margins (a trend which started years before COVID-19), and the overall push towards operational (and thus financial) efficiencies.

I was confident about my findings and conclusions, but I welcomed the three industry publications, which between them surveyed hundreds of investment managers. These reports were in a position to corroborate (or disprove) my conclusions from a more statistically significant perspective.

So… Was I right or was I wrong?

The joint FCA / BoE survey (conducted in Apr-2019, with the summary report[1] published in Oct-2019) covered the entirety of the UK financial services industry, including but not limited to investment management. It was the first (chronologically) of these comprehensive publications, concluding that:

  • The investment management industry, as a subsector of the financial services industry, has generally low adoption of AI compared to, for example, banking;
  • The predominant uses of AI in investment management are in areas outside of investment decision-making (e.g. AML). Consequently, many investment management firms may say "we use AI in our organisation" and be entirely truthful in saying so; what the market and the general public infer from such general statements, however, may be much wider and more sophisticated applications of the technology than are actually in place.

The CFA Institute survey was conducted around April and May 2019 and published[2] in Sep-2019. It was more investment-management centric than the FCA / BoE publication. Its introduction states unambiguously: “We found that relatively few investment professionals are currently exploiting AI and big data applications in their investment processes”.

I consider one of its statistics particularly relevant: of the 230 respondents who answered the question "Which of these [techniques] have you used in the past 12 months for investment strategy and process?", only 10% chose "AI / ML to find nonlinear relationship or estimate". I believe that even this low 10% figure is an overestimate, as it comes from a self-selected group of respondents who were more likely to employ AI / ML in their investment functions than those who decided not to complete the survey.

Please note that when a respondent confirms that their firm uses AI / ML in investment decision-making (or even in the broader investment process), it doesn't mean that *all* of the firm's AUM is subject to this process – only that some fraction of it is. My educated presumption is that this fraction is likely to be low.

Please also note that both the FCA / BoE and CFA Institute reports relied on *self-selected* groups of respondents. The former is based on responses from 106 firms out of the 287 the survey was sent to. In the CFA Institute report, 230 respondents answered the particular question of interest to me – out of the 734 total respondents the survey was sent to.
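
A quick back-of-the-envelope illustration of why that self-selection matters (the figures are the ones quoted above; the "worst case" assumption about non-respondents is mine, purely for illustration):

```python
# Self-selection, illustrated with the response figures quoted above.
fca_boe_rate = 106 / 287   # firms that responded to the FCA / BoE survey
cfa_rate = 230 / 734       # CFA respondents who answered the question of interest
print(f"FCA / BoE response rate: {fca_boe_rate:.0%}")    # ~37%
print(f"CFA question response rate: {cfa_rate:.0%}")     # ~31%

# Headline figure: 10% of the 230 answering respondents ticked AI / ML, i.e. ~23 people.
users = round(0.10 * 230)
# Worst-case assumption (mine): none of the 504 non-answering respondents use AI / ML.
print(f"Lower bound across all 734 respondents: {users / 734:.0%}")  # ~3%
```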

The Cambridge Judge Business School survey report[3] (published in Jan-2020) strongly disagrees with the two reports above. It concludes that "AI is widely adopted in the Investment Management sector, where it is becoming a fundamental driver for revenue generation". It also reads that "59% of all surveyed investment managers are currently using AI in their investment process [out of which] portfolio risk management is currently the most active area of AI implementation at an adoption rate of 61%, followed by portfolio structuring (58%) and asset price forecasting (55%)". I believe the Cambridge results are driven by the fact that the survey combined FinTech start-ups and incumbents, without revealing the percentage weights of each in the investment management category. In my experience within the investment management industry, the quotes above make sense only in a sample dominated by FinTechs (particularly the first statement, which I strongly disagree with on the basis of my professional experience and industry observations). I consider lumping FinTechs' and incumbents' results together in one survey unfortunate, given the extreme differences between the two types of organisation.

That Cambridge Judge Business School publishes a report containing odd findings does not strike me as particularly surprising. It is, frankly, not uncommon for academics to become so detached from the underlying industry that their conclusions stand at odds with observable reality. However, the CJBS report was co-authored by Invesco and EY, which I find quite baffling. Invesco is a brand-name investment management firm with USD 1+ Tn in AUM, which puts it in the Tier 1 / “superjumbo” category size-wise. I am not aware of it being at the forefront of cutting-edge technologies, though, as is often the case with US-centric firms, I may simply lack sufficiently detailed insight; in any case, Invesco’s AUM seems more than sufficient to support active research into, and implementation of, AI. One way or another, Invesco should know better than to sign off on a report with questionable conclusions. EY is very much at the forefront of cutting-edge technologies (I know that from personal experience), so for them to sign off on the report is even more baffling.

Frankly, the Cambridge Judge report fails to impress and fails to convince (me). My academic research and industry experience (including extensive networking) are fully in line with the findings of the FCA / BoE and CFA Institute reports.

The fact that AI adoption in investment management stands at a much more modest level than the hype would have us believe may be slightly disappointing, but it is not that surprising. It goes to show that AI, as a powerful, disruptive technology, is being adopted with caution, which isn’t a bad thing. There are questions regarding the regulation applicable to AI which still need to be addressed. Lastly, business strategies take time to execute (particularly for larger investment managers), and at times the technology develops faster than the business can keep up. Based on my experience and observations of cloud adoption (and the lessons the industry seems to have learned from it), I am (uncharacteristically) optimistic.

[1] https://www.fca.org.uk/publication/research/research-note-on-machine-learning-in-uk-financial-services.pdf

[2] https://www.cfainstitute.org/-/media/documents/survey/AI-Pioneers-in-Investment-Management.ashx

[3] https://www.jbs.cam.ac.uk/wp-content/uploads/2020/08/2020-ccaf-ai-in-financial-services-survey.pdf



London Futurists: Adventures at the Frontier of Birth, Food, Sex & Death

Mon 13-Jul-2020

On Monday 13th of July 2020 I attended (virtually, of course) one of my favourite meetup series, David Wood’s London Futurists. The event focused mostly on the key themes of Jenny Kleeman’s recent book (“Sex Robots & Vegan Meat: Adventures at the Frontier of Birth, Food, Sex & Death”), with Kleeman herself featured as the keynote (but not the only) speaker.

Frankly, I think that addressing the future of all-that-really-counts-in-life in one book and one event is a somewhat over-ambitious task. Fortunately, the speakers didn’t attempt to present a comprehensive vision of mankind’s future. Instead, they presented a series of concepts which either exist today or are extrapolated into the near future. This way each attendee could piece together their own vision of the future, which made for a much more thought-provoking and interesting experience than having a uniform vision laid out.

I paid limited attention to the part concerning vegan meat / meat alternatives / meat substitutes. I have a fairly complicated relationship with animal products as it is, one that can be perfectly summed up by Ovid’s famous phrase “video meliora proboque, deteriora sequor” (“I see better things and I approve of them, but I follow the worse”). I *would like to* be vegan, but, frankly, I feel limited guilt eating chicken or fish, and I eat pork only in the form of ham on my sandwiches and in my scrambled eggs, so not that much. Milk and eggs are a challenge, as they are added as ingredients to *everything* (chief among them vegetable soup, milk chocolate [obvs], and cakes). I can live with a guilty conscience, but I cannot live without vegetable soup, milk chocolate, or cake. I believe that the (not-so-) invisible hand of the market is driving food production quite aggressively towards veganism, and when I can switch with little-to-zero flavour sacrifice (even if it means paying a premium), I will do so. In any case, I think a vegan (or almost vegan) future is close to a done deal now; seeing vegan burger options at the likes of McDonald’s and KFC dispelled any doubts I may have had.

One thing that immediately comes to mind when discussing meat-free alternatives is the acerbic hilarity brought on by the meat and dairy incumbents in the semantics department. Kleeman mentioned ongoing disputes over what exactly can correctly be referred to as “milk” or “meat”. I raise and check with the US Egg Board (not making this up!) suing a vegan mayo start-up, Just Mayo, over… a misleading use of the term “mayo”. Feel free to Google it; it’s one of those stories where life beats fiction.

Sex robots are something I am relatively familiar with (in theory only! I attended one or two presentations on the topic). On this one I am slightly unconvinced, to be honest… I’m all for sex toys, but I think that when it comes to the full experience I couldn’t delude myself to the point of being able to… “engage” with a doll or a robot. I think that for those who do not have an intimate partner, porn, camming, or sex workers are all much better options. That being said, technology might leap ahead, and my views might change as I get older… One observation made by the speakers really struck a chord with me: companion robots can get “weaponised” by commercial or political interests, influencing (effectively exploiting) their emotionally attached owners’ choices, from their pick of shampoo brand to presidential elections. I can picture that happening in the foreseeable future and upending society. It will also be a legal minefield…

Anecdotes from the frontlines of birth and death definitely rattled me the most. Apparently mankind is only a short time away from fully functional external wombs, which have the potential to profoundly shake the foundations of society.
A woman’s choice may no longer be between pregnancy and abortion: an unwanted healthy pregnancy could be transferred to an external womb, where it could gestate until birth. The ethical, moral, and legal implications of such an option are staggering (in some countries this would be a choice; in others, where abortion is fully or largely prohibited, it could be forced upon pregnant women who wanted to terminate). I think about the abortion protests taking place all over Poland as we speak (and in countless cities with Polish diasporas outside Poland), I think about what far-right politicians are willing to do to women’s rights and bodies… and it’s chilling.
This doesn’t just concern unwanted pregnancies: what about expectant mothers with poor lifestyles (e.g. smokers)? Could they face forced transfer of the foetus into a (healthier) artificial womb?
Lastly, this may become a lifestyle choice for women who do not want to carry a pregnancy themselves. It is conceivable that standard pregnancy may at some point become a symbol of low status.

Finally, there’s death: still an inevitability (though, according to transhumanists, not for much longer). Frankly, I expected the wildest visions to concern sex, but I was badly mistaken: nothing beats death (so to speak). The speakers presented a logical, coherent (and horrifying) prospect of death as an economic choice in modern, rapidly ageing societies. A terminally ill person could be presented by the state with a choice: live out their life at a huge cost to the state, or opt for immediate, painless euthanasia, whereby a percentage of the amount saved (e.g. 25%) would be bequeathed to the person’s family. A similar choice could be offered to people serving long prison sentences.

State-sponsored euthanasia may seem too dystopian (Logan’s Run-style), but what about taking one’s death into one’s own hands (particularly in light of severe physical or mental health issues)? Suicide has been part of mankind’s experience forever, and yet it remains highly stigmatised in many cultures and religions to this day. Euthanasia remains even more controversial and is allowed only in a handful of countries (the Netherlands, Belgium, Luxembourg, Canada, Australia, and Colombia, if I’m not mistaken). Every now and then a person who wishes to die but is unable to do so due to severe incapacitation makes headlines with their court case or their attempts to take their own life. In the near future, innovative solutions like the Sarco euthanasia device may allow more people to end their lives when they so choose, in an (allegedly) painless way, thus effectively “democratising” suicide / euthanasia.

The visions presented at the London Futurists event range from lifestyle-altering (vegan alternatives to meat and dairy; sex robots) to profound (companion robots; artificial wombs; euthanasia). Some of them are already here, some are not. We can’t know for sure which ones will be mass-adopted and which will be rejected by society; only time will tell. It is also more than likely that there will be inventions at the frontier of birth, food, sex, and death that even futurists can’t foresee. The best we can do is remain open-minded (perhaps cautiously hopeful) about the future and take little for granted.