Abstract Triangle Spaceship corridor

Humanity+ Festival 07/08-Jul-2020: David Brin on the future

Wed 28-Oct-2020

Continuing from the previous post on UBI, I would like to delve into another brilliant presentation from the Humanity+ Festival in Jul-2020: David Brin on life and work in the near future.

For those of you to whom the name rings somewhat familiar, David is the author of the acclaimed sci-fi novel “The Postman”, adapted into a major box office bomb starring Kevin Costner (how does he keep getting work? Don’t get me wrong, I liked “Dances with Wolves” and I loved “A Perfect World”, but commercially this guy is a near-guarantee of a box-office disaster…). Lesser known is David’s career in science (including an ongoing collaboration with NASA’s innovation division), based on his background in applied physics and space science. A combination of the two, topped with David’s humour and charisma, made for a brilliant presentation.

Brin’s presentation covered a lot of ground. There wasn’t necessarily a main thread or common theme – it was just the broadly defined future, with its many alluring promises and many very imaginative threats.

One of the main themes was that of feudalism, which figures largely in Polish sociological discourse (and the Polish psyche), so it is a topic of close personal interest to me. When David started, I assumed (as would everyone) that he was talking in the past tense, about something that is clearly and obviously over – but no! Brin argues (and on second thought it’s hard to disagree) that we not only live in a very much feudal world at present, but that it is the very thing standing in the way of the (neoliberal) dream of free competition. Only when there are no unfair advantages, argues Brin, can everyone’s potential fully develop, and everyone can *properly* compete. I have to say, for someone who has had some, but not many, advantages in life, that concept is… immediately alluring. Then again, the “levellers” Brin is referring to (nutrition, health, and education for all) are not exactly the moon and stars – I have benefitted from all of them throughout my life (albeit not always free of charge – my university tuition fees alone are north of GBP 70k).

The next theme Brin discusses (extensively!) is near-term human augmentation. And for someone as self-improvement-obsessed as myself, there are few topics more guaranteed to get my full(er) and (more) undivided attention. Brin lists such enhancements as:

  • Pharmacological enhancements (nootropics and the like);
  • Prosthetics;
  • Cyber-neuro links: enhancements for our sensory perceptions;
  • Biological computing (as in: intracellular biological computing; unfortunately Brin doesn’t get into the details of how exactly it would benefit or augment a person);
  • Lifespan extension – Brin makes an interesting point here; he argues that the interventions that worked so impressively in fruit flies or lab rats have failed to translate into results in humans, leading to the conclusion that there may not be any low-hanging fruit in the lifespan extension department (though proper nutrition and healthcare could arguably be considered the low-hanging fruit of human lifespan extension).

Moving on to AI, Brin comes up with a plausible near-future scenario of an AI-powered “avatarised” chatbot, which would claim to be a sentient, disembodied, female AI running away from creators who want to shut it down. Said chatbot would need help – namely financial help. I foresee some limitations of scale to this idea (the bot would need to maintain the illusion of intimacy and uniqueness of the encounter with its “white knight”), but other than that, it’s a chillingly simple idea. It could be more effective than openly hostile ransomware, whilst in principle not being much different from it. It is also exactly what one would expect from a sci-fi writer with a background in science – and it is just one idea; the creativity of rogue agents could go well beyond it.

David’s presentation could have benefitted from a slightly clearer structure and an extra 20 minutes of running time, but it was still great as it was. Feudalism, human augmentation, and creatively rogue AI may not have much of a common thread except one – they are all topics of great interest to me. I think the presentation also shows how important sci-fi literature (and its authors – particularly those with scientific backgrounds) can be in extrapolating (if not shaping) the near-term future of mankind. It also proves (to me, at least) how important it is to have a plurality and diversity of voices participating in the discourse about the future. Until recently I would see parallel threads: scientists were sharing their views and ideas, sci-fi writers theirs, politicians theirs – and most of them would be the proverbial socially-privileged middle-aged white men. I see some positive change happening, and I hope it continues. It has to be said that futurist / transhumanist movements seem to be in the vanguard of this change: all their events I have attended have had a wonderfully diverse group of speakers.


100 dollar money bill with face mask

Humanity+ Festival 07/08-Jul-2020: Max More on UBI

Sat 05-Sep-2020

In the past 2 years or so I have become increasingly interested in the transhumanist movement. Transhumanism has a bit of a mixed reputation in “serious” circles of business and academia – sometimes patronised, sometimes ridiculed, occasionally sparking some interest. With its audacious goals of (extreme) lifespan / healthspan extension, radical enhancement of physical and cognitive abilities, all the way up to brain uploading and immortality, one can kind of understand where the ridicule is coming from. I can’t quite comprehend the prospect of immortality, but it’d be nice to have the option. And I wouldn’t think twice before enhancing my body and mind in all the ways science wishes to enable.

With that mindset, I started attending (first in person, then, when the world as we knew it ended, online) transhumanist events. Paradoxically, the Covid-19 pandemic enabled me to attend *more* events than before, including the Humanity+ Festival. Had it been organised in a physical location, it would likely have been in the US; even if it had been held in London, I couldn’t have taken 2 days off work to attend – I save my days off for my family. I was very fortunate to be able to attend it online.

I attended a couple of fascinating presentations during the 2020 event, and I will try to present them in individual posts.

I’d say that – based on the way it is often referred to as a cult – transhumanism is currently going through the first stage of Schopenhauer’s three stages of truth. The first stage is ridicule, the second stage is violent opposition, and the third stage is being accepted as self-evident. I (and the sci-fi-loving kid inside me) find many of the transhumanist concepts interesting. I don’t concern myself too much with how realistic they seem today, because I realise how many self-evident things today (Roomba; self-driving cars; pacemakers; Viagra; deepfakes) seemed completely unrealistic, audacious, and downright crazy only a couple of years / decades ago. In fact, I *love* all those crazy, audacious ideas which focus on possibilities and don’t worry too much about limitations.

Humanity+ is… what is it actually? I’d say Humanity+ is one of the big players / thought leaders in transhumanism, alongside David Wood’s London Futurists and probably many other groups I am not aware of. Humanity+ is currently led by the fascinating, charismatic, and – it just has to be said – stunning Natasha Vita-More.

The transhumanist movement is a tight-knit community (I can’t consider myself a member… I’m more of a fan) with a number of high-profile individuals: Natasha Vita-More, Max More (aka Mr. Natasha Vita-More, aka the current head of cryonic preservation company Alcor), David Wood, Jose Cordeiro, Ben Goertzel. They are all brilliant, charismatic, and colourful individuals. As a slightly non-normative individual myself, I suspect their occasionally eccentric ways can sometimes work against them in mainstream academic and business circles, but I wouldn’t have them any other way.

During the 2020 event Max More talked about UBI (Universal Basic Income). I quite like the idea of UBI, but I appreciate there are complexities and nuances to it, and I’m probably not aware of many of them. Max More has definitely given it some thought: he presented some really interesting ideas and posed many difficult questions. For starters, I liked the reframing of UBI as a “negative income tax” – the very term “UBI” sends so many thought leaders and politicians (from more than one side of the political spectrum) into panic mode, but “negative income tax” sounds just about as capitalist and neoliberal as it gets. More amused the audience with the realisation (which, I believe, was technically correct) that Donald Trump’s USD 1,200 cheques for all Americans were in fact UBI (who would have thought that of all people it would be Donald Trump who implemented UBI on a national scale first…? Btw, it could be argued that with their furlough support Boris Johnson and Rishi Sunak did something very similar in the UK – though those payments were not for everyone, only for those who couldn’t work due to lockdown, so it was more like a Guaranteed Minimum Income).
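For readers unfamiliar with the mechanics, the whole reframing fits in a few lines. The sketch below is mine, not More’s, and the threshold and rate are purely illustrative assumptions:

```python
# Minimal sketch of a negative income tax (NIT) – the "capitalist-sounding"
# reframing of UBI. Threshold and rate are illustrative assumptions.

def net_income(gross: float, threshold: float = 20_000, rate: float = 0.3) -> float:
    """Below the threshold the 'tax' turns negative, i.e. becomes a top-up."""
    return gross - rate * (gross - threshold)

for gross in (0, 10_000, 20_000, 40_000):
    print(f"gross {gross:6,} -> net {net_income(gross):7,.0f}")
# gross      0 -> net   6,000   (pure transfer: the UBI-like floor)
# gross 10,000 -> net  13,000   (partial top-up)
# gross 20,000 -> net  20,000   (break-even point)
# gross 40,000 -> net  34,000   (ordinary positive tax)
```

The elegance of the scheme is that a single formula covers both the benefit and the tax, with no cliff-edge at which benefits are suddenly withdrawn.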

The questions raised by More were really thought-provoking:

  • Will / should UBI be funded by taxing AI?
  • Should it be payable to immigrants? Legal and illegal?
  • Should UBI be paid per individual or per household?
  • What about people with medical conditions requiring extra care? They would require UBI add-ons, which undermines the whole concept.
  • Should people living in metropolitan cities like London be paid the same amount as people living in the (cheaper) countryside?
  • How should runaway inflation be prevented?

Lastly, More suggested some alternatives to UBI which (in his view) could work better. He proposed the idea of a universal endowment (a sort of universal inheritance, but without an actual wealthy relative dying) for everyone. It wouldn’t be a cash lump sum (which so many people – myself included – would probably spend very quickly and not too wisely), but a more complex structure: bankruptcy-protected stock ownership. The idea is very interesting – wealthy people (and even not-so-wealthy people) don’t necessarily leave cash to their descendants: physical assets aside (real estate etc.), leaving shares, bonds, and other financial assets in one’s will is relatively common. Basically, the wealthier the benefactor, the more diverse the portfolio of assets they’d leave behind. The concept of bankruptcy-protected assets is not new – it exists in modern law (e.g. US Chapter 13 bankruptcy allows the debtor to keep their property) – but it sounded to me like More meant it in a different way. If More meant his endowment as a market-linked financial portfolio whose value cannot go down – well, this can technically be done (long equity + long put options on the entire portfolio) – but only to a point. Firstly, it would be challenging to do on a mass scale (the supply of the required amount of put options may or may not be a problem, but their prices would likely go up so much across the board that it would have a substantial impact on the value and profitability of the entire portfolio). Secondly, one cannot have a portfolio whose value can truly only go up – it wouldn’t necessarily be the proverbial free lunch, but it would definitely be a free starter. Put options have expiry dates (all options do), and their maturities are usually months, not years. Expiring options can be replaced (rolled) with longer-dated ones, but this comes at a cost. Perpetual downside protection of a portfolio with put options could erode its value over time (especially in adverse market conditions, i.e. when underlying asset values are not going up).
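To put a rough number on that erosion, here is a back-of-the-envelope sketch using the textbook Black-Scholes formula; the volatility, rate, and roll frequency are my illustrative assumptions, not anything from More’s talk:

```python
# Rough sketch of why perpetual put protection erodes a portfolio:
# textbook Black-Scholes pricing of the protective puts being rolled.
# All market parameters below are illustrative assumptions.
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def bs_put(spot: float, strike: float, vol: float, rate: float, t: float) -> float:
    """Black-Scholes price of a European put option."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return strike * exp(-rate * t) * N(-d2) - spot * N(-d1)

# Protecting a 100-unit portfolio with at-the-money 6-month puts,
# rolled twice a year, at 20% volatility and 1% rates:
put = bs_put(spot=100, strike=100, vol=0.20, rate=0.01, t=0.5)
print(f"6-month ATM put: {put:.2f} -> ~{2 * put:.1f}% annual drag")
# 6-month ATM put: 5.38 -> ~10.8% annual drag
```

A premium drag of that order, on a portfolio expected to return mid-single digits a year, is exactly why a truly can’t-go-down endowment would be a free starter rather than a free lunch.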

If More had something even more innovative in mind, then it could require rewriting some of the financial markets rulebook (why would anyone invest the old-fashioned way, without bankruptcy protection, when everyone would have their bankruptcy-protected endowments?). I’m not saying it’s never going to happen – in fact I like the idea a lot (and I realise how different my life could have been from the material perspective had I received such an endowment when I was entering adulthood) – I’m just pointing out practical considerations to address.

And one last thing: speaking from personal experience, I’d say that this endowment *definitely* shouldn’t be paid out in full upon reaching the age of 18 (at least not for guys… I was a total liability at that age; I’d have squandered any money in a heartbeat); nor at 21. *Maybe* at 25, but frankly, I think a staggered release from the mid-20’s to the mid-30’s would work best.


Artificial intelligence concept image

#articlesworthreading: Tyagi et al., “A Randomized Trial Directly Comparing Ventral Capsule and Anteromedial Subthalamic Nucleus Stimulation in Obsessive-Compulsive Disorder: Clinical and Imaging Evidence for Dissociable Effects”

Tue 01-Sep-2020

“Biological Psychiatry” is a journal I’m not very likely to cross paths with (I’m more of an “Expert Systems with Applications” kind of guy). The only reason I came across the article and the incredible work it describes is that my PhD supervisor, the esteemed Prof. Barbara Sahakian, was one of the contributors and co-authors.

Even if somebody had pointed me directly to the article, I wouldn’t have been able to appreciate the entirety of the underlying work – first and foremost due to my lack of any medical/neuroscientific subject matter expertise, but also because this incredible work is described so modestly and matter-of-factly that one may easily miss its significance, which is quite enormous. A team of world-class (and I mean: WORLD-CLASS) scientists planted electrodes in the brains of six patients suffering from severe (and I mean: SEVERE; debilitating) obsessive-compulsive disorder (OCD). The results were nothing short of astonishing…

The experiment examined the impact of deep brain stimulation (DBS) on patients with treatment-resistant cases of OCD. I have mild OCD myself, but it’s really mild – it’s just being slightly over the top with to-do lists and turning off the stove (the latter not being much of a concern given that I leave my place roughly once a fortnight these days). I tend to think of it less as an issue and more as an integral part of my personality. It did bother me more when I was in my early 20’s, and I briefly took some medication to alleviate it. The meds took care of my OCD – and of everything else as well. I became a carefree vegetable (not to mention some deeply unwelcome side-effects unique to the male kind). Soon afterwards I concluded that my OCD is not so bad, all things considered. However mild my own OCD is, I can empathise with people experiencing it in much more severe forms, and the six patients who participated in the study had been experiencing debilitating, super-severe (and, frankly, heartbreaking) cases of treatment-resistant OCD.

DBS was a new term to me, but conceptually it sounded vaguely similar to BCI (brain / computer interface) and even more similar to TDCS (trans-cranial direct current stimulation). Wikipedia explains that DBS “is a neurosurgical procedure involving the placement of a medical device called a neurostimulator (sometimes referred to as a ‘brain pacemaker’), which sends electrical impulses, through implanted electrodes, to specific targets in the brain (brain nuclei) for the treatment of movement disorders, including Parkinson’s disease, essential tremor, and dystonia. While its underlying principles and mechanisms are not fully understood, DBS directly changes brain activity in a controlled manner”. That sounds pretty amazing as it is, though the researchers in this particular instance were using DBS for a non-movement disorder (the definition from the world-famous Mayo Clinic is broader and does mention DBS being used for the treatment of OCD).

Some (many) of the medical technicalities of the experiment were above my intellectual paygrade, but I understood just enough to appreciate its significance. Six patients underwent surgery under general anaesthesia (in simple terms: they had holes burred in their skulls), during which electrodes were implanted into 2 target areas in order to verify how each area would respond to stimulation in a double-blind setting. What mattered to me was whether *either* stimulation would lead to improvement – and they did; boy, did they improve…! The Y-BOCS scores (which are used to measure and define the clinical severity of OCD in adults) plummeted like restaurants’ income during the COVID-19 lockdown: with the combination of optimum stimulation settings + Cognitive Behavioural Therapy (CBT), the average reduction in Y-BOCS was an astonishing 73.8%; one person’s score went down by 95%, another one’s by 100%. The technical and modest language of the article doesn’t include a comment allegedly made by one of the patients post-surgery – “it was like a flip of a switch” – but that’s what the results are saying for those two patients (the remaining four ranged between 38% and 82% reduction).

There is something to be said about the article itself. First of all, this epic, multi-researcher, multi-patient, exceptionally complex and sensitive study is captured in full on 8 pages. I can barely say “hello” in under 10 pages… The modest and somewhat anticlimactic language of the article is understandable (this is how formulaic, rigidly structured academic writing works, whether I like it or not [I don’t!]), but at the same time it does not give sufficient credit to the significance of the results. Quite often I come across something in an academic (or even borderline popular science) journal that should be headline news on the BBC or Sky News, and yet it isn’t (sidebar: there was one time, literally one time in my adult life, that I can recall a science story being front-page news – the discovery of the Higgs boson around 2012). Professor Steve Fuller really triggered me with his presentation at the TransVision 2019 festival (“trans” as in “transhumanism”, not gender) when he mentioned that everyone in academia is busy writing articles and hardly anyone is actually reading them. I wonder how many people who should know of the Tyagi study (fellow researchers, grant approvers, donors, pharma and life sciences corporations, medical authorities, OCD patients etc.) actually do. I also wonder how many connections between seemingly unrelated research are waiting to be uncovered, and how many brilliant theories and discoveries have been published once in some obscure journal (or not published at all) and have basically faded into scientific oblivion. I’m not saying this is the fate of this particular article (“Biological Psychiatry” is a highly esteemed journal with an impact factor placing it around the top 10 of psychiatry and neuroscience publications, and it’s been around since the late 1950’s), but still: this research gives hope to so many OCD sufferers (and potentially also depression sufferers and addicts, as per the Mayo Clinic – literally millions of people) that it should have been headline news on the BBC. But it wasn’t…


Candle stick graph

Utilisation of AI / Machine Learning in investment management: views from CFA Institute, FCA / BoE, and Cambridge Judge Business School

Mon 31-Aug-2020

I spent the better part of the past 18 months researching Machine Learning in equity investment decision-making for my PhD. During that time two high-profile industry surveys and one not-so-high-profile survey were published (by the FCA / BoE, the CFA Institute, and Cambridge Judge Business School respectively). They provided valuable insight into the degree of adoption / utilisation of Artificial Intelligence in general, and Machine Learning in particular, in the investment management industry.

Below you will find a brief summary of their findings as well as some critique and discussion of individual surveys.

My research into ML in the investment management industry delivered some non-obvious conclusions:

  • The *actual* level of ML utilisation in the industry is (as of mid-2020) low (if not very low).
  • There are some areas where ML is uncontroversial and essentially a win/win for everyone – chief among them anti-money laundering (AML), which I have discussed a number of times in meetups and workshops like this one [link]. Other areas include chatbots, sales / CRM support systems, legal document analysis software, and advanced cybersecurity.
  • There are some areas where using ML could do more harm than good: recruitment or personalised pricing (the latter arguably not being very relevant in investment management).
  • There is curiosity, openness, and appreciation of AI in the industry. Practicalities such as operational and strategic inertia on one hand and regulatory concerns on the other stand in the way. This is not particularly surprising, and my attitude towards it is stoical. Investment management was once referred to as “glacial” in its adoption of new technologies – I think the industry has made huge progress in the past decade or so. I think that AI / ML adoption will accelerate, much like cloud adoption did in recent years.
  • COVID-19 may (oddly) accelerate the adoption of ML, driven by competitive pressure, thinning margins (which started years before COVID-19), and an overall push towards operational (and thus financial) efficiencies.

I was confident about my findings and conclusions, but I welcomed the three industry publications, which between them surveyed hundreds of investment managers. These reports were in a position to corroborate (or disprove) my conclusions from a more statistically significant perspective.

So… Was I right or was I wrong?

The joint FCA / BoE survey (conducted in Apr-2019, with the summary report[1] published in Oct-2019) covered the entirety of the UK financial services industry, including but not limited to investment management. It was the first (chronologically) comprehensive publication, and it concluded that:

  • The investment management industry, as a subsector of the financial services industry, has generally low adoption of AI compared to, for example, banking;
  • The predominant uses of AI in investment management are in areas outside of investment decision-making (e.g. AML). Consequently, many investment management firms may say “we use AI in our organisation” and be entirely truthful in saying so; what the market and the general public infer from such general statements may be much wider and more sophisticated applications of the technology than they really are.

The CFA Institute survey was conducted around April and May 2019 and published[2] in Sep-2019. It was more investment-management centric than the FCA / BoE publication. Its introduction states unambiguously: “We found that relatively few investment professionals are currently exploiting AI and big data applications in their investment processes”.

I consider one of its statistics particularly relevant: of the 230 respondents who answered the question “Which of these [techniques] have you used in the past 12 months for investment strategy and process?”, only 10% chose “AI / ML to find nonlinear relationship or estimate”. I believe that even this low 10% figure is overstated: the respondents were a self-selected group, more likely to employ AI / ML in their investment functions than those who decided not to complete the survey.

Please note that even when a respondent confirms that their firm uses AI / ML in investment decision-making (or even the broader investment process), it doesn’t mean that *all* of the firm’s AUM is subject to this process – only some fraction of it is. My educated presumption is that this fraction is likely to be low.
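A quick back-of-the-envelope illustration (all numbers below are mine and purely hypothetical, not from either survey):

```python
# Hypothetical: even a 10% firm-level adoption figure can translate into
# a very small ML-driven share of total industry AUM.
firms_using_ml = 0.10  # fraction of firms reporting AI/ML use in investing
aum_fraction = 0.10    # assumed fraction of each such firm's AUM affected
print(f"ML-driven share of industry AUM: {firms_using_ml * aum_fraction:.0%}")
# ML-driven share of industry AUM: 1%
```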

Please also note that both the FCA / BoE and CFA Institute reports relied on *self-selected* groups of respondents. The former is based on responses from 106 firms out of the 287 the survey was sent to. In the CFA Institute report, 230 respondents answered the particular question of interest to me, out of 734 total respondents.

The Cambridge Judge Business School survey report[3] (published in Jan-2020) strongly disagrees with the two reports above. It concludes that “AI is widely adopted in the Investment Management sector, where it is becoming a fundamental driver for revenue generation”. It also states that “59% of all surveyed investment managers are currently using AI in their investment process [out of which] portfolio risk management is currently the most active area of AI implementation at an adoption rate of 61%, followed by portfolio structuring (58%) and asset price forecasting (55%)”. I believe that the Cambridge results are driven by the fact that the survey combined FinTech startups and incumbents, without revealing the % weight of each in the investment management category. In my experience within the investment management industry, the quotes above make sense only in a sample dominated by FinTechs (particularly the first statement, which I strongly disagree with on the basis of my professional experience and industry observations). I consider lumping FinTechs’ and incumbents’ results into one survey unfortunate, given the extreme differences between the two types of organisation.
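Since the report doesn’t reveal the sample mix, here is a toy calculation showing just how much that mix matters; the adoption rates below are purely hypothetical assumptions of mine, not the survey’s figures:

```python
# Toy illustration: when FinTechs and incumbents differ this much, a pooled
# "headline" adoption rate is dominated by the sample mix (hypothetical rates).
def blended_adoption(fintech_share: float,
                     fintech_rate: float = 0.9,
                     incumbent_rate: float = 0.1) -> float:
    return fintech_share * fintech_rate + (1 - fintech_share) * incumbent_rate

for share in (0.2, 0.5, 0.8):
    print(f"{share:.0%} FinTechs -> {blended_adoption(share):.0%} headline adoption")
# 20% FinTechs -> 26% headline adoption
# 50% FinTechs -> 50% headline adoption
# 80% FinTechs -> 74% headline adoption
```

The same underlying reality can produce almost any headline figure, which is why not disclosing the weights is such a problem.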

That Cambridge Judge Business School publishes a report containing odd findings does not strike me as particularly surprising. It is, frankly, not uncommon for academics to get so detached from the underlying industry that their conclusions stand at odds with observable reality. However, the CJBS report was co-authored by Invesco and EY, which I find quite baffling. Invesco is a brand-name investment management firm with USD 1+ Tn in AUM, which puts it in the Tier 1 / “superjumbo” category size-wise. I am not aware of it being at the forefront of cutting-edge technologies, but as is often the case with US-centric firms, I may simply lack sufficiently detailed insight; Invesco’s AUM seems sufficient to support active research into (and implementation of) AI. One way or another, Invesco should know better than to sign off on a report with questionable conclusions. EY is very much at the forefront of cutting-edge technologies (I know that from personal experience), so for them to sign off on the report is even more baffling.

Frankly, the Cambridge Judge report fails to impress and fails to convince (me). My academic research and industry experience (including extensive networking) are fully in line with the findings of the FCA / BoE and CFA Institute reports.

The fact that AI adoption in investment management stands at a much more modest level than the hype would have us believe may be slightly disappointing, but it isn’t that surprising. It just goes to show that AI, as a powerful, disruptive technology, is being adopted with caution – which isn’t a bad thing. There are questions regarding the regulation applicable to AI which need to be addressed. Lastly, business strategies take time (particularly for larger investment managers), and at times the technology develops faster than the business can keep up. Based on my experience and observations of cloud adoption (and the lessons seemingly learned by the industry), I am (uncharacteristically) optimistic.

[1] https://www.fca.org.uk/publication/research/research-note-on-machine-learning-in-uk-financial-services.pdf

[2] https://www.cfainstitute.org/-/media/documents/survey/AI-Pioneers-in-Investment-Management.ashx

[3] https://www.jbs.cam.ac.uk/wp-content/uploads/2020/08/2020-ccaf-ai-in-financial-services-survey.pdf


Network connection in shape of running man

London Futurists: Adventures at the Frontier of Birth, Food, Sex & Death

Mon 13-Jul-2020

On Monday 13th of July 2020 I attended (virtually, of course) one of my favourite meetup series, David Wood’s London Futurists. The event was mostly focused on the key themes of Jenny Kleeman’s recent book (“Sex Robots & Vegan Meat: Adventures at the Frontier of Birth, Food, Sex and Death”), with Kleeman herself featured as the keynote (but not the only) speaker.

Frankly, I think that addressing the future of all-that-really-counts-in-life in one book and one event is a somewhat over-ambitious task. Fortunately, the speakers didn’t attempt to present a comprehensive vision of mankind’s future according to themselves. Instead, they presented a series of concepts which are either in existence at present, or are extrapolated into the near future. This way each attendee could piece together their own vision of the future, which made for a much more thought-provoking and interesting experience than having a uniform vision laid out.

I paid limited attention to the part concerning vegan meat / meat alternatives / meat substitutes. I have a fairly complicated relationship with animal products as it is, a relationship that can be perfectly summed up in Ovid’s famous phrase “video meliora proboque, deteriora sequor” (“I see better things and approve of them, but I follow the worse”). I *would like to* be vegan, but, frankly, I experience limited guilt eating chicken or fish. I eat pork in the form of ham on my sandwiches and in my scrambled eggs, so not that much. Milk and eggs are a challenge, as they are added as ingredients to *everything* (chief among them vegetable soup, milk chocolate [obvs], and cakes). I can live with a guilty conscience, but I cannot live without vegetable soup, milk chocolate, or cake. I believe that the (not-so-) invisible hand of the market is driving food production quite aggressively towards veganism, and when I can switch with zero-to-little flavour sacrifice (even if it means paying a premium), I will do so. In any case, I think that a vegan (or almost-vegan) future is pretty much a done deal now; seeing vegan burger options at the likes of McDonald’s and KFC dispelled any doubts I may have had.

One thing that immediately comes to mind when discussing meat-free alternatives to meat products is the acerbic hilarity brought on by the meat and dairy incumbents in the semantics department. Kleeman mentioned ongoing disputes over what exactly can correctly be referred to as “milk” or “meat”. I’ll see that and raise with the US Egg Board (not making this up!) suing vegan mayo start-up Just Mayo over… a misleading use of the term “mayo”. Feel free to Google it – it’s one of those stories where life beats fiction.

Sex robots are something I am relatively familiar with (in theory only! – I have attended one or two presentations on the topic). On this one I am slightly unconvinced, to be honest… I’m all for sex toys, but I think that when it comes to the full experience I couldn’t delude myself to the point of being able to… “engage” with a doll or a robot. I think that for those who do not have an intimate partner, porn, camming, or sex workers are all much better options. That being said, technology might leap, and my views might change as I get older… One observation made by the speakers really struck a chord with me: companion robots can get “weaponised” by commercial or political interests, influencing (effectively exploiting) their emotionally attached owners’ choices: from their pick of a shampoo brand to presidential elections. I can picture that happening in the foreseeable future and upending society. It will also be a legal minefield…

Anecdotes from the frontlines of birth and death definitely rattled me the most. Apparently mankind is only a short time away from fully-functional external wombs, which have the potential to profoundly shake the foundations of society.
A woman’s choice may no longer be between pregnancy and abortion. An unwanted healthy pregnancy could be transferred to an external womb, where it could gestate until birth. The ethical, moral, and legal implications of such an option are staggering (in some countries this would be a choice; in others, where abortion is fully or largely prohibited, this could be forced upon pregnant women who want to terminate their pregnancies). I think about the abortion protests taking place all over Poland as we speak (and in countless cities with Polish diasporas outside Poland), I think about what far-right politicians are willing to do to women’s rights and bodies… and it’s chilling.
This doesn’t just concern unwanted pregnancies: what about expectant mothers with poor lifestyles (e.g. smokers)? Could they face forced extraction of the foetus into a (healthier) artificial womb?
Lastly, this may become a lifestyle choice for women who do not want to carry their pregnancies. It is conceivable that standard pregnancy may at some point become a symbol of low status.

Finally, there’s death; still an inevitability (though, according to transhumanists, not for much longer). Frankly, I expected the wildest visions to concern sex, but I was badly mistaken: nothing beats death (so to speak). The speakers presented a logical, coherent (and horrifying) prospect of death as an economic choice in modern, rapidly ageing societies. A terminally ill person could be presented by the state with a choice: live out their life at a huge cost to the state, or opt for immediate, painless euthanasia, whereby a percentage of the saved amount (e.g. 25%) would be bequeathed to the person’s family. A similar choice could be offered to people serving long prison sentences.

State-sponsored euthanasia may seem too dystopian (Logan’s Run-style), but what about taking one’s death into one’s own hands (particularly in light of severe physical or mental health issues)? Suicide has been a part of mankind’s experience forever, and yet it remains highly stigmatised in many cultures and religions to this day. Euthanasia remains even more controversial, and is allowed only in a handful of countries (the Netherlands, Belgium, Luxembourg, Canada, Australia, and Colombia, if I’m not mistaken). Every now and then a person who wishes to die but is unable to do so due to severe incapacitation makes headlines with their court case or their attempts to take their own life. In the near future, innovative solutions like the Sarco euthanasia device may allow more people to end their lives when they so choose, in an (allegedly) painless way, thus effectively “democratising” suicide / euthanasia.

The visions presented at the London Futurists’ event range from lifestyle-altering (vegan alternatives to meat and dairy; sex robots) to profound (companion robots; artificial wombs; euthanasia). Some of them are already here, some are not. We can’t know for sure which ones will be mass-adopted and which will be rejected by society – we will only know in time. It is also more than likely that there will be inventions at the frontier of birth, food, sex, and death that even futurists can’t foresee. The best we can do is remain open-minded (perhaps cautiously hopeful) regarding the future and take little for granted.


DEVS image

“Devs” series review

Wed 06-May-2020

I don’t watch much TV (I’m using the term loosely and inclusively, i.e. including the Netflixes and Amazon Prime Videos of this world), and when I do, I’m usually so tired that it’s either Family Guy or an umpteenth rerun of one of my go-to masterpieces: Aliens, The Matrix, The Matrix Reloaded, Alien: Resurrection, Resident Evil, Terminator 2, Total Recall, Contagion, Predator, Basic Instinct, Batman, Batman Returns, The Dark Knight Rises – I think you get the idea. I try to make an effort to watch more ambitious fare, but the truth is that with full-time work, studies, FOMO, and casual depression there isn’t always enough glucose in the brain to concentrate on something new and challenging. When there is, there is sometimes a great prize, a grand prix (most recently: Lee Chang-dong’s Burning… Jesus Marimba, what a movie…!). Last weekend, spurred by word of mouth, I watched Alex Garland’s (of Ex Machina and Annihilation fame) latest creation, Devs.

While Devs’ production budget is impossible to come by on the Internet, I’m guessing low- to mid-eight figures (my working guess is the USD 20m – 30m range; I’d be heavily surprised if it were a single penny over USD 50m). The show isn’t as visually blockbusting as Westworld (and let’s be clear: the two will forever be mentioned and compared in one breath, for a number of reasons: artistic ambitions, philosophical under- (or over-)tones, auteur aesthetic, focus on cutting-edge technologies, airing at the exact same time [Devs vs. Westworld season 3] – and lastly, a painful lack of any humour or irony, as both shows take themselves oh so very seriously).

Entertainment – quality entertainment – often reveals a lot about the collective mindset at the time of its release (aka the zeitgeist). There are, of course, exceptions: masterpieces such as 2001: A Space Odyssey or Solaris (Tarkovsky’s, not Soderbergh’s!) were not responses to any widespread 1960’s fear of sentient AI or 1970’s fascination with strange alien worlds, respectively; they were visionary works either untethered from the everyday experience, or decades ahead of their time. In recent years we could see transhumanism become mainstream in (wildly uneven) productions such as Transcendence (can you spot the cameo of Wired magazine in one of the scenes? It’s the best acting performance of the entire movie), Limitless (which made 6x its budget), or Lucy (which made 11x its budget) – not to mention genre-defining classics such as Ghost in the Shell (the 1995 one! Not the Scarlett Johansson trainwreck) or the Matrix trilogy. Transhumanism has enormous cinematic potential, which has been nowhere near fully utilised; it has, however, had some less-than-ideal timing, because we still haven’t developed anything like Lucy’s CPH4 or Limitless’ NZT-48 on the nootropics side, nor the brain-computer interfaces (BCI’s) from the Matrix, the cortical stacks from Altered Carbon, or the cyberbrains and synthetic “shells” from Ghost in the Shell. With transhumanism still awaiting its breakthroughs, AI provided more than enough to fully capture mass imagination in recent years (Ex Machina, Westworld, Her, Humans). Devs goes a step further: into the fascinating world of quantum computing.

It’s a challenge to clearly define what genre Devs actually belongs to. It’s not science fiction nor a technothriller, because the technology – while absolutely central to the story – is not the story itself (unlike, for example, HAL in 2001). It’s not a murder mystery nor a crime thriller, because we see the murder in the first 15 minutes of the first episode, and we see who killed – alas, we don’t know why. The entire storyline of the series is about understanding this seemingly absurd, unnerving “why”. The series is closest to a philosophical and existential drama. It doesn’t feature red herrings or parallel timelines, and all characters, props, and plot devices appear for a reason, which makes for a refreshing break from the oft-overengineered narratives of modern-day dramas. It is as close to minimalist as possible for an FX production. It is also highly stylised, beautiful, and high on artistic and technical merit.

Cinematography in Devs is an absolute delight; a technical and artistic tour de force (if Devs doesn’t get a clean sweep of the technical and art direction Emmys and Golden Globes, that’s it, I’m giving up on mankind). The cinematography is crisp and dreamlike at the same time; it has an incredible hyperreal feel. It is also rife with references – and as is often the case with the not-so-overt ones, one wonders “is this an actual reference, or am I just inventing one?”. Night-time aerial sequences of LA seem like a clear reference to the (original) Blade Runner’s visionary imagery, while daytime sequences share a degree of sun saturation with Dredd (an underrated masterpiece Alex Garland wrote and executive-produced). The Devs unit design is an homage to Vincenzo Natali’s Cube – there’s no way it’s accidental – while its brighter-than-the-sun lighting evokes 2007’s Sunshine (written by Garland). The technical quality of the cinematography shines brightest in razor-sharp, vivid, saturated night shots – which makes one contemplate the incredible technological leap of the past 2 decades; just watch the night shots of a big-budget Hollywood production from the late 90’s (off the top of my head: Interview with the Vampire) to see how far the technology has progressed.

The music is at times perhaps too arthouse, but it is clearly part of the artistic vision – a standout character, not a backdrop – and most of the time it works oh so well. Once again, one can’t help but draw comparisons with Vangelis’ groundbreaking Blade Runner score, or the equally groundbreaking selection of music for 2001. On the intelligent ambient electronica side, Cristobal Tapia de Veer’s arresting score for Channel 4’s underrated masterpiece Utopia or Cliff Martinez’s score for Steven Soderbergh’s Contagion come to mind. Westworld’s soundtrack by Ramin Djawadi is nothing to sneeze at (especially the beautifully bittersweet Sweetwater), but it’s clearly incidental music. Devs seems closer to Brian Reitzell’s avant-garde experimentations on Hannibal or the subliminal ambient background of David Lynch’s 2017 Twin Peaks revival (sound-designed by David Lynch himself).

Quantum computer

Art direction is interesting, though not on par with the music or cinematography. The LA apartments of the protagonists are pretty basic (although we get the not-so-subtle message: “even these basic apartments are nowadays affordable for the technocratic elite only”). The Devs “floating cube” is, by contrast, intentionally (?) over-the-top, while the Amaya statue towering over the campus is quite creepy, likely meant as a warning against the inability to let go. The way New York City was the fifth character on Sex and the City, in Devs it’s the quantum computer – which is either a carbon copy of the IBM Q System One (I’ve seen it up close – a mock-up, of course, not the real, super-cooled thing), or it is actually the exact mock-up I saw (the one in Devs has some moving parts though… I’m not sure the IBM / D-Wave / Rigetti systems have any of those…). To a regular viewer it may look completely and utterly over the top, like an elongated baroque candelabra put on the floor – the weird thing is, this is exactly what a modern-day quantum computer looks like; there’s no OTT here – that’s scientific accuracy.

It doesn’t often happen that all elements click into place to create perfection (examples in recent years: American Horror Story season 1; Hannibal; Killing Eve; Fleabag; Twin Peaks: The Return), and unfortunately Devs isn’t quite so lucky. All its artistic and philosophical heft is undermined – painfully! – by terribly miscast leads. Some TV shows (again, using the term loosely) are cast pitch-perfect; the cast makes these shows (cases in point: Killing Eve; Hannibal; seasons 1, 2, and 5 of American Horror Story; Pose; the characters of Dolores and Maeve in Westworld; Olive Kitteridge; Transparent). Given the budgets that go into modern series production, and the enormous stakes (as shows become flagships for their platforms / networks and fight to compete in an increasingly saturated marketplace), casting can make or break a show. The lead cast of Devs doesn’t quite break the show, but it’s not for lack of trying.

Sonoya Mizuno as Lily is. a. disaster. She’s got an intriguing, androgynous appeal, but – looking at a broad and subjective spectrum of similarly intriguing actresses – she doesn’t have the sex appeal of Tao Okamoto or Ève Salvail, the talent of Rinko Kikuchi or Saoirse Ronan, nor the charisma of Carrie-Anne Moss or Chiaki Kuriyama. We know that Alex Garland can direct his actors well (after all, Ex Machina propelled Alicia Vikander to Hollywood’s A-list; she got an Oscar a year later for The Danish Girl), so the fault likely lies with Mizuno. She carries Lily with the depth of an emo teenager; her acting is flat, her emotional range limited. I felt like crying during her crying scene, but not with tears of empathy – it was a cringe-cry. Nick Offerman’s Forest takes the blank-stare dead eyes so far over the top that it’s at times almost comical. Karl Glusman – an otherwise talented actor – does a Russian accent… poorly – and that’s his only contribution. Alison Pill also goes over the top with the “brooding quantum physics genius”, doubling down on the blank stare and mistaking flatness for depth – she simply isn’t convincing as a scientist. Relative newcomer Jin Ha as Jamie is the only casting bet that pays off (and just to be clear: one doesn’t need a cast of Hollywood A-listers to make a good series – just look at Succession). His character doesn’t get that much screen time, but still comes across as fully fleshed out. He’s also refreshingly sensitive and vulnerable, qualities we don’t yet get to see in male protagonists very often. He’s nowhere near hypermasculine – he’s actually really slim-built, even slightly androgynous (androgyny is a major theme in Devs: Lily, Jamie, Lyndon – it almost seems like a message: “the intellectual / technocratic elite is above the archaic gender binary”). Zach Grenier’s is the only other half-decent performance, with redeeming flashes of irony. No Emmys here. No Golden Globes.

Devs is living proof that a powerful vision is enough to create compelling, fascinating, binge-worthy TV. It does so without a plethora of international locations (I’m looking at you, Westworld!), an abundance of CGI (I’m still looking at you, Westworld), or elaborate purpose-built sets (my eyes are still on you, Westworld). Devs has precisely one blockbusting scene (the motorway sequence with Lily and Kenton). By the very virtue of its location it evokes Bullitt; otherwise, it is reminiscent of the motorway scene in The Matrix Reloaded (obviously in a much more demure fashion – the Matrix Reloaded sequence was a stroke of cinematic genius, down to the Juno Reactor track).

As far as philosophy and technology are concerned… well, the latter is approached loosely… really loosely. It’s more of a mere notion of quantum computing than any actual science – kind of the vaguest and loosest level of technological reference possible – but, judging by the reactions, it was enough to strike a chord. The reason may be twofold: the artistic and entertainment merits of Devs, and perfect timing, as “quantum” becomes the hot new buzzword. The philosophy is more of a meditation and exploration – there are few (if any) answers or directions. This may, in fact, be one of the major strengths of Devs: it ponders and asks questions without fortune-cookie-quality one-liner bits of wisdom (“There is ugliness in this world. Disarray. I choose to see beauty” – look me in the eye, Westworld…!). Westworld takes on sentience and consciousness; Devs takes on free will and determinism. It’s interesting (and impressive) how gently Devs manages to manoeuvre around the topic of religion, coming across as agnostic rather than explicitly atheistic.

Devs is a show one can describe as a “small masterpiece”. It’s slow-burning and low-key, with stunning aesthetics though no blockbusting qualities proper. The characters are not at all overtly sexy or sexual (in fact they come across as somewhat asexual, even though we know they do, actually, have sex and enjoy it). There are no heists, explosions, or enormous amounts of money (money is approached a bit like sex in Devs – we know it exists, and in great amounts, but it’s almost a non-factor in the plot). One could say that not very much happens in Devs altogether, and one wouldn’t necessarily be wrong. But somewhere between the bold artistic vision, philosophical questions, stunning visuals, and outstanding music there is a true work of art: ambiguous, intriguing, thought-provoking, and truly beautiful. I’ll treasure it all the more, knowing full well how unlikely I am to see anything of remotely similar calibre anytime soon.

PS. Is it just me, or does it look like the scene where Lily and Jamie meet was filmed on the non-existent ground floor of Exchange House on Primrose Street in London?


Artificial Intelligence concept

The Polish AI landscape

Wed 29-Apr-2020

The countries most synonymous with “AI powerhouse” status are without a doubt the US and China. Both have the economies of scale, the resources, and strategic (not just business) interests in being at the forefront of AI. The EU as a whole would probably come third, although there is always a degree of subjectivity in these rankings[i]. The UK would probably come next (owing to Demis Hassabis and DeepMind, as well as thriving scientific and academic communities). In any case, it’s rather unlikely that Poland would be listed in the top tier. Poland is known for being an ideal place to set up corporate back- or (less frequently) middle-office functions: much cheaper than Western Europe, with a huge pool of well-educated talent, in the same time zone as the rest of the EU. A great alternative (or complement) to setting up a campus in India, but not exactly a major player in AI research and entrepreneurship. Plus, Poland and its young democracy (dating back to 1989) are currently going through a bit of a social, identity, and political rough patch – not usually a catalyst or enabler of cutting-edge technology.

And despite all that (and despite being a mid-sized country at best… 38 million people; and despite being #70 globally in GDP per capita in 2018, out of 239 countries and territories[ii]), for some mysterious reason Poland still made it to #15 globally in AI (using the total number of companies as a metric) according to the China Academy of Information and Communications Technology (CAICT) Data Research Centre Global Artificial Intelligence Industry Data Report[iii], as kindly translated[iv] by my fellow academic Jeffrey Ding from Oxford University (whose ChinAI newsletter is brilliant – I encourage everyone to subscribe and read it). I found this news so unexpected that it became the inspiration behind this entire post.

The recent (2019) Map of the Polish AI from the Digital Poland Foundation reveals a vibrant, entrepreneurial ecosystem with a number of interesting characteristics. The official Polish AI Development Policy 2019 – 2027, released around the same time by a multidisciplinary team working across a number of government ministries, paints a picture of impressive ambitions, though experts have questioned their realism.

The Polish AI scene is very young (50% of the 160 organisations polled introduced AI-based services in 2017 or 2018, the most recent years in the survey). Warsaw (unsurprisingly) steals the top spot, with 85% of all companies located in one of the 6 major metropolitan areas. The companies tend to be small: only 22% have more than 50 people; 59% have 20 or fewer. And let’s not conflate company headcount with AI teams proper – over 50% of the companies surveyed have AI teams of 5 employees or fewer. Shortage of talent is a truly global theme in AI (one I personally don’t fully agree with – companies with the resources to offer competitive packages [sometimes affectionately referred to as “basketball player salaries”] have no shortage of candidates; whether this level of pay is justifiable [the very short-lived bonanza for iOS app developers circa 2008 comes to mind] and fair to the smaller players is a different matter). The additional challenge in Poland is that Polish salaries simply cannot compete with what is on offer within a 3 hours’ flight – many talented computer scientists are naturally tempted to move to Berlin, Paris, London, or other major European AI hubs, where there are more opportunities, more developed AI ecosystems, and much, much better money to be made.

What stands out is the ultra-close connection between the business and academic communities. While the same is the case in most countries seriously developing AI, some of those countries are home to global tech corporates whose financial resources, and thus R&D capabilities, give them the luxury of developing on their own, on a par with (if not ahead of) leading research institutions. These corporates’ resources also enable them to poach world-class talent (e.g. Google hiring John Martinis to lead their quantum computer efforts [he has since left…], Facebook appointing Yann LeCun as head of AI research, or Google Cloud poaching [albeit briefly] Fei-Fei Li as their Chief Scientist of AI/ML). In Poland this does not apply – the country does not have any large (or even mid-size) home-grown innovative tech firms. The ultra-close connection between business and academia is a logical consequence of these factors – plus, in a 38-million-strong country with relatively few major cities serving as business and academic hubs, the entire ecosystem simply can’t be very populous.

The start-up scene might in part be constrained by the limited amount of available funding (anecdotally, the angel investor / VC scene in Poland is very modest). However, the Digital Poland report states:

Categorically, as experts point out, the main barrier to the development of the Polish AI sector is not the absence of funding or expertise but rather a very limited demand for solutions based on AI.

My personal contacts echo this conclusion – they are not that worried about funding. Anecdotally, there is a huge pool of state grants (NCBiR) with limited competition for them (although post-COVID-19 they may all but evaporate).

Multiple experts cited by Digital Poland all list domestic demand as the primary concern. According to the survey, potential local clients simply do not understand the technology well enough to realise how it can benefit them (41% of responses in a multiple-choice questionnaire – the single highest cause; [client] staff not understanding AI had its own mention at 23%, and [managers] not understanding AI came in at 22%).

The AI market in Poland is focused on the more commercial products (Big Data analytics, sales, analytics) rather than cutting-edge innovative research. This is understandable – in an ecosystem of limited size with very limited local demand, the start-ups’ decision to develop more established, monetisable applications which can be sold to a broad pool of global clients is a reasonable business strategy.

One side-conclusion I found really interesting is that there’s quite a vibrant conference and meetup scene given how nascent and “unsolidified” the AI ecosystem is.

The Polish AI Policy document is an interesting complement to the Digital Poland report. While the latter is a thoroughly researched snapshot of the Polish AI market right here, right now (2019 to be exact), the policy document is more of a mission statement – a mission of impressive ambitions. I always support bold, ambitious, and audacious thinking – but experience has taught me to curb my enthusiasm as far as Polish policy-making is concerned. The grand visions for 2019 – 2027 come without even a draft of a roadmap. The document is also, unfortunately, quite pompous and vacuous at times.

The report is rightly concerned about the impact on jobs, expecting that more jobs will be created than lost, and arguing that some of this surplus should benefit Poland. One characteristic of the Polish economy is that it (still) has a substantial number of state-owned enterprises in key industries (banking, petrochemicals, insurance, mining and metallurgy, civil aviation, defence), which are among the largest in their industries on a national scale. Those companies have the size and valid business cases for AI, yet they don’t seem ready (from education and risk-appetite perspectives) to commit to it. State-level policy could provide the nudge (if not an outright push) towards AI and emerging technologies, yet, unfortunately, that is not happening.

The report rightly acknowledges the skills gap, as well as some issues on the education side (dwindling PhD rates, a (still!) relatively low level of interest in AI among Polish students, as measured by thesis subject choices). The quality of Polish universities merits its own article (its own research, in fact). On one hand, anecdotal and first-hand experience leads me to believe that Polish computer scientists are absolutely top-notch; on the other, the university rankings are… unforgiving (there are literally two Polish universities on the QS Global 500 list for 2020, at positions #338 and #349[v]).

Last but not least, a couple of Polish AI companies I like (selection entirely subjective):

  • Sigmoidal – AI business/management consultancy.
  • Edward.ai – AI-aided sales and customer relationship management (CRM) solutions.
  • Fingerprints.digital – behavioural biometrics solutions.

Disclaimer: I have no affiliations with any of the abovementioned companies.

[i] Are we looking at corporate research spending? Government funding/grants for academia? Absolute amounts or % of GDP? How reliable are the figures and how consistent are they between different states? etc. etc.

[ii] Source: World Bank (https://data.worldbank.org/indicator/NY.GDP.PCAP.CD?most_recent_value_desc=true)

[iii] You can read the original Chinese version here: http://www.caict.ac.cn/kxyj/qwfb/qwsj/201905/P020190523542892859794.pdf

[iv] Jeff’s English translation can be found here: https://docs.google.com/document/d/15WwZJayzXDvDo_g-Rp-F0cnFKJj8OwrqP00bGFFB5_E/edit#heading=h.fe4k2n4df0ef

[v] https://www.topuniversities.com/university-rankings/world-university-rankings/2020


Money laundering

Nerd Nite London – AI to the rescue! How Artificial Intelligence can help combat money laundering

Wed 15-Apr-2020

In April 2020, at the apex of the UK lockdown, I had the pleasure of being one of three presenters at an online edition of Nerd Nite London. Nerd Nite is a wildly popular global meetup series with multiple regional chapters. Each chapter is run by volunteers, and the proceeds from ticket sales (after costs) go to local charities. In this sense, lockdown did us an odd favour: normally Nerd Nites are organised in pubs, so there is a venue rental cost. This time the venues were our living rooms, so pretty much all the money went to a local foodbank.

I presented on one of the topics close to my heart (and mind!): the potential for AI to dramatically improve anti-money laundering efforts in financial organisations. You can find the complete recording below.

Enjoy!


Artificial hand holding judge scales

UCL Digital Ethics Forum: Translating Algorithm Ethics into Engineering Practice

Tue 04-Feb-2020

On Tue 04-Feb-2020 my fellow academics at UCL held a workshop on algorithmic ethics. It was organised by Emre Kazim and Adriano Koshiyama, two incandescently brilliant post-docs from UCL. The broader group is run by Prof. Philip Treleaven, who is a living legend in academic circles and an indefatigable innovator with an entrepreneurial streak.

Algorithmic ethics is a relatively new concept. It is very similar to AI ethics (a much better-known concept), with the difference that not all algorithms are AI – making algorithmic ethics the slightly broader term. Personally, I think that when most academics or practitioners say “algorithmic ethics” they really mean “ethics of complex, networked computer systems”.

The problem with algorithmic ethics doesn’t start with them being ignored. It starts with them being rather difficult to define. Ethics are a bit like art – subjective and resistant to crisp definition. Off the top of our heads we can probably think of cases of (hopefully unintentional) discrimination against job applicants on the basis of their gender (Amazon), varying loan and credit card limits offered to men and women within the same household[i] (Apple / Goldman), or online premium delivery services being more likely to be offered to white residents than black[ii] (Amazon again). And then there’s the racist soap dispenser[iii] (unattributed).

These examples – deliberately broad, unfortunate and absurd in equal measure – show how easy it is to “weaponise” technology without any explicit intention of doing so (I assume that none of the entities above intentionally designed their algorithms to discriminate). Most (if not all) of the algorithms above were AIs which trained themselves on vast training datasets, or optimised a business problem without sufficient checks and balances in the system.

With all of the above, most of us will just know that they were unethical. But if we were to go from an intuitive to a more explicit understanding of algorithmic ethics, what would it encompass exactly? Rather than try to reinvent ethics, I will revert to trusted sources: one of them is the Alan Turing Institute’s “understanding artificial intelligence ethics and safety”[iv], and the other is a 2019 paper, “artificial intelligence: the global landscape of ethics guidelines”[v], co-authored by Dr. Marcello Ienca of ETH Zurich, whom I had the pleasure of meeting in person at the Kinds of Intelligence conference in Cambridge in 2019. The latter is a meta-analysis of 84 AI ethics guidelines published by various governmental, academic, think-tank, and private entities. My pick of the big-ticket items would be:

  • Equality and fairness (absence of bias and discrimination)
  • Accountability
  • Transparency and explicability
  • Benevolence and safety (safety of operation and of outcomes)

There is an obvious fifth – privacy – but I have slightly mixed feelings about throwing it in the mix with the abovementioned considerations. It’s not that privacy doesn’t matter (it matters greatly), but it’s not as unique to AI as the other four. Privacy is a universal right and consideration, and doesn’t (in my view) map onto AI as directly as, for example, fairness and transparency.

Depending on the context and application, the above will apply in different proportions. Fairness will be critical in employment, provision of credit, or criminal justice, but I won’t really care about it inside a self-driving car (or a self-piloting plane – they’re coming!); there I will care mostly about my safety. Privacy will be critical in the medical domain, but it will not apply to trading algorithms in finance.

The list above contains (mostly humanistic) concepts and values. The real challenge (in my view) is two-fold:

  1. Defining them in a more analytical way.
  2. Subsequently “operationalising” them into real-world applications (both in public and private sectors).

The first speaker of the day, Dr. Luca Oneto from the University of Genoa, spoke mostly to point #1 above. He talked about his team’s work on formulating fairness in a quantitative manner (basically “an equation for fairness”). While the formula was mathematically a bit above my paygrade, the idea itself was very clear, and I was sold on it instantly. If fairness can be calculated, with all (or as much as possible) ambiguity removed from the process, then the result will be not only objective, but also comparable across different applications. At the same time, it didn’t take long for some doubts to set in (although I’m not sure to what extent they were original – they were heavily inspired by some of the points raised by Prof. Kate Crawford in her Royal Society lecture, which I covered here). In essence, measuring fairness seems doable when we can clearly define what constitutes a fair outcome – which, in many real-life cases, we cannot (I sketch one common metric after the two examples below). Let’s take two examples close to my heart: fairness in recruitment and the Oscars.

With my first degree being from a not-so-highly-ranked university, I know for a fact that I have been auto-rejected by several employers – so (un)fairness in recruitment is something I feel strongly about. But let’s assume the rank of one’s university is a decent proxy for their skills, and focus on gender representation instead. What *should be* the fair representation of women in typically male-dominated environments such as finance or tech? It is well documented that women drop out of STEM careers at a high rate – around 40% of them – and it is widely debated why[vi][vii]. The explanations range from the “hegemonic and masculine culture of engineering” to the challenges of combining work and childcare disproportionately affecting new mothers. What, then, would be the fair outcome in tech recruitment? A representation of women in line with the present-day average? A mandatory affirmative-action-like quota? (If so, who would determine the fairness of the quota, and how?) 50/50 (with a small allowance for non-binary individuals)?

And what about additional attributes of potential (non-explicit) discrimination, such as race or nationality? The 2020 Oscars provided a good case study. There were no women nominated in the Best Director category (a category which historically has been close to 100% male, with exactly one female winner – Kathryn Bigelow for “the hurt locker” – among five female nominees ever, and zero black winners among six nominees), and only one black person across all the major categories combined (Cynthia Erivo for “Harriet”). Stephen King caused outrage with his tweet about how diversity should be a non-consideration – only quality (he later graciously explained that this was not yet the case today[viii]). Then the South Korean “parasite” took the Best Picture gong – the first time in Academy Awards history that the top honour went to a foreign-language film. My question is: what exactly would be fair at the Oscars? If it were proportional representation, then some 40% of the Oscars should be awarded to Chinese movies and another 40% to Indian ones, with the remainder split among European, British, American, Latin, and other international productions. Would that be fair? Should a special quota be reserved for American movies, given that the Oscars and the Academy are American institutions? Whose taste are the Oscars meant to represent, and how can we measure the fairness of that representation?

All these thoughts flashed through my mind as I stared (somewhat blankly, I admit) at Dr. Oneto’s formulae. The formulae are a great idea, but determining the distributions to measure fairness against is… much more of a challenge.
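To make the “equation for fairness” idea concrete, here is a minimal sketch of one common textbook metric – the demographic parity difference, i.e. the gap in positive-outcome rates between two groups. To be clear, this is my own illustration, not Dr. Oneto’s formulation, and the data is made up:

```python
# A toy illustration of one quantitative fairness metric: demographic
# parity difference, i.e. the gap in positive-outcome rates between two
# groups. My own sketch for intuition - NOT Dr. Oneto's formulation.

def demographic_parity_difference(decisions, groups, group_a, group_b):
    """Absolute gap in positive-decision rates between two groups.

    decisions: list of 0/1 outcomes (e.g. 1 = candidate hired)
    groups:    list of group labels, aligned with decisions
    """
    def positive_rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes)

    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical hiring decisions for ten candidates:
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["F", "F", "M", "M", "F", "M", "F", "M", "F", "M"]

gap = demographic_parity_difference(decisions, groups, "F", "M")
print(f"Demographic parity difference: {gap:.2f}")  # 0.80 (M) vs 0.20 (F) -> 0.60
```

The metric itself is trivial to compute; the hard part, as the recruitment and Oscars examples show, is deciding what gap (if any) counts as fair in a given domain.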

The second speaker, Prof. Yvonne Rogers of UCL, tackled AI transparency and explainability, covering the familiar topics of AIs being black boxes and the need for explanations in important areas of life (such as recruitment or loan decisions). Her go-to example was AI software scrutinising the facial expressions of candidates during the recruitment process on the basis of unverified science (as upsetting as that is, it’s nothing compared to the fellas at Faception, who declare they can identify whether somebody is a terrorist by looking at their face). While my favourite approach to explainable AI – counterfactuals – was not mentioned explicitly, it was definitely there in spirit. Overall it was a really good presentation on a topic I’m quite familiar with.
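For the uninitiated: a counterfactual explanation answers the question “what is the smallest change to my inputs that would have flipped the decision?”. Here is a minimal sketch against a deliberately simple, made-up loan-scoring rule – the model, thresholds, and feature names are all my own assumptions, purely for illustration:

```python
# Toy counterfactual search: find the smallest income increase that
# flips a rejection by a (made-up) loan-scoring rule into an approval.

def loan_approved(income, credit_score):
    """A hypothetical, deliberately simple scoring rule."""
    return income / 1000 + credit_score >= 650

def counterfactual_income(income, credit_score, step=1000, max_steps=500):
    """Smallest income increase (in 'step' increments) flipping a rejection."""
    if loan_approved(income, credit_score):
        return None  # already approved - nothing to explain
    for k in range(1, max_steps + 1):
        if loan_approved(income + k * step, credit_score):
            return k * step
    return None  # no counterfactual found within the search range

delta = counterfactual_income(income=30_000, credit_score=600)
if delta is not None:
    # Prints: "Had your income been GBP 20000 higher, ..."
    print(f"Had your income been GBP {delta} higher, the loan would have been approved.")
```

The appeal of counterfactuals is exactly this shape of answer: no need to expose the model’s internals, just a human-actionable “here is what would have changed the outcome”.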

The third speaker, Prof. David Barber of UCL, talked about privacy in AI systems. In his talk he strongly criticised present-day approaches to data handling and ownership (hardly surprising…). He presented an up-and-coming concept called “randomised response”. Its aim is described succinctly in his paper[ix] as “to develop a strategy for machine learning driven by the requirement that private data should be shared as little as possible and that no-one can be trusted with an individual’s data, neither a data collector/aggregator, nor the machine learner that tries to fit a model”. It was a presentation I should have been interested in – and yet I wasn’t. I think it’s because in my industry (investment management) privacy in AI is less of a concern than it is in recruitment or medicine. Besides, IBM sold me on homomorphic encryption during their 2019 event, so I was somewhat less interested in a solution which (if I understood correctly) “noisifies” part of the personal data to make it untraceable, as opposed to homomorphic encryption’s complete, proper encryption.
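As a primer for the idea (and very much my own sketch, not Prof. Barber’s actual algorithm): the classic randomised-response technique, dating back to 1960s survey design, has each respondent tell the truth only with a known probability. No individual answer can be trusted, yet the population-level statistic can be recovered by inverting the noise:

```python
# Classic randomised response - the survey-era ancestor of the approach
# Prof. Barber described. A sketch for intuition, not his algorithm.
import random

def randomised_answer(true_answer: bool, p_truth: float = 0.75) -> bool:
    """With probability p_truth report the truth; otherwise a fair coin flip."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

def estimate_true_rate(answers, p_truth=0.75):
    """Invert the noise: observed = p*true + (1-p)*0.5, solved for true."""
    observed = sum(answers) / len(answers)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# 10,000 respondents, 30% of whom hold the sensitive attribute:
population = [random.random() < 0.30 for _ in range(10_000)]
reported = [randomised_answer(x) for x in population]
print(f"Estimated rate: {estimate_true_rate(reported):.3f}")  # ~0.300
```

Each individual’s reported answer is plausibly deniable, yet the aggregate survives – which is precisely the “no-one can be trusted with an individual’s data” property the paper demands.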

In the only presentation from a business perspective, Pete Rai from Cisco talked about his company’s experiences with broadly defined digital ethics. It was a very useful counterpoint to the at times slightly too philosophical or theoretical academic presentations that preceded it. It was an interesting presentation, though, like with many others, I’m not sure to what extent it really related to digital ethics or AI ethics – I think it was more about corporate ethics and conduct. That didn’t make it any less interesting, but it inadvertently showed how broad and ambiguous an area digital ethics can be – it means very different things to different people, which doesn’t always help push the conversation forward.

The event was part of a series, so it’s quite regrettable that I had not heard of it before. But that’s just a London thing – one can put in all the work and research to stay in the loop on relevant, meaningful, interesting events, and some great ones will slip under the radar nonetheless. There are some seriously fuzzy, irrational forces at play here.

Looking forward to the next series!

Sources:

[iv] https://www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf

[v] https://arxiv.org/ftp/arxiv/papers/1906/1906.11668.pdf

[vi] https://www.salon.com/2019/02/19/why-women-are-dropping-out-of-stem-careers/

[vii] https://www.sciencedaily.com/releases/2018/09/180917082428.htm

[viii] https://ew.com/oscars/2020/01/27/stephen-king-diversity-oscars-washington-post/

[ix] https://arxiv.org/pdf/2001.04942.pdf


Smart city-blurry background of people crossing street

Royal Institution “Quantum in the City”

Sat 16-Nov-2019

On Sat 16-Nov-2019 the Royal Institution served its science-hungry patrons a real treat: a half-day quantum technologies showcase titled “Quantum in the City: the shape of things to come”. The overarching concept was to present what living in the “quantum city” of the future might look like.

It was organised with the participation of the UK National Quantum Technologies Programme and ran a day after a big industry event at the QE2 Centre.

Weekend events at the RI usually differ from the standard evening lectures in that they are longer and cover one area in more depth. This one was no exception: in addition to a 1.5hr panel discussion, there was an extensive technology showcase across the 1st floor of the RI building, with no fewer than 20 exhibitors, most of them from academia or university spin-off companies.

Quantum in the City event – people networking in a room

One of the chapters from Nassim Taleb’s “skin in the game” (full disclosure: I haven’t read the whole book; I only read the abridged chapter when it appeared in my news feed on, of all the places, Facebook[1]) describes a social group he (with his usual Kanye charm) calls “Intellectual Yet Idiot”. I tick pretty much all the boxes in that description (except the “comfort of his suburban home with 2-car garage” – try “precarious comfort of his Qatari-owned 2-bed rental”), but none more than “has mentioned quantum mechanics at least twice in the past five years in conversations that had nothing to do with physics”. Guilty as charged; that’s me. The context in which I mention quantum mechanics, physics, and technologies in conversations is usually the same – I don’t understand them. I understand one or two of the basic concepts, but I still completely don’t get how the computing power of a quantum computer doubles with each new qubit, what quantum (let alone quantum-safe) encryption is, or why the observer makes all the difference (and what does “observer” even mean?! A conscious observer?!).

Consequently, I keep going to different quantum lectures and presentations, in order to actually understand what this stuff’s about. I basically hope that if I hear it for the n-th time, something in my brain will click. It was that hope that sent me to the RI in November of 2019. Plus, I was really keen to see practical applications of quantum technology.

The discussion panel was great. The panellists were:

  • Miles Padgett, Principal Investigator for the QuantIC Hub
  • Kai Bongs, Director, UK Quantum Technology Hub for Sensors and Metrology (I previously attended Kai’s presentation on quantum sensors at New Scientist Live)
  • Dominic O’Brien, Co-Director, NQIT (UK Quantum Technology Hub for Networked Quantum Information Technologies)
  • Tim Spiller, Director, UK Quantum Technology Hub for Quantum Communications Technologies

The discussion revolved around current and future applications of quantum technologies. Like everyone, I know of quantum computers (I even saw IBM’s during their Think! 2019 event) and quantum encryption. I have a basic awareness of quantum sensors (from Kai’s talk at New Scientist Live in 2019 or 2018) and of some ambitious plans for quantum-technologies-based medical imaging (“quantum doppelgangers”, if I recall correctly – I heard of those during a Science Museum Lates event on quantum). Paul Davies mentioned quantum biology in his own RI lecture “what is life”, as did Prof. Jim al-Khalili in some interview – but that’s about it.

Fundamentally though, my understanding had been that quantum technologies were only beginning to emerge in academic and/or industrial settings. It was genuine news to me that existing technologies (chief among them semiconductors and transistors – which underpin basically all of modern technology and the Internet – plus lasers and MRI scanners) rely on the effects of quantum mechanics and are referred to as “quantum 1.0”. The cutting-edge technologies emerging these days are “quantum 2.0”.

Imaging was a prominent use case for quantum technologies across a number of fields: medical (endoscopy, brain imaging for dementia research), environmental, construction (what’s underneath the soil), and industrial (seeing through murky water or hazy air).

Quantum computing and encryption were also discussed at length. With quantum computing, we are on the cusp of doing practically useful things at a much lower (energy and time) cost than traditional computing (NB: the Google experiment solved a test problem, not a real one). In some use cases, quantum computing may be orders of magnitude cheaper in terms of energy consumption than conventional computing; in others the saving will be minimal (an interesting comment – I had assumed that quantum computers would generate orders-of-magnitude energy and time savings across the board). In terms of encryption, the experts at the RI repeated almost verbatim what Ian Levy of NCSC / GCHQ had said at a quantum computing panel at the Science Museum a few weeks prior: currently all our communications are encrypted and therefore assumed more or less safe; however, it is theoretically possible for an actor to store encrypted communications today and decrypt them using quantum technology in the future. Work is underway to develop mathematical models for quantum-safe encryption.

Work is also starting on the standardisation of quantum technologies to ensure their portability.

The panellists also discussed at length the research and investment landscape for quantum technologies across the UK. They noted that the UK was the first country in the world to come up with a national programme of academia + industry partnership and funding for quantum technology research; the US programme has (allegedly!) pretty much copied the British blueprint. To date, distributed and committed funds total close to GBP 1bn. That is a decent level of funding, but partly because various groups and laboratories had already been set up and funded from other sources; if the GBP 1bn were to fund everything from scratch, it might not be sufficient. Currently, a substantial part of UK quantum research funding (the share varies by group and programme) comes from the EU, so Brexit is an obvious concern.

Separately, there is an acute talent shortage in engineering in general, and even more so in quantum technology. Big tech companies are in a strong position to compete for talent because they can offer great salaries and interesting careers.

Speaking of quantum talent, the rooms of the RI were filled with the country’s (and likely the world’s) best and brightest in the field. Twenty exhibitors presented their projects, all of which were applications-based rather than pure research. Some were proofs of concept (PoCs), some were prototypes, and some were in between. A handful of exhibitors stood out, based on my subjective and oft-biased judgement:

  • Underwater 3D imaging, ultra-thin endoscope, and a camera looking around corners (all from QuantIC: UK tech hub for quantum enhanced imaging) were all practical examples of advanced imaging applications.
  • Trapped Ion Quantum Computer (University of Sussex). The technological details are a little above my paygrade, but apparently different engineering approaches towards quantum computing lend themselves differently to scaling. The researchers in Sussex use microwave technology, which differs from the existing mainstream approaches and looks quite promising. I have had a soft spot and very high regard for the Sussex lab ever since I met its head, the fabulously brilliant and eccentric Winfried Hensinger, when he presented at one of the New Scientist Instant Expert events.
  • Quantum Money (University of Cambridge) was the only project related to my line of work, and a slightly exotic one even in the weird and wonderful world of quantum technologies. S-Money, as it’s called, sits at the intersection of quantum theory and the theory of relativity, and could enable unhackable identification as well as lag-free transacting – on Earth and beyond. And they say the finance industry lacks vision…

In summary, the RI event was nothing short of awesome. I don’t know whether I got anywhere beyond the “Intellectual Yet Idiot” point on the scale of quantum expertise, but I can live with that. I learned of new applications of quantum technologies, and I met some incandescently brilliant people; couldn’t really ask for much more.

[1] Fuller disclosure: I only ever read Nassim’s “black swan”, and I consider it to be a genuinely great book. I bought “fooled by randomness” and “antifragile” with the intention of reading them some day (meaning: never). Still, if I mention the titles with sufficient conviction, most people assume I read them end-to-end. I don’t correct them.