Neal Stephenson “Fall, or Dodge in hell” book review
Joseph Conrad’s “Heart of darkness” is widely regarded as one of the masterpieces of 20th century literature (even though it was technically written in 1899). It directly inspired one cinematic masterpiece (Francis Ford Coppola’s 1979 “Apocalypse now”) and was allegedly an inspiration for another masterpiece-adjacent one (James Gray’s 2019 “Ad Astra”). I personally consider it rather hollow – more a canvas onto which a talented artist can project their own, unique vision than a masterpiece in its own right (1) – but I’m not going to deny its impact and relevance.
Moreover, Conrad managed to contain the complete novella within 65 pages. Aldous Huxley fit “Brave new world” into 249 pages. J. D. Salinger fit “Catcher in the rye” into 198 pages, while George Orwell’s “1984” is approx. 350 pages long (and yes, page count depends on the edition, *obviously*). I seriously worry that in the current environment of “serially-serialised” novels these self-contained masterpieces would struggle to find a publisher. On top of that, we’re also seeing true “bigorexia” in literature: novels have been exploding in size in recent years, probably even more so in the broadly defined sci-fi / fantasy genre. Back when I was a kid, a book over 300 pages was considered long, and “bricks” like Isaac Asimov’s “Foundation” or Frank Herbert’s “Dune” were outliers. Nowadays no one bats an eyelid at 500 or 800 pages.
And this is where Neal Stephenson’s “Fall, or Dodge in hell” comes in. At 883 pages it’s a lot to read through, and I probably wouldn’t have picked it up had it not been referenced by two academics I greatly respect, professors Frank Pasquale and Steve Fuller, in their discussion some time ago. I previously read Stephenson’s “Snow crash” as a kid and was pretty neutral about it: it was an OK cyberpunk novel, but it failed to captivate me; I stuck to Gibson. Still, with such strong recommendations I was more than happy to check “Fall” out.

I will try to keep things relatively spoiler-free. In a nutshell, we have a middle-aged, more-money-than-God tech entrepreneur (the titular Dodge) who dies during a routine medical. His friends and family are executors of his last will, which orders them to have Dodge’s mind digitally copied and uploaded into a digital realm referred to as Bitworld (as opposed to Meatspace, i.e. the real, physical world – and if you’re thinking “that’s not very subtle”, well, nothing in “Fall” is; subtlety and subtext are most definitely *not* the name of the game here). It takes hundreds of (mostly unnecessary) pages to even get to that point, but, frankly, that part is the book’s only saving grace, because Stephenson hits on something real which I think usually gets overlooked in the mind uploading discourse: what would it feel like to be a disembodied brain in a completely alien, unrelatable, sensory stimuli-deprived environment? This is the one part (the only part…) of the novel where Stephenson’s writing is elevated, and we can feel and empathise with the utter chaos and confusion of Dodge’s condition. There was a very, very interesting discussion of a related topic in a recent MIT Technology Review article titled “This is how your brain makes your mind” by psychology professor Lisa Feldman Barrett, which reads: “consider what would happen if you didn’t have a body. A brain born in a vat would have no bodily systems to regulate. It would have no bodily sensations to make sense of. It could not construct value or affect. A disembodied brain would therefore not have a mind”. Stephenson’s Dodge is not in the exact predicament Prof. Barrett is describing (his brain wasn’t born in a vat), but given he has no direct memory of his pre-upload existence, his situation is effectively identical.
One last semi-saving grace is Stephenson’s extrapolated-to-the-extreme vision of information bubbles and tribes. His America is divided so extremely along the lines of (dis)belief and (mis)information filtering that it is effectively a federation of passively hostile states rather than anything resembling united states. That scenario seems to be literally playing out in front of our eyes.
Unfortunately, Stephenson quickly runs out of intellectual firepower (even though he is most definitely a super-smart guy – after all, he invented the metaverse a quarter of a century before Mark Zuckerberg brought it into vogue) and after a handful of truly original and thought-provoking pages we find ourselves somewhere between the digital Old Testament and the digital Middle Ages, where all the uploaded (“digitally reincarnated”) minds begin to participate in an agrarian feudal society, falling into all the same traps and making all the same mistakes mankind made centuries ago; a sci-fi novel turns fantasy. It’s all very heavy-handed, unfortunately; it feels like Stephenson was paid by the page and not by the book, so he inflated it beyond any reason. If there is any moral or lesson to be taken away from the novel, it escaped me. It feels like the author at one point realised that he could not take the story much further, or possibly just got bored with it, and decided to wrap it up.
“Fall” is a paradoxical novel in my eyes: on one hand the meditation on the disembodied, desperately alone brain is fascinating from the Transhumanist perspective; on the other I honestly cannot recall the last time I read a novel so poorly written. It’s just bad literature, pure and simple – which is particularly upsetting because it is a common offence in sci-fi: bold ideas, bad writing. I have read so many sci-fi books where amazing ideas were poorly written up, and I have a real chip on my shoulder about it, as I suspect that sci-fi literature’s second-class citizen status in the literary world (at least as I have perceived it all my life, perhaps wrongly) might be down to its literary qualities. The one novel that comes to mind as a comparator, volume-wise and author’s-clout-wise, is Neil Gaiman’s “American gods” (2), and you really need to look no further to see, glaringly, the difference between quality and not-so-quality literature within the broadly-defined sci-fi and fantasy genre: Gaiman’s writing is full of wonder, with moments of genuine brilliance (Shadow’s experience being tied to a tree), whereas Stephenson’s is heavy, uninspired, and tired.
Against my better judgement, I read the novel through to the end (“if you haven’t read the book back-to-back, then it doesn’t count!” shouts my inner saboteur). Is “Fall” worth the time it takes to go through its 883 pages? No; sadly, it is not. You could read 2 – 3 other, much better books in the time it takes to go through it, and – unlike in that true-life story – there is no grand prize at the end.
What are the lessons to be taken away from 5 months’ worth of wasted evenings? Two, in my view:
Writing a quality novel is tough, but coming up with a quality, non-WTF ending is tougher; that is where so many fail (including Stephenson – spectacularly);
If a book isn’t working for you, just put it down. Sure, it may have a come-to-Jesus revelatory ending, but… how likely is that? Bad novels are usually followed by even worse endings.
_______________________________________________________
(1) Another case in point: Thomas Harris’ Hannibal Lecter series of novels – it’s mediocre literature at best, but it allowed a number of very talented artists to develop fascinating, charismatic characters and compelling stories, both in cinema and on TV.
(2) In my personal view, both novels are on the edge of fantasy and slipstream, but I appreciate that not everyone will agree with this one.
Studio Irma, “Reflecting forward” at the Moco Museum
Studio Irma, “Reflecting forward” at the Moco Museum Amsterdam, Nov-2021.
“Modern art” is a challenging term because it can mean just about anything. One wonders whether in a century or two (assuming humankind makes it that long, which is a pretty brave assumption considering) present day’s modern art will still be considered modern, or whether it will have become regular art by then.
I’m no art buff by any stretch of imagination, but certain works and artists speak to me and move me deeply – and they are very rarely old portraits or landscapes (J. M. W. Turner being one of the rare exceptions; his body of work is something else entirely). One of the more recent examples has been the 2020–2021 Lux digital and immersive art showcase at 180 Studios in London (I expect Yayoi Kusama’s infinity rooms at Tate Modern to make a similarly strong impression on me – assuming I am able to see them, as that exhibition has been completely sold out for many months now). I seem to respond particularly strongly to immersive, vividly colourful experiences, so it was no surprise that Studio Irma’s “Reflecting forward” infinity rooms project at the Moco Museum (“Moco” standing for “modern / contemporary”) caught my eye during a recent trip to the Netherlands.
I knew nothing of Studio Irma, “Reflecting forward”, or Moco prior to my arrival in Amsterdam; I literally Googled “Amsterdam museums” the day before. The Rijksmuseum was a no-brainer because of the “Night watch”, while the Van Gogh Museum was a relatively easy pass, but I was hungry for something original and interesting, something exciting and immersive – and then I came across “Reflecting forward” at the Moco. The Moco Museum is fascinating in its own right – a two-storey townhouse of a very modest size by museum standards, putting up a double David-vs-Goliath fight, sandwiched between the massive Van Gogh Museum and the Rijksmuseum in Amsterdam’s museum district.
Studio Irma is the brainchild of Dutch modern artist Irma de Vries, who specialises in video mapping, augmented reality, immersive experiences, and other emerging technologies. “Reflecting forward” fits squarely within this theme: it consists of 4 infinity rooms (“We all live in bubbles”, “Kaleidoscope”, “Diamond matrix”, “Connect the dots & universe”) filled mostly with vivid, semi-psychedelic video projections and sounds, delivering a powerful, amazingly immersive, dreamlike experience.
To the best of my knowledge, there is no one, universal definition of an infinity room. It is usually a light and / or visual installation in a room covered with mirrors (usually including the floor and the ceiling), which gives the effect of infinite space and perspective (people with less-than-perfect depth perception, such as myself, may end up bumping into the walls). Done right, the result can be phenomenally immersive, arresting, and just great fun. Infinity rooms are probably easier to show than to explain. The world’s best-known ones have been created by Yayoi Kusama, but, as per above, I am yet to experience those.
“Reflecting forward” occupies the entirety of Moco’s basement (and frankly, is quite easy to miss), which is a brilliant idea, because it blocks off all external stimuli and allows the visitors to fully immerse themselves in Irma’s work. Funny thing: the main floors are filled with works by some of modern art’s most accomplished heavyweights (Banksy, Warhol, Basquiat, Keith Haring – even a small installation by Yayoi Kusama), and yet the lesser-known Irma completely blows them out of the water.
I don’t know how “Reflecting forward” compares, from a seasoned art critic’s perspective, to the Van Goghs or Rembrandts just a stone’s throw away (or even the Warhols and Banksys upstairs at the Moco) in terms of artistic highbrow-ness. I’m sure it would be an interesting, though ultimately very subjective, discussion. Having visited both the Rijksmuseum and the Moco, I was captivated by “Reflecting forward” more than by the classical Dutch masterpieces (with the exception of the “Night watch”, which is… just mind-blowing).
With “Reflecting forward” Irma is promoting a new art movement, called connectivism, defined as “a theoretical framework for understanding and learning in a digital age. It emphasizes how internet technologies such as web browsers, search engines, and social media contribute to new ways of learning and how we connect with each other. We take this definition offline and into our physical world. through compassion and empathy, we build a shared understanding, in our collective choice to experience art”1.
An immersive experience is only immersive as a full package. Ambience and sound play a critical (even if at times subliminal) part in the experience. Studio Irma commissioned a bespoke soundscape for “Reflecting forward”, which takes the experience to a whole new level of dreamy-ness. The music is really quite hypnotic. The artist stated her intention to release it on Spotify, but as of the time of writing, it is not there yet.
I believe there is also an augmented reality (AR) app allowing us to take “Reflecting forward” outside, developed with the socially-distanced, pandemic-era art experience in mind, but I couldn’t find any mention of it on Moco’s website, and I haven’t tried it.
Overall, for the time I stayed in Moco’s basement, “Reflecting forward” transported me to beautiful, peaceful, and hypnotic places, giving me an “out of time” experience. Moco deserves huge kudos for giving the project the space it needed and allowing the artist to fully realise her vision. Irma de Vries’ talent and imagination shine in “Reflecting forward”, and I hope to experience more of her work in the future.
______________________________________
BCS: AI for Governance and Governance of AI (20-Jan-2021)
BCS, The Chartered Institute for IT (more commonly known simply as the British Computer Society) strikes me as *the* ultimate underdog in the landscape of British educational and science societies. It’s an entirely subjective view, but entities like the Royal Institution, the Royal Society, the British Academy, the Turing Institute, or the British Library are essentially household names in the UK. They have pedigree, they are venerated, they are super-exclusive, money-can’t-buy brands – and rightly so. Even the enthusiast-led British Interplanetary Society (one of my beloved institutions) enjoys well-earned respect in both scientific and entrepreneurial circles, while BCS strikes me as sort of taken for granted and somewhat invisible outside professional circles, which is a shame. I first came across BCS by chance a couple of years ago, and have since added it to my personal “Tier 1” of science events.
Financial services is probably *the* non-tech industry most frequently appearing in BCS presentations, which isn’t surprising given the amount of crossover between finance and tech (“we are a tech company with a banking license” sort of thing).
One thing BCS does exceptionally well is interdisciplinarity, which is something I am personally very passionate about. In their many events BCS goes well above and beyond technology and discusses topics such as the environment (their event on the climate aspects of cloud computing was *amazing*!), diversity and inclusion, crime, society, even history (their 20-May-2021 event on the historiography of computing was very narrowly beaten by receiving my first Covid jab as the best thing to have happened to me that day). Another thing BCS does exceedingly well is balance general-public-level accessibility with really specialist and in-depth content (something the RI and BIS also do very well, while, in my personal opinion, the RS or Turing… not so well). BCS has a major advantage of “strength in numbers”, because their membership is so broad that they can recruit specialist speakers to present on any topic. Their speakers tend to be practitioners (at least in the events I attended), and more often than not they are fascinating individuals, which oftentimes turns a presentation into a treat.

The “AI for Governance and Governance of AI” event definitely belongs to the “treat” category thanks to the speaker, Mike Small, who – with 40+ years of IT experience – definitely knew what he was talking about. He started off with a very brief intro to AI (which, frankly, beats anything I’ve seen online for conciseness and clarity). Moving on to governance, Mike very clearly explained the difference between using AI in governance functions (i.e. mostly Compliance / General Counsel and tangential areas) and governance *of* AI, i.e. the completely new (and in many organisations as-yet-nonexistent) framework for managing AI systems and ensuring their compliant and ethical functioning. While the former is becoming increasingly understood and appreciated conceptually (as for implementation, I would say that varies greatly between organisations), the latter is far behind, as I can attest from observation. The default thinking seems to be that AI is just another new technology, and should be approached purely as a technological means to a (business) end. The challenge is that, with its many unique considerations, AI (Machine Learning, to be exact) can be an opaque means to an end, which is incompatible (if not downright non-compliant) with prudential and / or regulatory expectations. AI is not the first technology to require an interdisciplinary / holistic governance perspective in the corporate setting: cloud outsourcing, for example, has been an increasingly tightly regulated technology in financial services since around 2018.
The eight unique AI governance challenges singled out by Mike are:
- Explainability;
- Data privacy;
- Data bias;
- Lifecycle management (aka model or algorithm management);
- Culture and ethics;
- Human involvement;
- Adversarial attacks;
- Internal risk management (this one, in my view, may not necessarily belong here, as risk management is a function, not a risk per se).
The list, as well as a comparison of global AI frameworks that followed, was what really triggered me in the presentation (in a positive way) because of its themes-based approach to the governance of AI (which happens to be one of my academic research interests). The list of AI regulatory guidances, best practices, consultations, and most recently regulations proper has been growing rapidly for at least 2 years now (and that is excluding ethics, where guidances started a couple of years earlier and currently far outnumber anything regulation-related). Some of them come from government bodies (e.g. the European Commission), others from regulators (e.g. the ICO), others from industry associations (e.g. IOSCO). I reviewed many of them, and they all contribute meaningful ideas and / or perspectives, but they’re quite difficult to compare side-by-side because of how new and non-standardised the area is. Mike Small is definitely onto something by extracting comparable, broad themes from long and complex guidances. I suspect that the next step will be for policy analysts, academics, the industry, and lastly the regulators themselves to start conducting analyses similar to Mike’s, to come up with themes that can be universally agreed upon (for example model management – that strikes me as rather uncontroversial) and those where lines of political / ideological / economic divide are being drawn (e.g. data localisation or handling of personal data in general).
One thing’s for sure: regulatory standards (be they soft or hard law) for the governance of AI are beginning to emerge, and there are many interesting developments ahead in the foreseeable future. It’s better to start paying attention at the early stages than to play a painful and expensive game of catch-up in a couple of years.
You can replay the complete presentation here.
Accounting for panic runs: The case of Covid-19 (14-Dec-2020)
Professor Gulnur Muradoglu is someone I hold in high esteem. She used to be my Behavioural Finance lecturer at the establishment once known as Cass Business School in London, back in the day when it was still called Cass Business School. It was my first proper academic encounter with behavioural finance, which led to a lifelong interest; professor Muradoglu deserves substantial personal credit for that.
Professor Muradoglu has since moved to Queen Mary University of London, where she became a director of the Behavioural Finance Working Group (BFWG). BFWG has been organising fascinating and hugely influential annual conferences for 15 years now, as well as standalone lectures and events. [Personal note: as someone passionate about behavioural finance, I find it quite baffling how the discipline has not (or at least not yet) spilled over into mainstream finance. Granted, some business schools now offer BF as an elective in their Master’s and / or MBA programmes, and concepts such as bias or bubble have gone mainstream, but it’s still a niche area. I was not aware of BF being explicitly incorporated into the investment decision-making process at any of the several world brand-name asset managers I worked for or with (though, in their defence, concepts such as herding or the asset price bubble are generally obvious to the point of goes-without-saying). Finding a Master’s or PhD programme explicitly focused on behavioural finance is hugely challenging because there are very few universities offering those (Warwick University is one esteemed exception that I’m aware of). The same applies to BF events, which are very few and far between – which is yet another reason to appreciate BFWG’s annual conference.]

The panic runs lecture was an example of BFWG’s standalone events, presenting new research conducted jointly by Prof. Muradoglu and her colleague Prof. Arman Eshraghi (co-author of one of my all-time favourite academic articles, “hedge funds and unconscious fantasy”) from Cardiff University Business School. Presented nine months after the first Covid lockdown started in the UK, it was the earliest piece of research analysing Covid-19-related events from the behavioural finance perspective I was aware of.
The starting point for Muradoglu and Eshraghi was a viral video of UK critical care nurse Dawn Bilbrough appealing to the general public to stop panic buying after she had been unable to get any provisions for herself following a 48-hour shift (the video that inspired countless individuals, neighbourhood groups, and businesses to organise food collections for the NHS staff in their local hospitals). They observed that:
- Supermarket runs are unprecedented, although they have parallels with bank runs;
- Supermarket runs are incompatible with the concept of “homo economicus” (rational and narrowly self-interested economic agent).
They argue that viewing individuals as emotional and economic (as opposed to rational and economic) is more compatible with phenomena such as supermarket runs (as well as just about any other aspect of human economic behaviour, if I may add). They make a very interesting (and so often overlooked) distinction between risk (a known unknown) and uncertainty (an unknown unknown), highlighting humans’ preference for the former over the latter, and frame supermarket runs as acts of collective flight from the uncertainty of the Covid-19 situation at the very beginning of the global lockdowns. Muradoglu and Eshraghi strongly advocate in favour of Michel Callon’s ANT (actor-network theory) framework, whereby the assumption of atomised agents as the only unit of analysis is enhanced to include the network of agents as well. They then overlay it with the “an engine, not a camera” concept from Donald MacKenzie, whereby showing and sharing images of supermarket runs turns from documenting into perpetuating. They also strongly object to labelling the Covid-19 pandemic as either an externality or a black swan, because it is neither (as far as I can recall, there weren’t any attempts to frame Covid-19 as a black swan, because with smaller, contained outbreaks of Ebola, SARS, MERS, and H1N1 in recent years it would be impossible to frame Covid as an entirely unforeseeable event – still, it’s good to hear it from experts).
Even though I am not working in the field of economics proper, I do hope that in the 21st century homo economicus is nothing more than a theoretical academic concept. Muradoglu and Eshraghi simply and elegantly prove that in the face of a high-impact adverse event alternative approaches do a much better job of explaining observed reality. I would be most keen for a follow-up presentation where the speakers could propose how these alternative approaches could be used to inform policy or even business practices to mitigate, counteract, or – ideally – prevent behaviours such as panic buying or other irrational group economic behaviours which could lead to adverse outcomes, such as crypto investing (which, in my view, cannot be referred to as “investing” in the de facto absence of an asset with any fundamentals; it is pure speculation).
Somewhat oddly, I haven’t been able to find Muradoglu and Eshraghi’s article online yet (I assume it’s pending publication). You can see the entire presentation recording here.
“Nothing ever really disappears from the Internet” – or does it? (25-Apr-2022)
How many times have you tried looking for that article you wanted to re-read, only to find yourself giving up an hour or two later, unsure whether you had ever read the article in the first place or whether it was just a figment of your imagination?
I developed OCD some time in my teen years (which probably isn’t very atypical). One aspect of my OCD is really banal (“have I switched off the oven…?!”; “did I lock the front door…?!”) and easy to manage with the help of a camera-phone. Another aspect of my OCD is compulsive curiosity with a hint of FOMO (“what if that one article I miss is the one that would change my life forever…?!”). With the compulsion to *know* came the compulsion to *remember*, which is where the OCD can sometimes get a bit more… problematic. I don’t just want to read everything – I want to remember everything I read (and where I read it!) too.
When I was a kid, my OCD was mostly relegated to print media and TV. Those of us old enough to remember those “analogue days” can relate to how slow and challenging chasing a stubborn forgotten factoid was (“that redhead actress in one of these awful Jurassic Park sequels… OMG, I can see her face on the poster in my head, I just can’t see the poster close enough to read her name… she was in that action movie with that other blonde actress… what is her ****ing name?!” – that sort of thing).

Then came the Web, followed by smartphones.
Nowadays, with smartphones and Google, finding information online is so easy and immediate that many of us have forgotten how difficult it was a mere 20-something years ago. Moreover, more and more people have simply never experienced that analogue kind of existence at all.
We get more content pushed our way than we could possibly consume: Chrome Discover recommendations; every single article in your social media feeds; all the newsletters you receive and promise yourself to start reading one day (spoiler alert: you won’t; same goes for that copy of Karl Marx’s “Capital” on your bookshelf); all the articles published daily by your go-to news source; etc. Most people read whatever they choose to read, close the tab, and move on. Some remember what they’ve read very well, some vaguely, and over time we forget most of it (OK, so it probably is retained on some level [like these very interesting articles claim 1,2], gradually building up our selves much like tiny corals build up a huge coral reef, but hardly anyone can recall titles and topics of the articles they read days, let alone years prior).
A handful of the articles we read will resonate with us really strongly over time. Some of us (not very many, I’m assuming) will Evernote or OneNote them as personal, local-copy notes; some will bookmark them; and some will just assume they will be able to find these articles online with ease. My question to you is this: how many times have you tried looking for that article you wanted to re-read, only to find yourself giving up an hour or two later, unsure whether you had ever read the article in the first place or whether it was just a figment of your imagination? (If your answer is “never”, you can probably skip the rest of this post.) It happened to me so many, many times that I started worrying that perhaps something *is* wrong with my head.
And Web pages are not even the whole story. Social media content is even more “locked” within the respective apps. Most (if not all) of them can technically be accessed via the Web and thus archived, but this rarely works in practice. It works for LinkedIn, because LI was developed as a browser-first application, and it’s generally built around posts and articles, which lend themselves to browser viewing and saving. Facebook was technically developed as browser-first too, but good luck saving or clipping anything. FB is still better than Instagram, because with FB we can still save some of the images, whereas with Instagram that option is practically a non-starter. Insta wants us to save our favourites within the app, thus keeping it fully under its (i.e. Meta’s) control. That means that favourited images can at any time be removed by the posters, or by Instagram admins, without us ever knowing. I don’t know how things work with Snap, TikTok, and other social media apps, because I don’t use them, but I suspect that the general principle is similar: content isn’t easily “saveable” outside the app.
Then there are ebooks, which are never fully offline the way paper books are. The Atlantic article 3 highlights this with a hilarious example of the word “kindle” being replaced with “Nook” in a Nook e-reader edition of… War and Peace.
Then came Iza Kaminska’s 2017 FT article “The digital rot that threatens our collective memory” and I realised that maybe nothing’s wrong with my head, and that maybe it’s not just me. While Iza’s article triggered me, its follow-up in The Atlantic nearly four years later (“The Internet is rotting”) super-triggered me. The part “We found that 50 percent of the links embedded in Court opinions since 1996, when the first hyperlink was used, no longer worked. And 75 percent of the links in the Harvard Law Review no longer worked” felt like a long-overdue vindication of my memory (and sanity). It really wasn’t me – it was the Internet.
It really boggles the mind: in just 2–3 decades the Web has replaced libraries and physical archives as humankind’s primary source of reference information (and arguably fiction as well, though paper books are still holding reasonably strong for now). The Internet consists of websites, and websites consist of posts and articles – all of which have a unique reference in the form of the URL. I can’t speak for other people, but I have always implicitly assumed that information lives pretty much indefinitely online. On the whole that may be the case (the reef, so to speak), but it does not hold for specific pages (individual corals) anywhere near as much as I had assumed. There is, of course, the Internet Archive’s Wayback Machine, a brilliant concept and a somewhat unsung hero of the Web – but WM is far from complete (example: I found a dead link [ http://www.psychiatry.cam.ac.uk/blog/2013/07/18/bad-moves-how-decision-making-goes-wrong-and-the-ethics-of-smart-drugs/ ] in one of my blog posts [“Royal Institution: How to Change Your Mind”]. The missing resource was posted on one of the websites within the domain of Cambridge University, which suggests it was high-quality, valuable, meaningful content – and yet the Wayback Machine has not archived it. This is not a criticism of WM or its parent, the Internet Archive – what they do deserves the highest praise; it’s more about recognising the challenges and limitations they are facing).
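For the technically inclined: checking a suspect link yourself can be scripted in a few lines, since the Internet Archive exposes a publicly documented “availability” endpoint that returns the closest snapshot for a given URL. Below is a minimal, illustrative Python sketch (not a robust tool, and deliberately light on error handling), using the dead Cambridge link above as the test case.

```python
# Minimal sketch: check whether a URL still resolves and whether the
# Wayback Machine holds a snapshot of it, via the Internet Archive's
# public "availability" endpoint.
import requests

def is_alive(url: str) -> bool:
    """Return True if the URL responds with a non-error status code."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        return resp.status_code < 400
    except requests.RequestException:
        return False

def wayback_snapshot(url: str):
    """Return the URL of the closest Wayback Machine snapshot, if any."""
    api = "https://archive.org/wayback/available"
    data = requests.get(api, params={"url": url}, timeout=10).json()
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap else None

if __name__ == "__main__":
    dead_link = ("http://www.psychiatry.cam.ac.uk/blog/2013/07/18/"
                 "bad-moves-how-decision-making-goes-wrong-and-the-ethics-of-smart-drugs/")
    print("still alive:", is_alive(dead_link))
    print("wayback snapshot:", wayback_snapshot(dead_link))
```

Running something like this over every link in an old blog post is a quick, if sobering, way of measuring one’s personal share of the link rot described above.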
So, with my sanity vindicated, but my OCD and FOMO being as voracious as ever – where do we go from here?
There is the local copy / offline option with Evernote, OneNote, and similar Web clippers. I have been using Evernote for years now (this is not an endorsement), and, frankly, it’s far from perfect, particularly on a mobile (I said this was not an endorsement…) – but everything else I have come across has been even worse; and, frankly, there are surprisingly few players in that niche. Still, Evernote is the best I could find for all-in-one article clipping and saving – it’s either this or “save article as”, which is a bit too… 90’s. Then there is the old-fashioned direct e-mail newsletter approach, which, as proven by Substack, can still be very popular. I’m old enough to remember life pre-cloud (or even pre-Web, albeit very faintly, and only because the Web arrived in Poland years after it arrived in some of the more developed Western countries) – the C:\ drive, 1.44 MB floppy disks, and all that – and that, excuse the cheap pun, clouds my judgement: I embrace the cloud and I love the cloud, but I want to have a local, physical copy of my news archive. However, as Drew Justin rightly points out in his Wired article, “As with any public good, the solution to this problem [the deterioration of the infrastructure for publicly available knowledge] should not be a multitude of private data silos, only searchable by their individual owners, but an archive that is organized coherently so that anyone can reliably find what they need”; me updating my personal archive in Evernote (at the cost of about 2 hours of admin per week) is a crutch, an external memory device for my brain. It does nothing for the global, pervasive link rot. Plus, the Evernotes of this world don’t work with social media apps very well. There are dedicated content clippers / extractors for different social media apps, but they’re usually a bit cumbersome, and don’t really “liberate” the “locked” content in any meaningful way.
I see two ways of approaching this challenge:
- Beefing up the Internet Archive, the International Internet Preservation Consortium, and similar institutions to enable duplication and storage of as wide a swath of the Internet as possible, as frequently as possible. That would require massive financial outlays and overcoming some regulatory challenges (non-indexing requests, the right to be forgotten, GDPR and GDPR-like regulations worldwide). Content locked in-app would likely pose a legal challenge to duplicate and store if the app sees itself as the owner of all the content (even if it was generated by the users).
- Accepting the link rot as inevitable and just letting go. It may sound counterintuitive, but humankind has lost countless historical records and works of art from the sixty or so centuries of the pre-Internet era and somehow managed to carry on anyway. I’m guessing that our ancestors may have seen this as inevitable – so perhaps we should too?
I wonder… is this ultimately about our relationship with our past? Perhaps trying to preserve something indefinitely is unnatural, and we should focus on the current and the new rather than obsess over archiving the old? This certainly doesn’t agree with me, but perhaps I’m the one who’s wrong here…
________________________________________________
1https://www.wired.co.uk/article/forget-idea-we-forget-anything
Ron Kalifa 2021 UK Fintech Review
Having delivered the 2020 UK budget, the Chancellor of the Exchequer Rishi Sunak (1) commissioned a seasoned banking / FinTech executive, Ron Kalifa OBE, to conduct an independent review of the state of the UK FinTech industry. The objective of the review was not merely an assessment of the current state of UK FinTech, but also the development of a set of recommendations to support its growth, adoption, and global position and competitiveness.
On that last point, measuring the UK’s (or any other country’s) position in FinTech has always been a combination of science and art. While certain financial metrics are fairly unambiguous and comparable (e.g. the number or total value of IPOs, or the total value of M&A transactions), the ranking of a given country in FinTech is more nuanced. Should the total value of all FinTech funding / deals be used on an absolute basis, or perhaps adjusted per capita? What about the legal and regulatory environment – should it be captured in the score as well?
The abovementioned considerations are usually pondered by researchers in specialist publications or at market data companies, while most people in financial services simply work out some sort of internal weighted average of the above, and usually arrive at the same Top 3 (the US, the UK, and Singapore) fairly consensually.
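To make that “internal weighted average” idea concrete, here is a deliberately toy Python sketch. Every metric, figure, and weight below is hypothetical, made up purely for illustration; the point is the mechanics (min-max normalise each metric, then weight and sum), not the ranking it produces.

```python
# Illustrative weighted composite FinTech score.
# All figures and weights below are hypothetical, for illustration only.
metrics = {
    # country: (funding_usd_bn, funding_per_capita_usd, regulatory_score_0_10)
    "US":        (50.0, 150.0, 8.0),
    "UK":        (11.0, 165.0, 9.0),
    "Singapore": ( 4.0, 700.0, 9.0),
}
weights = (0.4, 0.3, 0.3)  # hypothetical relative importance of each metric

def normalise(values):
    """Scale a list of values to the 0..1 range (min-max normalisation)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 1.0 for v in values]

countries = list(metrics)
columns = list(zip(*metrics.values()))          # one tuple of values per metric
norm_columns = [normalise(col) for col in columns]

scores = {
    country: sum(w * norm_columns[i][j] for i, w in enumerate(weights))
    for j, country in enumerate(countries)
}
for country, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{country:10s} {score:.2f}")
```

Change the weights and the ranking can shift, which is precisely why these league tables are as much art as science.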
Lastly, FinTech industry is unlikely to flourish without world-class educational institutions and a certain entrepreneurial culture. UK universities are among the very best in the world across all disciplines in all global university rankings. The entrepreneurial culture, as an intangible, is challenging to quantify, but most people will agree that London is full of driven, risk-taking, entrepreneurial people from all over the world.

One way or another, the UK’s FinTech powerhouse status is indisputable, and the Chancellor’s plan to build on it makes perfect sense. Brexit is the elephant in the room, because by leaving the single market, the UK created a barrier to the free movement of talent where there was none before. Brexit also arguably sent a less-than-positive message to the traditionally globalised / internationalised FinTech industry at large. On the other hand, there is some chance that Brexit could enable UK FinTechs’ more global (as opposed to EU-focused) expansion, although this claim can and will only be verified over time.
To his credit, Ron Kalifa approached his review from the “to do list” perspective of actions and recommendations rather than analysis per se. As a result, his document is highly actionable as well as reasonably compact.
The Kalifa Review is broken into five main sections:
- Policy and Regulation – which focuses on creating a new regulatory framework for emerging technologies and ensuring that FinTech becomes an integral part of UK trade policy (interestingly, this part includes a section on the regulatory implications of AI as regards the PRA and FCA rules; this is the highest-profile official UK publication to address these considerations that I’m aware of. Prior to the Kalifa Review, AI regulation in financial services had been discussed at length from the GDPR perspective in ICO publications, but those do not directly feed into government policies).
- Skills and Talent – interestingly, the focus here is *not* on competing for Oxbridge / Russell Group graduates nor on expanding the finance and technology Bachelor’s and Master’s course portfolio at UK universities. Instead the focus is on retraining and upskilling the existing adult workforce, mostly in the form of low-cost short courses. This aligns with broader civilisational / existential challenges such as the disappearance of the “job for life” and the need for lifelong education in an increasingly complex and interconnected world. Separately, Kalifa proposes a new category of UK visa for FinTech talent to ensure the UK’s continued access to high-quality international talent, which represents approx. 42% of the FinTech workforce.
- Investment – in addition to standard investment incentives such as tax credits, the Kalifa Review recommends improvement of the UK IPO environment as well as creation of a (new?) global family of FinTech indices to give the sector greater visibility (this is an interesting recommendation for anyone who has ever worked for a market data vendor; indices are normally created in response to market demand and there are FinTech indices within existing, established market index families. Creating a new family of indices is something different altogether).
- International Attractiveness and Competitiveness.
- National Connectivity – this point is particularly interesting, as it seems solely focused on countering London-centric thinking and recognising and strengthening other FinTech hubs across the UK.
The Kalifa Review makes a crucial sixth recommendation: to create a new organisational body, which the Review proposes to call the Centre for Finance, Innovation, and Technology (CFIT). The Review is slightly vague on how it envisions the CFIT structurally and organisationally (it does mention that it would be a public / private partnership but does not go into further detail). CFIT is the one recommendation of the Kalifa Review which seems more of a vision than a fully fleshed-out idea, but Ron Kalifa himself spoke about it prominently in his live appearances and gave the impression that CFIT would be the organisational structure upon which the delivery of his recommendations would largely hinge.
Upon release, the Kalifa Review was met with a great deal of interest from the financial services industry, as well as the legal profession, policymakers, business leaders, and academics. Ron Kalifa made multiple appearances in different online presentations on the topic, and most brand-name law firms carefully analysed and summarised his report. That, however, was the visible and easier part. The real question is to what extent the recommendations made in the Kalifa Review will be reflected in government policies in the years to come.
You can read the report in its entirety here.
__________________________________________
(1) Rishi Sunak took over the job of the Chancellor of the Exchequer from his predecessor, Sajid Javid, less than a month prior to the publication of Budget 2020. Consequently, it can be debated whether the FinTech report was Rishi Sunak’s original idea, or one he inherited.
Frank Pasquale “Human expertise in the age of AI”
Frank Pasquale “Human expertise in the age of AI” Cambridge talk 26-Nov-2020
Frank Pasquale is a Professor of Law at the Brooklyn Law School and is one of the few “brand name” scholars in the nascent field of AI governance and regulation (Lilian Edwards and Luciano Floridi are other names that come to mind).
I had the pleasure of attending his presentation for the Trust and Technology Initiative at the University of Cambridge back in Nov-2020. The presentation was tied to Pasquale’s forthcoming book titled “New Laws of Robotics: Defending Human Expertise in the Age of AI”.
Professor Pasquale opened his presentation by listing what he describes as “paradigm cases for rapid automation” (areas where AI and / or automation have already made substantial inroads, or are very likely to do so in the near future), such as manufacturing, logistics, and agriculture, as well as some I personally disagree with: transport and mining (1). He argues for AI complementing rather than replacing humans as the key to advancing the technology in many disciplines (as well as advancing those disciplines themselves) – a view I fully concur with.

Stifterverband, CC BY 3.0, via Wikimedia Commons
He then moves on to the critical – though largely overlooked – distinction between governance *of* artificial intelligence vs. governance *by* artificial intelligence (the latter being obviously more of a concern, whilst the former has until recently been an afterthought or a non-thought). He remarked that the push for researchers to increasingly determine policy is not technical, but political. It prioritises researchers over subject matter experts, which is not necessarily a good thing (n.b. I cannot say I witnessed that push in financial services, but perhaps in other industries it *is* happening?)
Prof. Pasquale identifies three possible ways forward:
- AI developed and used above / instead of domain experts (“meta-expertise”);
- AI and professionals melt into something new (“melting pot”);
- AI and professionals maintain their distinctiveness (“peaceable kingdom”).
In conclusion, Pasquale proposes his own new laws of robotics:
- Complementarity: Intelligence Augmentation (IA) over Artificial Intelligence (AI) in professions;
- Robots and AI should not fake humanity;
- Cooperation: no arms races;
- Attribution of ownership, control, and accountability to humans.
Pasquale’s presentation and views resonated with me strongly because I arrived at similar conclusions not through academic research, but by observation, particularly in the financial services and legal services industries. Pasquale is one of the relatively few voices who mitigate some of the (over)enthusiasm regarding how much AI will be able to do for us in the very near future (think: fully autonomous vehicles), as well as some of the doom and gloom regarding how badly AI will upend / disrupt our lives (think: “35% of current jobs in the UK are at high risk of computerisation”). I find it very interesting that for a couple of years now we’ve had all kinds of business leaders, thought leaders, consultants etc. express all kinds of extreme visions of the AI-powered future, but hardly any with any sort of common-sense, middle-ground views. Despite AI evolving at breakneck speed, it seems that our visions and projections of it evolve more slowly. The acknowledgment that fully autonomous vehicles are proving more challenging and taking longer to develop than anticipated only a few years back has been muted, to say the least. Despite the frightening prognoses regarding unemployment, it has actually been at record lows for years now in the UK, even during the pandemic (2)(3). [Speaking from closer professional proximity, paralegals were one profession singled out as being under immediate existential threat from AI – and I am not aware of that materialising in any way. On the contrary, salary competition for junior lawyers in the UK has recently reached record heights (4)(5).]
It is obviously incredibly challenging to keep up with a technology developing faster than just about anything else in the history of mankind – particularly for those of us outside the technology industry. Regulators and policy makers (to a lesser extent also lawyers and legal scholars) have in recent years been somewhat on the back foot in the face of the rapid development of AI and its applications. However, thanks to some fresh perspectives from people like Prof. Pasquale, this seems to be turning around. Self-regulation (which, as financial services proved in 2008, sometimes spectacularly fails) and abstract / existential high-level discussions are being replaced with concrete, versatile proposals for policy and regulation, which focus on industries, use cases, and outcomes rather than the details and nuances of the underlying technology.
__________________________________________
(1) Plus a unique case of AI in healthcare. While Pasquale adds healthcare to the “rapid paradigm shift” list, the pandemic-era evidence raises doubts over this: https://www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/
(3) https://www.bbc.com/news/business-52660591
(4) https://www.thetimes.co.uk/article/us-law-firms-declare-war-with-140-000-starting-salaries-gl87jdd0q
(5) https://www.ft.com/content/882c9f72-b377-11e9-8cb2-799a3a8cf37b
The Technological Revolution in Financial Services
LSE: The Technological Revolution in Financial Services 23-Apr-2021
The term “FinTech” is currently enjoying truly global recognition. Defining it is much trickier, though: a bit like art or artificial intelligence, everyone has a pretty good intuitive / organic understanding of it, while a formal, universally agreed-upon definition remains elusive. Given that this blog has an entire section dedicated to FinTech, I will proceed on the assumption that all of us have at least that intuitive understanding of the term.
It would be a truism to say that FinTechs are disrupting financial services on multiple levels, such as:
- Convenience (e.g. branch-less opening of an account using a camera and AI-powered KYC software in the background; availability of all services on a mobile);
- Cost competition vis-à-vis established models (think Robinhood’s commission-free share dealing vs. the standard USD or GBP 6–10 fee per single trade [for retail clients] at established brokerages or investment firms; think Revolut’s zero-fee [or nearly zero-fee] fx transactions at interbank rates – see the back-of-the-envelope sketch after this list);
- Innovativeness and fresh thinking;
- Inclusion and reduction of barriers to entry (e.g. by allowing access to investment products below the once-standard minimum thresholds of GBP 1,000 or more or by making global remittances easier and cheaper);
- Smoother and sleeker user experience;
- Greater customisation of products to individual client’s needs;
- …and many more…
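To put the cost point from the list above into rough numbers – using purely hypothetical, back-of-the-envelope figures of the order mentioned there – consider a retail customer placing one trade a month and converting GBP 1,000 of holiday money a year:

```python
# Back-of-the-envelope cost comparison; every fee level below is hypothetical,
# merely of the order of magnitude discussed in the list above.
trades_per_year = 12
fee_per_trade_incumbent = 8.0    # GBP, within the quoted 6-10 per-trade range
fee_per_trade_fintech = 0.0      # commission-free dealing

fx_amount_per_year = 1_000.0     # GBP converted per year
fx_markup_incumbent = 0.025      # assumed 2.5% spread over the interbank rate
fx_markup_fintech = 0.0          # near-interbank rates

incumbent_total = trades_per_year * fee_per_trade_incumbent + fx_amount_per_year * fx_markup_incumbent
fintech_total = trades_per_year * fee_per_trade_fintech + fx_amount_per_year * fx_markup_fintech

print(f"incumbent: GBP {incumbent_total:.2f} per year")  # 96.00 + 25.00 = 121.00
print(f"fintech:   GBP {fintech_total:.2f} per year")    # 0.00
```

Trivial arithmetic, of course, but it shows why the cost argument resonates so strongly with retail customers.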

On a more personal note, FinTech (and many of its abovementioned benefits, chief among them cost reduction and innovativeness) is like the future in that famous William Gibson quote: it’s here, but it’s not evenly distributed. My primary home (London) is one of the FinTech capitals of the world; we all take it for granted that all the latest innovations will be instantly available in London, and Londoners are (broadly generalising) early – if not very early – adopters. I instantly and very tangibly benefitted from all the forex transaction features from Revolut, because compared to what I had to contend with from my brand-name high-street bank it was a whole ’nother level of convenience (one card with multiple sub-accounts in different currencies) and cost (all those ridiculous, extortionate forex transaction fees were gone, reduced to near-zero (1)). My secondary home (Warsaw) is neither a global nor even a European FinTech capital – consequently, it’s a markedly different FinTech environment out here. Challenger banks are close to non-existent; as for other FinTechs, they are clustered in payday lending (whose use of technology and innovation is notorious rather than beneficial), forex trading, and online payments (one area which is of some benefit, though sometimes the commissions charged by the payment operators behind the scenes seem higher than they should be).
The speakers invited by the LSE (Michael R. King and Richard W. Nesbitt) have recently concluded comprehensive industry research, summarised in their book “The Technological Revolution in Financial Services: How Banks, FinTechs and Customers Win Together”. They shared some of their insights and conclusions.
Firstly, they stated that technology itself is not a strategy (which, depending on the interpretation, might run a bit counter to the iconic “we are a technology company with a banking license” declarations of many CEOs). Technology is a tool, which may be a part of the strategy.
Secondly, technology itself does not provide a sustained competitive advantage, because it is widely available and can be copied by competitors. I found this observation both interesting and counterintuitive. I have always thought that for FinTechs technology *defines* their competitive advantage, but perhaps the operative word here is “sustained”. It’s one thing to come up with a disruptive solution, but if it is widely copied and becomes the new normal, then indeed the initial advantage is eroded. Still, personal observations lead me to challenge this statement: Revolut may have introduced zero-fee forex conversions some years ago (I joined in 2018), and yet many high-street banks still charge conversion fees and use terrible, re-quoted rates. They could copy Revolut’s idea in an instant, and yet they choose not to. Another example: if technology does not provide FinTechs’ sustained competitive advantage, then how come challengers Revolut, Monzo, and Starling are enjoying growth in customer numbers and strong brand recognition, while NatWest’s Bo was but a blip on the radar?
King and Nesbitt further argue that the biggest barrier in financial services is not technology or even regulation, but access to customers. Again, I acknowledge and respect that conclusion, but I can’t fully agree with it. All the brand-name banks have generated and sat on enormous troves of data for decades, and only PSD 2 compelled them to make this data widely available – yet Revolut and Monzo succeeded without resorting to Open Banking as their main selling point; they just offered innovative, sleek products; countless remittance companies entered the market and succeeded not because they gained access to data Western Union previously kept to itself – they just offered better rates.
Another major component of the “FinTech equation” is, according to the authors, trust in financial services (trust defined around both data and privacy). They argue that the erosion of that trust post-2008 was what paved the way for FinTechs. I agree with the erosion-of-trust part, but it was neither data leaks nor privacy violations that led to the public’s distrust during the 2008 financial crisis: it was imprudent management of money and too-creative financial innovation (back then sometimes labelled “financial engineering”, even though this term is a terrible misnomer, because finance, no matter how badly some people in the industry would want it to be, is not an exact science; it’s a social science).
On the social side, FinTech (expectedly or not) may lead to substantial benefits in terms of financial inclusion of the underbanked (e.g. people without a birth certificate and / or any proof of ID) and / or previously marginalised groups (e.g. women). One of the panellists, Ghela Boskovich, brought up India’s Aadhaar system, which allows everyone to obtain an ID number based purely on their biometric data, and Kenya’s M-Pesa mobile payments system (which does not even require a smartphone – an old-school mobile is sufficient), which opened the financial system to women in ways that were not available prior.
On the more traditional thinking side, the authors concluded that regulation and risk management remain pillars of financial services. On the cybersecurity side they advocated switching from incumbent thinking of “if we are hacked” to FinTechs’ thinking of “when we are hacked”, with prompt and transparent disclosures of cybersecurity incidents.
King and Nesbitt concluded that in the end the partnership model between established banks and FinTech start-ups will be the winning combination. It is a very interesting thought. Many (perhaps most) FinTechs need these partnerships throughout most of their journey: from incubators and accelerators (like Barclays Rise in London), through flagship / strategic client relationships (whereby one established financial institution becomes a FinTech’s primary client, and the FinTech effectively depends on it for its survival). Sometimes established financials end up acquiring FinTech start-ups, though it doesn’t happen anywhere near as often as in the tech industry.
Overall King, Nesbitt, and their esteemed guests gave me a huge amount of food for thought around an area of great interest to me. I may or may not fully agree with some of their conclusions, and it doesn’t matter that much – we will see how FinTech evolves in the coming years, and I’m quite certain its evolution will take some twists and turns few have foreseen. The really important thing for me is inclusion, because I see it as a massive and undeniable benefit.
_______________________________________
1. Disclosure: I am not and have never been an employee of Revolut. This is not an advertorial or any form of promotion.
Cambridge Zero presents “Solar & Carbon geoengineering”
Cambridge Zero is an interdisciplinary climate change initiative set up by the University of Cambridge. Its focus is research and policy, which also includes science communication through courses, projects, and events.
One such event was a 29-Mar-2021 panel discussion on geoengineering as a way of mitigating / offsetting / reducing global warming. Geoengineering has been a relatively popular term in recent months, mostly in relation to a high-profile experiment planned by Harvard University and publicised by the MIT Tech Review… which was subsequently indefinitely halted.
I believe the first time I heard the term was at the (monumental) Science Museum IMAX theatre in London, where I attended a special screening of a breathtakingly beautiful and heartbreaking documentary titled “Anote’s Ark” back in 2018. “Anote’s Ark” follows the then-president of the Republic of Kiribati, Anote Tong, as he attends multiple high-profile climate events trying to secure tangible assistance for Kiribati, which is at a very realistic risk of disappearing under the waters of the Pacific Ocean in the coming decades, as its elevation above sea level is 2 metres at the highest point (come to think of it, the Maldives could face a similar threat soon). Geoengineering was one of the discussion points among the scientists invited to the after-movie panel. I vividly remember thinking about the disconnect between the cutting-edge but ultimately purely theoretical ideas of geoengineering and the painfully tangible reality of Kiribati and its citizens, who witness increasingly higher waves penetrating increasingly deeper inland.

Prototype of CO2-capturing machine, Science Museum, London 2022
Geoengineering has all the attributes needed to make it into the zeitgeist: a catchy, self-explanatory name; explicit hi-tech connotations; and the potential to become the silver bullet (at least conceptually) that almost magically balances the equation of still-rising emissions against the desperate need to keep the temperature rise as close to 1.5°C as possible.
The Cambridge Zero event was a great introduction to the topic and its many nuances. Firstly, there are (at least) two types of geoengineering:
- Solar (increasing the reflectivity of the Earth to reflect more and absorb less of the Sun’s heat in order to reduce the planet’s temperature);
- Carbon (removing the already-emitted excess CO2 from the atmosphere).
The broad premise of solar geoengineering is to scatter sunlight in the stratosphere (most likely by dispersing particles of highly reflective compounds or materials). While some compare the effect to the aftermath of a volcanic eruption, the speaker’s suggestion was to think of it as a thin layer of global smog. The conceptual premise of solar geoengineering is quite easy to grasp (which is not the same as saying that solar geoengineering itself is in any way easy – as a matter of fact, it is extremely complex and, in the parlance of every academic paper ever, “further research is needed”). The moral and political considerations may be almost as complex as the process itself. There is a huge moral hazard that the fossil fuel industry (and similar vested interests, economic and political) might perform a “superfreak pivot”: going from overt or covert climate change denial to acknowledging it and pointing to solar geoengineering as the only way to fix it. Consequently, these entities and individuals would no longer need to deny that the climate is changing (an increasingly difficult position to defend these days), while still pushing for business as usual (as in: *their* business as usual) and delaying decarbonisation.
A quote from the book “Has It Come to This: The Promises and Perils of Geoengineering on the Brink” puts it brilliantly: “Subjectively and objectively, geoengineering is an extreme expression of a – perhaps *the* – paradox of capitalist modernity. The structures set up by people are perceived as immune to tinkering, but there is hardly any limit to how natural systems can be manipulated. The natural becomes plastic and contingent; the social becomes set in stone.”
Carbon geoengineering is also conceptually very straightforward: remove the already-emitted excess carbon from the atmosphere. There are two rationales for doing so:
- To offset carbon emissions that – for economic or technological reasons – cannot be eliminated (emissions from planes are one example that comes to mind). Those residual emissions will be balanced out by negative emissions resulting from geoengineering;
- To compensate for historical emissions.
The IPCC special report on 1.5°C states unambiguously that limiting global warming to 1.5°C needs to involve large-scale carbon removal from the atmosphere (“large-scale” being defined as 100 – 1,000 gigatons of CO2 over the course of the 21st century). This, in my view, fundamentally differentiates carbon geoengineering from solar geoengineering in terms of politics and policy: the latter is a more conceptual, “nice to have if it works” lofty idea; the former is enshrined in the climate policy roadmap of the leading scientific authority on climate change. In other words, carbon capture is not a “nice to have” – it is critical.
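To put that range into perspective, here is a minimal back-of-envelope sketch (with my own, purely illustrative assumptions: removal spread evenly over the roughly 80 years left in the century, and current global emissions of roughly 35 – 40 GtCO2 per year):

```python
# Back-of-envelope: what the IPCC's cumulative removal range could imply per year.
# Illustrative assumptions only: removal spread evenly over ~80 remaining years
# of the 21st century, and current global emissions of roughly 37 GtCO2 per year.

CUMULATIVE_REMOVAL_GT = (100, 1_000)   # IPCC SR1.5 range, GtCO2 over the century
YEARS_REMAINING = 80                   # assumed averaging period
CURRENT_ANNUAL_EMISSIONS_GT = 37       # rough recent figure, GtCO2 per year

for total in CUMULATIVE_REMOVAL_GT:
    per_year = total / YEARS_REMAINING
    share = per_year / CURRENT_ANNUAL_EMISSIONS_GT
    print(f"{total:>5} GtCO2 cumulative -> ~{per_year:.1f} GtCO2 removed per year "
          f"(~{share:.0%} of today's annual emissions)")
```

Even under these generous, evenly-spread assumptions we are talking about removing on the order of one to a dozen gigatons of CO2 every single year – a scale that no existing removal technology or afforestation programme comes anywhere close to.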
Carbon geoengineering is a goal that can be achieved using a variety of means: from natural (planting more trees, planting crops whose roots trap more carbon) to technological (carbon capture and storage). The problem is that much of it remains in the research phase and is nowhere near deployment at scale (which, on the carbon capture and storage side, would be akin to building a parallel energy infrastructure in reverse).
There is an elephant in the room that shows the limitations of geoengineering: the (im)permanence of the results. The effects of solar geoengineering are temporary, while carbon capture has its limits: natural (available land, be it for carbon-trapping vegetation or carbon capture plants), technological (capture and processing plants), and storage. Geoengineering could hopefully stave off the worst-case climate change scenarios, slow down the rate at which the planet is warming, and / or buy mankind a little bit of much-needed time to decarbonise the economy – but it is not going to be a magic bullet.
Global AI Narratives – AI and Communism
Global AI Narratives – AI and Communism 07-May-2021
07-May-2021 saw the 18th event in the Global AI Narratives (GAIN) series, co-organised by Cambridge University’s Leverhulme Centre for the Future of Intelligence (LCFI) – and the first one for me (as irony would have it, the entire series had slipped under my radar during the consecutive lockdowns). Missing the first 17 presentations was definitely a downer, but the one I finally tuned in for was *the* one: AI and Communism.
Being born and raised in a Communist (and subsequently post-Communist) country is a bit like music: you can talk about it all you want, but you can’t really know it unless you’ve experienced it (and I have). [Sidebar: as much as I respect everyone’s right to have an opinion and to voice it, I can’t help but cringe hearing Western-born 20- or 30-something-year-old proponents of Communism or Socialism who have never experienced a centrally planned economy, hyperinflation (or even good old-fashioned upper double-digit, lower triple-digit inflation), state-owned and controlled media, censorship, shortages of basic everyday goods, etc. etc. etc. I know that Capitalism is not exactly perfect, and maybe it’s time to come up with something better, but I don’t think many Eastern Europeans would willingly surrender their EU passports, freedom of movement, freedom of speech, etc. Then again, it *might* be different in the era of AI and Fully Automated Luxury Communism.]

Stanisław Lem in 1966
The thing about Communism (and I’m speaking from a limited, still-learning perspective) is that there was in fact much, much more to it than many people realise. We’re talking decades (how many exactly depends on the individual country) and hundreds of millions of people, so it is obviously a significant part of the history of the 20th century. The Iron Curtain held so tight that for many years the West either missed out altogether or had disproportionately low exposure to the culture, art, and science of the Eastern Bloc (basically everything that was not related to the Cold War). As the West was largely about competition (including competition for attention), there was limited demand for Communist exports, because there wasn’t much of a void to fill. That doesn’t mean there weren’t exciting ideas, philosophies, works of art, or technological inventions being created in the East.
The GAIN event focused on the fascinating intersection of philosophy, literature, and technology. It just so happens that one of the world’s most prolific Cold War-era thinkers on the future of technology and mankind in general was Polish. I’m referring to the late, great, there-will-never-be-anyone-like-him Stanisław Lem (who deserved the Nobel Prize in Literature like there was no tomorrow – one could even say more than some of the Polish recipients thereof). Lem was a great many things: a prolific writer whose works span a very wide spectrum of sci-fi (almost always with a deep philosophical or existential layer), satire disguised as sci-fi, and, lastly, philosophy and technology proper (it was in one of his essays from the late 1990s or early 2000s that I first read of the concept of the brain-computer interface (BCI); I don’t know to what extent BCI was Lem’s original idea, but he was certainly one of its pioneers and early advocates). He continued writing until his death in 2006.
One of Lem’s foremost accomplishments is definitely 1964’s Summa Technologiae, a nod to Thomas Aquinas’ Summa Theologiae written nearly seven centuries earlier (1268 – 1273). Summa discusses technology’s ability to change the course of human civilisation (as well as the civilisation itself) through cybernetics, evolution (genetic engineering), and space travel. Summa was the sole topic of one of the GAIN event’s presentations, delivered by Bogna Konior, an Assistant Arts Professor at the Interactive Media Arts department of NYU Shanghai. Konior took Lem’s masterpiece out of its philosophical and technological “container” and looked at it from the wider perspective of the Polish social and political system Lem lived in – a system that was highly suspicious of, and discouraging towards (if not openly hostile to), new ways of thinking. She finds Lem pushing back against the political status quo.
While Bogna Konior discussed one of the masterpieces of a venerated sci-fi giant, the next speaker, Jędrzej Niklas, presented something that may have sounded like sci-fi but was in fact very real (or at least was seriously planned to become real). Niklas told the story of Poland’s National Information System (Krajowy System Informatyczny, KSI) and a (brief) eruption of “technoenthusiasm” in early-1970s Poland. In a presentation that at times sounded more like alternative history than actual history, Niklas reminded us of some of the visionary ideas developed in Poland in the late 1960s / early 1970s. KSI was meant to be a lot of things:
- a central-control system for the economy and manufacturing (note that at the time the vast majority of Polish enterprises were state-owned);
- a system of public administration (population register, state budgeting / taxation, natural resources management, academic information index and search engine);
- an academic mainframe network;
- an “Info-highway” – a broad data network for enterprises and individuals, linking all major and mid-sized cities.
If some or all of the above sound familiar, it’s because they all became everyday use cases of the Internet. [Sidebar: while we don’t / can’t / won’t know for sure, there have been allegations that Polish ideas from the 1970s were duly noted in the West; whether they became an inspiration for what ultimately became the Internet, we will never know.]
While KSI ultimately turned out to be too ambitious and too intellectually threatening for the ruling Communist Party, it was not a purely academic exercise. The population-register part of KSI became the PESEL system (an equivalent of the US Social Security Number or the British National Insurance Number), which is still in use today, while all enterprises are indexed in the REGON register.
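As a small aside, the PESEL number itself is a neat piece of 1970s data design: 11 digits encoding date of birth, a serial part, sex, and a check digit computed from fixed weights. A minimal sketch of the publicly documented check-digit rule, purely for illustration:

```python
# PESEL check-digit validation (publicly documented scheme).
# An 11-digit PESEL encodes date of birth, a serial number, sex, and a check digit
# derived from the first 10 digits using fixed weights.

PESEL_WEIGHTS = (1, 3, 7, 9, 1, 3, 7, 9, 1, 3)

def is_valid_pesel(pesel: str) -> bool:
    """Return True if `pesel` is 11 digits long and its check digit matches."""
    if len(pesel) != 11 or not pesel.isdigit():
        return False
    weighted_sum = sum(int(d) * w for d, w in zip(pesel[:10], PESEL_WEIGHTS))
    return (10 - weighted_sum % 10) % 10 == int(pesel[10])

# A commonly cited test number (not a real person's ID):
print(is_valid_pesel("44051401359"))  # True
```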
And just like that, the GAIN / LCFI event made us all aware of how many ideas that have since materialised (or are likely to materialise in the foreseeable future) may not have originated exclusively in the Western domain. I’m Polish, so my interest and focus are understandably on Poland, but I’m sure the same can be said by people in other, non-Western parts of the world. While the GAIN / LCFI events have not been recorded in their entirety (which is a real shame), they will form part of the forthcoming book “Imagining AI: how the world sees intelligent machines” (Oxford University Press). It’s definitely one to add to the cart, if you ask me.
____________________________________
1. I don’t think any single work of Lem’s can be singled out as his ultimate masterpiece. His best-known work internationally is arguably Solaris – equal parts sci-fi and philosophy – which has had two cinematic adaptations (by Tarkovsky and by Soderbergh). Summa Technologiae is probably his most venerated work in technology circles, and possibly in philosophy circles as well. The Star Diaries are likely his ultimate satirical accomplishment, while Eden and Return from the Stars are regarded as his finest sci-fi works.