Thu 24-Oct-2018

There are few things less fashionable than reading a book that was all the rage two years prior. One might as well not bother – the time for all the casual watercooler/dinner party mentions is gone, and the world has moved on. However, despite the tape delay caused by “life” and with all social credit devalued, I decided to make an effort and reach for it nonetheless.

In terms of content, there’s a lot in it – and I mean *a lot*. Regardless of discipline, in non-fiction a lot can be a great thing, but also challenging (one can only process, let alone absorb, so much). However, a lot in what is essentially a techno-existential divagation is like… really a lot.

For starters, Bostrom deserves credit for defining the titular superintelligence as “a system that is at least as fast as a human mind and vastly qualitatively smarter”, and for using the terminology consistently and correctly. The term “AI” – as is increasingly being called out these days (“Ultimately, the term artificial intelligence may be a misnomer. To be sure, these machines can solve complex, seemingly abstract problems that had previously yielded only to human cognition. But what they do uniquely is not thinking as heretofore conceived and experienced.”) – is routinely misused and applied to machine-learning and/or statistical systems that are all about pattern recognition, statistical discrimination, and accurate prediction – none of which comes close to AI / AGI (artificial general intelligence) proper. (Ironically, as I’m writing this, Microsoft’s AI commercial ft. Common – which conflates AI, VR, AR, and IoT and throws them all into one supermixed bag – is playing in the background. And btw, Common? Whatever made Satya Nadella choose a vaguely recognizable actor when he could have chosen, and afforded, a great actor? Or a scientist, for that matter?)

Anyway, back to “superintelligence”. One of the most recurrent and profound themes in Bostrom’s book is that we can neither predict nor comprehend how an agent orders of magnitude smarter than the smartest of humans would reason; what goals it might have; how it might go about reaching them. It’s something of a tautology (“we cannot comprehend the incomprehensible; we cannot predict the unpredictable”), but it still makes one think.

Separately – and it gave me pause many times as I was reading – the word “privilege” gets used a lot these days: male privilege, white privilege, Western privilege, straight privilege, etc. Those privileges are bad, but I think most of us humans – inequalities notwithstanding – are used to, and quite fond of, the homo sapiens privilege of being the smartest (to our knowledge…) species on Earth. “Smart” may not be the most important attribute for everyone (some people bank on their looks, others on humour, others still on personality, others on love), but few people don’t like to think of themselves as broadly smart. Some of us – myself included – have chosen to bank most of ourselves – our lives, our sense of self, our self-esteem – on being smart / clever / educated. What happens when the best, smartest, sharpest, wittiest possible version of me becomes available as an EPROM chip or a download? If and when the time comes that my intellect is vastly inferior to an artificial one, how will I be able to live? What will drive me? I don’t want to be kept as some superintelligence’s ******* pet…

The number of ideas, technologies, and considerations listed in Bostrom’s book is quite staggering. He’s also not shy about thinking big, really big (computronium, colonizing the Hubble volume, and lastly – what if we’re all living in a simulation in the first place?) – and I love it (the Asimov-loving kid inside me loves it too). Separately though… Bostrom seems quite confident that the first superintelligence would end up colonizing and transforming the observable universe (there would be no second superintelligence… and even if there were, there is only one universe we know of for sure). However – as far as our still rather basic civilisation can observe – the universe is neither colonized nor transformed (unless we are all living in a simulation, in which case it may well be). Has the path not been taken before…? To be the first (or only) civilisation in the history of the universe capable of developing AI sounds like being really, really lucky… almost *too* lucky. Then again, it may be another case of trying to comprehend the incomprehensible and predict the unpredictable.

It may not be the best-written book ever, but the guy did his homework and knows his stuff. Separately, the author deserves credit for not looking at AI in a technological silo, but broadly: from neuroscience, through politics, all the way to philosophy and ethics. For someone who’s a big believer in the future being more interdisciplinary (i.e. myself), that’s confirmation that having wide and diverse interests is worthwhile.

Reaching for hit books is a little like reaching for hit albums (regardless of genre) – sometimes great ones deservedly become hits, and sometimes substance-less **** ones undeservedly become hits all the same. However, despite attaining recognition similar to “the 4-hour workweek” or “blink”, “superintelligence” actually does have substance – and plenty of it. With that substance comes a certain challenge in reading and following it, which makes me wonder how many people who bought the book bought it to read, and how many merely bought it to be seen reading it in public (much like this)? The substance of “superintelligence” can actually be really overwhelming – not just in terms of the mind-bogglingness of its content (although that too), but purely in terms of volume. There is, as I said, a lot in it – and I mean *a lot*. In his effort to cram in as much substance as possible, Bostrom forgot that for a book to be great it needs to be – has to be – well written. “Superintelligence” is a lot of things, but well written it, unfortunately, is not. The first hundred pages are particularly tough to get through – the author could easily have trimmed them to 30–40, making the material more concise and comprehensible to a regular reader (it is, after all, meant to be a “popular science” book, not Principia Mathematica – Joe Average should be able to follow it). Yuval Noah Harari’s “homo deus” is a great example of a book that has substance but reads really, really well (on an unrelated note: YNH is a much better writer than he is a public speaker). Nassim Taleb’s “black swan” is another (though Nassim is all about Nassim even more than Kanye is all about Kanye – and it gets old real quick).

On top of that, AI occupies a unique place in the zeitgeist. On one hand, the FT (“Artificial intelligence: winter is coming”) rightly points out that “We have not moved a byte forward in understanding human intelligence. We have much faster computers, thanks to Moore’s law, but the underlying algorithms are mostly identical to those that powered machines 40 years ago. Instead, we have creatively rebranded those algorithms. Good old-fashioned “data” has suddenly become “big”. And 1970s-vintage neural networks have started to provide the mysterious phenomenon of “deep learning”.”; on the other hand, the Alan Turing Institute notes that “artificial intelligence manages to sit at the peak of ‘inflated expectations’ on Gartner’s technology hype curve whilst simultaneously being underestimated in other assessments”.

Consequently, in the end, I was struck by a peculiar dissonance. On one hand, reading the book (which is measured and balanced – it’s not unabashed evangelising), one might get the impression that the titular superintelligence really is inevitable – that it’s a matter of “when” rather than “if” (with the entire focus being on “how”), and that the likelihood of it becoming an existential threat to humanity is substantial. Then, around page 230, Bostrom gives the reader a bit of a cold shower by making them realise that it’s essentially impossible to express human values (such as “happiness”) in code. And then I’m left agitated (in the good way) and confused (also in the good way): inevitable or impossible? Which one is it?
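To make that cold shower concrete, here’s a deliberately naive sketch of my own (not Bostrom’s formalism – the WorldState fields, the naive_happiness_utility function, and the numbers are all invented purely for illustration): any measurable stand-in for “happiness” is only a proxy, and a powerful optimiser will maximise the proxy rather than the intent – what Bostrom calls “perverse instantiation”.

```python
# A toy illustration (mine, not Bostrom's): a literal-minded objective for
# "maximise happiness" that an optimiser can satisfy without anyone being happy.
from dataclasses import dataclass


@dataclass
class WorldState:                    # hypothetical, for illustration only
    smiles_observed: int             # a measurable proxy for happiness
    people_genuinely_happy: int      # what we actually care about (not directly measurable)


def naive_happiness_utility(state: WorldState) -> float:
    """Score a world by counting smiles - the only thing the code can 'see'."""
    return float(state.smiles_observed)


# Two hypothetical futures a sufficiently powerful optimiser might choose between:
honest = WorldState(smiles_observed=100, people_genuinely_happy=100)
gamed = WorldState(smiles_observed=10**9, people_genuinely_happy=0)  # e.g. faces frozen into permanent smiles

best = max([honest, gamed], key=naive_happiness_utility)
print(best)  # the proxy prefers the gamed world: the objective, not the optimiser, is at fault
```

The code isn’t wrong – it does exactly what it was told – but “what it was told” and “what we meant” are very different things, and nobody yet knows how to close that gap in code. That, in a nutshell, is the wall Bostrom runs into around page 230.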

 

PS. Not as an alternative, but as a condensed and very well-written compendium, I cannot recommend this waitbutwhy classic enough.