United states of crypto crazy

 

Sat 02-Mar-2019

Some assorted reflections on cryptocurrencies a little more than a year since the peak (Dec-2017). Hindsight is always 20/20, but I never was on the crypto bandwagon (and there are timestamped records to prove it…), so I believe I have the right to a little bit of “told you so” smugness.

A number of friends and colleagues have in recent days mentioned the FCA cryptocurrency assets consultation paper, which made me reflect. That the FCA is on top of fintech developments is in itself great; regulators haven’t historically been known for being ahead of the curve, but in recent years there has been marked improvement (nb. the FCA isn’t the only regulator proactively looking into cryptocurrencies – regulators in many jurisdictions including the USA, Germany, France, China, Australia, Japan and the EU (ESMA) have published guidelines, consultation papers, or cautions pertaining to investing in coins and tokens).

Reading the FCA paper I recalled an article in Wired magazine (UK edition) published more or less exactly a year ago, at a time when bitcoin was only beginning the precipitous slide off its all-time peak of nearly USD 20,000 (reached in Dec-2017), all things crypto were still the hottest topic in fintech, utilities and services were about to become better and less centralized, and nothing could have possibly gone wrong. And there was plenty of money being thrown at crypto. PLEN-TY.

While the article was measured and not too hype’y, it still struck me as a little less critical than I’d expect from Wired. But that in itself is probably a reflection of the time it was written in: it was such a frenzied and insane period, even measured journalism would still reflect a little bit of that insanity, it had to (my favourite quote: “<<We had all the money we needed to build the software,>> block.one CEO Brendan Blumer told me. <<All the money that comes from the token sale will be block.one’s profit.>>” – I mean, that level of crazy puts the “AAA” CDO’s of the aughts to shame).

One thing that stood out factually in the article is that coins and tokens were referenced synonymously, while they shouldn’t be. I would never have picked up on it had it not been for a very useful session at Clifford Chance in Jun-2018, and the difference is useful to know: while the entire ecosystem is ultra-fluid (as you’d expect given that it’s entirely digital), coins are generally a medium of exchange native to a given chain and do not represent any claims or assets, while tokens tend to represent claims against the issuer or some sort of rights. So it’s really not the same thing, with tokens falling quite closely under the definition of a security.

What was symbolic to me in this “what a difference a year makes” story is that *the* crypto investor extraordinaire, Brock Pierce, featured in the Wired story, has since been the subject of a super-scathing exposé by John Oliver (part of an entire episode-length scathing exposé of crypto in general and bitcoin in particular), and the two ventures he’s been associated with are yet to revolutionise the world (I’m not saying they can’t or won’t, I’m saying that they haven’t as yet). Meanwhile the other, seemingly much more measured crypto venture from the same article, Dovu, appears to still be out there, but (see above) is yet to deliver anything I would want to use.

More broadly, one can’t help but notice that despite the hype, the interest, the obscene amounts of money, and genuinely innovative technology, there hasn’t been a genuine game-changing disruption use case as yet; JP Morgan and its blockchain-based cross-border payments project may be one exception, the Japanese project of improving the efficiency of the power grid may be another, but even those are still pilots / POC’s – definitely not verified success stories (at least not yet).


Royal Institution: What is Life? (Paul Davies)

06-Feb-2019

On 05-Feb-2019 the Royal Institution welcomed theoretical physicist/cosmologist/astrobiologist Paul Davies to give a lecture promoting his new book “The Demon in the Machine”.

The main themes of the lecture were “life as an informational process” and “how did life begin?”. We got a number of fascinating insights on the former, while for the latter… no, we still don’t know for sure. What was rather unique about the lecture was that instead of a monothematic first-person speech it felt more like an interdisciplinary discourse between some of the greatest minds in the history of modern science (Schrödinger, Crick, Maxwell, Shannon, Einstein), in which Prof. Davies was a contributor and a moderator. And I loved it.

The lecture, with an intro from the esteemed Prof. Jim Al-Khalili himself, started somewhat philosophically, with Prof. Davies opening with two question-statements which would set the tone for his entire lecture:

  • We can’t define what (exactly) life is
  • We don’t know how it began on Earth

Professor Davies started by referencing Erwin Schrödinger’s seminal “What is Life?” and Francis Crick’s “Life Itself: Its Origin and Nature” (“An honest man, armed with all the knowledge available to us now, could only state that in some sense, the origin of life appears at the moment to be almost a miracle, so many are the conditions which would have had to have been satisfied to get it going”) and his own work in the SETI (Search for Extraterrestrial Intelligence) programme, which struggles to find any life in the universe and thus (indirectly) answer the question: does life start easily? One of the current schools of thought posits that the universe should be teeming with life, and yet we have not found any evidence of it (Fermi’s Paradox). In the context of the search for extraterrestrial life, Prof. Davies remarked that science requires a “life-meter”, which would enable it to determine not just whether a given organism is living or not (something we can do fairly well today, at least in a terrestrial context), but also whether a given system (e.g. a planet or moon) is “on its way” to developing life, and if so, how far down the road it is.

Moving on to the main theme of his lecture, Prof. Davies remarked on the difference in the ways scientists define life: physicists talk about life in terms of matter, force, energy, reaction rates, molecular binding etc. (“hardware speak”), while biologists define it in terms of ribosomes, amino acids, genetic code, coding instructions in genes, transcription, gene editing, and otherwise use the language of information (“software speak”).

Continuing with the language of information, Prof. Davies discussed whether we can define what exactly information is and whether the biology of living systems (e.g. humans) could be mapped in the form of computer science logic. That, he said, could open up the possibility of “fixing” such issues as cancer, treating it the same way as corrupt software on a computer. He further discussed whether information as such is real, leaving the question open but concluding that scientists *do* consider information real. From this vantage point, Prof. Davies raised the question of how information (defined as bits and bytes) can have an impact on matter, which led him to Christoph Adami’s definition: “Information is the currency of life. One definition of information is the ability to make predictions with a likelihood better than chance”. He further referenced Claude Shannon’s seminal work, in which information was defined as a reduction of uncertainty.
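Shannon’s “reduction of uncertainty” can be made concrete in a few lines of Python – a toy illustration of my own, not anything from the lecture:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the average uncertainty of a distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin toss carries exactly 1 bit of uncertainty.
before = entropy([0.5, 0.5])

# After an observation that makes heads far more likely,
# the remaining uncertainty shrinks.
after = entropy([0.9, 0.1])

# Shannon's "information" is precisely that reduction in uncertainty.
information_gained = before - after
```

In Adami’s phrasing, the lopsided distribution is what lets you “make predictions with a likelihood better than chance” – and the bits gained measure exactly how much better.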

In the end, we didn’t get any closer to understanding how exactly life began, but some of the connections and observations Prof. Davies made between well-known and not-so-well-known theories and discoveries were truly fascinating and made for great food for thought, which is what popular science should be all about. His enthusiasm and energy shone through, while the ability to facilitate the one-man discourse among different branches of science (biology, physics, astrophysics, information theory) is – to me – what makes the difference between a good scientist and a great one.


The growing prominence of ESG in investments

 

Fri 23-Nov-2018

The first time I heard the abbreviation “ESG” was about a decade ago at Bloomberg. It was part of a test for a specialist role in the Analytics department (the <help><help> guys). I had no idea what it meant, which meant that for a little while longer I remained a generalist. I would say that my knowledge of ESG was fairly representative of financial services at the time.

Fast forward to the present day, and it’s practically a brave new world. The financial crisis is over, AI is coming (for our jobs…), and ESG has leapt from something you’d put on a glossy (non-recyclable…) annual statement as a “nice-to-have” to a “business-as-usual-goes-without-saying”.

 

While the term itself doesn’t have an unambiguous definition, most people (finance professionals and general public alike) seem to have a good, organic understanding thereof: broadly defined ethical, environmentally-friendly investing. The width of the spectrum will differ among individuals: some will exclude industrial animal farming, some will not; some will exclude tobacco and alcohol, some will not; many will exclude hydrocarbons; and everyone will exclude assault weapons or landmines.

Change begins with awareness. 30 years ago ecology was either unheard of entirely or – at best – considered a fad. Today most people have a level of environmental awareness. There have always been activists who tried to raise awareness of inconvenient truths, but it took the explosion of social media to democratize previously unwelcome content (with the obvious flipside being fake news) and give the general public the opportunity to educate themselves on the environment, sustainability, corporate governance, animal welfare etc.

The next step from awareness is action, which isn’t always easy. Consumers in the developed world are not taking very kindly to the idea of “do without” (author included). Instead, they (we…) want ever more stuff – but this time environmentally-friendly, sustainable, ethical stuff (the prodigy architect Bjarke Ingels called this philosophy “hedonistic sustainability”). With our natural-born, neoliberal, capitalist awareness, we – the consumers – know all too well that brands and corporates depend on us for their survival. It is therefore unsurprising that different shades of consumer activism erupted in recent years: we want the manufacturers of our trainers to pay their labour force in South-East Asia living wages; we want cobalt in our consumer electronics to come from conflict-free mines; we want our coffee to be Fairtrade and the milk we add to it to be organic. Alongside all this activism there is also naming and shaming: of corrupt defence contractors; of polluting coal mines; of clothing manufacturers ignoring health and safety of their seamstresses etc. etc.

Financial services (especially banks; asset managers have reputationally fared much, much better) are not always synonymous with high ethical conduct (and I’m being really charitable here…). The list of prosecuted cases, no contest settlements, and plain lack of ethics of the past decade alone will feature many of the largest players in the industry (some of them included on multiple counts). On the upside, the broad social/regulatory climate has also changed in recent years and (unabashed) greed is no longer good. One hopes that increasingly high costs of misconduct will turn out to be the best, the most effective nudge financial services could ask for.

There’s a well-known saying that no man is an island; likewise, no business is an island which can exist ignoring changes in its customers’ lifestyles, values, and preferences (at least not for very long). Consequently, financial services had to start taking notice of pressures, trends, and opportunities in the ESG space. On the banking side that comes down to good, old-fashioned lending and project finance; and when a wind or solar farm begins to look competitive (or even favourable) compared with another mine or oil rig, then financing is a matter of common business sense, with huge intangible benefits in the shape of PR, publicity, investor relations, etc. etc. On the investment side, certain assets can – over a relatively short period of time – become highly unfashionable. Big Tobacco was first, around the 1980s/1990s, followed in more recent years by high-profile cases of (albeit sometimes delayed, and not always full) gradual divestment from fossil fuels (the Norwegian sovereign wealth fund being the best-known example).

A unique problem of ESG is the difficulty of monitoring and scoring entities and investments, especially multinationals operating in multiple markets. Glock and Kalashnikov are fairly unambiguous, but what about EADS and Boeing with their defence and missile arms? Coca-Cola seems rather neutral in terms of its environmental or social impact, but what about the impact of corn (for corn syrup) plantations or its contribution to the obesity epidemic? Environmental and social impacts can at least be *somewhat* quantified, but what about governance? What defines good governance? Consistently beating quarterly EPS expectations? Low employee turnover? Paying high taxes? Avoiding high taxes? That problem isn’t new, it’s just becoming increasingly prominent – and with an investment decision being binary (you either invest in something or not; you can’t half- or three-quarters-invest) there is no immediate solution in sight. London Business School’s Associate Professor of Strategy and Entrepreneurship Ioannis Ioannou eloquently captured this conundrum during a recent sustainability event: “I’m an academic researching sustainability and governance, I can see my pension portfolio on my mobile, but I can’t see its ESG breakdown”.

There are competing analytics vendors in the purely commercial space, but there is also one alternative, more “grassroot’y” approach: B Corp. Awarded by the non-profit B Lab organization, B Corp certification (B standing for “beneficial”) is a quantitative (score-based) measure of a given company’s accountability, sustainability, and value added to society. As of 2018, it’s still a somewhat niche designation, but it’s highly recognised in ESG circles. It also carries a certain cachet which organisations increasingly see as an exclusive and prestigious differentiator. Furthermore, certification is inexpensive, which means a low barrier for small organisations and a reduced conflict of interest for large ones (they can’t be accused of buying a B Corp certification – with the cost being low, it’s more a matter of earning than buying it). Another interesting initiative is the Natural Capital Coalition, a business management framework taking into account both impacts and dependencies on nature. For a small organisation the NCC has been really successful at signing up large corporates such as Burberry, EY, Deloitte, Credit Suisse, (part of) the University of Cambridge, Nestlé or – somewhat unobviously – Royal Dutch Shell (in all fairness, it’s just implementation of the framework, which may or may not inform future actions, but still, it’s a promising start).

Still, despite certain “fuzzy logic” issues, many investment decisions can be made with a degree of confidence based on company profile, its core activities, its industry, and lastly its reputation. Overall business environment also seems to be moving, fairly quickly, towards increased adoption of ESG metrics and/or principles.

Earlier this year the European Commission released the first proposals for an EU-wide framework facilitating sustainable investment (with a proposal to develop a clear ESG taxonomy being an added bonus, and a proposal to link remuneration to sustainability targets a literal one), while the Bank of England issued a recommendation for climate risks to be factored into the broader credit risk framework. On the business side, there are almost daily developments: UBS Asset Management recently rolling out ESG data for all its funds (May-2018), Nutmeg (the UK’s largest robo-advisor) doing the same in Nov-2018, and fund giant BlackRock adding 6 UCITS funds to its growing family of sustainable iShares (Oct-2018).

The push for wider adoption of ESG investments and metrics is not going without some hurdles. A number of industry bodies (the Alternative Investment Management Association [AIMA], ICI Global, the European Fund and Asset Management Association) pushed back on the European Commission’s recommendations. The pushback focuses on competitiveness, demand, and the materiality and relevance of ESG disclosures. It’s a slightly peculiar situation where many firms openly advocate and push the ESG agenda while the trade bodies speaking on their behalf are much less enthusiastic. It may be that industry-wide consensus is not exactly here yet; it may also be that some asset managers feel that they have no choice but to be (officially) ESG advocates, while in private they do not necessarily share the sentiment quite as much. Secondly, there is no clear conclusion as to whether ESG funds outperform, underperform, or perform at par with their non-ESG counterparts; there simply isn’t enough historical data to make a conclusive and statistically meaningful determination.

My little foray into ESG ended on an unexpectedly profound and emotional note. London’s Science Museum (alongside the Royal Institution and Patisserie Valerie one of my happy places) held a special one-off screening of Anote’s Ark, a documentary chronicling its titular character (Anote Tong, then-president of Kiribati) crisscrossing the globe and walking the corridors of power looking for practical solutions for Kiribati and its 110,000 nationals as their small island state is gradually submerged and deprived of fresh water by not-so-gradually rising ocean levels. Seeing this spectacularly beautiful, benign and extremely vulnerable island paradise – and, more importantly, a home to its inhabitants – literally disappearing underwater was profoundly upsetting and put all things ESG in a completely different perspective.


Royal Society - Kate Crawford: You and AI: machine learning and bias (aka Just an engineer: the politics of AI)

Royal Society, Tue 17-Jul-2018

The lecture was one of eight in the broad “You and AI” series organised by the Royal Society (and, sadly, the only one I attended). This particular lecture was presented by an AI grandee: Microsoft’s Principal Researcher and NYU’s Distinguished Research Professor Kate Crawford.

The lecture was hosted by Microsoft’s Chris Bishop, who himself gave a very interesting and interactive talk on AI and machine learning at the Royal Institution back in 2016 (it is also posted on RI’s YouTube channel here).

What made Prof. Crawford’s presentation stand out among many others (AI is a hot topic, and there are multiple events and meetups on it literally every day) was that it didn’t just focus on the technology in isolation, but right from the outset framed it within the broader context of ethics, politics, biases, and accountability. One might think there’s nothing particularly original about that, but there is. Much of the conversation to date has focused on the technology in a sort of clinical sense, rarely referencing the broader context (with occasional exceptions for concerns about the future of employment, which is understandable, given it’s one of the top white collar anxieties in this day and age). I think that in talking about the dangers of being “seduced by the potential of AI”, she really hit the nail on the head.

The lecture started with Prof. Crawford attempting to define AI, which makes more sense than it might seem: just like art, AI means different things to different people:

  • Technical approaches. She rightly pointed out that what is commonly referred to as AI is a mix of a number of technologies such as machine learning, pattern recognition, optimization (echoing a very similar mention by UNSW’s Prof. Toby Walsh in one of the New Scientist Instant Expert workshops in London last year; also – of literally *all* the people – Henry Kissinger (*the* Henry Kissinger) made the same observation not long ago in his Atlantic article: “ultimately, the term artificial intelligence may be a misnomer. To be sure, these machines can solve complex, seemingly abstract problems that had previously yielded only to human cognition. But what they do uniquely is not thinking as heretofore conceived and experienced.”). She also restated what many enthusiasts seem to overlook: AI techniques are in no way intelligent the way humans are intelligent. The use of “I” in “AI” may be somewhat misleading.
  • Social practices: the decision makers in the field and how their backgrounds, views and biases shape the applications of AI in the society
  • Industrial infrastructure. Using a basic Alexa interaction as an example, Prof. Crawford stated that – contrary to popular belief in a thriving AI start-up scene – only a few tech giants have the resources to provide and maintain it.

She took AI outside of its purely technical capacity and made very clear exactly how political – and potentially politicised – AI can be, and that we don’t think enough about the social context and the ethics of AI. She illustrated that point by pulling multiple data sets used to train machine learning algorithms and making very clear how supposedly benign and neutral data is in fact not representative and highly skewed (along the “usual suspect” lines of race, sex, sexism, representativeness, societal / gender roles).

Prof. Crawford talked about the dangers of putting distance between AI engineering and the true human cost of its consequences, including biases reinforced by poor-quality training data, against the backdrop of the ongoing resurgence of totalitarian views and movements. On a more positive note, she mentioned the emergence of a “new industry” of fairness in machine learning – at the same time asking who will actually define fairness and equality, and how. She discussed three approaches she had herself researched (improving accuracy, scrubbing to neutral, mirroring demographics), pointing out that we still need to determine what exactly we define as neutral and / or representative.

She mentioned feedback loops more than once, in the sense of reinforcing stereotypes in the real world, with systems and algorithms becoming more inscrutable (“black box”) and disguising very real political and social considerations and implications as purely technical, which they aren’t. She quoted Brian Brackeen, the (African-American) CEO of the US facial recognition company Kairos, who stated that his company would not sell its systems to law enforcement as they are “morally corrupt” (you can read his op-ed in its entirety on TechCrunch).

As a regular on the London tech / popular science events scene, I found it very interesting to pick out some very current and very relevant references to speeches given by other esteemed speakers (whether these references were witting or unwitting, I don’t know). In one of them Prof. Crawford addressed the fresh topic of “geopolitics of AI”, which was introduced and covered in detail by Evgeny Morozov in his Nesta Future Fest presentation (titled “the geopolitics of AI”… – you can watch it here); in another she mentioned the importance of ongoing conversation at the same time as Mariana Mazzucato talks about the hijacking of economic narrative(s) by vested corporate and political interests in her new book (“the value of everything”) and accompanying event at the British Library (which you can watch here). Lastly, Crawford’s open questions (which, in my view, could have been raised a little more prominently) about the use of black box algorithms without a clear path of appeal in the broadly defined criminal justice system resonated with the research of Prof. Lilian Edwards of Strathclyde University into the legal aspects of robotics and AI.

On the iconoclastic side, I found it unwittingly ironic that a presentation about democratising AI (both the technology and the debate around it) and the concerns of crowding out smaller players by acquisitive FAANG’s was delivered by a Microsoft employee at an event series hosted by Google.

You can watch the entire presentation (67 minutes) here. For those interested in Royal Society’s report on machine learning (referenced in the opening speech by Prof. Bishop), you can find it here.

 


Nick Bostrom "superintelligence" book review

 

Thu 24-Oct-2018

There are few things less fashionable than reading a book that was all the rage 2 years prior. One might as well not bother – the time for all the casual watercooler/dinner party mentions is gone, and the world has moved on. However, despite the tape delay caused by “life” and with all social credit devalued, I decided to make an effort and reach for it nonetheless.

In terms of content, there’s a lot in it – and I mean *a lot*. Regardless of discipline, in non-fiction a lot can be a great thing, but also challenging (one can only process, let alone absorb, so much). However, a lot in what is essentially a techno-existential divagation is like… really a lot.

For starters, Bostrom deserves credit for defining the titular superintelligence as “a system that is at least as fast as a human mind and vastly qualitatively smarter”, and for consistently using the correct terminology. The term “AI” – as is increasingly called out these days (“Ultimately, the term artificial intelligence may be a misnomer. To be sure, these machines can solve complex, seemingly abstract problems that had previously yielded only to human cognition. But what they do uniquely is not thinking as heretofore conceived and experienced.”) – is routinely misused and applied to machine-learning and / or statistical systems which are all about pattern recognition, statistical discrimination, and accurate predictions; none of which are close to AI / AGI (artificial general intelligence) proper (ironically, as I’m writing this Microsoft’s AI commercial ft. Common – which conflates AI, VR, AR, and IoT and throws them all into one supermixed bag – is playing in the background; and btw, Common? Whatever made Satya Nadella choose a vaguely recognizable actor when he could choose and afford a great actor? Or a scientist for that matter?).

Anyway, back to “superintelligence”. One of the most repetitive and profound themes in Bostrom’s book is that we cannot predict or comprehend how an agent orders of magnitude smarter than the smartest of humans will reason; what goals it might have; how it might go about reaching them. It’s kind of a tautology (“we cannot comprehend the incomprehensible; we cannot predict the unpredictable”), but it still makes one think.

Separately – and it gave me pause many times as I was reading it – the word “privilege” is used a lot these days: male privilege, white privilege, Western privilege, straight privilege etc. etc. These privileges are bad, but I think most of us humans – inequalities notwithstanding – are used to and quite fond of the homo sapiens privilege of being the smartest (to our knowledge…) species on Earth. “Smart” may not be the most important attribute for everyone (some people bank on their looks, others on humour, others still on personality, others on love), but I think few people don’t like to think of themselves as broadly smart. Some – myself included – chose to bank most of ourselves, our lives, our sense of self, our self-esteem on being smart / clever / educated. What happens when the best, smartest, sharpest, wittiest possible version of me becomes available as an EPROM chip or a download? If ever / whenever the time comes when my intellect becomes vastly inferior to an artificial one, how will I be able to live? What will drive me? I don’t want to be kept as some superintelligence’s ******* pet…

The amount of ideas, technologies, and considerations listed in Bostrom’s book is quite staggering. He’s also not shy to think big, really big (computronium, colonizing the Hubble volume, lastly – what if we’re all living in a simulation in the first place?) – and I love it (the Asimov-loving kid inside me loves it too). Separately though… Bostrom seems to be quite confident that the first superintelligence would end up colonizing and transforming the observable Universe (there would be no 2nd superintelligence… and even if there were, there is only one Universe we know of for sure). However – as far as our still rather basic civilisation can observe – the universe is neither colonized nor transformed (unless we are all living in a simulation, in which case it can be). Has the path not been taken before…? To be the first (or only) civilisation in the history of the universe capable of developing AI sounds like being really, really lucky… *too* lucky almost. Then again, it may be a case of trying to comprehend the incomprehensible and predict the unpredictable.

It may not be the best written book ever, but the guy did his homework and knows his stuff. Separately, the author deserves credit for not looking at AI in a technological silo, but broadly: from neuroscience, through politics, all the way to philosophy and ethics. For someone who’s a big believer in the future being more interdisciplinary (i.e. myself), that’s confirmation that having wide and diverse interests is worthwhile.

Reaching for hit books is a little bit like reaching for hit albums (regardless of the genre) – sometimes great ones deservedly become hits, and sometimes substance-less **** ones undeservedly become hits all the same. However, despite attaining recognition similar to “the 4-hour workweek” or “blink”, “superintelligence” does actually have substance – and plenty of it. Along with substance comes a certain challenge in reading and following, which makes me wonder how many people who bought the book bought it to read it, and how many merely bought it to be seen reading it in public (much like this). The substance of “superintelligence” can actually be really overwhelming – not just in terms of the mind-boggliness of its content (although that too), but purely in terms of volume. In his efforts to cram in as much substance as possible, Bostrom forgot that in order for a book to be great it needs to be – has to be – well written. “Superintelligence” is a lot of things, but well-written it, unfortunately, is not. The first hundred pages are particularly tough to get through – the author could have easily trimmed them to 30 – 40, making the material more concise and comprehensible to a regular reader (it is, after all, meant to be a “popular science” book, not Principia Mathematica – Joe Average should be able to follow it). Yuval Noah Harari’s “homo deus” is a great example of a book that’s got substance but reads really, really well (on an unrelated note: YNH is a much better auteur than he is a public speaker). Nassim Taleb’s “black swan” is another (though Nassim is all about Nassim more than Kanye is all about Kanye – it does get old real quick).

On top of that, AI occupies a unique place in the zeitgeist. On one hand, the FT (“Artificial intelligence: winter is coming”) rightly points out that “We have not moved a byte forward in understanding human intelligence. We have much faster computers, thanks to Moore’s law, but the underlying algorithms are mostly identical to those that powered machines 40 years ago. Instead, we have creatively rebranded those algorithms. Good old-fashioned “data” has suddenly become “big”. And 1970s-vintage neural networks have started to provide the mysterious phenomenon of “deep learning”.”; on the other, The Alan Turing Institute notes that “artificial intelligence manages to sit at the peak of ‘inflated expectations’ on Gartner’s technology hype curve whilst simultaneously being underestimated in other assessments”.

Consequently, in the end, I was struck by a peculiar dissonance: on one hand, reading the book (which is measured and balanced, it’s not unabashed evangelising) one might get the impression that the titular superintelligence really is inevitable – that it’s a matter of “when” rather than “if” (with the entire focus being on “how”), and that the likelihood of this becoming an existential threat to humanity is substantial. Then, around page 230, Bostrom gives the readers a bit of a cold shower by making them realise that it’s essentially impossible to express human values (such as “happiness”) in code. And then I’m left agitated (the good way) and confused (also the good way): inevitable or impossible? Which one is it?

 

PS. Not as an alternative, but as a condensed and very well written compendium, I cannot recommend this waitbutwhy classic enough.

 


Royal Institution: How to Change Your Mind

Mon 11-Jun-2018

The innocent and perhaps slightly dull title hid one of the more exciting RI events in recent weeks: acclaimed author, "immersive" journalist ("when I was researching the food industry, I bought a cow"), and UC Berkeley professor Michael Pollan talked about his most recent book (carrying an innocent and perhaps slightly dull title, "how to change your mind"…) about psychedelic substances, in particular psilocybin and LSD.

Psychedelics are currently enjoying a true (academic) renaissance. After half a century of complete obscurity (not to mention criminalisation), they are resurfacing in respected universities’ research facilities (subject to unbelievable admission rigours and control protocols, not to mention procurement), and receiving strong support from foremost academics as potential game-changers in psychiatry as well as cognitive performance enhancers. Ongoing legalisation of cannabis and shifting public perception of drugs’ harmfulness may be the catalysts.

Among the world-class academics leading the debate and the research are Prof. David Nutt of Imperial College London, who published a now-famous (re)classification of drugs based on their harmfulness in 2009, and was sacked by the Labour (sic!) government shortly thereafter, and Prof. Barbara Sahakian of Cambridge University, who – among many other things – is researching performance enhancement and “smart drugs” (you can buy her book on the topic here).

With mental illness spiralling, the prospect of a single (supervised) application of LSD or psilocybin generating results comparable to *years* of traditional therapy was probably what interested the scientific community (you can watch Prof. Nutt's presentation given at UCL's Society for the Application of Psychedelics event in Nov-2017 here, it's fascinating – you can hear me asking his views on legalisation towards the end). Silicon Valley – in its true "quantified self", entrepreneurial, and somewhat quirky style – responded to "performance enhancement" (the concept of performance-enhancing "microdosing" was brought to the public's attention in a famous Sep-2016 Wired UK article you can read here; another well-known feature is the Aug-2017 article in the FT you can read here).

Against this backdrop, any presentation on LSD and / or psilocybin is pretty much a guaranteed full house*, and this one was no exception. There were unusually many first-time attendees (RI did a quick show of hands), many of whom were friendly, Big Lebowski-style dudes with very long hair and an aversion to footwear. The event was MC'd by Dr. Robin Carhart-Harris, one of the rising stars of the younger generation of psychedelic researchers (you can watch him give his own presentation at UCL's Society for the Application of Psychedelics event in Feb-2018 (a sequel to Prof. Nutt's event) here).

Michael Pollan proved to be an experienced, engaging, and charismatic speaker, which definitely contributed to the quality of the event. In his presentation he talked about his (informal, and, by his own admission, at times not entirely legal) research and experiences of psychedelic substances, as well as experiences and views of other people who – for very different reasons – tried psychedelics.

The focus of his talk was mental health and related experiences, not performance enhancement, which made it a little more profound (particularly the part about terminal patients confronting their fear of imminent death) than it would have been otherwise. He stated, matter-of-factly, that the single guided LSD experience he had under the care of an experienced therapist (who takes huge personal risks by running such clandestine sessions) did for him what conventional therapy would have taken years to accomplish.

Another interesting point Michael raised was the deep sense of closeness and togetherness with nature that results from psychedelic experiences. I don't recall this argument appearing in the public discourse – usually it's just depression and performance enhancement – which made it all the more interesting.

The event went well into overtime with extended Q&A (you can hear me asking his views on legalisation towards the end) and will hopefully be uploaded to Royal Institution’s YouTube channel.

You can see upcoming RI events here, and make a small contribution here.


Royal Institution Patron’s Night: ExpeRience: A space odyssey

Thu 07-Jun-2018

June Patron’s Night event was dedicated to the legacy and themes raised in the cinematic masterpiece 2001: A Space Odyssey.

The movie, due to its many exceptional qualities (stunning in-camera visuals, aesthetics, striking technological predictions (see the astronauts using tablet computers in a movie from 1968), the rogue AI, and lastly its philosophical layer), has never been entirely out of the popular discourse, but of late it's enjoying an additional resurgence due to that rogue AI, HAL 9000, appearing in virtually every conversation about AI in general, AI ethics, and AI threats.

The event was focused on some of the other themes of the movie, such as space exploration in general, space robotics, hibernation and cryonics, and the psychological aspects of space travel. RI always has great speakers, and this time was no exception.

The panel included:

  • Marek Kukula, Public Astronomer at the Royal Observatory Greenwich, as the discussion host and moderator
  • Calum Hervieu, space systems engineer with a focus on lunar exploration mission architectures and crew engineer for a Mars mission simulation in Hawaii.
  • Professor Yang Gao, Associate Dean for Faculty of Engineering and Physical Sciences (FEPS) and the Professor of Space Autonomous Systems at Surrey Space Centre (SSC) and the Director of EPSRC/UKSA National Hub on Future AI & Robotics for Space (FAIR-SPACE).
  • João Pedro de Magalhães, Institute of Ageing and Chronic Disease, University of Liverpool and coordinator of the UK Cryonics and Cryopreservation Research Network (http://www.cryonics-research.org.uk/), which aims to advance research in cryopreservation and its applications.
  • Iya Whiteley, Director of the Centre for Space Medicine at Mullard Space Science Laboratory, UCL. She is a Space Psychologist, who worked on developing Tim Peake’s astronaut selection training programmes.

If the academic cred of the guests wasn't enough, Iya's hobby is competitive skydiving, João is an amateur stand-up comedian, and Calum recreationally runs half-marathons. Try to compete with that.

The event paid tribute to Stanley Kubrick's masterpiece and used it as a starting point for the speakers' individual short presentations. Rogue AI wasn't one of them, and that's a good thing, because there were enough interesting topics to discuss without it. Calum compared the scope and difficulty of lunar missions with a potential Mars mission, making it very clear how much more challenging the latter will be (if it ever happens…), and showed some concepts for a human habitat on Mars (kudos to Foster+Partners for thinking about extraterrestrial markets so early in the game). Iya talked about the unique and numerous psychological challenges and considerations of a long-term space mission (here defined as going to Mars and back – a couple of years altogether, including a minimum of 220 days in the spacecraft each way), some of which are difficult for ordinary people to relate to – e.g. the lack of the sensation of wind or slightly varying temperature on one's skin. Yang talked about the UK's ambition of becoming a leading player in space robotics, and the drive to enthuse the youngest generations about space exploration and robotics. João talked about hibernation and cryonics as they occur in some species of mammals and amphibians, and about ongoing research into both hibernation and cryonics in humans (mentioning Alcor, the cryonics company which first appeared on my radar in the unforgettable "Future Fantastic" series hosted by Gillian Anderson many moons ago).

Presentations were followed by the usual Q&A, which was then followed by hands-on experiences, which aren't usually part of RI events. Attendees could see the impact of micrometeorites on spacecraft, experience firsthand (literally) different rocket fuels, see what would happen to an unprotected human in the vacuum of space, or try the zero-G simulation chair. My favourite was the rocket fuels, presented by a charming and beautiful robot lady.

Loved it!


Nesta FutureFest: Will my job exist in 2030?

Fri 06-Jul-2018

Hosted by Nesta's Eliza Easton and Jed Cinnamon, the "will my job exist in 2030?" session took place at high noon on what was probably the hottest day of the year, in a modest-sized auditorium under a glass roof.

To say that it was hot would be an understatement: it was boiling, it was a sauna, it was Dubai. And still, the auditorium was *packed*. I know why I braved it, and I suspect everyone else there braved it for the same reason: as white-collar professionals we are mortified by the prospect of being rendered obsolete by an algo, and we want to find out:

  • How likely it is exactly
  • What (if anything) we can do about it

While the presentation echoed Nesta’s report bearing the same title, it was entirely self-contained and, as far as I could tell, told from a slightly different angle than the report.

Part of the presentation was delivered from the perspective of educating and training today's middle- and high-school students (so only tangentially relevant to grown-ups), though the skills and competencies listed as enhancing the employability of modern-day teenagers were definitely good to know for everyone (along the lines of: do I have this? Could I plausibly argue that I have this? What can I still do in order to have this? How can I rephrase my resume in order to say I have this?). The list included:

  • Judgement
  • Decision making
  • Complex problem solving
  • Fluidity of ideas
  • Collaborative problem solving
  • Creative problem solving
  • Resilience
  • Critical thinking

I think that list alone made attending the event worthwhile, because the skills and competencies listed above are indeed in the Venn-diagram sweet spot where the analytical, creative, interpersonal and imaginative overlap; in short, those (at least in 2018) appear to be the abilities most difficult to automate.

Separate consideration was given to interdisciplinarity. The hosts mentioned how important it is to hone creative and artistic skills alongside a modern STEM curriculum, as well as to remove barriers between different, previously silo'ed disciplines (Finland's success was used as an example). This point resonated with me quite strongly, because only a few years ago my own interdisciplinary approach towards career development (defined as pursuing a relatively wide variety of roles based on how interesting they appeared rather than how closely they aligned with my direct experience) earned rather limited support and understanding from my London Business School colleagues, most of whom had spent their professional careers within one specialism (equity sales, credit derivs, etc.) and were all about climbing the ranks of higher and better-paid positions within their respective specialisms.

There was a somewhat fresh (and refreshingly sane) angle on the nemesis-du-jour topic of automation. While the authors echoed the general ennui about many jobs currently done by humans being automated away, they thought their conclusions through more thoroughly than the prevailing "unskilled jobs will all go, and many skilled jobs as well" narrative, and pointed out that some of the lower-skilled jobs are not only relatively safe, but also likely to grow (the examples listed were agriculture and construction; I would put all types of non-specialised carer jobs in the same category – basically jobs requiring body mobility and dexterity and / or emotional connection).

Another original observation was that while certain occupations and job titles show continuity, the day-to-day work itself – and the skillset it requires – may have very little in common over the years (the example given was the typesetter: a modern-day InDesign user vs. a heavy-machinery operator from a couple of decades ago; finance automatically comes to mind, where many roles now have much more to do with technology than with any "pure" finance).

The final point of relevance to me was raised during the Q&A (by myself…): given my personal experiences with some employers being more open towards the concept of lifetime education than others (with a few being actively hostile), I wanted to know whether businesses are beginning to recognise the value of ongoing education, learning and training of their staff, as opposed to the entrenched view of seeing any upskilling as a distraction and / or a threat (basically a variation on: "they don't need this for their current job… they will want more money and leave"). I was quite happy to hear that there is indeed a certain pivot happening (albeit not very fast) as we speak, and that businesses are beginning to see the (commercial) added value of their employees gaining more skills.


Nesta FutureFest 2018

Fri 06 – Sat 07-Jul-2018

The first weekend of July saw the 4th edition of Nesta FutureFest. For those of you who don't know, Nesta is the UK's innovation foundation (one of very few public bodies I feel isn't wasting my taxpayer contributions), and FutureFest is its sorta-annual (taking place about once every 18 months) two-day festival.

The event consists of talks on a number of stages + smaller accompanying events and presentations. The closest reference point to FutureFest I can think of is New Scientist Live, but NS Live is mostly “pure” science: whether it’s quantum physics, CERN, DNA memory storage, or nuclear fusion, it’s mostly just talks on some exciting discoveries, ideas, and inventions. It is thought-provoking but largely uncontroversial and apolitical.

FutureFest is a little different. While the focus is firmly on the future (as the name would suggest), it is broader than science and technology and also considers social, urban, artistic, and even philosophical and religious angles. This holistic scope makes the event really unique.

The 2018 event was held at Tobacco Dock in London over one of the hottest weekends of the year, which, given that most of the building is covered with a glass roof, made attending some of the jam-packed talks (and the vast majority of them were jam-packed…) something of a challenge – but trust me, it was well worth it. Plus the venue itself is really nice (I believe some of Wired magazine's events are held there as well).

The Nesta team did a brilliant job booking diverse headline speakers to deliver talks across a very broad spectrum of subjects I'd describe as not just "the future", but more so "us humans and our future".

Just to give you a little taste of the diversity of the 2018 talks, here are titles of the ones I attended:

  • The geopolitics of AI
  • Will my job exist in 2030?
  • 2027: When the post-work era begins
  • Digital workers of the world, unite?
  • How blockchain can, literally, save the world
  • The tangle of mind and matter
  • Future humans: augmented selves
  • Let there be bytes

(I'll expand on a couple of these in greater detail in separate posts.)

In parallel, there was a "meet the author" stage, where I resisted buying even more books I may never get a chance to read (which I regretted afterwards), and where I finally managed to talk to my idol Dr. Julia Shaw, and caught up with the inimitable and very, very candid Ruby Wax.

What struck me as surprising was that of the 2 days, Friday was definitely busier and more packed with attendees. I mean… I had to take a day off work to attend, was everyone else there on business? Or was it the World Cup quarterfinal match (England vs. Sweden – we won) on Saturday? In any case, I showed up on both days and had an absolute blast. It was intense and towards Saturday evening my brain was definitely overflowing, but it was absolutely worth it. I can’t wait for the next one.


Mariana Mazzucato: the value of everything

British Library, 09-Jul-2018

In 2018 Mariana Mazzucato is a brand name. I first came across her in a short article in Wired's Ideas Bank, which first made me aware that everything I thought I knew about entrepreneurship and innovation was a fallacy, sold to me by the corporate sector's PR and a general "hijacking of the narrative".

The article may have been short, but Mazzucato's point was huge. Then there was the Lunch with the FT, and then the British Library event. Mazzucato's "the value of everything" lecture was part of a series organised by the British Library and UCL, and was linked to the release of her new book (also titled "the value of everything"), which analyses how (and why) modern economies reward value extraction and rent-seeking rather than genuine value creation.

And let me tell you one thing: did she deliver. With passion, confidence, and charisma, Mazzucato was one of a handful of speakers whose presentations I attended in recent weeks who proved that not only *what* you say, but also *how* you say it really, really counts (the others were Ruby Wax, Izabella Kaminska and Eugenia Cheng).

The quote the presentation revolved around comes from Big Bill Haywood, the trade unionist who co-founded the Industrial Workers of the World: "The barbarous gold barons – they did not find the gold, they did not mine the gold, they did not mill the gold, but by some weird alchemy all the gold belonged to them".

What I found very interesting was that rather than focus on purely economic arguments (which could have led the discussion to turn academic, niche, and otherwise boring), Mazzucato – provocatively, even somewhat eccentrically – stated how important storytelling and narratives are in this discussion, and how important it is to contest what is told to the public as the official version of events (Mazzucato quoted Plato's "storytellers rule the world"; by contrast I feel tempted to quote Reagan's "government is not the solution to our problem; government *is* the problem"; to be clear, I'm siding with Plato).

The financial services sector took the brunt of Mazzucato's criticism, with modern politics a close second. The criticism of financial services ran along the standard lines of productivity and adding value, while modern-day liberal politicians and the public sector as a whole were criticised for not countering the neoliberal, neoclassical corporate narrative. Pharma came third, for rigging the prices of medicines.

In terms of innovative thoughts and ideas, I appreciated Mazzucato joining the growing ranks of public figures who fight to have user-generated data recognised as something that has value in the economic sense, and therefore something its creators should be compensated for (e.g. in the form of UBI).

On the more iconoclastic side, I was in equal measure surprised and happy to hear Mazzucato predicting a boom (and bubble) in cleantech. I'm all for it. I could not be happier to hear it. That's one bubble I support in full.

One thing I missed throughout the lecture was Mazzucato clearly defining "value". She pointed out many dysfunctional aspects of the modern economy (casino banking, rigging medicine prices) and constantly referred to what does and doesn't contribute to value, but not once did she actually define it. The question came up in the Q&A, and Mazzucato defined value fairly loosely as public-purpose- / mission-driven actions, such as, for example, cleaning up the oceans. She also said that "value is created collectively", which stands in contrast to the prevailing individualist approach to entrepreneurship and life in general – so that's definitely food for thought.

You can watch the entire lecture (87 minutes) on YouTube.