Royal Society, Tue 17-Jul-2018

The lecture was one of eight in the broad “You and AI” series organised by the Royal Society (and, sadly, the only one I attended). This particular lecture was presented by an AI grandee: Microsoft’s Principal Researcher and NYU’s Distinguished Research Professor Kate Crawford.

The lecture was hosted by Microsoft’s Chris Bishop, who himself gave a very interesting and interactive talk on AI and machine learning at the Royal Institution back in 2016 (it is also posted on the RI’s YouTube channel here).

What made Prof. Crawford’s presentation stand out among many others (AI is a hot topic, and there are multiple events and meetups on it literally every day) was that it didn’t just focus on the technology in isolation, but right from the outset framed it within the broader context of ethics, politics, biases, and accountability. One might think there’s nothing particularly original about that, but there is. Much of the conversation to date has focused on the technology in a somewhat clinical sense, rarely referencing the broader context (with occasional exceptions for concerns about the future of employment, which is understandable, given it’s one of the top white-collar anxieties of this day and age). I think that in talking about the dangers of being “seduced by the potential of AI”, she really hit the nail on the head.

The lecture started with Prof. Crawford attempting to define AI, which makes more sense than it might seem: just like art, AI means different things to different people:

  • Technical approaches. She rightly pointed out that what is commonly referred to as AI is in fact a mix of technologies such as machine learning, pattern recognition, and optimization (echoing a very similar point made by UNSW’s Prof. Toby Walsh in one of the New Scientist Instant Expert workshops in London last year; also – of literally *all* the people – Henry Kissinger (*the* Henry Kissinger) made the same observation not long ago in his Atlantic article: “ultimately, the term artificial intelligence may be a misnomer. To be sure, these machines can solve complex, seemingly abstract problems that had previously yielded only to human cognition. But what they do uniquely is not thinking as heretofore conceived and experienced.”). She also restated what many enthusiasts seem to overlook: AI techniques are in no way intelligent the way humans are intelligent, and the “I” in “AI” may be somewhat misleading.
  • Social practices: the decision makers in the field, and how their backgrounds, views, and biases shape the applications of AI in society.
  • Industrial infrastructure. Using a basic Alexa interaction as an example, Prof. Crawford argued that – contrary to the popular belief in a thriving AI start-up scene – only a few tech giants have the resources to provide and maintain that infrastructure.

She took AI outside of its purely technical capacity and made very clear exactly how political – and potentially politicised – AI can be, and how little we think about its social context and ethics. She illustrated that point by pulling up multiple data sets used to train machine learning algorithms and showing how supposedly benign and neutral data is in fact unrepresentative and highly skewed (along the “usual suspect” lines of race, sex, and societal / gender roles).
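As a toy illustration of the kind of audit this implies (not anything Crawford showed – the records, group labels, and reference shares below are invented for the example), one can compare a training set’s group proportions against a chosen reference distribution:

```python
from collections import Counter

# Hypothetical group annotations for a training set; a real audit would
# read the actual dataset's metadata.
training_groups = ["male", "male", "male", "female", "male",
                   "male", "female", "male", "male", "male"]

# Reference distribution the data would have if it mirrored the population
# (itself a contestable choice, which was part of Crawford's point).
reference = {"male": 0.5, "female": 0.5}

counts = Counter(training_groups)
total = sum(counts.values())

for group, expected in reference.items():
    observed = counts.get(group, 0) / total
    print(f"{group}: observed {observed:.0%}, expected {expected:.0%}, "
          f"skew {observed - expected:+.0%}")
```

Even this trivial check surfaces the problem: the toy set above comes out 80 / 20 where the reference says 50 / 50 – and the harder question, as Crawford stressed, is who picks the reference.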

Prof. Crawford talked about the dangers of putting distance between AI engineering and the true human cost of its consequences, including biases reinforced by poor-quality training data, against the backdrop of the ongoing resurgence of totalitarian views and movements. On a more positive note, she mentioned the emergence of a “new industry” of fairness in machine learning – while asking who will actually get to define fairness and equality, and how. She discussed three approaches she had herself researched (improving accuracy, scrubbing to neutral, mirroring demographics), pointing out that we still need to determine what exactly we mean by neutral and / or representative.
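For the record, here is a minimal sketch of what the third approach, “mirroring demographics”, might look like in practice – resampling a skewed dataset so group proportions match a chosen target distribution. The dataset, group labels, and target shares are invented for illustration; this is not Crawford’s own implementation:

```python
import random
from collections import Counter

random.seed(0)

# Invented, deliberately skewed dataset: 80 records from group "A",
# 20 from group "B".
dataset = ([{"group": "A", "x": i} for i in range(80)] +
           [{"group": "B", "x": i} for i in range(20)])

# Target proportions - the very thing Crawford asks who gets to decide.
target = {"A": 0.5, "B": 0.5}
sample_size = 40

# Index the records by group.
by_group = {}
for row in dataset:
    by_group.setdefault(row["group"], []).append(row)

balanced = []
for group, share in target.items():
    # Sample with replacement so under-represented groups can still
    # reach their target share.
    balanced.extend(random.choices(by_group[group],
                                   k=round(share * sample_size)))

print(Counter(row["group"] for row in balanced))  # Counter({'A': 20, 'B': 20})
```

Note that resampling with replacement merely duplicates minority records rather than adding information – one reason why mirroring demographics alone doesn’t settle the representativeness question.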

She mentioned feedback loops more than once, in the sense of reinforcing stereotypes in the real world, with systems and algorithms becoming ever more inscrutable (“black box”) and disguising very real political and social considerations and implications as purely technical ones, which they aren’t. She quoted Brian Brackeen, the (African-American) CEO of the US facial recognition company Kairos, who stated that his company would not sell its systems to law enforcement, as that would be “morally corrupt” (you can read his op-ed in its entirety on TechCrunch).

As a regular on the London tech / popular science events scene, I found it very interesting to pick out some current and highly relevant references to talks given by other esteemed speakers (whether these references were witting or unwitting, I don’t know). In one of them, Prof. Crawford addressed the fresh topic of the “geopolitics of AI”, which was introduced and covered in detail by Evgeny Morozov in his Nesta FutureFest presentation (titled “the geopolitics of AI”… – you can watch it here); in another, she mentioned the importance of the ongoing conversation just as Mariana Mazzucato talks about the hijacking of the economic narrative(s) by vested corporate and political interests in her new book (“The Value of Everything”) and the accompanying event at the British Library (which you can watch here). Lastly, Crawford’s open questions (which, in my view, could have been raised a little more prominently) about the use of black box algorithms without a clear path of appeal in the broadly defined criminal justice system resonated with the research of Prof. Lilian Edwards of Strathclyde University into the legal aspects of robotics and AI.

On the iconoclastic side, I found it unwittingly ironic that a presentation about democratising AI (both the technology and the debate around it), and about concerns over smaller players being crowded out by acquisitive FAANGs, was delivered by a Microsoft employee at an event series hosted by Google.

You can watch the entire presentation (67 minutes) here. For those interested in the Royal Society’s report on machine learning (referenced in the opening speech by Prof. Bishop), you can find it here.