Frank Pasquale “Human expertise in the age of AI” Cambridge talk 26-Nov-2020

Frank Pasquale is a Professor of Law at Brooklyn Law School and one of the few “brand name” scholars in the nascent field of AI governance and regulation (Lilian Edwards and Luciano Floridi are other names that come to mind).

I had the pleasure of attending his presentation for the Trust and Technology Initiative at the University of Cambridge back in November 2020. The presentation was tied to Pasquale’s newly published book, “New Laws of Robotics: Defending Human Expertise in the Age of AI”.

Professor Pasquale opened his presentation by listing what he describes as “paradigm cases for rapid automation” (areas where AI and / or automation have already made substantial inroads, or are very likely to do so in the near future), such as manufacturing, logistics, and agriculture, as well as some I personally disagree with: transport and mining (1). He argues for AI complementing rather than replacing humans as the key to advancing the technology in many disciplines (as well as advancing those disciplines themselves) – a view I fully concur with.

Frank Pasquale – how to regulate Google, Facebook and credit scoring

(Photo: Stifterverband, CC BY 3.0, via Wikimedia Commons)

He then moved on to the critical – though largely overlooked – distinction between governance *of* artificial intelligence vs. governance *by* artificial intelligence (the latter being obviously more of a concern, whilst the former has until recently been an afterthought or a non-thought). He remarked that the push for researchers to increasingly determine policy is not technical, but political: it prioritises researchers over subject matter experts, which is not necessarily a good thing (n.b. I cannot say I have witnessed that push in financial services, but perhaps in other industries it *is* happening?).

Prof. Pasquale identifies three possible ways forward:

  1. AI developed and used above / instead of domain experts (“meta-expertise”);
  2. AI and professionals melt into something new (“melting pot”);
  3. AI and professionals maintain their distinctiveness (“peaceable kingdom”).

In conclusion, Pasquale proposes his own new laws of robotics:

  1. Complementarity: Intelligence Augmentation (IA) over Artificial Intelligence (AI) in professions;
  2. Robots and AI should not fake humanity;
  3. Cooperation: no arms races;
  4. Attribution of ownership, control, and accountability to humans.

Pasquale’s presentation and views resonated with me strongly because I arrived at similar conclusions not through academic research, but through observation, particularly in the financial services and legal services industries. Pasquale is one of the relatively few voices who temper both the (over)enthusiasm about how much AI will be able to do for us in the very near future (think: fully autonomous vehicles) and the doom and gloom about how badly AI will upend / disrupt our lives (think: “35% of current jobs in the UK are at high risk of computerisation”).

I find it very interesting that for a couple of years now business leaders, thought leaders, consultants etc. have been expressing all kinds of extreme visions of the AI-powered future, yet hardly any offer common-sense, middle-ground views. Despite AI evolving at breakneck speed, our visions and projections of it seem to evolve more slowly. The acknowledgment that fully autonomous vehicles are proving more challenging and taking longer to develop than anticipated only a few years ago has been muted, to say the least. And despite the frightening prognoses regarding unemployment, it has actually been at record lows in the UK for years now, even during the pandemic (2)(3). [Speaking from closer professional proximity: paralegals were singled out as a profession under immediate existential threat from AI, and I am not aware of that materialising in any way. On the contrary, competition for junior lawyers in the UK has recently pushed starting salaries to record heights (4)(5).]

It is obviously incredibly challenging to keep up with a technology developing faster than just about anything else in the history of mankind – particularly for those of us outside the technology industry. Regulators and policy makers (and, to a lesser extent, lawyers and legal scholars) have in recent years been somewhat on the back foot in the face of the rapid development of AI and its applications. However, thanks to fresh perspectives from people like Prof. Pasquale, this seems to be turning around. Self-regulation (which, as financial services proved in 2008, sometimes fails spectacularly) and abstract, high-level existential discussions are being replaced with concrete, versatile proposals for policy and regulation that focus on industries, use cases, and outcomes rather than the details and nuances of the underlying technology.


__________________________________________

(1) Healthcare is a unique case: while Pasquale includes it among the paradigm cases for rapid automation, pandemic-era evidence raises doubts about this: https://www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/

(2) https://www.ons.gov.uk/employmentandlabourmarket/peopleinwork/employmentandemployeetypes/bulletins/employmentintheuk/november2021

(3) https://www.bbc.com/news/business-52660591

(4) https://www.thetimes.co.uk/article/us-law-firms-declare-war-with-140-000-starting-salaries-gl87jdd0q

(5) https://www.ft.com/content/882c9f72-b377-11e9-8cb2-799a3a8cf37b