BCS: AI for Governance and Governance of AI (20-Jan-2021)

BCS, The Chartered Institute for IT (more commonly known simply as the British Computer Society), strikes me as *the* ultimate underdog in the landscape of British educational and scientific societies. It’s an entirely subjective view, but entities like The Royal Institution, The Royal Society, the British Academy, The Turing Institute, and the British Library are essentially household names in the UK. They have pedigree, they are venerated, they are super-exclusive, money-can’t-buy brands – and rightly so. Even the enthusiast-led British Interplanetary Society (one of my beloved institutions) enjoys well-earned respect in both scientific and entrepreneurial circles, while BCS seems to be taken for granted and somewhat invisible outside professional circles, which is a shame. I first came across BCS by chance a couple of years ago, and have since added it to my personal “Tier 1” of science events.

Financial services are probably *the* non-tech industry appearing most frequently in BCS presentations, which isn’t surprising given the amount of crossover between finance and tech (“we are a tech company with a banking license” sort of thing).

One thing BCS does exceptionally well is interdisciplinarity, which is something I am personally very passionate about. In their many events BCS goes well above and beyond technology and discusses topics such as the environment (their event on the climate aspects of cloud computing was *amazing*!), diversity and inclusion, crime, society, even history (their 20-May-2021 event on the historiography of the history of computing was very narrowly beaten by receiving my first Covid jab as the best thing to have happened to me on that day). Another thing BCS does exceedingly well is balance general-public accessibility with really specialist, in-depth content (something the RI and BIS also do very well, while, in my personal opinion, the RS or Turing… not so much). BCS has the major advantage of “strength in numbers”: their membership is so broad that they can recruit specialist speakers to present on any topic. Their speakers tend to be practitioners (at least in the events I attended), and more often than not they are fascinating individuals, which oftentimes turns a presentation into a treat.

AI for governance

The “AI for Governance and Governance of AI” event definitely belongs to the “treat” category thanks to the speaker, Mike Small, who – with 40+ years of IT experience – definitely knew what he was talking about. He started off with a very brief intro to AI (which, frankly, beats anything I’ve seen online for conciseness and clarity). Moving on to governance, Mike very clearly explained the difference between using AI *in* governance functions (i.e. mostly Compliance / General Counsel and tangential areas) and governance *of* AI, i.e. the completely new (and in many organisations as-yet-nonexistent) framework for managing AI systems and ensuring their compliant and ethical functioning. While the former is becoming increasingly understood and appreciated conceptually (implementation, I would say, varies greatly between organisations), the latter is far behind, as I can attest from observation. The default thinking seems to be that AI is just another new technology, and should be approached purely as a technological means to a (business) end. The challenge is that, with its many unique considerations, AI (Machine Learning, to be exact) can be an opaque means to an end, which is incompatible (if not downright non-compliant) with prudential and / or regulatory expectations. AI is not the first technology to require an interdisciplinary / holistic governance perspective in the corporate setting: cloud outsourcing, for instance, has been an increasingly tightly regulated technology in financial services since around 2018.

The eight unique AI governance challenges singled out by Mike are:

  • Explainability;
  • Data privacy;
  • Data bias;
  • Lifecycle management (aka model or algorithm management);
  • Culture and ethics;
  • Human involvement;
  • Adversarial attacks;
  • Internal risk management (this one, in my view, may not necessarily belong here, as risk management is a function, not a risk per se).

The list, as well as the comparison of global AI frameworks that followed, was what really struck me in the presentation (in a positive way), because of its theme-based approach to the governance of AI (which happens to be one of my academic research interests). The list of AI regulatory guidances, best practices, consultations, and most recently regulations proper has been growing rapidly for at least two years now (and that is excluding ethics, where guidances started a couple of years earlier and currently far outnumber anything regulation-related). Some of them come from government bodies (e.g. the European Commission), others from regulators (e.g. the ICO), and others from industry associations (e.g. IOSCO). I have reviewed many of them, and they all contribute meaningful ideas and / or perspectives, but they’re quite difficult to compare side by side because of how new and non-standardised the area is. Mike Small is definitely onto something by extracting comparable, broad themes from long and complex guidances. I suspect that the next step will be for policy analysts, academics, the industry, and lastly the regulators themselves to conduct analyses similar to Mike’s, to identify themes that can be universally agreed upon (for example model management, which strikes me as rather uncontroversial) and those where lines of political / ideological / economic divide are being drawn (e.g. data localisation, or the handling of personal data in general).

One thing’s for sure: regulatory standards (be they soft or hard law) for the governance of AI are beginning to emerge, and there are many interesting developments ahead in the foreseeable future. It’s better to start paying attention at this early stage than to play a painful and expensive game of catch-up in a couple of years.

You can replay the complete presentation here.