Do financial services firms need an AI Ethics function?

AI has generated a tremendous amount of interest in the financial services (FS) industry in recent years, even though it has had its peaks (2018 – 2019) and troughs (2020 – mid-2022, when the focus and/or hype cycle shifted to crypto, operational resilience or SPACs) during that relatively short period.

The multi-billion-dollar question about actual investment and adoption levels (particularly on the asset management side) remains open, and some anecdotal evidence suggests that they may be lower than the surrounding hype would have us believe (I covered that to some extent in this post). However, the spectacular developments in generative AI dating back to late 2022 (even though decent-quality natural-language text and image [deepfake] generating systems started to emerge around 2019) indicate that “this time is [really] different”, because both the interest and the perceived commercial opportunities are just too great to ignore.

However, as is being gradually recognised by the industry, there is more to AI than just technology. There are existing and upcoming hard and soft laws. There are cultural implications. There are risks to be taken into account and managed. There are reputational considerations. Lastly, there are ethics.

AI ethics experienced its “golden era” around 2015 – 2018, when just about every influential think tank, international agency, tech firm and government released its take on the topic, usually expressed as sets of broad, top-level principles which were benevolent and well-meaning on the one hand, but not necessarily “operationalizable” on the other.

The arrival of generative AI underscored the validity of AI ethics – or at least of resuming a serious conversation about AI ethics in the business context.

Do financial services firms need an AI ethics function?

Interestingly, AI ethics per se have never been particularly controversial. This was probably because the “mass-produced” ethical guidelines and principles were so universal that it would be genuinely difficult to challenge them; an alternative explanation could be that they were seen as more academic than practical and were thus discounted by the industry as a moot point; or both. Hopefully in 2023, with generative AI acting as the catalyst, there is a consensus in the business world that AI ethics are valid and important and need to be implemented as part of the AI governance framework. The question then is: is a dedicated AI Ethics function needed?

I think we can all agree that there is a growing number of ethical implications of AI within the FS industry: all aspects of bias and discrimination, representativeness, diversity and inclusion, transparency, treating customers fairly and other aspects of fairness, and the potential for abuse and manipulation – and all of those considerations existed pre-generative AI. Generative AI amplifies or adds its own flavour to existing concerns, including trustworthiness (hallucinations), discriminatory pricing, value alignment and unemployment. It also substantially widens the grey area at the intersection of laws, regulations and ethics, such as personal and non-personal data protection, IP and copyright, explainability, and reputational concerns.

The number of possible approaches seems limited:

  • Disregard AI ethics and risk-accept the consequences;
  • Expand an existing function to cover AI ethics;
  • Create a dedicated AI ethics function.

Risk-acceptance is technically possible, but given the number of things that could potentially go wrong with poorly governed AI, and their reputational impacts on the organisation, the risk/return trade-off does not appear very attractive.

The latter two options are similar, with the difference being the “FTE-ness” of the ethics function. Creating a new role allows more flexibility in terms of where it would sit in the organisational structure; on the other hand, at the current stage of AI adoption there may not be enough AI ethics work to fill a full-time role. Consequently, I think that adding AI ethics to the scope of an existing role is more likely than the creation of a dedicated one.

In either case, the number of areas where AI ethics could sit appears relatively limited: Compliance, Risk or Tech; ESG could be an option too. A completely independent function would have certain advantages as well, but it seems logistically infeasible, especially in the long run.

Based on first-hand observations, I think that the early days of “operationalizing” AI ethics within the financial services context will likely be somewhat… awkward. To what extent ethics are part of the FS ethos is a conversation for someone else to have; I have seen and experienced organisations that were ethical and decent and others that were somewhat agnostic, so my personal experience is unexpectedly positive in that regard, but I might just have been lucky. I think that ESG is the first instance of the industry explicitly deciding to voluntarily make some ethical choices, and its adoption may be a blueprint for AI ethics (I am also old enough to remember the early days of ESG, and those sure did feel awkward).

While I have always been convinced of the validity of operationalized AI ethics, until very recently mine seemed to be a minority opinion. Perceptions are only now beginning to change, driven by spectacular developments in generative AI and the ethical concerns accompanying them. Currently (mid-2023) we are in a fascinating place: AI has been developing rapidly for close to a decade now, but the recent developments have been unprecedented even by AI standards. Businesses find themselves “outrun” by the technology and, in a bit of a knee-jerk panic reaction (but in part also driven by genuine interest and enthusiasm, which currently abound), they have become much keener and more open-minded adopters than only a year or two ago. Now may be the time when the conversation on AI ethics reaches the point of operational readiness… or maybe not yet; time will tell. Sooner or later, though, mass implementation and operationalization of AI ethics will happen. It must.