UK government AI regulation whitepaper review

On 29 March 2023, the UK government published its long-awaited whitepaper setting out its vision for regulating AI in the UK: "A pro-innovation approach to AI regulation". It comes amidst generative AI hype, fuelled by spectacular developments in foundation models and the solutions built on them (e.g., ChatGPT).

The current boom in generative AI has served as a catalyst for renewed discourse on AI in financial services, which in recent years has been somewhat muted, with the spotlight falling mostly on crypto, SPACs, digital operational resilience, and new ways of working.

The whitepaper cannot be analysed without reference to another landmark AI regulatory proposal, the EU AI Act. The EU AI Act was proposed in April 2021 and is currently proceeding through the European Union's legislative process. It is widely expected to be enacted in the second half of 2023 or in early 2024.

The parallels and differences between the UK whitepaper and the EU AI Act

The first observation is that the UK government whitepaper is just that: a list of considerations with an indication of a general “direction of travel”, but nowhere near the level of detail of the EU AI Act (which is not entirely surprising, given that the latter is a regulation and the former just a whitepaper). Both proposals (the UK one being draft soft law and the EU one being draft hard law) recognise that AI will function within a wider set of regulations, both existing (e.g., liability or data protection) and new, AI-specific ones. The UK whitepaper acknowledges the very challenge of defining AI, something with which the initial draft of the EU AI Act caused notable controversy(1).

Both proposals offer general, top-level frameworks, with detailed regulation delegated to industry regulators. However, the EU AI Act comes with detailed and explicit risk-weighted rules and prohibitions, while the UK whitepaper stops short of issuing any explicit guidelines in the initial phase.

The approach proposed by the UK government is to create a cross-sector regulatory framework rather than regulate the technology directly. The proposed framework centres around five principles:

  1. Safety, security, and robustness;
  2. Transparency and explainability;
  3. Fairness;
  4. Accountability and governance;
  5. Contestability and redress.

These principles are fairly standard; however, for certain systems and in certain contexts, some of them may prove technically challenging to observe. For example, it may be difficult to explain why a generative AI text engine produced a particular output for a given prompt. I also note that these principles are high-level and largely non-technical, whereas the EU AI Act sets out not only similar principles (e.g., transparency) but also tangible requirements (e.g., data governance or record keeping).

The most distinctive aspect of the UK AI whitepaper is that it is targeted at individual sector regulators, rather than at users, manufacturers, or suppliers(2) of AI directly. The UK approach could be called “regulating the regulators” and differs not just from the EU AI Act, but also from other soft(3) and hard(4) laws applicable to AI in financial services – all of which focus primarily on the operators and applications of AI directly.

The UK whitepaper states that it does not initially intend to be binding (“the principles will be issued on a non-statutory basis and implemented by existing regulators”), but anticipates becoming so further down the line (“following this initial period of implementation, and when parliamentary time allows, we anticipate introducing a statutory duty on regulators requiring them to have due regard to the principles”). It reads as though the principles are expected to “morph” from soft to hard law over time.

Despite being explicitly about AI, the paper remains neutral with respect to specific AI tools or applications such as facial recognition (“regulating the use, not the technology”). It is one of very few regulatory papers to bring up the issue of AI autonomy, which I find very interesting.

In an uncharacteristically candid admission, the whitepaper notes that “It is not yet clear how responsibility and liability for demonstrating compliance with the AI regulatory principles will be, or should ideally be, allocated to existing supply chain actors within the AI life cycle. […] However, to further our understanding of this topic we will engage a range of experts, including technicians and lawyers.”. In October 2020, the European Parliament published a resolution on civil liability for AI(5). The resolution advised strict liability for the operators of high-risk AI systems and fault-based liability for operators of all other AI systems. It is important to note that the definitions of “high-risk AI systems” and “operators” in the resolution are not the same as those proposed in the subsequently published draft of the EU AI Act, which may lead to ambiguities. Furthermore, European Parliament resolutions are not legally binding. So, on the one hand, we have the UK admitting that liability, as a complex matter, requires further research, and on the other, the EU proposing a simple approach which may be somewhat *too* simple for a complex, multi-actor AI value chain.

In another candid reflection, the UK government recognises that “[they] identified potential capability gaps among many, but not all, regulators, primarily in relation to: AI expertise [and] organisational capacity”. It remains to be seen how UK regulators (particularly financial and finance-adjacent ones, such as the PRA, FCA, and ICO) respond to these comments, especially considering the shortage of AI talent and heavy competition for it from tech firms and financial institutions.

The conclusions of the whitepaper are consistent with observations within the financial services (FS) industry: while there is reasonably good awareness of high-level laws and regulations applicable to AI (MiFID II, GDPR, the Equality Act 2010), there is a lack of a regulatory framework connecting them. There is also a perception of regulatory gaps, which upcoming AI regulations are expected to bridge.

There are both similarities and differences between the UK government whitepaper and the EU AI Act. The latter is much more detailed and fleshed out, putting the EU ahead of the UK in terms of legislative developments. The EU AI Act is risk-based, focusing on prohibited and high-risk applications of AI. The prohibitions are unambiguous, and this part of the Act is arguably rules-based, while the remainder is principles-based; the whole of the EU AI Act is outcomes-focused. The UK government AI whitepaper explicitly rejects the risk-based approach (one of the very few parts of the whitepaper that are completely unambiguous) and opts for a context-specific, principles-based, and outcomes-oriented approach. The rejection of the risk-based approach reads like a clear rebuke of the EU approach.

The main parallel between the UK government whitepaper and the EU AI Act is sector-neutrality. Both are meant to be applicable to all sectors, with detailed oversight delegated to the respective sectoral regulators. We also need to be mindful that the primary focus of general regulations is likely to be applications of AI that may impact health, wellbeing, critical infrastructure, fundamental rights, etc. Financial services – as important as they are – are not as critical as healthcare or fundamental rights.

Both laws are meant to complement existing regulations, albeit in different ways. The EU AI Act, as hard law, needs to fit clearly and unambiguously within the matrix of existing regulations across various sectors: partly as a brand-new regulation and partly as a complement to existing regulations (e.g., product safety or liability). The UK principles, as soft law (at least initially), are meant to “complement existing regulation, increase clarity, and reduce friction for businesses”.

In what appears to be an explicit difference in approaches, the UK whitepaper states that it will “empower existing UK regulators to apply the cross-cutting principles”, and that “creating a new AI-specific, cross-sector regulator would introduce complexity and confusion, undermining and likely conflicting with the work of our existing expert regulators”. This stands in contrast with the proposed EU approach, which – despite being based on existing regulators – also anticipates the creation of a pan-EU European Artificial Intelligence Board (EAIB), tasked with a number of responsibilities pertaining to the implementation, requirements, advice, and enforcement of the EU AI Act.

However, the UK government proposes the introduction of some “central functions” to ensure regulatory coordination and coherence. The functions are:

  • Monitoring, assessment and feedback;
  • Support for coherent implementation of the principles;
  • Cross-sectoral risk assessment;
  • Support for innovators (including testbeds and sandboxes);
  • Education and awareness;
  • Horizon scanning;
  • Ensuring interoperability with international regulatory frameworks.

Even though the whitepaper does not offer details about the logistics of these central functions, they do appear similar in principle to what the EAIB would be tasked with. The whitepaper notes that the central functions would initially be delivered by the government, with an option to deliver them independently in the long run. It references the UK’s Digital Regulation Cooperation Forum (DRCF) – comprising the Competition and Markets Authority (CMA), Ofcom, the Information Commissioner’s Office (ICO), and the Financial Conduct Authority (FCA) – as an avenue for delivery.

The fundamental difference between the UK and EU approaches is enforceability: the UK whitepaper is (at least initially) soft law, while the EU AI Act will be hard law. However, it is reasonable for a regulator to expect its guidance to be followed and to challenge regulated firms if it is not, which means that a soft law is de facto almost a hard law. The EU AI Act has explicit provisions for monetary fines for non-compliance – up to EUR 30,000,000 or 6% of global annual turnover, whichever is higher(6) – which is even stricter than the GDPR’s cap of EUR 20,000,000 or 4% of global annual turnover, whichever is higher. The UK AI whitepaper has none. It is entirely plausible that when the UK whitepaper evolves from soft to hard law, additional provisions will be made for monetary fines, but as it stands right now, this aspect is missing.
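To make the “whichever is higher” caps concrete, here is a minimal Python sketch comparing the two regimes; the turnover figure is hypothetical and used purely for illustration:

    # Compare the maximum fine caps in the 2021 draft EU AI Act and the GDPR.
    # Draft EU AI Act: EUR 30m or 6% of global annual turnover;
    # GDPR: EUR 20m or 4%. In both cases the higher amount applies.

    def max_fine(fixed_cap_eur: float, turnover_pct: float, turnover_eur: float) -> float:
        """Return the applicable cap: fixed amount or turnover share, whichever is higher."""
        return max(fixed_cap_eur, turnover_pct * turnover_eur)

    turnover = 2_000_000_000  # hypothetical EUR 2bn global annual turnover

    print(max_fine(30_000_000, 0.06, turnover))  # draft EU AI Act: 120,000,000.0 (6% dominates)
    print(max_fine(20_000_000, 0.04, turnover))  # GDPR: 80,000,000.0 (4% dominates)

For a smaller firm with, say, EUR 100m of turnover, both percentages fall below the fixed amounts, so the EUR 30m and EUR 20m floors would apply instead.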

Conclusion

Overall, it is somewhat challenging to provide a clear (re)view of the whitepaper. It contains a lot of interesting recommendations, although most, if not all, have appeared in previous AI regulatory guidance worldwide and have otherwise been tried and tested. The whitepaper’s focus on regulators, rather than on end-users or operators of AI, is both original and very interesting.

The time for AI regulation has come. I expect a lot of activity and discussions around it in the EU, UK, and beyond. I also expect the underlying values and principles to be relatively consistent worldwide, but – as the top-level comparison of the UK and EU proposals has shown – similar principles can be promoted, implemented and enforced within substantially different regulatory frameworks.

 

________________________________________________________

(1) https://www.wired.com/story/artificial-intelligence-regulation-european-union/

(2)  The EU AI Act introduces a catch-all concept of “operator” which applies to all the actors in the AI supply chain.

(3)  European Commission, ‘White Paper on Artificial Intelligence: A European Approach to Excellence and Trust’, 2020; European Commission, ‘Report on the Safety and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics’, 2020; European Commission, ‘Liability for Artificial Intelligence and Other Emerging Digital Technologies’, 2019; The Ministry of Economy, Trade and Industry (‘METI’), ‘Governance Guidelines for Implementation of AI Principles ver. 1.0’, 2021; Hong Kong Monetary Authority (‘HKMA’), ‘High-Level Principles on Artificial Intelligence’, 2019; The International Organisation of Securities Commissions (‘IOSCO’), ‘The Use of Artificial Intelligence and Machine Learning by Market Intermediaries and Asset Managers’, https://www.iosco.org/library/pubdocs/pdf/IOSCOPD658.pdf ; Financial Stability Board, note 9 above; Bundesanstalt für Finanzdienstleistungsaufsicht (‘BaFin’), ‘Big Data Meets Artificial Intelligence – Results of the Consultation on BaFin’s Report’, 21 March 2019, https://www.bafin.de/SharedDocs/Veroeffentlichungen/EN/BaFinPerspektiven/2019_01/bp_19-1_Beitrag_SR3_en.html ; Information Commissioner’s Office, ‘Guidance on the AI Auditing Framework Draft Guidance for Consultation’, 2020; Information Commissioner’s Office, ‘Explaining Decisions Made with AI’, 2020; Personal Data Protection Commission, ‘Model Artificial Intelligence Governance Framework (2nd ed)’, Singapore, 2020.

(4) Directive 2014/65/EU of the European Parliament and of the Council of 15 May 2014 on Markets in Financial Instruments and Amending Directive 2002/92/EC and Directive 2011/61/EU; European Commission, ‘RTS 6’, 2016; Financial Conduct Authority, ‘The Senior Managers and Certification Regime: Guide for FCA Solo-Regulated Firms’ (‘SM&CR’), July 2019, https://www.fca.org.uk/publication/policy/guide-for-fca-solo-regulated-firms.pdf ; Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data (General Data Protection Regulation), 27 April 2016, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679&from=EN; European Commission, ‘Proposal for an Artificial Intelligence Act’, 2021; People’s Bank of China, ‘Guiding Opinions of the PBOC, the China Banking and Insurance Regulatory Commission, the China Securities Regulatory Commission, and the State Administration of Foreign Exchange on Regulating the Asset Management Business of Financial Institutions’, 27 April 2018.

(5)  European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL))

(6)  Interestingly, poor data governance could trigger the same fine as engaging with outright prohibited practices.