In its white paper published last month, the UK Government set out its principles-based, adaptive approach to regulating AI. The UK approach stands in stark contrast to the more static and prescriptive approach of the EU AI Act. Instead of assigning responsibility for AI governance to a new single regulator, the UK Government is empowering existing regulators to come up with tailored approaches for specific sectors. The aim is to ensure that the UK remains a flexible and innovation-friendly jurisdiction for AI developers. The flipside is that the UK approach may leave gaps between regulators, by failing to take a more holistic approach along the lines of the EU.

We’ve set out our top 5 takeaways when comparing the UK and EU approach below.

  1. Principle-based guidance in the UK vs EU legislation

The EU AI Act represents a prescriptive legislative framework based on the EU’s New Legislative Framework, the EU model for product safety legislation. It imposes legislative obligations at every stage of the lifecycle of an AI system, from training, testing and validation, through conformity assessments and risk management systems, to post-market monitoring.

The EU’s prescriptive approach has been rejected by the UK, which is opting for no further legislation at this stage. Instead, the UK white paper outlines 5 principles that UK regulators should consider to best facilitate the safe and innovative use of AI in the industries they monitor: (1) safety, security and robustness; (2) transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress. These principles are based on the OECD’s AI principles. Although the UK government will initially issue these principles on a non-statutory basis, it may introduce a statutory duty on regulators to have due regard to the principles at a later date.

  2. No new regulators in UK vs New EU regulators

The UK principles will be implemented by existing regulators, such as the Medicines and Healthcare products Regulatory Agency (MHRA), the Equality and Human Rights Commission, the Information Commissioner’s Office (ICO), and the Competition and Markets Authority. This approach makes use of regulators’ domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used. The UK Government is aware that given its sector-specific approach, regulatory co-ordination will be essential. The risk is that in the absence of a centralised, single regulator, UK regulators will diverge in approach. The white paper discusses measures that the UK Government will take to mitigate this risk, including issuing guidance for regulators in implementing the principles, central monitoring and evaluation of the framework and principles, and a multi-regulator AI sandbox.

By contrast, the EU approach will rely on a co-ordinated network of new and established regulators, including a central European AI Board and national competent authorities for AI in each Member State.

  3. Vertical sector-specific guidance in UK vs Horizontal cross-sector regulation in EU

The UK approach focuses on guidance for specific sectors and risks. This builds on what we are already seeing from UK regulators, such as detailed and comprehensive guidance from the ICO on AI and data protection, and the MHRA’s Software and AI as a Medical Device Change Programme, a programme of work to ensure clear regulatory requirements for software and AI through guidance, streamlined processes, and the designation of standards. The Bank of England, Prudential Regulation Authority and Financial Conduct Authority have also published a joint paper addressing AI in financial services.

In contrast, the EU approach spans across sectors, focussing on “prohibited” and “high-risk” AI systems. Whilst this ensures a greater degree of risk coverage at a centralised level, we are already seeing the potential for significant duplication and overlap between the EU AI Act and established product safety regimes, such as the EU Medical Device Regulations 2017/745, particularly where manufacturers may be looking at two parallel sets of requirements on conformity assessments and post-market surveillance.

  4. Penalties: No new UK penalties vs new EU penalties

The EU regime proposes penalties of up to EUR 30 million or up to 6% of global annual turnover, whereas the UK is introducing no new penalties at this stage.

  5. Liability: EU tackles head on vs “Wait and see” in UK

The white paper does not address changes to the UK liability regime at this stage. However, the UK Government is seeking views on the adequacy of existing redress mechanisms for harms caused by AI systems in its consultation on the white paper.

The EU, on the other hand, issued a proposal for a directive on adapting non-contractual civil liability rules to AI (the AI Liability Directive) in September 2022. The Directive aims to complement and modernise the EU liability framework by introducing new rules specific to damage caused by AI systems. These rules are intended to ensure that persons harmed by AI systems enjoy the same level of protection as persons harmed by other technologies in the EU. In particular, the AI Liability Directive would create a rebuttable ‘presumption of causality’, easing the burden of proof for victims seeking to establish that damage was caused by an AI system.

Next steps

The UK Government has launched a consultation on the white paper, which is open until 21 June 2023. In Autumn of this year, the UK Government intends to issue a response to the consultation, publish cross-sectoral principles for regulators, and design and publish an AI Regulation Roadmap.


Jaspreet Takhar is a senior associate in Baker McKenzie’s London office and advises market-leading tech and healthcare companies on issues at the cutting edge of digital health.