The European Union’s draft AI Act is an ‘early mover’ in the race towards a global blueprint for AI regulation. In December 2022, the Council of the European Union adopted its compromise position on the AI Act, and next month the European Parliament is scheduled to vote on the draft text.

But despite its initial promise, the AI Act increasingly resembles the circumstances of its conception – a complex, one-off political compromise between hundreds of MEPs with wildly differing views. What’s more, by adopting the infrastructure of the New Legislative Framework (NLF), the AI Act leaves gaps in the protection of individuals. This means that as your organisation develops an ethical AI framework, the AI Act will be only one piece of the jigsaw in establishing AI-specific risk mitigation measures.

What does this mean if your organisation aspires to be an ethical adopter of AI systems?

The AI Act marks a pivotal point: a key opportunity to build the foundations for compliance going forward. But the AI Act has blind spots, and demonstrating compliance with the AI Act alone will not be enough to ensure responsible deployment of AI systems. To truly mitigate risk, we recommend that organisations:

  1. Acknowledge that the AI Act is ultimately positioned as EU product regulation. Leverage your organisation’s existing product safety infrastructure and resources to build your compliance programme.
  2. Consider supplementing your organisation’s AI Act compliance programme with a broader AI ethics framework, built around your organisation’s core principles, ethical concerns, data protection principles and the safeguarding of fundamental rights.
  3. Carry out an initial assessment to identify where AI systems are being deployed or developed within your organisation. Use a “wide lens” to assess the level of risk in these systems – risk categorisation should not be based solely on the risk categories described in the AI Act.
  4. Carry out an in-depth analysis of your AI supply chains. Your organisation’s regulatory obligations hinge on applying concepts that do not easily fit into the reality of AI supply chains, so this will likely be an essential (but complex) analysis.
  5. Consider the impact on individuals, including transparency towards the people affected by AI systems – a blind spot in the AI Act.

In our more detailed blog post (available here), we explore four key shortcomings of the AI Act and the actions we’ve been discussing with our healthcare clients in light of them. These shortcomings include:

  • The implications of positioning the AI Act as product safety legislation using the NLF.
  • The AI Act’s flawed risk categorisations for AI use cases, including the assumption that all AI as a medical device is “high risk”.
  • The absence of rights, recourse or any effective role for consumers under the AI Act.
  • The difficulties in mapping a typical, highly complex AI supply chain onto the overly simple, narrow classes of actors identified under the AI Act (including providers, importers, distributors and users).

If you’d like to discuss this further, please get in touch.

Authors

Julia Gillert is Of Counsel at Baker McKenzie's London office, and has shaped her practice to focus exclusively on regulatory matters affecting the Healthcare & Life Sciences industry.

Jaspreet Takhar is a senior associate in Baker McKenzie's London office and advises market-leading tech and healthcare companies on issues at the cutting edge of digital health.