Introduction

Following our post here on the EMA’s reflection paper on the use of Artificial Intelligence (AI) in the medicinal product lifecycle, which is open for public consultation until 31 December 2023, it is the turn of the World Health Organization (WHO) to release a publication outlining key considerations for the regulation of artificial intelligence for health.

Along the same lines as the EMA’s reflection paper, which seeks to initiate dialogue with all groups of stakeholders in this fast-evolving field, the WHO publication aims to foster dialogue among stakeholders, including developers, regulators, manufacturers, health workers and patients. Considering that AI health technologies are developing rapidly, and that we do not always have a clear understanding of their impact on users (patients or health-care professionals), the WHO focuses on six key regulatory considerations on AI for health.

The publication then lists 18 key recommendations, grouped under the six regulatory considerations outlined below, that stakeholders are invited to take into account as they continue to develop frameworks and best practices for the use of AI in healthcare.

WHO’s regulatory considerations (6) and underlying key recommendations (18):

1. Documentation and transparency: it is important to maintain appropriate and effective documentation and record-keeping on the development and validation of AI systems, including their intended medical purposes and development process. This is essential to establish trust and to allow for the regulatory assessment and evaluation of AI systems (including tracing back the complex development process).

WHO Key recommendations to consider:

  • pre-specifying and documenting the intended medical purpose and development process in a manner that allows tracing of the development steps.
  • risk-based approach for the level of documentation and record keeping.

2. Risk management and AI systems development lifecycle approach: risks associated with AI systems, such as cybersecurity threats, vulnerabilities and biases should be considered throughout the total product lifecycle and addressed as part of a holistic risk management approach. Such holistic risk evaluation and management need to take into account the full context in which the AI system is intended to be used.

WHO Key recommendations to consider:

  • a total product lifecycle approach throughout all phases in the life of a product.
  • a risk management approach addressing risks such as cybersecurity threats and vulnerabilities, underfitting and algorithmic bias.

3. Intended use and analytical and clinical validation: Transparent documentation on the intended use of the AI system, on the training dataset composition underpinning an AI system, as well as on the external datasets and performance metrics should be available to demonstrate the safety and performance of the AI system.

WHO Key recommendations to consider:

  • transparency on the training dataset composition underpinning an AI system.
  • demonstrating performance beyond the training dataset (including an external validation dataset).
  • a graded set of requirements for clinical validation based on risk.
  • a period of more intense post-deployment monitoring through post-market management and market surveillance for high-risk AI systems.

4. AI-related data quality: data is the most relevant asset for training AI systems. All AI solutions rely on data, and its quality will affect the systems’ safety and effectiveness. The development of an AI system must therefore be supported by data of sufficient quality to achieve the intended purpose. Data quality issues and challenges need to be identified, and rigorous pre-release evaluations of AI systems need to be conducted to ensure they will not amplify biases and errors or create harm.

WHO Key recommendations to consider:

  • whether data are of sufficient quality to achieve the intended purpose.
  • deploying rigorous pre-release evaluations for AI systems to ensure that they will not amplify biases or errors.
  • careful design or prompt troubleshooting to help early identification of data quality issues.
  • mitigating data quality issues that arise in health-care data and the associated risks.
  • contributing to ecosystems that can facilitate the sharing of good-quality data sources.

5. Privacy and data protection: privacy and data protection are to be considered from the outset of the design, development and deployment of AI systems, taking into account that health data qualify as sensitive personal data, which are generally subject to a higher degree of protection. Privacy and cybersecurity risks must be considered as part of the compliance programme. A good understanding of the applicable privacy and data protection legal framework is key to ensuring compliance. Beyond privacy and data protection, ethical considerations are also to be taken into account.

WHO Key recommendations to consider:

  • privacy and data protection during the design and deployment of AI systems.
  • gaining a good understanding of applicable data protection regulations and privacy laws, and ensuring compliance with them throughout the development process.
  • implementing a compliance programme that addresses risks.

6. Engagement and collaboration: engagement and collaboration among key stakeholders (developers, manufacturers, health-care practitioners, patients, patient advocates, policy-makers, regulatory bodies and others) should be fostered and facilitated in order to ensure that products and services stay compliant throughout the whole lifecycle.

WHO Key recommendations to consider:

  • accessible and informative platforms to facilitate engagement and collaboration.
  • streamlining the oversight process for AI regulation through engagement and collaboration.

Next steps

In conclusion, as is the case in many other sectors, WHO recognises the potential of AI to rapidly advance research in healthcare and therapeutic development.

At the same time, in light of the evolving complexity of the AI landscape, it calls for international collaboration on AI regulations and standards to foster the safe and appropriate development and use of AI systems.

In practice

The challenge for many organizations in their AI development compliance journey is to follow a holistic approach and embed all those key considerations and recommendations, as well as the upcoming AI regulation, into their business operations and product development lifecycle.

Under the EU’s proposed Artificial Intelligence Act (AIA), the compliance journey for an AI system will depend on an organization’s position in the supply chain, the nature of the technology, and how the technology will be used. As a reminder, the AIA was introduced in 2021 and is currently in the trilogue phase (tripartite discussions between the Council, the European Parliament and the European Commission) to determine the final form of the Act. It is expected to enter into force at the beginning of 2024, with a possible two-year transition period, meaning that the AIA will likely apply in full from the beginning of 2026.

Within Baker McKenzie’s AI focus group, we assist organizations in navigating the evolving AI landscape and in developing a multi-faceted AI governance framework. This covers both (i) enterprise-level governance (more static, process-focused and adapted to capture new AI innovations across the business) and (ii) AI lifecycle management (more iterative and focused on ensuring existing AI technologies remain compliant as they are developed).

(i) Enterprise governance: focused on corporate governance, including Responsible AI Principles and Responsible AI Policies.

(ii) AI lifecycle management: encompassing (a) due diligence (assessment and categorisation of AI systems); (b) lifecycle management to ensure compliance with the upcoming EU AI Act and other AI product regulation; (c) reporting; and (d) escalations and decision-making.

There are a number of practical steps that organizations can take in order to develop their AI governance and accountability framework taking into account available guidance, considerations and recommendations:

  • carry out an “AI scoping survey” or “AI mapping exercise” in order to understand (i) the use of AI within the organization, and (ii) the products impacted by AI regulations.
  • understand the use of AI by your suppliers and implement a responsible AI approach throughout your supply chain (due diligence, templates, contract management with third-party vendors).
  • understand the use of AI by your employees and implement Generative AI Policies for employees and third-party service providers.
  • establish a cross-functional team dedicated to AI.
  • create risk frameworks and impact assessments to identify and understand prohibited and high-risk uses of AI.
  • develop training and a culture of awareness around responsible AI principles within the organization.
  • establish standards, processes, guidance, templates and toolkits that embed the above principles of transparency and accountability.

Authors

Elisabeth Dehareng is a Partner in Baker McKenzie's Brussels office.

Kathy Harford is a Lead Knowledge Lawyer in Baker McKenzie's London office.