The European Medicines Agency (EMA) has published a draft reflection paper which considers the application of artificial intelligence (AI) and machine learning (ML) to the development, regulation and use of medicines.
This paper, which is now open for public consultation, evaluates the risks, benefits and opportunities that AI and ML present across the entire medicines lifecycle. Underlying the paper is the EMA's concern about the challenges of integrating AI and ML into that lifecycle, from data protection concerns to technical considerations and implications for GxP. We've set out our top six takeaways on the paper below:
- MAHs are ultimately responsible for the use of AI/ML in the lifecycle of medicinal products: The paper highlights the key principle that marketing authorisation applicants (MA Applicants) and marketing authorisation holders (MAHs) will bear responsibility for ensuring that any AI/ML tools they use are fit for purpose and comply with ethical, technical, scientific, and regulatory standards.
- Consider use cases of AI/ML throughout the medicines lifecycle: The paper considers the potential use cases for AI/ML throughout the medicines lifecycle and the associated regulatory concerns and watch-outs. To take a few examples:
- AI in clinical trials and the implications for ICH GCP: All requirements in the ICH E6 guideline for good clinical practice apply to the use of AI/ML in clinical trials. If an organisation generates a model for clinical trial purposes, the full model architecture, logs from modelling, training data and a description of the data processing pipeline will likely form part of the clinical trial data or trial protocol dossier. This means they should be made available for assessment at the time of the marketing authorisation or clinical trial application. The use of any decentralised elements should be reflected in the protocol benefit-risk assessment.
- Product information: AI applications for drafting, translating, or reviewing medicinal product information documents should be used under close human supervision, particularly given that generative language models are prone to include plausible, but erroneous, output.
- Manufacturing: There are an increasing number of AI applications for process design and scale-up, process quality control and batch release. Model development, performance assessment and lifecycle management should follow quality risk management principles, taking patient safety, data integrity and product quality into account.
- Regulatory impacts and risk assessments are key: MA Applicants are expected to perform a regulatory impact and risk assessment of all AI/ML applications. The higher the regulatory impact or risk associated with the use of AI/ML models, the sooner the EMA recommends the MA Applicant seeks guidance from regulators. The EMA paper advises early regulatory interaction (such as scientific advice) where an AI/ML system is used in the context of medicinal product development, evaluation, or monitoring, and is expected to have an impact (even potentially) on the benefit-risk balance of a medicinal product.
- Guidance on the AI/ML development lifecycle: The paper provides guidance on all stages of the AI/ML lifecycle, from data acquisition and augmentation; to training, testing and validation; to development and deployment. It flags AI-specific risks that data scientists will need to consider, such as the need for representative datasets during training, the need to avoid over-fitting during validation and testing, and the need for explainability and transparency wherever possible. For example, at the data acquisition stage, the paper emphasises that organisations must make every effort to acquire a balanced training dataset – they should consider the potential need to over-sample rare populations, and take into account all relevant bases of discrimination under the EU principle of non-discrimination. Organisations must also document the source(s) of data and the process of data acquisition, along with any processing such as cleaning, transformation, etc., in line with GxP requirements.
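To make the over-sampling point concrete, here is a minimal illustrative sketch (not from the EMA paper; all names are hypothetical) of naive over-sampling, in which records from an under-represented population are duplicated at random until each group matches the size of the largest one:

```python
import random

def oversample_rare_groups(records, group_key, seed=0):
    """Duplicate records from under-represented groups (sampling with
    replacement) until every group matches the largest group's size."""
    rng = random.Random(seed)
    groups = {}
    for rec in records:
        groups.setdefault(rec[group_key], []).append(rec)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Top up smaller groups by sampling existing records with replacement.
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Example: a dataset where one population is rare (90 vs 10 records).
data = [{"population": "common", "x": i} for i in range(90)] \
     + [{"population": "rare", "x": i} for i in range(10)]
balanced = oversample_rare_groups(data, "population")
counts = {}
for rec in balanced:
    counts[rec["population"]] = counts.get(rec["population"], 0) + 1
print(counts)  # -> {'common': 90, 'rare': 90}
```

Naive duplication is only the simplest option; in practice, and in line with the paper's emphasis on documentation, any such resampling step (and its parameters) would itself need to be recorded as part of the data processing pipeline.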
- Follow an ethical, human-centric approach: Developers should follow the basic ethical principles presented in the Assessment List for Trustworthy Artificial Intelligence for self-assessment (ALTAI). These principles include: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; accountability; societal and environmental well-being; and diversity, non-discrimination, and fairness. A human-centric approach should guide all development and deployment of AI/ML technologies.
- MAHs and applicants must consider good governance, data protection and integrity principles: Organisations should put in place SOPs implementing GxP principles on data and algorithm governance. In the case of personal data, the paper recommends that applicants and MAHs use anonymised data, synthetic data or other privacy-preserving techniques where possible. If personal data have been used for model training, organisations should evaluate whether such information can potentially be extracted through membership inference and model inversion attacks, and mitigate the risk of re-identification where needed.
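The re-identification risk behind membership inference can be illustrated with a toy sketch (our own illustration, not the EMA's methodology; all names are hypothetical): a model that memorises its training data, such as a 1-nearest-neighbour regressor, makes much smaller errors on training records than on unseen ones, so an attacker can guess membership simply by thresholding the model's error on a candidate record:

```python
import random

def nearest_neighbour_predict(train, x):
    """A 1-NN regressor: return the y of the closest training x.
    1-NN memorises its training set, so its training error is zero."""
    return min(train, key=lambda pt: abs(pt[0] - x))[1]

def membership_guess(train, point, threshold=0.05):
    """Toy membership inference: flag `point` as a training-set member
    if the model's prediction error on it is below `threshold`."""
    x, y = point
    return abs(nearest_neighbour_predict(train, x) - y) < threshold

rng = random.Random(42)
truth = lambda x: 2 * x + rng.gauss(0, 0.5)  # noisy ground-truth process

# Records the model was trained on vs. fresh records it never saw.
members = [(i / 10, truth(i / 10)) for i in range(20)]
non_members = [(i / 10 + 0.05, truth(i / 10 + 0.05)) for i in range(20)]

hits_members = sum(membership_guess(members, p) for p in members)
hits_non_members = sum(membership_guess(members, p) for p in non_members)
print(hits_members, hits_non_members)  # members are flagged far more often
```

The same intuition scales up to real models: the more closely a model fits individual training records, the more its behaviour leaks about who was in the dataset, which is why the paper links this evaluation to the choice of anonymised or synthetic training data.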
The consultation will be discussed during a joint HMA/EMA workshop scheduled for November 2023, and will close in December 2023. Given that the paper places the responsibility on MA Applicants and MAHs to ensure regulatory compliance, biotech and pharma companies should assess how this principle can work in practice and to what extent contractual protections can be put in place (and input into the consultation accordingly).