We’ve set out our top ten tips for ensuring GDPR compliance when your organisation is procuring AI solutions from third parties, whether to train an AI imaging system, integrate AI into a patient-facing app or allow your staff to make use of generative AI. These tips are based on the issues we see attracting regulatory scrutiny in practice, the potential stumbling blocks we’re coming across in supplier terms, and the ICO’s AI guidance, which has quickly gained a reputation as some of the most impressive and comprehensive guidance on AI and data protection in Europe.

Want to know more? Please get in touch.

  1. The DPIA is king: We’re seeing a growing expectation from regulators that, prior to the procurement and deployment of an AI system, organisations have carried out a robust Data Protection Impact Assessment (DPIA) capable of standing up to regulatory scrutiny. We see this in the ICO’s guidance, which sets out that in the vast majority of cases, the use of AI will involve a type of processing likely to result in a high risk to individuals’ rights and freedoms. Article 35(3)(a) of the GDPR makes clear that a DPIA is required if, for example, your use of AI involves:
    • systematic and extensive evaluation of personal aspects based on automated processing, including profiling, on which decisions are made that produce legal or similarly significant effects; or
    • large-scale processing of special categories of data, such as health or genetic data.

However, you need to think beyond the Article 35(3)(a) triggers when assessing whether to conduct a DPIA. AI may involve processing operations that are themselves likely to result in a high risk, such as the use of new technologies or novel applications of existing technologies. This may involve evaluation or scoring, systematic monitoring and/or large-scale processing. If so, consider the need for a DPIA.

  2. Understand your AI supply chain: AI supply chains are never as straightforward as they first appear. Often, multiple organisations are involved in developing and deploying AI systems that process personal data. It’s possible that an organisation may be a controller or joint controller for some phases, and a processor for others.

Who are you procuring the AI system from? Is this the same organisation that trained and developed the AI system? (As the AI ecosystem evolves, we’re seeing the answer is increasingly “no”.) Assess the GDPR designation of each organisation at each stage of the AI lifecycle (i.e. both the research and development phase and the deployment phase) – this will not always be transparent from the contractual documentation, and these roles are likely to evolve and change as the AI system moves from development to deployment. You may need to ask questions of the provider to truly understand your AI supply chain.

  3. Vendor due diligence is essential, not a “nice-to-have”: Regulators increasingly assume that you have conducted in-depth vendor due diligence at the procurement stage. We’re seeing a strong expectation that, even if your organisation did not initially train or develop an AI system, you nevertheless have an in-depth understanding of how that system was trained and developed, so that you can ensure purpose limitation and data minimisation and address AI-specific security risks (e.g. data poisoning, model inversion attacks, membership inference attacks, etc.). It’s rare that all of these points will be addressed in the contractual documentation at the procurement stage – instead, only robust vendor due diligence will give you the answers.
  4. Understand data fields in AI inputs and AI outputs: You need absolute clarity on the personal data that may be inputted into the AI system, the data fields that may appear in its outputs, and how those outputs may be used or combined with other personal data to make inferences, decisions, recommendations, etc. Only with this information can you assess your measures to ensure data minimisation and purpose limitation. Understanding where special category data may be involved is key for life sciences organisations. The ICO’s guidance discusses measures to minimise personal data at the inference stage, such as:
    • converting personal data into less ‘human readable’ formats;
    • making inferences locally (e.g. on the device that generates the query and already collects and stores the individual’s data); and
    • privacy-preserving query approaches.

It won’t always be practical to implement all of these measures – their suitability depends on the use case in question – but each may have a role to play. The sketch below illustrates the first of these techniques.
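By way of illustration only, here is a minimal sketch of how direct identifiers might be pseudonymised, and unneeded fields dropped, before a record is passed to an AI system for inference. The field names, the salt handling and the allow-list are all hypothetical assumptions, not a recommendation of any particular tool or approach:

```python
import hashlib

# Illustrative only: pseudonymise direct identifiers and drop unneeded
# fields before a record is passed to an AI system for inference.
# Field names, the salt and the allow-list are hypothetical assumptions.

SALT = b"rotate-me-and-store-securely"  # assumption: managed via your key store

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a salted hash (less 'human readable')."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:12]

def minimise_record(record: dict, identifiers: set, needed: set) -> dict:
    """Keep only the fields the model needs; pseudonymise direct identifiers."""
    minimised = {}
    for key, value in record.items():
        if key in identifiers:
            minimised[key] = pseudonymise(value)
        elif key in needed:
            minimised[key] = value
        # anything else (e.g. a home address) is dropped entirely
    return minimised

patient = {
    "patient_id": "943-476-5919",    # direct identifier (hypothetical)
    "name": "Jane Doe",              # direct identifier
    "age_band": "40-49",             # needed for the model
    "symptom_code": "R06.0",         # needed for the model
    "home_address": "1 High Street"  # not needed: dropped
}

print(minimise_record(patient,
                      identifiers={"patient_id", "name"},
                      needed={"age_band", "symptom_code"}))
```

Whether a salted hash is adequate pseudonymisation (and who holds the salt) is itself a question for your DPIA; the point is simply that inputs can often be made far less identifying without degrading the model’s usefulness.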

  5. Data Processing Agreement (DPA): The DPA needs to be watertight – does it cover everything it should? Have you addressed any international transfers of personal data? Have the security measures been assessed and approved by your organisation? Are audit rights addressed, and how will your organisation exercise these in practice? Even in the standard DPAs of the most sophisticated AI providers, not all of these points may be adequately addressed.
  6. Different lawful bases for different phases? When determining your lawful bases for processing, it will make sense to separate the research and development phase (including conceptualisation, design, training and model selection) from the deployment phase. This is because these are separate and distinct purposes, and will need to be separately assessed. If you choose to rely on legitimate interests, then you should conduct a legitimate interests assessment accordingly.
  7. Accuracy under the GDPR vs ‘statistical accuracy’ in the AI world: The GDPR requires that personal data is accurate and, where necessary, kept up to date (the ‘accuracy principle’). ‘Statistical accuracy’ refers to a different concept: the accuracy of the AI system itself, i.e. how often it guesses the correct answer, measured against correctly labelled test data. The ICO’s guidance clarifies that an AI system does not need to be 100% statistically accurate to comply with the GDPR accuracy principle.

    However, you should regularly monitor and assess the accuracy of the AI system (including during deployment). For example, you should regularly assess ‘model drift’, i.e. the tendency of an AI system to become less statistically accurate as the characteristics of the population it is applied to change over time. This may involve measuring the distance between classification errors over time – the sketch below shows one simple way to track this.
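As a minimal sketch, assuming periodic evaluation against correctly labelled test data (the baseline, tolerance and monthly figures below are hypothetical):

```python
# Minimal drift-monitoring sketch: track the model's error rate over
# successive evaluation windows and flag windows that drift materially
# above the baseline error rate accepted at deployment.

def error_rate(predictions: list, labels: list) -> float:
    """Fraction of predictions that disagree with the correct labels."""
    return sum(p != t for p, t in zip(predictions, labels)) / len(labels)

def drifted_windows(window_error_rates: list, baseline: float,
                    tolerance: float = 0.05) -> list:
    """Indices of windows whose error rate exceeds baseline + tolerance."""
    return [i for i, rate in enumerate(window_error_rates)
            if rate - baseline > tolerance]

# Hypothetical monthly evaluations against correctly labelled test data.
baseline = 0.08                       # error rate accepted at deployment
monthly = [0.09, 0.10, 0.12, 0.16]    # creeping upwards as the population shifts
flagged = drifted_windows(monthly, baseline)
if flagged:
    print(f"Possible model drift in windows {flagged}: review, retrain or recalibrate.")
```

The appropriate metric and tolerance will depend on the use case; the point is that statistical accuracy should be measured continuously during deployment, not just at procurement.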
  8. Transparency: Privacy notices need to be carefully adapted to reflect any new AI-related processing activities. This includes transparency around the existence of solely automated decision-making, including profiling, where this may produce legal or similarly significant effects. In this scenario, controllers must provide meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject. There are different ways to explain AI decisions, including:
    • process-based explanations: demonstrating how you follow good governance processes and best practices throughout design and use;
    • fairness explanations: helping people understand the steps you take to ensure AI decisions are unbiased and equitable; and
    • outcome-based explanations: covering how the AI output meets criteria you established in the design process, and any human involvement in decisions.
  9. The effective exercise of data subject rights: Data subjects must be able to exercise applicable rights at all potentially relevant stages – from the training data, through deployment and the resulting predictions, to any personal data contained in the model itself. You will need to facilitate the exercise of the GDPR rights in Articles 13 to 21 (including the rights to information, access, rectification, erasure and objection – to name a few). Under Article 22, data subjects have the right not to be subject to a decision based solely on automated processing, including profiling, where this produces legal or similarly significant effects.
  10. Prepare for a higher risk of regulatory scrutiny: There’s no getting away from the fact that regulators see AI as a core focus for the coming months. We are seeing external-facing use cases of third-party AI solutions attract an unprecedented amount of regulatory attention from data protection authorities across Europe and beyond. Vendor due diligence and a strong narrative around compliance (especially in your DPIA) will be an essential shield against enforcement.

Are you procuring an AI solution? Then get in touch to find out more.

Author

Jaspreet Takhar is a senior associate in Baker McKenzie's London office and advises market-leading tech and healthcare companies on issues at the cutting edge of digital health.

Author

Julia Gillert is Of Counsel at Baker McKenzie's London office, and has shaped her practice to focus exclusively on regulatory matters affecting the Healthcare & Life Sciences industry.