At the end of last week, two European Parliament committees published the latest version of the EU AI Act. The new draft reflects months of political wrangling, but it also demonstrates that EU legislators have listened to the (many) criticisms levelled at the EU AI Act to date. So what’s new?
We’ve set out our top ten changes:
- Higher penalties: If you thought the fines under the previous proposal were eye-watering, they’re about to get steeper. The most serious breaches will now be subject to fines of up to EUR 40 million or 7% of global annual turnover (up from EUR 30 million or 6% of global annual turnover).
- New (and better) definition of AI: The previous definition of an AI system was strongly criticised for capturing too broad a suite of software applications. EU legislators have now aligned with the much tighter OECD definition, i.e. “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments”. This means there’s a stronger focus on machine learning and deep learning systems.
- Finally, there are rights and recourse for EU consumers(!): One of the most powerful criticisms of the previous version of the AI Act was that, in reality, it had a huge blind spot – it set out no rights, no recourse and effectively no role for individuals. EU legislators have paid attention and placed a new focus on these “affected persons”. They now have the right to lodge complaints with supervisory authorities and the right to an explanation of decision-making from deployers of high-risk systems, and there’s potential for representative actions. About time too!
- Foundation models are now in scope (including generative AI): EU legislators have spent months wringing their hands over how to regulate foundation models, i.e. AI models trained on broad data at scale, designed for generality of output, and adaptable to a wide range of tasks. They’ve finally settled on an approach – these won’t be regulated as high-risk AI systems, but they are subject to stricter requirements around data governance, technical documentation and quality management systems. For generative AI (e.g. ChatGPT and DALL-E), you’ll need to notify individuals that they are interacting with an AI system, and make available a summary of any training data that is protected under copyright law.
- Changes to the list of high-risk AI systems: Every category of high-risk system listed in Annex III has been amended, including those covering HR, education, law enforcement and biometric systems. Importantly, a system listed in Annex III will now only be considered high-risk if it clears a new hurdle and poses a “significant risk to the health, safety or fundamental rights of natural persons”. One point has not changed, however – high-risk systems still include AI systems intended to be used as a safety component in medical devices, and AI systems already required to undergo third-party conformity assessment under the EU Medical Device Regulation.
- A new focus on the environment and climate change: A disturbing fact – according to research published in 2019, training a single AI model may emit more than 284 tonnes of carbon dioxide equivalent (nearly five times the lifetime emissions of the average US car, including its manufacture). The latest draft takes these concerns seriously, and there’s a new focus on reducing energy consumption and increasing energy efficiency (tied to various record-keeping and documentation requirements).
- More obligations for deployers (and everyone else in the supply chain too): Deployers are the parties that deploy an AI system under their own authority, and they now face various enhanced compliance obligations. These include a requirement to conduct a fundamental rights impact assessment (comparable to a data protection impact assessment) and to provide certain information to individuals subject to a decision by a high-risk AI system. Importers, distributors, authorised representatives and providers also face enhanced compliance obligations. In the previous draft, “deployers” were (confusingly) called “users” – many assumed the concept of “users” was intended to capture “affected persons”, which it was not – so this is another welcome development.
- A shorter transition period: Once the AI Act passes, there will be a grace period before it applies. The latest draft proposes a period of two years, whereas the previous draft set out three. There was already serious concern that three years would not be enough time to build the infrastructure necessary for compliance across stakeholders (regulators, notified bodies and industry); two years may be a step too far. Expect the two-year proposal to go down like a lead balloon, and to be a hot topic in the negotiations.
- General principles for AI: There’s a new set of six general principles applicable to all AI systems: (1) human agency and oversight; (2) technical robustness and safety; (3) privacy and data governance; (4) transparency; (5) diversity, non-discrimination and fairness; and (6) social and environmental well-being. All operators are expected to make their best efforts to comply with these principles, and they should be a first port of call for organisations designing ethical AI principles as a governance tool.
- Requirements for training data: Training, testing and validation datasets must be “relevant, sufficiently representative, appropriately vetted for errors and be as complete as possible in view of the intended purpose”. For anyone who has worked on licensing medical datasets to train AI models, it will be a relief to see the EU take a more pragmatic approach (at least compared to previous drafts of the AI Act) to the realities of data quality for data generated in, for example, public health settings.
So what’s next?
The European Parliament is scheduled to vote on the draft in June. Following this, the trilogue – the three-way negotiation between the European Parliament, the Commission and the Council – will finally get underway. If this all goes smoothly (a big ‘if’), the AI Act could be formally adopted as early as the beginning of 2024, at which point the transition period will begin.
Are you building your AI Act compliance programme? Then get in touch to find out how we can help.