After lengthy negotiations, representatives of the EU Council, European Parliament, and European Commission have reached a compromise in principle on rules for the use of artificial intelligence (AI), ushering in new safeguards, consumer rights, product liability, and fines, among many other components.
The European Parliament’s press release references the following changes found in the agreement:
Importantly, however, the final text of the AI Act has not been agreed yet. Notably, many of the key details of how the AI Act will, in practice, accomplish the outcomes set out above remain to be finalized during the first quarter of 2024, as we discuss in more detail below.
Recognizing the potential threat to citizens’ rights and democracy posed by certain applications of AI, the negotiators agreed to prohibit
For so-called high-risk AI systems, the negotiators agreed to include a mandatory fundamental rights impact assessment, among other requirements. AI systems in this category include those used in the insurance and banking sectors, as well as systems used to influence the outcome of elections and voter behavior. The negotiators also agreed on a complaint mechanism for individuals “to receive explanations about decisions based on high-risk AI systems that impact their rights.”
For GPAI, there will be two levels of regulation, distinguishing high-impact from low-impact systems. High-impact GPAI models with systemic risk will face very stringent obligations: if a model meets certain criteria, its provider will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the European Commission, ensure adequate cybersecurity measures are in place, and report on the model’s energy efficiency.
All GPAI model providers must adhere to transparency requirements, such as maintaining technical documentation and disseminating relevant information (e.g., summaries of training content) to downstream operators of high-risk applications.
It appears that the European Parliament achieved most of what it wanted. However, there are only limited circumstances under which biometric identification systems may be used in publicly accessible spaces, i.e., for law enforcement purposes, subject to a court order and only for the prevention of serious crimes. Moreover, real-time remote biometric identification (RBI) systems will need to comply with strict conditions. Countries such as France have expressed particular interest in these RBI tools (e.g., to ensure the safety of the 2024 Olympic Games in France).
It will take time for the drafters to iron out the technical details and create a viable legal draft that the EU Council and Parliament can vote on before the EU elections. The ambitious plan is that the AI Act will be enacted in early spring of 2024, with a two-year grace period for compliance. Prohibited systems will have a shorter, six-month period to comply. High-risk AI models are also subject to a 12-month period for compliance with the transparency and governance requirements.
We expect this process to produce a very complex legal text, which is unsurprising given the complexity of the concepts it is required to tackle. Details might change as the text is fine-tuned at the technical level in the coming weeks. Businesses and legal experts within and outside the EU will likely scrutinize the forthcoming texts to assess whether they contain provisions or mechanisms that would allow businesses to avoid having their AI models qualified as high-risk AI or high-impact GPAI models. For businesses whose use of AI does not fall within either of these two categories, the compliance risk should be relatively manageable.
Of note is the AI Act’s definition of AI, which follows the updated OECD standard: “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The AI Act will have an extraterritorial effect similar to that of the EU General Data Protection Regulation (GDPR). Noncompliance with the AI Act could lead to fines at a higher level than even GDPR fines, ranging from €7.5 million (or 1.5% of global turnover) to €35 million (or 7% of global turnover), depending on the infringement and the size of the company. For this reason alone, it will be prudent for developers and users of AI to monitor developments and seek compliance with the new European rules at an early stage.
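To illustrate the scale of these penalty bands, the following Python sketch applies a GDPR-style “whichever is higher” mechanism to the figures quoted above. The `fine_ceiling` helper is ours, and the assumption that the AI Act mirrors the GDPR’s cap mechanism is ours as well, pending the final text; the sketch is illustrative only and not a compliance calculation.

```python
def fine_ceiling(global_turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the maximum possible fine under a GDPR-style cap:
    the fixed amount or the turnover-based amount, whichever is higher.
    (Hypothetical helper; the final AI Act text will set the exact tiers.)"""
    return max(fixed_cap_eur, global_turnover_eur * turnover_pct / 100)

# Company with EUR 2 billion global turnover, top tier (EUR 35M or 7%):
top_tier = fine_ceiling(2_000_000_000, 35_000_000, 7)      # 140,000,000.0

# Same company, lowest tier quoted (EUR 7.5M or 1.5%):
low_tier = fine_ceiling(2_000_000_000, 7_500_000, 1.5)     # 30,000,000.0
```

For a smaller company, the fixed amount dominates: with EUR 100 million turnover, 7% is only EUR 7 million, so the EUR 35 million fixed cap would apply under this assumed mechanism.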
Many open questions remain at this time, including the following:
If you have any questions or would like more information on the issues discussed in this LawFlash, please contact any of the following: