Representatives from the European Parliament, the Council of the European Union, and the European Commission agreed on wording for Article 6 of the draft AI Act, which addresses important classification rules for high-risk artificial intelligence (AI) systems, but other issues remain open.
The European Parliament approved a draft AI Act in May 2023. Prior to finalizing the AI Act, the European Parliament, the Council of the European Union (representing the EU member states), and the European Commission must work together to adopt a final version of the law, which the European Parliament and the Council would then formally approve. This negotiation process among the three European bodies is referred to as the “trilogue.”
An important trilogue session concluded on October 25, 2023, with negotiators hoping to make as much progress as possible on various contentious issues. Various EU laws, including the General Data Protection Regulation (see our earlier LawFlash), already apply to certain aspects of AI systems, but the AI Act would establish a comprehensive framework applicable to the development and use of AI systems.
It appears that, as a result of the trilogue process, there will be a certification regime for AI systems intended for use in scenarios deemed high risk. However, an AI system will not be classified as high risk under the AI Act if it satisfies four conditions demonstrating that it performs only “purely accessory” tasks.
Consumer and privacy activists in Europe have expressed concerns about allowing companies to determine whether their own AI systems are high risk. While there is consensus around the need for legal certainty, various questions remain, such as how to determine whether an AI system qualifies for an exemption and what burden of proof applies to establish that a particular AI system satisfies one of the relevant exemptions.
The European Commission must develop a comprehensive list of practical examples of high-risk use cases as well as use cases that are not high risk. Additionally, the European Commission wants to retain the authority to modify these exemptions: where there is concrete and reliable evidence that an AI system does not pose a significant risk to people, yet the system does not satisfy any of the exemptions identified above, the European Commission would like the authority to determine that the system is not high risk, allowing it to escape the burdensome legal requirements that would otherwise apply. However, this issue remains open.
The negotiators have also made progress on the use of AI in law enforcement and have proposed text addressing foundation models and general-purpose AI, although they did not agree on specific language for these issues. Council negotiators believe that law enforcement requires some of the AI tools that the European Parliament negotiators want to ban. Police and intelligence agencies are unlikely to easily forgo facial recognition and remote biometric identification systems, which they regard as valuable tools, while consumer advocates are lobbying against the use of these systems by law enforcement and intelligence agencies.
The outcome of the negotiations on the definition of AI is still unclear. Industry representatives argue that the definition of AI should align with international frameworks, such as those released by the Organisation for Economic Co-operation and Development (OECD) and the United States’ National Institute of Standards and Technology (NIST), to foster international harmonization and market access.
Reportedly, there is agreement on a tiered approach to foundation models, with stricter obligations on “high-impact” models under the centralized supervision of a new European Commission AI Office. However, the criteria for designating a foundation model as “high impact” still require definition, and the representatives of the European Parliament currently oppose the idea of financing the AI Office with a management fee.
Negotiators have considered several other contentious elements of the AI Act that remain open, including the development and use of foundation models and general-purpose AI. The issues that were not resolved during the October trilogue session were pushed to the next session, scheduled for December 6. This ambitious schedule may delay the adoption of the AI Act until 2024, although the timetable remains unclear.
The Spanish presidency of the EU Council has repeatedly maintained that it plans to reach full agreement on the AI Act by the end of 2023, making the December trilogue meeting a high-stakes affair. Nine “technical meetings” of the trilogue negotiators are scheduled to find common ground on the AI Act’s most complex and consequential aspects. The negotiators are working to assemble a “package deal” for December 6 that would address compromises on proposed bans of high-risk AI systems, law enforcement exceptions, the fundamental rights impact assessment, and sustainability provisions.
Failure to reach full agreement on these issues could push negotiations into early 2024, increasing the risk of further delays given the June 2024 European Parliament elections.
If you have any questions or would like more information on the issues discussed in this LawFlash, please contact any of the following: