Tech & Sourcing @ Morgan Lewis

TECHNOLOGY TRANSACTIONS, OUTSOURCING, AND COMMERCIAL CONTRACTS NEWS FOR LAWYERS AND SOURCING PROFESSIONALS

EU Commission Releases Proposal for Regulating Artificial Intelligence in Europe

The EU Commission recently released its proposal for a European Union–wide legislative framework on artificial intelligence (AI). The EU Commission’s intention is that the proposed regulation on AI will provide greater safety and stronger protection of fundamental rights, while enabling trust and supporting innovation.

The proposal was released as part of the EU Commission’s “AI package,” which consists of a proposed regulatory framework on AI (the Artificial Intelligence Act) and a revised coordinated plan on AI. The AI package is intended to promote the development of AI while addressing the potential high risks AI poses to individuals.

The European Parliament and EU member states will need to adopt the EU Commission’s proposal on AI under the ordinary legislative procedure. Once adopted, the regulation will be directly applicable across the European Union, and member states will be expected to designate one or more national competent authorities to supervise its application and implementation, as well as to carry out market surveillance activities. The timeline for adoption by the European Parliament and EU member states is not yet clear.

The proposed regulatory framework introduces a set of rules applicable to the design, development, and use of certain high-risk AI systems. The legislation is intended to apply to both public and private actors inside and outside the European Union, provided that the AI system is available on the EU market or its use affects people located in the European Union. It could therefore apply equally to suppliers of AI and to users of AI, in particular users of high-risk AI systems. The EU Commission singles out remote biometric identification systems, which will always be considered high risk and subject to strict requirements.

The Artificial Intelligence Act defines high-risk use cases of AI systems as instances “where the risks that the AI systems pose are particularly high.” High-risk AI systems or applications could include those that manipulate human behavior to circumvent their users' free will; systems that allow “social scoring” by governments; systems that support migration, asylum, and border control management; or systems developed to evaluate creditworthiness.

The draft act proposes to do the following:

  1. Enhance transparency and minimize risk by providing technology-neutral definitions of AI systems
  2. Avoid regulatory overreach by only intervening where this is strictly needed, i.e., in cases of high-risk uses of AI
  3. Provide that high-risk AI systems must meet a set of specifically designed requirements (including obligations to use high-quality datasets, to share adequate information with users, and to ensure appropriate human oversight, among others)
  4. Encourage the use of regulatory sandboxes, with a view to ensuring that innovative companies, small and medium enterprises, and startups continue innovating in compliance with the new rules

One of the effects of the Artificial Intelligence Act would be to ban AI systems considered a clear threat to the safety, livelihoods, and rights of people. For example, the use of real-time remote biometric identification systems (such as facial recognition tools) for law enforcement purposes would by default be prohibited in publicly accessible spaces, and would be allowed only when exceptionally authorized by law, with such authorization subject to specific safeguards.

Applications such as spam filters or AI-enabled video games are considered low risk and will be subject to only minimal transparency requirements. The EU Commission confirms in its AI package that the majority of AI systems will fall into this low-risk category.

The proposal sets out that noncompliance with the Artificial Intelligence Act could expose companies to heavy GDPR-style fines of up to 6% of their worldwide annual turnover for the preceding financial year.

Margrethe Vestager, executive vice president for A Europe Fit for the Digital Age, stated: “On Artificial Intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

This announcement from the EU Commission comes hot on the heels of an announcement from the UK government of its intention to publish a new National Artificial Intelligence Strategy later this year.