LawFlash

The EU AI Act Is Here: 10 Key Takeaways for Business and Legal Leaders

July 26, 2024

The European Union’s new AI Act (Act) will come into effect on August 1, 2024. The Act is the world’s first comprehensive law focused on artificial intelligence and machine learning (collectively, AI). It will have a sweeping impact on many businesses, including those operating outside the EU, that currently design, develop, integrate, or use AI systems or models, or plan to do so in the future. So, what do business and legal leaders need to know about this landmark new law?

 

1. What kinds of AI will the Act apply to?

The Act will regulate two kinds of AI: “AI systems” and “general purpose AI models” (GPAI models). Both are defined in broad terms, and drawing the boundary between them and traditional (non-AI) software systems may depend on delegated legislation and regulatory guidance that has yet to be issued.

AI systems are defined as “machine-based system[s]” designed to operate with “varying levels of autonomy” (and which may “exhibit adaptiveness”) that infer outputs from the inputs they receive. These outputs may include “predictions, content, recommendations, or decisions that can influence physical or virtual environments.” Depending on whether an AI system is treated as “prohibited,” “high-risk,” so-called transparency risk, or minimal risk, the Act will apply a tiered set of obligations tied to the potential “risks” arising from its use.

[Infographic: EU AI Act Chart]

GPAI models are defined as AI models “trained with a large amount of data using self-supervision at scale,” which display “significant generality,” “competently perform a wide range of distinct tasks,” and may be integrated into a variety of downstream systems or applications. (For example, OpenAI’s GPT large language models.) In turn, depending on whether the GPAI model is treated as involving “systemic risk” or not, the Act will apply a tiered set of obligations tied to potential “risks” arising from its use.

The Act imposes fewer obligations on GPAI models than on AI systems.

 

2. Which stakeholders in an AI ecosystem will the Act apply to?

The Act applies to multiple stakeholders across the AI ecosystem: “providers,” “deployers,” “importers,” “distributors,” “representatives,” and “affected persons” in the EU. However, the Act’s obligations primarily apply to “providers” and “deployers.” Importantly, individuals (affected persons) also have rights under the Act, including, in certain circumstances, the right to obtain explanations of decisions made by “high-risk” AI systems and the right to lodge complaints with regulators.

A company that develops an AI system or GPAI model and places it on the EU market under its own name or trademark will typically be treated as the “provider.” By contrast, a company that uses an AI system in its business will typically be regarded as a “deployer.” Whether a company qualifies as a provider will be important because the bulk of the Act’s obligations apply to providers.

The distinction between a provider and a deployer may not always be clear in practice. For example, a deployer may be treated as a provider where it materially customizes or white-labels a previously implemented AI system. A company’s legal and technology teams will therefore need to collaborate closely on AI design and implementation.

 

3. Will the Act apply to companies that do not have offices in the EU?

Yes, the Act is intended to have extraterritorial effect and will apply to companies without a physical presence in the EU in certain circumstances. Notably, the Act will apply to

  • providers, including those established outside the EU, that place AI systems or GPAI models on the EU market or “put [them] into service” in the EU; and
  • deployers that have their place of establishment, or are located, within the EU.

Importantly, the Act will also apply to both providers and deployers to the extent that the “output” of the AI system is “used in the EU.” In other words, AI-generated predictions, content, recommendations, or decisions, if used in the EU, could result in the application of the Act in perhaps unexpected circumstances. For example, a deployer’s use in the EU of outputs generated by an AI system operated from abroad may trigger the application of the Act.

 

4. Does the Act itself contemplate any exemptions from its application?

Yes, there are certain limited exemptions from the application of the Act—for example:

  • AI systems and GPAI models released under free and open-source licenses—however, “prohibited,” “high-risk,” and certain other AI systems and GPAI models will not benefit from this exemption;
  • certain R&D activities occurring before the AI system is placed on the market or put into service, provided these activities occur outside real-world conditions;
  • AI systems used by human beings in purely personal, non-professional activities; and
  • AI systems exclusively used for national security and defense purposes.
 

5. What are ‘prohibited’ AI systems under the Act?

The Act regulates AI systems and GPAI models based on “risks” said to arise from their use. Reflecting this risk assessment, the Act prohibits, effective February 2, 2025, AI systems that (in summary):

  • deploy subliminal techniques beyond a person’s consciousness to materially distort a person’s behavior;
  • purposefully manipulate or deceive to materially distort a person’s behavior;
  • exploit the vulnerabilities of a person (for example, age and disability) to materially distort a person’s behavior;
  • evaluate, classify, or score persons based on their social behavior or personal or personality characteristics;
  • assess or predict the risk of a person committing a criminal offense based solely on profiling or personality traits;
  • conduct facial recognition by untargeted scraping of facial images from the internet or CCTV;
  • perform emotion inference in the workplace or in educational institutions;
  • use biometric categorization systems to infer sensitive characteristics such as race, political opinions, religious beliefs, or sexual orientation; or
  • use real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes.

Importantly, these prohibitions are subject to certain qualifiers, such as materiality of harm thresholds, and limited safety-related and law enforcement-related exemptions. In addition, the scope of these prohibitions may also depend on (yet to be issued) delegated legislation and guidance.

 

6. What are ‘high-risk’ AI systems and the key obligations applicable to such AI systems?

The Act focuses on the regulation of “high-risk” AI systems. However, the Act’s scheme for determining what counts as a “high-risk” AI system is complex. In summary, an AI system is “high risk” if it

  • is itself a product regulated under other EU legislation set out in the Act;
  • is a safety component of a product regulated under other EU legislation set out in the Act; or
  • meets the description of the AI systems listed as “high risk” in the Act.

AI systems in the first two categories above may include, for example, certain AI systems used in vehicles, transportation systems, industrial and fuel machinery, medical devices, civil aviation, and other regulated industries.

AI systems in the third category above include (for example) certain AI systems used for (in summary):

  • biometric identification and categorization, and emotion recognition;
  • critical infrastructure (including critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity);
  • educational and vocational training (including relating to access, admission, evaluation of learning outcomes, and detection of prohibited student behaviors);
  • employment, “workers’ management,” and access to self-employment (including recruitment or selection of candidates, and evaluation of performance);
  • access to certain private and public services (including in assessing eligibility for public assistance benefits, healthcare services, creditworthiness, and insurance-related risk);
  • certain law-enforcement related activities; and
  • migration, asylum and border control management, and judicial functions.

Importantly, AI systems otherwise categorized as “high risk” may be exempt from such treatment if they satisfy certain materiality-of-risk or other qualifiers set out in the Act.

Providers of “high-risk” AI systems may be subject to significant obligations under the Act, including (in summary):

  • registration in a public EU database;
  • implementing risk and quality management systems;
  • effective data governance processes relating to, for example, bias mitigation and the use of representative training data;
  • transparency, including, for example, instructions for use and technical documentation;
  • human oversight, including, for example, explainability, auditable logs, and human-in-the-loop review; and
  • accuracy, resilience, and cybersecurity measures, and the reporting of “serious incidents.”

Deployers of “high-risk” AI systems may also be subject to notable obligations. For example, banks may need to conduct a “fundamental rights impact assessment” when evaluating individuals’ creditworthiness.

 

7. What are the key obligations applicable to providers of GPAI models under the Act?

The Act sets out obligations applicable to providers of all GPAI models, and additional obligations for providers of GPAI models involving “systemic risk.” The former include (in summary):

  • drawing up technical documentation, including details of the training and testing process and evaluation results;
  • drawing up information and documentation to disclose to downstream providers intending to integrate the GPAI model into their own AI system;
  • publishing a sufficiently detailed summary of the content used to train the GPAI model; and
  • developing a policy to comply with EU copyright law and related rights.

Providers of GPAI models involving “systemic risk” have additional obligations, including (in summary):

  • performing model evaluations;
  • reporting “serious incidents” involving the model; and
  • mitigating potential “systemic risks.”
 

8. Who will enforce the AI Act? What penalties apply for infringement of the Act?

The Act contemplates a complex enforcement mechanism. Enforcement action with respect to AI systems marketed in a specific EU member state will typically be led by the designated “market surveillance” authority (or authorities) in that member state. A newly created “European AI Board” will coordinate enforcement action across EU member states but will not itself possess enforcement powers. Importantly, a separate “European AI Office” will lead enforcement relating to GPAI models.

The Act allows regulators to impose significant penalties, capped at the greater of

  • 7% of global group annual revenues or €35 million (approximately $38 million) for prohibited AI system infringements;
  • 3% of global group annual revenues or €15 million (approximately $16 million) for most other infringements; and
  • 1.5% of global group annual revenues or €7.5 million (approximately $8.1 million) for supplying incorrect information.
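
By way of hypothetical illustration of how the “greater of” cap operates: a corporate group with €2 billion in annual global revenues found to have deployed a prohibited AI system could face a fine of up to €140 million (7% of €2 billion), since that figure exceeds the €35 million alternative; a group with €100 million in revenues would instead face a cap of €35 million, because the fixed amount exceeds 7% of its revenues (€7 million).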

Like the EU General Data Protection Regulation (GDPR) and EU competition law, the Act allows for parental liability (piercing the corporate veil), potentially enabling regulators to bring enforcement action against parent companies for infringements committed by affiliates under certain joint and several liability principles. As a result, a parent company may be fined a proportion of its global group revenues. This liability could extend to any entity that exercises “decisive influence” over an infringing party, potentially including private equity and venture capital sponsors. The Act also allows natural or legal persons to bring complaints before regulators, who in turn will determine whether to pursue enforcement action.

 

9. Will other laws in the EU continue to apply to AI?

Yes, other laws in the EU relevant to AI will continue to apply in parallel with the Act (notably, the GDPR, EU member state copyright laws, and product-specific regulation) unless they have been repealed or modified by the Act. The Act will apply across each EU member state without the need for national implementing legislation. The European Commission intends to issue delegated legislation to supplement the Act in certain key areas.

The GDPR contains potentially impactful AI-related restrictions relating to “automated decision-making” and “profiling.” The GDPR may therefore prove as important as the Act in regulating AI involving “personal data.” Indeed, the EU’s highest court recently broadened the scope of these AI-related restrictions, many GDPR regulators have brought high-profile AI-related enforcement actions, and certain GDPR regulators have adopted restrictive positions on data scraping.

 

10. When will the Act’s provisions start to apply?

[Infographic: EU AI Act Timeline]

The Act’s provisions will apply in stages. The prohibitions on certain AI practices apply from February 2, 2025; the obligations on providers of GPAI models apply from August 2, 2025; and most remaining provisions, including those governing “high-risk” AI systems, apply from August 2, 2026, with obligations for certain product-related high-risk AI systems following on August 2, 2027. The Act also sets different timescales for certain AI systems and GPAI models that benefit from its “grandfathering” provisions, applicable to AI already in use on certain specified dates.

HOW WE CAN HELP

Morgan Lewis lawyers are well suited to help companies navigate AI Act and related AI compliance, enforcement, and litigation matters. Our team stands ready to assist companies designing, developing, or using AI as they navigate this evolving and complex legal landscape.

Contacts

If you have any questions or would like more information on the issues discussed in this LawFlash, please contact any of the following: