Tech & Sourcing @ Morgan Lewis

TECHNOLOGY TRANSACTIONS, OUTSOURCING, AND COMMERCIAL CONTRACTS NEWS FOR LAWYERS AND SOURCING PROFESSIONALS

AI Compliance: A Quick Reminder

Artificial intelligence (AI) is reshaping modern society, enabling the automation and modification of routine human activities and, consequently, enhancing efficiency and productivity. Like any technological development, AI presents both benefits and risks. Concerns include potential biases, privacy intrusions, and ethical dilemmas.

According to the Artificial Intelligence Index Report 2024, a 2023 Ipsos survey found that 66% of respondents anticipate AI will significantly change their lives in the near future, while 54% believe its benefits outweigh its downsides. Public sentiment is mixed, however: 52% reported feeling nervous about AI products and services, up 13 percentage points from 2022. Globally, the most significant concerns revolve around AI being misused for harmful purposes (49%), its impact on employment (49%), and potential violations of privacy (45%).

Authorities around the globe are trying to keep pace with AI’s rapid development and to mitigate the associated risks and public concerns through regulation. A landmark example is the EU AI Act, the world’s first AI-focused legal framework governing the development, deployment, and use of AI systems and general-purpose AI models. The EU AI Act entered into force on August 1, 2024, and its first set of substantive rules took effect on February 2, 2025, focusing on (1) prohibited AI systems and (2) AI literacy obligations.

Prohibited AI Systems

Under the EU AI Act, certain AI systems are prohibited due to an unacceptable risk to fundamental rights. These include AI systems that:

  • use subliminal, manipulative, or deceptive techniques that distort behavior and impair decision-making, causing significant harm;
  • exploit vulnerabilities related to age, disability, or socioeconomic status to distort behavior, leading to harm;
  • provide biometric categorization that infers or deduces membership in sensitive categories, such as race, political opinions, or sexual orientation;
  • provide social scoring that results in unfair or detrimental treatment based on behavior or personal traits;
  • provide criminal risk assessment based solely on profiling or personality traits;
  • create facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage;
  • provide emotion recognition in the workplace or education; and
  • conduct real-time remote biometric identification in public spaces for law enforcement purposes.

These prohibitions come with certain qualifiers, as well as safety- and enforcement-related exemptions. To ensure the consistent and uniform application of the EU AI Act in this respect, the European Commission published two sets of draft guidelines in February 2025: (1) the Guidelines on AI system definition and (2) the Guidelines on prohibited AI practices.

AI Literacy and Compliance

AI literacy is another crucial aspect of the EU AI Act and forms part of its governance framework. In practice, it means that employers must ensure that employees involved in deploying AI understand how these systems work, the risks associated with them, and any challenges they may present.

Article 4 of the EU AI Act requires providers and deployers of AI systems to take measures to ensure that their personnel possess a sufficient level of AI literacy, “taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.” The goal of this obligation is to foster a culture of responsible AI use that supports both compliance and innovation.

This post serves as a brief reminder of the need to comply with these regulatory requirements. For further details on compliance and the steps to be taken, refer to our publications The EU AI Act Is Here: 10 Key Takeaways for Business and Legal Leaders and The EU AI Act Compliance: 10 Key Steps for Providers and Deployers of AI Systems.