On July 12, 2024, the European Union published the final text of its Artificial Intelligence (AI) Act, which entered into force on August 1, 2024 and will, among other requirements, implement material cybersecurity and incident reporting obligations for companies in response to increasing cyberattacks on AI systems and models. These regulatory obligations mirror initiatives by other governments to address cyberattacks on AI systems, notably the guidelines that the United States’ National Institute of Standards and Technology (NIST) released earlier this year on preventing and mitigating such incidents. As governments intensify their efforts against these attacks, organizations should consider maintaining robust information governance and security policies and assessing the regulatory obligations and legal risks associated with cyberattacks on AI systems and models.
As AI and machine learning (collectively, AI) systems and models become more ubiquitous in the marketplace, they are increasingly the targets of cyberattacks. These technologies can be lucrative targets because they often contain vast troves of data, some of which may be commercially sensitive or personal. Attackers may target AI models to gain access to underlying information or to disrupt the model’s processes. Many leading developers of AI systems and models are taking these risks seriously.
Notably, OpenAI Inc. announced on June 13, 2024 that US General Paul Nakasone, former leader of US Cyber Command and former US National Security Agency Director, was joining its board of directors. OpenAI acknowledged that this development “underscore[d] the growing significance of cybersecurity as the impact of AI technology continues to grow.”
To begin addressing AI-related cybersecurity concerns, the US Department of Commerce’s National Institute of Standards and Technology (NIST) published guidance on January 4, 2024 that identified four specific types of cyberattacks and offered ways for companies to prevent or mitigate the impact of those attacks. This LawFlash focuses on that guidance in addition to the cybersecurity and incident reporting obligations under the EU’s new AI Act, which comes into force on August 1, 2024. The final text of the act was published on July 12, 2024.
On October 30, 2023, President Joseph Biden issued an executive order to establish standards around the use of AI.[1] Titled Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the order, among other things, directed NIST to develop guidelines and best practices for companies to follow to mitigate or avoid cyberattacks, with the stated goal of promoting “consensus industry standards” to protect AI systems.
In early 2024, NIST published Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, which identified four primary areas of adversarial machine learning (ML) attacks: (1) data “poisoning” attacks, (2) data “abuse” attacks, (3) privacy attacks, and (4) “evasion” attacks.[2] As NIST noted, “[t]he spectrum of effective attacks against ML is wide, rapidly evolving, and covers all phases of the ML lifecycle,” but these four attacks are currently prevalent and important to understand.
Each of these attacks may be easy to mount. NIST cautions that poisoning attacks, for example, can be mounted by controlling a few dozen training samples—a small percentage of an entire training dataset. Going forward, companies involved in developing or implementing AI systems may wish to consider monitoring these types of cyberattacks, as well as novel attacks that arise, as cyberattacks in the AI and ML space are continually evolving.
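To make the poisoning scenario concrete, the following is a minimal, purely illustrative Python sketch of a label-flipping poisoning attack of the kind NIST describes, in which an attacker corrupts only a few dozen training samples. The dataset, model, and sample counts are assumptions for illustration (not drawn from the NIST guidance), and the actual impact on model quality will vary widely with the data, model, and attack involved.

```python
# Illustrative sketch only: a simple label-flipping "poisoning" attack in which
# corrupting a few dozen training samples can degrade a model. The synthetic
# dataset and logistic regression model are stand-ins chosen for brevity.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a proprietary training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(features, labels):
    model = LogisticRegression(max_iter=1000).fit(features, labels)
    return accuracy_score(y_test, model.predict(X_test))

baseline = train_and_score(X_train, y_train)

# "Poison" a few dozen samples by flipping their labels, mimicking an attacker
# who controls only a small slice of the data pipeline.
rng = np.random.default_rng(0)
poisoned_idx = rng.choice(len(y_train), size=50, replace=False)
y_poisoned = y_train.copy()
y_poisoned[poisoned_idx] = 1 - y_poisoned[poisoned_idx]

poisoned = train_and_score(X_train, y_poisoned)
print(f"Accuracy before poisoning: {baseline:.3f}")
print(f"Accuracy after poisoning 50 of {len(y_train)} samples: {poisoned:.3f}")
```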
Reflecting the seriousness of the risks identified by NIST, the EU AI Act specifically acknowledges certain of these risks, including “data poisoning,” “model evasion,” and “adversarial” attacks (recital 77; article 15(5)), as well as the consequences of those risks materializing (recital 110), which could include the loss of human control, interference with critical infrastructure, disinformation, harmful bias and discrimination, and broader societal risks. In turn, like the EU General Data Protection Regulation (GDPR), the AI Act imposes cybersecurity and incident reporting obligations; those obligations are distinct from, and run in parallel with, the obligations under the GDPR and EU sector-specific laws.
Cybersecurity Obligations Under the AI Act
Notably, the AI Act requires “providers” of “high-risk” AI systems and “General Purpose AI” (GPAI) models to implement security and resilience measures appropriate to the risks involved.
Incident Reporting Obligations Under the AI Act
The AI Act also imposes incident reporting obligations on both providers and “deployers” of AI systems and GPAI models, including, in certain circumstances, where AI systems are being tested (article 60(7)). Notably, providers (and, in certain circumstances, deployers) of high-risk AI systems and of GPAI models that present systemic risks must report “serious incidents” to the appropriate governmental authorities and, in certain circumstances, to relevant participants in the AI chain (articles 55(1)(c), 73, and 26(5)). Serious incidents may include the death of, or serious harm to, a person; serious and irreversible disruption to critical infrastructure; serious harm to property or the environment; or infringement of fundamental rights laws (article 3(49)).
Importantly, the timeframes for reporting incidents under the EU AI Act are tight, even relative to those under the GDPR, and the applicable timeframe depends on the circumstances. For example, if a causal link is established between the AI system and the serious incident, the incident must be reported immediately. The act also sets specific reporting deadlines that depend on the seriousness and impact of the incident: a serious incident involving a “widespread infringement” (which could include cross-border or critical infrastructure impacts) must be reported “immediately” and not later than two days after “awareness” (article 73(3)). Similar to the reporting of “personal data breaches” under the GDPR, the initial report may be “incomplete” and followed thereafter by a “complete report” (article 73(5)).
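As a purely illustrative sketch of how a compliance team might track the deadlines discussed above, the Python snippet below models only the two scenarios mentioned in the text (a causal link established, and a “widespread infringement”). The class and field names are hypothetical, the logic is not a complete statement of article 73, and nothing here is legal advice.

```python
# Hypothetical deadline tracker covering only the scenarios described above.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class SeriousIncident:
    awareness_time: datetime          # when the provider became aware of the incident
    causal_link_established: bool     # causal link between the AI system and the incident
    widespread_infringement: bool     # e.g., cross-border or critical infrastructure impact

    def report_deadline(self) -> Optional[datetime]:
        """Latest reporting time under the scenarios discussed in the text."""
        if self.causal_link_established:
            # Must be reported immediately once the causal link is established.
            return self.awareness_time
        if self.widespread_infringement:
            # "Immediately," and no later than two days after awareness (article 73(3)).
            return self.awareness_time + timedelta(days=2)
        # Other scenarios carry their own deadlines, which this sketch does not model.
        return None

incident = SeriousIncident(
    awareness_time=datetime(2025, 1, 10, 9, 0),
    causal_link_established=False,
    widespread_infringement=True,
)
# The initial report may be incomplete and followed by a complete report (article 73(5)).
print(incident.report_deadline())
```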
While, according to NIST, defenses to cyberattacks on AI models are “incomplete at best,”[3] organizations may wish to consider adopting, in addition to their existing information security plans and policies, protective measures such as those discussed below.
The EU AI Act also sets out, by way of illustration, information security measures that may be undertaken. For example, with respect to GPAI models with systemic risks, the AI Act suggests “securing model weights, algorithms, servers, and data sets, such as through operational security measures for information security, specific cybersecurity policies, adequate technical and established solutions, and cyber and physical access controls, appropriate to the relevant circumstances and the risks involved” (recital 115).
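As one narrow, illustrative example of “securing model weights” in the sense of recital 115, the sketch below verifies weight files against known-good SHA-256 checksums before they are loaded. The file names, digests, and deployment path are hypothetical, and neither the AI Act nor recital 115 mandates this or any other particular control.

```python
# Illustrative sketch, not a prescribed control: verify model weight files
# against known-good SHA-256 digests before loading them.
import hashlib
from pathlib import Path

# Hypothetical known-good digests, recorded at release time and stored separately
# from the weight files themselves.
EXPECTED_DIGESTS = {
    "model_weights.bin": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_weights(directory: Path) -> None:
    """Raise if any weight file does not match its expected digest."""
    for name, expected in EXPECTED_DIGESTS.items():
        actual = sha256_of(directory / name)
        if actual != expected:
            raise RuntimeError(f"Integrity check failed for {name}: {actual}")
    print("All model weight files match their expected digests.")

# verify_weights(Path("/srv/models"))  # hypothetical deployment path
```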
Organizations may wish to carefully consider the regulatory and legal risks arising from both successful and unsuccessful cyberattacks on AI systems and models, which could include regulatory enforcement as well as private litigation.
In addition, organizations may be subject to notable AI-related cyber incident reporting obligations arising from longstanding laws like the GDPR and newer AI laws. In turn, organizations may need to update their existing information security and incident response plans, and conduct AI-focused cybersecurity “tabletop exercises” to reflect the unique cybersecurity risks relating to AI systems and models.
Morgan Lewis lawyers are well suited to help companies navigate AI-related enforcement and litigation matters in the European Union and United States. Our team stands ready to assist companies designing, developing, or using AI as they navigate this evolving and challenging cyber threat landscape.
If you have any questions or would like more information on the issues discussed in this LawFlash, please contact any of the following:
[1] Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Oct. 30, 2023).
[2] Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, National Institute of Standards and Technology (Jan. 4, 2024).
[3] NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems (Jan. 4, 2024).