Insight

Existing and Proposed Federal AI Regulation in the United States

April 9, 2024

The rapid rate at which technology is advancing poses a significant challenge to global regulatory authorities, and perhaps nowhere is this more evident than with respect to artificial intelligence (AI). While AI continues to quickly develop, efforts to regulate the burgeoning technology with applications across several industries have been slower to emerge.

In the absence of overarching regulation in the United States, AI is currently governed by a mix of the federal government, state governments, industry itself, and the courts. These tools, however, are limited and have challenges of their own, including possible conflicts of interest in the case of industry self-governance, compliance issues that can arise from overlapping or conflicting regulations by multiple state governments, and the limitations on courts to adjudicate AI-related disputes within the confines of existing law.

The United States does have certain existing regulatory tools that it is leveraging to address AI, in addition to developing new regulations to manage AI-associated risks. It is important for companies operating in or doing business with the United States to understand the country’s current AI regulatory landscape, which includes both the executive branch’s development of regulatory authorities and the investigative and legislative activities of US Congress.

US Export Controls Laws and Regulations

AI has extensive military, defense, and intelligence applications, from the use of autonomous vehicles to the collection of intelligence data to the evaluation, analysis, and synthesis of large language models and vast data sets.

US export controls do not currently "control" AI as a broad category. Instead, the various components that contribute to the development of AI are controlled in a variety of ways, the majority of which still fall into an uncontrolled or lightly controlled category of the Export Administration Regulations (EAR), such as the catch-all EAR99 designation.

These components include, but are not limited to,

  • integrated circuits/semiconductors;
  • technology for designing, developing, adapting, or embedding AI functionality into products or platforms;
  • equipment to manufacture the integrated circuits/semiconductors used for AI functionality; and
  • assistance deemed to be “US support” or facilitation in these areas and tangentially covered by other direct or indirect items or activities.

While AI also has several commercial applications through the proliferation of generative platforms, the technology has not yet matured to the point of warranting a standalone AI export control category. However, the speed of development and the relatively open development environment have raised significant concerns among the United States and other governments regarding the potential for misuse. This has led to questions of whether certain AI technologies would be more appropriately controlled under the International Traffic in Arms Regulations (ITAR) rather than the EAR. While there are elements of AI overall, such as hardware, types of software modules, or technical data on system design and development, that could find their way onto the ITAR's US Munitions List, the ITAR does not currently include specifically defined AI items (whether products, technology, or software) that are subject to export control requirements.

One of the key challenges facing any potential regulatory scheme is definitional (i.e., exactly how does one define AI?). A variety of federal agencies and Congress have proposed definitions, and while some common elements exist, the distinctions raise questions about the ability of the agencies drafting the regulations to find the definition that works best within each regulatory framework.

For example, in 2018, the US Department of Defense issued its first AI Strategy. Its definition, however, was so broad as to be somewhat unhelpful. Later, the Fiscal Year 2023 National Defense Authorization Act (FY23 NDAA) provided a more granular definition with three levels of AI (basic, reactive, and capable of "thinking"), with the third level being the most difficult to address through export controls. The FY23 NDAA examined the functionality and potential applications of AI, and some terminology within the definition, such as "acting rationally," has caused consternation.


Outbound Investment and AI

In the summer of 2023, President Joseph Biden issued Executive Order (EO) 14105, which directed the US Department of the Treasury to establish new regulations to restrict certain outbound investment to countries of concern. The EO and Treasury’s anticipated regulations currently cover three categories of sensitive technology: semiconductors and microelectronics, quantum information technologies, and AI. The EO provides the administration leeway to add other technology areas or sectors, and it remains possible that new sensitive areas such as biotechnology and battery technology could be added. For now, the short list reflects the acute focus by the US government on AI.

The view of the US government is that any resources, including capital, provided by US investors to countries of concern that may help them gain an edge in a sensitive area of intense competition are problematic. At present, there are no outbound investment restrictions, as Treasury is still in the rulemaking process and has signaled it is unlikely to implement the program until 2025. Following the Advance Notice of Proposed Rulemaking (ANPRM) that accompanied issuance of the EO, the next step will be for Treasury to issue a Notice of Proposed Rulemaking (NPRM) proposing regulatory text. Some transactions will likely be prohibited under the proposed regulations, while others will be subject to notification to the government.

The ANPRM reflects the US government’s primary concern with the development of AI systems that enable the military modernization of countries of concern—including weapons, intelligence, and surveillance capabilities—and that have applications in areas such as cybersecurity and robotics. The challenge lies in regulating AI in a way that protects the assets that Treasury and the US government view as especially sensitive, while not regulating in an overbroad fashion that could be unduly burdensome or stifle innovation.

CFIUS and AI

In addition to export controls, EOs, and outbound investment, the United States is focused on AI from an inbound investment perspective. The Committee on Foreign Investment in the United States (CFIUS) reviews cross-border investments involving US businesses or assets. A review can be triggered through various means, such as transactions involving foreign control of US businesses or certain non-controlling but non-passive investments in US businesses that involve critical technology, critical infrastructure, or sensitive personal data. Both critical technology and sensitive personal data may be implicated by AI, which means that a foreign investment in a US AI business—which can include both businesses that develop AI and businesses that use AI—should be carefully evaluated for CFIUS jurisdiction.

A critical technology investment could require not just a voluntary CFIUS filing but a mandatory filing in some circumstances. It also bears mention that in September 2022, President Biden issued EO 14083 on CFIUS, directing that CFIUS should pay particular attention to certain key technologies, including AI, when assessing risk.

Additional Guidance from the Executive Branch

Issued in October 2023, EO 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, takes a “whole of government” approach to the control, management, and use of AI, defining the technology as the following:

“[A] machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments; … use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.” (15 USC § 9401(3))

EO 14110 tasks every US government agency, as well as offices related to the president, with developing working groups to evaluate the development and use of AI; developing regulations for each agency; identifying and establishing public-private engagement (including advisory committees); and focusing on the use of AI in specific technologies "to promote competition and innovation in the semiconductor industry, recognizing that semiconductors power AI technologies."

Like EOs 14105 and 14083, EO 14110 is not self-effectuating; it requires action and budget from the identified administrative agencies, which began in November 2023. In the EO, the president tasked the US Office of Management and Budget, in addition to other agencies, with issuing specific AI guidance (Guidance for Regulation of Artificial Intelligence Applications, M-21-06). To date, several agencies have appointed individuals responsible for AI, including the US Departments of State, Agriculture, Commerce, Education, and Energy; the US Department of Justice (DOJ); the National Science Foundation (NSF); and NASA. Additionally, the US Department of Homeland Security has created a working group to address AI regulatory requirements.

Over the coming months, we anticipate regulations affecting government procurement, technology development, and "ethical" use requirements for AI subject to specific agency authorities. Updates could occur to the EAR and the ITAR, as well as to regulations of the US Department of Energy (10 CFR Parts 110 and 810), the US Food and Drug Administration, and research agencies such as the NSF and the National Institutes of Health. These regulations may be significantly affected by the 2024 election and actions by Congress.

Another executive order, issued in February of this year, is EO 14117 on Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern, which was accompanied by an ANPRM issued by the DOJ. The EO notes that countries of concern can analyze and manipulate personal data using advanced technologies, such as AI, to improve their ability to identify “potential strategic advantages over the United States . . . thereby improving their ability to exploit the underlying data and exacerbating the national security and foreign policy threats.”

Partly because of the enhanced data risks posed by the use of AI by countries of concern, EO 14117 directs the DOJ to issue new regulations to restrict the transfer of sensitive data to countries of concern, and the regulations will apply to certain data-broker transactions, vendor agreements, investment agreements, and employment agreements.

Congress and AI

Leadership from the US House of Representatives and the Senate recognize the importance of AI and are currently marshaling resources to determine how best to regulate the technology. The Creating Helpful Incentives to Produce Semiconductors (CHIPS) and Science Act highlights congressional support of high-tech industrial policy. So far, movement, including bipartisan efforts, has occurred in the form of numerous hearings, across jurisdictions, covering a wide variety of issues. There are bipartisan Senate forums on AI, including a nine-part series on issues like intellectual property, workforce, privacy, and national security. Additionally, the House launched a bipartisan task force on AI, with the support of House Speaker Mike Johnson and Democratic Leader Hakeem Jeffries.

The US House Select Committee on the Strategic Competition between the United States and the Chinese Communist Party was established in the 118th Congress with the support of then-Speaker Kevin McCarthy and Democratic Leader Jeffries. The committee has investigatory but not legislative jurisdiction—it may investigate, request information, oversee conduct, and coordinate with committees of jurisdiction to develop and review legislation. Currently, the committee is investigating four US venture capital firms regarding investments in Chinese AI, semiconductor, and quantum computing companies.

In February 2024, the committee issued a report following a bipartisan investigation into five US venture capital firms that found the companies had invested funds in and provided expertise and other benefits to critical technology companies, including several aiding the Chinese military. The committee is also probing pension and endowment investments. Still, not all sentiments on AI are bipartisan. Disagreements focus on the manner in which restrictions may apply. These methods include a positive list of prohibited investments, a licensing scheme to allow some investments, or a sanctions-type approach that would limit or preclude investments in named parties.

Impending Presidential Election and AI

While early congressional interest in AI has been bipartisan, to date there are stark differences regarding AI regulation when it comes to the workplace, financial services, healthcare, and other industries.

Former President Donald Trump has vowed, if reelected, to overturn the current administration's EO on AI (EO 14110); however, regarding national security regulation and industrial policy, the future is less clear.

Key Takeaways

  • The US government is leveraging existing regulatory tools to address AI, including export controls and CFIUS
  • The government is also developing new regulatory authorities to address risks posed by AI, including outbound investment restrictions and regulation of bulk sensitive personal data and government-related data; the need for new authorities is likely to increase
  • The government is also seeking to coordinate a whole-of-government approach by clearly defining roles and responsibilities for the various executive branch agencies
  • Although the executive branch was first to begin regulating AI, Congress is rapidly increasing its own investigative and legislative activities