Beginning in 2019, the US federal government ramped up its involvement in, and regulation of, the use of artificial intelligence (AI). The federal government is grappling with how to responsibly incentivize AI innovation while preserving US leadership in the field. The duality of promoting AI while regulating it may seem counterintuitive, but it is the overarching takeaway from the current state of US AI regulation.
AI refers to computerized systems that work and react in ways commonly thought to require intelligence, such as the ability to learn, solve problems, and achieve goals under varying conditions. AI encompasses a range of methodologies and application areas, including machine learning, natural language processing, and robotics.
Federal government review of AI is not new. In 2016, the government released a report presenting a risk-based approach to adopting regulations addressing the use of AI. The federal government's pace, however, has increased substantially over the past few years. In late 2020, the Office of Management and Budget (OMB) issued final guidance for the regulation of AI, establishing a general framework for government agencies proposing regulatory guidance. For proposed regulations, agencies must conduct a regulatory impact and risk analysis and demonstrate a clear need for the new regulation, a policy that reflects a clear preference for AI development. A notable exception to this growing tolerance for the use and implementation of AI is AI used in connection with weapons or in ways that implicate national defense.
In March 2022, the National Institute of Standards and Technology (NIST) posted a special publication detailing the challenges of AI and setting the stage for a standard for identifying and managing bias in AI. NIST plans to publish a final seven-point AI risk management framework in January 2023. At a high level, the framework will provide guidance to industry stakeholders on incorporating trustworthiness into the design, development, and use of AI systems. Although the framework is not mandatory, it is likely to influence AI industry standards.
In October 2022, the White House Office of Science and Technology Policy issued an AI Bill of Rights white paper identifying five nonbinding backstop principles to guide the design, use, and deployment of AI systems to protect the US public in the age of AI. The white paper applies broadly to all automated systems but does not prohibit any AI deployments or provide an enforcement mechanism.
Established in 2021, the National AI Research Resource (NAIRR) Task Force is composed of 12 members drawn from the federal government, academic institutions, and the private sector. The NAIRR is envisioned as a large-scale, shared cyberinfrastructure to fuel AI discovery and innovation. The task force will submit a final report to the president and Congress in early 2023 detailing its vision and implementation plan.
Additionally, the Federal Trade Commission (FTC) issued an advance notice of proposed rulemaking in August 2022 seeking to address “commercial surveillance” and data security practices as applied to AI. The FTC is approaching AI through the lens of practices that affect consumers and specifically sought comment on “whether it should implement new trade regulation rules” governing AI-powered technologies.
With so many rulemaking proceedings in progress, companies can take proactive steps now to prepare for potential rules in the new year, including (1) identifying AI applications within their operations; (2) conducting documented risk assessments; and (3) integrating structural compliance measures throughout the organization.
If you are interested in our Artificial Intelligence Boot Camp, we invite you to subscribe to Morgan Lewis publications to receive updates on trends, legal developments, and other relevant areas.