On October 30, 2024, the US Department of Energy (DOE) and the Department of Commerce (DOC) announced a Memorandum of Understanding (MOU), signed earlier this year, to collaborate on safety research, testing, and standards for artificial intelligence (AI). The National Institute of Standards and Technology (NIST), a federal agency within the DOC and a leader in standards development across a range of industries, will represent the DOC under the MOU.
The MOU seeks to facilitate the evaluation and/or creation of guidelines for evaluating AI models and AI risk mitigation tools and techniques, including coordinating testing “to support the development of safe, secure, and trustworthy AI technologies” (collectively, AI Activities). While the DOE and DOC will jointly cooperate on AI Activities, the MOU also assigns specific activities for each agency to engage in.
For example, the DOE intends to
- establish joint AI Activities between DOC and DOE National Laboratories;
- share with the DOC information on the DOE’s high performance computing resources and, where appropriate, classified cloud-based testbeds; and
- develop and evaluate privacy-enhancing technologies for scientific and technical use cases, among others.
Similarly, the MOU explains that the DOC intends to
- lead evaluations of AI models for impacts on national security, public safety, and society, including cyber, biological, and chemical threats;
- negotiate with AI model companies to provide the DOE and DOC with access to their models for evaluation; and
- create guidelines for agencies to evaluate the efficacy of differential privacy guarantees, including for AI, among others.
The MOU reflects the federal government’s growing interest in ensuring the appropriate and safe use of AI and AI-enabled technologies, particularly in critical infrastructure sectors. The US AI Safety Institute (AISI), housed within NIST, is positioned to support those efforts under the MOU. The AISI was established at the direction of President Joseph Biden’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It was also recently designated, under the federal government’s first-ever National Security Memorandum on AI, as the primary US government point of contact with private sector AI developers to facilitate voluntary pre- and post-public deployment testing of AI models for safety, security, and trustworthiness.
Similar coordination efforts among federal agencies are likely to continue as the White House and regulators advance a whole-of-government approach to address the responsible use of AI. As US Energy Secretary Jennifer M. Granholm remarked when announcing the MOU, “the federal government [is] committed to advancing AI safety and today’s partnership ensures that Americans can confidently benefit from AI-powered innovation and prosperity for years to come.”
The MOU took effect when it was signed by the DOE and DOC on July 22, 2024, and August 22, 2024, respectively. It will expire in 2029 unless renewed or discontinued by the parties.
Law clerk Lea Giotto contributed to this blog post.