LawFlash

California’s SB 1047 Would Impose New Safety Requirements for Developers of Large-Scale AI Models

August 29, 2024

On August 28, 2024, the California State Assembly passed proposed bill SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which aims to add new requirements to the development of large AI models by setting out various testing, safety, and enforcement standards. The proposed bill seeks to curb AI’s “potential to be used to create novel threats to public safety and security,” such as weapons of mass destruction and cyberattacks.

The bill will now return to the Senate floor for a final vote and, if approved, Governor Gavin Newsom will have until September 30, 2024 to sign or veto the bill.

WHICH AI DEVELOPERS WILL BE AFFECTED?

The bill would apply only to developers of “covered models,” a defined term whose scope shifts over time based on a computing power threshold. Prior to January 1, 2027, a “covered model” is an AI model that is either (1) trained using computing power “greater than 10^26 integer or floating-point operations” (FLOP) at a cost of over $100 million to develop or (2) created by fine-tuning a covered model using computing power of at least 3 × 10^25 integer or floating-point operations at a cost of over $10 million.[1] The 10^26 figure is the same computing threshold set in the Biden administration’s recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.[2]

After January 1, 2027, the cost thresholds will remain the same (adjusted for inflation), but the computing power thresholds will be determined by California’s Government Operations Agency.[3] Notably, the pre-2027 computing power threshold exceeds the computing power used to train any current AI model,[4] but the next generation of highest-capability models is expected to exceed it.
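For illustration only, the pre-2027 definition amounts to a two-prong numerical test. The sketch below is a hypothetical rendering of the thresholds described above, assuming the fine-tuning prong is read as “equal to or greater than” 3 × 10^25 operations; the function and variable names are ours and carry no legal significance.

```python
# Hypothetical sketch of SB 1047's pre-2027 "covered model" test.
# Thresholds are taken from the bill text as summarized above; this is
# an illustration, not legal guidance.

TRAIN_FLOP = 1e26          # > 10^26 integer or floating-point operations
TRAIN_COST = 100_000_000   # > $100 million to develop

TUNE_FLOP = 3e25           # >= 3 x 10^25 operations (assumed reading)
TUNE_COST = 10_000_000     # > $10 million

def is_covered_model(train_flop: float = 0.0, train_cost: float = 0.0,
                     tune_flop: float = 0.0, tune_cost: float = 0.0) -> bool:
    """Return True if either prong of the pre-2027 definition is met."""
    trained = train_flop > TRAIN_FLOP and train_cost > TRAIN_COST
    fine_tuned = tune_flop >= TUNE_FLOP and tune_cost > TUNE_COST
    return trained or fine_tuned

# Example: a model trained with 2 x 10^26 FLOP at a cost of $150 million
# would satisfy the first prong.
print(is_covered_model(train_flop=2e26, train_cost=150_000_000))  # True
```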

The bill would broadly cover any AI developer that offers its services in California, regardless of whether the developer is headquartered in the state.

KEY TESTING AND SAFETY REQUIREMENTS

The bill sets out various testing and safety requirements, including the following:

  • Shutdown capabilities: Before training a covered AI model, developers must implement the ability to “promptly enact a full shutdown,” which entails halting all operations of the covered model, including training. However, the bill does not define what constitutes “prompt.”
  • Safety assessment and testing: Developers of covered models would be required to have a documented safety and security protocol to avoid “critical harm,” defined as mass casualties, at least $500 million in damage, or other comparable harms to public safety. Before using a model or making it publicly available, a developer must assess whether the model is capable of causing critical harm, record and retain the test results from that assessment, and implement appropriate safeguards.
  • Computing cluster policies: A person[5] operating a computing cluster (a network of computers connected to work together as a single system that meets certain computing power thresholds) must have policies and procedures in place for situations in which a customer uses compute resources sufficient to train a covered model. The operator must obtain identifying information for each such customer, assess whether the customer intends to use the cluster to train a covered model, and implement the capability to promptly shut down any resources being used to train or operate models under the customer’s control. As noted above, the bill does not define what constitutes “prompt.”

The bill further sets out enforcement authority and guidelines to ensure compliance: 

  • Auditing and reporting: Beginning in 2026, developers of covered models must retain a third-party auditor to perform an independent audit of the developer’s compliance each year. The bill would also require these developers to make redacted copies of their safety and security protocols and auditors’ reports public and, upon request, to provide unredacted copies to the California Attorney General (AG). The bill further requires developers to submit annual compliance statements to the AG and to report safety incidents to the AG within 72 hours.
  • AG civil suits: The bill authorizes the AG to bring a civil action for violations of the bill that cause death or bodily harm; harm to, theft of, or misappropriation of property; or imminent risks to public safety. The AG may seek civil penalties, monetary damages (including punitive damages), injunctive relief, or declaratory relief. Civil penalties are capped at 10% of the cost of the computing power used to train the covered model.
  • Whistleblower protections: The bill provides certain whistleblower protections for employees. For example, a developer of a covered model may not prevent an employee from disclosing, or retaliate against an employee for disclosing, to the AG or the Labor Commissioner that the developer is out of compliance with the bill or that its AI model is unreasonably dangerous.

RECENT MAJOR AMENDMENTS

The legislation has been amended from earlier versions to (1) remove criminal penalties and (2) allow civil penalties only where actual harm has occurred or an imminent threat to public safety exists. We understand that these changes reflect input from the technology community intended to ensure that such laws do not stifle innovation.

KEY TAKEAWAYS

California has demonstrated a keen and focused interest in regulating AI and intends to be at the forefront of doing so nationally, while seeking to balance regulation against the boom of AI innovation within the state. The bill currently focuses on only the largest and most powerful AI models and, given its computing power and cost thresholds, its requirements are less likely to affect AI startups, at least in the near term.

We are following this bill closely and will report back on any further developments.

Contacts

If you have any questions or would like more information on the issues discussed in this LawFlash, please contact any of the following:

AI Contacts
Dion M. Bregman (Silicon Valley)
Andrew J. Gray IV (Silicon Valley)
Minna Lo Naranjo (San Francisco)
David Plotinsky (Washington, DC)
Doneld G. Shelkey (Boston / Pittsburgh)
State AG Contacts
Diana Cortes (Philadelphia)
Nicholas M. Gess (Washington, DC)
Rebecca J. Hillyer (Philadelphia)
Martha B. Stolley (New York)

[1] S.B. 1047 § 3(e)(1)(A)(i), (ii) (Cal. 2024). “Fine-tuning” is the process of taking a pretrained model and further training it on additional data to adapt it to a particular task or use.

[2] Executive Order No. 14110, 88 Fed. Reg. 75,191 (Oct. 30, 2023). While the executive order does not directly regulate private industry outside of potential national security implications, it requires federal agencies including the Department of Commerce to issue standards and guidance and use their regulatory authority to monitor AI.

[3] S.B. 1047 § 3(e)(1)(B).

[4] Computation Used to Train Notable Artificial Intelligence Systems, Our World in Data, last updated August 5, 2024.

[5] SB 1047 defines a “person” as an individual, proprietorship, firm, partnership, joint venture, syndicate, business trust, company, corporation, limited liability company, association, committee, or any other nongovernmental organization or group of persons acting in concert. S.B. 1047 § 3(m) (Cal. 2024). A “developer” is defined as a “person that performs the initial training of a covered model,” whether by training or by fine-tuning a model at the computing power and cost thresholds specified in the bill. Id. § 3(i).