Artificial intelligence (AI) is quickly transforming the employment landscape, automating tasks, streamlining processes, and enhancing decision-making. At the same time, the technology raises concerns about potential biases, accuracy, and increasingly complex legal compliance.
As AI’s influence grows in the United States, so does government oversight. Lawmakers and policymakers—from the Biden administration to city governments—have issued guidance, policies, and laws to govern the use of AI in the workplace, giving employers a new legal landscape to navigate.
Employers are turning to AI to make time-consuming tasks more efficient, using the technology to streamline the recruiting process, find ways to eliminate human bias, and advance diversity. However, employers should be aware that using AI is not without risk.
While employers could use AI to help increase diversity, a poorly designed or trained AI tool has the potential to discriminate on a much larger scale. Even if the algorithm ignores demographic information, certain attributes correlate with demographics. Further, biased model inputs are likely to lead to biased outputs. In an effort to predict success, AI may improperly develop correlations and assumptions based on factors that are not job related.
Aside from potential discrimination and regulatory compliance risks, the use of AI in a workplace raises concerns of potential leaks of sensitive or confidential information (of the company, candidates, employees, or third parties). There are also questions about whether something created by AI can be protected as proprietary company property, and whether the use of AI might hinder employee development.
The Federal Government
The White House
In October 2022, the White House Office of Science and Technology Policy issued the Blueprint for an AI Bill of Rights, which asserts principles and guidance around equitable access and the use of AI systems.
In October 2023, President Joseph Biden issued an executive order titled the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, requiring various government agencies to set new standards for safety and security. The order also called on developers of the most powerful AI systems to share safety test results with the federal government. Additionally, the order provides funding for research and partnerships intended to protect privacy rights and provides guidance for federal contractors on using AI in a way that mitigates discrimination.
It also creates a partnership between the US Department of Justice and federal civil rights offices on how to investigate and prosecute civil actions relating to the use of AI. Furthermore, the order directs federal agencies to publish reports on the use of AI, including how AI is impacting ongoing programs, and provides funding for government employee training on AI use.
In May 2024, the White House issued new guidance titled Critical Steps to Protect Workers from Risks of Artificial Intelligence aimed at protecting workers from risks related to an employer’s use of AI. The guidance outlines the White House’s key principles for the development and deployment of AI in the workplace.
These principles include:

- giving employees input into the way that AI is used;
- supporting ethical development of AI systems;
- establishing clear governance systems, procedures, and human oversight;
- ensuring employers are transparent with employees and job seekers about when AI is used and how it impacts their employment;
- ensuring AI systems do not violate or undermine workers’ rights to organize, health and safety rights, wage and hour rights, and anti-discrimination and anti-retaliation protections;
- using AI systems to assist, complement, and enable workers and improve job quality;
- supporting workers whose jobs are impacted by AI; and
- ensuring the responsible use of worker data.
Equal Employment Opportunity Commission
Under the Biden administration, the Equal Employment Opportunity Commission is stepping up its enforcement efforts around AI and machine learning-driven hiring tools. The agency’s efforts include the following:
State and Local Governments
From coast to coast, states such as California, Colorado, Georgia, Illinois, New York, and Washington have adopted, or are considering adopting, AI regulations. Some key regulations to consider for the following jurisdictions include:
New York City
One of the first to address AI use and employment decision-making, New York City’s AI Law, which took effect in July 2023, makes it unlawful for an employer to use an automated employment decision tool (AEDT) to screen candidates for employment or promotion in New York City unless certain criteria are met, including:
Colorado
In May 2024, Colorado became the first US state to enact comprehensive AI legislation. Effective February 1, 2026, the law applies to both developers and deployers (i.e., users) and requires the use of reasonable care to avoid algorithmic discrimination. The law targets “high-risk artificial intelligence systems,” defined as any AI system that “makes, or is a substantial factor in making, a consequential decision.” A “consequential decision” is a decision that has “a material legal or similarly significant effect” on the provision or denial to Colorado residents of services, including those related to employment.
To comply with the law, employers must implement a risk management policy and program, complete an annual impact assessment, notify employees or applicants about the employer’s use of AI where AI is used to make a decision about the employee or applicant, make a publicly available statement summarizing the types of high-risk systems that the employer currently deploys, and disclose to the Colorado attorney general the discovery of algorithmic discrimination within 90 days of discovery.
The law establishes a rebuttable presumption that an employer has used “reasonable care” if it complies with the law’s requirements, meaning a compliant employer will have a much stronger defense in the event it faces a discrimination claim.
Employers should be aware that there is currently no private right of action under the law, so enforcement is left to the Colorado Office of the Attorney General. However, the law also includes language indicating that a violation may be considered a “deceptive trade practice” under Colorado law, which could open the door to additional claims.
California
California, too, is joining in on the efforts to regulate AI. In May 2024, the California Civil Rights Council proposed regulations related to the use of AI and employment. The proposed regulations seek to:
Moreover, California lawmakers are considering more than two dozen AI-related bills. From an employment perspective, the one to watch is AB 2930, which would prohibit employers from using an “automated decision tool” in a way that contributes to algorithmic discrimination.
The proposal seeks to regulate “automated decision tools” that make “consequential decisions.” It would require employers that use such tools to evaluate their impact and prepare an annual impact assessment (i.e., a bias audit), provide notice to impacted employees regarding their use, establish an internal governance program, and make publicly available a policy identifying the types of AI systems in use and how the employer is managing the reasonably foreseeable risks of discrimination.
The risks posed by AI and increased government oversight mean employers should consider taking steps to protect themselves. Some key takeaways to consider that can help mitigate legal risk include the following: