
AI in the Workplace: The New Legal Landscape Facing US Employers

July 1, 2024

Artificial intelligence (AI) is quickly transforming the employment landscape, automating tasks, streamlining processes, and enhancing decision-making. At the same time, the technology raises concerns about potential bias, inaccurate outputs, and increasingly complex legal compliance.

As AI’s influence in the United States grows, so too does government oversight. Lawmakers and policymakers—from the Biden administration to city governments—have issued guidance, policies, and laws to govern the use of AI in the workplace, giving employers a new legal landscape to navigate.

AI RISK OVERVIEW

Employers are turning to AI to make time-consuming tasks more efficient, using the technology to streamline the recruiting process, find ways to eliminate human bias, and advance diversity. However, employers should be aware that using AI is not without risk.

While employers could use AI to help increase diversity, a poorly designed or trained AI tool has the potential to discriminate on a much larger scale. Even if the algorithm ignores demographic information, certain attributes correlate with demographics; a candidate’s ZIP code or graduation year, for example, can serve as a proxy for race or age. Further, biased model inputs are likely to lead to biased outputs. In an effort to predict success, AI may improperly develop correlations and assumptions based on factors that are not job related.

Aside from potential discrimination and regulatory compliance risks, the use of AI in a workplace raises concerns about potential leaks of sensitive or confidential information (of the company, candidates, employees, or third parties). There are also questions about whether something created by AI can be protected as proprietary company property, and whether the use of AI might hinder employee development.

GOVERNMENT RESPONSE TO AI

The Federal Government

The White House

In October 2022, the White House Office of Science and Technology Policy issued the Blueprint for an AI Bill of Rights, which sets out principles and guidance on equitable access to and use of AI systems.

In October 2023, President Joseph Biden issued an executive order titled the Safe, Secure, and Trustworthy Development and Use of AI, requiring various government agencies to set new standards for safety and security. The order also called on developers of the most powerful AI systems to share safety test results with the federal government. Additionally, the order provides funding for research and partnerships intended to protect privacy rights and offers guidance for federal contractors on using AI in ways that mitigate discrimination.

It also creates a partnership between the US Department of Justice and federal civil rights offices on how to investigate and prosecute civil actions relating to the use of AI. Furthermore, the order directs federal agencies to publish reports on their use of AI, including how AI is affecting ongoing programs, and provides funding for training government employees on AI use.

In May 2024, the White House issued guidance titled Critical Steps to Protect Workers from Risks of Artificial Intelligence, which outlines its key principles for the development and deployment of AI in the workplace. These principles include:

  • Giving employees input into the way AI is used
  • Supporting ethical development of AI systems
  • Establishing clear governance systems, procedures, and human oversight
  • Ensuring employers are transparent with employees and job seekers about when AI is used and how it affects their employment
  • Ensuring AI systems do not violate or undermine workers’ rights to organize, health and safety rights, wage and hour rights, and anti-discrimination and anti-retaliation protections
  • Using AI systems to assist, complement, and enable workers and improve job quality
  • Supporting workers whose jobs are affected by AI
  • Ensuring the responsible use of worker data

Equal Employment Opportunity Commission

Under the Biden administration, the Equal Employment Opportunity Commission is stepping up its enforcement efforts around AI and machine learning-driven hiring tools. The agency’s efforts include the following:

  • Designating the use of AI in employment as a top “subject matter priority”
  • Issuing guidance on the application of the Americans with Disabilities Act to AI tools in employment
  • Launching an initiative to ensure that AI and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws
  • Appointing a chief AI officer
  • Pursuing investigations and complaints against employers related to their use of AI in employment

State and Local Governments

From coast to coast, states such as California, Colorado, Georgia, Illinois, New York, and Washington have adopted, or are considering adopting, AI regulations. Key measures in the following jurisdictions include:

New York City

One of the first laws to address AI in employment decision-making, New York City’s AI Law, which took effect in July 2023, makes it unlawful for an employer to use an automated employment decision tool (AEDT) to screen candidates for employment or promotion in New York City unless certain criteria are met, including:

  • The AEDT has undergone an independent bias audit no more than one year prior to its use (a sketch of the underlying calculation appears after this list)
  • A summary of the most recent bias audit is made publicly available on the employer’s or employment agency’s website
  • The employer notifies candidates residing in New York City, at least 10 business days in advance, that an AEDT will be used, identifies the job qualifications and characteristics the tool will assess, and provides instructions for requesting an alternative selection process
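
For context, the independent bias audits contemplated by the New York City rules are built around quantitative metrics, chiefly selection rates and impact ratios. The short Python sketch below illustrates those two calculations; the category labels and counts are hypothetical, and an actual audit must follow the rule’s detailed requirements and be conducted by an independent auditor.

    # Illustrative sketch only; hypothetical numbers, not legal advice.
    # NYC-style bias audits center on selection rates and impact ratios.

    applicants = {
        # category: (number selected by the AEDT, total applicants)
        "Category A": (48, 120),
        "Category B": (30, 100),
    }

    # Selection rate = number selected / total applicants in the category
    rates = {g: sel / tot for g, (sel, tot) in applicants.items()}

    # Impact ratio = a category's selection rate divided by the rate of
    # the most-selected category (the top category scores 1.0)
    top_rate = max(rates.values())
    for group, rate in rates.items():
        print(f"{group}: selection rate {rate:.2f}, "
              f"impact ratio {rate / top_rate:.2f}")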

Colorado

In May 2024, Colorado became the first US state to enact comprehensive AI legislation. Effective February 1, 2026, the law applies to both developers and deployers (i.e., users) of AI systems and requires the use of reasonable care to avoid algorithmic discrimination. The law targets “high-risk artificial intelligence systems,” defined as any AI system that “makes, or is a substantial factor in making, a consequential decision.” A “consequential decision” is one that has “a material legal or similarly significant effect” on the provision or denial to Colorado residents of services, including those related to employment.

To comply with the law, employers must:

  • Implement a risk management policy and program
  • Complete an annual impact assessment
  • Notify employees or applicants when AI is used to make a decision about them
  • Make a publicly available statement summarizing the types of high-risk systems the employer currently deploys
  • Disclose any discovered algorithmic discrimination to the Colorado attorney general within 90 days of discovery

The law establishes a rebuttable presumption that an employer used “reasonable care” if it complies with the law’s requirements, meaning a compliant employer will have a much stronger defense in the event it faces a discrimination claim.

Employers should be aware that there is currently no private right of action under the law, so enforcement is left to the Colorado Office of the Attorney General. However, the law also includes language indicating that a violation may be considered a “deceptive trade practice” under Colorado law, which could open the door to additional claims.

California

California, too, is moving to regulate AI. In May 2024, the California Civil Rights Council proposed regulations related to the use of AI in employment. The proposed regulations seek to:

  • Clarify that it is a violation of California law to use an automated decision-making system if it harms applicants or employees based on protected characteristics
  • Ensure employers and covered entities maintain employment records, including automated decision-making data, for a minimum of four years
  • Affirm that the use of an automated decision-making system alone does not replace the requirement for an individualized assessment when considering an applicant’s criminal history
  • Clarify that third parties are prohibited from aiding and abetting employment discrimination, including through the design, sale, or use of an automated decision-making system
  • Provide clear examples of tests or challenges used in automated decision-making system assessments that may constitute unlawful medical or psychological inquiries
  • Add definitions for key terms used in the proposed regulations, such as “automated-decision system,” “adverse impact,” and “proxy”

Moreover, California lawmakers are considering more than two dozen AI-related bills. From an employment perspective, the one to watch is AB 2930, which would prohibit employers from using an “automated decision tool” in a way that contributes to algorithmic discrimination.

The proposal seeks to regulate “automated decision tools” that make “consequential decisions.” Employers that use such tools would be required to:

  • Evaluate the tool’s impact and prepare an annual impact assessment (i.e., a bias audit)
  • Provide notice to affected employees regarding the tool’s use
  • Establish an internal governance program
  • Make publicly available a policy that identifies the types of AI systems in use and how the employer is managing the reasonably foreseeable risks of discrimination

KEY TAKEAWAYS

The risks posed by AI and increased government oversight mean employers should consider taking steps to protect themselves. The following steps can help mitigate legal risk:

  • Be transparent: Job candidates and employees should be informed of AI tools being used in their selection process or evaluations. On the flip side, employers may want to ask for confirmation that candidates did not use AI to produce application materials.
  • Prepare for accommodations: Have accommodation plans in place should a candidate seek a disability accommodation, particularly recognizing that many laws and federal regulations instruct employers to provide an alternative to the AI tool.
  • Develop AI usage policies: In crafting policies, employers should consider how their employees may use AI along with how employers want them to use the technology. Policies should have usage guidelines and best practices.
  • Check vendors: Employers should select AI vendors carefully, favoring vendors whose systems are tested for bias, can be audited, and can duly address reasonable accommodations in the recruiting and hiring process. Where possible, employers should obtain representations that AI tools used in workplace contexts are legally compliant and attempt to negotiate indemnification protections from AI vendors, along with their cooperation in defending against related claims.
  • Validate results: Employers should make sure to have a diverse applicant pool before applying AI and consider engaging an industrial-organizational psychologist to conduct validation research. Validate the tool’s results and compare them against those of human decision-makers.
  • Stay informed: Stay up to date on existing and pending AI legislation to ensure AI tools comply with federal, state, and local law, and update policies and practices as the law develops.