Insight

Thinking About Implementing AI in 2023? What Organizations Need to Know

January 10, 2023

Artificial intelligence (AI) tools have the power to transform how businesses operate, generating efficiencies that improve an organization’s ability to analyze data, increase profitability, and reduce costs. AI promises to continue streamlining business processes and decision-making, thereby reducing overhead, time, and labor. This evolution, however, comes at a price if AI technologies are not deployed within a safe and fair regulatory framework.

Companies run the risk of violating privacy and data protection laws, being accused of bias or discrimination, and engaging in unfair practices that, ultimately, could lead to legal issues, such as investigations by federal and state agencies as well as class action lawsuits.

As decision makers enter the new year, this forward-looking piece highlights some of the current and pending AI-related legislation and guidance in the United States and identifies issues that boards and management should consider and address to protect their organizations.

  1. Are you a general counsel?
  2. Are you in human resources?
  3. Are you on a governance board?
 

ARE YOU A GENERAL COUNSEL?

As companies increasingly integrate AI into their regular business operations, general counsels now find themselves navigating a new patchwork of federal and state regulations. With new legislation, regulations, guidance, and recommendations already enacted or anticipated, here’s what they should be monitoring in 2023.

State level

As with data privacy regulation, states rather than the federal government have taken the lead in developing and enacting AI legislation.

  • Alabama Act No. 2021-344: Alabama established a council to review, issue, and advise the government, legislature, and other interested parties on the use and development of AI in the state.
  • Colorado S.B. 22-113: Colorado prohibits insurers from using external consumer data in a way that unfairly discriminates.
  • Mississippi H.B. 633: Mississippi requires computer science instruction for all K-12 students that includes AI and machine learning.
  • Vermont H.B. 410: Vermont established the Artificial Intelligence Commission that aims to support the “ethical use and development” of AI technology in Vermont.
  • Washington S.B. 5693: Washington appropriated funds to create an automated decision-making working group.
  • Pending state laws: Massachusetts H.B. 119 would establish a commission on automated decision-making by government in the commonwealth; Hawaii H.B. 454 would establish an income tax credit for investment in qualified businesses that develop cybersecurity and artificial intelligence; and Washington, DC’s Stop Discrimination by Algorithms Act (B24-0558) would prohibit both for-profit and nonprofit organizations from knowingly or unknowingly using algorithms that make decisions based on protected personal traits.

Federal level

  • The Federal Trade Commission (FTC) issued an advance notice of proposed rulemaking in August 2022 that seeks to address “commercial surveillance” and data security practices as applied to AI. The FTC is tackling AI through the lens of programs that affect consumers and specifically sought comment on “whether it should implement new trade regulation rules” governing AI-powered technologies.
  • The National Institute of Standards and Technology (NIST) plans to publish a final seven-point AI risk management framework in January 2023. At a high level, the NIST publication will provide guidance to help industry stakeholders incorporate trustworthiness into the design, development, and use of AI systems. Although the framework is not mandatory, it is likely to influence AI industry standards.
  • The National AI Research Resource (NAIRR) Task Force is finalizing a report to the President and Congress detailing its vision and implementation plan for a national cyberinfrastructure that would democratize access to the resources and tools that fuel AI research and development.
 

ARE YOU IN HUMAN RESOURCES?

A quarter of employers already incorporate AI into their employment-related technology. This widespread adoption has brought scrutiny to employers’ efforts to avoid bias and discrimination when they use AI systems in their hiring processes.

  • The Equal Employment Opportunity Commission (EEOC) issued guidance warning employers that using algorithms and AI in hiring decisions can result in discrimination based on disability. Under the Biden administration, the EEOC is stepping up its enforcement efforts regarding AI and machine learning-driven hiring tools. The agency recently announced an initiative to ensure that AI and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws, building on its May 2022 guidance on the application of the Americans with Disabilities Act (ADA) to AI tools in employment.
  • New York City’s Local Law 144 takes effect April 15, 2023 (postponed from the original effective date of January 1). The law makes it unlawful for an employer or employment agency to use an automated employment decision tool (AEDT) to screen a candidate or employee within New York City unless certain bias audit and notice requirements are met (a simplified sketch of the arithmetic behind such audits appears after this list). New York City joins Illinois, Maryland, and several other jurisdictions that have laws in place to regulate AI in the workplace in an effort to decrease hiring and promotion bias.
  • The California Civil Rights Department (formerly the Department of Fair Employment and Housing) has proposed regulations on AI to screen job applicants or make other employment decisions. The regulations would make it unlawful for an employer or covered entity to use “automated decision systems, or other selection criteria that screen out or tend to screen out an applicant or employee” on the basis of a protected characteristic, unless the “selection criteria” used “are shown to be job-related for the position in question and are consistent with business necessity.”
  • There are several litigation risks associated with using AI in employment-related decisions, including the risk of class action lawsuits in which plaintiffs’ lawyers use AI failures or biases to form classes in support of failure-to-hire claims. Vendors and companies that use AI should be prepared to defend their use of algorithms in hiring and should verify that those algorithms contain no implicit or unintended bias.
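
For context, the bias audits these laws contemplate turn in part on straightforward selection-rate arithmetic. The minimal Python sketch below computes selection rates and impact ratios for hypothetical candidate pools; the category names, counts, and the 0.8 threshold (borrowed from the EEOC’s “four-fifths” rule of thumb for adverse impact) are illustrative assumptions only, not the audit methodology that Local Law 144 or any regulator prescribes.

    # Illustrative only: hypothetical data and thresholds, not a compliance
    # tool or the methodology mandated by Local Law 144.

    # Hypothetical counts of candidates screened "in" by an AEDT, by category.
    applicants = {"category_a": 100, "category_b": 90}
    selected = {"category_a": 45, "category_b": 27}

    # Selection rate for each category.
    rates = {c: selected[c] / applicants[c] for c in applicants}

    # Impact ratio: each category's selection rate relative to the
    # most-selected category's rate. Ratios below 0.8 (the EEOC's
    # "four-fifths" rule of thumb) are flagged for review here.
    highest = max(rates.values())
    for category, rate in rates.items():
        ratio = rate / highest
        status = "flag for review" if ratio < 0.8 else "ok"
        print(f"{category}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({status})")

A real audit must follow the categories, methodology, and reporting requirements set out in the applicable rules and, under Local Law 144, must be performed by an independent auditor.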
 

ARE YOU ON A GOVERNANCE BOARD?

Ensuring responsible use of AI is a team sport requiring engagement from the business, marketing, security, privacy, legal, compliance, and human resources departments. Using an established governance process when implementing AI is critical to ensuring that all relevant enterprise risks are considered. Importantly, aside from the legal risks, businesses face potentially significant reputational risks if they do not implement AI tools carefully and deliberately. The business sponsor or owner of the platform, along with a cross-functional team, should address specific threshold questions about the use of AI and the relevant risks.

Proactive companies create governance or steering committees to review and approve guiding principles, identify relevant risks, discuss solutions that fall within a gray area, create individual management action plans to ensure accountability for each solution, and outline escalation paths. This group should report to the board of directors or a subcommittee, advising on the use and implementation of AI and seeking guidance and approval for use cases that present novel or enhanced risks; senior management and the board are ultimately responsible for balancing the overall risk to the company.

AI is transforming the way companies engage in organizational decision-making and implement risk management practices. These tools can also enhance corporate governance and effective leadership strategies by helping operations run more efficiently. Some of the ways AI can elevate corporate performance include providing more reliable and accurate market predictions; incorporating data-driven decision-making and analysis; supporting risk management; protecting against fraud; and improving real-time information processing to better inform business decisions and strategies.

For companies operating in multiple locations, AI use raises other legal concerns. What if certain AI practices are illegal in some jurisdictions or subject to differing regulations depending on where businesses operate? Is it possible to implement the use of AI in an inconsistent but lawful manner throughout the enterprise? And if so, is it advisable?

Looking at the employment space, would doing so raise fundamental employee fairness questions or other issues? Take, for example, a business that implements AI technologies allowing enhanced employee surveillance in certain jurisdictions, in accordance with relevant law. What if the employer rewards or subjects employees to adverse employment action as a result of such monitoring, while other employees at the same company who engage in similar conduct receive none of the benefits or burdens because different state laws restrict that type of AI-enabled surveillance? These are some of the questions a governance board can help address.

Conclusion

When implemented carefully and deliberately, AI tools and software technologies can make a substantial difference for companies. Research and advisory firm Gartner expects that by 2026, enterprises that operationalize AI transparency, trust, and security will see their AI models achieve a 50% improvement in adoption, business goals, and user acceptance. Furthermore, Gartner predicts that by 2028, AI-driven machines will account for 20% of the global workforce and 40% of all economic productivity. In the United States, as AI technology continues to grow in popularity and is implemented into business operations at every level, decision makers should stay up to date on the ever-developing regulatory landscape to support and achieve their business objectives.


Related Content

 

Artificial Intelligence Boot Camp
Our AI Boot Camp features a thorough analysis of and insights into AI and its impact on companies of all sizes and industries.


Global Digital Transformation Webinar Series
Insights on how strategic technology and commercial transactions can enable businesses of all sizes and industries to effectively leverage technology and digital solutions to operate, modernize, and grow.


ChatGPT: The Arrival of a Disruptive AI Tool
According to The New York Times, “ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the general public.” The tool is currently in a research preview and is free to use.


How AI Can Be Used Ethically to Monitor Worker Productivity, Bloomberg Law
Partner Amy Schuh wrote an article for Bloomberg Law detailing best practices for companies looking to use artificial intelligence (AI) to monitor employees.


White House Publishes Blueprint for AI Bill of Rights
While the Blueprint is nonbinding and does not constitute US government policy, many of its provisions reflect protections that are provided in the US Constitution or have been implemented under existing US laws.


Increases in Global Artificial Intelligence Legislation Noted in AI Report
While there are many notable takeaways from the report, the AI policy and governance section provides interesting insight.


New York City Proposes New Rules to Clarify Law on Employers’ Use of Artificial Intelligence
The New York City Department of Consumer and Worker Protection recently published proposed rules providing guidance on the artificial intelligence law enacted in December 2021 that prohibits employers from using automated employment selection tools unless specific bias audit and notice requirements are met.


EEOC Releases Guidance on Algorithms, AI, and Disability Discrimination in Hiring
Produced as part of the Artificial Intelligence and Algorithmic Fairness Initiative, the guidance reflects the agency’s growing interest in employer use of AI, including machine learning, natural language processing, and other emerging technologies in employment decisions.