Tech & Sourcing @ Morgan Lewis

TECHNOLOGY TRANSACTIONS, OUTSOURCING, AND COMMERCIAL CONTRACTS NEWS FOR LAWYERS AND SOURCING PROFESSIONALS

One of the commonly advertised features of AI is that it is beneficial for automation and increasing productivity. When a company considers improving its productivity and employing an AI tool, it will typically go through a contracting process with the service provider and assess the terms of use and associated risks for the business. But what happens if an employee presses on and starts using an AI tool that was not vetted by the company?

The short answer is that the company may be exposed to risks that it has not contemplated, such as breach of confidentiality or privilege, the company’s intellectual property or cybersecurity being compromised, or contamination of the company’s data by AI “hallucination,” among many other outcomes. In this publication, we look into what it takes to adopt an AI usage policy for employees and mitigate the risks.

Content of an AI Usage Policy

In essence, an AI usage policy should clearly set forth what is expected of employees in connection with the use of any AI tools, whether provided by the company or made publicly available.

The contents of each AI usage policy should reflect the risks applicable to the specific business and will require customization, but generally could include the following:

  • Definition of the scope and purpose of the policy, e.g., providing guidelines on the use of AI to both employees and contractors, defining which AI tools fall under the policy, setting out criteria for responsible use of AI
  • Provisions concerning access and use controls, such as prohibiting the use of certain (or any) AI tools for work purposes, restricting use to AI tools preapproved by the company and accessed through company-provided logins, and requiring that any AI output be validated by human review
  • Requirements to disclose the use of AI, both internally, where an employee must report AI use to a supervisor, and externally, as may be mandated by applicable law or the terms of use of a particular AI tool
  • Confidentiality and data protection provisions, such as addressing potential breach of trade secrets through the use of AI tools
  • Intellectual property–related provisions, such as addressing patent and copyright issues arising in connection with the use of an AI tool
  • Best practices and frequently asked questions to provide additional guidelines on certain practical issues
  • Contact information for the responsible officer to whom employees may report violations of the policy and who is responsible for training on, and implementation of, the policy

Other Considerations

The policy may also need to comply with applicable statutory law, such as the EU AI Act.

A company may also wish to follow one or more existing frameworks and standards when developing an AI usage policy, such as:

  • NIST AI Risk Management Framework
  • Singapore Model AI Governance Framework
  • UK Regulatory Framework for Artificial Intelligence
  • Council of Europe Human Rights, Democracy, and the Rule of Law Assurance Framework Convention
  • ISO 31000:2018 Risk Management – Guidelines
  • IEEE 7000-2021 Standard Model Process for Addressing Ethical Concerns during System Design
