Artificial intelligence (AI) magnifies the ability to analyze personal information in ways that may intrude on privacy interests and give rise to legal issues. Broadly, AI raises two types of privacy concerns: input concerns, such as the use of large training datasets that may include personal information, and output concerns (a newer phenomenon accompanying the rise of AI), such as whether AI is being used to draw conclusions or make decisions about individuals.
Although they do not always expressly address AI, regulations and guidance across the United Kingdom, the European Union, and the United States cover the underlying privacy principles.
European Union
In Europe, the European Union and the United Kingdom share one comprehensive privacy law: the General Data Protection Regulation (GDPR), retained in the UK post-Brexit as the UK GDPR. It applies across all industries and to all personal data, regardless of type or context, and tightly regulates automated processing. It imposes a robust requirement to inform people how their data is going to be used and what will happen to it. Notably, it also requires a data protection impact assessment for high-risk processing, which could lead to further scrutiny by regulators.
The GDPR also covers “automated individual decision-making, including profiling,” generally requiring a data subject’s explicit consent for such processing, subject to certain exemptions. For AI tools, lawfulness, fairness, and transparency are key requirements under the GDPR.
AI Ethics Framework Proposal
In 2021, the European Commission proposed new rules and actions intended to turn Europe into a global hub for “trustworthy” AI: the AI Act, together with a coordinated plan that goes hand in hand with it. The AI Act prohibits certain AI systems outright, including several that are often flagged as problematic in the context of social media.
The UK
Although the AI Act does not apply directly to the UK following Brexit, it remains relevant to UK businesses because of its extraterritorial reach, just as it does for US businesses. From a privacy perspective, the UK must maintain data protection equivalence with the EU to retain its adequacy status, which is up for review by December 2024.
In its 2021 National AI Strategy, the UK government announced a 10-year plan to make the UK an “AI Superpower,” and in March 2023 it published a white paper setting out the government’s principles-based framework and approach to the regulation of AI. UK regulators are expected to publish non-statutory guidance over the next 12 months, an approach that diverges from the EU’s.
UK White Paper
The Department for Science, Innovation and Technology (DSIT) published the long-awaited white paper, which sets out five principles that regulators must consider in order to build trust and provide clarity for innovation; these are the principles UK regulators will incorporate into their forthcoming guidance. Building on its 2022 toolkit, the Information Commissioner’s Office (ICO) has published its own detailed guidance and a practical toolkit on AI and data protection, updated in March 2023.
The United States
Rather than one comprehensive law, the US has a patchwork of privacy laws that vary by jurisdiction and sector and that contain principles relevant to AI; more AI-specific guidance is expected. The White House has announced a Blueprint for an AI Bill of Rights, setting out recommended principles for deploying AI, including notable privacy provisions.
The National Institute of Standards and Technology (NIST), whose cybersecurity guidance has been widely adopted, released its AI Risk Management Framework in January 2023; the framework specifically identifies privacy as significant to both input and output risk.
FTC Enforcement Actions
The Federal Trade Commission (FTC) is the primary federal authority enforcing data privacy and has issued a series of reports on AI and related consumer protection and privacy issues, most recently in April 2021. It has also brought a series of enforcement actions relating to algorithms, notably ordering algorithmic disgorgement where the underlying data was found to have been used unlawfully to target individuals for advertising.
California Consumer Privacy Act
The California Consumer Privacy Act (CCPA), which took effect January 1, 2020 (with enforcement beginning July 1, 2020), reflects principles similar to the GDPR’s and adopts a broad definition of personal information, intended to capture the robust consumer profile and preference data collected by social media companies and online advertisers.
The CCPA has since been amended in ways that speak directly to AI, including a definition of “profiling” and rules about “automated decision-making.” It requires a data privacy impact assessment for certain processing activities, including profiling, and directs the new California Privacy Protection Agency to issue regulations “governing access and opt-out rights with respect to businesses’ use of automated decision-making technology,” a broad mandate. Draft regulations are expected within the next few months.
Similar rules took effect in 2023 under the Virginia Consumer Data Protection Act, the Colorado Privacy Act, and the Connecticut Data Privacy Act.
The Way Forward
New regulations and guidance are on the way in the UK, EU, and US that will require AI projects to safeguard the often-large datasets involved. Risks can potentially be navigated through anonymization and de-identification (illustrated in the sketch below), privacy policies, and contractual provisions; however, close attention should be paid to whether there is a right to use particular data in an AI system at all, and to how the system uses and discloses information.
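By way of illustration only, the following minimal sketch shows one common de-identification technique: pseudonymizing direct identifiers with a keyed hash before records enter an AI pipeline. The field names, salt handling, and record structure here are hypothetical, and keyed hashing is pseudonymization rather than true anonymization, so the output may still constitute personal data under laws such as the GDPR.

```python
import hashlib
import hmac

# Hypothetical secret salt; in practice this would live in a secrets manager,
# since anyone holding it can link pseudonyms back to the original identifiers.
SALT = b"replace-with-a-securely-stored-secret"

# Hypothetical set of direct identifiers to pseudonymize before records
# are fed into an AI training or inference pipeline.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def pseudonymize(value: str) -> str:
    """Return a keyed hash (HMAC-SHA256) of an identifier value."""
    return hmac.new(SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()

def de_identify(record: dict) -> dict:
    """Replace direct identifiers with pseudonyms; pass other fields through."""
    return {
        field: pseudonymize(value) if field in DIRECT_IDENTIFIERS else value
        for field, value in record.items()
    }

if __name__ == "__main__":
    record = {"name": "Jane Doe", "email": "jane@example.com", "zip": "94105"}
    print(de_identify(record))
```

Because the same identifier always maps to the same pseudonym, records can still be linked across datasets, which is useful for analytics but also one reason pseudonymized data often remains regulated.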
This topic was covered in our Technology Marathon 2023 webinar, AI and Data Privacy: US and European Privacy Laws. To receive updates on trends, legal developments, and other relevant areas, we invite you to subscribe to Morgan Lewis publications.