On September 25, 2024, the Federal Trade Commission (FTC) announced Operation AI Comply, an enforcement sweep targeting what the agency has characterized as the use of artificial intelligence (AI) “to supercharge deceptive or unfair conduct that harms consumers.” The initiative is the latest in a series of actions the FTC has taken in this space.
As the agency has acknowledged in both public discourse[1] and in formal commentary,[2] the development and deployment of AI raise issues that implicate both competition and consumer protection policy. Moreover, enforcers globally have echoed the same sentiment.[3] Thus, although the cases announced in Operation AI Comply involve alleged violations of consumer protection laws, companies utilizing AI would be wise to view the actions as a harbinger of the FTC’s broader commitment to addressing AI issues from both the consumer protection and competition perspectives.
The launch of Operation AI Comply featured five initial enforcement actions, four of which dealt with traditional consumer protection concerns around deceptive and unfair advertising. The fifth case, however, introduced a more novel theory that, when read in the context of commentary from the current FTC majority, aligns with the agency’s desire to connect its consumer protection and competition missions. The FTC described these cases as “just the latest” in its “ongoing work to combat AI-related issues in the marketplace from every angle.”
The cases summarized briefly below illustrate the FTC’s efforts to combat companies’ alleged misuse of AI to promote misleading or fraudulent schemes. The following section discusses the Rytr action and its broader implications.
In three of the cases (Ascend Ecom, Ecommerce Empire Builders, and FBA Machine), the FTC obtained temporary relief from a federal district court halting the alleged conduct pending a full preliminary injunction hearing.
Rytr promoted an AI writing assistant for a number of use cases, one of which the company specifically marketed for “Testimonial & Review” generation. The FTC contended that the tool provided the means and instrumentalities to create false and misleading reviews, potentially deceiving consumers and undermining the integrity of online markets, and further alleged that the conduct amounted to an unfair act or practice. The FTC issued a complaint detailing the allegations along with a proposed settlement order, which is subject to a 30-day public comment period.
The FTC’s enforcement action against Rytr drew significant dissent from FTC Commissioners Melissa Holyoak and Andrew Ferguson. Rytr’s AI writing tool enables users to generate various text content, including product reviews. While the FTC majority saw this as enabling false and misleading reviews, Holyoak and Ferguson opposed enforcement, arguing that penalizing Rytr for potential misuse set a troubling precedent.
The dissenters cautioned that holding technology companies liable for how users might misuse their products overextends consumer protection laws and could stifle innovation. They emphasized the productivity and creative benefits of generative AI and warned that aggressive regulation based on speculative harm could deter AI investment and development. Ferguson compared Rytr’s tool to common technologies like word processors, noting that tools with both lawful and unlawful uses should not be subject to enforcement absent concrete harm.
Both dissenting opinions suggest that while the FTC has a role in protecting consumers, it must be careful not to overregulate emerging technologies based on speculative risks, as this could have long-term detrimental effects on innovation and market competitiveness.
The FTC’s action in the Rytr case reflects an attempt by the agency to exercise its consumer protection authority in a way that addresses consumer protection and competition concerns simultaneously in the emerging AI industry. In prepared remarks at the FTC’s Tech Summit on AI earlier this year, FTC Chair Lina Khan stated that the FTC was “squarely focused on how business models drive incentives” and that AI “model training [was] emerging as another feature that could further incentivize” the collection of data at all costs.[4] She further “recognize[d] the ways that consumer protection and competition enforcement are deeply connected,” positing that firms can engage in consumer protection violations to build market power.[5] By using consumer protection laws, the FTC aims to curb potentially anticompetitive business models and incentive structures before they become entrenched.
This proactive stance is informed by the FTC’s experience in the “Web 2.0” era, when, some have argued, delayed regulatory action allowed tech companies to engage in problematic practices that “fuel[ed] market dominance,” thereby stifling competition and innovation.[6] For example, a recent FTC staff report alleges that social media and streaming companies engaged in “vast surveillance” of users, collecting excessive amounts of personal data to fuel ad-driven business models. The FTC argues that this unchecked data collection helped these platforms entrench their dominant positions, creating high barriers to competition.
Chair Khan has emphasized that the agency is committed to ensuring that these perceived past mistakes are not repeated in the AI era. Thus, it appears that the FTC is leveraging consumer protection laws—particularly those targeting deceptive and unfair practices—not just as a tool to safeguard consumers today, but also as a mechanism to prevent entrenched markets and market practices in the future.
The FTC’s Operation AI Comply marks both an escalation in the agency’s response to deceptive AI practices and a broader effort to proactively root out business models and incentive structures the FTC perceives as posing competitive concerns. Businesses operating in the AI space must navigate this evolving regulatory landscape carefully. By fostering transparency, fairness, and responsible innovation, companies can help ensure that AI technologies deliver widespread benefits.
To avoid enforcement actions and to contribute to a fair and open AI market, companies employing AI should consider taking the following measures:
By integrating these considerations, businesses can navigate the complexities of the AI regulatory landscape, contribute to a competitive market, and promote consumer trust in AI technologies.
[1] See Fed. Trade Comm'n, Consumers Are Voicing Concerns About AI (Oct. 4, 2023).
[2] See Fed. Trade Comm'n, FTC Raises AI-Related Competition and Consumer Protection Issues, Comment Submitted to US Copyright Office (Nov. 2, 2023).
[3] See G7 2024 Digital Competition Communiqué, AGCM (Oct. 2024) (stating that “Competition risks in AI markets are closely related to and may spill over into other key aspects of our societies” and that with respect to consumer protection “AI-generated outputs have the potential to mislead consumers, shape their preferences, and prevent them from making informed choices. Ensuring that AI systems do not distort consumer decision-making processes through false or misleading information is critical to maintaining consumer trust and promoting a healthy competitive environment.”).
[4] See Lina M. Khan, Chair, Fed. Trade Comm'n, Remarks at the FTC Tech Summit (Jan. 25, 2024).
[5] Id.
[6] See Fed. Trade Comm'n, Examining the Data Practices of Social Media and Streaming Services (Sept. 11, 2024).