LawFlash

Global Regulators Issue Joint Statement on Competition in the Generative AI Sector

July 31, 2024

The European Commission, UK Competition and Markets Authority, US Department of Justice, and US Federal Trade Commission have issued for the first time a joint statement outlining their commitment to working together to create a common understanding of how to ensure fair competition in the generative AI sector. The statement underscores generative AI as an enforcement priority for international competition regulators and provides a concise, albeit high-level, summary of their collective concerns at this point in the development of generative AI technology.

Despite differing jurisdictions and legal powers, the authorities have pledged a coordinated effort to address AI-related competition risks that they claim transcend international boundaries. The statement notes that they will collaborate on a unified approach while maintaining their respective sovereign decision-making powers. The agencies identified what they see as the key competitive risks and several guiding principles for ensuring fair, open, and competitive AI markets.

IDENTIFIED COMPETITION AND CONSUMER PROTECTION RISKS

While acknowledging “the great potential benefits from the new services that AI is helping bring to market,” the regulators identified the following key competition and consumer protection risks from their perspective:

  • Concentrated Control of Key Inputs: The development of AI foundation models depends on specialized chips, substantial computational power, large-scale data, and technical expertise. According to the authorities, a small number of companies controlling these critical inputs could exploit “bottlenecks,” limiting disruptive innovation and fair competition.
  • Entrenching or Extending Market Power: The authorities are concerned that incumbent digital firms with significant market power could leverage their existing advantages to protect against AI-driven disruption, potentially entrenching their positions and harming future competition.
  • Partnerships and Investments: The statement acknowledges that partnerships and financial investments related to the development of generative AI are common but suggests that, in some instances, these commercial arrangements among AI players could be used to undermine competition or steer market outcomes in favor of major firms at the expense of the public.
  • Algorithmic Decision-Making Risks: Beyond generative AI itself, the statement also notes that the authorities are mindful of additional risks associated with AI deployment, such as the potential for algorithms to facilitate price fixing, collusion, and unfair price discrimination.
  • Consumer Protection Concerns: The UK Competition and Markets Authority, US Federal Trade Commission (FTC), and US Department of Justice (DOJ), “which have consumer protection authority,” have stated that they will monitor consumer protection concerns related to AI, noting that companies “that deceptively or unfairly use consumer data to train their models can undermine people’s privacy, security, and autonomy” and citing other issues such as potential exposure of competitively sensitive information and the importance of informing consumers about how and when AI is being used in products.

STATED PRINCIPLES FOR PROTECTING COMPETITION

The regulators outlined three key principles for the AI ecosystem that, based on their experience “in related markets,” will “generally serve to enable competition and foster innovation,” while recognizing that “competition questions in AI will be fact-specific”:

  • Fair Dealing: The agencies are encouraging firms with market power to engage in fair dealing rather than “exclusionary tactics” that could stifle competition and innovation. The agencies did not identify what specific tactics they view as raising fair dealing concerns. Based on past commentary, however, some of the practices that could invite potential scrutiny may include, depending on the circumstances, exclusive agreements, below-cost pricing, and self-preferencing.
  • Interoperability: The agencies contend that greater interoperability among AI products and services and their inputs “will likely” foster greater competition and innovation. They warn that any claims that interoperability compromises privacy or security will be closely scrutinized.
  • Choice: The agencies state that “[b]usinesses and consumers in the AI ecosystem will benefit if they have choices among diverse products and business models resulting from a competitive process.” Under this rubric, the agencies identified three areas for potential scrutiny:
    • “Mechanisms of lock-in,” although no specific mechanisms are identified in the statement. Based on past commentary, we expect this category of scrutiny may include exclusive contracting, bundling, or tying arrangements.
    • Investments and partnerships between incumbents and newcomers, to ensure such agreements are not “sidestepping merger enforcement” or “handing incumbents undue influence or control in ways that undermine competition.”
    • For content creators, ensuring that they can exercise choice among buyers and avoid monopsony power “that can harm the free flow of information in the marketplace of ideas.”

IMPLICATIONS

The competition concerns outlined by the regulators focus mainly on vertical foreclosure and conglomerate theories related to withholding or degrading access to critical inputs, or on notions of entrenching or extending market power. For further background, the FTC and DOJ’s 2023 Merger Guidelines discuss in depth how those agencies analyze such issues in the context of mergers. However, many open questions remain as to how US courts, as opposed to the enforcement agencies, will approach these theories of harm in contested cases.

Due to the high-level nature of the joint statement and its principles, it leaves important substantive questions unaddressed. Significantly, the statement does not detail how the authorities will pursue these high-level principles under their respective competition enforcement authorities. For example, the scope of the authorities’ interoperability concerns is unclear. In the abstract, an absence of interoperability between any two products is common and not necessarily an indication of a competition concern or legal violation.

Further, the factual basis for the authorities’ generalization that greater interoperability will foster greater competition is not explained. From the EU perspective, interoperability is a traditional tool for addressing competition concerns in appropriate cases, including through remedies, and that approach has recently been extended to emerging technologies through the EU Digital Markets Act.

Previous commentary from these authorities—which we have discussed in a prior analysis—may provide a greater level of detail concerning the specifics of their respective approaches to generative AI competition concerns.

The US agencies have previously raised concerns about algorithmic decision-making in several recent public statements and filings in private civil cases. From the European perspective, concerns about algorithmic decision-making risks have been specifically addressed in the recent review of the EU’s Horizontal Guidelines. The fundamental point there remains unchanged: conduct that is anticompetitive when coordinated between human beings is equally anticompetitive when facilitated via algorithmic tools.

EU GDPR AND AI PRIVACY PRINCIPLES

Regarding the concerns about “deceptive and unfair practices that harm consumers,” in Europe such issues are addressed by the new EU AI Act and the already fully applicable General Data Protection Regulation (GDPR). It is currently unclear, however, who will enforce these rules for AI in Europe: the European Commission’s competition authorities claim jurisdiction over AI inputs and outputs, alongside the national data protection authorities and the new EU AI Office in Brussels.

Some countries in the EU, such as Spain, have set up their own national AI regulatory authorities. Germany is discussing plans to appoint the (existing) Federal Network Agency (BNetzA) as the market surveillance authority under the EU AI Act, while others intend to rely on the existing data protection agencies.

In any event, all of these regulators have authority to impose hefty fines even on organizations that have no offices or branches in the EU. Further, there are different views in Europe and the United States on whether and when publicly available personal data can legally be used for AI training. As there is no counterpart to the EU AI Act and GDPR in the United States, it remains to be seen how joint enforcement in AI-related consumer protection will work in practice and how the various regulators will cooperate.

CONCLUSION

Generative AI is spurring new competition and new products throughout the global economy, giving rise to innovation and benefits for consumers and businesses. The joint statement is a sign that generative AI is a high priority for global antitrust and competition regulators and that such regulators are seeking to align their approaches where possible. It thus provides insight into the commonalities in approach across jurisdictions in this rapidly developing sector.

Companies involved in generative AI should identify and carefully consider the implications of the joint statement for their ongoing or contemplated business activities. Morgan Lewis lawyers continuously monitor this evolving landscape and stand ready to assist.

Contacts

If you have any questions or would like more information on the issues discussed in this LawFlash, please contact any of the following: