Listen to this post

As generative AI becomes an increasingly integral part of the modern economy, antitrust and consumer protection agencies continue to raise concerns about the technology’s potential to promote unfair methods of competition. Federal Trade Commission (“the FTC”) Chair Lina Khan recently warned on national news that “AI could be used to turbocharge fraud and scams” and that the FTC is watching to ensure large companies do not use AI to “squash competition.”[1] The FTC has recently published numerous blog posts on the subject,[2] signaling its intent to “use [the FTC’s] full range of tools to identify and address unfair methods of competition” that generative AI may create.[3] Similarly, Jonathan Kanter, head of the Antitrust Division at the Department of Justice (“the DOJ”), said that the current model of AI “is inherently dependent on scale” and may “present a greater risk of having deep moats and barriers to entry.”[4] Kanter recently added that “there are all sorts of different ways to deploy machine learning technologies, and how it’s deployed can be different in the healthcare space, the energy space, the consumer tech space, the enterprise tech space,” and that antitrust enforcers should not be so intimidated by artificial intelligence and machine learning technology that they stop enforcing the laws.[5]

These warnings are not hyperbole. The FTC has reportedly launched an investigation into OpenAI and ChatGPT, issuing a Civil Investigative Demand that seeks extensive information to determine whether the company has engaged in unfair or deceptive practices. The DOJ recently announced “Project Gretzky,” an agency effort to understand AI by hiring data scientists and experts in the field, which Kanter said is named after hockey legend Wayne Gretzky, known for his line about “skating to where the puck is going.”[6]

The competition issues associated with generative AI may be as endless as the possibilities and ideas that the technology creates. Notwithstanding its advantages, businesses should be aware of competition enforcers’ focus on generative AI’s ability to boost anticompetitive practices they already deem illegal. As a former FTC official once explained, “[e]verywhere the word ‘algorithm’ appears, please just insert the words ‘a guy named Bob’ . . . . If it isn’t ok for a guy named Bob to do it, then it probably isn’t ok for an algorithm to do it either.”[7]

Here are some competition and consumer protection issues identified by the DOJ and FTC as potentially problematic when AI, or your employee Bob, is involved:


  1. Bundling and Tying of Services. Bundling occurs when businesses offer multiple products in a single package. Tying occurs when the sale of one product is conditioned on the purchase of another product. The agencies are concerned that businesses may be able to link new generative AI products with existing core products, thus potentially reducing the value of a competitor’s standalone AI offering.
  2. Algorithmic Pricing and Collusion. Businesses often use AI algorithms to set prices and submit bids. The agencies appear to be concerned that these algorithms could be used to engage in price fixing, bid rigging, or market allocation. Further, the agencies or private litigants could scrutinize businesses that merely adopt an AI-based platform, looking for evidence of collusion by AI, even if those businesses do not directly share competitively sensitive information or agree to exclusively follow the product’s algorithm.

Antitrust Division officials recently touted that “executives who conspire to fix prices have already been prosecuted for using algorithms as a tool for implementing their anticompetitive schemes, raising the specter that someday bots may collude on prices even without human intervention.”[8] While it seems unlikely that “bots colluding” could lead to per se or criminal antitrust culpability under the Sherman Act without a real, live human agreeing with a real, live competitor to fix prices, rig bids, or allocate markets, so-called “algorithmic collusion” is a subject often discussed among antitrust academics worldwide.[9] It is important to understand that if a guy named Bob who works for your company trains or uses an algorithm to effectuate a price-fixing agreement with a competitor, Bob could land in jail and the company could face criminal charges and treble-damage civil liability for such conduct.[10]

  3. Mergers and Acquisitions. The agencies likely will investigate merging parties’ AI products when evaluating the competitive effects of mergers, including the respective size or significance of each party’s AI products and the extent to which those products compete with one another. If, for example, merging parties have substantially similar AI algorithms, the agency reviewing the deal may consider such similarities to be evidence of closeness of competition, creating additional concerns about the underlying merger. Given that the agencies have already raised concerns about scale and the creation or enhancement of barriers to entry,[11] a company’s AI platform may also be viewed as a form of dominance under the new Draft Merger Guidelines, and a transaction could be viewed as entrenching or extending that dominance.[12]

AI is also becoming a major focus of coordinated effects analysis in merger review. The new Draft Merger Guidelines state that the use of algorithms or artificial intelligence to gauge competitor pricing or actions increases market transparency and may increase the risk of competitor coordination. The agencies consider market transparency when evaluating a merger under Section 7 of the Clayton Act and seek to block mergers that substantially increase the risk of competitor coordination. Further, the agencies will scrutinize merging parties’ post-merger use of AI to coordinate. Finally, as companies and their advisors employ AI for diligence, merging parties should be mindful that the AI programs do not facilitate inappropriate sharing of competitively sensitive information.

Consumer Protection

  1. AI Deception. Because generative AI tools can create new content from vast amounts of data rather than merely manipulating existing data, generative AI can be used to generate “fake” content that is often indistinguishable from human-made content. Potential abuses include fake websites, fake profiles, and voice clones. The FTC recently published a blog post reiterating that Section 5 of the FTC Act applies to those who use, make, or create a tool (like generative AI) that is designed to deceive, even if deception is not the tool’s intended or sole purpose. This raises many unanswered questions about what the FTC expects of a company that merely provides AI content generation to customers and how the FTC would allocate liability to such companies.
  2. Fake Consumer Reviews. Online consumers have become increasingly reliant on consumer reviews when purchasing a product, and businesses rely on them to attract consumers. The agencies are concerned that bad actors may be using generative AI to write fake consumer reviews and could focus on the roles of both the makers of AI and companies that provide AI as a service to users. In fact, the FTC recently proposed a rule banning fake reviews and testimonials.[13] The proposed rule prohibits conduct that “deceive[s] consumers looking for real feedback on a product or service and undercut[s] honest businesses.” The rule, if promulgated, would carry civil penalties of up to $50,000 per violation.[14] In announcing the proposed rule, Samuel Levine, Director of the FTC’s Bureau of Consumer Protection, said “[o]ur proposed rule on fake reviews shows that we’re using all available means to attack deceptive advertising in the digital age.” Use of AI to generate fake reviews and deceptive advertising certainly appears top-of-mind at the FTC.

Companies should be aware of heightened antitrust and consumer protection scrutiny surrounding generative AI and must carefully consider how AI is used within their business practices. Without proper compliance programs, training, and safeguards in place, companies using generative AI remain exposed to vigorous and costly antitrust and consumer protection investigations, enforcement actions, or litigation.

[1] John Dickerson and Analisa Novak, FTC Chair Lina Khan says AI could “turbocharge” fraud, be used to “squash competition”, CBS News (Jul. 27, 2023).

[2] See, e.g., Michael Atleson, Chatbots, deepfakes, and voice clones: AI deception for sale, Fed. Tr. Comm’n. (Mar. 20, 2023); Michael Atleson, The Luring Test: AI and the Engineering of Consumer Trust, Fed. Tr. Comm’n. (May 1, 2023); FTC Staff in the Bureau of Competition & Office of Technology, Generative AI Raises Competition Concerns, Fed. Tr. Comm’n. (May 1, 2023).

[3] FTC Staff, supra note 2.

[4] Jonathan Kanter, Remarks at the Joint Enforcers Summit (Mar. 27, 2023).

[5] Khushita Vasant and Chris May, AI shouldn’t intimidate agencies from enforcing US antitrust laws, DOJ’s Kanter says, MLEX (Aug. 3, 2023).

[6] Jonathan Kanter, Remarks at the 2023 SXSW Conference and Festival (Mar. 11, 2023).

[7] Maureen K. Ohlhausen, Should We Fear the Things That Go Beep in the Night? Some Initial Thoughts on the Intersection of Antitrust Law and Algorithmic Pricing, Remarks at the Concurrences Antitrust in the Financial Sector Conference, New York, NY (2017).

[8] Marvin Price and Emma Burnham, Antitrust Division Updates: Enhancing Accessibility, Competition Pol’y Int’l (Aug. 2022), at 3.

[9] See, e.g., Jill Priluck, When Bots Collude, New Yorker (Apr. 25, 2015); Maurice E. Stucke and Ariel Ezrachi, How Pricing Bots Could Form Cartels and Make Things More Expensive, Harvard Bus. Rev. (Oct. 27, 2016).

[10] Press Release, Dep’t of Just., Former E-Commerce Executive Pleads Guilty to Price Fixing; Sentenced to Six Months (Jan. 28, 2019).

[11] Kanter, supra note 4.

[12] See Dep’t of Just. & Fed. Trade Comm’n., Draft Merger Guidelines (Jul. 19, 2023); see also our prior update on the new merger guidelines here.

[13] Fed. Trade Comm’n, Trade Regulation Rule on the Use of Consumer Reviews and Testimonials (proposed June 30, 2023).

[14] Geoffrey A. Fowler, Those 10,000 5-star reviews are fake. Now they’ll also be illegal., Wash. Post (Jun. 30, 2023).