Advisories | October 2, 2024

Consumer Protection/FTC Advisory: The FTC Takes Aim at Deceptive AI Claims

Executive Summary

Our Consumer Protection/FTC Team examines a slew of Federal Trade Commission enforcement actions against companies making allegedly deceptive artificial intelligence (AI) claims.

  • Operation AI Comply is already in full swing just days after its introduction
  • Two of the cases have already reached proposed settlements; three more are headed to federal court
  • More enforcement is around the corner from the FTC and state attorneys general

Since ChatGPT debuted in November 2022, generative artificial intelligence (AI) has taken the world by storm. Businesses everywhere appear to be embracing AI or advertising new AI-powered products that guarantee impressive results. As AI has become seemingly ubiquitous in the marketplace, so too have claims around AI’s promise to improve people’s lives through automation and problem solving. These claims have piqued the interest of the Federal Trade Commission (FTC) and – if last week’s FTC enforcement sweep is any indication – will likely become a mainstay of the FTC’s enforcement agenda for years to come.

The FTC’s Response: Operation AI Comply

On Wednesday, September 25, 2024, the FTC unveiled Operation AI Comply, its new law enforcement sweep cracking down on allegedly deceptive claims about AI and unfair or deceptive uses of AI. The sweep included complaints against five companies that the FTC alleges have “seized on the hype surrounding AI and are using it to lure consumers into bogus schemes, and are also providing AI powered tools that can turbocharge deception.”

Those companies caught in the FTC’s crosshairs are:

  • DoNotPay, a UK-based online subscription service that allegedly engaged in unfair or deceptive conduct under the FTC Act by touting itself as “the world’s first robot lawyer” capable of preparing “ironclad” documents under U.S. laws.
  • Ascend Ecom, a group of companies doing business throughout the United States that was allegedly deceptive under the FTC Act by (1) promising prospective customers thousands of dollars per month in passive income if they used Ascend Ecom’s “risk free” AI-powered tools to drive up sales on online storefronts on a variety of popular e-commerce platforms; (2) failing to provide prospective customers with required disclosures and documents substantiating earnings in violation of the Business Opportunity Rule; and (3) pressuring customers to modify or delete negative reviews in violation of the Consumer Review Fairness Act (CRFA).
  • Ecommerce Empire Builders, a Wyoming-founded marketer and seller of e-commerce business opportunities that allegedly (1) engaged in the same unfair or deceptive conduct as Ascend Ecom in violation of the FTC Act and Business Opportunity Rule; and (2) made clients sign unfair or deceptive contracts that prevent clients from writing and posting negative reviews, in violation of the CRFA. 
  • FBA Machine, a group of New Jersey companies and their owners accused of the same allegedly unfair or deceptive conduct as Ascend Ecom and Ecommerce Empire Builders, also in violation of the FTC Act, Business Opportunity Rule, and CRFA.
  • Rytr, a Delaware-based company offering an AI-enabled review-writing-assistant tool that allegedly engaged in unfair or deceptive conduct by allowing customers to quickly generate an unlimited number of fake product reviews and testimonials, in violation of the FTC Act.

Early Resolutions of the FTC’s Complaints

Two targets of the Operation AI Comply sweep have already moved to resolve the FTC’s complaints with proposed settlements. DoNotPay agreed to pay $193,000, notify customers about its product’s limitations, and refrain from advertising that its product can replace any professional service without evidentiary support for such a claim.

Rytr likewise agreed to a proposed settlement prohibiting it – or anyone working with it – from advertising or selling any service that generates consumer reviews or testimonials. Notably, Rytr agreed to these terms over a full-throated dissent by two FTC commissioners that: (1) emphasized the complaint did not allege that Rytr’s users in fact posted any draft reviews or that the content of any of Rytr’s AI-generated reviews was false or inaccurate; and (2) opined that “[b]anning products that have useful features but have the potential to be misused is not consistent with the [FTC’s] unfairness authority [under Section 5(a) of the FTC Act]” and could stifle innovation.

The other three FTC complaints will play out in federal court.

Key Takeaways from FTC and State Enforcement Efforts

These cases are the latest in a string of AI-related enforcement actions the FTC initiated late last year and earlier this year against Automators, Career Step, NGL Labs, CRI Genetics, and Rite Aid. Most notably, the FTC alleged that Rite Aid failed to implement reasonable procedures in its use of AI-powered facial recognition surveillance technology to identify suspected shoplifters, resulting in more false-positive matches for Black and Latino consumers.

Cases like these show that the FTC intends to broadly regulate the use of AI beyond the unfair or deceptive advertising context. Indeed, as FTC Chair Lina M. Khan emphasized as part of the unveiling of Operation AI Comply, “[t]he FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books.”

State attorneys general (AG) are taking a similar approach, exemplified most recently by a first-of-its-kind settlement between the Texas AG and health care technology company Pieces Technologies on August 21, 2024. The Texas AG investigation revealed accuracy concerns about an AI tool hospitals were using to summarize patient health care data in real time. As part of the settlement, Pieces agreed to disclose the extent of its products’ accuracy and ensure that hospital staff using its generative AI products understand the extent to which they should rely on Pieces’s products.

The FTC’s and state AGs’ early enforcement targets highlight the areas of regulators’ greatest concern:

  • Deceptively advertising AI’s capabilities to lure consumers into believing AI can fully replace professional services.
  • Offering or leveraging AI tools to short-circuit the consumer review process and engineer consumer trust.
  • Offering products that promise to employ AI when they do not.
  • Using or offering for sale AI-powered products with potentially misleading, inaccurate, or discriminatory capabilities.

More enforcement is around the corner. Individual lawsuits, putative class actions, and more state AG enforcement actions involving similar allegations of deceptive or unfair practices using AI are likely to follow. To curb risk, businesses offering AI-powered products should:

  • Evaluate their past, current, and planned product advertisements to confirm the veracity of any claims about AI-powered products.
  • Ensure that claims about an AI-powered tool’s ability to replace professional services are substantiated with credible evidence.
  • Place boundaries on how a facially neutral, non-deceptive AI-powered tool may be used if it could otherwise be used in a way that misleads users or the public.
  • Avoid restricting – through contract or other means – how customers evaluate or review an AI-powered tool.
  • Consider whether an AI-powered tool could be viewed as being used for a discriminatory purpose or whether its use could have a discriminatory effect, and implement reasonable procedures to prevent harm to consumers.

You can subscribe to future advisories and other Alston & Bird publications by completing our publications subscription form. If you have any questions or would like additional information, please contact one of the attorneys on our Consumer Protection/FTC Team.
