Why AI Washing Is the FTC’s Sharpest Enforcement Weapon Right Now
In September 2024, the Federal Trade Commission launched Operation AI Comply, a coordinated campaign to pursue companies that attach “AI” to product descriptions without the evidence to back it up. The timing was deliberate. By mid-2024, AI had become the single most overused term in commercial marketing — a magic word that implied capability, autonomy, and returns that most products could not deliver.
Eighteen months later, the campaign has proven durable. Despite a change in administration that paused or rolled back many federal regulatory initiatives, Operation AI Comply has continued under the new leadership at the FTC. In April 2026, Morgan Lewis reported that AI enforcement is accelerating precisely because federal AI legislation has stalled — the FTC’s existing Section 5 authority is filling the vacuum.
This matters to every team that touches product marketing, sales enablement, or investor communications. The FTC’s enforcement theory does not require a new AI law. The same authority that governs every other advertising claim in American commerce — the prohibition on unfair or deceptive acts or practices — now applies to anything your company says about its AI features.
What the Enforcement Record Actually Shows
The FTC has brought at least a dozen AI-washing cases under Operation AI Comply since September 2024. Four cases define the enforcement boundary:
Click Profit (2025): The FTC alleged that Click Profit marketed an “automated, AI-powered system” for generating passive income. The reality: 20% of participants earned nothing, and 33% earned under $2,500. The judgment exceeded $20 million — the largest AI-washing penalty on record at the time.
Workado (April 2025): Workado advertised its AI content-detection tool as achieving 98% accuracy. Independent testing found the actual rate was approximately 53%. The FTC issued a consent order requiring Workado to cease all accuracy claims and submit to ongoing compliance monitoring. The lesson: third-party validation data that contradicts your marketing claims is not a compliance defense — it is evidence.
Air AI (March 2026): The FTC alleged Air AI bilked entrepreneurs and small businesses out of roughly $19 million through deceptive earnings claims tied to “conversational AI” software that was marketed as replacing human customer service representatives. The settlement includes an $18 million monetary judgment, $50,000 in immediate consumer relief, and a permanent ban on marketing business opportunities. The owners, not just the company, were personally named.
Growth Cave (January 2026): DLA Piper’s analysis of the Growth Cave resolution noted the case targeted misrepresented automation capabilities — Growth Cave described an income-generation system as AI-powered when substantial manual intervention was required to produce the advertised results.
The pattern is consistent: financial or performance claims tied to AI features, without evidence that the AI actually produces those outcomes.
What Enterprise Compliance Officers Should Do About It
The FTC has not confined its actions to fringe players or obvious scams. The Colorado AI Act, New York’s Algorithmic Pricing Disclosure Act, and California’s AI transparency measures are creating parallel state-level compliance obligations — but the FTC’s federal authority remains the most immediate risk for companies with national reach. Here is how enterprise compliance and marketing teams should respond.
1. Audit Every AI Claim in Your Current Marketing Stack
Run a systematic inventory across your website, paid advertising, sales decks, press releases, email campaigns, and investor materials. For each appearance of the words “AI,” “artificial intelligence,” “machine learning,” “automated,” or “intelligent,” document exactly what the underlying system does, how it was validated, and what evidence you hold. The FTC’s theory covers both explicit claims (“our AI achieves 98% accuracy”) and implicit ones (“AI-powered” next to a performance metric implies the AI caused the metric).
Do not outsource this to legal alone. Marketing, product, and engineering need to sit in the same room and agree on what the product actually does — legal can then assess whether the language matches the evidence.
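The inventory step can be partially automated. Below is a minimal sketch, assuming marketing copy has been exported as plain-text files in a directory; the term list, file layout, and record fields are illustrative, not a prescribed format.

```python
import re
from pathlib import Path

# Terms whose every appearance should be logged and substantiated.
AI_TERMS = re.compile(
    r"\b(AI|artificial intelligence|machine learning|automated|intelligent)\b",
    re.IGNORECASE,
)

def inventory_claims(copy_dir: str) -> list[dict]:
    """Scan exported marketing copy and record each AI-related term in context."""
    findings = []
    for path in Path(copy_dir).glob("**/*.txt"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            for match in AI_TERMS.finditer(line):
                findings.append({
                    "file": path.name,
                    "line": lineno,
                    "term": match.group(0),
                    "context": line.strip(),
                    # To be filled in during the joint marketing/product/
                    # engineering review, then assessed by legal:
                    "what_the_system_does": None,
                    "evidence_held": None,
                })
    return findings
```

The output is a claims register: one row per term occurrence, with empty evidence fields that force the cross-functional review the step describes.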
2. Build a Substantiation File Before Any Campaign Launches
Every claim about AI capability requires what the FTC calls “competent and reliable evidence” — typically internal testing data, third-party benchmarks, or peer-reviewed validation. This standard has existed for decades in traditional advertising law. It now applies to AI with additional scrutiny because the FTC has explicitly said that “adding ‘AI’ to the description of a product or service invites additional scrutiny.”
The substantiation file should live in a documented, retrievable location and be reviewed every time a product update changes the underlying model’s behavior. If your AI model is retrained quarterly and accuracy shifts, your marketing must reflect the current performance, not a historical high-water mark.
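The retraining-review loop can be expressed as a simple gate. The sketch below is illustrative: the record fields, the 1% tolerance, and the failure behavior are assumptions a compliance team would set for itself, not an FTC-specified mechanism.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SubstantiationRecord:
    claim_text: str          # exact wording used in marketing
    claimed_accuracy: float  # figure the copy advertises
    measured_accuracy: float # latest validation result
    evidence_source: str     # where the test data lives
    last_reviewed: date

def review_after_retraining(record: SubstantiationRecord,
                            new_accuracy: float,
                            tolerance: float = 0.01) -> SubstantiationRecord:
    """Re-validate a claim whenever the model changes, so marketing
    reflects current performance rather than a historical high-water mark."""
    record.measured_accuracy = new_accuracy
    record.last_reviewed = date.today()
    if record.claimed_accuracy > new_accuracy + tolerance:
        raise ValueError(
            f"Claim '{record.claim_text}' advertises "
            f"{record.claimed_accuracy:.0%} but current performance is "
            f"{new_accuracy:.0%}: update the copy before the next campaign."
        )
    return record
```

A 98%-accuracy claim against a model now measuring 53% (the Workado fact pattern) fails this gate immediately, which is exactly when the copy needs to change.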
3. Separate AI-Assisted from AI-Autonomous in All Customer-Facing Language
A recurring element in multiple enforcement actions is the gap between “automated” and “requires human review.” Air AI marketed software as replacing human customer service agents; the product required substantial human intervention. Click Profit described a “passive income” system that in practice demanded active management.
Compliance teams should establish two clearly defined categories in marketing vocabulary: systems where AI produces an output that a human reviews and approves, versus systems that operate without human sign-off. The first category can legitimately use “AI-assisted” or “AI-augmented.” The second category requires substantially higher substantiation and legal review before any public claim.
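The two-category vocabulary can be enforced with a pre-publication check. This is a minimal sketch: the category names, the phrase list, and the routing behavior are illustrative assumptions, not a complete screen.

```python
from enum import Enum

class Oversight(Enum):
    HUMAN_REVIEWED = "human_reviewed"  # AI output a person reviews and approves
    AUTONOMOUS = "autonomous"          # operates without human sign-off

# Language that implies full autonomy; illustrative, not exhaustive.
AUTONOMY_LANGUAGE = ("fully automated", "no human", "replaces your team",
                     "hands-off", "passive")

def vet_copy(copy: str, oversight: Oversight) -> list[str]:
    """Return the problems found in a draft claim, given how the
    system actually operates."""
    problems = []
    lowered = copy.lower()
    if oversight is Oversight.HUMAN_REVIEWED:
        for phrase in AUTONOMY_LANGUAGE:
            if phrase in lowered:
                problems.append(
                    f"'{phrase}' implies autonomy, but the system is "
                    "human-reviewed; use 'AI-assisted' or 'AI-augmented'."
                )
    else:
        problems.append("Autonomous claim: route to legal review with the "
                        "substantiation file before publishing.")
    return problems
```

Note the asymmetry: human-reviewed systems get a vocabulary check, while any autonomous claim is always routed to legal, reflecting the higher substantiation bar the second category carries.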
4. Treat Earnings Claims as Regulated Financial Advertising
When AI is tied to income, returns, or financial outcomes — whether for businesses or consumers — the FTC’s historical standards for earnings claims apply in full, and AI amplifies the legal risk. Every case in the Operation AI Comply portfolio that resulted in a significant judgment involved earnings or performance promises. If your product or service implies that AI will generate revenue, save costs, or deliver a financial outcome, you are in the same factual territory as these cases.
The standard: all advertised outcomes must reflect what a typical customer achieves, not edge cases or best-performer results. If 33% of your customers earn under $2,500, you cannot advertise the experience of the top 1%.
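As a minimal illustration of why top-performer figures fail this test, the check below compares an advertised figure against the median customer outcome. The median as a stand-in for "typical" is an assumption for the sketch; the FTC looks at the overall distribution, not any single statistic.

```python
from statistics import median

def vettable_earnings_claim(customer_earnings: list[float],
                            advertised_figure: float) -> bool:
    """True only if the advertised figure does not exceed what a
    typical (here: median) customer actually earns."""
    return advertised_figure <= median(customer_earnings)
```

With a Click Profit-shaped distribution, where a fifth of customers earn nothing and a third earn little, advertising the top performer's number fails the check even though the figure is technically real.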
The Structural Lesson
Operation AI Comply is not primarily an enforcement campaign — it is a market signal. The FTC is establishing, case by case, that AI does not change the evidentiary standards for commercial claims. Every enforcement action is calibrated to reinforce the same message: capability claims require capability evidence.
The durability of the campaign through an administration change confirms that this is institutional, not political. The FTC’s career enforcement staff built Operation AI Comply as a Section 5 application — the same legal foundation as every advertising enforcement action since 1938. As long as companies continue marketing AI products with unsubstantiated claims, the FTC has all the authority it needs to act.
For compliance officers, the practical implication is straightforward: AI marketing risk is advertising law risk. It belongs in the same governance framework as product liability claims, environmental benefit claims, and earnings claims — with the same documentation requirements, the same pre-launch review process, and the same post-launch monitoring protocol.
Frequently Asked Questions
What is Operation AI Comply and when did it start?
Operation AI Comply is a Federal Trade Commission enforcement initiative launched in September 2024, targeting companies that make unsubstantiated or deceptive claims about AI product capabilities. The campaign uses the FTC’s existing authority under Section 5 of the FTC Act — the same statute governing all commercial advertising — rather than new AI-specific legislation. As of May 2026, it has produced more than a dozen enforcement actions.
What kinds of AI claims attract FTC scrutiny?
The FTC targets three main categories: financial or earnings promises tied to AI features (e.g., “our AI generates passive income”), capability claims that overstate accuracy or automation level (e.g., claiming 98% accuracy when independent testing shows 53%), and AI tools that facilitate deception such as fake review generation. Both explicit statements and implicit performance implications are covered.
Does FTC enforcement only apply to US companies?
Section 5 of the FTC Act applies to any company conducting commerce in or affecting the United States, regardless of where it is incorporated. Foreign companies that sell to US customers, run US-targeted advertising, or operate through US subsidiaries are within the FTC’s jurisdiction. The Air AI and Click Profit cases both involved companies with primarily US-facing operations, but the legal theory would extend to any foreign entity engaging in deceptive AI marketing directed at American consumers.
—
Sources & Further Reading
- FTC Announces Crackdown on Deceptive AI Claims and Schemes — Federal Trade Commission
- Air AI Settlement — Federal Trade Commission Press Release, March 2026
- One Year In, FTC’s Operation AI Comply Continues — Benesch Law, 2026
- FTC Resolves Another AI-Washing Case: Growth Cave — DLA Piper, February 2026
- AI Enforcement Accelerates as Federal Policy Stalls and States Step In — Morgan Lewis, April 2026
- Operation AI Comply: Every Major FTC AI Enforcement Action — PR News / Everything-PR
