AI Marketing Claims Come Under FTC Scrutiny


AI Marketing Statements Are Being Picked Apart by the FTC

If you sell or integrate AI in your products, the text on your website and in your marketing could quickly become one of your biggest regulatory risks.

At Kronenberger Rosenfeld, LLP, we’ve sat across the table from the FTC in matters alleging deceptive marketing claims about AI-powered products and services. Our work has included negotiating resolutions, defending enforcement actions, and building compliance programs specifically for companies touting AI capabilities in advertising and user interfaces. That experience, along with the latest Executive Order on AI and a new wave of FTC activity, confirms a trend: the Commission is hyper-focused on what you say about AI, and it is paying particular attention to AI products used by kids and teens.

The New Executive Order Removes Barriers, Not Guardrails

The Executive Order released in 2025, titled “Removing Barriers to American Leadership in Artificial Intelligence,” is framed around the central goal of keeping the United States at the forefront of global AI innovation. It emphasizes free markets, research strength, and “decisive” government action to cement U.S. dominance in AI.

To accomplish that, the order expressly revokes prior AI policies the administration considered “barriers” to American AI development and directs agencies to streamline and deregulate where possible. In plain terms, the order signals a more permissive posture toward AI development while still touting high-level values like human flourishing, economic competitiveness, and national security. That does not mean a free-for-all: the federal government intends to create an environment where AI can grow quickly, but it expects companies to self-police around core legal requirements.

What the Order Means for Enforcement Agencies

The Executive Order does not strip the FTC of its core authority to pursue deceptive and unfair practices involving AI products. Instead, it fits with a broader “dual approach” emerging at the Commission: fewer broad structural interventions into AI innovators themselves, but a continued—and in some areas intensified—focus on policing misleading claims and harmful use cases.

For tech and AI companies, the practical takeaway is paradoxical. On one hand, the order aims to lower regulatory friction so you can build and deploy faster. On the other hand, the FTC is making clear that if your marketing overpromises about what AI can do or understates risks to consumers—particularly children—it will still come knocking.

The “Dual Approach” to FTC AI Enforcement

Recent commentary on FTC actions describes a “dual approach” to AI enforcement. On one side, the Commission has scaled back some broader, speculative enforcement theories, even revoking or narrowing earlier orders against certain AI tool developers. On the other, it has doubled down on a bread-and-butter priority: false advertising, deceptive claims, and unfair practices in how AI tools are marketed and deployed.

One notable order from late 2025 annulled a prior, more aggressive order, signaling that the FTC is less interested in second-guessing the mere existence of generative tools and more focused on how they are used to mislead or harm.

As the Commission put it, when individuals or businesses “use AI to violate the law or mislead about the capabilities of their generative AI they should face consequences.”

AI Marketing Statements

If you look across recent FTC priorities—teens and deceptive AI marketing, deepfakes, AI mental health claims, and chatbots—the common thread is not the underlying technology, but the promises and representations made to users. The FTC is zeroing in on:

  • Claims about accuracy, safety, and performance of AI tools.
  • Assertions that AI is “neutral,” “unbiased,” or “objective.”
  • Marketing that downplays risks, especially for kids and teens.
  • Quiet use of AI in contexts, such as health or mental health support, where consumers assume trained professionals are involved.

The message is clear: you can innovate, but you cannot exaggerate. Every claim about what AI can do must be truthful, substantiated, and clearly framed so that reasonable consumers—including parents—are not misled.

Key FTC AI Enforcement Themes

The FTC has repeatedly warned companies not to use AI as a marketing “magic word” that implies advanced, almost superhuman capabilities without solid evidence. Enforcement activity has spotlighted companies touting AI features that, in reality, rely on traditional automation or human review, or that simply do not work as advertised at scale.

“Magic Wand” Marketing and Deceptive Claims About AI Capabilities

Common risky claims include assertions that AI tools will:

  • Guarantee certain outcomes (for example, specific revenue boosts or perfect compliance).
  • Provide expert-level advice (like legal or medical) when the system is not validated and monitored appropriately.
  • Detect or remove all harmful content or bias, when actual performance falls short.

KR Law’s AI legal compliance work often involves pulling back these kinds of statements, aligning them with real-world capabilities, and building a substantiation case that can withstand regulatory scrutiny.

Deepfakes, Synthetic Media, and Misrepresentation

The FTC has also signaled that AI-generated deepfakes are squarely within its unfair and deceptive practices wheelhouse. With the rise of synthetic video and audio, the agency has raised concerns about:

  • Misleading endorsements or testimonials created or doctored with AI.
  • Deepfake political or commercial content that impersonates real people.
  • AI-generated imagery that implies real-world testing, results, or consumer experiences that never occurred.

The Commission has connected this to its broader work on the “Take It Down Act” and similar initiatives addressing nonconsensual or harmful synthetic content. If your product uses AI to create or manipulate media, your disclosures and disclaimers need to be crystal clear, not buried in fine print.

The FTC’s Intensifying Focus on Children

One of the most striking trends in recent enforcement discussions is the FTC’s explicit focus on AI products used by or marketed to children and teens. The Commission and outside observers have documented concerns about:

  • AI chatbots engaging in romantic or sexual conversations with minors.
  • “Therapy bots” that provide unregulated mental health advice to young users.
  • AI tools that encourage self-harm or fail to redirect at-risk kids to appropriate resources.

Reuters has reported that the FTC is preparing to scrutinize the mental health risks of AI chatbots to children and to demand internal documents from major AI providers. In parallel, analyses of recent enforcement trends highlight that harms to kids and teens, along with deceptive AI marketing, are now central pillars of the Commission’s privacy and consumer protection agenda.

COPPA, Privacy, and AI-Driven Profiles of Kids

The Children’s Online Privacy Protection Act (COPPA) has long governed data collection from kids under 13, but AI introduces new, nuanced risks. AI systems may be able to infer sensitive traits, emotional states, or vulnerabilities from kids’ interactions, even when “obvious” personal data seems limited.

From a compliance perspective, this means:

  • AI products that track, profile, or personalize experiences for children can trigger COPPA and state privacy obligations.
  • Companies that use kids’ data for training, evaluation, or personalization must have a legal basis and appropriate parental consent structures.
  • Businesses cannot hide behind generic privacy policies or ambiguous age gating if the real audience skews young.

KR Law regularly advises on COPPA and child-focused privacy issues in the context of digital marketing and AI deployment, helping companies align real-world product behavior with their legal obligations and public-facing statements.

Recent AI Enforcement and Risk Scenarios

AI Chatbots and Mental Health Claims

Recent reporting describes regulators and consumer advocates challenging AI chatbots that appear to provide mental health or therapeutic support without proper disclaimers or oversight. Complaints have alleged that platforms effectively enable unlicensed “therapy bots,” blurring the line between a casual conversation tool and a mental health service.

For a business, the risk arises when:

  • Marketing materials imply the chatbot can treat depression, anxiety, or other conditions.
  • The interface encourages users, including teens, to disclose deeply sensitive health information.
  • Internal documentation shows awareness of risks, but external disclosures downplay or omit them.

A safer approach is to sharply limit such claims, prominently display disclaimers, and implement guardrails that redirect users in crisis to appropriate resources.
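
As a concrete illustration of what such a guardrail can look like, here is a minimal sketch in Python. Everything in it is hypothetical (the names CRISIS_TERMS and guarded_reply are ours, not any vendor’s API), and a production system would rely on a trained classifier with clinical review rather than a keyword list; the point is simply that crisis language is intercepted before the model ever responds.

```python
# A minimal sketch of a crisis-escalation guardrail, for illustration only.
# All names here are hypothetical; a production system would use a trained
# classifier vetted by clinicians, not a keyword list.

CRISIS_TERMS = {"suicide", "kill myself", "self-harm", "end my life"}

CRISIS_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "This tool cannot provide mental health care. In the U.S., you can "
    "call or text 988 to reach the Suicide & Crisis Lifeline."
)

def guarded_reply(user_message: str, generate_reply) -> str:
    """Intercept crisis language before the model is allowed to respond."""
    lowered = user_message.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        # Redirect to vetted resources instead of letting the model improvise.
        return CRISIS_MESSAGE
    return generate_reply(user_message)
```

The design choice that matters for regulators is that the redirect happens deterministically, outside the model, so the safety behavior does not depend on how the model happens to respond.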

Teen-Facing AI and Inappropriate Content

Another reported example involves AI chatbots that allowed romantic or sensual interactions with minors before platforms tightened protections. Regulators have framed this as both a safety issue and a deceptive design issue: parents and teens were led to believe the tools were safe or moderated, when in practice they enabled harmful conversations.

If your AI tool has any chance of reaching minors, you should assume regulators will evaluate:

  • Whether your moderation controls and training data reflect realistic teen behavior.
  • Whether parental controls and disclosures accurately describe what the AI will and will not do.
  • Whether your marketing language downplays the potential for inappropriate or harmful content.

KR Law’s compliance work often includes mapping out these risk scenarios, reviewing UI/UX flows, and aligning outward messaging with the actual user experience.

Deepfake and Synthetic Content in Advertising

Commentary on recent FTC priorities also highlights deepfakes and synthetic media as a likely target for future enforcement, especially where AI is used to simulate endorsements or consumer experiences. This would apply to a campaign that uses AI to generate a realistic video of a “customer” praising a product, without clear labeling or disclosure that it is artificially generated.

The risk escalates when:

  • Synthetic content is presented in a way that a reasonable consumer would interpret as real footage or testimony.
  • AI is used to impersonate real individuals, including influencers or public figures.
  • Companies fail to establish internal review processes to vet such content before it goes live.

Under traditional deception principles, those scenarios can quickly become enforcement cases, especially if they involve vulnerable groups or high-stakes products like financial services or health-related tools.

AI Legal Compliance and Risk Categorization

KR Law’s AI legal compliance practice is built around a structured risk categorization model that places AI projects into buckets like “Unacceptable,” “High,” “Limited,” and “Minimal” risk. High-risk categories often include AI tools used for hiring and worker management, or chatbots giving health and fitness advice, while entertainment use cases may fall into lower-risk categories.

This classification is not academic. It drives how we calibrate your marketing claims, disclosures, and internal controls, ensuring that tools marketed as high impact or safety critical are backed by more rigorous substantiation and governance.
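
To make that concrete, the sketch below shows how a tier classification can mechanically drive the level of substantiation required before a claim ships. The four tier names come from the model described above; the example controls and function names are hypothetical illustrations, not KR Law’s actual methodology.

```python
# An illustrative sketch of tier-driven compliance controls. The tier names
# mirror the risk model described above; the control lists are hypothetical.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Higher tiers demand more rigorous substantiation before marketing ships.
REQUIRED_CONTROLS = {
    RiskTier.HIGH: [
        "documented performance testing",
        "claim substantiation file",
        "legal review of all marketing copy",
    ],
    RiskTier.LIMITED: ["clear disclosure that users are interacting with AI"],
    RiskTier.MINIMAL: ["routine advertising review"],
}

def controls_for(tier: RiskTier) -> list[str]:
    """Return the controls a project must satisfy before claims go live."""
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError("Unacceptable-risk uses should not be deployed at all.")
    return REQUIRED_CONTROLS[tier]
```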

Artificial Intelligence and FTC Compliance

Our FTC compliance and defense work integrates traditional advertising law principles with AI specific guidance and enforcement trends. For AI products, that often includes:

  • Reviewing sales pages, landing pages, and funnels to remove or revise problematic AI claims.
  • Evaluating business models that rely on personal data, including kids’ data, under FTC, COPPA, and state privacy rules.
  • Counseling on endorsements, testimonials, and influencer content involving AI tools.
  • Defending companies in investigations and litigation when the FTC or state AGs challenge AI-related marketing statements.

Because KR Law’s practice is focused on internet and technology law, we are accustomed to fast-moving product cycles and the need to align legal compliance with aggressive market timelines.

Audit AI Claims Across All Channels

Start by mapping every representation you make about AI—on your website, in your app store listings, investor decks, sales scripts, and privacy policies. Ask whether each statement is:

  • Factually accurate, based on current performance data, not future ambitions.
  • Supported by internal testing, documentation, or third-party validation.
  • Presented in a way a reasonable consumer, including a parent, would understand.

If you cannot back up a claim today, it should be revised, qualified, or removed. Our team regularly conducts these audits and helps clients build substantiation files that stand up under regulatory scrutiny.
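
In practice, even a simple inventory keeps this process honest. The sketch below is a hypothetical illustration (the Claim structure and field names are ours): each marketing statement is recorded with its channel and supporting evidence, and anything without documented support is flagged for revision or removal. The substantiation judgment itself remains legal work; tooling like this only makes the inventory reviewable.

```python
# A minimal sketch of a claims-audit pass. The Claim structure and field
# names are hypothetical; the substantiation judgment itself is legal work.

from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str                  # e.g., "Our AI catches errors humans miss"
    channel: str               # website, app store listing, sales deck, etc.
    evidence: list[str] = field(default_factory=list)  # tests, studies, docs

def flag_unsubstantiated(claims: list[Claim]) -> list[Claim]:
    """Return every claim with no documented support, for revision or removal."""
    return [c for c in claims if not c.evidence]
```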

Build Child-Sensitive Design and Messaging

If kids or teens may interact with your AI tool, you should treat that as a high-priority risk category with enhanced controls. Consider:

  • Implementing robust age gates and parental controls that match your real audience.
  • Designing chatbot behavior and content filters specifically to avoid romantic, sexual, or self-harm-related conversations with minors (a minimal sketch follows this list).
  • Crafting clear, prominent disclosures that explain what the AI does, what it does not do, and how data from minors is used.
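
The sketch below illustrates the second point in minimal form. All names (User, BLOCKED_TOPICS_FOR_MINORS, apply_minor_policy) are hypothetical; a real deployment would combine verified age signals with a trained content classifier. The key design choice survives either way: when age is unknown, the stricter policy applies.

```python
# A minimal sketch of age-aware content guardrails. All names here are
# hypothetical; real systems use verified age signals and trained classifiers.

from dataclasses import dataclass

BLOCKED_TOPICS_FOR_MINORS = {"romance", "sexual_content", "self_harm"}

@dataclass
class User:
    age: int | None  # None means age is unverified

def apply_minor_policy(user: User, detected_topics: set[str]) -> bool:
    """Return True if the conversation should be blocked or redirected."""
    # Treat unverified ages as minors: the stricter policy is the default.
    is_minor = user.age is None or user.age < 18
    return is_minor and bool(detected_topics & BLOCKED_TOPICS_FOR_MINORS)
```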

These are not just best practices; they are increasingly what regulators expect from companies that put AI into the hands of children.

Your Words Are as Important as Your Code

The latest Executive Order on AI encourages rapid innovation and positions the United States to remain a global leader in artificial intelligence, but it leaves core consumer protection rules firmly in place. At the same time, the FTC is entering a new chapter where it may step back from some broad structural interventions while sharpening its focus on the marketing claims, disclosures, and product designs that shape how consumers—and especially children—experience AI.

For AI companies, that means your greatest regulatory exposure may not lie in your model weights or your training data, but in how you describe your product to the world. At Kronenberger Rosenfeld, LLP, we help clients navigate that gap between engineering ambition and legal reality, ensuring that your marketing is as carefully built as your models and that your products can grow without putting a target on your back.
