Children’s Advertising Review Unit (CARU) Clarifies Guidance for Chatbots, Companions, and other AI Products
Protecting Children in an AI Future
With input from a working group of industry leaders, the Better Business Bureau's Children's Advertising Review Unit (CARU) recently released "Generative AI & Kids: A Risk Matrix for Brands & Policymakers." This thoughtful compliance guide includes the CARU Risk Matrix, a framework for understanding and mitigating the particular risks children may encounter in connection with generative AI.
Children are uniquely susceptible to misleading, manipulative, and harmful content online, and those risks grow as the use of AI becomes more pervasive. CARU's guidance is an important step toward addressing the distinct issues that arise as children increasingly encounter AI on the internet.
The CARU Risk Matrix
The first part of the CARU Risk Matrix is designed to help companies developing or deploying AI products or systems, such as ads or apps. The second part is directed at parents, offering guidelines for those worried about the risks children face as they increasingly use AI or interact with AI-generated content.
Guidelines for Children’s AI Safety
The Matrix addresses the following eight topics and risk areas that companies engaging audiences under 18 should consider when developing AI products or deploying AI systems:
- Misleading & deceptive advertising
- Influencer & endorser practices
- Privacy & data protection
- Safe & responsible use of AI
- Mental health & development
- Manipulation & commercialization
- Exposure to harmful content
- Lack of transparency & explainability
For each risk area, the CARU Risk Matrix provides real-life situations and identifies potential issues or risks to consider. The Matrix further provides guidance and/or risk mitigation strategies, with links to relevant legal and regulatory frameworks to review for additional information, including CARU Guidelines, FTC Act Section 5, COPPA, APA Guidelines, and OECD AI Principles.
Parental Guidelines for AI and Children
The Matrix also provides practical guidelines for parents concerned about the risks children face growing up in a world increasingly shaped by AI. It identifies currently known risks, such as the potential for AI to expose children to age-inappropriate content, and pairs them with practical suggestions for parents, including the use of privacy filters and settings as well as specific conversation topics to raise with children.
Comparison to U.S. Regulations for Adults
| Category | CARU Matrix (Children) | AI Laws/Regs (Adults, U.S.) |
| --- | --- | --- |
| Advertising | Narrow focus on deceptive/misleading ads, blurring of fantasy and reality, and parasocial relationships. | More general; FTC Act Section 5 applies, with less emphasis on cognitive vulnerability. |
| Influencers/Endorsers | Advises clear disclosures for AI influencers, child-specific oversight, moderation, and robust content substantiation. | FTC Endorsement Guides focus on transparency, but impose no child-targeted requirements. |
| Privacy/Data Protection | Emphasizes COPPA compliance, privacy-by-design, high default privacy settings, and parental verification/notice; recommends no AI data reuse. | No equivalent federal privacy law; a patchwork of state laws applies (e.g., CCPA, CPRA), with opt-out models common. |
| AI Use & Oversight | Advocates humans-in-the-loop, vendor oversight, child-specific content moderation, and bias assessments; suggests a "Greenlight Committee." | No federal AI law; NIST AI Risk Management Framework and voluntary best practices. Biden's 2023 Executive Order sets risk management goals. |
| Mental Health | Highlights addictive design, screen-time limits, digital wellness, emotional monitoring, and avoidance of human-mimicking chatbots. | No legal requirements; industry self-regulation and advocacy-organization recommendations. |
| Commercialization | Advises restrictions on behavioral targeting, nudging, and personalized ads, along with clear disclosures. | No outright bans; behavioral advertising is regulated mainly by the FTC (UDAP). |
| Harmful Content | Advocates strong moderation tools, age verification, and source verification; addresses AI-generated explicit material and deepfakes. | No federal mandate; Section 230 offers platforms immunity, with exceptions. |
| Transparency | Promotes explainability tools, clear disclosures regarding the use of AI, opt-in/opt-out mechanisms, and user-friendly privacy terms. | No general consumer right to explanations, although transparency is an emerging focus in federal guidance. |
- Child-Focused Safeguards: The CARU Risk Matrix is more precautionary because it applies a "best interests of the child" standard. The Matrix provides guidance on parental consent, recommends against data reuse for AI training, and promotes explainability and bias-mitigation measures more stringent than those applied to general adult audiences.
- Legal Backdrop: COPPA provides a strong baseline for data practices involving children under 13, mandating verifiable parental consent and minimal data collection. No equivalent comprehensive privacy framework exists for adults in the U.S., apart from state statutes such as CCPA or FTC enforcement in specific cases.
- Guidance & Voluntary Best Practices: The CARU Risk Matrix blends legal requirements with voluntary best practices drawn from international frameworks, presenting a more proactive and nimble approach than the slower-moving regulatory landscape, particularly at the federal level.
- AI-Specific Regulation: For adults, the U.S. relies on general consumer protection law (FTC Act Section 5), sectoral privacy laws, and emerging federal guidance, rather than AI-specific laws. Voluntary frameworks and guidance (the NIST AI RMF, the White House Executive Order, the OECD AI Principles) shape the landscape but are not prescriptive.
Practical Impact of the CARU Risk Matrix
- For Children: Companies and their brands are advised to implement heightened safeguards, clear disclosures, bias checks, and parental controls whenever deploying AI in child-directed contexts.
- For Adults: Companies face mostly general standards, such as avoiding deception or unfairness, offering transparency where possible, and following sector-specific rules (e.g., finance, healthcare). There is much less emphasis on cognitive vulnerability or protection against psychological and behavioral manipulation, concerns that are particularly acute for children.
The CARU Risk Matrix reflects important guidance from industry leaders that is specific to child protection issues in the context of generative AI. The Risk Matrix advocates that, for children, there should be greater transparency, privacy, and focus on ethical design than generally is expected for adults in the United States. It attempts to fill regulatory gaps currently unaddressed by U.S. law, which takes an adult-centric approach, and recognizes that children constitute a distinct, vulnerable class necessitating greater industry accountability and care.
This entry was posted on Thursday, November 06, 2025 and is filed under Resources & Self-Education, Internet Law News.