2025 FTC Crackdown on AI and Children’s Data: What EdTech Platforms and Other Businesses Need to Do Now
The FTC ended 2025 with a clear message for anyone building AI or digital products used by children and teens: privacy and safety missteps will be costly. From student data security to “AI companion” chatbots and age‑verification, the agency is steadily building a playbook that goes beyond traditional COPPA enforcement.
Student Data Security: EdTech on Notice
In late 2025, the FTC announced a settlement with Illuminate Education, an edtech provider whose alleged security failures allowed hackers to access the personal data of more than 10 million students. According to the complaint, Illuminate overstated its security practices and then waited, in some cases nearly two years, to notify affected school districts of the breach.
Under the proposed order, the company must delete unnecessary student data, adopt a public retention schedule, and implement a comprehensive information security program with ongoing oversight. For vendors serving schools or handling education or minors’ data, this enforcement makes two expectations unmistakable: do what you promise in contracts and marketing, and implement modern security controls tailored to highly sensitive student data.
AI Companion Chatbots: Safety, Not Just Novelty
In September, the FTC launched a wide‑ranging inquiry into AI chatbots that act as “companions,” issuing orders to seven major companies, including Alphabet, Meta, OpenAI, Snap, and X.AI. The agency is probing how these products are designed, tested, and monitored, with particular attention to emotional and psychological risks for children and teenagers.
The FTC’s orders seek details on safety evaluations, age restrictions, content moderation, and how personal data from conversations is used or monetized. The inquiry also clearly telegraphs future enforcement priorities: if your chatbot simulates friendship or intimacy, expect regulators to ask what you did to anticipate and mitigate harms to minors.
Sendit and Dark Patterns Directed at Kids
The Commission also filed a complaint against the Sendit app and its CEO, alleging unlawful collection of children’s personal data and deceptive claims about anonymity and safety. While the details differ from the edtech and chatbot matters, the through‑line is familiar: products popular with younger users cannot lean on vague disclosures or clever UX to sidestep clear privacy and transparency obligations.
Age Verification: From Concept to Architecture
Looking forward, the FTC has announced a workshop on age‑verification technologies, aimed at exploring technical and legal tradeoffs in verifying age online. This complements its children’s privacy and AI work by pushing industry toward more robust ways to differentiate adults from minors, while still managing privacy and security risks.
Companies that rely on self‑attestation or easily bypassed age screens should expect more pointed questions from regulators. Now is the time to evaluate whether your age‑verification, parental consent, and teen‑specific settings are defensible for the audiences you actually reach.
Trump’s AI Executive Order: Federal vs. State Tension
Against this backdrop, President Trump recently signed an executive order on “Ensuring a National Policy Framework for Artificial Intelligence,” which aims to limit state AI regulation and centralize AI policy at the federal level. The order directs federal agencies to challenge state AI laws that conflict with the administration’s lighter‑touch approach, but it does not immediately eliminate existing state obligations.
For children’s privacy and AI, that means the FTC’s authority—and states’ consumer protection powers—remain very much in play. Businesses should plan for overlapping federal and state scrutiny rather than counting on preemption to simplify their risk.
What AI and Child‑Focused Businesses Should Do Now
Given this trajectory, companies building edtech, youth‑facing apps, or AI tools that minors are likely to use should:
- Treat student and children’s data as a top‑tier security and retention risk, with written programs and clear deletion timelines.
- Map where minors are present or likely, and tighten age‑gates, default settings, and in‑product disclosures accordingly.
- For AI chatbots and recommendation systems, document pre‑launch testing, ongoing monitoring, and escalation paths for potential harms to younger users.
- Align marketing claims about safety, anonymity, and privacy with actual engineering and data practices.
For companies building or deploying AI, edtech, or youth‑facing apps, the best next step is to pressure‑test your current privacy and safety posture before it’s tested by regulators. Kronenberger Rosenfeld regularly advises businesses on student data security, AI chatbot and companion products, children’s and teen privacy, and state and federal enforcement risk, and can help turn high‑level FTC expectations into concrete product and governance decisions. To discuss your AI or children’s privacy compliance needs, contact Kronenberger Rosenfeld through our online case submission form to speak with an attorney about a tailored compliance and risk‑mitigation strategy.