Figma Lawsuit Opens Door to New Damages Claims
Figma, a cloud-based collaborative design platform, faces a proposed class action in California federal court for allegedly using customer designs and data without permission to train AI models, boosting its valuation ahead of its $1.2 billion IPO.
Why Was Figma Sued?
The lawsuit claims users were unknowingly opted in to Figma’s AI training program, violating promises and risking billions in intellectual property value. The complaint alleges that the company improperly leveraged customers’ design files and other proprietary content to develop and refine its generative AI tools, without adequate authorization or notification.
Figma’s Response
Figma denies that it uses customer data without proper authorization, including to train its AI models, and further asserts that it takes steps to de-identify customer data that it does use with authorization.
“Our training is focused on general patterns – not on customers’ unique content, concepts and ideas,” a Figma spokesperson said in a statement. According to its website, Figma’s approach to AI model development is designed to protect customers’ “privacy and confidential information.”
Why the Figma Class Action Lawsuit Matters
With businesses accelerating their AI adoption, this case highlights the expanding scope of legal questions concerning disclosure, consent, and the use of user-generated works for a platform’s own commercial gain.
AI Harm
The legal action is part of a growing controversy over tech firms’ use of proprietary content for generative AI development, but it is novel in its focus on claims of unauthorized access and trade secret misappropriation; most prior cases have relied on copyright infringement theories. The complaint also emphasizes an alleged breach of contractual assurances regarding the use of customer content, based on new terms of service Figma issued at the time of its pivot to AI. The issue has recently drawn interest from the FTC.
AI Legal Damages and Business Rights
If the theories in the Figma complaint are held sufficient to state a claim, they could open a new avenue for consumers and businesses without copyright protection over their content to sue over the use of their data to train AI models.
This case is poised to shape evolving norms and legal standards governing transparency, data consent practices, and recourse for consumers and businesses whose creative assets are repurposed by service providers to advance their own AI models.
For consumers and businesses, it is critical to understand what rights and authorizations you are granting service providers to access and use your proprietary information, and how you are granting them.
For organizations developing and training AI models, it is essential to ensure that your terms of service, privacy policies, and communications with customers about the use of their data are clear and transparent, and that your practices match those representations.
If your business trains or uses AI models in any capacity, it is more important than ever that your disclosures are up to date and that you know how to properly manage user data to prevent potential legal action.
Contact our team today if you have questions about AI harm or if your business has suffered damages as a result of AI.