Trump’s AI Executive Order: What It Means for State AI Laws in California, New York, Texas, and Beyond
President Trump’s new executive order on artificial intelligence is a bid to pull AI regulation away from the states and centralize it in Washington, D.C., with a stated tilt toward “light-touch” rules meant to protect innovation and U.S. competitiveness against China. For states that have already passed AI laws, this sets up a potential clash over federal preemption, funding leverage, and the future of state police powers in the digital economy.
What the executive order actually does
The order, often referred to as the AI Preemption EO, frames state regulation as a threat to national competitiveness, especially in the race with China, echoing the President’s view that AI companies cannot navigate 50 different rulebooks.
Key provisions of the order include:
- Directs the Attorney General to create an “AI Litigation Task Force” to identify and challenge state AI laws as preempted by federal law where possible.
- Instructs the Commerce Department to compile a list of “problematic” state AI regulations that, in the administration’s view, unduly burden innovation or regulate beyond state borders.
- Signals that certain federal grant programs, including broadband and other technology-related funds, may be conditioned on states refraining from or rolling back conflicting AI laws.
States already regulating AI
Even before this EO, states were far from passive on AI. By 2025, many states had introduced AI-related bills, with a growing number enacting sector-specific or cross-cutting AI statutes. These laws range from deepfake rules and AI hiring transparency to “high-risk” AI governance and government-use standards.
Some examples that matter under the EO:
- California has pursued a patchwork approach, including laws and proposals on election deepfakes, content labeling, protections for performers’ digital replicas, and training-data disclosure for powerful models, despite a prior veto of a comprehensive AI bill.
- New York has enacted disclosure rules around algorithmic pricing and has considered broader AI accountability measures, including frontier model proposals similar to California’s.
- Texas, Utah, Delaware, Montana, and others have leaned into “innovation-first” frameworks, such as regulatory sandboxes and “right to compute” provisions, alongside targeted accountability rules.
In addition, multiple states have adopted AI laws focused on:
- Automated decision-making in employment, credit, and housing.
- Guardrails on government use of AI (e.g., procurement standards, governance committees, and impact assessments).
- Civil rights, discrimination, and consumer protection in AI deployment.
These are exactly the kinds of state initiatives that could now be scrutinized as “onerous” or as conflicting with the federal policy direction articulated in the EO.
What this means for states with AI laws
For states that have already enacted AI statutes, the EO does not automatically erase those laws, but it does materially increase the legal and political risk around them. The new AI Litigation Task Force is tasked with developing preemption theories, which may lead to federal court challenges arguing that certain AI provisions conflict with federal law. States with ambitious AI rules may come under particular pressure, especially rules governing private-sector models, imposing mandatory safety regimes for “frontier” systems, or requiring that AI outputs be altered to meet normative or “ideological” criteria. At the same time, the administration has signaled it does not plan to target “kid safety” or narrow consumer-protection measures, leaving some room for states to continue legislating in specific harm-focused areas.
This sets up three near-term dynamics for states:
- Strategic drafting and amendment: Legislatures and attorneys general may reframe AI bills to emphasize traditional police powers (fraud, discrimination, public safety) and avoid direct conflict with any emerging federal standards.
- Litigation and test cases: Expect a handful of early lawsuits challenging high-profile laws—likely in tech-heavy or regulation-forward states—as bellwethers on how far executive-branch-driven preemption can go.
- Bifurcation of approaches: Some states may pause expansive AI projects to protect funding and avoid litigation, while others may double down as a matter of principle, using state constitutional arguments and rights-based frameworks that are harder to displace.
For businesses operating across states, this federal assertiveness will mean monitoring not just state statutes, but also federal rulemakings, policy statements, and grant conditions that can indirectly reshape the playing field.
Implications for companies, developers, and users
For AI developers, the EO is designed to be a relief valve: it signals that the federal government wants to prevent a balkanized, state-by-state AI compliance burden, particularly for large foundation models and rapidly scaling startups. In the near term, however, companies may face more complexity, not less, as they track what is truly preempted, what remains enforceable, and which state obligations still apply while lawsuits play out.
Enterprise users and deployers of AI, including employers, financial institutions, health systems, and public agencies, will have to navigate a tightening mesh of federal standards (from agencies like the FTC and FCC) layered on top of partially intact state regimes. Consumers and workers may see uneven protections: some state-level safeguards could be weakened or delayed, while new federal rules may focus more on disclosure, reporting, and competition than on substantive limits on AI-related harms. Retaining experienced counsel to navigate these complex and shifting obligations can reduce the risk of litigation and penalties. Contact Kronenberger Rosenfeld today through our online submission form to discuss further.