Washington Draws a Line on State AI Regulation
On March 20, 2026, the White House released its National Policy Framework for Artificial Intelligence — the most detailed blueprint yet for how the federal government intends to bring order to America’s increasingly fragmented AI regulatory landscape. The document is not binding law. It is a set of legislative recommendations for Congress. But its central message is unmistakable: the era of 50 different state-level AI rulebooks is ending, one way or another.
The framework builds on Executive Order 14365, signed in December 2025, which directed the Commerce Secretary to identify “onerous” state AI laws within 90 days and established an AI Litigation Task Force under the Attorney General to challenge state measures deemed unconstitutional or preempted by federal law. That 90-day deadline passed on March 11, 2026, with no public report from Commerce — but the White House framework itself now fills the policy vacuum with sweeping recommendations.
The Patchwork Problem
The urgency behind the framework is quantifiable. As of March 2026, lawmakers in 45 states have introduced 1,561 AI-related bills — already surpassing the total for all of 2024, with most legislative sessions still in progress. In 2025 alone, 145 state AI bills were enacted across all 50 states.
The result is a compliance labyrinth. Colorado’s AI Act — signed in 2024, delayed multiple times, and now being substantially reworked after a working group reached consensus in March 2026 — imposes algorithmic accountability requirements on developers and deployers. Utah has enacted multiple AI laws governing mental health chatbots, generative AI disclosures, and consumer protection. Georgia passed its own chatbot bill. Alabama is establishing an AI and Children’s Internet Safety Study Commission. California, Texas, Illinois, and dozens of other states have their own overlapping and sometimes contradictory frameworks.
For companies building AI systems, this patchwork means navigating a different regulatory regime in nearly every state where they operate — a burden the White House argues is unsustainable for maintaining American competitiveness.
What the Framework Actually Proposes
The framework’s preemption architecture rests on a clear distinction: states would lose authority over AI model development but retain power over specific harms.
What states cannot do under the proposed framework:
- Regulate AI model development, training, or the underlying technology itself
- Impose liability on AI developers for unlawful conduct by third parties using their models
- Burden Americans’ use of AI for activities that would be lawful if performed without AI
- Create requirements that conflict with the proposed federal “minimally burdensome” standard
What states can still do:
- Enforce laws of general applicability (consumer protection, fraud prevention)
- Protect children through age-assurance requirements and data collection limits
- Exercise zoning authority over AI infrastructure like data centers
- Govern their own procurement and use of AI in public services
This developer-versus-deployer distinction is deliberate. The framework effectively creates a liability shield for AI model creators — companies like OpenAI, Anthropic, Google, and Meta — while leaving the door open for states to regulate how those models are applied in practice.
Seven Policy Pillars
Beyond preemption, the framework lays out legislative recommendations across seven domains:
Child safety sits at the top. The framework urges Congress to mandate age-assurance tools, parental controls for privacy and engagement settings, and strict limits on data collection from minors and online behavioral advertising targeting them.
Consumer protection recommendations focus on transparency — ensuring consumers know when they are interacting with AI and can seek recourse for AI-caused harm.
Energy policy addresses the data center boom directly, calling for streamlined federal permitting for AI infrastructure and a legal requirement that residential ratepayers not bear increased electricity costs from new AI data center construction.
National security provisions push for technical evaluation capacity within security agencies to assess advanced AI systems.
Intellectual property is where the framework takes its most controversial position: the Administration states that “training of AI models on copyrighted material does not violate copyright laws,” while acknowledging the courts will ultimately decide.
Free speech protections aim to prevent the government from “coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas.”
Workforce provisions address the displacement and transformation of jobs by AI, though specific legislative mechanisms remain vague compared to the other pillars.
The Enforcement Mechanism
The framework is not just recommendations on paper. Executive Order 14365 created the AI Litigation Task Force, announced on January 9, 2026, with a mandate to challenge state AI laws on constitutional grounds — including claims that they unconstitutionally regulate interstate commerce or are preempted by federal regulation.
Additionally, the executive order instructs federal agencies to evaluate states’ AI regulatory frameworks when determining eligibility for federal funding — a powerful financial lever that could pressure states into compliance even before Congress acts.
No New Regulator — But New Tensions
Notably, the framework explicitly recommends that Congress not create any new federal AI regulatory body. Instead, it calls for sector-specific regulation through existing agencies with subject matter expertise, supplemented by industry-led standards. This approach aligns with the Administration’s broader deregulatory posture but raises questions about coordination and enforcement capacity.
The framework has already generated friction in Congress. In July 2025, the Senate voted 99-1 to strip a proposed ten-year moratorium on state AI regulation from a budget reconciliation bill — a strong signal that broad preemption faces bipartisan skepticism. Senator Marsha Blackburn (R-Tenn.) has released her own discussion draft, the “TRUMP AMERICA AI Act,” which incorporates child safety measures and deepfake protections but notably diverges from the White House on copyright, asserting that AI training on copyrighted works does not constitute fair use.
Democrats including Representatives Yvette Clarke and Don Beyer, along with Senator Brian Schatz, have raised concerns about accountability gaps in the preemption approach.
What This Means Globally
The framework positions the United States firmly in the “innovation-first” camp of AI governance, in sharp contrast to the European Union’s risk-based AI Act and China’s sector-specific AI regulations. For nations developing their own AI governance strategies, the US approach offers a clear signal: Washington prioritizes speed and market leadership over precautionary regulation.
For countries still building their AI regulatory frameworks, the US debate illustrates a fundamental tension that every jurisdiction will face — how to balance the desire for national coherence with the legitimate need for local protections. The federal preemption model may be distinctly American in its constitutional framing, but the underlying question is universal: who gets to write the rules for artificial intelligence, and at what level of government?
The answer, for now, rests with a Congress that has shown it can agree on protecting children from AI harms but remains deeply divided on nearly everything else.
Frequently Asked Questions
Does the US AI Policy Framework have the force of law?
No. The framework released on March 20, 2026, is a set of legislative recommendations for Congress, not binding legislation. It builds on Executive Order 14365, which has enforcement mechanisms including an AI Litigation Task Force, but the preemption provisions require congressional action to become law.
What is the developer-versus-deployer distinction in the framework?
The framework proposes that states cannot regulate how AI models are built, trained, or developed. However, states retain authority to regulate how AI is deployed in practice, including consumer protection, child safety, and fraud prevention. This effectively shields AI model creators while allowing regulation of harmful applications.
How does this framework compare to the EU AI Act?
The US framework takes an explicitly “innovation-first” approach, avoiding the EU’s risk-based classification system and precautionary regulation. While the EU categorizes AI systems by risk level and imposes requirements accordingly, the US approach favors minimal regulation of development with sector-specific oversight through existing agencies rather than a new AI regulator.