The Federal Vacuum That Created the Patchwork
The Trump Administration’s January 2025 executive order directing federal agencies to prioritize AI innovation over regulation did not eliminate US AI compliance exposure; it redirected it. By signalling that federal AI governance would be permissive, the EO effectively gave state legislatures more reason to fill the void. The result, through the first half of 2026, is a legislative environment in which, according to Wilson Sonsini’s 2026 AI regulatory preview, enterprises must track not a single federal standard but a matrix of state-level requirements that differ in scope, threshold, enforcement mechanism, and effective date.
The challenge for compliance teams is not that any individual state law is impossible to satisfy. California’s training data disclosure requirements, Colorado’s high-risk AI governance obligations, and Connecticut’s frontier model transparency rules are each individually manageable. The challenge is the combinatorial compliance burden: a multi-state enterprise must maintain systems that simultaneously satisfy the different audit trail, notification, opt-out, bias audit, and documentation requirements of a dozen or more jurisdictions, with different effective dates and different enforcement bodies.
The Gunderson Dettmer 2026 AI laws update makes the enterprise implication concrete: organizations face “indirect compliance and contracting risks” even when they do not directly deploy regulated AI systems, because their vendors (the ATS platforms, credit scoring tools, and HR analytics systems they use) are themselves regulated. Vendor due diligence and contract terms have become compliance exposures in their own right.
The Key State Frameworks Every Enterprise Must Map
California: Multiple Laws, Layered Obligations
California’s AI regulation is not a single law but a stack of requirements that each address different system types. Senate Bill 53 (the Transparency in Frontier AI Act) requires frontier AI developers to publish safety and security frameworks and report incidents to the California Government Operations Agency. Assembly Bill 2013 requires AI developers to disclose detailed summaries of training datasets — including size, protected IP inclusion, and licensing status — with litigation exposure: xAI has challenged AB 2013’s constitutionality, arguing it compels trade secret disclosure.
For enterprises deploying rather than developing AI, the California Consumer Privacy Act (CCPA) has been interpreted to apply to automated decision-making in ways that create opt-out rights for consumers. The practical implication: any automated decision system affecting California residents requires opt-out mechanisms, not just disclosure.
Colorado: The Most Watched Rewrite in US AI Law
Colorado’s original AI Act (Senate Bill 205, enacted 2024) was the most ambitious US state AI regulation — requiring developers and deployers of high-risk AI to conduct algorithmic impact assessments, provide consumers with explanations of AI-driven decisions, and enable opt-outs. The tech industry lobbied intensively for revision. According to the Colorado Sun’s May 2026 coverage, the replacement bill (Senate Bill 189) significantly scaled back these requirements: instead of mandating disclosure of how AI makes decisions, it now requires only notification that AI was used and provides consumers with an appeal right.
Critically, SB 189 pushes the effective date to January 2027, giving enterprises a preparation window. The rewrite was not without controversy — Senate Majority Leader Robert Rodriguez described it as a compromise where “everybody lost and everybody won,” acknowledging the dilution of consumer protection while preserving the law’s structural integrity.
Connecticut and the Northeastern Expansion
Connecticut’s AI bill, covering frontier models, chatbots, employment, and provenance, passed the legislature in 2026 with gubernatorial approval expected. According to the Troutman Privacy group’s May 2026 tracking report, Connecticut’s measure addresses a broader range of AI use cases than most state laws, including requirements around chatbot interactions and AI-generated content disclosure. New York’s RAISE Act (S.B. S6953B) similarly imposes requirements on frontier AI developers operating in the state.
Maryland became the first state to regulate algorithmic pricing practices, with HB 895 signed into law — requiring that AI-driven pricing systems disclose when a price was set algorithmically using personal data.
What Enterprise Compliance Teams Must Do About the State Patchwork
1. Build a Jurisdiction-Mapped AI Inventory That Tracks Regulatory Status per State
The baseline compliance tool for the state patchwork is an AI system inventory that is not just internally focused but jurisdiction-mapped: for each AI system in deployment, the compliance record should note which state regulations apply based on where affected data subjects reside. A hiring AI used by a company with California, Colorado, and New York employees is simultaneously subject to California’s CCPA automated decision obligations, Colorado’s SB 189 notification requirements (from January 2027), and New York City’s AEDT bias audit rule (in effect since July 2023). A single compliance programme that does not track these jurisdictional overlays will inevitably miss an obligation.
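A minimal sketch of what a jurisdiction-mapped inventory record might look like, in Python. The system name, vendor name, and field names are hypothetical, and the effective dates are illustrative rather than authoritative:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class StateObligation:
    """One state-law obligation attaching to a deployed AI system."""
    state: str        # e.g. "CO"
    law: str          # e.g. "SB 189"
    requirement: str  # e.g. "AI-use notification + appeal right"
    effective: date   # when the obligation becomes enforceable

@dataclass
class AISystemRecord:
    """Jurisdiction-mapped inventory entry for one AI system."""
    name: str
    vendor: str | None          # None if developed in-house
    affected_states: list[str]  # where affected data subjects reside
    obligations: list[StateObligation] = field(default_factory=list)

    def obligations_in_force(self, today: date) -> list[StateObligation]:
        """Obligations already enforceable as of `today`."""
        return [o for o in self.obligations if o.effective <= today]

# Hypothetical record: a hiring AI with California, Colorado, and New York
# employees. Dates are illustrative.
hiring_ai = AISystemRecord(
    name="resume-screener",
    vendor="ExampleVendor",
    affected_states=["CA", "CO", "NY"],
    obligations=[
        StateObligation("CA", "CCPA", "automated decision opt-out", date(2020, 1, 1)),
        StateObligation("CO", "SB 189", "AI-use notification + appeal", date(2027, 1, 1)),
        StateObligation("NY", "NYC AEDT", "annual bias audit + notice", date(2023, 7, 5)),
    ],
)

print([o.law for o in hiring_ai.obligations_in_force(date(2026, 6, 1))])
# -> ['CCPA', 'NYC AEDT']  (SB 189 not yet in force)
```

The value of the `obligations_in_force` helper is that the same record answers both today’s question (what applies now) and the 2027 question (what applies once Colorado’s SB 189 takes effect).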
2. Conduct Vendor AI Audits with State-Law Compliance Clauses
The indirect compliance risk is the most underappreciated dimension of state AI law exposure. Enterprises do not need to develop AI to face liability — they face it when their vendors provide AI systems that are out of compliance and the enterprise cannot demonstrate due diligence. Every AI vendor contract entered or renewed in 2026 should include: (a) an obligation on the vendor to maintain compliance with applicable state AI laws, (b) audit rights allowing the enterprise to verify compliance, (c) a notification obligation if the vendor receives a regulatory inquiry or enforcement action, and (d) liability allocation language that clarifies which party bears consequences if the vendor’s AI system fails a state-law obligation. Without these clauses, the enterprise assumes the vendor’s compliance risk by default.
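One way to operationalize the four clauses is as a portfolio-level checklist that flags contracts to renegotiate at renewal. A minimal sketch, with hypothetical vendor and clause names:

```python
from dataclasses import dataclass

# The four clause categories above, tracked per vendor contract.
REQUIRED_CLAUSES = (
    "state_law_compliance",    # (a) vendor obligation to comply with state AI laws
    "audit_rights",            # (b) enterprise right to verify compliance
    "regulatory_notification", # (c) notice of inquiries or enforcement actions
    "liability_allocation",    # (d) who bears the consequences of a failure
)

@dataclass
class VendorContract:
    vendor: str
    clauses_present: set[str]

    def missing_clauses(self) -> list[str]:
        """Clause categories still absent from this contract."""
        return [c for c in REQUIRED_CLAUSES if c not in self.clauses_present]

# Hypothetical portfolio review ahead of 2026 renewals.
portfolio = [
    VendorContract("ats-platform", {"state_law_compliance", "audit_rights"}),
    VendorContract("credit-scoring-tool", {"state_law_compliance"}),
]
for contract in portfolio:
    if gaps := contract.missing_clauses():
        print(f"{contract.vendor}: renegotiate, missing {gaps}")
```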
3. Implement Employment AI Bias Audit Programmes Now
Employment AI governance is the highest-risk compliance category across US state laws because it combines multiple overlapping obligations: New York City’s AEDT rule requires annual independent bias audits and pre-use candidate notices; California’s CCPA automated decision-making obligations reach employment decisions; Colorado’s revised SB 189 covers employment AI among its consequential decisions. For enterprises using AI at any stage of the talent lifecycle (sourcing, screening, assessment, promotion, performance management), an annual independent bias audit programme is the most defensible posture. The audit documentation becomes evidence in regulatory inquiries and litigation discovery.
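The core arithmetic of an AEDT-style bias audit is the impact ratio: each demographic category’s selection rate divided by the rate of the most-selected category. A minimal sketch with hypothetical numbers; note that the four-fifths (0.8) benchmark comes from EEOC adverse-impact guidance, while NYC’s rule requires publishing the ratios but sets no pass/fail threshold:

```python
# Hypothetical screening outcomes: candidates advanced / candidates assessed,
# broken out by demographic category.
selected = {"group_a": 120, "group_b": 45, "group_c": 30}
assessed = {"group_a": 300, "group_b": 150, "group_c": 120}

rates = {g: selected[g] / assessed[g] for g in assessed}
best = max(rates.values())  # selection rate of the most-selected category

# Impact ratio: each category's selection rate relative to the highest rate.
impact_ratios = {g: r / best for g, r in rates.items()}

for group, ratio in impact_ratios.items():
    # 0.8 is the EEOC four-fifths benchmark, not an AEDT pass/fail line.
    flag = "  <- below four-fifths benchmark" if ratio < 0.8 else ""
    print(f"{group}: rate {rates[group]:.2f}, impact ratio {ratio:.2f}{flag}")
```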
4. Build Compliance-by-Design into AI Product Development
The most cost-effective state AI compliance strategy is to build documentation, disclosure, and oversight capabilities into AI systems during development rather than retrofitting them. The Gunderson Dettmer guidance recommends forming cross-functional teams (legal, privacy, product, and HR) that review AI system designs against state law requirements at the specification stage. Systems designed without notification mechanisms, explanation generation, or override capabilities are expensive to retrofit after deployment. Organizations that establish a pre-launch AI compliance checklist covering state notification, opt-out, bias testing, and documentation requirements avoid the emergency remediation cycles that have characterized the first wave of state AI enforcement actions.
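Such a checklist can be encoded as a gate in the release process. A minimal sketch covering the four areas named above, with hypothetical check names and stubbed evidence:

```python
# Each check maps to one of the four checklist areas named above.
PRE_LAUNCH_CHECKS = {
    "state_notification": "user-facing notice that AI is used in the decision",
    "opt_out": "mechanism for affected consumers to opt out",
    "bias_testing": "documented bias test results across protected groups",
    "documentation": "audit trail of model version, training data, and overrides",
}

def pre_launch_gate(evidence: dict[str, bool]) -> list[str]:
    """Return unmet checklist items; an empty list means the gate passes.
    In practice each boolean would be backed by reviewable evidence
    (design docs, test reports), not a self-attested flag."""
    return [f"{name}: {desc}" for name, desc in PRE_LAUNCH_CHECKS.items()
            if not evidence.get(name, False)]

# Hypothetical review of a system heading to launch.
gaps = pre_launch_gate({"state_notification": True, "bias_testing": True})
if gaps:
    print("Launch blocked; remediate before deployment:")
    for gap in gaps:
        print(" -", gap)
```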
The Federal Preemption Question
The open question that makes state AI patchwork compliance particularly challenging is federal preemption: whether a future federal AI law will displace state regulations and simplify the compliance matrix. The Trump Administration’s AI executive order hints at preemption intent, but executive orders cannot preempt enacted state statutes; that requires federal legislation or a successful legal challenge. According to Gunderson Dettmer’s 2026 guidance, enterprises should “continue state compliance” until courts clarify the EO’s reach and should not assume that federal policy signals will resolve the compliance obligation at the state level.
The practical implication is that enterprises building compliance programmes for 2026 should treat the state patchwork as the durable operating environment — not as a temporary problem awaiting a federal solution. The compliance infrastructure built for California, Colorado, and Connecticut today is also the foundation for compliance with the additional state laws that will be enacted in 2027 and beyond.
Frequently Asked Questions
How many US states have enacted AI laws in 2026, and which ones are most important for enterprises?
By mid-2026, the most impactful AI regulatory frameworks are in California (multiple laws including SB 53, AB 2013, and CCPA automated decision obligations), Colorado (SB 189, effective January 2027), Connecticut (comprehensive bill pending signature), New York (RAISE Act for frontier AI, plus NYC’s AEDT bias audit rule in effect since 2023), Maryland (algorithmic pricing disclosure), and Montana and Maine (chatbot regulations effective 2025). The number of bills in motion exceeds 600 across all 50 states, though most address narrower topics like chatbot safety, AI-generated content labeling, and healthcare AI.
What is the difference between California’s AI laws and Colorado’s revised AI Act?
California’s approach is additive: multiple laws each targeting specific AI use cases (training data disclosure, frontier model safety, consumer privacy rights around automated decisions). Colorado’s SB 189, which replaced the 2024 AI Act, takes a lighter touch — requiring consumer notification when AI is used in consequential decisions and providing appeal rights, but no longer mandating disclosure of how AI makes decisions. Colorado’s law takes effect January 2027; California’s laws are already in effect. For enterprises, California exposure is typically broader because of its large consumer market and aggressive enforcement posture.
Does the Trump Administration’s AI executive order eliminate state AI compliance obligations?
No. The executive order directs federal agencies toward AI-permissive policies but cannot legally preempt enacted state statutes. State AI laws in California, Colorado, Connecticut, New York, and others remain enforceable under state authority. Federal preemption of state AI laws would require federal legislation or successful court challenges — neither of which has been established as of mid-2026. Enterprises should maintain state compliance programmes regardless of federal policy direction.