What TRAIGA Is and Who It Covers
Governor Greg Abbott signed HB 149 into law on June 22, 2025. It became operative on January 1, 2026. The final version is noticeably lighter than the sprawling original bill that began the 89th Legislature — heavy trimming in March 2025 replaced most prescriptive compliance mandates with a framework of prohibited uses, governance structures, and a regulatory sandbox.
The scope is nonetheless broad. TRAIGA applies to:
- Any person or entity conducting business in Texas.
- Any person or entity offering products or services to Texas residents.
- Any entity developing or deploying AI systems within the state.
- Government entities in Texas (with additional restrictions).
This includes out-of-state and international organizations whose AI systems are accessible to Texans — a Brussels-Effect-lite extraterritorial reach. A European SaaS vendor or a California AI startup with Texas customers is in scope.
The Prohibited Uses
TRAIGA is built around a list of “developing or deploying with intent to” prohibitions. In plain language, you cannot build or deploy an AI system in Texas intended to:
- Manipulate human behavior in ways that incite or encourage self-harm, harm to others, or criminal activity.
- Infringe or restrict constitutional rights guaranteed under U.S. or Texas law.
- Unlawfully discriminate against a protected class (race, color, national origin, sex, age, religion, disability, and related categories).
- Produce or distribute certain sexually explicit content, including CSAM and non-consensual intimate imagery.
Additional provisions restrict government entities from using AI for:
- Social scoring systems that rate individuals on behavior, characteristics, or other attributes.
- Real-time biometric identification of specific individuals without consent.
- Biometric categorization inferring protected characteristics.
Private employers and developers are not uniformly banned from these uses, but the rules for government agencies in Texas are notably stricter than for private actors — a deliberate Texas-flavored choice distinguishing state action from private enterprise.
Separately, TRAIGA requires clear, conspicuous notice that an AI system is in use when interacting with consumers, without “dark patterns” designed to obscure the disclosure.
Enforcement: Attorney General Only, With a Cure Period
Enforcement under TRAIGA is centralized and predictable — an explicit design goal.
No private right of action. Only the Texas Attorney General can sue for TRAIGA violations. Private plaintiffs cannot bring class actions under the statute.
Mandatory notice and 60-day cure period. Before filing suit, the AG must send a notice of violation. The party then has 60 days to cure the alleged violation, explain the cure, and identify policy changes made to prevent recurrence. This is a genuine safe harbor — a responsive company can often resolve alleged violations without any penalty exposure.
Civil penalties. Where cure fails or the violation is uncurable:
- Curable violations: $10,000 – $12,000 per violation.
- Uncurable violations: $80,000 – $200,000 per violation.
- Continuing violations: $2,000 – $40,000 per day.
- Licensed professionals: state licensing boards may suspend or revoke licenses and impose additional fines up to $100,000 on the AG’s recommendation.
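As a rough illustration, the penalty tiers above reduce to simple arithmetic. The ranges below are the statutory figures; the violation mix in the example is hypothetical, and actual amounts are set by the court:

```python
# Hypothetical penalty-exposure sketch using TRAIGA's statutory tiers.
# The (low, high) ranges are per violation (or per day, for continuing ones).

CURABLE = (10_000, 12_000)       # per curable violation where cure fails
UNCURABLE = (80_000, 200_000)    # per uncurable violation
CONTINUING = (2_000, 40_000)     # per day a violation continues

def exposure(curable=0, uncurable=0, days_continuing=0):
    """Return (low, high) total exposure for a hypothetical violation mix."""
    low = (curable * CURABLE[0]
           + uncurable * UNCURABLE[0]
           + days_continuing * CONTINUING[0])
    high = (curable * CURABLE[1]
            + uncurable * UNCURABLE[1]
            + days_continuing * CONTINUING[1])
    return low, high

# e.g. two uncured curable violations plus 30 days of a continuing one:
low, high = exposure(curable=2, days_continuing=30)
print(low, high)  # 80000 1224000
```

The spread matters: a single uncurable violation alone carries more minimum exposure than dozens of cured ones, which is why the 60-day cure window is the centerpiece of any response plan.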
Consumer complaint portal. The AG is required to build a website-based mechanism for Texans to report suspected violations, which will likely be the primary funnel for enforcement referrals.
Safe harbors / affirmative defenses. TRAIGA explicitly credits parties that:
- Receive feedback from a developer, deployer, or other person and remediate.
- Conduct adversarial testing or red-teaming that surfaces issues.
- Follow state agency guidelines.
- Discover and remediate through internal review processes.
…provided the party is otherwise in compliance with a nationally recognized AI risk management framework, most notably the NIST AI Risk Management Framework (AI RMF 1.0). This is arguably the single most consequential detail in the statute: it effectively codifies NIST AI RMF as the de facto compliance backbone in Texas.
The Regulatory Sandbox
TRAIGA creates one of the first state AI regulatory sandboxes. Participants can test AI systems under modified regulatory conditions, with supervision from a designated state office, for up to 36 months. The sandbox is meant to reduce compliance uncertainty for startups and innovative use cases, and it is likely to draw interest from fintech, healthcare AI, and Texas-based generative AI startups.
How TRAIGA Compares to Colorado and California
Three comprehensive U.S. state AI regimes are now in flight, and each has a distinct philosophy.
Colorado AI Act (originally effective February 1, 2026, with delay discussions underway):
- Focus: algorithmic discrimination in high-risk use cases (employment, credit, education, housing, insurance, healthcare, essential government services, legal services).
- Obligations: developers and deployers must use “reasonable care” to avoid algorithmic discrimination, conduct annual impact assessments, provide consumer notice and opt-outs.
- Enforcement: Colorado Attorney General; 60-day cure provision.
- Most similar to the EU AI Act in approach.
California AI Transparency Act (SB 942, effective August 2, 2026 after delay):
- Focus: generative AI disclosure — latent metadata in generated content and user-accessible disclosure tools.
- Applies to covered providers with more than 1 million monthly users.
- Enforcement: California AG; penalty of $5,000 per violation.
- Narrow scope, consumer-protection flavor.
Texas TRAIGA (effective January 1, 2026):
- Focus: prohibited uses (intent-based), government restrictions, and disclosure.
- Applies broadly to anyone with AI systems reaching Texas residents.
- Enforcement: Texas AG; $10,000–$200,000 civil penalties; 60-day cure; NIST AI RMF safe harbor.
- Business-friendlier framing than Colorado, with a regulatory sandbox.
The net effect for any AI developer or deployer operating nationally is that compliance programs must now be multi-state by default. Building to NIST AI RMF is increasingly the common denominator; Texas rewards it explicitly as a safe harbor, Colorado treats it as a strong pathway to “reasonable care,” and California’s narrower transparency rules dovetail rather than conflict.
What Companies Should Be Doing Now
For any organization with Texas customers — which is to say, virtually every significant U.S. B2C and B2B company — the practical to-do list for 2026 is concrete:
- Inventory AI systems that touch Texas users. Know what you have, what it does, and who owns it.
- Map each system against TRAIGA’s prohibited uses. Intent matters legally, but documented controls that demonstrate the absence of prohibited intent matter in practice.
- Update user-facing disclosures. Clear, conspicuous notice when AI is in use — no dark patterns. Front-door product surfaces, chatbots, recommendation systems, automated decision tooling.
- Adopt NIST AI RMF as the governance backbone. Document governance, mapping, measurement, and management activities. This unlocks the safe harbor.
- Review hiring, lending, housing, healthcare, and benefits algorithms for discrimination exposure under the unlawful-discrimination prohibition.
- Build a cure-response playbook. You have 60 days. Know in advance who owns the response, how you’d document remediation, and what policy changes you’d file.
- Monitor the AG’s consumer reporting portal once launched. Complaints there will drive enforcement priorities.
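The first two checklist items above lend themselves to a simple machine-readable inventory. A minimal sketch follows; the system name, field names, and flag semantics are illustrative choices, not statutory terms:

```python
from dataclasses import dataclass, field

# TRAIGA's intent-based prohibited uses, paraphrased as inventory flags.
PROHIBITED_USES = (
    "behavioral_manipulation",      # inciting self-harm, harm, or crime
    "constitutional_infringement",
    "unlawful_discrimination",
    "prohibited_sexual_content",
)

@dataclass
class AISystem:
    name: str
    owner: str                   # accountable team or individual
    reaches_texas: bool          # offered or accessible to Texas residents
    ai_disclosure_present: bool  # clear, conspicuous notice to consumers
    # Map of prohibited use -> link to the documented review of that risk.
    prohibited_use_reviews: dict = field(default_factory=dict)

    def gaps(self):
        """List TRAIGA review items still missing for this system."""
        out = []
        if self.reaches_texas and not self.ai_disclosure_present:
            out.append("consumer AI disclosure")
        for use in PROHIBITED_USES:
            if self.reaches_texas and use not in self.prohibited_use_reviews:
                out.append(f"review: {use}")
        return out

# Hypothetical entry: a customer-support chatbot with no disclosure yet.
chatbot = AISystem("support-chatbot", owner="CX Eng",
                   reaches_texas=True, ai_disclosure_present=False)
print(chatbot.gaps())
```

Running `gaps()` across the full inventory gives a defensible, dated record of which systems have been reviewed against each prohibited use, which is exactly the kind of documentation that supports both the cure response and the NIST-anchored safe harbor.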
The Bigger Picture
TRAIGA is not the strictest U.S. state AI law; that title still belongs to Colorado's comprehensive framework. But TRAIGA is the one most likely to shape the emerging U.S. compliance template: a prohibited-uses architecture rather than comprehensive impact assessments, NIST-anchored safe harbors, centralized AG enforcement, meaningful cure periods, and a sandbox for experimentation.
If federal AI legislation remains stalled through 2026, TRAIGA’s model — especially the NIST-RMF safe harbor — is likely to be copied into other state statutes. For compliance teams, that’s good news: the work done to meet Texas’s requirements largely carries over to future state laws, and to the NIST-aligned pieces of the EU AI Act. Build once, comply many times.
Frequently Asked Questions
Does TRAIGA apply to companies outside Texas?
Yes. Any entity offering products or services to Texas residents — including out-of-state and international companies — is in scope if its AI system is accessible to Texans. The reach is functionally extraterritorial, similar to GDPR’s approach.
What does the NIST AI RMF safe harbor actually protect against?
Parties that document compliance with NIST AI RMF 1.0 can invoke the safe harbor for violations remediated through feedback, red-teaming, state agency guidelines, or internal review. It does not immunize prohibited uses, but it materially reduces penalty exposure for good-faith operators.
How is TRAIGA different from the Colorado AI Act?
Colorado focuses on algorithmic discrimination in high-risk use cases with mandatory annual impact assessments and consumer opt-outs. Texas focuses on intent-based prohibited uses, government restrictions, and disclosure, with a regulatory sandbox and an explicit NIST AI RMF safe harbor. Colorado is closer to the EU AI Act; Texas is more business-flexible.
Sources & Further Reading
- The Texas Responsible AI Governance Act: What your company needs to know before January 1 — Norton Rose Fulbright
- Texas Signs Responsible AI Governance Act Into Law — Latham & Watkins
- TRAIGA: Key Provisions of Texas’s New Artificial Intelligence Governance Act — Greenberg Traurig
- Texas Enacts Responsible AI Governance Act: What Companies Need to Know — Baker Botts
- Navigating TRAIGA: Texas’s New AI Compliance Framework — Ropes & Gray
- New State AI Laws are Effective on January 1, 2026 — King & Spalding
- From Colorado to Texas: How States Are Rewriting AI Laws — Miller Nash