What the RAISE Act Does
The Responsible AI Safety and Education Act (RAISE Act) is New York’s first statutory regime aimed specifically at frontier AI developers — the handful of companies (OpenAI, Anthropic, Google DeepMind, Meta, xAI, and a few others) training models at the highest compute thresholds.
The original bill passed both chambers in June 2025. After a long negotiation period, amendments were introduced on January 6, 2026, passed the second chamber on March 11, 2026, and were signed into law by Governor Hochul on March 27, 2026, according to Norton Rose Fulbright’s client alert.
The amended law applies to frontier models “developed, deployed, or operating” in New York and takes effect January 1, 2027.
The Five Duties Frontier Developers Now Owe
Based on analyses from Davis Wright Tremaine, Alston & Bird, and Wiley, the amended RAISE Act imposes five core duties:
- Publish a Frontier AI Framework — replaces the original “safety and security protocol” requirement. Must describe the developer’s approach to identifying and mitigating catastrophic risk.
- Publish a Transparency Report — disclosing the model’s release date, supported languages, output modalities, intended uses, and restrictions on use. This mirrors California’s Transparency in Frontier Artificial Intelligence Act (TFAIA).
- File catastrophic risk assessments with the new oversight office.
- Report incidents — material failures or misuse that could cause catastrophic harm.
- Retain records for examination by the NYDFS office.
The New Enforcement Home
The amendments shift regulatory oversight from the NY Division of Homeland Security and Emergency Services to a new office inside the NY Department of Financial Services (DFS). This is a meaningful move: NYDFS has a reputation for aggressive enforcement, a cybersecurity regulation with real teeth, and institutional experience auditing the entities it regulates. Frontier AI labs now answer to a regulator accustomed to examining the firms it oversees.
Civil penalties start at $1 million for an initial violation and can reach $3 million for subsequent violations — far higher than most state tech statutes but well short of the EU AI Act’s 7% of global revenue maximum.
Why the California Alignment Matters
Cooley’s March 31, 2026 analysis is pointed: the amended New York law has been “Californified” — it now tracks California’s TFAIA closely on transparency-report structure, while preserving NY-specific enforcement mechanics. This matters for two reasons:
- Dual-state compliance is now cheaper. Frontier labs that produce one TFAIA-compliant framework document can largely reuse it for NY RAISE.
- It anticipates federal preemption. If Congress passes an AI preemption statute in the wake of the White House March 2026 framework, the state regimes with the greatest overlap are the ones most likely to survive as safe harbors rather than be wholly displaced.
Scope: Who Actually Gets Pulled In
The RAISE Act targets “large frontier developers” — a term of art that, in practice, reaches developers training models above a compute threshold the bill pegs to leading models circa 2024–2025 (the 10^26 FLOP range commonly cited in state AI bills). That covers:
- OpenAI, Anthropic, Google DeepMind, Meta AI, xAI, Mistral, Cohere
- Probably the largest open-weight releases (Llama, DeepSeek V4-class)
- Not: startups fine-tuning frontier models, companies building downstream products
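The compute threshold can be made concrete with the widely used back-of-the-envelope estimate of training compute as roughly 6 × parameter count × training tokens. The sketch below uses that heuristic with hypothetical model sizes; it is an illustration, not the statute's definition of scope.

```python
# Rough training-compute estimate via the common 6*N*D heuristic:
# FLOPs ~= 6 x parameters x training tokens. Illustrative only --
# the RAISE Act's actual threshold test is defined in the statute.

RAISE_THRESHOLD_FLOPS = 1e26  # the 10^26 range commonly cited in state AI bills


def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * params * tokens


def likely_in_scope(params: float, tokens: float) -> bool:
    """Whether an estimated training run crosses the cited threshold."""
    return training_flops(params, tokens) >= RAISE_THRESHOLD_FLOPS


# A hypothetical 70B-parameter model trained on 15T tokens:
# ~6.3e24 FLOPs -- well below the threshold.
print(f"{training_flops(70e9, 15e12):.2e}", likely_in_scope(70e9, 15e12))

# A hypothetical 2T-parameter model trained on 30T tokens:
# ~3.6e26 FLOPs -- above the threshold.
print(f"{training_flops(2e12, 30e12):.2e}", likely_in_scope(2e12, 30e12))
```

By this rough math, only training runs at the very top of the current scale cross 10^26 FLOPs, which is why the in-scope list above is so short and why fine-tuners and downstream builders fall outside it.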
For enterprise teams using frontier models, RAISE is not a direct compliance obligation — but the published frameworks and transparency reports become free, high-quality procurement inputs.
Global Context: The Frontier Regulation Race
RAISE joins a small but growing global cohort of frontier-specific AI regimes:
- EU AI Act — applies to “general-purpose AI with systemic risk” above 10^25 FLOPs, with its own code of practice.
- California TFAIA — signed into law by Governor Newsom in 2025, now the US frontier-transparency template.
- UK AI Safety Institute voluntary testing agreements — non-statutory but influential.
- Kenya AI Bill 2026 — risk-based, broader in scope, targets all developers including smaller ones.
The RAISE Act sits in the middle: narrower than the EU AI Act, more enforceable than the UK’s voluntary regime, more penalty-heavy than most state tech laws.
What Enterprise AI Buyers Should Track
Three practical takeaways for CIOs and procurement leads contracting with frontier labs:
- Request the labs’ RAISE Act disclosures as part of vendor due diligence starting Q4 2026. The published frameworks will reveal how each lab reasons about catastrophic risk.
- Treat the transparency reports as ground truth for model capabilities and restrictions — they are filed under enforceable accuracy standards with NYDFS.
- Review incident-notification chains in frontier lab contracts. RAISE-mandated reporting timelines may flow through to customer-notification obligations.
The RAISE Act’s biggest impact may not be on the frontier labs themselves — most already run elaborate safety teams — but on the ecosystem of downstream buyers who finally get a standardized, regulator-backed view of what the models can and cannot do.
Frequently Asked Questions
Who is a “frontier developer” under the RAISE Act?
The RAISE Act defines “large frontier developers” by compute threshold and revenue scale, capturing the handful of labs training the most capable AI models — OpenAI, Anthropic, Google DeepMind, Meta AI, xAI, Mistral, Cohere, and similar. Startups fine-tuning existing frontier models, or companies building application-layer products on top of them, are not in scope.
What happens if a frontier developer doesn’t comply?
The NY Department of Financial Services, through its new oversight office, can assess civil penalties starting at $1 million for an initial violation and rising to $3 million for subsequent violations. Notably, RAISE creates no private right of action — enforcement is centralized with the regulator.
Is the RAISE Act the same as California’s TFAIA?
They are closely aligned after the March 2026 amendments, but not identical. Both require published transparency reports and frontier AI frameworks; TFAIA is administered by California’s Department of Technology while RAISE is administered by NYDFS. Frontier labs can largely reuse one framework document across both regimes, which was an explicit design goal of the March amendments.
Sources & Further Reading
- Governor Hochul Signs Nation-Leading Legislation to Require AI Frameworks for Frontier Models — New York State Governor
- NY Overhauls Transparency and Governance Requirements for Frontier AI Developers — Davis Wright Tremaine
- New York Finalizes RAISE Act for Frontier AI Models; Law Takes Effect January 1, 2027 — Wiley
- The New York Responsible AI Safety and Education (RAISE) Act: What You Need to Know — Norton Rose Fulbright
- New York’s Frontier AI Law Gets a California Makeover — Cooley
- With the RAISE Act, New York Aligns With California on Frontier AI Laws — Carnegie Endowment