⚡ Key Takeaways

Governor Hochul signed amendments to the New York RAISE Act on March 27, 2026. Effective January 1, 2027, frontier AI developers must publish a Frontier AI Framework and Transparency Report, with enforcement by a new NYDFS oversight office and civil penalties of $1M-$3M per violation.

Bottom Line: AI buyers and procurement teams should update vendor due diligence to require RAISE Act disclosures from US-based frontier model providers, using the regulator-backed transparency reports as ground truth for capabilities and restrictions.


🧭 Decision Radar

Relevance for Algeria
Medium

Algerian enterprises and startups buying frontier model APIs from the major US labs will benefit from the regulator-backed transparency reports that RAISE mandates, even without any direct local obligation.
Infrastructure Ready?
Yes

No infrastructure changes required; Algerian buyers already consume frontier models over the internet.
Skills Available?
Partial

Procurement and legal teams in Algeria have limited experience parsing US state AI disclosures; this will need capability-building.
Action Timeline
12-24 months

Published frameworks and transparency reports will start landing in Q4 2026 and become standard procurement inputs through 2027.
Key Stakeholders
CIOs, procurement leads, legal counsel, AI product teams
Decision Type
Educational

Algerian teams are end-users of the regime rather than subjects; the value is in understanding what to demand from AI vendors.

Quick Take: Algerian CIOs and AI product teams should update vendor due-diligence checklists to request RAISE Act disclosures and frontier AI frameworks from their US-based model providers. This gives Algerian buyers free access to regulator-grade documentation on model capabilities, limits, and catastrophic-risk posture — information that previously required expensive consulting or NDAs to obtain.

What the RAISE Act Does

The Responsible AI Safety and Education Act (RAISE Act) is New York’s first statutory regime aimed specifically at frontier AI developers — the handful of companies (OpenAI, Anthropic, Google DeepMind, Meta, xAI, and a few others) training models at the highest compute thresholds.

The original bill passed both chambers in June 2025. After a long negotiation period, amendments were introduced on January 6, 2026, passed the second chamber on March 11, 2026, and were signed into law by Governor Hochul on March 27, 2026, according to Norton Rose Fulbright’s client alert.

The amended law applies to frontier models “developed, deployed, or operating” in New York and takes effect January 1, 2027.

The Five Requirements Frontier Developers Now Owe

Based on analyses from Davis Wright Tremaine, Alston & Bird, and Wiley, the amended RAISE Act imposes five core duties:

  • Publish a Frontier AI Framework — replaces the original “safety and security protocol” requirement. Must describe the developer’s approach to identifying and mitigating catastrophic risk.
  • Publish a Transparency Report — disclosing the model’s release date, supported languages, output modalities, intended uses, and restrictions on use. This mirrors California’s Transparency in Frontier Artificial Intelligence Act (TFAIA).
  • File catastrophic risk assessments with the new oversight office.
  • Report incidents — material failures or misuse that could cause catastrophic harm.
  • Retain records for examination by the NYDFS office.

The New Enforcement Home

The amendments shift regulatory oversight from the NY Division of Homeland Security and Emergency Services to a new office inside the NY Department of Financial Services (DFS). This is a meaningful move: NYDFS has a reputation for aggressive enforcement, a cybersecurity regulation with real teeth, and institutional experience auditing the entities it regulates. For frontier AI labs, that means oversight now sits with a regulator that knows how to examine the companies it supervises.

Civil penalties start at $1 million for an initial violation and can reach $3 million for subsequent violations — far higher than most state tech statutes but well short of the EU AI Act’s 7% of global revenue maximum.


Why the California Alignment Matters

Cooley’s March 31, 2026 analysis is pointed: the amended New York law has been “Californified” — it now tracks California’s TFAIA closely on transparency-report structure, while preserving NY-specific enforcement mechanics. This matters for two reasons:

  • Dual-state compliance is now cheaper. Frontier labs that produce one TFAIA-compliant framework document can largely reuse it for NY RAISE.
  • It anticipates federal preemption. If Congress passes an AI preemption statute in the wake of the White House March 2026 framework, the state regimes with the greatest overlap are the ones most likely to survive as safe harbors rather than be wholly displaced.

Scope: Who Actually Gets Pulled In

The RAISE Act targets “large frontier developers” — a term of art that, in practice, captures models trained above a compute threshold the bill ties to leading models circa 2024-2025 (the 10^26 FLOP range commonly cited in state AI bills). That captures:

  • OpenAI, Anthropic, Google DeepMind, Meta AI, xAI, Mistral, Cohere
  • Probably the largest open-weight releases (Llama, DeepSeek V4-class)
  • Not: startups fine-tuning frontier models, companies building downstream products

For enterprise teams using frontier models, RAISE is not a direct compliance obligation — but the published frameworks and transparency reports become free, high-quality procurement inputs.

Global Context: The Frontier Regulation Race

RAISE joins a small but growing global cohort of frontier-specific AI regimes:

  • EU AI Act — applies to “general-purpose AI with systemic risk” above 10^25 FLOPs, with its own code of practice.
  • California TFAIA — published by Governor Newsom’s administration as the US frontier-transparency template.
  • UK AI Safety Institute voluntary testing agreements — non-statutory but influential.
  • Kenya AI Bill 2026 — risk-based, broader in scope, targets all developers including smaller ones.

The RAISE Act sits in the middle: narrower than the EU AI Act, more enforceable than the UK’s voluntary regime, more penalty-heavy than most state tech laws.

What Enterprise AI Buyers Should Track

Three practical takeaways for CIOs and procurement leads contracting with frontier labs:

  • Request the labs’ RAISE Act disclosures as part of vendor due diligence starting Q4 2026. The published frameworks will reveal how each lab reasons about catastrophic risk.
  • Treat the transparency reports as ground truth for model capabilities and restrictions — they are filed under enforceable accuracy standards with NYDFS.
  • Review incident-notification chains in frontier lab contracts. RAISE-mandated reporting timelines may flow through to customer-notification obligations.

The RAISE Act’s biggest impact may not be on the frontier labs themselves — most already run elaborate safety teams — but on the ecosystem of downstream buyers who finally get a standardized, regulator-backed view of what the models can and cannot do.



Frequently Asked Questions

Who is a “frontier developer” under the RAISE Act?

The RAISE Act defines “large frontier developers” by compute threshold and revenue scale, capturing the handful of labs training the most capable AI models — OpenAI, Anthropic, Google DeepMind, Meta AI, xAI, Mistral, Cohere, and similar. Startups fine-tuning existing frontier models, or companies building application-layer products on top of them, are not in scope.

What happens if a frontier developer doesn’t comply?

The NY Department of Financial Services is empowered to assess civil penalties starting at $1 million per violation and rising to $3 million for subsequent violations. Enforcement lives with NYDFS’s new oversight office. Unlike most state tech statutes, RAISE has no private right of action — enforcement is centralized with the regulator.

Is the RAISE Act the same as California’s TFAIA?

They are closely aligned after the March 2026 amendments, but not identical. Both require published transparency reports and frontier AI frameworks; TFAIA is administered by California’s Department of Technology while RAISE is administered by NYDFS. Frontier labs can largely reuse one framework document across both regimes, which was an explicit design goal of the March amendments.

Sources & Further Reading