The Template for American AI Regulation

Colorado Senate Bill 24-205 — the Colorado Artificial Intelligence Act — is the most comprehensive state-level AI regulation in the United States. Signed by Governor Jared Polis on May 17, 2024, the law applies broad requirements to all “high-risk AI systems” across every sector, establishing impact assessments, consumer-facing disclosures, and opt-out notifications for Colorado residents. It has become the de facto template against which every subsequent state AI proposal is measured.

The law’s journey from passage to implementation has been anything but smooth. Governor Polis himself expressed reservations in his signing statement, encouraging the bill’s sponsors to “significantly improve” their approach before SB 205 takes effect, and specifically called on the federal government to enact legislation that would preempt the bill he had just signed.

The law was originally set to take effect on February 1, 2026. In August 2025, the Colorado legislature held a special session at which several amendment bills were introduced to revise the AI Act. Proposals ranged from narrowing the definition of “consequential decision” to employment and public safety contexts, to exempting businesses with fewer than 250 employees or less than $5 million in annual revenue. None achieved the necessary consensus. Ultimately, only one AI-related bill was approved: SB 25B-004, signed by Governor Polis on August 28, 2025, which simply postponed the effective date to June 30, 2026, giving lawmakers another chance to amend the law during the 2026 regular session.

Meanwhile, the Trump administration’s December 11, 2025, executive order titled “Ensuring a National Policy Framework for Artificial Intelligence” has placed the Colorado AI Act squarely in the crosshairs. The order directs the Attorney General to establish a DOJ AI Litigation Task Force within 30 days to challenge state AI laws deemed inconsistent with the administration’s policy of maintaining a “minimally burdensome national policy framework” for AI. The Commerce Department has 90 days to publish an evaluation of existing state AI laws and identify “onerous” regulations. The executive order specifically criticizes Colorado’s requirements, claiming they could force AI systems to produce false results to avoid differential treatment of protected groups.

The result is a law that exists in a state of suspended uncertainty. The 2026 Colorado legislative session — which runs through May 13, 2026 — represents the final window for amendments before the June 30 deadline. Yet as of early 2026, there has been very little legislative activity or discussion about changes to the Act from the legislature, the Governor’s office, or the Attorney General.

What the Colorado AI Act Requires

The Colorado AI Act establishes two categories of regulated entities — “developers” (who create high-risk AI systems) and “deployers” (who use them) — and imposes distinct obligations on each.

For developers, the primary obligations are transparency and documentation. Before making a high-risk AI system available, developers must provide deployers with comprehensive documentation including a plain-language description of the system’s intended uses and known limitations, the types of data used to train the system, known or reasonably foreseeable risks of algorithmic discrimination, a description of the data governance measures used during development, and the results of any evaluations conducted to assess the system’s performance across demographic groups. This documentation may include industry-standard artifacts such as model cards and dataset cards.

Developers must also make a public disclosure on their website of any high-risk AI systems they have developed and how they manage known or reasonably foreseeable risks of algorithmic discrimination. Additionally, developers must report any discovered risks of algorithmic discrimination to the Colorado Attorney General and all known deployers within 90 days of discovery.
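The developer documentation package described above can be sketched as a simple structured record. This is a hypothetical illustration only: the field names below are assumptions, not statutory language, and a real disclosure would follow counsel-reviewed artifacts such as model cards and dataset cards.

```python
from dataclasses import dataclass

# Hypothetical sketch of the developer-to-deployer documentation package
# under the Colorado AI Act. Field names are illustrative, not statutory.
@dataclass
class DeveloperDisclosure:
    intended_uses: str             # plain-language description of intended uses
    known_limitations: str
    training_data_types: list      # categories of data used to train the system
    discrimination_risks: list     # known or reasonably foreseeable risks
    data_governance_measures: str
    demographic_evaluations: list  # results of performance evaluations by group
    public_website_summary: str = ""  # public disclosure of high-risk systems

    def is_complete(self) -> bool:
        """Crude check that the core required fields are populated."""
        required = [self.intended_uses, self.known_limitations,
                    self.training_data_types, self.discrimination_risks,
                    self.data_governance_measures]
        return all(bool(item) for item in required)
```

A compliance team might use a record like this as an internal checklist before a system is released to deployers, even though the statute does not prescribe any particular format.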

For deployers, the obligations are more extensive and more operationally demanding. Before deploying a high-risk AI system, deployers must complete a comprehensive impact assessment that evaluates the system’s purpose and intended benefits, the categories of individuals affected and the nature of the decisions the system influences, the system’s data inputs and the governance measures applied to that data, the metrics used to evaluate the system’s performance, the risk of algorithmic discrimination and the measures taken to mitigate it, the transparency measures provided to affected individuals, and the human oversight mechanisms in place.

The impact assessment must be completed before deployment, updated annually, and revisited within 90 days of any intentional and substantial modification to the system. All documentation must be retained for at least three years and made available to the Colorado Attorney General within 90 days of a request.
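The overlapping review clocks above (annual updates, a 90-day post-modification reassessment, and three-year retention) can be sketched as a small deadline calculator. The interval lengths mirror the statute; the function names and logic are assumptions for illustration, not a compliance tool.

```python
from datetime import date, timedelta
from typing import Optional

# Illustrative sketch of the deployer review clocks: annual impact-assessment
# updates, a 90-day reassessment window after a substantial modification, and
# a three-year documentation retention period.

def next_assessment_due(last_assessment: date,
                        modified_on: Optional[date] = None) -> date:
    """Earlier of the annual update and the post-modification reassessment."""
    annual_due = last_assessment + timedelta(days=365)
    if modified_on is not None:
        # A substantial modification starts a 90-day reassessment clock,
        # which may come due before the annual update.
        return min(annual_due, modified_on + timedelta(days=90))
    return annual_due

def retention_ends(record_created: date) -> date:
    """Documentation must be retained for at least three years."""
    try:
        return record_created.replace(year=record_created.year + 3)
    except ValueError:  # record created on Feb 29, non-leap target year
        return record_created.replace(year=record_created.year + 3, day=28)

# For a system assessed on the (current) effective date:
deployed = date(2026, 6, 30)
next_assessment_due(deployed)                    # annual update: 2027-06-30
next_assessment_due(deployed, date(2026, 9, 1))  # modification pulls it forward
```

The point of the sketch is that the binding deadline is whichever clock expires first, which is easy to lose track of once a deployer manages more than a handful of systems.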

Deployers must also implement a risk management policy aligned with recognized standards such as the NIST AI Risk Management Framework or ISO/IEC 42001. Consumers must be notified when interacting with an AI system and given the reasons behind adverse consequential decisions, with the right to correct inaccuracies or appeal.

The “High-Risk” Definition Debate

The Colorado AI Act’s scope turns on its definition of “high-risk AI system.” The statute defines this as any AI system that, when deployed, makes or is a substantial factor in making a “consequential decision.” A “consequential decision” is defined as a decision that has a material legal or similarly significant effect on a consumer in areas including education, employment, financial services, government services, healthcare, housing, insurance, and legal services.
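The statutory test can be restated as a toy decision rule. The enumerated domains follow the article's list; the function itself is an illustration of the definition's structure, not a legal scoping tool, since terms like “substantial factor” and “material legal or similarly significant effect” require legal analysis.

```python
# Domains in which a decision counts as "consequential" under the Act.
CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial services", "government services",
    "healthcare", "housing", "insurance", "legal services",
}

def is_high_risk(substantial_factor: bool, domain: str) -> bool:
    """Toy restatement: a system is high-risk if it makes, or is a
    substantial factor in making, a consequential decision in an
    enumerated domain. Illustration only, not legal advice."""
    return substantial_factor and domain in CONSEQUENTIAL_DOMAINS
```

Stated this way, the breadth is visible at a glance: the test has only two prongs, and neither depends on the specific use case within a domain.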

This definition is intentionally broad. Unlike the EU AI Act, which provides a specific list of high-risk use cases, the Colorado Act uses a functional definition that captures any AI system making significant decisions in any of the enumerated domains. The breadth of this definition is both the law’s greatest strength and its most controversial feature.

Supporters argue that a broad definition is necessary because the risks of AI-driven decisions are not limited to specific use cases. An AI system that recommends denial of a loan application and an AI system that recommends a specific medical treatment both make consequential decisions about individuals, and both should be subject to impact assessment and transparency requirements regardless of whether they appear on a predetermined list.

Critics argue that the broad definition creates a compliance mandate so expansive that it is practically unworkable. Small businesses that use off-the-shelf AI tools for scheduling, customer service, or inventory management may find themselves subject to impact assessment requirements that were designed for high-stakes decision-making systems. The Colorado Attorney General holds exclusive enforcement authority, but the sheer scope of covered systems raises questions about effective oversight.

The Special Session and Failed Reforms

The August 2025 special session called by Governor Polis represented the most serious attempt to reshape the AI Act before its implementation. Several bills were introduced, each reflecting different stakeholder priorities.

Senate Bill 25B-004, introduced by Senate Majority Leader Robert Rodriguez as the “AI Sunshine Act,” originally proposed substantially narrowing the law: its initial draft would have limited the definition of “consequential decision” to employment and public safety contexts and exempted businesses with fewer than 250 employees or less than $5 million in annual revenue. This approach addressed legitimate small business compliance concerns but was stripped down during the legislative process.

House Bill 25B-1008 focused on transparency and consumer protections, requiring AI systems to disclose at the beginning of any interaction with a consumer that they are not human.

Senate Bill 25B-008 sought to clarify that Colorado’s existing anti-discrimination laws apply fully to conduct executed or facilitated by AI, algorithmic systems, or other digital technologies.

Despite bipartisan support for reform, none of the substantive amendment proposals gained sufficient traction. Lawmakers opted to delay implementation rather than enact partial reforms. The final version of SB 25B-004 that Governor Polis signed on August 28, 2025, simply substituted “June 30, 2026” for “February 1, 2026” throughout the relevant statutes without altering the substance of the AI Act.

The Federal Preemption Threat

The Trump administration’s December 11, 2025, executive order has added an existential dimension to the Colorado AI Act’s prospects. The order directs the creation of a DOJ AI Litigation Task Force to challenge state AI laws that are “unconstitutional, preempted, or otherwise unlawful.” It also directs the Commerce Department to inventory state AI regulations that conflict with the federal policy of maintaining a minimally burdensome AI regulatory environment.

The preemption challenge would likely proceed on dormant Commerce Clause grounds, arguing that the Colorado AI Act imposes requirements on AI systems used in interstate commerce that unduly burden the free flow of goods and services across state lines. Many AI systems regulated by the Colorado Act are developed in other states — particularly California — and deployed nationally, making their regulation by any single state a potential burden on interstate commerce.

However, legal scholars have noted that the dormant Commerce Clause argument faces significant hurdles. States have long regulated insurance, healthcare, housing, and employment — the very domains covered by the AI Act — without successful Commerce Clause challenge. Constitutional law experts have argued that dormant Commerce Clause challenges to state AI laws will rise or fall based on concrete evidence of infeasibility or excessive burden, not on generalized assertions about innovation or competitiveness.

The constitutional analysis is genuinely uncertain. The Supreme Court has not addressed the application of dormant Commerce Clause doctrine to state technology regulations. Notably, the executive order itself includes carve-outs that expressly prohibit federal preemption of state AI laws relating to child safety, AI compute and data center infrastructure, and state government procurement of AI.

Governor Polis has himself expressed support for federal preemption as a concept. In his original signing statement, he specifically called for a national solution that would supersede state law. Yet the state Attorney General retains enforcement authority, and the law remains on track for its June 30 effective date regardless of the federal challenge’s timeline.

Why Colorado Matters for Everyone

The Colorado AI Act’s significance extends far beyond Colorado’s borders. Its fate will likely influence the trajectory of AI regulation in the United States for years to come.

Multiple states are watching Colorado closely. Connecticut’s SB 2 — a comprehensive AI bill that passed its Senate in 2025 — stalled in the House after Governor Ned Lamont threatened a veto. Virginia’s HB 2094 passed both chambers in 2025 only to be vetoed by the governor. Washington state has advanced multiple AI bills through committee in its 2026 session, including HB 2157 on high-risk AI systems and HB 2144 on employment AI. According to BSA | The Software Alliance, 45 states considered nearly 700 AI-related bills in 2024 alone, with 113 enacted into law.

If the Colorado Act survives both the amendment process and the federal preemption challenge, it will establish the principle that states can comprehensively regulate AI systems operating within their borders. If it is significantly narrowed through amendment, it will signal that the comprehensive approach to AI regulation is politically unworkable at the state level. If it is preempted by federal action, it will set a precedent that state AI regulation is subordinate to federal policy — even without a comprehensive federal AI law in place.

For companies operating AI systems, the Colorado AI Act represents the most detailed operational requirements that any American AI regulation has imposed. The impact assessment framework, the transparency requirements, the algorithmic discrimination reporting obligations, and the risk management policy mandates represent significant compliance work. Companies that prepare for Colorado are, by extension, preparing for whatever AI regulatory framework eventually emerges nationally.

The June 30, 2026, deadline is approaching. The 2026 Colorado legislative session closes May 13. The federal preemption timeline is uncertain but likely extends well beyond June. For the foreseeable future, the Colorado AI Act stands as America’s most consequential AI regulation — and the test case for whether states can lead on AI accountability.

🧭 Decision Radar (Algeria Lens)

Relevance for Algeria: Medium — Algeria has no comparable AI-specific legislation, but the Colorado model offers a regulatory template that Algerian policymakers studying AI governance frameworks should understand.
Infrastructure Ready? No — Algeria lacks the institutional infrastructure (specialized AG enforcement, AI governance frameworks, compliance ecosystems) needed to implement Colorado-style regulation.
Skills Available? Partial — Algerian legal professionals are building AI literacy, but the specialized intersection of AI governance, impact assessment methodology, and algorithmic auditing remains underdeveloped.
Action Timeline: 12-24 months — Monitor how Colorado’s law performs after June 2026 implementation and whether the federal preemption challenge succeeds, as outcomes will shape global AI regulatory norms.
Key Stakeholders: Ministry of Digital Economy, Algerian telecom regulators, legal professionals specializing in technology law, AI startups and deployers operating in Algeria.
Decision Type: Educational — Understanding the Colorado AI Act helps Algerian stakeholders prepare for inevitable AI regulation discussions domestically.

Quick Take: The Colorado AI Act is a preview of the regulatory frameworks that will eventually reach Algeria, whether through direct legislation or through compliance requirements imposed by international partners and technology vendors. Algerian organizations deploying AI in healthcare, finance, and government services should begin voluntary impact assessments now, using the Colorado framework as a reference model, to build institutional readiness before regulation arrives.

Sources & Further Reading