What the March 2026 Framework Actually Says
The White House National Policy Framework for Artificial Intelligence, released on March 20, 2026, is a 47-page document that establishes the federal government’s approach to AI governance across three dimensions: promoting beneficial AI development, protecting against AI risks, and positioning the United States as the global leader in AI innovation.
The framework’s most consequential structural choice is the one it does not make: it does not propose a new federal AI agency, a new omnibus AI statute, or a centralized AI licensing regime. Instead, the framework explicitly endorses what regulatory lawyers at Holland & Knight’s AI practice describe as the “existing statutory authority” model — directing each sector agency to use the regulatory tools it already has to govern AI in its domain.
This is a deliberate departure from the EU’s approach (the AI Act is a single horizontal law covering all sectors) and from state-level AI legislation (Colorado’s SB 24-205, for example, creates cross-sector obligations for high-risk AI). The framework’s accompanying policy memo directed twelve federal agencies — including the SEC, FDA, FTC, OCC, CFPB, EEOC, and DOE — to publish AI guidance or rules within their existing regulatory perimeters within 180 days of the framework’s release, meaning by September 2026.
Three principles animate the framework’s approach. First, sector regulators understand their industries better than a generalist AI regulator would. Second, existing consumer protection, antidiscrimination, financial stability, and safety laws already prohibit AI outputs that cause the same harms they were designed to prevent (discriminatory lending, unsafe medical devices, deceptive trade practices). Third, a new federal AI law would take years to pass Congress and would likely be less agile than sector-specific guidance that can be updated as the technology evolves.
The framework is not toothless — it establishes baseline principles that all federal AI governance must follow: transparency (AI users must know when they are interacting with AI), accountability (organizations must be able to explain AI-driven decisions), fairness (AI systems must not systematically disadvantage protected classes), and safety (AI systems in high-stakes domains must meet safety standards). But it leaves the operationalization of these principles to sector agencies.
The Sector-by-Sector Landscape
The “existing agencies” model creates materially different compliance realities depending on which industry you operate in. Three sectors face particularly active regulatory development in the period between March 2026 and the September 2026 agency guidance deadline.
Financial services face the most developed AI governance framework. The OCC published AI model risk guidance in Q4 2025 extending the 2011 model risk management principles of SR 11-7 (adopted by the OCC as Bulletin 2011-12) to large language models and generative AI systems, requiring documentation of validation, explainability, and bias testing for AI influencing credit decisions, fraud detection, or customer communication. The CFPB issued guidance under the Equal Credit Opportunity Act requiring automated credit denial decisions to include meaningful explanations. The EEOC’s 2023 AI employment guidance is expected to become formal rulemaking by Q4 2026.
Healthcare and life sciences follow a parallel FDA track. The FDA’s 2024 AI/ML Action Plan has been accelerated under the March 2026 framework, with finalized guidance on Software as a Medical Device (SaMD) AI risk classification expected mid-2026 — defining when AI-based diagnostic support and patient monitoring tools require 510(k) clearance or PMA approval.
Technology platforms face FTC scrutiny under Section 5 (unfair or deceptive practices). The FTC’s AI surveillance task force, established Q1 2026, targets AI systems using behavioral data for differential pricing, advertising targeting vulnerable populations, or automated content moderation. The framework signals enforcement actions rather than rulemaking in the near term — meaning 2026 consent decrees will define behavioral norms before formal rules are written.
What Enterprises Must Do Under the Sector-Fragmented Model
1. Map Your AI Systems to Their Regulatory Homes
The immediate strategic action under the March 2026 framework is sector regulatory mapping. For each AI system in your portfolio, document which federal agency has primary jurisdiction — and which of that agency’s existing statutory tools could reach your system. A consumer credit scoring model maps to the CFPB (ECOA, FCRA), the OCC (SR 11-7, fair lending), and the EEOC (employment-linked credit use). An AI-based insurance underwriting tool maps to state insurance commissioners as well as CFPB if it involves credit-linked products. A healthcare AI system maps to the FDA and potentially CMS if Medicare/Medicaid billing is involved. Wilson Sonsini Goodrich & Rosati’s 2026 AI regulatory preview identifies this mapping exercise as the prerequisite to building any proportionate AI governance program under the framework — without it, organizations routinely miss applicable regulators.
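The mapping exercise described above can be captured in a lightweight structure. The sketch below is a hypothetical illustration, not a complete jurisdictional analysis: the system names, agency assignments, and statutory hooks are illustrative examples drawn from the mappings in this section, and a real program would be populated by counsel.

```python
from dataclasses import dataclass, field

@dataclass
class RegulatoryMapping:
    """One AI system's regulatory home: primary agency plus overlapping claims."""
    system: str
    primary_agency: str
    statutory_hooks: list[str] = field(default_factory=list)
    secondary_agencies: list[str] = field(default_factory=list)

# Illustrative portfolio entries mirroring the examples in the text.
PORTFOLIO = [
    RegulatoryMapping(
        system="consumer-credit-scoring-v3",
        primary_agency="CFPB",
        statutory_hooks=["ECOA", "FCRA"],
        secondary_agencies=["OCC", "EEOC"],
    ),
    RegulatoryMapping(
        system="insurance-underwriting-ai",
        primary_agency="State insurance commissioners",
        statutory_hooks=["State insurance codes"],
        secondary_agencies=["CFPB"],
    ),
]

def agencies_for(system_name: str) -> set[str]:
    """Return every agency with a potential claim on a given system."""
    for m in PORTFOLIO:
        if m.system == system_name:
            return {m.primary_agency, *m.secondary_agencies}
    raise KeyError(system_name)
```

Even a simple table like this surfaces the point the mapping exercise is meant to make: a single credit model answers to three regulators at once, and missing one of them is the failure mode the exercise exists to prevent.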
2. Treat Each Agency’s September 2026 Guidance as a Compliance Milestone
The 180-day guidance deadline (September 2026) set by the March 2026 framework means that between May and September 2026, every regulated sector will receive published agency AI guidance. These documents will define — for the first time in many sectors — the specific practices each regulator considers adequate AI risk management. Enterprise compliance teams should monitor each relevant agency’s regulatory agenda now, participate in public comment periods, and prepare to implement guidance requirements on a compressed timeline. The Consumer Finance Monitor’s analysis of the framework notes that several agencies are expected to publish in the August-September 2026 window, giving enterprises only weeks between publication and the implicit expectation of compliance.
3. Build a Centralized AI Risk Register With Sector-Specific Overlays
The fragmented regulatory landscape creates a governance challenge: a single AI system may be subject to overlapping obligations from multiple agencies with different documentation requirements. The operational solution is a centralized AI risk register that tracks each AI system’s regulatory obligations by sector, documentation requirements by agency, validation status, and change management history. This register becomes the master compliance artifact that each regulatory layer can reference. Verify Wise’s 2026 US AI governance report recommends structuring the register with a master system record at the top (model name, version, owner, use case, training data summary) and agency-specific appendices that map the system’s properties to each regulator’s standards. For a financial services AI system, the appendices would include OCC model risk fields, CFPB adverse action explanation documentation, and EEOC demographic impact testing results.
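The master-record-plus-appendices structure can be sketched directly. This is a minimal illustration of the register shape described above, assuming a simple key-value overlay per agency; the field names and example values are hypothetical, not a regulator-mandated schema.

```python
from dataclasses import dataclass, field

@dataclass
class MasterRecord:
    """Top-level system record shared across all regulatory overlays."""
    model_name: str
    version: str
    owner: str
    use_case: str
    training_data_summary: str

@dataclass
class SystemEntry:
    """One register entry: master record plus agency-specific appendices."""
    master: MasterRecord
    # Maps agency name -> that agency's documentation fields,
    # e.g. {"OCC": {"validation_status": "..."}}.
    appendices: dict[str, dict[str, str]] = field(default_factory=dict)

    def missing_appendices(self, required_agencies: list[str]) -> list[str]:
        """Flag agencies whose overlay has not yet been documented."""
        return [a for a in required_agencies if a not in self.appendices]

# Illustrative entry for a financial-services AI system.
entry = SystemEntry(
    master=MasterRecord(
        model_name="credit-scoring",
        version="3.2",
        owner="model-risk-team",
        use_case="consumer credit underwriting",
        training_data_summary="bureau tradelines, 2019-2025",
    ),
)
entry.appendices["OCC"] = {"validation_status": "independent review complete"}
entry.appendices["CFPB"] = {"adverse_action_docs": "ECOA reason-code template v4"}
```

The payoff of this shape is the gap check: calling `missing_appendices(["OCC", "CFPB", "EEOC"])` on the entry above flags the EEOC overlay as undocumented, which is exactly the kind of coverage gap a fragmented regime makes easy to miss.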
4. Engage Sector Agency Comment Periods Proactively
The “existing agencies” model means that AI governance standards in each sector will be shaped significantly by industry comment during the rulemaking and guidance process. Unlike a single omnibus AI law where lobbying is centralized, sector-specific guidance development distributes this influence across twelve agencies simultaneously. Enterprises with significant AI exposure in regulated sectors should identify the two or three agencies with the most material impact on their AI portfolio and invest in substantive comment submissions. Comment submissions that include real-world operational data — how long a particular validation process takes, what accuracy thresholds are technically achievable, what bias testing methodologies are most reliable for a specific use case — are more influential than generic policy positions. The EEOC, in particular, has explicitly invited industry data on AI employment screening performance across demographic groups as it develops its 2026 guidance.
The Structural Lesson
The White House framework’s “existing agencies” model is politically stable and technically pragmatic — it avoids a legislative battle, leverages existing enforcement infrastructure, and acknowledges that a healthcare AI risk is different from a credit AI risk. But it creates real operational complexity for enterprises: a fragmented regulatory landscape with inconsistent documentation standards, overlapping jurisdictions, and an enforcement-first approach that means industry learns the rules from consent decrees and enforcement letters rather than published guidance.
The practical response is to build AI governance programs that are modular by sector rather than monolithic. A centralized AI risk register with sector-specific compliance overlays is the architecture that makes this manageable. Enterprises that implement this structure before the September 2026 agency guidance wave will be positioned to rapidly bolt on new requirements as each agency’s guidance emerges — rather than rebuilding governance programs under enforcement pressure.
Frequently Asked Questions
What is the key difference between the US White House AI framework and the EU AI Act?
The EU AI Act is a single horizontal regulation applying uniform risk-based requirements across all sectors, enforced by a new AI Office. The White House framework routes AI governance through existing sector agencies (FDA, SEC, FTC, OCC, EEOC, etc.), each applying their existing statutory authority to AI in their domain. The US model is faster to implement but creates sector-fragmented obligations; the EU model is more consistent but more burdensome to comply with across the full product portfolio.
When will the sector agencies publish their AI guidance under the March 2026 framework?
The White House framework set a 180-day deadline for twelve named agencies to publish AI guidance or rules within their existing statutory authority, placing the publication window in August-September 2026. Financial services regulators (OCC, CFPB) and the EEOC are expected to publish first, based on existing rulemaking timelines. The FDA’s SaMD AI guidance was already in advanced draft stages and is expected mid-2026.
Does the US AI framework create enforceable compliance requirements for enterprises?
Not directly as a standalone document — the framework itself is not law. However, it directs sector agencies to apply their existing statutory authority to AI, and those agencies’ guidance and enforcement actions are legally binding. Enterprises face real legal exposure through sector-specific regulations: CFPB enforcement on automated credit decisions, FTC enforcement on deceptive AI practices, EEOC enforcement on AI-driven employment discrimination. The framework accelerates the pace at which these enforcement actions will define AI compliance norms.