
Trump’s AI Revolution: Deregulation, Dominance, and the Fight Over America’s AI Future

February 22, 2026


On January 20, 2025, within hours of taking office for the second time, President Donald Trump signed one of his most consequential technology executive orders: revocation of Joe Biden’s landmark AI safety framework, Executive Order 14110. With a stroke of a pen, the most comprehensive US government AI policy in history — with its requirements for safety testing, equity considerations, and labor protections — was gone.

What replaced it reflects a fundamentally different philosophy about AI governance: one centered on American competitive dominance, minimal regulatory friction, and aggressive international positioning against China. Understanding the Trump AI policy framework is now essential for anyone building, deploying, or investing in AI technology worldwide.


The Biden Legacy: What Was Removed

Biden’s Executive Order 14110, signed October 30, 2023, had been the most ambitious US government attempt to shape AI development:

  • Required that developers of the most powerful AI models share safety test results with the government before public release
  • Created standards for AI watermarking to identify AI-generated content
  • Directed agencies to protect workers from AI-related job displacement
  • Required agencies to advance equity and civil rights in AI applications
  • Set guidelines for privacy-preserving techniques in AI training
  • Called for developing international AI safety standards through bilateral and multilateral channels

Critics on the right called it regulatory overreach that would slow American AI innovation relative to China. Critics on the left called it insufficient — all carrots, no sticks, with no binding enforcement. Trump ended the debate by eliminating the framework entirely.


EO 1: “Removing Barriers to American Leadership in Artificial Intelligence” — January 23, 2025

The first Trump AI executive order set the philosophical tone for the administration’s approach.

Key Provisions

Revocation: The order explicitly revoked Biden’s EO 14110 and directed all agencies to identify and remove any regulations, policies, or guidance flowing from it.

America First framing: The order states plainly that it is “the policy of the United States to sustain and enhance America’s global AI dominance.” Not safety. Not equity. Not even economic growth. Dominance.

Innovation priority: Agencies are directed to review their regulations for any that “unduly burden AI development” and propose modifications within 180 days.

Technology stack export: The order directs relevant agencies to promote the export of the “American AI technology stack” — hardware, software, and services — as a geopolitical instrument for maintaining US influence globally.

Rejection of safety framing: The order explicitly abandoned the language of AI “risks” and “safety” that had defined Biden’s approach, replacing it with the language of “opportunities” and “leadership.”

What Changed Immediately

  • The AI Safety Institute’s mandate shifted away from mandatory safety evaluations toward voluntary cooperation with industry
  • NIST’s AI Risk Management Framework remains, but its status as a compliance reference became more advisory
  • Agency-level AI ethics requirements were rolled back
  • The requirement for safety testing of powerful models before public release was eliminated

EO 2: “Ensuring a National Policy Framework for Artificial Intelligence” — December 11, 2025

By December 2025, the administration had identified what it considered a more significant threat to coherent US AI policy: a patchwork of state-level AI regulations creating compliance nightmares for companies operating nationally.

By the end of 2025, more than 15 US states had enacted or were advancing AI-specific legislation — California’s SB 1047 (vetoed by the governor in 2024), Colorado’s AI Act, Texas AI regulation proposals, and many others. The Trump administration’s second AI executive order directly targeted this fragmentation.

Core Provisions

National preemption: The order declares it the policy of the US to maintain a “minimally burdensome national policy framework” for AI — signaling intent to preempt state regulations deemed excessive.

Commerce Department evaluation: The Secretary of Commerce is directed to publish, within 90 days, an evaluation identifying state AI laws that are “onerous” or conflict with national AI policy — specifically, laws that:

  • Require AI models to alter truthful outputs
  • Compel excessive disclosures that would harm US competitiveness
  • Impose requirements inconsistent with constitutional protections

Financial pressure mechanism: States identified as having onerous AI laws become ineligible for certain federal broadband funding (BEAD Program funds). This is the stick behind the preemption threat.

Federal primacy: The order signals that federal AI policy — which is deregulatory — will take precedence over state-level attempts at stricter AI oversight.

Constitutional Questions

Several legal scholars immediately questioned whether the order’s preemption approach would survive constitutional challenge. States have historically had significant authority over consumer protection, employment law, and civil rights — all domains where AI’s impacts are most directly felt. The Commerce power-based preemption strategy faces genuine uncertainty in the courts.


The AI Action Plan: America’s Comprehensive AI Strategy

Beyond executive orders, the Trump administration released a formal US AI Action Plan — a comprehensive strategy document coordinating AI policy across government. Its key pillars:

1. Innovation and Competitiveness

  • Remove barriers to commercial AI development
  • Streamline federal AI procurement
  • Promote American AI companies’ global market access
  • Develop domestic AI infrastructure (data centers, compute, energy)

2. National Security

  • Integrate AI into military and intelligence operations
  • Protect critical AI infrastructure from foreign adversaries
  • Counter Chinese AI influence in emerging markets
  • Maintain US export controls on advanced AI chips (the Nvidia export restrictions remain a key tool)

3. International Positioning

  • Counter China’s influence in international standards bodies (IEEE, ISO, ITU)
  • Build AI partnerships with allies (UK, Japan, South Korea, Australia, India, Gulf states)
  • Promote adoption of American “AI stack” in partner countries
  • Sign AI-focused bilateral technology agreements

4. Government AI Adoption

  • Use AI to improve federal government efficiency (strongly linked to DOGE initiatives)
  • Deploy AI in federal agencies for fraud detection, service delivery, and administration
  • GSA partnership with Meta’s Llama for government-wide AI deployment

The DOGE Factor: AI Meets Government Efficiency

One of the most dramatic developments of 2025 was the intersection of AI policy with the Department of Government Efficiency (DOGE), the Elon Musk-led initiative that used AI systems to analyze federal spending, identify redundancies, and accelerate personnel decisions.

DOGE deployed AI tools to:

  • Analyze federal contracts and spending patterns at scale
  • Identify employees for potential reduction
  • Review regulatory agency outputs for elimination
  • Automate communications and processing workflows

This represented the most dramatic deployment of AI in government decision-making in US history — and raised significant legal questions about due process, accuracy, and accountability when AI systems affect the jobs of federal employees.



The Chip Export War: AI Policy Through Trade Controls

While executive orders shape domestic AI governance, export controls on advanced semiconductors represent US AI foreign policy in its most concrete form.

The Biden-era chip export restrictions — limiting Nvidia’s H100 and A100 exports to China — were maintained and in some respects strengthened under Trump. These controls are based on the strategic calculation that US AI supremacy depends partly on denying China access to the most advanced training hardware.

The restrictions have:

  • Significantly slowed China’s ability to train the largest frontier models
  • Created enormous pressure on Chinese companies to develop domestic alternatives (Huawei Ascend chips)
  • Pushed some computing to third countries as workarounds
  • Generated friction with US allies who export restricted chips into restricted markets

DeepSeek’s January 2025 demonstration that highly capable AI could be trained with fewer chips (and on older hardware) challenged the core premise of this strategy — though US officials maintain that the restrictions still matter for the most ambitious AI projects.


The Contrast with Biden: A Comparison

| Policy Area | Biden Approach | Trump Approach |
| --- | --- | --- |
| Safety testing | Mandatory reporting for powerful models | Voluntary cooperation with industry |
| Equity/civil rights | Explicit AI equity requirements | Removed |
| Labor protections | AI impact on workers addressed | Not addressed |
| State regulations | Federal floor, state flexibility | Federal preemption of stricter state rules |
| International | Safety-focused multilateral cooperation | Competitiveness-focused bilateral deals |
| China policy | Chip controls + diplomatic engagement | Chip controls + hardline competition |
| Regulatory philosophy | Risk-based precaution | Innovation-first, minimal friction |
| AI governance voice | OSTP, Commerce | OSTP, DOGE, NSC |

Industry Response: Mostly Applause, Some Anxiety

The technology industry broadly welcomed the deregulatory direction, with some nuances:

Enthusiastically supportive: Frontier AI companies (OpenAI, Anthropic, Google, Meta) appreciated removal of pre-release safety reporting requirements and the signals against new regulatory burdens.

Cautiously watching: Enterprise AI vendors that sell to government need clarity on procurement rules, not just deregulation.

Concerned: AI ethics researchers, civil society organizations, labor unions, and some state governments worry that removing federal guardrails without state preemption creates genuine safety and rights gaps.

International friction: The EU, UK, and Canada have expressed concerns about the US departure from safety-focused multilateralism, complicating coordination on frontier AI governance.


What It Means for Global AI Development

The US shift has ripple effects globally:

  1. Race-to-bottom risk: If the largest AI economy signals that safety requirements are optional, other countries may deprioritize them to remain competitive
  2. EU-US divergence: The widening gap between EU’s precautionary approach and US’s innovation-first approach creates real compliance complexity for multinationals
  3. China positioning: The US deregulatory move somewhat undercuts the “democratic values” narrative for AI governance — making it harder to argue that the alternative to China’s approach is necessarily better for human rights
  4. Standards fragmentation: With the US and EU pulling in different directions in international standards bodies, consensus AI governance norms become harder to achieve

Conclusion

The Trump AI executive orders represent the most significant reversal in US technology policy since the early days of internet regulation. By prioritizing competitive dominance over precautionary safety, and by attempting to preempt state-level innovation in AI governance, the administration has made a clear bet: that AI development unfettered by regulation will produce better outcomes than AI development shaped by rights-based guardrails.

The outcomes of this bet — for American competitiveness, for AI safety, for workers affected by AI deployment, and for the global governance ecosystem — will play out over the next decade.

What is clear already: the era of bipartisan consensus on AI policy is over. AI governance has become a front in the broader culture war over the role of government in technology and society.



Decision Radar (Algeria Lens)

| Dimension | Assessment |
| --- | --- |
| Relevance for Algeria | High — US AI policy directly shapes the tools, platforms, and chip availability that Algerian enterprises and government agencies depend on. Export controls affect GPU access for CERIST and university AI labs. The US push to export its “AI stack” to partner nations could influence Algeria’s own technology procurement decisions. |
| Infrastructure Ready? | Partial — Algeria lacks domestic AI compute infrastructure and depends on imported hardware subject to US export tiers. Algerie Telecom and Mobilis are investing in data center capacity, but advanced GPU clusters for AI training remain out of reach under current import channels. |
| Skills Available? | Partial — CERIST, ESI (Ecole Nationale Superieure d’Informatique), and USTHB produce AI researchers, but the talent pipeline is thin relative to the pace of global AI deployment. The Ministry of Digital Economy and Startups has launched upskilling programs, but workforce readiness for enterprise AI governance and compliance remains limited. |
| Action Timeline | 6-12 months — Algeria’s nascent AI strategy (Algeria Digital 2030) should account for the US regulatory divergence from the EU. Procurement decisions for government AI systems and telecom AI deployments (Djezzy, Ooredoo, Mobilis) need to factor in which regulatory framework their vendors follow. |
| Key Stakeholders | Ministry of Digital Economy and Startups, Ministry of National Defence, ANSSI (national cybersecurity agency), CERIST, Sonatrach IT division, Sonelgaz, Algerie Telecom, Djezzy, Mobilis, ESI, USTHB AI departments |
| Decision Type | Strategic — The US-EU regulatory split forces Algeria to choose alignment paths for AI procurement, data governance, and international partnerships. This is not a tactical issue but a long-term strategic positioning question. |

Quick Take: The US deregulatory turn on AI creates a bifurcated global landscape that Algeria cannot ignore. As Algeria develops its own AI strategy under Digital 2030, policymakers at the Ministry of Digital Economy must decide whether to align AI procurement and governance standards with the EU precautionary model, the US innovation-first model, or chart a hybrid path. For Algerian enterprises like Sonatrach and Sonelgaz deploying AI in critical infrastructure, the absence of clear US safety mandates means vendor due diligence becomes even more important — Algerian buyers cannot rely on US regulatory floors that no longer exist.

