
The China-US AI Race: How DeepSeek and Open-Source Models Are Reshaping the Industry

February 21, 2026

[Image: Two chess pieces representing the China-US AI competition on a digital circuit board]

The DeepSeek Shock

On January 20, 2025, a Chinese AI lab called DeepSeek released DeepSeek-R1 — a reasoning model that matched OpenAI’s o1 on key benchmarks, ran at roughly 95% lower inference cost, and was released open-weight under the MIT License. The reaction was immediate: on January 27, Nvidia’s stock dropped roughly 17%, erasing close to $600 billion in market capitalization in a single session.

One year later, the disruption has not faded. Chinese AI labs have sustained their momentum, open-source models have overtaken closed models in global downloads, and the geopolitical dimensions of AI competition have become central to both technology strategy and foreign policy.

Why DeepSeek-R1 Mattered

The prevailing narrative in 2024 was that frontier AI required massive compute, enormous training budgets ($100M+ per run), and unrestricted access to the latest Nvidia GPUs. US export controls restricting chips like the H100 from China were assumed to be a meaningful brake on Chinese AI progress.

DeepSeek shattered that assumption. R1 achieved frontier performance using significantly less compute, thanks to Mixture of Experts architectures, optimized training procedures, and novel reasoning techniques. DeepSeek claimed a training cost of just $5.6 million for the final V3/R1 training run, though analysts at SemiAnalysis estimated actual total spending closer to $1.6 billion once hardware acquisition, infrastructure, and prior research are included. Either way, the gap between DeepSeek’s spend and OpenAI’s $100M+ training runs was striking.
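The efficiency logic behind Mixture of Experts can be shown with a minimal sketch, assuming toy sizes (the expert count, hidden dimension, and routing below are illustrative only, not DeepSeek's actual configuration): a router activates just the top-k experts per token, so compute per token scales with k rather than with the total parameter count.

```python
import numpy as np

# Toy Mixture of Experts layer: only the top-k experts run per token,
# so per-token compute tracks k, not the total number of experts.
# All sizes here are hypothetical, not DeepSeek's real configuration.

rng = np.random.default_rng(0)

N_EXPERTS = 8   # total experts in the layer
TOP_K = 2       # experts activated per token
D_MODEL = 16    # hidden dimension

# Each expert is a simple linear map; the router scores experts per token.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) / np.sqrt(D_MODEL)
           for _ in range(N_EXPERTS)]
router = rng.standard_normal((D_MODEL, N_EXPERTS)) / np.sqrt(D_MODEL)

def moe_forward(x):
    """Route one token vector to its top-k experts and mix their outputs."""
    scores = x @ router                        # one score per expert
    top = np.argsort(scores)[-TOP_K:]          # indices of the k best experts
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                               # softmax over selected experts
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

token = rng.standard_normal(D_MODEL)
out = moe_forward(token)
print(out.shape)  # (16,)
print(f"experts used per token: {TOP_K}/{N_EXPERTS} "
      f"({TOP_K / N_EXPERTS:.0%} of expert parameters)")
```

With these toy numbers, each token touches only 25% of the expert parameters; frontier MoE models push that ratio far lower, which is one reason reported training and inference costs can fall well below dense-model expectations.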

The implications were profound: if algorithmic sophistication can partially offset compute scale, then hardware export controls are a less effective competitive barrier than Washington assumed.

Open-Source Models Go Global

DeepSeek-R1 was not just a benchmark winner — it was free to download, run locally, and fine-tune. That combination of frontier performance and open access made it one of the fastest-adopted open-weight models on HuggingFace and the number-one free app on the iOS App Store, surpassing ChatGPT.

By early 2026, Chinese open-source AI had hit a remarkable milestone. Alibaba’s Qwen model family overtook Meta’s Llama in cumulative HuggingFace downloads, exceeding 700 million total downloads. A joint MIT and HuggingFace study confirmed that Chinese open-source models had surpassed US models in share of global downloads (17.1% versus 15.8%). A RAND Corporation report found Chinese AI models cost roughly one-sixth to one-fourth as much as comparable US systems.

However, downloads do not equal usage. The same RAND report found that US models still captured approximately 93% of global LLM site visits as of August 2025. American AI retains an overwhelming lead in actual commercial adoption, even as Chinese models dominate open-source distribution.

The Price War

DeepSeek’s efficiency sparked a price war across the industry. When a high-performance model runs cheaply, every API provider must cut prices to compete.

The cascade has been dramatic. OpenAI, Anthropic, Google, and major cloud providers have all reduced API pricing significantly since January 2025. Chinese labs — ByteDance, Alibaba, Zhipu AI, and others — have piled into the price war, with some offering API access at near-zero cost as a user acquisition strategy.

This commoditization of inference shifts the competitive battleground. The question is no longer who can afford the best model, but whose APIs, tools, and platforms become the default infrastructure for developers and governments worldwide.


The Export Control Pivot

The US government’s response to Chinese AI progress has evolved significantly — and not in the direction the original export control architects intended.

The Biden administration issued its AI Diffusion Rule on January 15, 2025, aiming to tighten chip export restrictions globally. But the Trump administration rescinded the rule on May 13, 2025, before it ever took effect. Following a bilateral US-China meeting in Busan, South Korea, in October 2025, the two governments moved to ease economic tensions. By December 2025 and January 2026, the Commerce Department had revised its license review policy, approving Nvidia H200 exports to China under case-by-case review with strict conditions.

The policy shift reflects an uncomfortable reality: export controls demonstrably failed to prevent frontier AI development and arguably incentivized the very efficiency innovations that made DeepSeek possible.

Meanwhile, China has accelerated domestic chip development. Huawei’s Ascend AI processor roadmap, announced at Connect 2025, includes the 950 series launching throughout 2026, with plans to double Ascend 910C output to 600,000 units. Manufacturing runs on SMIC’s enhanced 7nm process — behind Nvidia’s TSMC 4nm, but rapidly scaling. China’s 15th Five-Year Plan calls for decisive breakthroughs in integrated circuits, though Goldman Sachs estimates domestic suppliers met only ~14% of China’s semiconductor demand in 2024, projected to rise to ~37% by 2030.

What Is Coming: DeepSeek-R2 and the Spring Festival Surge

As of February 2026, the AI community is anticipating DeepSeek-R2. Based on previews and reporting, R2 integrates vision, audio, and basic video understanding into the reasoning framework, reportedly spans 100+ languages in training data, and is expected to follow R1’s open-weight release strategy.

DeepSeek is not alone. Around China’s Spring Festival 2026, a coordinated surge of model releases landed: Zhipu AI’s GLM-5 (open-source agentic intelligence, February 11), Alibaba’s Qwen 3.5 (multimodal, 200 languages, February 16), ByteDance’s Doubao 2.0 (155 million weekly active users in China), and MiniMax’s M2.2.

The Global South and the Values Question

For governments, universities, and businesses across Africa, Southeast Asia, and Latin America, open-source Chinese AI models offer something Western models often do not: affordable, locally deployable access with no ongoing API costs and no data leaving national borders. Institutions are building on DeepSeek, Qwen, and other Chinese models not out of political alignment but out of pragmatic economics.

The US is responding. The Trump administration is planning a Tech Corps initiative — a Peace Corps-style program deploying 5,000 American tech volunteers to partner nations, partly as a counter to growing Chinese AI influence in the developing world.

This competition carries a values dimension. AI models embed assumptions through their training data — what they discuss, how they characterize events, what perspectives they present as mainstream. As Chinese open-source models become global infrastructure, the question of whose values shape the world’s AI defaults becomes increasingly consequential.

A Bipolar AI Reality — With Caveats

The comfortable narrative of US AI dominance is no longer fully accurate. In 2026, the world has entered a genuinely bipolar AI landscape: US labs lead in enterprise safety, multi-agent frameworks, and frontier reasoning quality. Chinese labs lead in price efficiency, open-source distribution, and multilingual reach. US cloud providers are projected to invest $600 billion in AI infrastructure in 2026 — double 2024 spending — underscoring the scale of American investment.

But the picture is more nuanced than headlines suggest. Chinese models dominate downloads; American models dominate actual usage. The gap between distribution and adoption matters enormously, and the race is far from settled.

For users, businesses, and governments outside both superpowers, this competition creates options that did not exist two years ago — and new questions about which models to trust, which infrastructure to build on, and what values to embed in the AI systems shaping their digital futures.



🧭 Decision Radar (Algeria Lens)

Relevance for Algeria: High — Algeria has strategic interest in AI sovereignty and avoiding dependency on any single technology bloc. Chinese open-source models offer cost-effective alternatives for a market where Western API pricing is prohibitive.

Infrastructure Ready? Partial — Algeria has growing internet connectivity and data center capacity, but zero domestic semiconductor manufacturing. All AI hardware is imported. Local deployment of open-weight models is feasible on commodity servers; training frontier models is not.

Skills Available? Partial — Algeria’s developer community is expanding and already uses open-source tools heavily. Fine-tuning and deploying models like DeepSeek or Qwen is within reach for skilled teams, but deep ML research talent remains scarce. Universities are graduating more CS students, though AI specialization programs are still limited.

Action Timeline: Immediate — Open-weight models are available now. Algerian startups, universities, and government agencies can begin deploying and fine-tuning Chinese and Western open-source models today at minimal cost.

Key Stakeholders: Ministry of Digital Economy, Ministry of Higher Education, Algerian Startup Fund, university AI labs, private tech companies, telecom operators (Djezzy, Mobilis, Ooredoo) exploring AI services.

Decision Type: Strategic — The choice of which AI models to build on carries long-term implications for data sovereignty, vendor lock-in, compliance exposure, and geopolitical alignment.

Quick Take: The China-US AI race hands Algeria a rare advantage: two competing superpowers producing increasingly capable, increasingly cheap AI models. Algeria’s strong trade ties with China, its growing developer base, and its need to avoid Western compliance friction (given FATF grey list status) make Chinese open-source models a pragmatic foundation for national AI capabilities. The strategic move is to build on open-weight models from both blocs, avoiding lock-in to either.



