⚡ Key Takeaways

Vietnam’s Law on Artificial Intelligence (No. 134/2025) took effect March 1, 2026, making it Southeast Asia’s first standalone AI legislation. The 35-article law introduces a three-tier risk classification, mandates AI-generated content labeling, and provides grace periods up to September 2027 for health, education, and finance sectors. The consolidated Ministry of Science and Technology leads governance through a National Single-Window AI Portal.

Bottom Line: Companies operating AI in Southeast Asia should begin mapping their systems against Vietnam’s three-tier risk framework now: the general grace period expires in March 2027, and implementing decrees specifying exact classification criteria and penalties are expected throughout 2026.


🧭 Decision Radar (Algeria Lens)

Relevance for Algeria: High
Vietnam’s law offers the most relevant model for Algeria as a fellow developing economy pursuing AI regulation. Algeria adopted its National AI Strategy in December 2024 but has no dedicated AI legislation. Vietnam’s lifecycle approach and risk tiers provide a concrete template to study.

Infrastructure Ready? Partial
Algeria has ASAL and digital infrastructure plans, but lacks a dedicated AI regulatory institution comparable to Vietnam’s consolidated MOST with its AI portal. No conformity assessment framework or AI auditing capacity exists yet.

Skills Available? Partial
Algeria has legal and technical talent, but specialized AI governance expertise for impact assessments, regulatory sandboxes, and algorithmic auditing is limited. Workforce development would be essential before adopting a comparable framework.

Action Timeline: 12-24 months
Vietnam’s implementing decrees and enforcement experience through 2027 will provide critical lessons. Algeria should study outcomes before adapting elements, particularly around institutional capacity and SME compliance burden.

Key Stakeholders: Ministry of Post and Telecommunications, Ministry of Digital Economy and Startups, ASAL, ARPCE, university AI research labs, tech startup ecosystem

Decision Type: Strategic
Vietnam’s law informs long-term national AI governance planning. The phased implementation model and regulatory sandbox concept are directly adaptable to Algeria’s institutional capacity.

Quick Take: Vietnam’s standalone AI law offers Algeria a practical regulatory model from a fellow developing economy. Algeria’s policymakers should monitor Vietnam’s implementing decree rollout and enforcement experience through 2027, as lessons on institutional capacity building, grace period management, and balancing SME compliance burden with regulatory rigor will be directly applicable to Algeria’s own governance journey.

Why Southeast Asia’s First AI Law Matters Now

On March 1, 2026, Vietnam became the first country in Southeast Asia to enforce a standalone law dedicated to artificial intelligence. Passed by the National Assembly on December 10, 2025, Law No. 134/2025/QH15 represents a decisive shift in a region where digital economies are booming but governance frameworks have lagged behind deployment.

The timing is significant. Southeast Asia’s AI market reached an estimated $12 billion in 2025 and is growing at roughly 37% annually, according to Statista. Vietnam itself has emerged as a hub for AI talent and software outsourcing. Yet until this legislation, no ASEAN member had enacted a comprehensive, binding legal instrument specifically governing AI. Countries like Singapore, Thailand, and the Philippines have relied on voluntary guidelines, ethical frameworks, or sector-specific regulations that leave significant governance gaps.

Vietnam’s approach covers the entire AI lifecycle — from research and development through deployment to end-use — and introduces a three-tier risk classification system. The law applies to both domestic and foreign entities engaged in AI activities on Vietnamese territory, giving it extraterritorial reach that mirrors the EU AI Act. Companies that build AI abroad but serve Vietnamese users fall under its jurisdiction.

The law’s effectiveness hinges on implementing decrees that remain pending as of March 2026. How Vietnam navigates the gap between legislative text and operational enforcement will determine whether this law becomes a governance model for the Global South or another well-intentioned statute that struggles in practice.

How the Three-Tier Risk System Works

The centerpiece of Law 134/2025 is a risk-based classification system that determines regulatory obligations for different AI applications. Classification depends on the level of impact on human rights, safety, and security; the fields where the system operates; user scope; and the scale of potential consequences.

High-Risk AI Systems are those that could cause significant harm to life, health, legitimate rights and interests, or national security. Examples include healthcare diagnostics, financial services AI, biometric identification, and critical infrastructure management. The Prime Minister will issue an official list specifying which high-risk systems require pre-market conformity certifications. All high-risk systems face periodic audits, mandatory impact assessments, transparency obligations covering training data and decision logic, human oversight with override capability, and ongoing monitoring with incident reporting.

Medium-Risk AI Systems are defined as those with the potential to confuse, influence, or manipulate users because users are unaware they are interacting with an AI system or consuming AI-generated content. This tier captures deepfake generation tools, undisclosed chatbots, and AI content systems that could be mistaken for human-produced work. Medium-risk systems face supervision through reports, sample audits, and assessments by independent organizations.

Low-Risk AI Systems cover everything that does not fall into the higher tiers — spam filters, recommendation engines, scheduling tools, and similar applications. These face minimal obligations: monitoring based on incidents, complaints, or as-needed safety checks.
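The triage logic described above can be sketched as a simple decision function. This is purely illustrative: the actual classification criteria await implementing decrees, so every field name, domain label, and rule below is a hypothetical reading of the tiers as described, not the law’s own test.

```python
from dataclasses import dataclass

# Hypothetical domains the article cites as high-risk examples;
# the Prime Minister's official list has not been published.
HIGH_RISK_DOMAINS = {
    "healthcare_diagnostics", "financial_services",
    "biometric_identification", "critical_infrastructure",
}

@dataclass
class AISystem:
    name: str
    domain: str                      # field of application
    can_harm_life_or_security: bool  # could cause significant harm
    user_facing: bool                # interacts with end users
    discloses_ai_identity: bool      # users know it's AI / AI content

def classify(system: AISystem) -> str:
    """Assign a tier following the logic described in the law's summary."""
    if system.can_harm_life_or_security or system.domain in HIGH_RISK_DOMAINS:
        return "high"
    # Medium tier: potential to confuse users unaware they face AI.
    if system.user_facing and not system.discloses_ai_identity:
        return "medium"
    return "low"

print(classify(AISystem("triage-bot", "healthcare_diagnostics", True, True, True)))  # high
print(classify(AISystem("support-chat", "retail", False, True, False)))              # medium
print(classify(AISystem("spam-filter", "email", False, False, True)))                # low
```

Note how the medium tier hinges on disclosure rather than domain: the same chatbot moves from medium to low risk simply by telling users it is an AI.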

The law also mandates that AI-generated audio, image, and video content be conspicuously labeled in a machine-readable format. Deployers must notify users when AI-generated content poses a risk of confusion about real events or real people.
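The law requires machine-readable labels but, pending implementing decrees, does not fix a concrete format. The JSON payload below is a purely hypothetical illustration of the kind of metadata a deployer might embed alongside AI-generated media; every key name is an assumption.

```python
import json

# Hypothetical label schema -- the decree-level format is not yet published.
label = {
    "ai_generated": True,
    "modality": "video",
    "generator": "example-model-v1",        # hypothetical model identifier
    "generated_at": "2026-03-01T00:00:00Z",
    "confusion_risk_notice": True,          # depicts real people or events
}

print(json.dumps(label, indent=2))
```

A real scheme would more likely build on an existing provenance standard such as C2PA content credentials than a bespoke JSON blob, but the principle is the same: the label travels with the media and is parseable without human inspection.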

Institutional Architecture After the Ministry Merger

The law designates the Ministry of Science and Technology (MOST) as the primary regulatory body for AI governance. This designation carries particular weight because, as of March 1, 2025, the former Ministry of Information and Communications (MIC) merged into MOST, consolidating responsibilities for science, technology, digital transformation, and AI under a single ministry with 25 divisions.

This consolidation gives MOST broader authority than any previous Vietnamese ministry held over digital technology. Network security functions transferred to the Ministry of Public Security, while press and publishing management moved to the Ministry of Culture, Sports, and Tourism. For AI specifically, MOST operates a National Single-Window AI Portal and coordinates implementing decree development.

Provincial-level People’s Committees handle local implementation, monitoring AI deployment within their jurisdictions and reporting compliance issues to MOST. This decentralized enforcement mechanism recognizes Vietnam’s geographic and economic diversity — what works in Ho Chi Minh City’s tech startup ecosystem may require different approaches in rural provinces.

An important detail for foreign companies: initial proposals for an independent National AI Committee were scrapped during the legislative process. All AI oversight is centralized under the Government, with MOST as lead coordinator rather than an autonomous regulatory body.


Grace Periods and the Implementing Decree Challenge

Vietnam’s legislative system operates through two tiers: the National Assembly passes laws establishing principles and frameworks, while the government issues detailed implementing decrees specifying procedures, thresholds, penalties, and technical standards. The AI law’s implementing decrees remain in development as of March 2026.

The law provides structured grace periods for existing AI systems. Companies in most sectors have 12 months — until March 1, 2027 — to achieve compliance. For AI systems in health, education, and finance, the grace period extends to 18 months, until September 1, 2027. Systems deemed to pose serious damage risk can be suspended during the transition regardless of grace period status.
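The deadlines follow directly from the effective date; a quick sketch of the arithmetic (the dates are from the law as described, the month-shifting helper is our own, since Python’s standard library has no native month arithmetic):

```python
from datetime import date

EFFECTIVE = date(2026, 3, 1)  # law takes effect

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months (safe here because the day is the 1st)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

general_deadline = add_months(EFFECTIVE, 12)   # most sectors
extended_deadline = add_months(EFFECTIVE, 18)  # health, education, finance

print(general_deadline)   # 2027-03-01
print(extended_deadline)  # 2027-09-01
```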

Several critical questions await decree-level answers. The specific criteria for determining which risk tier an AI system falls into have not been published. The penalty structure — including fine amounts, escalation procedures, and enforcement mechanisms — remains unspecified. The methodology for mandatory high-risk impact assessments, including approved assessors and assessment criteria, is still in development. And the framework for mutual recognition of foreign AI conformity assessments requires bilateral negotiations that have not concluded.

MOST has indicated that implementing decrees will be issued in phases throughout 2026 and into 2027, with high-risk AI provisions prioritized. Industry consultations on draft decrees began in February 2026. The government’s guiding decree, the Prime Minister’s official list of high-risk AI systems, and MOST’s National AI Ethics Framework are all pending.

Regulatory Sandboxes and the Innovation Signal

The law includes provisions for regulatory sandboxes — controlled environments where companies can test innovative AI applications under relaxed compliance requirements with government oversight. Sandbox participants receive temporary exemptions from certain obligations in exchange for data sharing and regulatory engagement.

This mechanism is designed especially for high-tech startups. Participants benefit from reduced testing costs and waived legal liabilities during the sandbox period. For regulators, sandboxes provide direct exposure to cutting-edge AI applications, building institutional knowledge that desk-based regulation alone cannot deliver.

The sandbox also sends a deliberate signal to the international AI industry: Vietnam’s regulatory posture is not reflexively restrictive. Companies willing to participate gain first-mover advantages — direct relationships with regulators, input into implementing decree development, and early compliance experience that competitors must replicate later.

Ripple Effects Across ASEAN

Vietnam’s legislation arrives in a region where every major economy is wrestling with AI governance, and its approach will influence neighboring choices.

Singapore has led Southeast Asian AI governance through voluntary frameworks — the Model AI Governance Framework (2019) and the AI Verify testing toolkit. These instruments are internationally respected but legally non-binding. If Vietnam demonstrates that binding AI legislation does not deter investment, Singapore may face growing pressure to move beyond voluntary mechanisms, particularly as the EU AI Act raises the global regulatory baseline.

Thailand’s Digital Government Development Agency has been developing a draft AI Act since 2023, using the EU AI Act as a template. The process stalled for a time with limited traction, and the Electronic Transactions Development Agency (ETDA) is now revising the consolidated draft following a public consultation in 2025. The timeline for passage remains unclear, though Vietnam’s progress has intensified regional attention on AI governance.

The Philippines launched its National AI Strategy Roadmap 2.0 in July 2024, emphasizing AI adoption. Indonesia’s AI National Strategy 2020-2045 focuses on capacity building across five priority areas. Both countries are monitoring Vietnam’s implementation to inform their own approaches.

At the regional level, the ASEAN Guide on AI Governance and Ethics (2024) — expanded in January 2025 to cover generative AI — establishes voluntary principles. Vietnam’s binding national law is compatible with the ASEAN framework but goes substantially further. If multiple ASEAN members follow Vietnam’s lead, pressure will grow for a binding regional instrument harmonizing risk classifications and cross-border enforcement.


Frequently Asked Questions

What is Vietnam’s AI law and when did it take effect?

Vietnam’s Law on Artificial Intelligence (No. 134/2025/QH15) is the first standalone AI legislation in Southeast Asia. Passed by the National Assembly on December 10, 2025, it took effect on March 1, 2026. The law consists of 8 chapters and 35 articles covering the entire AI lifecycle from research through deployment and use. Existing AI systems have grace periods of 12 months (general) or 18 months (health, education, finance sectors) to achieve compliance.

How does Vietnam’s three-tier risk system classify AI applications?

The law classifies AI into high-risk (systems that could cause significant harm to life, health, rights, or national security), medium-risk (systems that could confuse or manipulate users unaware of AI interaction or AI-generated content), and low-risk (all others). High-risk systems face mandatory impact assessments, periodic audits, and potential pre-market certification. The Prime Minister will publish an official list of high-risk categories, with implementing decrees specifying exact classification criteria still pending.

What impact will Vietnam’s law have on other ASEAN countries?

Vietnam’s law is intensifying AI governance discussions across the region. Thailand has been developing a draft AI Act since 2023 that is still under revision. Singapore faces pressure to move beyond its voluntary Model AI Governance Framework. The Philippines and Indonesia are monitoring Vietnam’s implementation. At the ASEAN level, the 2024 Guide on AI Governance and Ethics is voluntary; if multiple members adopt binding laws following Vietnam’s lead, pressure will build for a harmonized regional AI framework agreement.

Sources & Further Reading