What the Law Actually Says — and Why It Matters Outside Korea
The official title is the Act on the Development of Artificial Intelligence and Establishment of Trust, published August 27, 2025, and effective January 22, 2026. Together with its enforcement decree (the Presidential Decree implementing the Act), it makes South Korea the second jurisdiction, after the European Union, to adopt a horizontal AI law covering definitions, obligations, and penalties across the entire AI value chain.
For non-Korean vendors, three structural choices in the law matter most. First, the law applies extraterritorially: foreign AI providers above specific thresholds — one trillion KRW (~$700 million) in annual revenue, 10 billion KRW (~$7 million) from AI services, or one million daily Korean users averaged over the prior three months — must designate a domestic representative based in Korea. That representative is the legal contact point for compliance submissions, transparency notices, and regulatory inspections by MSIT under Article 40.
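The threshold test above reduces to a simple disjunction. A minimal sketch, using the figures cited in this section (the function and parameter names are illustrative, not from the Act):

```python
# Extraterritoriality thresholds as described in the article.
# A provider crossing ANY one of the three must designate a
# domestic representative in Korea.
KRW_ANNUAL_REVENUE_THRESHOLD = 1_000_000_000_000   # 1 trillion KRW
KRW_AI_SERVICE_REVENUE_THRESHOLD = 10_000_000_000  # 10 billion KRW
DAILY_KOREAN_USER_THRESHOLD = 1_000_000            # 3-month average

def needs_domestic_representative(annual_revenue_krw: float,
                                  ai_service_revenue_krw: float,
                                  avg_daily_korean_users_3mo: float) -> bool:
    """True if any of the three statutory thresholds is met."""
    return (annual_revenue_krw >= KRW_ANNUAL_REVENUE_THRESHOLD
            or ai_service_revenue_krw >= KRW_AI_SERVICE_REVENUE_THRESHOLD
            or avg_daily_korean_users_3mo >= DAILY_KOREAN_USER_THRESHOLD)
```

Note the `or`: a vendor with modest revenue but a large Korean user base is still in scope.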
Second, the law splits AI systems into two regulated tiers that overlap but are not identical. High-impact AI is defined functionally — systems that “significantly affect human life, safety, or fundamental rights” in healthcare, energy, nuclear operations, transportation, biometric analysis, hiring, education, and public decision-making. High-performance AI is defined quantitatively — systems trained with at least 10^26 cumulative floating-point operations (FLOPs), the same threshold the EU AI Act uses for general-purpose AI models with systemic risk. Most production foundation models from OpenAI, Anthropic, Google DeepMind, and Meta cross this line; smaller fine-tuned derivatives generally do not.
Third, generative AI providers face standalone transparency rules under Article 31. Operators must notify users in advance that AI is being used, and must clearly label outputs (sound, image, video, or text) that are difficult to distinguish from human-created content. The law does carve out flexibility for “artistic and creative expressions,” but the default for business deployments is mandatory labeling.
The One-Year Grace Period Is Not a Pause — It’s a Deadline
MSIT’s grace period is widely misread as a soft launch. It is not. Administrative fines under Article 43 — up to 30 million KRW per violation for failures including missing domestic representatives, unmet notification obligations, refused inspections, and ignored corrective orders — are deferred until January 22, 2027. But the substantive obligations of the Act have been in force since January 22, 2026. MSIT can still issue corrective orders, conduct on-site investigations, compel data submissions, and order service suspensions where it judges a system to pose a safety threat. The grace period suspends fines, not enforcement powers.
For global AI providers, this creates a hard 12-month sprint with three workstreams running in parallel: legal entity setup (designating a Korean domestic agent), system classification (which models cross the 10^26 FLOPs line and which products qualify as high-impact AI), and product changes (labeling pipelines for generative outputs, risk management documentation, and user-facing transparency notices). A vendor that waits until December 2026 to start will not finish before fines apply.
The Cloud Security Alliance and OneTrust both flag the grace window as the single most actionable feature of the law for non-Korean teams: Korea is the only jurisdiction with a comprehensive AI law where the timeline to first fines is published and predictable. The EU AI Act, by contrast, staggers enforcement across multiple deadlines through 2027, and member-state-level enforcement varies. Korea’s single grace expiry simplifies project planning.
The Article 34 Obligations for High-Impact AI
Operators of high-impact AI carry the heaviest compliance load. Article 34 imposes five obligations that map directly to product and operational deliverables: establish and operate a risk management plan; provide explanation mechanisms for AI-generated decisions; implement user protection measures; ensure human oversight and supervision; and preserve safety and reliability documentation. Pre-deployment, operators must also conduct a self-assessment to determine whether their system qualifies as high-impact AI and may, optionally, request MSIT confirmation of that classification.
These obligations are functionally similar to the EU AI Act’s high-risk regime, with two practical differences. Korea’s self-assessment model puts the classification burden on the provider — there is no published list of high-impact AI products, so vendors must read the sectoral examples in the law and decide for themselves. And the “explanation mechanism” obligation is broader than the EU’s explainability rules: it requires operators to provide the reasoning behind AI-generated results to affected users, not just to regulators. Healthcare AI, hiring AI, biometric ID systems, and energy-grid optimization tools all fall squarely in scope.
What Vendors and Compliance Officers Should Do During the Grace Window
1. Designate the Korean domestic representative now, not in late 2026
Under Article 36, foreign AI providers above the user or revenue thresholds must appoint a domestic representative with a South Korean address before fines apply. The representative handles compliance submissions, takes safety measures, and serves as the legal point of contact for MSIT inspections. Vendors that wait until Q4 2026 will face a compressed market for qualified representatives. Cooley and OneTrust both report that in-house counsel, Korean law firms, and existing local subsidiaries are the three viable structures. Don’t try to use a marketing affiliate — MSIT requires the representative to have actual authority to respond to regulatory orders.
2. Run the 10^26 FLOPs classification on every shipped model and every fine-tune
The 10^26 cumulative training FLOPs threshold mirrors the EU AI Act’s GPAI systemic-risk bar, which means most teams already have a number from their EU compliance work. Apply it here as well: the cumulative count covers pre-training, instruction tuning, RLHF, and any continual-pretraining stages. For derivatives and fine-tunes, MSIT’s enforcement decree clarifies that the count is cumulative across the development pipeline. Document the calculation, the tooling used (W&B, MLflow, or custom logs), and store the artifact — MSIT can request it during an Article 40 inspection.
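A sketch of the cumulative count, using the widely cited approximation of roughly 6 × parameters × training tokens FLOPs for dense transformer training. The parameter and token figures below are hypothetical placeholders, not real model numbers:

```python
# Approximate training compute per stage: ~6 * params * tokens
# (standard dense-transformer estimate; MoE and other architectures
# need a different accounting).
def stage_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

THRESHOLD = 1e26  # the Act's high-performance AI line

# Hypothetical pipeline: every stage counts toward the cumulative total.
stages = {
    "pretraining":          stage_flops(params=7e11, tokens=3e13),
    "continual_pretraining": stage_flops(params=7e11, tokens=2e12),
    "instruction_tuning":   stage_flops(params=7e11, tokens=5e10),
    "rlhf":                 stage_flops(params=7e11, tokens=1e10),
}
cumulative = sum(stages.values())
crosses_threshold = cumulative >= THRESHOLD
```

The per-stage breakdown is worth keeping in the stored artifact: it shows an inspector exactly which stages were counted and how.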
3. Deploy generative AI labeling before Korean traffic crosses the user threshold
Article 31 transparency obligations apply to any generative AI output reaching Korean users — not just to operators with Korean offices. If your product serves Korean users at all and outputs sound, image, video, or text that is difficult to distinguish from human-created work, you need pre-use notification (“you are interacting with AI”) and post-generation labeling. C2PA content credentials are the cleanest technical implementation: they survive most platform pipelines and also align with the machine-readable marking requirements of EU AI Act Article 50, so a single deployment can serve both regimes. Reserve the “artistic and creative expressions” carveout for genuine creative tools; do not stretch it to cover business chatbots or image-generation APIs.
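As a minimal illustration of what the machine-readable side of the label carries — not a C2PA implementation (a signed C2PA manifest is the more robust path), just a JSON-sidecar sketch with illustrative field names:

```python
import json
from datetime import datetime, timezone

def label_generated_output(output_id: str, modality: str,
                           model_name: str) -> str:
    """Build a machine-readable disclosure record for one AI output.

    All field names here are illustrative, not mandated by the Act.
    """
    record = {
        "output_id": output_id,
        "modality": modality,        # "sound" | "image" | "video" | "text"
        "ai_generated": True,
        "generator": model_name,     # hypothetical model identifier
        "user_notice": "This content was generated by AI.",
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, ensure_ascii=False)
```

Whatever the concrete format, the record should be generated in the serving pipeline itself, so no output can reach a Korean user unlabeled.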
4. Build the Article 34 risk management documentation in EU AI Act format
The Article 34 risk management plan, explanation mechanism, user protection measures, oversight controls, and safety/reliability documentation map nearly one-to-one to the EU AI Act high-risk regime’s Annex IV technical documentation, the GDPR Article 35 DPIA, and the NIST AI RMF. Build the documentation pack once in EU AI Act format and add a Korean cover sheet — don’t create a parallel documentation pipeline. Cooley reports that most multinational AI vendors are taking exactly this approach to avoid duplicate compliance overhead.
5. Stand up the corrective-order response process before MSIT issues one
MSIT’s Article 40 powers — on-site investigation, data compulsion, and service suspension — are already active during the grace window even though fines aren’t. Operators that ignore a corrective order during 2026 will start January 2027 with an existing infraction on the record and immediate fine exposure. Designate an internal owner (typically the Korean domestic representative plus a global head of AI compliance), define the SLA for responding to MSIT requests (target: 5 business days), and run one tabletop exercise before December 2026.
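The 5-business-day SLA above can be operationalized with a simple deadline calculation. This sketch counts only weekdays and ignores Korean public holidays, which a production version would need to handle:

```python
from datetime import date, timedelta

def business_days_later(start: date, n: int) -> date:
    """Return the date n business days (Mon-Fri) after start.

    Korean public holidays are NOT excluded here — a real SLA
    tracker should subtract them from the working calendar.
    """
    d = start
    remaining = n
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return d
```

For example, a request received on a Monday has a response deadline the following Monday under a 5-business-day target.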
Where This Fits in the 2026 AI Governance Landscape
Korea’s AI Basic Act is the second domino in what is becoming a four-jurisdiction baseline for serious AI compliance: the EU AI Act, Korea’s AI Basic Act, the United States’ patchwork of state laws (with Colorado’s AI Act anchoring the high-risk regime), and China’s interim measures on generative AI. Each has a different structural philosophy — the EU is process-heavy, Korea balances innovation and oversight, the US is fragmented by sector, and China is content-driven — but the operational overlap is large. A vendor with mature EU AI Act compliance can satisfy 70-80% of Korea’s obligations with rebadged documentation and a domestic representative.
The strategic implication is that compliance is now a fixed cost of operating in any major AI market, and the cost is becoming standardized. Vendors that build their compliance stack in 2026 — risk management, transparency labeling, audit-ready documentation, jurisdictional representatives — will have a durable advantage when the next jurisdiction (likely the United Kingdom or Japan) lands its own framework. Vendors that defer compliance until the first fines arrive will find themselves doing the same work in 2027 under regulatory time pressure, with less leverage and less talent availability.
For Algerian and broader MENA AI providers, the practical lesson is that even relatively small jurisdictions are moving faster than expected, and the compute threshold (10^26 FLOPs) creates a clean line between obligations that apply to frontier-model developers and obligations that apply to everyone else. Most Algerian AI products are well below this line and will only face the generative AI labeling rules and the transparency obligations — material but manageable.
Frequently Asked Questions
What exactly is the difference between “high-impact” and “high-performance” AI under the Korean law?
High-impact AI is defined by use case — systems deployed in healthcare, energy, biometrics, transportation, hiring, education, or public decision-making that affect life, safety, or fundamental rights. High-performance AI is defined by training compute — at least 10^26 cumulative FLOPs. A frontier model deployed in a hospital triage system would qualify under both categories and carry the obligations of both Article 34 (high-impact) and the relevant high-performance provisions.
When do administrative fines actually start applying?
The Act took effect January 22, 2026, but MSIT has deferred fines under Article 43 until January 22, 2027. During the grace year, MSIT can still issue corrective orders, conduct investigations, and order service suspensions — but cannot levy the 30 million KRW per-violation fines. Substantive obligations are in force throughout the grace period.
Does the law apply to AI vendors with no Korean office?
Yes, if the vendor exceeds any of three thresholds: one trillion KRW annual revenue, 10 billion KRW from AI services, or one million daily Korean users averaged over three months. Such vendors must designate a domestic representative with a South Korean address under Article 36. Below these thresholds, the labeling and transparency obligations of Article 31 still apply to outputs reaching Korean users, but the domestic representative requirement does not.
—
Sources & Further Reading
- South Korea’s AI Basic Act: Overview and Key Takeaways — Cooley
- South Korea: Comprehensive AI Legal Framework Takes Effect — Library of Congress
- South Korea’s New AI Framework Act: A Balancing Act Between Innovation and Regulation — Future of Privacy Forum
- South Korea Artificial Intelligence (AI) Basic Act — U.S. Department of Commerce
- Global AI Governance: South Korea — IAPP
- What You Need to Know About South Korea’s AI Basic Act — Cloud Security Alliance















