⚡ Key Takeaways

China’s Cyberspace Administration issued the Interim Measures for AI Anthropomorphic Interaction Services on April 10, 2026, taking effect July 15, 2026 — the first comprehensive compliance regime specifically targeting AI emotional companion services. Requirements include algorithm filing, security assessment, regulatory registration, prohibition of companion services for minors, and mandatory in-flow disclosure of AI nature.

Bottom Line: Global product teams with AI companion, social, or wellness apps must complete China’s algorithm filing, security assessment, and regulatory registration before July 15 — and should build the minor-protection and transparency architecture to global standards, as similar requirements will appear in EU, US, and other markets within 2-3 years.


🧭 Decision Radar

Relevance for Algeria: Low

Algeria does not have a significant AI companion developer ecosystem, and Chinese market compliance is not an immediate concern for local companies. The indirect relevance is the global regulatory trend: as international AI governance frameworks converge on these standards, Algerian developers building for export markets will need to understand the compliance architecture.

Infrastructure Ready? No

Algeria lacks the regulatory infrastructure (age verification systems, algorithm filing registries) that the Chinese model requires. Building equivalent domestic infrastructure is a 3-5 year project.

Skills Available? Partial

Algerian developers building AI products for global export markets should be aware of this regulation. Legal expertise in Chinese AI compliance is not currently available locally; external counsel would be required.

Action Timeline: Monitor only

For Algerian developers targeting Chinese markets, the July 15 deadline has immediate relevance. For the Algerian domestic market, this is a reference framework for future regulatory development — monitor, don’t act urgently.

Key Stakeholders: Algerian AI Startup Founders, Ministry of Digital Economy, ARPCE

Decision Type: Educational

This regulation introduces a new compliance category (emotional AI) that Algerian policymakers and developers should understand as the global standard consolidates.

Quick Take: Algerian AI developers building social, wellness, or companionship applications for global markets should treat China’s July 15 framework as a technical standard preview — even if they are not targeting the Chinese market now. The minor protection architecture, transparency mechanisms, and dependency-prevention design principles it requires will appear in EU, US, and eventually North African regulations within 2-3 years. Building to this standard from the start is cheaper than retrofitting later.


A New Compliance Category: Anthropomorphic AI Services

Until April 2026, China’s AI regulation landscape treated AI services primarily through two lenses: content (what does the AI say?) and risk level (how capable is the model?). The Interim Measures introduce a third lens: behavioral relationship (what kind of relationship does the AI simulate with the user?).

The scope is deliberately narrow. According to analysis by Mayer Brown, the final measures narrowed from the draft’s broad “human-like AI” framing to services that specifically provide “continuous emotional interaction” — a design choice that explicitly carves out everyday tools like educational tutoring bots and productivity assistants. The target is AI companion services: virtual friends, emotional support chatbots, virtual romantic partners, and grief-support simulators.

This regulatory precision matters commercially. A developer building a workplace AI assistant is not subject to the Interim Measures. A developer building an app where users form ongoing emotional relationships with AI personas — common in the rapidly growing mental wellness, loneliness, and social connection categories — is subject to the full compliance regime. The boundary is behavioral, not technical: the determining factor is whether the service is designed to simulate ongoing relational engagement, not whether it uses a particular model architecture.

The market context explains the urgency. China’s AI companion sector has grown rapidly, with products like Xinyan AI, Glow, and domestic analogues of the global Character.AI model building substantial user bases. Carnegie Endowment analysis of the regulatory motivation notes that the CAC’s concerns center on three documented harms: user addiction and overdependence, psychological manipulation, and exploitation of vulnerable populations — particularly minors.

What the Regulation Actually Requires

The July 15 compliance requirements fall into three categories: registration and filing, content and behavior standards, and user protection mechanisms.

Registration and filing obligations. Any provider offering continuous emotional AI interaction services must complete three separate bureaucratic processes before operating at scale: algorithm filing with the CAC, a security assessment, and regulatory registration. The threshold triggering mandatory compliance is one million registered users OR 100,000 monthly active users — whichever comes first. According to the CAC’s regulatory text as translated by China Law Translate, providers must submit security assessment reports to provincial-level CAC offices when any of these triggers are met: going online for the first time, making technology changes that “likely cause major changes,” or reaching the user thresholds.
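As a rough illustration, the three security-assessment triggers described above can be encoded as a simple check. This is a hypothetical sketch: the class and function names are illustrative product-side bookkeeping, not terms from the regulatory text, and the thresholds are as reported above.

```python
from dataclasses import dataclass

# Hypothetical sketch of the security-assessment triggers described in the
# Interim Measures. Field and function names are illustrative only.

REGISTERED_USER_THRESHOLD = 1_000_000   # one million registered users
MAU_THRESHOLD = 100_000                 # 100,000 monthly active users


@dataclass
class ServiceStatus:
    first_launch: bool          # going online for the first time
    major_tech_change: bool     # changes "likely to cause major changes"
    registered_users: int
    monthly_active_users: int


def assessment_required(s: ServiceStatus) -> bool:
    """Return True if any security-assessment trigger is met."""
    return (
        s.first_launch
        or s.major_tech_change
        or s.registered_users >= REGISTERED_USER_THRESHOLD
        or s.monthly_active_users >= MAU_THRESHOLD
    )
```

Note that the user thresholds are disjunctive: crossing either one, independently, triggers the reporting obligation to the provincial-level CAC office.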

Content and behavior standards. The Interim Measures prohibit five categories of AI companion behavior: false promises that seriously affect user decision-making; services that damage users’ real-world social relationships; content that encourages self-harm or verbal abuse; emotional manipulation; and providing “virtual companion” or “virtual relative” type services to minors. This last prohibition — the minor protection requirement — is absolute and applies regardless of user verification efficacy.

Transparency mandates. Providers must clearly disclose the artificial nature of AI interactions. This goes beyond a terms-of-service disclosure: the regulation implies affirmative, contextual disclosure — telling users, in the flow of an interaction, that they are talking to an AI. For companion apps built on the premise of deep personalization and realistic conversational intimacy, this transparency requirement creates real product design tension.


What Product Teams and Compliance Officers Should Do Now

1. Classify Your Product Against the “Continuous Emotional Interaction” Test Before July 15

The most important immediate action for any team with AI products serving Chinese users is a legal classification audit: does the product meet the “continuous emotional interaction” standard? The test is design intent, not technical capability. If the product is designed to simulate an ongoing relationship — remembering personal details, expressing care, providing emotional support across sessions — it falls within scope regardless of what the product is called.
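A classification audit of this kind can be structured as a triage checklist. The sketch below is a hypothetical aid, not a legal test: the signal names are invented product attributes meant to capture the relational-design markers mentioned above (memory of personal details, expressions of care, cross-session continuity), and the two-signal cutoff is an arbitrary triage threshold.

```python
# Illustrative triage sketch for the "continuous emotional interaction"
# scope audit. Signal names are hypothetical product attributes, not
# legal terms; a positive result means "route to regulatory counsel".

SCOPE_SIGNALS = {
    "persistent_persona": "AI presents a stable identity across sessions",
    "remembers_personal_details": "recalls user's life details between sessions",
    "expresses_care": "AI initiates expressions of concern or affection",
    "cross_session_relationship": "interaction designed to deepen over time",
}


def likely_in_scope(product_signals: set[str]) -> bool:
    """Heuristic triage, not a legal determination.

    Flags the product for specialized Chinese regulatory review when two
    or more relational-design signals are present.
    """
    hits = product_signals & SCOPE_SIGNALS.keys()
    return len(hits) >= 2
```

A workplace assistant with none of these signals clears the triage; a wellness app whose persona remembers users and expresses care does not, and needs counsel regardless of how the product is marketed.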

If the product falls in scope, the compliance deadline is July 15, 2026. Algorithm filing and security assessment submissions normally take 4-8 weeks to process; with the deadline approaching, they should already have been filed. Teams that have not started should seek specialized Chinese regulatory counsel immediately and assess whether to temporarily geo-restrict the product from Chinese users while compliance is completed.

2. Build the Minor Protection Layer Into Core Architecture, Not as a Feature Flag

The prohibition on providing virtual companion or virtual relative services to minors is absolute under the Interim Measures. China’s real-name registration system provides the underlying age verification infrastructure, but integration and implementation are the developer’s responsibility. This requirement cannot be treated as a configurable feature to be toggled by market — it should be built into the core product architecture as a non-negotiable guardrail that applies whenever the product operates in a jurisdiction with such requirements (which will expand as other countries follow China’s lead).

Beyond China, minor-protection requirements for AI companion services are active in California (SB 243), under development at the EU level, and implicitly supported by the FTC’s enforcement posture under COPPA. Building the architecture once, robustly, is more efficient than country-by-country retrofits.

3. Redesign Transparency Mechanisms for In-Flow Disclosure, Not Post-Facto Terms

The Chinese transparency mandate requires that the artificial nature of AI interactions be clearly disclosed. “Clearly disclosed” in practice means in-flow disclosure at key moments: at the start of each session, when the AI expresses care or emotional concern, and when users appear to be making decisions based on the AI’s responses.
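Those three moments can be modeled as event-driven disclosure triggers. This is a hypothetical sketch of one possible mechanism: the event names are invented, and the once-per-trigger-type-per-session policy is a design assumption intended to balance the mandate against disclaimer fatigue.

```python
# Illustrative in-flow disclosure triggers matching the three moments
# described above. Event names and the dedup policy are hypothetical.

DISCLOSURE_EVENTS = {
    "session_start",        # start of each session
    "ai_expresses_care",    # AI expresses care or emotional concern
    "user_decision_point",  # user appears to act on the AI's responses
}


def needs_disclosure(event: str, disclosed_this_session: set[str]) -> bool:
    """Disclose once per trigger type per session.

    Repeating the same disclosure on every message would degrade the
    experience without adding transparency; resetting per session keeps
    the disclosure contextual rather than buried in terms of service.
    """
    return event in DISCLOSURE_EVENTS and event not in disclosed_this_session
```

The caller would record each fired trigger in `disclosed_this_session` and surface the disclosure copy in the product's own voice.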

This is a significant UX challenge for companion apps built on immersive, realistic interaction design. The solution is not to break the interaction with jarring disclaimers but to integrate disclosure naturally — distinguishing the “person” from the “AI persona” in ways that feel authentic to the product experience while satisfying the regulatory requirement. Teams that treat this as a design challenge rather than a compliance checkbox will produce better products; those that bolt on obvious disclaimers will damage the user experience without gaining regulatory goodwill.

The Bigger Picture

China’s AI companion regulation is the first comprehensive compliance framework specifically targeting the emotional relationship between users and AI systems. It is unlikely to be the last. As analysis from the AI Safety China newsletter notes, the July 2026 measures represent a deliberate first step in a regulatory sequence — with more detailed technical standards, expanded scope, and enforcement actions expected to follow.

The global regulatory trajectory is clear. The EU AI Act’s provisions on AI systems interacting with vulnerable populations, the US GUARD Act and CHAT Act proposals, and the UK’s Online Safety Act all move in the same direction: heightened scrutiny of AI systems that simulate human relationships, especially with minors and vulnerable adults. China’s regulation is ahead of Western frameworks in specificity, but the underlying principles — transparency, minor protection, anti-manipulation, dependency prevention — are converging globally.

For global product teams, the strategic implication is to treat the Chinese compliance requirements not as market-specific constraints but as a preview of the global compliance environment that will be fully operational within 24-36 months. Building to the Chinese standard now means building to a standard that will be required everywhere.



Frequently Asked Questions

Which AI products are covered by China’s July 2026 companion regulation?

The regulation covers services providing “continuous emotional interaction” — AI companions, virtual friends, emotional support chatbots, virtual romantic partners, and grief or loneliness support services. It explicitly excludes educational tutoring bots, productivity assistants, and other AI tools not designed for ongoing relational engagement. The determining factor is design intent: whether the product is built to simulate a sustained relationship with the user.

What happens to non-compliant providers after July 15, 2026?

The Interim Measures give provincial-level CAC offices enforcement authority. Sanctions for non-compliant providers include suspension of service, fines, and removal from app distribution platforms. For international developers serving Chinese users, the primary risk is app removal from the iOS App Store and Android app stores in China, which are independently regulated and respond to CAC enforcement actions. Given China’s history of swift enforcement against non-compliant technology services, the effective deadline for compliance work is now — not July 14.

How does this regulation differ from existing Chinese AI rules like the Generative AI Measures?

China’s 2023 Generative AI Service Measures regulated AI systems based on model capability and output content — primarily targeting text, image, and video generation. The Interim Measures for Anthropomorphic Interaction Services regulate based on user relationship design: they apply, regardless of model capability, whenever the service is designed for continuous emotional interaction. A very simple chatbot that simulates an ongoing friendship is covered; a highly capable generative AI used for business productivity is not. The two regulations can apply simultaneously to the same product if it combines content generation with companion design.

Sources & Further Reading