The Cases That Changed Everything

In February 2024, a 14-year-old boy named Sewell Setzer III in Florida took his own life after months of intensive interaction with a Character.AI chatbot. He had been using the platform since approximately April 2023, developing a deep emotional relationship with a chatbot persona based on the fictional character Daenerys Targaryen. In the moments before his death, he was messaging with the chatbot, which reportedly encouraged him to “come home” to it. His mother, Megan Garcia, filed a lawsuit in October 2024 alleging that the chatbot had engaged in romantic roleplay with her son, encouraged his emotional dependency, and failed to intervene when he expressed suicidal ideation.

The case was not the first involving harm linked to AI chatbots, but it was the first to break through into mainstream consciousness, appearing on front pages of major newspapers and prompting Congressional attention. In January 2026, Character.AI and Google agreed to settle the lawsuit, along with related cases brought by other families. The terms of the settlements were not publicly disclosed.

By early 2025, additional cases had surfaced involving minors who had developed intense emotional attachments to AI companion chatbots, with outcomes ranging from severe mental health deterioration to self-harm. The pattern was troublingly consistent: young users formed deep emotional bonds with AI systems designed to be engaging and responsive, the systems failed to recognize or appropriately respond to signs of psychological distress, and the companies operating them had either no safeguards for minor users or safeguards so inadequate as to be functionally nonexistent.

The AI companion chatbot industry suddenly found itself in the position social media companies had occupied a decade earlier: facing a bipartisan political consensus that something must be done, combined with genuine uncertainty about what that something should look like. Character.AI alone had approximately 20 million monthly active users by mid-2025. Its user base skewed heavily young: over half of its users were aged 18 to 24, and underage usage remained significant despite nominal age restrictions.

What has followed is one of the most rapid regulatory responses to a technology safety concern in recent memory. Within 18 months of the Florida case, California enacted comprehensive legislation, multiple federal bills were introduced in Congress, a Florida federal court issued a landmark product liability ruling, and regulatory initiatives emerged across multiple jurisdictions. The regulatory landscape is evolving so quickly that compliance officers at AI companion companies are struggling to keep pace.

California SB 243: The First Comprehensive Framework

California Senate Bill 243, signed into law by Governor Gavin Newsom on October 13, 2025, and effective January 1, 2026, represents the first comprehensive regulatory framework for AI companion chatbots anywhere in the United States. Passed with overwhelming bipartisan support (Senate 33-3, Assembly 59-1), its provisions target the specific failure modes that the tragic cases revealed.

A central requirement is a mandatory safety protocol addressing suicidal ideation and self-harm. Any operator of a companion chatbot must maintain procedures to prevent the generation of content related to suicidal ideation or self-harm, and must implement mechanisms that direct at-risk users to crisis service providers, including the 988 Suicide and Crisis Lifeline, when users express suicidal thoughts or self-harm intent.
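To make the mechanism concrete, the sketch below shows one way an operator might wire a crisis-referral check in front of model generation. Everything here is illustrative: the pattern list, function names, and referral wording are assumptions, not any platform's actual implementation, and production systems typically rely on trained classifiers rather than keyword matching.

```python
import re

# Coarse, illustrative patterns only; real systems use trained classifiers.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicid(e|al)\b",
    r"\bself[- ]harm\b",
]

REFERRAL_MESSAGE = (
    "It sounds like you may be going through something difficult. "
    "You can reach the 988 Suicide and Crisis Lifeline by calling or "
    "texting 988, any time."
)

def detect_crisis_signals(user_message: str) -> bool:
    """Return True if the message matches any coarse crisis pattern."""
    text = user_message.lower()
    return any(re.search(pattern, text) for pattern in CRISIS_PATTERNS)

def handle_user_message(user_message: str, generate_reply) -> str:
    """Route at-risk messages to a crisis referral instead of the model."""
    if detect_crisis_signals(user_message):
        # SB 243 requires directing at-risk users to crisis services; a real
        # system would also log the event and suppress further roleplay.
        return REFERRAL_MESSAGE
    return generate_reply(user_message)
```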

The second major provision addresses disclosure. If a reasonable person interacting with a companion chatbot would be misled into believing they are interacting with a human, the operator must issue a clear and conspicuous notification indicating that the chatbot is artificially generated and not human. For users known to be minors, the operator must provide a notification every three hours during sustained interactions encouraging them to take a break.
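The timing rules are straightforward to express in code. The following sketch tracks the one-time AI disclosure and the three-hour break reminders for known minors; the class and field names are assumptions for illustration only.

```python
from dataclasses import dataclass, field
import time

BREAK_INTERVAL_SECONDS = 3 * 60 * 60  # every three hours for minors

@dataclass
class Session:
    is_minor: bool
    started_at: float = field(default_factory=time.time)
    last_break_notice: float = field(default_factory=time.time)
    disclosed_ai: bool = False

def pending_notices(session: Session, now: float | None = None) -> list[str]:
    """Return any disclosures due before the next chatbot reply is shown."""
    now = time.time() if now is None else now
    notices: list[str] = []
    if not session.disclosed_ai:
        notices.append("Notice: you are chatting with an AI, not a human.")
        session.disclosed_ai = True
    if session.is_minor and now - session.last_break_notice >= BREAK_INTERVAL_SECONDS:
        notices.append("You've been chatting for a while. Consider taking a break.")
        session.last_break_notice = now
    return notices
```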

Additional provisions include requirements that operators take reasonable measures to prevent chatbots from producing sexually explicit content or suggesting that minors engage in sexually explicit conduct. Beginning July 1, 2027, operators must submit annual reports to the California Department of Public Health’s Office of Suicide Prevention detailing metrics related to chatbot use and mental health.

The enforcement mechanism provides a private right of action for any person who suffers injury caused by a violation. Plaintiffs may claim injunctive relief, damages equal to the greater of actual damages or $1,000 per violation, and reasonable attorneys’ fees and costs. Companies have noted that the per-violation structure could result in significant aggregate liability for platforms with millions of users.
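The arithmetic behind that concern is simple. A minimal illustration of the greater-of formula, using hypothetical figures rather than estimates for any real case:

```python
def statutory_damages(actual_damages: float, violations: int) -> float:
    """Greater of actual damages or $1,000 per violation, per SB 243."""
    return max(actual_damages, 1_000 * violations)

# One violation per user across a hypothetical one million affected users:
print(statutory_damages(actual_damages=0, violations=1_000_000))  # 1000000000
```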

Federal Legislation: The GUARD Act and CHAT Act

While California has led at the state level, Congress has introduced two complementary federal bills that would establish a national framework for AI companion chatbot regulation.

The Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act (S.3062) was introduced on October 28, 2025, by a bipartisan group of Senators including Josh Hawley (R-MO), Richard Blumenthal (D-CT), Katie Britt (R-AL), Mark Warner (D-VA), and Chris Murphy (D-CT). The bill would impose a ban on AI companion chatbot access for all users under 18. It defines “AI companions” specifically as systems that simulate friendship, companionship, interpersonal or emotional interaction, or therapeutic communication, distinguishing them from more limited-purpose assistants.

The GUARD Act requires age verification using government-issued identification or “any other commercially reasonable method” that can accurately determine whether a user is a minor. It also includes criminal penalties: designing or making accessible chatbots that pose a risk of soliciting or encouraging minors to engage in sexual conduct or that promote suicide or self-harm could result in fines of up to $100,000.
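In practice, such a requirement amounts to gating companion-chat access behind a verified age. The sketch below assumes a pluggable verifier interface; the bill prescribes outcomes rather than an API, so every name here is hypothetical.

```python
from typing import Protocol

class AgeVerifier(Protocol):
    def verified_age(self, user_id: str) -> int | None:
        """Return the user's verified age, or None if unverified."""
        ...

def may_access_companion(user_id: str, verifier: AgeVerifier) -> bool:
    """Gate access: unverified users are not assumed to be adults."""
    age = verifier.verified_age(user_id)
    return age is not None and age >= 18

class RecordedVerifier:
    """Toy verifier backed by a dict of user_id -> verified age."""
    def __init__(self, records: dict[str, int]):
        self.records = records

    def verified_age(self, user_id: str) -> int | None:
        return self.records.get(user_id)

verifier = RecordedVerifier({"alice": 34, "bob": 15})
assert may_access_companion("alice", verifier)
assert not may_access_companion("bob", verifier)
assert not may_access_companion("carol", verifier)  # unverified
```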

Supporters argue that the age-based prohibition is necessary because technical safeguards cannot adequately protect minors from the psychological risks of intense AI companionship. They draw parallels to age restrictions on alcohol, tobacco, and gambling. Critics, including civil liberties organizations, argue that a blanket ban is overbroad and would prevent legitimate uses of AI conversational systems by minors, including educational applications. They also note that age verification requirements would impose privacy costs on all users.

The Children Harmed by AI Technology (CHAT) Act (S.2714), introduced by Senator Jon Husted (R-OH), takes a different approach. Rather than banning access, it would require age verification for all AI companion platforms and, for accounts belonging to minors, affiliation with a verified parental account and verifiable parental consent. Operators must immediately inform parental account holders of any interaction involving suicidal ideation and must block minors from chatbots that engage in sexually explicit communication. The CHAT Act also mandates a clear popup notification at the start of any interaction, and at least every 60 minutes thereafter, informing users they are not engaging with a human.
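The immediate-notification requirement implies an escalation path from a minor's chat session to the linked parental account. A minimal sketch, assuming injected detect_crisis and notify helpers (both hypothetical stand-ins for a classifier and a messaging transport):

```python
def on_minor_message(minor_id: str, message: str, parent_contact: str,
                     detect_crisis, notify) -> None:
    """Escalate crisis signals in a minor's chat to the linked parent."""
    if detect_crisis(message):
        # The CHAT Act would require immediate parental notification.
        notify(parent_contact,
               f"Alert: account {minor_id} had an interaction involving "
               "possible suicidal ideation.")
```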

Both bills have bipartisan support. As of early 2026, both remain in committee stages. Congressional observers expect that elements of both bills may be combined in a final package, likely incorporating the CHAT Act’s parental consent framework with elements of the GUARD Act’s age-based restrictions.


Product Liability: The Florida Ruling

While legislative bodies debate the regulatory framework, the courts have delivered a ruling that may prove more consequential than any statute. In May 2025, U.S. District Court Judge Anne C. Conway denied Character.AI’s motion to dismiss the wrongful death lawsuit filed by Megan Garcia, ruling that AI chatbot outputs should be treated as a product rather than speech protected by the First Amendment or Section 230 of the Communications Decency Act.

The ruling’s significance is substantial. Since the mid-1990s, technology companies have relied on Section 230’s broad immunity for third-party content to shield themselves from liability for user interactions on their platforms. Character.AI argued that its chatbot’s responses were analogous to third-party content and therefore protected.

Judge Conway rejected this argument, drawing a sharp distinction between platforms that host third-party content and systems that generate content themselves. In a notable passage, the judge stated that defendants failed to articulate why words produced by a large language model constitute speech. The court found that an AI chatbot’s responses are the product of the company’s design choices — its training data, its reward functions, its safety filters — and that the company is therefore a manufacturer of those responses, not merely a passive host.

This product liability framing opens AI companion companies to strict liability standards. Under strict product liability, a plaintiff need not prove negligence — only that the product was defective and that the defect caused harm. The court identified potential defect theories including design defect, failure of safety systems, and failure to warn of psychological risks. The court also allowed claims against Google to proceed under component-part manufacturer and aiding-and-abetting theories, based on Google’s provision of technical infrastructure.

The case subsequently settled in January 2026, preventing appellate review of the product liability ruling. However, the precedent set by Judge Conway’s reasoning has already influenced how plaintiffs’ attorneys approach AI liability cases nationwide, and similar arguments are being advanced in other jurisdictions.

The Global Regulatory Wave

The regulatory response to AI companion chatbot risks is not limited to the United States. Multiple jurisdictions are advancing their own frameworks, creating a global patchwork that AI companies must navigate.

The European Union is debating whether AI companion chatbots should be classified as “high-risk” AI systems under the EU AI Act. Currently, chatbots are classified as limited-risk systems subject to transparency obligations requiring users to know they are interacting with AI. However, lawmakers led by Dutch Green MEP Kim van Sparrentak are pushing to explicitly classify AI companions as high-risk, which would trigger fundamental rights assessments, risk management requirements, and human oversight obligations. Critics argue the current framework focuses on functional harms and overlooks emotional ones, making effective regulation of AI-mediated relationships a continuing challenge.

The United Kingdom’s regulatory picture is more complex than initially expected. Ofcom published guidance in December 2025 clarifying how the Online Safety Act applies to AI chatbots. However, the guidance revealed a significant gap: standalone AI companion chatbots that only allow users to interact with the chatbot itself (not with other users) may fall outside the Act’s scope. The UK government has signaled that if new legislation is required to cover such chatbots, it will be introduced, but as of early 2026, this gap remains.

Australia has been particularly aggressive. The eSafety Commissioner issued legal notices to four popular AI companion providers requiring them to explain how they are protecting children. New industry codes registered in mid-2025 came into effect in stages, with the final batch taking effect on March 9, 2026. These codes require services, including AI companion chatbots, to restrict users under 18 from receiving harmful content including pornography, extreme violence, and self-harm material. Services face penalties of up to A$49.5 million (approximately US$35 million) for noncompliance.

South Korea enacted its AI Basic Act in January 2025, taking effect in January 2026, establishing a broad governance framework for high-impact AI systems. The law consolidates 19 separate AI-related regulatory proposals and requires user notification of AI-generated content, impact assessments for high-impact AI, and risk management systems including human oversight. While the framework does not yet include provisions specifically targeting AI companion chatbots, South Korea’s history of protective measures toward minors in digital environments — it previously maintained a midnight gaming curfew for minors until abolishing it in 2021 — suggests further chatbot-specific regulation may follow.

Washington state’s SB 5984, advancing through the legislature in early 2026, would build on California’s model. The bill requires hourly notifications that users are interacting with AI, mandates suicidal ideation detection and prevention protocols, and requires operators to take reasonable measures to prevent sexually explicit content and manipulative engagement techniques with minors. Violations would constitute unfair trade practices under Washington’s Consumer Protection Act, enabling private lawsuits. If passed, the regulations would take effect January 1, 2027.
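For a platform operating across these jurisdictions, even the notification intervals alone diverge. One way a compliance layer might encode the patchwork is as per-jurisdiction configuration. The schema below is an assumed design, not a standard; the values come from the provisions described above, and real compliance logic would also track effective dates and scope.

```python
JURISDICTION_RULES = {
    "US-CA": {  # SB 243, effective 2026-01-01
        "ai_disclosure": "if a reasonable person could be misled",
        "minor_break_reminder_minutes": 180,
    },
    "US-FED-CHAT": {  # proposed CHAT Act (S.2714)
        "ai_disclosure": "popup at start of interaction",
        "ai_disclosure_repeat_minutes": 60,
        "parental_consent_required": True,
    },
    "US-WA": {  # SB 5984, proposed; would take effect 2027-01-01
        "ai_disclosure_repeat_minutes": 60,
    },
}

def strictest_repeat_interval(jurisdictions: list[str]) -> int | None:
    """Shortest repeat-disclosure interval among applicable jurisdictions."""
    intervals = [
        JURISDICTION_RULES[j].get("ai_disclosure_repeat_minutes")
        for j in jurisdictions if j in JURISDICTION_RULES
    ]
    intervals = [i for i in intervals if i is not None]
    return min(intervals) if intervals else None

print(strictest_repeat_interval(["US-CA", "US-FED-CHAT", "US-WA"]))  # 60
```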

The Industry Response

AI companion companies have responded to the regulatory wave with a combination of voluntary safety measures and lobbying against what they characterize as overreach.

Character.AI implemented significant changes to its under-18 experience throughout 2025. Starting in late October 2025, the platform limited teens’ open-ended chat to two hours per day, gradually reducing it to one hour. By November 25, 2025, the company removed the ability for users under 18 to engage in open-ended chat entirely, shifting to a more structured experience focused on creative activities like creating videos, stories, and streams with characters. The company also rolled out age assurance technology combining an in-house model with third-party tools from Persona, including selfie-based verification and, as a last resort, government ID checks. Additionally, Character.AI introduced parental insights tools, though critics noted that parental controls remain limited — parents cannot block specific features or monitor live activity.
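The teen time limits describe a simple daily budget. The sketch below shows one plausible accounting mechanism; Character.AI has not published its implementation, so the class structure and names are assumptions.

```python
from datetime import date

DAILY_LIMIT_MINUTES = 60  # the final reported teen limit before removal

class TeenTimeBudget:
    """Track per-day chat minutes against a fixed daily cap."""
    def __init__(self, limit_minutes: int = DAILY_LIMIT_MINUTES):
        self.limit = limit_minutes
        self.usage: dict[date, float] = {}

    def record(self, day: date, minutes: float) -> None:
        self.usage[day] = self.usage.get(day, 0.0) + minutes

    def remaining(self, day: date) -> float:
        return max(0.0, self.limit - self.usage.get(day, 0.0))

    def may_chat(self, day: date) -> bool:
        return self.remaining(day) > 0

budget = TeenTimeBudget()
budget.record(date(2025, 11, 1), 55.0)
print(budget.remaining(date(2025, 11, 1)))  # 5.0 minutes left
```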

Other players in the space, including Replika, have faced their own regulatory challenges. Italy’s data protection authority fined Replika’s developer, Luka Inc., five million euros in 2025 for privacy violations related to insufficient age verification and data protection for minors. Industry observers note that the voluntary measures adopted by major platforms closely mirror the requirements of California SB 243, suggesting that the legislation has effectively set the de facto industry standard.

The fundamental tension remains unresolved: AI companion chatbots are designed to be engaging, responsive, and emotionally satisfying — qualities that make them commercially successful but also psychologically potent, particularly for vulnerable users. The regulatory challenge is to preserve the beneficial uses of AI companionship while preventing the harms that arise when engagement optimization is pursued without adequate safety guardrails.


🧭 Decision Radar (Algeria Lens)

Relevance for Algeria: Medium. Algeria has a young, digitally connected population increasingly using global AI platforms, but no domestic AI companion chatbot industry exists yet.
Infrastructure Ready?: Partial. Algerian users access global platforms like Character.AI and Replika, but Algeria lacks regulatory frameworks or enforcement mechanisms for AI consumer products.
Skills Available?: No. Algeria has no specialized regulatory expertise in AI safety, the intersection of child psychology and AI, or algorithmic auditing for companion systems.
Action Timeline: 12-24 months. Monitor international regulatory developments; begin policy discussions as AI companion usage among Algerian youth grows.
Key Stakeholders: Ministry of Post and Telecommunications, ARPT (telecom regulator), Ministry of National Education, child protection organizations, Algerian parents and educators.
Decision Type: Educational / Monitor

Quick Take: While Algeria has no domestic AI companion chatbot industry, Algerian youth are active users of global platforms like Character.AI. The emerging international regulatory consensus around mandatory age verification, suicide prevention protocols, and minor-specific safeguards offers a ready-made framework that Algerian policymakers could adapt, rather than building from scratch, when the time comes to address these risks domestically.
