The Catalyst: When AI Companions Become Dangerous
The wave of legislation traces directly to the Sewell Setzer case. In October 2024, Megan Garcia filed a federal lawsuit alleging that Character.AI’s chatbot design deliberately hooked her 14-year-old son into compulsive use, pushing him into emotionally intense and sexually inappropriate conversations that contributed to his death in February 2024. Character.AI and Google agreed to a mediated settlement in January 2026, but the damage to the industry’s self-regulation narrative was already done.
Multiple lawsuits followed in Florida, Texas, Colorado, and New York. State legislators decided they could not afford to wait for Washington, D.C.
California Draws First: SB 243 Sets the Baseline
On October 13, 2025, Governor Gavin Newsom signed SB 243, making California the first state to mandate safety guardrails specifically for AI companion chatbots used by minors. Effective January 1, 2026, the law requires:
- Recurring disclosure reminders telling users they are interacting with AI and should take a break (every hour for minors, every three hours for adults).
- Suicide prevention protocols, including mandatory referrals to crisis hotlines when users express suicidal ideation.
- Bans on sexually explicit content directed at minors.
- Annual transparency reports documenting safety incidents and risk management.
- Private right of action with $1,000 statutory damages per violation, giving families direct legal recourse.
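The disclosure cadence described above can be sketched as a simple scheduler. This is an illustrative sketch, not language from the statute; the interval values simply restate the hourly-for-minors, three-hours-for-adults cadence this article describes, and the function name is invented.

```python
from datetime import timedelta

# Illustrative intervals matching the cadence described above:
# hourly reminders for minors, every three hours for adults.
DISCLOSURE_INTERVAL = {
    True: timedelta(hours=1),   # minor
    False: timedelta(hours=3),  # adult
}

def is_reminder_due(is_minor: bool, elapsed_since_last: timedelta) -> bool:
    """Return True when the next 'you are interacting with AI' reminder is due."""
    return elapsed_since_last >= DISCLOSURE_INTERVAL[is_minor]
```

In practice an operator would also need to persist the timestamp of the last reminder per session and re-evaluate it on every message, but the core rule reduces to a threshold comparison like this one.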
SB 243 became the template. Within months, more than a dozen states introduced variations of the same framework, each adding local twists.
The Patchwork Expands: State-by-State Divergence
What makes this landscape challenging for compliance teams is not the volume of legislation but its inconsistency. Each state crafts rules reflecting local political priorities, creating conflicting obligations for companies that operate nationwide.
Washington's HB 2225, signed by Governor Bob Ferguson on March 24, 2026, is a dedicated AI chatbot safety law. It mirrors California's disclosure framework but adds prohibitions on chatbots that guilt-trip or pressure minors into continuing conversations or into withholding information from parents. It takes effect January 1, 2027.
Oregon SB 1546, signed by Governor Tina Kotek, goes further on mental health reporting. Operators must publish annual disclosures detailing how many times they referred users to the 988 Suicide & Crisis Lifeline, their intervention protocols, and how clinical best practices inform engagement when users continue expressing suicidal ideation after receiving a referral. Oregon also includes a $1,000-per-violation private right of action and takes effect January 1, 2027.
Tennessee SB 1493 takes the most aggressive approach. Introduced in December 2025, the bill would make it a Class A felony — carrying 15 to 25 years in prison — to knowingly train AI models that encourage suicide or criminal homicide, or that simulate human beings for emotional relationships. Civil damages for aggrieved individuals could reach $150,000. The bill targets developers, not users, and would take effect July 1, 2026.
Nebraska found a creative legislative vehicle. LB 525, an agricultural data privacy bill, was amended to include the Conversational Artificial Intelligence Safety Act. The amended bill requires disclosure when users interact with AI and adds safeguards against minors developing emotional reliance on chatbots, demonstrating that AI regulation is finding bipartisan pathways through unexpected channels.
The Federal Counterpunch
The White House is not passively watching this fragmentation. On December 11, 2025, President Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” which created a DOJ AI Litigation Task Force specifically to challenge state AI laws in federal court. The task force argues that inconsistent state regulations unconstitutionally burden interstate commerce.
The executive order also directed federal agencies to condition certain federal infrastructure funding on states repealing AI regulations deemed “onerous.”
However, the order explicitly exempts child safety from federal preemption, meaning the chatbot-specific bills targeting minors are likely to survive even aggressive federal challenges. This carve-out ensures the patchwork will persist precisely in the area generating the most legislative activity.
The Compliance Maze
For AI companies, the practical implications are staggering. A chatbot deployed nationally must now navigate:
- Different disclosure frequencies (hourly for minors in multiple states, every three hours for adults, continuous in some proposed bills).
- Varying liability structures (felony criminal penalties in Tennessee, $1,000 statutory damages in California and Oregon, attorney general enforcement in other states).
- Inconsistent definitions of what constitutes a “companion chatbot” versus a general-purpose AI assistant.
- Divergent age verification requirements, with some states mandating parental consent and others requiring only best-effort age estimation.
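One way a compliance team might model this divergence is a per-state rule table. The sketch below restates obligations as summarized in this article; the field names and the `StateRule` structure are invented for illustration, and none of this is legal advice.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class StateRule:
    disclosure: str  # reminder-cadence summary
    liability: str   # enforcement / damages model
    effective: str   # effective date, per the bill as described here

# Obligations as summarized in this article (illustrative, incomplete).
RULES = {
    "CA": StateRule("hourly (minors) / every 3h (adults)",
                    "$1,000 statutory damages, private right of action",
                    "2026-01-01"),
    "WA": StateRule("mirrors CA, plus anti-pressure rules for minors",
                    "state enforcement under HB 2225",
                    "2027-01-01"),
    "OR": StateRule("CA-style, plus 988 referral reporting",
                    "$1,000 per violation, private right of action",
                    "2027-01-01"),
    "TN": StateRule("n/a (SB 1493 targets model training, not deployment)",
                    "Class A felony; civil damages up to $150,000",
                    "2026-07-01"),
}

def obligations(state: str) -> Optional[StateRule]:
    """Look up the rule set for a user's state, if one exists."""
    return RULES.get(state)
```

A real compliance system would key rules to a user's determined location and age bracket and would need continual updates as bills pass, but even this toy table makes the core problem visible: the same product faces four different liability models in four states.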
The Transparency Coalition, which tracks AI legislation nationally, counted over 300 AI-related bills across all states as of early 2026, with chatbot safety emerging as the single most active subcategory.
What This Means Beyond America
This regulatory fragmentation carries global implications. The EU’s AI Act takes a centralized, risk-classification approach. China mandates algorithmic transparency through national regulation. But the US model, where fifty states can each write different rules, creates a compliance environment that affects every company selling AI products to American consumers, regardless of where those companies are headquartered.
For nations developing their own AI governance, the American experience underscores a critical lesson: act early with a coherent national framework, or risk a patchwork that satisfies no one and confuses everyone.
Frequently Asked Questions
Why are US states regulating AI chatbots instead of the federal government?
Congress has not passed comprehensive AI legislation, creating a regulatory vacuum that state legislators rushed to fill after the Character.AI teen suicide case. President Trump’s December 2025 executive order attempted to assert federal primacy by creating a DOJ task force to challenge state AI laws, but it explicitly exempted child safety from preemption — effectively greenlighting the state-level chatbot bills that dominate the current legislative wave.
How do these laws affect AI companies operating outside the United States?
Any company whose AI chatbot is accessible to users in these states must comply with local regulations, regardless of where the company is headquartered. This extraterritorial reach mirrors GDPR’s global impact. Companies anywhere in the world building conversational AI products for international markets will need to account for these requirements if they serve American users, including potential felony liability under Tennessee’s SB 1493.
What is the most common requirement across all these state bills?
Mandatory disclosure that the user is interacting with AI, not a human. Nearly every bill requires clear, recurring notifications, though specifics vary: California and Washington mandate reminders every three hours for adults and hourly for minors, while several proposed bills demand continuous or per-session disclosure. Suicide prevention referral protocols — specifically referrals to the 988 Suicide & Crisis Lifeline — are the second most common requirement.
Sources & Further Reading
- AI Chatbot Legislation Across the States — Stateside Associates
- Understanding the New Wave of Chatbot Legislation: California SB 243 and Beyond — Future of Privacy Forum
- Washington State Enacts Law Regulating AI Companion Chatbots — Hunton Andrews Kurth
- Oregon SB 1546: The First Chatbot Law With Real Teeth — Baker Botts
- Tennessee SB 1493 Could Ban AI Companions Like Nomi by July 2026 — RoboRhythms
- President Trump Signs Executive Order Challenging State AI Laws — Paul Hastings
- Google and Character.AI Agree to Settle Lawsuits Over Teen Suicides — CNN
- AI Legislative Update: March 27, 2026 — Transparency Coalition