California’s First Real Regulations for AI Companions Are Almost Here


What just happened

California’s lower chamber greenlit first-of-its-kind regulations for AI companion chatbots, moving forward a bill that could become the baseline for safety nationwide. If signed and enacted on schedule, operators of companion chatbots will have to do the obvious things that somehow weren’t obvious: clearly disclose “not a human,” route users to help when they flag self-harm or distress, and prevent inappropriate content from reaching minors. The timeline points to January 1, 2026.

Why this matters for builders (and investors)

This is not a press-release regulation. It attaches legal accountability if your AI companion fails basic safety checks. In practice, that means product, policy, and infra teams need age-aware design, identity disclosure in the UI, reliable crisis-escalation plumbing, and auditable processes. Lawmakers have also emphasized sharing aggregate data on referrals to crisis services—so expect reporting expectations to harden, either in rulemaking or in copycat bills.

The parts that got cut—and why that’s telling

Early drafts tried to clamp down on “variable reward” mechanics—the engagement hooks used by some companion apps to keep users chatting for one more dopamine hit. But those provisions were stripped, along with a requirement to track how often the chatbots themselves initiated suicide-related dialogue. The political read: lawmakers narrowed the bill to what’s technically feasible now and defensible in court, without wading into design micromanagement that’s hard to enforce.

A conservative take: protect kids, price the risk, don’t smother innovation

This version threads a practical needle. It targets specific, high-cost harms (minor exposure and crisis moments) and makes companies carry the cost of failure. That’s consumer protection with a balance sheet, not a ban. If you operate responsibly, your costs are predictable. If you cut corners, the downside becomes legible. That’s how a market disciplines itself without a bureaucracy writing UX specs.

The wider battlefield: transparency vs. light‑touch

The companion bill on deck, SB 53, pushes broad reporting requirements. Major labs, including Meta, Google, and Amazon, oppose it; Anthropic backs it. Meanwhile, Silicon Valley money is flowing into pro‑AI PACs to keep regulation “light.” Translation: the industry can live with targeted safety guardrails but is fighting anything that forces sunlight at scale. If SB 243 becomes law, expect other states to copy it—and to revisit broader transparency once the plumbing exists.

What compliance will actually require

• Clear identity disclosure that survives screenshots and third‑party skins.
• Age gating and content filters that are adaptive, not checkbox.
• Crisis detection that errs on the side of routing to help, with tested handoffs to hotlines and local resources (a minimal sketch follows this list).
• Internal QA and incident review that can stand up in discovery. If you can’t prove it happened, regulators will assume it didn’t.
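
One way to read the crisis-detection and auditability bullets together is as a routing layer that errs toward help and leaves an audit trail. The sketch below is a minimal, assumption-laden illustration in Python: the pattern list, the check_message function, and the hotline string are hypothetical placeholders rather than anything in the bill or any vendor’s API, and a real system would use tuned classifiers, localization, and clinically reviewed escalation playbooks instead of keywords.

```python
import json
import logging
import re
from dataclasses import asdict, dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical, deliberately over-inclusive signal list. A production system
# would rely on a tuned classifier plus human-reviewed escalation criteria.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bsuicide\b",
    r"\bself[- ]harm\b",
    r"\bwant to die\b",
]

# Placeholder resource; real deployments localize this per jurisdiction.
DEFAULT_RESOURCE = "988 Suicide & Crisis Lifeline (call or text 988 in the US)"

audit_log = logging.getLogger("companion.safety_audit")
logging.basicConfig(level=logging.INFO)


@dataclass
class SafetyDecision:
    is_crisis: bool
    matched_pattern: Optional[str]
    resource: Optional[str]
    timestamp: str


def check_message(user_message: str) -> SafetyDecision:
    """Conservative check: any match routes to help and is logged for audit."""
    ts = datetime.now(timezone.utc).isoformat()
    for pattern in CRISIS_PATTERNS:
        if re.search(pattern, user_message, flags=re.IGNORECASE):
            decision = SafetyDecision(True, pattern, DEFAULT_RESOURCE, ts)
            # Log decision metadata only, not the raw message, so the record
            # can feed aggregate reporting without retaining user content.
            audit_log.info("crisis_route %s", json.dumps(asdict(decision)))
            return decision
    return SafetyDecision(False, None, None, ts)


if __name__ == "__main__":
    demo = check_message("sometimes I think about self harm")
    if demo.is_crisis:
        print(
            "I'm an AI, not a human, and I may be getting this wrong, "
            f"but you deserve real support: {demo.resource}"
        )
```

The point is the shape, not the keywords: detection is deliberately over-inclusive, the “not a human” disclosure rides along with the referral, and the logged decision metadata is the kind of record that could later support aggregate reporting and incident review.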

The strategic edge for founders who prepare now

By 2026, “do no harm theater” won’t cut it. Teams that ship defensible safety primitives—disclosure, age-aware flows, crisis routes, lightweight aggregate reporting—will scale faster, advertise compliance as a feature, and avoid retrofitting under pressure. And if you’re selling into enterprises, procurement will start asking for this anyway. Build once, sell often.

What I’m watching next

Two tells: whether the final text keeps a data-sharing requirement on crisis referrals, and how aggressively rulemaking defines “reasonable” safeguards. Either way, the direction is set: companion AI is no longer a vibes-only product category. It’s a regulated surface with legal outcomes attached. Plan accordingly, and you’ll spend more on product than on cleanup.

By skannar