⬤ New York and California just became the first states to put real legal guardrails on AI companion services. New York now requires these tools to include suicide risk detection systems and clear disclosures reminding users they're chatting with a bot, not a real person. California's Senate Bill 243 kicks in on January 1, 2026, bringing tougher protections for minors and new reporting rules for companies running AI companions.
⬤ These are the first concrete legal boundaries drawn specifically for AI companion tech in the U.S. New York's focus is on transparency and safety: companies have to actively monitor for suicide risk and make it obvious the service isn't human. California's SB 243 goes harder on youth safeguards, requiring extra protections and reporting standards for any service that interacts with kids.
⬤ As more people use conversational AI, state-level regulation is starting to fill the oversight gap. The push for suicide risk detection and strict disclosure rules shows lawmakers want to make sure AI companions don't confuse users about what they're talking to or put vulnerable people at risk. The debate around AI and young users is just getting started, and standards will likely keep changing.
⬤ These moves matter because New York and California are huge markets, and their rules could shape how AI companion products get built and managed nationwide. Requirements for risk detection, transparency, and youth protections will affect compliance costs, product features, and what regulators expect from the entire AI industry going forward.
Usman Salis