In the wake of a number of lawsuits against AI companies over the possible role of their platforms in encouraging suicide and other criminal behavior, including homicide, California Senator Steve Padilla introduced SB 867, which would prohibit in California, until 1/1/31, the sale or exchange of, or possession with intent to sell or exchange to a retailer, a toy that includes a companion chatbot.
A “toy” is defined as a product designed or intended by the manufacturer for use in play by children 12 years of age or younger.
This raises the question: is a moratorium on a product or service that is capable of being misused an effective regulatory strategy?
California and New York already have laws that regulate companion chatbots:
California’s SB 243, which went into effect on 1/1/26:
- Requires clearly notifying the individual that they are interacting with a bot, not a human
- Allows a companion chatbot to engage with users only if it maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, including, but not limited to, by providing a notification that refers the user to a crisis service provider
For minors, it also requires:
- Reminding the user every three hours to take a break and that the companion is not human
- Instituting reasonable measures to prevent the companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.
Starting in 2027, it will also impose an annual reporting requirement to California’s Office of Suicide Prevention (OSP).
New York’s AI Companion Law (N.Y. Gen. Business Law § 1700, et seq.) has similar requirements, including making reasonable efforts to detect and address suicidal ideation or expressions of self-harm communicated by a user to the AI companion, and reminding the user every three hours (though under the New York law, the reminder is not limited to minors).
The New York law is enforceable by the Attorney General, while the California law allows for a private right of action.
The discussion of whether to regulate a technology itself or its potentially harmful uses is taking center stage in other AI regulation spaces as well. In December, the Federal Trade Commission decided to set aside its previous holding in the Rytr case, which targeted deceptive conduct rooted in a technology capable of being used for the mass production of fake reviews. The FTC instead launched a campaign enforcing the new Consumer Review Rule, which addresses the actual conduct of writing fake reviews.