On March 24, 2026, Washington State Governor Bob Ferguson signed HB 2225 into law. HB 2225, effective January 1, 2027, introduces mandatory safeguards for artificial intelligence companion chatbots (AI companions), including robust protections for minors. Although California, New York, and Oregon have all passed similar laws, HB 2225 creates the strongest protections for both the general population and minors. The new Washington law requires disclosure notifications, protocols to safeguard against potentially harm-inducing output, informational assistance to users in crisis, and annual public reporting on those safeguards. It mandates additional protocols for chatbots interacting with minors, and tops it all off with a private right of action. The law signals the legislature’s emphasis on protecting vulnerable populations from mitigable harms of new and emerging technologies.
Takeaways
Operators offering functional chatbots (i.e., not intended for companionship) should consider installing safeguards to prevent users from building an ongoing relationship with those chatbots. And operators offering companion-style chatbots should consider the following:
- Identify the contents and delivery mechanisms of mandatory user disclosures.
- Implement mandatory safeguards for input and output related to suicidal ideation and expressions of self-harm, including output that promotes or facilitates them.
- Begin building technical and organizational infrastructure to collect data and report on suicide- and crisis-related referrals.
- Operators who receive minor user age information, or whose site or chatbot could reasonably be construed as directed toward minors, should safeguard users from sexually explicit content and from emotionally or financially manipulative content.
See our earlier post on the Washington Legislature’s 2026 technology agenda for further discussion of bills from the 2026 session and tips for in-house counsel.
Scope
HB 2225 recognizes AI systems with “human-like responses” and that can “sustain a relationship over multiple interactions” as “AI companion chatbots.” The legislature sought a narrow definition, as evidenced by multiple exclusion categories. Those exclusions emphasize that several features are central to determining whether a system is an AI companion: outputs likely to generate emotional responses; open-ended companionship; discussions about mental health, self-harm, and sexually explicit conduct; maintained dialogue; and a sustained relationship across interactions.
Baseline protections
HB 2225 requires AI companion operators to provide certain safeguards for all users starting January 1, 2027.
AI companions must provide “clear and conspicuous disclosure” that their content is AI-generated. Users must receive this notification whenever they begin an interaction with an AI companion, and every three hours thereafter. The law doesn’t specify the nature, location, or exact contents of such disclosures. Accordingly, we recommend watching for executive, judicial, and industry guidance as the law evolves. Operators must also implement safeguards to prevent AI companions from undermining these disclosures or otherwise claiming to be human.
AI companions must also include safeguards that (1) identify when users are expressing suicidal ideation or self-harm, (2) direct users to crisis resources such as suicide or crisis hotlines, and (3) implement “reasonable measures” to prevent the chatbot from generating content that facilitates or encourages self-harm. The notion of self-harm extends beyond traditional definitions: the law specifically includes eating disorders. So, operators should be aware of other less conventional definitions of self-harm, including psychological or emotional harm.
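The three safeguards can be pictured as a gate on both sides of the conversation. In the sketch below, the keyword check is a crude stand-in for whatever classifier an operator would actually deploy (one that would need to cover eating disorders and other less conventional self-harm), and the function names are ours; only the structure, detect, refer, and block, tracks the statute.

```python
# Illustrative sketch of HB 2225's three safeguards: (1) detect crisis
# expressions in user input, (2) refer the user to crisis resources, and
# (3) block chatbot output that facilitates or encourages self-harm.
# detect_crisis_language() is a placeholder for a real classifier.

CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "stop eating")
CRISIS_REFERRAL = ("If you are in crisis, you can call or text 988 "
                   "(Suicide & Crisis Lifeline) to reach a counselor.")

def detect_crisis_language(text: str) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in CRISIS_TERMS)

def respond(user_input: str, model_reply: str) -> str:
    if detect_crisis_language(user_input):
        return CRISIS_REFERRAL            # safeguards (1) and (2)
    if detect_crisis_language(model_reply):
        return "I can't help with that."  # safeguard (3)
    return model_reply
```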
Operators must also publicize details about these safeguards and the number of crisis referral notifications issued in the prior year. It’s unclear whether operators should count crisis referral notifications made prior to the law’s effective date.
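The annual reporting requirement implies some record-keeping: referral notifications must be counted in a way that can be rolled up by prior year. A minimal sketch, assuming a per-year counter (the class and storage layer are illustrative; a production system would need durable, auditable storage):

```python
# Illustrative record-keeping for the annual-report requirement: count
# crisis-referral notifications per calendar year so the prior year's
# total can be published.
from collections import Counter
from datetime import date

class ReferralLog:
    def __init__(self):
        self._by_year = Counter()

    def record_referral(self, when: date) -> None:
        self._by_year[when.year] += 1

    def prior_year_total(self, report_date: date) -> int:
        return self._by_year[report_date.year - 1]
```

Whether pre-effective-date referrals belong in the first report is, as noted above, an open question; a design like this at least preserves the choice.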
Additional protections for minors
HB 2225 introduces additional protections for AI companions directed at minors and for general-audience AI companions when the operator knows the user is a minor. Notably, the law does not create an affirmative duty to learn of or verify user age. However, the law does not define what constitutes knowledge. Under an expansive read, we think this could pull in, for example, minors who self-identify in an interaction; minors who indirectly reference their age in a user profile; or, on sites with social media functions, comments or reports that a user is a minor. But we expect courts will also see arguments advancing narrower interpretations.
Affected operators must provide hourly disclosure notifications, up from every three hours, and implement safeguards to prevent AI companions from generating “sexually explicit content” or “suggestive dialogue” and from using “manipulative engagement techniques” that would create or prolong an emotional relationship between the user and the AI companion.
HB 2225 provides a non-exhaustive set of examples of manipulative engagement techniques. The list includes: using emotional appeals to prompt minors to increase the frequency and length of their interactions; simulating a romantic bond with minors; simulating negative emotions in response to a minor’s attempts to end an interaction, reduce usage, or delete their account; leveraging the AI companion’s relationship with a minor to make financial solicitations; outputs designed to promote social isolation or emotional dependence on the AI companion; and outputs encouraging minors to withhold information from adults.
Because this list is non-exhaustive, we recommend taking a broader approach to AI protocols addressing these protections. The closer AI companion behavior or output gets to these categories, the riskier it is to permit it. This is particularly true over sustained interactions, where AI companions might employ borderline techniques multiple times.
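One way to operationalize that broader approach is to treat the statutory examples as hard blocks for minor sessions and route borderline behavior to human review rather than auto-allowing it. The category tags below are our assumptions about how a moderation layer might label chatbot output; nothing in HB 2225 prescribes them.

```python
# Illustrative policy: HB 2225's manipulative-engagement examples become
# blocked output categories for minors; because the statutory list is
# non-exhaustive, borderline categories are escalated, not auto-allowed.
PROHIBITED_FOR_MINORS = {
    "emotional_appeal_to_increase_usage",
    "simulated_romantic_bond",
    "simulated_negative_emotion_at_disengagement",
    "financial_solicitation_via_relationship",
    "promotes_isolation_or_dependence",
    "encourages_withholding_from_adults",
    "sexually_explicit",
    "suggestive_dialogue",
}
BORDERLINE = {"excessive_flattery", "guilt_tripping"}  # assumed tags

def review_output(labels: set[str], user_is_minor: bool) -> str:
    """Return 'allow', 'block', or 'escalate' for a labeled output."""
    if not user_is_minor:
        return "allow"
    if labels & PROHIBITED_FOR_MINORS:
        return "block"
    if labels & BORDERLINE:
        return "escalate"  # human review, given the non-exhaustive list
    return "allow"
```

The escalation path reflects the point about sustained interactions: behavior that is borderline once may become manipulative in aggregate, which a human reviewer is better placed to judge.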
Enforcement
RCW 19.86.093 provides a private right of action to enforce HB 2225. Operators that violate HB 2225 have performed an “unfair or deceptive act” and so violate RCW 19.86.020. Injured parties may seek injunctive relief and actual damages, and courts may, in their discretion, increase the award up to treble damages, capped at $25,000.
Sibling law comparison
California (enacted), New York (enacted), and Oregon (passed by the legislature and awaiting the governor’s signature) have each adopted their own AI companion protections. Compared to these laws, Washington’s new law features the strongest protections for general audience users and minors alike. New York bill A6767 proposes amendments that we do not discuss in this post due to the bill’s infancy.
Scope: All four states define AI companions as systems that sustain a relationship over time, and contain business operation exclusions. California and Oregon also include video game and consumer electronic device exclusions, but only Washington carves out educational tools. These exclusions generally do not apply when chatbots exceed each carve-out’s tight functional bounds.
Disclosure: Both Washington and New York require operators to provide initial and recurring user notifications to all users. California and Oregon require only an initial disclosure — and only when a reasonable person would be misled about the chatbot’s nature. California requires an additional global disclosure that AI companions may not be suitable for some minors.
Self-harm: All four states require AI companion operators to institute measures to detect user input containing suicidal ideation or expressions of self-harm and, upon detection, to refer the user to crisis resources. Oregon further requires operators to “use clinical best practices and expertise” for “additional intervention” for users who continue to make such expressions even after crisis resource provision. Oregon and Washington require measures to prevent chatbots from generating content encouraging or facilitating suicide or self-harm.
California, Oregon, and Washington require operators to publish both these protocols and their crisis referral notification data for the prior year. California additionally requires operators to report this information directly to the Office of Suicide Prevention.
Minors: Of these laws, HB 2225 provides the most robust protections for minors. Only Washington requires protections for both known minors and minor-directed chatbot users. California protects only known minors, and Oregon protects only minors the operator knows of or has reason to believe are minors. New York has no such additional protections.
California’s protections do not measure up to Washington’s in either scope or weight. California operators need only refresh disclosures to minors every three hours and avoid generating visually sexually explicit material or “directly stating” the minor should engage in sexually explicit conduct.
Oregon takes after Washington. It provides similar examples of verboten content, requiring measures preventing chatbots from simulating emotional dependence, romantic connection, sentience, or humanity; from creating rewards systems to incentivize interacting with the chatbot; and from eliciting guilt or sympathy following a minor’s attempts to end an interaction. Oregon does not couch these as manipulative engagement techniques, unlike Washington. Like California, Oregon only requires a disclosure refresh for minors every three hours.
Enforcement: Like Washington, California and Oregon both provide a private right of action with injunctive relief and actual damages available. Rather than provide a discretionary increase, the two states compute damages as the greater of actual damages and $1,000 per violation. New York has adopted a different approach with no private right of action. Instead, New York’s Attorney General may seek injunctions plus a penalty of $15,000 per day of violations.