California’s 2025 legislative session ended with a familiar message to businesses: privacy compliance is expanding in scope, and artificial intelligence (AI) governance is moving quickly from voluntary best practices to enforceable transparency and safety obligations. Over the course of the session, lawmakers introduced 33 privacy and AI bills and passed 16 for Governor Gavin Newsom to sign or veto. Ultimately, the governor signed four privacy bills and seven AI bills into law and vetoed five others. Below is a summary of the most consequential enacted measures, along with practical compliance takeaways for organizations operating in, selling into, or providing services in California. The through-line is consistent: California is regulating data practices and AI system behavior through disclosure, documentation, accountability, and enforcement hooks.
New Privacy Laws
- The California Opt Me Out Act mandates that companies developing or maintaining a web browser provide consumers with a universal opt-out preference signal that applies to all websites they visit. Even though the mandate is aimed at browser developers, businesses should prepare for more machine-readable opt-out signals and ensure that adtech, analytics, personalization, consent tools, and vendor flows can recognize and honor universal signals consistently across websites.
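For context, browser-based universal opt-out signals today generally follow the Global Privacy Control (GPC) specification, under which a participating browser sends a `Sec-GPC: 1` request header. The sketch below shows one way a site might recognize and honor such a signal; the consent-flag plumbing is illustrative and not tied to any particular framework or to the Act's text.

```python
# Minimal sketch of recognizing a universal opt-out preference signal
# (e.g., the Global Privacy Control "Sec-GPC: 1" request header).
# The header name comes from the GPC draft spec; everything else here
# (function names, the consent-override logic) is illustrative.

def honors_universal_opt_out(headers: dict) -> bool:
    """Return True if the request carries a universal opt-out signal."""
    # HTTP header names are case-insensitive; normalize before lookup.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

def effective_consent(headers: dict, stored_opt_in: bool) -> bool:
    """Treat a universal opt-out signal as overriding stored sale/share consent."""
    if honors_universal_opt_out(headers):
        return False  # honor the signal: no sale/sharing for this request
    return stored_opt_in
```

In practice this check would sit in shared middleware so that adtech, analytics, and personalization code paths all see one consistent consent decision.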
- AB 1043 (online age-verification signals) takes an alternative approach to children’s online safety by shifting where obligations sit: operating system providers must offer an interface for users to enter age-verification information during account setup. The model diverges from other states by placing liability on app developers while excluding operating system providers and app stores. App developers should treat OS-provided age signals as compliance-critical attributes.
- SB 361 (data broker data collection and deletion) expands data broker oversight through broader disclosures and tougher enforcement. Brokers must disclose whether they collect personal information in numerous categories, including sexual orientation, citizenship status, biometric information, and government identification numbers. Brokers must also disclose to the California Privacy Protection Agency whether they sold or shared data with foreign actors, federal or state agencies (including law enforcement), or generative AI system developers. Data brokers, and companies whose practices resemble brokering, should revisit registration, reporting, and deletion workflows. Data purchasers should strengthen vendor diligence, because these disclosures can surface regulatory and reputational exposure tied to data sources and onward transfers, including transfers to AI developers.
- AB 45 (limits on health and location data collection) prohibits the collection, use, disclosure, sale, sharing, or retention of personal information of an individual who is located at, or within the precise geolocation of, a clinic or reproductive health care service center. Because this law provides a private right of action for violations, organizations should inventory location data collection and retention, review geofencing and advertising logic near sensitive locations, audit SDK behavior (including third-party trackers), and refresh data governance and law enforcement response playbooks.
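One operational control the AB 45 takeaway implies is suppressing precise-geolocation events near sensitive sites before they enter analytics or ad pipelines. The sketch below illustrates the idea with a standard haversine distance check; the coordinates and the suppression radius are hypothetical placeholders, not values taken from the statute.

```python
import math

# Illustrative sketch: drop precise-geolocation events collected within a
# geofence around sensitive sites. The site list and radius below are
# hypothetical examples for demonstration, not statutory values.

SENSITIVE_SITES = [
    (34.0522, -118.2437),  # hypothetical clinic coordinates
]
SUPPRESSION_RADIUS_M = 500.0  # assumed radius for illustration only

def _haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    earth_radius_m = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

def should_suppress(lat, lon):
    """Return True if a location event falls inside any sensitive geofence."""
    return any(
        _haversine_m(lat, lon, s_lat, s_lon) <= SUPPRESSION_RADIUS_M
        for s_lat, s_lon in SENSITIVE_SITES
    )
```

A check like this belongs as far upstream as possible (ideally in the SDK or ingestion layer), so that suppressed events never reach downstream retention, sharing, or advertising systems at all.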
AI Transparency and Safety
The Transparency in Frontier Artificial Intelligence Act requires certain frontier AI developers, with revenues of at least $500 million, to disclose safety efforts, including:
- Disclosures to the Office of Emergency Services, plus mandatory third-party audits;
- Identification of which accepted standards are integrated into the developer’s framework; and
- Adoption of a clear safety framework, published on the developer’s website.
In-scope developers should make governance audit-ready through written safety frameworks, mapped standards, and disclosure processes. Enterprises buying frontier model services should expect procurement and due diligence changes as vendors align to audit and disclosure expectations.
Additionally, SB 243 (companion chatbots: disclosure, safety protocols, and suicide-prevention reporting) regulates companion chatbots, described as systems designed to provide human-like responses, satisfy social needs, and sustain relationships. The law requires:
- Clear and conspicuous disclosure that the user is engaging with AI, not a human, if a reasonable person could be misled; and
- Safeguards barring engagement unless the operator maintains a protocol to prevent harmful content, including outputs related to self-harm and suicidal ideation, and, where the user is a known minor, sexually explicit content.
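The two requirements above can be thought of as preconditions that gate a chatbot session. The sketch below illustrates that gating logic; the field and function names are illustrative, not terms from SB 243, and a real implementation would also cover the statute's reporting obligations.

```python
from dataclasses import dataclass

# Hedged sketch of gating a companion-chatbot session on the SB 243-style
# safeguards described above. All names here are hypothetical.

@dataclass
class SessionPolicy:
    ai_disclosure_shown: bool   # "you are talking to AI" notice was displayed
    self_harm_protocol: bool    # protocol exists for self-harm/suicidal ideation
    user_is_known_minor: bool
    minor_content_filter: bool  # blocks sexually explicit content for minors

def may_engage(p: SessionPolicy) -> bool:
    """Allow the chatbot to engage only when required safeguards are in place."""
    if not p.ai_disclosure_shown:
        return False  # a reasonable person could be misled without disclosure
    if not p.self_harm_protocol:
        return False  # no engagement without a harmful-content protocol
    if p.user_is_known_minor and not p.minor_content_filter:
        return False  # known minors require the additional content filter
    return True
```

Structuring the safeguards as an explicit, testable gate also produces the kind of documentation and demonstrable controls that the conclusion below notes regulators and plaintiffs can probe.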
What to Expect in 2026
The second half of the California legislative session has begun. More than 22 bills will be carried over, with a February 20, 2026, deadline for new bills. Privacy bills in committee include tweaks to the California Consumer Privacy Act (CCPA) and the California Invasion of Privacy Act (CIPA), plus workplace surveillance bills. Potential new data broker obligations and new reporting for businesses collecting precise geolocation are also on the list. Ten AI bills are pending further action, including multiple bills on algorithmic pricing, plus bills on AI bots, high-risk automated decision-making, copyright, and discrimination.
Together, these enactments confirm that California is steadily converting privacy and AI risk management into operational requirements that regulators and plaintiffs can test through documentation, disclosures, and demonstrable controls. Organizations should align engineering, product, procurement, and legal around a single compliance roadmap that covers universal opt-out signal recognition, age and minor related safeguards, data broker style reporting and deletion workflows, sensitive geolocation and health data guardrails, and AI transparency and safety governance that can withstand audit and vendor diligence. With dozens of bills carrying over and new proposals imminent, the most resilient posture is to institutionalize repeatable processes now, including data mapping, vendor oversight, policy and notice updates, testing and monitoring, and incident response playbooks, so that new California requirements can be absorbed as incremental changes rather than disruptive rework.