What SB 574 requires: four considerations for lawyers using AI
The bill doesn’t ban AI use in legal practice. Instead, it clarifies that existing professional obligations (confidentiality, competence, accuracy, and fairness) still apply when using AI tools.
The bill defines generative artificial intelligence as an “artificial intelligence system that can generate derived synthetic content, including text, images, video, and audio that emulates the structure and characteristics of the system’s training data.”
1. Client confidentiality and AI
SB 574 would prohibit lawyers from entering confidential, personally identifying, or nonpublic information into public AI systems.
What the bill says: Attorneys must ensure that “confidential, personal identifying, or other nonpublic information is not entered into a public generative artificial intelligence system.”
The bill doesn’t define what “public generative AI systems” are, but it does define personal identifying information to include:
- Driver’s license numbers.
- Dates of birth.
- Social Security numbers.
- National Crime Information and Criminal Identification numbers.
- Addresses and phone numbers of parties, victims, witnesses, and court personnel.
- Medical or psychiatric information.
- Financial information.
- Account numbers.
- Any other content sealed by court order or deemed confidential by court rule or statute.
In practice, this means attorneys can’t copy and paste client emails, case facts, or discovery materials into public AI platforms that don’t safeguard attorney-client privilege.
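To make the prohibition concrete, here is a minimal sketch of a pre-filter a firm might run before any text reaches a public AI tool. The pattern names and coverage are illustrative assumptions, not a complete screen; real intake checks need far broader rules plus human review.

```python
import re

# Illustrative sketch only: flag and redact obvious PII patterns before
# text is pasted into a public AI tool. Coverage here is deliberately
# minimal (SSNs, phone numbers, date-style strings) and is an assumption,
# not a complete screen for the categories SB 574 lists.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "date_of_birth": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def flag_pii(text: str) -> list[str]:
    """Return the names of PII categories detected in the text."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

def redact(text: str) -> str:
    """Replace detected PII with category placeholders."""
    for name, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{name.upper()}]", text)
    return text
```

A filter like this is a guardrail, not a guarantee: addresses, medical details, and sealed material don’t follow regular patterns, which is why the bill’s list ultimately requires attorney judgment.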
2. AI citation verification
The bill requires attorneys to verify and correct AI-generated content before using it.
What the bill says: Attorneys must take reasonable steps to:
- “Verify the accuracy of generative artificial intelligence material, including any material prepared on their behalf by others.”
- “Correct any erroneous or hallucinated output in any material used by the attorney.”
- “Remove any biased, offensive, or harmful content in any generative artificial intelligence material used, including any material prepared on their behalf by others.”
This means that attorneys must verify factual accuracy, identify hallucinations, maintain quality control over all AI-generated work, and remove problematic content, including work done by others on their behalf. AI may assist with the work, but responsibility remains with the lawyer.
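One practical way to support that verification duty is to pull every citation-like string out of AI-generated text and put it on a checklist for human review. The sketch below is hypothetical: the reporter list and regex are simplified assumptions, not a complete citation grammar.

```python
import re

# Hypothetical sketch: extract citation-like strings from AI output so
# each can be checked against an authoritative source by a person.
# The reporter abbreviations and regex are illustrative assumptions.
REPORTERS = r"(?:U\.S\.|F\.3d|F\.2d|Cal\.App\.4th|Cal\.4th)"
CITATION_RE = re.compile(rf"\b\d+\s+{REPORTERS}\s+\d+\b")

def citations_to_verify(ai_text: str) -> list[str]:
    """Return every citation-like string found. Each entry must be
    verified by a person before the material is filed or sent out."""
    return CITATION_RE.findall(ai_text)
```

Note what this cannot do: it surfaces strings that look like citations, but only a lawyer checking each one against a verified database can confirm the case exists and says what the AI claims.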
3. Preventing AI bias in legal practice
The bill requires that AI use not result in discrimination against protected groups.
What the bill says: Ensure “the use of generative artificial intelligence does not unlawfully discriminate against or disparately impact individuals or communities based on age, ancestry, color, ethnicity, gender, gender expression, gender identity, genetic information, marital status, medical condition, military or veteran status, national origin, physical or mental disability, political affiliation, race, religion, sex, sexual orientation, socioeconomic status, and any other classification protected by federal or state law.”
This provision addresses concerns that AI systems can perpetuate biases from their underlying training data. Under SB 574, attorneys could be held accountable if they use AI known to produce discriminatory outcomes.
4. AI disclosure considerations
Attorneys must consider whether to disclose AI use when creating content for the public. While disclosure isn’t mandatory in all circumstances, firms may want to develop clear policies on when and how to inform the public about AI-generated content.
What the bill says: “The attorney considers whether to disclose the use of generative artificial intelligence if it is used to create content provided to the public.”
Why California’s AI bill matters for lawyers nationwide
California isn’t alone in wrestling with AI’s role in legal practice. The bill acknowledges what many in the profession already know: Not all AI tools are suitable for legal practice. The California Bar has already issued guidance on AI use, but SB 574 would codify those principles into enforceable law, moving from “should” to “must.”
Other states are likely to follow suit. The 2025 Legal Trends Report found that 79% of legal professionals use AI, and nearly half of them are using generic AI tools such as ChatGPT, Gemini, and Claude. As adoption grows, so does the need for practical guardrails that let firms use AI without compromising professional responsibilities, protecting both lawyers and clients.
How to use AI responsibly in legal practice
For many firms, the question now is what responsible AI use actually looks like day to day, and how to make intentional choices about where and how AI fits into their legal work.
If you’re evaluating your current AI setup or thinking about adding new tools, the following considerations can help guide safer, more practical adoption, regardless of whether SB 574 is enacted.
Use legal AI grounded in the law
General-purpose AI tools, while helpful for brainstorming or general research, generate responses based on patterns in their training data rather than retrieving information from verified legal databases. That’s why hallucinated cases can appear in court filings when lawyers fail to verify citations. The AI doesn’t know it’s inventing citations because it doesn’t actually check case law.
Legal AI platforms such as Clio Work are built on a different foundation. They’re grounded in authenticated legal databases, with access to primary and secondary law in relevant jurisdictions. This significantly reduces the risk of hallucinations, which means lawyers can conduct research more efficiently and get results they can trust.
Choose AI tools that support professional oversight
Meeting verification obligations requires that lawyers have the means to review and confirm AI-generated work. With general-purpose AI, verification often means first figuring out whether a citation or statement is real at all, tracking down sources from scratch and untangling potential hallucinations. That process is time-consuming and increases the risk that errors slip through.
AI built for legal workflows shortens that gap. By providing direct access to verified source documents and enabling side-by-side comparison, legal AI shifts verification from a hunt for missing sources to a straightforward review, making it easier to meet oversight requirements without slowing down legal work.
Protect client information with AI built for confidentiality
Consumer AI tools, especially free or publicly available versions, operate under standard terms of service. This means your clients’ confidential data may be used to improve these models, as there are no contractual guarantees stipulating that client information is exempt. Ultimately, these platforms weren’t designed for attorney-client privilege because they weren’t designed for attorneys in the first place.
Instead, firms should look for AI platforms that offer contractual guarantees that data won’t be retained or used to train models, encryption at rest and in transit, SOC 2 or similar security certifications, and integration with practice management systems.
When AI is built into legal software, client information stays within an ecosystem under the lawyer’s control, and that data is never retained, shared, or used to train models. Confidentiality becomes a foundational feature.
Develop firm-wide AI policies
Selecting the right tools is only part of responsible AI adoption. Firms also need clear policies that define when and how AI can be used.
An effective AI policy should address which tools are approved for different types of work, what information can and can’t be entered into AI systems, and how to verify AI-generated content before use. It should also establish training requirements so everyone on the team understands both the capabilities and limitations of your AI tools.
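Those policy elements can also be expressed as data, so the rules can be versioned, reviewed, and enforced by tooling rather than living only in a memo. The sketch below is a hypothetical example; the tool names and data categories are placeholders a firm would replace with its own.

```python
# Illustrative sketch of a firm AI-use policy expressed as data.
# Tool names and data categories are hypothetical placeholders.
AI_POLICY = {
    "approved_tools": {
        "legal_research_ai": {"allowed_data": ["public_filings", "case_law"]},
        "general_chatbot": {"allowed_data": ["non_client_material"]},
    },
    # Categories that may never be entered into any AI system.
    "prohibited_inputs": ["client_identifiers", "medical_records", "financials"],
    "verification_required": True,  # all AI output reviewed before use
}

def tool_allows(tool: str, data_class: str) -> bool:
    """Check whether a data class may be entered into a given tool."""
    if data_class in AI_POLICY["prohibited_inputs"]:
        return False
    tool_cfg = AI_POLICY["approved_tools"].get(tool)
    return tool_cfg is not None and data_class in tool_cfg["allowed_data"]
```

A machine-readable policy makes the approved-tools list auditable, but it still has to be paired with the training the paragraph above describes so the team understands why the rules exist.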
Firms that prepare now, by evaluating current AI tools, documenting verification processes, and establishing clear policies, will find themselves better positioned as regulations like SB 574 continue to evolve.
Adopt AI responsibly with legal AI
Even if SB 574 doesn’t become law, this bill reflects a growing recognition among lawmakers that AI use in legal practice requires clear safeguards. The bill would codify requirements for verification, confidentiality protections when using public AI systems, and professional oversight.
While AI offers real benefits, the risks vary significantly depending on the type of tool being used. Public, consumer-focused AI platforms may be useful for general tasks, but they can pose serious challenges for lawyers when client confidentiality and accuracy are at stake.
Legal-specific AI tools like Clio Work, by contrast, are built for the realities of legal practice. They are designed to support attorney-client privilege, rely on verified sources, and support the level of review and accountability required for lawyers.
See how Clio Work helps lawyers use AI responsibly. Explore AI built for legal practice, with verified sources, built-in review tools, and safeguards designed to protect client confidentiality.