On June 22, 2025, Texas Governor Greg Abbott (R) signed the Texas Responsible AI Governance Act (“TRAIGA”) (HB 149) into law.  The law, which takes effect on January 1, 2026, makes Texas the second state to enact comprehensive AI consumer protection legislation, following the 2024 enactment of the Colorado AI Act.  Unlike the Colorado AI Act, however, TRAIGA’s AI consumer protection framework sets out categories of “prohibitions on use” of AI that will apply to persons that develop, deploy, or distribute AI systems (as relevant for different sections).  TRAIGA also establishes AI disclosure requirements for healthcare providers and government entities and amends Texas’s biometric and data privacy laws. 

Prohibited Uses of AI.  In contrast to other state AI consumer protection frameworks that focus on risk mitigation for “high-risk” AI use cases, TRAIGA will categorically prohibit the development, deployment, or distribution (as applicable) of AI systems with the “intent” or “sole intent” that the AI system:

  • Incite or encourage self-harm, harm to another person, or criminal activity.
  • Infringe, restrict, or otherwise impair individual rights guaranteed under the U.S. Constitution.
  • Unlawfully discriminate against a protected class in violation of state or federal law.  The law further provides that “disparate impact” is insufficient to show an intent to discriminate for purposes of this prohibition.  Notably, TRAIGA’s prohibition on AI-based unlawful discrimination does not apply to insurance entities and financial institutions.
  • Produce, assist or aid in producing, or distribute (1) visual material depicting child pornography, as prohibited under Section 43.26 of the Texas Penal Code, or (2) deepfake videos depicting intimate imagery or sexual conduct, as prohibited under Section 21.165 of the Texas Penal Code.
  • Engage in “text-based conversations that simulate or describe sexual conduct” while “impersonating or imitating a child younger than 18 years of age.” 

TRAIGA also will prohibit certain government uses of AI, including the use of AI systems that evaluate or classify persons “with the intent to calculate or assign a social score” and the use of AI “for the purpose of uniquely identifying a specific individual” using biometric or publicly available data collected in violation of state or federal law and without the individual’s consent.

Healthcare & Government AI Disclosure Requirement.  TRAIGA will require healthcare providers that use an AI system “in relation to health care service or treatment” to disclose to patients that they are interacting with an AI system, and to provide such disclosures “not later than the date the service or treatment is first provided.”  The law also will require government agencies to provide such disclosures to consumers who interact with an agency-provided AI system that is “intended to interact with consumers.”

CUBI Amendments.  TRAIGA amends Texas’s Capture or Use of Biometric Identifiers (“CUBI”) law, which generally prohibits the capture of an individual’s biometric identifier for commercial purposes unless the individual provides informed consent.  TRAIGA amends CUBI to clarify that an individual is not informed of and does not consent to the capture of their biometric identifiers based solely on the existence of “publicly available” media that contains their biometric identifiers, unless the media was made publicly available by the individual. 

Additionally, TRAIGA creates an exception to CUBI for the processing of biometric identifiers involved in developing, training, evaluating, disseminating, or otherwise offering AI models or systems, unless the system is used or deployed for the purpose of uniquely identifying a specific individual.  TRAIGA also creates an exception to CUBI for entities that develop or deploy an AI model or system for certain security and fraud prevention purposes.

Data Processor Requirements.  TRAIGA amends the Texas Data Privacy & Security Act to require processors to assist controllers in complying with requirements related to personal data collected, stored, and processed by AI systems, if applicable.

Exemptions.  TRAIGA will exempt a defendant from liability under its provisions for alleged violations caused by “another person[’s]” use of the defendant’s AI system, and will prohibit enforcement actions against any person for an AI system “that has not been deployed.”  Additionally, TRAIGA will preclude liability for defendants that discover a violation of TRAIGA through (1) feedback from developers, deployers, or other persons, (2) testing, (3) following state agency guidelines, or (4) an internal review process, if the defendant is in substantial compliance with the National Institute of Standards and Technology’s AI Risk Management Framework: Generative AI Profile or another nationally or internationally recognized AI risk management framework.

Enforcement.  TRAIGA will be enforced by the Texas Attorney General, who will be required to establish an online mechanism for consumers to report TRAIGA violations and will be authorized to request various categories of information from potential violators.  Violations will be punishable by civil penalties of $10,000 to $12,000 for curable violations that are not cured, $80,000 to $200,000 for “uncurable” violations, and $2,000 to $40,000 for each day that a violation continues, in addition to injunctive relief.

Upon the Texas Attorney General’s recommendation, Texas state agencies also will be authorized to impose sanctions against persons found in violation of TRAIGA if the person is licensed, registered, or certified by the state agency.  For such persons, state agency sanctions include the suspension or revocation of the person’s agency-issued license and up to $100,000 in monetary penalties.

*              *              *

For more updates on developments related to artificial intelligence and technology, see our Inside Global Tech, Global Policy Watch, and Inside Privacy blogs.

Lindsey Tonsager

Lindsey Tonsager helps national and multinational clients in a broad range of industries anticipate and effectively evaluate legal and reputational risks under federal and state data privacy and communications laws.

In addition to assisting clients engage strategically with the Federal Trade Commission, the U.S. Congress, and other federal and state regulators on a proactive basis, she has experience helping clients respond to informal investigations and enforcement actions, including by self-regulatory bodies such as the Digital Advertising Alliance and Children’s Advertising Review Unit.

Ms. Tonsager’s practice focuses on helping clients launch new products and services that implicate the laws governing the use of endorsements and testimonials in advertising and social media, the collection of personal information from children and students online, behavioral advertising, e-mail marketing, artificial intelligence, the processing of “big data” in the Internet of Things, spectrum policy, online accessibility, compulsory copyright licensing, telecommunications, and new technologies.

Ms. Tonsager also conducts privacy and data security diligence in complex corporate transactions and negotiates agreements with third-party service providers to ensure that robust protections are in place to avoid unauthorized access, use, or disclosure of customer data and other types of confidential information. She regularly assists clients in developing clear privacy disclosures and policies―including website and mobile app disclosures, terms of use, and internal social media and privacy-by-design programs.

Jayne Ponder

Jayne Ponder is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity Practice Group. Jayne’s practice focuses on a broad range of privacy, data security, and technology issues. She provides ongoing privacy and data protection counsel to companies, including on topics related to privacy policies and data practices, the California Consumer Privacy Act, and cyber and data security incident response and preparedness.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients for complying with federal, state, and global privacy and competition frameworks and AI regulations. He also assists clients in investigating compliance issues, preparing for federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.