On October 28, Texas State Representative Giovanni Capriglione (R-Tarrant County) released a draft of the Texas Responsible AI Governance Act (“TRAIGA”), after nearly a year of collecting input from industry stakeholders.  Representative Capriglione, who authored Texas’s Data Privacy and Security Act (discussed here) and currently co-chairs the state’s AI Advisory Council, appears likely to introduce TRAIGA in the upcoming legislative session scheduled to begin on January 14, 2025.  Modeled after the Colorado AI Act (SB 205) (discussed here) and the EU AI Act, TRAIGA would establish obligations for developers, deployers, and distributors of “high-risk AI systems.”  Additionally, TRAIGA would establish an “AI Regulatory Sandbox Program” under which participating AI developers could test AI systems under a statutory exemption.

Although a number of states have expressed significant interest in AI regulation, if TRAIGA passes, Texas would become only the second state to enact industry-agnostic, risk-based AI legislation, following the passage of the Colorado AI Act in May.  There is significant activity in other states as well: the California Privacy Protection Agency is considering rules that would apply to certain automated decisionmaking and AI systems, and other states are expected to introduce AI legislation in the new session.  In addition to its requirements for high-risk AI systems and its AI sandbox program, TRAIGA would amend Texas’s Data Privacy and Security Act to incorporate AI-specific provisions, and it would provide for an AI workforce grant program and a new “AI Council” to issue advisory opinions and guidance on AI.

Despite these similarities, a number of provisions in the 41-page draft of TRAIGA would depart from the Colorado AI Act:

Lower Thresholds for “High-Risk AI.”  Although TRAIGA takes a risk-based approach by focusing requirements on AI systems that present heightened risks to individuals, its definition of high-risk AI systems would arguably be broader than the Colorado AI Act’s.  First, TRAIGA would apply to systems that are a “contributing factor” in consequential decisions, a lower threshold than the Colorado AI Act, which reaches only systems that constitute a “substantial factor” in such decisions.  Additionally, TRAIGA would define “consequential decision” more broadly than the Colorado AI Act to include decisions that affect consumers’ access to, or the cost or terms of, for example, transportation services, criminal case assessments, and electricity services.

New Requirements for Distributors and Other Entities.  TRAIGA would build upon the Colorado AI Act’s approach to regulating key actors in the AI supply chain.  It would also add a new role for AI “distributors,” defined as persons, other than developers, that make an AI system “available in the market.”  Distributors would have a duty to use reasonable care to prevent algorithmic discrimination, including a duty to withdraw, disable, or recall non-compliant high-risk AI systems, as appropriate. 

Ban on “Unacceptable Risk” AI Systems.  Similar to the EU AI Act, TRAIGA would prohibit the development or deployment of certain AI systems that pose unacceptable risks, including AI systems used to manipulate human behavior, engage in social scoring, capture biometric identifiers of an individual, infer or interpret sensitive personal attributes, infer (or that have the capability to infer) emotions without consent, or produce deepfakes that constitute child sexual abuse material (CSAM) or intimate imagery prohibited under Texas law.

New Generative AI Training Data Record-Keeping Requirement.  TRAIGA would impose requirements specific to developers of generative AI systems, who would be required to keep “detailed records” of generative AI training datasets, consistent with the suggested actions in NIST’s AI Risk Management Framework Generative AI Profile, previously covered here.

Expanded Reporting for Deployers; No Reporting for Developers.  TRAIGA would impose reporting requirements on AI system deployers—defined as persons that “put into effect or commercialize” high-risk AI systems—that go beyond those in the Colorado AI Act.  TRAIGA would require deployers to provide written notice to the Texas AG, relevant regulatory authorities, or TRAIGA’s newly established AI Council, as well as to “affected consumers,” when the deployer becomes aware or is made aware that a deployed high-risk AI system has caused or is likely to result in algorithmic discrimination or any “inappropriate or discriminatory consequential decision.”  Unlike the Colorado AI Act, however, TRAIGA would not impose reporting requirements on developers.

Exemptions.  TRAIGA would recognize exemptions for (1) research, training, testing, and other pre-deployment activities within the scope of its sandbox program (unless such activities constitute prohibited uses), (2) small businesses, as defined by the U.S. Small Business Administration, that meet certain other requirements, and (3) developers of open-source AI systems, so long as the developer takes steps to prevent high-risk uses and makes the “weights and technical architecture” of the AI system publicly available.

Enforcement.  TRAIGA would authorize the Texas AG to enforce its high-risk AI requirements for developers, deployers, and distributors, and to obtain injunctive relief and civil penalties, subject to a 30-day cure period.  Additionally, TRAIGA would provide a limited private right of action for injunctive and declaratory relief against entities that develop or deploy AI for prohibited uses.

*              *              *

TRAIGA’s prospects for passage are far from certain.  As in other states, including Colorado, the draft text may be substantially amended during the legislative process.  Nonetheless, if enacted, TRAIGA would firmly establish a risk-based, consumer protection-focused framework as a national model for AI regulation in the United States.  We will be closely monitoring TRAIGA and other state AI developments as the 2025 state legislative sessions unfold.

Follow our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs for ongoing updates on key AI and other technology legislative and regulatory developments.

Matthew Shapanka

Matthew Shapanka draws on more than 15 years of experience – including on Capitol Hill, at Covington, and in state government – to advise and counsel clients across a range of industries on significant legislative, regulatory, and enforcement matters. He develops and executes complex, multifaceted public policy initiatives for clients seeking actions by Congress, state legislatures, and federal and state government agencies, many with significant legal and political opportunities and risks.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters within the Committee’s jurisdiction, including federal election law and campaign finance, and oversight of the Federal Election Commission, legislative branch agencies, security and maintenance of the U.S. Capitol Complex, and Senate rules and regulations.

Most significantly, Matt led the Rules Committee staff work on the Electoral Count Reform and Presidential Transition Improvement Act – landmark bipartisan legislation to update the antiquated process of certifying and counting electoral votes in presidential elections that President Biden signed into law in 2022.

As Chief Counsel, Matt was a lead attorney on the joint bipartisan investigation (with the Homeland Security and Governmental Affairs Committee) into the security planning and response to the January 6, 2021 attack on the Capitol. In that role, he oversaw the collection and review of documents, led interviews and depositions of key government officials, advised the Chairwoman and Committee members on two high-profile joint hearings, and drafted substantial portions of the Committees’ staff report on the attack. He also led oversight of the Capitol Police, Architect of the Capitol, Senate Sergeant at Arms, and executive branch agencies involved in implementing the Committees’ recommendations, including additional legislation and hearings.

Both in Congress and at the firm, Matt has prepared many corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at legislative, oversight, or nomination hearings before congressional committees, as well as witnesses appearing at congressional depositions and transcribed interviews. He is also an experienced legislative drafter who has composed dozens of bills introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas.

In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, as well as the election and political laws of states and municipalities across the country.

Before law school, Matt worked as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on all aspects of state-level policy, communications, and compliance for federal stimulus funding awarded to Massachusetts under the American Recovery & Reinvestment Act of 2009. He has also worked for federal, state, and local political candidates in Massachusetts and New Hampshire.

Jayne Ponder

Jayne Ponder is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity Practice Group. Jayne’s practice focuses on a broad range of privacy, data security, and technology issues. She provides ongoing privacy and data protection counsel to companies, including on topics related to privacy policies and data practices, the California Consumer Privacy Act, and cyber and data security incident response and preparedness.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients for complying with federal, state, and global privacy and competition frameworks and AI regulations. He also assists clients in investigating compliance issues, preparing for federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.