On September 10, Senate Commerce, Science, and Transportation Committee Chair Ted Cruz (R-TX) released what he called a “light-touch” regulatory framework for federal AI legislation, outlining five pillars for advancing American AI leadership.  In parallel, Senator Cruz introduced the Strengthening AI Normalization and Diffusion by Oversight and eXperimentation (“SANDBOX”) Act (S. 2750), which would establish a federal AI regulatory sandbox program that would waive or modify federal agency regulations and guidance for AI developers and deployers.  Collectively, the AI framework and the SANDBOX Act mark the first congressional effort to implement the recommendations of the AI Action Plan the Trump Administration released on July 23. 

  1. Light-Touch AI Regulatory Framework

Senator Cruz’s AI framework, titled “A Legislative Framework for American Leadership in Artificial Intelligence,” calls for the United States to “embrace its history of entrepreneurial freedom and technological innovation” by adopting AI legislation that promotes innovation while preventing “nefarious uses” of AI technology.  Echoing President Trump’s January 23 Executive Order on “Removing Barriers to American Leadership in Artificial Intelligence” and recommendations in the AI Action Plan, the AI framework sets out five pillars as a “starting point for discussion”:

  • Unleashing American Innovation and Long-Term Growth.  The AI framework recommends that Congress establish a federal AI regulatory sandbox program, provide access to federal datasets for AI training, and streamline AI infrastructure permitting.  This pillar mirrors the priorities of the AI Action Plan and President Trump’s July 23 Executive Order on “Accelerating Federal Permitting of Data Center Infrastructure.”
  • Protecting Free Speech in the Age of AI.  Consistent with President Trump’s July 23 Executive Order on “Preventing Woke AI in the Federal Government,” Senator Cruz called on Congress to “stop government censorship” of AI (“jawboning”) and address foreign censorship of Americans on AI platforms.  Additionally, while the AI Action Plan recommended revising the National Institute of Standards & Technology (“NIST”) AI Risk Management Framework to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change,” this pillar calls for reforming NIST’s “AI priorities and goals.”
  • Prevent a Patchwork of Burdensome AI Regulation.  Following a failed attempt by Congressional Republicans to enact a moratorium on the enforcement of state and local AI regulations in July, the AI Action Plan called on federal agencies to limit federal AI-related funding to states with burdensome AI regulatory regimes and on the FCC to review state AI laws that may be preempted under the Communications Act.  Similarly, the AI framework calls on Congress to enact federal standards to prevent burdensome state AI regulation, while also countering “excessive foreign regulation” of Americans.
  • Stop Nefarious Uses of AI Against Americans.  In a nod to bipartisan support for state digital replica protections – which ultimately doomed Congress’s state AI moratorium this summer – this pillar calls on Congress to protect Americans against digital impersonation scams and fraud.  Additionally, this pillar calls on Congress to expand the principles of the federal TAKE IT DOWN Act, signed into law in May, to safeguard American schoolchildren from nonconsensual intimate visual depictions.
  • Defend Human Value and Dignity.  This pillar appears to expand on the policy of U.S. “global AI dominance in order to promote human flourishing” established by President Trump’s January 23 Executive Order by calling on Congress to reinvigorate “bioethical considerations” in federal policy and to “oppose AI-driven eugenics and other threats.”

  2. SANDBOX Act

Consistent with recommendations in the AI Action Plan and the AI framework, the SANDBOX Act would direct the White House Office of Science & Technology Policy (“OSTP”) to establish and operate an “AI regulatory sandbox program” with the purpose of incentivizing AI innovation, the development of AI products and services, and the expansion of AI-related economic opportunities and jobs.  According to Senator Cruz’s press release, the SANDBOX Act marks a “first step” in implementing the AI Action Plan, which called for “regulatory sandboxes or AI Centers of Excellence around the country where researchers, startups, and established enterprises can rapidly deploy and test AI tools.”

Program Applications.  The AI regulatory sandbox program would allow U.S. companies and individuals, or the OSTP Director, to apply for a “waiver or modification” of one or more federal agency regulations in order to “test, experiment, or temporarily provide” AI products, AI services, or AI development methods.  Applications must include various categories of information, including:

  • Contact and business information,
  • A description of the AI product, service, or development method,
  • Specific regulation(s) that the applicant seeks to have waived or modified and why such waiver or modification is needed,
  • Consumer benefits, business operational efficiencies, economic opportunities, jobs, and innovation benefits of the AI product, service, or development method,
  • Reasonably foreseeable risks to health and safety, the economy, and consumers associated with the waiver or modification, and planned risk mitigations,
  • The requested time period for the waiver or modification, and
  • Each agency with jurisdiction over the AI product, service, or development method.

Agency Reviews and Approvals.  The bill would require OSTP to submit applications to federal agencies with jurisdiction over the AI product, service, or development method within 14 days.  In reviewing AI sandbox program applications, federal agencies would be required to solicit input from the private sector and technical experts on whether the applicant’s plan would benefit consumers, businesses, the economy, or AI innovation, and whether potential benefits outweigh health and safety, economic, or consumer risks.  Agencies would be required to approve or deny applications within 90 days, with a record documenting reasonably foreseeable risks, the mitigations and consumer protections that justify agency approval, or the reasons for agency denial.  Denied applicants would be authorized to appeal to OSTP for reconsideration.  Approved waivers or modifications would be granted for a term of two years, with up to four additional two-year terms if requested by the applicant and approved by OSTP.

Participant Terms and Requirements.  Participants with approved waivers or modifications would be immune from federal criminal, civil, or agency enforcement of the waived or modified regulations, but would remain subject to private consumer rights of action.  Additionally, participants would be required to report incidents of harm to health and safety, economic damage, or unfair or deceptive trade practices to OSTP and federal agencies within 72 hours after the incident occurs, and to make various disclosures to consumers.  Participants would also be required to submit recurring reports to OSTP throughout the term of the waiver or modification, which must include the number of consumers affected, likely risks and mitigations, any unanticipated risks that arise during deployment, adverse incidents, and the benefits of the waiver or modification.

Congressional Review.  Finally, the SANDBOX Act would require the OSTP Director to submit to Congress any regulations that the Director recommends for amendment or repeal “as a result of persons being able to operate safely” without those regulations under the sandbox program.  The bill would establish a fast-track procedure for joint resolutions approving such recommendations, which, if enacted, would immediately repeal the regulations or adopt the amendments recommended by OSTP.

The SANDBOX Act’s regulatory sandbox program would sunset in 12 years unless renewed.  The introduction of the SANDBOX Act comes as states have pursued their own AI regulatory sandbox programs – including a sandbox program established under the Texas Responsible AI Governance Act (“TRAIGA”), enacted in June, and an “AI Learning Laboratory Program” established under Utah’s 2024 AI Policy Act.  The SANDBOX Act would require OSTP to share information with these state AI sandbox programs if they are “similar or comparable” to the SANDBOX Act, in addition to coordinating reviews and accepting “joint applications” for participants with AI projects that would benefit from “both Federal and State regulatory relief.” 

Holly Fechner

Holly Fechner has two decades of legal, legislative, and public policy experience in the public and private sectors.  Ms. Fechner has a broad-based practice handling legislative and regulatory matters for clients in areas including healthcare, tax, intellectual property, education, and employee benefits.  Drawing on her extensive congressional and private sector experience, Ms. Fechner offers clients comprehensive advocacy services, including strategic advice, substantive legal and regulatory expertise, and policy and message development.  She has a proven track record of helping clients fulfill their government affairs goals.

Matthew Shapanka

Matthew Shapanka draws on more than 15 years of experience – including on Capitol Hill, at Covington, and in state government – to advise and counsel clients across a range of industries on significant legislative, regulatory, and enforcement matters. He develops and executes complex, multifaceted public policy initiatives for clients seeking actions by Congress, state legislatures, and federal and state government agencies, many with significant legal and political opportunities and risks.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters within the Committee’s jurisdiction, including federal election law and campaign finance, and oversight of the Federal Election Commission, legislative branch agencies, security and maintenance of the U.S. Capitol Complex, and Senate rules and regulations.

Most significantly, Matt led the Rules Committee staff work on the Electoral Count Reform and Presidential Transition Improvement Act – landmark bipartisan legislation, signed into law by President Biden in 2022, that updated the antiquated process of certifying and counting electoral votes in presidential elections.

As Chief Counsel, Matt was a lead attorney on the joint bipartisan investigation (with the Homeland Security and Governmental Affairs Committee) into the security planning and response to the January 6, 2021 attack on the Capitol. In that role, he oversaw the collection and review of documents, led interviews and depositions of key government officials, advised the Chairwoman and Committee members on two high-profile joint hearings, and drafted substantial portions of the Committees’ staff report on the attack. He also led oversight of the Capitol Police, Architect of the Capitol, Senate Sergeant at Arms, and executive branch agencies involved in implementing the Committees’ recommendations, including additional legislation and hearings.

Both in Congress and at the firm, Matt has prepared many corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at legislative, oversight, or nomination hearings before congressional committees, as well as witnesses appearing at congressional depositions and transcribed interviews. He is also an experienced legislative drafter who has composed dozens of bills introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas.

In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, as well as the election and political laws of states and municipalities across the country.

Before law school, Matt worked as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on all aspects of state-level policy, communications, and compliance for federal stimulus funding awarded to Massachusetts under the American Recovery & Reinvestment Act of 2009. He has also worked for federal, state, and local political candidates in Massachusetts and New Hampshire.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients for complying with federal, state, and global privacy and competition frameworks and AI regulations. He also assists clients in investigating compliance issues, preparing for federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.