
TRUMP America AI Act Bill Sets Direction for Future US AI Regulation

By Odia Kagan on March 20, 2026

On March 18, 2026, Senator Marsha Blackburn (R-TN) introduced the TRUMP AMERICA AI Act: formally, The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act. This massive, 291-page bill sets out to establish the first comprehensive federal framework for artificial intelligence regulation in the United States. The bill simultaneously addresses issues including AI innovation, protection of children, AI risk and liability, and intellectual property and content regulation.

This post sets out a short overview of the key provisions. Deeper dives will follow on how to approach the various sections where they intersect with your business.

The eye-openers:

  • Federal products liability framework for AI systems with a private right of action
  • Annual third party bias audit requirement for AI products 
  • Federal right of publicity, with no fair use protection for unauthorized reproduction for AI training or inference
  • Protection of minors from online platforms and AI chatbots
  • Repeal of platform liability protection under Section 230 of the CDA

Preemption: 

Contrary to expectations set by the AI Executive Order, the bill does NOT preempt generally applicable law, such as common law or laws addressing artificial intelligence.

AI Systems Liability:

The bill:

  • Creates a federal products liability framework for AI systems with a private right of action. Under the bill, developers can be held liable for harm caused by defective design, failure to warn, breach of express warranty, or an unreasonably dangerous product. The bill also bars unconscionable liability limitations in AI product contracts.
  • Requires every provider of a high-risk AI system to conduct an annual independent third-party audit to detect viewpoint discrimination or discrimination based on political affiliation. In addition, all covered entities must provide annual ethics training to all personnel using an FTC-established curriculum.
  • Imposes a (vague) duty on chatbot developers to exercise reasonable care in the design, development, and operation of chatbots to prevent and mitigate reasonably foreseeable harms to users. That vagueness is mitigated by an instruction to the FTC to promulgate rules setting minimum safeguards for compliance, with enforcement authority for the FTC and state attorneys general.
  • Risk-Based Framework for AI Systems: Requires systems trained using more than 10²⁶ integer or floating-point operations (a threshold that captures only the most powerful frontier models) to participate in the newly established Advanced Artificial Intelligence Evaluation Program within the Department of Energy, with additional mandatory disclosure obligations.

IP Protection:

The bill:

  • Creates a federal right of publicity protecting individuals' voice and visual likenesses from unauthorized digital replicas, applicable also post-mortem.
  • Expressly provides that the unauthorized reproduction, copying, or computational processing of copyrighted works for purposes of AI training, fine-tuning, development, or inference does NOT constitute fair use.
  • Directs NIST to develop voluntary standards for content provenance information, watermarking, and synthetic content detection, and requires providers to allow owners to attach provenance information, which may not be removed.
  • Enables copyright holders to obtain an administrative subpoena requiring disclosure of information about how their copyrighted works were used to train AI models.

Content moderation: 

  • Section 230 of the Communications Decency Act: Repealed in its entirety. This is the liability shield that has protected online platforms from civil liability for third-party content since 1996.
  • The bill requires federal agencies to procure only large language models that comply with "Unbiased AI Principles." The Director of the Office of Management and Budget (OMB) must issue implementing guidance within 120 days of enactment.

Children and Teens: 

  • Kids Online Safety Act (KOSA): (Deep dive incoming.) The bill imposes a duty of care on covered platforms to prevent and mitigate reasonably foreseeable harms to minors, and restricts the use of opaque algorithms. This section preempts conflicting state laws but allows states to enact stricter, more protective laws.
  • GUARD Act: Imposes criminal and civil obligations specifically protecting minors from AI chatbots and requires age verification.

Innovation: 

The bill also has several sections on AI research, protections for data centers, and encouraging AI innovation through the Center for AI Standards and Innovation (CAISI) at NIST.

  • Posted in:
    Privacy & Data Security
  • Blog:
    Privacy Compliance & Data Security
  • Organization:
    Fox Rothschild LLP

Copyright © 2026, LexBlog. All Rights Reserved.