Australian Government AI Transparency Guide Helpful for US Companies too

By Odia Kagan on December 23, 2025
From the Australian Government's National AI Centre: "Being Clear about AI Generated Content"

The Australian Government's National AI Centre has published a guide, "Being Clear about AI Generated Content," setting out when and how to consider disclosing the use of AI.

The guide suggests weighing the level of AI involvement against its potential impact: the higher the potential negative impact, the more likely you are to need transparency mechanisms to disclose AI use, and the more robust those mechanisms should be. Similarly, the higher the AI's involvement (e.g., fully automated operation or minimal human oversight), the greater the need for transparency.

The guide gives some practical examples:

  • A journalist using an AI-enabled word processor to write a news article, who personally reviews and edits the final product: risk is deemed moderate. The guide recommends considering a label such as "Article enhanced by AI".
  • Developing an AI system that enhances medical images for diagnostic insights: overall risk is deemed high. The guide recommends labelling ("Image enhanced by AI"), watermarking the images, and developing and maintaining secure, accessible and complete metadata logs.
  • A lawyer using an AI system to produce fully AI-generated drafts of legal contracts: overall risk is deemed high. The guide recommends labelling the contracts "Initial draft generated by AI" within the law firm, ensuring the lawyer retains human oversight and responsibility for accuracy, and maintaining metadata logs.

Helpful for US Companies as well:

More than just best practice: in the US, a number of laws require transparency regarding the use of AI.

  • The FTC Act and state unfair and deceptive acts or practices consumer protection laws would require disclosing AI use where that use is a material fact whose omission is likely to mislead reasonable consumers.
  • The Utah Artificial Intelligence Policy Act requires disclosing to a person, clearly and conspicuously, that they are interacting with AI at the outset of the interaction in certain cases, and whenever the person asks.
  • California's SB 942 and AB 2013 require a similar manifest disclosure (labelling) but also, in some cases, a latent disclosure and detailed information about the datasets used.
  • Maine's Communications with Consumers via AI law requires disclosing to users that they are interacting with an AI chatbot.

US companies may find this guide helpful for some disclosure tips.

  • Posted in:
    Privacy & Data Security
  • Blog:
    Privacy Compliance & Data Security
  • Organization:
    Fox Rothschild LLP

Copyright © 2026, LexBlog. All Rights Reserved.