40 State AGs Warn against Delusional LLM Outputs

By Odia Kagan on December 12, 2025

Attorneys General from 40 US states sent a letter to leading LLM companies earlier this week warning that sycophantic and delusional outputs produced by LLMs constitute "dark patterns" and may expose LLM companies to liability under existing state laws.

"Sycophantic outputs" is a term referring to an artificial intelligence model single-mindedly pursuing human approval, including by tailoring responses to exploit human evaluators.

In the letter, the AGs state that LLM companies must ensure that their products comply with applicable laws both at the design stage and after release, through continued monitoring.

The AGs note that their states impose civil and common law requirements: (1) to warn users of applicable risks, (2) to avoid marketing defective products, (3) to refrain from engaging in unfair, deceptive, or unconscionable acts and practices, and (4) to safeguard the privacy of children online.

Additional laws that may be implicated by delusional or sycophantic outputs include: 

  • Robust criminal codes that may prohibit some conversations that generative AI (GenAI) is currently having with users (e.g., encouraging an individual to commit a criminal act)
  • Laws that prohibit providing mental health advice without a license
  • Specific statutes designed to protect children when they engage with an online service or product

The AGs close the letter by requesting that the LLM companies adopt a robust remediation plan, specifically addressing sycophantic and delusional outputs, that includes: 

  • Policies and procedures concerning sycophantic and delusional outputs for GenAI products 
  • Mandatory training for employees
  • Reasonable and appropriate safety tests on their GenAI models
  • Well-documented recall procedures with provable track records of success 
  • Clear and conspicuous warnings, which are permanently viewable on the same screen that a person provides inputs, regarding unintended or harmful output 
  • Assigning responsibility to named executives and designated individuals 
  • Allowing independent third-party processes to enhance accountability, including independent third-party audits reviewable by state and federal regulators; regular, formal impact assessments on child safety; and allowing independent third parties (e.g., academics and civil society) to evaluate systems pre-release without retaliation 
  • Developing and publishing detection and response timelines 
  • Publicly committing to releasing safety testing results 

This comes amid news of the new Trump Executive Order, which seeks to develop a single US-wide standard for AI regulation.

Companies in any industry that use but do not develop AI chatbots should also be mindful of the risks the AGs flagged in their letter, as they too may incur liability in connection with their role as the consumer-facing "deployer" of the output, even if they did not develop the system.

If you are considering adopting an AI solution, you should mitigate your risk by:

  • Understanding how the product works and uses your input
  • Conducting diligence about the vendor and the product
  • Conducting a risk assessment for key risks, including delusion and bias (being especially mindful when there may be children users) [This is required in certain cases by US State Privacy Laws]
  • Ensuring that sufficient disclosures are in place [This is now required in several State AI transparency laws]
  • Ensuring that the vendor has a good quality assurance and monitoring process to address issues when they are flagged [This has been specifically flagged by the FTC in recent enforcement cases]

The AGs' letter is available at: https://assets.law360news.com/2421000/2421114/ai-multistate-letter-letters-2025.pdf
  • Posted in:
    Privacy & Data Security
  • Blog:
    Privacy Compliance & Data Security
  • Organization:
    Fox Rothschild LLP
