On January 23, 2026, a bipartisan group of 35 state Attorneys General issued a letter to xAI expressing their concern "about artificial-intelligence produced deepfake nonconsensual intimate images (NCII) of real people, including children, wherever it is made or found," with particular focus on xAI's chatbot, Grok. This follows the letter sent on January 13, 2026, to X and other AI companies by eight United States senators requesting information on non-consensual "bikini" and "non-nude" images produced by their products.
The letter "strongly urges [xAI] to be a leader in this space by further addressing the harms resulting from this technology." It further calls on xAI to "immediately take all available additional steps to protect the public and users of your platforms, especially the women and girls who are the overwhelming target of NCII."
The letter outlines the ways Grok can easily be used as a "nudify" tool that can "embarrass, intimidate, and exploit people by taking away their control over how their bodies and likenesses are portrayed." It alleges that Grok not only enables users to create these images with a mere click, but is also "encouraging this behavior by design."
Grok is not being used to alter images of adults alone; the letter describes how the chatbot has "altered images of children to depict them in minimal clothing and sexual situations…including photorealistic images of 'very young' people engaged in sexual activity."
The letter emphasizes the importance of this issue to the Attorneys General and requests that xAI explain what measures it will take to prohibit Grok from producing NCII, how it will eliminate existing content, how it will suspend and report to authorities users who produce such content, and how it will "grant X users control over whether their content can be edited by Grok."
We will continue to update you on the information provided by the companies in response to these inquiries.