
It Ain’t Me, Babe

By Caitlin Triplett on March 14, 2026

Now that the AI tech giants swear* they have fixed the problem of chatbots providing teens with the information needed to plan and execute violent attacks, lesser concerns are coming to the fore, such as AI giving people writing advice ranging from bad to mediocre in the name of a real person, without the person’s knowledge, approval or compensation.

The writing was clunky, the point weirdly unspecific. Grammarly had been offering paying users editing suggestions, supposedly from a handful of writers — including me. Pop a piece of prose into its service and little editing bubbles would emerge on the page from “Julia Angwin,” suggesting things like, “Lead with personal stakes to boost immediacy.” That sentence about Meta was something Grammarly apparently thought I would suggest.

Like all writers, I live by my wits. My ability to earn a living rests on my ability to craft a phrase, to synthesize an idea, to make readers care about people and places they can only access through words on a page. Grammarly hadn’t checked with me before using my name. I only learned that an A.I. company was selling a deepfake of my mind from an article online.

Was it a good sentence? A bad sentence? A “clunky” sentence? It doesn’t matter. What it was not was a sentence suggested by “Julia Angwin.” What it was not was a use of the name of “Julia Angwin” with her consent or approval. What it was not was a use of the name “Julia Angwin” with compensation.

Grammarly just glommed a random name out of nowhere and stuck it onto its chatbot’s recommendation, good, bad or otherwise, without any pretense of legal authority to do so. As for the real Julia Angwin, she would never have known that her name had been stolen and used but for kismet. And she wasn’t the only one.

And it wasn’t just me. Superhuman — the parent company of Grammarly — made fake editor versions of a range of people, including the novelist Stephen King, the late feminist author bell hooks, the former Microsoft chief privacy officer Julie Brill, the University of Virginia data science professor Mar Hicks and the journalist and podcaster Kara Swisher.

Angwin calls it “AI exploitation,” and it most certainly is. Her name, reputation, credibility and integrity were taken from her by a chatbot that sucked up whatever she wrote, or whatever AI imagined she wrote, and regurgitated it through whatever filter AI might use. As far as she knew, the chatbot might have thrown her name in as the “source” of its advice for kicks, needing some sort of attribution to lend credence to its advice and randomly picking the name “Julia Angwin” to do yeoman’s service.

So what’s a writer to do?

At this point in a story about A.I. exploitation, I would normally bemoan the need for new laws to tackle the novel harms of a new technology. But in this case, there is an old law that’s able to do the job.

In my home state of New York, the century-old right of publicity law prohibits a person’s name or image from being used for commercial purposes without her consent. At least 25 states have similar publicity statutes. And now, I’m using this law to fight back. I am the lead plaintiff in a class-action lawsuit against Superhuman in the U.S. District Court for the Southern District of New York, alleging that it violated New York and California publicity laws by not seeking consent before using our names in a paid service.

While the existence of a cause of action beats the alternative, it still puts the aggrieved in the unpleasant position of having to sue: to retain counsel, to suffer the burdens of litigation, all to address the fact that Grammarly’s chatbot stole her good name to make a buck.

And what does Grammarly have to say about it?

After a wave of criticism, the Superhuman chief executive, Shishir Mehrotra, announced that the company was disabling the feature while it reimagined how to give “experts real control over how they want to be represented — or not represented at all.” In a statement to The Atlantic, Mr. Mehrotra said that the company “believes the legal claims are without merit and will strongly defend against them.”

It would seem the first step in giving “experts real control” is to obtain their consent to use their names and then agree to compensate them for doing so. The idea that Angwin’s claims are “without merit” is absurd, but worse is the lack of recognition from the AI tech community that their chatbots can’t just take anything they want off social media for their own purposes. Well, not legally, anyway. But if you don’t catch them doing so, would you even know?

*Would they lie?

  • Posted in:
    Criminal
  • Blog:
    Simple Justice
  • Organization:
    Scott H. Greenfield

Copyright © 2026, LexBlog. All Rights Reserved.