Why AI Copyright Litigation Matters Now

The litigation wave has moved from theory to active precedent

AI copyright litigation no longer sits in the abstract. Courts have now issued early fair use rulings on AI training, and those rulings have started to shape how publishers, creators, model developers, and investors assess legal exposure. Morrison Foerster notes that 2025 produced the first fair use decisions in the AI training cases involving Meta and Anthropic, and that more training cases involving OpenAI and Google are expected to be decided in 2026.

That shift matters because the legal fight has moved from pleading-stage theories to actual merits analysis. Businesses can no longer treat AI copyright litigation as a distant policy debate. Federal judges are now testing how traditional fair use doctrine applies to model training, what counts as transformation, and how courts should think about market harm in an AI setting.

The volume of litigation also matters. Multiple courts across the country are now dealing with claims over training data, outputs, licensing, and piracy. Even where plaintiffs and defendants frame the disputes differently, the same core questions keep surfacing: what was copied, how it was acquired, what the model does with it, and whether the use harms an existing or emerging market for the original works. Morrison Foerster’s latest overview makes clear that 2026 is likely to bring more rulings, but not a final nationwide answer.

The law is developing unevenly

The early rulings do not point in one direction across the board. They show some convergence, but they also expose sharp disagreements. Morrison Foerster says a judicial consensus is developing around one point: training a general-purpose AI model is highly transformative. At the same time, courts still disagree on major issues tied to source material, market harm, and licensing.

Three fault lines stand out. First, courts are not treating pirated and lawfully acquired works the same way. Second, courts are split on whether downstream competition from AI-generated outputs belongs in the market harm analysis at the training stage. Third, the cases are testing whether a real licensing market for AI training data now exists and whether that market deserves legal weight under the fourth fair use factor.

For businesses and creators, this means AI copyright litigation has entered a more serious phase without becoming predictable. Some judges are willing to treat model training as highly transformative. Others remain more receptive to substitution and licensing market arguments. The result is a doctrine that is moving, but not settled.

The Three Fair Use Rulings So Far

Thomson Reuters v. Ross Intelligence

The first major fair use ruling came in Thomson Reuters v. Ross Intelligence. In February 2025, Judge Bibas rejected Ross’s fair use defense in a case involving Westlaw headnotes used to build an AI legal research tool. The court found the use commercial and not transformative, and it held that market harm weighed strongly in Thomson Reuters’ favor because Ross’s product competed in the same space and threatened a legitimate derivative market.

This ruling matters for two reasons. First, it gave copyright owners an early win in AI copyright litigation. Second, it treated the market harm factor with unusual force. Judge Bibas did not view the copying as a distant technological input removed from the original work’s commercial value. He treated it as a direct threat to an existing product and to a licensing market tied to the copied material.

The opinion also has limits. The court made clear that the case involved non-generative AI. That matters because later generative AI cases asked a different question about general-purpose model training and whether that kind of training transforms the original works in a broader way. So Thomson Reuters remains important, but it does not resolve the fair use analysis for every AI model.

Bartz v. Anthropic

Bartz v. Anthropic pushed the doctrine in a different direction. According to Morrison Foerster, Judge Alsup ruled in June 2025 that training Claude on lawfully acquired books was highly transformative fair use, while drawing a hard line against Anthropic’s use of pirated books in its central library. That split gave both sides something important. AI developers gained a strong fair use ruling on lawful training inputs. Copyright owners gained a clear judicial rejection of piracy as part of the training pipeline.

The key point in Bartz is the court’s separation of acquisition from training. The ruling treated lawful copies used for training differently from pirated copies amassed outside any legitimate acquisition channel. That means AI copyright litigation may turn as much on where the training data came from as on what the model ultimately does with it.

Bartz also narrowed one form of market harm argument. As summarized by Morrison Foerster, the court did not accept the idea that possible future competition from AI-generated works should control the fair use analysis at the training stage. That position sharply contrasts with arguments from rights holders who say model training cannot be separated from the outputs it enables.

Kadrey v. Meta Platforms

Kadrey v. Meta followed two days later and added another layer of complexity. Morrison Foerster reports that Judge Chhabria also found fair use on the record before the court, but his opinion was narrower and more cautious than it first appeared. He did not say all model training is lawful. He said these plaintiffs failed to build the right record and failed to present the strongest market harm case.

That matters because Kadrey was not a blanket win for AI companies. The ruling left room for future plaintiffs to do better with stronger evidence on market dilution, licensing harm, or the commercial effects of AI outputs on original works. In other words, Meta won the motion before the court, but the opinion did not close the door on future copyright claims built on a stronger factual record.

Taken together, Bartz and Kadrey show why AI copyright litigation remains unsettled even after two high-profile defense wins. Both rulings treated general model training as transformative. Neither ruling erased the importance of record development, source material, or market harm. That is why these cases matter so much. They shape the next round of arguments without ending the fight.

What These Rulings Actually Tell Us

Courts increasingly view general model training as highly transformative

A pattern is starting to emerge in AI copyright litigation. In the generative AI cases, courts have shown a greater willingness to view general model training as highly transformative. Morrison Foerster says a judicial consensus is developing around that point, even though the broader doctrine remains unsettled.

That does not mean every AI training use now qualifies as fair use. Thomson Reuters v. Ross Intelligence still cuts the other way, but that case involved non-generative AI and a narrower competitive setting. The generative AI cases involving Anthropic and Meta ask a different question: whether training a general-purpose model on large volumes of text changes the use enough to favor fair use under the first factor.

For readers trying to make sense of these rulings, the key point is this: courts are starting to separate general model training from direct substitution cases. That distinction does not end the fair use fight, but it does shape where the real battle now sits.

The biggest splits remain unresolved

The first split is over source material. Bartz drew a sharp line between lawfully acquired books and pirated books. That distinction matters because it suggests the fair use analysis may change depending on how the training data was obtained, not only how it was later used.

The second split is over market harm. Some courts are less willing to treat downstream competition from AI-generated outputs as part of the fair use analysis at the training stage. Others have shown more openness to broader market dilution theories, especially where plaintiffs build a stronger record. Morrison Foerster identifies this as one of the central open questions.

The third split is over licensing markets. Thomson Reuters treated the licensing market issue seriously, while the newer generative AI cases have not produced one clear rule on whether a training data licensing market should control factor four. That question matters because it could shape both litigation outcomes and licensing strategy across the AI industry.

The Anthropic Settlement Changed the Economics

Why the $1.5 billion settlement matters

The Anthropic settlement changed the economics of AI copyright litigation even without a final appellate ruling. AP reported that Judge William Alsup granted preliminary approval in September 2025 to a $1.5 billion settlement between Anthropic and authors who alleged the company used pirated books to train Claude. AP also reported that affected authors or publishers may receive about $3,000 per book.

That number matters because it creates a visible compensation benchmark. It shows that copyright claims tied to AI training can produce real financial exposure, especially where piracy is involved. Even if courts continue to debate fair use for lawfully acquired works, the settlement sends a strong signal about the cost of building or maintaining a pirated training library.

Piracy now carries a different risk profile

Bartz already made piracy a weak point for Anthropic. The settlement makes that weakness much more concrete. Judge Alsup’s ruling treated lawful training and pirated acquisition differently, and the settlement then attached a major dollar figure to the piracy side of the case.

That changes how rights holders and AI companies will evaluate risk going forward. Disputes over lawfully acquired material still raise hard fair use questions. Pirated inputs now carry a more direct litigation and settlement risk. In practical terms, companies training models on unlicensed and unlawfully copied works face a very different exposure profile than companies training on materials acquired through lawful channels. That assessment is an inference drawn from the Bartz ruling and the Anthropic settlement reports, not a holding in any single case.

The Biggest Pending Cases to Watch

NYT v. OpenAI and Microsoft

The New York Times case against OpenAI and Microsoft remains one of the most important pending cases. It is still in discovery, and one of the biggest disputes involves access to massive volumes of ChatGPT logs. Bloomberg Law reported that a federal judge upheld an order requiring OpenAI to turn over 20 million anonymized ChatGPT logs in the consolidated copyright litigation.

That discovery fight shows how far the case still is from a clean fair use ruling on the merits. The court is still dealing with evidence gathering, relevance, and privacy disputes. So while the case remains critical to watch, its current discovery posture suggests it is unlikely to deliver a near-term final answer on fair use.

Getty Images v. Stability AI

Getty Images v. Stability AI also remains important, but the UK decision did not create a sweeping copyright rule. Bird & Bird reports that the UK High Court largely dismissed Getty’s copyright claims after Getty abandoned its primary copyright case during trial, while also making limited trademark infringement findings for early versions of Stable Diffusion.

That result matters, but it is narrower than many headlines suggest. It does not settle the broader copyright fight over AI training across jurisdictions, and Getty continues to pursue related claims and appeals.

OpenAI and Google training cases expected in 2026

Morrison Foerster says courts are expected to decide AI training cases involving OpenAI and Google in 2026. Those decisions could sharpen the doctrine by forcing courts to address fair use, market harm, and licensing arguments in a new set of factual records.

Even so, Morrison Foerster also says 2026 is unlikely to bring final answers to the core copyright questions around AI training. That is the right expectation. The next wave of rulings may clarify the pressure points, but it is unlikely to end the split across courts. 

Is a Judicial Consensus Emerging?

Yes on transformation

Yes, but only on one part of the analysis. Courts are increasingly treating general-purpose model training as highly transformative. Morrison Foerster says a judicial consensus is developing on that point, especially in the generative AI training cases.

That does not mean courts are blessing all AI training. It means judges are showing more willingness to view model training as a new use with a different function from the original works. That trend appears in the Anthropic and Meta rulings, even though Thomson Reuters came out differently in a non-generative AI setting.

For anyone tracking AI copyright litigation, this is the strongest pattern so far. On transformation, the cases are starting to line up. On the rest of fair use, they are not. 

No on market harm and licensing

No clear consensus has emerged on factor four. Courts remain divided on whether market harm should include downstream competition from AI outputs and whether an AI training data licensing market deserves legal protection. Morrison Foerster identifies both issues as major open questions.

That split matters because factor four can decide the case. Thomson Reuters treated substitution and licensing harm seriously. The newer generative AI cases have taken a less uniform approach, especially where the record on market harm was underdeveloped or where the court focused more heavily on transformation. 

So the answer is mixed. Courts are moving toward one shared view on transformation, but they are still divided on piracy, licensing markets, and output-driven substitution. Those issues will keep driving AI copyright litigation forward.

What Small Creators and Publishers Can Do Now

The licensing market is no longer theoretical

Small creators and publishers should not assume the licensing market exists only for large media companies. Copyright Alliance now tracks a large and growing list of AI licenses, partnerships, text-and-data-mining deals, and collective licensing solutions across literary, music, audio, and image works. Its database includes deals involving publishers, independent outlets, and specialized content providers.

The infrastructure is also getting more practical. RSL offers machine-readable licensing terms for AI uses, including pay-per-crawl and pay-per-inference models. Cloudflare has introduced pay-per-crawl in private beta. Microsoft has promoted a publisher marketplace approach for licensed content. CCC continues to position licensing as a key commercial path for AI use of copyrighted works. Together, those developments show a real market structure is forming.

For smaller rightsholders, that matters because it changes the conversation from pure enforcement to licensing leverage. AI copyright litigation is still important, but a growing licensing market gives creators another path to monetization and control, an inference drawn from the expansion of licensing tools and recorded deal activity.

Registration and collective action now matter more

Small creators still face a scale problem. One blog, newsletter, or niche publication usually lacks the leverage to negotiate with a major AI company on its own. But machine-readable licensing standards and organized collectives are starting to reduce that weakness.

Registration also matters because enforcement still depends on it. A creator with unregistered work has a weaker position than a creator with a clean registration record and a documented content portfolio. Collective action strengthens that further by pooling rights, standardizing terms, and making smaller claims harder to ignore. Both points are inferences drawn from the growth of licensing infrastructure and the settlement pressure now visible in AI copyright litigation.

For creators and publishers, the practical move is clear:

  • Register important works early
  • Keep clean ownership records
  • Evaluate machine-readable licensing tools
  • Watch collective licensing and publisher group options
  • Treat AI licensing as a live business issue, not a future theory

The Legal Divide Is Getting Sharper

AI copyright litigation is starting to produce real doctrine, but it has not produced a stable rule. Courts are moving closer on transformation. They are still divided on piracy, licensing markets, and market harm. Morrison Foerster’s view is the right one here: more decisions are coming, but the hardest questions remain open. 

That is why this area still demands close attention. More courts will likely treat general model training as transformative. The biggest fights will remain focused on pirated inputs, licensing markets, and whether AI outputs can dilute or substitute for original works in ways copyright law should recognize.

If your business trains AI models, licenses content, or depends on copyrighted material in product development, now is the time to assess exposure. Traverse Legal can help you evaluate fair use risk, review training data practices, structure licensing strategy, and prepare for the next wave of AI copyright litigation.

The post AI Copyright Litigation: Where the Key Cases Stand first appeared on Traverse Legal.