The session “AI Goes to Court” at the MIT EmTech Digital conference was led by Amir Ghavi, AI, Tech Transactions & IP Partner at Fried Frank LLP. Ghavi, who represents several prominent AI companies, including Stability AI, took the stage with a lively and engaging presentation, acknowledging the challenge of discussing lawsuits right before lunch. He aimed to shed light on why foundation model developers are being sued so frequently, with 24 lawsuits already filed in 2024.

Copyright Basics and Historical Context

Ghavi began with a crash course in copyright law, tracing its origins to the Statute of Anne in 1709 and explaining its purpose: to protect original works fixed in a tangible medium. He clarified that while copyright covers literary, dramatic, and artistic works, it does not protect ideas, facts, or styles—phew, the Impressionists can breathe easy.

The Current Legal Landscape

Most of the lawsuits against AI companies center on copyright infringement, with artists and authors claiming that their works were used without permission to train AI models. Ghavi outlined the two main types of claims:

  1. Input Claims: These assert that AI models were trained on copyrighted material without consent.
  2. Output Claims: These argue that the outputs of AI models are derivative works, or are substantially similar to the originals, and therefore infringing.

The defense often hinges on the doctrine of “fair use,” a concept as essential to copyright law as a punchline is to a joke. Fair use permits limited use of copyrighted material without permission for purposes such as parody, commentary, and education.

Notable Lawsuits and Their Implications

Ghavi provided an overview of significant lawsuits, including:

  • Sarah Andersen et al. v. Stability AI: A class action suit by artists alleging that AI models were trained on their works without permission.
  • Getty Images v. Stability AI: Getty claims its images were used to train AI models without proper licensing.

He also highlighted high-profile cases like the one involving Sarah Silverman, noting the irony of a comedian suing for copyright infringement given the inherently derivative nature of comedy.

Historical Analogies: Photocopiers, VCRs, and Napster

Drawing parallels to past technological disruptions, Ghavi discussed:

  • Photocopiers: Publishers initially saw them as a threat, but courts ruled that making copies for personal use was fair use.
  • VCRs: The Supreme Court ruled that recording TV shows for personal use was fair use, paving the way for the home video market.
  • Napster: Unlike AI, Napster involved direct copying of music files, making it a poor analogy for current AI copyright issues.

The Future of AI Litigation

Ghavi predicted that while the pace of IP litigation may slow, other legal challenges, such as algorithmic bias and antitrust claims, will rise. He noted that most legal scholars view the training of AI models on publicly available data as fair use, though this remains a contentious issue.

Audience Interaction and Key Takeaways

During the Q&A session, Ghavi addressed questions about the legality of web scraping, the implications of specific lawsuits, and the potential for licensing deals as an alternative to litigation. He emphasized the importance of adapting existing legal frameworks to new technologies rather than reinventing the wheel.

Ghavi concluded with a pragmatic view: while legal battles are inevitable, the broader societal and policy implications of AI will shape its development. He stressed the need for a balanced approach that considers both the rights of content creators and the benefits of AI innovation.

Conclusion

The “AI Goes to Court” session provided a detailed and engaging overview of the current legal challenges facing AI developers. Ghavi’s insights underscored the complexity of applying traditional copyright laws to modern AI technologies and highlighted the need for ongoing dialogue and adaptation. By drawing on historical precedents and emphasizing the principle of fair use, Ghavi offered a nuanced perspective on the evolving legal landscape of AI.