Welcome back, my astute legal innovators! In our last discussion, we explored the impact of AI on the legal profession, examining the data and uncovering the truth behind its adoption. But as AI continues to advance at a breakneck pace, a new challenge emerges: how do we ensure that our legal system keeps up with this rapidly evolving technology? 🧑⚖️🚀
For those just joining the conversation, buckle up – we’re about to embark on a thrilling journey. From stretching existing laws to crafting new legislation, we’ll examine the legal system’s response to the challenges posed by AI. 📜💡
This substack, LawDroid Manifesto, is here to keep you in the loop about the intersection of AI and the law. Please share this article with your friends and colleagues and remember to tell me what you think in the comments below.
This article marks my second contribution to the National Law Review’s Artificial Intelligence newsletter, building upon the themes we explored in the first piece. (As I am not limited by a word count here, I’ve expanded on those ideas.)
As AI becomes increasingly interwoven into the fabric of modern society, the legal system faces a daunting task: how do we regulate this powerful technology? From deepfakes to AI-generated content, the challenges posed by AI often push the boundaries of existing legal frameworks. We’ll examine the adaptability of current laws, the necessity of new legislation, and the role of case law in shaping the new legal landscape for AI. I’ll also propose a framework for determining which is the more suitable approach.
If this sounds interesting to you, please read on…
Stretch Existing Laws Or Pass New Legislation?
AI is now integral to modern life, curating our news, driving autonomous vehicles, and aiding in medical diagnoses. While these advancements bring convenience and efficiency, they also present legal and ethical challenges that existing laws did not anticipate. Deepfake technology exemplifies this issue: hyper-realistic fake videos of individuals doing or saying things they never did. For instance, scammers have used AI to create deepfake videos featuring popular TV doctors to promote counterfeit health products on social media platforms. This raises a new but pivotal question:
Can current laws stretch to address AI-related issues, or is there a pressing need for new legislation specifically tailored to AI’s complexities?
1. Existing Laws: Stretching to Fit AI
One might wonder whether our current legal frameworks are robust enough to manage the challenges posed by AI. Balancing the benefits of AI with its potential risks requires a critical examination of our legal frameworks. The Federal Trade Commission Act, for instance, empowers the FTC to combat deceptive advertising and unfair business practices. This authority doesn’t exclude AI developers and companies that incorporate AI into their products and services. AI companies are held to the same standards of truthfulness and transparency as traditional businesses. Misrepresenting an AI product’s capabilities can, and does, lead to FTC enforcement actions.
On September 25, 2024, as part of its Operation AI Comply, the FTC announced five cases exposing AI-related deception. In the FTC’s complaint against DoNotPay, a company that marketed itself as “the world’s first robot lawyer” and an “AI lawyer,” the FTC alleged that DoNotPay’s services did not live up to its claims, thereby misleading consumers. The proposed settlement requires DoNotPay to cease its deceptive practices, pay $193,000, and inform certain subscribers about the case.
A week earlier, the Federal Election Commission voted 5-1 to issue a notice clarifying that the existing Federal Election Campaign Act’s prohibition against “fraudulent misrepresentation” applies to AI-generated content and is “technology neutral.” The notice specifically addressed the use of AI to generate misleading campaign ads that appear to be authorized by opponents when they are not, and is explicitly aimed at candidates running in the 2024 election cycle.
2. New Legislation: Addressing AI’s Unique Challenges
While existing laws cover certain aspects of AI, there are areas where they are stretched thin, necessitating new legislation. Deepfakes are a good example of where existing laws fall short. Using deepfake technology, a bad actor can create realistic videos depicting individuals in explicit situations without their consent. The problem is that existing laws contemplate the publication of real-life explicit videos without the individual’s consent, so-called “revenge porn.”
California Penal Code Section 647(j)(4) criminalizes the intentional distribution of non-consensual explicit images: “Any person who intentionally distributes the image of the intimate body part or parts of another identifiable person, or an image of the person depicted [without consent]…” This language focuses on the distribution of actual images of an identifiable person, implying that the law applies to real photographs or videos. The statute does not explicitly address digitally fabricated or AI-generated content, where the depicted individual’s likeness is synthetically created.
California’s Assembly Bill No. 602, effective October 2019, provided individuals with a cause of action against creators and distributors of non-consensual sexually explicit material produced using digital or electronic technology. The law redefined “depicted individual” as someone who appears, as a result of digitization, to be performing in sexually explicit material they did not actually perform. This legislation allows victims to seek damages and obtain injunctions against those responsible for creating or distributing such deepfake content. As of September 2024, 23 states have passed some form of nonconsensual deepfake law.
3. Case Law: Reshaping the Application of Existing Laws to AI
Large language models (LLMs) have raised complex questions about copyright. LLMs are created through training on vast amounts of data, some of which is copyrighted material. In December 2023, the New York Times filed a lawsuit against OpenAI and Microsoft, alleging that the companies used millions of its articles without permission to train their AI models. Similar lawsuits have been filed by numerous authors. OpenAI contends that ingesting copyrighted works to create LLMs falls under the fair use doctrine because the use is transformative and does not substitute for the original works. It is unclear whether these arguments will win the day, but the case will set a precedent when it is decided.
The current copyright framework in the United States protects original works of authorship fixed in a tangible medium, created by human authors. AI-generated works raise questions about who holds the copyright—the programmer, the user, or perhaps no one at all. In March 2023, the U.S. Copyright Office provided this guidance: “When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship. As a result, that material is not protected by copyright and must be disclaimed in a registration application.”
In October 2024, artist Jason M. Allen appealed the U.S. Copyright Office’s denial of copyright registration for his AI-generated artwork, “Théâtre D’opéra Spatial.” The Office had refused registration, stating that works created solely by AI without human authorship are not eligible for copyright protection. Allen’s appeal challenges this stance, arguing for recognition of human creativity in the use of AI tools. We shall have to wait and see whether case precedent will stretch the law to fit or leave a gap that needs to be filled by new legislation.
Conceptual Framework for Determining When AI Requires New Legislation vs. Reinterpretation of Existing Laws
As Artificial Intelligence continues to evolve and integrate into various aspects of society, it presents both opportunities and challenges. To decide whether new legislation is necessary for AI applications or if existing laws can be reinterpreted to address AI-related issues, we can use a conceptual framework that considers several key factors:
1. Novelty and Unanticipated Challenges
Assess whether the AI technology introduces fundamentally new issues not contemplated by existing laws.
- Technological Innovation: Determine if the AI application represents a significant technological leap that existing laws did not foresee.
  Example: Deepfake technology enables the creation of highly realistic but falsified videos. Existing laws on defamation and impersonation may not fully address the complexities introduced by deepfakes.
- Unforeseen Legal Gaps: Identify any legal areas where the AI technology creates new types of harm or infringement that current laws do not cover.
  Example: AI-generated art raises questions about copyright ownership since traditional copyright laws assume human authorship.
2. Adequacy of Existing Legal Frameworks
Evaluate whether current laws can be effectively applied to the AI technology with reasonable interpretation.
- Legal Elasticity: Assess if existing statutes have the flexibility to encompass AI applications through judicial interpretation or regulatory guidance.
  Example: The FTC used existing consumer protection laws to address misleading claims by DoNotPay, indicating that current laws can sometimes be stretched to cover AI-related misconduct.
- Precedent and Case Law: Consider previous court decisions that have extended existing laws to new technologies.
  Example: In White v. Samsung Electronics America, Inc., the court expanded the right of publicity to cover the appropriation of identity, suggesting that common law can adapt to new contexts.
3. Potential for Harm and Impact on Rights
Consider the magnitude and nature of harm that could result from the AI technology.
- Severity of Risks: Determine if the AI application poses significant risks to individuals, groups, or society that are not adequately mitigated by existing laws.
  Example: Autonomous vehicles present safety risks that may not be fully addressed by traditional product liability laws due to the autonomous decision-making involved.
- Fundamental Rights: Evaluate whether the AI technology affects fundamental rights such as privacy, equality, or freedom of expression.
  Example: AI algorithms used in hiring processes can inadvertently perpetuate discrimination, impacting equal employment opportunities protected under civil rights laws.
4. Clarity and Enforcement Challenges
Assess whether applying existing laws would lead to ambiguity or enforcement difficulties.
- Legal Ambiguity: Determine if reinterpreting existing laws would create uncertainty, inconsistent application, or loopholes.
  Example: Stretching defamation laws to cover deepfakes might result in varying interpretations across jurisdictions, leading to legal unpredictability.
- Enforcement Practicality: Consider if regulators and courts have the capacity to enforce existing laws effectively against AI-related violations.
  Example: Regulators may lack technical expertise to monitor complex AI systems under current laws, necessitating specialized legislation.
5. Technological Neutrality vs. Specificity
Decide whether laws should remain technology-neutral or if technology-specific regulations are needed.
- Advantages of Technology-Neutral Laws: Existing laws that are framed broadly can be applied to various technologies, promoting consistency.
  Example: Privacy laws like HIPAA protect health information regardless of the technology used, covering AI applications in healthcare.
- Need for Specific Regulations: Some AI technologies may require tailored legislation to address unique characteristics.
  Example: Specific laws targeting deepfakes can define and criminalize the creation and distribution of such content, providing clear legal standards.
6. International and Cross-Border Considerations
Account for the global nature of AI and the need for harmonization across jurisdictions.
- Global Consistency: Evaluate if new legislation would facilitate international cooperation and consistency in regulating AI.
  Example: The European Union’s AI Act aims to set standards that could influence global AI regulation, promoting harmonized practices.
- Jurisdictional Challenges: Recognize that AI applications often operate across borders, complicating enforcement under existing national laws.
  Example: Data processed by AI systems may involve users from multiple countries, requiring international data protection agreements.
7. Ethical Implications and Public Trust
Reflect on the ethical concerns raised by the AI technology and the importance of maintaining public trust.
- Ethical Standards: Determine if existing laws adequately address ethical issues such as transparency, accountability, and fairness.
  Example: AI decision-making lacks transparency (the “black box” problem), which may erode public trust if not properly regulated.
- Societal Impact: Consider the broader societal implications of the AI technology, including potential biases and social inequalities.
  Example: Biased AI algorithms can disproportionately affect marginalized communities, necessitating laws that enforce ethical AI development.
8. Precedent for Regulatory Success
Look at historical examples where new legislation was enacted to address technological advancements.
- Past Legislative Responses: Examine how lawmakers have previously addressed emerging technologies that posed similar challenges.
  Example: The advent of the internet led to new laws like the Digital Millennium Copyright Act (DMCA) to address digital copyright issues.
- Regulatory Evolution: Recognize that technology often outpaces legislation, requiring periodic updates to legal frameworks.
9. Stakeholder Engagement
Involve various stakeholders in the decision-making process to ensure comprehensive understanding and acceptance.
- Expert Consultation: Engage with technologists, legal experts, industry leaders, and ethicists to gain insights into the AI technology’s implications.
- Public Input: Consider public opinion and concerns, especially for technologies that significantly impact society.
10. Balancing Innovation and Regulation
Strive to protect society without unduly hindering technological progress.
- Promoting Innovation: Ensure that regulations do not stifle creativity and advancement in AI technologies.
  Example: Overly restrictive laws might discourage investment and development in beneficial AI applications.
- Risk Mitigation: Implement regulations that mitigate risks while allowing for responsible innovation.
Applying the Framework: Examples
Deepfakes
- Novelty and Harm: Deepfakes introduce new forms of deception with significant potential harm to individuals and society.
- Existing Law Gaps: Current defamation and privacy laws may not fully address the creation and spread of deepfakes.
- Conclusion: New legislation specifically targeting deepfakes is warranted to address these unique challenges effectively.
AI in Employment
- Existing Laws Applicability: Anti-discrimination laws can be applied to AI tools used in hiring, holding employers accountable.
- Regulatory Guidance Sufficiency: Agencies like the EEOC can provide guidelines to ensure compliance.
- Conclusion: Reinterpretation and enforcement of existing laws, supplemented by regulatory guidance, may suffice.
AI-Generated Content
- Authorship Challenges: AI-generated works challenge traditional notions of authorship under copyright law.
- Legal Ambiguity: Existing laws do not clearly address ownership rights for AI-created content.
- Conclusion: New legislation or amendments to copyright law may be necessary to define and protect rights related to AI-generated works.
Closing Thoughts
The decision to enact new legislation or rely on reinterpretation of existing laws for AI technologies should be guided by a careful analysis of the specific circumstances and implications. This conceptual framework provides a structured approach to evaluate:
- The novelty and unanticipated challenges posed by the AI technology.
- The adequacy and flexibility of existing legal frameworks.
- The potential for harm and the impact on fundamental rights.
- The clarity and enforceability of regulations.
- The balance between technological neutrality and the need for specificity.
- International considerations and the importance of harmonization.
- Ethical implications and the necessity of maintaining public trust.
- Historical precedents and lessons learned from past regulatory responses.
- Stakeholder engagement to inform and legitimize the decision-making process.
- The need to balance innovation with societal protection.
By systematically applying this framework, policymakers and legal professionals can make informed decisions about when new legislation is essential and when existing laws can be effectively adapted.
Ultimately, safeguarding society from the potential harms of AI while fostering innovation requires a dynamic legal framework that is both flexible and robust. Collaboration among lawmakers, technologists, legal experts, and the public is crucial to achieving this balance. By proactively addressing the legal challenges posed by AI, we can harness its benefits and mitigate its risks, ensuring that technological progress contributes positively to society.
By the way, if you’d like to learn more about how AI works and how it will impact the legal profession, you should apply to LawDroid University!
My NEW 5-part webinar series, Generative AI for Lawyers: Empowering Solos and Small Law Firms, is now available at LawDroid University.
LawDroid University is available for free for everyone to use.
- Free to use – It’s 100% free educational content for everyone, just sign up below.
- Insightful – Get educated about the intersection of artificial intelligence and the law as taught by experts.
- Value Packed – Filled with videos, summaries, key takeaways, quotable quotes, transcripts and more! Find sessions on AI and the State of the Art, Ethics, Access to Justice, Practice of Law, Education, and the Business of Law.
- AI Q&A – Ask a chatbot questions about the content and get fully informed answers immediately.
👉 To immerse yourself in this enriching educational voyage, learn more, or sign up, please visit https://lawdroid.com/subscriptions/lawdroid-university/.