This quarterly update highlights key legislative, regulatory, and litigation developments in the second quarter of 2024 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and data privacy and cybersecurity. 

I.       Artificial Intelligence

Federal Legislative Developments

  • Impact Assessments: The American Privacy Rights Act of 2024 (H.R. 8818, hereinafter “APRA”) was formally introduced in the House by Representative Cathy McMorris Rodgers (R-WA) on June 25, 2024.  Notably, while previous drafts of the APRA, including the May 21 revised draft, would have required algorithm impact assessments, the introduced version no longer has the “Civil Rights and Algorithms” section that contained these requirements.
  • Disclosures: In April, Representative Adam Schiff (D-CA) introduced the Generative AI Copyright Disclosure Act of 2024 (H.R. 7913).  The Act would require persons that create a training dataset that is used to build a generative AI system to provide notice to the Register of Copyrights containing a “sufficiently detailed summary” of any copyrighted works used in the training dataset and the URL for such training dataset, if the dataset is publicly available.  The Act would require the Register to issue regulations to implement the notice requirements and to maintain a publicly available online database that contains each notice filed.
  • Public Awareness and Toolkits: Certain legislative proposals focused on increasing public awareness of AI and its benefits and risks.  For example, Senator Todd Young (R-IN) introduced the Artificial Intelligence Public Awareness and Education Campaign Act (S. 4596), which would require the Secretary of Commerce, in coordination with other agencies, to carry out a public awareness campaign that provides information regarding the benefits and risks of AI in the daily lives of individuals.  Senator Edward Markey (D-MA) introduced the Social Media and AI Resiliency Toolkits in Schools Act (S. 4614), which would require the Department of Education and the Department of Health and Human Services to develop toolkits to inform students, educators, parents, and others on how AI and social media may impact student mental health.
  • Senate AI Working Group Releases AI Roadmap: On May 15, the Bipartisan Senate AI Working Group published a roadmap for AI policy in the United States (the “AI Roadmap”).  The AI Roadmap encourages committees to conduct further research on specific issues relating to AI, such as “AI and the Workforce” and “High Impact Uses for AI.”  It states that existing laws (concerning, e.g., consumer protection, civil rights) “need to consistently and effectively apply to AI systems and their developers, deployers, and users” and raises concerns about AI “black boxes.”  The AI Roadmap also addresses the need for best practices and the importance of having a human in the loop for certain high impact automated tasks.

Federal Regulatory Developments

  • Federal Communications Commission (“FCC”): FCC Chairwoman Jessica Rosenworcel asked the Commission to approve a Notice of Proposed Rulemaking (“NPRM”) seeking comment on a proposal to require a disclosure when political ads on radio and television contain AI-generated content.  According to the FCC’s press release, the proposal would require an on-air disclosure when a political ad—whether from a candidate or an issue advertiser—contains AI-generated content.  The requirements would apply only to those entities currently subject to the FCC’s political advertising rules, meaning it would not encompass online political advertisements.  Shortly after Chairwoman Rosenworcel’s statement, Commissioner Brendan Carr issued a statement indicating that there is disagreement within the Commission concerning the appropriateness of FCC intervention on this topic.
  • Department of Homeland Security (“DHS”): DHS announced the establishment of the AI Safety and Security Board (the “Board”), which will advise the DHS Secretary, the critical infrastructure community, private sector stakeholders, and the broader public on the safe and secure development and deployment of AI technology in our nation’s critical infrastructure. In addition, DHS Secretary Alejandro N. Mayorkas and Chief AI Officer Eric Hysen announced the first ten members of the AI Corps, DHS’s effort to recruit 50 AI technology experts to play pivotal roles in responsibly leveraging AI across strategic mission areas.
  • President’s Council of Advisors on Science and Technology (“PCAST”): PCAST released a report that recommends new actions that will help the United States harness the power of AI to accelerate scientific discovery.  The report provides examples of research areas in which AI is already impactful and discusses practices needed to ensure effective and responsible use of AI technologies.  Specific recommendations include expanding existing efforts, such as the National Artificial Intelligence Research Resource pilot, to broadly and equitably share basic AI resources, and expanding secure and responsible access of anonymized federal data sets for critical research needs.
  • U.S. Patent and Trademark Office (“USPTO”): The USPTO published guidance on the use of AI-based tools in practice before the USPTO.  The guidance informs practitioners and the public of the issues that patent and trademark professionals, innovators, and entrepreneurs must navigate while using AI in matters before the USPTO.  The guidance also highlights that the USPTO remains committed to not only maximizing the benefits of AI and seeing them distributed broadly across society, but also using technical mitigations and human governance to cabin risks arising from AI use in practice before the USPTO.
  • National Security Agency (“NSA”): The NSA released a Cybersecurity Information Sheet (“CSI”) titled “Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems.”  As the first CSI led by the Artificial Intelligence Security Center, the CSI is intended to support National Security System owners and Defense Industrial Base companies that will be deploying and operating AI systems designed and developed by an external entity.

State Legislative Developments

  • Algorithmic Discrimination & Consumer Protection: The Colorado AI Act (SB 205) was signed into law on May 17, making Colorado the first state to enact AI legislation addressing risks of algorithmic discrimination in the development and deployment of AI.  The Act, which takes effect February 1, 2026, primarily regulates the use of “high risk AI,” or AI systems that make, or are a substantial factor in making, consequential decisions on behalf of consumers.  Key requirements include: a duty of care for AI developers and deployers to prevent algorithmic discrimination; developer disclosures of information about training data, performance, and discrimination safeguards; reporting to the state Attorney General of risks or instances of algorithmic discrimination; deployer “risk management policies and programs” for mitigating algorithmic discrimination risks; deployer algorithmic discrimination impact assessments; notices to consumers affected by AI consequential decisions; and opportunities for consumers to correct personal data and appeal adverse decisions.  On June 13, Colorado Governor Jared Polis, Colorado Attorney General Phil Weiser, and Colorado Senate Majority Leader Robert Rodriguez issued a public letter announcing a “process to revise” the Act to “minimize unintended consequences associated with its implementation” and consider “delays in the implementation of this law to ensure . . . harmonization” with other state and federal frameworks.
  • Election-Related Synthetic Content Laws: Alabama (HB 172), Arizona (SB 1359), Colorado (HB 1147), Florida (HB 919), Hawaii (SB 2687), Mississippi (SB 2577), and New York (A 8808) enacted laws regulating the creation or dissemination of AI-generated election content or political advertisements, joining Idaho, Indiana, Michigan, New Mexico, Oregon, Utah, Washington, Wisconsin, and other states that enacted similar laws in late 2023 and early 2024.  New Hampshire (HB 1596) passed a similar law that is awaiting the Governor’s signature.  These laws generally prohibit, within 90 days of an election, the knowing creation or distribution of deceptive content created or modified by AI if such content depicts candidates, election officials, or parties, or is intended to influence voting behavior or injure a candidate.  Some of these laws permit the distribution of otherwise prohibited content if it contains an audio or visual disclaimer that the content is AI-generated.  Other laws, like Arizona SB 1359, impose independent requirements that deepfakes of candidates or political parties contain AI disclaimers within 90 days of an election.
  • AI-Generated CSAM & Intimate Imagery Laws: Alabama (HB 168), Arizona (HB 2394), Florida (SB 1680), Louisiana (SB 6), New York (A 8808), North Carolina (HB 591), and Tennessee (HB 2163) enacted laws regulating the creation or dissemination of AI-generated CSAM or intimate imagery, joining Idaho, Indiana, South Dakota, and Washington.  These laws generally impose criminal liability for the knowing creation, distribution, solicitation, or possession of AI- or computer-generated CSAM, or the dissemination of AI-generated intimate imagery with intent to coerce, harass, or intimidate.
  • Laws Regulating AI-Generated Impersonations & Digital Replicas: Arizona (HB 2394) enacted a law prohibiting the publication or distribution of digital replicas and digital impersonations without the consent of the person depicted.  Illinois (HB 4875) passed a similar bill that is awaiting the Governor’s signature.  Illinois (HB 4762) also passed a bill regulating services contracts that allow for the creation or use of digital replicas in place of work that the individual would otherwise have performed, rendering such provisions unenforceable if they do not contain a reasonably specific description of the intended uses of the digital replica and if the individual was not properly represented when negotiating the services contract.  This bill also awaits the Governor’s signature.
  • California AI Bills Regulating Frontier Models, Training Data, Content Labeling, and Social Media Platforms: On May 20, the California Assembly passed AB 2013, which would require AI developers to issue public statements summarizing datasets used to develop their AI systems, and AB 2877, which would require AI developers to obtain affirmative authorization before using personal information from persons under sixteen years of age to train AI.  On May 21, the California Assembly passed AB 1791, which would require social media platforms to redact personal provenance data and add content labels and “system provenance data” for user-uploaded content, and AB 2930, a comprehensive bill that would regulate the use of “automated decision tools” and, like Colorado SB 205, would impose impact assessment, notice, and disclosure requirements on developers and deployers of automated decision-making systems used to make consequential decisions, with the goal of mitigating algorithmic discrimination risks.  On the same day, the California Senate passed the Safe & Secure Innovation for Frontier AI Models Act (SB 1047), which would impose sweeping regulations on developers of the most powerful AI models, and the California AI Transparency Act (SB 942), which would require generative AI providers to create “AI detection tools” and add disclosures to AI content.  On May 22, the California Assembly passed the Provenance, Authenticity, and Watermarking Standards Act (AB 3211), which would require generative AI providers to ensure that outputs are labeled with watermarks and require large online platforms to add “provenance disclosures” to content on their platforms.

AI Litigation Developments

  • New Copyright Complaints
    • On June 27, the Center for Investigative Reporting, a nonprofit media organization, filed a complaint against OpenAI and Microsoft alleging copyright infringement from use of the plaintiff’s copyrighted works to train ChatGPT.  Center for Investigative Reporting, Inc. v. OpenAI et al., 1:24-cv-4872 (S.D.N.Y.).
    • On June 24, Universal Music Group, Sony Music Entertainment, Warner Records, and other record labels filed complaints against Suno and Udio, companies that allegedly used copyrighted sound recordings to train generative AI models that “generate digital music files that sound like genuine human sound recordings in response to basic inputs.”  UMG Recordings, Inc. et al. v. Suno, Inc. et al., 1:24-cv-11611 (D. Mass.), and UMG Recordings, Inc. et al. v. Uncharted Labs, Inc. et al., 1:24-cv-04777 (S.D.N.Y.).
    • On May 16, several voice actors filed a complaint against Lovo, Inc., a company that allegedly uses AI-driven software to create and edit voice-over narration, claiming that Lovo used their voices without authorization.  Lehrman et al. v. Lovo, Inc., 1:24-cv-3770 (S.D.N.Y.). 
    • On May 2, authors filed a putative class action lawsuit against Databricks, Inc. and Mosaic ML, and another against Nvidia, alleging that the companies used copyrighted books to train their models.  Makkai et al. v. Databricks, Inc. et al., 4:24-cv-02653 (N.D. Cal.), and Dubus et al. v. Nvidia, 4:24-cv-02655 (N.D. Cal.).
    • On April 26, a group of photographers and cartoonists sued Google, alleging that Google used their copyrighted images to train its AI image generator, Imagen.  Zhang et al. v. Google LLC et al., 5:24-cv-02531 (N.D. Cal.).
    • On April 30, newspaper publishers who publish electronic copies of older print editions of their respective newspapers filed a complaint in Daily News et al. v. Microsoft et al., 1:24-cv-03285 (S.D.N.Y.), alleging, among other things, that the defendants copied those publications to train GPT models.
  • Copyright and Digital Millennium Copyright Act (“DMCA”) Case Developments
    • On June 24, the court in Doe v. GitHub, Inc. et al., 4:22-cv-6823 (N.D. Cal.), partially granted GitHub’s motion to dismiss.  Among other things, the court granted the motion with prejudice as to plaintiffs’ copyright management removal claim under the DMCA, again finding that plaintiffs had failed to satisfy the “identicality” requirement.  The court declined to dismiss claims over breach of open-source license terms. 
    • On May 7, the court in Andersen v. Stability AI, 3:23-cv-00201 (N.D. Cal.), issued a tentative ruling on the defendants’ motions to dismiss the first amended complaint.  Among other things, the court was inclined to deny all the motions as to direct and “induced” copyright infringement and DMCA claims, to rule that there were sufficient allegations to support a “compressed copies” theory (i.e., that the plaintiffs’ works are contained in the AI models at issue such that when the AI models are copied, so are the works used to train the model), to allow the false endorsement and trademark claims to proceed, and to give the plaintiffs a chance to plead an unjust enrichment theory not preempted by the Copyright Act.  The court has yet to issue a final ruling.
  • Class Action Dismissals: On May 24, the court in A.T. et al. v. OpenAI LP et al., 3:23-cv-04557 (N.D. Cal.), granted the defendants’ motion to dismiss with leave to amend, holding that the plaintiffs had violated Federal Rule of Civil Procedure 8’s “short and plain statement” requirement.  The court described the plaintiffs’ 200-page complaint, which alleged ten privacy-related statutory and common law violations, as full of “unnecessary and distracting allegations” and “rhetoric and policy grievances,” cautioning the plaintiffs that if the amended complaint continued “to focus on general policy concerns and irrelevant information,” dismissal would be with prejudice.  On June 14, plaintiffs notified the court that they did not intend to file an amended complaint.  The plaintiff in A.S. v. OpenAI LP, et al., 3:24-cv-01190 (N.D. Cal.), a case with claims similar to those in A.T., voluntarily dismissed the case after the decision in A.T.
  • Consent Judgment in Right of Publicity Case: On June 18, a consent judgment was entered in the suit brought by the estate of George Carlin against a podcast company over its allegedly AI-generated “George Carlin Special.”  Main Sequence, Ltd. et al. v. Dudesy, LLC et al., 24-cv-00711 (C.D. Cal.).

II.       Connected & Automated Vehicles

  • Continued Focus on Connectivity and Domestic Violence: Following letters sent to automotive manufacturers and a press release issued earlier this year, on April 23, 2024, the FCC issued an NPRM seeking comment on the types of connected car services in the marketplace today, whether changes to the FCC’s rules implementing the Safe Connections Act are needed to address the impact of connected car services on domestic violence survivors, and what steps connected car service providers can proactively take to protect survivors from being stalked or harassed through the misuse of connected car services.  On April 25, Rep. Debbie Dingell (D-MI) wrote a letter to the Chairwoman of the FCC noting that she would like to “work with the FCC, [her] colleagues in Congress, and stakeholders to develop a comprehensive understanding of and solutions to the misuse of connected vehicle technologies” in relation to domestic abuse and “implement effective legislative and regulatory frameworks that safeguard survivors’ rights and well-being.”
  • Updated National Public Transportation Safety Plan: On April 9, 2024, the Federal Transit Administration (“FTA”) published an updated version of the National Public Transportation Safety Plan.  The FTA noted that the National Safety Plan “does not create new mandatory standards but rather identifies existing voluntary minimum safety standards and recommended practices,” but that FTA will “consider[] mandatory requirements or standards where necessary and supported by data” and “establish any mandatory standards through separate regulatory processes.”
  • Investigations into Data Retention Practices: On April 30, Sen. Ron Wyden (D-OR) and Sen. Edward Markey (D-MA) sent a letter to the Federal Trade Commission (“FTC”) asking the FTC to investigate several automakers for “deceiving their customers by falsely claiming to require a warrant or court order before turning over customer location data to government agencies” and urging the FTC to “investigate these auto manufacturers’ deceptive claims as well as their harmful data retention practices” and “consider holding these companies’ senior executives accountable for their actions.”  This letter follows similar letters that Sen. Markey sent to automakers and the FTC in December 2023 and February 2024, respectively.  Following this activity, on May 14, the FTC published a blog post on the collection and use of consumer data in vehicles, warning that “[c]ar manufacturers–and all businesses–should take note that the FTC will take action to protect consumers against the illegal collection, use, and disclosure of their personal data,” including geolocation data.
  • AI Roadmap – CAV Highlights: The AI Roadmap, discussed above, encourages committees to: (1) “develop emergency appropriations language to fill the gap between current spending levels and the [spending level proposed by the National Security Commission on Artificial Intelligence (“NSCAI”)],” including “[s]upporting R&D and interagency coordination around the intersection of AI and critical infrastructure, including for smart cities and intelligent transportation system technologies”; and (2) “[c]ontinue their work on developing a federal framework for testing and deployment of autonomous vehicles across all modes of transportation to remain at the forefront of this critical space.”
  • Senate Hearing on Roadway Safety: On May 21, the Subcommittee on Surface Transportation, Maritime, Freight & Ports within the U.S. Senate Committee on Commerce, Science & Transportation convened a hearing entitled “Examining the Roadway Safety Crisis and Highlighting Community Solutions.”  Sen. Gary Peters (D-MI), Chair of the Subcommittee, stated in his opening statement that “digital infrastructure that improves crash response to predictive road maintenance and active traffic management” are “essential to achieving safe system goals” and that “safe and accountable development, testing, and deployment of autonomous vehicles” can “help us reduce serious injuries and death on our roadways.”
  • Connected Vehicle National Security Review Act: On May 29, Rep. Elissa Slotkin (D-MI) announced proposed legislation entitled the Connected Vehicle National Security Review Act, which would establish a formal national security review for connected vehicles built by companies from China or certain other countries.  The legislation would allow the Department of Commerce to limit or ban the introduction of these vehicles from U.S. markets if they pose a threat to national security.
  • Updates to Federal Motor Vehicle Safety Standards: On May 9, the National Highway Traffic Safety Administration (“NHTSA”) within the Department of Transportation (“DOT”) issued a Final Rule that adopts a new Federal Motor Vehicle Safety Standard to require automatic emergency braking (“AEB”) systems, including pedestrian AEB, and forward collision warning systems on light vehicles weighing under 10,000 pounds manufactured on or after September 1, 2029 (September 1, 2030 for small-volume manufacturers, final-stage manufacturers, and alterers).  The AEB system must “detect and react to an imminent crash with both a lead vehicle or a pedestrian.”

III.       Data Privacy & Cybersecurity

Privacy Developments

  • Proposed Comprehensive Federal Privacy Law: As noted above, in June, lawmakers formally introduced the APRA, which, if passed, would create a comprehensive federal privacy regime.  The APRA would apply to “Covered Entities,” defined as “any entity that determines the purposes and means of collecting, processing, retaining, or transferring covered data” and that is subject to the FTC Act, is a common carrier, or is a nonprofit.  Covered entities do not include government entities and their service providers, specified small businesses, or certain nonprofits.
  • National Security & Privacy: In April, the President signed the Protecting Americans’ Data from Foreign Adversaries Act of 2024 (“PADFAA”) into law.  Under the law, data brokers are prohibited from selling, transferring, or providing access to Americans’ “sensitive data” (including identifiers such as Social Security numbers, geolocation data, data about minors, biometric information, private communications, and information identifying an individual’s online activities over time and across websites or online services) to certain foreign adversaries or entities controlled by foreign adversaries.  Separately, the President signed legislation reauthorizing Section 702 of the Foreign Intelligence Surveillance Act, which permits the U.S. government to collect, without a warrant, the communications of non-Americans located outside the country to gather foreign intelligence.
  • Health Data & Privacy: In April, HHS published a final rule that modifies the Standards for Privacy of Individually Identifiable Health Information under the Health Insurance Portability and Accountability Act (“HIPAA”) regarding protected health information concerning reproductive health.  Relatedly, the FTC also voted 3-2 to issue a final rule that expands the scope of the Health Breach Notification Rule (“HBNR”) to apply to health apps and similar technologies and broadens what constitutes a breach of security, among other updates. 
  • New State Privacy Laws: Maryland, Minnesota, Nebraska, and Rhode Island became the latest states to enact comprehensive privacy legislation, joining California, Virginia, Colorado, Connecticut, Utah, Iowa, Indiana, Tennessee, Montana, Oregon, Texas, Florida, Delaware, New Jersey, New Hampshire, and Kentucky.  In addition, Alabama enacted a new genetic privacy law, and Colorado and Illinois amended existing privacy laws.

Cybersecurity Developments

  • CIRCIA: On July 3, the U.S. Cybersecurity and Infrastructure Security Agency closed the public comment period for the NPRM related to the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (“CIRCIA”).  The final rule, expected in September 2025, will significantly alter the landscape for federal cyber incident reporting notifications, consistent with the Administration’s whole-of-government effort to bolster the nation’s cybersecurity.  

We will continue to update you on meaningful developments in these quarterly updates and across our blogs.

Jennifer Johnson

Jennifer Johnson is co-chair of the firm’s Communications & Media Practice Group.  She represents and advises broadcast licensees, trade associations, and other media entities on a wide range of issues, including:  regulatory and policy advocacy; network affiliation and other programming agreements; media joint ventures, mergers and acquisitions; carriage negotiations with cable, satellite and telco companies; media ownership and attribution; and other strategic, regulatory and transactional matters.

Ms. Johnson assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission and Congress and through transactions and other business arrangements.  Her broadcast clients draw particular benefit from her deep experience and knowledge with respect to network/affiliate issues, retransmission consent arrangements, and other policy and business issues facing the industry.  Ms. Johnson also assists investment clients in structuring, evaluating and pursuing potential media investments.  She has been recognized by Best Lawyers, Chambers USA, Legal 500 USA, Washington DC Super Lawyers, and the Washingtonian as a leading lawyer in her field.

Nicholas Xenakis

Nick Xenakis draws on his Capitol Hill experience to provide regulatory and legislative advice to clients in a range of industries, including technology. He has particular expertise in matters involving the Judiciary Committees, such as intellectual property, antitrust, national security, immigration, and criminal justice.

Nick joined the firm’s Public Policy practice after serving most recently as Chief Counsel for Senator Dianne Feinstein (D-CA) and Staff Director of the Senate Judiciary Committee’s Human Rights and the Law Subcommittee, where he was responsible for managing the subcommittee and Senator Feinstein’s Judiciary staff. He also advised the Senator on all nominations, legislation, and oversight matters before the committee.

Previously, Nick was the General Counsel for the Senate Judiciary Committee, where he managed committee staff and directed legislative and policy efforts on all issues in the Committee’s jurisdiction. He also participated in key judicial and Cabinet confirmations, including of an Attorney General and two Supreme Court Justices. Nick was also responsible for managing a broad range of committee equities in larger legislation, including appropriations, COVID-relief packages, and the National Defense Authorization Act.

Before his time on Capitol Hill, Nick served as an attorney with the Federal Public Defender’s Office for the Eastern District of Virginia. There he represented indigent clients charged with misdemeanor, felony, and capital offenses in federal court throughout all stages of litigation, including trial and appeal. He also coordinated district-wide habeas litigation following the Supreme Court’s decision in Johnson v. United States (invalidating the residual clause of the Armed Career Criminal Act).

Phillip Hill

Phillip Hill is an attorney with deep experience in media, entertainment, sports, and technology. Phillip’s practice focuses on litigation, counseling, transactions, and regulatory matters in the fields of copyright, trademark, right of publicity, art, and cyber law. Phillip represents and advises clients in a wide range of industries, including music, media and entertainment, technology, metaverse, NFTs, video games, sports, live event production, film/television, theater, fashion, print publishing, visual art, consumer products, financial services, private equity, real estate, and healthcare.

Phillip is an active member of the American Bar Association, where he is the Chair of the Music and Performing Arts Committee. He also represents and advises non-profit corporations and individuals in a variety of pro bono matters, including songwriters and recording artists, music venues, orchestras, museums, comic book artists, micro-entrepreneurs, asylum applicants, and civil rights organizations. He also is active in educational efforts, including courses on “The Law of Music” at the New England Conservatory and Carnegie Hall, and Harvard Law School’s “CopyrightX” course.

Shayan Karbassi

Shayan Karbassi is an associate in the firm’s Washington, DC office. He is a member of the firm’s Data Privacy and Cybersecurity and White Collar and Investigations Practice Groups. Shayan advises clients on a range of cybersecurity and national security matters. He also maintains an active pro bono practice.

Madeleine Dolan

Madeleine (Maddie) Dolan is an associate in the Washington, DC office. Her practice focuses on product liability and mass torts litigation, commercial litigation, and pro bono criminal defense. Maddie’s primary experience is in discovery and trial work.

Prior to joining Covington, Maddie served as a law clerk to U.S. District Judge Mark R. Hornak of the Western District of Pennsylvania in Pittsburgh. She also previously worked as a consultant and strategic communications director and managed communications and marketing campaigns for federal government agencies.

Vanessa Lauber

Vanessa Lauber is an associate in the firm’s New York office and a member of the Data Privacy and Cybersecurity Practice Group, counseling clients on data privacy and emerging technologies, including artificial intelligence.

Vanessa’s practice includes partnering with clients on compliance with federal and state privacy laws and FTC and consumer protection laws and guidance. Additionally, Vanessa routinely counsels clients on drafting and developing privacy notices and policies. Vanessa also advises clients on trends in artificial intelligence regulations and helps design governance programs for the development and deployment of artificial intelligence technologies across a number of industries.

Lauren Gerber

Lauren Gerber is an experienced litigator focused on product liability and mass tort defense and complex civil litigation across the technology and pharmaceutical industries.

Lauren has represented clients at all stages of litigation, including fact and expert discovery, dispositive motions, and pre-trial Daubert motions and motions in limine. She also has experience representing clients preparing for trial in patent, insurance recovery, and employment discrimination cases in federal and state court.

Lauren has tried multiple cases to verdict, including the pro bono representation of a defendant charged with first degree murder. Lauren has also represented dozens of children and caregivers in D.C. Superior Court at trial and in evidentiary hearings during a six-month full-time rotation at the Children’s Law Center, DC’s largest non-profit legal services provider.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients for complying with federal, state, and global privacy and competition frameworks and AI regulations. He also assists clients in investigating compliance issues, preparing for federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.

Zoe Kaiser

Zoe Kaiser is an associate in the firm’s San Francisco office, where she is a member of the Litigation and Investigations and Copyright and Trademark Litigation Practice Groups. She advises on cutting-edge topics such as generative artificial intelligence.

Zoe maintains an active pro bono practice, focusing on media freedom.

Jorge Ortiz

Jorge Ortiz is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity and the Technology and Communications Regulation Practice Groups.

Jorge advises clients on a broad range of privacy and cybersecurity issues, including topics related to privacy policies and compliance obligations under U.S. state privacy regulations like the California Consumer Privacy Act.

Olivia Dworkin

Olivia Dworkin minimizes regulatory and litigation risks for clients in the pharmaceutical, food, consumer brands, digital health, and medical device industries through strategic advice on FDA compliance issues.

Olivia also defends clients in litigation, representing them through various stages of complex class actions and product liability matters. She maintains an active pro bono practice that focuses on gender-based violence, sexual harassment, and reproductive rights.