Section 230 of the Communications Decency Act (the “CDA” or “Section 230”), popularly known as “the 26 words that created the internet,” remains the subject of ongoing controversy. As extensively reported on this blog, the world of social media, user-generated content, and e-commerce has been consistently bolstered by Section 230 protections.
Chairwoman Rosenworcel Announces CSRIC Members

The FCC has announced the appointment of members to the ninth Communications Security, Reliability, and Interoperability Council (CSRIC IX). CSRIC IX’s charter will run for a period of two years, with the first meeting to be held on June 28, 2024. CSRIC IX will specifically focus on how artificial intelligence and machine learning can improve the…
The King is Back (in the Digital Era) | The ELVIS Act, Generative AI and Right of Publicity
On March 21, 2024, in a bold regulatory move, Tennessee Governor Bill Lee signed the Ensuring Likeness Voice and Image Security (“ELVIS”) Act (Tenn. Code Ann. §47-25-1101 et seq.) – a law which, as Gov. Lee stated, covers “new, personalized generative AI cloning models and services that enable human impersonation and allow users to…
FCC Proposes $8 Million in Fines Against Telecom Company and Political Consultant for Using Deepfake Generative Artificial Intelligence
In a pair of Notices of Apparent Liability for Forfeiture this week, the Federal Communications Commission (FCC) has proposed a collective $8 million in fines against telecommunications company Lingo Telecom and political consultant Steven Kramer.
Robocalls, Generative AI, and Deepfakes
The FCC alleges Kramer violated the Truth in Caller ID Act. According to the…
From Encryption to Employment, U.S. Federal Agencies Brace for the Effects of Quantum Computing, AI and More
In this week’s edition of Consumer Protection Dispatch, we look at the latest regulatory developments from the U.S. Department of Commerce, Consumer Financial Protection Bureau, and the Securities and Exchange Commission regarding data and AI.
Trump Verdict Raises Concerns About A Nasty Election Campaign Getting Nastier – Looking at a Broadcaster’s Potential Liability for Attack Ads
With the verdict released in the first criminal case against former President (and now candidate) Trump, we can envision a whole raft of attack ads likely to be airing before the November elections. The verdict is also likely to increase political divisions within the country and potentially fuel many other nasty attack ads…
Supporting Responsible Influencer Content: EU’s New Recommendations
Recognising Influencers’ Impact
The Council’s recommendations stem from the recognition that influencers play a critical role in shaping public opinion and disseminating information across various social media platforms. As influencers gain more prominence, the need for regulatory frameworks to ensure their content adheres to legal standards and promotes social responsibility becomes increasingly crucial. To address…
$8M in Fines Levied Against Caller and Carrier for Robocalls

The FCC has proposed robocall fines against a caller and, for the very first time, against the carrier involved. First, the FCC issued a Notice of Apparent Liability for Forfeiture proposing a $6 million fine against political consultant Steve Kramer for perpetrating an illegal robocall campaign targeting potential voters two days before New Hampshire’s…
This Week in Regulation for Broadcasters: May 20, 2024 to May 24, 2024
Here are some of the regulatory developments of significance to broadcasters from this past week, with links to where you can go to find more information as to how these actions may affect your operations.
- FCC Chairwoman Rosenworcel announced that she had circulated among the Commissioners for their review and approval a draft Notice of…
The FCC and Congress Advance Proposals to Regulate Artificial Intelligence in Political Advertising
We’ve written several times (see for instance our articles here, here, and here) about all of the action in state legislatures to regulate the use of artificial intelligence in political advertising – with approximately 17 states now having adopted laws or rules, most requiring the labeling of “deep fakes” in such ads,…