In the midst of a historic election year, the topic of technology’s role in shaping democratic processes has never been more pertinent. During the “Elections and the Future of Misinformation” session at MIT Technology Review’s EmTech conference, Nick Clegg, President of Global Affairs at Meta, offered a comprehensive look at how social media influences elections and at the measures Meta is taking to combat misinformation and political violence. Here are the key takeaways from the discussion.

The Evolution of Social Media’s Role in Elections

Since the 2016 U.S. election, social media’s influence on political behavior and voting patterns has been a contentious topic. Clegg highlighted that in 2016, Russian interference marked a significant turning point, revealing vulnerabilities in how platforms could be manipulated. The subsequent years have seen intense scrutiny and polarized opinions about whether social media companies like Meta are doing too much or too little to regulate content.

To address these concerns, Meta conducted extensive research during the 2020 U.S. election in collaboration with seven universities. The findings, derived from studying the behavior of 30,000 volunteers, suggested that the direct impact of social media on political attitudes and voter behavior might be weaker than previously assumed. This research challenges the prevailing narrative that social media plays a decisive role in shaping political outcomes, although it does not absolve platforms of their responsibility to ensure safe and fair elections.

Tackling Extremism and Political Violence

One of the critical issues discussed was Facebook’s role in facilitating extremist organizing and political violence. From the events organized on the platform in the lead-up to the January 6th Capitol attack to ongoing concerns about militia groups, Meta’s track record has drawn sustained scrutiny. Clegg acknowledged these challenges, emphasizing the adversarial nature of this space, where banned groups often re-emerge under new identities.

Meta has implemented several measures to combat these issues, including expanding its network of fact-checkers to over 100 organizations working in more than 60 languages. The company has also removed over 200 networks of coordinated inauthentic behavior since 2016. Despite these efforts, the dynamic and covert nature of these threats means they cannot be fully eradicated; countering them is an ongoing battle.

The Controversy Around Political Advertising

Meta’s political advertising policies also drew scrutiny, particularly the decision to allow ads that falsely claim the 2020 election was stolen. Clegg defended this policy by differentiating between ads about past elections and those about upcoming ones. While Meta prohibits ads that delegitimize forthcoming elections, it does not see it as feasible or appropriate to litigate claims about past elections. This stance, Clegg argued, reflects the global nature of Meta’s user base and how common such claims are across different democracies.

The Role of AI and Emerging Technologies

Generative AI’s potential to create convincing fake content is a growing concern. Meta has committed to clearly labeling AI-generated content, but enforcement remains a challenge, especially since bad actors are unlikely to disclose their use of AI. Clegg stressed the importance of industry-wide cooperation and evolving detection technologies to address these issues effectively.

The transition from CrowdTangle to the Meta Content Library also drew questions. While CrowdTangle has been a valuable tool for researchers, it provides an incomplete picture of content on Facebook. The new Meta Content Library promises a more comprehensive view, though its implementation and accessibility have raised concerns among researchers.

Government and Industry Collaboration

Finally, Clegg addressed the pace of AI innovation versus the speed of governmental decision-making. As a former politician, he acknowledged the inherent challenges but praised the current efforts to align industry and government responses. Summits and collaborative initiatives, such as those held in the UK, aim to establish common standards and proactive measures to harness AI’s benefits while mitigating risks.

In conclusion, the session underscored the complexity of managing misinformation and political content on social media platforms. While Meta has made significant strides in research, policy development, and international cooperation, the evolving nature of technology and the persistence of bad actors mean this work is far from complete. Ensuring the integrity of elections in the digital age will require continuous innovation, robust policies, and collaborative efforts across the globe.