Facebook’s sprawling report on disinformation

This week, Facebook published a detailed 44-page report examining four years of disinformation campaigns. The report predictably focused on the growing sophistication of Russian and Iranian actors, but it also underlined the role domestic groups played in the 2020 US election.

The report’s framing was notable. Facebook billed it as a study of “coordinated inauthentic behavior,” a term the platform had coined years earlier to much criticism. In a 2018 video, Facebook’s head of cybersecurity policy defined it as “when groups of pages or people work together to mislead others about who they are or what they’re doing.” But this definition has been criticized as vague and Facebook has been accused of applying it inconsistently. And the report’s focus on false identities masks another problem. As Columbia Journalism Review writer Mathew Ingram noted, the January 6 Capitol insurrection was organized largely in the open by people not attempting to hide their identities.

The report’s release also put a spotlight back on the claims of former Facebook data scientist Sophie Zhang, who said last year that the platform had failed to act promptly on clear signs of inauthentic activity that threatened elections in several places worldwide.

But the major disinformation stories this week weren’t only about Facebook. The European Commission issued guidance to beef up the EU’s Code of Practice on Disinformation, a voluntary self-regulation program, to which Facebook is a signatory, that has produced mixed results so far. The Commission highlighted a need to “demonetize disinformation” by better sharing information and increasing transparency around disinformation in advertising. (TechCrunch’s Natasha Lomas suggested the Commission’s guidance might have influenced the timing of Facebook’s own report.)

Elsewhere, there were reminders that while governments use social platforms as venues for disinformation campaigns, they also aim to manipulate the information environment in blunter ways. Yesterday, Twitter denounced what it called “intimidation tactics” by the government of India’s Prime Minister Narendra Modi, which has demanded among other things that Twitter remove posts critical of Modi’s handling of the Covid-19 pandemic. Twitter’s statement came days after Indian police raided its empty Delhi headquarters as TV cameras rolled, a botched stunt that nonetheless marked an “escalation in the stifling of domestic criticism in India,” as Ashoka University political scientist Gilles Verniers told The New York Times. — First Draft staff

This article is from our daily briefing email newsletter. Subscribe for the key stories caught by our monitoring team each day, and be sure to check out our weekly briefing of the best misinformation reads.
