Under pressure from stakeholders, social media platforms took visible steps to curb mis- and disinformation in 2020. Yet despite an array of alerts, banners, labels and information hubs, misinformation continued to poison online communities. Voter fraud myths proliferated across the globe, medical professionals reported the deaths of Covid-19 patients who believed the virus was not real, and misinformation now threatens to hinder efforts to administer the vaccines needed to mitigate the pandemic.
Google, Facebook and Twitter now all have their own transparency websites and publish reports on the state of misinformation on their platforms. We know more about the decisions being made around content moderation than ever before. But because these reports are published by the platforms themselves, with no independent oversight, we are left having to trust their conclusions. The reports Twitter, Facebook and YouTube released after the US election were full of figures, but had they come from an independent agency, they would have felt like genuine transparency reports rather than opportunities for PR.
The policies social media platforms have enacted have done little to stem the tide of misinformation. It’s time for these companies to look outside for help. Platforms need to commit to working with local stakeholders to create policies based on subject matter and cultural expertise. They also need to institute regular independent audits to ensure these policies are working, and give auditors the autonomy to determine what data points they require. Law Professor Nicolas Suzor from Australia’s QUT Digital Media Research Centre said the platforms are not in the best position to choose the research or the questions that need to be asked: “I think it’s dangerous to expect platforms to do this by themselves and I think it’s arrogant of them to expect that they can figure this out. This is a social conversation and a democratic conversation that can only be solved as a first step with more research that understands what’s going wrong.” (Suzor is a member of the Facebook Oversight Board, but stressed that he was speaking in his personal capacity, and not as a representative of Facebook or its Oversight Board.)
Even before 2020, the dangerous offline effects of mis- and disinformation had been well documented — from lynchings in India to children in the US dying of preventable causes because their parents believed misinformation — but it took a pandemic to spur platforms to make their most far-reaching changes yet. Beginning in early March, the platforms took a series of promising steps to address misinformation. Twitter expanded its content labeling policy, and both Apple and Google culled apps with questionable coronavirus information. Some changes were meant to empower users with information, like Facebook’s Coronavirus Information Center or YouTube’s info panels, both of which direct users to credible health organizations. Other actions promoted digital media literacy, like Facebook’s investment in initiatives aimed at educating the public. These moves seemed to signal that platforms were finally stepping up their efforts to root out misinformation.
But while the platforms received much public praise for their actions, the implementation of these decisions wasn’t always consistent. Content moderation, fact checking and labeling efforts were applied haphazardly to different communities in the US ahead of Election Day. Policies varied from country to country, without good reason: Tweets from US President Donald Trump promoting hydroxychloroquine were left on the platform, even though similar claims posted a week later by Brazilian President Jair Bolsonaro were removed. Voter fraud claims that warranted labels in the United States were not labeled in the Bihar election in India. Twitter and Facebook announced mid-year they would label state-affiliated media, but users soon found that Twitter was selectively labeling some countries’ state-run outlets while ignoring others, leading to charges of bias.
Nor were the platforms’ measures always effective. Some researchers have argued Twitter’s corrective interventions may not be a practical way to address misinformation. Those intent on spreading false narratives found simple ways to evade moderation policies. For example, screenshots were used to share content previously removed by Facebook and Twitter. First Draft regularly identified ads and social media posts that violated platform policies.
Some policies were controversial from the moment they were announced: After Facebook extended its ban on political ads through the Georgia runoff elections in January, Georgia candidates on both sides of the aisle worried it would kneecap their ability to campaign effectively.
Despite obvious challenges, social media companies could do more to improve the safety of their products, if only they would draw on stakeholders beyond their own paid employees. Creating policy in conjunction with relevant stakeholders, committing to regular independent audits and providing open access to data could make a measurable difference in reducing the harm caused by misinformation. In Myanmar, where Facebook’s failures in 2018 contributed to a genocide of Rohingya Muslims, the platform worked with local fact-checking organizations to institute more rigorous hate-speech policies. Though dangerous misinformation remains a problem in Myanmar, initial analysis indicates progress was made once Facebook began addressing content flagged by local misinformation monitors.
As in Myanmar, platform policies should be instituted in partnership with local experts and stakeholders, and tailored to make the most impact in the country where they’re implemented. Dr. Timothy Graham, a senior lecturer at QUT Digital Media Research Centre, told First Draft that platform responses should be “attuned to specific geographic, historical and cultural contexts.” Policies about vaccines should be written in consultation not only with doctors who understand the medicine behind the vaccine, but also the local public health organizations that understand how their community perceives vaccination.
And these policies cannot go unexamined. To ensure they’re making a difference, there needs to be independent auditing. Facebook commissioned an independent civil rights audit of its platform in 2018; the results, released in July 2020, found that Facebook made decisions that “represent significant setbacks for civil rights.” There should be more of these audits, scheduled regularly rather than convened at a time and frequency of the platforms’ choosing. And, as Graham explains, “platforms such as Facebook should not be the final arbiters of data access”; the auditors should determine what data they need to do their jobs effectively.
Finally, organizations should be able to use publicly available data to monitor the dissemination of disinformation across platforms without being held in breach of terms of service agreements. Facebook is currently fighting a New York University political research project over the researchers’ ability to do just that. Allowing open access to data would enable a network of researchers to investigate how much misinformation is out there, where it’s coming from, and how to reduce it.
Social media companies have in no way exhausted the list of potential ways to mitigate misinformation. If they work with researchers and other experts to craft policy, then allow those policies to be independently evaluated, we may finally see misinformation in online spaces begin to wither.
Anne Kruger contributed to this report.