TikTok, Facebook and Google face challenges of content moderation


Uneven content moderation shows the need for ecosystem-level action

TikTok is among the many platforms dealing with online misinformation. (Reuters)

This week, several reports highlighted the challenge of content moderation.

On Monday, The Verge reported how two Republican candidates for governor of Michigan are testing the limits of TikTok’s approach toward misinformation. As Makena Kelly wrote, Garrett Soldano and Ryan Kelley have used the video-sharing platform to rack up tens of thousands of followers and have spread falsehoods. One of Soldano’s videos that spread Covid-19 misinformation, she writes, garnered at least 2.5 million views.

On Wednesday, Google-owned YouTube announced it had removed more than a million videos that spread Covid-19 misinformation. YouTube added that it removes nearly 10 million videos a quarter, most of which have fewer than 10 views. But the platform cautioned that misinformation is sometimes “less clear cut” and that removing videos can hinder freedom of expression. The big numbers also recall last week’s news that Facebook removed 20 million pieces of Covid-19 misinformation last quarter, though questions linger about how much misinformation remains on Facebook and how effective the removals were.

Also on Wednesday, a group of Reddit moderators called on the platform to tackle the “rampant Coronavirus misinformation” in a post that had 181,000 upvotes as of this writing. “It is clear that even after promising to tackle the problem of misinformation on this site, nothing of substance has been done aside from quarantining a medium sized subreddit, which barely reduces traffic and does little to stop misinformation,” they wrote. A day later, Reddit CEO Steve Huffman appeared to push back.

Some parts of the digital ecosystem are harder to moderate, even if all platforms were keen on larger and more decisive steps. As an article in the Brookings Institution’s “Tech Stream” pointed out, podcasts are a major part of the digital landscape but have largely escaped scrutiny. (First Draft’s Shaydanay Urbani has previously written about the trouble with moderating audiovisual content.) Some 116 million Americans tune into podcasts each month. Brookings analyzed 8,000 episodes of political podcasts and found that nearly one-tenth contained potentially false information.

But perhaps the focus on individual platforms and mediums misses the wider point. Even when platforms such as Twitter take decisive action, misinformation can spread from them to other mainstream or fringe spaces, as a new study of Donald Trump’s election-related tweets in the Harvard Kennedy School’s Misinformation Review shows. “[O]ur results emphasize the importance of considering content moderation at the ecosystem level,” the study stated. — First Draft staff

This article is from our daily briefing email newsletter. Subscribe for the key stories caught by our monitoring team each day, and be sure to check out our weekly briefing of the best misinformation reads.
