Is Facebook losing its war on fake news?

Despite vowing to give articles from hoax sites less prominence in its News Feed, Facebook seems to be making little headway in the battle for truth.

Whether it’s unlikely refugee stories, improbable free gas station concerts, or Zika epidemic conspiracies, fake news on Facebook is nothing new. Spurious sources abound, preying on political biases, shock value, and straight-up, too-good-to-be-true temptation.

As research shows, such stories tend to travel faster and wider than any attempt to correct them. On occasion, even Facebook’s “Trending” curators have been caught out.

In January 2015, Facebook declared war on fake news. Recognizing that a glut of hoaxes was harming the experience of News Feed users (i.e. everyone who uses Facebook), the company introduced a new feature that aimed to crowdsource the flagging of fake news and limit its spread. The idea was simple, maybe even simplistic, but well-intended. However, just over a year since its introduction, quantitative and anecdotal research from BuzzFeed suggests the approach isn’t working.

According to the research, sources known for pushing fake stories have been penalized in the News Feed algorithm, and engagement has suffered as a result. One example cited by BuzzFeed, fake news site National Report, saw average engagements per post drop from 370.8 in January 2015 to 92.9 in July 2015. By December, however, that figure was back up to 251.7.

Managers of fake news pages like National Report and Empire News also point out that even if one ‘source’ of fake news (i.e. a domain or Facebook page) is penalized, sharing exactly the same fake story via another domain sees it spread far and wide. Registering domains is trivial, and although Facebook plays whack-a-mole blocking each one, it struggles to keep up.

The motivation to keep churning out fakes is strong, too. Not only do such pages succeed in deceiving large numbers of unsuspecting Facebook users, but they also turn a profit.

Why is the world’s largest social network – and fourth most valuable company – struggling to contain run-of-the-mill hoaxers manipulating its platform? As Techdirt and others point out, relying on users to flag fake news assumes that users can spot hoaxes. All too often, such fakes are designed with the specific intention to deceive, so we shouldn’t be surprised that, from time to time, they succeed in spectacular fashion. One fake published by Empire News, claiming Boston Marathon bombing suspect Dzhokhar Tsarnaev had been seriously injured in prison, garnered more than 240,000 likes, 43,000 shares and 28,000 comments, according to BuzzFeed.

Even when users do spot fakes, it turns out flagging stories isn’t as easy as you might hope. While “Liking” (or “Loving”, or any of the other new ways to “react” to content on Facebook) is a one-click process, flagging a story as fake requires some serious determination: even if you know exactly what to do, the process takes at least six clicks through four screens.

At the end of this frustratingly long process you’re left with an unsatisfying array of options, with no clear indication of whether a flag has been applied at all. Instead, you’re presented with advice to block the source, hide the source, or message the source. If you’re trying to make sure a fake story doesn’t get picked up by friends on the network, none of these options feels quite right.


A Facebook spokesperson told First Draft: “We are always working to improve the experience for people who use Facebook. We have heard from people that they want to see fewer hoaxes in News Feed, so are actively working on more ways to reduce their prominence in feed.

“Overall since we rolled out updates to down-rank hoaxes on Facebook, we have seen a decline in shares on most hoax sites and posts.”

Finding a solution to fake news on Facebook that scales to 1.65 billion monthly active users is certainly complex. Aside from improving the flagging experience for users, what else can be done to stop the hoaxers? Here are a few suggestions which might spark more ideas and a conversation about how to counteract the fakes.

Talk about the problem more

Aside from Facebook’s January 2015 post introducing the flagging feature, there has been precious little public discussion on the nature of the problem of fake news on Facebook.

As BuzzFeed points out in its post, Facebook didn’t provide any data for analysis. Facebook software engineers do publish updates to the ‘newsroom’ blog about showing fewer hoaxes, understanding viral stories and other elements of the News Feed, but could go further. Opening up this data and starting a public dialogue would be a great way to raise awareness of the challenges in this space and could spark ideas about how to counteract the spread of fakes.

Following the recent public outcry over the “Trending” section of the site, more transparency around news on Facebook could benefit the platform elsewhere, too.

Supplement flagging with active moderation

Flagging alone clearly isn’t working. Making the flagging of spurious stories easier for users would be a step in the right direction, but it would also create a system open to exploitation in ways already observed elsewhere on Facebook, with certain groups flagging the legitimate activities of political opponents.

Active moderation of flagged posts by well-trained community managers could help limit the spread of misinformation, while allowing well-intentioned and lighthearted satire to be shared unimpeded. Granted, managing the number of flags for such a huge userbase could be a drain on resources, but when active human moderation exists elsewhere on the platform, why not for fake news too?

Writing openly and publicly about this process, as per the previous point, would mark Facebook as a thought leader among platforms (Facebook certainly isn’t the only one to suffer from the fake news epidemic) and could contribute to a meaningful discussion on basic digital media literacy among users.

Promote debunks in the News Feed

This suggestion merits further discussion in a longer post, but the basic idea is that if Facebook – and other platforms, which are increasingly looking to mimic Facebook’s algorithmic feed – is able to penalize fake news sites in the News Feed, then there is surely potential for the algorithm to better promote debunks and fact-checking.

Increasing the visibility of accurate and fact-based reporting, especially during times of crisis, when the consequences of misinformation and rumor spread by fake news sites are particularly high risk, could have a serious impact in countering the effect of the so-called Law of Incorrect Tweets – that “initial, inaccurate information” will almost always be shared much more widely than an ensuing correction.

There is room for creativity here too: could Facebook somehow notify users who have shared a fake post that the content is designed to mislead, for example? Or could the News Feed algorithm somehow boost worthy hoax-fighting projects like Snopes, DaBegad and HoaxSlayer? Again, as the platform others look to when it comes to algorithm-driven feeds, Facebook has an opportunity to play a leading role here.

Fake news and misinformation on Facebook isn’t going away any time soon. The motivations to mislead are many, from profitable clickbait to political propaganda. But more can be done to fight the fakes, and the motivation to do so should be just as strong.

Facebook is an unprecedented platform for sharing knowledge and information, but every time a fake story goes viral the platform is undermined as a source for news, and the credibility of other, legitimate stories is tarnished with it. Protecting that credibility by fighting fakes is not only a journalistic responsibility; it is essential to maintaining Facebook’s core user experience.

Follow First Draft on Twitter and Facebook for the latest updates in social newsgathering, verification and fake news.

Update: When first published, this article misattributed the quote from a Facebook spokesperson to a BuzzFeed spokesperson. This has now been corrected.
