In Case You Missed It (17 Feb 2020)
Mark Zuckerberg: Big Tech needs more regulation (Financial Times)
Mark Zuckerberg admits Facebook was slow on Russian disinformation (Financial Times)
Russia Knows Just Who to Blame for the Coronavirus: America (Foreign Policy)
Attempts at Debunking “Fake News” about Epidemics Might Do More Harm Than Good (Scientific American)
Health misinformation spreads extremely fast, as the recent coronavirus epidemic shows. The outbreak has led to fearmongering and false cures peddled on social media, prompting the World Health Organization to label the information crisis an “infodemic”. Contrary to popular belief, a recent study suggests that debunking false claims might be counterproductive. Studying the 2015–2016 Zika virus crisis in Brazil, researchers concluded that working with “trusted institutions and leaders” could prove more effective than debunking false information. The study is not conclusive, however — other research shows that corrective messaging can work in certain circumstances, exposing the difficulty of managing misinformation during a health crisis. While there is no indisputable method to counter health misinformation, its effects can be devastating.
Out-of-context photos are a powerful low-tech form of misinformation (The Conversation)
Misinformation comes in different shapes and sizes, from a politician spreading false statistics in a press conference to a friend sharing a misleading article on Facebook. Images are one of the most common vehicles for misinformation, and research shows that people are generally poor at spotting manipulated images. Among the examples highlighted by Lisa Fazio in The Conversation is a tweet by the conservative group Turning Point USA sharing an image of empty supermarket shelves with the caption “#SocialismSucks” — even though the image was taken in the aftermath of the 2011 earthquake in Japan. First Draft wrote about visual misinformation in 2017, but the problem persists today. One solution proposed by Fazio is for social media platforms to label images with their publishing details, giving viewers more context.
How social network sites and other online intermediaries increase exposure to news (PNAS)
Contradicting the prominent ‘filter bubble’ theory, new empirical evidence suggests that the news people access on platforms like Google or Facebook is, in fact, more diverse than previously thought. After collecting data from more than 5,000 participants and tracking which news articles people encountered via search engines or social media recommendations, this recent research, published in the Proceedings of the National Academy of Sciences of the United States of America, dismantles the assumption that search and recommendation algorithms bias news diets toward users’ preferences and thus decrease content diversity. Read a summary on this Twitter thread.
Forget Fake News: Why We’re Wrong About Nearly Everything (The Daily Beast)
“People are often incredibly wrong about key social and political realities in their countries”, writes Bobby Duffy, director of the Policy Institute at King’s College London. As humans, we view the present more negatively than the past, even when given information to the contrary. Known in psychological studies as ‘negativity bias’, this is the tendency to pay more attention to negative events than positive ones. Misleading information plays on biases such as these, writes Duffy, and has the potential to threaten our grip on reality. This makes it increasingly important for us to fight back in an age of social media, as these platforms magnify, entrench and validate our negative misperceptions about the world. He argues that we must be wary of those exploiting these biases to “convince us that we are living in a new dystopian era.”
In Case You Missed It (11 Feb 2020)
Facebook’s $9bn Irish tax row due to begin in US court (The Irish Times)
Exclusive: Trump-linked religious ‘extremists’ target women with disinformation worldwide (openDemocracy)
An openDemocracy investigation into a worldwide anti-abortion campaign has uncovered dozens of reports of dangerous disinformation aimed at women in need of medical help and advice. Heartbeat International, a US-based global network, is behind these efforts, with financial and political support from Washington, DC, including US Vice President Mike Pence and President Donald Trump. The group has funded hundreds of affiliates across the world, many of which used shame, guilt and fear to dissuade vulnerable women in countries such as Ukraine, South Africa and Mexico. Now, some state leaders are demanding that the clinics be investigated and regulated. “These activities don’t empower anyone, don’t inform, don’t give counsel…” said Argentinian parliamentarian Mónica Macha. “They just seek to scare and create panic to push [women] into decisions based on false information and ideological traps. The goal is clear: prevent women’s autonomy.”
Social media’s bomb of hate goes TikTok (The Jerusalem Post)
As TikTok’s use grows rapidly around the globe, the Chinese-owned app is drawing increasing scrutiny over its privacy and data security policies. Writing in The Jerusalem Post, columnist Emily Schrader criticized the app’s focus on children and teens as “exceptionally dangerous”, and pointed to the company’s notable failures in removing and preventing harmful content. Schrader also highlighted how some extremists are using the platform to share antisemitic content and push hate among Palestinian teens. While TikTok has been notably slow to respond, Schrader said these problems exist across social media platforms. But, she concludes, TikTok’s popularity with children, and what it allows them to access, makes it more problematic.
Linfonational: Impersonating politicians on Facebook to amplify disinformation (EU DisinfoLab)
An open-source intelligence investigation by researchers at the EU DisinfoLab details the techniques they used to uncover a rampant political disinformation campaign on Facebook. The website L’Infonational set up Pages impersonating French President Emmanuel Macron and 2017 election candidates Marine Le Pen and Jean-Luc Mélenchon, republishing polarizing content from French sources and advertising across its web of connected sites to build an audience for its own fake content. The investigation revealed a coordinated effort to engage with users’ established beliefs and worldviews using built-in Facebook tools that brought in financial rewards, and it highlights the inherent vulnerability of social media networks to exploitation.
How Political Campaigns Can Fight Disinformation (NPR)
The topic of disinformation has worked its way into our daily vocabulary in the post-2016 era — so much so that some newsrooms have established a whole beat for it. At the same time, political campaigns are bringing in their own disinformation experts to actively anticipate and monitor social media platforms for false content. In a conversation with NPR, Lisa Kaplan, a campaign disinformation specialist, said the goal of countering the problem is to ensure “everybody is able to access the information that they need in order to exercise their right to vote.” The key, she said, is showing voters how a rumor is created and spread online.