
It’s crucial to understand how misinformation flows through diaspora communities

The way misinformation travels through diaspora communities — including the Chinese diaspora — deserves more of our attention.

The 2020 news cycle provided an object lesson in how multilingual misinformation can travel at high velocity across social platforms and geographical borders, often iterating faster than platforms and fact checkers can correct it. For those who monitor the online information ecosystems of diaspora communities, 2020 allowed us to further our understanding of common tropes, tactics and narratives that take root. But part of what we’ve also learned this year is how much we still don’t know. The way misinformation flows across these networked audiences, effective interventions within closed messaging apps, the correlation between imbalanced media coverage and community members’ reliance on alternative news sources — these are all subjects that deserve more attention from platforms, researchers, journalists and other information providers in 2021.

The Chinese diaspora is a useful case study. Overseas Chinese communities consume, discuss and share news in ways that often bypass traditional media gatekeepers. In addition to established media sources and the overseas Chinese press, many obtain news on platforms such as Facebook, Twitter and YouTube. Weibo, a Chinese-language microblogging platform akin to Twitter, is also used by some with ties to mainland China to traverse the country’s Great Firewall and stay connected with friends and family.

Additionally, millions of overseas Chinese use the messaging features in WhatsApp and WeChat (the latter has a social networking and a group chat component) to share personal updates and news with loved ones.

These platforms and messaging apps are vectors for cross-border disinformation and distortion. “Mainstream” social media such as Twitter and Facebook offer greater transparency, and perhaps a better chance that users will encounter corrective information than they would in closed messaging apps. But their openness can also be exploited by those targeting Chinese-speaking audiences. For example, researchers have identified alleged China-backed information operations that targeted the 2019 Hong Kong protests and amplified coronavirus conspiracies on platforms such as Twitter.

Opponents of the Chinese Communist Party (CCP) use these platforms to spread Chinese-language disinformation as well. Our monitoring this year showed that some in the Chinese diaspora who harbor more extreme political views toward the CCP collaborated with American right-wing figures to spread misinformation about the pandemic and the US election on Twitter, Facebook and YouTube. They also reached people via offline pamphlets.

Meanwhile, China-based platforms such as WeChat and Weibo are heavily regulated by the Chinese government; for instance, messages with flagged keywords are blocked on WeChat. Looking at the availability of coronavirus information on WeChat and YY (a live streaming platform) at the start of the pandemic, Toronto-based online watchdog Citizen Lab found the scope of censorship “may restrict vital communication related to disease information and prevention.”

Misinformation circulating in WeChat and WhatsApp can be pernicious in additional ways: In group chats, existing trust among participants could lead users to process the information with less scrutiny. The closed or semi-closed nature of these spaces makes it difficult for journalists and researchers to obtain a complete picture of the volume and flow of misinformation. Crucially, the labels applied to misinformation in “open” spaces such as Twitter and Facebook do not travel with the false or misleading posts when they are shared on other platforms. Rather, inaccurate information circulates unchecked across the diaspora once it leaves the platforms where the contextual warnings were applied.

Injecting fact checks into closed and semi-closed spaces is one obvious way to counter dubious claims, but those who specialize in countering Chinese-language disinformation in WhatsApp and WeChat face immense challenges. WeChat “is already inherently fertile ground for misinformation, because unlike other messaging apps and social media, the platform hosts a vast number of native content publishers vying for attention,” noted Chi Zhang, a former researcher for Columbia University’s Tow Center for Digital Journalism, in 2018. The disinformation flow appears nearly unstoppable when considering its volume against the handful of WeChat-based Chinese-language fact checkers, such as 反海外谣言中心 (Centre Against Overseas Rumours) and 反吃瓜联盟 (No Melon Group) — the former consists of a team of 21 people (according to an introductory message the account sends to new followers), compared to the app’s over 1.2 billion users worldwide.

Understanding and countering misinformation in diaspora communities is complex and can’t be accomplished by fact checkers alone. Here are our recommendations for platforms, researchers, media outlets and other information providers:

Platforms must recognize that misinformation can proliferate beyond a platform’s perimeters, and ensure that interventions such as contextual labels travel with a post if it is shared off-platform. Misinformation policies should be applied consistently across languages, and platforms that offer labels should ensure their availability in languages other than English. Data about the cross-platform flow of labeled content should be shared with misinformation researchers and journalists.

For researchers, more study is needed on the development of new information ecosystems as immigrants move across geographical borders, the importance of alternative news sources in various diaspora communities, the impact of misinformation on these communities, and effective ways of correcting or prebunking misinformation in different communities.

News outlets must approach reporting on diaspora communities (and the misinformation to which they are vulnerable) with nuance and empathy. Mentions of geopolitics may be relevant in coverage of some diasporas, but placing too much emphasis on “rifts,” or amplifying an “us versus them” narrative, might over time erode these communities’ trust in traditional media sources.

In the case of the overseas Chinese, the pandemic and China’s strained relations with Western countries this year had already fueled emotionally charged public discourse. Chinese American and Chinese Canadian communities were stung by pandemic-related racism. Rapidly deteriorating Australian-Chinese relations left Chinese Australians and New Zealanders vulnerable to racist attacks, with some branded potential spies and others publicly questioned about their loyalties. On top of all this came imbalanced media coverage buoying a dichotomy of “China” versus “the West,” which could prompt members of the diaspora to retreat further into alternative news sources and closed platforms that act as “echo chambers.”

Additionally, outlets should be aware that journalists covering China-related propaganda or disinformation could come under online attack. One Chinese Australian journalist’s work for ABC Australia about a controversial skit on Chinese history drew horrific trolling (content warning: graphic and abusive language) from pro-CCP Twitter users, while the Australian public broadcaster was hit with accusations of publishing “CCP propaganda” when it reported on the highly organized anti-CCP Himalaya movement.

Finally, information providers seeking to disseminate quality content in diaspora communities should work with trusted community leaders to ensure that accurate information, couched in appropriate cultural context, is effectively communicated in the online spaces where the communities congregate.

Damage done in one community’s information ecosystem can bleed outwards into the broader ecosystem. Our work on the Chinese diaspora — as well as the experiences of those monitoring Latin American and African communities online — has shown that we need a better understanding of the ways harmful information bypasses traditional gatekeepers and gains footholds across borders, and more robust ways of preventing and mitigating this damage.
