Social media platforms are investing millions in fighting “fake news,” but little is known about how effective their strategies are, say HKUST’s LEE Dong Won and colleagues. Offering much-needed guidance in today’s era of disinformation, the authors shed new light on the propagation of false and misleading news, identifying how it spreads and, critically, how it can be stopped. Their findings offer practical implications for social media platform owners and policy makers seeking to combat the flood of fake news in the post-COVID-19 world.

Social media channels are “increasingly affected by the spread of fake news,” warn the authors. During the COVID-19 pandemic, many lives have been lost due to false or misleading information about vaccines disseminated via social networking sites. Fake news can spread to thousands of Internet users in days, say the authors, with a “huge impact on politics, social crises, and other aspects of social life.” Whereas content on legacy media is strictly controlled, they add, “social media content can be created, modified, and spread in a much less rigorous way.”

To counter the rise of fake news, social media platforms have developed reporting systems that enable users to flag dangerous content. Platforms fight back in two ways: “content-level” interventions attach warnings to individual posts whose content is questionable, while “account-level” policies identify and restrict the accounts that share fake news.

To assess the effectiveness of interventions designed to restrict the sharing of misinformation, the authors gathered and analyzed two years’ worth of data from Sina Weibo, China’s largest social media platform. They used three measures to evaluate the impact of a fake news post: centrality (how often the post is forwarded directly from its source); dispersibility (how far and how deeply the post travels within a network); and influenceability (how often the post is then passed on again by the accounts that receive it).
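The paper’s exact formulas are not reproduced here, but a minimal sketch can make the three measures concrete. The snippet below assumes a forwarding cascade represented as a tree rooted at the original post and computes plausible stand-ins for each measure; the toy data, the direct-versus-onward split, and all names are illustrative assumptions rather than the authors’ definitions.

```python
from collections import defaultdict, deque

# Toy forwarding cascade: each edge (u, v) means account v forwarded
# the post from account u. "post" stands for the original author.
# (Hypothetical data; the paper's Weibo cascades are far larger.)
edges = [
    ("post", "a"), ("post", "b"), ("post", "c"),  # direct forwards
    ("a", "d"), ("a", "e"), ("d", "f"),           # onward forwards
]

children = defaultdict(list)
for parent, child in edges:
    children[parent].append(child)

def cascade_metrics(root="post"):
    """Rough stand-ins for the three impact measures (assumed forms)."""
    # Breadth-first walk recording each forwarder's depth in the cascade.
    depth_of = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for kid in children[node]:
            depth_of[kid] = depth_of[node] + 1
            queue.append(kid)

    # Centrality: forwards made directly from the original post.
    centrality = len(children[root])
    # Dispersibility: how deeply (levels) and how widely (accounts per
    # level) the post travels through the network.
    depth = max(depth_of.values())
    width = max(
        (sum(1 for d in depth_of.values() if d == level)
         for level in range(1, depth + 1)),
        default=0,
    )
    # Influenceability: onward forwards by accounts other than the author.
    influenceability = len(edges) - centrality
    return centrality, (depth, width), influenceability

print(cascade_metrics())  # -> (3, (3, 3), 3)
```

On this toy cascade the post is forwarded three times directly, reaches a depth of three levels with at most three accounts per level, and is passed on three more times by its recipients.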

The findings were illuminating. “Compared with truthful news,” the researchers report, “fake news is disseminated in a less centralized and more dispersed manner and survives for a shorter period after a forwarding restriction policy is implemented.” Reassuringly, their results suggest that “flagging fake news can reduce its ambiguity and prevent it from being disseminated farther and deeper.”

However, while flagging a post as misleading may limit its spread, flagging alone may not prevent it from causing damage. “Fake news may be continuously forwarded even after being flagged,” warn the researchers, “as it is most often more novel than real news.” In addition, the impact of flagging a post as fake is weakened when the post is shared by a social media influencer. Ironically, a fake news flag may even prompt influential users to spread a misleading post in order to call attention to its falsehood.

“These findings have important implications for online platforms in designing interventions to mitigate the spread of fake news,” say the researchers. For the first time, they empirically distinguish two types of platform intervention (content-level and account-level) that platform owners and policy makers can use to combat the spread of disinformation on social media. “This missing piece of the puzzle advances our understanding of the effectiveness of platform interventions,” the researchers conclude. Their innovative paper has lessons for all who use social media: always check sources and never take online information at face value.