Online platforms attract hundreds of millions of users, who post regularly on many topics. However, the algorithms used to direct them to content have faced criticism. HKUST’s Jia Liu and a colleague examine the phenomenon of “popularity bias” in recommender systems, taking advantage of a quasi-experiment conducted by China’s biggest online knowledge-sharing platform.
“Recommender systems are responsible for a large proportion of the content/items that people see and interact with online,” say the authors. “However, algorithmic selections have long been criticized for creating ‘filter bubbles’ that confine individuals to self-confirming feedback loops.” This can result in intellectual polarization and market homogenization.
Today, with the proliferation of social media, recommender systems increasingly draw on users’ network information to improve their recommendations. However, little is known about the effects of such “social-embedded” recommendation algorithms on users’ online behaviors, or about how these effects differ from users’ “inherent tendencies.” Liu and a colleague set out to fill this research gap empirically.
Their study context was peer-to-peer knowledge-sharing platforms, whose users rely heavily on personalized content flows to obtain information. The researchers focused on Zhihu, the largest peer-to-peer knowledge-sharing platform in China, whose more than 76 million users subscribe to topic areas, post questions and answers, and follow other users.
On its launch in 2011, Zhihu used a content-based filtering algorithm to recommend topics that users were already interested in. However, in 2012, it changed its approach by shifting to a social filtering algorithm, which considers the activities of users’ followers and those whom they follow. “This large-scale quasi-experiment provides an ideal setting for comparing the effects of content-based filtering algorithms and social filtering algorithms on user content and social interests,” explain the researchers.
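To make the distinction concrete, here is a minimal sketch contrasting the two approaches. The paper does not disclose Zhihu’s actual ranking formulas, so the function names, data structures, and scoring rules below are illustrative assumptions: content-based filtering scores an item by its topical overlap with the user’s own subscriptions, while social filtering scores it by how many of the user’s network neighbors have engaged with it.

```python
# Illustrative sketch only: names and scoring rules are assumptions,
# not Zhihu's actual algorithms.
from collections import Counter

def content_based_scores(user_topics, candidate_items):
    """Score each item by its topical overlap with the user's own subscriptions."""
    return {item_id: len(user_topics & item_topics) / max(len(item_topics), 1)
            for item_id, item_topics in candidate_items.items()}

def social_filtering_scores(neighbors, neighbor_activity, candidate_items):
    """Score each item by the share of the user's followees/followers who engaged with it."""
    engagement = Counter()
    for neighbor in neighbors:
        for item_id in neighbor_activity.get(neighbor, ()):
            if item_id in candidate_items:
                engagement[item_id] += 1
    return {item_id: engagement[item_id] / max(len(neighbors), 1)
            for item_id in candidate_items}

# Toy data: questions tagged with topics, and the activity of two network neighbors.
user_topics = {"machine-learning", "statistics"}
candidate_items = {
    "q1": {"machine-learning", "python"},
    "q2": {"cooking"},
    "q3": {"statistics", "machine-learning"},
}
neighbors = ["alice", "bob"]
neighbor_activity = {"alice": ["q2", "q3"], "bob": ["q2"]}

print(content_based_scores(user_topics, candidate_items))                      # favors q3
print(social_filtering_scores(neighbors, neighbor_activity, candidate_items))  # favors q2
```

Note how the two rankings diverge: the content-based score surfaces more of what the user already subscribes to, while the social score can pull in topics the user has never expressed interest in, which is consistent with the shift the researchers observed.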
Statistical and graphical analyses of this unique dataset shed light on the consequences of replacing content-based filtering with social filtering. “At the platform level,” note the researchers, “we show that the intervention significantly shifted user interests from content-oriented to social-oriented.” However, users’ topical interests also became less concentrated, and popular topics gained significantly fewer new subscribers than unpopular ones.
Exploring the mechanisms underlying these seemingly contradictory findings, the researchers suggest that “social filtering is particularly helpful for providing content to users who have no clearly defined targets or do not know what types of content they can explore.” Content-based algorithms may be preferred by established users with a clear interest in certain topics. “Our findings also suggest that new users are likely to face a tough ‘cold-start’ problem if social-embedded algorithms are in use,” the authors add, “because new users have little network connectivity and thus little visibility.”
The researchers conclude that online platforms should be wary of opting for a single algorithm. “We recommend that platforms adopt hybrid recommendation algorithms that assign different weights to different algorithms for individual users.” Indeed, Zhihu itself has since applied a hybrid social and content-based algorithm. In today’s rapidly changing online environment, platforms such as Zhihu must continuously evolve to survive, and this important study will help them do so.
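As a minimal sketch of what such per-user weighting could look like, assuming item scores like those in the earlier example, the snippet below blends the two signals with a weight that grows with a user’s network connectivity. The weighting rule, which leans on the content signal for poorly connected users in line with the cold-start observation, is a hypothetical heuristic, not the formula Zhihu or the authors use.

```python
def hybrid_scores(content_scores, social_scores, n_connections, pivot=50):
    """Blend content-based and social scores per user.

    Hypothetical heuristic: trust the social signal less for users with
    few network connections, since they have little visibility to it.
    """
    w_social = min(n_connections / pivot, 1.0)
    items = set(content_scores) | set(social_scores)
    return {i: (1 - w_social) * content_scores.get(i, 0.0)
               + w_social * social_scores.get(i, 0.0)
            for i in items}

# A new user with 5 connections: the content signal dominates.
print(hybrid_scores({"q1": 0.8, "q3": 0.5}, {"q3": 1.0}, n_connections=5))
# A well-connected user with 200 connections: the social signal dominates.
print(hybrid_scores({"q1": 0.8, "q3": 0.5}, {"q3": 1.0}, n_connections=200))
```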