Social media platforms enable millions of people worldwide to share stories, images, and other content. However, sharing the private information of others (peers) may have unintended and far-reaching consequences, both positive and negative. HKUST professors Kai-Lung Hui and Hong Xu and their co-researcher set out to determine how the benefits of peer disclosure, such as the pleasure brought by a friendly compliment, can be balanced against the potential harm done by making personal photos, videos, or comments public without consent.

“The pressing issue,” the researchers tell us, “is to help users interact effectively without excessively infringing on other people’s privacy in online social communities.” They aimed to answer two important questions. How do the privacy issues associated with peer disclosure affect users’ decisions to join social media platforms and share information about others? How can and should platforms regulate these activities? To shape “a healthy online environment for social interaction,” the researchers explain, we need practical ways of limiting the potential harm done by posting.

Inspired by economic theories of product pricing, the authors treated the harm from such posts as an externality: a cost imposed on others that the poster does not bear. They developed a theoretical model representing the environment of an actual social media community, capturing users’ attachment to the community, the relationships between them, and how and what they post. They were then able to test various ways to “reduce privacy harm without sacrificing information contribution.”
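To make the externality framing concrete, here is a deliberately simplified sketch (our own illustration, not the authors’ formal model). Suppose a user posts an amount $n$ of content about a peer, gaining benefit $b(n)$ while imposing privacy harm $h(n)$ on that peer:

$$ \max_{n}\; b(n) \qquad \text{versus} \qquad \max_{n}\; b(n) - h(n). $$

Left to their own devices, the poster solves the first problem and ignores $h(n)$, so posting exceeds the socially efficient level $n^{*}$ at which marginal benefit equals marginal harm, $b'(n^{*}) = h'(n^{*})$. Regulation, in this framing, is any mechanism that nudges or forces the poster’s choice back toward $n^{*}$.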

Fascinatingly, they found that the most effective approach was one of indirect control, not strict prohibition. This method, known as “nudging,” reflects the original friendly and welcoming intentions of social media platforms. “A nudge,” explain the researchers, “is a soft paternalistic measure that operates as a cue to remind users of the potential privacy damage that their posts could bring to others.”

The authors do, however, acknowledge that platforms may need more direct methods of control in their armory. Noting that “the classical approach is to use command-and-control regulations that directly restrict agents’ actions,” they suggest applying a quota system that limits the length and number of users’ posts. Such a cap would prompt users to weigh their involvement with the platform and, ideally, to moderate what they post.
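In the same simplified sketch (again an illustrative assumption, not the paper’s formulation), a quota is a hard cap $q$ on how much a user may post:

$$ \max_{n}\; b(n) \qquad \text{subject to} \qquad n \le q. $$

If the platform can set $q$ near the efficient level $n^{*}$, the cap trims the excess posting while leaving the rest of the user’s decision untouched; the practical difficulty is knowing where $n^{*}$ lies for a large and varied user base.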

Together, these methods offer a novel “belt and braces” approach to regulating peer disclosure on social media. Nudging can encourage users to “think twice about the privacy consequences of their posts,” explain the researchers. Meanwhile, the quota method can reduce the level of posting and thus the likelihood of harm, all without putting users off.

The challenge now is to determine how these ideas can be most effectively applied. For example, the authors tell us, “nudging can take different forms in practice, such as warning messages or visual cues.” However, they also recognize the fine balance between persuasive approaches and imposed control on social media, particularly as users can leave at any time. This tension is at the center of broader debates about privacy rights and responsibilities on the Internet. The researchers offer invaluable insights into how our social duty to protect the vulnerable can be extended to online communities in the Information Age.