Government regulation may curb harmful social media content
A study of the complexities of moderating harmful online content offers valuable insights into how policymakers can address these challenges and create a healthier, safer digital space.
A recent study suggests that government-mandated external moderation can be effective in reducing the damage caused by harmful social media content, even with a short turnaround time.
The study, conducted by Dr Marian-Andrei Rizoiu from the University of Technology Sydney (UTS) and Philipp J. Schneider from École Polytechnique Fédérale de Lausanne, examines the dynamics of content dissemination on social media platforms. The paper, 'The Effectiveness of Moderating Harmful Online Content', has been published in the journal PNAS.
The researchers explored the relationship between moderation delay and harm reduction by examining two key measures: potential harm and content half-life. Potential harm refers to the number of harmful offspring (reshares, retweets, etc.) generated by a single post, while content half-life is the time it takes for half of all offspring to be generated from the original post.
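To make these measures concrete, here is a minimal sketch in Python, assuming offspring generation decays exponentially over time; this is an illustrative simplification, not necessarily the model used in the paper:

```python
import math

# Minimal sketch (an illustrative assumption, not necessarily the paper's
# exact model): suppose a post generates offspring -- reshares, retweets,
# etc. -- at a rate that decays exponentially over time with rate `lam`.
# Under that assumption, the content half-life (the time for half of all
# offspring to appear) is t_half = ln(2) / lam, and vice versa.

def half_life(lam: float) -> float:
    """Time by which half of all offspring have been generated (hours)."""
    return math.log(2) / lam

def decay_rate(t_half: float) -> float:
    """Decay rate implied by an observed half-life (per hour)."""
    return math.log(2) / t_half

# Example: Twitter's 24-minute half-life (0.4 h) implies a decay rate of
# roughly 1.73 per hour, i.e. interest in a tweet fades very quickly.
print(round(decay_rate(24 / 60), 2))  # 1.73
```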
The study found that the effectiveness of moderation depends on the content's half-life and potential harm. Content half-life varies widely across platforms: Twitter has a half-life of 24 minutes, Facebook 105 minutes, Instagram 20 hours, LinkedIn 24 hours and YouTube 8.8 days. A shorter content half-life means most of the harm occurs soon after content is posted, so moderation must be rapid to be effective.
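As a rough back-of-envelope illustration of why half-life matters, the sketch below combines the platform half-lives quoted above with the same exponential-decay assumption to estimate what fraction of a post's harm could still be prevented by removing it after a hypothetical 24-hour turnaround (the delay is chosen purely for illustration):

```python
# Back-of-envelope sketch using the platform half-lives quoted above.
# Under the same exponential-decay assumption, the share of a post's
# offspring that have NOT yet appeared after a moderation delay d is
# 2 ** (-d / t_half) -- the fraction of harm that removal can still prevent.
# The 24-hour delay below is a hypothetical turnaround, for illustration.

HALF_LIVES_HOURS = {
    "Twitter": 24 / 60,    # 24 minutes
    "Facebook": 105 / 60,  # 105 minutes
    "Instagram": 20.0,     # 20 hours
    "LinkedIn": 24.0,      # 24 hours
    "YouTube": 8.8 * 24,   # 8.8 days
}

def harm_preventable(delay_hours: float, t_half_hours: float) -> float:
    """Fraction of total offspring still to come if removed after `delay_hours`."""
    return 2 ** (-delay_hours / t_half_hours)

for platform, t_half in HALF_LIVES_HOURS.items():
    print(f"{platform:9s} {harm_preventable(24.0, t_half):6.1%}")

# Twitter    0.0%   -- nearly all harm is done long before 24 hours
# Facebook   0.0%
# Instagram  43.5%
# LinkedIn   50.0%
# YouTube    92.4%  -- most harm is still preventable a day later
```

On this simplified reading, a 24-hour turnaround would be almost useless on Twitter but would still avert most of the harm on YouTube, which illustrates the study's point that moderation reaction times need to match each platform's content half-life.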
The study's implications are relevant for policymakers aiming to introduce similar legislation in other countries. The research provides insights into mechanisms for content moderation, focusing on trusted flaggers, effective reporting tools, and how to calculate appropriate moderation reaction times.