Health

Study Examines Meta’s Efforts in Suppressing Misinformation on Facebook During COVID-19 Pandemic

A new study by digital and social media researchers from the University of Technology Sydney and the University of Sydney examines Meta’s efforts to suppress misinformation on Facebook during the COVID-19 pandemic and how effective those efforts were in curbing the spread of harmful content.

Published in the journal Media International Australia, the study scrutinized Meta’s content moderation policy, particularly focusing on strategies such as content labeling and shadowbanning. Shadowbanning algorithmically reduces the visibility of problematic content in users’ news feeds, search results, and recommendations, rather than removing it outright.
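Conceptually, shadowbanning can be pictured as a down-weighting step applied during feed ranking. The sketch below is purely illustrative — Meta’s actual ranking systems are not public, and the post fields and demotion factor here are invented for the example:

```python
# Illustrative sketch only: Meta's real ranking pipeline is not public.
# "Shadowbanning" modeled as algorithmic demotion: flagged posts keep their
# engagement score but are multiplied by a demotion factor before feed
# ordering, so they surface less often without being deleted.

def rank_feed(posts, demotion_factor=0.1):
    """Order posts by engagement score, demoting posts flagged as problematic."""
    def effective_score(post):
        score = post["engagement_score"]
        if post.get("flagged_misinformation"):
            score *= demotion_factor  # suppressed, not removed
        return score
    return sorted(posts, key=effective_score, reverse=True)

posts = [
    {"id": 1, "engagement_score": 90, "flagged_misinformation": True},
    {"id": 2, "engagement_score": 50, "flagged_misinformation": False},
    {"id": 3, "engagement_score": 20, "flagged_misinformation": False},
]

ranked = rank_feed(posts)
# The flagged post drops to the bottom (90 * 0.1 = 9) despite its high
# engagement, while still remaining accessible on the platform.
```

This also illustrates why the strategy can be circumvented: because the content is still present, users who notice the demotion can coordinate to boost engagement or alter their posts to avoid being flagged.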

According to UTS Associate Professor Amelia Johns, the analysis revealed that Meta’s content moderation policies have not been entirely successful in deterring the dissemination of harmful content. In some cases, far-right and anti-vaccination accounts experienced increased engagement and followers following Meta’s content policy announcements, raising doubts about the company’s commitment to removing harmful content.

Associate Professor Johns emphasized that Meta’s preference for content labeling and algorithm-driven suppression over removal raises concerns about the seriousness of the company’s efforts to combat misinformation. The study suggests that while Meta argues that removing content is ineffective, its reliance on shadowbans and content labeling has proven only partially effective, incentivizing users to find workarounds and continue spreading misinformation.

The research highlights the resilience of far-right and anti-vaccination communities in circumventing Meta’s policies, demonstrating that users have actively collaborated to manipulate the algorithm, rather than allowing it to dictate their access to content. This challenges the success of Meta’s strategy, indicating that the suppression of misinformation is inconsistent and seemingly indifferent to the impact on vulnerable communities and users encountering false information.

The study sheds light on the complexities of content moderation and the challenges social media platforms face in combating the spread of misinformation. As Meta continues to navigate these issues, the effectiveness of its content moderation policies remains a subject of scrutiny.

More information: Amelia Johns et al, Labelling, shadow bans and community resistance: did Meta’s strategy to suppress rather than remove COVID misinformation and conspiracy theory on Facebook slow the spread?, Media International Australia (2024). DOI: 10.1177/1329878X241236984

Provided by University of Technology, Sydney
