Meta's Role in Amplifying Anti-Rohingya Hate on Facebook


The investigation's findings reveal Meta's failure to address hate speech and incitement against the Rohingya people on Facebook, resulting in a platform that amplified and promoted harmful content. Despite Meta's 2018 admission that it had not done enough, its business model of mass data collection and engagement optimization continues to drive the dissemination of divisive and harmful content.


Uploaded on Aug 23, 2024



Presentation Transcript


  1. The Social Atrocity: Meta and the Right to Remedy for the Rohingya

  2. Investigation Findings. Hate speech, incitement to violence, and advocacy of hatred against the Rohingya were rife on Facebook in the years and months leading up to 2017. The military and its allies were the key actors in systematically flooding the platform. The prevalence of this content played a significant role in the violence that ensued, creating an enabling environment for mass violence.

  3. Investigation Findings. We catalogued repeated efforts by local activists to seek action between 2012 and June 2017: "We warned them at every opportunity we had. They didn't care. They ignored us for a long time and then just focused on PR solutions. They seemed to care more about their own reputational risk than they really did about addressing the problems." — Htaike Htaike Aung, Myanmar digital rights activist. Meta utterly failed to mitigate human rights risks, including by failing in its minimal content moderation efforts and by ignoring repeated pleas from communities and activists.

  4. Investigation Findings. Meta eventually admitted in 2018 that "we weren't doing enough to help prevent our platform from being used to foment division and incite offline violence." META'S NARRATIVE: we are a neutral platform that failed to do enough (content moderation) in the face of an unprecedented crisis. HOWEVER: this research provides an in-depth analysis of how Facebook's algorithms actively amplified and promoted anti-Rohingya hate and delivered it right to the people who were most likely to act on this incitement.

  5. Inside the Algorithm: The Facebook Papers. Emblematic quote: "The mechanics of our platform are not neutral." Case study: the Facebook Papers reveal that on an unknown date in 2020, a team within Meta received an escalation of a video by the leading anti-Rohingya hate figure U Wirathu. The investigation revealed that over 70% of the video's views had come from "chaining," i.e., Facebook actively recommending divisive and inciting content.

  6. It's the Business Model. Meta's business model, based on the mass collection and monetization of personal data, is the root cause of the company's active amplification and dissemination of hate speech, incitement to violence, and disinformation. Meta optimizes its content-shaping algorithms (such as those powering Facebook's News Feed and Recommendations) to maximize user engagement, which has been consistently shown to disproportionately favour the most harmful and inflammatory types of content. Content moderation alone is an inherently inadequate solution to these types of harms.

  7. Conclusions. Meta directly contributed to harm against the Rohingya by amplifying discriminatory content. Meta also indirectly contributed to real-world violence by actively promoting content that incited such violence. Meta failed to engage in adequate human rights due diligence. Meta therefore reached the threshold of contribution under the UNGPs and owes a remedy to affected communities.
