According to a rights group, Facebook and Instagram consistently employ ‘six key patterns of undue censorship’ on content that supports Palestine
A recent report from Human Rights Watch (HRW) asserts that Meta has engaged in “systemic and global” censorship of pro-Palestinian content since the Israel-Gaza war began on October 7.
In an extensive 51-page report, the organization documented and examined over a thousand reported instances of Meta removing content and suspending or permanently banning accounts across Facebook and Instagram. HRW identified “six key patterns of undue censorship” affecting content supporting Palestine and Palestinians. These patterns included the removal of posts, stories, and comments; the disabling of accounts; restrictions on users’ ability to interact with others’ posts; and “shadow banning,” which significantly reduces the visibility and reach of an individual’s content, according to HRW.
The report highlights instances from over 60 countries, primarily in English, all expressing “peaceful support of Palestine in diverse ways.” Notably, even HRW’s posts seeking examples of online censorship were flagged as spam.
The group stated in the report that the censorship of content related to Palestine on Instagram and Facebook is both systematic and global. It attributed Meta’s inconsistent enforcement of its policies and wrongful removal of content about Palestine to “erroneous implementation, overreliance on automated tools to moderate content, and undue government influence over content removals.”
In a response to The Guardian, Meta acknowledged making frustrating errors but rejected the suggestion that it deliberately and systemically suppresses a particular voice. Meta argued that citing 1,000 examples, out of the enormous volume of content posted about the conflict, as proof of “systemic censorship” may make for a compelling headline but does not make the claim any less misleading.
Meta asserted that it is the sole company globally to have publicly disclosed human rights due diligence concerning matters linked to Israel and Palestine.
The company’s statement argues: “This report disregards the challenges of implementing our policies worldwide during a rapidly evolving, highly polarized, and intense conflict, resulting in an upsurge in reported content. Our policies are crafted to ensure everyone has a voice while simultaneously maintaining the safety of our platforms.”
For the second time this month, Meta faces scrutiny over allegations of systematically suppressing pro-Palestinian content and voices.
Recently, Elizabeth Warren, the Democratic senator for Massachusetts, penned a letter to Meta’s co-founder and CEO, Mark Zuckerberg. The letter demanded information in response to numerous reports from Instagram users, dating back to October, indicating that their content was downgraded or removed, and their accounts were subject to shadow banning.
On Tuesday, Meta’s oversight board declared that the removal of two specific videos depicting the conflict from Instagram and Facebook was a mistake. The board emphasized the videos’ value in “informing the world about human suffering on both sides.” One video, posted to Instagram, showed the aftermath of an airstrike near al-Shifa hospital in Gaza; the other, posted to Facebook, showed a woman being taken hostage during the October 7 attack. Both videos were reinstated.
Users of Meta’s platforms have reported perceived technological bias in favor of pro-Israel content and against pro-Palestinian posts. Notably, Instagram’s translation software replaced “Palestinian” followed by the Arabic phrase “Praise be to Allah” with “Palestinian terrorists” in English. Additionally, WhatsApp’s AI, when asked to generate images of Palestinian boys and girls, produced cartoon children with guns, while images of Israeli children did not include firearms.