Meta says it failed to adequately remove hate, incitement during May 2021 conflict

A report commissioned by Meta, the company that owns Facebook, Instagram, and WhatsApp, has found that its social media platforms failed to adequately remove content inciting hatred and violence against all parties to the May 2021 conflict between Israel and the Hamas regime in Gaza.

The report, which was released Thursday, stated that under-enforcement of Meta policies on incitement to violence and hatred, and policies relating to dangerous organizations and individuals, meant that such content had remained online while offending accounts went unpunished.

The document also found that over-enforcement of Meta's policies, meaning the unjustified removal of content, was a problem that negatively impacted users' freedom of expression, freedom of assembly and association, freedom from incitement, and bodily security.

The report, commissioned by Meta to assess the impact of its platforms during the May 2021 conflict, asserted that Arabic-language content suffered from both over-enforcement and under-enforcement to a higher degree than Hebrew-language content.

The document thus found that Meta’s policy enforcement actions had an especially adverse effect on Palestinian users’ freedom of expression, freedom of assembly, and political participation.

This restricted the ability of Palestinians to share information and insights about their experiences during the conflict, the report found.

At the same time, the report asserted that Meta, as a result of under-enforcement of its own policies, failed to remove Arabic-language content that incited violence against Israelis and praised the Hamas terror organization.

It also noted that antisemitic content remained online despite violating Meta’s hate-speech policies.

More generally, the report noted that Meta’s platforms had all been used to post “hate speech and incitement to violence against Palestinians, Arab Israelis, Jewish Israelis, and Jewish communities outside the region” throughout the conflict.

The report attributed the failure to remove some of this content to the high volume of such cases during the conflict and a lack of Arabic- and Hebrew-speaking staff to adequately monitor and address the spike.

The report did not examine whether this kind of content had any real-world consequences.

On Wednesday, Israel Police chief Kobi Shabtai said that content on social media networks had “driven people onto the streets” during riots in Israel by both Arab and Jewish Israelis.

Shabtai asserted that the real-world harm of such content posted on social media warranted a broad shutdown of such networks in order to calm future situations of serious civil unrest.

The report published on Thursday was commissioned by Meta in July this year following a recommendation from its Oversight Board that it perform due diligence on its impact on human rights — in this case in the Israel-Palestinian conflict and more broadly in global conflicts.

It is the ninth report of its kind into the consequences of content on Meta’s social media networks, following similar reports regarding Facebook’s impact on the genocide of Rohingya Muslims in Myanmar, as well as the effect of Meta’s platforms on human rights violations in the Philippines, Cambodia, Indonesia, and Sri Lanka, among others.

Although the document made broad assertions and recommendations about the impact of Meta’s enforcement policies, it cited almost no concrete examples of over- or under-enforcement of Meta policies, making it difficult to evaluate the findings.

The case that led to the commissioning of the report was, however, disclosed.

In that situation, a Facebook user in Egypt shared a post in May 2021 from a verified Al Jazeera page that included quotes attributed to a spokesperson for Hamas’s Al-Qassam Brigades terror militia.

The spokesperson made comments demanding Israel carry out certain actions in Jerusalem and warning of consequences should it not comply.

The post was taken down for violating Meta’s policy on Dangerous Organizations and Individuals (DOI), which proscribes expressing praise or support for such entities and people, or creating content that represents them, such as Facebook pages and events.

Upon appeal, the user insisted they had shared the post only to update people on the developing crisis, and were merely re-sharing the Al Jazeera post, which was itself not removed.

After reviewing the case, Facebook identified the removal of the post as a case of over-enforcement and restored the content.

Another key example of over-enforcement in Arabic-language posts cited in the report was the addition of the #AlAqsa hashtag to a hashtag block list due to the term’s similarity to the Al-Aqsa Martyrs’ Brigades, a terrorist group proscribed under Meta’s DOI policy.

The #AlAqsa hashtag was therefore hidden from search results despite being used by Meta users to discuss the Al-Aqsa Mosque in Jerusalem, one of the holiest sites in Islam and a focal point for the May 2021 conflict.

According to the report, Arabic content was subjected to greater over-enforcement — meaning the erroneous removal of content expressing the “Palestinian voice” — on a per-user basis than Hebrew content.

At the same time, “proactive detection rates” in Arabic were also higher, likely due to Meta’s legal obligation to remove content relating to US-designated foreign terror organizations, including Hamas and other Palestinian terror militias.

Similarly, the report found that Palestinians were more likely to violate Meta’s DOI policy since Hamas, one such designated group, runs Gaza.

Since DOI violations incur steep user penalties, Palestinians were more likely to face stiffer consequences “for both correct and incorrect enforcement of policy.”

The report also found cases of under-enforcement on Hebrew- and Arabic-language content, meaning that some posts with incitement to violence were not removed.

But under-enforcement was more prevalent in Hebrew-language content, meaning that incitement to violence and other content against Palestinians and Arab Israelis faced lower levels of removal and users posting such content faced lower rates of disciplinary action.

This was largely due to the fact that Meta did not have at the time a Hebrew “classifier,” an algorithm that identifies and sorts content, including posts that have a high likelihood of violating Meta’s policies, the report found.

Meta has stated that it has now created a Hebrew classifier to deal with this concern.

Another problem cited in the report was higher error rates in the Arabic classifiers for the Palestinian dialect of Arabic, and a failure to route flagged content to reviewers who understand that dialect.

The report also noted that case volume for content that possibly violated Meta’s standards spiked dramatically, as much as ten times normal volume, which made effective review and enforcement more challenging.

One key consequence of over-enforcement during the May 2021 conflict was the loss of visibility and engagement for users after posts were erroneously removed, the report asserted.

The report noted that Meta does not differentiate between different types of hate speech, but acknowledged that organizations that monitor antisemitism “tracked clearly antisemitic content that violated Meta’s policies that were not detected and removed during this period,” citing studies by the World Jewish Congress and the UK’s Community Security Trust.

It said this was a result of “insufficient cultural competency on the part of content moderators” and “insufficient linguistic capacity” by those setting policy for languages in which antisemitic content appeared.

The report also specifically noted cases where WhatsApp was used by right-wing Israelis to incite violence and coordinate attacks against both Arab and Jewish Israelis, as well as against Israeli journalists.

It asserted that despite the errors in content moderation across Meta’s platforms, there was no “intentional bias” at the company, nor was there evidence of “racial, ethnic, nationality or religious animus in governing teams or evidence that Meta had intentionally sought to benefit or harm any particular group because of their race, religion, nationality, ethnicity, or any other protected characteristic when developing or implementing policies.”

The report, conducted for Meta by the consultancy group BSR, made a series of recommendations to address the problems, which Meta said it is in the process of implementing, partly implementing, or assessing the viability of implementation.

“There are no quick, overnight fixes to many of these recommendations, as BSR makes clear,” Meta said in response to the report.

“While we have made significant changes as a result of this exercise already, this process will take time — including time to understand how some of these recommendations can best be addressed, and whether they are technically feasible.”
