Best practices to combat antisemitism on social media
With the advent of the Internet, antisemitic messages are disseminated more quickly and widely than ever before, and often go unchallenged. Veritable norms of antisemitism have been established in some social media circles. Within these circles, those who disagree with the antisemitic norm and venture into the conversation are ridiculed, attacked, or excluded, severely limiting their ability to exert a positive influence on the conversation. Even more troubling, antisemitic messages often include incitement to violence and are contextualized in big-picture world-order ideologies that are bolstered by alternative news sources and "alternative facts."
This study examines attempts to combat antisemitism on social media, drawing on a survey of non-governmental organizations (NGOs) from Europe, Israel, and North America that have been working in the field. Additionally, analysis of antisemitic posts and their disseminators, along with observations of interactions with those disseminators, provides background information for developing new strategies to combat antisemitism online.
Attempts to work with social media and online platforms to take down antisemitic content have shown some success in recent years. However, major obstacles remain, and too many hate messages, including calls for violence, are never taken down. A clearer legal framework and closer cooperation between social media and online platforms, NGOs, and authorities will be necessary to take down antisemitic content in a timely manner without undue restrictions on freedom of speech. In view of European case law and legislative changes in some European states, such as Germany and France, IT companies will increasingly be held responsible for content disseminated on their platforms. NGOs can play an important role in flagging antisemitic content and in training social media providers to correctly identify it, but ways must be found to ensure that the bulk of the work and the financial burden does not remain with NGOs. As of now, it is still common for social media employees to dismiss user reports of blatantly antisemitic content. To remedy this situation, providers need to invest in training and in technical solutions for monitoring hateful content.
Terms of service that restrict the dissemination of such messages need to be enforced. Policy makers can pave the way for effective regulations in countries where hate speech is illegal, and they can encourage the enforcement of terms of service that restrict hate messages. Social media interactions defy borders, but effective measures for combating antisemitism need to take into account the regulatory framework, traditions, and forms of antisemitism that are specific to each country and its constituent demographics. Social media users often post hyperlinks to antisemitic content on websites such as YouTube and blogs. A comprehensive approach that covers both social media and website content is therefore necessary.
While there seems to be a consensus among NGOs in the field that extreme antisemitic messages should not remain published on social media and that taking such content down should be a priority, not all antisemitic content can be tackled in this manner. Counter-narratives will have to complement takedown efforts and reduce the negative impact of antisemitic messages that are not taken down. This can be done by directly challenging antisemitic messages,
and by calling out the disseminators for their hateful rhetoric. Another proactive method involves disseminating positive narratives or unbiased facts about Jewish people and Israel. However, counter-narratives face a number of challenges to being effective: reaching the target audience, being convincing, and not counterproductively giving antisemitic messages greater visibility than they would have received if simply ignored. Because current counter-narrative efforts are carried out manually, they are time-consuming and labor-intensive, and, if undertaken by individual users, expose those users to attacks.
Our research on major disseminators of antisemitic messages in English identifies three main groups of disseminators whose ideologies sometimes overlap:
1) White supremacists;
2) Users who seem to be obsessed with Israel and who often consider themselves anti-Zionists and claim to be pro-Palestinian;
3) Users who might only use fragments of supremacist or anti-Zionist ideologies but who believe in a wide array of conspiracy theories.
Anti-Zionist conspiracy theories are often a common denominator, although direct interaction between white supremacist and anti-Zionist disseminators of antisemitic messages appears limited.
Disseminators of the most influential antisemitic messages, in terms of reach and re-posting, tend to post such content regularly, peaking during relevant current events involving Jews or Israel. Closing the accounts of these disseminators would be an effective means of reducing antisemitic content online, even if some accounts are recreated under different names. Our observations of attempts to engage critically with disseminators of antisemitic posts reveal a
number of challenges for counter-narrative efforts. The majority of disseminators simply ignore critical responses. Others double down on their hateful messages and attack those who question or criticize their antisemitic posts. Antisemitic Twitter users react more aggressively and rudely than Facebook users, possibly due to the greater level of anonymity on Twitter. Very few disseminators of antisemitic posts feel the need to justify their position, and only rarely do they apologize for using antisemitic tropes or insults.