Two new studies: From bad to worse: amplification and auto-generation of hate

A pair of new studies from ADL (the Anti-Defamation League) and the Tech Transparency Project (TTP) shows how some of the biggest social media platforms and search engines at times directly contribute to the proliferation of online antisemitism, hate and extremism.

For the studies, researchers created a series of online personas designed to mirror what real users experience on the tech platforms. To test the algorithms, they searched for terms related to conspiracy theories as well as popular topics across four of the biggest social media platforms: YouTube, Facebook, Instagram and Twitter.

In the first study, From Bad to Worse: Algorithmic Amplification of Antisemitism and Extremism, three of the four platforms recommended extreme, contemptuous antisemitic and hateful content. One platform, YouTube, did not take the bait: it was responsive to the personas but resisted recommending antisemitic and extremist content, proving that this is not just a problem of scale or capability.

The second study, Auto-generating and Auto-completing Hate, tested search functions, all of which made finding hateful content and groups a frictionless experience by autocompleting terms. In some cases, the platforms even automatically generated pages that served hateful content to users. Notably, the companies didn't autocomplete terms or auto-generate content for other forms of offensive material, such as pornography, showing once again that this is not just a problem of scale or capability.

“These investigations ultimately revealed that tech companies’ hands aren’t tied. Companies have a choice in what to prioritize, including when it comes to tuning algorithms and refining design features to either exacerbate or to curb antisemitism and extremism,” said Yael Eisenstat, ADL Vice President and head of the ADL Center for Technology & Society. “While there’s no single fix to the scourge of online hate and antisemitism, as policy debates rage, our findings underscore both the urgency and the potential for platforms and governments to do more.”

In one of the most egregious findings, the study found that Meta’s Instagram recommended virulent and graphic antisemitic content to a 14-year-old persona, including a profile filled with anti-Jewish messages, such as the phrase “we own it all,” a reference to the antisemitic trope of Jewish power and control.

“It’s shocking that the most vile and hateful antisemitic content was repeatedly pushed into the 14-year-old teenager’s Instagram feed,” said Katie Paul, Director of the Tech Transparency Project. “Tech platforms can and must do more to create a safer and less hate-filled internet. This should begin by protecting the youngest social media users from being bombarded daily with suggestions to read antisemitic and hateful content.”

Based on the findings, ADL proposes the following reforms for the social media industry and government:
  • Tech companies should fix product features that escalate antisemitism and hate, such as tuning algorithms and recommendation engines to avoid promoting hate content.

  • Congress must update Section 230 of the Communications Decency Act to hold tech platforms accountable for their algorithms’ impact on hate, harassment, and extremism.

  • Companies should be more transparent about their recommendation engines and design practices, and Congress should pass federal transparency legislation.
