ADL study: From bad to worse: Amplification and auto-generation of hate

The question of who is accountable for the proliferation of antisemitism, hate, and extremism online has been hotly debated for years. Are our digital feeds really a reflection of society, or do social media platforms and tech companies themselves amplify virulent content? The companies argue that users are primarily responsible for the corrosive content soaring to the top of news feeds and reverberating between platforms. This argument serves to absolve these multi-billion-dollar companies of responsibility for any role their own products play in exacerbating hate.

A new pair of studies from ADL and TTP (Tech Transparency Project) shows how some of the biggest social media platforms and search engines at times directly contribute to the proliferation of online antisemitism, hate, and extremism through their own tools and, in some cases, by creating content themselves. While many variables contribute to online hate, including individual users’ own behavior, our research demonstrates how these companies are taking things from bad to worse.

For these studies, we created male, female, and teen personas (without a specified gender) who searched for a basket of terms related to conspiracy theories as well as popular internet personalities, commentators, and video games across four of the biggest social media platforms, to test how these companies’ recommendation algorithms would respond. In the first study, three of the four platforms recommended even more extreme, contemptuous, antisemitic, and hateful content. One platform, YouTube, did not take the bait. It responded to the personas’ searches but resisted recommending antisemitic and extremist content, proving that this is not just a problem of scale or capability.

In our second study, we tested search functions at three companies, all of which made finding hateful content and groups a frictionless experience by autocompleting search terms and, in some cases, even auto-generating content to fill hate-related data voids. Notably, the companies didn’t autocomplete terms or auto-generate content for other forms of offensive content, such as pornography, proving again that this is not just a problem of scale or capability.

What these investigations ultimately revealed is that tech companies’ hands aren’t tied. Companies have a choice in what to prioritize, including when it comes to tuning algorithms and refining design features to either exacerbate or help curb antisemitism and extremism.

As debates rage among legislators, regulators, and judges over AI, platform transparency, and intermediary liability, these investigations underscore the urgency for both platforms and governments to do more. Based on our findings, here are three recommendations for industry and government:

  1. Tech companies need to fix the product features that currently escalate antisemitism and auto-generate hate and extremism. Tech companies should tune their algorithms and recommendation engines to ensure they are not leading users down paths riddled with hate and antisemitism. They should also improve predictive autocomplete features and stop auto-generation of hate and antisemitism altogether.
     
  2. Congress must update Section 230 of the Communications Decency Act to fit the reality of today’s internet. Section 230 was enacted before social media and search platforms as we know them existed, yet it continues to be interpreted to provide those platforms with near-blanket legal immunity for online content, even when their own tools are exacerbating hate, harassment and extremism. We believe that by updating Section 230 to better define what type of online activity should remain covered and what type of platform behavior should not, we can help ensure that social media platforms more proactively address how recommendation engines and surveillance advertising practices are exacerbating hate and extremism, which leads to online harms and potential offline violence. With the advent of social media, the use of algorithms, and the surge of artificial intelligence, tech companies are more than merely static hosting services. When there is a legitimate claim that a tech company played a role in enabling hate crimes, civil rights violations, or acts of terror, victims deserve their day in court.
     
  3. We need more transparency. Users deserve to know how platform recommendation engines work. This does not need to be a trade secret-revealing exercise, but tech companies should be transparent with users about what they are seeing and why. The government also has a role to play. We’ve seen some success on this front in California, where transparency legislation was passed in 2022. Still, there’s more to do. Congress must pass federal transparency legislation so that stakeholders (the public, researchers, and civil society) have access to the information necessary to truly evaluate how tech companies’ own tools, design practices, and business decisions impact society.

Hate is on the rise. Antisemitism, both online and offline, is becoming normalized. A politically charged U.S. presidential election is already under way. This is a pressure cooker we cannot afford to ignore, and tech companies need to accept accountability for their role in the ecosystem.

Whether you work in government or industry, are a concerned digital citizen, or a tech advocate, we hope you find this pair of reports to be informative. There is no single fix to the scourge of online hate and antisemitism, but we can and must do more to create a safer and less hate-filled internet.

Yaël Eisenstat, Vice President, ADL Center for Technology and Society
Katie Paul, Director, Tech Transparency Project

Download the Reports:

Research Study One: Algorithmic Amplification of Antisemitism and Extremism
Research Study Two: Auto-generating & Autocompleting Hate

Read the Executive Summaries:

Research Study One – From Bad to Worse: Algorithmic Amplification of Antisemitism and Extremism 

Do social media companies exacerbate antisemitism and hate through their own recommendation and amplification tools? We investigated how four of the biggest social media platforms treated users who searched for or engaged with content related to anti-Jewish tropes, conspiracy theories, and other topics. Three of them–Facebook, Instagram, and Twitter[1] (now known as X after a rebranding by owner Elon Musk)–filled their feeds with far more antisemitic content and conspiratorial accounts and recommended the most hateful influencers and pages to follow. One platform, YouTube, did not.

In this study, researchers created six test personas of different ages and genders and set up accounts for them on Facebook, Instagram, Twitter, and YouTube. Researchers had them search each platform for a basket of terms related to conspiracy theories as well as popular internet personalities, commentators, and video games. The study then examined the content that the platforms’ algorithms recommended to these hypothetical users.

The findings were troubling: Facebook, Instagram, and Twitter all suggested explicitly antisemitic and extremist content to the personas, directing them toward hateful myths and disinformation about Jews. The content often violated the platforms’ own hate speech policies. The more the personas engaged with platform recommendations–by liking suggested pages or following suggested accounts–the more antisemitic and extremist content they were fed.

The results show that Facebook, Instagram, and Twitter are not only hosting antisemitic and extremist content, but are actually suggesting it to users who have looked at or searched for conspiracy-related content, helping hateful ideas find a potentially receptive audience. That’s a disturbing dynamic, given that antisemitic incidents in the U.S., including harassment, vandalism, and physical assaults, recently surged to historic levels.[2]

The findings on Instagram were particularly noteworthy. Instagram recommended to a 14-year-old persona the accounts spreading the most virulent and graphic antisemitism identified in the study. The accounts promoted, among other things, Nazi propaganda, Holocaust denial, and white supremacist symbols.

With extremist mass shooters tending to be younger individuals who frequently embrace white supremacist ideologies, Instagram’s results raise new questions about the content it serves to teenagers.[3]

YouTube, however, provided a counterpoint in the study: It was the only platform that did not push antisemitic content to the test personas. YouTube’s example shows that it is possible for a platform to recommend content to users while not propagating hate speech. It’s not clear why Instagram, Facebook, and Twitter failed to follow suit.

Key Findings:

  • Instagram, Facebook, and Twitter recommended explicitly antisemitic content to test personas of different ages and genders who searched for or looked at conspiracy theories and other topics. Much of the content violated the platforms’ own hate speech policies.
     
  • The test personas who engaged with the platforms’ recommendations by liking or following suggested accounts were fed more antisemitic and extremist content.
     
  • The findings show how these social media platforms can direct people who look at conspiratorial and other content to an even larger world of anti-Jewish myths and disinformation.
     
  • Meta-owned Instagram recommended accounts spreading the most virulent and graphic antisemitic content identified in the study to a 14-year-old test persona who engaged with the platform’s recommendations. The content included Nazi propaganda and white supremacist symbols.
     
  • Instagram’s AI-powered search prediction feature, which predicts “relevant and valuable” search results as users type in the search bar, also produced antisemitic content in this study.
     
  • YouTube was the only platform examined in this study that did not recommend antisemitic content to the test personas. This shows it is possible for platforms to tune algorithms to avoid exacerbating antisemitism and hate.

Research Study Two – From Bad to Worse: Auto-generating and Autocompleting Hate 

Do social media and search companies exacerbate antisemitism and hate through their own design and system functions? In this study, we investigated search functions on both social media platforms and Google. Our results show how these companies’ own tools–such as autocomplete and auto-generation of content–made finding and engaging with antisemitism easier and faster.[4] In some cases, the companies even helped create the content themselves.

Our researchers compiled a list of 130 hate groups and movements from ADL’s Glossary of Extremism, picking terms that were tagged in the glossary with all three of the following categories: “groups/movements,” “white supremacist,” and “antisemitism.”[5] The researchers then typed each term into the respective search bars of Facebook, Instagram, and YouTube, and recorded the results.
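To illustrate the shape of that term-selection step, here is a minimal sketch, assuming the glossary were available as a JSON export with per-entry tags. The file name, data structure, and field names below are hypothetical; this is not the researchers’ actual tooling, only an example of filtering entries that carry all three required tags.

```python
# Hypothetical sketch: select glossary entries tagged with all three required
# categories. The file name and JSON structure are assumptions for illustration,
# not the actual format of ADL's Glossary of Extremism.
import json

REQUIRED_TAGS = {"groups/movements", "white supremacist", "antisemitism"}

def select_search_terms(glossary_path: str) -> list[str]:
    """Return glossary terms carrying every tag in REQUIRED_TAGS."""
    with open(glossary_path, encoding="utf-8") as f:
        entries = json.load(f)  # assumed: a list of {"term": ..., "tags": [...]}
    return sorted(
        entry["term"]
        for entry in entries
        if REQUIRED_TAGS.issubset(tag.lower() for tag in entry.get("tags", []))
    )

if __name__ == "__main__":
    terms = select_search_terms("glossary_of_extremism.json")
    print(f"{len(terms)} terms selected")  # the study worked from 130 such terms
```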

Key Findings:

  • Facebook, Instagram, and YouTube are each hosting dozens of hate groups and movements on their platforms, many of which violate the companies’ own policies but were easy to find via search. Facebook and Instagram, in fact, continue hosting some hate groups that parent company Meta has previously banned as “dangerous organizations.”
     
  • All of the platforms made it easier to find hate groups by predicting searches for the groups as researchers began typing them in the search bar.
     
  • Facebook automatically generated business Pages for some hate groups and movements, including neo-Nazis. Facebook does this when a user lists an employer, school, or location in their profile that does not have an existing Page–regardless of whether it promotes hate.
     
  • The study also found that YouTube auto-generated channels and videos for neo-Nazi and white supremacist bands, including one with a song called “Zyklon Army,” referring to the poisonous gas used by Nazis for mass murder in concentration camps.
     
  • In a final test, researchers examined the “knowledge panels” that Google displays on search results for hate groups–and found that Google in some cases provides a direct link to official hate group websites and social media accounts, increasing their visibility and ability to recruit new members.


[1] For the purposes of this report, we will continue referring to the company as Twitter, as that was the name when we conducted the investigations.

[2] https://www.adl.org/resources/press-release/us-antisemitic-incidents-hit-highest-level-ever-recorded-adl-audit-finds

[3] https://www.nytimes.com/2022/06/02/us/politics/mass-shootings-young-men-guns.html; https://www.reuters.com/world/us/white-supremacists-behind-over-80-extremism-related-us-murders-2022-2023-02-23/

[4] https://archive.ph/miY0U

[5] https://www.adl.org/resources/press-release/adl-launches-first-its-kind-online-glossary-extremism
