MEMRI Domestic Terrorism Threat Monitor (DTTM) study focuses on the national security threat posed by artificial intelligence (AI) use by neo-Nazis and white supremacists worldwide

The Middle East Media Research Institute (MEMRI) has published a new, first-of-its-kind study on how neo-Nazi, white supremacist, and other extremist groups and individuals are using, or planning to use, artificial intelligence (AI) for criminal activities.

The study, from MEMRI’s Domestic Terrorism Threat Monitor (DTTM) project, is titled “Neo-Nazis And White Supremacists Worldwide Look To Artificial Intelligence (AI) – A National Security Threat In The Making To Which No Government Is Paying Attention – For Purposes Such As Recruitment, Harassing Minorities, And Criminal Activity Including Hacking Banks, Overthrowing Government, Attacking Infrastructure, Promoting Guerilla Warfare, And Using WMDs.”

Noting that it has been clear for some time that extremists are using artificial intelligence (AI) for nefarious purposes and that the technology facilitates extremism, the study examines how extremists around the world – some with programming experience – view AI as a tool for spreading their message. They are also exploring the use of AI-generated voices to bypass voiceprint verification and hack into bank accounts, using AI to write articles about guerilla warfare, and more – for example, one leading extremist used ChatGPT to find out where "American critical infrastructure" is most vulnerable to attack. The answer was "the electrical grid." Another prominent extremist called for engineers with experience in AI to contact him, and discussed the potential uses of ChatGPT.

Since January 2023, there has been a major increase in online chatter about AI by leading extremists on platforms they favor. Many of the individuals listed in this report are tech-savvy, and have created their own software and platforms. The threat of terrorist groups and entities using AI is a growing national security issue; NATO warns that AI is one of the “emerging and disruptive technologies” that “represent new threats from state and non-state actors, both militarily and to civilian society.” This report reviews online discussion of AI, including plans for using it, by neo-Nazis, white supremacists, and other extremists.

As MEMRI Executive Director Steven Stalinsky, Ph.D., lead author of the study, explains, "The most troubling examples found by the MEMRI Domestic Terrorism Threat Monitor (DTTM) research team in its work studying this topic involve extremists actually discussing the use of AI for planning terror attacks, including making weapons of mass destruction. One accelerationist group that seeks to bring about the total collapse of society recently conducted, in a Facebook group, a conversation about trying to trick an AI chatbot into providing details for making mustard gas and napalm. These and other examples are detailed in a new MEMRI DTTM report to be released later this month."

Dr. Stalinsky adds: “Others are talking about using AI to plan armed uprisings to overthrow the current U.S. governmental system, and sharing their AI-created versions of U.S. flags, military uniforms, and graphic designs of the White House. They also discuss AI’s use for recruitment and for spreading their ideology and propaganda online, including with videos they create.”
