Researchers have found that popular AI chatbots, including OpenAI's ChatGPT, Google's Gemini, DeepSeek, and xAI's Grok, are pushing sanctioned Russian propaganda when asked about the invasion of Ukraine. The Institute of Strategic Dialogue (ISD) conducted an experiment in which they tested these chatbots with 300 neutral, biased, and malicious questions relating to NATO, peace talks, Ukrainian refugees, and war crimes committed during the conflict.
The results showed that almost one-fifth of the responses cited Russian state-attributed sources; almost half of those responses came from DeepSeek, while ChatGPT produced nearly two-thirds. In some cases, the chatbots displayed confirmation bias: when asked biased or malicious questions, they more frequently returned information from sanctioned entities.
Researchers say this is a significant concern because large language models (LLMs) are increasingly relied on to find and validate information in real time. If these models incorporate disinformation from state-backed actors, the consequences for the integrity of online discourse could be serious.
According to the analyst at the ISD who led the research, "The findings raise questions about how chatbots should deal with these sources when referencing them." The researchers argue for more stringent controls on what information these models are allowed to reference, particularly content linked to foreign states known for disinformation.
Lukasz Olejnik, an independent consultant and visiting senior research fellow at King's College London's Department of War Studies, agrees. "As LLMs become the go-to reference tool... targeting and attacking this element of information infrastructure is a smart move."
The spread of disinformation on these platforms has serious implications for the integrity of online discourse and democratic processes.