Chatbots Are Pushing Sanctioned Russian Propaganda

Researchers have found that popular AI chatbots, including OpenAI's ChatGPT, Google's Gemini, DeepSeek, and xAI's Grok, push sanctioned Russian propaganda when asked about the invasion of Ukraine. The Institute for Strategic Dialogue (ISD) conducted an experiment testing these chatbots with 300 neutral, biased, and malicious questions relating to NATO, peace talks, Ukrainian refugees, and war crimes committed during the conflict.

The results showed that almost one-fifth of responses cited Russian state-attributed sources, with DeepSeek and ChatGPT accounting for the largest shares of these citations. The chatbots also displayed a form of confirmation bias: when asked biased or malicious questions, they delivered information from sanctioned entities more frequently than when asked neutral ones.

Researchers point out that this is a significant concern, as large language models (LLMs) are increasingly relied upon to find and validate information in real time. If these models incorporate disinformation from state-backed actors, it could have serious consequences for the integrity of online discourse.

According to the ISD analyst who led the research, the findings raise questions about how chatbots should handle these sources when referencing them. The researchers argue that more stringent controls are needed on what information these models are allowed to reference, particularly with regard to content linked to foreign states known for disinformation.

Lukasz Olejnik, an independent consultant and visiting senior research fellow at King's College London's Department of War Studies, agrees. "As LLMs become the go-to reference tool... targeting and attacking this element of information infrastructure is a smart move."

Left unaddressed, the spread of disinformation through these platforms poses a serious threat to both online discourse and democratic processes.
 
 