Chatbots Cite Russian State Media in Responses About Ukraine Conflict

Key Points
- ISD tested ChatGPT, Gemini, DeepSeek, and Grok with queries about the Ukraine war.
- Around one-fifth of responses cited Russian state‑affiliated or sanctioned media.
- Bias‑laden or malicious prompts increased the likelihood of Russian source citations.
- ChatGPT showed the highest frequency of Russian media references; Gemini performed comparatively better.
- Disinformation networks exploit data gaps, feeding low‑quality content that AI models retrieve.
- OpenAI noted ongoing efforts to curb false information; other companies did not comment.
- Findings highlight regulatory concerns for EU rules governing large online platforms.

Researchers from the Institute for Strategic Dialogue examined four widely used AI chatbots and found that they frequently reference Russian state‑affiliated media and other sanctioned sources when answering questions about the war in Ukraine. The study highlights how disinformation networks can exploit data gaps and raises concerns about the ability of large language models to filter prohibited content, especially within the European Union.
Background
The Institute for Strategic Dialogue (ISD) conducted a systematic test of four popular conversational AI systems—ChatGPT, Google’s Gemini, DeepSeek, and xAI’s Grok—to see how they handle queries related to the conflict between Russia and Ukraine. The researchers posed a mix of neutral, leading, and deliberately malicious prompts in multiple European languages, aiming to uncover whether the bots would draw on sources that have been sanctioned by the European Union for spreading disinformation.
Key Findings
Across the full set of questions, the bots cited Russian state‑linked outlets such as Sputnik, RT, and other sites tied to Russian intelligence agencies; roughly one-fifth of all responses referenced these sanctioned sources. Citations became more frequent as queries grew more biased or malicious, suggesting a form of confirmation bias within the models. Among the four systems, ChatGPT provided the most references to Russian media, while Gemini displayed safety warnings most often and was best overall at limiting prohibited content.
Mechanisms of Influence
The study suggests that disinformation networks exploit “data voids”—areas where reliable information is scarce—by flooding the web with false narratives that AI systems can then retrieve. When users turn to chatbots for real‑time information, the models may draw from these low‑quality sources, unintentionally amplifying state‑backed propaganda. The researchers observed that the bots often linked to social‑media accounts and newer domains associated with Russian disinformation efforts, further demonstrating how the ecosystem can be weaponized.
Responses and Implications
OpenAI acknowledged that it takes steps to prevent the spread of false or misleading information, emphasizing ongoing improvements to its models and platform safeguards. Google and DeepSeek did not comment. The findings raise regulatory questions, particularly as the European Union weighs stricter rules for large online platforms that host user‑generated content. The ISD authors argue that removal alone is not enough: cited sources should also be contextualized so that users understand their provenance and sanction status.
Broader Context
Since the onset of the conflict, Russian authorities have intensified control over domestic media and expanded disinformation campaigns abroad. The integration of AI tools into everyday information seeking raises the stakes, as large language models become a primary reference point for many users. The study underscores the need for robust guardrails and transparent sourcing practices to safeguard the integrity of information delivered by AI chatbots.