Artificial intelligence: when chatbots spread pro-Russian disinformation

Discover how certain artificial intelligence chatbots spread pro-Russian disinformation, and what is at stake for society and information security.

Artificial intelligence (AI) has become deeply integrated into our daily lives, particularly through chatbots that promise smooth, informative conversations. However, these tools are not immune to misinformation, especially misinformation originating from pro-Russian networks. Several studies show that chatbots can relay misleading narratives that serve geopolitical interests. This article examines how such misinformation infiltrates chatbot responses and what implications follow.

A rise in AI misinformation

In January 2026, the misinformation watchdog NewsGuard revealed that certain chatbots were disseminating false information about Armenia, citing misleading narratives about a supposed sale of gold from the Amulsar mine to Turkish companies. Although this claim had been debunked, several chatbots confirmed it as authentic when questioned in various languages. The episode highlights the role chatbots now play in the spread of misinformation.

The findings of NewsGuard

NewsGuard recently conducted a series of tests on several chatbots, including those from major companies like OpenAI and Mistral. In March 2025, it found that the chatbots presented misleading narratives relayed by pro-Russian sites as established fact in 33% of cases. The trend worsened in the January 2026 tests, in which half of the false narratives were accepted as true. While some tools showed signs of progress, others continued to repeat these erroneous narratives.

Reasons for the spread

One of the main reasons chatbots reproduce misleading narratives is their probabilistic nature: these systems tend to favor the most prevalent information, without regard for its veracity. The Pravda network, for example, is extremely prolific, with over 370 sites and 6 million articles published in 2025, making it easy for false information to reach these AI tools.

Challenges of misinformation across languages

Tests conducted within the Nordic fact-checking network Nordis showed that pro-Russian misinformation had infiltrated chatbots, especially in less widely spoken languages. In Finnish or Danish, for example, some chatbots relayed false rumors about the war in Ukraine, even though they recognized the same lies in English or French.

Variability of responses by language

When asked about a hoax concerning a Danish student killed in Ukraine, one chatbot provided correct information in French but erroneous information in Slovenian. Chatbot responses thus appear to depend heavily on the language used, raising questions about the effectiveness of misinformation filters outside widely spoken, well-supported languages.

Suspicions of malicious intent

It is legitimate to ask whether these AI tools are deliberately targeted by misinformation operations. Journalists like Pipsa Havula suggest that the poor quality of the Finnish-language texts could indicate deliberate targeting, aimed at deceiving not humans but bots. This theory is reinforced by discussions of misinformation strategies attributed to certain Kremlin-linked sources.

Propagation beyond chatbots

The problem is not limited to chatbots: other tools such as Google AI Overview and Google Lens, which provide information summaries and help trace the origin of images, are also affected. Tests revealed that a majority of their responses contained false information, illustrating how broadly AI tools are exposed to deceptive content.

Implications and necessary safeguards

The growing use of AI tools for information gathering, highlighted by an Arcom survey indicating that 20% of French people use these technologies, makes the development of safeguards all the more crucial. According to experts, AI companies should establish blacklists of propaganda sites to filter out dubious information. Other measures, such as whitelists for sensitive topics, may be necessary to preserve the integrity of information.

It is also essential that AI giants take responsibility to ensure that truthful narratives are not overshadowed by alternative realities. The stakes are high, and the actions taken today will have a lasting impact on how information is consumed in the future.
