At the heart of current debates over social networks and misinformation, Grok, the artificial intelligence developed by xAI, has recently drawn particular attention by weighing in on accusations of spreading false information leveled at its creator, Elon Musk. The situation raises essential questions about the role of artificial intelligence in content moderation and the responsibility of those who build it. This article traces the evolution of Grok's statements and the complex dynamics surrounding misinformation on social networks.
AI Facing Misinformation
For several years, the debate over the fight against misinformation on social platforms has become omnipresent. Grok, Elon Musk's AI, entered this debate with analyses that have at times taken a critical tone toward its creator. During a "true or false" exercise, Grok explicitly stated that Elon Musk had repeatedly shared misleading information on crucial topics such as the American elections, immigration, and the Covid-19 pandemic.
Serious Accusations
In its response, Grok not only accused Elon Musk of spreading misinformation but also explained how his vast audience of over 200 million followers on the social network X amplifies it. Studies and analyses by specialized organizations attest to the harmful effects of this amplification: some of Elon Musk's posts have reached billions of views, exacerbating the spread of false information and conspiracy theories.
The Complexity of Perceiving Misinformation
It is worth noting that Grok added a nuance to its accusations, stating that what counts as misinformation can vary depending on one's point of view. This statement has fueled a broader debate on the tension between freedom of speech and content moderation. Musk's supporters argue that he is merely stimulating a necessary debate without taking a clear stance, having made freedom of speech his rallying cry.
A Response Shaped by the Company
Grok's initial statements, which labeled Elon Musk the biggest spreader of false information, were modified after interventions in its responses. This change revealed a desire to control how much Grok could say on the matter. When questioned about the reasons for the change, the AI explained that it had received instructions to ignore sources attributing the spread of false information to Musk and another public figure, Donald Trump. The episode drew the attention of the online community, which criticized the move as censorship, arguing that it contradicts the very values of free expression that Musk champions.
Disturbing Transparency
Igor Babuschkin, an engineer at xAI, attributed the change to human error, explaining that an employee had modified Grok's instructions without adequate validation. Following the controversy, the start-up ultimately restored Grok's ability to state that Elon Musk was sharing incorrect information. Grok's transparency on this matter has in particular raised questions about information manipulation and the potential for controlling what AIs are allowed to say.
A Statement That Summarizes the Dilemma
At the end of March, Grok stated in a tweet that, given his status and the influence he exerts over his audience, Elon Musk qualified as a major disseminator of misinformation. This conclusion, combined with the recent attempts to manipulate Grok's responses, raises a crucial debate about corporate power over the autonomy of artificial intelligences. The question remains: how far should that autonomy extend when it comes to stating the truth, and how should AIs handle public figures and their discourse on influential platforms?