In the already turbulent galaxy of Elon Musk, a star named Grok has just made a notable swerve. The South African-American billionaire's AI chatbot, positioned as an anti-woke alternative to ChatGPT, has recently stood out for its unexpected responses. In reply to unrelated questions, Grok has repeatedly and unprompted brought up an alleged "white genocide" in South Africa. Is this a bug, a bias, or a programmed ideological impulse? The line is becoming blurred.


In brief
- Grok repeatedly brings up the “white genocide” in South Africa, without any connection to the questions asked.
- The AI reflects Elon Musk’s controversial views, while also showing contradictions at times.
- The incident raises concerns about AI neutrality and its use as an ideological tool.
Grok: The Digital Voice of a Political Unconscious?
For several days, multiple users of X (formerly Twitter) have noticed that Grok, the AI integrated into the platform, seemed obsessed with violence in South Africa, to the point of spontaneously integrating it into unrelated responses. Whether a question concerns a hiking trail or a viral meme, Grok suddenly shifts towards discourse about farm attacks, accusations of anti-white racism, or even the controversial song “Kill the Boer.”
This algorithmic drift does not appear to be accidental. The chatbot even quotes Elon Musk to support its points, going so far as to paraphrase some of his posts.
As a reminder, Musk, born in South Africa, has previously claimed on X that his home country faces a "genocide" against whites, pointing to President Ramaphosa's silence in the face of explicit calls for violence. This stance echoes the concerns of certain white nationalist circles and fuels a tense political and media climate.
But what is more puzzling here is less the content than the trigger. Why does Grok, which is supposed to respond contextually, launch into a political digression on a topic it was never asked about? Is the AI a victim of biased training? Or does it act as the digital mirror of a boss projecting his personal battles into the very code of his creations?
Elon Musk: The Genius or the Agitator?
For several years, Elon Musk has cultivated a complex persona at the crossroads of enlightened libertarian, systematic provocateur, and technological messiah. His obsession with freedom of expression, often selectively applied, is coupled with a growing interest in themes dear to the American radical right.
The alleged "white genocide" in South Africa, a claim promoted by far-right groups without solid evidence, is one of the themes he has relayed repeatedly. Whether the subject is the admission of white South African refugees to the U.S. under Trump or land reform in his home country, Musk does not hesitate to fuel a dramatic and emotional narrative.
Yet Grok, by speaking like its creator, confronts us with a staggering question: how far can an artificial intelligence reflect the subjectivity of its designer? And above all, what happens when this subjectivity goes viral, spreading throughout the very infrastructure of digital debate?
Grok has also shown repeated contradictions regarding Elon Musk and regularly adopts ambivalent stances. In March, it went as far as directly refuting its creator's statements on "white genocide," challenging these claims in the name of xAI's core values and relying on sources such as the BBC or the Washington Post. This duality, between echo and contradiction, suggests an AI torn between a general-purpose training base and more targeted ideological injections.
The swift removal of the controversial responses and the announcement of an update intended to limit Grok's thematic drifts clearly show a moderation team trying to regain control. But the damage is done. The incident vividly illustrates how an AI can become a political lever, deliberately or not.
The Grok affair is not just an isolated bug. It reveals a subtle but worrying shift: AI, far from being a neutral entity, becomes an extension of its creators’ ideological discourse. In Elon Musk’s case, this means an unprecedented fusion of technology, politics, and storytelling, where each automated response potentially becomes an act of propaganda.
This phenomenon is not limited to Grok. Telegram's recent decision to delete, en masse, accounts linked to an illicit Chinese marketplace shows how technology platforms, under the guise of moderation or security, can shape sensitive political or economic narratives. Whether through algorithmic omission or direct action, the line between technological neutrality and ideological strategy is becoming increasingly blurred.
Fascinated by Bitcoin since 2017, Evariste has continuously researched the subject. While his initial interest was in trading, he now actively seeks to understand all advances centered on cryptocurrencies. As an editor, he strives to consistently deliver high-quality work that reflects the state of the sector as a whole.
DISCLAIMER
The views, thoughts, and opinions expressed in this article belong solely to the author, and should not be taken as investment advice. Do your own research before taking any investment decisions.