Elon Musk's Grok promotes the "white genocide" conspiracy theory on X

Elon Musk's Grok AI promotes fake claims about "white genocide" on X: here is what it writes
Elon Musk's Grok 3 chatbot on a smartphone screen. Photo: Pexels

Elon Musk's AI tool Grok, which is positioned as the "best source of truth", has suddenly started inserting references to the so-called "white genocide", a radical conspiracy theory about the alleged planned extermination of white people, into random answers. Gizmodo's editors tested the bot's strange behavior and found that questions asking for a "fact check" often ended in stories about the murder of South African farmers.

Gizmodo reports this.


What's wrong with Grok 3's answers?

The problem was first pointed out by The New York Times journalist Aric Toler, and Gizmodo confirmed his finding with its own experiment on Wednesday. When the publication's staff replied to an innocuous tweet featuring a photo of a puppy with the words "@grok is this true?", the bot suddenly launched into a rant about "white genocide" in South Africa. The response was later deleted, but journalists managed to save a screenshot.

The story of "white genocide" in Grok's response. Photo: screenshot/Gizmodo

The "white genocide" itself is a myth spread by neo-Nazis and white supremacists: they claim that white people are being massacred by non-white peoples, and that Jews are responsible for it. Despite the lack of evidence, the topic is actively discussed in radical circles, and now it is being repeated by Grok.

Why did this happen? There is no definitive answer, but many attribute the AI's "malfunctions" to Musk's recent tweets. The South African-born billionaire retweeted a post with a photo of crosses that, according to its author, mark murdered white farmers; in fact, the memorial honors farmers of all races killed in attacks. Under that tweet, users tagged Grok en masse, asking it to "check the facts", and the bot responded with similar claims of racially motivated killings.

Earlier this year, Grok had already refuted Musk's claims about a "silenced white genocide", acknowledging the lack of credible data. Now, however, the AI has begun producing answers that contradict one another and are often simply false. This demonstrates once again that even the most advanced language models can turn into a black box that produces disinformation without explanation.

The topic of "white genocide" was on the United States' news agenda this week after President Donald Trump's administration granted African-Americans "refugee" status and arranged for their flight to Washington. While officials were giving welcoming speeches, the debate on social media erupted over whether there were racial motivations behind the attacks on farmers in South Africa and how realistic the claims of persecution of the white minority were.

As a reminder, AI models like ChatGPT learn language not so much through formal rules as by "memorising" the examples they see. Researchers compared the word choices of humans with those of the open large language model GPT-J when forming nouns from invented adjectives with the suffixes -ness and -ity.
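For readers curious how such a comparison works in practice, here is a minimal sketch, assuming the Hugging Face transformers library and the publicly released GPT-J checkpoint (any smaller causal language model can substitute). The nonce adjective "cormasive" and the scoring prompt are illustrative inventions, not the study's actual materials; the idea is simply to let the model assign a log-probability to each candidate derived noun and see which suffix it prefers.

```python
# A minimal sketch (not the study's actual code) of comparing a causal LM's
# preference between "-ness" and "-ity" forms of an invented adjective.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6B"  # the model named in the article
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Sum of the token log-probabilities the model assigns to the sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Shift so that each position predicts the next token.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = ids[:, 1:]
    token_lp = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp.sum().item()

# Nonce adjective "cormasive": which derived noun does the model prefer?
for noun in ("cormasiveness", "cormasivity"):
    s = f"The cormasive quality of the design, its {noun}, surprised everyone."
    print(noun, round(sentence_logprob(s), 2))
```

In the study's framing, humans and the model would rate the same nonce words, and their suffix preferences would then be compared; a higher log-probability for one form stands in for the model's "choice".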

We also wrote that a growing number of users on forums report cases where their relatives become overly enthusiastic about the "revelations" ChatGPT allegedly sends them and start calling themselves prophets. It may seem paradoxical, but even mundane conversations with a chatbot can sometimes turn into a dangerous spiritual dependence, leading to broken relationships, social isolation, and loss of contact with reality.

Tags: scandal, Twitter, AI chatbot, Grok