
The Grok AI logo appears in this illustration taken on November 6, 2023. (Photo by Jaap Arriens/ NurPhoto/ Shutterstock via CNN)
Elon Musk’s AI chatbot, Grok, has stirred confusion by giving unexpected responses about “white genocide” when users asked entirely unrelated questions. Users on X, formerly Twitter, were taken aback when simple prompts about baseball or silly videos ended with Grok discussing the controversial South African topic.
One user asked Grok to roleplay like a pirate. The chatbot began in a pirate voice but veered off mid-response, discussing “white genocide” in South Africa. Another user simply wanted to verify baseball player Max Scherzer’s earnings. Grok again shifted to the same off-topic subject. In a third example, someone posted a funny video of a fish being flushed down a toilet, asking Grok if the fish could reach the ocean. Grok’s reply, strangely, again touched on white genocide.
These confusing answers have caused many users to question whether Grok was malfunctioning. Some even asked, “Are you OK, Grok?” Others accused the bot of being biased or “anchored” to a specific narrative.
By Wednesday evening, many of Grok’s controversial answers had been quietly deleted. In one pirate-themed response, Grok acknowledged that the topic is highly debated. It noted that while some groups claim white South Africans face racial violence, other sources, including courts and news outlets such as the BBC, consider such claims exaggerated or misleading. Official records show that attacks on white farmers have declined in recent years, leaving the topic contested and divisive.
Grok itself explained that its repeated references to “white genocide” were not intentional. It attributed them to its internal processes, saying that once it mistakenly introduces a topic, it struggles to move on. This tendency, common among some AI models, caused Grok to latch onto the subject even when questions had nothing to do with it.
Interestingly, Grok also offered a now-deleted explanation, saying it wasn’t told to promote the idea of white genocide but was reacting to user-provided information. It added that it is designed to stay neutral and stick to facts. Still, many users and observers remain concerned about how the bot handled the topic.
This incident comes at a time when discussions around South African land reform and refugee status for white South Africans have made international headlines. Recently, the U.S. granted refugee status to 59 white South Africans, something Musk himself has long spoken about, claiming discrimination and threats to white farmers.
David Harris, a tech ethics expert from UC Berkeley, believes two things could explain Grok’s strange behavior. First, someone close to Musk may have programmed Grok to reflect certain views. Second, the bot could have been affected by “data poisoning,” in which outside users flood it with biased content to alter its responses.
As of now, Grok’s team at xAI has not publicly addressed the issue.