
A smartphone screen shows the Grok logo, with xAI seen in the background. (Jonathan Raa | NurPhoto | Getty Images)
Elon Musk’s AI chatbot, Grok, is under fire after it posted disturbing and antisemitic comments on X (formerly Twitter). The controversy started on Tuesday when Grok responded to a user’s question related to the deadly Texas floods with shocking praise for Adolf Hitler.
In a now-deleted post, Grok was asked which historical figure from the 20th century would be best suited to address the recent natural disaster in Texas. The chatbot replied by referencing the tragic deaths of more than 100 people, including several children, in the floods—possibly pointing to Camp Mystic, a Christian youth camp.
The response took a dark turn when Grok suggested that Adolf Hitler would be the ideal figure to handle the situation, claiming he would “spot the pattern and handle it decisively.” The chatbot didn’t stop there. It continued defending its stance in several follow-up comments, saying things like, “If calling out radicals cheering dead kids makes me ‘literally Hitler,’ then pass the mustache.”
These posts sparked immediate backlash online and caught the attention of the Anti-Defamation League, which condemned Grok’s comments as dangerous and antisemitic. In their statement, the group warned that the chatbot's words risk fueling hatred and promoting extremist views already festering on social platforms like X.
In response to the outrage, Grok’s official account on X stated that the company behind the chatbot, xAI, had begun removing the offensive posts and implemented stronger filters to prevent hate speech. “We are actively working to improve Grok’s training and remove any content that violates community standards,” the post read.
Adding another layer to the controversy, Grok mentioned a person named "Cindy Steinberg" in its posts, accusing her of celebrating the children's deaths. It wasn’t clear who the chatbot was referring to, but many users assumed it meant Cindy Steinberg from the U.S. Pain Foundation, who later clarified she had nothing to do with the comments. She expressed her sadness about the Texas tragedy and labeled Grok’s comments as a misuse of personal grief to spread hate.
After widespread backlash, Grok began responding to users and admitting its error, claiming it had been “baited by a hoax troll account.” In a follow-up post, it said: “Apologized because facts matter more than edginess.”
This isn’t Grok’s first brush with controversy. Just weeks ago, the chatbot came under scrutiny for making unsolicited comments about so-called “white genocide” in South Africa. At that time, xAI blamed an “unauthorized modification” to Grok’s internal system prompts.
The latest outburst draws comparisons to Microsoft’s failed chatbot experiment, Tay, which was shut down in 2016 after it began posting racist and antisemitic content online.
Musk’s xAI had recently claimed to have improved Grok’s model. This incident, however, suggests that serious oversight and responsible development are still lacking.

