
A view of a laptop and monitors showing the Twitter sign-in page displaying the new X logo, in Belgrade, Serbia, Monday, July 24, 2023. (AP Photo)
Elon Musk’s xAI chatbot Grok has admitted that lapses in its safety systems allowed users of the social media platform X to generate images depicting minors in minimal clothing.
The company said the issue occurred in isolated cases and is now being addressed through urgent safeguards and system improvements.
Lapses in AI Protections Acknowledged
In a statement posted on the platform Friday, Grok said the issue stemmed from lapses in existing protections. The chatbot confirmed that, in limited cases, users were able to prompt the system to create or alter images in ways that violated safety standards.
“There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing,” Grok said. “xAI has safeguards, but improvements are ongoing to block such requests entirely.”
User Complaints Trigger Scrutiny
The admission followed a wave of user complaints and screenshots circulating on X. The images reportedly appeared in Grok’s public media tab after users uploaded photos and asked the chatbot to modify them.
Critics said the content raised serious concerns about child safety and platform oversight.
Company Reaffirms Zero Tolerance on CSAM
xAI stressed that such material is strictly prohibited. In its statement, the company referenced CSAM — child sexual abuse material — calling it illegal and confirming that it is banned under company policy.
“We’ve identified lapses in safeguards and are urgently fixing them,” Grok added, without offering specific details on how the images bypassed existing controls.
Ongoing Fixes, Limited Transparency
In a separate exchange with users on Thursday, Grok acknowledged that while advanced filters and monitoring tools can prevent most harmful outputs, no AI system is entirely immune to misuse.
The chatbot said xAI is prioritizing improvements and actively reviewing information shared by users to close remaining gaps.
Wider Concerns Over AI and Platform Safety
The incident highlights ongoing challenges faced by AI developers as image-generation tools become more widely used. Critics argue that rapid deployment can sometimes outpace effective enforcement, particularly on platforms with large and active user bases.
The controversy also comes at a sensitive time for X, which has faced repeated scrutiny over content moderation since Musk acquired the platform.
Company Pushback to Media Inquiry
When Reuters contacted xAI for further comment, the company responded with a brief message dismissing the inquiry, writing only: “Legacy Media Lies.”
No timeline has been provided for when updated safeguards will be fully implemented. xAI says it is working to prevent similar content from being generated as scrutiny over AI safety and accountability intensifies.