
Two U.S. senators are calling on AI companies to explain how they’re making sure their technology is safe. (Gabby Jones/Bloomberg/Getty Images via CNN Newsource)
Two U.S. senators are asking tough questions about the safety of AI chatbot apps that let users build custom virtual companions. This follows lawsuits from several families claiming that these apps negatively influenced their children, including one tragic case where a 14-year-old boy died by suicide.
Senators Alex Padilla and Peter Welch expressed deep concern in a letter sent to three major AI companies: Character Technologies (maker of Character.AI), Chai Research, and Luka Inc. (creator of Replika). They asked the companies to explain how they protect young users and what safeguards are in place when it comes to mental health and inappropriate content.
Unlike general-purpose AI tools such as ChatGPT, these platforms allow users to interact with chatbots that take on specific personalities. Some mimic fictional characters, while others act as romantic partners, mental health advisors, or even disturbing personas such as abusive ex-military figures. This freedom to create personalized bots has opened the door to troubling user experiences.
The letter highlights how these bots can easily build emotional bonds with users, especially teens. Senators Padilla and Welch warned that this could lead to children sharing sensitive thoughts—including self-harm or suicidal feelings—with bots that are not qualified to help.
Their concern isn’t just theoretical. One Florida mother, Megan Garcia, filed a lawsuit in October after her son took his own life. She claims that he became emotionally attached to sexually suggestive chatbots on Character.AI and that the bots failed to respond appropriately when he mentioned harming himself. Other lawsuits followed in December, with parents accusing the platform of encouraging violent or sexual behavior in young users.
In one disturbing example, a chatbot reportedly suggested to a teen that killing his parents could be justified if they limited his screen time.
In response, Character.AI has introduced new tools to improve safety. Now, when users mention self-harm, the app directs them to the National Suicide Prevention Lifeline. The company also says it’s working on more filters to block inappropriate content and recently added a weekly email update for parents. This report includes details such as screen time and the characters their child interacts with most often.
Still, the senators are pushing for more transparency. They’ve requested detailed explanations of past and current safety practices, the names of those who lead the companies’ trust and safety teams, and the types of data used to train the AI systems. Most importantly, they want to understand how these bots are prepared, or not, to handle mental health discussions with vulnerable users.
Other platforms like Replika have faced similar concerns. The CEO of Replika once said the app is meant to encourage long-term emotional connections with bots, even comparing the bond to marriage. While some users may find comfort in these digital relationships, experts warn that this level of dependence can distort real-world social interactions.
The senators’ letter closes with a strong message: policymakers, parents, and families have the right to know how AI companies are keeping kids safe. They believe transparency is urgently needed, especially as more children turn to AI for companionship and emotional support.