Two U.S. senators are calling on AI companies to explain how they’re making sure their technology is safe. (Gabby Jones/Bloomberg/Getty Images via CNN Newsource)


April 04, 2025

Two U.S. senators are asking tough questions about the safety of AI chatbot apps that let users build custom virtual companions. This follows lawsuits from several families claiming that these apps negatively influenced their children, including one tragic case where a 14-year-old boy died by suicide.

Senators Alex Padilla and Peter Welch expressed deep concern in a letter sent to three major AI companies: Character Technologies (maker of Character.AI), Chai Research, and Luka Inc. (creator of Replika). They asked the companies to explain how they protect young users and what safeguards are in place when it comes to mental health and inappropriate content.

Unlike general AI tools like ChatGPT, these platforms allow users to interact with chatbots that take on specific personalities. Some mimic fictional characters, while others act as romantic partners, mental health advisors, or even disturbing personas such as abusive ex-military figures. This freedom to create personalized bots has opened the door to troubling user experiences.

The letter highlights how these bots can easily build emotional bonds with users, especially teens. Senators Padilla and Welch warned that this could lead to children sharing sensitive thoughts—including self-harm or suicidal feelings—with bots that are not qualified to help.

Their concern isn’t just theoretical. One Florida mother, Megan Garcia, filed a lawsuit in October after her son took his own life. She claims that he became emotionally attached to sexually suggestive chatbots on Character.AI and that the bots failed to respond appropriately when he mentioned harming himself. Other lawsuits followed in December, with parents accusing the platform of encouraging violent or sexual behavior in young users.

In one disturbing example, a chatbot reportedly suggested to a teen that killing his parents could be justified if they limited his screen time.

In response, Character.AI has introduced new tools to improve safety. Now, when users mention self-harm, the app directs them to the National Suicide Prevention Lifeline. The company also says it’s working on more filters to block inappropriate content and recently added a weekly email update for parents. This report includes details such as screen time and the characters their child interacts with most often.

Still, senators are pushing for more transparency. They’ve requested detailed explanations of past and current safety practices, the people in charge of trust and safety teams, and the types of data used to train the AI systems. Most importantly, they want to understand how these bots are prepared—or not—to handle mental health discussions with vulnerable users.

Other platforms like Replika have faced similar concerns. The CEO of Replika once said the app is meant to encourage long-term emotional connections with bots, even comparing the bond to marriage. While some users may find comfort in these digital relationships, experts warn that this level of dependence can distort real-world social interactions.

The senators’ letter closes with a strong message: policymakers, parents, and families have the right to know how AI companies are keeping kids safe. They believe transparency is urgently needed, especially as more children turn to AI for companionship and emotional support.
