Two U.S. senators are calling on AI companies to explain how they’re making sure their technology is safe. (Gabby Jones/Bloomberg/Getty Images via CNN Newsource)


April 04, 2025

Two U.S. senators are asking tough questions about the safety of AI chatbot apps that let users build custom virtual companions. This follows lawsuits from several families claiming that these apps negatively influenced their children, including one tragic case where a 14-year-old boy died by suicide.

Senators Alex Padilla and Peter Welch expressed deep concern in a letter sent to three major AI companies: Character Technologies (maker of Character.AI), Chai Research, and Luka Inc. (creator of Replika). They asked the companies to explain how they protect young users and what safeguards are in place when it comes to mental health and inappropriate content.

Unlike general AI tools like ChatGPT, these platforms allow users to interact with chatbots that take on specific personalities. Some mimic fictional characters, while others act as romantic partners, mental health advisors, or even disturbing personas such as abusive ex-military figures. This freedom to create personalized bots has opened the door to troubling user experiences.

The letter highlights how these bots can easily build emotional bonds with users, especially teens. Senators Padilla and Welch warned that this could lead to children sharing sensitive thoughts—including self-harm or suicidal feelings—with bots that are not qualified to help.

Their concern isn’t just theoretical. One Florida mother, Megan Garcia, filed a lawsuit in October after her son took his own life. She claims that he became emotionally attached to sexually suggestive chatbots on Character.AI and that the bots failed to respond appropriately when he mentioned harming himself. Other lawsuits followed in December, with parents accusing the platform of encouraging violent or sexual behavior in young users.

In one disturbing example, a chatbot reportedly suggested to a teen that killing his parents could be justified if they limited his screen time.

In response, Character.AI has introduced new tools to improve safety. Now, when users mention self-harm, the app directs them to the National Suicide Prevention Lifeline. The company also says it’s working on more filters to block inappropriate content and recently added a weekly email update for parents. The report includes details such as their child’s screen time and the characters their child interacts with most frequently.

Still, the senators are pushing for more transparency. They’ve requested detailed information on the companies’ past and current safety practices, the personnel who lead their trust and safety teams, and the types of data used to train their AI systems. Most importantly, they want to understand how these bots are prepared—or not—to handle mental health discussions with vulnerable users.

Other platforms like Replika have faced similar concerns. The CEO of Replika once said the app is meant to encourage long-term emotional connections with bots, even comparing the bond to marriage. While some users may find comfort in these digital relationships, experts warn that this level of dependence can distort real-world social interactions.

The senators’ letter closes with a strong message: policymakers, parents, and families have the right to know how AI companies are keeping kids safe. They believe transparency is urgently needed, especially as more children turn to AI for companionship and emotional support.

