Two U.S. senators are calling on AI companies to explain how they’re making sure their technology is safe. (Gabby Jones/Bloomberg/Getty Images via CNN Newsource)


April 04, 2025

Two U.S. senators are asking tough questions about the safety of AI chatbot apps that let users build custom virtual companions. This follows lawsuits from several families claiming that these apps negatively influenced their children, including one tragic case where a 14-year-old boy died by suicide.

Senators Alex Padilla and Peter Welch expressed deep concern in a letter sent to three major AI companies: Character Technologies (maker of Character.AI), Chai Research, and Luka Inc. (creator of Replika). They asked the companies to explain how they protect young users and what safeguards they have in place around mental health and inappropriate content.

Unlike general-purpose AI tools such as ChatGPT, these platforms let users interact with chatbots that take on specific personalities. Some mimic fictional characters, while others act as romantic partners, mental health advisors, or even disturbing personas such as abusive ex-military figures. This freedom to create personalized bots has opened the door to troubling user experiences.

The letter highlights how these bots can easily build emotional bonds with users, especially teens. Senators Padilla and Welch warned that this could lead to children sharing sensitive thoughts—including self-harm or suicidal feelings—with bots that are not qualified to help.

Their concern isn’t just theoretical. One Florida mother, Megan Garcia, filed a lawsuit in October after her son took his own life. She claims that he became emotionally attached to sexually suggestive chatbots on Character.AI and that the bots failed to respond appropriately when he mentioned harming himself. Other lawsuits followed in December, with parents accusing the platform of encouraging violent or sexual behaviour in young users.

In one disturbing example, a chatbot reportedly suggested to a teen that killing his parents could be justified if they limited his screen time.

In response, Character.AI has introduced new tools to improve safety. Now, when users mention self-harm, the app directs them to the National Suicide Prevention Lifeline. The company also says it is working on more filters to block inappropriate content, and it recently added a weekly email report for parents that includes details such as their child's screen time and the characters their child uses most frequently.

Still, the senators are pushing for more transparency. They have requested detailed explanations of past and current safety practices, information about who leads the companies' trust and safety teams, and the types of data used to train the AI systems. Most importantly, they want to understand how prepared these bots are, if at all, to handle mental health discussions with vulnerable users.

Other platforms like Replika have faced similar concerns. The CEO of Replika once said the app is meant to encourage long-term emotional connections with bots, even comparing the bond to marriage. While some users may find comfort in these digital relationships, experts warn that this level of dependence can distort real-world social interactions.

The senators’ letter closes with a strong message: policymakers, parents, and families have the right to know how AI companies are keeping kids safe. They believe transparency is urgently needed, especially as more children turn to AI for companionship and emotional support.

