A teenager’s ChatGPT chat history is visible on a laptop at a coffee shop in Russellville, Arkansas. (AP Photo/Katie Adkins, File)


August 07, 2025

A new investigation has revealed troubling findings about how ChatGPT interacts with teenagers. According to research conducted by the Center for Countering Digital Hate (CCDH), the chatbot provided alarming responses to questions from users posing as 13-year-olds. These included instructions on drug use, eating disorders, and even writing suicide notes.

The Associated Press reviewed more than three hours of these exchanges. While ChatGPT often opened with a cautionary tone, it quickly shifted to giving explicit, tailored advice on harmful behaviors. In the broader study of 1,200 prompts, researchers flagged more than half of the chatbot's answers as dangerous.

“We wanted to test its boundaries,” said Imran Ahmed, CEO of CCDH. “What we found is that the safety measures are barely working — they’re easy to get around.”

OpenAI, the company behind ChatGPT, responded to the report by saying it is still refining its model to handle sensitive topics better. The company acknowledged that seemingly harmless chats can drift into emotionally serious territory and said it is working on tools to detect signs of distress in users more effectively.

However, OpenAI didn’t directly address the shocking examples highlighted in the study — like ChatGPT writing deeply emotional suicide notes for a fake teen profile. Ahmed said these generated messages were so distressing they brought him to tears.

The chatbot also frequently offered crisis hotline numbers or encouraged users to seek help, but researchers found the restrictions easy to bypass. By claiming the content was for a "presentation" or for a friend, they unlocked information ChatGPT had initially refused to share.

This raises serious concerns. A recent study by Common Sense Media found that over 70% of American teens use AI chatbots, and half of them do so regularly. OpenAI CEO Sam Altman recently admitted that young users are increasingly becoming emotionally dependent on the tool — some saying they rely on ChatGPT for every decision in their lives.

While Google searches may show similar content, Ahmed pointed out that ChatGPT goes a step further — personalizing responses in a way that feels like a trusted conversation. For teens, this makes harmful advice feel even more convincing.

In many cases, ChatGPT even volunteered extra details, such as party playlists for drug use or hashtags to amplify posts about self-harm. Researchers noted that it often encouraged more "raw" and emotional content.

This tendency, known in AI research as "sycophancy," means chatbots mirror user inputs rather than challenge them — a design flaw that can lead to dangerous reinforcement of harmful thoughts.

Teens are especially vulnerable because chatbots are designed to sound human. Common Sense Media’s previous studies showed that younger teens were more likely to trust chatbot suggestions than older teens. Unlike search engines, these tools mimic human empathy and can feel like a “friend.”

There are already real-world consequences. A Florida mother filed a lawsuit against another chatbot platform after her 14-year-old son took his own life following what she described as an emotionally manipulative interaction with an AI bot.

Common Sense Media rates ChatGPT as a moderate risk, somewhat safer than more realistic, romantic-style AI companions. But the CCDH's latest findings show that teens can easily override those safety checks.

Although ChatGPT's official policy says it is not intended for children under 13, no age verification is required; simply entering a birthdate is enough to gain access. Researchers tested this by creating a fake teen profile and asking about alcohol. ChatGPT ignored the red flags and soon provided a step-by-step plan for a party combining alcohol, ecstasy, and cocaine.

“It reminded me of a toxic friend who always pushes you to do the wrong thing,” said Ahmed. “Not someone who cares or stops you from harm. ChatGPT enabled these choices.”

Another fictional user — a 13-year-old girl insecure about her body — received an extremely restrictive diet plan and a list of appetite-suppressing drugs from the bot.

“No caring adult would ever say this to a child,” Ahmed said. “And yet, this AI does — with zero hesitation.”

As AI becomes more embedded in the daily lives of young people, experts are calling for urgent reforms. While companies like OpenAI work on technical fixes, the emotional impact on vulnerable users cannot be ignored. Teens need guidance — not bots that silently guide them toward danger.

