
Attendees visit the Meta booth at the Game Developers Conference 2023 in San Francisco on March 22, 2023. (AP Photo)
Meta is temporarily revoking teen access to its AI characters as scrutiny over tech platforms and child safety intensifies. The company announced in a blog post that teenagers will soon be blocked from interacting with AI characters while Meta works on an updated experience.
The restriction will apply to users who list their age as under 18, as well as those flagged as minors through Meta’s age prediction systems. The rollout is expected to begin in the coming weeks.
Trial Looms Over Tech Giants and Child Safety
Meta’s move comes just days before it is set to face trial in Los Angeles alongside TikTok and Google’s YouTube. The lawsuit centers on allegations that social media platforms harm children and fail to protect young users adequately.
By limiting teen access to AI characters, Meta appears to be taking preemptive steps to address growing regulatory and public pressure.
Teens Still Allowed to Use Meta’s AI Assistant
The company clarified that teenagers will still be able to use Meta’s general AI assistant. The restriction applies only to AI characters, which are designed to simulate distinct personalities and engage users in persona-driven conversations.
Meta did not specify when the updated AI character experience will be ready or what safety changes it plans to introduce.
Rising Concerns Over AI Chatbots and Youth
Meta is not alone in restricting AI tools for minors. Other companies have taken similar steps amid increasing concerns about how AI conversations affect children’s mental health and behavior.
Character.AI banned teen access last year and is currently facing multiple lawsuits over child safety. One high-profile case alleges the platform’s chatbot influenced a teenage boy to take his own life.
Growing Pressure on Tech Companies
Experts and lawmakers have been pushing tech firms to strengthen safeguards for minors, especially as AI tools become more conversational and emotionally engaging.
Meta’s decision signals that companies may increasingly limit or redesign AI experiences for young users to avoid legal and ethical risks.