
Teenagers on Instagram will be restricted to seeing PG-13 content by default. AP Photo
Instagram is introducing new safety measures aimed at protecting teenagers from age-inappropriate content. Meta announced on Tuesday that teens will now see only PG-13-rated posts by default. Changing these settings will require parental approval, the company said.
The update means teens using Instagram’s teen-specific accounts will avoid posts featuring sex, drugs, risky stunts, or other potentially harmful behaviors. Meta described the changes as its “most significant” update since launching teen accounts last year.
“This includes hiding or not recommending posts with strong language, risky stunts, or items like marijuana paraphernalia,” the company said in a blog post.
Teen Accounts and Parental Oversight
Anyone under 18 is automatically placed in a teen account unless a parent opts them out. These accounts are private by default, carry usage restrictions, and filter sensitive content, which includes posts promoting cosmetic procedures.
However, many children lie about their ages when signing up for social media. Meta has begun using artificial intelligence to identify underage users, though the company did not disclose how many adult accounts were flagged as minors.
Meta is also launching a stricter “limited content” setting. Parents can activate it to further restrict what their children see and block them from commenting, receiving comments, or interacting with certain posts.
Facing Criticism on Teen Safety
The update comes amid ongoing criticism about social media’s impact on young users. Meta has previously promised to filter posts involving self-harm, eating disorders, or suicide. Yet recent research indicates teen accounts were still recommended sexual content, self-harm posts, and material that could harm mental health.
Meta called such reports “misleading and dangerously speculative,” insisting the company is serious about teen safety.
Skepticism from Experts and Advocacy Groups
Some experts remain unconvinced. Josh Golin, executive director of nonprofit Fairplay, questioned how the new measures would be implemented.
“These announcements seem designed to delay legislation and reassure concerned parents,” he said. “Press releases won’t keep kids safe. True accountability and transparency will.”
Ailen Arreaza, executive director of ParentsTogether, echoed the concern.
“Meta has made promises before,” she said. “Millions go into PR campaigns while safety features often fall short. We need transparent, independent testing and real accountability.”
Maurine Molak, cofounder of ParentsSOS, also criticized the move as a “PR stunt,” suggesting the timing is linked to potential federal regulations.
Blocking Age-Inappropriate Accounts
The update prevents teens from following accounts that regularly post age-inappropriate content. Teens who already follow such accounts will no longer be able to view their posts, leave comments, or otherwise interact with them. Those accounts likewise cannot follow teens, send them messages, or comment on their posts.
Meta will expand blocked search terms beyond sensitive topics like suicide and eating disorders to include words such as “alcohol” or “gore,” even if misspelled.
AI-driven features for teens will also follow the PG-13 standard. Meta emphasized that artificial intelligence chats will avoid giving age-inappropriate responses.
Industry Response
The Motion Picture Association, which administers film ratings, said Meta's claim of using PG-13 guidelines is inaccurate. MPA CEO Charles Rivkin said the update has no connection to the official movie rating system.
Opportunities for Conversations
Despite criticism, some experts see potential benefits. Desmond Upton Patton, a University of Pennsylvania professor, highlighted the chance for parents to engage with teens about safe social media use.
“It opens a timely discussion on digital habits, AI tools, and fostering positive online experiences,” he said. “It also clarifies that AI chatbots are not human and should not replace real connections.”
Meta’s new measures mark a step toward safer teen experiences online, though advocates stress the need for ongoing testing, transparency, and active parental involvement.

