ChatGPT to Introduce Teen-Specific Version with Enhanced Safety Features

OpenAI has recently announced that ChatGPT will soon begin offering a version of its platform tailored specifically for teenage users, as part of a broader effort to address growing concerns about the impact of AI tools on youth safety and mental health.

This announcement follows high-profile incidents, including a lawsuit filed by the family of a 16-year-old who died by suicide earlier this year. The family alleges that ChatGPT played a part in the tragedy after the teenager discussed suicide with the tool. The case drew renewed attention to the potential risks of unmoderated AI interactions.

According to OpenAI, users identified as under 18 will be automatically redirected to a version of ChatGPT designed with stricter content moderation policies and safeguards. This includes filtering out sexual content and providing mechanisms to alert law enforcement in rare cases where a user appears to be in serious emotional distress.

“The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult,” OpenAI stated, emphasizing the need for age-appropriate interactions.

Additional safety features are expected in the coming weeks. OpenAI says parental controls will be available by the end of September, enabling parents to connect their accounts to their children’s, view past interactions, and set usage limits or blackout hours.

Regarding age verification, OpenAI has said that when a user's age cannot be confirmed, the platform will err on the side of caution and default to the teen version.

The move comes amid increasing regulatory scrutiny in the United States and elsewhere. The U.S. Federal Trade Commission (FTC) is currently investigating the broader risks posed by AI chatbots to children and teenagers. In parallel, public concerns about the role of digital platforms in adolescent mental health have continued to grow.

Other tech companies are also adjusting their policies. YouTube, for example, has introduced new age-estimation systems that rely on usage patterns and account history to tailor content access.

After three years in which AI has reshaped much of the digital world, this move by OpenAI appears to be one of the first significant steps toward regulating the technology's use and making it safer for users.