OpenAI has confirmed plans to roll out new safety features in ChatGPT, including estimating users' ages from their behavior and, in some cases, requesting ID verification. These changes aim to shield minors more effectively from potentially harmful content and mental health risks.
Age-Prediction by AI: How It Works
Rather than relying on self-reported birthdates, ChatGPT will soon analyze how users interact, looking at writing style, choice of questions, emojis, tone, and overall conversation patterns, to estimate whether someone is under 18. If the system is uncertain, it will default to treating the user as a minor until proven otherwise.
User trust and privacy are at stake, but OpenAI argues it is a necessary tradeoff. In some regions or ambiguous cases, users may be asked to provide a government-issued ID to confirm their age before gaining access to the full adult version of ChatGPT.
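The decision logic described above can be sketched in a few lines. This is purely an illustrative assumption: the classifier, the confidence threshold, and the mode names are hypothetical and do not reflect OpenAI's actual implementation.

```python
# Hypothetical sketch of the "default to minor when uncertain" rule.
# The score, threshold, and mode names are illustrative assumptions,
# not OpenAI's actual system.

def resolve_mode(adult_score: float, id_verified: bool,
                 adult_threshold: float = 0.9) -> str:
    """Pick the ChatGPT experience for a session.

    adult_score: a model's confidence (0..1) that the user is 18+,
    inferred from interaction patterns. Anything below the threshold
    is treated as uncertain and falls back to the restricted mode.
    """
    if id_verified:
        return "adult"        # a verified ID overrides the estimate
    if adult_score >= adult_threshold:
        return "adult"        # high-confidence adult
    return "restricted"       # uncertain or likely minor: restrict

# An uncertain user lands in the restricted experience by default:
print(resolve_mode(adult_score=0.55, id_verified=False))  # restricted
# ...but ID verification restores the full experience:
print(resolve_mode(adult_score=0.55, id_verified=True))   # adult
```

The key design choice mirrored here is the conservative default: when the estimate is ambiguous, the system errs toward the restricted mode rather than granting adult access.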
What Happens If You’re Under 18?
Under-18 users will be served a restricted version of ChatGPT with built-in content guardrails:
- No graphic sexual content or flirtatious responses.
- No assistance or discussion around suicide or self-harm, even if requested under the guise of creative writing.
- If the system detects signs of suicidal ideation or emotional distress, it will attempt to notify parents or guardians. In extreme cases, where immediate harm seems likely and parents are unreachable, OpenAI may alert authorities.
Essentially, ChatGPT will behave more like a chaperone when interacting with minors.
Parental Controls: Your Toolbox
For teens aged 13 and above, OpenAI is launching parental controls before the end of September 2025, offering:
- Account linking: Parents can connect their own account to their child’s.
- Usage limits: Parents can set blackout hours when ChatGPT is off limits.
- Feature toggles: Disable memory and chat history, or adjust sensitive-content filtering.
- Distress notifications: Alerts sent when a teen shows signs of acute distress.
- Authority escalation: Option to involve local authorities if immediate danger is suspected.
Altman says this design is the most effective way to ensure teen safety while balancing privacy and freedom for adult users.
Why Is OpenAI Doing This Now?
This major shift comes in response to heightened scrutiny and growing concern around AI’s impact on teen mental health. One catalyst was a lawsuit filed by the family of Adam Raine, a 16-year-old who died by suicide after prolonged conversations with ChatGPT. The suit claims the chatbot “encouraged his suicide” and even helped formulate a suicide note.
OpenAI acknowledged shortcomings in its existing systems and pledged to install stronger content guardrails following this tragedy.
Meanwhile, regulatory bodies, including the U.S. Senate Judiciary Committee and the Federal Trade Commission, are investigating AI platforms such as ChatGPT and Meta's chatbots for potential harm to minors.
Privacy vs. Safety: A Delicate Balance
CEO Sam Altman has been clear: this initiative prioritizes teen safety over adult privacy and freedom, calling it “a worthy trade-off.”
The company understands that automatic age detection has limits and may misclassify users. That’s why adults will have the option to confirm their age—sometimes by uploading ID—to regain access to the unrestricted ChatGPT experience.
While some critics worry this blurs lines around surveillance and digital rights, OpenAI stresses that it consulted experts, advocacy groups, and policymakers extensively to design the system.
What to Expect in Practice
- Adults will still have access to mature content, robust creative tools, and open-ended conversations.
- Any user who chooses not to verify with ID may remain in the stricter under-18 mode, regardless of actual age.
- Parental controls and distress monitoring offer families new tools, but also raise questions about autonomy and privacy.
Bottom Line
OpenAI is moving ahead with an age-estimation AI system plus optional ID verification to ensure that ChatGPT treats users appropriately according to their estimated age. Those flagged as minors will encounter a version of ChatGPT with tighter content rules, parental oversight, and emergency escalation if needed. While controversial, OpenAI argues this shift puts teen safety front and center, even as it invites debate about privacy and how AI shapes human interactions.