After a growing number of reports of users developing parasocial attachments to AI assistants — and a lawsuit filed by parents of a teenager who consulted ChatGPT about suicide methods before tragically taking his own life — ChatGPT parent company OpenAI has announced new safety features for its hit product.

In a pair of posts today on its official website, and in a message from OpenAI co-founder and CEO Sam Altman on the social network X, the company said it will begin automatically segmenting ChatGPT users by age range, inferred from the contents of their conversations.

As OpenAI wrote in a post:

"Teens are growing up with AI, and it’s on us to make sure ChatGPT meets them where they are. The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult."

While teens ages 13 and up will still be able to use ChatGPT, OpenAI will place additional automatic restrictions on users under 18, including disabling the ability to coax ChatGPT into a "flirtatious" mode and refusing to engage in any conversation about suicide or self-harm methods, even if underage users prompt for them directly or attempt to "jailbreak" the model with prompts claiming it's for a "creative writing" exercise. OpenAI said adults will still be permitted to have these types of conversations.

The company may also inform authorities if it detects communications or signs of "imminent harm," though that threshold is not clearly defined.

In another blog post, the company states:

"We have to separate users who are under 18 from those who aren’t (ChatGPT is intended for people 13 and up). We’re building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience."

In some cases, OpenAI says it will even begin 'carding' users (my term, not theirs), that is, asking them to upload identification cards (IDs) to prove they are old enough to use ChatGPT as they wish.

"In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff."
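The policy OpenAI describes above boils down to a simple gating decision: serve the restricted under-18 experience whenever the age prediction is uncertain, and unlock the adult experience only on a confident prediction or an explicit ID check. The sketch below is purely illustrative; all names, types, and the confidence threshold are my own assumptions, not anything OpenAI has published.

```python
# Hypothetical sketch of the gating logic described in OpenAI's post:
# default to the under-18 experience when in doubt, with ID verification
# as an override. Names and the 0.9 threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AgePrediction:
    estimated_age: int   # model's best guess from usage signals
    confidence: float    # 0.0 to 1.0

def select_experience(pred: AgePrediction, id_verified_adult: bool = False) -> str:
    """Return which experience tier to serve: 'adult' or 'under_18'."""
    if id_verified_adult:
        return "adult"   # an uploaded ID overrides the prediction
    if pred.estimated_age >= 18 and pred.confidence >= 0.9:
        return "adult"
    # "If there is doubt, we'll play it safe and default to the
    # under-18 experience"
    return "under_18"

print(select_experience(AgePrediction(25, 0.95)))  # adult
print(select_experience(AgePrediction(25, 0.60)))  # under_18: low confidence
print(select_experience(AgePrediction(16, 0.99)))  # under_18
```

The key design point, per OpenAI's wording, is that uncertainty falls toward the restrictive tier rather than the permissive one.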

OpenAI also previously announced a new series of "parental controls" for parents of teenagers who use ChatGPT. The company outlined some of these controls today and said they would be available by the end of this month, September 2025. They will allow parents to:

  • Link their [parent's] account with their teen’s account (minimum age of 13) through a simple email invitation.

  • Help guide how ChatGPT responds to their teen, based on teen-specific model behavior rules.

  • Manage which features to disable, including memory and chat history.

  • Receive notifications when the system detects their teen is in a moment of acute distress. If we can’t reach a parent in a rare emergency, we may involve law enforcement as a next step. Expert input will guide this feature to support trust between parents and teens.

  • Set blackout hours when a teen cannot use ChatGPT—a new control we’re adding.

While the changes may not much affect the experience for adult users, they are a notable signal for the entire industry. With 700 million weekly active users on ChatGPT as of OpenAI's last reported figures, OpenAI remains far and away the largest dedicated gen AI company by audience, and other firms are likely to follow suit or add their own versions of these features.

In addition, enterprises that rely on OpenAI and other AI models should take the time to consider how they are safeguarding or segmenting out underage users in their own products. OpenAI's moves today are a sign of its maturation and effort to take responsibility as gen AI usage grows across the personal and enterprise domains. Enterprises, too, must adapt to these trends, especially since, given ChatGPT's wide usage, many consumers will come to expect similar safety measures from all the AI tools they engage with, personal and professional alike.