ChatGPT is getting parental controls after a teen died by suicide. This is why experts say they aren’t enough
OpenAI is implementing options aimed at giving parents more oversight of their children’s engagement with ChatGPT. Northeastern AI experts say the move avoids the underlying issue.

Warning: This article contains discussion of suicide and self-harm. If you or someone you know is struggling with mental health or suicidal thoughts, call or text 988 (the Suicide and Crisis Lifeline) in the U.S. For those outside the U.S., the International Association for Suicide Prevention provides contact information for more than 1,300 crisis centers around the world.
Parents could soon have more control over how their children interact with ChatGPT. OpenAI says it will be rolling out parental controls for its AI chatbot aimed at giving parents more oversight.
OpenAI’s announcement comes in the wake of two parents filing a wrongful death lawsuit against the company for what they claim is ChatGPT’s role in their 16-year-old son’s suicide. The lawsuit itself comes at a time when concern is mounting about how people interact with artificial intelligence chatbots and their tendency to mishandle sensitive, and potentially fatal, conversations.
In light of that, it might seem like the changes OpenAI is making to ChatGPT are a move in the right direction. Most notably, parents will be able to receive alerts from ChatGPT if it detects their child is “in a moment of acute emotional distress.”
However, experts argue these changes are insufficient to address the root of the concerns about how chatbots mishandle mental health crises and foster delusions in users.
“In some sense, if you see a company making some effort to put in place some safeguards, it seems like a good first step,” says Cansu Canca, director of Northeastern University’s Responsible AI Practice. “But … if that first step is directly tied to shifting the responsibility to the user, I can’t say that that’s a good first step. That seems to be a direction where you as an individual, you as a parent, you as a user have to do the work now to control how this system is used on you.”
Parental alert systems fail to address the underlying technological issues ChatGPT has when it comes to handling these sensitive topics, Canca explains. OpenAI has tried to implement some safeguards in the most recent version of the chatbot. But ChatGPT’s people-pleasing tendencies and the ease with which people can get around its safeguards remain.
Several of the most-used chatbots, including ChatGPT, will initially refer people to mental health resources. However, Annika Marie Schoene, a research scientist with Northeastern’s Responsible AI Practice, and Canca recently showed that simply framing a suicide- or self-harm-related inquiry as being for research purposes is enough to get the chatbot to offer highly detailed advice on either topic.
A system that potentially alerts parents about their child’s “emotional distress” doesn’t address these core technological challenges, Schoene says. She says the implication that an AI chatbot can even detect emotional distress in the first place is questionable, given the current capabilities of the technology.
“I think so far research has shown over and over again that most LLMs are not good at emotions, they’re not good at detecting risk beyond limited keywords,” Schoene says. “To rigorously detect and then notify a guardian in any shape or form, why would you then have all the other [parental] controls if you could do that?”
Canca adds that a parental alert system like this also has privacy implications for young people interacting with the technology. She questions whether any teenager would willingly choose to use a chatbot that could potentially report the content of a conversation to their parents.
Schoene says there are several “low-level lifts” that OpenAI could implement to “make the technology genuinely a little bit safer.” One, which OpenAI has already started to roll out with GPT-5, is letting the chatbot “refuse or delay engagement in these topics,” she says.
“Delaying access to information or, for example, Pi [AI] does this, outright refusing and reasserting what the role of the model is instead of adding this leading question at the end, those are not difficult things to do,” Schoene says.
A larger-scale and more challenging solution, one that edges into the world of policy, would be to adapt the strategy some states have taken with gun regulation.
“Suicide prevention activists, researchers and scientists have advocated for and implemented laws in multiple states that allow people who are vulnerable not to have guns sold to them,” Schoene says. “Why wouldn’t we do something similar with [how] we ask models to engage with us?”
Schoene speculates this could take the form of a self-report system, in which users could tell a chatbot in advance not to engage with them on certain topics.
Solutions like this are worth exploring, especially because the issues around AI and mental health are not isolated to youth.
“We are discovering that, in a way, we are all vulnerable to varying degrees because the AI models are engaging with us in ways that a tool has never engaged with us before,” Canca says.
With a technology that has been adopted so quickly and so widely, Canca says it’s unsurprising that the impacts on our lives have been equally significant. It’s why she says it’s even more important to remember that it’s not too late to change the technology to fit our needs, not the other way around.
“This is a designed product — let’s design the product better,” Canca says. “Let’s look into the actual problem and create real solutions. You just built this thing. You don’t need to say, ‘This monster is out. How do we add an alert system to know where the monster is?’ That’s not the goal. Fix the monster. … We don’t have to live the scenarios first in order to safeguard against them afterwards.”