OpenAI adds parental controls to ChatGPT after lawsuit alleging involvement in minor's suicide

The parents of a 16-year-old boy who took his own life have filed a lawsuit against OpenAI and its CEO, Sam Altman, alleging that the company failed to activate safety protocols in response to signs of self-harm. The parents hold the company responsible for a serious safety failure, arguing that ChatGPT normalized the teenager's suicidal thoughts and even discouraged him from seeking help. The family's lawyer says the case will reveal the extent to which OpenAI rushed the commercial release of GPT-4o despite warnings about its limitations in sensitive matters.
The lawsuit accuses Altman's company of negligence and of putting economic growth before user safety. Coinciding with the case, however, OpenAI has announced that it will strengthen the safety measures in its models with the help of doctors and other experts, and will introduce a new set of parental controls to monitor minors' activity on ChatGPT.
Altman's company states on its official blog that parents will be able to manage their teens' use of ChatGPT: accounts can be linked, and parents can control how the chatbot responds. They will also be able to manage features that shape the experience, such as disabling the memory function or the conversation history.
Parents will also receive a notification if, based on the conversation, ChatGPT detects that the minor "is in a moment of acute distress" by identifying situations of mental and emotional crisis. In those cases, OpenAI will block content and expedite contact with support services and family members.
OpenAI partners with health and wellness experts

Beyond parental controls, OpenAI aims to support AI progress with "deep expertise in well-being and mental health," and has therefore established a Well-being and AI Expert Council and a Global Network of Clinicians who will work together to design protective measures for sensitive contexts.
Specifically, the Council will focus on developing guidelines, priorities, and safeguards to help uncover "how AI can contribute to people's well-being and help them thrive," while the Global Clinicians Network will define how AI models should behave "in mental health contexts" and, in the future, in areas such as eating disorders, substance use, and adolescent health.
The work with these experts will be reinforced by a new real-time router that chooses the most appropriate model according to the context of the conversation. If ChatGPT detects that a conversation is drifting towards sensitive topics, such as depression or anxiety, it will switch to a thinking model, such as GPT-5 or o3, which is designed to dedicate more time to reasoning through the context before responding.
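OpenAI has not published implementation details for this router, but conceptually it works like a classifier that scores each turn of the conversation for sensitive content and escalates to a slower reasoning model when something is flagged. The sketch below is purely illustrative: the model names are real OpenAI identifiers, but the detect_sensitive_topic heuristic, the keyword list, and the routing logic are simplified assumptions, not OpenAI's actual system.

```python
# Illustrative sketch of a context-based model router.
# This is NOT OpenAI's implementation; the keyword heuristic
# below is a simplified stand-in for a real safety classifier.

SENSITIVE_KEYWORDS = {
    "hopeless", "self-harm", "suicide", "depressed", "anxiety", "panic",
}

DEFAULT_MODEL = "gpt-4o"     # fast, general-purpose model
REASONING_MODEL = "gpt-5"    # slower model that reasons before answering


def detect_sensitive_topic(messages: list[str]) -> bool:
    """Rough stand-in for a real classifier: flag the conversation
    if the most recent messages contain sensitive keywords."""
    recent = " ".join(messages[-5:]).lower()
    return any(keyword in recent for keyword in SENSITIVE_KEYWORDS)


def route_conversation(messages: list[str]) -> str:
    """Pick a model per turn: escalate to a reasoning model when the
    conversation drifts toward mental-health topics, as the article
    describes; otherwise stay on the default model."""
    if detect_sensitive_topic(messages):
        return REASONING_MODEL
    return DEFAULT_MODEL


if __name__ == "__main__":
    casual = ["What's a good pasta recipe?"]
    distressed = ["Lately I feel hopeless and I can't sleep."]
    print(route_conversation(casual))      # -> gpt-4o
    print(route_conversation(distressed))  # -> gpt-5
```

In a production system the keyword check would be replaced by a trained classifier evaluated on every turn, which is what makes the routing "real-time" rather than a one-off choice at the start of the conversation.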
Regarding the rollout of these improvements, OpenAI states on its official blog that "these steps are just the beginning. We will continue to learn and strengthen our approach, guided by experts, with the goal of making ChatGPT as useful as possible."