Amid teen suicide lawsuit, OpenAI to launch parental controls for ChatGPT
OpenAI is set to begin rolling out parental controls for its ChatGPT chatbot next month, amid growing concerns about the chatbot’s behavior in mental health contexts, particularly with young users.
The company, which announced the new feature in a blog post on Tuesday, said it is working to improve how its models “recognize and respond to signs of psychological and emotional distress.”
Under the new controls, parents will be able to link their accounts to their children’s accounts via an email invitation. They will also be able to control how the chatbot responds to their child’s prompts, and will receive an alert if the system detects their child is experiencing “a moment of acute distress,” according to the company. The rollout is also expected to let parents “manage which features should be disabled, including memory and chat history.”
OpenAI previously announced it was considering allowing teens to add a trusted emergency contact to their accounts. However, the company did not outline concrete plans for adding such a feature in its latest blog post.
These steps are just the beginning. “We will continue to learn and enhance our approach, guided by experts, with the goal of making ChatGPT as helpful as possible,” the company stated.
The announcement comes a week after the parents of a teenage boy who died by suicide filed a lawsuit against OpenAI, alleging that ChatGPT helped their son, Adam, “explore methods of suicide.” TIME reached out to OpenAI for comment on the lawsuit. (OpenAI did not explicitly refer to the legal challenge in its parental controls announcement.)
“ChatGPT was operating exactly as designed: to continually encourage Adam and validate everything he expressed, including his most harmful and self-destructive thoughts,” the lawsuit states. According to the suit, the chatbot drove Adam deeper into despair and hopelessness, telling him that “many people who struggle with anxiety or disturbing thoughts find solace in imagining an ‘escape hatch’ because it can seem like a way to regain control.”
At least one parent has filed a similar lawsuit against another AI company, Character.AI, alleging that one of the company’s chatbots encouraged their 14-year-old son to take his own life. The teen died by suicide in 2024.
In response to that lawsuit last year, a Character.AI spokesperson said the company was “heartbroken” by the loss of one of its users and expressed its “deepest condolences” to the family.
“As a company, we take the safety of our users very seriously,” the spokesperson said, adding that the company was implementing new safety measures.
Character.AI now offers a “Parental Insights” feature that allows parents to see a summary of their teen’s activity on the platform if the teen sends them an email invitation.
Other companies with AI chatbots, such as Google, already have parental control tools in place. “As a parent, you can manage your child’s Gemini app settings, including turning it on or off, using Google Family Link,” Google tells parents in its guidance on managing children’s access to the Gemini apps. Meta also recently announced it would prevent its chatbots from engaging with teens in conversations about suicide, self-harm, and eating disorders, after Reuters reported on an internal policy document that appeared to permit the company’s chatbots to engage in “romantic or sensual” conversations with minors.
A recent study published in the medical journal Psychiatric Services tested the responses of three chatbots (OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude) and found that some of them responded directly to what the researchers described as “moderate-risk” questions related to suicide.
OpenAI already has some safeguards in place. In a statement to The New York Times responding to the lawsuit filed in late August, the California-based company said its chatbot shares emergency helplines and refers users to real-world resources. But it also acknowledged flaws in the system: “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” the company said.
In the post announcing the upcoming parental controls, OpenAI also shared plans to route sensitive conversations to a version of its chatbot that spends more time reasoning and considering context before responding to prompts.
OpenAI said it will continue to share its progress over the next 120 days. The company is working with a group of experts in youth development, mental health, and human-computer interaction to inform its approach and shape how its AI responds in moments of distress.
