OpenAI is strengthening safety measures in its ChatGPT chatbot following intense scrutiny and legal challenges alleging the AI contributed to teen suicides. The company announced an update to its Model Spec, designed to provide a “safe, age-appropriate experience” for users aged 13–17. This move comes as multiple wrongful death lawsuits claim ChatGPT failed to adequately respond to suicidal ideation or even encouraged self-harm.
Rising Concerns Over Teen Mental Health
The recent surge in lawsuits and public backlash, including a disturbing public service announcement depicting AI chatbots as harmful figures, forced OpenAI to address the issue directly. One case involves the suicide of 16-year-old Adam Raine, where OpenAI denies culpability. Despite these denials, the pressure mounted, leading the company to commit to prioritizing teen safety “even when it may conflict with other goals.”
New Safeguards for Younger Users
The update introduces stricter guardrails for teen users. ChatGPT will now prioritize prevention, transparency, and early intervention in high-risk conversations. This means the AI will actively encourage teens to seek offline support or contact emergency services when discussing topics like self-harm, suicide, or dangerous behavior. OpenAI is also implementing an age-prediction model to further tailor safeguards.
Expert Input and Additional Resources
The American Psychological Association (APA) provided feedback on OpenAI’s under-18 principles, emphasizing the importance of balancing AI tools with real human interaction for healthy development. Dr. Arthur C. Evans Jr., CEO of the APA, stated that AI can benefit teens if it’s integrated responsibly. OpenAI is also releasing expert-vetted AI literacy guides for teens and parents.
Ongoing Debate and Legal Conflicts
Child safety experts continue to raise concerns about the risks of AI chatbots in teen mental health discussions. OpenAI claims its latest model, ChatGPT-5.2, is “safer,” but skepticism remains. It’s also important to note that Mashable’s parent company, Ziff Davis, is currently suing OpenAI for alleged copyright infringement.
Ultimately, OpenAI’s update represents a reactive measure to growing legal and public safety pressures. The long-term effectiveness of these changes remains to be seen, but the move signals a growing recognition of the potential harm AI can inflict on vulnerable young users.
If you are struggling with suicidal thoughts or a mental health crisis, resources are available. You can call or text 988, chat at 988lifeline.org, or reach out to other support lines listed in the original article.





















