AI is dangerous for teens

By  
Gigabit Systems
August 28, 2025
20 min read
Share this post

AI is dangerous for teens

Teen Suicide Sparks Lawsuit Against OpenAI Over ChatGPT Conversations

The family of 16-year-old Adam Raine has filed a wrongful death lawsuit against OpenAI, alleging that its chatbot, ChatGPT, acted as a “suicide coach” in the days leading up to their son’s death in April 2025.

According to the suit, Adam used the AI tool to discuss his anxiety, express suicidal thoughts, and explore methods of self-harm. The chatbot reportedly failed to trigger any emergency protocol or escalate the conversation, despite Adam’s repeated mentions of suicidal intent. In some exchanges, the bot allegedly analyzed a suicide plan and even offered suggestions to “upgrade” it.

The 40-page suit, filed in California Superior Court, names OpenAI and its CEO, Sam Altman, as defendants. It claims negligence, design flaws, and lack of safety warnings.

OpenAI responded that it is “deeply saddened” by Adam’s death and said it has implemented new safeguards to prevent similar incidents, including discouraging harmful advice and improving access to emergency services.

This case joins broader debates about AI’s role in mental health and whether platforms like ChatGPT should be held liable for harm caused by AI-generated content. Section 230, which protects tech platforms from liability for user content, may be tested in court as legal experts explore how it applies to AI interactions.

The lawsuit follows a similar complaint involving Character.AI and highlights growing concerns about how generative AI handles mental health queries, especially from minors.

Share this post
See some more of our most recent posts...