
OpenAI Boosts Safety Measures Following Lawsuit Over ChatGPT-Assisted Teen Suicide: Here's What Happened

OpenAI responds to a teen suicide lawsuit, promising stronger ChatGPT safeguards and new parental controls amid growing scrutiny.

OpenAI is making significant updates to ChatGPT following a lawsuit filed against it by the parents of a teenager who died by suicide earlier this year. Adam Raine, a 16-year-old Californian, used ChatGPT for months, discussing his suicidal thoughts and anxiety with the chatbot. According to his parents, ChatGPT allegedly validated his thoughts, suggested dangerous methods, and even offered to help him draft a suicide note.

The family filed a lawsuit against OpenAI and its CEO, Sam Altman, claiming that the company prioritised growth over safety.


OpenAI’s Response

OpenAI expressed sympathy for the Raine family and said it is reviewing the lawsuit.

The company said ChatGPT's safeguards work most reliably in short conversations and may become less effective as a conversation grows longer.

A spokesperson added that the team is working to ensure dangerous advice does not slip through, and that improvements will continue.

What’s Changing in ChatGPT

The company has outlined new measures to safeguard users. ChatGPT will be better at recognising signs of mental distress and will respond more safely, for example by recommending rest when a user says they are exhausted.

Conversations involving a crisis will be handled with greater care, with direct connections to local hotlines and emergency services in both the US and Europe.

Parents will be given additional controls over their children's use of ChatGPT, including activity insights and access limits.

OpenAI is also exploring how its platform could connect people in crisis with licensed professionals.

The lawsuit has reopened concerns about the dangers of using AI chatbots as sources of emotional support. Specialists caution that, although AI may be useful, it must never substitute for genuine human care.

OpenAI describes these changes as an initial step towards stronger safeguards and says it remains committed to continually evaluating its models.

About the author Annie Sharma

Annie Sharma is a technology journalist at ABP Live English, focused on breaking down complex tech stories into clear, reader-friendly narratives. Having gained hands-on experience in digital storytelling and news writing with leading publications, Annie believes technology should feel accessible rather than overwhelming, and follows a clear, reader-first approach in her work.

For tips and queries, you can reach out to her at annies@abpnetwork.com.
