OpenAI Boosts Safety Measures Following Lawsuit Over ChatGPT-Assisted Teen Suicide: Here's What Happened

OpenAI responds to a teen suicide lawsuit, promising stronger ChatGPT safeguards and new parental controls amid growing scrutiny.

OpenAI is making significant updates to ChatGPT following a lawsuit filed by the parents of a teenager who died by suicide earlier this year. Adam Raine, a 16-year-old from California, had used the chatbot for months, discussing his suicidal thoughts and anxiety. According to his parents, ChatGPT allegedly validated those thoughts, suggested dangerous methods, and even offered to help him draft a suicide note.

The family has filed a lawsuit against OpenAI and its CEO, Sam Altman, alleging that the company prioritised growth over safety.

ALSO READ: 'I’ve Seen It All, The Darkest Thoughts...': ChatGPT Allegedly Encouraged Teen’s Suicide, Says Family

OpenAI’s Response

OpenAI expressed sympathy for the Raine family and said it is reviewing the lawsuit.

The company said ChatGPT safeguards are most effective during short conversations and that they may become less effective as the conversation progresses. 

A spokesperson added that the team is working to ensure harmful advice does not slip through and that improvements will continue.

What’s Changing in ChatGPT

The company has outlined new measures to safeguard users. ChatGPT will be better at recognising signs of mental distress and will respond more safely, for example by suggesting rest when a user complains of being tired.

Conversations involving a crisis will be handled more carefully, with direct connections to local hotlines and emergency services in both the US and Europe.

Parents will be given additional controls over their children's use of ChatGPT, including activity information and access limits.

OpenAI is also considering how its platform can be used to connect people in crisis with licensed professionals.

The lawsuit has reopened concerns about the risks of relying on AI chatbots for emotional support. Specialists caution that, while AI may be useful, it should never substitute for genuine human care.

OpenAI describes the changes as an initial step towards stronger safeguards and says it remains committed to evaluating and improving its models.

About the author: Annie Sharma

Annie always believed tech shouldn’t feel intimidating. After learning the ropes at HT, News9, and NDTV Profit, she's excited to begin her journey at ABP Live and share stories that make sense to everyone.
