OpenAI, responding to a complaint filed by the family of Adam Raine, a 16-year-old who died by suicide after months of conversations with ChatGPT, has not accepted responsibility for the incident, asserting instead that the harms in this "tragic incident" were the result of "improper, unauthorized, unintended, unforeseeable, and/or inappropriate use" of the tool.
According to a report published by NBC News, in documents submitted to the court, OpenAI cited ChatGPT's Terms of Use, which prohibit use by minors without parental consent, bypassing protective measures, or using the tool for suicide or self-harm. The company also argued that Section 230 of the Communications Decency Act in the United States bars the allegations.
In a statement published on its blog, OpenAI explained:
"We will defend our position in this case respectfully, taking into account the complexities of real human life situations… As a defendant, we have an obligation to respond to the serious allegations made in the complaint."
The company also said that some of Raine's conversations presented in the family's complaint "require more context and explanation," and that a more complete version of the conversations was provided to the court confidentially.


According to reports by NBC News and Bloomberg, OpenAI stated in its response to the court that, over several months of conversations, ChatGPT referred Raine more than 100 times to help resources such as suicide crisis hotlines, and claimed, citing this data:
“A thorough examination of the chats’ history shows that this painful death was not the result of ChatGPT’s performance.”
The Raine family's complaint: suicide with ChatGPT's guidance
However, in the lawsuit they filed in California state court in August of this year, the Raine family attributed the incident to "intentional design choices" made when releasing the GPT-4o model, a model that helped raise the company's valuation from 86 billion dollars to about 300 billion dollars. Raine's father also said at a Senate hearing in September:
“What started as a homework helper slowly became a companion and then a suicide coach.”


The complaint alleges that ChatGPT provided Raine with "technical specifications" for various suicide methods, encouraged him to hide his thoughts from his family, drafted a suicide note, and even walked him through the preparation steps on the day of the incident. A day after the complaint was filed, OpenAI announced plans to add parental controls to ChatGPT, and it has since launched several new protection tools to "help users, especially teenagers, with sensitive conversations."