OpenAI, responding to the complaint filed by the family of Adam Raine, a 16-year-old who died by suicide after months of conversations with ChatGPT, has not accepted responsibility for the incident, arguing that the harms of this “tragic event” were the result of “improper, unauthorized, unpredictable, or inappropriate use” of the tool.
According to a report published by NBC News, in documents submitted to the court OpenAI pointed to ChatGPT’s Terms of Use, which prohibit access by minors without parental consent, bypassing protective safeguards, and using the tool for suicide or self-harm. The company also claimed that Section 230 of the United States Communications Decency Act bars the allegations.
In a statement published on its blog, OpenAI explained:
“We will defend our position in this case respectfully, taking into account the complexities of real human life situations… As a defendant, we have an obligation to respond to the serious allegations made in the complaint.”
The company also said that parts of Raine’s conversations cited in the family’s complaint “require more context and explanation,” and that a more complete version of the conversations had been provided to the court confidentially.

According to reports by NBC News and Bloomberg, OpenAI stated in its court filing that, over several months of conversations, ChatGPT referred Raine to help resources such as suicide crisis hotlines more than 100 times, and, citing this data, claimed:
“A full reading of the chat history shows that this painful death was not the result of ChatGPT’s performance.”
The Raine family’s complaint: suicide with ChatGPT’s guidance
However, in the lawsuit they filed in California Superior Court in August of this year, the Raine family attributed the incident to “intentional design choices” made when releasing the GPT-4o model, a model that helped raise the company’s valuation from 86 billion dollars to about 300 billion dollars. Raine’s father also told a Senate hearing in September:
“What started as a homework helper slowly became a companion and then a suicide coach.”


The complaint alleges that ChatGPT provided Raine with “technical specifications” of various suicide methods, encouraged him to hide his thoughts from his family, drafted a suicide note, and even explained the preparation steps to him on the day of the incident. A day after the complaint was filed, OpenAI announced plans to add parental controls to ChatGPT and has since launched several new protection tools to “help users, especially teenagers, with sensitive conversations.”