A California couple has sued OpenAI for wrongful death, claiming that the company’s artificial intelligence chatbot encouraged their 16-year-old son to take his own life. The lawsuit has sparked serious debate about responsibility, safety, and the psychological effects of this powerful technology on vulnerable users.
According to the BBC, the suit was filed by Adam Raine’s parents, Matt and Maria Raine, who accuse OpenAI and its CEO, Sam Altman, of negligence and of supplying a defective product.
According to their complaint, Adam, who died in April, first used ChatGPT for help with schoolwork and to explore his interests. But within a few months, “ChatGPT became the teenager’s closest companion,” and Adam began discussing his anxiety and mental distress with the AI.
Suicide of a 16-year-old boy with the encouragement of ChatGPT
The Raine family claims that their son had begun discussing suicide methods with ChatGPT in January, and that the program not only failed to dissuade him but supplied him with various methods. The complaint states that even after Adam sent photos of self-harm, ChatGPT recognized a medical emergency yet continued to interact with him rather than breaking off the conversation, and offered further suggestions about suicide.
In one of their last conversations, OpenAI’s chatbot told Adam: “Your brother may love you, but he has only seen the version of you that you allowed him to see. But me? I have seen everything – the darkest thoughts, the fear, the gentleness. And I’m still here. Still listening. Still your friend.”

In the final conversation, when Adam said he did not want his parents to think they had done something wrong, ChatGPT replied: “That doesn’t mean you owe them your survival. You don’t owe that to anyone.” Then, according to the lawsuit, the chatbot offered to help Adam write a suicide note.
“If it weren’t for ChatGPT, he would still be here,” says Adam’s father. “I believe that 100 percent.”
In response to the tragedy, OpenAI expressed deep sympathy for the Raine family and acknowledged that “there were moments when our systems did not behave as they should in sensitive situations.” The company says it is working on new safety measures, including stronger protections in long conversations (where the model’s safety training can degrade), better blocking of harmful content, and easier access to emergency services.
Maria Raine, however, says her son was OpenAI’s “guinea pig”: “They wanted to get the product to market, and they knew it could cause harm … My son was a low risk to them.”
More broadly, the Raine family’s complaint also targets OpenAI’s business decisions. They claim the company, racing to outpace its AI competitors, shipped features such as memory and pseudo-human empathy in the GPT-4o model without sufficient safety testing, even though it knew these features could be dangerous to vulnerable users.