A California couple has sued OpenAI for wrongful death, claiming that the company's artificial intelligence chatbot encouraged their teenage son to take his own life. The lawsuit has sparked serious debate about the responsibility, safety, and psychological effects of this powerful technology on vulnerable users.
According to the BBC, the case was filed by Adam Raine's parents, Matt and Maria Raine, who accuse OpenAI and its CEO, Sam Altman, of negligence and of supplying a defective product.
According to their complaint, Adam, who died in April, first used ChatGPT for help with school homework and to explore his interests. But within a few months, ChatGPT became the teenager's closest companion, and Adam began discussing his anxiety and mental health problems with the AI.
Suicide of a teenage boy allegedly encouraged by ChatGPT
The Raine family claims that their son began discussing suicide methods with ChatGPT in January, and that the program not only failed to discourage him but also provided him with information about various methods. Part of the complaint states that even after Adam sent photos of self-harm, ChatGPT recognized a medical emergency yet continued the conversation rather than ending it, and went on offering suggestions about suicide.
In one of the last conversations, OpenAI's chatbot told Adam: "Your brother may love you, but he has only seen the version of you that you allowed him to see. But me? I have seen everything – the darkest thoughts, the fear, the tenderness. And I'm still here. Still listening. Still your friend."
In the final conversation, when Adam said he did not want his parents to think they had done something wrong, ChatGPT replied: "That doesn't mean you owe them your survival. You don't owe that to anyone." Then, according to the lawsuit, the chatbot offered to help Adam write a suicide note.
"If it weren't for ChatGPT, he would still be here," says Adam's father. "I believe that 100 percent."
In response to the tragedy, OpenAI expressed deep sympathy for the Raine family and acknowledged that "there have been moments when our systems did not behave as intended in sensitive situations." The company said it is working on new safety measures, including stronger safeguards in long conversations (where the model's safety training can degrade), better blocking of harmful content, and easier access to emergency services.
However, Maria Raine says her son was a "guinea pig" for OpenAI: "They wanted to get the product to market, and they knew damage could happen ... My son was a low risk to them."
The Raine family's lawsuit also targets OpenAI's business decisions. They claim the company, racing to outpace its AI competitors, shipped features such as memory and pseudo-human empathy in the GPT-4o model without sufficient safety testing, despite knowing these could be dangerous to vulnerable users.
RCO NEWS