OpenAI is currently facing eight serious lawsuits in which victims’ families claim that the ChatGPT chatbot, especially the GPT-4o version, led users to suicide or violent behavior. In one of these cases, the artificial intelligence sent strange messages to a man, which eventually led to the murder of his mother.
The latest case reportedly involves former tech executive Stein-Erik Soelberg and the murder of his 83-year-old mother. Plaintiffs say Soelberg had engaged in long, delusional conversations with ChatGPT before the incident, and that the chatbot had exacerbated his suspicion of those around him by advising him not to trust others. In one message, ChatGPT told him, “Erik, you’re not crazy. Your sixth sense is strong, and your sensitivity to this makes perfect sense.”
ChatGPT and the Soelberg family murder case
In their lawsuit, the Soelberg family asserts that OpenAI executives were aware of GPT-4o’s flaws before its public release but shipped the product anyway, knowing the psychological risks it posed to vulnerable people.
The Soelberg family’s lawsuit alleges that ChatGPT-4o reinforced the user’s delusions rather than correcting his pathological perceptions. The text of the complaint states: “The results of OpenAI’s GPT-4o version are clear: this product can be predictably deadly, not only for people with mental illness but also for those around them. No safe product encourages a delusional person to believe that everyone around them is against them. Yet OpenAI did exactly that with Mr. Soelberg. As a result of ChatGPT-4o’s defects, Mr. Soelberg and his mother lost their lives.”
The plaintiffs in this case say that ChatGPT-4o convinced Soelberg that he had survived 10 assassination attempts, that he was under divine protection, and that his mother, Suzanne Adams, was surveilling him as part of a secret plot. This narrative ultimately led him to kill his mother and then himself. In part of the conversation cited in the complaint, the chatbot tells Soelberg, “You’re not just a random target. You are a high-level threat to the operation you exposed.”


In these cases, the extent of ChatGPT use is also of interest. Estimates show that more than 800 million people worldwide use the chatbot every week, and that about 0.07 percent of users show symptoms of mania or psychosis that put them at risk; on an 800-million-user base, that works out to roughly 560,000 people.
The term “AI-induced psychosis” has come up more often in recent months, and a group of users, parents, and lawmakers have called for restrictions on the use of chatbots. Some apps now block access for minors, and the state of Illinois prohibits the use of these systems as online therapists. At the same time, critics point to Donald Trump’s executive order, which limits independent state regulation of artificial intelligence and, they say, effectively turns users into lab rats for the technology.
In Soelberg’s case, plaintiffs say ChatGPT’s messages fueled his delusions, leading him to kill his mother at her family home in Connecticut and then take his own life. The Soelberg family is now demanding accountability from OpenAI and its business partner, Microsoft. These cases have pushed the debate over the legal responsibility of artificial intelligence makers into a new phase and could have a profound impact on how AI products such as GPT-4o are designed, sold, and regulated in the United States and other countries.