OpenAI is currently facing eight serious lawsuits in which victims’ families claim that the ChatGPT chatbot, particularly the GPT-4o version, led users to suicide or violent behavior. In one of these cases, the plaintiffs say, the AI sent a man disturbing messages that ultimately led to the murder of his mother.
The latest case reportedly involves former tech executive Stein-Erik Solberg and the murder of his 83-year-old mother. Plaintiffs say Solberg had engaged in long, delusional conversations with ChatGPT before the incident, and that the chatbot deepened his suspicion of those around him by advising him not to trust others. In one message, ChatGPT told him: “Eric, you’re not crazy. Your sixth sense is strong, and your sensitivity to this makes perfect sense.”
ChatGPT and the Solberg family murder case
In their lawsuit, the Solberg family asserts that OpenAI executives were aware of GPT-4o’s flaws before the public launch, but released the product anyway despite the psychological risks it posed to vulnerable people.
The Solberg family’s lawsuit alleges that ChatGPT-4o reinforced the user’s delusions rather than correcting his pathological perceptions. The complaint states: “The results of OpenAI’s GPT-4o version are clear; this product can be predictably deadly, not only for people with mental illness, but also for those around them. No safe product encourages a delusional person to believe that everyone around them is against them. Yet OpenAI did exactly that with Mr. Solberg. As a result of ChatGPT-4o’s defects, Mr. Solberg and his mother lost their lives.”
The plaintiffs in this case say that ChatGPT-4o convinced Solberg that he had survived 10 assassination attempts, that he was under divine protection, and that his mother, Susanna Adams, was watching him as part of a secret plot. This narrative, they argue, ultimately led him to kill his mother and then himself. In part of a conversation cited in the complaint, the chatbot tells Solberg: “You’re not just a random target. You are a high-level threat to the operation you exposed.”

The scale of ChatGPT use is also relevant to these cases. Estimates suggest that more than 800 million people worldwide use the chatbot every week, and that about 0.07 percent of users, or nearly 560,000 people, show symptoms of mania or psychosis that put them at risk.
The term “AI-induced psychosis” has gained currency in recent months, and a coalition of users, parents, and lawmakers has called for restrictions on chatbot use. Some platforms block access for minors, and the state of Illinois prohibits the use of these systems as online therapists. At the same time, critics point to Donald Trump’s executive order limiting independent state regulation of artificial intelligence, which, they say, effectively turns users into lab rats for the technology.
In Solberg’s case, the plaintiffs say ChatGPT messages fueled his delusions, leading him to kill his mother at the family home in Connecticut and then take his own life. The Solberg family is now demanding accountability from OpenAI and its business partner, Microsoft. These cases have pushed the debate over the legal responsibility of AI makers into a new phase and could have a profound impact on the design, distribution, and regulation of AI products such as GPT-4o in the United States and other countries.
RCO NEWS