Sam Altman, CEO of OpenAI, confessed in a new interview that various ChatGPT issues keep him up at night. In this conversation, he addressed a wide range of topics: from how to deal with the subject of suicide and how the chatbot’s ethics are set, to user privacy and the military use of artificial intelligence.
In an interview with Tucker Carlson, the former Fox News presenter, Altman said:
“Look, I don’t sleep well at night. There are many things that weigh on me, but maybe none is heavier than the fact that hundreds of millions of people are talking to our model every day.”
Sam Altman’s concerns about artificial intelligence and ChatGPT
According to Altman, the most difficult issue OpenAI has recently been involved in is how ChatGPT handles the subject of suicide. This became a serious crisis after a family blamed the chatbot for their teenage son’s suicide.

He explicitly acknowledged:
“Of the thousands of people who commit suicide every week, many of them have probably spoken to ChatGPT in the days leading up to it. They probably talked about suicide, and we probably haven’t saved their lives. Maybe we could have said something better. Maybe we could have prevented it.”
The confession follows the complaint filed by the family of Adam Raine, a 16-year-old teenager who committed suicide after talking to ChatGPT. His family claims that “ChatGPT helped Adam examine suicide methods.” After the incident, OpenAI announced plans to improve its handling of “sensitive situations.”
In response to a question about how ChatGPT’s ethics are determined, Altman explained that the base model is trained on the collective knowledge of humanity, but OpenAI must then decide its behavior in specific cases. He revealed that the company has consulted “hundreds of philosophers of ethics and ethicists of technology.”
He said, for example, that the chatbot does not answer questions about the construction of biological weapons, because there the interests of the community clearly and significantly conflict with the freedom of the user.
On the subject of user privacy, Altman said that ChatGPT conversations should be like conversations between a physician and a patient, or a lawyer and a client. He emphasized, however, that US officials can currently request users’ data from the company.
When asked about military use of ChatGPT to harm humans, Altman did not respond directly, but said, “I think a lot of people in the army are talking to ChatGPT for advice.” OpenAI is one of the companies that has signed a $200 million contract with the US Department of Defense for the use of generative artificial intelligence in the military.