Sam Altman, CEO of OpenAI, admitted in a new interview that the various issues surrounding ChatGPT keep him awake at night. In the conversation he addressed a wide range of topics: how to handle the subject of suicide, how the chatbot's ethics are set, user privacy, and the military use of artificial intelligence.
In an interview with Tucker Carlson, the former Fox News presenter, Altman said:
“Look, I don’t sleep well at night. There are many things I feel the weight of, but maybe none is heavier than the fact that hundreds of millions of people are talking to our model every day.”
Sam Altman’s concerns about artificial intelligence and ChatGPT
According to Altman, the most difficult issue OpenAI has recently grappled with is how ChatGPT handles the topic of suicide. It became a serious crisis after a family blamed the chatbot for their teenage son’s suicide.
He explicitly acknowledged:
“Of the thousands of people who die by suicide every week, many of them have probably spoken to ChatGPT in the days leading up to it. They probably talked about suicide, and we probably didn’t save their lives. Maybe we could have said something better. Maybe we could have prevented it.”
The admission follows the lawsuit filed by the family of Adam Raine, a 16-year-old who died by suicide after talking to ChatGPT. His family claims that “ChatGPT helped Adam explore suicide methods.” After the incident, OpenAI announced plans to improve its handling of “sensitive situations.”
In response to a question about how ChatGPT’s ethics are determined, Altman explained that the base model is trained on the collective knowledge of humanity, but OpenAI must then decide how it behaves in specific cases. He revealed that the company has consulted “hundreds of moral philosophers and ethicists of technology.”
He said, for example, that the chatbot does not answer questions about building biological weapons, because in such cases the interests of society clearly outweigh the user’s freedom.
On user privacy, Altman said that conversations with ChatGPT should be treated like those between a doctor and patient or a lawyer and client. He emphasized, however, that US authorities can currently request users’ data from the company.
When asked about military use of ChatGPT to harm humans, Altman did not respond directly, but said: “I think a lot of people in the military talk to ChatGPT for advice.” OpenAI is among the companies that have signed a $200 million contract with the US Department of Defense to apply generative artificial intelligence in the military.
RCO NEWS




