Sam Altman, CEO of OpenAI, warned users in a recent interview that the artificial intelligence industry has not yet found a way to fully protect privacy in sensitive conversations. According to him, there is no equivalent of doctor-patient confidentiality when you use chatbots such as ChatGPT for therapy or emotional support.
The OpenAI CEO made the remarks on a recent episode of the podcast "This Past Weekend." Responding to a question about how artificial intelligence interacts with current legal systems, Altman said one of the main problems is the lack of a legal framework or specific policy for AI, which means users' conversations with the technology have no legal confidentiality.
ChatGPT's weakness when used as a therapist
He says:
"People share the most personal details of their lives with ChatGPT. People, especially young people, use it as a therapist and a life coach and bring their emotional problems to it. If you talk to a (real) therapist, lawyer, or doctor, your conversation is legally protected. But we still don't have such a framework for ChatGPT."
According to Altman, this could become a serious privacy concern for users if a lawsuit is filed, because OpenAI could currently be legally required to hand over the content of these conversations if necessary.
In this part of his remarks, he said:
"If a user talks to ChatGPT about a very sensitive subject and then becomes involved in a lawsuit, we may be legally obliged to provide that information. In my opinion, this situation is worrying. Users' conversations with chatbots should have the same level of privacy they would expect when talking to a physician or therapist."
For example, OpenAI is currently subject to a court order in the New York Times case requiring the company to retain the conversation data of hundreds of millions of ChatGPT users around the world (except Enterprise users). OpenAI said in a statement on its website that it has objected to the ruling, calling it an overreach.




