“In some experimental sessions, I was genuinely afraid,” Sam Altman said of GPT-5 in an interview on “The Weekend” podcast. “This model is very fast, very complex and … a little unpredictable.”
He went on to say that GPT-5 is so capable it may slip beyond human control, while robust oversight structures to manage the technology still do not exist: “The development of artificial intelligence has outpaced our ability to supervise it.”
Has artificial intelligence reached the boundary of “public understanding”?
Altman did not go into the technical details of GPT-5, but his tone recalled warnings he has previously given about superintelligence. He said OpenAI was facing something it may not fully understand.
The conversation comes as many experts consider GPT-5 a major leap beyond GPT-4: a model likely to have stronger abilities in reasoning, language understanding, and interaction with humans.
A contradiction between words and actions?
Despite these concerns, OpenAI has not paused the development of GPT-5. Altman had earlier said that artificial intelligence could go “very wrong”, yet his company continues to release new tools.
Critics argue that Altman’s dual posture, warning of threats on the one hand while promoting non-stop development on the other, points to a serious contradiction in OpenAI’s policy.
RCO NEWS