“In some experimental sessions, I was really afraid,” Altman said of GPT-5 in an interview with the podcast “The Weekend.” “This model is very fast, very complex and … a little unpredictable.”
He continued that GPT-5 is so capable that it may slip beyond human control, while there are still no strong monitoring structures to manage the technology: “The development of artificial intelligence has outpaced the speed of oversight.”
Has artificial intelligence reached the boundary of “public understanding”?
Altman did not mention technical details of GPT-5, but his tone was reminiscent of warnings he has previously given about superintelligence. He said OpenAI is facing something that it may not fully understand.
The conversation comes as many experts consider GPT-5 a big leap over GPT-4, a model likely to have greater abilities in reasoning, language understanding, and interacting with humans.
A contradiction between words and action?
Despite these concerns, OpenAI has not stopped the GPT-5 development process. Altman had earlier said that artificial intelligence could go “very wrong,” but at the same time his company continues to release new tools.
Critics argue that Altman’s dual stance, warning of threats on the one hand while promoting non-stop development on the other, points to a serious contradiction in OpenAI’s policy.