OpenAI claimed nearly two years ago that artificial general intelligence (AGI) could “enhance humanity” and “give everyone incredible new capabilities.” Now Sam Altman, the CEO of OpenAI, is trying to lower expectations for AGI.
According to a report from The Verge, Altman said at the New York Times DealBook Summit:
“My guess is that we’ll get to AGI sooner than most people in the world think, and it will matter much less. Many of the safety concerns that we and others have raised won’t arrive at the AGI moment. AGI gets built, the world goes on largely the same way, everything grows faster, but then there is a long continuation from what we call AGI to what we call superintelligence.”
Judging by Altman’s new remarks, it seems that his and OpenAI’s definition of AGI has shifted somewhat.

The effects of artificial general intelligence will be less than expected
While OpenAI initially presented artificial general intelligence as an amazing technology, Altman has recently been trying to temper expectations.
He recently told Reddit users in a question-and-answer session that AGI will be ready within the next five years. He also claimed that the fundamental effects of this technology would be “surprisingly small.”
In addition, Altman has invoked the concept of “superintelligence,” which now seems to carry the amazing capabilities once promised for AGI. He also recently hinted at the concept on his personal blog, saying it could arrive within the “next few thousand days.”
The use of these overlapping concepts has confused people to some extent, and perhaps this is why expectations for AGI became excessive. Fei-Fei Li, the famous computer scientist known as the “Godmother of Artificial Intelligence,” also recently said that she does not know exactly what AGI means for OpenAI.
