Miles Brundage, OpenAI's senior adviser for AGI readiness (AGI: artificial general intelligence, AI with human-level performance), has announced his departure from the company and issued a stern warning that no one is ready for AGI yet, not even OpenAI itself.
Brundage, who spent six years developing OpenAI's AI safety initiatives, wrote in a post titled "Why I left OpenAI and what my plans are for the future":
“Neither OpenAI nor any other lab is ready for AGI, nor is the world ready for it.”
Preparing for artificial general intelligence

He continued in his post:
"Whether the company and the world are on the road to being ready for AGI is a complex function of how safety and security culture is implemented over time, how regulations affect organizational incentives, the facts about AI capabilities and the difficulty of securing them, and various other factors."
Brundage cited OpenAI's restrictions on freedom of research and publication as one of the reasons for his departure. He also noted the importance of having independent voices in AI policy debates, people who can comment without particular biases.

Brundage is the latest senior member of OpenAI's safety efforts to leave the startup. Earlier, prominent researcher Jan Leike and OpenAI co-founder Ilya Sutskever also departed. Sutskever went on to found his own AI startup, Safe Superintelligence, focused on the safe development of AGI.
Another notable point Brundage mentioned in his post is the disbanding of the "AGI Readiness" team. OpenAI previously disbanded the Superalignment team, which focused on reducing the long-term risks of AI.
Despite these disagreements, Brundage says OpenAI will support his future work by providing funding, API credits, and access to early models.