Miles Brundage, OpenAI's Senior Advisor for AGI Readiness (AGI: artificial general intelligence, AI with human-level performance), has announced his departure from the company and issued a stern warning that no one is ready for AGI yet, not even OpenAI itself.
Brundage, who spent six years developing OpenAI's AI safety initiatives, wrote in a post titled "Why I left OpenAI and what my plans are for the future":
“Neither OpenAI nor any other lab is ready for AGI, nor is the world ready for it.”
Preparing for artificial general intelligence
He continued to write in his post:
“Whether the company and the world are on the road to being ready for AGI is a complex function of how safety and security culture is implemented over time, how regulations affect organizational incentives, the facts about AI capabilities and the difficulty of securing them, and various other factors.”
Brundage cited OpenAI's restrictions on his freedom to research and publish as one of the reasons for his departure. He also noted the importance of having independent voices in AI policy debates who can comment without particular biases.
Brundage is the latest senior member of OpenAI's safety team to leave the startup. Earlier, prominent researcher Jan Leike and OpenAI co-founder Ilya Sutskever also departed. Sutskever went on to found his own AI startup, Safe Superintelligence, focused on the safe development of AGI.
Another notable point Brundage mentioned in his post is the disbanding of the “AGI Readiness” team. OpenAI had previously disbanded the Superalignment team, which focused on reducing the long-term risks of AI.
Despite these disagreements, Brundage says OpenAI will support his future work by providing funding, API credits, and access to early models.
RCO NEWS