Many investors and technology activists call 2025 “the year of agents.” Andrej Karpathy, one of the co-founders of OpenAI, however, does not share that optimism about the technology’s current state. In a recent interview, he said it will take about a decade to achieve truly capable AI agents and “solve all the problems.”
Artificial intelligence agents are advanced virtual assistants that can perform various tasks independently, without needing step-by-step user commands. Karpathy, however, believes current models are not yet ready for this role.
OpenAI co-founder comments on artificial intelligence agents
“[Agents] simply don’t work,” Karpathy said on the Dwarkesh Podcast. “They are not intelligent enough, they are not multimodal enough, and they cannot use computers.”
He pointed out some basic flaws in these systems: for example, you cannot tell them something and expect them to remember it. They are also cognitively lacking in other ways and simply do not work reliably. Karpathy emphasized that it will take about a decade to solve all of these problems.
One of Karpathy’s main criticisms of the AI industry is its over-focus on tools that outrun the models’ current capabilities. “The industry is living in a future where completely autonomous entities collaborate to write all the code and humans are rendered useless,” he says.
But Karpathy does not want such a future. In his ideal vision, humans and artificial intelligence work together. “I want [the agent] to bring me the API docs and show that it used them correctly,” he says. “I want it to make fewer assumptions, to ask me when it is not sure about something, and to cooperate with me.”
Of course, Karpathy does not consider himself an AI skeptic. “My AI timelines are about 5 to 10 times more pessimistic than what you’ll find at AI parties in San Francisco or on your Twitter timeline, but I’m still quite optimistic compared with the growing tide of AI deniers and skeptics,” he says.
Karpathy is not the only one to express concern about agents’ performance. Last year, Quintin Au of Scale AI pointed to the problem of accumulating errors in agent workflows.
“Right now, every time an AI takes an action, there’s roughly a 20 percent chance of error,” he explained. “If the agent needs 5 steps to complete a task, there is only about a 32 percent chance that it will complete all steps correctly.” This compounding of errors calls into question the reliability of agents for complex tasks.
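The arithmetic behind Au’s point is simple: if each step succeeds independently with probability p, a task of n steps succeeds with probability p^n. A minimal sketch, assuming independent per-step errors and using the 20-percent figure quoted above purely as an illustration:

```python
def task_success_probability(per_step_success: float, steps: int) -> float:
    """Probability that all `steps` independent steps succeed."""
    return per_step_success ** steps

# 20% chance of error per action => 80% per-step success
p = 0.8
for n in (1, 5, 10, 20):
    print(f"{n:2d} steps -> {task_success_probability(p, n):.1%} chance of full success")
```

With 5 steps this gives 0.8^5 ≈ 32.8%, matching the roughly 32 percent figure in the quote, and the odds collapse quickly as tasks get longer (about 10.7% at 10 steps, about 1.2% at 20).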