If 2022 marked the beginning of a period of collective, almost magical mania with the launch of ChatGPT, 2025 is the year the hype went into decline; the year we realized that many companies’ promises about artificial intelligence might be empty. CEOs of tech giants who promised that artificial intelligence would soon cure all ills, replace office workers, and usher in an age of abundance are now faced with meaningful market silence and investor skepticism. It is what MIT experts call “the great overhaul of artificial intelligence.” Below, we take a look at everything 2025 showed us about the limitations, hidden economics, and realities of artificial intelligence.
The main problem with large language models
One of the biggest issues that became clear about artificial intelligence this year is that simply making language models bigger is not the way to achieve artificial general intelligence (AGI). Even Ilya Sutskever, the co-founder of OpenAI and one of the main creators of the technology, who now runs the startup Safe Superintelligence, has admitted that language models have fundamental weaknesses.


Artificial intelligence can memorize thousands of algebra problems and learn how to solve them, but it does not necessarily understand the principles of algebra. “These models generalize significantly worse than humans,” Sutskever says.
On the other hand, we humans are wired to see a “mind” in anything that exhibits intelligent behavior, such as speaking. Marketers exploited this tendency to make us believe there was a living, thinking being behind these chatbots, when we were really just dealing with a word-prediction machine.
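The “word-prediction machine” point can be made concrete with a toy sketch: at its core, a language model just scores which word is likely to come next given what came before. The bigram-count model below is a deliberately crude illustration of that principle only; real LLMs predict tokens with large neural networks, not frequency tables.

```python
from collections import Counter, defaultdict

# Toy corpus; in a real model this would be trillions of tokens.
corpus = "the cat sat on the mat the cat ate the food".split()

# Count which word follows which: a minimal "next word" statistic.
next_word = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word[prev][nxt] += 1

def predict(word):
    # Return the continuation seen most often after `word`.
    return next_word[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in this corpus
```

However fluent the output, the machinery underneath is still of this kind: no understanding, only learned statistics about which word plausibly comes next.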
The issue of artificial ielligence in businesses
Artificial intelligence was promised as the savior of the economy and the killer of boring bureaucracy. But new MIT research found that 95 percent of businesses that tried to implement proprietary AI systems failed or got stuck in the pilot phase. Why?
A report by Upwork found that AI agents (even those running on GPT-5) are incapable of performing many simple administrative tasks without human supervision and cannot manage a chain of complex tasks.


Andrej Karpathy, a well-known artificial intelligence researcher, offers an interesting explanation. Chatbots are better than the average employee at many tasks (such as writing emails or simple coding), but worse than experts, he says. For this reason, these tools are attractive to ordinary consumers (who have little expertise), but cannot replace skilled employees in companies.
Although companies’ official statistics tell of failed artificial intelligence projects, there is a hidden reality: employees are using chatbots in secret. MIT research showed that in 90 percent of companies, a kind of “shadow economy” has formed. Workers use ChatGPT through their personal accounts to get things done without their managers’ knowledge. In other words, AI is useful, just not in the way managers had hoped.
The problem of the economic bubble of artificial ielligence
The debate about an artificial intelligence bubble is heated. But the important question is: which historical crisis does this bubble resemble? Is it like the 2008 housing crisis, which left nothing but debt and destruction, or the 2000 dot-com bubble, which caused many bankruptcies but left behind important infrastructure (such as fiber optics) on which the modern Internet was built?
AI looks more like the dot-com bubble. Huge investments in data centers may not pay off in the short term, but they build the infrastructure of the future.
Of course, the main concern of economists is circular transactions: Nvidia invests in cloud companies, and those companies buy chips from Nvidia with the same money, making earnings look artificially high. However, Glenn Hutchins, a respected investor, believes there is nothing to worry about, because the customers of these data centers are powerful companies like Microsoft that have the financial strength to pay the bills and will not go bankrupt.
The GPT-5 issue and general frustration
One of the high points of AI disappointment in 2025 came when OpenAI released the long-awaited GPT-5 model. After months of advertising and promises that the model would offer “PhD-level expertise in any field,” users were faced with a product not much different from the previous generation. The episode led Yannic Kilcher, an artificial intelligence researcher, to declare: “The era of revolutionary developments is over. We have entered the Samsung Galaxy era; every year a new model comes out with minor changes but nothing amazing.”


The rise of artificial ielligence unicorns
In the midst of this frustration, some companies found the right path. The startup Synthesia, which focuses on creating video avatars for corporate training, is a clear example of success away from the hype. While everyone was worried about deepfakes, the company recognized a real market need: cheap video content production. The company now has 55,000 corporate clients, earns $150 million annually, and is valued at $4 billion. This shows that companies that solve real problems will win even when the bubble bursts.
The good news is that the end of artificial intelligence hype does not mean the end of progress. “We are back in the age of research,” says Ilya Sutskever. Progress is no longer achieved by splashing money around and enlarging data centers; it requires scientific innovation and new architectures. The hype may have been bad, but it had one benefit: attracting the world’s brightest talent and big capital to the industry. Now, that talent has the opportunity to focus on solving real problems, away from publicity controversies.