New MIT research shows that progress in artificial intelligence depends more on increases in computing power than on the intelligence of algorithms, and that this dependence has greatly increased the cost of developing large AI models.
The institute’s researchers, including Matthias Mertens and his colleagues, reportedly examined the performance of 809 large language models to determine how much each factor—computing power, proprietary algorithmic innovations, and general industry advances—contributed to the models’ ultimate accuracy. The results showed that computing power has by far the greatest effect on model accuracy, significantly outweighing any algorithmic innovation.
The role of computing power in the development of artificial intelligence models
The MIT research shows that language models in the 95th percentile require 1,321 times more computing power to train than weaker models. This huge gap indicates that access to extensive computing resources is the decisive factor behind the superior performance of frontier models. However, algorithmic advances and company-proprietary techniques still play an important role in reducing costs and improving the performance of smaller models, so developers with limited budgets can reach comparable performance through smart software.
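To make the "95th percentile versus weaker models" comparison concrete, here is a minimal sketch of how such a ratio could be computed over a set of training-compute figures. All numbers below are made up for illustration; they are not data from the MIT study.

```python
# Hypothetical illustration: compare the training compute of a
# 95th-percentile model to that of a median model in a sample.
# The FLOP figures are invented for demonstration only.
import statistics

# Made-up training-compute values (FLOPs) for ten models.
compute_flops = [1e21, 3e21, 8e21, 2e22, 5e22, 1e23, 4e23, 9e23, 3e24, 1e25]

# statistics.quantiles with n=100 returns 99 cut points, i.e. the
# 1st through 99th percentiles of the sample.
percentiles = statistics.quantiles(compute_flops, n=100)
p95 = percentiles[94]  # 95th-percentile training compute
p50 = percentiles[49]  # median training compute

ratio = p95 / p50
print(f"95th percentile / median compute ratio: {ratio:.0f}x")
```

The study's reported 1,321x gap is a ratio of this kind computed over its sample of 809 models; the exact percentile definitions and baseline it uses are not specified in this article.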

Hardware cost is another major challenge. The price of the chips and networking components needed to scale AI has risen steadily: by 2025, the average chip price had increased by about 70% compared to 2019. High-end GPUs such as Nvidia's Blackwell or Rubin are more efficient than the previous generation, but companies must purchase them in very large numbers to provide the computing power required for the next frontier models.
This need for massive investment explains the trillions of dollars in annual spending by big companies like Google, Meta, and OpenAI. It is also why OpenAI CEO Sam Altman is trying to raise tens of billions of dollars in capital and plans to spend more than a trillion dollars.
Even so, the MIT research shows that engineering and algorithmic advances can still reduce costs. Teams with limited budgets can, by using smart algorithms, build smaller models that match frontier models in prediction and inference. In other words, software innovation lets small companies achieve strong efficiency while consuming less energy and fewer resources.
The end result is that today's AI landscape is split into two camps: large companies with huge computing resources that keep frontier models at the top, and smaller companies that rely on optimized algorithms and software innovation to build cost-effective, efficient models. Overall, increases in the accuracy and intelligence of AI depend more than anything on computing power, while advanced algorithms and proprietary innovations play a complementary, reinforcing role.
RCO NEWS


