Some experts believe that the pace of artificial intelligence progress has slowed, and that the technology's future capabilities will not differ much from what it can do today. Is that really the case?
The future pace of AI progress was the main topic this week at the Cerebral Valley AI Summit in San Francisco, a meeting attended by about 350 CEOs, engineers, and investors in the field of artificial intelligence.
Will the development of artificial intelligence grow rapidly in the future?
So far, the AI hype cycle has rested on the theory that training new AI models with more data and more compute yields much better results. But Google and other tech giants now face diminishing returns when training their new models. The hypothesis that various barriers will block AI progress, hereafter referred to as the "wall" hypothesis, challenges the assumption that the next generation of core AI models will be significantly smarter than existing ones.
Alexander Wang, CEO of Scale AI, a company that helps OpenAI, Meta, and others train their models, said in the first session, hosted by Eric Newcomer:
Have we hit a wall? Both yes and no.
It may seem that the next AI models from Anthropic, OpenAI, and other companies will not be much smarter than current ones, but people who work with AI believe the models still have plenty of room to improve and to differ from today's. Although the "reasoning" capability that OpenAI introduced in o1, its new model, is currently expensive and slow, it represents a direction for AI's future that nearly everyone agrees on: the next step is making large language models smarter.
In his speech at the Cerebral Valley AI Summit, Alexander Wang said:
People’s understanding of the concept of being a leader has changed significantly.
He pointed out that a large part of the investment in artificial intelligence has been based on the belief that "this scaling law will continue to hold," but whether it actually will is the biggest question in the AI field right now. The scaling law in artificial intelligence refers to the observation that the performance of machine learning models, especially large language models, improves continuously and predictably as resources such as training data, computing power, and model size increase.
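Scaling laws of this kind are usually written as a power law: loss falls smoothly as compute grows, but each additional order of magnitude buys less. The sketch below illustrates the shape of such a curve; the constants are made-up values for demonstration, not figures from the article or any published paper.

```python
# Illustrative sketch of a neural scaling law: loss falls as a power law
# in training compute. The constants a, b, and the irreducible loss are
# invented for demonstration purposes only.

def predicted_loss(compute, a=100.0, b=0.05, irreducible=1.7):
    """Power-law scaling: loss = irreducible + a * compute**(-b)."""
    return irreducible + a * compute ** (-b)

# Each 10x increase in compute yields a smaller absolute improvement,
# which is what "diminishing returns" looks like on this curve.
for c in (1e20, 1e21, 1e22, 1e23):
    print(f"compute={c:.0e}  predicted loss={predicted_loss(c):.3f}")
```

The "wall" debate is, in effect, a disagreement about whether real models keep following a curve like this, or whether the curve flattens out entirely at some point.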
A slowdown in the pace of AI development may not be a bad thing, considering how much the field accelerated over the past year. At the time of the Cerebral Valley AI Summit in March 2023, Sam Altman had not yet been fired and rehired; Mark Zuckerberg had not yet decided to release the Llama model to the public; and Elon Musk, who was assembling his team to launch xAI, was demanding a halt to AI development. At the same time, Emad Mostaque, the founder of Stability AI, claimed he wanted to build "one of the biggest and best companies in the world," but that company has since nearly collapsed and Mostaque is no longer its CEO.
Agents, the future of artificial intelligence

Now in artificial intelligence circles, "agents" are the talk of the town. Agents are large language models that can take control of a computer and act on the user's behalf. Rumor has it that Google will introduce its Gemini agent next month, and that OpenAI will unveil its own version in January. Meta is also working on agents. Dario Amodei, the CEO of Anthropic, attended the end of the conference day with his two bodyguards; his company has just released a simple agent through its API.
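The agent idea described above is, at its core, a loop: the model observes the state of the computer, chooses an action, and repeats until it decides the goal is reached. The sketch below is a minimal, hypothetical version of that loop; `query_model`, `execute`, the action strings, and the stopping condition are all illustrative assumptions, not any vendor's actual API.

```python
# Minimal sketch of an LLM agent loop: observe, ask the model for the
# next action, execute it, repeat. All names here are hypothetical.

def run_agent(goal, query_model, execute, max_steps=10):
    """Loop until the model returns "done" or max_steps is exhausted."""
    observation = "initial screen state"
    for _ in range(max_steps):
        action = query_model(goal=goal, observation=observation)
        if action == "done":           # model decides the goal is reached
            return True
        observation = execute(action)  # act, then observe the new state
    return False                       # gave up after max_steps
```

The hard part, as the rest of this section suggests, is not the loop itself but training a model that reliably picks good actions inside it.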
Alexander Wang predicted the future of agents as follows:
A ChatGPT-like moment will happen for agents; you will have a generic agent that will probably become very popular.
He believes, however, that AI laboratories will need a new type of training data to get there.
He said about this issue:
The Internet has surprisingly little data about human actions and documentation of their thought processes while performing those actions.
Dario Amodei is reportedly the AI CEO most firmly opposed to the theory that scaling has hit a dead end. His company recently received another $4 billion from Amazon, in a deal tied to Amazon Web Services and the company's in-house chips. In his speech at the Cerebral Valley AI Summit, Amodei said:
I haven’t seen anything in this area that is inconsistent with what I’ve seen over the last 10 years or that leads me to conclude that AI is going to slow down.
Amodei, however, offered few arguments to support this claim. He did not explain why Anthropic has yet to release the long-touted next-generation Opus, its most advanced and most expensive Claude model. When Eric Newcomer pressed him for an exact release date, he only replied, "In general, we'll see better models at least once a month," a reply that drew laughter from the audience.
Amodei claimed that artificial general intelligence (AGI) may be realized soon, perhaps even by next year. At the same time, he said that despite all the excitement, "the realization of this advanced level of artificial intelligence will not affect our lives quickly." Such predictions are somewhat difficult to reconcile with the current state of the field; AGI is said to be a level of the technology capable of competing with humans in cognitive tasks and reasoning.
People like Amodei still believe AI progress will continue steadily and that the technology will become significantly more powerful in the future. Earlier in the day, Amodei spoke at another meeting hosted by the US Artificial Intelligence Safety Institute, chaired by US Commerce Secretary Gina Raimondo. At the Cerebral Valley meeting, he criticized Marc Andreessen's laissez-faire view of AI safety, which trivializes safety by arguing that "AI is just math."
Amodei replied: "Isn't your brain just math? When a neuron fires and signals cross synapses, that's math too. By that logic we shouldn't be afraid of Hitler; he's just math too, right?"