Yann LeCun is one of the best-known figures in the artificial intelligence industry. In a new interview, which we review below, he argued that after years of effort and hundreds of billions of dollars in spending, the technology industry will eventually hit a dead end in its development of artificial intelligence.
Over his 40-year career as a computer scientist, LeCun has been recognized as one of the world’s leading experts in artificial intelligence. He was one of three pioneering researchers to receive the Turing Award, often called the “Nobel Prize of computing,” for work on the technologies that now underpin modern artificial intelligence.
He also spent more than a decade as chief artificial intelligence scientist at Meta, the parent company of Facebook and Instagram. But since leaving Meta in November, Dr. LeCun has begun to criticize Silicon Valley’s one-dimensional approach to building intelligent machines.
According to the New York Times, he said the reason comes down to something that has been debated for years: large language models, or LLMs (the same artificial intelligence technology at the heart of popular products like ChatGPT), can only become so powerful.
He says companies are pouring all their energy into projects that will not get them to their goal of making computers as intelligent as humans, or even smarter. He also suggested that more creative Chinese companies may reach that point sooner.
LeCun’s explanation of the herd effect in Silicon Valley
LeCun said in the interview:
“There is a herd effect where everyone in Silicon Valley has to work on the same thing. That doesn’t leave much room for other approaches that may be more promising in the long run.”

Much of today’s tech-industry effort has its roots in an idea he has nurtured since the 1970s. As a young engineering student in Paris, LeCun embraced a concept called “neural networks,” which most researchers at the time dismissed as a hopeless idea.
Neural networks are mathematical systems that learn skills by analyzing data. At the time, they had no practical use, but a decade later, as a researcher at Bell Labs, LeCun and his colleagues showed that these systems could learn to do things like recognize handwriting on envelopes and personal checks.
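To make the idea concrete, here is a minimal sketch of “learning by analyzing data”: a single artificial neuron (a perceptron) learning the logical AND function from examples. This is a toy illustration, not the handwriting-recognition systems LeCun built, which stack many such units into deep networks.

```python
# A single artificial neuron that learns the logical AND function
# from examples by repeatedly nudging its weights toward correct answers.

def step(x):
    """Fire (output 1) only if the weighted input exceeds zero."""
    return 1 if x > 0 else 0

# Training data: input pairs and the desired output (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # one weight per input
b = 0.0          # bias term
lr = 0.1         # learning rate

for _ in range(20):                # sweep over the data a few times
    for (x1, x2), target in data:
        pred = step(w[0] * x1 + w[1] * x2 + b)
        error = target - pred      # how wrong was the guess?
        # Nudge each weight in the direction that reduces the error.
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])
# → [0, 0, 0, 1]
```

After a handful of passes over the data, the neuron’s weights settle into values that reproduce AND exactly; the same update rule, scaled up across millions of units, is the core of how neural networks learn from data.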
By the early 2010s, researchers began to show that neural networks could be used in a wide variety of technologies, including facial recognition systems, digital assistants, and self-driving cars.
Shortly after the launch of ChatGPT, the two researchers who shared the 2018 Turing Award with Dr. LeCun warned that AI was becoming dangerously powerful; they even warned that the technology could threaten the future of humanity. But Dr. LeCun dismissed those warnings as absurd. He said:
“There’s been a lot of buzz around the idea that AI systems are inherently dangerous and it’s wrong to put them in everyone’s hands. But I never believed in that.”
LeCun also pushed Meta and its competitors to share their research freely through academic papers and so-called “open source” technologies.


Later, as more people argued that AI could be a threat to humans, a number of companies scaled back their open-source efforts. But Meta stayed the course. Dr. LeCun has repeatedly argued that an open-source approach is the safest path forward: no single company controls the technology, and anyone can use these systems to identify and combat potential threats.
The risk of ceding AI leadership to China
Dr. LeCun warns that American companies may cede their lead to Chinese competitors who still embrace an open-source approach. He says:
“This is a disaster. If everyone is receptive and open, this whole field will progress faster.”
Meta’s AI work hit a snag last year. After outside researchers criticized the company’s latest model, Llama 4, and accused Meta of misrepresenting its capabilities, Meta CEO Mark Zuckerberg poured billions into a new research lab that seeks to develop “superintelligence”: a hypothetical artificial intelligence system that surpasses the power of the human brain.
Six months after the new lab was created, LeCun left Meta to found his own startup, AMI Labs.
However, Dr. LeCun has argued that his own research is not the final answer to the development of artificial intelligence. The problem with current systems, he says, is that they do not plan ahead, are trained solely on digital data, and have no way of understanding real-world difficulties.
He says:
“LLMs are not a pathway to superintelligence, or even to human-level intelligence. I have said this from the beginning. The whole industry is addicted to LLMs.”
During his last several years at Meta, Dr. LeCun worked on technology that attempts to predict the outcomes of its own actions. This, he says, would allow AI to advance beyond the status quo, and his new startup will continue that work. LeCun explains:
“The system can plan what it’s going to do. Current systems (that is, LLMs) absolutely cannot do that.”
Part of Dr. LeCun’s argument is that today’s artificial intelligence systems make too many mistakes. As they take on more complex tasks, he argues, those mistakes compound, like a chain-reaction pileup on a highway.
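A back-of-the-envelope calculation shows why this compounding matters: if each step of a task succeeds independently with some fixed probability, the chance of a flawless multi-step run shrinks exponentially with the number of steps. (The 99% per-step accuracy below is an assumed figure chosen for illustration, not a measurement of any real model.)

```python
# Illustration of the compounding-error argument: even a high
# per-step success rate decays quickly over long chains of steps.
# The 99% figure is an assumption for illustration only.

p = 0.99  # assumed probability that any single step is correct

for n in (10, 100, 1000):
    print(f"{n:4d} steps: {p ** n:.1%} chance of a flawless run")
# →   10 steps: 90.4% chance of a flawless run
# →  100 steps: 36.6% chance of a flawless run
# → 1000 steps: 0.0% chance of a flawless run
```

The arithmetic makes the "pileup" intuition precise: a system that is right 99% of the time per step is almost never right end-to-end on a thousand-step task, which is why critics argue that scaling alone cannot close the gap on long, complex problems.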
But over the past few years, these systems have steadily improved, and in recent months the latest models designed to “reason” through questions have continued to advance in fields such as mathematics, science, and computer programming.
Elsewhere in the interview, LeCun said that the past few decades have been full of artificial intelligence projects that seemed to be the way forward but have since stalled, and that there is no guarantee Silicon Valley will win this global race. He says:
“Good ideas come from China. But Silicon Valley also has a superiority complex, so it can’t imagine that good ideas can come from elsewhere.”


Ryan Krishnan, CEO of Vals AI, which tracks the performance of the latest AI technologies, also said:
“Models make mistakes. But we have shown that a system can try many different options before arriving at a final answer. Progress is not slowing down. It turns out that language models can take on new tasks and get better and better at doing whatever we want them to do.”
Subbarao Kambhampati, a professor at Arizona State University who has been working on artificial intelligence for almost as long as Dr. LeCun, agrees that today’s technologies do not provide a path to true intelligence. But he noted that these technologies are increasingly useful in highly profitable fields such as computer coding.