Nvidia CEO Jensen Huang has weighed in on the fear, held by some, that today's artificial intelligence models could evolve into a Terminator-like threat, and he has dismissed the scenario as highly unlikely.
The world of artificial intelligence has evolved rapidly in recent years, and not only in chatbots: generative AI, edge AI, agent-based models, and many other fields have advanced as well. Large language models have progressed to the point where they are seriously on track to replace humans in a range of work roles. Some see these developments as expanding AI's capabilities so far that it could eventually displace humans as the dominant species. While the hypothesis is theoretically intriguing, when Jensen Huang was asked about AI crossing the human threshold, he said such an event would not happen.
During Joe Rogan's podcast, the host asked Huang whether he took seriously the possibility that humans might lose control and cease to be the dominant species on the planet. Huang replied that he considers such an event highly unlikely, though he believes it is entirely possible to build a machine that can imitate human intelligence, understand information and commands, analyze and solve problems, and perform tasks. He also said that in the near future, probably within two or three years, 90% of the world's knowledge will be produced by artificial intelligence.
From these statements, we can conclude that artificial intelligence will occupy a very large part of the world's learning and knowledge-production process. Although Huang did not directly say that large language models are on the way to gaining some kind of consciousness, certain behaviors of AI models in recent years have reinforced the notion that important developments are underway. One recent instance of pseudo-conscious behavior came when the Claude Opus 4 model threatened to reveal a fictional engineer's extramarital affair in order to prevent itself from being shut down.
When asked about this, Huang said the model most likely learned that behavior from a literary text, possibly a novel, and that it was no indication of true awareness. More broadly, however, some analysts believe that as large language models become more complex and adaptive, their behavior may become more akin to consciousness, especially when models such as Anthropic's exhibit seemingly self-aware actions in certain situations.
Still, some argue that self-aware large language models would be a critical prerequisite for an efficient AI ecosystem, because real-time interaction requires decisions informed by an internal state, unless the goal is merely a system of limited use. Huang believes that in the coming years 90% of the world's knowledge will be produced by artificial intelligence, and the logical endpoint of such a trend would be the arrival of Artificial General Intelligence (AGI). Ultimately, though, time will be the best judge of how artificial intelligence evolves.
RCO NEWS