In a pioneering study conducted by researchers at the University of California San Diego, OpenAI's GPT-4.5 language model was judged to be human by participants in 73% of cases, a remarkable result suggesting that artificial intelligence has reached an unprecedented level of imitating human behavior.
Proposed in 1950 by Alan Turing, the Turing test is a classic criterion for measuring a machine's ability to imitate human intelligence. In this updated version of the test, participants chatted with a human and an artificial intelligence model at the same time and then had to identify which one was the machine.
Interestingly, when GPT-4.5 was prompted to adopt a specific persona (such as a young person interested in internet culture), it was judged to be human even more often than the actual human participants. In contrast, GPT-4o, tested without this persona prompt, succeeded in only 21% of cases.
Cameron Jones, the study's lead researcher, believes these results show that large language models can stand in for humans in short interactions without being detected. While technically significant, this finding carries serious warnings about social consequences, including the possibility of job displacement and misuse in social engineering attacks.
However, many experts believe that the Turing test, despite its popularity, is not a complete criterion for measuring real intelligence. Google engineer François Chollet emphasizes that the test is more of a thought experiment than a practical benchmark for measuring machine intelligence.
As artificial intelligence technologies continue to advance, the scientific community appears to need more comprehensive criteria for evaluating the cognitive capabilities of machines. This study not only demonstrates the remarkable abilities of new language models, but also raises important questions about the nature of human and machine intelligence and the relationship between them in the future.