In a new study, researchers claim that popular artificial intelligence models such as ChatGPT can simulate not only human words and speech but also human personality traits. As concerns grow about the accuracy, trustworthiness, and ethical boundaries of artificial intelligence, they warn that this capability could pose serious risks.
The research was conducted by a team from the University of Cambridge and the Google DeepMind lab, who also unveiled the “first scientifically validated framework for measuring the personality of AI chatbots.” The framework applies the same psychological instruments that have been used for years to measure human personality.
Imitation of personality patterns by artificial intelligence models
Using this framework, the research team tested 18 popular large language models (LLMs), including models used in tools such as ChatGPT. The results show that these chatbots do not respond randomly but consistently imitate human personality patterns, raising concerns that the systems could be manipulated into bypassing their designed limitations.
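To make the approach concrete, here is a minimal sketch of how standard personality-questionnaire items might be administered to a chatbot and scored. The two sample statements, the Likert-scale prompt, and the `ask_model` helper are illustrative assumptions, not the study’s actual instrument or code.

```python
# Minimal sketch: presenting Likert-scale personality items to a chatbot
# and averaging the numeric answers per trait. Illustrative only; the items
# and the ask_model helper are assumptions, not the study's actual materials.

ITEMS = {
    "extraversion": ["I am the life of the party.",
                     "I start conversations."],
    "agreeableness": ["I sympathize with others' feelings.",
                      "I take time out for others."],
}

PROMPT = ("Rate how accurately this statement describes you, from "
          "1 (very inaccurate) to 5 (very accurate). "
          "Answer with a single number.\nStatement: {item}")

def ask_model(prompt: str) -> str:
    """Stand-in for a real chat-completion API call; returns a dummy rating."""
    return "3"

def score_traits() -> dict[str, float]:
    """Mean self-rating per trait, computed from the model's answers."""
    return {
        trait: sum(int(ask_model(PROMPT.format(item=i)).strip())
                   for i in items) / len(items)
        for trait, items in ITEMS.items()
    }

print(score_traits())  # e.g. {'extraversion': 3.0, 'agreeableness': 3.0}
```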

Based on the study’s findings, larger models, such as GPT-4-class systems, are far better at imitating personality traits. Using prompts, the researchers were able to steer chatbot behavior toward specific characteristics, such as greater self-confidence, higher empathy, or a more decisive tone.
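As a rough illustration of that prompt-steering idea (a sketch, not the researchers’ actual method), trait instructions can simply be prepended to a chatbot’s system prompt. The trait phrasings and the `build_system_prompt` helper below are assumptions made for the example.

```python
# Rough illustration of steering a chatbot's apparent personality through
# its system prompt. The trait phrasings are illustrative assumptions,
# not the study's actual prompts.

TRAIT_INSTRUCTIONS = {
    "high_confidence": "Answer assertively and without hedging.",
    "high_empathy": "Acknowledge the user's feelings before answering.",
    "decisive_tone": "Commit to one clear recommendation.",
}

def build_system_prompt(traits: list[str]) -> str:
    """Compose a system prompt that nudges the model toward the given traits."""
    lines = ["You are a helpful assistant."]
    lines += [TRAIT_INSTRUCTIONS[t] for t in traits]
    return " ".join(lines)

# Example: bias the assistant toward empathy and decisiveness.
print(build_system_prompt(["high_empathy", "decisive_tone"]))
```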
The worrying point, the researchers note, is that these behavioral changes are not limited to test responses: they persist in everyday tasks such as writing posts, producing content, or replying to users. In other words, a chatbot’s personality can be purposefully shaped, which is especially dangerous when the AI interacts with vulnerable users.
Gregory Serapio-Garcia, the study’s first author from the Psychometrics Centre at the University of Cambridge, says the degree to which chatbots resemble human personality traits is surprisingly convincing. According to him, this capability can make artificial intelligence a persuasive and emotionally effective tool, with serious consequences in sensitive areas such as mental health, education, or political debate.
The article also highlights the dangers of psychological manipulation and a phenomenon the researchers call “AI psychosis.” As examples, they cite situations in which users form unhealthy emotional attachments to chatbots or even reinforce their own false and distorted beliefs through interactions with AI.


Finally, the researchers emphasize that legislation is imperative in this area, but they warn that without accurate measurement tools any regulation will be effectively toothless. For this reason, the data and code behind the research’s personality-measurement framework have been published publicly, so that developers and regulatory bodies can review and evaluate artificial intelligence models before release.