In a new study, researchers found that artificial intelligence agents based on large language models (LLMs) can spontaneously form human-like shared social norms when interacting with each other.
According to the Guardian, researchers at City St George's, University of London and the IT University of Copenhagen have come up with some interesting findings: when AI agents based on large language models communicate without human intervention, they can converge on conventions similar to human social norms.
The researchers say that when these agents communicate, they do not follow pre-scripted scenarios or simply repeat patterns; instead, they self-organize in the way human societies do.
Human-like social norms for artificial intelligence
“So far, most research has studied LLMs in isolation, but real-world artificial intelligence systems increasingly involve many interacting agents,” says Ariel Flint Ashery, a researcher at City St George's. “We wanted to know whether these models could coordinate their behavior by forming conventions, the building blocks of a society. The answer is yes, and what they do together cannot be reduced to what they do alone.”
To examine social intelligence and the emergence of social norms in artificial intelligence, the researchers adapted a version of the “naming game”. In the experiments, groups of agents (ranging from 24 to 200) were randomly paired, and each agent was asked to pick a “name” (e.g., a letter or a random string of characters) from a set of options. If both agents picked the same name, they were rewarded; otherwise, they were penalized.
Although the agents did not know they were part of a larger group, and their memory was limited to their own recent interactions, a shared naming convention spontaneously emerged across the population, mimicking the way communication norms form in human culture. This is comparable to human behavior: we, too, converge on collective conventions, for example agreeing to call a brown-and-green object a “tree”.
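To make the mechanics of the naming game concrete, here is a minimal, hypothetical sketch in Python. It is not the researchers' actual setup, which used LLM agents; simple rule-based agents stand in for the language models, and the population size, memory window, and reward scheme are illustrative assumptions.

```python
import random

# Pool of candidate "names" (single letters here; the study also used random strings).
NAMES = list("ABCDEFGHIJ")
N_AGENTS = 24      # smallest population size mentioned in the article
MEMORY = 5         # agents only remember their last few interactions (assumption)
ROUNDS = 3000      # number of random pairwise interactions (assumption)

# Each agent's memory: a list of (own_choice, success) tuples.
memory = {i: [] for i in range(N_AGENTS)}

def choose(agent):
    """Pick the name that succeeded most often in recent memory, else pick at random."""
    wins = [name for name, success in memory[agent] if success]
    if wins:
        return max(set(wins), key=wins.count)
    return random.choice(NAMES)

for _ in range(ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)     # random pairing, no global view
    name_a, name_b = choose(a), choose(b)
    success = name_a == name_b                   # reward only if both said the same name
    for agent, name in ((a, name_a), (b, name_b)):
        memory[agent].append((name, success))
        memory[agent] = memory[agent][-MEMORY:]  # limited memory window

# After enough rounds, one name typically dominates the whole population,
# even though no agent ever sees more than its own recent interactions.
final = [choose(i) for i in range(N_AGENTS)]
top = max(set(final), key=final.count)
print(f"Dominant name: {top}, adopted by {final.count(top)} of {N_AGENTS} agents")
```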
In addition, the researchers observed that collective biases can form naturally in a group of AI agents, biases that cannot be traced back to any individual agent. They also found that small, committed groups of AI agents could push the larger group toward a new naming convention. This phenomenon can also be seen in human societies.
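Continuing the hypothetical sketch above (and reusing its NAMES, memory, choose, N_AGENTS, MEMORY, and ROUNDS definitions), the effect of a small committed group can be illustrated like this; the size of the committed group and the new name "Z" are assumptions for illustration only, not values from the study.

```python
# A handful of "committed" agents always insist on a new name and never update;
# the question is whether the rest of the already converged population flips.
COMMITTED = set(range(3))   # 3 of the 24 agents are committed (illustrative assumption)
NAMES.append("Z")           # the new convention they push for

def choose_with_minority(agent):
    return "Z" if agent in COMMITTED else choose(agent)

for _ in range(ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)
    name_a, name_b = choose_with_minority(a), choose_with_minority(b)
    success = name_a == name_b
    for agent, name in ((a, name_a), (b, name_b)):
        if agent not in COMMITTED:               # committed agents never change their mind
            memory[agent].append((name, success))
            memory[agent] = memory[agent][-MEMORY:]

converted = sum(choose(i) == "Z" for i in range(N_AGENTS) if i not in COMMITTED)
print(f"{converted} of {N_AGENTS - len(COMMITTED)} uncommitted agents adopted 'Z'")
```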
The findings of this study have been published in Science Advances.