An international group of artificial intelligence researchers has warned that armies of AI bots could disrupt democratic processes by influencing social networks. The group says AI agents can be deployed at massive scale to shape public opinion.
The researchers, who include Maria Ressa, a Nobel Peace Prize laureate and free speech advocate, along with prominent academics from Berkeley, Harvard, Oxford, Cambridge and Yale, have published an article in the journal Science arguing that an invasion of artificial intelligence bots is a destructive threat to social networks and messaging platforms.
The authors say these systems can coordinate automatically and infiltrate online communities, creating a false consensus. By imitating human social dynamics, the agents pose a threat to democracy.

Authoritarian political leaders could use this onslaught of AI agents to persuade citizens to accept the annulment of elections or to cast doubt on poll results, the researchers warned. They predict that by 2028 the technology will be mature enough to be used widely and cheaply in political campaigns.
At the same time, the authors called for coordinated global action to counter the threat and blunt the effect of organized disinformation campaigns.
The threat of AI bots on social networks is serious
The report notes that early AI-driven influence operations appeared during the 2024 elections in Taiwan, India and Indonesia. These campaigns remained limited in scale, but, according to the researchers, they show where the technology is heading.
The researchers say political actors could deploy an almost unlimited number of AI agents posing as human users online. These agents can infiltrate specific communities, learn their norms and sensitivities over time, and shift public opinion at scale with targeted, persuasive falsehoods.
According to the authors, advances in AI's grasp of the tone and content of conversations have intensified the threat. Agents are now better at using natural conversational language and at evading automated detection by varying the timing of their messages.


Puma Shen, a representative of Taiwan’s Democratic Progressive Party and a campaigner against Chinese disinformation, says Taiwanese voters are regularly targeted by Chinese propaganda and are often unaware of it. He reported that AI bots have stepped up their engagement with citizens on Threads and Facebook over the past two to three months.
During political debates, these agents flood the discussion with large amounts of unverifiable information, creating an “information explosion,” Shen explained. He said the bots may cite fake articles claiming the U.S. is abandoning Taiwan. “These bots don’t directly say China is great, but they encourage people to be neutral,” Shen added, describing the approach as dangerous because, in such an environment, activists like him come to be seen as “radical.”
Although an AI bot invasion has yet to be deployed at full scale, the combination of technical progress, low cost and a lack of strict regulation makes the threat an urgent issue for policymakers, according to the authors of the Science paper. They stress that without coordinated international action, future elections, including the 2028 US election, face a serious risk of organized manipulation.