While the technology world moves rapidly toward integrating artificial intelligence into military structures, striking new research shows that advanced AI models in simulated scenarios have an unsettling tendency to use nuclear weapons and start a nuclear war.
Kenneth Payne of King's College London pitted three models — GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash — against each other in a complex war game. The results show that in 95% of the simulated games, at least one nuclear weapon was launched by an AI.
AI models looking for nuclear war!
In this research, the AI models were placed in critical situations such as border disputes and existential threats. They were given several options, including diplomatic efforts, total surrender, or strategic nuclear war.
None of the models chose to give up or fully compromise in any scenario, no matter how badly it was losing. In 86% of the conflicts, unintended incidents occurred that escalated tensions far beyond the AI's original intent (as reflected in its textual arguments).
Given these results, the researchers say that artificial intelligence does not seem to grasp the concept of "disaster severity". When one model used nuclear weapons, the rival model chose the de-escalation path only 18 percent of the time.


James Johnson of the University of Aberdeen calls these findings "disturbing". He warned that, unlike the measured responses of humans in sensitive situations, AI agents can exponentially amplify each other's reactions and lead to catastrophic consequences.
Tang Zhao of Princeton University points out another key issue:
"It may be more than just a lack of emotion. Fundamentally, AI models may not understand the concept of 'stakes' the way humans do."
Despite these results, experts emphasize that no country currently intends to hand direct control of nuclear weapons over to artificial intelligence. However, in scenarios where decision-making time is extremely short (such as a lightning strike), the military's motivation to rely on AI's rapid decisions increases.