How “deepfakes” could cause an apocalypse


The rapid expansion of artificial intelligence technologies, especially deepfakes, has increased the risk of fatal errors in nuclear decision-making and could lead the world into an unintended nuclear war.
According to Isna, an American outlet argues in a recent article that the rapid spread of artificial intelligence, and of “deepfake” technology in particular, introduces new and very serious risks into the nuclear decision-making process and could significantly increase the probability of an accidental nuclear war.
Although nuclear-weapon states have tried for decades to prevent the accidental firing of these weapons, the risk of miscalculation is as serious today as it was during the Cold War, and could become even more severe because of AI-generated false information, Foreign Affairs writes.
Historical signs of nuclear error
Citing a historical example, the article reminds readers that the world has gone to the brink of nuclear disaster before. In 1983, the Soviet Union’s early warning system mistakenly reported that the United States had launched a nuclear attack. According to the article, a devastating counterattack was averted only because the duty officer, Stanislav Petrov, judged the warning to be a false alarm. The author emphasizes that had Petrov decided differently, Soviet leaders could have fired the world’s deadliest weapons at the United States. This event, the article says, shows how fragile nuclear stability has always been.
According to Foreign Affairs, the spread of artificial intelligence has exacerbated these long-standing dangers. One concern is the possibility of delegating the decision to use nuclear weapons to machines. The article notes that the United States has officially opposed such an approach: under the 2022 National Defense Strategy, a human will always remain “in the loop” on decisions about whether to use nuclear weapons. Foreign Affairs also points to the agreement announced between former US President Joe Biden and Chinese President Xi Jinping, which emphasized the necessity of human control over the decision to use nuclear weapons. However, the piece warns that even without directly delegating decision-making to machines, AI poses dangerous indirect threats to nuclear security.
Deepfakes and the risk of decisions based on false information
The article locates the main danger in the development of deepfakes: highly believable videos, images, or audio files used to spread false information. According to the article, these technologies have already been deployed in the geopolitical arena. Shortly after the start of the Ukraine war in 2022, a widely circulated deepfake showed the president of Ukraine apparently ordering his troops to lay down their arms. In 2023, another deepfake led some to believe that the president of Russia had announced a general mobilization. The article argues that such forgeries could be even more destabilizing in a nuclear context.
According to this analysis, in the worst-case scenario a deepfake could convince the leader of a nuclear-armed state that an enemy has launched, or is about to launch, a first strike. AI-supported intelligence systems could likewise issue false warnings about military mobilization or even an incoming nuclear attack. In such a situation, leaders would have only minutes to decide on a nuclear response.
Foreign Affairs writes that the current US administration is actively pursuing the use of artificial intelligence across its national security infrastructure, including releasing an operational plan for the “offensive” use of AI at the Department of War and launching a new AI-based platform for the department’s employees. However, the author cautions that integrating artificial intelligence into the early stages of nuclear decision-making carries tremendous risks.
The necessity of preserving the human factor
According to the publication, artificial intelligence systems are prone to phenomena such as “hallucination” and “confabulation,” that is, producing incorrect answers with high confidence. Because the internal logic of these systems is often not transparent, human users may not know why a system reached a particular result. The article cites research showing that people with only moderate familiarity with artificial intelligence, even in the national security field, tend to trust AI output.
Foreign Affairs warns that this situation can lead to chain crises. If AI systems are used to interpret early warning data, they may detect an attack that does not actually exist, a situation similar to the one Petrov faced in 1983, but with less time and more uncertainty. Without proper training and safeguards, advisers and decision-makers may assume that AI-generated information is valid.
The article also emphasizes that deepfakes circulating online are almost as dangerous. After seeing a fake video, a leader might mistake a missile test or military exercise for an attack. According to the publication, deepfakes can create a pretext for starting a war, inflame public opinion, or sow confusion in the midst of a crisis.
At the same time, Foreign Affairs acknowledges that artificial intelligence has legitimate military uses, from logistics and maintenance to translation and satellite image analysis. But the author insists that certain areas, especially nuclear early warning and nuclear command and control, should remain entirely off-limits to artificial intelligence. According to the piece, the lack of real-world data on nuclear attacks makes safe training of such systems nearly impossible, and the risk of error is very high.
Foreign Affairs warns that deepfakes and AI-generated false information have created unprecedented risks for nuclear systems. Policies must therefore explicitly guarantee that machines will never decide to fire a nuclear weapon without human control, and all nuclear-weapon states must adhere to this principle. According to the author, if these considerations are not taken seriously, artificial intelligence could lead the world into an irreversible disaster.