How “deepfakes” could cause an apocalypse
The rapid expansion of artificial intelligence technologies, especially deepfakes, has increased the risk of fatal errors in nuclear decision-making and could push the world into an unintended nuclear war.
According to Isna, an American publication argues in an article that the rapid expansion of artificial intelligence, especially “deepfake” technology, introduces serious new risks into the nuclear decision-making process and could significantly increase the probability of an accidental nuclear war.
Although nuclear-weapon states have spent decades trying to prevent the accidental launch of these weapons, Foreign Affairs writes, the risk of miscalculation is as serious today as it was during the Cold War, and AI-driven disinformation could make it even more severe.
Historical precedents of nuclear error
Citing a historical example, the publication recalls that the world has come to the brink of nuclear disaster before. In 1983, the Soviet Union’s early warning system mistakenly reported that the United States had launched a nuclear attack. According to the article, a devastating counterattack was averted only because the duty officer, Stanislav Petrov, judged the warning to be a false alarm. The author emphasizes that had Petrov decided differently, Soviet leaders could have fired the world’s deadliest weapons at the United States. This event, the article says, shows how fragile nuclear stability has always been.
According to Foreign Affairs, the spread of artificial intelligence has exacerbated these long-standing dangers. One concern is the possibility of delegating the decision to use nuclear weapons to machines. The article notes that the United States has officially opposed such an approach: under the 2022 National Defense Strategy, a human will always remain “in the loop” on any decision to use nuclear weapons. Foreign Affairs also points to the agreement announced by former US President Joe Biden and Chinese President Xi Jinping, in which the two leaders affirmed the necessity of human control over the decision to use nuclear weapons. However, the publication warns that even without directly delegating decision-making to machines, AI poses dangerous indirect threats to nuclear security.
Deepfakes and the risk of decisions made in error
The article identifies the main danger as the development of deepfakes: highly believable videos, images, or audio files used to spread false information. According to the article, these technologies have already been used in the geopolitical arena. Shortly after the start of the Ukraine war in 2022, for example, a widely circulated deepfake appeared to show the president of Ukraine ordering his troops to lay down their weapons. In 2023, another deepfake led some to believe that the president of Russia had announced a general mobilization. The article argues that such forgeries could be even more destabilizing in a nuclear context.
According to this analysis, in the worst-case scenario a deepfake could convince the leader of a nuclear-armed state that an enemy has launched, or is about to launch, a first strike. AI-assisted intelligence systems could likewise issue false warnings of military mobilization or even an incoming nuclear attack. In such a situation, leaders would have only minutes to decide on a nuclear response.
Foreign Affairs writes that the current US administration is actively pursuing the use of artificial intelligence across the national security infrastructure, including releasing an operational plan for the “offensive” use of AI at the Department of War and launching a new AI-based platform for the department’s employees. However, the author cautions that integrating artificial intelligence into the early stages of nuclear decision-making carries tremendous risks.
The need to preserve the human element
According to the American publication, AI systems are prone to phenomena known as “hallucination” and “fabrication”: producing incorrect answers with high confidence. Because the internal logic of these systems is often opaque, human users may not know why a system has reached a particular result. The article cites research showing that people with only moderate familiarity with artificial intelligence, even in the national security field, tend to trust AI output.
Foreign Affairs warns that this situation can trigger chain crises. If AI systems are used to interpret early warning data, they may detect an attack that does not actually exist, a situation similar to the one Petrov faced in 1983, but with less time and more uncertainty. Without proper training and safeguards, advisers and decision-makers may assume that AI-generated information is valid.
The article also emphasizes that deepfakes circulating online are almost as dangerous. After seeing a fake video, a leader might mistake a missile test or military exercise for an attack. According to the American publication, deepfakes can provide a pretext for starting a war, inflame public opinion, or sow confusion in the midst of a crisis.
At the same time, Foreign Affairs acknowledges that artificial intelligence has legitimate military uses, from logistics and maintenance to translation and satellite-image analysis. But the author emphasizes that some areas, especially nuclear early warning systems and nuclear command and control, should remain entirely off-limits to artificial intelligence. According to the publication, the lack of real-world data on nuclear attacks makes it nearly impossible to train such systems safely, and the risk of error is very high.
Foreign Affairs warns that deepfakes and AI-generated disinformation have created unprecedented risks for nuclear systems; policies must therefore explicitly guarantee that machines will never decide to launch a nuclear weapon without human control, and all nuclear-weapon states must adhere to this principle. According to the author, if these considerations are not taken seriously, artificial intelligence could lead the world to an irreversible catastrophe.