Some of the most prominent artificial intelligence scientists have warned that humanity should prepare for catastrophic risks from this technology, which they say could materialize at any moment.
According to a statement issued by the International Dialogues on AI Safety (IDAIS) at a conference in Venice, rapid advances in artificial intelligence are bringing humanity to a point where AI systems will equal or even surpass human intelligence. The conference was convened to warn about the dangers of artificial intelligence and to discuss how to reduce them.
Scientists warn of the immediate dangers of artificial intelligence
The conference was attended by several well-known artificial intelligence scientists, including Turing Award laureate "Geoffrey Hinton" and "Zhang Yaqin", former president of the Chinese company Baidu. At its conclusion, the meeting's statement on the dangers of artificial intelligence was signed by the attendees.
The IDAIS statement noted that the loss of human control over artificial intelligence systems, or their malicious use, could have catastrophic results for all of humanity. Many such systems have already been released, and more will follow in the coming decades.
The purpose of the IDAIS statement is to outline the threat that artificial intelligence poses to the world and to steer the technology toward good and beneficial purposes. Dozens of artificial intelligence experts gathered at the meeting to warn about the dangers of this rapidly developing technology.
The signatories of the IDAIS statement also said that while the international community has taken promising initial steps toward cooperation on AI safety at intergovernmental summits, these efforts should focus on developing a "global contingency plan" for when AI risks become more acute.
Such contingency plans could include establishing an international body for emergency preparedness. A shared consensus is also needed on "red lines" and on what should be done if individuals or institutions cross them.