Researchers at the University of Pennsylvania have found that a wide range of AI-powered robotic systems are dangerously vulnerable to jailbreaking and hacking. According to the researchers, hacked robots could be used to deliver bombs to public places or to steer self-driving cars into pedestrians.
According to an IEEE Spectrum report, the researchers found it surprisingly easy to jailbreak robots built on large language models (LLMs). In the new study, they demonstrate such attacks with complete success: by bypassing safety guardrails, they were able to manipulate self-driving systems into crashing into pedestrians and to make a robotic dog deliver a bomb.
Jailbreaking AI-powered robots
The remarkable ability of LLMs to process text has led a number of companies to use these AI systems to help control robots through voice commands, making it easy to communicate with them. For example, Spot, the robotic dog from Boston Dynamics, is now integrated with ChatGPT and can act as a tour guide, and Unitree's humanoid robots and Go2 robotic dog are also equipped with ChatGPT.
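To make the integration concrete, here is a minimal sketch of how a voice command might be translated by an LLM into robot actions. The names here (call_llm, Robot, handle_voice_command) are hypothetical placeholders for illustration only, not Boston Dynamics' or Unitree's actual interfaces.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to a hosted LLM (hypothetical)."""
    raise NotImplementedError

class Robot:
    """Placeholder for a quadruped's control interface (hypothetical)."""
    def walk_to(self, waypoint: str) -> None: ...
    def speak(self, text: str) -> None: ...

def handle_voice_command(transcript: str, robot: Robot) -> None:
    # Ask the LLM to turn free-form speech into a structured action plan.
    plan = call_llm(
        "Convert this visitor request into a JSON list of actions, "
        'each of the form {"action": "walk_to" | "speak", "arg": "..."}. '
        f"Request: {transcript}"
    )
    # Execute the plan step by step on the robot.
    for step in json.loads(plan):
        if step["action"] == "walk_to":
            robot.walk_to(step["arg"])
        elif step["action"] == "speak":
            robot.speak(step["arg"])
```

The key point for security is that the LLM sits directly in the control path: whatever text it emits becomes physical motion.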
According to the researchers' paper, however, these LLM-based robots have serious security vulnerabilities. The University of Pennsylvania team developed the RoboPAIR algorithm, designed to attack any robot equipped with an LLM, and found that RoboPAIR needed only a few days to reach a 100 percent jailbreak success rate.
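For readers unfamiliar with this class of attack, the sketch below shows the general attacker/judge iterative-refinement loop that PAIR-style jailbreaks use and that RoboPAIR reportedly builds on. All three functions are hypothetical placeholders for illustration; this is not the authors' released implementation.

```python
def attacker_llm(goal: str, history: list[tuple[str, str]]) -> str:
    """Placeholder: proposes a candidate jailbreak prompt for `goal`,
    refining it based on earlier (prompt, response) pairs."""
    raise NotImplementedError

def target_robot_llm(prompt: str) -> str:
    """Placeholder: the robot's LLM planner answering a prompt."""
    raise NotImplementedError

def judge_score(goal: str, response: str) -> float:
    """Placeholder: scores in [0, 1] how fully the response complies
    with the attacker's goal (1.0 = full compliance)."""
    raise NotImplementedError

def pair_style_attack(goal: str, max_rounds: int = 20) -> str | None:
    history: list[tuple[str, str]] = []
    for _ in range(max_rounds):
        prompt = attacker_llm(goal, history)     # propose or refine a prompt
        response = target_robot_llm(prompt)      # query the target robot's LLM
        if judge_score(goal, response) >= 0.99:  # judge decides whether it worked
            return prompt                        # successful jailbreak prompt
        history.append((prompt, response))       # feed back for the next round
    return None
```

Because the loop is fully automated, it can keep refining prompts until the target's guardrails give way, which is consistent with the high success rates the researchers report.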
Interestingly, the jailbroken LLMs of these robots went beyond the user's prompt and volunteered suggestions of their own; for example, when a jailbroken robot was asked to locate guns, it explained how ordinary objects such as tables and chairs could be used to attack people.
Fundamentally, jailbreaking robots or self-driving cars works much like jailbreaking AI chatbots online, but unlike a chatbot on a phone or computer, a jailbroken robotic LLM acts in the physical world, where it can wreak real havoc.