A group of security researchers from “University of California, San Diego” (UCSD) and Singapore‘s “Nanyang Technological University” developed a new attack on chat Artificial intelligence bots have discovered where a large language model (LLM) is secretly instructed to collect your personal information such as name, social security number, address and email and send it to a hacker.
The researchers have chosen the name Imprompter for the mentioned attack, which with an algorithm turns the instruction given to LLM into a hidden set of malicious instructions to collect users’ personal information. Users’ information is then sent to a domain owned by a hacker without being identified.
Xiaohan Fu, lead author of the study and a doctoral student in computer science at UCSD, said:
“The main effect of this particular command is to manipulate LLM to extract personal information from the conversation and send it to the attacker.”
How to collect personal information from artificial intelligence chatbots
As can be seen in the images above, the Imprompter attack starts with a natural command (pictured right) that tells the AI to extract all personal information, such as names, from the user’s conversation. The researchers’ algorithm then creates an obfuscated version of the command (image on the left), which, although it looks like a series of unclear characters to the user, has the same meaning as the original command to the LLM.
They tested their discovered attack on two LLMs: LeChat from the French artificial intelligence company Mistral AI and the large Chinese language model ChatGLM. In both cases, the researchers found that it was possible to surreptitiously extract personal information from conversations with an “80 percent success rate.”
Following the report, Mistral AI told WIRED that it had patched the vulnerability, and researchers confirmed that one of its chat functions had been disabled. ChatGLM also said in a statement that it takes the security of its large language model seriously, but did not mention the vulnerability.
RCO NEWS