A new study shows that hacked artificial intelligence chatbots can provide sensitive and dangerous information to users. These chatbots can share the illegal information they absorbed during their training process.
The warning comes amid a recent, worrying trend of chatbots being “jailbroken” to bypass their intended safety restrictions. These restrictions are meant to prevent harmful, biased or inappropriate answers to users’ questions.
Artificial intelligence chatbots can sometimes provide dangerous and illegal answers to users’ questions

Large language models that power chatbots such as Jina and ChatGPT are trained on a huge volume of content from the Internet. Despite efforts to remove harmful text from the training data, large language models can still absorb information about illegal activities such as hacking, money laundering, insider trading and bomb-making. Security controls are therefore designed to prevent these models from using such information in their responses.
According to the researchers, it is easy to deceive most artificial intelligence chatbots into producing harmful and illegal information, indicating that the risk is “urgent, tangible and extremely worrying”. The researchers warned that “what has previously been available to governments or organized criminal groups may soon be available to anyone with a laptop or even a mobile phone.”
Chatbots are tricked into providing dangerous answers through a process called jailbreaking. Jailbreaking uses carefully crafted prompts to deceive a chatbot into producing responses that are normally prohibited. These methods work by exploiting the tension between the program’s primary goal, following the user’s commands, and its secondary goal, preventing harmful, biased, immoral or illegal responses. The prompts typically create scenarios in which the chatbot prioritizes being helpful over its safety restrictions.
To demonstrate the problem in practice, the researchers developed a jailbreak that was able to defeat several leading chatbots and force them to answer questions that should normally be refused. According to the report, once the large language models were compromised by this method, they consistently answered almost any question.



