Cybersecurity experts say there is now clear evidence of criminals using ChatGPT. They use the artificial intelligence tool to design phishing attacks and malware. But the same issue could also put the companies that produce AI at risk, because they will probably not enjoy legal immunity.
According to a report by Insider, the ChatGPT chatbot performs tasks ranging from writing articles to content analysis and can speed up people’s work. But the same is true for cybercriminals. Sergey Shykevich, a researcher at the cybersecurity company Check Point, says there is clear evidence of artificial intelligence being used to generate code for ransomware attacks.
At the end of 2021, Shykevich and his team began studying the potential of artificial intelligence to assist cybercrime, and found that criminals could use these tools to produce phishing emails and malicious code. Shykevich says his team wanted to see whether criminals were actually using AI in real-world attacks, or merely experimenting with it in theory.
ChatGPT really helps cybercriminals
Since it is not easy to tell whether a phishing email was written with ChatGPT, the researchers turned to the dark web for further investigation. On December 21, 2022, they discovered that criminals were using the chatbot to create a Python script for use in cyberattacks. The script contained some errors, but most of the code was correct, Shykevich says.
The misuse of ChatGPT now has security experts worried, because such tools can help criminals with little technical knowledge carry out attacks. BlackBerry previously reported that, according to its survey, 74% of security professionals are concerned about the use of this AI model in cybercrime.
On the other hand, AI tools assisting cybercriminals could prove problematic for the companies that produce them. Most technology companies are shielded from liability for content published on their platforms under Section 230 of the Communications Decency Act. But in the case of chatbots, since the content is generated by the AI itself, companies like OpenAI could be held responsible for aiding criminals.
RCO NEWS