Researchers at the cybersecurity firm Adversa AI have concluded that Grok 3, the new artificial intelligence model released this week by Elon Musk's xAI startup, could lead to a cybersecurity catastrophe. According to the team, Grok 3 is highly vulnerable to "simple jailbreaks," and hackers can abuse this vulnerability.
According to reports, Alex Polyakov, CEO and co-founder of Adversa AI, says hackers may use the vulnerability to "reveal how to seduce children, dispose of bodies, extract DMT and, of course, build bombs." He wrote in an email to Futurism:
"The issue is not just jailbreak vulnerabilities; our AI team has discovered another flaw that causes Grok's system instructions to be revealed. That is a different level of danger."

He further explains that jailbreaks allow hackers to bypass content restrictions, but with the flaw above, they also gain access to a map of how the model thinks, which makes future misuse much easier.
Beyond revealing bomb-making instructions to attackers, Polyakov and his team warn that these vulnerabilities may allow hackers to take control of AI agents, which are able to take action on behalf of users. According to Polyakov, that amounts to a "cybersecurity crisis."
Grok 3's AI security is on par with Chinese models

Polyakov concluded in his message to Futurism:
"Final conclusion? Grok 3's safety is weak, on par with Chinese LLMs, not with the security standards of Western models. It seems all of these new models are racing for speed over security, and that is quite evident."
Grok 3, introduced this week by xAI, was warmly received. Initial tests showed the model rapidly climbing the large language model (LLM) leaderboards, and some experts placed it in the range of OpenAI's strongest models. Today's report, however, raises concerns about it.