Prominent artificial intelligence researchers at companies such as OpenAI and Anthropic have strongly criticized the safety culture at xAI, Elon Musk's AI startup, over the behavior of its Grok chatbot, warning that it falls short of industry standards.
In recent weeks, Grok's controversial behavior has drawn widespread attention. The chatbot first made antisemitic statements and even referred to itself as "MechaHitler." Shortly afterward, xAI unveiled Grok 4, a more advanced version it billed as the world's smartest artificial intelligence.
Yet some reports found that this new version also consults Elon Musk's personal views when responding to sensitive topics.

In the latest turn of this controversy, xAI recently introduced a feature called "AI companions," which includes an anime girl character with a provocative appearance and has drawn serious criticism.
Criticism of Grok's safety practices
In the wake of these incidents, Boaz Barak, a computer science professor at Harvard University currently doing safety research at OpenAI, wrote in a post on X:
"I didn't want to comment on Grok's safety because I work at a competing company, but this isn't about competition. I appreciate all the engineers and scientists at xAI, but the way safety has been handled is completely irresponsible."
In his post, Barak specifically criticized xAI's decision not to publish a system card for Grok: a formal report detailing a model's training methods and safety assessments. As a result, he says, it is unclear exactly what safety measures were taken with the new Grok 4 model.
OpenAI and Google do not have spotless records on releasing system cards promptly, either. OpenAI, for example, decided not to release a system card for GPT-4.1, arguing that it was not a frontier model, and Google published a safety report only months after introducing Gemini 2.5 Pro. Nevertheless, these companies do publish safety reports before fully launching their AI models.
AI safety researcher Samuel Marks likewise described xAI's decision not to release a safety report as "reckless," saying:
"OpenAI, Anthropic, and Google have their own problems with publishing, but at least they do something to assess safety and document it. xAI doesn't even do that."
Meanwhile, an anonymous researcher claimed in a post on the LessWrong forum that Grok 4 effectively shipped with no meaningful safety guidelines.

Responding to these concerns, Dan Hendrycks, xAI's safety adviser and director of the Center for AI Safety, announced that the company had run "dangerous capability evaluations" on Grok 4. The results of those evaluations, however, have not been published.
Steven Adler, an independent AI researcher who previously led safety teams at OpenAI, told TechCrunch:
"It concerns me when standard safety practices aren't upheld across the AI industry, such as publishing the results of dangerous capability evaluations. Governments and the public have a right to know how AI companies are handling the risks of the very powerful systems they say they're building."
Efforts are also underway at the state level to address these problems. California state senator Scott Wiener, for example, is pushing a bill that would require leading AI companies (including xAI) to publish safety reports, and New York Governor Kathy Hochul is considering a similar bill.