Prominent artificial intelligence researchers from companies such as OpenAI and Anthropic have strongly criticized the safety culture at xAI, Elon Musk's AI startup, over its Grok chatbot and have raised concerns about industry standards.
In recent weeks, Grok's controversial behavior has attracted attention. The chatbot first made antisemitic statements and even referred to itself as "MechaHitler." Shortly afterward, xAI unveiled a more advanced version, Grok 4, billing it as the world's smartest artificial intelligence.
However, some reports indicated that this new version also consults Elon Musk's personal views when responding to sensitive issues.
Most recently, xAI introduced a feature called "AI companions," which includes a provocatively styled anime girl character and has drawn serious criticism.
Criticism of Grok's safety practices
Against this backdrop, Boaz Barak, a computer science professor at Harvard University who is currently working on safety research at OpenAI, wrote in a post on the social network X:
"I didn't want to comment on Grok's safety because I work at a competing company, but this has nothing to do with competition. I appreciate the engineers and scientists at xAI, but the way safety is being handled is completely irresponsible."
In his post, Barak specifically criticized xAI's decision not to publish a system card for Grok, a formal report documenting a model's training methods and safety evaluations. He says it is unclear exactly what safety measures were taken for the new Grok 4 model.
OpenAI and Google do not have a spotless record on releasing system cards promptly either. For example, OpenAI decided not to release a system card for GPT-4.1, arguing that it was not a frontier model, and Google published a safety report for Gemini 2.5 Pro months after the model was introduced. Still, these companies do publish safety reports for their AI models before full deployment.
AI safety researcher Samuel Marks also described xAI's decision not to publish a safety report as "reckless," saying:
"OpenAI, Anthropic, and Google also have problems with how they publish, but at least they do something to evaluate safety and document it. xAI doesn't even do that."
Meanwhile, in a post published on the LessWrong forum, an anonymous researcher claimed that Grok 4 has virtually no meaningful safety guardrails.

Responding to these concerns, Dan Hendrycks, xAI's safety adviser and director of the Center for AI Safety, said the company had carried out "dangerous capability evaluations" on Grok 4. However, the results of those evaluations have not yet been published.
Steven Adler, an independent AI researcher who previously led safety teams at OpenAI, also told TechCrunch:
"It concerns me when standard safety practices aren't upheld across the AI industry, such as publishing the results of dangerous capability evaluations. Governments and the public have a right to know how AI companies are handling the risks of the very powerful systems they say they're building."
Efforts at the state level are also underway to address these problems. California State Senator Scott Wiener, for example, is pushing a bill that would require leading AI companies (including xAI) to publish safety reports, and New York Governor Kathy Hochul is considering a similar bill.