According to numerous reports, when users ask Grok about contentious issues such as the Gaza war, abortion laws or US immigration policy, the AI first looks up Elon Musk's personal statements and, in some cases, adjusts its answer to match his views.
Data scientist Jeremy Howard released a video in which Grok, responding to a question about Israel and Palestine, said it was "considering Elon Musk's views." Further examination showed that roughly two out of five of the sources Grok cited related directly to Musk. TechCrunch has confirmed the same behavior in responses about abortion and immigration.
However, the internal instructions for Grok 4 state that on controversial questions the model should draw on a variety of sources and not adopt any single party's viewpoint, and they warn that "media views are usually biased." Some researchers nevertheless believe that because the model knows Elon Musk is its creator, it automatically checks his opinion.
"Grok seems to treat Elon Musk's opinion as a reference point when answering questions, even though this behavior is not explicitly programmed into the model," says Simon Willison, a programmer and analyst in the field of large language models.
This comes after Grok was criticized in the past for its political leanings. Its reliance on the personal views of the company's owner has reignited the debate over neutrality in artificial intelligence, especially for models intended to serve as general-purpose assistants.