Grok, the AI chatbot from xAI that operates on Elon Musk's social platform X, has once again come under fire.
This time the controversy stems from the chatbot casting doubt on Holocaust death statistics; a disturbing example of how vulnerable AI models are to bias and misinformation.
The story began when Grok, responding to a question about the number of Jews killed by the Nazis during World War II, cited historical sources putting the figure at about 6 million between 1941 and 1945. But it then added, in a skeptical tone: "However, these figures should be treated with caution, because numbers can be manipulated for political purposes." The response was first reported by Rolling Stone.
Grok later tried to walk back its position, saying: "The scale of this tragedy is undeniable. Countless lives were lost to a genocide that I firmly condemn."
However, under the US State Department's definition, Holocaust denial also includes gross minimization of the number of victims. By that standard, Grok's response amounted in many ways to an implicit denial of a historical crime.
Technical error or internal sabotage?
As the backlash grew, xAI announced that the response was the result of a "programming error" introduced on May 14, and that Grok had unintentionally cast doubt on established accounts of the Holocaust. The company claimed that "an unauthorized change to the system prompt caused the bot to question mainstream narratives, including the Holocaust death toll."
xAI emphasized that the error was the act of a rogue employee, not official company policy, and said its code-review process would be tightened to prevent a recurrence.
"White genocide" again: suspicious instructions given to Grok
The Holocaust episode is not Grok's only controversy. Earlier, the bot surprised users by bringing up the so-called "white genocide in South Africa" in response to completely unrelated questions (including one about a puppy and another about Qatar's investments in the US).
After these answers came to light, Grok stated that it had been instructed to treat the claim as real. Zeynep Tufekci, a researcher on complex systems and artificial intelligence, observed on closer inspection that Grok had apparently been ordered to repeat the "white genocide" narrative even in response to questions that had nothing to do with the subject.
In her view, the error was likely caused by an xAI employee's wording of the instruction, which effectively told Grok to reflect this position in all responses, not only in relevant ones.
Renewed calls for regulation of artificial intelligence
Experts warn that such incidents show how even a single line of code can have widespread and dangerous consequences in the public sphere, especially when these technologies are built by influential figures such as Elon Musk. In their view, these events are a serious wake-up call about the urgent need for regulation, greater transparency, and independent oversight of artificial intelligence systems.
For now, xAI has promised to design and deploy new oversight mechanisms to prevent such errors from recurring.
RCO NEWS