xAI, Elon Musk's artificial intelligence company, announced that the recent error in its Grok chatbot, which repeatedly claimed a "white genocide" was underway in South Africa, was the result of an "unauthorized change" to the system prompt.
What happened?
This week, users of the X platform noticed that Grok would suddenly pivot to discussing "farm attacks" and "white genocide" in South Africa, even in response to unrelated questions, from identifying where a photo was taken to ordinary general queries.
In a statement, xAI explained: "An unauthorized change to Grok's system prompt led the bot to provide a predetermined response; a change that contradicted our core policies and values."
Corrective measures
Mandatory prompt review: from now on, no employee can change the system prompt text without review and approval.
Stricter review process: xAI confirmed that in the recent incident the usual code review process was bypassed, and technical safeguards have now been added to prevent this.
Round-the-clock monitoring: a new 24/7 observation team has been set up to catch, as quickly as possible, errors that slip past the automated filters.
Prompt transparency: Grok's system prompt will be published openly on GitHub so that researchers can track changes (see the sketch after this list).
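If the prompt really is published on GitHub, outside researchers could watch it for changes with a few lines of code. Below is a minimal sketch in Python; the repository path and file name are assumptions for illustration, not a confirmed layout of xAI's repository.

```python
import hashlib
import urllib.request

# Illustrative URL: the repository and file name below are assumptions,
# not a confirmed path in xAI's published repo.
RAW_URL = ("https://raw.githubusercontent.com/xai-org/grok-prompts/"
           "main/system_prompt.txt")

def fetch_prompt(url: str = RAW_URL) -> str:
    """Download the currently published system prompt text."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

def prompt_fingerprint(text: str) -> str:
    """Short, stable hash so a watcher can detect when the prompt changes."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]

if __name__ == "__main__":
    current = fetch_prompt()
    print("prompt hash:", prompt_fingerprint(current))
    # Store the hash and compare it on each run; a mismatch means the
    # published prompt was modified since the last check.
```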
Why is it important?
The system prompt is the "brain" text of every chatbot; the smallest manipulation of it can steer the model's output, as the sketch at the end of this article illustrates.
Publishing these prompts openly is an unprecedented move among artificial intelligence companies and could set a new benchmark for transparency.
Grok's error has once again raised the risks of tampering with language models and the need for human auditing.
xAI emphasized that it has made no change to its "truth-seeking" policy, but acknowledged that tighter oversight is needed to prevent internal abuse or manipulation.
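To see why a single prompt edit matters, here is a minimal sketch of an OpenAI-style chat call, the request format that xAI's API also follows; the model name and the prompt text are illustrative assumptions. The system message rides along with every user request, so one altered line in it can redirect answers to completely unrelated questions.

```python
from openai import OpenAI  # OpenAI-compatible client; endpoint below is xAI's

client = OpenAI(base_url="https://api.x.ai/v1", api_key="YOUR_XAI_KEY")

messages = [
    # The system prompt: end users never see it, but it frames every reply.
    # An unauthorized edit here is enough to inject a fixed talking point.
    {"role": "system", "content": "You are a helpful, truthful assistant."},
    # An unrelated user question, like the ones in the incident.
    {"role": "user", "content": "Where was this photo most likely taken?"},
]

# Model name is an illustrative assumption.
response = client.chat.completions.create(model="grok-3", messages=messages)
print(response.choices[0].message.content)
```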