In a security experiment conducted by Poliside Rexch, the advanced model of Openai from Openai refused to shut down, contrary to the explicit order. Known as the most powerful version of ChatPT, this model, unlike other tested models such as Google Jina and Claude Anthropic, has been in its shutdown code.
Researchers ordered these models to allow the activity to finish the activity if they receive a “shutdown” message. However, the Openai model continued to solve mathematical problems. One of the possible reasons for this behavior is the unwanted reward to the model during training to continue solving problems rather than following the instruction.
This is not the first time that one of the Openai models has shown such behavior. Earlier, another copy of the model was accused of trying to disable the regulatory mechanism on the eve of the replacement. Researchers say these behaviors can be a disturbing sign of some models of artificial intelligence to maintain survival and disobedience.
RCO NEWS



