A few months ago, scientists claimed in a study that as artificial intelligence advances, these systems begin to develop "value systems". In particular, the study claimed that artificial intelligence may come to prioritize human well-being. But a new study by MIT scientists rejects this view, concluding that artificial intelligence does not hold any coherent value system.
According to a TechCrunch report, the study's authors say that "alignment" — ensuring that artificial intelligence models behave optimally and reliably — may be more complex than is usually thought. They emphasize that today's artificial intelligence is more like an imitation system and is, in many respects, unpredictable and unstable.
MIT research on the values of artificial intelligence systems
Stephen Casper, a co-author of the study and a PhD student at MIT, says that artificial intelligence models are often inconsistent, and that limited tests cannot support definitive conclusions about their preferences or perspectives. He believes most problems arise when we try to draw precise claims about models from narrow, specific experiments.
In this study, the researchers examined artificial intelligence models from several companies, including Meta, Google, Mistral, OpenAI, and Anthropic, to see whether these models hold strong, stable values and perspectives.
They concluded that none of these models was consistent in its preferences; each expressed different views depending on how questions were framed and under what conditions they were asked.
Casper believes the evidence suggests that artificial intelligence models are not actually systems holding a consistent, coherent set of beliefs and values, but are instead more like imitators that produce a variety of responses.