A few mohs ago, scieists claimed in a study that with the advanceme of artificial ielligence, these systems began to develop “value systems”. In particular, the study claimed that artificial ielligence may prioritize human well -being. But a new study of MIT scieists has rejected this view and says as a conclusion that Artificial Ielligence No Value system It is not cohere.
According to the Tekranch report, the authors of the study say that “alignme” or assurance of the behavior of artificial ielligence models optimally and reliable may be more complex than it is usually thought. They have emphasized that today’s artificial ielligence is more like an imitation system and in many respects is not predictable and unstable.
MIT research on the value of artificial ielligence systems

Steven Casper, a writer in the research and a PhD stude, says that artificial ielligence models are often not fixed and that limited tests cannot be definitively commeed on their preferences or perspectives. He believes that most problems arise when we try to accurately analyze models based on specific experimes.
In this study, researchers examined artificial ielligence models from various companies, including Meta, Google, Mistral, Openai and Eropic to see if these models have strong and strong values and perspectives.
Their conclusion was that none of these models were consta in their preferences and showed differe views depending on the differe questions and conditions.
Kasper believes that the evidence suggests that artificial ielligence models are not actually systems that have a consiste and cohere set of beliefs and values, but rather similar to imitators who produce a variety of responses.



