OpenAI says it has recently identified the causes of artificial intelligence hallucinations and has published a paper aimed at solving the problem.
Artificial intelligence chatbots now play an active role in classrooms, workplaces, and everyday life, but these products still suffer from an annoying defect: they sometimes make up false information, a phenomenon known as "hallucination". These AI-generated answers can seem quite convincing, yet in reality they are completely wrong.
OpenAI wants to build artificial intelligence systems without hallucinations
But now OpenAI has explained why this happens and believes it has found a solution that could make future artificial intelligence tools much more reliable. The company recently published a paper, written in collaboration with Santosh Vempala of Georgia Tech and several other researchers, that examines the problem of hallucination in artificial intelligence.

According to the researchers, hallucinations are not necessarily caused by poor model design, but rather by the way artificial intelligence systems are tested and ranked. Under current evaluation criteria, a chatbot gets credit whenever it produces an answer to a question, even if that answer is just a guess, while models that decline to answer when they are uncertain are effectively penalized.
The researchers say this is like a multiple-choice test that encourages a student to guess rather than leave a question blank: a random guess earns points some of the time, while a blank answer earns nothing.
To deal with this problem, the paper suggests changing the scoring system: "confident but wrong" responses should cost the model heavily, while the model should be given credit for expressing caution or acknowledging uncertainty.
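To make that incentive shift concrete, here is a minimal sketch in Python, assuming a simplified scoring scheme rather than anything taken from the paper itself: a "binary" scorer that mimics today's benchmarks (a point for a correct answer, nothing otherwise) and a "penalized" scorer that charges a cost for confident wrong answers while treating abstention as neutral. The function names and the penalty value of 2 points are illustrative assumptions, not OpenAI's actual metric.

```python
# Sketch of the two scoring schemes described above (illustrative only).

def binary_score(answered: bool, correct: bool) -> float:
    """Current-style scoring: any correct answer earns a point; wrong answers
    and abstentions both earn nothing, so guessing can never hurt."""
    return 1.0 if answered and correct else 0.0

def penalized_score(answered: bool, correct: bool, wrong_penalty: float = 2.0) -> float:
    """Proposed-style scoring: confident wrong answers cost points, while
    abstaining ("I don't know") is neutral."""
    if not answered:
        return 0.0
    return 1.0 if correct else -wrong_penalty

def expected_scores(p_correct: float) -> None:
    """Compare the expected value of guessing vs. abstaining when the model
    believes its answer is correct with probability p_correct."""
    guess_binary = p_correct * 1.0                      # abstaining always scores 0
    guess_penalized = p_correct * 1.0 - (1 - p_correct) * 2.0
    print(f"p={p_correct:.2f}  binary: guess={guess_binary:+.2f} vs abstain=+0.00 | "
          f"penalized: guess={guess_penalized:+.2f} vs abstain=+0.00")

for p in (0.25, 0.5, 0.9):
    expected_scores(p)

# Under binary scoring, guessing always matches or beats abstaining, which
# rewards hallucination; under penalized scoring, guessing only pays off when
# the model is confident enough (here, p_correct > 2/3).
```

With these illustrative numbers, guessing only beats staying silent once the model believes its answer is right more than two-thirds of the time, which is the kind of incentive change the researchers describe.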
This approach also appears to be effective. In one benchmark evaluation, a cautious model answered only about half of the questions, but a much larger share of those answers were correct, while another model answered almost every question yet hallucinated in roughly three out of four cases.



