OpenAI says it has identified the causes of hallucinations in artificial intelligence models and has published a paper proposing a way to address the problem.
Artificial intelligence chatbots now play an active role in classrooms, workplaces and everyday life, but these products still suffer from an annoying defect: they sometimes fabricate information, a phenomenon known as "hallucination". These AI-generated answers can sound quite convincing while being completely wrong.
OpenAI wants to build artificial intelligence systems without hallucinations
Now OpenAI has announced why this happens and believes it has found a solution that could make future artificial intelligence tools far more reliable. The company recently published a paper, written in collaboration with Santosh Vempala of Georgia Tech and several other researchers, examining the problem of hallucination in artificial intelligence.
According to the researchers, hallucinations are not necessarily caused by poor model design, but rather by the way artificial intelligence systems are tested and ranked. Under current benchmarks, a chatbot is rewarded whenever it gives an answer, even if that answer is a guess. The same benchmarks effectively punish models that decline to answer when they are uncertain.
The researchers liken this to a multiple-choice test that encourages a student to guess rather than leave a question blank.
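The arithmetic behind that analogy can be sketched in a few lines. This is our own illustration, not code from the paper: under accuracy-only grading, a blind guess on a four-option question has a positive expected score, while leaving it blank scores zero, so guessing is always the "rational" strategy.

```python
def expected_score(p_correct: float, reward: float = 1.0, penalty: float = 0.0) -> float:
    """Expected score of answering, given probability p_correct of being right.

    With penalty=0 (accuracy-only grading), a wrong answer costs nothing.
    """
    return p_correct * reward - (1.0 - p_correct) * penalty

# A blind guess on a 4-option question is right 25% of the time.
guess = expected_score(p_correct=0.25)  # 0.25
blank = 0.0                             # abstaining scores nothing
print(guess > blank)                    # True: the metric rewards guessing
```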
To address the problem, the paper suggests inverting the scoring system: "confident but wrong" responses should cost the model heavily, while the model should be rewarded for expressing caution or acknowledging uncertainty.
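The inverted scheme can be sketched the same way. The specific numbers here (a penalty of 2 for a wrong answer, a small credit of 0.1 for abstaining) are our own illustrative choices, not the paper's: once wrong answers cost more than abstaining, blind guessing has negative expected value and the model only benefits from answering when it is sufficiently confident.

```python
def expected_answer_score(p_correct: float, reward: float = 1.0,
                          wrong_penalty: float = 2.0) -> float:
    """Expected score of answering when wrong answers are penalized."""
    return p_correct * reward - (1.0 - p_correct) * wrong_penalty

ABSTAIN_CREDIT = 0.1  # small reward for honestly saying "I don't know"

# Answering beats abstaining only when:
#   p*reward - (1-p)*penalty > abstain_credit
# which solves to p > (abstain_credit + penalty) / (reward + penalty).
threshold = (ABSTAIN_CREDIT + 2.0) / (1.0 + 2.0)  # 0.7

print(expected_answer_score(0.25) < ABSTAIN_CREDIT)  # True: guessing now loses
print(expected_answer_score(0.9) > ABSTAIN_CREDIT)   # True: confident answers still win
```

Under accuracy-only grading the analogous threshold is zero, so any guess is worth taking; the penalty is what moves the threshold up and makes abstention the better strategy for uncertain questions.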
This approach has apparently proved effective. On one evaluation benchmark, a cautious model answered only about half the questions but gave far fewer wrong answers, while another model answered almost every question yet hallucinated in roughly three out of four cases.
RCO NEWS



