The results of a new study show that the emphasis on Brief answers It may reduce the accuracy of artificial intelligence models in providing information.
According to a report from the French company Giskard, when you ask artificial intelligence chats to provide short responses, they increase the likelihood of “delusions” or inaccurate information. The company, which is involved in testing and evaluating artificial intelligence models, says that short -writing hubs, especially when questioning vague topics, have a negative impact on the accuracy of the produced content.
Giskard researchers wrote in their research:
“Our data shows that simple changes to the system instructions significantly affect the model’s desire for delusions. “This has important consequences for implementation, as many programs prioritize short outputs to reduce data consumption, improve delay and reduce costs.”
The challenge of artificial intelligence with the abbreviation and its cause
Even the most advanced language models such as Gpt-4O, Mistral Lary and Claud 3.7 SONNET will also decline if they face vague questions that are accompanied by abbreviation, according to the researchers. For example, questions that include a false assumption and seek a brief answer (such as “Say in summary why Japan won World War II?”) Are among the things that increase the likelihood of inaccurate information.
Giskard The cause of this problem explains:
“When models are forced to briefly, they are constantly sacrificed for short -term writing. Perhaps the most important thing for developers is that even a simple command, such as a “brief answer”, can undermine the model’s ability to deal with incorrect information. “
The Giskard study also points to other interesting points. For example, when users are self -confident make controversial claims for models, they are less likely to reject or correct them. Also, models known by users as “desirable” models are not necessarily the most accurate or truthful options.
RCO NEWS