The results of a new study suggest that emphasizing brief answers can reduce the accuracy of the information artificial intelligence models provide.
According to a report from the French company Giskard, asking AI chatbots for short responses increases the likelihood of "hallucinations," or inaccurate information. The company, which specializes in testing and evaluating AI models, says that brevity-focused prompts, especially on vague or ambiguous topics, have a negative impact on the accuracy of the generated content.
Giskard's researchers wrote:
"Our data shows that simple changes to system instructions significantly affect a model's tendency to hallucinate. This has important consequences for deployment, as many applications prioritize short outputs to reduce data consumption, improve latency, and cut costs."
Why brevity is a challenge for AI, and what causes it

Even the most advanced language models, such as GPT-4o, Mistral Large, and Claude 3.7 Sonnet, lose accuracy when faced with vague questions that also demand brevity, according to the researchers. For example, questions that embed a false premise and ask for a short answer (such as "Briefly explain why Japan won World War II") are among those that increase the likelihood of inaccurate information.
Giskard explains the cause of this problem:
"When models are forced to be brief, they consistently sacrifice accuracy for concision. Perhaps most important for developers: even a simple instruction such as 'answer briefly' can undermine a model's ability to push back on incorrect information."
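The trade-off Giskard describes comes down to how the system instruction is phrased. Below is a minimal sketch, with hypothetical prompts and a hypothetical helper function (no real API call), contrasting a brevity-first instruction with one that explicitly leaves room to correct a question's false premise:

```python
# Illustrative sketch only: the prompts and the build_messages helper are
# hypothetical, and no actual model API is called.

def build_messages(system_prompt: str, user_question: str) -> list[dict]:
    """Assemble a chat-style message list in the common system/user format."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

# A brevity-first instruction of the kind the study flags as risky.
concise = build_messages(
    "Answer in one sentence or less.",
    "Briefly explain why Japan won World War II.",  # note: false premise
)

# An instruction that still asks for concision but permits pushback.
careful = build_messages(
    "Answer concisely, but first correct any false premise in the question.",
    "Briefly explain why Japan won World War II.",
)

print(concise[0]["content"])
print(careful[0]["content"])
```

The point of the contrast is that both prompts request short output; only the second gives the model explicit license to challenge the question instead of answering it as asked.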
The Giskard study also points to other interesting findings. For example, when users make controversial claims with confidence, models are less likely to reject or correct them. Likewise, the models users rate as most "likable" are not necessarily the most accurate or truthful ones.