According to a new study, when people rely on large language models to summarize information on a topic, they tend to come away with more superficial knowledge than when they learn through a standard Google search.
According to Science Alert, since the release of ChatGPT, millions of people have started using large language models to access information. The appeal of these tools is that you can simply ask a question and receive a ready-made, detailed summary, which creates a feeling of effortless learning. But the new paper provides evidence that this ease of access to information through artificial intelligence may come at a cost to users.
Using artificial intelligence to learn may leave you with shallower knowledge
For the study, participants were asked to research a topic, such as how to grow a vegetable garden, and were randomly assigned to do so either with a large language model such as ChatGPT or the traditional way, by following links from a Google search.

No restrictions were placed on how the tools could be used: participants in the Google condition could run as many searches as they wanted, and those using ChatGPT could ask as many follow-up questions as they liked.
After finishing their research, participants were asked to write a recommendation to a friend on the same topic based on what they had learned. The data showed a consistent pattern: people who had learned about a topic through a large language model felt they had learned less than those who had used a web search. They also put less effort into the writing task and ended up producing shorter, less specific, and more generic recommendations.
What's more, when these recommendations were shown to an independent group of readers who did not know which tool had been used, they rated the ChatGPT users' recommendations as less informative and less helpful, and said they would be less likely to follow them.
In another experiment, the platform was held constant – everyone used Google – and the only difference was whether participants learned from the standard search results or from the AI Overview panel. Again, learning from the model's synthesized answers produced more superficial knowledge.