According to the results of a new study, when people rely on large language models to summarize information on a topic, they usually come away with more superficial knowledge than when they learn through a standard Google search.
According to Science Alert, since the release of ChatGPT, millions of people have started using large language models to access information. The appeal of these tools is that you can simply ask a question and receive a ready, detailed summary that gives the feeling of effortless learning. But a new paper provides evidence that this ease of access to information through artificial intelligence may come at a cost to users.
Using artificial intelligence to learn information may reduce your knowledge
For this study, participants were asked to find information about a topic, such as how to grow a vegetable garden, and were randomly assigned to do so either with a large language model such as ChatGPT or the traditional way, i.e. searching links through Google.
No restrictions were imposed on how the tools could be used. Participants could Google as much as they wanted, and if they wished, they could also ask ChatGPT questions for more information.
After completing their research, the participants were asked to write a recommendation to a friend about the same topic based on what they had learned. The data showed a consistent pattern: people who had learned about a topic through a large language model felt they had learned less than those who had used a web search. They also put less effort into writing their recommendations and ended up writing shorter, less specific, and more general advice.
Moreover, when these recommendations were presented to an independent group of readers who did not know which tool had been used, the readers found the recommendations of ChatGPT users less informative and less helpful, and were less likely to act on them.
In another experiment, the platform was held constant – everyone used Google – and the only difference was whether participants learned from the standard search results or from the AI Overview. Again, learning from the large language model's synthesized responses led to more superficial knowledge.