A worrying study shows that the use of artificial-intelligence tools such as ChatGPT to write scientific articles is increasing rapidly. According to the analysis, roughly one in five abstracts in biology was probably written with the help of artificial intelligence.
According to Nature, researchers at the University of Tübingen in Germany investigated the extent of the influence of large language models (LLMs) on the scientific literature. The study found that, of the roughly 1.2 million abstracts indexed in the PubMed database in 2024, a substantial share contain strong signs of having been edited or written by artificial intelligence.
Using artificial intelligence to write scientific articles
Since most researchers do not disclose their use of artificial intelligence, texts produced by chatbots are hard to identify. Instead of training a model to detect the hallmarks of AI writing, the researchers took a different approach: they looked for "excess words", words whose usage rose suddenly and abnormally after ChatGPT's release in November 2022.
These are mainly stylistic words favored by LLMs rather than specialized terminology, including:
- Delves
- Showcasing
- Unparalleled
- Invaluable
- Crucial
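The "excess words" approach described above amounts to comparing word frequencies in abstracts written before and after ChatGPT's release and flagging words whose relative frequency jumped abnormally. A minimal sketch of that idea, using tiny made-up corpora and a simple frequency-ratio threshold (the study's actual statistical methodology is more elaborate):

```python
from collections import Counter

# Hypothetical mini-corpora; the real study analyzed abstracts
# indexed in PubMed before and after November 2022.
abstracts_before = [
    "we measured protein levels in mouse tissue",
    "results show a significant effect on growth",
]
abstracts_after = [
    "this study delves into protein levels showcasing unparalleled results",
    "we present invaluable findings that are crucial for growth",
]

def word_freq(abstracts):
    """Relative frequency of each word across a list of abstracts."""
    counts = Counter(w for a in abstracts for w in a.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def excess_words(before, after, ratio_threshold=5.0, floor=1e-4):
    """Flag words whose frequency rose abnormally after the cutoff date.

    `floor` keeps the ratio finite for words absent from the earlier corpus.
    """
    f_before = word_freq(before)
    f_after = word_freq(after)
    return sorted(
        w for w, f in f_after.items()
        if f / max(f_before.get(w, 0.0), floor) >= ratio_threshold
    )

print(excess_words(abstracts_before, abstracts_after))
```

On this toy input, style words like "delves" and "showcasing" are flagged because they never appear in the earlier corpus, while ordinary terms like "protein" and "growth" are not; at realistic corpus sizes the same contrast separates post-ChatGPT vocabulary spikes from stable background usage.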
This linguistic shift is even larger than the vocabulary changes seen during the coronavirus pandemic (words such as "mask"). The study also shows that the use of artificial intelligence is steadily increasing and is far more common in some fields and regions. In computational disciplines such as bioinformatics, for example, and among articles published from countries such as China and South Korea, over one fifth of abstracts were probably written with the help of artificial intelligence.
The researchers also found that, as public awareness grows, authors are learning to hide the footprints of artificial intelligence, for example by removing telltale words or changing the prompts they give the chatbot. The more important concern, however, is that studies like this one cannot determine how artificial intelligence was used. Did researchers use it for acceptable tasks such as editing text or helping with translation, or for more questionable ones such as generating large portions of scientific content without sufficient oversight and understanding?
Overall, the study underscores the need for transparent guidelines and specific policies on the ethical use of artificial intelligence in the scientific community.
The findings of the study were published in the journal Science Advances.
RCO NEWS