According to a Mehr reporter, in the past three years generative artificial intelligence, especially in the form of large language models, has not only transformed human writing practices but also fundamentally challenged the logic of text production in science. By sharply reducing the time and skill required to produce text, this technology has blurred the border between scientific writing and the mass production of text and has created a new model for producing specialized articles. What was initially introduced as a tool to facilitate scientific writing, and in particular to remove the language barriers faced by non-English-speaking researchers, has today become a structural phenomenon whose consequences go well beyond improvements in writing style.
In this framework, the phenomenon of low-quality, AI-based scientific text production refers to a situation in which a huge volume of pseudo-scientific text is generated with minimal human intervention, without passing through the classic scholarly processes of reflection, revision, and self-criticism. These texts often appear to have all the academic components: complex language, a regular structure, specialized vocabulary, and conventional referencing. At the level of content, however, they frequently lack theoretical innovation, an original problem, or any meaningful advancement of existing knowledge.
The central question in this situation is whether the spread of these tools has improved the quality of science or has simply accelerated the production of apparently scientific texts. Put more precisely, are we facing a "democratization of knowledge" or a "linguistic inflation" that devalues the classical markers of scientific quality? According to the empirical evidence of recent research, the answer is not very optimistic: at least for now, the scales are tipped toward a quantitative increase in text production rather than a deepening of scientific content.
New research findings about scientific writing
A recent study by researchers from the University of California, Berkeley and Cornell University, published in the journal Science, analyzed more than one million abstracts of preprint articles posted between 2018 and 2024 in an attempt to assess the real impact of artificial intelligence on scientific output. In this research, the number of articles per author was taken as an index of productivity, and the eventual acceptance of an article in a scientific journal as an index of quality.
The results show that when authors start using artificial intelligence tools, their productivity increases significantly. Depending on the publishing platform, the number of articles published per month grew by between 36 and nearly 60 percent after the adoption of artificial intelligence. The increase is far more striking for non-English-speaking researchers, especially Asian authors, in some cases reaching about 90 percent.
These data indicate that artificial intelligence has served many non-English-speaking researchers as a tool to overcome language barriers and improve their English writing; a function that can, in itself, broaden access to scientific publication.
Linguistic complexity and the inversion of quality criteria
However, the examination of article quality paints a worrying picture for scientific research. Articles written with the help of artificial intelligence have, on average, more complex language. In articles written without AI, greater linguistic complexity usually correlates with a higher likelihood of acceptance and publication; this suggests that reviewers treat precise, complex language as a sign of scientific depth and quality.
In the case of articles written with the support of artificial intelligence, however, this relationship is reversed: the more complicated the language, the less likely the article is to be accepted. This striking inversion indicates that AI-driven linguistic complexity is, in many cases, not a reflection of scientific depth but a cover for methodological and substantive weaknesses in the research.
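To make the reported pattern concrete, the following is a purely illustrative Python sketch, not the study's actual analysis or data: it simulates hypothetical papers in which linguistic complexity raises acceptance odds for human-written texts but lowers them for AI-assisted ones, and recovers that sign flip with a logistic model containing an interaction term. All variable names and effect sizes are hypothetical.

```python
# Illustrative sketch only: a toy logistic model of the reported inversion,
# where complexity helps non-AI papers but hurts AI-assisted ones.
# Numbers and variables are hypothetical, not taken from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
complexity = rng.normal(0.0, 1.0, n)      # standardized language-complexity score
ai_assisted = rng.integers(0, 2, n)       # 1 = written with AI support

# Hypothetical "true" relationship: complexity helps non-AI papers, hurts AI papers.
logit = 0.8 * complexity - 1.6 * ai_assisted * complexity - 0.2
accepted = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([complexity, ai_assisted, complexity * ai_assisted])
model = LogisticRegression().fit(X, accepted)

b_cmp, b_ai, b_inter = model.coef_[0]
print(f"complexity effect, non-AI papers: {b_cmp:+.2f}")
print(f"complexity effect, AI papers:     {b_cmp + b_inter:+.2f}")  # sign flips
```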
In other words, what used to be a sign of quality can now serve as camouflage for "scientific banality": texts adorned with technical vocabulary and complex structures that nevertheless lack innovation, solid reasoning, or a valid scientific method.
Artificial intelligence and the diversity of scientific sources
The same research also examines the effect of artificial intelligence on how researchers access scientific sources, showing that this technology is not just a text-production tool but is gradually rearranging the mechanisms by which scientific knowledge is distributed and made visible. By comparing data on article downloads through Google's search engine and Microsoft's Bing, which has been equipped with AI-based conversational capabilities since early 2023, the study finds that Bing users access a more diverse and more recent range of scientific articles and journals.
This difference matters because search engines have in effect become invisible gatekeepers of knowledge, and their recommendation patterns can directly shape what is read, cited, and ultimately reproduced. The greater diversity of sources in AI-based search shows that this technology has the potential to break the traditional concentration on a limited set of canonical, highly cited articles, to weaken the dominance of leading journals to some extent, and to increase the chance that newer or lesser-known research will be seen.
This difference most likely stems from Bing's use of the retrieval-augmented generation method, in which classical search results are combined with AI text-generation processes, so that the final answer is formed not merely on the basis of popularity or citation counts but through the active retrieval of relevant information. Contrary to initial concerns that AI-based search would be confined to reproducing old and established sources, the evidence from this research shows that such systems can facilitate access to a more diverse and up-to-date scientific literature.
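For readers unfamiliar with the term, the following is a minimal, schematic Python sketch of retrieval-augmented generation under simplifying assumptions; the toy corpus, the naive term-overlap retriever, and the answer-assembly step are hypothetical placeholders and do not represent Bing's actual pipeline.

```python
# Minimal schematic of retrieval-augmented generation (RAG): the answer is
# built from actively retrieved documents rather than from popularity alone.
# The corpus, scoring, and answer step are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    text: str

def retrieve(query: str, corpus: list[Document], k: int = 3) -> list[Document]:
    """Rank documents by naive term overlap with the query (stand-in for a real retriever)."""
    q_terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda d: len(q_terms & set(d.text.lower().split())), reverse=True)
    return ranked[:k]

def answer_with_rag(query: str, corpus: list[Document]) -> str:
    context = retrieve(query, corpus)
    # A real system would pass `context` to a language model; here we only show
    # that the answer is grounded in the retrieved sources.
    sources = "; ".join(d.title for d in context)
    return f"Answer to '{query}' drafted from retrieved sources: {sources}"

corpus = [
    Document("Preprint A", "large language models and scientific writing quality"),
    Document("Preprint B", "retrieval augmented generation for search engines"),
    Document("Preprint C", "peer review workload and submission growth"),
]
print(answer_with_rag("how does retrieval augmented generation change search", corpus))
```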
Still, this development cuts both ways. On the one hand, it can make the circulation of knowledge more dynamic and reduce the monopoly of a few sources; on the other hand, it gives algorithms more power than ever to select and highlight sources, which doubles the need for transparency, oversight, and a critical understanding of AI-based search mechanisms in science.
Strategic Implications for Peer Review
The most important consequence of these developments for science is the weakening of language as a quick, low-cost indicator of scientific quality. In the established tradition of academic peer review, writing quality, linguistic coherence, and lexical complexity were often treated as implicit indicators of conceptual rigor, theoretical depth, and research maturity. That reliance rested largely on the fact that producing precise, complex scientific language required genuine mastery of the subject, considerable time, and costly processes of thought.
With the spread of generative artificial intelligence, this historical link between language and content has been seriously disrupted. It is now possible to produce texts with an academic appearance, coherent structure, and specialized vocabulary without any deep scientific understanding or real contribution to the advancement of knowledge. In such conditions, relying on writing quality for the initial screening of articles is not only unreliable but potentially misleading, since it can lead to the acceptance of texts whose fundamental weaknesses remain hidden beneath the language.
As a result, peer review has to move away from superficial, language-oriented assessment toward deeper methodological scrutiny. The logic of the research design, the validity of the data, the transparency of the analytical methods, the reproducibility of the results, and the article's real contribution to solving a specific scientific problem become the central components of evaluation. Although this shift in standards is scientifically necessary, it also significantly increases the workload of reviewers and journal editors.
This necessity arises in a context where the scientific publishing system already faces a growing flood of submissions, a shortage of expert reviewers, and chronic time pressure. The erosion of language as a credible quality indicator is therefore not only an epistemic challenge but an institutional crisis for the peer-review mechanism, one that cannot be managed without a structural rethinking of the evaluation process.
Fighting AI through AI
In this situation, the use of AI-based review tools is proposed as a possible, and to some extent inevitable, solution. The logic of this approach is that the same technology that has fueled the quantitative growth, and sometimes the qualitative decline, of scientific texts can be turned around to filter and screen this accumulation. A prominent example is the set of article-review tools recently introduced by Stanford University, whose goal is not to replace the human reviewer but to strengthen the diagnostic capacity of the review process.
These tools can identify stereotyped language patterns, methodological inconsistencies, weaknesses in the formulation of the research problem, or signs of incoherence between data and results in the early stages of evaluation. In this way, the cognitive load on human reviewers is reduced, and their attention can shift from the initial filtering of texts to a deeper assessment of scientific innovation, the validity of conclusions, and the added value of the research. At the same time, applying such tools itself demands institutional caution, so that the review process is not further reduced to an opaque algorithmic mechanism.
In this sense, the metaphor of "fighting fire with fire" is not merely a technological allegory; it expresses an institutional necessity in an age of accumulating scientific texts. Preserving the standards of science does not lie in removing or wholly rejecting artificial intelligence, but in redefining quality criteria, redesigning a multi-layered review process, and establishing a kind of critical, controlled coexistence with this technology. The future of scientific publishing depends on artificial intelligence not being the final arbiter of truth, but rather a supporting tool in the service of scientific rationality and human judgment.