
Redazione RHC: 5 November 2025 10:02
arXiv, one of the most important repositories of scientific preprints, has disclosed some troubling developments amid growing concern in the scientific community over the uncontrolled use of generative artificial intelligence models.
The platform, operated by Cornell University and widely used by scientists worldwide, has stopped accepting two specific types of computer science submissions: review articles and position papers. The change is a response to the sharp increase in articles generated automatically with language models that make no real scientific contribution.
For decades, arXiv has served as a platform for publishing scientific papers before they undergo full peer review in academic journals. This is particularly important for rapidly evolving fields such as artificial intelligence and machine learning, where publication delays can diminish the novelty of results.
In recent years, however, the computer science category has been inundated with submissions that are neither original research nor analytical reviews addressing open scientific questions. Many of them, as the platform's official statement points out, resemble annotated bibliographies that merely rework existing material.
While no new rules have been formally introduced, the arXiv administration has stated that it will now strictly enforce its existing moderation criteria. Authors of review articles and position papers must demonstrate successful external peer review; otherwise, publication will not be approved. These measures apply only to articles that contain no original results and do not extend to full-length research papers.
According to arXiv representatives, the platform currently receives hundreds of such submissions each month. The advent of language models has only accelerated the trend by making mass text generation trivial. This has increased the pressure on moderators, who are forced to spend resources filtering out recycled and duplicate material at the expense of evaluating genuinely significant research. To ease this burden, the decision was made to stop accepting review articles and position papers in the computer science category that have not passed external peer review.
Should a similar situation arise in other disciplines because of the rise in AI-generated texts, arXiv could extend the same restrictions to those sections as well. The service's administration believes such measures are necessary to prioritize serious research of interest to the scientific community.
The use of generative models has already caused several problems in science. Beyond the flood of repetitive texts, the growing reliance on artificial intelligence is also affecting the peer review process: there are known cases of scientific reviewers using ChatGPT to prepare their assessments.
Furthermore, last year a paper containing an AI-generated image and published in a prestigious journal was retracted for failing to meet academic standards. This underscores the need to revise publication criteria and strengthen quality control for scientific articles in an era of widespread language-model use.