ArXiv to ban researchers uploading AI-generated 'slop' papers for one year

The Verge · 1 hour ago
Open research books on a library desk. Photo: Mikhail Nilov / Pexels

ArXiv, the open preprint server, has announced a new sanction aimed at curbing 'AI slop' in academic papers. Authors who upload papers containing AI-generated content they have clearly not checked will be barred from submitting to ArXiv for one year. The policy was announced by computer science section chair Thomas Dietterich in a post on X.

The core criterion in Dietterich's statement is 'incontrovertible evidence' that the author did not check the LLM's output. Such evidence includes hallucinated references, citations attributed to studies that do not exist, and 'meta-comment' lines left by an AI model that the author failed to remove. A fictional book, paper, or author name in the citations can by itself trigger the sanction.

ArXiv has published about 2.3 million preprints since 1991 across the physical sciences, mathematics, computer science, quantitative biology, quantitative finance and statistics. The platform developed as an alternative to pre-publication review processes in academia, but over the past three years content-quality concerns have grown alongside the spread of LLM tools. ArXiv's moderation team, operated out of Cornell University, comprises about 200 volunteers.

Dietterich said that in recent weeks the post-submission filter had been catching papers carrying 'an open marker of low quality.' In Friday's statement he said: 'Even an error in citations can damage scientific research directly; the absence of author checking on LLM output creates a systemic risk.' Under the new rule, a banned author's future submissions must first be accepted 'at a reputable peer-reviewed venue.'

The term 'AI slop' has become informal shorthand on the web over the past two years for low-quality, unreviewed AI-generated content. In academia the term has acquired a stricter meaning: a paper whose accuracy has not been verified because LLM-generated content entered the scholarly record without author review or editing.

The decision has drawn mixed reactions from the academic sector. Hugging Face researcher Yacine Jernite said on X: 'If ArXiv had not taken this step, the entire peer review cycle would have been eroded.' In contrast, Carnegie Mellon University postdoctoral researcher Ryan Cotterell commented: 'There is still a substantial risk of normal scientific writing being falsely flagged as AI-generated; ArXiv's detection methodology should be shared with the public.'

ArXiv has not detailed its detection methodology in the statements released so far. Dietterich said the work was based on manual review by the moderation team rather than automated AI detection tools. Scientists have asked for clear rules and a transparent appeal process.

The US AI regulation debate also touches on this issue. The National Academies of Sciences, Engineering, and Medicine (NASEM), in a report published in January 2026, recommended 'adapting the peer-review process to the AI era in a way that preserves the quality of scholarly communication.' ArXiv's decision can be read as the first significant academic-infrastructure response to follow the NASEM report.

Alternative preprint servers have not yet stated how they will respond to ArXiv's exclusion policy. With moderation policies differing across SSRN (Social Sciences Research Network), BioRxiv and ChemRxiv, authors banned by ArXiv could turn to other platforms. The Committee on Publication Ethics (COPE) has announced it will produce a detailed policy document on this issue this year.

Dietterich said the decision applies to the computer science section, but other ArXiv sections may consider adopting the same policy. The sanction takes effect on 1 June 2026. ArXiv's computer science section published roughly 95,000 preprints in 2025; the impact of the new rule is expected to become measurable over the coming quarter.

This article is an AI-curated summary based on The Verge. The illustration is a stock photo by Mikhail Nilov from Pexels.