ArXiv will ban researchers who upload papers full of AI slop

May 15, 2026

What happened

ArXiv, a widely used platform for sharing preprint academic papers, will ban researchers who submit poorly checked AI-generated content. If a paper shows clear signs that the authors did not verify outputs from large language models, such as fabricated references or leftover system prompts, they face a one-year ban. Thomas Dietterich, chair of ArXiv's computer science section, confirmed the stricter enforcement, aimed at curbing what he calls "AI slop" in submissions. New submissions will also require stronger author attestations that AI contributions have been verified.

Why it matters

ArXiv's move pressures researchers to vet AI-generated results more thoroughly before publishing. Papers riddled with AI errors or hallucinations erode trust in preprint archives and in the broader academic process. For academics using AI to speed up writing or idea generation, the change shifts incentives: it punishes sloppy AI use and demands a higher standard of accuracy and human oversight. The policy could slow the flood of questionable AI-generated papers, but it also raises the bar for fast, low-effort AI-assisted research dissemination.

What to watch next

Watch how ArXiv enforces these bans and whether similar archives or journals follow suit. Researchers may adopt more rigorous AI verification workflows or tools to avoid penalties. The policy could spark debate about the role of AI in research writing and affect how quickly different fields adapt to AI assistance. Also track whether AI developers respond with features that help users catch hallucinations or strip leftover meta-commentary, keeping outputs compatible with platforms like ArXiv.

AI Quick Briefs Editorial Desk
