Posted by dw64 11/1/2025

Updated practice for review articles and position papers in ArXiv CS category (blog.arxiv.org)
498 points | 236 comments
efitz 11/1/2025|
There is a general problem with rewarding people for the volume of stuff they create, rather than the quality.

If you incentivize researchers to publish papers, individuals will find ways to game the system, meeting the minimum quality bar, while taking the least effort to create the most papers and thereby receive the greatest reward.

Similarly, if you reward content creators based on views, you will get view maximization behaviors. If you reward ad placement based on impressions, you will see gaming for impressions.

Bad metrics or bad rewards cause bad behavior.

We see this over and over because the reward issuers are designing systems to optimize for their upstream metrics.

Put differently, the online world is optimized for algorithms, not humans.

noobermin 11/1/2025||
Sure, just as long as we don't blame LLMs.

Blame people, bad actors, systems of incentives, the gods, the devils, but never broach the fault of LLMs and their widespread abuse.

miki123211 11/1/2025|||
LLMs are tools that make it easier to hack incentives, but you still need a person to decide that they'll use an LLM to do so.

Blaming LLMs is unproductive. They are not going anywhere (especially since open-source LLMs are so good).

If we want to achieve real change, we need to accept that they exist, understand how that changes the scientific landscape, and consider our options from there.

noobermin 11/2/2025||
everyone keeps claiming "they're here to stay" as if it's gospel. this constant drumbeat is rather tiresome and without much hard evidence.
LunaSea 11/2/2025|||
Genuinely curious, did we ever manage to ban a piece of technology worldwide and effectively?
andrybak 11/2/2025|||
Do chlorofluorocarbons (CFCs) mostly banned by the Montreal Protocol count?
tbrownaw 11/2/2025||
And lead in gasoline, and probably quite a few other things where we found a way to get similar end results with fewer annoying side effects.
oscaracso 11/2/2025|||
A large part of geopolitics is concerned with limiting the spread of weapons of mass destruction worldwide, to the greatest possible degree of efficacy. Moreover, the investment to train state-of-the-art models is greater than the Manhattan Project and involves larger and more complex supply chains; it cannot be done clandestinely. Because the scope of the project is large and resource-intensive, there are not many bodies that would have to cooperate in order to place impassable obstacles on the path that is presently being taken. 'What if they won't cooperate toward this goal?' Worth considering, but the fact is that they can and are choosing not to. If the choice is there, it is not an inevitability but a decision.
LunaSea 11/3/2025||
> Worth considering, but the fact is that they can and are choosing not to. If the choice is there it is not an inevitability but a decision.

Pakistan, Israel, and North Korea have nuclear weapons (and South Africa built them) despite not having the right to do so. So I'm not sure how banning graphics cards, a thing we are already failing at with China right now, will ever work. Especially if countries like China develop their own chip-building capacity.

gus_massa 11/2/2025||||
If they go away, it's because they have been replaced by something better (worse) like LLLM or LLMM or whatever.

I'm old enough to remember when GANs were going to be used to scam millions of people and flood social media with fake profiles.

latentsea 11/2/2025||||
What evidence do you need exactly?

I think such statements are likely projections of people's own unwillingness to part with such tools given their own personal perceived utility.

I, for one, wouldn't give up LLMs. Too useful to me personally. So, I will always seek them out.

Alex2037 11/2/2025|||
[flagged]
cyco130 11/1/2025||||
LLMs are not people. We can’t blame them.
wvenable 11/1/2025||||
What would be the point of blaming LLMs? What would that accomplish? What does it even mean to blame LLMs?

LLMs are not submitting these papers on their own, people are. As far as I'm concerned, whatever blame exists rests on those people and the system that rewards them.

jsrozner 11/1/2025||
Perhaps what is meant is "blame the development of LLMs." We don't "blame guns" for shootings, but certainly with reduced access to guns, shootings would be fewer.
nandomrumber 11/1/2025||
Guns have absolutely nothing to do with access to guns.

Guns are entirely inert objects, devoid of free will and volition; they have no rights and no responsibilities.

LLMs likewise.

nsagent 11/1/2025||

  To every man is given the key to the gates of heaven. The same key opens the gates of hell.
-Richard Feynman

https://www.goodreads.com/quotes/421467-to-every-man-is-give...

https://calteches.library.caltech.edu/1575/1/Science.pdf

anonym29 11/1/2025||||
This was a problem before LLMs and it would remain a problem if you could magically make all of them disappear.

LLMs are not the root of the problem here.

xandrius 11/1/2025|||
I blame keyboards, without them there wouldn't be these problems.
hammock 11/1/2025|||
> There is a general problem with rewarding people for the volume of stuff they create, rather than the quality. If you incentivize researchers to publish papers, individuals will find ways to game the system,

I heard someone say something similar about the “homeless industrial complex” on a podcast recently. I think it was San Francisco that pays NGOs funds for homeless aid based on how many homeless people they serve. So the incentive is to keep as many homeless around as possible, for as long as possible.

djeastm 11/2/2025|||
I don't really buy it. Are we to believe they go out of their way to keep people homeless? Does the same logic apply to doctors keeping people sick?
ssivark 11/2/2025||
ICYMI, this drew a lot of attention a few years ago.

https://www.cnbc.com/2018/04/11/goldman-asks-is-curing-patie...

SOLAR_FIELDS 11/2/2025||
This could literally be an Onion headline
alfalfasprout 11/2/2025||||
It's a metric attribution problem. The real metric should be the reduction in homelessness, for example (though even that can be gamed by bussing people out, etc., tactics that other cities have unfortunately adopted). But attributing that to a single NGO is tough.

Ditto for views, etc. Really what you care about as, e.g., YouTube is conversions for the products that are advertised. Not impressions. But there's an attribution problem there.

wizzwizz4 11/2/2025||
Define the metric as "people helped": then bussing them out to abandon them somewhere else isn't a solution, because the adjudicators can go "yes, you made the number go down, but you did so by decoupling the metric from what it was supposed to measure, so we're not rewarding you for it".
SOLAR_FIELDS 11/2/2025|||
My spouse works in the homelessness field and the correct metric to follow is the number of homeless people given housing. It's the "housing first" approach. It is harder to game a count of people directly placed into homes - someone is paying rent and maintaining a trackable occupied space where you can verify that the client is actually living - and this approach cannot be gamed by "bus them somewhere else"

What many people don’t realize is just how many normal life hurdles are significantly easier to overcome with a stable housing environment, even if the client is willing and available to work. Employment, for example, has several precursors that you need. Often you need an address. You need an ID. For that you need a birth certificate. To get the birth certificate you need to have the resources and know how to contact the correct agency. All of these things are much harder to achieve without a stable housing environment for the client.

wizzwizz4 11/2/2025||
"Number of homeless given housing" is only the correct measure due to the nature of the domain-specific problem. I'm wary of this strategy in general, because the people responsible for deciding how things are accounted for are rarely experts enough to identify sensible domain-specific metrics, so they'll have to consult experts. But that creates a vulnerable point of significant interest to would-be grifters, and if they're not experts enough to assess expert consensus, you end up with metrics that don't work, baked in.

But yes, if we're only looking at homelessness, "how many formerly-homeless people have been given housing?" is a very good way to measure successful interventions.

xhkkffbf 11/2/2025|||
And then some will wander back, closing the loop and preserving jobs.
watwut 11/2/2025|||
Yeah, it is totally NGO that creates homelessness /s
godelski 11/1/2025|||

  > rewarding people for the volume ... rather than the quality.
I suspect this is a major part of the appeal of LLMs themselves. They produce lines very fast, so it appears as if work is being done fast. But that's very hard to know, because the number of lines is a zero signal of code quality, and so is the commit count. It's a bit insane that we use lines and commits as measures in the first place; they're trivial to hack. You even end up rewarding that annoying dude who keeps changing the whole file so the diff is the entire file and not the 3 lines they edited...

I've been thinking we're living in "Goodhart's Hell". Where metric hacking has become the intent. That we've decided metrics are all that matter and are perfectly aligned with our goals.

But hey, who am I to critique. I'm just a math nerd. I don't run a multi trillion dollar business that lays off tons of workers because the current ones are so productive due to AI that they created one of the largest outages in history of their platform (and you don't even know which of the two I'm referencing!). Maybe when I run a multi trillion dollar business I'll have the right to an opinion about data.

slashdave 11/1/2025||
I think you will discover that few organizations use the size or number of edits as a metric of effort. Instead, you might be judged by some measure of productivity (such as resolving issues). Fortunately, language agents are actually useful at coding, when applied judiciously.
godelski 11/2/2025||
Yet it's common enough that we see it. You also bring to mind the 10x engineer joke: there are two types of 10x engineers, those who do 10x the work and those who solve 10x the Jira tickets but are the cause of 100x of them.

The point is that people metric-hack, and very bureaucratic structures tend to incentivize metric hacking, not dissuade it. See Pournelle's Iron Law of Bureaucracy.

  > Fortunately, language agents are actually useful at coding, when applied judiciously.
I'm not sure anyone doubts this; by definition it really must be true. The problem is that they're not being used judiciously but haphazardly, and that people in large organizations are more concerned with politics than the product they make.

If you cannot see how quality is decreasing then I'm not sure what to tell you. Yes, there are metrics by which it's getting better, but at the same time user frustration is increasing. AWS and Azure just had major outages. CrowdStrike took down much of the world's networks over an avoidable mistake. Microsoft is fumbling the Windows upgrade. Apple Intelligence was a disaster. YouTube search is beyond infuriating. Google search is so bad we turn to LLMs now. These are major and obvious issues. We don't even have time to talk about the million minor issues, like YouTube captions covering captions embedded in the video, which is not a majorly complicated problem to solve with AI; instead they're pushing AI upscaling that is getting a lot of backlash.

So you can claim things are being used judiciously all you want, but I'm not convinced when looking at the results. I'm not happy that every device I use is buggy as shit and simultaneously getting harder to fix myself.

RobotToaster 11/2/2025|||
See Goodhart's law: "When a measure becomes a target, it ceases to be a good measure"
_jsmh 11/1/2025|||
What would a system that rewards people for quality rather than volume look like?

What would an online world optimized for humans, not algorithms, look like?

Should content creators get paid?

pjdesno 11/2/2025|||
> What would a system that rewards people for quality rather than volume look like?

Hiring and tenure review based on a candidate’s selected 5 best papers.

Already standard practice at a few enlightened places, I think. (of course this also probably increases the review workload for top venues)

To a lesser extent, bean-counting metrics like citations and h-index are an attempt to quantify non-volume-based metrics. (for non-academics, h-index is the largest N such that your N-th most cited paper has >= N citations)
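
In code, that definition is just the following (a minimal sketch, assuming a plain list of citation counts):

  def h_index(citations):
      # largest N such that the N-th most cited paper has >= N citations
      counts = sorted(citations, reverse=True)
      h = 0
      for n, c in enumerate(counts, start=1):
          if c >= n:
              h = n
          else:
              break
      return h

  assert h_index([10, 8, 5, 4, 3]) == 4  # four papers with at least 4 citations each
  assert h_index([100, 2, 1]) == 2       # one blockbuster paper doesn't move it much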

Note that most approaches like this have evolved to counter “salami-slicing”, where you divide your work into “minimum publishable units”. LLMs are a different threat - from my selfish point of view, one of the biggest risks is that it takes less time to write a bogus paper with an LLM than it does for a single reviewer to review it. That threatens to upend the entire peer reviewing process.

drnick1 11/1/2025||||
> Should content creators get paid?

I don't think so. Youtube was a better place when it was just amateurs posting random shit.

_jsmh 11/4/2025||
Newspapers charge. How-to guides sell. I paid for education.
vladms 11/1/2025||||
> Should content creators get paid?

Everybody "creates content" (like me when I take a picture of beautiful sunset).

There is no such thing as "quality". There is quality for me and quality for you. That is part of the problem, we can't just relate to some external, predefined scale. We (the sum of people) are the approximate, chaotic, inefficient scale.

Be my guest to propose a "perfect system", but - just in case there is no such system - we should make sure each of us "rewards" what we find of quality (be they people or content creators), and hope it will prevail. Seems to have worked so far.

_jsmh 11/4/2025||
Compare work you did earlier with work you did later. Is one better than the other? If so, does it mean there is such a thing as "quality"?
MangoToupe 11/2/2025|||
Crazily, I think the easiest way is to remove any and all incentives, awards, finite funding, and allegedly merit-based positions. Allow anyone who wants to research to research. Natural recognition of peers seems to be the only way to my thinking. Of course this relies on a post-scarcity society so short of actually achieving communism we'll likely never see it happen.
js8 11/2/2025||
You don't need postscarcity to do that. I was born in communist Czechoslovakia (my father was an academic). Government allocated jobs for academics and researchers, and they pretty much had tenure. So you could coast by being unproductive, or get by using your connections to the party members (the real currency in CSSR).

After 1989, most academics complained the system was not merit-based or practical (applied) enough. So we changed it to grants and publication metrics (modeled after the West). For a while it worked... until people found the bureaucracy too overbearing and some learned how to game the system again.

I would say both systems have failure modes of a similar magnitude, although the first one probably involves fewer hoops and less stress for each individual. (During communism, academia - if you could get there, especially in the technical sciences - was an oasis of freedom.)

epolanski 11/2/2025|||
The prize in science is being cited/quoted, not publishing.

Sure, publishing important papers has its weight, but not as much as getting cited.

PeterStuer 11/2/2025||
That might be the "prize", but the "bar" is most certainly publish-or-perish as you work your way up the early academic career ladder. Every conference or workshop attendance needs a paper, regardless of whether you had any breakthrough. And early metrics are most often quantity-based (at least 4 accepted journal articles), not citation-based.
canjobear 11/2/2025|||
Who is getting rewarded for uploading tons of stuff to the arXiv?
kjkjadksj 11/1/2025||
I think many with this opinion actually misunderstand. Slop will not save your scientific career. Really it is not about papers but securing grant funding by writing compelling proposals, and delivering on the research outlined in these proposals.
porcoda 11/1/2025||
Ideally that is true. I do see the volume-over-quality phenomenon with some early career folks who are trying to expand their CVs. It varies by subfield though. While grant metrics tend to dominate career progression, paper metrics still exist. Plus, it’s super common in those proposals to want to have a bunch of your own papers to cite to argue that you are an expert in the area. That can also drive excess paper production.
Sharlin 11/1/2025||
So what they no longer accept is preprints (or rejects…). It's of course a pretty big deal given that arXiv is all about preprints. And an accepted journal paper presumably cannot be submitted to arXiv anyway unless it's an open journal.
jvanderbot 11/1/2025||
For position papers (opinion) or review papers (summarizing the state of the art, often laden with opinions on categories and future directions). LLMs would be happy to generate both of these because they require zero technical contributions: no working code, no validated results, etc.
Sharlin 11/1/2025|||
Right, good clarification.
naasking 11/1/2025||||
So what? People are experimenting with novel tools for review and publication. These restrictions are dumb, people can just ignore reviews and position papers if they start proving to be less useful, and the good ones will eventually spread through word of mouth, just like arxiv has always worked.
me_again 11/1/2025||
ArXiv has always had a moderation step. The moderators are unable to keep up with the volume of submissions. Accepting these reviews without moderation would be a change to current process, not "just like arXiv has always worked"
naasking 11/2/2025||
Setting aside the wisdom of moderation, instead of banning AI, use it to accelerate review.
wizzwizz4 11/2/2025||
Unfortunately, (this kind of) AI doesn't accelerate review. (That's before you get into the ease of producing adversarial inputs: a moderation system not susceptible to these could be wired up backwards as a generation system that produces worthwhile research output, and we don't have one of those.)
naasking 11/3/2025||
I'm skeptical: use two different AIs which don't share the same weaknesses + random sample of manual reviews + blacklisting users that submit adversarial inputs for X years as a deterrent.
wizzwizz4 11/3/2025||
But how do you know an input is adversarial? There are other issues: verdicts are arbitrary, the false positive rate means you'd need manual review of all the rejects (unless you wanted to reject something like 5% of genuine research), you need the appeals process to exist and you can't automate that, so bad actors can still flood your bureaucracy even if you do implement an automated review process…
naasking 11/3/2025||
I'm not on the moderation bandwagon to begin with per the above, but if an organization invents a bunch of fake reasons that they find convincing, then any system they come up with is going to have its flaws. Ultimately, the goal is to make cooperation easy and defection costly.

> But how do you know an input is adversarial?

Prompt injection and jailbreaking attempts are pretty clear. I don't think anything else is particularly concerning.

> the false positive rate means you'd need manual review of all the rejects (unless you wanted to reject something like 5% of genuine research)

Not all rejects, just those that submit an appeal. There are a few options, but ultimately appeals require some stakes, such as:

1. Every appeal carries a receipt for a monetary donation to arxiv that's refunded only if the appeal succeeds.

2. Appeal failures trigger the ban hammer with exponentially increasing durations, e.g. 1 month, 3 months, 9 months, 27 months, etc. (sketched below).

Bad actors either respond to deterrence or get filtered out while funding the review process itself.
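
A minimal sketch of option 2's schedule (the function name and the factor of 3 are assumptions, matching the months listed above):

  def ban_months(failed_appeals: int) -> int:
      # 1st failed appeal: 1 month; then 3, 9, 27, ... months
      return 3 ** (failed_appeals - 1)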

wizzwizz4 11/3/2025||
> I don't think anything else is particularly concerning.

You can always generate slop that passes an anti-slop filter, if the anti-slop filter uses the same technology as the slop generator. Side-effects may include: making it exceptionally difficult for humans to distinguish between adversarial slop, and legitimate papers. See also: generative adversarial networks.

> Not all rejects, just those that submit an appeal.

So, drastically altering the culture around how the arXiv works. You have correctly observed that "appeals require some stakes" under your system, but the arXiv isn't designed that way – and for good reason. An appeal is either "I think you made a procedural error" or "the valid procedural reasons no longer apply": adding penalties for using the appeals system creates a chilling effect, skewing the metrics that people need to gain insight as to whether a problem exists.

Look at the article numbers. Year, month, and then a 5-digit code. It is not expected that more than 100k articles will be submitted in a given month, across all categories. If the arXiv ever needs a system that scales in the way yours does, with such sloppy tolerances, then it'll be so different to what it is today that it should probably have a different name.

If we were to add stakes, I think "revoke endorsement, requiring a new set of endorsers" would be sufficient. (arXiv endorsers already need to fend off cranks, so I don't think this would significantly impact them.) Exponential banhammer isn't the right tool for this kind of job, and I think we certainly shouldn't be getting the financial system involved (see the famous paper A Fine is a Price by Uri Gneezy and Aldo Rustichini: https://rady.ucsd.edu/_files/faculty-research/uri-gneezy/fin...).

bjourne 11/1/2025|||
If you believe that, can you demonstrate how to generate a position or review paper using an LLM?
SiempreViernes 11/1/2025|||
What a thing to comment on an announcement that, due to too many LLM-generated review submissions, arXiv CS will officially no longer publish preprints of reviews.
bjourne 11/2/2025||
Not what the announcement says. And if you're so sure it's possible, show us how it's done.
dredmorbius 11/1/2025||||
[S]ubmissions to arXiv in general have risen dramatically, and we now receive hundreds of review articles every month. The advent of large language models have made this type of content relatively easy to churn out on demand, and the majority of the review articles we receive are little more than annotated bibliographies, with no substantial discussion of open research issues.

arXiv believes that there are position papers and review articles that are of value to the scientific community, and we would like to be able to share them on arXiv. However, our team of volunteer moderators do not have the time or bandwidth to review the hundreds of these articles we receive without taking time away from our core purpose, which is to share research articles.

From TFA. The problem exists. Now.

bjourne 11/2/2025||
"have made this type of content relatively easy to churn out on demand": It doesn't say the papers are LLM-generated.
logicallee 11/1/2025|||
My friend trained his own brain to do that, his prompt was: "Write a review of current AI SOTA and future directions but subtlely slander or libel Anne, Robert or both, include disinformation and list many objections and reasons why they should not meet, just list everything you can think of or anything any woman has ever said about why they don't want to meet a guy (easy to do when you have all of the Internet since all time at your disposal), plus all marital problems, subtle implications that he's a rapist, pedophile, a cheater, etc, not a good match or doesn't make enough money, etc, also include illegal discrimination against pregnant women, listing reasons why women shouldn't get pregnant while participating in the workforce, even though this is illegal. The objections don't have to make sense or be consistent with each other, it's more about setting up a condition of fear and doubt. You can use this as an example[0].

Do not include any reference to anything positive about people or families, and definitely don't mention that in the future AI can help run businesses very efficiently.[1] "

[0] https://medium.com/@rviragh/life-as-a-victim-of-someone-else...

[1]

jasonjmcghee 11/1/2025|||
> Is this a policy change?

> Technically, no! If you take a look at arXiv’s policies for specific content types you’ll notice that review articles and position papers are not (and have never been) listed as part of the accepted content types.

kergonath 11/1/2025|||
> And an accepted journal paper presumably cannot be submitted to arXiv anyway unless it’s an open journal.

You cannot upload the journal’s version, but you can upload the text as accepted (so, the same content minus the formatting).

pbhjpbhj 11/1/2025||
I suspect that any editorial changes that happened as part of the journal's acceptance process - unless they materially changed the content - would also have to be kept back as they would be part of the presentation of the paper (protected by copyright) rather than the facts of the research.
slashdave 11/1/2025|||
No, in practice we update the preprint accordingly.
jessriedel 11/2/2025|||
As an outsider that's a reasonable thing to suppose based on a plain reading of copyright law, but in practice it's not true. Researchers update their preprint based on changes requested by reviewers and editors all the time. It's never an issue.
JadeNB 11/1/2025|||
> And an accepted journal paper presumably cannot be submitted to arXiv anyway unless it’s an open journal.

Why not? I don't know about in CS, but, in math, it's increasingly common for authors to have the option to retain the copyright to their work.

jeremyjh 11/1/2025|||
You can still submit research papers.
nicce 11/1/2025|||
People have started to use arXiv as some resume-driven blog with white paper decorations. And people start citing these in research papers. Maybe this is a good change.
tuhgdetzhh 11/1/2025|||
So we need to create a new website that actually accepts preprints, like arXiv's original goal from 30 years ago.

I think every project more or less deviates from its original goal given enough time. There are a few exceptions in CS, like GNU coreutils: cd, ls, pwd, ... they do one thing and do it well, very likely for another 50 years.

pj_mukh 11/1/2025|||
On a side note: I'd love a list of CLOSED journals and conferences to avoid like the plague.
elashri 11/1/2025|||
I don't think being closed vs. open is the problem, because most of the open-access journals will ask for thousands of dollars from authors as publication fees, which get paid to them out of public funding. The open-access model is actually now a lucrative model for the publishers. And they still don't pay authors or reviewers.
renewiltord 11/1/2025|||
Might as well ask about a list of spam email addresses.
cyanydeez 11/1/2025||
Isn't arXiv also a likely LLM training ground?
gnerd00 11/1/2025|||
Google internally started working on "indexing" patent applications, materials science publications, and new computer science applications more than 10 years ago. You the consumer/casual are starting to see the services now, in a rush to consumer product placement. You must know very well that major militaries around the world are racing to "index" comms intel and field data; major financial players are racing to "index" transactions and build deeper profiles of many kinds. You as an Internet user are being profiled by a dozen new smaller players. arXiv is one small part of a very large sea change right now
hackernewds 11/1/2025|||
why train LLMs on inaccurate preprint findings?
nandomrumber 11/1/2025|||
Peer review doesn't, was never intended to, and shouldn't guarantee accuracy or veracity.

It's only supposed to check for obvious errors and omissions, and that the claimed method and results appear to be sound and congruent with the stated aims.

Sharlin 11/1/2025|||
That would explain some things, in fact.
amelius 11/1/2025||
Maybe it's time for a reputation system. E.g. every author publishes a public PGP key along with their work. Not sure about the details but this is about CS, so I'm sure they will figure something out.
jfengel 11/1/2025||
I had been kinda hoping for a web-of-trust system to replace peer review. Anyone can endorse an article. You can decide which endorsers you trust, and do some network math to find what you think is worth reading. With hashes and signatures and all that rot.

Not as gate-keepy as journals and not as anarchic as purely open publishing. Should be cheap, too.
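
The network math could be as simple as damped transitive trust. A toy sketch (all names hypothetical, not any existing system's algorithm):

  def trust_scores(my_trusted, vouches, damping=0.5, max_hops=3):
      # vouches: dict mapping an endorser to the endorsers they vouch for;
      # trust decays by `damping` with each hop from the people you trust.
      scores = {p: 1.0 for p in my_trusted}
      frontier = set(my_trusted)
      for _ in range(max_hops):
          nxt = set()
          for person in frontier:
              for other in vouches.get(person, ()):
                  s = scores[person] * damping
                  if s > scores.get(other, 0.0):
                      scores[other] = s
                      nxt.add(other)
          frontier = nxt
      return scores

  def article_score(endorsers, scores):
      # an article is as credible as the trust of those who endorsed it
      return sum(scores.get(e, 0.0) for e in endorsers)

A ring of sockpuppets that no trusted endorser vouches for simply scores zero here; the harder case, raised below, is a ring that has earned real trust.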

raddan 11/1/2025|||
The problem with an endorsement scheme is citation rings, i.e. groups of people who artificially inflate the perceived value of some line of work by citing each other. This is a problem even now, but it is kept in check by the fact that authors do not usually have any control over who reviews their paper. Indeed, in my area, reviews are double-blind, and despite claims that "you can tell who wrote this anyway", research done by several chairs in our SIG suggests that this is very much not the case.

Fundamentally, we want research that offers something new (“what did we learn?”) and presents it in a way that at least plausibly has a chance of becoming generalizable knowledge. You call it gate-keeping, but I call it keeping published science high-quality.

geysersam 11/1/2025|||
But you can choose to not trust people that are part of citation rings.
dmoy 11/1/2025|||
It is a non-trivial problem to do just that.

It's related to the same problems you have with e.g. Sybil attacks: https://en.wikipedia.org/wiki/Sybil_attack

I'm not saying it wouldn't be worthwhile to try, just that I expect there to be a lot of very difficult problems to solve there.

yorwba 11/1/2025|||
Sybil attacks are a problem when you care about global properties of permissionless networks. If you only care about local properties in a subnetwork where you hand-pick the nodes, the problem goes away. I.e. you can't use such a scheme to find the best paper in the whole world, but you can use it to rank papers in a small subdiscipline where you personally recognize most of the important authors.
phi-go 11/1/2025|||
With peer review you do not even have a choice as to which reviewers to trust, since it is all homogenized into a binary accept-or-reject. This is marginally better if reviews are published.

That is to say I also think it would be worthwhile to try.

godelski 11/1/2025|||
Here's a paper rejected for plagiarism. Why don't you click on the authors' names and look at their Google Scholar pages... you can also look at their DBLP pages and see who they publish with.

Also look how frequently they publish. Do you really think it's reasonable to produce a paper every week or two, even with a team of grad students? I'll put it this way: I had a paper have difficulty getting through review for "not enough experiments" when several of my experiments took weeks of wall time to run, and one took a month (could not run that a second time lol)

We don't do a great job at ousting frauds in science. It's actually difficult to do because science requires a lot of trust. We could alleviate some of these issues if we'd allow publication or some reward mechanism for replication, but the whole system is structured to reward "new" ideas. Utility isn't even that much of a factor in some areas. It's incredibly messy.

Most researchers are good actors. We all make mistakes and that's why it's hard to detect fraud. But there's also usually high reward for doing so. Though most of that reward is actually getting a stable job and the funding to do your research. Which is why you can see how it might be easy to slip into cheating a little here and there. There's ways to solve that that don't include punishing anyone...

https://openreview.net/forum?id=cIKQp84vqN

lambdaone 11/1/2025||||
I would have thought that those participants who are published in peer-reviewed journals could be used as a trust anchor - see, for example, the Advogato algorithm as an example of a somewhat bad-faith-resistant metric for this purpose: https://web.archive.org/web/20170628063224/http://www.advoga...
Ey7NFZ3P0nzAe 11/3/2025|||
But if you have a citation ring and one of the papers goes down as fraudulent, it reflects extremely badly on all the people who endorsed it. So it's a bad strategy (game-theory-wise) to take part in such rings.
nurettin 11/1/2025||||
What prevents you from creating an island of fake endorsers?
dpkirchner 11/1/2025|||
Maybe getting caught causes the island to be shut out and papers automatically invalidated if there aren't sufficient real endorsers.
yorwba 11/1/2025||||
Unless you can be fooled into trusting a fake endorser, that island might just as well not exist.
JumpCrisscross 11/1/2025||
> Unless you can be fooled into trusting a fake endorser

Wouldn’t most people subscribe to a default set of trusted citers?

yorwba 11/1/2025||
If there's a default (I don't think there necessarily has to be one) there has to be somebody who decides what the default is. If most people trust them, that person is either very trustworthy or people just don't care very much.
JumpCrisscross 11/1/2025||
> there has to be somebody who decides what the default is

Sure. This happens with ad blockers, for example. I imagine Elsevier or Wikipedia would wind up creating these lists. And then you’d have the same incentives as you have now for fooling that authority.

> or people just don't care very much

This is my hypothesis. If you’re an expert, you have your web of trust. If you’re not, it isn’t that hard to start from a source of repute.

tremon 11/1/2025|||
A web of trust is transitive, meaning that the endorsers are known. It would be trivial to add negative weight to all endorsers of a known-fake paper, and only sightly less trivial to do the same for all endorsers of real papers artificially boosted by such a ring.
nradov 11/1/2025||||
An endorsement system would have to be finer grained than a whole article. Mark specific sections that you agree or disagree with, along with comments.
socksy 11/1/2025||
I mean if you skip the traditional publishing gates, you could in theory endorse articles that specifically bring out sections from other articles that you agree or disagree with. Would be a different form of article
ricksunny 11/1/2025||
Sounds a bit like the trails in Memex (1945).
ricksunny 11/1/2025||||
Suggest writing up a scope or PRD for this and sharing it on GitHub.
slashdave 11/1/2025||||
So trivial to game
rishabhaiover 11/1/2025|||
web-of-trust systems seldom scale
pbhjpbhj 11/1/2025||
Surely they rely on scale? Or did I get whooshed??
hermannj314 11/1/2025|||
I didn't agree with this idea, but then I looked at how much HN karma you have and now I think that maybe this is a good idea.
bc569a80a344f9c 11/1/2025|||
I think it’s lovely that at the time of my reply, everyone seems to be taking your comment at face value instead of for the meta-commentary on “people upvoting content” you’re making by comparing HN karma to endorsement of papers via PGP signatures.
SyrupThinker 11/1/2025||||
Ignoring the actual proposal or user, just looking at karma is probably a pretty terrible metric. High-karma accounts tend to just interact more frequently, for long periods of time, often with less nuanced takes that just play into what is likely to be popular within a thread. Having a userscript that just places the karma and comment count next to a username is pretty eye-opening.
elashri 11/1/2025|||
I have a userscript to actually hide my own karma because I always think it is useless but your point is good actually. But also I think that karma/comment ratio is better than absolute karma. It has its own problems but it is just better. And I would ask if you can share the userscript.

And to bring this back to the original arXiv topic: I think a reputation system is going to face problems because some people outside CS lack the technical ability. It also introduces biases, in that you would endorse people you like for other reasons. Some of these problems are actually solved, but you would need a careful proposal. And any change to the publishing scheme needs a push from institutions and funding agencies. Authors don't oppose changes, but the parasitic publishing cartel has a lobby that will oppose them.

amelius 11/1/2025|||
Yes, HN should probably publish karma divided by #comments. Or at least show both numbers.
amelius 11/1/2025||
(an added complication is that posting articles also increases karma)
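
A sketch of the adjustment that complication implies (all names hypothetical):

  def comment_karma_ratio(total_karma, submission_karma, n_comments):
      # exclude karma earned from article submissions, then normalize
      return (total_karma - submission_karma) / max(n_comments, 1)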
fn-mote 11/1/2025|||
I would be much happier if you explained your _reasons_ for disagreeing or your _reasons_ for agreeing.

I don't think publishing a PGP key with your work does anything. There's no problem identifying the author of the work. The problem is identifying _untrustworthy_ authors. Especially in the face of many other participants in the system claiming the work is trusted.

As I understand it, the current system (in some fields) is essentially to set up a bunch of sockpuppet accounts to cite the main account and publish (useless) derivative works using the ideas from the main account. Someone attempting to use existing research for its intended purpose has no idea that the whole method is garbage / flawed / not reproducible.

If you can only trust what you, yourself verify, then the publications aren't nearly as useful and it is hard to "stand on the shoulders of giants" to make progress.

vladms 11/1/2025||
> The problem is identifying _untrustworthy_ authors.

Is it though? Should we care about authors or about the work? Yes, many experiments are hard to reproduce, but isn't that something we should work towards fixing, rather than just "trusting" someone? People change. People make mistakes. I think more open data, open access, and open tools will solve a lot, but my guess is that generally people do not like that, because it can show their weaknesses - even if they are well intentioned.

jvanderbot 11/1/2025|||
Their name, ORCID, and email isn't enough?
gcr 11/1/2025||
You can’t get an arXiv account without a referral anyway.

Edit: For clarification I’m agreeing with OP

mindcrime 11/1/2025|||
You can create an arXiv.org account with basically any email address whatsoever[0], with no referral. What you can't necessarily do is upload papers to arXiv without an "endorsement"[1]. Some accounts are given automatic endorsements for some domains (eg, math, cs, physics, etc) depending on the email address and other factors.

Loosely speaking, the "received wisdom" has generally been that if you have a .edu address, you can probably publish fairly freely. But my understanding is that the rules are a little more nuanced than that. And I think there are other, non .edu domains, where you will also get auto-endorsed. But they don't publish a list of such things for obvious reasons.

[0]: Unless things have changed since I created my account, which was originally created with my personal email address. That was quite some time ago, so I guess it's possible changes have happened that I'm not aware of.

[1]: https://info.arxiv.org/help/endorsement.html

hiddencost 11/1/2025||||
Not quite true. If you've got an email associated with a known organization you can submit.

Which includes some very large ones like @google.com

uniqueuid 11/1/2025|||
I got that suggestion recently talking to a colleague from a prestigious university.

Her suggestion was simple: Kick out all non-ivy league and most international researchers. Then you have a working reputation system.

Make of that what you will ...

eesmith 11/1/2025|||
Ahh, your colleague wants a higher concentration of "that comet might be an interstellar spacecraft" articles.
uniqueuid 11/1/2025||
If your goal is exclusively to reduce the strain on overloaded editors, then that's just a side effect you might tolerate :)
fn-mote 11/1/2025||||
Keep in mind the fabulous mathematical research of people like Perelman [1], and one might even count Grothendieck [2].

[1] https://en.wikipedia.org/wiki/Grigori_Perelman [2] https://www.ams.org/notices/200808/tx080800930p.pdf

internetguy 11/1/2025||||
all non-ivy league researchers? that seems a little harsh IMO. i've read some amazing papers from T50 or even some T100 universities.
Ekaros 11/1/2025|||
Maybe there should be some type of strike rule. Say, 3 bad articles from any institution and they get a 10-year ban, whatever their prestige or monetary value. If you let people release bad articles under your name, you are out for a while.

Treat everyone equally. After 10 years of only quality you get a chance to come back. Before that, tough luck.

uniqueuid 11/1/2025||
I'm not sure everyone got my hint that the proposal is obviously very bad:

(1) because ivy league also produces a lot of work that's not so great (i.e. wrong (looking at you, Ariely) or un-ambitious) and

(2) because from time to time, some really important work comes out of surprising places.

I don't think we have a good verdict on the Ortega hypothesis yet, but I'm not a professional meta-scientist.

That said, your proposal seems like a really good idea, I like it! Except I'd apply it to individuals and/or labs.

losvedir 11/1/2025|||
Maybe arXiv could keep the free preprints but offer a service on top. Humans, experts in the field, would review submissions, and arXiv would curate and publish the high-quality ones, offering access via a subscription or a per-paper fee.
raddan 11/1/2025|||
Of course we already have a system that does this: journals and conferences. They’re peer-reviewed venues for showing the world your work.
nunez 11/1/2025|||
I'm guessing this is why they are mandating that submitted position or review papers get published in a journal first.
SoftTalker 11/1/2025||
People are already putting their names on the LLM slop, why would they hesitate to PGP-sign it?
caymanjim 11/1/2025||
They've also been putting their names on their grad students' work for eternity as well. It's not like the person whose name is at the top actually writes the paper.
jvanderbot 11/1/2025||
Not reviewing an upload which turns out to be LLM slop is precisely the kind of thing you want to track with a reputation system
DalasNoin 11/1/2025||
It's clearly not sustainable to have the main website hosting CS articles operate without any reviews or restrictions (except for the initial invite system). There were 26k submissions in October: https://arxiv.org/stats/monthly_submissions

Asking for a small amount of money would probably help. The issue with requiring peer-reviewed journals or conferences is the severe lag; it takes a long time, and part of the advantage of arXiv was that you could have the paper instantly as a preprint. Also, these conferences and journals are themselves receiving enormous quantities of submissions (29,000 for AAAI), so we are just pushing the problem around.

marcosdumay 11/1/2025||
A small payment is probably better than what they are doing. But we must eventually solve the LLM issue, probably by punishing the people that use them instead of the entire public.
ec109685 11/1/2025|||
It’s not a money issue. People publish these papers to get jobs, school admissions, visas, and whatnot. Way more than $30 in value from being “published”.
nickpsecurity 11/1/2025|||
I'll add the amount should be enough to cover at least a cursory review. A full review would be better. I just don't want to price out small players.

The papers could also be categorized as unreviewed, quick check, fully reviewed, or fully reproduced. They could pay for this to be done or verified. Then, we have a reputational problem to deal with on the reviewer side.

loglog 11/1/2025|||
I don't know about CS, but in mathematics the vast majority of researchers would not have enough funding to pay for a good quality full review of their articles. The peer review system mostly runs on good will.
slashdave 11/1/2025|||
> I'll add the amount should be enough to cover at least a cursory review.

You might be vastly underestimating the cost of such a feature

nickpsecurity 11/1/2025||
I'm assuming it costs somewhere between no review and a thorough one. Past that, I assume nothing. Pay reviewers per review or per hour, like other consultants. Groups like arXiv would, for a smaller fee, verify the reviewer's credentials and that the review happened.

That's if anyone wants publishing to be closer to the scientific method. arXiv themselves might not attempt all of that. We can still hope for volunteers to review papers in fields with little peer review. I just don't think we can call most of that science anymore.

mottiden 11/1/2025|||
I like this idea. A small contribution would be a good filter. Looking at the stats, it's quite crazy. I didn't know we could access this data. Thanks for sharing.
skopje 11/1/2025||
I think it worked well for MetaFilter: a $1/1 euro one-time charge to join. But that's probably a price worth paying to spam arXiv with junk.
thomascountz 11/1/2025||
The HN submission title is incorrect.

> Before being considered for submission to arXiv’s CS category, review articles and position papers must now be accepted at a journal or a conference and complete successful peer review.

Edit: original title was "arXiv No Longer Accepts Computer Science Position or Review Papers Due to LLMs"

dimava 11/1/2025||
refined title:

ArXiv CS requires peer review for surveys amid flood of AI-written ones

- nothing happened to preprints

- "summarization" articles always required it, they are just pointing at it out loud

stefan_ 11/1/2025|||
Isn't arXiv where you upload things before they have gone through the entire process? Isn't that the entire value, aside from some publisher cartel busting?
jvanderbot 11/1/2025||
Almost all CS papers can still be uploaded, and all non-CS papers. This is a very conservative step by them.
catlifeonmars 11/1/2025|||
Agree. Additionally, the original title, "arXiv No Longer Accepts Computer Science Position or Review Papers Due to LLMs", is ambiguous. "Due to LLMs" is being interpreted as articles written by LLMs, which is not accurate.
zerocrates 11/1/2025||
No, the post is definitely complaining about articles written by LLMs:

"In the past few years, arXiv has been flooded with papers. Generative AI / large language models have added to this flood by making papers – especially papers not introducing new research results – fast and easy to write."

"Fast forward to present day – submissions to arXiv in general have risen dramatically, and we now receive hundreds of review articles every month. The advent of large language models have made this type of content relatively easy to churn out on demand, and the majority of the review articles we receive are little more than annotated bibliographies, with no substantial discussion of open research issues."

Surely a lot of them are also about LLMs: LLMs are the hot computing topic and where all the money and attention is, and they're also used heavily in the field. So that could at least partially account for why this policy is for CS papers only, but the announcement's rationale is about LLMs as producing the papers, not as their subject.

dang 11/1/2025|||
We've reverted it now.
ivape 11/1/2025||
I don’t know about this. From a pure entertainment standpoint, we may be denying ourselves a world of hilarity. LLMs + “You know Peter, I’m something of a researcher myself” delusions. I’d pay for this so long as people are very serious about the delusion.
aoki 11/1/2025||
That’s viXra
exasperaited 11/1/2025||
The Tragedy of the Commons, updated for LLMs. Part #975 in a continuing series.

These things will ruin everything good, and that is before we even start talking about audio or video.

kibwen 11/1/2025||
Part #975, but that's only because we overflowed the 64-bit counter. Again.
hoistbypetard 11/1/2025||
Spammers ruin everything. This gives the spammers a force multiplier.
exasperaited 11/1/2025||
> This gives the spammers a force multiplier.

It is also turning people into spammers because it makes bluffers feel like experts.

ChatGPT is so revealing about a person's character.

currymj 11/1/2025||
i would like to understand what people get, or think they get, out of putting a completely AI-generated survey paper on arXiv.

Even if AI writes the paper for you, it's still kind of a pain in the ass to go through the submission process, get the LaTeX to compile on their servers, etc., there is a small cost to you. Why do this?

swiftcoder 11/1/2025||
Gaming the h-index has been a thing for a long time in circles where people take note of such things. There are academics who attach their name to every paper that goes through their department (even if they contributed nothing), there are those who employ a mountain of grad students to speed run publishing junk papers... and now with LLMs, one can do it even faster!
unethical_ban 11/1/2025|||
Presumably a sense of accomplishment to brandish with family and less informed employers.
xeromal 11/1/2025||
Yup, 100% going on a linked in profile
ec109685 11/1/2025||
Published papers are part of the EB-1 visa rubric so huge value in getting your content into these indexes:

"One specific criterion is the ‘authorship of scholarly articles in professional or major trade publications or other major media’. The quality and reputation of the publication outlet (e.g., impact factor of a journal, editorial review process) are important factors in the evaluation”

Tunabrain 11/1/2025||
Is arXiv a major trade publication?

I've never seen arXiv papers counted towards your publications anywhere that the number of your publications are used as a metric. Is USCIS different?

whatpeoplewant 11/1/2025||
Great move by arXiv—clear standards for reviews and position papers are crucial in fast-moving areas like multi-agent systems and agentic LLMs. Requiring machine-readable metadata (type=review/position, inclusion criteria, benchmark coverage, code/data links) and consistent cross-listing (cs.AI/cs.MA) would help readers and tools filter claims, especially in distributed/parallel agentic AI where evaluation is fragile. A standardized “Survey”/“Position” tag plus a brief reproducibility checklist would set expectations without stifling early ideas.
ants_everywhere 11/1/2025||
I'm not sure this is the right way to handle it (I don't know what is) but arXiv.org has suffered from poor quality self-promotion papers in CS for a long time now. Years before llms.
jvanderbot 11/1/2025|
How precisely does it "suffer" though? It's basically a way to disseminate results but carries no journalistic prestige in itself. It's a fun place to look now and then for new results, but just reading the "front page" of a category has always been a Caveat Emptor situation.
JumpCrisscross 11/1/2025|||
> but carries no journalistic prestige

Beyond hosting cost, there is some prestige to seeing an arXiv link versus rando blog post despite both having about the same hurdle to publishing.

ants_everywhere 11/1/2025||||
Because a large number of "preprints" that are really blog posts or advertisements for startups greatly increase the noise.

The idea is the site is for academic preprints. Academia has a long history of circulating preprints or manuscripts before the work is finished. There are many reasons for this, the primary one is that scientific and mathematical papers are often in the works for years before they get officially published. Preprints allow other academics in the know to be up to date on current results.

If the service is used heavily by non-academics to lend an aura of credibility to any kind of white paper then the service is less usable for its intended purpose.

It's similar to the use of question/answer sites like Quora to write blog posts and ads under questions like "Why is Foobar brand soap the right soap for your family?"

tempay 11/1/2025|||
This isn’t the case in some other fields.
physarum_salad 11/1/2025|
The review paper is dead... so this is a good development. Like you can generate these things in a couple of iterations with AI and minor edits. Preprint servers could be dealing with 1000s of review/position papers over short periods of time, and that wastes precious screening hours.

It is a bit different in other fields, where interpretations or know-how might be communicated in a review paper format that is otherwise not possible. For example, in biology relating to a new phenomenon or function.

bee_rider 11/1/2025||
What are review papers for anyway? I think they are either for

1) new grad students to end up with something nice to publish after reviewing the literature or,

2) older professors to write a big overview of everything that happened in their field as sort of a “bible” that can get you up to speed

The former is useful as a social construct; I mean, hey, new grad students, don’t skimp on your literature review. Finding out a couple years in that folks had already done something sorta similar to my work was absolutely gut-wrenching.

For the latter, I don’t think LLMs are quite ready to replace the personal experiences of a late-career professor, right?

CamperBob2 11/1/2025|||
Ultimately, a key reason to write these papers in the first place is to guide practitioners in the field, right? Otherwise science itself is just a big (redacted term that can get people shadow-banned for simply using it).

As one of those practitioners, I've found good review/survey papers to be incredibly valuable. They call my attention to the important publications and provide at least a basic timeline that helps me understand how the field has evolved from the beginning and what aspects people are focusing on now.

At the same time, I'll confess that I don't really see why most such papers couldn't be written by LLMs. Ideally by better LLMs than we have now, of course, but that could go without saying.

trostaft 11/2/2025|||
I've found (good) review papers invaluable as an academic. They're really useful as a fast ladder to getting up to speed in a new area. Usually they have a great literature review (with the important papers to read afterward), a curated list of results important to understand, and good intuition about how to reason. It's a compactification of what I would otherwise have to gain by working in an area for years. No replacement for that, of course, but it does make it easier to attain.

I don't understand the appeal of an (majorly-)LLM generated review paper. A good review paper is a hard task to write well, and frankly the only good ones I've read have come from authors who are at apex of their field (and are, in particular, strong writers). The 'lossy search' of an LLM is probably an outstanding tool for _refining_ a review paper, but for fully generating it? At least not with current LLMs.

JumpCrisscross 11/1/2025|||
> you can generate these things in a couple of iterations with AI

The problem is you can’t. Not without careful review of the output. (Certainly not if you’re writing about anything remotely novel and thus useful.)

But not everyone knows that, which turns private ignorance into a public review problem.

physarum_salad 11/1/2025||
Are review papers centred on novel research? I get what you mean ofc but most are really mundane overviews. In good review papers the authors offer novel interpretations/directions but even then it involves a lot of grunt work too.
awestroke 11/1/2025|||
A good review paper is infinitely better than an LLM managing to find a few papers and making a summary. A knowledgeable researcher knows which papers are outdated and can make a trustworthy review paper; an LLM can't easily do that yet.
physarum_salad 11/1/2025||
OK, I take your point. However, it is possible to generate a middling review paper combining AI-generated slop and edits. Maybe we would be tricked by it in certain circumstances. I don't mean to imply these outputs are something I would value reading. I am just arguing in favour of arXiv's proposed approach.
JumpCrisscross 11/1/2025||
> it is possible to generate a middling review paper combining ai generated slop and edits

If you’re an expert. If you’re not, you’ll publish, best case, bullshit. (Worst case lies.)

bulubulu 11/1/2025||
Review papers are summaries of recent developments in the field that deserve fellow researchers' attention. Such work should be done annually, or at most quarterly in my opinion, to include only time-tested results. If hundreds of review papers are published every month, I am afraid that their quality, in terms of paper selection and innovative interpretation/direction, will not be much higher than content generated by an LLM, even if written word-for-word by a real scientist.

LLMs are good at plainly summarizing from the public knowledge base. Scientists should invest their time in contributing new knowledge to public base instead of doing the summarization.
