Posted by stratos123 12 hours ago

Significant Rise of Reports (lwn.net)
240 points | 122 comments | page 2
HAMSHAMA 9 hours ago|
Probably related to this (genuinely interesting) talk given by an entropic researcher https://youtu.be/1sd26pWhfmg?si=j2AWyCfbNbOxU4MF
svat 7 hours ago|
To clarify, the talk is by an Anthropic researcher, though given the subject of LLMs, "entropic researcher" also makes some kind of sense.
adverbly 10 hours ago||
Anecdotally, I've been seeing a higher rate of CVEs tracked by a few dependabot projects.

Seems supported by this as well: https://www.first.org/blog/20260211-vulnerability-forecast-2...

Interesting that it's been higher than forecast since 2023. Personally I'd expect that trend to continue given that LLMs both increase bugs written as well as bugs discovered.

0x3f 8 hours ago||
Why don't we just pagerank github contributors? Merged PRs approved by other quality contributors improves rank. New PRs tagged by a bot with the rank of the submitter. Add more scoring features (account age? employer?) as desired.
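The scoring idea above could be sketched as a PageRank-style power iteration over an "approval graph": a merged PR creates an endorsement edge from the approving reviewer to the PR author, so rank flows from trusted reviewers to the contributors they approve. A minimal sketch (all usernames, the damping factor, and the edge data are hypothetical, not from any real GitHub API):

```python
# PageRank-style contributor scoring over an approval graph.
# Edge approver -> author means "approver approved a merged PR by author".
# Hypothetical data; a real system would pull this from the GitHub API.

DAMPING = 0.85      # standard PageRank damping factor
ITERATIONS = 50     # power-iteration steps

approvals = {
    "alice": ["bob", "carol"],
    "bob":   ["carol"],
    "carol": ["alice"],
    "dave":  [],     # new account: has approved nothing yet
}

def contributor_rank(approvals):
    users = sorted(set(approvals) | {a for vs in approvals.values() for a in vs})
    n = len(users)
    rank = {u: 1.0 / n for u in users}
    for _ in range(ITERATIONS):
        new = {u: (1 - DAMPING) / n for u in users}
        for approver, authors in approvals.items():
            if authors:
                # an approver endorses each author with an equal share of its rank
                share = DAMPING * rank[approver] / len(authors)
                for author in authors:
                    new[author] += share
            else:
                # dangling node: redistribute its rank evenly over all users
                for u in users:
                    new[u] += DAMPING * rank[approver] / n
        rank = new
    return rank

ranks = contributor_rank(approvals)
# "dave", endorsed by nobody, ends up with the lowest rank
```

Extra scoring features (account age, employer) would just become additional terms blended into the per-user base score instead of the uniform `(1 - DAMPING) / n`.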
SoftTalker 8 hours ago|
It will be gamed, just as pagerank was.
0x3f 7 hours ago|||
Of course, but killing the 80% of low hanging fruit is already valuable. The rest is an arms race like always.
cafebabbe 8 hours ago|||
Not by everyone, so that would be better than nothing.
bastawhiz 7 hours ago||
Excited to have to do SEM for my GitHub profile so that people will read my pull requests
0x3f 38 minutes ago||
Well it's all going to be social credit one day. Might as well get in practice early.
siruwastaken 9 hours ago||
It's interesting to hear from people directly in the thick of it that these bug reports are apparently gaining value and are no longer just slop. Maybe there is hope for a world where AI helps create bug free software and doesn't just overload maintainers.
michelwague 3 hours ago||
this is what i'm seeing on a micro scale. i pointed a code-davinci-002 model at my own repo and it found a subtle off-by-
ori_b 4 hours ago||
Or we can stop putting everything on the internet as a vector for enforced enshittification.
tyre 5 hours ago||
I wish they wouldn’t call it “AI slop” before acknowledging that most of the bugs are correct.

Let’s bring a bit of nuance between mindless drivel (e.g. LinkedIn influencer posts, spammed issues where LLMs are making mistakes) vs. using LLMs to find/build useful things.

thinkharderdev 4 hours ago||
I think they are saying what you want them to say. In the past they got a bunch of AI slop and now they are getting a lot of legit bug reports. The implication being that the AI got better at finding (and writing reports of) real bugs.
_se 5 hours ago|||
It can be correct and slop at the same time. The reporter could have reported it in a way that makes it clear a human reviewed and cared about the report.

Slop is a function of how the information is presented and how the tools are used. People don't care if you use LLMs if they can't tell you used them; they care when you send them a bunch of bullshit with 5% of value buried inside it.

If you're reading something and you can tell an LLM wrote it, you should be upset. It means the author doesn't give a fuck.

tptacek 4 hours ago||
No it can't. These aren't "Show HN" posts about new programs people have conjured with Claude. They're either vulnerabilities or they're not. There's no such thing as a "slop vulnerability". The people who exploit those vulnerabilities do not care how much earlier reporters "gave a fuck" about their report.

This is in the linked story: they're seeing increased numbers of duplicate findings, meaning, whatever valid bugs showboating LLM-enabled Good Samaritans are finding, quiet LLM-enabled attackers are also finding.

People doing software security are going to need to get over the LLM agent snootiness real quick. Everyone else can keep being snooty! But not here.

_se 2 hours ago|||
Everyone is free to be as snooty as they like. If a report is harder to read/understand/validate because the author just yolo'ed it with an LLM, that's on the report author, not on the maintainers.

It's not okay to foist work onto other people because you don't think LLM slop is a problem. It is absolutely a problem, and no amount of apologizing and pontificating is going to change that.

Grow up and own your work. Stop making excuses for other people. Help make the world better, not worse. It's obvious that LLMs can be useful for this purpose, so people should use them well and make the reports useful. Period.

tptacek 2 hours ago||
Try to make this sentiment coherent. "It's not OK to foist work onto other people". Ok, sure, I won't. The vulnerability still exists. The maintainers just don't get to know about it. I do, I guess. But not them: telling them would "make the world worse".
parliament32 3 hours ago|||
> There's no such thing as a "slop vulnerability"

https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...

See the list at the bottom of the post for examples.

tptacek 3 hours ago||
Those aren't vulnerabilities. You're missing the point.

Nobody is saying there's no such thing as a slop report. Not only are there, but slop vulnerability reports as a time-consuming annoying phenomenon predate LLM chatbots by almost a decade. There's a whole cottage industry that deals with them.

Or did. Obsolete now.

geraldcombs 4 hours ago||
If I read the sentence correctly they're saying that past reports were AI slop, but the state of the art has advanced and that current reports are valid. This matches trends I've seen on the projects I work on.
devcraft_ai 6 hours ago||
[dead]
michelwague43 4 hours ago||
[dead]
themafia 10 hours ago|
An AI enthusiast having a breathless and predictive position on the future of the technology? No way! It's almost like Wall Street is about to sour on the whole stack and there is a concerted effort to artificially push these views into the conversation to get people on board.

Then again, I'm a known crank and aggressive cynic, but you never really see any gathered data backing these points up.

dieulot 9 hours ago||
Could you back up your assertion that Willy Tarreau — who used to maintain the Linux kernel — is “an AI enthusiast”? I can’t find anything about it.
cdavid 8 hours ago|||
Also one of the initial creators of HAProxy, a well-known reverse proxy. To dismiss somebody like that as a simple "AI shill" is just ignorant.
logicprog 9 hours ago|||
Anyone who says anything good about AI must be an AI shill from the start, not someone who is genuinely observing reality or had their mind changed, don't you know?
logicprog 9 hours ago|||
> but you never really see any gathered data backing these points up.

https://www.anthropic.com/news/mozilla-firefox-security

?

PKop 8 hours ago||
Sort of a tautology to just assert that someone saying good things about AI is an AI enthusiast and therefore their opinion should be dismissed. He also happens to have been a kernel maintainer, his experience as he's describing it should count for something.
orthecreedence 2 hours ago||
> He also happens to have been a kernel maintainer

And a primary author of one of the most stable and widely used load balancers in the history of networking.

More comments...