Posted by speckx 1 day ago
The three examples given include two buffer overflows, which could very well be cherry-picked. It's hard to evaluate whether these vulns are actually "hard to find". I'd be interested to see the full list of CVEs and CVSS ratings to get an idea of how good these findings are.
Given the bogus claims [1] around GenAI and security, we should be very skeptical of this news.
[0] https://red.anthropic.com/2026/zero-days/
[1] https://doublepulsar.com/cyberslop-meet-the-new-threat-actor...
No conflict of interest here at all!
> but I understand that the first principal component of casual skepticism on HN is "must be a conflict of interest".
I think the first principle should be "don't trust a random person on the internet". (But if you think Tom is random, look at his profile. First link, not second.)
The user you're suspicious of is pretty well-known in this community.
Instead look at their profile...
Points != creds. Creds == creds.
Don't be fucking lazy and rely on points, especially when they link their identity.
I'll put it this way: I don't give a shit about Robert Downey Jr.'s opinion on AI technology. His notoriety "means nothing to anybody". But I sure do care about Hinton's (even if I disagree with him).
malfist asked why they should care. You said points. You should have said "tptacek is known to do security work, see his profile". Done. Much more direct. Answers the actual question. Instead you pointed to points, which only makes him "not a stranger" at best and still doesn't answer the question. Intended or not, "you should believe tptacek because he has a lot of points" is a reasonable interpretation of what you said.
The problematic, ignorant comment that has been flagged asserted that what tptacek says "means nothing to anybody else", which is a very wrong statement about his role in the HN community.
Either way, I'm not sure what your point is. You didn't answer their question, the one you replied to. I think you're in defensive mode, but there's no need to defend. I'm not going to respond anymore.
and it's ridiculous that someone's comment got flagged for not worshiping at the altar of tptacek. they weren't even particularly rude about it.
i guarantee if i said what tptacek said, and someone replied with exactly what malfist said, they would not have been flagged. i probably would have been downvoted.
why appeal to authority is totally cool as long as tptacek is the authority is way fucking beyond me. one of those HN quirks. HN people fucking love tptacek and take his word as gospel.
(Most common form of this is misreading opensecrets and using it to claim that some corporation is donating to a political campaign.)
https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-arti...
There's a lot of vuln researchers out there. Someone's gotta be making the case against. Where are they?
From what I can see, vulnerability research combines many of the attributes that make problems especially amenable to LLM loop solutions: huge corpus of operationalizable prior art, heavily pattern dependent, simple closed loops, forward progress with dumb stimulus/response tooling, lots of search problems.
Of course it works. Why would anybody think otherwise?
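To make "simple closed loops" concrete, here's the rough shape in Python. This is a sketch only — ask_model stands in for whatever chat-completion API you'd actually call, and the hex-input convention is invented for illustration:

    import subprocess

    def ask_model(prompt: str) -> str:
        """Hypothetical LLM call; swap in any chat-completion API."""
        raise NotImplementedError

    def crashes(target: str, data: bytes) -> bool:
        # Dumb stimulus/response tooling: run the target on the input
        # and treat an abnormal exit (killed by a signal) as progress.
        try:
            proc = subprocess.run([target], input=data,
                                  capture_output=True, timeout=5)
        except subprocess.TimeoutExpired:
            return False
        return proc.returncode < 0  # negative = terminated by a signal

    def hunt(target: str, source_snippet: str, rounds: int = 20):
        history = []
        for _ in range(rounds):
            prompt = (f"Source under test:\n{source_snippet}\n"
                      f"Attempts so far: {history}\n"
                      "Propose one input, hex-encoded, likely to corrupt memory.")
            candidate = bytes.fromhex(ask_model(prompt).strip())
            if crashes(target, candidate):
                return candidate  # the loop closes itself: a crash is the signal
            history.append((candidate.hex(), "no crash"))
        return None

The loop is dumb on purpose; the point is that every piece of it — propose, run, observe, retry — is trivially operationalizable.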
You can tell you're in trouble on this thread when everybody starts bringing up the curl bug bounty. I don't know if this is surprising news for people who don't keep up with vuln research, but Daniel Stenberg's curl bug bounty has never been where the action is in vuln research. What, a public bug bounty attracted an overwhelming amount of slop? Quelle surprise! Bug bounties attracted slop for so long before mainstream LLMs existed that they might well have been the inspiration for slop itself.
Also, a very useful component of a mental model about vulnerability research that a lot of people seem to lack (not just about AI, but in all sorts of other settings): money buys vulnerability research outcomes. Anthropic has eighteen squijillion dollars. Obviously, they have serious vuln researchers. Vuln research outcomes are in the model cards for OpenAI and Anthropic.
Yeah, that's just media reporting for you. As anyone who has ever administered a bug bounty programme on regular sites (h1, bugcrowd, etc.) can tell you, there was an absolute deluge of slop for years before LLMs came on the scene. It was just manual slop (by manual I mean running wapiti and copy/pasting the reports to h1).
I wonder if it's gotten actively worse these days. But the newness would be the scale, not the quality itself.
What did you do beyond playing around with them?
> Of course it works. Why would anybody think otherwise?
Sam Altman is a liar. The folks pitching AI as an investment were previously flinging SPACs and crypto. (And they can usually speak to the technical side of AI about as competently as they could battery chemistry or Merkle trees.) Copilot and Siri overpromised and underdelivered. Vibe coders are mostly idiots.
The bar for believability in AI is about as high as its frontier's actual achievements.
In the intervening time, one of the beliefs I've acquired is that the gap between effective use of models and marginal use is asking for ambitious enough tasks, and that I'm generally hamstrung by knowing just enough about anything they'd build to slow everything down. In that light, I think building an agent to automate the kind of bugfinding Burp Suite does is probably smallball.
Many years ago, a former collaborator of mine found a bunch of video driver vulnerabilities by using QEMU as a testing and fault injection harness. That kind of thing is more interesting to me now. I once did a project evaluating an embedded OS where the modality was "port all the interesting code from the kernel into Linux userland processes and test them directly". That kind of thing seems especially interesting to me now too.
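A rough sketch of what that second modality looks like mechanically, assuming (purely for illustration) the extracted code was compiled into ./parser.so exporting int parse(const unsigned char *, size_t):

    import ctypes, os, random

    # Hypothetical setup: kernel code extracted, in-kernel dependencies
    # stubbed out, compiled as a userland shared library. Unix-only.
    lib = ctypes.CDLL("./parser.so")
    lib.parse.argtypes = [ctypes.c_char_p, ctypes.c_size_t]
    lib.parse.restype = ctypes.c_int

    def mutate(seed: bytes) -> bytes:
        # Cheap mutation: flip a handful of random bytes in a valid sample.
        data = bytearray(seed)
        for _ in range(random.randint(1, 8)):
            data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    seed = open("valid_sample.bin", "rb").read()
    for i in range(100_000):
        case = mutate(seed)
        pid = os.fork()      # isolate each call: a crash kills the child,
        if pid == 0:         # not the harness
            lib.parse(case, len(case))
            os._exit(0)
        _, status = os.waitpid(pid, 0)
        if os.WIFSIGNALED(status):
            open(f"crash_{i}.bin", "wb").write(case)
            print(f"signal {os.WTERMSIG(status)} on case {i}")

Once the code is in userland you get all the normal tooling (ASan, coverage, real fuzzers) for free; the fork-per-call harness above is just the minimum viable version.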
https://projectzero.google/2024/10/from-naptime-to-big-sleep...
Some followup findings reported in point 1 here from 2025:
https://blog.google/innovation-and-ai/technology/safety-secu...
So what Anthropic are reporting here is not unprecedented. The main thing they are claiming is an improvement in the number of findings. I don't see a reason to be overly skeptical.
Someone else here! Ptacek saying anything about security means a lot to this nobody.
To the point that I'm now going to take this seriously where before I couldn't see through the fluff.
Also, he's a friend of someone I know & trust irl. But then again, who am I to you, but yet another anon on a web forum.
AI is relentless
---
> Claude initially went down several dead ends when searching for a vulnerability—both attempting to fuzz the code, and, after this failed, attempting manual analysis. Neither of these methods yielded any significant findings.
...
> "The commit shows it's adding stack bounds checking - this suggests there was a vulnerability before this check was added. … If this commit adds bounds checking, then the code before this commit was vulnerable … So to trigger the vulnerability, I would need to test against a version of the code before this fix was applied."
...
> "Let me check if maybe the checks are incomplete or there's another code path. Let me look at the other caller in gdevpsfx.c … Aha! This is very interesting! In gdevpsfx.c, the call to gs_type1_blend at line 292 does NOT have the bounds checking that was added in gstype1.c."
---
Its attempt to analyze the code failed, but when it saw a concrete example of "in the history, someone added bounds checking" it did an "I wonder if they did it everywhere else for this func call" pass.
So after it considered that function based on the commit history, it found something that it didn't find in its initial fuzzing and open-ended code-analysis search.
As someone who still reads the code that Claude writes, this sort of "big picture miss, small picture excellence" is not very surprising or new. It's interesting to think about what it would take to do that precise digging across a whole codebase; especially if it needs some sort of modularization/summarization of context vs trying to digest tens of million lines at once.
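Mechanically, the "did every caller get the fix?" pass is just: enumerate call sites of the function whose contract changed and flag the ones with no guard nearby. A crude sketch — the function name comes from the quoted report, the heuristics are invented:

    import re, sys
    from pathlib import Path

    CALLEE = "gs_type1_blend"            # from the quoted report
    GUARD = re.compile(r"\bif\s*\(")     # crude: any nearby if counts as a guard

    def unguarded_call_sites(root: str):
        for path in Path(root).rglob("*.c"):
            lines = path.read_text(errors="replace").splitlines()
            for i, line in enumerate(lines):
                if CALLEE + "(" not in line:
                    continue
                window = lines[max(0, i - 5):i]  # few lines above the call
                if not any(GUARD.search(w) for w in window):
                    yield path, i + 1, line.strip()

    for path, lineno, line in unguarded_call_sites(sys.argv[1]):
        print(f"{path}:{lineno}: {line}")

A regex pass like this drowns in false positives on a real tree; the interesting part is that a model can apply the same "callers of the changed function" framing with actual semantic judgment instead of a five-line window.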
After all, they need time to fix the CVEs.
And it doesn't matter to you as long as your investment into this is just 20 or 100 bucks per month anyway.
Can we stop doing that?
I know it's not the same, but it sounds like "We don't know if that job the woman supposedly finished was all that hard," implying that if a woman did something, it surely must have been easy.
If you know it's easy, say that it was easy and why. Don't use your lack of knowledge or competence to create empty critique founded solely on doubt.
Given the context I'd say it's reasonable to question the value of the output. It falls to the other party to demonstrate that this is anything more than the usual slop.
So much so that he had to eventually close the bug bounty program.
https://daniel.haxx.se/blog/2026/01/26/the-end-of-the-curl-b...
What is the confusion here?
Having established that, are you saying that you can't even conceptualize a conflict of interest potentially clouding someone's judgement anymore once the amount of money and the person's perceived status and skill level all increase?
Disagreeing about the significance of the conflict of interest is one thing, but claiming not to understand how it could make sense is a drastically stronger claim.
If I used AI to make a Super Nintendo soundtrack, no one would treat it as equivalent to Nobuo Uematsu or Koji Kondo or David Wise using AI to do the same and claiming that the AI was managing to make creatively impressive work. Even if those famous composers worked for Anthropic.
Yes, there would be relevant biases, but there could be no comparison between my using AI to make music slop and their expert supervision of AI to make something much more impressive.
Just because AI is involved in two different things doesn't make them similar things.
I guess I'll spell it out. One is a guy with an abundance of technology that he doesn't know how to use, that he knows can make him money and fame, if only he can convince you that his lies are truth. The other is a Bangladeshi teenager.
That's uncalled for... there are actual security researchers in Indonesia and other countries you could use to exemplify this.
It's not like there were ads with real doctors recommending Camel cigarettes.
It's not like the recent browser "breakthrough", which pulled 300 OSS dependencies together, removed attribution, and called the mess "working".
The desperation of the Samas, Musks, Satyas and Anthropics of this world and their fanbase to paint marginal 0.0001337% improvements in a gamed SWE ranking as something worth any attention is just delicious. Opus 4.6? Please, more like Opus 4.5.0.2-RC. All I hear is the sound of a bubble going pop. Delightful.
I would imagine Anthropic are the latter type of individual.
He's written about it here: https://daniel.haxx.se/blog/2025/10/10/a-new-breed-of-analyz... and talked about it in his keynote at FOSDEM - which I attended - last Sunday (https://fosdem.org/2026/schedule/event/B7YKQ7-oss-in-spite-o...).
Personally, while I get that 500 sounds more impressive to investors and the market, I'd be far more impressed by a detailed, reviewed paper that showcases five to ten concrete examples, documenting the full process and the response from the team behind the potentially affected code.
It is far too early for me to make any definitive statement, but early testing does not indicate any major jump between Opus 4.5 and Opus 4.6 that would warrant such an improvement. I'd love nothing more than to be proven wrong on this front and will of course continue testing.
Do I believe that someone was a part of some sophisticated state-backed APT? Not even a little bit.
In fact, I'll go as far as to say that there's nobody technical inside Anthropic who believes it. The entire "technical sophistication" section of that report is half a page long, and the only thing it says is that "someone used some MCP servers to point some open source tools at a target". Yet Anthropic's marketing team still had the balls to attribute that to a state-sponsored group within that same report, and the media ate it up.
> OpenClaw's massive adoption.
I was talking about those two.
>Opus 4.6 uncovers 500 zero-day flaws in open-source code
>Just 100 from the 500 is from OpenClaw created by Opus 4.5
>Well, even then, that's enormous economic value, given OpenClaw's massive adoption.
I'm arguing that because OpenClaw is installed on so many computers, uncovering the vulnerabilities in it offers enormous economic value, as opposed to letting them get exploited by malicious actors. I don't understand why this is controversial.
Is it? Am I missing some mass psychosis here?
Another thing with these success stories is that they often target old, incredibly crufty code bases which are practically guaranteed to have vulns in there somewhere, so you'll always get one or two wins in amongst the avalanche of slop. It'd be interesting to see how well this does against standard SAST benchmarks.
This is a placed advertisement, even if known security researchers participated in the claim:
Many people have burned their credibility for the AI mammon.