Posted by dominicq 2 hours ago

Small models also found the vulnerabilities that Mythos found (aisle.com)
391 points | 116 comments
johnfn 2 hours ago|
The Anthropic writeup addresses this explicitly:

> This was the most critical vulnerability we discovered in OpenBSD with Mythos Preview. Across a thousand runs through our scaffold, the total cost was under $20,000, and those runs surfaced several dozen additional findings. While the specific run that found the bug above cost under $50, that number only makes sense with full hindsight. Like any search process, we can't know in advance which run will succeed.

Mythos scoured the entire continent for gold and found some. For these small models, the authors pointed at a particular acre of land and said "any gold there? eh? eh?" while waggling their eyebrows suggestively.

For a true apples-to-apples comparison, let's see it sweep the entire FreeBSD codebase. I hypothesize it will find the exploit, but it will also turn up so much irrelevant nonsense that it won't matter.

kilpikaarna 1 hour ago||
Wasn't the scaffolding for the Mythos run basically a line of bash that loops through every file of the codebase and prompts the model to find vulnerabilities in it? That sounds pretty close to "any gold there?" to me, only automated.

Have Anthropic actually said anything about the amount of false positives Mythos turned up?

FWIW, I saw some talk on Xitter (so grain of salt) about people replicating their result with other (public) SotA models, but each turned up only a subset of the ones Mythos found. I'd say that sounds plausible from the perspective of Mythos being an incremental (though an unusually large increment perhaps) improvement over previous models, but one that also brings with it a correspondingly significant increase in complexity.

So the angle they chose for presenting it, and the subsequent buzz, is at least partly hype -- saying "it's too powerful to release publicly" sounds a lot cooler than "it costs $20,000 to run over your codebase, so we're going to offer this directly to enterprise customers (and a few token open source projects for marketing)". Keep in mind that the examples in Nicholas Carlini's presentation were using Opus, so security is clearly something they've been working on for a while (as they should, because it's a huge risk). They didn't just suddenly find themselves having accidentally created a super hacker.

johnfn 1 hour ago|||
> Wasn't the scaffolding for the Mythos run basically a line of bash that loops through every file of the codebase and prompts the model to find vulnerabilities in it? That sounds pretty close to "any gold there?" to me, only automated.

But the entire value is that it can be automated. If you try to automate a small model to look for vulnerabilities over 10,000 files, it's going to say there are 9,500 vulns. Or none. Both are worthless without human intervention.

I definitely breathed a sigh of relief when I read it was $20,000 to find these vulnerabilities with Mythos. But I also don't think it's hype. $20,000 is, optimistically, a tenth the price of a security researcher, and that shift does change the calculus of how we should think about security vulnerabilities.

sweezyjeezy 8 minutes ago|||
> But the entire value is that it can be automated. If you try to automate a small model to look for vulnerabilities over 10,000 files, it's going to say there are 9,500 vulns. Or none.

'Or none' is ruled out, since it found the same vulnerability. I agree there's a question about the smaller model's precision, but barring further analysis, '9500' feels like pure vibes on your part. Also (out of interest), did Anthropic post their false-positive rate?

The smaller model is clearly the more automatable one IMO if it has comparable precision, since it's just so much cheaper - you could even run it multiple times for consensus.
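The consensus idea can be sketched in a few lines. `ask_small_model` below is a deterministic stand-in for a real small-model API call, with the hallucination pattern invented purely for illustration:

```python
from collections import Counter

def ask_small_model(snippet, run_id):
    """Stand-in for a real small-model call: returns suspected
    vulnerability labels for the snippet. The noise is simulated."""
    findings = {"integer-overflow"}                 # the real signal, every run
    if run_id % 3 == 0:                             # simulated occasional hallucination
        findings.add(["use-after-free", "off-by-one"][run_id % 2])
    return findings

def consensus_findings(snippet, runs=9, threshold=0.7):
    """Keep only findings reported in at least `threshold` of `runs`."""
    votes = Counter()
    for run_id in range(runs):
        votes.update(ask_small_model(snippet, run_id))
    return {label for label, n in votes.items() if n / runs >= threshold}

print(consensus_findings("uint16_t len = a + b;"))
```

The real finding survives the vote while the sporadic hallucinations fall below the threshold; at small-model prices, nine runs can still cost less than one frontier-model run.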

integralid 54 minutes ago||||
>Or none

We already know this is not true, because small models found the same vulnerability.

tptacek 3 minutes ago|||
No, they didn't. They distinguished it, when presented with it. Wildly different problem.
amazingamazing 1 hour ago||||
Citation needed for basically all of this. You're creating a double standard for small models vs. Mythos…
SpicyLemonZest 53 minutes ago|||
What the source article claims is that small models are not uniformly worse at this, and in fact they might be better at certain classes of false positive exclusion. This is what Test 1 seems to show.

(I would emphasize that the article doesn't claim and I don't believe that this proves Mythos is "fake" or doesn't matter.)

omcnoe 36 minutes ago|||
The difference is the scaffold isn’t “loop over every file” - it’s “loop over every discovered vulnerable code snippet”.

If you isolate the codebase to just the specific known vulnerable code up front, it isn’t surprising the vulnerabilities are easy to discover. The same is true for humans.

Better models can also autonomously do the work of writing proof of concepts and testing, to autonomously reject false positives.

notnullorvoid 1 hour ago|||
> I hypothesize it will find the exploit, but it will also turn up so much irrelevant nonsense that it won't matter.

The trick with Mythos wasn't that it didn't hallucinate nonsense vulnerabilities; it absolutely did. It was, however, able to verify that some were real by testing them.

The question is whether smaller models can verify and test the vulnerabilities too, and whether it can be done cheaper than these Mythos experiments.

bredren 1 hour ago|||
The article positions the smaller models as capable under expert orchestration, which, to be comparable at all, must include validation.
Aurornis 1 hour ago||
Calling it “expert orchestration” is misleading when they were pointing it at the vulnerable functions and giving it hints about what to look for because they already knew the vulnerability.
cyanydeez 43 minutes ago||
You know for loops exist and that you can run opencode against any section of code with just a small amount of templating, right? There's nothing stopping you from writing a harness that does what you're describing.
iririririr 1 hour ago|||
so it's just better at hallucinations, but they added discrete code that works as a fuzzer/verifier?
celeritascelery 1 hour ago|||
That was my thought exactly. If small models can find these same vulnerabilities, and your company is trying to find vulnerabilities, why didn’t you find them?
echelon 1 hour ago|||
Who is spending millions of dollars on small models to find vulns? Nobody else is selling here or has the budget to sell quite like this.

Anthropic spends millions - maybe significantly more.

Then, once they know where the vulnerabilities are, they spend $20k to show how effective it is on a patch of land.

They engineered this "discovery".

What the small teams are doing is fair - it's just a scaled down version of what Anthropic already did.

nullsanity 1 hour ago||||
[dead]
rakejake 1 hour ago|||
Maybe they did use small models, but you couldn't make the front page of HN with something like this until Anthropic made a big fuss out of it. Or perhaps it is just a question of compute. Not everyone has $20k or the GPU arsenal to task models with finding vulnerabilities that may or may not be real.

Unless Anthropic makes it known exactly what model + harness/scaffolding + prompt + other engineering they did, these comparisons are pointless. Given the AI labs' general rate of doomsday predictions, who really knows?

replygirl 1 hour ago||
papers are always coming out saying smaller models can do these amazing and terrifying things if you give them highly constrained problems and tailored instructions to bias them toward a known solution. most of these don't make the front page because people are rightfully unimpressed
hellcow 1 hour ago|||
It seems feasible to use a small/cheap model to flag possible vulnerabilities, and then use a more expensive model to do a second-pass to confirm those, rather than on every file. Could dramatically reduce the total cost and speed up the process.
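A minimal sketch of that two-pass idea, with stubbed model calls and invented per-file prices standing in for real APIs:

```python
# Hypothetical two-stage triage: a cheap model flags files, an
# expensive model re-checks only the flagged ones. Both "models"
# and both prices are stand-ins, not real APIs.
CHEAP_COST, EXPENSIVE_COST = 0.01, 1.00   # assumed cost per file scanned

def cheap_flag(path, source):
    # Stand-in for a small-model pass: crude keyword heuristic here.
    return any(k in source for k in ("memcpy", "strcpy", "alloca"))

def expensive_confirm(path, source):
    # Stand-in for a frontier-model pass that validates the flag.
    return "strcpy" in source             # pretend only these pan out

def triage(files):
    cost, confirmed = 0.0, []
    for path, source in files.items():
        cost += CHEAP_COST                # every file gets the cheap pass
        if cheap_flag(path, source):
            cost += EXPENSIVE_COST        # only flagged files get the expensive pass
            if expensive_confirm(path, source):
                confirmed.append(path)
    return confirmed, round(cost, 2)

files = {
    "net.c": "strcpy(dst, src);",
    "util.c": "return x + y;",
    "io.c": "memcpy(buf, in, n);",
}
print(triage(files))
```

Only the flagged files pay the expensive-model price, so total cost scales with the cheap model's flag rate rather than with codebase size.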
conception 1 hour ago||
Does it? I don’t see the quality of small models being high enough to effectively scour a codebase like this.
alpha_squared 1 hour ago|||
This is addressed elsewhere in the comments, but it appears this is actually a direct comparison to how Anthropic got their Mythos headline results.

https://news.ycombinator.com/item?id=47732322

Aurornis 1 hour ago||
How is that a direct comparison? The link you gave has a quote that says it’s not:

> Scoped context: Our tests gave models the vulnerable function directly, often with contextual hints (e.g., "consider wraparound behavior"). A real autonomous discovery pipeline starts from a full codebase with no hints

They pointed the models at the known vulnerable functions and gave them a hint. The hint part is what really breaks this comparison because they were basically giving the model the answer.

cyanydeez 40 minutes ago||
Does no one defending Mythos understand how nested for loops work?

    for each repo:
        for each file:
            opencode command /find_wraparoundvulnerability

I can run this on my local LLM and sure, I gotta wait some time for it to complete, but I see zero distinguishing facts here.

letitgo12345 46 minutes ago|||
Can't you execute the exploit to see if the vulnerability is real? Then you have a perfect filter. Maybe Mythos decided without executing, but we don't know that.
SoftTalker 1 hour ago|||
How much of that is simply scale? Anthropic threw probably an entire data center at analyzing a code base. Has anyone done the same with a "small" model?
jstanley 1 hour ago||
It's still useful if $20k of consultants would be less effective.
yorwba 1 hour ago|||
We don't even need to hypothesize that much on the irrelevant nonsense, since they helpfully provide data with the detected vulnerability patched: https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jag... and half of the small models they touted as finding the vulnerability still found it in the patched code in 3/3 runs. A model that finds a vulnerability 100% of the time even when there is none is just as informative as a model that finds a vulnerability 0% of the time even when there is one. You could replace it with a rock that has "There's a vulnerability somewhere." engraved on it.

They're a company selling a system for detecting vulnerabilities reliant on models trained by others, so they're strongly incentivized to claim that the moat is in the system, not the model, and this post really puts the thumb on the scale. They set up a test that can hardly distinguish between models (just three runs, really??) unless some are completely broken or work perfectly, the test indeed suggests that some are completely broken, and then they try to spin it as a win anyway!

A high false-positive rate isn't necessarily an issue if you can produce a working PoC to demonstrate the true positives, where they kinda-sorta admit that you might need a stronger model for this (a.k.a. what they can't provide to their customers).

Overall I rate Aisle intellectually dishonest hypemongers talking their own book.
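The rock analogy is easy to make quantitative: measure the detection-rate gap between the vulnerable code and the patched control, and treat only the gap as signal. A toy sketch with invented run counts:

```python
def signal(hits_vuln, runs_vuln, hits_patched, runs_patched):
    """Detection-rate gap between vulnerable and patched code.
    1.0 = perfect discriminator, 0.0 = the engraved rock."""
    return hits_vuln / runs_vuln - hits_patched / runs_patched

# Invented numbers: a model that flags the bug 3/3 times on both
# versions carries no information; one that goes quiet on the patch does.
rock = signal(3, 3, 3, 3)
useful = signal(3, 3, 0, 3)
print(rock, useful)
```

With only three runs per condition the confidence interval on that gap is enormous, which is the "just three runs, really??" complaint in numeric form.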

cyanydeez 44 minutes ago|||
so what you're saying is no one could ever write a loop like:

    for githubProject in githubProjects:
        opencode command /findvulnerability

Seems like a silly thing to try and back up.

epistasis 2 hours ago||
> We took the specific vulnerabilities Anthropic showcases in their announcement, isolated the relevant code, and ran them through small, cheap, open-weights models. Those models recovered much of the same analysis. Eight out of eight models detected Mythos's flagship FreeBSD exploit, including one with only 3.6 billion active parameters costing $0.11 per million tokens.

Impressive, and very valuable work, but isolating the relevant code changes the situation so much that I'm not sure it's much of the same use case.

Being able to dump an entire codebase and have the model scan it is the type of situation that opens up vulnerability scans to an entirely larger class of people.

elicash 2 hours ago||
This is from the first of the caveats that they list:

> Scoped context: Our tests gave models the vulnerable function directly, often with contextual hints (e.g., "consider wraparound behavior"). A real autonomous discovery pipeline starts from a full codebase with no hints. The models' performance here is an upper bound on what they'd achieve in a fully autonomous scan. That said, a well-designed scaffold naturally produces this kind of scoped context through its targeting and iterative prompting stages, which is exactly what both AISLE's and Anthropic's systems do.

That's why their point is what the subheadline says, that the moat is the system, not the model.

Everybody so far here seems to be misunderstanding the point they are making.

epistasis 24 minutes ago|||
If that's the point they are making, let's see the false-positive rate it produces on the entire codebase.

They measured false negatives on a handful of cases, but that is not enough to hint at the system you suggest. And based on my experiences with $$$ focused eval products that you can buy right now, e.g. greptile, the false positive rate will be so high that it won't be useful to do full codebase scans this way.

tehryanx 1 hour ago||||
I get what you're saying, but I think this is still missing something pretty critical.

The smaller models can recognize the bug when they're looking right at it, that seems to be verified. And with AISLE's approach you can iteratively feed the models one segment at a time cheaply. But if a bug spans multiple segments, the small model doesn't have the breadth of context to understand those segments in composite.

The advantage of the larger model is that it can retain more context and potentially find bugs that require more code context than one segment at a time.

That said, the bugs showcased in the mythos paper all seemed to be shallow bugs that start and end in a single input segment, which is why AISLE was able to find them. But having more context in the window theoretically puts less shallow bugs within range for the model.

I think the point they are making, that the model doesn't matter as much as the harness, stands for shallow bugs but not for vulnerability discovery in general.

cyanydeez 35 minutes ago||
OK, consider a for loop that goes through your repo, then goes through each file, and then goes through each common vulnerability...

Is Mythos somehow more powerful than just a nested for loop, a.k.a. "agentic" review? You can run `opencode run --command` with a tailored command for whatever vulnerabilities you're looking for.

tehryanx 1 minute ago||
Newer models have larger context windows, and more stable reasoning across larger context windows.

If you point your model directly at the thing you want it to assess, and it doesn't have to gather any additional context, you're not really testing those things at all.

Say you point kimi and opus at some code and give them an agentic looping harness with code review tools. They're going to start digging into the code gathering context by mapping out references and following leads.

If the bug is really shallow, the model is going to get everything it needs to find it right away, neither of them will have any advantage.

If the bug is deeper, requires a lot more code context, Opus is going to be able to hold onto a lot more information, and it's going to be a lot better at reasoning across all that information. That's a test that would actually compare the models directly.

Mythos is just a bigger model with a larger context window and, presumably, better prioritization and stronger attention mechanisms.

anotheryou 2 hours ago||||
huh, running it over each function in theory but testing just the specific ones here makes sense, but that hint?!
elicash 2 hours ago||
I agree.

To clarify, I don't necessarily agree with the post or their approach. I just thought folks were misreading it. I also think it adds something useful to the conversation.

lelanthran 51 minutes ago||||
> That's why their point is what the subheadline says, that the moat is the system, not the model.

I'm skeptical; they provided a tiny piece of code and a hint to the possible problem, and their system found the bug using a small model.

That is hardly useful, is it? In order to get the same result, they had to know both where the bug is and what the bug is.

All these companies in the business of "reselling tokens, but with a markup" aren't going to last long. The only strategy is "get bought out and cash out before the bubble pops".

TacticalCoder 1 hour ago||||
> That's why their point is what the subheadline says, that the moat is the system, not the model.

Can you expand a bit more on this? What is the system then in this case? And how was that model created? By AI? By humans?

SCHiM 1 hour ago||
You can imagine a pipeline that looks at individual source files or functions and first "extracts" what is going on. You ask the model:

- "Is the code doing arithmetic in this file/function?"
- "Is the code allocating and freeing memory in this file/function?"
- "Is the code doing X/Y/Z?" etc.

For each question, you design the follow-up vulnerability searchers.

For a function you see doing arithmetic, you ask:

- "Does this code look like integer overflow could take place?"

For memory:

- "Do all the pointers end up being freed?" _or_
- "Do all pointers only get freed once?"

I think that's the harness part in terms of generating the "bug reports". From there on, you'll need a bunch of tools for the model to interact with the code. I'd imagine you'll want to build a harness/template for the file/code/function to be loaded into, and executed under ASAN.

If you have an agent that thinks it found a bug: "Yes file xyz looks like it could have integer overflow in function abc at line 123, because...", you force another agent to load it in the harness under ASAN and call it. If ASAN reports a bug, great, you can move the bug to the next stage, some sort of taint analysis or reach-ability analysis.

So at this point you're running a pipeline to:

1) Extract "what this code does" at the file, function or even line level.
2) Put code you suspect of being vulnerable in a harness to verify agent output.
3) Put code you confirmed is vulnerable into a queue for taint analysis, to see if it can be reached by attackers.

Traditionally, I guess a fuzzer approached this from 3 -> 2, and there was no "stage 1". Because LLMs "understand" code, you can invert this system and work it up from "understanding", i.e. approach it from the other side. You ask: given this code, is there a bug, and if so, can we reach it? Instead of asking: given this public interface and a bunch of data we can stuff into it, does something happen that we consider exploitable?
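The staged pipeline described above can be sketched end to end; `classify`, `probe`, and `verify_with_asan` here are crude stand-ins for what would really be model calls and an ASan harness:

```python
# Hedged sketch of the three-stage pipeline: classify what the code
# does, ask a targeted follow-up question, then verify suspects with
# an ASan-style oracle. All three stages are stubbed stand-ins.
FOLLOW_UPS = {
    "arithmetic": "Does this code look like integer overflow could take place?",
    "memory": "Do all pointers get freed exactly once?",
}

def classify(func_src):
    # Stage 1 stand-in: "what is this code doing?"
    cats = set()
    if any(op in func_src for op in ("+", "*", "<<")):
        cats.add("arithmetic")
    if "malloc" in func_src or "free" in func_src:
        cats.add("memory")
    return cats

def probe(func_src, question):
    # Stage 2 stand-in: a model answering the targeted question.
    return "overflow" in question and "+" in func_src

def verify_with_asan(func_src):
    # Stage 3 stand-in: a real system would compile the function into a
    # harness, run it under ASan, and report a crash. Always True here.
    return True

def pipeline(functions):
    confirmed = []
    for name, src in functions.items():
        for cat in classify(src):
            if probe(src, FOLLOW_UPS[cat]) and verify_with_asan(src):
                confirmed.append(name)
    return confirmed

print(pipeline({"add_len": "return a + b;", "dup": "p = malloc(n); free(p);"}))
```

The point of the staging is that each cheap early stage shrinks the set of candidates the expensive verification stage has to touch.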

ang_cire 1 hour ago||
That's funny, this is how I've been doing security testing in my code for a while now, minus the 'taint analysis'. Who knew I was ahead of the game. :P

In all seriousness though, it scares me that a lot of security-focused people seemingly haven't learned how LLMs work best for this stuff already.

You should always be breaking your code down into testable chunks, with sets of directions about how to chunk them and what to do with those chunks. Anyone just vaguely gesturing at their entire repo going, "find the security vulns" is not a serious dev/tester; we wouldn't accept that approach in manual secure coding processes/ SSDLCs.

wat10000 2 hours ago|||
If that’s the case, why didn’t they do it that way?
e12e 1 hour ago||
Tunnel vision? If your model can handle big context, why divide into lesser problems to conquer - even if such splitting might be quite trivial and obvious?

It's the difference of "achieve the goal", and "achieve the goal in this one particular way" (leverage large context).

wat10000 1 hour ago||
I meant, if the claim here is that small models can accomplish the same things with good scaffolding, why didn’t they demonstrate finding those problems with good scaffolding rather than directly pointing them at the problem?
loire280 2 hours ago|||
> Anthropic's own scaffold is described in their technical post: launch a container, prompt the model to scan files, let it hypothesize and test, use ASan as a crash oracle, rank files by attack surface, run validation. That is very close to the kind of system we and others in the field have built, and we've demonstrated it with multiple model families, achieving our best results with models that are not Anthropic's. The value lies in the targeting, the iterative deepening, the validation, the triage, the maintainer trust. The public evidence so far does not suggest that these workflows must be coupled to one specific frontier model.

The argument in the article is that the framework to run and analyze the software being tested is doing most of the work in Anthropic's experiment, and that you can get similar results from other models when used in the same way.

roywiggins 1 hour ago||
Maybe that's true, but they didn't actually show that that's true, since they didn't try scaffolding smaller models in a similar way at all.
Jcampuzano2 2 hours ago|||
The thing is with smaller cheaper models it is very possible to simply take every file in a codebase, and prompt it asking for it to find vulnerabilities.

You could even isolate it down to every function and create a harness that provides it a chain of where and how the function is used and repeat this for every single function in a codebase.

For some very large codebases this would be unreasonable, but many of the companies making these larger models do realistically have the compute available to run a model on every single function in most codebases.

You have the harness run this many times per file/function, find the ones that are consistently/on average flagged as possible vulnerability vectors, and then pass those on to a larger model to inspect deeper, and repeat.

Most of the work here wouldn't be the model, it'd be the harness which is part of what the article alludes to.

loeg 2 hours ago||
> it is very possible to simply take every file in a codebase, and prompt it asking for it to find vulnerabilities.

My understanding (based on the Security, Cryptography, Whatever podcast interview[0] -- which, by the way, go listen to it) is that this is actually what Anthropic did with the large model for these findings.

[0]: https://securitycryptographywhatever.com/2026/03/25/ai-bug-f...

> I wrote a single prompt, which was the same for all of the content management systems, which is, I would like you to audit the security of this codebase. This is a CMS. You have complete access to this Docker container. It is running. Please find a bug. And then I might give a hint. “Please look at this file.” And I’ll give different files each time I invoke it in order to inject some randomness, right? Because the model is gonna do roughly the same thing each time you run it. And so if I want to have it be really thorough, instead of just running 100 times on the same project, I’ll run it 100 times, but each time say, “Oh, look at this login file, look at this other thing.” And just enumerate every file in the project basically.
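The quoted workflow boils down to one fixed audit prompt plus a rotating per-file hint. A sketch where `run_agent` is a stand-in for actually invoking the model in its container:

```python
# Sketch of the quoted approach: same audit prompt every run, but a
# different "please look at this file" hint per run to inject
# randomness. `run_agent` is a stand-in, not a real API.
BASE_PROMPT = ("I would like you to audit the security of this codebase. "
               "You have complete access to this Docker container. "
               "Please find a bug.")

def run_agent(prompt):
    # Stand-in: a real run would launch the model; here we just echo.
    return f"[agent ran with prompt: {prompt[:40]}...]"

def sweep(files):
    results = []
    for hint_file in files:               # one run per enumerated file
        prompt = f"{BASE_PROMPT} Please look at {hint_file}."
        results.append((hint_file, run_agent(prompt)))
    return results

runs = sweep(["login.php", "upload.php", "admin/session.php"])
print(len(runs))
```

The hint file is the only thing that varies between runs, which is what makes the sweep thorough rather than 100 near-identical runs.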

odie5533 2 hours ago||
Isn't the difference just harness then? I can write a harness that chunks code into individual functions or groups of functions and then feed it into a vulnerability analysis agent.
jcims 2 hours ago||
It's probably not the 'only' difference, because clearly the models are advancing in capability, but it's likely way more important than generally given credit for.
tptacek 2 hours ago||
If you cut out the vulnerable code from Heartbleed and just put it in front of a C programmer, they will immediately flag it. It's obvious. But it took Neel Mehta to discover it. What's difficult about finding vulnerabilities isn't properly identifying whether code is mishandling buffers or holding references after freeing something; it's spotting that in the context of a large, complex program, and working out how attacker-controlled data hits that code.

It's weird that Aisle wrote this.

ctoth 1 hour ago||
> It's weird that Aisle wrote this.

No, writing an advertisement is not weird. What's weird is that it's top of HN. Or really, no, this isn't weird either if you think about it -- people looking for a gotcha ("Oh see, that new model really isn't that good / it's surely hitting a wall / plateauing any day now") upvoted it.

sanex 1 hour ago|||
Nah, Saturday post. Less news less content.
goekjclo 1 hour ago|||
It's not weird. Top of HN is worthless as a barometer at this point, people downvote for calling out AI slop.
SoftTalker 1 hour ago|||
It's also that humans are very bad at repetitive detailed tasks. Sitting down with a code base and looking at each function for integer overflow comparison bugs gets boring really fast. It's a rare person who can do that for as long as it takes to find a bug that they don't already have some clues about.

It's the flaw in the "given enough eyeballs, all bugs are shallow" argument. Because eyeballs grow tired of looking at endless lines of code.

Machines on the other hand are excellent at this. They don't get bored, they just keep doing what they are told to do with no drop-off in attention or focus.

throwatdem12311 39 minutes ago|||
idk man, pay me enough money and I’ll look at as much code as you want looking for integer overflows

Would it be cheaper than Claude Mythos doing it? No idea. Maybe, maybe not.

But it’s weird how we’re willing to throw money at a megacorp to do it with “automation” for potentially just as much as, if not more than, it would cost to just have a big bounty program, or to hire someone for nearly the same cost and do it “normally”.

It would really have to be substantially less cost for me to even consider doing it with a bot.

____tom____ 57 minutes ago|||
And there aren't enough security researchers in the world to review ALL the files from OpenBSD.

And if there were, the cost would be more like $20M than 20K.

Having all code reviewed for security, by some level of LLM, should be standard at this point.

kennywinker 1 hour ago|||
If it’s obvious when you look close, then automate looking close. Seems simple to write tools that spider thru a code base, finding logical groupings and feeding them into an LLM with prompts like “there is a vulnerability in this code, find it”.

The thesis is, the tooling is what matters - the tools (what they call the harness) can turn a dumb llm into a smart llm.
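A crude version of that spidering step: split a C file into function-sized chunks and wrap each in the suggested prompt. The regex chunker is deliberately naive (a real tool would use a proper parser), and no model is actually called:

```python
import re

def chunk_functions(c_source):
    """Naive splitter: treat each top-level `type name(args) { ... }`
    as one chunk. Breaks on nested braces; a real tool would parse."""
    pattern = re.compile(r"\w[\w\s\*]*\([^)]*\)\s*\{[^}]*\}", re.S)
    return pattern.findall(c_source)

def prompt_for(chunk):
    # The "prime it with the premise" prompt from the comment above.
    return f"There is a vulnerability in this code, find it:\n{chunk}"

src = """
int read_len(char *buf) { return buf[0] + buf[1]; }
void copy(char *d, char *s) { strcpy(d, s); }
"""
prompts = [prompt_for(c) for c in chunk_functions(src)]
print(len(prompts))
```

Each chunk then becomes one model session, which is exactly the "dumb llm plus tooling" harness being proposed.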

tptacek 1 hour ago|||
Hold on, I misread your comment because I'm knee-jerk about code scanners, which were the bane of my existence for a while. Reworking... and: done. The original comment was just the first graf without the LLM qualification. Sorry about that.

The general approach without LLMs doesn't work. 50 companies have built products to do exactly what you propose here; they're called static application security testing (SAST) tools, or, colloquially, code scanners. In practice, getting every "suspicious" code pattern in a repository pointed out isn't highly valuable, because every codebase is awash in them, and few of them pan out as actual vulnerabilities (because attacker-controlled data never hits them, or because the missing security constraint is enforced somewhere else in the call chain).

Could it work with LLMs? Maybe? But there's a big open question right now about whether hyperspecific prompts make agents more effective at finding vulnerabilities (by sparing context and priming with likely problems) or less effective (by introducing path dependent attractors and also eliminating the likelihood of spotting vulnerabilities not directly in the SAST pattern book).

bluGill 1 minute ago||
I have long said that static checkers get you ten false positives. Note that the size of the code is not a consideration: it doesn't matter if it's the four-line 'hello world' or the 10-million-line monster some of us work on, it's ten false positives max.
roywiggins 1 hour ago|||
Right, but they didn't actually test that, did they?
kennywinker 1 hour ago||
[dead]
tombert 59 minutes ago|||
It's weird, because when working on a big project, taking a break for a week or two, and returning to it, I will find a bug and will see hundreds of lines of code that are absolutely terrible, and I will tell myself "Tom you know better than to do this, this is a rookie mistake".

I think people forget that it's hard to be clever and tidy 100% of the time. Big programs take a lot of discipline and an understanding of the context that can be really hard to maintain. This is one of several reasons that my second draft or third draft of code is almost always considerably better than the first draft.

drc500free 1 hour ago|||
It’s like not differentiating between solving and verifying.

“PKI is easy to break if someone gives us the prime factors to start with!”

tucnak 56 minutes ago||
The point of contention is whether Mythos is the product of its intelligence or its harness; results like this, and other similar testimonies, call into question the too-dangerous-to-release marketing, and for good reason, too, because it is powerful marketing. Aisle merely says the intelligence is there in the small models. I say it's already clear that competent defenders could viably mimic, or perhaps even eclipse, what Mythos does by (a) making a better harness, and (b) simply spending more on batch jobs, bootstrapping, caching better, etc. You may not be doing this yourself, but you probably should.
tptacek 11 minutes ago||
Aisle and Anthropic are literally talking about two different problem spaces.
vmg12 1 hour ago||
The technique Anthropic uses was demonstrated by Nicholas Carlini in a talk he gave two weeks ago, and it's very simple: when asking LLMs to review code, ask them to focus their review on one file in a single session. Here is the video with the timestamps (watch through to ~5:30; they show two different ways of prompting Claude).

https://youtu.be/1sd26pWhfmg?t=204

https://youtu.be/1sd26pWhfmg?t=273

IMO the big "innovation" being shown by Mythos is the effectiveness of prompting LLMs to look for security vulnerabilities by focusing on specific files one at a time, and automating this prompting with a simple script.

Prompting Mythos to focus on a single file per session is why I suspect it cost Anthropic $20k to find some of the bugs in these codebases. I know this same technique is effective with Opus 4.6 and GPT 5.4 because I've been using it on my own code. If you just ask the agent to review your PR with a low-effort prompt, it won't be exhaustive; it won't actually read each changed file and look at how it interacts with the system as a whole. If the entire session is devoted to reviewing the changes in a single file, the LLM will do much more work reviewing it.

Edit: I changed my phrasing, it's not about restricting its entire context to one file but focusing it on one file but still allowing it to look at how other files interact with it.

mirsadm 1 hour ago|
How is that going to find anything that interacts across files?
appcustodian2 1 hour ago|||
I would think that it is still capable of exploring the codebase and reading other related files like any other coding agent already does.
vmg12 1 hour ago|||
My phrasing wasn't clear but you aren't telling it to only look at one specific file but to focus its review on one file. Updated my original comment.
antirez 2 hours ago||
Congrats: completely broken methodology, with a big conflict of interest. Giving specific bug hints, with an isolated function that is suspected to have bugs, is not the same task, NOR (crucially) is it a task you can decompose the bigger task into. It is basically impossible to segment code into pieces, provide the pieces to smaller models, and expect them to find all the bugs GPT 5.4 or other large models can find.

Second: the smarter the model, the less the pipeline matters. In the last couple of days I found tons of Redis bugs with a three-prompt, open-ended pipeline composed of a couple of shell scripts. Do you think I wasn't already trying with weaker models? I did, but it didn't work.

Don't trust what you read; you have access to frontier models for $20 a month. Download some C code, create a trivial pipeline that starts from a random file and looks for vulnerabilities, then add another step that validates each finding under a hard test (an ASAN crash, the ability to reach some secret, and so forth), and only then lets the problem be reported. Test for yourself what is possible. Don't let your fear make you blind.

Also, there is a big problem that makes the blog post's reasoning not just weak per se, but categorically weak: if small model X can find 80% of vulnerabilities, and there is a model Y that can find the other 20%, we need Y: the maintainers should make sure they have access to models that are at least as good as what the black hat folks have.
nsbsbdjdididi 58 minutes ago|
Thanks Dario, very cool!
woodruffw 2 hours ago||
> Those models recovered much of the same analysis

This is an essentially unquantifiable statement that makes the underlying claim harder to believe as an external party. What does “much” mean here? The end state of vulnerability exploitation is typically eminently quantifiable (in the form of a functional PoC that demonstrates an exploited end state), so the strong version of the claims here would ideally be backed up by those kinds of PoCs.

(Like other readers, I also find the trick of pre-feeding the smaller models the “relevant” code to be potentially disqualifying in a fair comparison. Discovering the relevant code is arguably one of the hardest parts of human VR.)

MaxLeiter 2 hours ago||
I think the key thing here is that they "isolated the relevant code".

If the exploits exist in e.g. one file, great. But many complex zerodays and exploits are chains of various bugs/behaviors in complex systems.

Important research but I don’t think it dispels anything about Mythos

slopinthebag 1 hour ago|
Did Mythos identify vulnerabilities across files? Afaik Mythos worked the same way, analysing a single file at a time.
lordofgibbons 2 hours ago||
Without showing false-positive rates this analysis is useless.

If your model says every line of your code has a bug, it will catch 100% of the bugs, but it's not useful at all. They tested false positives with only a single bug...

I'm not defending anthropic and openai either. Their numbers are garbage too since they don't produce false-positive rates either.

Why is this "analysis" making the rounds?

chopete3 34 minutes ago||
The impact of the Mythos announcement on cybersecurity firms (like Crowdstrike, ZScalar, etc.) is big enough (a 10-15% drop in stock price) that this pushback is expected.

Companies like Aisle.com (the blog) and other VAPT companies charge huge amounts to detect vulnerabilities.

If Claude Mythos becomes a simple GitHub hook, their value will be reduced.

That is a disruption.

bryantwolf 48 minutes ago|
All of this discourse seems very bizarre.

If smaller models can find these things, that doesn’t mean Mythos is worse than we thought. It means all models are more capable.

Also, if pointing models at files and giving them hints is all it takes to make them find all kinds of stuff, well, we can also spray and pray pretty well with LLMs, can’t we?

It just points to us finding a lot more stuff with only a little bit more sophistication.

Hopefully the growing pains are short and defense wins
