Posted by HieronymusBosch 1 day ago
Translating things to Rust manually was already a thing before LLMs came into the picture. Now with LLMs that's only going to get easier and faster. The long term value is going to come from getting on top of the mountain of technical debt in the form of existing C/C++ code bases that are responsible for the vast majority of memory exploits, buffer overflows, and other issues that, despite decades of attention, are still being found across major code bases on a regular basis.
Mozilla finding these issues comes on the back of a quarter century of some very competent engineers trying to do the right thing and using all the tools at their disposal to prevent these issues from happening. I have a lot of respect for that team and the contributions it has made over the years to improve tools, testing/verification practices, etc. The issue is not their effort or competence.
The job of taking an existing system that is well covered by tests, well documented/specified, etc., and producing a new one that can function as a drop-in replacement is now something that can be considered. A few years ago that would have translated into absolutely massive project cost and risk. Now it's something you can kick off on a Friday afternoon. Worst case, it doesn't work; best case, you end up with a much better implementation.
It's still early days. There are still a lot of quality issues with LLM generated code. But the success/fail rate will probably improve over time.
More tools for more people means more stuff being made across a wider range of projects.
That alone will make software safer.
But it also represents more easily available opportunities for blackhats to abuse against the projects where these tools were not being applied.
Ideally, you'd do a comprehensive scan of all source code (assuming the LLM-scanner finds everything during those scans) and fix all the reported defects.
Afterwards, any dev that commits code will run the LLM-scanner on the modified code (and affected areas) and fix any reported defects.
So the black-hat hacker would be shut out unless they get access to an LLM-scanner with better analysis than what the target project is using.
Major LLM-scanner vendors could give major projects priority access to new versions, so defects in the current source code are found before any other party could use them against the project or its users.
So black-hat hackers would be left with developing their own LLM-scanner that is better or more efficient than the existing major ones.
Given enough incentive, they might develop such a tool. Look at the market for zero-day vulnerabilities for smartphones, especially iPhones.
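The commit-time step in this workflow could look something like a git pre-commit hook. This is only a sketch: `llm-scan` is a placeholder name for whatever hypothetical LLM-based scanner a project adopts, and its flags are invented for illustration.

```shell
#!/bin/sh
# Hypothetical pre-commit hook for the workflow described above.
# "llm-scan" and its flags are stand-ins, not a real tool.

# Collect staged C/C++ files (added, copied, or modified).
changed=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(c|cc|cpp|h|hpp)$')
[ -z "$changed" ] && exit 0  # nothing relevant staged

# Scan only the modified files (plus whatever context the tool pulls in)
# and block the commit if any sufficiently severe defect is reported.
if ! llm-scan --files $changed --fail-on medium; then
    echo "llm-scan reported defects; fix them before committing." >&2
    exit 1
fi
```

The interesting design question is the `--fail-on` threshold: too strict and developers route around the hook, too lax and the black-hat's better scanner wins.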
For instance, in one of the included bugs (2022034) it figured out that a floating point value being sent over IPC could be modified by an attacker in such a way that it would be interpreted by the JS engine as an arbitrary pointer, due to the way the JS engine uses a clever representation of values called NaN-boxing. This is not beyond the realm of a human researcher to find, but it did nicely combine different domains of security.
As the person responsible for accidentally introducing that security problem (and then fixing it after the Mythos report), while I am aware of NaN-boxing (despite not being a JS engine expert), I was focused more on the other more complex parts of this IPC deserialization code so I hadn't really thought about the potential problems in this context. It is just a floating point value, what could go wrong?
Think it's more a case of Mythos raising widespread awareness that tireless LLMs can be weaponized to dig through code and find the one tiny flaw nobody spotted.
Of course, even the reports with flawed methodology could be suggesting that a great harness + weak model might achieve a similar level of results as a mediocre harness + strong model. But I'd want to see solid evidence for that.
There was a time when the entire transportation infrastructure in the US was built around horses. Even after cars were invented, they weren't obviously better than horses for most people, especially because there wasn't any infrastructure to support them. But the infrastructure and the cars kept improving to the point where cars were better for some people at some things, then suddenly better at most things. Then people stopped using horses, and we re-organized our entire transportation network around cars.
But there was never a revolutionary technological change. The technology of cars in the 1930s was the same fundamental technology as the cars in the 1890s. Just at some point it became "good enough" and that was it.
I think when people say that AI is a bubble, they are assuming that anything economically useful that LLMs cannot perform today is _qualitatively_ different from what LLMs can do right now, and that LLMs cannot do it even in theory without some major technological innovation. But I suspect there are a large number of valuable tasks that, once LLMs advance just a little bit more and the harnesses and infra around them improve a little bit more, will be completely taken over by LLMs.
see https://www.blackduck.com/static-analysis-tools-sast/coverit...
and for Firefox-related alleged defects, see https://scan.coverity.com/projects/firefox
You have to create an account to view the actual reported defects.
There are just over 5000 reported defects still outstanding. I don't know how many overlap with the reported 271 Mythos-reported defects.
You get bug bounties if you report the kind of bugs Mythos identified. There's a reason no-one collected bounties from the "5000 defects" Coverity identified.
The Mythos reports have several examples of chaining a whole bunch of logic in different parts of the program together to exploit something very subtle. The Coverity reports aren't anything like that. These tools aren't remotely in the same league or even universe.
It's hard to convince managers to spend money on static analysis tools (or any development tool).
Unless your company just got bad publicity for a bug and your devs come to you and demonstrate that a certain static analysis tool would have flagged that particular piece of code, most managers will let the bean-counter facet dominate the decision-making process.
Wired: Mozilla Used Anthropic's Mythos to Find and Fix 271 Bugs in Firefox (41 points, 18 comments) https://news.ycombinator.com/item?id=47853649
Ars: Mozilla: Anthropic's Mythos found 271 security vulnerabilities in Firefox 150 (33 points, 8 comments) https://news.ycombinator.com/item?id=47855384
https://hacks.mozilla.org/2026/05/behind-the-scenes-hardenin...
Well, depending on how you define "exploit": some might only read from arbitrary pointers, or just read out of bounds. Those would be useful primitives in a chain of vulnerabilities, not complete exploits themselves. You'll have to read through the first comments yourself, but if you're hoping that this is all nonsense and ignorable hype, you're going to be disappointed.