Posted by quadrige 17 hours ago
I jest, but I did notice having more confidence to take on more ambitious work lately. We're all centaurs now.
My opinion is that it is over-hyped because, like any LLM, it requires a suitable human in the loop to keep it on the straight and narrow, and then to weed through the inevitable false positives and hallucinations.
Nicholas Carlini, for example, whose name is on many of the recent high-profile Mythos findings, is not just some random dude with a Claude sub on his credit card… he's an experienced security researcher.
Random inexperienced people thinking Mythos can replace the need for experienced pen-testers, auditors etc. are likely to be sorely disappointed if/when they get their hands on Mythos.
At first they will be delighted. So much money and time saved. When their adversaries get their hands on their system (with or without Mythos), then they'll be sorely disappointed.
> Apple spent five years building it. Probably billions of dollars too.
This seems higher than I'd expect.
(I’m sure they’re not lying, but we’re not learning anything here)
They simply have to show it against a beta version of macOS, frame it as unauthorized access, and maybe from locked mode if possible.
I agree, but it's the people I'm worried about.
I'm hearing anecdotes from all over about devs pushing LLM-generated code changes into production without retaining any knowledge of what it is they're pushing. The changes compound, their understanding of the codebase diminishes, and so their actions become riskier.
What's worse is that a lot of this behavior is being driven by leaders, whether directly (e.g. unrealistic velocity goals, promoting people based on hand-wavy "use AI" initiatives, etc) or indirectly (e.g. layoffs overloading remaining devs, putting inexperienced devs in senior roles, etc).
The world's gone mad and large swaths of the industry seem hellbent on rediscovering the security basics the hard way.
Will we now have leetcode of prompt writing?
I don’t think so.
An LLM can produce higher-quality documentation than most humans. If it's not already happening, when a new developer joins a team, they'll have an LLM produce whatever documentation they need, including why certain decisions were made.
It could also summarize years of email threads and code reviews that, let's face it, a new person wouldn’t be able to ingest anyway; it's not like a new developer gets to take a week off to get caught up on everything that happened before they got there. English not their first language? Well, the LLM can present the information in virtually any language required.
As the models continue to improve, they'll spot patterns in the code that a human wouldn’t be able to see.
Can bear some heavy weight.
LLM-generated documentation has such low information density that it's useless. Yes, it writes nice sentences… but it contains so much noise that, right now, reading the code is better documentation than every piece of LLM-generated documentation I've seen.
The same goes for LLM-generated articles. I close them after the second sentence because roughly 90% of it is useless filler.
Now compare that to this: https://slate.com/technology/2004/11/the-death-of-the-last-m...
I almost closed it when I read the first few sentences because these kinds of articles are usually useless, time-wasting nonsense. But this was different. This was old. Most sentences contained something new. Something worthy. (Of course, people also write unnecessarily long articles… looking at you, Atlantic.)
You can throw out almost everything by volume from LLM-generated documentation without losing any information.
Currently, if I smell (and it’s very easy to smell) LLM generated documentation or article, then I close it immediately, because it’s good for only one thing: wasting my time, for no good reason.
juniors have been writing code forever that is imperfect and not memorized by the people reviewing
isn't the important thing the mechanisms for maintaining the code?
When Sundar Pichai announces that 75% of all new code at Google is AI-generated, their stock price goes up. If he were to announce that 75% of all new code at Google is now written by junior engineers, this would trigger a massive sell-off and a lot of employees would resign.
Seniors are only part of the picture as team leads, or when it escalates after big screwups.
I mean we are literally in a thread about how the 4 trillion dollar company, literally the 3rd most valuable company in the world, with a core competency in software has, yet again, released a core product riddled with security defects for the 50th year in a row.
Commercial IT security is an industry that is incapable to a fault and has, so far, faced basically zero consequences for it.
Even more so in the future, when a software company can be launched by a farm of AI agents with a founder at the helm who has no clue about computing or security.
What's debatable is how many of those companies actually need ironclad security, because they are never realistically going to be targets of criminals and/or they have nothing valuable to steal/corrupt in the first place (other than the owner's pride).
This is true in America in many industries now, but most of the rest of the world (even the rest of the OECD) is still far behind.
Then you have the many companies in the UK, US, Canada, and EU that have compliance and regulatory requirements forcing them to keep some capability in house. That is changing with MDR services, though someone still has to interface with the MDR.
[1]: https://www.elastic.co/pdf/sans-soc-survey-2025.pdf [2]: https://github.com/jacobdjwilson/awesome-annual-security-rep...
I'd imagine this set is very similar to just "the set of all software in the world". Even before the AI stuff, it was a pretty good bet that any given piece of software had some vulnerability; it was just a question of how easy it was to find.
So much out-of-date software with known exploits is left running for years. The only reason there hasn't been a total disaster is that no one has tried to hack it yet.
Arm published the Memory Tagging Extension (MTE) specification in 2019 as a tool for hardware to help find memory corruption bugs. MTE is a memory tagging and tag-checking system, where every memory allocation is tagged with a secret. The hardware guarantees that later requests to access memory are granted only if the request contains the correct secret. If the secrets don’t match, the app crashes, and the event is logged. This allows developers to identify memory corruption bugs immediately as they occur.
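As a rough illustration of the tag-matching scheme described above, here is a toy Python simulation. This is not real MTE (real tags are 4-bit values carried in the pointer's top byte and checked by hardware); all names here are invented for the sketch:

```python
import secrets

class TaggedHeap:
    """Toy model of MTE-style memory tagging: every allocation gets a
    random 4-bit tag, and every access must present the matching tag."""

    def __init__(self):
        self.allocations = {}  # address -> (tag, value)
        self.next_addr = 0

    def alloc(self, value):
        addr, tag = self.next_addr, secrets.randbelow(16)  # 4-bit tag
        self.allocations[addr] = (tag, value)
        self.next_addr += 1
        return (addr, tag)  # the "pointer" carries the tag, as in MTE

    def free(self, ptr):
        # Retag on free so stale pointers no longer match.
        # (Real MTE retags randomly, so a stale tag *can* collide.)
        addr, _ = ptr
        tag, value = self.allocations[addr]
        self.allocations[addr] = ((tag + 1) % 16, value)

    def load(self, ptr):
        addr, tag = ptr
        real_tag, value = self.allocations[addr]
        if tag != real_tag:
            # Hardware would fault here, crash the app, and log the event.
            raise MemoryError("tag check failed")
        return value

heap = TaggedHeap()
p = heap.alloc("secret")
print(heap.load(p))   # tags match: access granted
heap.free(p)
try:
    heap.load(p)      # use-after-free: stale tag, access denied
except MemoryError:
    print("tag mismatch caught")
```

The point of the sketch is only the invariant: a pointer is useful exclusively while its embedded tag matches the allocation's current tag, which is what turns memory-corruption bugs into immediate, loggable crashes.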
https://support.apple.com/guide/security/operating-system-in...
(https://www.usenix.org/publications/loginonline/data-only-at...)
This makes more sense. You don't trigger MTE because you're not doing anything that forces MTE to take action; what the program is accessing isn't actually changing.
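A minimal sketch of why a data-only attack slips past tag checking: every access goes through a live, correctly tagged allocation, so an MTE-style check has nothing to reject. (Toy Python with an invented structure, not anyone's actual code.)

```python
# One live, correctly "tagged" allocation: the session object.
session = {"user": "guest", "is_admin": False}

def check_access(s):
    # The program's control flow never changes; only the data it
    # reads has been altered by the attacker.
    return "granted" if s["is_admin"] else "denied"

print(check_access(session))   # denied

# The "exploit": an in-bounds write through a legitimate reference.
# No out-of-bounds access, no stale pointer, no tag mismatch --
# nothing for memory tagging to catch.
session["is_admin"] = True

print(check_access(session))   # granted
```

That is the essence of a data-only attack: the memory-safety machinery is sound, but the security-relevant *values* inside valid allocations are what get corrupted.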
My other question would be: why didn't Apple use -fbounds-safety here? They've been deploying it aggressively everywhere else.
MTE plus -fbounds-safety everywhere should lead to an extremely hardened OS.
1. It’s too performance-sensitive
Or
2. The OS is so darn large it’s hard to recompile everything
A simultaneous total world build is relatively rare (is that needed here?), but it does happen. Sometimes new compiler versions or features need this.
It's not the first time bugs have gotten past MTE; it happened with Google Pixel last year ... https://github.blog/security/vulnerability-research/bypassin...
1. Any given system has a finite number of findable vulnerabilities.
2. All findable vulnerabilities are fixable (if not in software then with a new hardware revision).
3. Fixing a vulnerability while keeping the same intended functionality introduces on average less than 1 other findable vulnerability.
4. It is possible to cease adding new features to a system and from that point forward only focus on fixing vulnerabilities.
If all 4 are true, then perfect security seems possible, in some sense. I think some vulnerabilities might not be fixable, if you include things like the idea that users can be tricked into revealing their passwords. If you restrict the definition of vulnerability to some narrower meaning that still captures most of what people mean when they say computer vulnerability, then I think those 4 statements are probably true.
Perfect security might be near impossible in practice because vulnerabilities will get more difficult to find and fix over time, but I think we should expect the discovery of vulnerabilities to eventually become arbitrarily slow in a hypothetical system that prioritized security above all else.
If you imagine you had a vulnerability scanner as fast and convenient as a linter, it would be much cheaper to write secure code right away. Probably not perfectly secure, but still secure enough to make sure finding exploits stays expensive.
However, it is no different from the Linux kernel: just because Rust is now allowed, the world hasn't been rewritten, and no sane person is going to do a Claude rewrite of the kernel.
https://docs.swift.org/compiler/documentation/diagnostics/st...