Posted by Ryan5453 12 hours ago

Project Glasswing: Securing critical software for the AI era(www.anthropic.com)
Related: Assessing Claude Mythos Preview's cybersecurity capabilities - https://news.ycombinator.com/item?id=47679155

System Card: Claude Mythos Preview [pdf] - https://news.ycombinator.com/item?id=47679258

Also: Anthropic's Project Glasswing sounds necessary to me - https://news.ycombinator.com/item?id=47681241

1036 points | 469 comments
Fokamul 10 hours ago|
+ NSA, CIA
nikcub 10 hours ago|
The Department of War's timing on picking fights couldn't be worse
0xbadcafebee 11 hours ago||
tl;dr we find vulns so we can help big companies fix their security holes quickly (and so they can profit off it)

This is a kludge. We already know how to prevent vulnerabilities: analysis, testing, following standard guidelines and practices for safe software and infrastructure. But nobody does these things, because it's extra work, time and money, and they're lazy and cheap. So the solution they want is to keep building shitty software, but find the bugs in code after the fact, and that'll be good enough.

This will never be as good as a software building code. We must demand our representatives in government pass laws requiring software be architected, built, and run according to a basic set of industry standard best practices to prevent security and safety failures.

For those claiming this is too much to ask, I ask you: What will you say the next time all of Delta Airlines goes down because a security company didn't run their application one time with a config file before pushing it to prod? What will happen the next time your social security number is taken from yet another random company entrusted with vital personal information and woefully inadequate security architecture?

There's no defense for this behavior. Yet things like this are going to keep happening, because we let it. Without a legal means to require this basic safety testing for critical infrastructure, these systems will continue to fail. Without enforcement of good practice, it remains optional. We can't keep letting safety and security be optional. It's not optional in the physical world; it shouldn't be in the virtual world.

endunless 11 hours ago||
Another Anthropic PR release based on Anthropic’s own research, uncorroborated by any outside source, where the underlying, unquestioned fact is that their model can do something incredible.

> AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities

I like Anthropic, but these are becoming increasingly transparent attempts to inflate the perceived capability of their products.

NitpickLawyer 11 hours ago||
We'll find out in due time if their 0days were really that good. Apparently they're releasing hashes and will publish the details after they get patched. So far they've talked about DoS in OpenBSD, privesc in Linux and something in ffmpeg. Not groundbreaking, but not nothing either (for an allegedly autonomous discovery system).
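
For anyone wondering what releasing hashes buys them: it's a commit-reveal scheme. Publish the digest of the writeup now, publish the writeup itself after the patch ships, and anyone can verify they had the details all along. A rough sketch in C with OpenSSL (the writeup string is made up, and a real commitment would also fold in a random nonce so short texts can't be brute-forced):

    /* commit_reveal.c: sketch of a hash commitment, not Anthropic's
       actual process. Build: cc commit_reveal.c -lcrypto */
    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Commit: today, publish only the digest (plus, in practice,
           a random nonce hashed in with the text). */
        const char *writeup = "vuln writeup + PoC would go here";
        unsigned char digest[SHA256_DIGEST_LENGTH];
        SHA256((const unsigned char *)writeup, strlen(writeup), digest);

        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
            printf("%02x", digest[i]);
        printf("\n");

        /* Reveal: after the patch ships, publish the writeup; anyone
           can rehash it and compare against the committed digest. */
        return 0;
    }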

While some of this is obviously marketing fluff, the general direction doesn't surprise me at all: as model capabilities increase, so does success in finding 0days. It was only a matter of time.

conradkay 10 hours ago|||
I would've basically agreed with you until I saw this talk: https://www.youtube.com/watch?v=1sd26pWhfmg

Maybe a bad example since Nicholas works at Anthropic, but they're very accomplished and I doubt they're being misleading or even overly grandiose here

See the slide 13 minutes in, which makes it look like quite a sudden change

endunless 10 hours ago|||
Very interesting, thanks for sharing.

> I doubt they're being misleading or even overly grandiose here

I think I agree.

We could definitely do much worse than Anthropic in terms of companies who can influence how these things develop.

bink 9 hours ago|||
I watched the talk as well and it's very interesting. But isn't this just a buffer overflow in the NFS client code? The way the LLM diagnosed the flaw, demonstrated the bug, and wrote an exploit is cool and all, but doesn't this still come down to the fact that the NFS client wasn't checking bounds before copying a bunch of data into a fixed-length buffer? I'm not sure why this couldn't have been detected with static analysis.
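
To be concrete about the pattern I mean, here's a made-up sketch in C (not the actual NFS client code, just the bug class):

    #include <stddef.h>
    #include <string.h>

    #define REPLY_BUF_LEN 256

    /* Vulnerable: `len` comes straight off the wire and is never
       validated, so a reply longer than REPLY_BUF_LEN overflows buf. */
    void handle_reply_bad(const char *data, size_t len) {
        char buf[REPLY_BUF_LEN];
        memcpy(buf, data, len);
        /* ... parse buf ... */
    }

    /* Fixed: reject oversized replies before copying. */
    int handle_reply_ok(const char *data, size_t len) {
        char buf[REPLY_BUF_LEN];
        if (len > sizeof buf)
            return -1;
        memcpy(buf, data, len);
        /* ... parse buf ... */
        return 0;
    }

An unvalidated length flowing from the network into memcpy is, in principle, exactly the kind of thing taint-tracking static analyzers exist to flag.
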
conradkay 8 hours ago||
I guess so, but there are a ton of buffer overflow vulnerabilities in the wild, and apparently this one wasn't detected by static analysis

The red team post goes over some more impressive finds, and says that there's hundreds more they can't disclose yet: https://red.anthropic.com/2026/mythos-preview/

Analemma_ 10 hours ago||
Cynicism always gets upvotes, but in this particular case it seems fairly easy to verify whether they're telling the truth. If Mythos really did find a ton of vulnerabilities, those presumably have been reported to the vendors and are currently under a responsible-disclosure embargo while they get fixed, and after that we'll see the CVEs.

If a bunch of CVEs do in fact get published a couple months (or whatever) from now, are you going to retract this take? It's not like their claims are totally implausible: the report about Firefox security from last month was completely genuine.

endunless 10 hours ago||
> If a bunch of CVEs do in fact get published a couple months (or whatever) from now, are you going to retract this take?

I would like to think that I would, yes.

What it comes down to, for me, is that lately, when Anthropic publishes something like this article (another recent example is the AI-and-emotions one), if I ask whether it makes their product look exceptionally good, especially to a casual observer just scanning the headlines or the summary, the answer is usually yes.

This feels especially true if the article tries to downplay that fact (they’re not _real_ emotions!) or is overall neutral to negative about AI in general, like this Glasswing one (AI can be a security threat!).

6thbit 9 hours ago||
This is silly and disingenuous. In a matter of days or weeks a competing lab will make public a model with capabilities beyond this “mythos” one.

Is this a huge fear-driven marketing stunt to get governments and corporations into dealing with Anthropic?

yusufozkan 11 hours ago||
but people here had told me llms just predict the next word
SirYandi 10 hours ago||
This sets off marketing-BS alarm bells. All the cosignatories so very obviously have a vested interest in AI stocks / sentiment. Perhaps not the Linux Foundation, although (I think) they rely on corporate donations to some extent.
solenoid0937 1 hour ago|
What interest does Apple have in boosting Mythos?
zb3 10 hours ago||
> On the global stage, state-sponsored attacks from actors like China, Iran, North Korea, and Russia have threatened to compromise the infrastructure that underpins both civilian life and military readiness.

Yeah, makes sense. Those countries are bad because they execute state-sponsored cyber attacks, the US and Israel on the other hand are good, they only execute state-sponsored defense.

anuramat 11 hours ago||
"oops, our latest unreleased model is so good at hacking, we're afraid of it! literal skynet! more literal than the last time!"

almost like they have an incentive to exaggerate

knowaveragejoe 11 hours ago|
I'm sure they do, yet the models really are getting scarily good at this. This talk changed my view on where we're actually at:

https://www.youtube.com/watch?v=1sd26pWhfmg
