Posted by Ryan5453 8 hours ago

Project Glasswing: Securing critical software for the AI era(www.anthropic.com)
Related: Assessing Claude Mythos Preview's cybersecurity capabilities - https://news.ycombinator.com/item?id=47679155

System Card: Claude Mythos Preview [pdf] - https://news.ycombinator.com/item?id=47679258

Also: Anthropic's Project Glasswing sounds necessary to me - https://news.ycombinator.com/item?id=47681241

867 points | 391 comments
caycep 4 hours ago|
When do we get our Kuang Grade Mark Eleven icebreaker?
dakolli 7 hours ago||
I guess we can throw out the idea that AGI is going to be democratized. In this case a sufficiently powerful model has been built, and the first thing they do is give access only to AWS, Microsoft, Oracle, etc.

If AGI is going to be a thing, it's only going to be a thing for Fortune 100 companies.

However, my guess is this is mostly the typical scare tactic marketing that Dario loves to push about the dangers of AI.

supern0va 7 hours ago||
>However, my guess is this is mostly the typical scare tactic marketing that Dario loves to push about the dangers of AI.

Evaluate it yourself. Look at the exploits it discovered and decide whether you want to feel concerned that a new model was able to do that. The data is right there.

rvz 1 hour ago|||
Well, yes.

The research and testing of the model is done exclusively by the model's own authors, meaning it is neither independent nor verifiable. They want us to take their word for it, which we cannot, as they have an axe to grind against open-weight models.

This is marketing wrapped around a biased research paper.

dist-epoch 5 hours ago||
Elon Musk's plan for Macrohard is to replace all software companies with it once they get AGI.
dakolli 2 hours ago||
Thankfully he will be long dead before that happens. But of course that's his goal. Elon despises expensive engineers, and he yearns to get revenge for them costing him so much money over the years by replacing them.

A tech billionaire's biggest expense has been his engineering line item. They resent the workers who've collected a large percentage of their potential profits over the years; crushing all labor is their driving motivation.

throwaway13337 7 hours ago||
I really wanted to like anthropic. They seem the most moral, for real.

But at the core of anthropic seems to be the idea that they must protect humans from themselves.

They advocate government regulation of private open-model use. They want to centralize the holding of this power and ban those who aren't in the club from using it.

They, like most tech companies, seem to lack the idea that individual self-determination is important. Maybe the most important thing.

dralley 6 hours ago|
That is unequivocally true with some things. You don't want people exercising their "self-determination" to own private nukes.
throwaway13337 6 hours ago||
LLMs aren't nukes.

They're more like printing presses or engines. A great potential for production and destruction.

At their invention, I'm sure some people wanted to ensure only their friends got that kind of power too.

I wonder what world we would live in if they had gotten their way.

picafrost 8 hours ago||
> Anthropic has also been in ongoing discussions with US government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities. [...] We are ready to work with local, state, and federal representatives to assist in these tasks.

As Iran engages in a cyber attack campaign [1] today, the timing of this release seems pointed. A direct challenge to their supply chain risk designation.

[1] https://www.cisa.gov/news-events/cybersecurity-advisories/aa...

manbash 5 hours ago||
This will likely not see the light of day. It's the usual PR that gathers many "partnerships".

Expect to see lots of these in the upcoming months as the big companies scramble to keep from losing money.

kristofferR 4 hours ago||
This is pretty insane. A model so powerful that they felt releasing it publicly would create a netsec tsunami. AGI isn't here yet, but we don't need to get there for massive societal effects. How long will they hold off, especially as competitors get closer to releasing equally powerful models?
charcircuit 4 hours ago|
OpenAI did the same thing with GPT3, trying to scare people into thinking it would end the internet. OpenAI even reached out to someone who reproduced a weaker version of GPT3 and convinced him to change his mind about releasing it publicly due to how much "harm" it would cause.

These claims of how much harm the models will cause are always overblown.

baddash 7 hours ago||
> security product

> glass in the name

pugworthy 6 hours ago|
I had a teammate propose a new security layer for an industrial device which he wanted to call "Eggshell".
evanmoran 4 hours ago||
We shall call it Achilles, as Claude Mythos is its only weakness.
endunless 8 hours ago||
Another Anthropic PR release based on Anthropic’s own research, uncorroborated by any outside source, where the underlying, unquestioned fact is that their model can do something incredible.

> AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities

I like Anthropic, but these are becoming increasingly transparent attempts to inflate the perceived capability of their products.

NitpickLawyer 8 hours ago||
We'll find out in due time if their 0days were really that good. Apparently they're releasing hashes and will publish the details after they get patched. So far they've talked about DoS in OpenBSD, privesc in Linux and something in ffmpeg. Not groundbreaking, but not nothing either (for an allegedly autonomous discovery system).

While some stuff is obviously marketing fluff, the general direction doesn't surprise me at all, and it's obvious that as model capabilities increase, so does success at finding 0days. It was only a matter of time.
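The hash-then-reveal disclosure described above (publish a digest now, release the writeup after the patch ships) is a simple commitment scheme. A minimal sketch in Python; the report contents here are hypothetical, and a real scheme would also salt the writeup so guessable contents can't be brute-forced:

```python
import hashlib

def commit(writeup: bytes) -> str:
    """Publish this digest today; it reveals nothing about the bug."""
    return hashlib.sha256(writeup).hexdigest()

def verify(writeup: bytes, published_digest: str) -> bool:
    """After the patch, anyone can check the revealed writeup
    against the digest that was published earlier."""
    return hashlib.sha256(writeup).hexdigest() == published_digest

# Usage: commit now, reveal and verify later.
report = b"privesc writeup (hypothetical contents)"
digest = commit(report)
assert verify(report, digest)
assert not verify(b"tampered report", digest)
```

This proves the finder had the writeup at commit time without disclosing anything before the fix is out.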

conradkay 7 hours ago|||
I would've basically agreed with you until I'd seen this talk: https://www.youtube.com/watch?v=1sd26pWhfmg

Maybe a bad example since Nicholas works at Anthropic, but they're very accomplished and I doubt they're being misleading or even overly grandiose here

See the slide 13 minutes in, which makes it look to be quite a sudden change

endunless 7 hours ago|||
Very interesting, thanks for sharing.

> I doubt they're being misleading or even overly grandiose here

I think I agree.

We could definitely do much worse than Anthropic in terms of companies who can influence how these things develop.

bink 6 hours ago|||
I watched the talk as well and it's very interesting. But isn't this just a buffer overflow in the NFS client code? The way the LLM diagnosed the flaw, demonstrated the bug, and wrote an exploit is cool and all, but doesn't this still come down to the fact that the NFS client wasn't checking bounds before copying a bunch of data into a fixed length buffer? I'm not sure why this couldn't have been detected with static analysis.
conradkay 4 hours ago||
I guess so, but there's a ton of buffer overflow vulnerabilities in the wild, and ostensibly it wasn't detected by static analysis

The red team post goes over some more impressive finds, and says that there's hundreds more they can't disclose yet: https://red.anthropic.com/2026/mythos-preview/

Analemma_ 7 hours ago||
Cynicism always gets upvotes, but in this particular case, it seems fairly easy to verify whether they're telling the truth. If Mythos really did find a ton of vulnerabilities, those presumably have been reported to the vendors, and are currently in the responsible disclosure period while they get fixed, and then after that we'll see the CVEs.

If a bunch of CVEs do in fact get published a couple months (or whatever) from now, are you going to retract this take? It's not like their claims are totally implausible: the report about Firefox security from last month was completely genuine.

endunless 7 hours ago||
> If a bunch of CVEs do in fact get published a couple months (or whatever) from now, are you going to retract this take?

I would like to think that I would, yes.

What it comes down to, for me, is this: lately, when Anthropic publishes something like this article (another recent example is the AI-and-emotions one), I ask whether it makes their product look exceptionally good, especially to a casual observer just scanning the headlines or the summary. The answer is usually yes.

This feels especially true if the article tries to downplay that fact (they’re not _real_ emotions!) or is overall neutral to negative about AI in general, like this Glasswing one (AI can be a security threat!).

maxmaio 6 hours ago||
Seems important and terrifying. This morning Opus 4.6 was blowing my mind in Claude Code... onward and upward
copypaper 4 hours ago|
Yea, but can it secure systems from the unpatchable $5 wrench vulnerability?

https://xkcd.com/538/
