
Posted by Ryan5453 1 day ago

Project Glasswing: Securing critical software for the AI era (www.anthropic.com)
Related: Assessing Claude Mythos Preview's cybersecurity capabilities - https://news.ycombinator.com/item?id=47679155

System Card: Claude Mythos Preview [pdf] - https://news.ycombinator.com/item?id=47679258

Also: Anthropic's Project Glasswing sounds necessary to me - https://news.ycombinator.com/item?id=47681241

1443 points | 753 comments
lasky 11 hours ago|
The hype machine is alive and well in Silicon Valley.
LoganDark 1 day ago||
It's nice to know that they continue to be committed to advertising how safe and ethical they are.
raldi 23 hours ago||
In what ways is Anthropic different from a hypothetical frontier lab that you would characterize as legitimately safe and ethical?
LoganDark 23 hours ago|||
I'm just a little frustrated they keep going on about how safe and ethical they are for keeping the more advanced capabilities from us. I wish they would wait to make an announcement until they have something to show, rather than this constant almost gloating.
0x3f 22 hours ago|||
Its existence is possible.
rvz 23 hours ago||
They are not our friends and are the exact opposite of what they are preaching to be.

Not to mention their CEO's scaremongering and his active attempts to get the government to ban local AI models running on your machine.

SilverElfin 23 hours ago|||
I agree that attempting to ban or censor local AI models is not appropriate. At the same time, they do seem far more ethical and less dangerous than other AI companies. And I include big tech in that - a bunch of greedy companies that just want to abuse their monopoli … I mean moats.
simianwords 23 hours ago|||
How would you expect them to behave if they were your friends?
ethin 23 hours ago|||
IMO (not the GP), but if Anthropic were my friends I would expect them to publish research that didn't just inflate the company itself and that was both reproducible and verifiable. Not just puff pieces that describe how ethical they are. After all, if a company has to remind you in every PR piece that they are ethical and safety-focused, there is a decent probability that they are the exact opposite.
Miraste 23 hours ago|||
They are a for-profit company, working on a project to eliminate all human labor and take the gains for themselves, with no plan to allow for the survival of anyone who works for a living. They're definitionally not your friends. While they remain for-profit, their specific behaviors don't really matter.
simianwords 23 hours ago||
I work for a tech company that eliminates a form of human labor, and it remains for-profit.
Miraste 23 hours ago||
Sure, most tech companies eliminate some form of human labor. Anthropic aims to eliminate all human labor, which is very different.
4qt23 20 hours ago||
Software has been doing fine without Misanthropic. These automated tools find very little. They selected the partners because they, too, want to keep up the illusion that AI works.

Whenever a company pivots to "cyber" rhetoric, it is a clear indication that they are selling snake oil.

Secure your girl school target selectors first.

borski 20 hours ago||
This is a comment from someone who has never used these tools for vulnerability research. That much is very clear.
kass34 16 hours ago||
[dead]
emceestork 20 hours ago||
Account created 6 minutes ago...
3jash 20 hours ago||
[flagged]
tdaltonc 21 hours ago||
> Mythos finds bug.

> NSA demands that bug stays in place and gags Anthropic.

> Anthropic releases Mythos.

Then what? Is a huge share of the US zero-day stockpiles about to be disarmed or proliferated?

123malware321 10 hours ago||
I don't know anyone reviewing these tools who is impressed and who also earns their paycheck doing bug bounties and finding actual CVEs.

Generally these things only find memory corruption stuff which is almost never the type of bug you're looking for, and it costs a lot which negates your bug bounty payout.

Each time they preach, ooh, 0day found, bla bla.

In this domain you need to be specific or you are just yelling clickbait into the wind.

What type of 0day, what did the exploit actually look like.

'complex 4-stage with heap spray' - that sounds really simple, actually. Complex, for memory corruption, means going multi-process, maybe crossing the kernel/usermode boundary, or the crazy 18-20 stage exploits people pop against things like MS Teams etc.

Even if there were some cool results from any of these projects, the amount of nonsense blurted out in articles around them really makes them seem like useless tools, overmarketed by a bunch of excited children who don't really know what they are doing.

Get a dopamine hit, post on reddit, LOL. Hacking the planet (powered by Claude -_-)

manbash 20 hours ago||
This will likely not see the light of day. It's the usual PR that gathers many "partnerships".

Expect to see lots of these in the upcoming months as the big companies scramble to keep from losing money.

cmiles8 8 hours ago||
I’m sure it’s a decent model. But it’s also clear folks are running out of runway and desperate to find something that sticks and keeps the party going.

All the promises of amazing things in everyday work never happened. Companies consistently say they're seeing no ROI. The AI crowd now hard pivots to cyber and, right out of the Palantir playbook, runs with the "our stuff is so amazing we can't talk about it, but trust us bro" move that isn't really fooling anyone.

Meanwhile the folks let in on the “secret” are those that also desperately need for the hype to continue to protect their own positions in this game.

Look forward to a model upgrade but the hype fluff games are getting old. Watching OpenAI completely crash out of pole position on the hype train though has been at least amusing.

imranahmedjak 20 hours ago||
Building a neighborhood data platform that scores every US ZIP code using Census, FBI, and EPA data. Also running a job aggregator that fetches 37K+ jobs daily from 17 sources. Both free, both Node.js + Express.
throwaway13337 23 hours ago|
I really wanted to like Anthropic. They seem the most moral, for real.

But at the core of Anthropic seems to be the idea that they must protect humans from themselves.

They advocate government regulations of private open model use. They want to centralize the holding of this power and ban those that aren't in the club from use.

They, like most tech companies, seem to lack the idea that individual self-determination is important. Maybe the most important thing.

dralley 22 hours ago|
That is unequivocally true for some things. You don't want people exercising their "self-determination" to own private nukes.
throwaway13337 21 hours ago||
LLMs aren't nukes.

They're more like printing presses or engines. A great potential for production and destruction.

At their invention, I'm sure some people wanted to ensure only their friends got that kind of power too.

I wonder what world we would live in if they had gotten their way.