
Posted by surprisetalk 11 hours ago

AI cybersecurity is not proof of work(antirez.com)
Recent and related: Cybersecurity looks like proof of work now - https://news.ycombinator.com/item?id=47769089 - (198 comments)
168 points | 74 comments | page 2
baxtr 8 hours ago|
Interestingly enough: earlier today this discussion was trending: https://news.ycombinator.com/item?id=47769089 (Cybersecurity looks like proof of work now)
RugnirViking 8 hours ago|
the article here is pretty clearly a response to the one you posted
onionisafruit 8 hours ago||
It’s only clear if you know it exists, and now I know it exists thanks to gp.
csmantle 7 hours ago||
> So, cyber security of tomorrow will not be like proof of work in the sense of "more GPU wins"; instead, better models, and faster access to such models, will win.

It's not proof of work, but proof of financial capacity.

The big companies are turning access to high-quality token generators (through their services) into a means of production. We're all going direct to Utopia, we're all going direct the other way.

tptacek 7 hours ago|
There's no "proof" involved. That's the problem with the analogy. It's not about how much "financial capacity" you have. It's about how many bugs you find or fix. The bugs are there whether the models help attackers/defenders or not.
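For context on the distinction being drawn: "proof of work" in the literal (Bitcoin-style) sense means producing a certificate that is expensive to find but cheap for anyone to verify. A minimal sketch of that property, using a generic SHA-256 puzzle (the `find_nonce`/`verify` names and the difficulty scheme are illustrative, not any particular protocol's):

```python
import hashlib

def find_nonce(data: bytes, difficulty: int) -> int:
    """Brute-force a nonce whose SHA-256 hash starts with
    `difficulty` leading zero hex digits. Expected cost grows
    exponentially with difficulty -- this is the 'work'."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(data: bytes, nonce: int, difficulty: int) -> bool:
    """Verification is one hash, no matter how much work finding
    the nonce took -- this asymmetry is the 'proof'."""
    digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = find_nonce(b"block", 4)
assert verify(b"block", nonce, 4)
```

Bug-finding has no analogous property: there is no cheap verifier that converts raw compute spent into a guaranteed, checkable result, which is the gap in the analogy.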
nottorp 8 hours ago||
Seriously. We need a BuSab for IT.

This continuous rush is not healthy. npm updates, replies to articles that barely made HN 12 hours ago, anything like that. It's not healthy.

Slow down.

WesolyKubeczek 8 hours ago|
Amtrak is slow and expensive, but the hype train is free!
riteshkew1001 5 hours ago||
Calling AI vuln-finding "hallucination plus luck" is generous; a lot of human pentesting fits the same description.
andersmurphy 8 hours ago||
> What happens is that weak models hallucinate (sometimes causally hitting a real problem)

So bigger models hallucinate better, causally hitting more real problems?

douglaswlance 6 hours ago||
you get better models with more compute.

It's not just PoW at inference. It's PoW of inference + training.

slopinthebag 4 hours ago||
> Don't trust who says that weak models can find the OpenBSD SACK bug. I tried it myself.

This is exactly the argument AI skeptics make, btw. Also, you say you tried GPT-OSS 120B; that's like me proclaiming LLM coding doesn't work because I tried putting GPT-3.5 in Claude Code. Try it with GLM 5, Qwen, etc. Or improve your harness :)

redwood 8 hours ago||
What seems to be getting lost in the noise on this topic is that security has always been about defense in depth and mitigating controls, in other words applied paranoia. There are always threat vectors, and the shape of those vectors is changing more rapidly than ever before, which is certainly exhausting for everyone. But don't forget the fundamentals here.
EGreg 5 hours ago||
This just proves that we should stop using old environments and operating systems for mission-critical work, and build a completely new environment from the ground up that's secure by default, instead of trying to fix leaky buckets.