
Posted by donohoe 22 hours ago

Google says criminal hackers used AI to find a major software flaw (www.nytimes.com)
Unlocked: https://www.nytimes.com/2026/05/11/us/politics/google-hacker..., https://archive.ph/I4Ui5

https://apnews.com/article/google-ai-cybersecurity-exploitat...

https://www.cnbc.com/2026/05/11/google-thwarts-effort-hacker...

193 points | 141 comments
viktorcode 4 hours ago|
I expect this to only escalate with time, especially once more agent-written code is deployed.
Spacemolte 3 hours ago||
Phrasing like this immediately makes me wonder what Google is lobbying for.
nomilk 9 hours ago||
@dang it would be great if the HN link were the 'unlocked' version, i.e. instead of

https://www.nytimes.com/2026/05/11/us/politics/google-hacker...

this instead

https://www.nytimes.com/2026/05/11/us/politics/google-hacker...

(can read the article immediately; slightly less fuss)

wasabi991011 9 hours ago|
Just FYI, @username doesn't send any notifications on Hacker News, not even to the mods.

To contact the HN mods, you need to send them an email.

randyrand 8 hours ago||
At least, that's what we're told ;)
chrononaut 4 hours ago|||
And I imagine that, out of anyone on HN, dang probably searches frequently for instances of "dang". Sorry, dang.
latexr 3 hours ago|||
I can confirm the moderators (dang and tomhow) are very responsive by email.
atrocities 14 hours ago||
Can we link to the actual Google article instead of these editorialized articles about the article?

https://cloud.google.com/blog/topics/threat-intelligence/ai-...

srcreigh 10 hours ago||
> Google said in research published Monday

What research? Where is it published?

Jean-Papoulos 5 hours ago||
If this is true, I hope AI exploit-finding will force the industry to harden itself against supply-chain vulnerabilities.
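One concrete hardening step in that direction is pinning dependencies to known cryptographic hashes, so a tampered artifact fails verification before it is ever installed. A minimal sketch (the artifact bytes and pinned hash below are made-up placeholders, not from any real package):

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 matches the pinned value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Hypothetical artifact and its pinned hash (placeholder values).
artifact = b"example package contents"
pinned = hashlib.sha256(artifact).hexdigest()

print(verify_artifact(artifact, pinned))          # untampered artifact passes
print(verify_artifact(artifact + b"x", pinned))   # tampered artifact is rejected
```

This is essentially what lockfile-based tooling (e.g. pip's hash-checking mode) does for every dependency in a build.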
nsoonhui 11 hours ago||
There was a discussion a few days ago on "White House considers vetting AI models prior to release" (https://news.ycombinator.com/item?id=48013608).
markboo 8 hours ago||
In past decades, software's "firewall" was that advanced security and coding knowledge wasn't easy for just anyone to access; only a few of the smartest people at big-name companies and top orgs had it. Nowadays that knowledge is accessible to everyone via top LLMs, which wipes out the difference. I'd say future public software won't be safe anymore; maybe the concept of public software (like SaaS) will die, and software will only be private instead of public.
skeledrew 13 hours ago|
Wild that they think restricting access to models will help much. Access to Chinese models definitely won't be restricted, and they have enough capability to find exploits as well.