Posted by chizhik-pyzhik 17 hours ago
Hopefully!
But AI-deniers are telling us there is nothing to see ...
What else can they do, assuming the computers behind the router are all patched up?
It's definitely bad.
I never understood why some projects get extremely popular and others don't. I also suspect by now that the vendors of those tools "too dangerous to release" scan all projects but only contact the ones where they found issues, so they never have to admit their tool turned up nothing.
It's in popular projects.
It is a distorted view, because projects become popular by being permissive about commits, bugs, and maintainers.
If I were to start a new project, I'd let anyone in and blog about 100 exploits every year, because that is exactly what people want. I'm serious.
Why can't machine learning write a flawless product from scratch?
sure buddy
Flawless software is hard for an LLM to write, because all the programs they have been trained on are flawed as well.
As a fun exercise, you could give a coding agent a hunk of non-trivial software (the Linux kernel, postgresql, whatever) and tell it over and over again: find a flaw in this, fix it. I'm pretty sure it will never tell you "now it's perfect", at least not reproducibly.
Whatever the answer to that conundrum might be, LLMs are trained on these patterns and replicate them pretty faithfully.
The CVEs here include their fair share of silly C problems, but the fixes also bring more rigid input validation and handling. That stricter validation rejects input which may even be valid by the spec, but is entirely problematic in practice.
As an example, look at how many valid XML documents are treated as unsafe and refused by parsers, e.g. because of recursive entity expansion. Strictly speaking, that makes those parsers neither flawless nor spec-conformant.
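To make the entity-expansion case concrete, here's a sketch of the classic "billion laughs" payload (the entity names and fan-out of 10 follow the well-known example): a document of under a kilobyte whose full expansion would be about 3 GB, which is why many parsers refuse it even though it's perfectly valid XML.

```python
# Build the classic "billion laughs" XML bomb: entity lolN expands to
# ten copies of lol(N-1), so a fan-out of 10 over 9 levels yields
# 10**9 copies of the 3-byte leaf "lol" -- ~3 GB from a ~1 KB input.
FANOUT, DEPTH = 10, 9

entities = ['<!ENTITY lol "lol">']
for i in range(1, DEPTH + 1):
    prev = "&lol;" if i == 1 else f"&lol{i - 1};"
    entities.append(f'<!ENTITY lol{i} "{prev * FANOUT}">')

doc = (
    '<?xml version="1.0"?>\n<!DOCTYPE lolz [\n '
    + "\n ".join(entities)
    + f"\n]>\n<lolz>&lol{DEPTH};</lolz>"
)

# Size on disk vs. size after naive entity expansion.
expanded_bytes = len("lol") * FANOUT ** DEPTH
print(len(doc), expanded_bytes)
```

A parser that expands entities naively has to allocate those 3 GB; one that refuses the document is safe but, pedantically, no longer implements the full spec. (Recent versions of expat, for instance, cap the amplification factor rather than expand unconditionally.)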
Or, my favorite bait: there should be a maximum length limit on passwords. Why would you ever need a kilobyte-sized password?
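There is a practical reason for the cap: slow password KDFs applied to attacker-supplied megabyte "passwords" are a cheap DoS vector (Django added exactly such a length cap after a 2013 DoS report against its hashers). A minimal sketch, with a hypothetical 1 KB limit and PBKDF2 standing in for whatever hasher a real system uses:

```python
import hashlib

# Hypothetical server-side cap; the exact number is a policy choice.
MAX_PASSWORD_BYTES = 1024

def hash_password(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    pw = password.encode("utf-8")
    if len(pw) > MAX_PASSWORD_BYTES:
        # Reject oversized input instead of silently truncating it
        # (bcrypt famously truncates at 72 bytes, which surprises people).
        raise ValueError("password too long")
    return hashlib.pbkdf2_hmac("sha256", pw, salt, iterations)
```

The point of rejecting rather than truncating is that truncation quietly accepts a different password than the user typed; a hard limit at least fails loudly.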