For whatever reason, distro maintainers working for free seem a lot more competent with security than billion dollar hardware vendors
I don't believe that these billion dollar hardware vendors are really incompetent with security. It's rather that the distro maintainers care quite a bit about security, while the hardware vendors consider these security concerns to be of much smaller importance; for their business it is likely much more important to bring the next hardware generation to market as fast as possible.
In other words: distro maintainers and hardware vendors are simply interested in very different things and thus prioritize things very differently.
The only other software I regularly use that I think is overall high quality and enjoy using is the JetBrains IDEs and the Telegram mobile app (though the Premium upselling has gotten kinda gross the past few years).
It's shortsighted, but modern capitalism is more shortsighted than Mr. Magoo.
https://www.legalexaminer.com/lestaffer/legal/gm-recall-defe...
Will fixing this issue bring in more revenue than ignoring it and building a new feature? Or fixing a different issue? If the answer is "no" then the answer is that it doesn't get fixed.
I don't agree with this, because it pre-supposes that there's a limited number of engineers available. The question isn't "shall I pull engineer X off project Y so that he can fix security bugs?", it's "shall I hire an additional engineer to fix security bugs?". The comment above mine suggests the answer to that question is "no, because it's too expensive to do that compared to just paying to clean up security breaches after they happen", which is what I was questioning in my first comment.
When framed correctly (there's effectively an unlimited labour supply for most companies, and effectively a limited demand for staff) then the question becomes "shall we hire an engineer to fix security bugs when we don't need an engineer for anything else?".
In effect, there is, yes. At the very least, there’s more high value work that most companies can do than there are engineers to do said work. There’s a reason literally every leadership course teaches you how to say “no” over and over again.
I think it's more realistic that in any sufficiently large company the bureaucracy is so unwieldy that sensible decisions become difficult to make and implement.
An absurd amount of weight is carried by a small number of very influential people that can and want to just do a good job.
And a signal that they're the best is that you don't see them in the news.
We need more very influential people who aren't newsworthy.
With Linux itself, it helps that they are working in public (whether volunteering or as a job), and you'd be sacked not in a closed-door meeting, but on LKML for everyone to see if you screw up this badly.
Apt has had issues where captive portals corrupt things. GPG has had tons of vulnerabilities in signature verification (but to be fair here, Apt is being migrated to Sequoia, which is way better).
But these distros are still exposing a much larger attack surface compared to just a TLS stack.
lol
A lot of people have brought this up over the years:
https://www.reddit.com/r/AMDHelp/comments/ysqvsv/amd_autoupd...
(I'm fairly sure I have even mentioned AMD doing this on HN in the past.)
AMD is also not the only one. Gigabyte, ASUS, many other autoupdaters and installers fail without HTTP access. I couldn't even set up my HomePod without allowing it to fetch HTTP resources.
From my own perspective allowing unencrypted outgoing HTTP is a clear indication of problematic software. Even unencrypted (but maybe signed) CDN connections are at minimum a privacy leak. Potentially it's even a way for a MITM to exploit the HTTP stack, some content parser or the application's own handling. TLS stacks are a significantly harder target in comparison.
For signed payloads there is no difference: you're trusting <client>'s authentication code to read a blob and a signature and validate them against a public key. For package managers that usually means trusting only gpg - at the very least no less trustworthy than the many TLS and HTTP libraries out there.
Assuming this all came through unencrypted HTTP:
- you're also trusting that the client's HTTP stack is parsing HTTP content correctly
- for that matter, you're also trusting that the server (and any man-in-the-middle) is generating valid HTTP responses
- you're also trusting that the client's response parser doesn't have a vulnerability (and not, say, ignoring some "missing closing bracket" or something)
- you're also trusting that the client is parsing the correct signature (and not, say, some other signature that was tacked-on later)
It's trivially easy to disassemble software to find vulnerabilities like those, though. So it's a lot of trust given for an untrusted software stack.
This is an improvement: HTTP/1.1 alone is a trivial protocol, whereas the alternative is trusting the client's much more complicated TLS stack and its HTTP stack.
For technical reasons, unencrypted HTTP in practice also always means the simpler (and, for bulk transfers, more performant) HTTP/1.1, as standard HTTP/2 dictates TLS, with the special non-TLS variant ("h2c") not being widely supported.
> for that matter, you're also trusting that the server (and any man-in-the-middle) is generating valid HTTP responses
You don't, just like you don't trust a TLS server to generate valid TLS (and tunneled HTTP) messages.
> you're also trusting that the client's response parser doesn't have a vulnerability (and not, say, ignoring some "missing closing bracket" or something)
You don't. Authentication 101 (which also applies to how TLS itself works): authenticity is validated before inspecting or interacting with content. These are the same rules TLS needs to follow when it authenticates its own messages.
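To make the "validate before inspecting" rule concrete, here's a minimal Python sketch. All names are hypothetical, and HMAC stands in for the real asymmetric signature (actual package managers use GPG/Ed25519 keys); the point is purely the ordering: the raw bytes are authenticated before any parser ever sees them.

```python
import hashlib
import hmac
import json

# Stand-in for an asymmetric signature check; a shared HMAC key keeps
# this sketch dependency-free. Real systems verify against a public key.
KEY = b"trusted-publisher-key"

def sign(blob: bytes) -> bytes:
    return hmac.new(KEY, blob, hashlib.sha256).digest()

def verify_then_parse(blob: bytes, signature: bytes) -> dict:
    # Authenticate the raw bytes FIRST; only then hand them to a parser.
    if not hmac.compare_digest(sign(blob), signature):
        raise ValueError("signature invalid: refusing to parse")
    return json.loads(blob)

payload = json.dumps({"pkg": "example", "version": "1.2"}).encode()
good_sig = sign(payload)
print(verify_then_parse(payload, good_sig)["pkg"])  # example

# A tampered blob never reaches the parser.
try:
    verify_then_parse(payload + b" ", good_sig)
except ValueError as e:
    print(e)  # signature invalid: refusing to parse
```

A buggy content parser is unreachable by an attacker under this ordering, which is the whole argument above.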
Furthermore, TLS does nothing to protect you against a server delivering malicious files (e.g., a rogue maintainer or mirror intentionally giving you borked files).
> you're also trusting that the client is parsing the correct signature (and not, say, some other signature that was tacked-on later)
You don't, as the signature must be authentic from a trusted author (the specific maintainer of the specific package for example). The server or attacker is unable to craft valid signatures, so something "tacked-on" just gets rejected as invalid - just like if you mess with a TLS message.
> It's trivially easy to disassemble software to find vulnerabilities like those, though. So it's a lot of trust given for an untrusted software stack.
The basis of your trust is invalid and misplaced: Not only is TLS not providing additional security here, TLS is the more complex, fragile and historically vulnerable beast.
The only non-privacy risk of using non-TLS mirrors is that a MITM could keep serving you an old version of the mirror's contents (which is valid and signed by the maintainers), withholding an update without you knowing. But such a MITM can also just fail your connection to a TLS mirror, and then you can't update either, so no: it's just privacy.
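For what it's worth, apt already has a mitigation for exactly this withholding attack: Release files can carry a Valid-Until field, and apt (via Acquire::Check-Valid-Until) refuses metadata past that date, so a replayed old-but-validly-signed index only works until the window expires. A minimal Python sketch of the idea, assuming a hypothetical apt-style Release header:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

# Sketch of apt's staleness check: repository metadata carries a
# Valid-Until timestamp, so a MITM replaying old (validly signed)
# indexes is caught once that window expires.
def metadata_is_fresh(release_text: str, now: datetime) -> bool:
    for line in release_text.splitlines():
        if line.startswith("Valid-Until:"):
            deadline = parsedate_to_datetime(line.split(":", 1)[1].strip())
            return now <= deadline
    return False  # no expiry field at all: treat as stale

release = "Origin: Example\nValid-Until: Thu, 01 Jan 2026 00:00:00 +0000\n"
print(metadata_is_fresh(release, datetime(2025, 6, 1, tzinfo=timezone.utc)))  # True
print(metadata_is_fresh(release, datetime(2026, 6, 1, tzinfo=timezone.utc)))  # False
```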
Eh? CWE-444 would beg to differ: https://cwe.mitre.org/data/definitions/444.html
> the alternative is trusting the client's much more complicated TLS stack and its HTTP stack.
An attacker doesn't get to attack the client's HTTP stack without first piercing the protection offered by TLS.
In comparison, even OpenSSL is a really difficult target; it'd be massive news if you succeeded. Not so much for GPG. There are even formally verified TLS implementations if you want to go that far. PGP implementations barely compare.
Fundamentally, TLS is also tremendously more trustworthy (formally!) than anything PGP. There is no good reason to keep exposing all of this to potential middlemen instead of just using TLS. There have been real bugs where captive portals unintentionally broke Apt. It's such an _unnecessary_ risk.
TLS leaves any MITM very little to play with in comparison.
Package managers generally enforce authenticity through signed indexes and (directly or indirectly) signed packages, although be skeptical when dealing with new/minor package managers as they could have gotten this wrong.
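To sketch what that signed-index chain typically looks like (all names hypothetical, and HMAC standing in for the real asymmetric signature to keep this dependency-free): the distro signs one index listing each package's hash, and individual packages are then verified against that index, so mirrors never need to be trusted.

```python
import hashlib
import hmac

INDEX_KEY = b"distro-signing-key"  # stand-in for the distro's real key

def verify_index(index: bytes, sig: bytes) -> dict:
    # Step 1: authenticate the index itself before reading it.
    if not hmac.compare_digest(
        hmac.new(INDEX_KEY, index, hashlib.sha256).digest(), sig
    ):
        raise ValueError("index signature invalid")
    # Index format (hypothetical): one "name sha256hex" pair per line.
    return dict(line.split() for line in index.decode().splitlines())

def verify_package(name: str, data: bytes, hashes: dict) -> None:
    # Step 2: every package must match the hash the signed index lists.
    if hashlib.sha256(data).hexdigest() != hashes.get(name):
        raise ValueError(f"{name}: hash mismatch, refusing to install")

pkg = b"pretend this is a .deb"
index = f"example.deb {hashlib.sha256(pkg).hexdigest()}".encode()
sig = hmac.new(INDEX_KEY, index, hashlib.sha256).digest()

hashes = verify_index(index, sig)
verify_package("example.deb", pkg, hashes)  # fine
try:
    verify_package("example.deb", pkg + b"!", hashes)  # tampered in transit
except ValueError as e:
    print(e)  # example.deb: hash mismatch, refusing to install
```

The two-level design is why an untrusted HTTP mirror (or a MITM) can corrupt a download but can't get it installed.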
I don't think I've ever seen something this exploitable that is so prevalent. Like couldn't you just sit in an airport and open up a wifi hotspot and almost immediately own anyone with ATI graphics?
So you have to be on a shady hotspot without a VPN, AMD has to have recently published an update, and your update scheduler has to be timed to run.
That would be a little less than “immediately own anyone with ATI”.
That takes it out of "immediately own anyone" territory, but it does mean an attacker only needs to have their malicious HTTP capture up and detectable during the actual attack window.
Then, of course, if you're also acting as their DNS server you can send them to the wrong update-check server in the first place. I wonder if the updater validates the certificate.
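On that last question: using Python's stdlib as a reference point, a default TLS context gives you hostname checking and chain verification for free, which is exactly what a DNS-level redirect runs into (the attacker has no valid certificate for the real hostname). Updaters only get burned here if they explicitly opt out. A small sketch; the `insecure` context is a hypothetical bad configuration, not anything AMD is known to ship:

```python
import ssl

# Defaults: hostname checking on, certificate chain required.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                    # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True

# The dangerous opt-out some clients ship with. (Order matters:
# hostname checking must be disabled before verification can be
# turned off, or Python raises a ValueError.)
insecure = ssl.create_default_context()
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE
print(insecure.check_hostname)               # False
```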
I guess that's how you prevent anything: just make it illegal, and the exploit becomes an unintended illegal feature, like squatting on low-frequency radio bands.
But it seems pretty trivial for some bad actor at local ISP.
SSID "Sydney Airport Wifi" or the like.
Have you ever gone to a crowded public place and setup an open hotspot?
Some of us do not enable automatic updates (automatic updates have been the peak of stupidity since the Win98 era). And when you sit in an airport, you don't update all your programs.
1. Home router compromised, DHCP/DNS settings changed.
2. Report a wrong (malicious) IP for ww2.ati.com.
3. For HTTP traffic, it snoops and looks for opportunities to inject a malicious binary.
4. HTTPS traffic is passed through unchanged.
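For completeness, the standard defense against step 3, regardless of transport, is to verify the download against a hash obtained over an authenticated channel (a signed index, or the vendor's page over TLS) before executing it. A minimal sketch with hypothetical payloads:

```python
import hashlib

def check_download(blob: bytes, expected_sha256: str) -> bool:
    # Any byte injected by a MITM changes the digest, so a mismatch
    # means the blob must not be executed.
    return hashlib.sha256(blob).hexdigest() == expected_sha256

original = b"installer v1.0"
expected = hashlib.sha256(original).hexdigest()  # from a trusted channel

print(check_download(original, expected))                      # True
print(check_download(b"installer v1.0 + malware", expected))   # False
```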
__________
If anyone still has their home-router using the default admin password, consider this a little wake-up call: Even if your new password is on a sticky-note, that's still a measurable improvement.
The risks continue, though:
* If the victim's router settings are safe, an attacker on the LAN may use DHCP spoofing to trick the target into using a different DNS server.
* The attacker can set up an alternate network they control, and trick the user into connecting, like for a real coffee shop, or even a vague "Free Wifi."
The threat model here is that compromised or malicious wifi hotspots (and ISPs) exist that will monitor all unencrypted traffic, look for anything being downloaded that's an executable, and inject malware into it. That would compromise a machine that ran this updater even if the malware wasn't specifically looking for this AMD driver vulnerability, and would have already compromised a lot of laptops in the past.
If they lose just one customer over this they're losing more than the minimum $500 bounty. They also signal to the world that they care more about some scope document than actually improving security, discouraging future hackers from engaging with their program.
This would be a high-severity vulnerability, so even paying out the $500 minimum meant for low-severity findings would be a bit of a disgrace.
What's the business case for screwing someone out of a bounty on a technicality?
Man in the middle attacks may be "out of scope" for AMD, but they're still "in scope" for actual attackers.
Ignoring them is indefensibly incompetent. A policy of ignoring them is a policy of being indefensibly incompetent.
Of course, a company can do it (they just did!), but it shows that they don't care about security at all.
Especially if the answer is "sorry this is out of scope" rather than "while this is out of scope for our bug bounty so we can't pay you, this looks serious and we'll make sure to get a patch out ASAP".
Your characterization of this bug as one "that completely pwns your machine just by connecting it to an untrusted network" is also hyperbolic in the extreme.
Though, by publishing this blog and getting on the HN front page, it really skews this datapoint, so we can never know if it's a valid editorialization.
Edit: Ah, someone else in this thread called out the "wont fix" vs "out of scope" after I clicked on reply: https://news.ycombinator.com/item?id=46910233. Sorry.
The blog post title is "AMD won't fix", but the actual response that is quoted in the post doesn't actually say that! It doesn't say anything about will or won't fix, it just says "out of scope", and it's pretty reasonable to interpret this as "out of scope for receiving a bug bounty".
It's pretty careless wording on the part of whoever wrote the response and just invites this kind of PR disaster, but on the substance of the vulnerability it doesn't suggest a problem.
Op and others are saying DNS poisoning is a popular way of achieving that goal.
I'm skeptical, to say the least. Industry standard has been to put MitM out of scope only where certificates/signatures already cover it, not to ignore it entirely.
and “05/02/2026 - Report Closed as wont fix/out of scope”
I think it’s a bit early to say “won’t fix”. AMD only said that it was out of scope for the channel used to report it (I don’t know what that was, but it likely is a bug bounty program) and it’s one day after the issue was reported to them.
Now I have good reason to block it entirely and go back to manual updates
It's the shittiest autoupdater I've ever had to deal with. It never actually managed to install an update.