Top
Best
New

Posted by hornedhob 1/27/2026

Lennart Poettering, Christian Brauner founded a new company(amutable.com)
375 points | 736 comments
blixtra 1/27/2026|
Hi, Chris here, CEO @ Amutable. We are very excited about this. Happy to answer questions.
2pEXgD0fZ5cF 1/28/2026||
Well I was wondering when the war on general computing and computer ownership would be carried into the heart of the open source ecosystems.

Sure, there are sensible things that could be done with this. But given the background of the people involved, the fact that this is yet another clear profit-first gathering makes me incredibly pessimistic.

This pessimism is made worse by reading the answers of the founders here in this thread: typical corporate talk. And most importantly: preventing the very real dangers involved is clearly not a main goal, but is instead brushed off with empty platitudes like "I've been a FOSS guy my entire adult life...." instead of describing or considering actual preventive measures. And even if the claim was true, the founders had a real love for the hacker spirit, there is obviously nothing stopping them from selling to the usual suspects and golden parachute out.

I was really struggling to not make this comment just another snarky, sarcastic comment, but it is exhausting. It is exhausting to see the hatred some have for people just owning their hardware. So sorry, "don't worry, we're your friends" just doesn't cut it to come at this with a positive attitude.

The benefits are few, the potential to do a lot of harm is large. And the people involved clearly have the network and connections to make this an instrument of user-hostility.

bee_rider 1/28/2026||
I do sort of wonder if there’s room in my life for a small attested device. Like, I could actually see a little room for my bank to say “we don’t know what other programs are running on your device so we can’t actually take full responsibility for transactions that take place originated from your device,” and if I look at it from the bank’s point of view that doesn’t seem unreasonable.

Of course, we’ll see if anybody is actually engaging with this idea in good faith when it all gets rolled out. Because the bank has full end-to-end control over the device, authentication will be fully their responsibility and the (basically bullshit in the first place) excuse of “your identity was stolen,” will become not-a-thing.

Obviously I would not pay for such a device (and will always have a general purpose computer that runs my own software), but if the bank or Netflix want to send me a locked down terminal to act as a portal to their services, I guess I would be fine with using it to access (just) their services.

thewebguyd 1/28/2026|||
I suggested this as a possible solution in another HN thread a while back, but along the lines of "If a bank wants me to have a secure, locked down terminal to do business with them, then they should be the ones forking it over, not commanding control of my owned personal device."

It would quickly get out of hand if every online service started to do the same though. But, if remote device attestation continues to be pushed and we continue to have less and less control and ownership over our devices, I definitely see a world where I now carry two phones. One running something like GrapheneOS, connected to my own self-hosted services, and a separate "approved" phone to interact with public and essential services as they require crap like play integrity, etc.

But at the end of the day, I still fail see why this is even a need. Governments, banks, other entities have been providing services over the web for decades at this point with little issue. Why are we catering to tech illiteracy (by restricting ownership) instead of promoting tech education and encouraging people to both learn, and importantly, take responsibility for their own actions and the consequences of those actions.

"Someone fell for a scam and drained their bank account" isn't a valid reason to start locking down everyone's devices.

wooptoo 1/28/2026|||
I was hoping banks would turn to using Yubikeys/U2F for authentication/transaction signing, and not these Draconian measures.
pamcake 1/28/2026||
I remember my parents doing online banking authenticating with smart cards. Over 20 years ago. Today the same bank requires an iOS or Play Integrity device (for individuals at least. Their gated business banking are separate services and idk what they offer there).

This is not a question of missing tech.

tzs 1/28/2026||||
> I suggested this as a possible solution in another HN thread a while back, but along the lines of "If a bank wants me to have a secure, locked down terminal to do business with them, then they should be the ones forking it over, not commanding control of my owned personal device."

Most banks already do that. The secure, locked down terminals are called ATMs and they are generally placed at assorted convenient locations in most cities.

bee_rider 1/28/2026||||
Yeah, to some extent I just wanted to think about where the boundary ought to be. I somewhat suspect the bank or Netflix won’t be willing to send me a device of theirs to act as their representative in my pocket. But it is basically the only time a reasonable person should consider using such a device. Anybody paying to buy Netflix or the bank a device is basically being scammed or ripped off.
fc417fc802 1/28/2026||||
Why should I need a separate device? Doesn't a hardware security token suffice? I wouldn't even mind bringing my own but my bank doesn't accept them last I checked. (Do any of them?)

If the bank can't be bothered to either implement support for U2F or else clearly articulate why U2F isn't sufficient then they don't have a valid position. Anything else they say on the matter should be disregarded.

thewebguyd 1/28/2026|||
You shouldn't need a separate device, but we are quickly entering an era where a lot of banking (and other) apps will outright refuse to run or allow logins if it detects a rooted device, or play integrity fails.

In this way, the banks are asserting control over your device. It's beyond authentication, they are saying "If you have full control over your device, you cannot access our services."

I'll agree with you that they don't have a valid position, because I can just as easily open up a web browser on said rooted device and access just fine via the web, but how long until services move away from web interfaces in favor of apps instead to assert more control?

calgoo 7 days ago||
I have to use my phone to approve the web login to my account. My bank is working very hard to make sure that everyone uses the app for everything, including closing down offices and removing ATMs around the city.
charcircuit 1/28/2026|||
A hardware token would not suffice. When you login with a hardware token it will generate some sort of token or cookie for further requests. This is where malware can steal that key and use it for whatever it wants. There is a benefit it knowing there is a high chance that the such a key is protected by the operating system's sandboxing technology. Without remote attestation you don't know if the sandbox is actually active or not.
fc417fc802 1/28/2026||
On the contrary, a hardware token will suffice to thwart both phising and MitM which covers ~everything for all practical threat and liability models. What exactly is the concern here? A widespread worm that no one is yet aware of that's dumping people's bank accounts into crypto? It might make for a decent Hollywood plot but is pulling that off actually easier than attacking the bank directly?

Keep in mind that the businesses pushing this stuff still don't support U2F by and large. When I can go down in person to enroll a hardware token I might maybe consider listening to what they have to say on the subject. Maybe. (But probably not.)

bee_rider 1/28/2026|||
Hypothetically on a fully controlled system you could prevent attacks like the sort of “hello this is Microsoft, we’ve identified a virus on your device, please download teamviewer and login to your bank account so we can clear it for you” type spam calls.

Or, hasn’t there been malware that periodically takes screenshots of the device? Or maybe that’s a Hollywood plot, I forget actually.

fc417fc802 1/28/2026|||
Keep in mind that a truly clueless user will most likely be running in a stock configuration. So long as that doesn't permit apps to tamper with one another (as is currently the case) there should be no issue. Google could even provide a toggle to officially root the phone and so long as flipping it wiped the device the problem would remain 99.9% solved because a scammer would be unable to pull the job off in one go.

By the time you reach the point that the user is doggedly following harmful step by step instructions over the course of multiple callbacks there is nothing short of a padded cell that can protect him from himself.

Unless you mean to suggest somehow screening such calls? A local LLM? Literal wiretapping via realtime upload to the cloud? If facing such a route society would likely be better off institutionalizing anyone victimized in such a manner.

thewebguyd 1/28/2026|||
> hasn’t there been malware that periodically takes screenshots of the device?

Yeah, it's called Recall and its baked into Windows as a "feature."

fc417fc802 1/28/2026||
It's unfortunate because it's actually incredibly useful functionality. If only they hadn't packaged and marketed it in quite the way they did. If there was ever a feature that needed to be guaranteed local only, zero third party integration, zero first party analytics, encryption tied to a TPM that was it.
charcircuit 1/28/2026|||
How does it solve MITM? You type your hardware token in and then an attacker uses it to send money out of your account.

>What exactly is the concern here?

Stealer malware. Or even RATs where attackers get notified when you open a sensitive app and they can take over after you have authenticated.

fc417fc802 1/28/2026||
Could you please spell out the specifics of this scenario?

MitM via an evil (ie incorrect) domain name is prevented because U2F (and now webauthn or CTAP2) are origin bound.

RATs? On stock android? How does that work? And how are the things you describe not also threats for online banking via a browser? It's certainly not how the vast majority of attacks take place in the wild. Can you provide any examples of such an attack (ie malware as opposed to phishing) that was widespread? Otherwise I assume we're writing a script for Hollywood here.

Even then, a RAT could be trivially defeated by requiring a second one-off token authentication for any transaction that would move money around. I doubt there'd be much objection to such a policy. If people really hate it let them opt out below an amount of their choosing by signing a liability waiver.

charcircuit 1/28/2026||
>are origin bound.

This is assuming the user's device is not compromised.

>How does that work?

Priviledge escalation on an old OS version allows an attacker to get root access. Then with that they can bypass any sandboxing. Or they could get access to some android permission intended for system apps that they should not have access to and use that to do malicous things.

I don't closely follow malware outbreaks for android so I can't point to specific examples, but malware does exist.

fc417fc802 1/28/2026||
So the attacker compromises the user's device ... and then sets up a MitM? This is making about as much sense as the typical Hollywood plot that involves computers so I guess that means we're on track.

> Priviledge escalation on an old OS version allows an attacker to get root access.

At which point hardware attestation accomplishes nothing. Running in an enclave might but attesting the OS image that was used to boot most certainly won't.

Many consumers use older devices. Any banking app is forced to support them or they will lose customers. There's no way around that. (It doesn't matter anyway because these sorts of attacks simply aren't commonplace.)

> but malware does exist.

I didn't ask for an example of malware. I asked you to point to an example of a widespread attack against secured accounts using malware as a vector. You have invented some utterly unrealistic scenario that simply isn't a concern in the real world for a consumer banking interaction.

You're describing the sort of high effort targeted attack utilizing one or more zero days that a high level government official might be subject to.

charcircuit 1/28/2026||
>At which point hardware attestation accomplishes nothing

Attestation could be used to say that the user is not using a secure version of the OS That has known vulnerabilities patched.

>Any banking app is forced to support them or they will lose customers.

Remote attestation is just one of the many signals used for detecting fraud.

>one or more zero days

Many phones are not on an OS getting security updates. Whether that be due to age or the vendor not distributing the security patches. Even using old exploits malware can work.

sophacles 1/28/2026|||
> with little issue

Citation needed. The fact that the infosec industry just keeps growing YoY kinda suggests that there are in fact issues that are more expensive than paying the security companies.

giant_loser 6 days ago||||
> if the bank or Netflix want to send me a locked down terminal to act as a portal to their services, I guess I would be fine with using it to access (just) their services

They would only do it to assert more control over you and in Netflix's case, force more ads on you.

It is why I never use any company's apps.

If they make it a requirement, I will just close my account.

stackghost 1/28/2026|||
The bank thing is a smoke screen.

This entire shit storm is 100% driven by the music, film, and tv industries, who are desperate to eke a few more millions in profit from the latest Marvel snoozefest (or whatever), and who tried to argue with a straight face that they were owed more than triple the entire global GDP [0].

These people are the enemy. They do not care about about computing freedom. They don't care about you or I at all. They only care about increasing profits via and they're using the threat of locking people out of Netflix via HDCP and TPM, in order to force remote attestation on everyone.

I don't know what the average age on HN is, but I came up in the 90s when "fuck corporations" and "information wants to be free" still formed a large part of the zeitgeist, and it's absolutely infuriating to see people like TFfounders actively building things that will measurably make things worse for everyone except the C-suite class. So much for "hacker spirit".

[0] https://globalnews.ca/news/11026906/music-industry-limewire-...

ShroudedNight 1/28/2026|||
Also worth remembering that around 2010, the music and film industry associations of America were claiming entitlement to $50 billion dollars annually in piracy-related losses beyond what could be accounted for in direct lost revenue (which _might_ have been as much as 10 billion, or 1/6th of their claim):

https://youtu.be/GZadCj8O1-0

These guys pathologically have had a chip on their shoulder since Napster.

direwolf20 1/28/2026|||
HN is for the kind of hacker who makes the next Uber or AirBNB. It's strongly aligned with the interests of corporate shareholders.
iugtmkbdfil834 1/28/2026|||
Yeah, as I am reading the landing page, the direction seems clear. It sucks, because as an individual there is not much one can do, and there is no consensus that it is a bad thing ( and even if there was, how to counter it ). Honestly, there are times I feel lucky to be as dumb as I am. At least I don't have the same responsibility for my output as people who create foundational tech and code.
giant_loser 6 days ago|||
Yup

Poettering is a well-known Linux saboteur, along with Red Hat.Without RH pushing his trash, he is not really that big of a threat.

Just like de Icaza, another saboteur, ran off to MS. That is the tell-tell sign for people not convinced that either person's work in FOSS existed to cause damage.

No, this is not a snarky, sarcastic comment. Trust Amutable at your own peril.

gosub100 1/29/2026|||
My tinfoil hat theory is devices like HDDs will be locked and only work on "attested" systems that actively monitor the files. This will be pushed by the media industry to combat piracy. Then opened up for para-law enforcement like palantir.

Then gpu and cpu makers will hop on and lock their devices to promote paid Linux like redhat. Or offering "premium support" to unlock your gpu for Linux for a monthly fee.

They'll say "if you are a Linux enthusiast then go tinker with arm and risc on an SD card"

cbarrick 1/28/2026||
> [T]he war on general computing and computer ownership [...] It is exhausting to see the hatred some have for people just owning their hardware.

The integrity of a system being verified/verifiable doesn't imply that the owner of the system doesn't get to control it.

This sort of e2e attestation seems really useful for enterprise or public infrastructure. Like, it'd be great to know that the ATMs or transit systems in my city had this level of system integrity.

You argument correctly points out that attestation tech can be used to restrict software freedom, but it also assumes that this company is actively pursuing those use cases. I don't think that is a given.

At the end of the day, as long as the owner of the hardware gets to control the keys, this seems like fantastic tech.

Matl 1/28/2026|||
> You argument correctly points out that attestation tech can be used to restrict software freedom, but it also assumes that this company is actively pursuing those use cases. I don't think that is a given.

Once it's out there and normalized, the individual engineers don't get to control how it is used. They never do.

direwolf20 1/28/2026||
Unless Lennart Pottering uses remote attestation to verify who is attesting to whom.
gosub100 1/29/2026||||
You want PCIe-6? Cool well that only runs on Asus G-series with AI, and is locked to attested devices because the performance is so high that bad code can literally destroy it. So for safety, we only run trusted drivers and because they must be signed, you have to use Redhat Premium at a monthly cost of $129. But you get automatic updates.
cbarrick 1/29/2026||
Do you want the control systems of the subway to get modified by a malicious actor? What about damn releases? Heat pumps in apartment buildings? Robotaxis? Payroll systems? Banks?

Amutability is a huge security feature, with tons of real world applications for good.

The fact that mega corps can abuse consumers is a separate issue. We should solve that with regulation. Don't forsake all the good that this tech can do just because Asus or Google want to infringe on your software freedoms. Frankly, these mega corps are going to infringe on your rights regardlessly, whether or not Amutable exists as a business.

Don't throw the baby out with the bath water.

ahartmetz 1/30/2026||
It seems like we're doing pretty well without the baby. You sell it, you say we need it. Highly credible
hakfoo 1/30/2026||||
System integrity also ends at the border of the system. The entire ecosystem of ATM skimmers demonstrates this-- the software and hardware are still 100% sanctioned, they're just hidden beneath a shim in the card slot and a stick-on keypad module.

I generally agree with the concept of "if you want me to use a pre-approved terminal, you supply it." I'd think this opens up a world of better possibilities. Right now, the app-centric bank/media company/whatever has to build apps that are compatible with 82 bazillion different devices, and then deal with the attestation tech support issues. Conversely, if they provide a custom terminal, it might only need to deal with a handful of devices, and they could design it to function optimally for the single use case.

curt15 1/28/2026|||
> At the end of the day, as long as the owner of the hardware gets to control the keys, this seems like fantastic tech.

The problem is that there are powerful corporate and government interests who would love nothing more than to prevent users from controlling the keys for their own computers, and they can make their dream come true simply by passing a law.

It may be the case that certain users want to ensure that their computers are only running their code. But the same technologies can also used to ensure that their computers are only running someone else's code, locking users out from their own devices.

cbarrick 1/28/2026||
That's like saying we shouldn't build anything that can be used for good if it can also be used for evil.

By that logic, we should just turn off the internet. Too much potential for evil there.

More seriously, the argument being presented seems to just be "attestation tech has been used for evil in the past, therefore all attestation tech is bad," which is obviously an unsound argument. A sound argument would have to show that attestation tech is _inherently_ bad, and I've already provided examples that I think effectively counter that. I can provide more if needed.

I get that we want to prevent attestation tech from being used for evil, but that's a regulatory problem, not a technical one. You make this point by framing the evil parties as "corporate and government interests."

Don't get me wrong, I am fully against anything that limits the freedoms of the person that owns the device. I just don't see how any of this is a valid argument that Amutable's mission is bad/immoral/invalid.

Or maybe another argument that's perhaps more aligned with the FOSS ideology: if I want e2e attestation of the software stack on my own devices, isn't this a good thing for me?

curt15 1/28/2026|||
>if I want e2e attestation of the software stack on my own devices, isn't this a good thing for me?

The building blocks are already there for a sufficiently motivated user to build their own verified OS image. Google has been doing that with ChromeOS for years. The danger I see is that once there is a low-friction, turnkey solution for locking down general purpose systems, then the battle for control over users' devices reduces to control over the keys. That is much easier for well-heeled interests to dominate than outlawing Linux outright.

The status quo is a large population of unverified but fully user-configurable systems. While the ideal end state is a large population of verified and fully user-configurable systems, it is more likely that the tools for achieving that outcome will be co-opted by corporate and political interests to bend the population toward verified and un-configurable systems. That outcome would be far worse than the status quo.

direwolf20 1/28/2026|||
Attestation tech is much more useful for evil than for good.
coppsilgold 1/28/2026||
Remote attestation only works because your CPU's secure enclave has a private key burned-in (fused) into it at the factory. It is then provisioned with a digital certificate for its public key by the manufacturer.

Every time you perform an attestation the public key (and certificate) is divulged which makes it a unique identifier, and one that can be traced to the point of sale - and when buying a used device, a point of resale as the new owner can be linked to the old one.

They make an effort to increase privacy by using intermediaries to convert the identifier to an ephemeral one, and use the ephemeral identifier as the attestation key.

This does not change the fact that if the party you are attesting to gets together with the intermediary they will unmask you. If they log the attestations and the EK->AIK conversions, the database can be used to unmask you in the future.

Also note that nothing can prevent you from forging attestations if you source a private-public key pair and a valid certificate, either by extracting them from a compromised device or with help from an insider at the factory. DRM systems tend to be separate from the remote attestation ones but the principles are virtually identical. Some pirate content producers do their deeds with compromised DRM private keys.

b112 1/28/2026||
I tend to buy such things with cash, in person.

People dislike cash for some strange reason, then complain about tracking. People also hand out their mobile number like candy. Same issue.

BrandoElFollito 1/28/2026||
> People dislike cash for some strange reason

In my case it is because I would never have the right amount with me, in the right denominations. Google Pay always has this covered.

Also you need to remember to take one more thing with you, and refill it occasionally. As opposed to fuel, you do not know how much you will need when.

It can get lost or destroyed, and is not (usually) replaceable.

I am French, currently in the US. I need to change 100 USD in small denominations, I will need to go to the bank, and they will hopefully do that for me. Or not. Or not without some official paper from someone.

Ah yes, and I am in the US and the Euro is not an accepted currency here. So I need to take my 100 € to a bank and hope I can get 119.39 USD. In the right denominations.

What will I do with the 34.78 USD left when I am back home? I have a chest of money from all over the world. I showed it once to my kids when they were young, told a bit about the world and then forgot about it.

Money also weights quite a lot. And when it does not weights it gets lost or thrown away with some other papers. Except if they are neatly folded in a wallet, which I will forget.

I do not care about being traced when going to the supermarket. If I need to do untraceable stuff I will get money from teh ATM. Ah crap, they will trace me there.

So the only solution is to get my salary in cash, whihc is forbidden in France. Or take some small amounts from time to time. Which I will forget, and I have better things to do.

Cash sucks.

Sure, if we go cashless and terrible things happen (cyberwar, solar flare, software issues) then we are screwed. But either the situation unscrews itself, or we will have much, much, much bigger issues than money -- we will need to go full survival mode, apocalypse movies-style.

warkdarrior 1/28/2026|||
Anonymous-attestation protocols are well known in cryptography, and some are standardized: https://en.wikipedia.org/wiki/Direct_Anonymous_Attestation
coppsilgold 1/28/2026|||
> Anonymous-attestation protocols are well known in cryptography, and some are standardized: https://en.wikipedia.org/wiki/Direct_Anonymous_Attestation

Which does exactly what I said. Full zero knowledge attestation isn't practical as a single compromised key would give rise to a service that would serve everyone.

  The solution first adopted by the TCG (TPM specification v1.1) required a trusted third-party, namely a privacy certificate authority (privacy CA). Each TPM has an embedded RSA key pair called an Endorsement Key (EK) which the privacy CA is assumed to know. In order to attest the TPM generates a second RSA key pair called an Attestation Identity Key (AIK). It sends the public AIK, signed by EK, to the privacy CA who checks its validity and issues a certificate for the AIK. (For this to work, either a) the privacy CA must know the TPM's public EK a priori, or b) the TPM's manufacturer must have provided an endorsement certificate.) The host/TPM is now able to authenticate itself with respect to the certificate. This approach permits two possibilities to detecting rogue TPMs: firstly the privacy CA should maintain a list of TPMs identified by their EK known to be rogue and reject requests from them, secondly if a privacy CA receives too many requests from a particular TPM it may reject them and blocklist the TPMs EK. The number of permitted requests should be subject to a risk management exercise. This solution is problematic since the privacy CA must take part in every transaction and thus must provide high availability whilst remaining secure. Furthermore, privacy requirements may be violated if the privacy CA and verifier collude. Although the latter issue can probably be resolved using blind signatures, the first remains.

AFAIK no one uses blind signatures. It would enable the formation of commercial attestation farms.
arianvanp 1/28/2026|||
Apple uses Blind Signatures for attestation. It's how they avoid captchas at CloudFlare and Fastly in their Private Relay product

https://educatedguesswork.org/posts/private-access-tokens/

georgyo 1/28/2026||
If I'm reading any of this correctly, this doesn't apply to hardware attestation.

It seems apple has a service, with an easily rotated key and an agreement with providers. If the key _Apple_ uses is compromised, they can rotate it.

BUT, apple knows _EXACTLY_ who I am. I attest to them using my hardware, they know _EXACTLY_ which hardware I'm using. They can ban me or my hardware. They then their centralized service gives me a blind token. But apple, may, still know exactly who owns which blind tokens.

However, I cannot generate blind tokens on my own. I _MUST_ talk to some centralized service that can I identify me. If that is not the case, then any single compromised device can generate infinite blind tokens rending all the tokens useless.

coppsilgold 1/28/2026||
The idea behind blind signatures is that the server will give you a signed token which is blinded and you can un-blind it on your end and then use it. The consumer of the token will not be able to collude with the issuer of the token to figure out who it was given to. There is more info here: <https://blog.cloudflare.com/privacy-pass-the-math/>

I don't know if that's what Apple actually does. If it is, once it gets popular enough as an anti-bot measure there may be farms of Apple devices selling these tokens. It's a separate system from remote attestation anyhow.

zimmerfrei 1/28/2026|||
I don't think that a 100% anonymous attestation protocol is what most people need and want.

It would be sufficient to be able to freely choose who you trust as proxy for your attestations *and* the ability to modify that choice at any point later (i.e. there should be some interoperability). That can be your Google/Apple/Samsung ecosystem, your local government, a company operating in whatever jurisdiction you are comfortable with, etc.

sam_lowry_ 1/28/2026||
Most busunessed do not need origin attestation, they need history attestation.

I.e. from when they buy from a trusted source and init the device.

pseudohadamard 1/28/2026|||
But what's it attesting? Their byline "Every system starts in a verified state and stays trusted over time" should be "Every system starts in a verified state of 8,000 yet-to-be-discovered vulns and stays in that vulnerable state over time". The figure is made up but see for example https://tuxcare.com/blog/the-linux-kernel-cve-flood-continue.... So what you're attesting is that all the bugs are still present, not that the system is actually secure.
chris_wot 1/28/2026||
Well, if a rootkit gets installed later, attention might be handy? Or am I missing something?
direwolf20 1/28/2026||
It comes rootkitted from the factory, and if you remove the rootkit, the device stops working.
stogot 1/28/2026||
I’m not sure I understand the threat model for this. Why would I need to worry about my enclave being identifiable? Or is this a business use case?

Or why buy used devices if this is a risk?

coppsilgold 1/28/2026|||
It's a privacy consideration. If you desire to juggle multiple private profiles on a single device extreme care needs to be taken to ensure that at most one profile (the one tied to your real identity) has access to either attestation or DRM. Or better yet, have both permanently disabled.

Hardware fingerprinting in general is a difficult thing to protect from - and in an active probing scenario where two apps try to determine if they are on the same device it's all but impossible. But having a tattletale chip in your CPU an API call away doesn't make the problem easier. Especially when it squawks manufacturer traceable serials.

Remote attestation requires collusion with an intermediary at least, DRM such as Widevine has no intermediaries. You expose your HWID (Widevine public key & cert) directly to the license server of which there are many and under the control of various entities (Google does need to authorize them with certificates). And this is done via API, so any app in collusion with any license server can start acquiring traceable smartphone serials.

Using Widevine for this purpose breaks Google's ToS but you would need to catch an app doing it (and also intercept the license server's certificate) and then prove it which may be all but impossible as an app doing it could just have a remote code execution "vulnerability" and request Widevine license requests in a targeted or infrequent fashion. Note that any RCE exploit in any app would also allow this with no privilege escalation.

Joker_vD 1/28/2026||
Which is why I personally filed off the VIN from my car's engine.
iugtmkbdfil834 1/28/2026|||
I just put up 'do not track' flag in my browser:D
sroussey 1/28/2026|||
Why stop at the engine?
CGMthrowaway 1/28/2026||||
For most individuals it usually doesn’t matter. It might matter if you have an adversary, e.g. you are a journalist crossing borders, a researcher in a sanctioned country, or an organization trying to avoid cross‑tenant linkage

Remote attestation shifts trust from user-controlled software to manufacturer‑controlled hardware identity.

It's a gun with a serial number. The Fast and Furious scandal of the Obama years was traced and proven with this kind of thing

saghm 1/28/2026||
The scandal you cited was that guns controlled by the federal government don't have any obvious reasonable path to being owned by criminals; there isn't an obvious reason for the guns to have left the possession of the government in the first place.

There's not really an equivalent here for a computer owned by an individual because it's totally normal for someone to sell or dispose of a computer, and no one expects someone to be responsible for who else might get their hands on it at that point. If you prove a criminal owns a computer that I owned before, then what? Prosecution for failing to protect my computer from thieves, or for reselling it, or gifting it to a neighbor or family friend? Shifting the trust doesn't matter if what gets exposed isn't actually damaging on any way, and that's what the parent comment is asking about.

The first two examples you give seem to be about an unscrupulous government punishing someone for owning a computer that they consider tainted, but it honestly doesn't seem that believable that a government who would do that would require a burden of proof so high as to require cryptographic attestation to decide on something like that. I don't have a rebuttal for "an organization trying to avoid cross-tenant linkage" though because I'm not sure I even understand what it means: an example would probably be helpful.

storystarling 1/28/2026||||
I assume the use case here is mostly for backend infrastructure rather than consumer devices. You want to verify that a machine has booted a specific signed image before you release secrets like database keys to it. If you can't attest to the boot state remotely, you don't really know if the node is safe to process sensitive data.
fc417fc802 1/28/2026||
I'm confused. People talking about remote attestation which I thought was used for stuff like SGX. A system in an otherwise untrusted state loads a blob of software into an enclave and attests to that fact.

Whereas the state of the system as a whole immediately after it boots can be attested with secure boot and a TPM sealed secret. No manufacturer keys involved (at least AFAIK).

I'm not actually clear which this is. Are they doing something special for runtime integrity? How are you even supposed to confirm that a system hasn't been compromised? I thought the only realistic way to have any confidence was to reboot it.

unixhero 1/28/2026|||
At this point these are just English sentences. I am not worried about this threat model at all.
josephcsible 1/27/2026||
This seems like the kind of technology that could make the problem described in https://www.gnu.org/philosophy/can-you-trust.en.html a lot worse. Do you have any plans for making sure it doesn't get used for that?
cyphar 1/27/2026||
I'm Aleksa, one of the founding engineers. We will share more about this in the coming months but this is not the direction nor intention of what we are working on. The models we have in mind for attestation are very much based on users having full control of their keys. This is not just a matter of user freedom, in practice being able to do this is far more preferable for enterprises with strict security controls.

I've been a FOSS guy my entire adult life, I wouldn't put my name to something that would enable the kinds of issues you describe.

ingohelpinger 1/28/2026|||
Thanks for the clarification and to be clear, I don't doubt your personal intent or FOSS background. The concern isn't bad actors at the start, it's how projects evolve once they matter.

History is pretty consistent here:

WhatsApp: privacy-first, founders with principles, both left once monetization and policy pressure kicked in.

Google: 'Don’t be evil' didn’t disappear by accident — it became incompatible with scale, revenue, and government relationships.

Facebook/Meta: years of apologies and "we'll do better," yet incentives never changed.

Mobile OS attestation (iOS / Android): sold as security, later became enforcement and gatekeeping.

Ruby on Rails ecosystem: strong opinions, benevolent control, then repeated governance, security, and dependency chaos once it became critical infrastructure. Good intentions didn't prevent fragility, lock-in, or downstream breakage.

Common failure modes:

Enterprise customers demand guarantees - policy creeps in.

Governments demand compliance - exceptions appear.

Liability enters the picture - defaults shift to "safe for the company."

Revenue depends on trust decisions - neutrality erodes.

Core maintainers lose leverage - architecture hardens around control.

Even if keys are user-controlled today, the key question is architectural: Can this system resist those pressures long-term, or does it merely promise to?

Most systems that can become centralized eventually do, not because engineers change, but because incentives do. That’s why skepticism here isn't personal — it's based on pattern recognition.

I genuinely hope this breaks the cycle. History just suggests it's much harder than it looks.

direwolf20 1/28/2026||
Did AI write this comment?
ingohelpinger 1/28/2026||
nope. why?
drdaeman 1/28/2026||||
Can you (or someone) please tell what’s the point, for a regular GNU/Linux user, of having this thing you folks are working on?

I can understand corporate use case - the person with access to the machine is not its owner, and corporation may want to ensure their property works the way they expect it to be. Not something I care about, personally.

But when it’s a person using their own property, I don’t quite get the practical value of attestation. It’s not a security mechanism anymore (protecting a person from themselves is an odd goal), and it has significant abuse potential. That happened to mobile, and the outcome was that users were “protected” from themselves, that is - in less politically correct words - denied effective control over their personal property, as larger entities exercised their power and gated access to what became de-facto commonplace commodities by forcing to surrender any rights. Paired with awareness gap the effects were disastrous, and not just for personal compute.

So, what’s the point and what’s the value?

fc417fc802 1/28/2026|||
The value is being able to easily and robustly verify that my device hasn't been compromised. Binding disk encryption keys to the TPM such that I don't need to enter a password but an adversary still can't get at the contents without a zero day.

Of course you can already do the above with secure boot coupled with a CPU that implements an fTPM. So I can't speak to the value of this project specifically, only build and boot integrity in general. For example I have no idea what they mean by the bullet "runtime integrity".

NekkoDroid 1/28/2026|||
> For example I have no idea what they mean by the bullet "runtime integrity".

This is for example dm-verity (e.g. `/usr/` is an erofs partiton with matching dm-verity). Lennart always talks about either having files be RW (backed by encryption) or RX (backed by kernel signature verification).

drdaeman 1/28/2026||||
I don’t think attestation can provide such guarantees. To best of my understanding, it won’t protect from any RCE, and it won’t protect from malicious updates to configuration files. It won’t let me run arbitrary binaries (putting a nail to any local development), or if it will - it would be a temporary security theater (as attackers would reuse the same processes to sign their malware). IDSes are sufficient for this purpose, without negative side effects.

And that’s why I said “not a security mechanism”. Attestation is for protecting against actors with local hardware access. I have FDE and door locks for that already.

fc417fc802 1/28/2026||
I think all of that comes down to being a matter of what precisely you're attesting? So I'm not actually clear what we're talking about here.

Given secure boot and a TPM you can remotely attest, using your own keys, that the system booted up to a known good state. What exactly that means though depends entirely on what you configured the image to contain.

> it won’t protect from malicious updates to configuration files

It will if you include the verified correct state of the relevant config file in a merkel tree.

> It won’t let me run arbitrary binaries (putting a nail to any local development), or if it will - it would be a temporary security theater (as attackers would reuse the same processes to sign their malware).

Shouldn't it permit running arbitrary binaries that you have signed? That places the root of trust with the build environment.

Now if you attempt to compile binaries and then sign them on the production system yeah that would open you up to attack (if we assume a process has been compromised at runtime). But wasn't that already the case? Ideally the production system should never be used to sign anything. (Some combination of SGX, TPM, and SEV might be an exception to that but I don't know enough to say.)

> Attestation is for protecting against actors with local hardware access. I have FDE and door locks for that already.

If you remotely boot a box sitting in a rack on the other side of the world how can you be sure it hasn't been compromised? However you go about confirming it, isn't that what attestation is?

drdaeman 1/28/2026||
Well, maybe we're talking about different things, because I've asked from a regular GNU/Linux user perspective. That is, I have my computers and I'm concerned I would lose my freedoms to use them as I wish, because this attestation would be adopted and become de-facto mandatory if I ever want to do something online. Just like what happened to mobile, and what's currently slowly happening to other desktop OSes.

Production servers are a whole different story - it's usually not my hardware to begin with. But given how things are mostly immutable those days (shipped as images rather than installed the old-fashioned sysadmin way), I'm not really sure what to think of it...

fc417fc802 1/28/2026||
You originally asked what the value proposition for a regular (non-corporate) user was. Then you raised some objections to my answer (or at least so I thought).

Granted these technologies can also be abused. But that involves running third party binaries that require SGX or other DRM measures before they will unlock or decrypt content or etc. Or querying a security element to learn who signed the image that was originally booted. Devices that support those things are already widespread. I don't think that's what this project is supposed to be. (Although I could always be wrong. There's almost no detail provided.)

giant_loser 6 days ago|||
> The value is being able to easily and robustly verify that my device hasn't been compromised.

That is impossible.

"secure" devices get silently tampered with everyday.

You can never guarantee that.

its-summertime 1/28/2026|||
https://attestation.app/about For mobiles, it helps make tampering obvious.

https://doc.qubes-os.org/en/latest/user/security-in-qubes/an... For laptops, it helps make tampering obvious. (a different attestation scheme with smaller scope however)

This might not be useful to you personally, however.

fsflover 1/28/2026||
Laptops can already have TPM based on FLOSS (with coreboot with Heads). It works well with Qubes btw, and is recommended by the developers: https://forum.qubes-os.org/t/qubes-certified-novacustom-v54-...
repstosb 1/28/2026||||
The "founding engineers" behind Facebook and Twitter probably didn't set out to destroy civil discourse and democracy, yet here we are.

Anyway, "full control over your keys" isn't the issue, it's the way that normalization of this kind of attestation will enable corporations and governments to infringe on traditional freedoms and privacy. People in an autocratic state "have full control over" their identity papers, too.

teiferer 1/27/2026||||
> I've been a FOSS guy my entire adult life, I wouldn't put my name to something that would enable the kinds of issues you describe.

Until you get acquired, receive a golden parachute and use it when realizing that the new direction does not align with your views anymore.

But, granted, if all you do is FOSS then you will anyway have a hard time keeping evil actors from using your tech for evil things. Might as well get some money out of it, if they actually dump money on you.

cyphar 1/29/2026|||
I am aware of that, my (personal) view is that DRM is a social issue caused by modes of behaviour and the existence or non-existence of technical measures cannot fix or avoid that problem.

A lot of the concerns in this thread center on TPMs, but TPMs are really more akin to very limited HSMs that are actually under the user's control (I gave a longer explanation in a sibling comment but TPMs fundamentally trust the data given to them when doing PCR extensions -- the way that consumer hardware is fundamentally built and the way TPMs are deployed is not useful for physical "attacks" by the device owner).

Yes, you can imagine DRM schemes that make use of them but you can also imagine equally bad DRM schemes that do not use them. DRM schemes have been deployed for decades (including "lovely" examples like the Sony rootkit from the 2000s[1], and all of the stuff going on even today with South Korean banks[2]). I think using TPMs (and other security measures) for something useful to users is a good thing -- the same goes for cryptography (which is also used for DRM but I posit most people wouldn't argue that we should eschew all cryptography because of the existence of DRM).

[1]: https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootk... [2]: https://palant.info/2023/01/02/south-koreas-online-security-...

mikkupikku 1/28/2026||||
This whole discussion is a perfect example of what Upton Sinclair said, "It is difficult to get a man to understand something, when his salary depends on his not understanding it."

A rational and intelligent engineer cannot possibly believe that he'll be able to control what a technology is used for after he creates it, unless his salary depends on him not understanding it.

faust201 1/28/2026|||
You could tell this sort of insinuation to anyone. Including you.

Argument should be technical.

teiferer 1/28/2026|||
Insinuation? As a sw dev they don't have any agency over whether or by whom they get acquired. Their decision will be whether to leave if it's changing to the worse, and that's very much understandable (and arguably the ethical thing to do).
faust201 5 days ago||
Do you mean like IBM takeover of RedHat?
seanhunter 1/28/2026||||
That's a perfectly valid objection to this proposal. You only have to look at what happened to Hashicorp to see the risk.
faust201 5 days ago||
How can anyone promise that? Will you promise to your current employer that you will never leave the job?
seanhunter 5 days ago||
No, but I can promise to my current employer that me leaving my job won’t be a critical problem.

It’s less of an issue in the case of a normal job than in an open source project where often the commitment of particular founding individuals to the long-term future of the project is a big part of people’s decision to use or not use that tech in their solutions. Here, given that “Trusted computing” can potentially lock you out of devices you have bought, it’s important for people to be able to judge the risk of getting “legal ransomware”d if the trusted computing base ends up depending on a proprietary component that they can’t back out of.

That said, there is absolutely zero chance that I use this (systemd is already enough Poettering software for me in this lifetime) so I’m not personally affected either way.

faust201 5 days ago||
Again lots of doomsayers like you said it when systemd was introduced. Nothing happened. Same with RedHat IBM takeover.
sam_lowry_ 1/28/2026||||
Technical arguments pave the road to hell.
LtWorf 1/28/2026||
Well he is called faust…
majewsky 1/28/2026||||
> You could tell this sort of insinuation to anyone. Including you.

Yes. You correctly stated the important point.

pseudalopex 1/29/2026|||
> Argument should be technical.

Yes. Aleksa made no technical argument.

ahartmetz 1/28/2026||||
So far, that's a slick way to say not really. You are vague where it counts, and surely you have a better idea of the direction than you say.

Attestation of what to whom for which purpose? Which freedom does it allow users to control their keys, how does it square with remote attestation and the wishes of enterprise users?

cyphar 1/29/2026||
I'm really not trying to be slick, but I think it's quite difficult to convince people about anything concrete (such as precisely how this model is fundamentally different to models such as the Secure Boot PKI scheme and thus will not provide a mechanism to allow a non-owner of a device to restrict what runs on your machine) without providing a concrete implementation and design documents to back up what I'm saying. People are rightfully skeptical about this stuff, so any kind of explanation needs to be very thorough.

As an aside, it is a bit amusing to me that an initial announcement about a new company working on Linux systems caused the vast majority of people to discuss the impact on personal computers (and games!) rather than servers. I guess we finally have arrived at the fabled "Year of the Linux Desktop" in 2026, though this isn't quite how I expected to find out.

> Attestation of what to whom for which purpose? Which freedom does it allow users to control their keys, how does it square with remote attestation and the wishes of enterprise users?

We do have answers for these questions, and a lot of the necessary components exist already (lots of FOSS people have been working on problems in this space for a while). The problem is that there is still the missing ~20% (not an actual estimate) we are building now, and the whole story doesn't make sense without it. I don't like it when people announce vapourware, so I'm really just trying to not contribute to that problem by describing a system that is not yet fully built, though I do understand that it comes off as being evasive. It will be much easier to discuss all of this once we start releasing things, and I think that very theoretical technical discussions can often be quite unproductive.

In general, I will say that there a lot of unfortunate misunderstandings about TPMs that lead people to assume their only use is as a mechanism for restricting users. This is really not the case, TPMs by themselves are actually more akin to very limited HSMs with a handful of features that can (cooperatively with firmware and operating systems) be used to attest to some aspects of the system state. They are also fundamentally under the users' control, completely unlike the PKI scheme used by Secure Boot and similar systems. In fact, TPMs are really not a useful mechanism for protecting against someone with physical access to the machine -- they have to trust that the hashes they are given to extend into PCRs are legitimate and on most systems the data is even provided over an insecure data line. This is why the security of locked down systems like Xbox One[1] don't really depend on them directly and don't use them at all in the way that they are used on consumer hardware. They are only really useful at protecting against third-party software-based attacks, which is something users actually want!

All of the comments about DRM obviously come from very legitimate concerns about user freedoms, but my views on this are a little too long to fit in a HN comment -- in short, I think that technological measures cannot fix a social problem and the history of DRM schemes shows that the absence of technological measures cannot prevent a social problem from forming either. It's also not as if TPMs haven't been around for decades at this point.

[1]: https://www.youtube.com/watch?v=U7VwtOrwceo

ahartmetz 1/29/2026||
>I think that technological measures cannot fix a social problem

The absence of technological measures used to implement societal problems totally does help though. Just look at social media.

I fear the outlaw evil maid or other hypothetical attackers (good old scare-based sales tactics) much less than already powerful entities (enterprises, states) lawfully encroaching on my devices using your technology. So, I don't care about "misunderstandings" of the TPM or whatever other wall of text you are spewing to divert attention.

iamnothere 1/27/2026||||
Thanks, this would be helpful. I will follow on by recommending that you always make it a point to note how user freedom will be preserved, without using obfuscating corpo-speak or assuming that users don’t know what they want, when planning or releasing products. If you can maintain this approach then you should be able to maintain a good working relationship with the community. If you fight the community you will burn a lot of goodwill and will have to spend resources on PR. And there is only so much that PR can do!

Better security is good in theory, as long as the user maintains control and the security is on the user end. The last thing we need is required ID linked attestation for accessing websites or something similar.

LooseMarmoset 1/27/2026||||
that’s great that you’ll let users have their own certificates and all, but the way this will be used is by corporations to lock us out into approved Linux distributions. Linux will be effectively owned by RedHat and Microsoft, the signing authority.

it will be railroaded through in the same way that systemD was railroaded onto us.

giant_loser 6 days ago|||
> but the way this will be used is by corporations to lock us out into approved Linux distributions. Linux will be effectively owned by RedHat and Microsoft, the signing authority.

This is the intent of the Poettering and Brauner.

cyphar 1/29/2026|||
> but the way this will be used is by corporations to lock us out into approved Linux distributions. Linux will be effectively owned by RedHat and Microsoft, the signing authority.

This is basically true today with Secure Boot on modern hardware (at least in the default configuration -- Microsoft's soft-power policies for device manufacturers actually requires that you can change this on modern machines). This is bad, but it is bad because platform vendors decide which default keys are trusted for secure boot by default and there is no clean automated mechanism to enroll your own keys programmatically (at least, without depending on the Microsoft key -- shim does let you do this programmatically with the MOK).

The set of default keys ended up being only Microsoft (some argue this is because of direct pressure from Microsoft, but this would've happened for almost all hardware regardless and is a far more complicated story), but in order to permit people to run other operating systems on modern machines Microsoft signed up to being a CA for every EFI binary in the universe. Red Hat then controls which distro keys are trusted by the shim binary Microsoft signs[1].

This system ended up centralised because the platform vendor (not the device owner) fundamentally controls the default trusted key set and is what caused the whole nightmare of the Microsoft Secure Boot keys and rh-boot signing of shim. Getting into the business of being a CA for every binary in the world is a very bad idea, even if you are purely selfish and don't care about user freedoms (and it even makes Secure Boot less useful of a protection mechanism because it means that machines where users only want to trust Microsoft also necessarily trust Linux and every other EFI binary they sign -- there is no user-controlled segmentation of trust, which is the classic CA/PKI problem). I don't personally know how the Secure Boot / UEFI people at Microsoft feel about this, but I wouldn't be surprised if they also dislike the situation we are all in today.

Basically none of these issues actually apply to TPMs, which are more akin to limited HSMs where the keys and policies are all fundamentally user-controlled in a programmatic way. It also doesn't apply to what we are building either, but we need to finish building it before I can prove that to you.

[1]: https://github.com/rhboot/shim-review

5d41402abc4b 1/28/2026||||
What was it that the Google founders said about not adding advertisements to Google search?
curt15 1/28/2026||||
> The models we have in mind for attestation are very much based on users having full control of their keys.

If user control of keys becomes the linchpin for retaining full control over one's own computer, doesn't it become easy for a lobby or government to exert control by banning user-controlled keys? Today, such interest groups would need to ban Linux altogether to achieve such a result.

wooptoo 1/28/2026||||
> The models we have in mind for attestation are very much based on users having full control of their keys.

FOR NOW. Policies and laws always change. Corporations and governments somehow always find ways to work against their people, in ways which are not immediately obvious to the masses. Once they have a taste of this there's no going back.

Please have a hard and honest think on whether you should actually build this thing. Because once you do, the genie is out and there's no going back.

This WILL be used to infringe on individual freedoms.

The only question is WHEN? And your answer to that appears to be 'Not for the time being'.

dTal 1/27/2026||||
Thanks for the reassurance, the first ray of sunshine in this otherwise rather alarming thread. Your words ring true.

It would be a lot more reassuring if we knew what the business model actually was, or indeed anything else at all about this. I remain somewhat confused as to the purpose of this announcement when no actual information seems to be forthcoming. The negative reactions seen here were quite predictable, given the sensitive topic and the little information we do have.

inetknght 1/28/2026||||
Can I build my own kernel and still use software that wants attestation?
surajrmal 1/28/2026||
Do you have a way to tell the software to trust your kernel? If so, yes. Things like the web show how we can achieve distributed trust.
account42 1/28/2026|||
"Trust" has become such an orwellian word in tech.
cferry 1/28/2026|||
That's the thing. I can only provide a piece of software with the guarantee it can run on my OS. It can trust my kernel to let it run, but shouldn't expect anything more. The editor is free to run code it wants to guarantee the integrity of on its own infrastructure; but whatever reaches my machine _may_ at best run as the editor intends.
account42 1/28/2026||||
> I've been a FOSS guy my entire adult life, I wouldn't put my name to something that would enable the kinds of issues you describe.

The road to hell is paved with good intentions.

endgame 1/28/2026||||
That's not the intention, but how do you stop it from being the effect?
trelane 1/28/2026||||
Glad to hear it! I am not surprised given the names and the fact you're at FOSDEM.
michaelmrose 1/27/2026||||
This is extremely bad logic. The technology of enforcing trusted software is without inherent value good or ill depending entirely on expected usage. Anything that is substantially open will be used according to the values of its users not according to your values so we ought instead to consider their values not yours.

Suppose you wanted to identify potential agitators by scanning all communication for indications in a fascist state one could require this technology in all trusted environments and require such an environment to bank, connect to an ISP, or use Netflix.

One could even imagine a completely benign usage which only identified actual wrong doing alongside another which profiled based almost entirely on anti regime sentiment or reasonable discontent.

The good users would argue that the only problem with the technology is its misuse but without the underlying technology such misuse is impossible.

One can imagine two entirely different parallel universes one in which a few great powers went the wrong way in part enabled by trusted computing and the pervasive surveillance enabled by the capability of AI to do the massive and boring task of analyzing a massive glut of ordinary behaviour and communication + tech and law to ensure said surveillance is carried out.

Even those not misusing the tech may find themselves worse off in such a world.

Why again should we trust this technology just because you are a good person?

michaelmrose 1/28/2026|||
TLDR We already know how this will be misused to take away people's freedom not to run their own software stack but to dissent against fascism. It's immoral to build even with the best intentions.
qmr 1/28/2026||||
What engineering discipline?

PE or EIT?

quotemstr 1/28/2026|||
You're providing mechanism, not policy. It's amazing how many people think they can forestall policies they dislike by trying to reject mechanisms that enable them. It's never, ever worked. I'm glad there are going to be more mechanisms in the world.
enriquto 1/27/2026|||
half of the founders of this thing come from Microsoft. I suppose this makes the answer to your question obvious.
stackghost 1/27/2026|||
My thoughts exactly. We're probably witnessing the beginning of the end of linux users being able to run their own kernels. Soon:

- your bank won't let you log in from an "insecure" device.

- you won't be able to play videos on an "insecure" device.

- you won't be able to play video games on an "insecure" device.

And so on, and so forth.

dijit 1/27/2026|||
Unfortunately the parent commenter is completely right.

The attestation portion of those systems is happening on locked down devices, and if you gain ownership of the devices they no longer attest themselves.

This is the curse of the duopoly of iOS and Android.

BankID in Sweden will only run with one of these devices, they used to offer a card system but getting one seems to be impossible these days. So you're really stuck with a mobile device as your primary means of identification for banking and such.

There's a reason that general purpose computers are locked to 720p on Netflix and Disney+; yet AppleTV's are not.

yxhuvud 1/27/2026|||
Afaik bankid will actually run as long as you can install play store (IE the device don't need Google certificate), which isn't great but a little bit better than what it could have been.
gcr 1/28/2026||
That can't be right. My onyx boox note air 2 eInk tablet lets me install the google play store by registering myself as an AOSP developer and enrolling my device's serial number or GSF identifier with Google using some Google Form that some android team somewhere's automated by now. The device has no hardware security features from what I can tell. There's no way this platform would pass muster with any bank.
VorpalWay 1/28/2026|||
At least BankId (digital ID thing in Sweden) and some of the Swedish banking apps don't care about if you are rooted on stock Android. I haven't tried custom ROMs in many years, but perhaps it is time for GrapheneOS these days.

Now, if you want to use your phone as a debit/credit card substitute that is different (Google Pay cares, and I don't use it thus).

Anyway, why should banking apps care? It is not like they care when I use the bank from Firefox on my Linux laptop.

dotancohen 1/28/2026||||
I have the successor device, the Boox Note Air 2, and don't remember how I installed Google Play on it, it was so easy as to be not even notable. Though almost everything I use is available on F-Droid other than my fancy calendar and contacts applications.
seba_dos1 1/28/2026||||
> There's no way this platform would pass muster with any bank

"Any bank"? Although the bank I use locks NFC payments behind such checks (which is not a big loss since a physical debit card offers the same functionality), anything else still works otherwise. Most of the things are available through the website (which fits well on mobile too), and mobile BLIK payments can be done from the Android app which works inside Waydroid with microG.

There's no reason other banks can't work the same way and it's outraging when they don't. Look around for a better bank.

direwolf20 1/28/2026|||
The bank doesn't have to actually be secure, only tick certain boxes.
LtWorf 1/28/2026||||
I just received by mail a card to replace my soon expiring one… (not a debt card, the one to do internet banking and so on).

However the problem is that A LOT of things only work with the mobile app.

ahepp 1/28/2026|||
as you say, a lot of this stuff is already happening. Won’t it be good to have a FOSS attestation stack that breaks the iOS/android duopoly?
AnthonyMouse 1/28/2026|||
Banks don't use these things because they provide any real security. They use them because the platform company calls it a "security feature" and banks add "security features" to their checklists.

The way you defeat things like that is through political maneuvering and guile rather than submission to their artificial narrative. Publish your own papers and documentation that recommends apps not support any device with that feature or require it to be off because it allows malware to use the feature to evade malware scans, etc. Or point out that it prevents devices with known vulnerabilities from being updated to third party firmware with the patch because the OEM stopped issuing patches but the more secure third party firmware can't sign an attestation, i.e. the device that can do the attestation is vulnerable and the device that can't is patched.

The way you break the duopoly is by getting open platforms that refuse to support it to have enough market share that they can't ignore it. And you have to solve that problem before they would bother supporting your system even if you did implement the treachery. Meanwhile implementing it makes your network effect smaller because then it only applies to the devices and configurations authorized to support it instead of every device that would permissionlessly and independently support ordinary open protocols with published specifications and no gatekeepers.

faust201 1/28/2026|||
Well summarised.

Another point is (often )the apps that banks makes are 3rd party developed by outsourcing (even if within the same developed country). If someone uses some MiTM or logcat to see some traffic and publishes it then banks get bad publicity. So to prevent this the banks, devs tell anything that is not normal (i.e) non-stock ROM is bad.

FOSS is also something many app-based software devs don't like on their products. While people in cloud, infra like it the app devs like these tools while developing or building a company but not when making end resulting apps.

UltraSane 1/28/2026||||
Remote attestation absolutely provides increased security. Mobile banking fraud rates are substantially lower than desktop/browser banking fraud. Attestation is major reason why.

I think ever compute professional needs to spend at least a year trying to secure a random companies windows network to appreciate how impossible this actually is without hardware based roots of trust like TPMs and HSMs

garaetjjte 1/28/2026||
>Attestation is major reason why.

It's not. Mobile applications just don't have unrestricted access to everything in your user directory, attestation have nothing to do with it.

AnthonyMouse 1/28/2026||
It's not even that. The main reason is probably that attackers are going to be writing code to automate their attacks, and desktops are easier to develop on than phones, so that's what they use with no reason to do otherwise.

Even if you stopped supporting desktops, then they would just reverse engineer the mobile app instead of the web app and extract the attestation keys from any unpatched model of phone and still run their code on a server, and then it would show up as "mobile fraud" because they're pretending to be a phone instead of a desktop, when in reality it was always a server rather than a phone or a desktop.

And even if attestation actually worked (which it doesn't), that still wouldn't prevent fraud, because it only tries to prove that the person requesting the transfer is using a commercial device. If the user's device is compromised then it doesn't matter if it can pass attestation because the attacker is only running the fake, credential stealing "bank app" on the user's device, not the real bank app. Then they can run the official bank app on an official device and use the stolen credentials to transfer the money. The attestation buys you nothing.

jofla_net 1/28/2026||
All this theatre is turning out to be nothing more than giving up the agency we have today (nice things), for a risk averse kneejerk runaround with glaring ulterior motives...just like the scan your face+id push for services.
UltraSane 1/29/2026||
Would YOU be willing to use a bank that refused to use TLS? I didn't think so. How is you refusing to accept remote attestation and the bank refusing to connect to you any different?
jofla_net 7 days ago|||
Because Banking has existed and operated fine for countless decades without it(attestation).

Also, as there is ample discussion elsewhere, having attestation does NOT eliminate the ability for your account to become compromised.

As restated.

"If the user's device isn't compromised then everything is fine regardless of whether or not it can pass attestation. If the user's device is compromised, the device doesn't need to pass attestation to run a fake bank app and steal the user's credentials. Once the attacker has the user's credentials they can use them to transfer money regardless of whether or not they have to use a different device that can pass attestation.

It doesn't really provide any security."

IT DOES however completely rewrite the paradigm of general purpose computing in very asymmetrical ways.

UltraSane 6 days ago||
Stop ignoring my question. If it is OK for YOU to refuse to use a bank that doesn't use TLS then why isn't it OK for a bank to refuse you as a customer if you refuse to agree to remote attestation? Both parties have the right to specify reasonable security postures and either mutually agree or not.
rkomorn 6 days ago||
Not OP, and also not sure where I actually stand on this debate because I think your point has a lot of validity to it, but...

I think there's also an argument in favor of a person having the right to access their money (and I'd argue that accessing your bank's website/app is accessing your money) however they want, and that access to their money is more of an important right than the bank's right to control how that access happens.

I think we can all agree to some "within reason" clauses on both sides (eg not allowing HTTP only access seems reasonable), and I guess a lot of this debate is "is requiring attestation within reason?"

To me, any asymmetry between the rights of the consumer and the rights of the bank should be in the favor of the consumer.

garaetjjte 4 days ago|||
Because it's not about security, and bank doesn't own my device. If it was, I should be able to supply the bank my own attestation keys.
mariusor 1/28/2026|||
Are you saying that attestation doesn't really provide any real security? Not even from the bank's point of view?
AnthonyMouse 1/28/2026||
If the user's device isn't compromised then everything is fine regardless of whether or not it can pass attestation. If the user's device is compromised, the device doesn't need to pass attestation to run a fake bank app and steal the user's credentials. Once the attacker has the user's credentials they can use them to transfer money regardless of whether or not they have to use a different device that can pass attestation.

It doesn't really provide any security.

On top of that, there are tons of devices that can pass attestation that have known vulnerabilities, so the attacker could just use one of those (or extract the keys from it) if they had any reason to. But in the mobile banking threat model they don't actually need to.

mariusor 1/28/2026|||
So do we just give up because it's too hard?
AnthonyMouse 7 days ago||
It's not a matter of being hard. It's like trying to prevent theft by forcing everyone to wear a specific brand of shoes. The fact that the shoe company insists that it's useful is not evidence that it is.

It's not that you can't solve the problem, it's that you can't solve the problem using that mechanism. Attestation is useless for this.

The thing that would actually work for this is to have an open standard supported by PCs and phones to read the chip in payment/ATM cards, because then you could do "card-present" transactions remotely. You touch your card to the phone/PC and enter your PIN to authorize a new merchant. That actually solves the problem because then instead of the bank trusting every commercially available phone on the market, they only trust the specific card that they mailed to the cardholder, and you can only authorize a new merchant with physical possession of the card because it contains a private key. But that doesn't require attestation because then you don't need the keys to be in the phone since they're in the card, and it doesn't require a third party to sign anything because the bank puts the private key into the card before sending it to the cardholder without any need for Google or Apple to certify anything.

mariusor 5 days ago||
From what I can take from your reply I suspect you might not understand what attestation is for.

Yes you can use a chip that the bank trusts (that's your card), however the bank wants to trust that the hardware you use to read that chip is not compromised and does not try to do things on the behalf of the user that the user didn't authorize. A non trusted device can operate in a different way than the user demands of it, and the user might never know.

That's the use case that hardware attestation can prevent. Or so the theory says...

jofla_net 1/28/2026|||
My head hurts now...
severino 1/28/2026||||
Well, it depends. I can now do banking from my desktop computer because there is no way our banks can attest that we're running our browsers in their approved hardware+software stack. Of course they can already disable banking from the browser but if they choose to keep it open but require attestation in your browser when it becomes possible, I don't think it's a good thing.
faust201 1/28/2026||||
It would but how and who to run it? Ideally some one like Linux Foundation sits on the White house meetings or EU meetings. But they don't. Govts don't understand. I was once participating in a Youth meeting with MEPs - most of them have only iPhones. Most (not all) lawmakers live on a different planet.

Also IIRC, linux foundation etc are not interested in doing such standardisations.

uecker 1/28/2026|||
No
anonym29 1/28/2026||||
Torrenting is becoming more popular again. The alternative to being allowed to pay to watch on an "insecure" device isn't switching to an attested device, it's to stop paying for the content at all. Games industry, same thing (or just play the good older games, the new ones suck anyway).

Finances, just pay everything by cheque or physical pennies. Fight back. Starve the tyrants to death where you can, force the tyrants to incur additional costs and inefficiencies where you can't.

seba_dos1 1/27/2026||||
This is already the world we live in when it comes to the most popular personal computing devices running Linux out there.
stefan_ 1/28/2026||
This is already the world you live in just running some recent Ubuntu. Try writing, building and loading a kernel module!

Of course its all nonsense make believe, the "trust root" is literally a Microsoft signed stub. For this dummy implementation you can't modify your own kernel anymore.

plagiarist 1/28/2026||
And you cannot remove it on every motherboard because some of the firmware blobs are signed. You cannot remove their keys and leave only your own.
JasonADrury 1/28/2026|||
Is the joke here that all of those things have already been happening for a while now?
blibble 1/27/2026|||
that's a silver lining

the anti-user attestation will at least be full of security holes, and likely won't work at all

sam_lowry_ 1/27/2026||
Dunno about the others but Pottering has proven himself to deliver software against the grain.
dijit 1/27/2026|||
You think?

It took us nearly a decade and a half to unfuck the pulseaudio situation and finally arrive at a simple solution (pipewire).

SystemD has a lot more people refining it down but a clean (under the hood) implementation probably won't be witnessed in my lifetime.

PaulDavisThe1st 1/28/2026|||
anyone who thinks that pipewire - pipewire! - is "a simple solution" understands nothing about pipewire.

don't get me wrong, i use pipewire all day every day, and wrote one of the APIs (JACK) that it implements (pretty well, too!).

but pipewire is an order of magnitude more complex than pulseaudio.

herewulf 1/28/2026|||
As an end user hand assembling desktop services on non-Systemd distros (Artix, Devuan, Gentoo, Guix) over the years, and thus had no concern about APIs, Pipewire just works and PulseAudio gave endless trouble.

My 0.02 bits.

account42 1/28/2026||
As another user on Gentoo, pipewire is a never ending pain in the ass full of "magic" behavior and weird bugs. I mostly skipped pulse though so it may be simple in comparison to that.
blibble 1/27/2026||||
yeah, the fix for pulseaudio was to throw it away entirely

for systemd, I don't think I have a single linux system that boots/reboots reliably 100% of the time these days

xorcist 1/28/2026|||
There were dozens of other init systems that, like systemd, wasn't a shell script.

What set systemd apart is the collection of tightly integrated utilities such as a dns resolver, sntp client, core dump handler, rpc-like api linking to complex libraries in the hot path and so on and so forth that has been a constant stream of security exploits for over a decade now.

This is a case where the critics were proven to be right. Complexity increases the cognitive burden.

sam_lowry_ 1/28/2026|||
What set systemd apart was RedHat, and now Pottering repeats the old trick with Microsoft behind his back.

I think he will succeed and we will be worse off, collectively.

jacquesm 1/28/2026||||
As predicted. I thought pulseaudio should have been enough of a lesson. Besides that, any person that works on open source but that joins Microsoft is not in the camp that should have a say in the overall direction of Linux.
bulatb 1/28/2026||
"People don't learn lessons" is a lesson that people don't learn.
PunchyHamster 1/28/2026|||
that on itself is not a problem. The problem is that those work worse.

For example, the part of systemd that fills DNS will put them in random order (like actual random, not "code happened to dump it in map order)

The previous, while very much NOT perfect, system, put the DNSes in order of one in latest interface, which had useful side-feature that if your VPN had different set of DNSes, it got added in front

The systemd one just randomizes it ( https://github.com/systemd/systemd/issues/27543 ) which means that using standard openvpn wrapper script for it will need to be reran sometimes few times to "roll" the right address, I pretty much have to run

     systemctl restart systemd-resolved ; sleep 1 ; cat /etc/resolv.conf
half of the time I connect to company's VPN

The OTHER problem is pervasive NIH in codebase.

Like, they decided to use binary log format. Okay, I can see advantages, it can be indexed or sharded for faster access to app's files...

oh wait it isn't, if you want to get last few lines of a service the worst case is "mmap every single journal file for hundreds of MBs of reads"

It can be optimized so some long but constant fields like bootid are not repeated...

oh wait it doesn't do that either, is massively verbose. I guess I can understand it, at least that would make it less crash-proof...

oh wait no, after crash it just spams logs that previous log file is corrupted and it won't be used.

So we have a log format that only systemd tools can read, takes few times as much space per line as text or even JSON version would, and it still craps out on unclean shutdown

They could've just integrated SQLite. Hell I literally made a lil prototype that took journalctl logs and wrote it to indexed SQLite file and it was not only faster but smaller (as there is no need to write bootid with each line, and log lines can be sharded or indexed so lookup is faster). But nah, Mr. Poettering always wanted to make a binary log format so he did.

dijit 1/27/2026|||
The trick is the same: use a popular linux distribution and don't fight the kinks.

The people who had no issues with Pulseaudio; used a mainstream distribution. Those distributions did the heavy lifting of making sure stuff fit together in a cohesive way.

SystemD is very opinionated, so you'd assume it wouldn't have the same results, but it does.. if you use a popular distro then they've done a lot of the hard work that makes systemd function smooth.

I was today years old when I realised this is true for both bits of poetter-ware. Weird.

blibble 1/27/2026|||
I only use debian

pulseaudio I had to fight every single day, with my "exotic" setup of one set of speakers and a headset

with pipewire, I've never had to even touch it

systemd: yesterday I had a network service on one machine not start up because the IP it was trying to bind to wasn't available yet

the dependencies for the .service file didn't/can't express the networking semantics correctly

this isn't some hacked up .service file I made, it's that from an extremely popular package from a very popular distro

(yeah I know, use a socket activated service......... more tight coupling to the garbage software)

the day before that I had a service fail to start because the wall clock was shifted by systemd-timesyncd during startup, and then the startup timeout fired because the clock advanced more than the timeout

then the week before that I had a load of stuff start before the time was synced, because chrony has some weird interactions with time-sync.target

it's literally a new random problem every other boot because of this non-deterministic startup, which was never a problem with traditional init or /etc/rc

for what? to save maybe a second of boot time

if the distro maintainers don't understand the systemd dependency model after a decade then it's unfit for purpose

jorvi 1/27/2026|||
> it's literally a new random problem every other boot because of this non-deterministic startup, which was never a problem with traditional init or /etc/rc

This gave me a good chuckle. Systemd literally was created to solve the awful race conditions and non-determinism in other init systems. And it has done a tremendous job at it. Hence the litany of options to ensure correct order and execution: https://www.freedesktop.org/software/systemd/man/latest/syst...

And outside of esoteric setups I haven't ever encountered the problems you mentioned with service files.

direwolf20 1/27/2026|||
systemd was created to solve the problems of a directory full of shell scripts. A single shell script has completely different problems. And traditional init uses inittab, which is not /etc/init.d, and works more like runit.

runit's approach is to just keep trying to start the shell script every 2 seconds until it works. One of those worse–is–better ideas, it's really dumb, and effective. You can check for arbitrary conditions and error–exit, and it will keep trying. If you need the time synced you can just make your script fail if the time is not synced.

traditional inittab is older than that and there's not any reason to use it when you could be using runit, really.

blibble 1/27/2026||||
yeah, many options that are complicated beyond the understanding of the distro maintainers, and yet still don't allow expression of common semantics required to support network services reliably

like "at least one real IP address is available" or "time has been synced"

and it's not esoteric, even ListenAddress with sshd doesn't even work reliably

the ONLY piece of systemd I've not had problems with is systemd-boot, and then it turned out they didn't write that

jorvi 1/27/2026||
> like "at least one real IP address is available" or "time has been synced"

"network-online.target is a target that actively waits until the network is “up”, where the definition of “up” is defined by the network management software. Usually it indicates a configured, routable IP address of some kind. Its primary purpose is to actively delay activation of services until the network has been set up."

For time sync checks, I assume one of the targets available will effectively mean a time sync has happened. Or you can do something with ExecStartPre. You could run a shell command that checks for the most recent time sync or forces one.

blibble 1/27/2026|||
it's the "usually" that's the problem

this service (untouched by me) had:

After=local-fs.target network-online.target remote-fs.target time-sync.target

but it was still started without an IP address, and then failed to bind

just like this sort of problem: https://github.com/systemd/systemd/issues/4880#issuecomment-...

the entire thing is unreliable and doesn't act like you'd expect

> Or you can do something with ExecStartPre. You could run a shell command that checks for the most recent time sync or forces one.

at that point I might as well go back to init=/etc/rc

jorvi 1/28/2026|||
Are you running this particular unit file as a user unit or a system unit? Some targets like network-online.target don't work from user unit files.

You could also try targeting NetworkManager or networkd's "wait-online" services. Or if that doesn't work, something is telling systemd that you have an IP when you don't. NetworkManager has "ipv4.may-fail" and "ipv6.may-fail" that might be errenously true.

> at that point I might as well go back to init=/etc/rc

The difference is that systemd is much better at ensuring correctness. If you write the invoked shell command properly, it'll communicate failure or success correct and systemd will then communicate that state to the unit. It's still a lot more robust than before.

blibble 1/28/2026|||
it's a system service file

the problem is systemd

> The difference is that systemd is much better at ensuring correctness.

yeah, whatever mate

jorvi 1/28/2026||
Seems like you have an axe to grind with systemd because it replaced your familiar (but extremely cruddy) init system and now you refuse to debug the problem because you prefer being able to blame systemd.

There is so much granularity and flexibility in what you can do it seems rather unlikely you cannot make it happen correctly. And if it is truly a bug... open an issue? They're rather responsive to it. And it isn't like the legacy init systems were bug free from inception (well, lord knows they were still chock full of bugs even when they were replaced).

Edit: sitting here with a grin .. HN downvoting the advice of checking logs, debugging and opening an issue. I wish the companies y'all work at good luck.. they'll need it.

blibble 1/28/2026||
> Seems like you have an axe to grind with systemd because it replaced your familiar (but extremely cruddy) init system and now you refuse to debug the problem because you prefer being able to blame systemd.

I'm a pragmatist: I just want it to work

my solution to MULTIPLE different services failing to IP bind is to turn on the non-local ip binding sysctl, bypassing systemd's brokenness entirely

> There is so much granularity and flexibility in what you can do it seems rather unlikely you cannot make it happen correctly.

I've written an init before (in C), I know how the netlink interface to set an IP address and add routing table entries works

I understand the difference between monotonic and wall clocks

I understand the difference between Wants and Require

I know what's going on at every, single, level

and I can't stand how unreliable systemd makes nearly every single one of my, bluntly, completely vanilla systems

> And if it is truly a bug... open an issue?

did you read the link I pasted earlier?

I'm not wasting my time with that level of idiocy (from LP himself)

direwolf20 1/28/2026|||
> Some targets like network-online.target don't work from user unit files.

So basically it just doesn't work sometimes for no particular reason.

> The difference is that systemd is much better at ensuring correctness

Uh, well, you just said that it isn't, because some targets like network-online.target don't work from user unit files.

magicalhippo 1/28/2026||||
> https://github.com/systemd/systemd/issues/4880

I'm not a systemd hater or anything, but I continue to read stuff from Poettering which to me is deeply disturbing given the programs he works on.

Saying it's not a bug that service is launched despite a stated required prerequisite dependency failed... WTF?

Sure, I agree with him that most computers should probably boot despite NTP being unable to sync. But proposing that the solution to that is breaking Requires is just wild to me.

jcgl 1/28/2026||
I'm not sure I understand why you think the solution proposed there is so bad.

The question in that issue is around the semantics of time-sync.target. Targets are synchronization points for the system and don't (afaik) generally make promises about the units that are ordered before them (in this case chrony-wait.service.

Does that answer your specific objection of "proposing that the solution to that is breaking Requires is just wild to me"? Basically, what is proposed in that issue is not breaking Requires=. The proposition is that the user add their own, specific Requires= as a drop-in configuration since that's not a generally-applicable default.

magicalhippo 1/28/2026||
No, that does not make sense, because it goes against the systemd documentation.

Targets[1]: Target units do not offer any additional functionality on top of the generic functionality provided by units. They merely group units, allowing a single target name to be used in Wants= and Requires= settings to establish a dependency on a set of units defined by the target, and in Before= and After= settings to establish ordering.

boot-complete.target[2]: Order units that shall only run when the boot process is considered successful after the target unit and pull in the target from it, also with Requires=.

Note use of "only run" with a reference to Requires=.

time-sync.target[3]: This target provides stricter clock accuracy guarantees than time-set.target (see above), but likely requires network communication and thus introduces unpredictable delays. Services that require clock accuracy and where network communication delays are acceptable should use this target.

Especially note the last sentence there.

The documentation clearly indicates that targets can fail, and that services that needs the target to be successful, should use Requires= to specify that.

If the above is not true, the Requires= and Targets documentation should be rewritten to explicitly say that targets might fulfill Requires= regardless of state. Also, the documentation for time-sync.target should explicitly remove the last sentence and instead state there is no functional difference between Requires=time-sync.target and Wants=time-sync.target, it is best-effort only.

[1]: https://www.freedesktop.org/software/systemd/man/latest/syst...

[2]: https://www.freedesktop.org/software/systemd/man/latest/syst...

[3]: https://www.freedesktop.org/software/systemd/man/latest/syst...

jcgl 1/28/2026||
That seems like a fair point about the documentation! As far as I can see, you're right.
magicalhippo 1/28/2026||
So that's why I find his statements disturbing.

If he really don't want targets to deliver failed/success guarantees, then they've massively miscommunicated in their documentation. That in my book is a huge deal.

In either case the issue should in no circumstance be casually dismissed as not-a-bug without further action.

jcgl 1/28/2026||
I don't personally find it as disturbing as you do, I think. Which isn't to say that I don't think it should be fixed, etc. etc.

I'm sure the project would accept a documentation patch to amend this discrepancy. At the end of the day (despite what some people on the internet might like to allege), systemd is a free software project that, despite having (more or less) a BIFL, is ultimately a relatively bazaar-like project.

Though since these targets and unit properties are very core to systemd-the-service manager, I do think that this is a bigger documentation oversight than most.

magicalhippo 1/28/2026||
The disturbing part isn't the bug in time-sync.target or documentation, the disturbing part is how casually he brushes the issue away.

To me this is a huge red flag for a senior contributer to a core systems component, signalling some fundamental lack of understanding or imagination.

I very much disagree with not fixing time-sync.target, but if he had instead written a well-reasoned explanation for why time-sync.target should not propagate failed states and flagging it as a documentation bug, then that's something I'd respect and would be fine with. Or, even better IMHO, he'd fix time-sync.target and state that users who wants to boot regardless should use Wants instead.

direwolf20 1/27/2026||||
Is it possible for network-online to mean that, or does network-on actually mean that?

It is possible for a specification to be so abstract that it's useless.

jcgl 1/28/2026||
That's entirely defined by whatever units order themselves before network-online.target (normally a network management daemon like NetworkManager or systemd-networkd). systemd itself doesn't define the details; that's left up to how that distro and sysadmin have configured the network manager/system.
bandrami 1/28/2026|||
Sysadmins really hate the word "usually", and that is at the root of just about every systemd headache I've had
ethin 1/28/2026|||
Same. I run a server with a ton of services running on it which all have what I think are pretty complex dependency chains. And I also have used Linux with systemd on my laptop. Systemd has never, once, caused me issues.
jacquesm 1/27/2026||||
I can totally relate to this, it's gotten to the point that I'm just as scared of rebooting my Linux boxes as I was of rebooting my windows machine a couple of decades ago. And quite probably more scared.
blibble 1/27/2026|||
everyone attacking Microslop for a bug where Windows won't shut down properly

well, systemd's got them beat there!

direwolf20 1/27/2026||
The good thing about systemd or any other Linux software is that you don't have to use it, until this company gets off the ground.
jacquesm 1/28/2026||
I think at some point we will see a steep increase in value of old hardware that can still run unsigned binaries.
direwolf20 1/28/2026||
It won't be able to interact with any online services like Google or Hacker News.
herewulf 1/28/2026|||
Ah, we will get more done. Or maybe just see you on the mailing list and IRC?!
rsync 1/28/2026||||
You will always be able to interact with rsync.net …

… and the warrant canary we publish every Monday morning.

jacquesm 1/28/2026|||
Google I can live without ;)
esseph 1/28/2026|||
What distro?
jacquesm 1/28/2026||
The box that I'm worried about in particular is running RedHat.

Ubuntu boxes: usually ok as long as you stay away from anything python related in the core system.

1vuio0pswjnm7 1/28/2026||||
"for what? to save a second of boot time"

Doubtful the motivation was /etc/rc being too slow

daemontools, runit, s6 solve that problem

jacquesm 1/28/2026||
The only parties that really cared about boot time were the big hosting providers and container schleppers. For desktop linux it never mattered as much.
braincat31415 1/28/2026||||
For me, randomly missing NFS mounts after boot were the last straw. I could not solve this problem. I am back on sysv init.
smm11 1/28/2026||
This. If you set an NFS share, it better be there forever and ever.
bee_rider 1/28/2026||||
PipeWire is like 10 years newer than PulseAudio. It probably had a chance to learn some lessons!

IIRC before PulseAudio we had to mess around with ALSA directly (memory hazy, it was a while ago). It could be a bit of a pain.

ahartmetz 1/28/2026|||
PipeWire was also made by a guy with a lot of multimedia experience (GStreamer).

ALSA was kind of OK after mixing was enabled by default and if you didn't need to switch outputs of a running application between anything but internal speakers and headphones (which worked basically in hardware). With any additional devices that you could add and remove, ALSA became a more serious limitation, depending. You could usually choose your audio devices (including microphones) at least at the beginning of a video conference / playing a movie etc, but it was janky (unreliable, list of 20 devices for one multi-channel sound card) and needed explicit support from all applications. Not sure if it ever worked with Bluetooth.

Sophira 1/28/2026||
> Not sure if it ever worked with Bluetooth.

It does, with the help of BlueALSA[0].

[0] https://github.com/arkq/bluez-alsa

fao_ 1/28/2026||||
I remember ALSA. Sure, it was finnicky to use `alsamixer` to unmute the master channels now and then, but I personally never had any trouble with it.
account42 1/28/2026|||
I still need to use alsamixer to unmute my headphones after accidentally unplugging them and plugging them in again fails to do so. That's with PipeWire - never had that problem with just ALSA.
fao_ 1/29/2026||
Eh, I had to do that with pulseaudio too, but constantly, across all distros and headphones. Pipewire is shonky, I have to restart now and then on my steam deck (I'm using it as a desktop), but it's still much better than pulseaudio. Even ALSA was better than pulseaudio lol
ahartmetz 6 days ago||
For most of the (sadly not shorter) life of PulseAudio, ALSA was more reliable, but at some point, Firefox got a new audio backend that straight up dropped support for ALSA, and a few games started crashing with backtraces indicating audio trouble when not run with PulseAudio. I've had to deal with PulseAudio's dropouts under load, latencies and lockups for 2-3 years before PipeWire became a viable replacement.
sam_lowry_ 1/28/2026|||
Alsa with dmix is my current setup on ArchLinux.
jjmarr 1/28/2026|||
I installed Gentoo in 2014 and getting PulseAudio working was much easier than ALSA. It was also much better.

I get ALSA followed the Unix philosophy of doing one thing but I want my audio mixer to play multiple sounds at once.

account42 1/28/2026||
Gentoo in 2014 had dmix enabled by default without the need for any user configuration. I know because I was using it.
jjmarr 1/28/2026||
I got stuck for two weeks installing the kernel because I forgot to mount /boot. Perhaps I disabled it by accident when goofing around in alsamixer? Or my card did or didn't have hardware mixing?

I didn't actually know anything about Linux at the time and started with Gentoo because I saw a meme saying "install Gentoo" and people told me not to start with that distro. So it's possible I messed up the default config by accident.

Either way PulseAudio worked after I emerged it.

esseph 1/28/2026|||
Debian is a darling for which I will always love, but it's inability to deal with systemd is one of the prime reasons I left.

I am not seeing these kind of systemd issues with Fedora / RHEL.

It just works

jacquesm 1/28/2026||
That's because systemd originated at RedHat. If it had been designed distribution agnostic it would have worked a lot better on other distros besides RH.
NekkoDroid 1/28/2026||
What are the non-distribution agnostic parts of systemd? Considering it runs as PID1 (usually) it kinda is the base of distros and not really built on top of any distro other than "the linux kernel".
Brian_K_White 1/28/2026||||
"The trick is the same: use a popular linux distribution and don't fight the kinks."

I believe that you are genuinely being sincere here, thinking this is good advice.

But this is an absolutely terrible philosophy. This statement is ignorant as well as inconsiderate. (again, I do believbe you don't intend to be inconsiderate consciously, that is just the result.)

It's ignorant of history and inconsiderate of everyone else but yourself.

Go back a few years and this same logic says "The trick is, just use Windows and do whatever it wants and don't fight."

So why in the world are you even using Linux at all in the first place with that attitude? For dishonest reasons (when unpacked to show the double standard).

Since you are using Linux instead of Windows, then you actually are fine with fighting the tide. You want the particular bits of control you want, and as long as you are lucky enough to get whatever you happen to care about without fighting too much, then you have no sympathy for anyone else who cares aboiut anything else.

You don't see yourself as fighting any tides because you are benefitting from being able to use a mainstream distro without customizing it. But the only reason you get to enjoy any such thing at all in the first place is because a lot of other people before you fought the tide to bring some mainstream distros into existence, and actually use them for ordinary activities enough despite all the difficulties, to force at least some companies and government agencies to acknowledge them. So now you can say things like "just use a mainstream distro as it comes and don't try to do what you actually want".

Sophira 1/28/2026|||
> Go back a few years and this same logic says "The trick is, just use Windows and do whatever it wants and don't fight."

This is basically exactly what I saw people saying in Windows subreddits. There's one post that particularly sticks out in my memory[0] that basically had everybody telling the OP to just not make any of the changes that they wanted to make. The advice seemed to revolve around adapting to the OS rather than adapting the OS to you, and it made me sad at the time.

[0] https://www.reddit.com/r/Windows10/comments/hehrqe/what_are_...

fao_ 1/28/2026|||
I read it as sarcastic and bitter, personally! I believe you are both agreeing :)
Brian_K_White 1/28/2026||
hah it fits regardless
PunchyHamster 1/28/2026|||
> The people who had no issues with Pulseaudio; used a mainstream distribution. Those distributions did the heavy lifting of making sure stuff fit together in a cohesive way.

Incorrect. I used mainstream distro, still had issues, that just solved itself moving to pipewire. Issues like it literally crashing or emitting spur of max volume noise once every few months for no discernable reason.

Pulseaudio also completely denies existence of people trying to do music on Linux, there is no real way to make latency on it be good.

> SystemD is very opinionated, so you'd assume it wouldn't have the same results, but it does.. if you use a popular distro then they've done a lot of the hard work that makes systemd function smooth.

Over the years of using the "opinion" of SystemD seems to be "if it is not problem on Lennart's laptop, it's not a real problem and it can be closed or ignored completely".

For example systemd have no real method to tell it "turn off all apps after 5 minutes regardless of what silly package maintainers think". Now what happens if you have a server on UPS that have say 5 minutes of battery and one of the apps have some problem and doesn't want to close?

In SysV, it gets killed, and system gets remounted read only. You have app crash recovery but at least your filesystem is clean In systemd ? No option to do that. You can set default timeout but it can be override in each service so you'd have to audit every single package and tune it to achieve that. That was one bug that was closed.

Same problem also surfaced if you have say app with a bug that prevented it from closing from sigterm and you wanted to reboot a machine. Completely stuck

But wait, there is another method, systemd have an override, you can press (IIRC) ctrl+alt+delete 7 times within 2 seconds to force it to restart ( which already confuses some people that expect it to just restart machine clean(ish) regardless https://github.com/systemd/systemd/issues/11285 ).

...which is also impossible if your only method of access is software KVM where you need to navigate to menu to send ctrl+alt+del. So I made ticket with proposal to just make it configurable timeout for the CAD ( https://github.com/systemd/systemd/issues/29616 ), the ticket wasn't even read completely because Mr. Poettering said "this is not actionable, give a proposal", so I pasted the thing he decided to ignore in original ticket, and got ignored. Not even "pull requests welcome" (which I'd be fine with, I just wanted confirmation that the feature like that won't be rejected if I start writing it).

There is also issue of journald disk format being utter piece of garbage ("go thru entire journal just to get app's last few lines bad", hundreds of disk reads on simple systemctl status <appname> bad) that is consistently ignored thru many tickets from different people.

Or the issue that resolvconf replacement in systemd will just roll a dice on DNS ordering, but hey, Mr. Lennart doesn't use openvpn so it's not real issue ( https://github.com/systemd/systemd/issues/27543 )

I'm not writing it to shit on systemd and praise what was before, as a piece of software it's very useful for my job as sysadmin (we literally took tens of thousands lines of fixed init scripts out because all of the features could be achieved in unit files) and I mean "saved tons of time and few demons running" in some cases, but Mr. Poettering is showing same ignorant "I know better" attitude he got scolded at by kernel maintainers.

jcgl 1/28/2026||
> Pulseaudio also completely denies existence of people trying to do music on Linux, there is no real way to make latency on it be good.

I don't care much about PA at this point tbh and don't know much about the inner workings; it always worked just fine for me. But from what I read from people more "in the know" at the time, I'd heard that a lot of the (very real) user-facing problems with PA were ultimately caused by driver and other low-level problems. Those were hacky, had poor assumptions, etc. PA ultimately exposed those failures, and largely got better over time because those problems got fixed upstream of PA.

My takeaway from what I read was basically that PA had to stumble and walk so that pipewire could run.

> For example systemd have no real method to tell it "turn off all apps after 5 minutes regardless of what silly package maintainers think". Now what happens if you have a server on UPS that have say 5 minutes of battery and one of the apps have some problem and doesn't want to close?

Add a TimeoutStopSec= to /etc/systemd/system/service.d/my-killing-dropin.conf more or less, I think? These are documented in the systemd.service and systemd.unit manpages respectively.

> Same problem also surfaced if you have say app with a bug that prevented it from closing from sigterm and you wanted to reboot a machine. Completely stuck

See the --force option on the halt, poweroff, and reboot subcommands of systemctl. The kill subcommand if you want to target that specific service.

> so I pasted the thing he decided to ignore in original ticket, and got ignored. Not even "pull requests welcome" (which I'd be fine with, I just wanted confirmation that the feature like that won't be rejected if I start writing it).

I'm certainly sympathetic to this pain point. I'd take Lennart at his word that he's not opposed. Generally speaking, from following the systemd project somewhat, it's a very busy project and it's hard for all issues to get serviced. But they're very open to PRs, generally speaking.

> Or the issue that resolvconf replacement in systemd will just roll a dice on DNS ordering, but hey, Mr. Lennart doesn't use openvpn so it's not real issue ( https://github.com/systemd/systemd/issues/27543 )

Quickly taking a peek here (and speaking as a relatively superficial user of resolved myself), isn't the proposed solution to define interface ordering?

> it will ask on all links in parallel if there's no better routing info available. In your case there is none (i.e. no ~. listed among your network interfaces), hence it will be asked on all interfaces at the same time.

mariusor 1/28/2026|||
It's baffling to me that anyone can imagine pipewire has been created from scratch without any lessons learned from pulseaudio and the previous issues the audio stack on linux had, and solved, over the years. Nothing is happening in a clean room bubble, every new project stands on the shoulders of giants...
nacozarina 1/28/2026||||
LP is the Thomas Midgley Jr of Computer Science.
wang_li 1/27/2026||||
I thought he had proven that he leaves before the project is complete and functioning according to all the promises made.
tonoto 1/28/2026||||
agent Smith, the one that don't care at all about conforming to POSIX?

"In fact, the way I see things the Linux API has been taking the role of the POSIX API and Linux is the focal point of all Free Software development. Due to that I can only recommend developers to try to hack with only Linux in mind and experience the freedom and the opportunities this offers you. So, get yourself a copy of The Linux Programming Interface, ignore everything it says about POSIX compatibility and hack away your amazing Linux software. It's quite relieving!" -- https://archive.fosdem.org/2011/interview/lennart-poettering...

mikkupikku 1/28/2026||||
Poettering gas a track record of recognizing good ideas from Apple, then implementing them poorly. He also has a track record of closing bug reports for plain and simple bugs in his software to protect his own ego, and this kind of mentality isn't a great basis for security sensitive software.

Audio server for linux: Great idea! Pulseaudio: Genuinely a terrible implementation of it, Pipewire is a drop in replacement that actually works.

Launchd but for Linux: Great idea! SystemD: generally works now at least, but packed with insane defaults and every time this is brought up with the devs they say its the distro packagers jobs to wipe SystemD's ass and clean up the mess before users see it.

Security bug in SystemD when the user has a digit in their username: Lennart closes the bug and says that SystemD is perfect, the distros erred by permitting such usernames. Insane ego-driven response.

plagiarist 1/28/2026||
He really will just close a ticket because he disagrees with how Linux works. I read about systemd sysusers and thought they would be neat for running containerized services. But Poettering doesn't like the /etc/subuid files and refuses to work with them.
NekkoDroid 1/28/2026||
Well, he specifically doens't like the static allocation of subuids. There is a reason `systemd-nsresourced` exists.
plagiarist 1/28/2026||
How do I have nsresourced work in a regular systemd service or quadlet so that I can have an ephemeral user run a container? I am trying to find information and just seeing it as part of nsspawn, that seems to require a container specifically built around a root filesystem.

I am not going to struggle with systemd if I have to build containers specifically for it. If I have to rearrange everything I am doing I would just learn to do it on a minimal Kubernetes install instead.

NekkoDroid 1/28/2026||
nspawn containers aren't really any different to regular system images/archives other than they don't need a kernel.

I don't think the setting is exposed to regular service units (it might be able to in the future, I don't know) and I don't think podman has any integration with it.

What kinda service do you have where you need a full range of UIDs?

plagiarist 1/28/2026||
I don't need a full range. I would just like to run podman under a non-root user using regular system services. Especially where a persistent volume or bind mount is involved.

Let's say Home Assistant. It would be nice to have a have some system user "homeassistant" with no home directory that owns the process and owns its /var/whereever/config.conf . It would be nice to have the isolation on host in addition to the isolation via container. But I don't want to be rebuilding any containers to get that, unless I am misunderstanding something on nsresourced.

I'd be really pleased with that setup. MQTT could be its own system user. And HA could depend on MQTT so I have nice startup behavior. Etc.

IDK how to have system users like this run a container without the subuid range. Even when I create the users with ranges in the file, there seems to be problems with informing systemd (as a non-root user) that the running process is different from the one it started.

NekkoDroid 1/28/2026||
podman quadlet doesn't seem to support running at a "system level" as a non-root user, at least according to their docs[0]. I assume they make some assumptions which wouldn't hold up if the user actually changed when running at a system level, dunno.

> But I don't want to be rebuilding any containers to get that, unless I am misunderstanding something on nsresourced.

Setting up the user namespace would be part of the container manager and not the containers themselves, so they shouldn't need any rebuilding or special handling (possibly the files might need to be shifted into the "foreign ID" range[1, 2], but I might be lying with this and this isn't necessary for this usecase) but the container manager needs to be specifically make use of nsresourced.

I really think currently the best option is to go with either systemd as your "container manager" (e.g. just regular system files with sandboxing or nspawn images or maybe systemd-portabled[3]) or podman as your container manager. As much as I too would love to mix them, I don't think it's the best idea (at least in the current state) and just go with what is more suited for the task (in your case it sounds like podman would be the most suited option).

> there seems to be problems with informing systemd (as a non-root user) that the running process is different from the one it started.

Yea, I don't think systemd likes double forking. The best option would be to keep the process that spawned your actual process alive until the child exists and just bubble up the exit code. There is the `PIDFile=` option with `Type=forking`, but I haven't used it, nor looked much into it.

[0]: https://docs.podman.io/en/v5.7.1/markdown/podman-systemd.uni...

[1]: https://www.freedesktop.org/software/systemd/man/latest/syst...

[2]: https://systemd.io/UIDS-GIDS/#special-systemd-uid-ranges

[3]: https://systemd.io/PORTABLE_SERVICES/

qmr 1/28/2026||
"At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus."
cferry 1/28/2026||
Please don't bring attestation to common Linux distributions. This technology, by essence, moves trust to a third party distinct of the user. I don't see how it can be useful in any way to end users like most of us here. Its use by corporations has already caused too much damage and exclusion in the mobile landscape, and I don't want folks like us becoming pariahs in our own world, just because we want machines we bought to be ours...
b112 1/28/2026||
A silver lining, is it would likely be attempted via systemd. This may finally be enough to kick off a fork, and get rid of all the silly parts of it.

To anyone thinking not possibile, we already switched inits to systemd. And being persnickety saw mariadb replace mysql everywhere, libreoffice replace open office, and so on.

All the recent pushiness by a certain zealotish Italian debian maintainer, only helps this case. Trying to degrade Debian into a clone of Redhat is uncooth.

majewsky 1/28/2026||
> A silver lining, is it would likely be attempted via systemd. This may finally be enough to kick off a fork, and get rid of all the silly parts of it.

This misunderstands why systemd succeeded. It included several design decisions aimed at easing distribution maintainers' burdens, thus making adoption attractive to the same people that would approve this adoption.

If a systemd fork differentiates on not having attestation and getting rid of an unspecified set of "all the silly parts", how would they entice distro maintainers to adopt it? Elaborating what is meant by "silly parts" would be needed to answer that question.

bmn__ 1/28/2026|||
[flagged]
LtWorf 1/28/2026|||
It was also heavily pushed by Red Hat by making everyone's lives harder if they didn't support it.
esjeon 1/28/2026|||
Attestation is a critical feature for many H/W companies (e.g. IoT, robotics), and they struggle with finding security engineers who expertise in this area (disclaimer: I used to work as a operating system engineer + security engineer). Many distros are not only designed for desktop users, but also for industrial uses. If distros ship standardized packages in this area, it would help those companies a lot.
wolvoleo 1/28/2026|||
This is the problem with Linux in general. It's way too much infiltrated by our adversaries from big tech industry.

Look at all the kernel patch submissions. 90% are not users but big tech drones. Look at the Linux foundation board. It's the who's who of big tech.

This is why I moved to the BSDs. Linux started as a grassroots project but turned commercial, the BSDs started commercial but are hardly still used as such and are mostly user driven now (yes there's a few exceptions like netflix, netgate, ix etc but nothing on the scale of huawei, Amazon etc)

surajrmal 1/28/2026|||
Linux has been majority developed by large tech companies for the last 20+ years. If not for them, it would not be anywhere close to where it is today. You may not like this fact, but it's not really a new development nor something that can be described as infiltration. At the end of the day, maintaining software without being paid to do so is not generally sustainable.
account42 1/28/2026||
Considering some of the changes to the ecosystem in the last 20 years it's not clear that this has made things better.
preisschild 1/28/2026||
It is very clear that this has made things better

A lot more programs are available for linux, drivers and subsystems have gotten better, more features that benefit everyone (such as eBPF) and more

password4321 1/28/2026||||
> This is why I moved to the BSDs. Linux started as a grassroots project but turned commercial

Thanks, this may be the key takeaway from this discussion for me

axus 1/28/2026|||
As a complete guess, I would say that 90% of Linux systems are run by "big tech drones". And also by small companies using technology.

Open source operating systems are not a zero sum game. Yes there is a certain gravitational pull from all the work contributed by the big companies. If you aren't contributing "for-hire", then you choose what you want to work on, and what you want to use.

account42 1/28/2026||
Only if you count Android phones as being run by Google ... which is exactly the problem we want to avoid with our PCs.
LooseMarmoset 1/28/2026||||
> Attestation is a critical feature for many H/W companies

Like John Deere. Read about how they use that sort of thing

blacklion 1/28/2026||||
IoT and robotics should (dare I say "must"?) not use general-purpose OSes at all.

This «Linux have a finger in every pie» attitude is very harmful for industry, IMHO.

MisterTea 1/28/2026|||
General purpose operating systems are fine and in some cases, preferable. However, they should be small, simple and designed with first class portability. Linux is none of those.
fc417fc802 1/28/2026||||
Why shouldn't they use the kernel, systemd, and a few core utilities? Why reinvent the wheel? There's nothing requiring them to pull in a typical desktop userspace.
blacklion 1/29/2026||
Because different tasks requires different trade-offs and Linux has only one set of trade-offs. You cannot do good universal tool. It is like Leatherman, good enough to fix-up your bike on the side of the road, not so for normal workshop.

You say: reinvent the wheel.

I say: use pickup truck for every task, from farming to racing to commuting moving goods across continent. Is it possible? Of course. Is it good idea? I don't think so.

All cars are the same if you squint enough, wheels, engine, some frame, some controls, which are not very different between even F1 car and 18-wheel truck.

surajrmal 1/28/2026||||
I agree but it's difficult to argue against it. There is just so much you get for free by starting with a Linux distro as your base. Developing against alternatives is very expensive and developing something new is even more expensive. The best we can hope for is that someone with deep pockets invests in good alternatives that everyone can benefit from.
ahepp 1/28/2026|||
How are you defining "general-purpose OS"? Are you saying IoT and robotics shouldn't use a Linux kernel at all? Or just not your general purpose distros? I would be interested to hear more of your logic here, since it seems like using the same FOSS operating system across various uses provides a lot of value to everyone.
blacklion 1/29/2026||
I think, that I want at least hard-real-time OS in any computer which can move physical objects. Linux kernel cannot be it: hard RTOS cannot have virtual memory (mapping walks is unpredictable in case of TLB miss) and many other mechanisms which are desired in desktop/server OS are ill-suited for RTOS. Scheduler must be tuned differently, I/O must be done differently. It is not only «this process have RT priority, don't preempt it», it is design of whole kernel.

Better, this OS must be verified (as seL4). But I understand, that it is pipe dream. Heck, even RTOS is pipe dream.

About IoT: this word means nothing. Is connected TV IoT? I have no problems with Linux inside it. My lightbulb which can be turned on and off via ZigBee? Why do I need Linux here? My battery-powered weather station (because I cannot put 220v wiring in backyard)? Better no, I need as-low-power-as-possible solution.

To be honest, O think even using one kernel for different servers is technically wrong, because RDBMS, file server and computational node needs very different priories in kernel tuning too. I prefer network stack of FreeBSD, file server capabilities (native ZFS & Ko) of Solaris, transaction processing of Tandem/HPE NonStop OS and Wayland/GPU/Desktop support of Linux. But everything bar Linux is effectively dead. And Linux is only «good enough» in everything, mediocre.

I understand value of unification, but as engineer I'm sad.

modo_mario 1/28/2026||||
I'm not too big in this field but didn't many of those same IOT companies and the like struggle with the packages becoming dependent on Poeterings work since they often needed much smaller/minimal distros?
surajrmal 1/28/2026|||
I don't think this is generally true. If you are running Linux in your stack, your device probably is investing in 1GiB+ RAM and 2GiB+ of flash storage. systemd et al are not a problem at that point. Running a UI will end up being considerably more costly.
account42 1/28/2026||
I can assure you there are many Linux devices with specs significantly lower than that.
surajrmal 1/29/2026||
Sure, but devices that do that are not running a Linux distro off the shelf. They are creating something custom with the minimal amount of dependencies possible.
ahepp 1/28/2026|||
I work on embedded devices, fairly powerful ones to be fair, and I think systemd is really great, useful software. There's a ton of stuff I can do quite easily with systemd that would take a ton of effort to do reliably with sysvinit.

It's definitely pretty opinionated, and I frequently have to explain to people why "After=" doesn't mean "Wants=", but the result is way more robust than any alternative I'm familiar with.

If you're on a system so constrained that running systemd is a burden, you are probably already using something like buildroot/yocto and have a high degree of control about what init system you use.

trollbridge 1/28/2026|||
Then they can go and buy some other OS like VxWorks.
jnwatson 1/28/2026|||
It is already part of the most common Linux distribution, Android.
notepad0x90 1/28/2026||
Please do, I disagree with this commenter.

You already trust third parties, but there is no reason why that third party can't be the very same entity publishing the distribution. The role corporations play in attestation for the devices you speak of can be displaced by an open source developer, it doesn't need to require a paid certificate, just a trusted one. Furthermore, attestation should be optional at the hardware level, allowing you to build distros that don't use it, however distros by default should use it, as they see fit of course.

I think what people are frustrated with is the heavy-handedness of the approach, the lack of opt-out and the corporate-centric feel of it all. My suggestion would be not to take the systemd approach. There is no reason why attestation related features can't be turned on or off at install time, much like disk encryption. I find it unfortunate that even something like secureboot isn't configurable at install time, with custom certs,distro certs, or certs generated at install time.

Being against a feature that benefits regular users is not good, it is more constructive to talk about what the FOSS way of implementing a feature might be. Just because Google and Apple did it a certain way, it doesn't mean that's the only way of doing it.

cferry 1/28/2026|||
Whoever uses this seeks to ensure a certain kind of behavior on a machine they typically don't own (in the legal sense of it). So of course you can make it optional. But then software that depends on it, like your banking Electron app or your Steam game, will refuse to run... so as the user, you don't really have a choice.

I would love to use that technology to do reverse attestation, and require the server that handles my personal data to behave a certain way, like obeying the privacy policy terms of the EULA and not using my data to train LLMs if I so opted out. Something tells me that's not going to happen...

PunchyHamster 1/28/2026||||
see latest "MS just divilged disk encryption keys to govt" news to see why this is a horrid idea
ingohelpinger 1/28/2026||
I’m skeptical about the push toward third-party hardware attestation for Linux kernels. Handing kernel trust to external companies feels like repeating mistakes we’ve already seen with iOS and Android, where security mechanisms slowly turned into control mechanisms.

Centralized trust Hardware attestation run by third parties creates a single point of trust (and failure). If one vendor controls what’s “trusted,” Linux loses one of its core properties: decentralization. This is a fundamental shift in the threat model.

Misaligned incentives These companies don’t just care about security. They have financial, legal, and political incentives. Over time, that usually means monetization, compliance pressure, and policy enforcement creeping into what started as a “security feature.”

Black boxes Most attestation systems are opaque. Users can’t easily audit what’s being measured, what data is emitted, or how decisions are made. This runs counter to the open, inspectable nature of Linux security today.

Expanded attack surface Adding external hardware, firmware, and vendor services increases complexity and creates new supply-chain and implementation risks. If the attestation authority is compromised, the blast radius is massive.

Loss of user control Once attestation becomes required (or “strongly encouraged”), users lose the ability to fully control their own systems. Custom kernels, experimental builds, or unconventional setups risk being treated as “untrusted” by default.

Vendor lock-in Proprietary attestation stacks make switching vendors difficult. If a company disappears, changes terms, or decides your setup is unsupported, you’re stuck. Fragmentation across vendors also becomes likely.

Privacy and tracking Remote attestation often involves sending unique or semi-unique device signals to external services. Even if not intended for tracking, the capability is there—and history shows it eventually gets used.

Potential for abuse Attestation enables blacklisting. Whether for business, legal, or political reasons, third parties gain the power to decide what software or hardware is acceptable. That’s a dangerous lever to hand over.

Harder incident response If something goes wrong inside a proprietary attestation system, users and distro maintainers may have little visibility or ability to respond independently.

PunchyHamster 1/28/2026|||
I can see usefulness if the flow was "the device is unlocked by default, there are no keys/certs on it, and it can be reset to that state (for re-use purpose)"

Then the user can put their own key there (if say corporate policies demand it), but there is no 3rd party that can decide what the device can do.

But having 3rd party (and US one too!) that is root of all trust is a massive problem.

mkeeter 1/28/2026|||
oh hi ChatGPT

The giveaway is that LLMs love bulleted lists with a bolded attention-grabbing phrase to start each line. Copy-pasting directly to HN has stripped the bold formatting and bullets from the list, so the attention-grabbing phrase is fused into the next sentence, e.g. “Potential for abuse Attestation enables blacklisting”

ingohelpinger 1/28/2026||
Calling this a "giveaway" is kind of hilarious. LLMs use bulleted lists because humans have always used bulleted lists—in RFCs, design docs, and literally every tech write-up ever. Structure didn't suddenly become artificial in 2023. lol.
WD-42 1/28/2026||
Yea but humans would have fixed it, this person didn't even bother. Straight copy and paste.
wolvoleo 1/28/2026||||
It could be an open source developer yes but in practice it's always the big tech companies. Look at how this evolved in mobile phones.

It's also because content companies and banks want other people in suits to trust.

consumerxyz 1/28/2026|||
[dead]
MarkusWandel 1/27/2026||
My only experience with Linux secure boot so far.... I wasn't even aware that it was secure booted. And I needed to run something (I think it was the Displaylink driver) that needs to jam itself into the kernel. And the convoluted process to do it failed (it's packaged for Ubuntu but I was installing it on a slightly outdated Fedora system).

What, this part is only needed for secure boot? I'm not sec... oh. So go back to the UEFI settings, turn secure boot off, problem solved. I usually also turn off SELinux right after install.

So I'm an old greybeard who likes to have full control. Less secure. But at least I get the choice. Hopefully I continue to do so. The notion of not being able to access online banking services or other things that require account login, without running on a "fully attested" system does worry me.

Nextgrid 1/27/2026|
Secure Boot only extends the chain of trust from your firmware down the first UEFI binary it loads.

Currently SB is effectively useless because it will at best authenticate your kernel but the initrd and subsequent userspace (including programs that run as root) are unverified and can be replaced by malicious alternatives.

Secure Boot as it stands right now in the Linux world is effectively an annoyance that’s only there as a shortcut to get distros to boot on systems that trust Microsoft’s keys but otherwise offer no actual security.

It however doesn’t have to be this way, and I welcome efforts to make Linux just as secure as proprietary OSes who actually have full code signature verification all the way down to userspace.

nextaccountic 1/28/2026|||
here is some actual security: encrypted /boot, encrypted everything other than the boot loader (grub in this case)

sign grub with your own keys (some motherboards let you to do so). don't let random things signed by microsoft to boot (it defeats the whole point)

so you have grub in an efi partition, it passes secure boot, loads, and attempts to unlock a luks partition with the user provided passphrase. if it passed secure boot it should increase confidence that you are typing you password into the legit thing

so anyway, after unlocking luks, it locates the kernel and initrd inside it, and boots

https://wiki.archlinux.org/title/GRUB#Encrypted_/boot

the reason I don't do it is.. my laptop is buggy. often when I enable secure boot, something periodically gets corrupted (often when the laptop powers off due to low power) and when it gets up, it doesn't verify anything. slightly insane tech

however, this is still better than, at failure, letting anything run

sophisticated attackers will defeat this, but they can also add a variety of attacks at hardware level

gorgoiler 1/28/2026|||
I’d much rather have tamper detection. Encryption is great should the device is stolen but it feels like the wrong tool for defending against evil maids. All I’d want is that any time you open the case or touch the cold external ports (ie unbolted) you have to re-authenticate with a master password. I’m happy to use cabled peripherals to achieve this.

Chaining trust from POST to login feels like trying to make a theoretically perfect diamond and titanium bicycle that never wears down or falls apart when all I need is an automated system to tell me when to replace a part that’s about to fail.

nextaccountic 1/28/2026||
Encryption is just a baseline. Nobody should have unencrypted personal computers.

You can have both full disk encryption AND a tamper protection!

gorgoiler 1/28/2026||
Sorry, I wasn’t clear enough. We’re talking about three things here:

(1) Encryption: fast and fantastic, and a must-have for at-rest data protection.

It is vulnerable to password theft though. An attacker might insert evil code between power-on and disk-password-entry. With a locked down BIOS / UEFI, the only way to insert the code is to take the boot drive out of the device, modify it, put it back, and hope no one notices. “Noticing” in this case is done by either:

(2) Trust chaining: verify the signatures of the entire boot process to detect evil code.

(3) Tamper detection: verify the physical integrity of the device.

My point is that (1) is a given, and out of (2) or (3), I’d rather have the latter than deal with the shoddiness of the former

mikkupikku 1/28/2026||||
> the reason I don't do it is.. my laptop is buggy. often when I enable secure boot, something periodically gets corrupted (often when the laptop powers off due to low power) and when it gets up, it doesn't verify anything. slightly insane tech

Reminds me of my old Chromebook Pixel I wiped chromeos from. Every time it booted I had to press Ctrl-L (iirc) to continue the boot, any other keypress would reenable secure boot and the only way I knew to recover from that was to reinstall chromeos, which would wipe my linux partition and my files with it. Needless to say, that computer taught me good backup discipline...

ahepp 1/28/2026|||
Doing secure boot properly is kind of difficult. There are a bunch of TPM measurement registers for various bits and bobs (kernel, initramfs, cmdline, lots more). Using UKIs simplifies it a lot, but it’s not trivial to do right at the moment.
Nextgrid 1/28/2026||
Secure Boot and TPM are separate things. The current Secure Boot policy gets measured by the TPM but that's about it.
Fischgericht 1/27/2026||||
Yes, "just as secure as proprietary OSes" who due to failed signature verification are no longer able to start notepad.exe.

I think you might want to go re-read the last ~6 months of IT news in regards of "secure proprietary OSes".

charcircuit 1/28/2026||
Just because OpenSSL had a CVE posted about today, that didn't mean we should go back to use HTTP for the web.
lazide 1/28/2026||
It does mean we should recognize that SSL is nice for some basic privacy/security, but not perfect security.
charcircuit 1/28/2026||
Same with remote attestation. Not all implementations are actually secure. But hopefully over time those security bugs can be ironed out and the cost to extract a key be made infeasable.
direwolf20 1/28/2026||
Hopefully not. What you have just said is a synonym for "But hopefully over time manufacturers will be able to completely prevent users from running unapproved software."
charcircuit 1/28/2026||
In the case of video game consoles that could be the case. It turned out that being able to run unapproved software results mainly in people playing pirated games. These security measures are reactive to the actions other people have taken. We already experimented with computing being the wild west where there was little to no security. It turned out that bad actors will abuse anything they can find. Even if it's not economical some attackers will still cause abuse.

There's always going to be a market for computers that can run unapproved software. I don't see that going away.

lazide 1/28/2026||
Huh? Why should people who pay for the hardware not be able to run whatever they want? Why include them as ‘attackers’?
direwolf20 1/28/2026||
Shareholders über alles?
notepad0x90 1/28/2026||||
There is the integrity measurement architecture but it isn't very mature in my opinion. Even secureboot and module signing is a manual setup by users, it isn't supported by default, or by installers. You have to more or less manage your own certs and CA, although I did notice some laptops have debian signing keys in UEFI by default? If only the debian installer setup module signing.

But you miss a critical part - Secure Boot, as the name implies is for boot, not OS runtime. Linux I suppose considers the part after initrd load, post-boot perhaps?

I think pid-1 hash verification from the kernel is not a huge ask, as part of secure boot, and leave it to the init system to implement or not implement user-space executable/script signature enforcement. I'm sure Mr. Poettering wouldn't mind.

vbezhenar 1/28/2026||||
It is not useless. I'm using UKI, so initrd is built into the kernel binary and signed. I'm not using bootloader, so UEFI checks my kernel signature. My userspace is encrypted and key is stored in TPM, so the whole boot chain is verified.
blibble 1/27/2026||||
you can merge the initrd + kernel into one signed binary pretty easily with systemd-boot

add luks root, then it's not that bad

Nextgrid 1/28/2026||
Yes, you can. I really don't want to be in the business of building OSes. If these guys make it so that getting reasonable boot security is a simple toggle, I'd be grateful.
NekkoDroid 1/28/2026||
On arch it isn't particularly difficult to create UKIs other than changing like 2 lines in `mkinitcpio`'s config.

Then there is also `ukify` by systemd which also can create UKIs, which then can be installed with `kernel-install`, but that is a bit more work to set up than for `mkinitcpio`.

The main part is the signing, which I usually have `sbctl` handle.

Gigachad 1/28/2026||||
Isn’t the idea that the kernel will verify anything beneath it. Secure boot verifies the kernel and then it’s in the hands of the kernel to keep verifying or not.
Nextgrid 1/28/2026||
> the kernel will verify anything beneath it

Yes that's the case - my argument is that Linux currently doesn't have anything standardized to do that.

Your best bet for now is to use a read-only dm-verity-protected volume as the root partition, encode its hash in the initrd, combine kernel + initrd into a UKI and sign that.

I would welcome a standardized approach.

jcgl 1/28/2026||
Standardizing that approach is one thing that the systemd project has been working on. They've built various components to help with that, including writing specifications (via the UAPI group) on how that should all fit together.

ParticleOS[0] gives a look at how this can all fit together, in case you want to see some of it in action.

[0] https://github.com/systemd/particleos

digiown 1/27/2026||||
A basic setup to make use of secure boot is SB+TPM+LUKS. Unfortunately I don't know of any distro that offers this in a particularly robust way.

Code signature verification is an interesting idea, but I'm not sure how it could be achieved. Have distro maintainers sign the code?

s_ting765 1/28/2026|||
Opensuse have been working on making secure boot/TPM FDE unlock easy to use for a while now. https://news.opensuse.org/2025/11/13/tw-grub2-bls/
bboozzoo 1/28/2026|||
> A basic setup to make use of secure boot is SB+TPM+LUKS. Unfortunately I don't know of any distro that offers this in a particularly robust way.

Have a look at Ubuntu Core 24 and later. Though it's not exactly a desktop system, but rathe oriented towards embedded/appliances. Recent Ubuntu desktop (from 25.04 IIRC) started getting the same mechanism gradually integrated in each release. Upcoming Ubuntu 26.04 is expected to support TPM backed FDE. Worth a try if you can set up a VM with a software TPM.

Keep in mind though, there's been plenty of issues with various EFI firmwares, especially on the appliances side. EFI specs are apparently treated as guidelines rather than actual specification by whoever ends up implementing the firmware.

ahepp 1/27/2026||||
Isn't it possible to force TPM measurements for stuff like the kernel command line or initramfs hash to match in order to decrypt the rootfs? Or make things simpler with UKIs?

Most of the firmwares I've used lately seem to allow adding custom secureboot keys.

direwolf20 1/27/2026||
Fine as long as it's managed by the user. A good check is who installed the keys. A user–freedom–respecting secureboot must have user–generated keys.
okanat 1/28/2026|||
There is some level of misinformation in your post. Both Windows and Linux check driver signatures. Once you boot Linux in UEFI Secure Boot, you cannot use unsigned drivers because the kernel can detect and activate the lockdown mode. You have to sign all of the drivers within the same PKI of your UEFI key.
Nextgrid 1/28/2026||
> you cannot use unsigned drivers because the kernel can detect and activate the lockdown mode

You don't need to load a driver; you can just replace a binary that's going to be executed as root as part of system boot. This is something a hypothetical code signature verification would detect and prevent.

Failing kernel-level code signature enforcement, the next best step is to have a dm-verity volume as your root partition, with the dm-verity hashes in the initrd within the UKI, and that UKI being signed with secure boot.

This would theoretically allow you to recover from even root-level compromise by just rebooting the machine (assuming the secure boot signing keys weren't on said machine itself).

9NRtKyP4 1/27/2026||
Remote attestation is another technology that is not inherently restrictive of software freedom. But here are some examples of technologies that have already restricted freedom due to oligopoly combined with network effects:

* smartphone device integrity checks (SafetyNet / Play Integrity / Apple DeviceCheck)

* HDMI/HDCP

* streaming DRM (Widevine / FairPlay)

* Secure Boot (vendor-keyed deployments)

* printers w/ signed/chipped cartridges (consumables auth)

* proprietary file formats + network effects (office docs, messaging)

cwillu 1/27/2026||
It very clearly is restrictive of software freedom. I've never suffered from an evil maid breaking into my house to access my computer, but I've _very_ frequently suffered from corporations trying to prevent me from doing what I wish with my own things. We need to push back on this notion that this sort of thing was _ever_ for the end-user's benefit, because it's not.
Gigachad 1/28/2026|||
Remote attestation seems more useful for server hosts to let VPS users verify the server hasn’t been tampered with.
UltraSane 1/28/2026||||
YOU can use remote attestation to verify a remote server you are paying for hasn't been tampered with.
direwolf20 1/28/2026||
This happens much less frequently than the manufacturer of "my" computing device verifies that I haven't tampered with it. On net, it's a wholesale destruction of user freedom.
UltraSane 1/29/2026||
"it's a wholesale destruction of user freedom." This is ridiculously hyperbolic language for what are basically fancy digital signatures. There is nothing stopping you from using two different systems, one that passes attestation and one that doesn't.
avadodin 1/27/2026|||
To play devil's advocate, I don't think most people would be fine with their car ramming into a military base after an unfriendly firmware update.

However, I agree that the risks to individuals and their freedoms stemming from these technologies outweigh the benefits in most cases.

rpcope1 1/28/2026|||
The better question then is why the actual f** can an OTA firmware update touch anything in the steering or powertrain of the car, or why do I even need a computer that's connected to anything, and one which does more than just make sure I get the right amount of fuel and spark, or why on earth do people tolerate this sort of insanity.
hsbauauvhabzb 1/28/2026|||
If a malicious update can be pushed because of some failure in the signature verification checks (which already exist), what makes you think the threat actor won’t have access to signing keys?

This is not what attestation is even seeking to solve.

avadodin 1/28/2026||
Firmware upgrades don't need to use the same protocols. Without secure boot any applet can take a security hole escalate and persist until you take a trip to a zone of interest. With secure-boot+attestation, the vendors can choose not to let you download the latest map data, report you to the authorities, etc.

Why do people take DA as "Hail Satan" anyways.

cwillu 1/29/2026|||
“With secure-boot+attestation, the vendors can choose not to let you download the latest map data, report you to the authorities”

As far as I'm concerned, you just conceded the argument.

hsbauauvhabzb 1/29/2026|||
If this was about stopping malware, it wouldn’t be targeting Linux endpoints.
myaccountonhn 1/27/2026|||
It's interesting there's no remote attestation the other way around, making sure the server is not doing something to your data that you didn't approve of.
minitech 1/27/2026|||
There is. Signal uses it, for example. https://signal.org/blog/building-faster-oram/

For another example, IntegriCloud: https://secure.integricloud.com/

tryauuum 1/27/2026|||
confidential computing?
9NRtKyP4 1/27/2026|||
The authors clearly don’t intend this to happen but that doesn’t matter. Someone else will do it. Maybe this can be stopped with licensing as we tried to stop the SaaS loophole with GPLv3?
digiown 1/27/2026|||
I am quite conflicted here. On one hand I understand the need for it (offsite colo servers is the best example). Basic level of evil maid resistance is also a nice to have on personal machines. On the other hand we have all the things you listed.

I personally don't think this product matters all that much for now. These types of tech is not oppressive by itself, only when it is being demanded by an adversary. The ability of the adversary to demand it is a function of how widespread the capability is, and there aren't going to be enough Linux clients for this to start infringing on the rights of the general public just yet.

A bigger concern is all the efforts aimed at imposing integrity checks on platforms like the Web. That will eventually force users to make a choice between being denied essential services and accepting these demands.

I also think AI would substantially curtail the effect of many of these anti-user efforts. For example a bot can be programmed to automate using a secure phone and controlled from a user-controlled device, cheat in games, etc.

yencabulator 1/27/2026||
> On one hand I understand the need for it (offsite colo servers is the best example).

Great example of proving something to your own organization. Mullvad is probably the most trusted VPN provider and they do this! But this is not a power that should be exposed to regular applications, or we end up with a dystopian future of you are not allowed to use your own computer.

trelane 1/28/2026|||
On the other side, Mulvad is looking at remote attestation so that the users can verify their servers: https://news.ycombinator.com/item?id=29903695
Foxboron 1/27/2026||
> * Secure Boot (vendor-keyed deployments)

I wish this myth would die at this point.

Secure Boot allows you to enroll your own keys. This is part of the spec, and there are no shipped firmwares that prevents you from going through this process.

LooseMarmoset 1/27/2026|||
Android lets you put your own signed keys in on certain phones. For now.

The banking apps still won't trust them, though.

To add a quote from Lennart himself:

"The OS configuration and state (i.e. /etc/ and /var/) must be encrypted, and authenticated before they are used. The encryption key should be bound to the TPM device; i.e system data should be locked to a security concept belonging to the system, not the user."

Your system will not belong to you anymore. Just as it is with Android.

tadfisher 1/28/2026|||
Banks do this because they have made their own requirement that the mobile device is a trust root that can authenticate the user. There are better, limited-purpose devices that can do this, but they are not popular/ubiquitous like smartphones, so here we are.

The oppressive part of this scheme is that Google's integrity check only passes for _their_ keys, which form a chain of trust through the TEE/TPM, through the bootloader and finally through the system image. Crucially, the only part banks should care about should just be the TEE and some secure storage, but Google provides an easy attestation scheme only for the entire hardware/software environment and not just the secure hardware bit that already lives in your phone and can't be phished.

It would be freaking cool if someone could turn your TPM into a Yubikey and have it be useful for you and your bank without having to verify the entire system firmware, bootloader and operating system.

account42 1/28/2026||
Banks do this because they can. If most consumer devices did not support the tech they would not be able to.
charcircuit 1/28/2026|||
Then work with the bank to prove the signer is trustworthy.
yjftsjthsd-h 1/27/2026||||
> This is part of the spec, and there are no shipped firmwares that prevents you from going through this process.

Microsoft required that users be able to enroll their own keys on x86. On ARM, they used to mandate that users could not enroll their own keys. That they later changed this does not erase the past. Also, I've anecdotally heard claims of buggy implementations that do in fact prevent users from changing secure boot settings.

teddyh 1/28/2026||
“buggy”
yjftsjthsd-h 1/28/2026||
Don't get me wrong, I'm happy to attribute a lot of malice to Microsoft, but in this case I really do believe that it was incompetence. Everything I've ever read about 90%+ of hardware vendors is that shipping hilariously broken firmware is an everyday occurrence for them.

(This is separate from Windows RT, of course)

NekkoDroid 1/28/2026||
This reminds me of when I enrolled only my own keys into a gigabyte AB350 and I just soft-bricked it because presumably some opt-rom required MS keys.

I exchanged it for an Asrock board and there I can enable secure boot without MS keys and still have it boot cuz they actually let you choose what level of signing the opt-rom needs when you enable secure boot.

What I want to say with this is that it requires the company to actually care to provide a good experience.

digiown 1/27/2026||||
> Secure Boot allows you to enroll your own keys

UEFI secure boot on PCs, yes for the most part. A lot of mobile platforms just never supported this. It's not a myth.

Foxboron 1/27/2026||
Phones don't implement UEFI.
seba_dos1 1/27/2026||
Most don't, but they're usually equivalently locked down nevertheless.
Foxboron 1/27/2026||
UEFI on x86_64 and phones are not comparable when it comes to being "locked down".
seba_dos1 1/27/2026||
Are you sure?

Note that the comment you replied to does not even mention phones. Locked down Secure Boot on UEFI is not uncommon on mobile platforms, such as x86-64 tablets.

201984 1/27/2026||||
What about all those Windows on ARM laptops?
Brian_K_White 1/28/2026|||
I wish the myth of the spec would die at this point.

Many motherboards secure boot implimentation violates the supposed standard and does not allow you to invalidate the pre-loaded keys you don't approve of.

parrellel 1/28/2026||
Well, I can see what heinous thing is going to be ruining my day in 5 years.

Attestation, the thing we're going to be spending the next forever trying to get out of phones, now in your kernel.

fao_ 1/28/2026|
It's interesting how quickly the OSS movement went from "No, no, we just want to include companies in the Free Software Movement" to "Oh, don't worry, it's ok if companies with shareholders that are not accountable to the community have a complete monopoly on OSS, and decide what direction it takes"
ThrowawayR2 1/28/2026||
FOSS was imagined as a brotherhood of hackers, sharing code back and forth to build a utopian code commons that provided freedom to build anything. It stayed firmly in the realm of the imaginary because, in the real world, everybody wants somebody else to foot the bill or do the work. Corporations stepped up once they figured out how to profit off of FOSS and everyone else was content to free ride off of the output because it meant they didn't have to lift a finger. The people who actually do the work are naturally in the driver's seat.
fao_ 1/28/2026||
This perspective is astonishingly historically ignorant, and ignores how "Open Source Software" was a deliberate political movement to simultaneously neuter the non-company-friendly goals of FOSS while simultaneously providing a competing (and politically distracting) movement that deliberately courted companies.

The Free Software movement was successful enough that by 1997 it was garnering a lot of international community support and manpower. Eric S. Raymond published CatB in response to these successes, partly with a goal of "celebrating its successes" — sendmail, gcc, perl, and Linux were all popular projects with a huge number of collaborators by this point — and partly with a goal of reframing the Free Software movement such that it effectively neuters the political basis (i.e. the four freedoms, etc.) in a company-friendly way. It's very easy to note when reading the book, how it consistently celebrates the successes of Free Software in a company friendly way, deliberately to make it appealing to companies. Often being very explicit about its goals, e.g. "Don't give your workers good bonuses, because research shows that the better a ''hacker'' the less they care about money!".

A year later, internal memos from Microsoft leaked that showed that management were indeed scared shitless about Linux, a movement that they could neither completely Embrace, Extend, and Extinguish, nor practice Fear, Uncertainty, and Doubt on, because the community that built it were too strong, and too dedicated. Management foresaw that it was only a matter until Linux was a very strong competitor — even if that's taken 20 years, they were decently accurate in their fears, and, to be honest, part of why it's taken 30 years for Linux to catch up are deliberate actions by Microsoft wrt. introducing and adopting technologies that would stymie the Free Software movement from being able to adapt.

getcrunk 1/27/2026||
systemd solved/improved a bunch of things for linux, but now the plan seems to be to replace package management with image based whole dist a/b swaps. and to have signed unified kernel images.

this basically will remove or significantly encumber user control over their system, such that any modification will make you loose your "signed" status and ... boom! goodbye accessing the internet without an id

pottering recently works for Microsoft, they want to turn linux into an appliance just like windows, no longer a general purpose os. the transition is still far from over on windows, but look at android and how the google play services dependency/choke-hold is

im sure ill get many down votes, but despite some hyperbole this is the trajectory

tocariimaa 1/28/2026||
We warned you that systemd was just the beginning.
mariusor 1/28/2026|||
> the plan seems to be to replace package management with image based whole dist a/b swaps

The plan is probably to have that as an alternative for the niche uses where that is appropriate.

This majority of this thread seems to have slid on that slippery slope, and jumped directly to the conclusion where the attestation mechanism will be mandatory on all linux machines in the world and you won't be able to run anything without. Which even if it would be a purpose for amutable as a company, it's unfeasible to do when there's such a breadth of distributions and non corpo affiliated developers out there that would need to cooperate for that to happen.

4gotunameagain 1/28/2026||
Nobody says that you will not have alternatives. What people are saying, is that if you're using those alternatives you won't be able to watch videos online, or access your bank account.

Eventually you will not be able to block ads.

mariusor 1/28/2026||
> Nobody says that you will not have alternatives

Maybe you want to reread through this thread.

> Eventually you will not be able to block ads.

That's so far down the slippery slope and with so many other things that need to go wrong that I'm not worried and I'm willing to be the one to get "told you so" if it happens.

jcgl 1/28/2026|||
Immutable, signed systems do not intrinsically conflict with hackability. See this blog post of Lennart's[0] and systemd's ParticleOS meta-distro[1].

I do agree that these technologies can be abused. But system integrity is also a prerequisite for security; it's not like this is like Digital "Rights" Management, where it's unequivocally a bad thing that only advances evil interests. Like, Widevine should never have been made a thing in Firefox imo.

So I think what's most productive here is to build immutable, signable systems that can preserve user freedom, and then use social and political means to further guarantee those freedoms. For instance a requirement that owning a device means being able to provision your own keys. Bans on certain attestation schemes. Etc. (I empathize with anyone who would be cynical about those particular possibilities though.)

[0] https://0pointer.net/blog/fitting-everything-together.html

[1] https://github.com/systemd/particleos

dust42 1/28/2026|||
Linux is nowadays mostly sponsored by big corporations. They have different goals and different ways to do things. Probably the first 10 years Linux was driven by enthusiasts and therefore it was a lean system. Something like systemd is typical corporate output. Due it its complexity it would have died long before finding adoption. But with enterprise money this is possible. Try to develop for the combo Linux Bluetooth/Audio/dbus: the complexity drives you crazy because all this stuff was made for (and financed by) corporate needs of the automotive industry. Simplicity is never a goal in these big companies.

But then Linux wouldn't be where it is without the business side paying for the developers. There is no such thing as a free lunch...

TacticalCoder 1/28/2026||
> this basically will remove or significantly encumber user control over their system, such that any modification will make you loose your "signed" status and ... boom! goodbye accessing the internet without an id

Yeah. I'm pretty sure it requires a very specific psychological profile to decide to work on such a user-hostile project while post-fact rationalizing that it's "for good".

All I can say is I'm not surprised that Poettering is involved in such a user-hostile attack on free computing.

P.S: I don't care about the downvotes, you shouldn't either.

noisy_boy 1/28/2026||
Does this guy do anything that is user-friendly and is as per open source ethos of freedom and user control? In all this shit-show of Microsoft shoving AI down the throat of its users, I was happy to be firmly in the Linux camp for many many years. And along come these kind of people to shit on that parade too.

P.S: Upvoted you. I don't care about downvotes either.

kfreds 1/27/2026|
Exciting!

It sounds like you want to achieve system transparency, but I don't see any clear mention of reproducible builds or transparency logs anywhere.

I have followed systemd's efforts into Secure Boot and TPM use with great interest. It has become increasingly clear that you are heading in a very similar direction to these projects:

- Hal Finney's transparent server

- Keylime

- System Transparency

- Project Oak

- Apple Private Cloud Compute

- Moxie's Confer.to

I still remember Jason introducing me to Lennart at FOSDEM in 2020, and we had a short conversation about System Transparency.

I'd love to meet up at FOSDEM. Email me at fredrik@mullvad.net.

Edit: Here we are six years later, and I'm pretty sure we'll eventually replace a lot of things we built with things that the systemd community has now built. On a related note, I think you should consider using Sigsum as your transparency log. :)

Edit2: For anyone interested, here's a recent lightning talk I did that explains the concept that all project above are striving towards, and likely Amutable as well: https://www.youtube.com/watch?v=Lo0gxBWwwQE

davidstrauss 1/27/2026||
Hi, I'm David, founding product lead.

Our entire team will be at FOSDEM, and we'd be thrilled to meet more of the Mullvad team. Protecting systems like yours is core to us. We want to understand how we put the right roots of trust and observability into your hands.

Edit: I've reached out privately by email for next steps, as you requested.

kfreds 1/27/2026||
Hi David. Great! I actually wasn't planning on going due to other things, but this is worth re-arranging my schedule a bit. See you later this week. Please email me your contact details.

As I mentioned above, we've followed systemd's development in recent years with great interest, as well as that of some other projects. When I started(*) the System Transparency project it was very much a research project.

Today, almost seven years later, I think there's a great opportunity for us to reduce our maintenance burden by re-architecting on top of systemd, and some other things. That way we can focus on other things. There's still a lot of work to do on standardizing transparency building blocks, the witness ecosystem(**), and building an authentication mechanism for system transparency that weaves it all together.

I'm more than happy to share my notes with you. Best case you build exactly what we want. Then we don't have to do it. :)

*: https://mullvad.net/en/blog/system-transparency-future

**: https://witness-network.org

Phelinofist 1/27/2026||
I'm super far from an expert on this, but it NEEDS reproducible builds, right? You need to start from a known good, trusted state - otherwise you cannot trust any new system states. You also need it for updates.
kfreds 1/27/2026||
Well, it comes down to what trust assumptions you're OK with. Reproducible reduces trust in the build environment, but you still need to ensure authenticity of the source somehow. Verified boot, measured boot, repro builds, local/remote attestation, and transparency logging provide different things. Combined they form the possibility of a sort of authentication mechanism between a server and client. However, all of the concepts are useful by themselves.
More comments...