Posted by jandeboevrie 12 hours ago

SSH certificates: the better SSH experience (jpmens.net)
177 points | 72 comments
gunapologist99 6 hours ago|
Anyone tried out Userify? It creates/removes SSH pubkeys locally, so (as with a CA) no authn server needs to be online. But unlike certs, active sessions and processes are terminated when a user's access is revoked.
jamiesonbecker 6 hours ago|
We're in the process of updating the experience to this century! ;)

We've always taken the stance that crusty is better than vulnerable, but it turns out that not having a modern experience after 15 years is starting to feel like maybe we need to step up the features and shininess :)

Thom2000 8 hours ago||
Sadly, services such as GitHub don't support these, so it's mostly good for internal infrastructure.
lights0123 8 hours ago|
They do, for Enterprise customers only: https://docs.github.com/en/enterprise-cloud@latest/organizat...

They've rolled their host key one time, so there's little reason for them to use it on the host side.

jcalvinowens 7 hours ago||
You can also address TOFU to some extent using SSHFP DNS records.

OpenSSH supports checking the DNSSEC signature in the client, in theory, but it's a configure-time option and I'm not sure if distros build with it.
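For reference, the SSHFP flow is short; a sketch, with the host name and key path purely illustrative:

```shell
# On the server: emit SSHFP resource records for a host key,
# then publish them in the host's (DNSSEC-signed) DNS zone.
ssh-keygen -r host.example.com -f /etc/ssh/ssh_host_ed25519_key.pub

# On the client (~/.ssh/config): consult SSHFP records before trusting a host key.
# Host *.example.com
#     VerifyHostKeyDNS ask
```

With `VerifyHostKeyDNS ask`, ssh reports whether a matching SSHFP record was found instead of falling back to plain TOFU.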

jsiepkes 7 hours ago||
On top of that, you would need something to secure DNS, like DNSSEC or at the very least DNS over TLS or DNS over HTTPS. None of these are typically enabled by default.
jcalvinowens 6 hours ago||
Anything that uses systemd-resolved is probably doing DNSSEC validation by default. It's becoming much more common.

Additionally, as I mentioned, OpenSSH itself has support for validating the DNSSEC signature even if your local resolver doesn't. I actually don't think it can use the standard resolver for SSHFP records at all, but I'm not sure.

fc417fc802 5 hours ago||
Any idea if there's a standardized location, something like /.well-known/ssh?
moviuro 6 hours ago||
All those articles about SSH certificates fall short of explaining how the revocation list can/should be published.

Is that yet another problem that I need to solve with syncthing?

https://man.openbsd.org/ssh-keygen.1#KEY_REVOCATION_LISTS
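For what it's worth, building and querying a KRL is a one-liner each; a sketch with illustrative file names:

```shell
# Build a KRL that revokes a given key (or certificate), then extend it later:
ssh-keygen -k -f revoked_keys compromised_key.pub      # create the KRL
ssh-keygen -k -u -f revoked_keys another_bad_key.pub   # update it in place

# Every sshd then needs to point at it (sshd_config):
#   RevokedKeys /etc/ssh/revoked_keys
# ...which is exactly the distribution problem raised above.

# Check whether a key appears in the KRL:
ssh-keygen -Q -f revoked_keys compromised_key.pub
```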

blipvert 6 hours ago|
If you generate short-lived certificates via an automated process/service, then you don't really need to manage a revocation list, as they will have expired in short order.
jamiesonbecker 6 hours ago||
But then you can't log in if your box goes offline for any reason.
blipvert 5 hours ago||
Hmm. For user certs you can have the service sign them for, say, an hour; so long as you can ssh to your server in that time, there's no need for any other interaction.

Sure you need your signing service to be reasonably available, but that’s easily accomplished.

Maybe I misunderstand?
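The signing step itself is tiny; a sketch, with all names and paths illustrative:

```shell
# The CA signs the user's public key, valid for one hour, principal "alice":
ssh-keygen -s user_ca -I alice@example.com -n alice -V +1h id_ed25519.pub
# -> writes id_ed25519-cert.pub alongside the key

# Servers trust the CA rather than individual keys (sshd_config):
#   TrustedUserCAKeys /etc/ssh/user_ca.pub

# Inspect the cert's principals and validity window:
ssh-keygen -L -f id_ed25519-cert.pub
```

After the hour, the cert is simply invalid; no revocation list needs to be shipped anywhere.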

jamiesonbecker 5 hours ago|||
That works for authn in the happy path: short-lived cert, grab it, connect, done.

Except for everything around that:

* user lifecycle (create/remove/rename accounts)

* authz (who gets sudo, what groups, per-host differences)

* cleanup (what happens when someone leaves)

* visibility (what state is this box actually in right now?)

SSH certs don’t really touch any of that. They answer "can this key log in right now?", not "what should exist on this machine?"

So in practice, something else ends up managing users, groups, sudoers, home dirs, etc. Now there are two systems that both have to be correct.

On the availability point: "reasonably available" is doing a lot of work ;)

Even with 1-hour certs:

* new sessions depend on the signer

* fleet-wide issues hit everything at once

* incident response gets awkward if the signer is part of the blast radius

The failure mode shifts from "a few boxes don't work" to "nobody can get in anywhere".

The pull model just leans the other way:

* nodes converge to desired state

* access continues even if control plane hiccups

* authn and authz live together on the box

Both models can work - it’s more about which failure mode is tolerable to you.

blipvert 5 hours ago||
Well, yes, pick your poison.

But for just getting access to role accounts then I find it a lot nicer than distributing public keys around.

And for everything else, a periodic Ansible :-)

gnufx 3 hours ago||
Public keys (for OpenSSH) can be in DNS (VerifyHostKeyDNS) or in, say, LDAP via KnownHostsCommand and AuthorizedKeysCommand.
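A sketch of what that looks like in configuration; the lookup scripts named here are hypothetical:

```shell
# sshd_config: pull a user's authorized keys from an external store (e.g. LDAP)
#   AuthorizedKeysCommand /usr/local/bin/ldap-authorized-keys %u
#   AuthorizedKeysCommandUser nobody

# client-side ssh_config: resolve host keys externally instead of known_hosts
#   VerifyHostKeyDNS yes
#   KnownHostsCommand /usr/local/bin/ldap-known-hosts %H
```

The commands just print authorized_keys/known_hosts lines on stdout; OpenSSH does the rest.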
moviuro 5 hours ago|||
That sounds like a lot of extra steps. How do I validate the authenticity of a signing request? Should my signing machine be able to challenge the requester? (This means that the CA key is on a machine with network access!!)

Replacing the distribution of a revocation list with short-lived certificates just creates other problems that are not easier to solve. (Also, 1h is bonkers; even Let's Encrypt doesn't do it.)

toast0 4 hours ago||
1h is bonkers for certs in https, but it's not unreasonable for authorized user certs, if your issuance path is available enough.

IMHO, if you're pushing revocation lists at low latency, you could also push authorized keys updates at low latency.

sqbic 5 hours ago||
I've had very good experiences with SSH Communications Security's (the guys who invented SSH) PrivX product for managing secure remote access, including SSH certificates and cert-based Windows authentication. It supports other kinds of remote targets too, via web UI or with native clients. Great product.
TZubiri 4 hours ago||
>then I don’t need to type the target user’s password; instead I enter the key’s passphrase, a hopefully much more complicated combination of words, to unlock the private key.

This sentence is a bit of a red flag: it looks like the author is making a (subtle) mistake in the category of too much security, or at least misjudging the amount of security (objectively measurable entropy) needed. This is of course a less consequential error than too little entropy or too few security measures. Still, anyone who wants to be a cybersecurity professional, especially one with influence, must know the right amount needed, because our resources are limited. Each additional bit of entropy and each extra security step costs time for the admin who implements it and for the users who have to follow it, and that can even impact security itself by fatiguing users into circumventing measures or ignoring alerts.

On to specifically what's wrong:

Either a key file or a password can be used to log in to a server, or to authenticate to any service in general. Besides the technical implementation, the main difference is whether the secret is stored on the device or in the user's brain. Neither is more correct than the other; there are tradeoffs. A key file can hold more bits and is more ergonomic; a password is not stored on the device and so cannot be compromised that way.

That said, a 2FA approach, in whatever format, is (generally speaking) safer than either method alone, in that both secrets are needed to gain access. In this scenario one needs both the file and the password to authenticate; even if the password is only 4 digits long, that increases the security of the system compared to no password. An attacker would have to set up a brute-force attempt along with a way to verify that decryption was successful. If local decryption confirmation is not possible, such a brute-force attack would require submitting erroneous logins to the server, potentially activating tripwires or alerting a monitoring admin.
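Back-of-the-envelope, a 4-digit PIN multiplies the attacker's search space by 10^4, i.e. roughly 13 bits on top of the key file:

```shell
# Entropy (in bits) added by an n-digit numeric PIN: log2(10^n)
python3 -c 'import math; print(round(math.log2(10**4), 1))'   # 4 digits -> 13.3 bits
```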

There's nothing special about the second factor being equal or equivalent in entropy to the first, and there's certainly no requirement that a password have more entropy when it's a second factor; in fact, it's the other way around.

tl;dr Consider each security mechanism in the wider context rather than in isolation, and you will see security fatigue go down without compromising security.
