Posted by bishwasbh 8 hours ago
So sensitive doesn’t mean encrypted. It means the UI doesn’t show the dev what value’s stored there after they’ve updated it. Not sensitive means it’s still visible. And again, I presume this is only a UI thing, and both kinds are stored encrypted in the backend.
I don’t work for Vercel, but I’ve used them a bit. I’m sure there are valid reasons to dislike them, but this specific bit looks like a strawman.
Yeah, I'm very confused. It's not possible to encrypt env vars that the program needs; even if it's encrypted at rest, it needs to be decrypted anyway before starting the program. Env vars are injected as plain text. This is just how this works, nothing to do with Vercel.
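The point is easy to demonstrate in a few lines: however a platform stores a variable at rest, the process it launches receives it as plaintext. A minimal Python sketch (the variable name and value are made up):

```python
import os
import subprocess
import sys

# A hypothetical secret, injected the way any deploy platform ultimately
# must inject it: as a plain environment variable on the launched process.
env = {**os.environ, "API_KEY": "super-secret"}

# The child sees the value in plaintext; no decryption step is involved.
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['API_KEY'])"],
    env=env, capture_output=True, text=True,
)
print(out.stdout.strip())  # → super-secret
```

Any "encryption" the dashboard talks about has to end before this point, because the program itself only understands the plaintext value.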
This situation could someday improve with fully homomorphic encryption (the server operating on encrypted data without ever decrypting it), but that would have very high overhead for the entire program. It's not realistic (yet).
Oops - you said the opposite of what I read, my mistake.
PoC or GTFO.
I think you'll find it's a bit harder to do than you expect.
It begs the question: why was there no 2FA? And why did they have such broad access to begin with?
If this is not the case, the only other option I can come up with is API credentials stored in Google Workspace? It's possible, but odd.
For reference, look at how Disney got hacked. One employee downloaded compromised software on a personal computer. One thing led to another and boom. IT in many companies are much more incompetent than you think. I have seen that first hand.
One for which the Context.ai employee needs to have their arse booted up and down the car park for.
You can blame individuals, but security is a property of the system.
Heck, not giving the person Admin privileges would have sufficed to prevent this. Or better hiring, screening out people who install Roblox cheats on work devices...
There is no excuse and no fine line here. Even outside them boasting about SOC 2 Type II, this would be embarrassing for an SME not in the tech sector.
Do you want to let any applicant be screened by the security team?
If that's about my hiring comment, it was meant a bit facetiously, though I will point out this line in their "compliance" report by "auditor" Delve:
> The organization carries out background and/or reference checks on all new employees and contractors prior to joining in accordance with relevant laws, regulations and ethics. Management utilizes a pre-hire checklist to ensure the hiring manager has assessed the qualification of candidates to confirm they can perform the necessary job requirements.
Maybe those pre-hire checklists should include a question like "Are you a massive idiot, who'd install a game on their work computer, then on top of that be the type of idiot who likes to cheat, then on top of that be the type of idiot to install cheats on your work computer?", maybe that'd prevent this in the future. Or again, just don't give everyone Admin privileges...
If you spin up an EC2 instance with an ftp server and check the "Encrypt my EBS volume" checkbox, all those files are 'encrypted at rest', but if your ftp password is 'admin/admin', your files will be exposed in plaintext quite quickly.
Vercel's backend is of course able to decrypt them too (or else it couldn't run your app for you), and so the attacker was able to view them, and presumably some other control on the backend made it so the sensitive ones can end up in your app, but can't be seen in whatever employee-only interface the attacker was viewing.
What's best practice for handling env vars? How do people handle them "securely" without it just being security theater? What tools and workflows are people using?
However, I do feel now like my sensitive things are better off deployed on a VPS, where someone would need an ssh exploit to come at me.
Notice how their tutorial says "run 'dotenvx run -- yourapp'". If you did 'dotenvx run -- env', all your secrets would be printed right there in plaintext, at runtime, since they're just encrypted at rest.
The equivalent in vercel would be encrypted in the database (the encrypted '.env' file), with a decryption key in the backend (the '.env.keys' file by default in dotenvx) used to show them in the frontend and decrypt them for running apps.
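As a toy illustration of that split (this is NOT dotenvx's actual crypto — the XOR "cipher" and values below are stand-ins): ciphertext sits at rest, the key lives in a separate place, and plaintext only materializes in the child's environment at run time.

```python
import os
import subprocess
import sys
from base64 import b64decode, b64encode

def xor(data: bytes, key: bytes) -> bytes:
    # Toy reversible "cipher" purely for illustration; real tools use AES etc.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"demo-key"                              # stands in for .env.keys
stored = b64encode(xor(b"super-secret", key))  # stands in for the .env entry

# Decrypt only at launch time and hand the plaintext to the child process.
plain = xor(b64decode(stored), key).decode()
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['API_KEY'])"],
    env={**os.environ, "API_KEY": plain}, capture_output=True, text=True,
)
print(out.stdout.strip())  # → super-secret
```

Whoever holds the key file (or compromises the box that does) gets the plaintext, which is exactly the point being made about Vercel's backend.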
The point of encryption is often about which other software or hardware attacks are minimized or eliminated.
However, if someone figures out access to a running system, there's really no way to both let an app run and keep everything encrypted. Partial measures are certainly possible, like the way KeePass encrypts items in memory, but if an attacker has root on a server, they can just wait for the value to be accessed, if not outright find the key that encrypted it.
This is to say, 99.9% of these apps and platforms aren't secure against this type of low-level intrusion.
Various certifications require this, I guess because they were written before hyperscalers, when the assumed attack vector was someone literally stealing a hard drive.
A running machine is not “at rest”, just like you can read files on your encrypted Mac HDD, the running program has decrypted access to the hard drive.
You can, theoretically, pick through a system memory dump and try to mine the credentials out of the credential server's heap, but that exploit is exponentially more difficult than a simple `cat /proc/1234/environ`.
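That `/proc` read really is that simple. A Python sketch (Linux only; the variable name is made up) that launches a child with a secret in its environment and then reads it straight back out of `/proc/<pid>/environ`, as anyone with sufficient filesystem access could:

```python
import os
import subprocess
import sys
import time

# Start a child process carrying a hypothetical secret in its environment.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(10)"],
    env={**os.environ, "API_KEY": "super-secret"},
)
time.sleep(0.5)  # give the child a moment to finish exec'ing

# /proc/<pid>/environ exposes the initial environment as NUL-separated
# plaintext key=value pairs.
with open(f"/proc/{child.pid}/environ", "rb") as f:
    entries = dict(
        e.split(b"=", 1) for e in f.read().split(b"\0") if b"=" in e
    )
print(entries[b"API_KEY"].decode())  # → super-secret
child.terminate()
```

No heap spelunking required; the kernel hands over the environment of any process you have permission to inspect (root can inspect all of them).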
For non-sensitive environment variables, they also show you the value in the dashboard so you can check and edit them later.
Something like 'NODE_ENV=production' vs 'NODE_ENV=development' is probably a value the user wants to see, so that's another argument for letting the backend decrypt and display those values, even ignoring the "running your app" part.
You're welcome to add an input that goes straight to '/dev/null' if you want, but it's not exactly a useful feature.
Piping to /dev/null is of course pointless.
What you really want is the /dev/null as a Service Enterprise plan for $500/month with its High Availability devnull Cluster ;)
(And modern Linux is unusable without root access, thanks to Docker and other fast-and-loose approaches.)
Because I never do, unless I'm down in the depths of /var/lib/docker doing stuff I shouldn't.
And I thought it was bad when my son got compromised by a Roblox cheat, but they only grabbed his Gamepass cookies and bought 4 Minecraft licenses, which MS quickly refunded...
Feels like the employee pulled a LastPass Plex move.
It’s not a competitive platform like, say, WoW or Overwatch; nobody is really there to win, and there are zero stakes either way.
The thing that concerns me is that even at a site like HN, where a lot of people are very familiar with LLMs, it seems to be passing.
I hate to think this will become the norm, but it's not the first HN-linked post that's gotten a lot of earnest engagement despite being AI generated (or partly AI generated).
I'm very comfortable with AI generated code, if the humans involved are doing due diligence, but I really dislike the idea of LLM generated prose taking over more and more of the front page.
So I believe the author has exposure to the issue and interest in understanding it, that’s more than AI alone has got.
If I don't see asterisks, I'm not hitting save on the field with a secret in it. Maybe they were setting them programmatically? They should definitely still be looking to pass some kind of a secret flag, though. This is a weird problem for a company like Vercel to have.
(Of course there are tons of other red flags not looked at in the article, e.g. how does an employee's machine get access to production systems, and from there to customers connected via OAuth, and how does the attacker get to env vars from a Google Workspace account)