
Posted by queenelvis 19 hours ago

The Vercel breach: OAuth attack exposes risk in platform environment variables (www.trendmicro.com)
Vercel April 2026 security incident - https://news.ycombinator.com/item?id=47824463 - April 2026 (485 comments)

A Roblox cheat and one AI tool brought down Vercel's platform - https://news.ycombinator.com/item?id=47844431 - April 2026 (145 comments)

325 points | 109 comments
westont5 18 hours ago|
I'm not sure I've seen it mentioned yet that when Vercel rolled out their environment variable UI, there was no "sensitive" option https://github.com/vercel/vercel/discussions/4558#discussion.... It took ~2 years or more before it was introduced https://vercel.com/changelog/sensitive-environment-variables...
nopointttt 15 hours ago||
A sensitive flag at the UI layer doesn't actually change the runtime. Once it's in process.env during a build, any dep that decides to grep it can read it. The real problem isn't a missing checkbox, it's that we still stuff every secret into one env bag and hand the build tools the whole bag. Cloudflare's scoped bindings and Fly already split it up; other platforms are just slower.
_pdp_ 18 hours ago|||
Sensitive does not mean it is not readable. It is simply not exposed through the UI. It can easily be leaked if you return a few too many props from your action functions or routes.

The only way to defend against these types of issues is to encrypt your environment with your own keys, with secrets possibly baked into source as there are no other facilities to separate them. An attacker would need to not only read the environments but also download the compiled functions and find the decryption keys.

It is not ideal but it could work as a workaround.

harikb 18 hours ago|||
> with secrets possibly baked into source

Please don't suggest this. The right way is to have the creds fetched from a vault, which is programmed to release the creds auth-free to your VM (with machine-level identity managed by the parent platform).

This is how Google Secret Manager or AWS Secrets Manager work.

jcgl 17 hours ago|||
> The right way is to have the creds fetched from a vault, which is programmed to release the creds auth-free to your VM

Or have whatever deployment tool that currently populates the env vars instead use the same information to populate files on the filesystem (like mounting creds).
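
A minimal sketch of the mounted-credentials approach jcgl describes (the mount path and secret name here are made up; Kubernetes secret volumes and similar platform mounts work along these lines). The point is that the secret lives in a file the process opens on demand, so tooling that dumps the environment never sees it:

```python
import os
import tempfile

def read_secret(name: str, mount_dir: str) -> str:
    """Read a secret from a file mounted by the platform rather than
    from process.env, so build steps that grep the environment can't
    exfiltrate it."""
    path = os.path.join(mount_dir, name)
    with open(path, encoding="utf-8") as f:
        return f.read().strip()

# Demo with a temporary directory standing in for the mount point.
with tempfile.TemporaryDirectory() as mount:
    with open(os.path.join(mount, "db-password"), "w") as f:
        f.write("s3cr3t\n")
    print(read_secret("db-password", mount))  # → s3cr3t
```
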

chatmasta 16 hours ago||||
Next.js renders configuration that’s shared by client and server into a JSON blob in the HTML page. These config variables often come from environment variables. It’s a very common mistake for people to not realize this, and accidentally put what should be a server-only secret into this config. I’ve seen API secrets in HTML source code because of this. The client app doesn’t even use it, but it’s part of the next config so it renders into the page.
socalgal2 16 hours ago|||
IIRC, React had this issue, so they required env vars visible in React to be prefixed with REACT_APP_. The hope being that SECRET is not prefixed and so is not available. Of course it requires you to know why they are prefixed and not make REACT_APP_SECRET.
whh 16 hours ago|||
That's essentially what NEXT_PUBLIC_ is for... but serializing process.env is a new one for me.
chatmasta 15 hours ago||
They don’t serialize process.env, but devs will take config values from environment variables. Obviously you’re not supposed to do this but it’s a footgun.
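
The prefix convention the subthread describes can be illustrated with a toy filter (NEXT_PUBLIC_ is the real Next.js prefix; the config values and function are invented for illustration). Only explicitly public-prefixed variables survive into the client-visible JSON blob:

```python
import json

PUBLIC_PREFIX = "NEXT_PUBLIC_"

def client_safe_config(env: dict) -> str:
    """Serialize only explicitly public variables into the JSON blob
    that gets rendered into the HTML page; everything else stays
    server-side."""
    public = {k: v for k, v in env.items() if k.startswith(PUBLIC_PREFIX)}
    return json.dumps(public, sort_keys=True)

env = {
    "NEXT_PUBLIC_API_URL": "https://api.example.com",
    "DATABASE_URL": "postgres://user:secret@host/db",  # must never reach the client
}
print(client_safe_config(env))  # → {"NEXT_PUBLIC_API_URL": "https://api.example.com"}
```

The footgun chatmasta describes is the inverse: a dev copies a server-only value into the shared config object by hand, bypassing any such filter.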
_pdp_ 17 hours ago||||
I was referring to Vercel. Other cloud environments have much better mechanisms for securing secrets.
SoftTalker 17 hours ago||||
This is just another layer of indirection (which isn't bad; it adds to the difficulty of executing a breach). The fundamental problem with encrypted secrets is that at some point you need to access and decrypt them.
tetha 13 hours ago|||
Lifetime is the underlying issue.

For example, it is possible to create a vault lease for exactly one CI build and tie the lifetime of secrets the CI build needs to the lifetime of this build. Practically, this would mean that e.g. a token, some oauth client-id/client-secret or a username/password credential to publish an artifact is only valid while the build runs plus a few seconds. Once the build is done, it's invalidated and deleted, so exfiltration is close to meaningless.

There are two things to note about this though:

This means the secret management has to have access to powerful secrets, which are capable of generating other secrets. So technically we are just moving goal posts from one level to another. That is fine usually though - I have 5 vault clusters to secure, and 5 different CI builds every 10 minutes or so, or couple thousand application instances in prod. I can pay more attention to the vault clusters.

But this is also not easy to implement. It needs a vault cluster, dynamic PostgreSQL users take years to get right, we are discovering every month how terrible applications can be at handling short-lived certificates (and some even regress; Grafana seems to have with PostgreSQL client certs in v11/v12), and we've found quite a few applications whose authors never thought certs with less than a year of lifetime even exist. Oh, and if your application is a single-instance monolith, restarting to reload new short-lived DB certs is also terrible.

Automated, aggressive secret management and revocation is imo a huge obstacle to many secret exfiltration attacks, but it is hard to do and a lot of software resists it very heavily on many layers.
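
The build-scoped lease idea above can be sketched as a toy in-memory model (this is not Vault's actual API; real leases are minted and revoked server-side by the secrets engine):

```python
import secrets
import time

class BuildLease:
    """Credential whose lifetime is tied to a single CI build: minted
    when the build starts, revoked the moment it finishes, so an
    exfiltrated copy goes stale almost immediately."""

    def __init__(self, ttl_seconds: float):
        self.token = secrets.token_hex(16)
        self.expires_at = time.monotonic() + ttl_seconds
        self.revoked = False

    def is_valid(self) -> bool:
        return not self.revoked and time.monotonic() < self.expires_at

    def revoke(self) -> None:
        self.revoked = True

lease = BuildLease(ttl_seconds=30.0)
assert lease.is_valid()      # usable while the build runs
lease.revoke()               # build finished: invalidate immediately
assert not lease.is_valid()  # a stolen copy is now useless
```
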

zbentley 14 hours ago||||
I'm not sure that's necessarily a "problem", though it is fundamental to secrets. We wouldn't say that it's a fundamental problem that doors on houses need a key--that's what the key is for--the problem is if the key isn't kept secure from unauthorized actors.

Like, sure, you can go HAM here and use network proxy services to do secret decryption, and only talk from the app to those proxies via short-lived tokens; that's arguably a qualitative shift from app-uses-secret-directly, and it has some real benefits (and costs, namely significant complexity/fragility).

Instead, my favored option is to scope secret use to network locations. If, for example, a given NPM token can only be used for API calls issued from the public IP endpoint of the user's infrastructure, that's a significant added layer of security. People don't agree on whether or not this counts as a "token ACL", but it's certainly ACL-like in its functionality--just controlled by location, rather than identity.

This approach can also be adopted gradually and with less added fragility than the proxy-all-the-things approach: token holders can initially allowlist broad or shared network location ranges, and narrow allowed access sources over time as their networks are improved.

Of course, that's a fantasy. API providers would have to support network-scoped API access credentials, and almost none of them do.
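
A sketch of what a network-scoped credential check could look like on the provider side (the addresses below are RFC 5737 documentation ranges and the check is hypothetical, since as noted almost no API providers support this today):

```python
import ipaddress

def token_allowed(source_ip: str, allowed_cidrs: list) -> bool:
    """Reject API calls that present a valid token from outside the
    token holder's declared network locations."""
    ip = ipaddress.ip_address(source_ip)
    return any(ip in ipaddress.ip_network(cidr) for cidr in allowed_cidrs)

# Token scoped to the org's egress range.
allowed = ["203.0.113.0/24"]
print(token_allowed("203.0.113.7", allowed))   # → True: call from the org's NAT
print(token_allowed("198.51.100.9", allowed))  # → False: stolen token, wrong network
```

Gradual adoption falls out naturally: start with a broad CIDR, then narrow the list as the network improves.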

niyikiza 12 hours ago||
Speaking of fantasies... another approach would be holder binding: DPoP (RFC 9449) has been stable for a couple of years, and AWS SigV4 does it too. The key holder proves control at call time, so a captured token without the key is useless.
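
A simplified illustration of holder binding. Real DPoP (RFC 9449) uses per-request JWTs signed with an asymmetric key plus nonces and timestamps; this HMAC sketch only shows the core idea that a captured token without the key can't mint a proof for a new request:

```python
import hashlib
import hmac

def sign_request(key: bytes, method: str, url: str) -> str:
    """Bind a request to a key only the legitimate holder has; the
    server recomputes the same MAC to verify."""
    msg = f"{method} {url}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key: bytes, method: str, url: str, proof: str) -> bool:
    return hmac.compare_digest(sign_request(key, method, url), proof)

key = b"holder-private-key-material"
proof = sign_request(key, "GET", "https://api.example.com/v1/envs")
assert verify(key, "GET", "https://api.example.com/v1/envs", proof)
# An attacker who captured the proof but not the key cannot reuse it
# for a different request:
assert not verify(key, "POST", "https://api.example.com/v1/envs", proof)
```
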
socketcluster 13 hours ago||||
Yep. Then you run into the issue of where to store the secret encryption key.

Security researchers always need to give an answer whenever there's a security incident and the answer can never be "too much centralization risk" even when that is the only reasonable answer. You can't remove centralization risk.

IMO, the future is: every major centralized platform will be insecure in perpetuity and nothing can be done about it.

SAI_Peregrinus 17 hours ago||||
HSMs & similar can at least time-limit access to secrets to the period where an attacker can make requests to the HSM.
recursivegirth 16 hours ago||
I think the problem is the way we are traditionally using these "secrets" services. The requesting process/machine should NEVER see the OAuth client secret. The short-lived session token should be the only piece of data the server/client are ever privy to.

The service that encrypts the data should be the ONLY service that holds the private key to decrypt, and therefore the only service that can process the decrypted data.

oasisbob 16 hours ago||
The service wouldn't have access to the refresh token? How does authentication with the client-secret-holding intermediary work?

It's easy to see how this would work with sufficiently sophisticated clients in some use-cases, say via a vault plugin, but posing this as a universal necessity feels like a big departure from typical oauth flows, and the added complexity could be harmful depending on what home-grown solutions are used to implement it.

elwebmaster 10 hours ago|||
"The parent platform" yada yada, my parent platform is bare metal, how about that?
eecc 5 hours ago||||
Looks like how GitLab does it.

As far as I’m concerned, the only sane way is to dump credentials in a well-known path and let the environment decide what to bind them with at runtime (which is how Kubernetes does it, at least the EKS version I’ve had to work with).

IOW, JEE variable binding (JNDI) did it right 20 years or so ago.

It might be worthwhile for architecture designers to look back at that engineering monument (in all its possible meanings; it felt complicated at times) and study its solutions before coming up with a different solution to a problem it solved.

awestroke 17 hours ago|||
The better way to defend against these types of issues is to avoid Vercel and similar providers
elwebmaster 10 hours ago||
Nailed it.
thundergolfer 17 hours ago||
> AI-accelerated tradecraft. The CEO publicly attributed the attacker's unusual velocity to AI augmentation — an early, high-profile data point in the 2026 discourse around AI-accelerated adversary tradecraft.

Attributed without evidence from what I could tell. So it doesn't reveal much at all.

12_throw_away 16 hours ago||
Seems like AI is really disrupting the markets for nonsensical excuses the media will repeat uncritically!
mday27 16 hours ago|||
It's like we're back in 2009 with "did social media cause this?"
shimman 12 hours ago||
I prefer the devil and the 80s myself.
krautsauer 11 hours ago|||
I for one was getting bored of hearing about APTs.
trollbridge 10 hours ago||
To be fair, vibe coded solutions tend to recommend Vercel (just like they tended to recommend the Axios library).
tom1337 18 hours ago||
I still don't get how this exactly worked. Is the OAuth Token they talk about the one that you get when a user uses "Sign in with Google"? Aren't they then bound to the client id and client secret of that specific Google App the user signed in to? How were the attackers able to go from that to a control plane? Because even if the attacker knows the users OAuth token, the client id and the client secret, they can access the Google Drive etc. (which is bad, I get that) but I simply do not understand how they could log in into any Vercel systems from that point. Did they find the credentials in the google drive?
gizzlon 16 hours ago||
They don't really say. My guess would be something embarrassing, and that's why they are keeping it to themselves. Maybe passwords in Drive or Gmail. Or just passwordless login links (like the sibling said).
_pdp_ 18 hours ago|||
Once you have a session token, which is what you get after you complete the OAuth dance, you can issue requests to the API. It is as simple as that. The minted token most likely had permission to access the victim's inbox, which the attacker leveraged to read email and obtain one-time passwords, magic links and other forms of juicy information.
progbits 16 hours ago|||
If they had SSO sign in to their admin panel (trusted device checks notwithstanding) the oauth access would be useless.

Vercel is understandably trying to shift all the blame on the third party but the fact their admin panel can be accessed with gmail/drive/whatever oauth scopes is irresponsible.

zbentley 14 hours ago||
That's a low-leverage place to intervene. Whether or not the internal admin system was directly OAuth linked to Google, by the time the attacker was trying that, they already had a ton of sensitive/valuable info from the employee's Google Workspace account.

If you can only fix one thing (ideally you'd do both, but working in infosec has taught me that you can usually do one thing at most before the breach urgency political capital evaporates), fix the Google token scope/expiry, or fix the environment variable storage system.

kyle-rb 15 hours ago|||
I guess what's unusual is that the scope includes inbox access.

IMO it's probably a bad idea to have an LLM/agent managing your email inbox. Even if it's readonly and the LLM behaves perfectly, supply chain attacks have an especially large blast radius (even more so if it's your work email).

TeMPOraL 14 hours ago||
It's a bit of a pickle, given that managing your inbox (or at least reading it, classifying and summarizing contents, identifying action items etc.) is one of the most valuable applications of LLMs today, especially if you move beyond software developers having LLMs write code for them.
chatmasta 16 hours ago||
I’m not clear on it either. Was the Context.ai OAuth application compromised? So the threat actor essentially had the same visibility into every Context.ai customer’s workspace that Context.ai has? And why is a single employee being blamed? Did this Vercel employee authorize Context.ai to read the whole Vercel workspace?
tosser12344321 15 hours ago||
There are going to be a lot more like this as the IT-enabled economy at large catches up to the risk debt of broad-based experimentation with AI tools from large and small vendors.

It's "AI-enabled tradecraft" as in let's take a guess at Vercel leadership's pressure to install and test AI across the company, regardless of vendor risk? Speed speed speed.

This is an extremely vanilla exploit that every company operating without a strictly enforced AI install allowlist is exposed to - how many AI tools like Context are installed across your scope of local and SaaS AI? Odds are, quite a few; ask your IT guy/gal for estimates.

These tools have access to... everything! And with a security vendor and RBAC mechanism space that'll exist in about... 18-24 months.

Vercel is the canary. It's going to get interesting here; no way in heck is Context the only target. This is a well-established, well-known (and well-ignored) threat vector, and when one breaks open the others start too.

Implies a very challenging 6 months ahead if these exploits are kicking off, as everyone is auditing their AI installs now (or should be), and TAs will fire off with the access they have before it is cut.

Source - am a head of sec in tech

chasd00 12 hours ago||
I’ve said for a while now that there’s going to be a lot of bizarre security incidents in the news over the next few years.
peterldowns 12 hours ago||
I see it the same way. Interesting times…
datadrivenangel 18 hours ago||
"Effective defense requires architectural change: treating OAuth apps as third‑party vendors, eliminating long‑lived platform secrets, and designing for the assumption of provider‑side compromise."

Designing for provider-side compromise is very hard because that's the whole point of trust...

losvedir 18 hours ago||
As someone trying to think about OAuth apps at our SaaS, it certainly is very hard.

Do any marketplaces have a good approach here? I know Cloudflare, after their similar Salesloft issue, has proposed proxying all 3rd party OAuth and API traffic through them. But that feels a little bit like trading one threat vector for another.

Other than standard good practices like narrow scopes, shorter expirations, maybe OAuth Client secret rotation, etc, I'm not sure what else can be done. Maybe allowlisting IP addresses that the requests associated with a given client can come from?

mooreds 18 hours ago|||
This was probably partly a Google refresh token theft (given the length of the access). No inside info, just looking at how the attack occurred.

OAuth 2.1[0] (a draft that has been around longer than I've been at my employer) recommends some protections around refresh tokens: either making them sender-constrained (tied to the client application by public/private key cryptography) or one-time use with revocation if a token is used multiple times.

This is recommended for public clients, but I think makes sense for all clients.

The first option is more difficult to implement, but is similar to the IP address solution you suggest. More robust though.

The second option would have made this attack more difficult because the refresh token held by the legit client, context.ai, would have stopped working, presumably triggering someone to look into why and wonder if the tokens had been stolen.

0: https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1
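
The second option can be sketched as a one-time-use refresh token store with family revocation on replay (a toy in-memory model, not any vendor's actual implementation):

```python
import secrets
from typing import Optional

class RefreshTokenStore:
    """One-time-use refresh tokens: each refresh invalidates the old
    token and issues a new one. A second use of any old token means
    two parties hold tokens (legit client + thief), so the whole
    family is revoked and someone gets paged to investigate."""

    def __init__(self):
        self.active = secrets.token_hex(16)
        self.used = set()
        self.family_revoked = False

    def refresh(self, presented: str) -> Optional[str]:
        if self.family_revoked:
            return None
        if presented in self.used:
            # Replay detected: revoke everything derived from this grant.
            self.family_revoked = True
            return None
        if presented != self.active:
            return None
        self.used.add(presented)
        self.active = secrets.token_hex(16)
        return self.active

store = RefreshTokenStore()
first = store.active
second = store.refresh(first)          # normal rotation succeeds
assert second is not None
assert store.refresh(first) is None    # replay of the stolen token...
assert store.refresh(second) is None   # ...revokes the entire family
```

In the Context.ai scenario, the legit client's next refresh would have failed after the attacker replayed the stolen token, surfacing the theft.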

hvb2 16 hours ago||
One-time use of refresh tokens is really common, no? Where each refresh gets you a new access token AND a new refresh token?

That's standard in OIDC, I believe.

mooreds 16 hours ago||
I don't have data on whether it is common, but I know a few OAuth vendors support it.
wouldbecouldbe 18 hours ago|||
I mean, the admin account had visibility into clients' env vars; that's maybe not really great in the first place.
iririririr 17 hours ago||
you'd think. but this is the js dev world.

nextjs apps bake all env vars into the client side code!! it's all public, unless you prefix the name with private_ or something.

rozenmd 17 hours ago||
This is incorrect.

You prefix with NEXT_PUBLIC_ to expose them in client-side code.

nyc_data_geek1 18 hours ago||
Corroborates that zero-trust until now has been largely marketing gibberish. Security by design means incorporating concepts such as these, rather than assuming your upstream providers will never be utterly owned in a supply chain attack.
LudwigNagasena 12 hours ago||
I don't understand Stage 2. Did the Context.ai app ask for access to Google mail, drive, calendar, etc.? That's crazy. I can't believe any company bigger than a mom-and-pop shop would agree to run that outside of their own environment.

EDIT: the writeup from Context.ai themselves seems quite informative: https://context.ai/security-update. It seems like it was a personal choice of one of the Vercel employees to grant full access to their Google workspace.

afunk 10 hours ago||
Such an embarrassing way to get caught.

"The attacker compromised this OAuth application — the compromise has since been traced to a Lumma Stealer malware infection of a Context.ai employee in approximately February 2026, reportedly after the employee downloaded Roblox game exploit scripts"

saadn92 18 hours ago||
What bites people: rotating a vercel env variable doesn't invalidate old deployments, because previous deploys keep running with the old credential until you redeploy or delete them. So if you rotated your keys after the bulletin but didn't redeploy everything, then the compromised value is still live.

Also worth checking your Google Workspace OAuth authorizations. Admin Console > Security > API Controls > Third-party app access. Guarantee there are apps in there you authorized for a demo two years ago that are still sitting with full email/drive access.

quentindanjou 18 hours ago||
Usually rotating a credential means that you invalidate the previous one. Never heard of rotating credentials that would only create new ones and keep the old ones active.
simlevesque 17 hours ago||
But then every rotation would break production, wouldn't it ?
cortesoft 16 hours ago|||
rotations are usually two-phased. Add the new secret/credential to the endpoint, so both new and old are active and valid. Then release the new secret/credential to clients of that endpoint, and wait until you don't see any requests using the old credential.

Then you remove the old credential from the endpoint.
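
The two-phase flow above, sketched as a toy in-memory model (a real endpoint would persist its credential set and log which credential each request used):

```python
class Endpoint:
    """Two-phase rotation: the endpoint accepts old and new credentials
    simultaneously while clients migrate, then the old one is retired."""

    def __init__(self, credential: str):
        self.valid = {credential}

    def add_credential(self, cred: str) -> None:
        self.valid.add(cred)

    def remove_credential(self, cred: str) -> None:
        self.valid.discard(cred)

    def authenticate(self, cred: str) -> bool:
        return cred in self.valid

ep = Endpoint("old-key")
ep.add_credential("new-key")          # phase 1: both are valid
assert ep.authenticate("old-key") and ep.authenticate("new-key")
# ...roll "new-key" out to all clients, watch for stragglers...
ep.remove_credential("old-key")       # phase 2: retire the old one
assert not ep.authenticate("old-key")
assert ep.authenticate("new-key")
```

NewJazz's caveat applies here: if the attacker still has access during phase 1, they can read the new credential too, so rotation must follow eviction.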

NewJazz 5 hours ago||
Note that you risk reinfection if the attacker can somehow retain access while you rotate out secrets...
kstrauser 17 hours ago|||
Ideally, you can have a couple of working versions at any given time. For instance, an AWS IAM role can have 0 to 2 access keys configured at once. To rotate them, you deactivate all but one key, create a new key, and make that new key the new production value. Once everything's using that key, you can deactivate the old one.
oasisbob 15 hours ago|||
> What bites people: rotating a vercel env variable doesn't invalidate old deployments, because previous deploys keep running with the old credential until you redeploy or delete them. So if you rotated your keys after the bulletin but didn't redeploy everything, then the compromised value is still live.

That statement in the report really confuses me; feels illogical and LLM generated.

An old deployment using an older env var doesn't do anything to control whether or not the credential is still valid. This is a footgun that affects availability, not confidentiality as implied.

Another section in the report is confusing, "Environment variable enumeration (Stage 4)". The described mechanics of env var access are bizarre to me -

> Pay particular attention to any environment variable access originating from user accounts rather than service accounts, or from accounts that do not normally interact with the projects being accessed.

Are people really reading credentials out of vercel env vars for use in other systems?

trollbridge 10 hours ago|||
We have multiple Google accounts for this very reason. Of course, a lot of orgs don’t do this due to the Google Workspace per user “tax”. I tried and failed at a past employer to get some account other than my primary for doing OAuth grants like this.
wouldbecouldbe 18 hours ago|||
When you rotate them, you're supposed to expire your old vars.
nulltrace 13 hours ago|||
Preview deploys are even worse. Every PR spins one up with the same env vars and nobody ever cleans them up. You rotate the key, redeploy prod, and there are still like 200 zombie previews sitting there with the old value.
kevinqi 18 hours ago||
yeah not redeploying on credential changes seems like a design flaw. Render redeploys on env var changes, for instance.
treexs 16 hours ago||
Vercel very clearly highlights that you need to redeploy once you make a credential change
_pdp_ 18 hours ago||
> OAuth trust relationship cascaded into a platform-wide exposure

> The CEO publicly attributed the attacker's unusual velocity to AI

> questions about detection-to-disclosure latency in platform breaches

Typical! The main failures in my mind are:

1. A user account with far too many privileges - possibly many others like it

2. No or limited 2FA or any form of ZeroTrust architecture

3. Bad cyber security hygiene

JauntyHatAngle 18 hours ago|
Blaming AI is gonna be the security-breach equivalent of blaming DDoS when your website breaks, isn't it?
progbits 16 hours ago|||
It's the new sophisticated nation state.
ekropotin 12 hours ago||||
The idea of blaming something you can choose not to do is quite strange.
paulddraper 12 hours ago||
You can choose for attackers not to use AI?
anematode 17 hours ago||||
That part of his tweet made me laugh out loud. I don't understand who it's directed toward.
BoorishBears 17 hours ago||
The market. Rauch is 'strategic' like that; he'd even use a moment like this to sneak in a sound bite to froth the market he has so much skin in.

"Vercel CEO says AI accelerated attack on critical infrastructure"

anematode 16 hours ago||
sigh Right.

Ironically, if the timeline is true that the attackers had been inside for months, the AIs they had access to are substantially weaker than today's frontier models. How much faster would they have achieved their goals with GLM 5.1?

xienze 16 hours ago|||
I think there’s a lot of truth to “the AI did it” though. We’re encouraging the same people who get tricked by “attached is your invoice” emails to run agent harnesses that have control of your desktop. I think there’s gonna be a lot of AI-powered exploits in the future.
oasisbob 15 hours ago|
Some of the details in this report, like the timeline beginning in 2024-2025, haven't been widely reported?

Anyone know where these dates are being sourced from? eg,

> Late 2024 – Early 2025: Attacker pivots from Context.ai OAuth access to a Vercel employee's Google Workspace account -- CONFIRMED — Rauch statement

> Early - mid-2025: Internal Vercel systems accessed; customer environment variable enumeration begins -- CONFIRMED — Vercel bulletin

captn3m0 15 hours ago|
These are all made up and likely hallucinated.
oasisbob 11 hours ago||
[dead]