Posted by colesantiago 10 hours ago

Vercel April 2026 security incident (www.bleepingcomputer.com)
https://vercel.com/kb/bulletin/vercel-april-2026-security-in...
509 points | 304 comments
nettlin 5 hours ago|
They just added more details:

> Indicators of compromise (IOCs)

> Our investigation has revealed that the incident originated from a third-party AI tool whose Google Workspace OAuth app was the subject of a broader compromise, potentially affecting hundreds of its users across many organizations.

> We are publishing the following IOC to support the wider community in the investigation and vetting of potential malicious activity in their environments. We recommend that Google Workspace Administrators and Google Account owners check for usage of this app immediately.

> OAuth App: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com

https://vercel.com/kb/bulletin/vercel-april-2026-security-in...
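For Workspace admins acting on the IOC above, here is a minimal sketch of the audit. It assumes token records shaped like the `clientId`/`userKey` fields returned by the Google Admin SDK Directory API's `tokens.list` method; the sample records and email addresses are illustrative, not from the bulletin.

```python
# Flag any OAuth grants to the client ID published as an IOC in the bulletin.
# In practice the records would come from the Admin SDK Directory API
# (tokens.list per user); here they are plain dicts with the same field names.
IOC_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj"
    ".apps.googleusercontent.com"
)

def find_ioc_grants(token_records, ioc_client_id=IOC_CLIENT_ID):
    """Return the token records granted to the compromised OAuth app."""
    return [t for t in token_records if t.get("clientId") == ioc_client_id]

sample = [
    {"clientId": IOC_CLIENT_ID, "userKey": "alice@example.com"},
    {"clientId": "unrelated-app.apps.googleusercontent.com",
     "userKey": "bob@example.com"},
]
for hit in find_ioc_grants(sample):
    print(f"REVOKE: {hit['userKey']} granted access to the IOC app")
# → REVOKE: alice@example.com granted access to the IOC app
```

Any match would warrant revoking the grant and treating that account as potentially compromised.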

ryanscio 1 hour ago||
https://x.com/rauchg/status/2045995362499076169

> A Vercel employee got compromised via the breach of an AI platform customer called http://Context.ai that he was using.

> Through a series of maneuvers that escalated from our colleague’s compromised Vercel Google Workspace account, the attacker got further access to Vercel environments.

> We do have a capability however to designate environment variables as “non-sensitive”. Unfortunately, the attacker got further access through their enumeration.

> We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel.

Still no email blast from Vercel alerting users, which is concerning.

cowsup 1 hour ago||
> Still no email blast from Vercel alerting users, which is concerning.

On the one hand, I get that it's a Sunday, and the CEO can't just write a mass email without approval from legal or other comms teams.

But on the other hand... It's Sunday. Unless you're tuned-in to social media over the weekend, your main provider could be undergoing a meltdown while you are completely unaware. Many higher-up folks check company email over the weekend, but if they're traveling or relaxing, social media might be the furthest thing from their mind. It really bites that this is the only way to get critical information.

loloquwowndueo 58 minutes ago|||
> the CEO can't just write a mass email without approval from legal or other comms teams.

They can be brought in to do their job on a Sunday for an event of this relevance. They can always take next Friday off or something.

eclipticplane 46 minutes ago|||
Has anyone actually gotten an email from Vercel confirming their secrets were accessed? Right now we're all operating under the hope (?) that since we haven't (yet?) gotten an email, we're not completely hosed.
loloquwowndueo 44 minutes ago|||
Hope-based security should not be a thing. Did you rotate your secrets? Did you audit your platform for weird access patterns? Don’t sit waiting for that vercel email.
eclipticplane 35 minutes ago||
Of course rotated. But we don't even know when the secrets were stolen vs. when we were told, so we're missing a ton of info needed to _fully_ triage.
ItsClo688 8 minutes ago|||
Nope... I feel you. "Hope-based security" is exactly what Vercel is forcing on its users right now by prioritizing social media over direct notification.

If the attacker is moving with "surprising velocity," every hour of delay on an email blast is another hour the attacker has to use those potentially stolen secrets against downstream infrastructure. Using Twitter/X as a primary disclosure channel for a "sophisticated" breach is amateur hour. If legal is the bottleneck for a mass email during an active compromise, then your incident response plan is fundamentally broken.

loloquwowndueo 5 hours ago|||
The actual app name would be good to have. Understandable that they don't want to throw them under the bus, but not revealing what app/service this was just delays people taking action.
pottertheotter 1 hour ago|||
It’s context.ai

https://x.com/rauchg/status/2045995362499076169

progbits 5 hours ago|||
I was trying to look it up (basically https://developers.google.com/identity/protocols/oauth2/java... -- the consent screen shows the app name) but it now says "Error 401: invalid_client; The OAuth client was not found." so it was probably deleted by the oauth client owner.
tom1337 5 hours ago|||
It indeed was deleted as this URL shows: https://accounts.google.com/o/oauth2/v2/auth?client_id=11067...
loloquwowndueo 4 hours ago|||
Makes it even more relevant to have the actual app or vendor name - who’s to say they just removed it to save face and won’t add it later?
cebert 5 hours ago|||
I don’t understand why they can’t just directly name the responsible app as it will come out eventually.
pottertheotter 1 hour ago|||
It’s context.ai

https://x.com/rauchg/status/2045995362499076169

sroussey 13 minutes ago|||
Which itself was the subject of a broader compromise as far as i can tell
SaltyBackendGuy 4 hours ago||||
Maybe legal red tape?
brookst 1 hour ago||
Yes. The oauth ID is indisputable. And it seems to be context.ai. But suppose it was a fake context.ai that the employee was tricked into using. Or… or…

Better to report 100% known things quickly. People can figure it out with near zero effort, and it reduces one tiny bit of potential liability in the ops shitstorm they’re going through.

mcdow 4 hours ago|||
They might be buying time to sell the relevant stock
newdee 5 hours ago|||
It looks like the app has already been deleted
hansmayer 4 hours ago|||
[flagged]
junon 4 hours ago||
This was a Google oauth app and it was phished. So... No.
slopinthebag 5 hours ago||
Idk exactly how to articulate my thoughts here, perhaps someone can chime in and help.

This feels like a natural consequence of the direction web development has been going for the last decade, where it's normalised to wire up many third party solutions together rather than building from more stable foundations. So many moving parts, so many potential points of failure, and as this incident has shown, you are only as secure as your weakest link. Putting your business in the hands of a third party AI tool (which is surely vibe-coded) carries risks.

Is this the direction we want to continue in? Is it really necessary? How much more complex do things need to be before we course-correct?

lijok 5 hours ago||
This isn't a web development concept. It's the unix philosophy of "write programs that do one thing and do it well" and interconnect them, being taken to the extremes that were never intended.

We need a different hosting model.

esseph 2 hours ago|||
> We need a different hosting model.

There really isn't an option here, IMO.

1. Somebody does it

2. You do it

Much happier doing it myself tbh.

slopinthebag 5 hours ago|||
In my mind the unix philosophy leads to running your cloud on your own hardware or VPS's, not this.
bdangubic 4 hours ago||
exactly this: "write" - not "use some sh*t written by some dude from Akron OH 2 years ago"
arcfour 4 hours ago||
That's why I wrote my own compiler and coreutils. Can't trust some shit written by GNU developers 30 years ago.

And my own kernel. Can't trust some shit written by a Finnish dude 30 years ago.

And my own UEFI firmware. Definitely can't trust some shit written by my hardware vendor ever.

slopinthebag 4 hours ago|||
Yeah definitely no difference between GNU coreutils and some vibe coded AI tool released last month that wants full oAuth permissions.
eddythompson80 4 hours ago|||
I’m not joking, but weirdly enough, that’s what most AI arguments boil down to. Show me what the difference is while I pull up the endless CVE list of whichever coreutils package you had in mind. It’s a frustrating argument, because you know that the authors of coreutils-like packages had intentionality in their work, while an LLM has no such thing. Yet in the end, security vulnerabilities are abundant in both.

The AI maximalists would argue that the only way is through more AI. Vibe code the app, then ask an LLM to security review it, then vibe code the security fixes, then ask the LLM to review the fixes and app again, rinse and repeat in an endless loop. Same with regressions, performance, features, etc. Stick the LLM in endless loops for every vertical you care about.

Pointing to failed experiments like the browser or compiler ones somehow doesn’t seem to deter AI maximalists. They would simply claim they needed better models/skills/harness/tools/etc. The goalpost is always one foot away.

arcfour 2 hours ago|||
I wouldn't describe myself as an AI maximalist at all. I just don't believe the false dichotomy of you either produce "vulnerable vibe coded AI slop running on a managed service" or "pure handcrafted code running on a self hosted service."

You can write good and bad code with and without AI, on a managed service, self-hosted, or something in between.

And the comment I was replying to said something about not trusting something written in Akron, OH 2 years ago, which makes no sense and is barely an argument, and I was mostly pointing out how silly that comment sounds.

slopinthebag 1 hour ago|||
It's such a bad faith argument; they basically draw false equivalencies between LLMs and other software. Same with the "AI is just a higher level compiler" argument. The "just" is doing a ton of heavy lifting in those arguments.

Regarding the unix philosophy argument, comparing it to AI tools just doesn't make any sense. If you look at what the philosophy is, it's obvious that it doesn't just boil down to "use many small tools" or "use many dependencies"; it's so different that it's not even wrong [0].

In their Unix paper of 1974, Ritchie and Thompson quote the following design considerations:

- Make it easy to write, test, and run programs.

- Interactive use instead of batch processing.

- Economy and elegance of design due to size constraints ("salvation through suffering").

- Self-supporting system: all Unix software is maintained under Unix.

In what way does that correspond to "use dependencies" or "use AI tools"? This was then formalised later to

- Write programs that do one thing and do it well.

- Write programs to work together.

- Write programs to handle text streams, because that is a universal interface.

This has absolutely nothing in common with pulling in thousands of dependences or using hundreds of third party services.

Then there is the argument that "AI is just a higher level compiler". That is akin to me saying that "AI is just a higher level musical instrument" except it's not, because it functions completely differently to musical instruments and people operate them in a completely different way. The argument seems to be that since both of them produce music, in the same way both a compiler and LLM generate "code", they are equivalent. The overarching argument is that only outputs matter, except when they don't because the LLM produces flawed outputs, so really it's just that the outputs are equivalent in the abstract, if you ignore the concrete real-world reality. Using that same argument, Spotify is a musical instrument because it outputs music, and hey look, my guitar also outputs music!

0: https://en.wikipedia.org/wiki/Not_even_wrong

brookst 1 hour ago||||
So it’s not a binary thing, there’s context and nuance?
arcfour 4 hours ago|||
Embrace the suck.
DASD 4 hours ago|||
TempleOS, is that you?
nikcub 7 hours ago||
Claude Code defaulting to a certain set of recommended providers[0] and frameworks is making the web more homogenous and that lack of diversity is increasing the blast radius of incidents

[0] https://amplifying.ai/research/claude-code-picks/report

operatingthetan 7 hours ago||
It's interesting how many of the low-effort vibecoded projects I see posted on reddit are on vercel. It's basically the default.
Aurornis 5 hours ago|||
Reddit vibecoded LLM posts are kind of fascinating for how homogenous they are. The number of vibe coded half-finished projects posted to common subreddits daily is crazy high.

It’s interesting how they all use LLMs to write their Reddit posts, too. Some of them could have drawn in some people if they took 5 minutes to type an announcement post in their own words, but they all have the same LLM style announcement post, too. I wonder if they’re conversing with the LLM and it told them to post it to Reddit for traction?

derefr 51 minutes ago|||
I find that often the developers of these apps don't speak English, but want to target an English-speaking audience. For the marketing copy, they're using the LLM more to translate than to paraphrase, but the LLM ends up paraphrasing anyway.
politelemon 5 hours ago|||
They are not exclusive to reddit. HN has also been full of vibe submissions of the same nature.
gbgarbeb 6 hours ago||||
10 years ago it was Heroku and Three.js.
boringg 6 hours ago|||
New one coming in 5 years. Cycle repeats itself.
guelo 6 minutes ago||
I don't think so, AIs are going to freeze the tooling to what we have today since that's what's in the training corpus, and it's self reinforcing.
seattle_spring 5 hours ago|||
10 years ago it was Heroku and Ruby on Rails*
bdcravens 1 hour ago|||
More like 15. By 2016, Rails was supposedly dead and we were all going to be running the same code on the front end and back end in a full stack, MongoDB euphoria.
dzonga 1 hour ago|||
but now Ruby on Rails is not a circus like how Next.js is.

see [0]: Rails security Audit Report

[0]: https://ostif.org/ruby-on-rails-audit-complete/

fantasizr 6 hours ago||||
next, vercel, and supabase is basically the foundation of every vibecoded project by mere suggestion.
MrDarcy 4 hours ago|||
They’re all shit too. All three decided to do custom auth instead of OIDC and it’s a nightmare to integrate with any of them.
echelon 5 hours ago|||
Another Anthropic revenue stream:

Protection money from Vercel.

"Pay us 10% of revenue or we switch to generating Netlify code."

JLO64 5 hours ago||
Wouldn’t Vercel still make money in that scenario since Netlify uses them?
slopinthebag 5 hours ago||
Netlify uses AWS (and Cloudflare? Vercel def uses Cloudflare)
serhalp 1 hour ago|||
Netlify and Vercel both use AWS. AFAIK neither uses Cloudflare. Vercel did use Cloudflare for parts of its infra until about a year ago though.
arcfour 4 hours ago|||
Vercel runs on AWS.
lmm 27 minutes ago|||
Is that bad? I would think having everyone on the same handful of platforms should make securing them easier (and means those platforms have more budget to do so), and with fewer but bigger incidents there's a safety-of-the-herd aspect - you're unlikely to be the juiciest target on Vercel during the vulnerability window, whereas if the world is scattered across dozens or hundreds of providers that's less so.
neilv 6 hours ago|||
The other day, I was forcing myself to use Claude Code for a new CRUD React app[1], and by default it excreted a pile of Node JS and NPM dependencies.

So I told something like, "don't use anything node at all", and it immediately rewrote it as a Python backend, and it volunteered that it was minimizing dependencies in how it did that.

[1] only vibe coding as an exercise for a throwaway artifact; I'm not endorsing vibe coding

t0mas88 5 hours ago|||
You can tell Claude to use something highly structured like Spring Boot / Java. It's a bit more verbose in code, but the documentation is very good which makes Claude use it well. And the strict nature of Java is nice in keeping Claude on track and finding bugs early.

I've heard others had similar results with .NET/C#

lmm 38 minutes ago|||
Spring Boot is every bit as random mystery meat as Vercel or Rails. If you want explicit then use non-Boot Spring or even no Spring at all.
MrDarcy 4 hours ago|||
Same for Go.
BigTTYGothGF 5 hours ago||||
> forcing myself to use Claude Code

You don't have to live like this.

neilv 4 hours ago||
Even though I'm a hardcore programmer and software engineer, I still need to at least keep aware of the latest vibe coding stuff, so I know what's good and bad about it.
siva7 5 hours ago||||
I'm struggling to understand how they bought Bun, yet their own AI models are more fixated on writing Python for everything than even the models of the competitor who bought the actual Python ecosystem (OAI with uv)
Imustaskforhelp 4 hours ago||||
> Python

I once made a golang multi-person pomodoro app by vibe coding with gemini 3.1 pro (on its launch day). I asked it to use only one outside dependency, gorilla websockets, with everything else from the standard library, and then I deployed it to Hugging Face Spaces for free.

I definitely recommend golang as a language if you wish to vibe code. Some people recommend Rust, but Golang compiles fast, cross-compiles to portable binaries, and has a really awesome standard library.

(Anecdotally, I also feel like there is some chance the models are being diluted: this app has become my benchmark test, and other models have since performed somewhat worse on it. It's only been a few days since I started using Hacker News less frequently, and I was already seeing suspicions like these about Claude and other models on the front page iirc. I don't know enough about Claude Opus 4.7, I just read Simon's comment on it, so it would be cool if someone could give me a gist of what has been happening for the past few days.)

echelon 5 hours ago|||
It emits Actix and Axum extremely well with solid support for fully AOT type checked Sqlx.

Switch to vibe coding Rust backends and freeze your supply chain.

Super strong types. Immaculate error handling. Clear and easy to read code. Rock solid performance. Minimal dependencies.

Vibe code Rust for web work. You don't even need to know Rust. You'll osmose it over a few months using it. It's not hard at all. The "Rust is hard" memes are bullshit, and the "difficult to refactor" was (1) never true and (2) not even applicable with tools like Claude Code.

Edit: people hate this (-3), but it's where the alpha is. Don't blindly dismiss this. Serializing business logic to Rust is a smart move. The language is very clean, easy to read, handles errors in a first class fashion, and fast. If the code compiles, then 50% of your error classes are already dealt with.

Python, Typescript, and Go are less satisfactory on one or more of these dimensions. If you generate code, generate Rust.

neilv 5 hours ago|||
How are you getting low dependencies for Web backend with Rust? (All my manually-written Rust programs that use crates at all end up pulling in a large pile of transitive dependencies.)
slopinthebag 5 hours ago||||
Ok I mean this is a little crazy, "minimal dependencies" and Rust? Brother I need dependencies to write async traits without tearing my hair out.

But you're also correct in that Rust is actually possible to write in a more high level way, especially for web where you have very little shared state and the state that is shared can just be wrapped in Arc<> and put in the web frameworks context. It's actually dead easy to spin up web services in Rust, and they have a great set of ORM's if thats your vibe too. Rust is expressive enough to make schema-as-code work well.

On the dependencies, if you're concerned about the possibility of future supply chain attacks (because Rust doesn't have a history like Node), you can vendor your deps and bypass future problems. `cargo vendor` and you're done; Node has no such ergonomic path to vendoring, which imo is a better story than anything else besides maybe Go (another great option for web services!). Saying "don't use deps" doesn't work for any language other than something like Go (and there you can run `go mod vendor` as well).

But yeah, in today's economy where compute and especially memory is becoming more constrained thanks to AI, I really like the peace of mind knowing my unoptimised high level Rust web services run with minimal memory and compute requirements, and further optimisation doesn't require a rewrite to a different language.

Idk mate, I used to be a big Rust hater, but once I gave the language a serious try I found it more pleasant to write than both Typescript and Go. And it's very amenable to AI, if that's your vibe(coding), since the static guarantees of the type system make it easier for AI to generate correct code, and the diagnostic messages allow it to reroute its course during the session.

OptionOfT 5 hours ago|||
Except with using Rust like this you're using it like C#. You don't get to enjoy the type system to express your invariants.
nightski 7 hours ago|||
It's a good point, but I don't think the problem here is Claude. It's how you use it. We need to be guiding developers to not let Claude make decisions for them. It can help guide decisions, but ultimately one must perform the critical thinking to make sure it is the right choice. This is no different than working with any other teammate for that matter.
pastel8739 29 minutes ago|||
Shouldn’t Claude just refuse to make decisions, then, if it is problematic for it to do so? We’re talking about a trillion dollar company here, not a new grad with stars in their eyes
dennisy 7 hours ago||||
I think most people would agree.

However it is less clear on how to do this, people mostly take the easiest path.

fintler 6 hours ago|||
It's an Eternal September moment.

https://en.wikipedia.org/wiki/Eternal_September

userbinator 4 hours ago||
Eternal Sloptember
operatingthetan 6 hours ago||||
I guess engineers can differentiate their vibecoded projects by selecting an eccentric stack.
alex7o 6 hours ago||
Choosing an eccentric stack makes the llms do better even. Like Effect.ts or Elixir
rpcope1 5 hours ago||
I actually noticed the same. Having it work on Mithril.js instead of React seems (I know it's all just kind of hearsay) to generate a lot cleaner code. Maybe it's just because I know and like Mithril better, but it's also likely because of the project ethos and its being used by people who really want to use Mithril in the wild. I've seen the same for other slightly more exotic stacks, like bottle vs flask, and telling it to generate Scala or Erlang.
egeozcan 6 hours ago|||
> a. Actually do something sane but it will eat your session

> b. (Recommended) Do something that works now, you can always make it better later

duped 5 hours ago|||
No, the problem is the people building and selling these tools. They are marketed as a way of outsourcing thinking.
dennisy 5 hours ago||
So what are you suggesting do not allow companies to sell such tools?
duped 5 hours ago||
I'm suggesting people shouldn't lie to sell things because their customers will believe them and this causes measurable harm to society.
liveoneggs 5 hours ago||
AI does outsource thinking. It is not a lie.
hansmayer 4 hours ago|||
If you don't tend to think much in the first place or have low expectations, then yes
duped 4 hours ago|||
I think if you believe that, you're either lying or experiencing psychosis. LLMs are the greatest innovation in information retrieval since PageRank, but they are not capable of thought any more than PageRank is.
neal_jones 6 hours ago|||
The thing I can’t stop thinking about is that Ai is accelerating convergence to the mean (I may be misusing that)

The internet does that but it feels different with this

themafia 6 hours ago||
> convergence to the mean

That's a funny way of saying "race to the bottom."

> The internet does that but it feels different with this

How does "the internet do that?" What force on the internet naturally brings about mediocrity? Or have we confused rapacious and monopolistic corporations with the internet at large?

walthamstow 4 hours ago|||
I'd call it race to the median, converging to mediocrity, or what the kids would call "mid"
slashdave 1 hour ago||||
> How does "the internet do that?"

Stack exchange. Google.

mentalgear 5 hours ago|||
Indeed 'race to the bottom' seems more like capitalism in general.
slashdave 1 hour ago|||
I'm not against making agents scapegoats, but this is a problem found among humans as well.
elric 5 hours ago|||
Interestingly, a recent conversation [1] between Hank Green and security researcher Sherri Davidoff argued the opposite: more GenAI-generated code targeted at specific audiences should result in a more resilient ecosystem because of greater diversity. That obviously can't work if they all end up using the same 3 frameworks in every application.

[1] https://www.youtube.com/watch?v=V6pgZKVcKpw

habinero 4 hours ago||
I love Hank, but he has such a weird EA-shaped blind spot when it comes to AI. idgi

It is true that "more diversity in code" probably means less turnkey spray-and-pray compromises, sure. Probably.

It also means that the models themselves become targets. If your models start building the same generated code with the same vulnerability, how're you gonna patch that?

kay_o 3 hours ago||
> start building the same generated code with the same vulnerability

This situation is pretty funny to me. Some of my friends who aren't technical tried vibe coding, showed me what they built, and asked for feedback

I noticed they were using Supabase by default and pointed out that their database was completely open with no RLS

So I told them not to use Supabase in that way, and they asked the AI (various diff LLMs) to fix it. One example prompt I saw was: please remove Supabase because of the insecure data access and make a proper secure way.

Keep in mind, these people don't have a technical background and do not know what Supabase or Node or Python is. They let the LLM install Docker, Node, etc. and just hit approve on "Do you want to continue? bash(brew install ..)"

What's interesting is that this happened multiple times with different AI models. Instead of fixing the problem the way a developer normally would (moving the database logic to the server, or creating proper API endpoints), it tried to recreate an emulation of Supabase, specifically PostgREST, in a much worse and less secure way.

The result was an API endpoint that looked like: /api/query?q=SELECT * FROM table WHERE x

In one example, GLM later bolted on a huge "security" regular expression that blocked admin, updateadmin, ^delete* lol
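The keyword-blocklist approach described above is not a real fix; a minimal demonstration of why (the regex here is a hypothetical stand-in for the kind of filter the LLM generated, not the actual one):

```python
import re

# A naive "security" filter: block queries containing scary keywords.
BLOCKLIST = re.compile(r"\b(admin|updateadmin|delete|drop)\b", re.IGNORECASE)

def is_blocked(query):
    """Return True if the raw SQL string trips the keyword blocklist."""
    return bool(BLOCKLIST.search(query))

# Caught by the filter:
assert is_blocked("DELETE FROM users")
# Sails straight through: a SQL comment splits the keyword...
assert not is_blocked("DEL/**/ETE FROM users")
# ...and nothing stops simply reading every row in the first place.
assert not is_blocked("SELECT * FROM users WHERE 1=1")
```

The only robust fix is what a developer would do anyway: keep the database off the public endpoint and expose parameterized, authorized API routes instead.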

betocmn 5 hours ago|||
Yeah, I’ve been tracking what devtools different models choose: https://preseason.ai
mvkel 5 hours ago|||
That's only looking at half of the equation.

That lack of diversity also makes patches more universal, and the surface area more limited.

btown 6 hours ago|||
"Nobody ever got fired for putting their band page on MySpace."
stefan_ 6 hours ago|||
It's so trivial to seed. LLMs are basically the idiots that have fallen for all the SEO slop on Google. Did some travel planning earlier and it was telling me all about extra insurances I need and why my normal insurance doesn't cover X or Y (it does of course).
andersmurphy 7 hours ago||
That's the irony of Mythos. It doesn't need to exist. LLM vibe slop has already eroded the security of your average site.
egeozcan 6 hours ago|||
Self fulfilling prophecy: You don't need to secure anything because it doesn't make a difference, as Mythos is not just a delicious Greek beer, but also a super-intelligent system that will penetrate any of your cyber-defenses anyway.
andersmurphy 6 hours ago|||
In some ways Mythos (like many AI things) can be used as the ultimate accountability sink.

These libraries/frameworks are not insecure because of bad design and dependency bloat. No! It's because a mythical LLM is so powerful that it's impossible to defend against! There was nothing that could be done.

Something1234 5 hours ago|||
Explain more about this beer.
wonnage 6 hours ago|||
Conspiracy theory: they intentionally seeded the world with millions of slop PRs and now they’re “catching bugs” with Mythos
toddmorey 8 hours ago||
I've been part of a response team on a security incident and I really feel for them. However, this initial communication is terrible.

Something happened, we won't say what, but it was severe enough to notify law enforcement. What floors me is that the only actionable advice is to "review environment variables". What should a customer even do with that advice? Make sure the variables are still there? How would you know if any of them were exposed or leaked?

The advice should be to IMMEDIATELY rotate all passwords, access tokens, and any sensitive information shared with Vercel. And then begin to audit access logs, customer data, etc, for unusual activity.

The only reason to dramatically overpay for the hosting resources they provide is because you expect them to expertly manage security and stability.

I know there is a huge fog of uncertainty in the early stages of an incident, but it spooks me how intentionally vague they seem to be here about what happened and who has been impacted.

btown 4 hours ago||
Via the incident page:

> Environment variables marked as "sensitive" in Vercel are stored in a manner that prevents them from being read, and we currently do not have evidence that those values were accessed. However, if any of your environment variables contain secrets (API keys, tokens, database credentials, signing keys) that were not marked as sensitive, those values should be treated as potentially exposed and rotated as a priority.

https://vercel.com/kb/bulletin/vercel-april-2026-security-in... as of 4:22p ET
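A hedged sketch of the triage this implies: treat any env var whose name suggests secret material, and that was not marked "sensitive", as rotation priority. The name patterns below are my own heuristic, not anything Vercel publishes; a name-based scan can miss secrets in innocuously named variables, so review the actual values too.

```python
import re

# Names that usually indicate secret material; extend for your own conventions.
SECRET_HINT = re.compile(
    r"(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL|DATABASE|DSN)", re.IGNORECASE
)

def rotation_priority(env_var_names, sensitive_names):
    """Return likely-secret variables that were NOT stored as sensitive."""
    return sorted(
        name for name in env_var_names
        if SECRET_HINT.search(name) and name not in sensitive_names
    )

print(rotation_priority(
    ["DATABASE_URL", "STRIPE_SECRET_KEY", "NEXT_PUBLIC_SITE_NAME", "JWT_SECRET"],
    {"JWT_SECRET"},  # already marked sensitive in the dashboard
))
# → ['DATABASE_URL', 'STRIPE_SECRET_KEY']
```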

aziaziazi 3 hours ago|||
The “sensitive” toggle is off by default. I’m curious about the rationale: what's the benefit of this default for users and/or Vercel?

https://vercel.com/docs/environment-variables/sensitive-envi...

loloquwowndueo 3 hours ago|||
Sensitive environment variables are environment variables whose values are non-readable once created.

So they are harder to introspect and review once set.

It’s probably good practice to put non-secret-material in non-sensitive variables.

(Pure speculation, I’ve never used Vercel)

_heimdall 2 hours ago||
I have used Vercel though prefer other hosts.

There are cases where I want env variables to be considered non-secure and fine to be read later, I have one in a current project that defines the email address used as the From address for automated emails for example.

In my opinion the lack of security should be opt-in rather than opt-out though. Meaning it should be considered secure by default with an option to make it readable.
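The secure-by-default semantics argued for here can be modeled as a toy store where every value is write-only unless explicitly opted into readability. This is an illustration of the design choice, not Vercel's implementation:

```python
class EnvStore:
    """Toy env-var store: values are write-only ("sensitive") by default."""

    def __init__(self):
        self._values = {}
        self._readable = set()

    def set(self, name, value, readable=False):
        # Readability is opt-in, so forgetting the flag fails safe.
        self._values[name] = value
        if readable:
            self._readable.add(name)

    def get(self, name):
        # Dashboard-style reads: refuse unless explicitly marked readable.
        if name not in self._readable:
            raise PermissionError(f"{name} is sensitive; value is write-only")
        return self._values[name]

    def inject(self):
        """What the running app sees at deploy time: every value."""
        return dict(self._values)

store = EnvStore()
store.set("STRIPE_SECRET_KEY", "sk_live_placeholder")          # sensitive by default
store.set("EMAIL_FROM", "noreply@example.com", readable=True)  # opt-in readable
```

With this inversion, an attacker enumerating the store through a dashboard or API gets nothing unless someone deliberately opted a variable out of protection.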

throw03172019 2 hours ago|||
Simpler for vibe coders.
jtchang 2 hours ago|||
How does the app read the variable if it can't be read after you input it? Or do they mean you can't view it after providing the variable value to the UI?
ctmnt 29 seconds ago||
They mean the latter. Very unclear how that translates to meaningful security.
birdsongs 8 hours ago|||
Seriously. Why am I reading about this here and not via an email? I've been a paying customer for over a year now. My online news aggregator informs me before the actual company itself does?
shimman 7 hours ago|||
Please remember that this is the same company that couldn't figure out how to authorize 3rd party middleware and had what should have been a company-ending critical vulnerability.

Oh and the owner likes to proudly remind people about his work on Google AMP, a product that has done major damage to the open web.

This is who they are: a bunch of incompetent engineers that play with pension funds + gulf money.

salomonk_mur 4 minutes ago||||
Says they emailed affected customers...
1970-01-01 4 hours ago|||
I just deleted my account. Their laid-back notice just isn't worth it anymore. I will hold them accountable with my cash, and you can get out with me. Let their apologies hit your spam filter. They need to be better prepared to react to the storm of insanity that comes with a breach, or they lose my info (lose it twice, I guess..)
gherkinnn 3 hours ago|||
Last year Vercel bungled the security response to a vulnerability in Next's middleware. This is nothing new.

https://news.ycombinator.com/item?id=43448723

https://xcancel.com/javasquip/status/1903480443158298994

tcp_handshaker 3 hours ago|||
Security is hard and there are only three vendors I trust: AWS, Google, and IBM (yes, IBM). Anything else is just asking for trouble.
dd_xplore 2 hours ago|||
Oracle too
gustavus 1 hour ago||
Oracle? Oracle?

The Oracle that published an announcement that said "we didn't get hacked" when the hackers had private customer info?

The Oracle that does not allow you to do any security testing on their software unless you use one of their approved vendors?

The Oracle that one of my customers uses where they have to turn off the HR portal for 2 weeks before annual performance evaluations because there is no way to prevent people from seeing things?

The only reason Oracle isn't having nightmarish security problems published every other week is because they threaten to sue anyone that does find an issue.

Oracle is a joke in every conceivable way and I despise them on a personal level.

warmedcookie 8 minutes ago||
I love a good cathartic rant
esseph 2 hours ago|||
Having worked both public and private, I can agree with this.

Google in particular has been staggeringly good, and don't sleep on IBM when they Actually Care.

0xmattf 8 hours ago|||
> The only reason to dramatically overpay for the hosting resources they provide is because you expect them to expertly manage security and stability.

This, and because it's so convenient to click some buttons and have your application running. I've stopped being lazy, though. Moved everything from Render to Linode. I was paying Render $50+/month. Now I'm paying $3-5.

I would never use one of those hosting providers again.

cleaning 2 hours ago|||
If you're only paying $3-5 on Linode then your level of usage would probably be comfortably at $0 on Vercel.
0xmattf 2 hours ago|||
It could be $0 on Render too, but then there's going to be a 3 minute load time for a landing page to become visible, lol. So if you don't want your server to sleep, you're going to have to pay $20/month.

Does Vercel do the same?

somewhatgoated 44 minutes ago||
No, I've run several small websites on Vercel for free for years; it has always served static pages very quickly.
esseph 2 hours ago|||
Makes sense considering the quality of Vercel's security response and customer communication.
nightski 7 hours ago||||
Looking at Linode, those prices get you an instance with 1 GB of RAM and a mediocre CPU. So you are running all of your applications on that?
0xmattf 6 hours ago|||
Personal projects/MVPs/small projects? Absolutely. For what I'm running, there's no reason to need anything beyond that.

The point is, I used to just throw everything up on a PaaS. Heroku/Render, etc. and pay way more than I needed to, even if I had 0 users, lol.

adhamsalama 4 hours ago|||
For $3.5, Hetzner gives 2 vCPU, 4GB RAM, 40 GB SSD, and 10 TB of bandwidth.
skeeter2020 3 hours ago||
How much work should the GP do to migrate, if Linode is already good enough, to potentially save up to $1.50/month (or spend 50 cents more)?
p_stuart82 3 hours ago|||
Exactly. People paid the premium so somebody else's OAuth screwup wouldn't become their Sunday. And here we are.
lo1tuma 3 hours ago|||
Yeah, given their insane pricing I think the expectations can be higher. I know it is impossible to provide a 100% secure system, but if something like this happens, then the communication should at least be better. Don’t wait until you have talked to the lawyers... inform your customers first, ideally without the corporate BS speak. Most Vercel customers are probably developers, so they understand that incidents like this can happen; just be transparent about it.
rybosome 7 hours ago|||
Completely agreed. At minimum they should be advising secret rotation.

The only possibility for that not being a reasonable starting point is if they think the malicious actors still have access and will just exfiltrate rotated secrets as well. Otherwise this is deflection in an attempt to salvage credibility.

elmo2you 4 hours ago||
Welcome to the show.

While it was a different kind of incident (in hindsight), Webflow had a serious operational incident the other week.

Sites across the globe went down (no clue if all or just a part of them). They posted plenty of messages, I think for about 12 hours, but mostly with the same content: "working on fixing this with an upstream provider" (paraphrased). No meaningful info about what the actual problem or impact was.

Only the next day did somebody write about what happened. Essentially, a database ran out of storage space. How that became a single point of failure for at least plenty of customers: no clue. Sounds like bad architecture to me, though. But what personally rubbed me the wrong way most of all was the insistence that their "dashboard" had not indicated anything wrong with their database deployment, as it allegedly misrepresented the used/allocated storage. I don't know who this upstream service provider of Webflow is, but I know plenty about server maintenance.

Either that upstream provider didn't expose a crucial metric (on-disk storage use) on their "dashboard", or Webflow was throwing this provider under the bus for what may have been their own ignorant/incompetent database server management. I guess it all depends on to what extent this database was a managed service versus something Webflow had more direct control over. Either way, with any clue about the provider or service missing from their post-mortem, customers can only guess who was to blame for the outage.

I have a feeling that we probably aren't the only customer they lost over this. Which in our case would probably not have happened, if they had communicated things in a different way. For context: I personally would never need nor recommend something like Webflow, but I do understand why it might be the right fit for people in a different position. That is, as long as it doesn't break down like it did. I still can't quite wrap my head around that apparent single point of failure for a company the size of Webflow though.

/anecdote

nettlin 5 hours ago||
They just added more details:

> Indicators of compromise (IOCs)

> Our investigation has revealed that the incident originated from a third-party AI tool whose Google Workspace OAuth app was the subject of a broader compromise, potentially affecting hundreds of its users across many organizations.

> We are publishing the following IOC to support the wider community in the investigation and vetting of potential malicious activity in their environments. We recommend that Google Workspace Administrators and Google Account owners check for usage of this app immediately.

> OAuth App: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com

https://vercel.com/kb/bulletin/vercel-april-2026-security-in...
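For Workspace admins who want to sweep for the IOC programmatically, a minimal sketch — the record shape below is an assumption loosely modeled on what the Admin SDK Directory API's tokens.list returns, and exporting the grants is left out:

```typescript
// Sketch (not an official tool): flag OAuth token grants matching the
// IOC client ID published by Vercel. Field names are assumed to roughly
// follow the Admin SDK Directory API tokens.list response.
const IOC_CLIENT_ID =
  "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com";

interface TokenGrant {
  userKey: string;      // the Workspace user the grant belongs to
  clientId: string;     // OAuth client ID of the granted app
  displayText?: string; // human-readable app name
  scopes?: string[];    // granted OAuth scopes
}

// Return every grant whose client ID matches the published IOC.
function findIocGrants(grants: TokenGrant[]): TokenGrant[] {
  return grants.filter((g) => g.clientId === IOC_CLIENT_ID);
}

// Example with made-up records:
const grants: TokenGrant[] = [
  { userKey: "alice@example.com", clientId: "1234-abc.apps.googleusercontent.com" },
  { userKey: "bob@example.com", clientId: IOC_CLIENT_ID, scopes: ["https://mail.google.com/"] },
];
console.log(findIocGrants(grants).map((g) => g.userKey)); // flags bob@example.com
```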

dev360 4 hours ago|
I wonder which tool that is
_jab 6 hours ago||
> Vercel did not specify which of its systems were compromised

I’m no security engineer, but this is flatly unacceptable, right? This feels like Vercel is covering its own ass in favor of helping its customers understand the impact of this incident.

hyperadvanced 1 hour ago|
I dunno. If I work on GitHub and I say “obscure subsystem X” has been breached, it’s no more useful than the level of specificity that Vercel has already given (“some customer environments have been compromised”)
jtreminio 10 hours ago||
I'm on a macbook pro, Google Chrome 147.0.7727.56.

Clicking the Vercel logo at the top left of the page hard crashes my Chrome app. Like, immediate crash.

What an interesting bug.

embedding-shape 8 hours ago||
Huh, curious: I'm on Arch Linux, and the crash happens in Google Chrome (147.0.7727.101) for me too, but not in Firefox (149.0.2) nor even in Chromium (147.0.7727.101).

I find it funny that we're all reading a story about how Vercel has likely been compromised somehow, someone managed to reproduce a crash on their webpage, and now we're all giving it a try. Surely this could never backfire :)

nozzlegear 8 hours ago|||
Works in Safari too. Sounds like a Google Chrome thing.
sbrother 6 hours ago|||
Following since I just reproduced the crash on my own system (Chrome on Ubuntu)
bel8 4 hours ago|||
Sadly I couldn't make Chrome crash here. Would be fun.

Chrome Version 147.0.7727.101 (Official Build) (64-bit). Windows 11 Pro.

Video: https://imgur.com/a/pq6P4si

I use uBlock Origin Lite. Maybe it blocks some crash-causing script? Edit: still no crash when I disabled uBO.

devld 7 hours ago|||
Reminds me of circa 2021 Chromium bug where opening the dropdown menu on GitHub would crash the entire system on Linux. At some point, it got fixed.
Malipeddi 8 hours ago|||
Same with Chrome on Windows 11. I opened the Vercel home page via the URL once, after which clicking the logo stopped crashing.
plexicle 8 hours ago|||
MBP - M4 Max - Chrome 146.0.7680.178.

No crash.

Now I don't want to click that "Finish update" button.

152334H 7 hours ago||
If the crash does happen to originate from a browser exploit, the absence of a crash on an older version should make you expect to be more at risk, not less.
burnte 9 hours ago|||
I'm running 147.0.7727.57 and this doesn't happen. Macbook Air M5. VERY interesting.
farnulfo 9 hours ago|||
Same hard crash on Chrome Windows 11
itaintmagic 9 hours ago||
Do you have a chrome://crashes/ entry ?
rapfaria 9 hours ago||
it did add an entry - windows 11, chrome
MattIPv4 10 hours ago||
Related: https://news.ycombinator.com/item?id=47824426

https://x.com/theo/status/2045862972342313374

> I have reason to believe this is credible.

https://x.com/theo/status/2045870216555499636

> Env vars marked as sensitive are safe. Ones NOT marked as sensitive should be rolled out of precaution

https://x.com/theo/status/2045871215705747965

> Everything I know about this hack suggests it could happen to any host

https://x.com/DiffeKey/status/2045813085408051670

> Vercel has reportedly been breached by ShinyHunters.

tom1337 5 hours ago||
> Ones NOT marked as sensitive should be rolled out of precaution

If it's not marked as sensitive (because it is not sensitive), there is no reason to roll it. If you must roll a non-sensitive env var, it should've been sensitive in the first place, no?

jackconsidine 4 hours ago||
There's a difference between sensitive, private, and public. If public (i.e. NEXT_PUBLIC_), then yeah, likely no reason to roll. Private keys that aren't explicitly marked sensitive are probably still sensitive. It doesn't seem to be the default to have things marked "sensitive", and I can't tell if that's a new classification or has always been there.

I can imagine reasons why an env variable would be sensitive but still need to be re-read at some point. But overwhelmingly it makes sense for the default to be write-once, never read back (e.g. Fly env values, GCP Secret Manager, etc.)
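The public tier is at least mechanically enforced in Next.js — a rough sketch of the split (variable names are hypothetical; the "sensitive" flag is orthogonal, it only makes the value unreadable in the dashboard after creation):

```typescript
// NEXT_PUBLIC_* values are inlined into the client bundle at build time,
// so they are public by construction — leaking them changes nothing.
// Everything else stays server-side, but server-side is not the same as
// "sensitive": on Vercel, sensitive additionally hides the value after creation.
function classifyEnvVar(name: string): "public" | "server-side" {
  return name.startsWith("NEXT_PUBLIC_") ? "public" : "server-side";
}

console.log(classifyEnvVar("NEXT_PUBLIC_ANALYTICS_ID")); // → public
console.log(classifyEnvVar("STRIPE_SECRET_KEY"));        // → server-side
```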

otterley 9 hours ago||
Who is this “theo” person and why are multiple people quoting him? He seems to have little to say that’s substantive at this point.
gordonhart 9 hours ago|||
He’s a tech influencer, probably getting quoted here because he has the biggest reach of people covering this so far.
Aurornis 5 hours ago||||
He’s a streamer who talks about tech. Previously had a sponsorship relationship with Vercel so is theoretically more well connected than average on the topic. He’s also very divisive because he does a lot of ragebait, grievance reporting, and contrarian takes but famously has blind spots for a few companies and technologies that he’s favored in past videos or been sponsored by. I have friends who watch a lot of his videos but I’ve never been able to get into it.
MikeNotThePope 9 hours ago||||
Theo Browne is a reasonably well known YouTuber & YC founder.

https://t3.gg/

nothinkjustai 8 hours ago||||
He is a paid Vercel shill (literally, he does sponsored content for them on his YouTube channel)
djeastm 3 hours ago|||
Not in a few years.
TiredOfLife 6 hours ago|||
He literally doesn't. https://x.com/theo/status/1832228209573949947
reactordev 8 hours ago|||
YT tech vlogger
nike-17 8 hours ago||
Incidents like this are a good reminder of how concentrated our single points of failure have become in the modern web ecosystem. I appreciate the transparency in their disclosure so far, but it definitely makes you re-evaluate the risk profile of leaning entirely on fully managed PaaS solutions.
Izmaki 5 hours ago||
A "limited subset of customers" could be 99% of them and the phrase would still be technically true.
swingboy 8 hours ago|
Is this one of those situations where _a lot_ of customers are affected and the “subset” is just the bigger ones they can’t afford to lose?
toddmorey 8 hours ago|
Conjecture, but the wording "limited subset" rarely turns out to be good news. Usually a provider will say "less than 1% of our users" or some specific number when they can to ease concerns. My guess is they don't have the visibility or they don't like the number.

I feel for the team; security incidents suck. I know they are working hard, I hope they start to communicate more openly and transparently.

loloquwowndueo 8 hours ago||
“Less than 1% of our users” means 10k affected users if you have 1 million users. 10k victims is a lot! Imagine “air travel is safe, only a subset of 1% of travellers die”
More comments...