
Posted by sangeeth96 12/11/2025

Denial of service and source code exposure in React Server Components(react.dev)
See also: https://blog.cloudflare.com/react2shell-rsc-vulnerabilities-..., https://nextjs.org/blog/security-update-2025-12-11
346 points | 225 comments
simonw 12/11/2025|
React Server Components always felt uncomfortable to me because they make it hard to look at a piece of JavaScript code and derive which parts of it are going to run on the client and which parts will run on the server.

It turns out this introduces another problem too: in order to get that to work you need to implement some kind of DEEP serialization RPC mechanism - which is kind of opaque to the developer and, as we've recently seen, is a risky spot in terms of potential security vulnerabilities.

tom1337 12/11/2025||
I was a fan of NextJS in the pages router era. You knew exactly where the line was between server and client code and it was pretty easy to keep track of that. Then I began a new project and wanted to try out app router and I hated it. So many things (common to me) were just not possible because the code can run on the client and on the server, so Headers might not always be available, and it was just pure confusion about what's running where.
Uehreka 12/11/2025|||
I think we (the Next.js user community) need to organize and either convince Vercel to announce official support of the Pages router forever (or at least indefinitely, and stop posturing it as a deprecated-ish thing), or else fork Next.js and maintain the stable version of it that so many of us enjoyed. Every time Next comes up I see a ton of comments like this, everyone I talk to says this, and I almost never hear anyone say they like the App Router (and this is a pretty contrarian site, so if they existed I’d expect to see them here).
hmcdona1 12/11/2025|||
I would highly recommend just checking out TanStack Router/Start instead. It fills a different niche, with a slightly different approach, that the Next.js app router just hasn't prioritized enabling anymore.

What app router has become has its ideal uses, but if you explicitly preferred the DX of the pages router, you might enjoy TanStack Router/Start even more.

cjonas 12/12/2025|||
Last time I tried tanstack router, I spent half a day trying to get breadcrumbs to work. Nit: I also can't stand their docs site.
rustystump 12/12/2025|||
Tanstack anything has breaking changes constantly and they all exist in perpetual alpha states. It also has jumped on the rsc train with the same complexity pitfalls.

Some libs in the stack are great but they were made pre rsc fad.

hmcdona1 12/16/2025||
If you're using an alpha library then that's on you for not expecting breaking changes. They have plenty of 1.0+ libraries that do not receive any breaking changes between major releases and have remained stable for well over a year.

Also, you're just wrong? You literally cannot serve RSC components _at all_ even in TanStack Start yet. Even when support for them is added, it will be opt-in for only certain kinds of RPC functions, and they will work slightly differently than they do in the Next.js app router (where they are the default everywhere). RPC != RSC.

Plus you can always stick to using TanStack Router exclusively (zero server at all) and you never will even have to worry about anything to do with RSCs...

bryanrasmussen 12/12/2025||||
OK, I am personally surprised that anyone likes the Pages router? Pages routing has all the benefits (simple to get started the first time) and all the downsides (maintainability of larger projects goes to hell) of having your routing determined by where in the file system things are.

I don't care about having things simple to get started the first time, because soon I will have to start things a second or third time. If I have a little bit more complexity to get things started because routing is handled by code and not filesystem placement then I will pretty quickly develop templates to handle this, and in the end it will be easier to get things started the nth time than it is with the simple version.

Do I like the app router? No, Vercel does a crap job on at least two things - routing and building (server codes etc. can be considered as a subset of the routing problem), but saying I dislike app router is praising page router with too faint a damnation.

morsmodr 12/12/2025||||
Remix 2 is beautiful in its abstractions. The thing with the NextJS roadmap is that it is tightly coupled with Vercel's financial incentives. More complexity and more server code running means more $$$ for them. I don't see the community being able to change much, just like how useContextSelector was deprioritized by the React core team.

Align early on with the values of a framework, and take a closer look at the funder's incentives.

berekuk 12/12/2025||||
I've been using React since its initial release; I think both RSC and App Router are great, and things are better than ever.

It's the first stack that allows me to avoid REST or GraphQL endpoints by default, which was the main source of frontend overhead before RSC. Previously I had to make choices on how to organize API, which GraphQL client to choose (and none of them are perfect), how to optimize routes and waterfalls, etc. Now I just write exactly what I mean, with the very minimal set of external helper libs (nuqs and next-safe-action), and the framework matches my mental model of where I want to get very well.

Anti-React and anti-Next.js bias on HN is something that confuses me a lot; for many other topics here I feel pretty aligned with the crowd opinion on things, but not on this.

codemonkey-zeta 12/12/2025|||
Can you describe how rsc allows you to avoid rest endpoints? Are you just putting your rsc server directly on top of your database?
berekuk 12/12/2025||
If I control both the backend and the frontend, yes. Server-only async components on top of layout/page component hierarchy, components -> DTO layer -> Prisma. Similar to this: https://nextjs.org/blog/security-nextjs-server-components-ac...

You still need API routes for stuff like data-heavy async dropdowns, or anything else that's hard to express as a pure URL -> HTML, but it cuts down the number of routes you need by 90% or more.
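A minimal sketch of that DTO-layer idea, with field names that are illustrative rather than taken from the linked post:

```javascript
// Sketch of a DTO layer: server components get only explicitly whitelisted
// fields, so sensitive columns can never leak into the RSC payload.
// All field names here are illustrative.
function toUserDTO(user) {
  return {
    id: user.id,
    name: user.name,
    // deliberately omitted: email, passwordHash, anything else sensitive
  };
}

const row = { id: 1, name: "Ada", email: "ada@example.com", passwordHash: "x" };
const dto = toUserDTO(row);
console.log(dto); // { id: 1, name: 'Ada' }
```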

skydhash 12/12/2025||
You’re just shifting the problem from HTTP to an ad-hoc protocol on top of it.
afavour 12/12/2025||
Yes but they’re also shifting the problem from one they explicitly have to deal with themselves to one the framework handles for them.

Personally I don’t like it but I do understand the appeal.

skydhash 12/12/2025||
Maybe, but you go from one of the most tested protocols, with a lot of tooling, to another without even a specification.
c-hendricks 12/12/2025|||
Some of the anti-next might be from things like solid-start and tanstack-start existing, which can do similar things but without the whole "you've used state without marking as a client component thus I will stop everything" factor of nextjs.

Not to mention the whole middleware and being able to access the incoming request wherever you like.

kyleee 12/12/2025||
And vercel
c-hendricks 12/12/2025||
That's true, can't blame people for having a bad taste from VC-funded companies taking the reins on open source projects.
reissbaker 12/12/2025||||
Personally, I love App Router: it reminds me of the Meta monorepos, where everything related to a certain domain is kept in the same directory. For example, anything related to user login/creation/deletion might be kept in the /app/users directory, etc.

But I really, really do not like React Server Components as they work today. I think it's probably better to strip them out in favor of just a route.ts file in the directory, rather than the actions files with "use server" and all the associated complexity.

Technically, you can build apps like that using App Router by just not having "use server" anywhere! But it's an annoying, sometimes quite dangerous footgun to have all the associated baggage there waiting for an exploit... The underlying code is there even if you aren't using it.

I think my ideal setup would be:

1. route.ts for RESTful routes

2. actions/SOME_FORM_NAME.ts for built-in form parsing + handling. Those files can only expose a POST, and are basically a named route file that has form data parsing. There's no auto-RPC, it's just an HTTP handler that accepts form data at the named path.

3. no other built-in magic.
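A rough sketch of what item 2 could look like; the handler name and shape are hypothetical, not a real Next.js API:

```javascript
// Hypothetical actions/create-user handler: a plain HTTP POST plus form
// parsing, with no auto-RPC serialization layer on top.
function parseForm(body) {
  // URLSearchParams natively handles application/x-www-form-urlencoded bodies
  return Object.fromEntries(new URLSearchParams(body));
}

function handleCreateUser(body) {
  const fields = parseForm(body);
  // ordinary server code from here on; respond with a plain redirect
  return { status: 303, location: `/users/${encodeURIComponent(fields.name)}` };
}

const res = handleCreateUser("name=ada&role=admin");
console.log(res); // { status: 303, location: '/users/ada' }
```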

robertoandred 12/12/2025||
Route files are still RSCs. Actions/“use server” are unrelated.
reissbaker 12/12/2025||
Route files are no different than the pages router that preceded them, except they sit in a different filepath. They're not React components, and definitely not React Server Components. They're not even tsx/jsx files, which should hint at the fact that they're not components! They just declare ordinary HTTP endpoints.

RSCs are React components that call server side code. https://react.dev/reference/rsc/server-components

Actions/"use server" functions are part of RSC: https://react.dev/reference/rsc/server-functions They're the RPC system used by client components to call server functions.

And they're what everyone here is talking about: the vulnerabilities were all in the action/use server codepaths. I suppose the clearest thing I could have said is that I like App Router + route files, but I dislike the magic RPC system: IMO React should simplify to JSON+HTTP and forms+HTTP, rather than a novel RPC system that doesn't interoperate with anything else and is much more difficult to secure.

stack_framer 12/12/2025||||
I find myself just wanting to go all the way back to SPAs—no more server-side rendering at all. The arguments about performance, time to first paint, and whatever else we're supposed to care about just don't seem to matter on any projects I've worked on.

Vercel has become a merchant of complexity, as DHH likes to say.

farley13 12/12/2025|||
I think the context matters here - for SEO-heavy marketing pages I still see Google executing a full browser-based crawl for only a subset of pages. So SSR matters for the remainder.
yawaramin 12/12/2025||||
Htmx does full server rendering and it works beautifully. Everything is RESTful: endpoints are resources, you GET (HTML) and POST (HTTP forms) on well-defined routes, and it works with any backend. Performance, including time to interactive and user-device battery life, is great.
robertoandred 12/12/2025|||
SPAs can still be server rendered.
awestroke 12/12/2025||||
We're migrating away from both Next and Vercel post-haste
Seattle3503 12/12/2025||
What are you migrating to? Vanilla React?
awestroke 12/12/2025||
Vanilla react, ts-rest
Seattle3503 12/13/2025||
Gotcha, that makes sense. Thanks!
spoiler 12/12/2025|||
Probably an unpopular take, but I really think Vercel has lost the plot. I don't know what happened to the company internally. But, it feels like the first few, early, iterations of Next were great, and then it all started progressively turning into slop from a design perspective.

An example of this is filesystem routing. Started off great, but now most Next projects look like the blast radius of a shell script gone terribly wrong.

There's also a(n in)famous GitHub response from one of the maintainers backwards-rationalising tech debt and accidental complexity as necessary. They're clearly smart, but the feeling I got from reading that comment was that they developed Stockholm syndrome towards their own codebase.

dawnerd 12/11/2025|||
I pretty much dumped a side project that was using next over the new router. It's so much more convoluted, way too many limitations. Who even really wants to make database queries in front end code? That's sketchy as heck.
Frotag 12/12/2025||
A lot of functionality is obviously designed for Vercel's hosting platform, with local equivalents as an afterthought.
sangeeth96 12/11/2025|||
This is what I asked my small dev team after I recently joined and saw that we were using Next for the product — do we know how this works? Do we have even a partial mental model of what's happening? The answers were, sadly, pretty obvious. It was hard enough to get people to understand how hooks worked when they were introduced, but the newer Next versions seem even more difficult to grok.

I do respect the things the React + Next team is trying to accomplish, and it does feel like magic when it works. But I find myself caring more and more about predictability when working with a team, and with every major version of Next + React, that aspect seems to be drifting further and further away.

stack_framer 12/12/2025||
I feel the same. In fact, I'll soon be preparing a lunch and learn on trying out Solid.js. I'm hoping to convince the team that we should at least try a different mental model and see if we like it.
thewtf 12/12/2025||
Should just use Vue.
bargainbin 12/12/2025||
Should just use Svelte.
braebo 12/13/2025||
Using React instead of Svelte to build an app is like using a pile of wet noodles instead of a nail gun.
0xblinq 12/12/2025|||
This is why I'm a big advocate of Inertia.js [1]. For me it's the right balance between "serious" batteries-included traditional MVC backends like Laravel, Rails, Adonis, Django, etc., and modern component-based frontend tools like React, Vue, Svelte, etc. Responsibilities are clear, working in it is easy, and every single time I use it, it feels like I'm using the right tool for each task.

I can't recommend it enough. If you never tried/learnt about it, check it out. Unless you're building an offline first app, it's 100% the safest way to go in my opinion for 99.9% of projects.

[1] https://inertiajs.com/

tacker2000 12/12/2025||
I am also in love with Inertia. It lets you use a React frontend and a Laravel backend without a dedicated API or endpoints. It's so much faster to develop and iterate, and you don't need to change your approach or mental model; it just makes total sense.

Instead of creating routes and using fetch(), you just pass the data directly to the client-side React JSX template; Inertia automatically injects the needed data as JSON into the client page.
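The mechanism is roughly this: the server responds with a "page object" naming the component and carrying its props, and the client-side adapter renders that component. A sketch of that shape, simplified from Inertia's protocol (exact fields may differ):

```javascript
// Simplified sketch of Inertia's page object: the server names the component
// and hands over the props; the client adapter renders that component.
function inertiaPage(component, props, url) {
  return { component, props, url, version: "1" };
}

// Roughly what a controller's Inertia::render('Users/Index', [...]) serializes to:
const page = inertiaPage(
  "Users/Index",
  { users: [{ id: 1, name: "Ada" }] },
  "/users"
);
```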

jaredklewis 12/12/2025|||
I do think RSC, and server-side rendering in general, was over-adopted.

Have a Landing/marketing page? Then, yes, by all means render on the server (or better yet statically render to html files) so you squeeze every last millisecond you can out of that FCP. Also easy to see the appeal for ecommerce or social media sites like facebook, medium, and so on. Though these are also use cases that probably benefit the least from React to begin with.

But for the "app" part of most online platforms, it's like, who cares? The time to load the JS bundle is a one-time cost. If loading your SaaS dashboard after first login takes 2 seconds versus 3 seconds, who cares? The amount of complexity added by SSR and RSC is immense; I think the payoff would have to be much more than it is.

sakesun 12/12/2025||
Deeply agree.
procaryote 12/12/2025||
I've been at an embarrassing number of places where turning off server-side rendering improved performance: the number of browsers rendering content scales with the number of users, but the server-side rendering provisioning doesn't.
TZubiri 12/11/2025|||
I had this issue with a React app I inherited, there was a .env with credentials, and I couldn't figure out whether it was being read from the frontend or the backend.

So I ran a static analysis (grep) on the generated APK, and

points light at face dramatically

the credentials were inside the frontend!
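A quick-and-dirty version of that check (the build path and key pattern here are made up; adapt them to your project):

```shell
# Simulate a built frontend bundle with a leaked key (paths are illustrative)
mkdir -p dist
echo 'const client = init("sk_live_abc123");' > dist/main.js

# If a secret-looking pattern matches anything in the shipped bundle, it leaked
grep -rEn 'sk_live_[A-Za-z0-9]+' dist/ && echo "credentials found in frontend!"
```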

jaredwiener 12/12/2025||
Why would you have anything for the backend in an APK? Wouldn't that be an app that, by definition, runs on the client?

Most frameworks also by default block ALL environment variables on the client side unless the name is prefixed with something specific, like NEXT_PUBLIC_*
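A simplified sketch of that filtering rule (Next.js actually inlines NEXT_PUBLIC_* values into the bundle at build time; this only illustrates the prefix convention):

```javascript
// Only env vars with the public prefix ever reach client code; everything
// else stays server-side.
function clientSafeEnv(env, prefix = "NEXT_PUBLIC_") {
  return Object.fromEntries(
    Object.entries(env).filter(([key]) => key.startsWith(prefix))
  );
}

const exposed = clientSafeEnv({
  NEXT_PUBLIC_API_URL: "https://api.example.com",
  DATABASE_PASSWORD: "hunter2", // never shipped to the browser
});
console.log(exposed); // { NEXT_PUBLIC_API_URL: 'https://api.example.com' }
```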

mcpeepants 12/12/2025|||
> Most frameworks also by default block ALL environment variables on the client side

I’ve been out of full stack dev for ~5 years now, and this statement is breaking my brain

TZubiri 12/12/2025||||
Why would you have anything for the backend in a browser app? Wouldn't that by definition run on the client?

These kinds of Node + mobile apps typically use an embedded browser like Electron or a built-in webview; it's not much different from a web app.

joshdavham 12/12/2025|||
I'm no JavaScript framework expert, but how vulnerable do people estimate other frameworks like Angular, SvelteKit and Nuxt to be to this sort of thing? Is React more predisposed to risk? Or is it just that there are more eyes on React due to its popularity?
rk06 12/12/2025||
nuxt, sveltekit etc don't have RSC equivalent. and won't have in future either. Vue has discussed it and explicitly rejected it. also RSC was proposed to sveltekit, they also rejected it citing public endpoint should not be hidden

they may get other vulnemerelities as they are also in JS, but RSC class vulelnebereleties won't be there

rk06 12/12/2025||
please forgive typos in above comment. i can no longer edit them
joshdavham 12/12/2025||
Haha don’t sweat it dude. Happens to literally everyone on HN.
ashishb 12/11/2025|||
This happens in Next.js as well https://github.com/vercel/next.js/discussions/11106
lmm 12/12/2025|||
Yeah. Being able to write code that's polymorphic between server and client is great, but it needs to be explicit and checked rather than invisible and magic. I see an analogy with e.g. code that can operate on many different types: it's a great feature, but really you want a generics feature where you can control which types which pieces of code operate on, not a completely untyped language.
danabramov 12/12/2025||
It is explicit and checked.

You have two poison pills (`import "server-only"` and `import "client-only"`) that cause a build error when transitively imported from the wrong environment. This lets you, for example, constrain that a database layer or an env file can never make it into the client bundle (or that some logic that requires client state can never be accidentally used from the stateless request/response cycle). You also have two directives that explicitly expose entry points between the two worlds.

The vulnerabilities in question aren't about wrong code/data getting pulled into a wrong environment. They're about weaknesses in the (de)serialization protocol which relied on the dynamic nature of JavaScript (shared prototypes being writable, functions having a string constructor, etc) to trick the server into executing code or looping. These are bad, yes, but they're not due to the client/server split being implicit. They're in the space of (de)serialization.
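The weakness class described here can be illustrated with the classic prototype-pollution pattern. This is a generic sketch of the underlying idea, not the actual React/Next.js payload:

```javascript
// Generic illustration: a naive deep merge over untrusted JSON lets a
// "__proto__" key mutate Object.prototype for every object in the process.
function naiveMerge(target, src) {
  for (const key of Object.keys(src)) {
    if (typeof src[key] === "object" && src[key] !== null) {
      // target["__proto__"] resolves to Object.prototype here...
      target[key] = naiveMerge(target[key] ?? {}, src[key]);
    } else {
      target[key] = src[key]; // ...so this write pollutes every object
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an ordinary own property on the payload
const untrusted = JSON.parse('{"__proto__": {"polluted": true}}');
naiveMerge({}, untrusted);

// Now every freshly created object appears to carry the attacker's property
console.log({}.polluted); // true
```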

lmm 12/15/2025||
> You have two poison pills (`import "server-only"` and `import "client-only"`) that cause a build error when transitively imported from the wrong environment.

Sure, but even though it's "build time" it feels more like runtime (indeed I think the only reason it works that way is your code is being run at build time). I should be able to look at a given piece of code and know locally whether it's client-side, server-side, or polymorphic, rather than having to trace through all its transitive imports.

dirkc 12/12/2025|||
I 100% agree. I didn't even bother to think about the security implications - why worry about security implications if the whole things seems like a bad idea?

In retrospect I should have given it more thought since React Server Components are punted in many places!

tonyhart7 12/12/2025||
Turns out separation of concerns has been a valid approach for decades.

The React team reinvents the wheel again and again, and now we're back to Laravel.

WatchDog 12/12/2025||
When I looked into RSC last week, I was struck by how complex it was, and how little documentation there seems to be on it.

In fairness, React presents it as an "experimental" library, although that didn't stop Next.js from widely deploying it.

I suspect there will be many more security issues found in it over the next few weeks.

Nextjs ups the complexity orders of magnitude, I couldn't even figure out how to set any breakpoints on the RSC code within next.

Next vendors most of their dependencies, and they have an enormously complex build system.

The benefits that next and RSC offer, really don't seem to be worth the cost.

mexicocitinluez 12/12/2025||
> and how little documentation there seems to be on it

DISCLAIMER: After years of using Angular/Ember/Jquery/VanillaJs, jumping into React's functional components made me enjoy building front-ends again (and still remains that way to this very day). That being said:

This has been maybe the biggest issue in React land for the last 5 years at least. And not just for RSC, but across the board.

It took them forever to put out clear guidance on how to start a new React project. They STILL refuse to even acknowledge CRA exist(s/ed). The maintainers have actively fought with library makers on this exact point, over and over and over again.

The new useEffect docs are great, but years late. It'll take another 3-4 years before the code LLMs spit out even resembles that guidance because of it.

And like sure, in 2020 maybe it didn't make sense to spell out the internals of RSC because it was still in active development. But it's 2025. And people are using it for real things. Either you want people to be successful or you want to put out shiny new toys. Maybe Guillermo needs to stop palling around with war criminals and actually build some shit.

It might be one of the most absurd things about React's team: their constitutional refusal to provide good docs until they're backed into a corner.

firtoz 12/12/2025||
People did complain about next exposing "react, not ready for production" things as "the latest and greatest thing from nextjs" for quite a while now

I had moved off nextjs for reasons like these; the mental load was getting too heavy for not much benefit.

chuckadams 12/11/2025||
I remember when the point of an SPA was to not have all these elaborate conversations with the server. Just "here's the whole app, now only ask me for raw data."
_jzlw 12/12/2025||
It's funny (in a "wtf" sort of way) how in C# right now, the new hotness Microsoft is pushing is Blazor Server, which is basically old-school .aspx Web Forms but with websockets instead of full page reloads.

Every action, every button click, basically every input is sent to the server, and the changed dom is sent back to the client. And we're all just supposed to act like this isn't absolutely insane.

oefrha 12/12/2025|||
Yes, I say this every time this topic comes up: it took many years to finally have mainstream adoption of client-side interactivity so that things are finally mostly usable on high latency/lossy connections, but now people who’re always on 10ms connections are trying to snatch that away so that entirely local interactions like expanding/collapsing some panels are fucked up the moment a WebSocket is disconnected. Plus nice and simple stateless servers now need to hold all those long-lived connections. WTF. (Before you tell me about Alpine.js, have you actually tried mutating state on both client and server? I have with Phoenix and it sucks.)
seer 12/12/2025||||
Isn’t that what Phoenix (Elixir) is? All server side, small js lib for partial loads, each individual website user gets their own thread on the backend with its own state and everything is tied together with websockets.

Basically you write only backend code, with all the tools available there, and a thin library makes sure to stitch the user input to your backend functions and the output to the frontend code.

Honestly it is kinda nice.

dmix 12/12/2025|||
Also what https://anycable.io/ does in Rails (with a server written in Go)

WebSockets + thin JS are better for real-time stuff than for standard CRUD forms. They fill in for a ton of high-interactivity use cases where people often reach for React/Vue (and then end up pushing absolutely everything needlessly into JS), while keeping the most important logic on the server with far less duplication.

For simple forms personally I find the server-by-default solution of https://turbo.hotwired.dev/ to be far better where the server just sends HTML over the wire and a JS library morph-replaces a subset of the DOM, instead of doing full page reloads (ie, clicking edit to in-place change a small form, instead of redirecting to one big form).

_jzlw 12/12/2025||||
Idk about Phoenix, but having tried Blazor, the DX is really nice. It's just a terrible technical solution, and network latency / spotty wifi makes the page feel laggy. Not to mention it eats up server resources to do what could be done on the client instead with way fewer moving parts. Really the only advantage is you don't have to write JS.
Ndymium 12/12/2025||||
It's basically what Phoenix LiveView specifically is. That's only one way to do it, and Phoenix is completely capable of traditional server rendering and SPA style development as well.

LiveView does provide the tools to simulate latency and move some interactions to be purely client side, but it's the developers' responsibility to take advantage of those and we know how that usually goes...

brendanmc6 12/12/2025|||
> Honestly it is kinda nice.

It's extremely nice! Coming from the React and Next.js world there is very little that I miss. I prefer to obsess over tests, business logic, scale and maintainability, but the price I pay is that I am no longer able to obsess over frontend micro-interactions.

Not the right platform for every product obviously, but I am starting to believe it is a very good choice for most.

array_key_first 12/12/2025||||
This is how client-server applications have been done for decades, it's basically only the browser that does the whole "big ole requests" thing.

The problem with API + frontend is:

1. You have two applications you have to ensure are always in sync and consistent.

2. Code is duplicated.

3. Velocity decreases because in order to implement almost anything, you need buy-in from the backend AND frontend team(s).

The idea of Blazor Server or Phoenix live view is "the server runs the show". There's now one source of truth, and you don't have to spend time making sure it's consistent.

I would say, really, 80% of bugs in web applications come from the client and server being out of sync. Even if you think about vulnerability like unauthorized access, it's usually just this. If you can eliminate those 80% or mitigate them, then that's huge.

Oh, and thats not even touching on the performance implications. APIs can be performant, but they usually aren't. Usually adding or editing an API is treated as such a high risk activity that people just don't do it - so instead they contort, like, 10 API calls together and discard 99% of the data to get the thing they want on the frontend.

christophilus 12/12/2025|||
No, it's not. I've built native Windows client-server applications, and many old-school web applications. I never once sent data to the server on every click, keydown, keyup, etc. That's the sort of thing that happens with a naive "livewire-like" approach. Most of the new tools do ship a little JavaScript, and make it slightly less chatty, but it's still not a great way to do it.

A web application should either be server-generated HTML with a little JS sprinkled in, or a client-side application with traditional RPC-like calls when necessary.

Blazor (and old-school .NET Web Forms) do a lot more back-and-forth than either of those two approaches.

array_key_first 12/13/2025||
Yes, as I've stated, the big stuff is new Web stuff.

When I say traditional client-server applications, I mean the type of stuff like X or IPC - the stuff before the Web.

> A web application should either be server-generated HTML with a little JS sprinkled in, or a client-side application with traditional RPC-like calls when necessary.

There's really no reason it "should" be either one or the other because BOTH have huge drawbacks.

The problem with the first approach (SSR with JS sprinkled) is that particular interactions become very, very hard. Think, for example, a node editor. Why would we have a node editor? We're actually doing this at work right now, building out a node editor for report writing. We're 95% SSR.

Turns out, it's super duper hard to do with this approach, because it's so heavily client-side interactive that you need lots and lots of sync points, and ultimately the SERVER will be the one generating the report.

But actually, the client-side approach isn't very good either. Okay, maybe we just serialize the entire node graph and sent it over the pipe once, and then save it now and again. But what if we want to preview what the output is going to look like in real-time? Now this is really, really hard - because we need to incrementally serialize the node graph and send it to the server, generate a bit of report, and get it back, OR we just redo the report generation on the front-end with some front-loaded data - in which case our "preview" isn't a preview at all, it's a recreation.

The solution here is, actually, a chatty protocol. This is the type of thing that's super common and trivial in desktop applications - it's what gives them superpowers. But it's so rare to see on the Web.

fatbird 12/12/2025||||
> You have two applications you have to ensure are always in sync and consistent.

No, the point of the API is to loosely couple the frontend and backend with a contract. The frontend doesn't need to model the backend, and the backend doesn't need to know what's happening on the frontend, it just needs to respect the API output. Changes/additions in the API are handled by API versioning, allowing overlap between old and new.

> Code is duplicated.

Not if the frontend isn't trying to model the internals of the backend.

> Velocity decreases because in order to implement almost anything, you need buy-in from the backend AND frontend team(s).

Velocity increases because frontend works to a stable API, and backend doesn't need to co-ordinate changes that don't affect the API output. Also, changes involving both don't require simultaneous co-ordinated release: once the PM has approved a change, the backend implements, releases non-breaking API changes, and then frontend goes on its way.

array_key_first 12/13/2025||
> No, the point of the API is to loosely couple the frontend and backend with a contract. The frontend doesn't need to model the backend, and the backend doesn't need to know what's happening on the frontend, it just needs to respect the API output. Changes/additions in the API are handled by API versioning, allowing overlap between old and new.

This is the idea, an idea which can never be fully realized.

The backend MUST understand what the frontend sees to some degree, because of efficiency, performance, and user-experience.

If we build the perfect RESTful API, where each object is an endpoint and their relationships are modeled by URLs, we have almost realized this vision. But it cost us our server catching on fire. It thrashed our user experience. Our application sucks ass, it's almost unusable. Things show up on the front-end but they're ghosts, everything takes forever to load, every button is a liar, and the quality of our application has reached new depths of hell.

And, we haven't realized the vision even. What about Authentication? User access? Routing?

> Not if the frontend isn't trying to model the internals of the backend.

The frontend does not get a choice, because the model is the model. When you go against the grain of the model and you say "everything is abstract", then you open yourself up to the worst bugs imaginable.

No - things are linked, things are coupled. When we just pretend they are not, we haven't done anything but obscure the points where failure can happen.

> Velocity increases because frontend works to a stable API, and backend doesn't need to co-ordinate changes that don't affect the API output. Also, changes involving both don't require simultaneous co-ordinated release: once the PM has approved a change, the backend implements, releases non-breaking API changes, and then frontend goes on its way.

No, this is a stark decrease in velocity.

When I need to display a new form that, say, coordinates 10 database tables in a complex way, I can just do that if the application is SSR or Livewire-type. I can just do that. I don't need the backend team to implement it in 3 months and then I make the form. I also don't need to wrangle together 15+ APIs and then recreate a database engine in JS to do it.

Realistically, those are your two options. Either you have a performant backend API interface full of one-off implementations, what we might consider spaghetti, or you have a "clean" RESTful API that falls apart as soon as you even try to go against the grain of the data model.

There are, of course, in-betweens. RPC is a great example. We don't model data, we model operations. Maybe we have a "generateForm" method on the backend and the frontend just uses this. You might notice this looks a lot like SSR with extra steps...

But this all assumes the form is generated and then done. What if the data is changing? Maybe it's not a form, maybe it's a node editor? SSR will fall apart here, and so will the clean-code frontend-backend. It will be so hellish, so evil, so convoluted.

Bearing in mind, this is something truly trivial for desktop applications to do. The models of modern web apps just cannot do this in a scalable, or reliable, way. But decades old technology like COM, dbus, and X can. We need to look at what the difference is and decide how we can utilize that.

chuckadams 12/12/2025||||
The problem with all-backend is that to change the order of a couple buttons, you now need buy-in from the backend team. There's definitely a happy medium or several between these extremes: one of them is that you have full-stack devs and don't rigidly separate teams by the implementation technology. Some devs will of course specialize in one area more than others, but that's the point of having a diverse team. There's no good reason that communicating over http has to come with an automatic political boundary.
array_key_first 12/13/2025||
Communicating over HTTP comes with pretty much as many physical boundaries as possible. The main problem, and power, of APIs is their inflexibility. By their design, and even the design of HTTP itself, they are difficult to change over time. They're interfaces, with defined inputs and outputs.

Say I want to draw a box which has many checkboxes - like a multi-select. A very, very simple, but powerful, widget. In most Web applications, this widget is incredibly hard to develop.

Why is that? Well first we need to get the data for the box, and ideally just this particular page of the box, if it's paginated. So we have to use an API. But the API is going to come with so much baggage - we only need identifiers really, since we're just checking a checkbox. But what API endpoint is going to return a list of just identifiers? Maybe some RESTful APIs, but not most.

Okay okay, so we get a bunch of data and then throw away most of it. Whatever. But oh no - we don't want this multi-select to be split by logical objects, no, we have a different categorization criteria. So then we rope in another API, or maybe a few more, and we then group all the stuff together and try to splice it up ourselves. This is a lot of code, yes, and horribly frail. The realization strikes that we're essentially doing SQL JOIN and GROUP BY in JS.

Okay, so we'll build an API. Oh no you won't. You can't just build an API, it's an interface. What, you're going to write an API for your one-off multi-select? But what if someone else needs it? What about documentation? Versioning? I mean, is this even RESTful? Sure doesn't look like it. This is spaghetti code.

Sigh. Okay, just use the 5 API endpoints and recreate a small database engine on the frontend, who cares.

Or, alternative: you just draw the multi-select. When you need to lazily update it, you just update it. Like you were writing a Qt application and not a web application. Layers and layers of complexity and friction just disappear.
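To make the pattern concrete, the "recreate a small database engine on the frontend" approach described above looks something like this. A hedged sketch: the endpoints, field names, and grouping criteria are all invented for illustration.

```javascript
// Hypothetical sketch of doing SQL JOIN and GROUP BY in JS just to render
// one multi-select. Endpoint URLs and response shapes are invented.
async function buildMultiSelectOptions(
  fetchJson = (url) => fetch(url).then((r) => r.json())
) {
  // Each endpoint returns far more data than the widget needs.
  const [items, categories, memberships] = await Promise.all([
    fetchJson("/api/items"),       // [{ id, name, ...many unused fields }]
    fetchJson("/api/categories"),  // [{ id, label }]
    fetchJson("/api/memberships"), // [{ itemId, categoryId }]
  ]);

  // JOIN by hand: itemId -> categoryId -> label.
  const categoryById = new Map(categories.map((c) => [c.id, c.label]));
  const categoryByItem = new Map(
    memberships.map((m) => [m.itemId, categoryById.get(m.categoryId)])
  );

  // GROUP BY category label, keeping only the fields the widget uses.
  const groups = new Map();
  for (const item of items) {
    const label = categoryByItem.get(item.id) ?? "Uncategorized";
    if (!groups.has(label)) groups.set(label, []);
    groups.get(label).push({ id: item.id, name: item.name });
  }
  return groups;
}
```

Three network round trips, most of the payload discarded, and a hand-rolled join — versus one templated query on the server.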

chuckadams 12/13/2025||
There's a lot of different decisions to make with every individual widget, sure, but I was talking about political boundaries, not physical ones. My point is that it's possible for a single team to make decisions across the stack like whether it's primarily server-side, client-side, or some mashup, and that stuff like l10n and a11y should be the things that get coordinated and worked out across teams. A lot of that starts with keeping hardcore True Believers off the team.
procaryote 12/12/2025|||
Stop having backend and frontend teams. Start having crossfunctional teams. Problem solved.
c0balt 12/12/2025||||
Hotwire et al are also doing part of this. It isn't a new concept but it seems to come and go in terms of popularity
JeremyNT 12/12/2025||||
Well, maybe it isn't so insane?

Server side rendering has been with us since the beginning, and it still works great.

Client side page manipulation has its place in the world, but there's nothing wrong with the server sending page fragments, especially when you can work with a nice tech stack on the backend to generate it.

qingcharles 12/12/2025||
Sure. The problem with some frameworks is that they attached server events to things that should be handled on the front-end without a roundtrip.

For instance, I've seen pages with a server-linked HTML button that would open a details panel. That button should open the panel without resorting to sending the event and waiting for a response from the server, unless there is a very, very specific reason for it.
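A minimal sketch of the distinction: a details-panel toggle is pure UI state, so it can be handled entirely on the client with no roundtrip. No framework assumed; the wiring below is illustrative.

```javascript
// Pure client-side toggle: no server event, no network wait.
function wireDetailsToggle(button, panel) {
  button.addEventListener("click", () => {
    const wasHidden = panel.hidden;
    panel.hidden = !wasHidden;
    // Keep assistive tech in sync with the new state.
    button.setAttribute("aria-expanded", String(wasHidden));
  });
}
```

Reserving the roundtrip for actions that genuinely need fresh server data keeps the UI responsive even on slow connections.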

McGlockenshire 12/12/2025||||
> And we're all just supposed to act like this isn't absolutely insane.

This is insane to you only if you didn't experience the emergence of this technique 20-25 years ago. Almost all server-side templates were already partials of some sort in almost all the server-side environments, so why not just send the filled in partial?

Business logic belongs on the server, not the client. Never the client. The instant you start having to make the client smart enough to think about business logic, you are doomed.

crubier 12/12/2025||
> The instant you start having to make the client smart enough to think about business logic, you are doomed.

Could you explain more here? What do you consider "business logic"? Context: I have a client app to fly drones using gamepad, mouse and keyboard, with video feedback, maps, drone tasking, etc.

CharlieDigital 12/12/2025||||
It's kinda nice.

Main downside is the hot reload is not nearly as nice as TS.

But the coding experience with a C# BE/stack is really nice for admin/internal tools.

tracker1 12/12/2025||||
Yeah, I kind of hate it... Blazor has a massive payload and/or you're waiting seconds to see a response to a click event. I'm not fond of RSC either... and I say this as someone absolutely and more than happy with React, Redux and MUI for a long while at this point.

I've been loosely following the Rust equivalents (Leptos, Yew, Dioxus) for a while in the hopes that one of them would see a component library near the level of Mantine or MUI (Leptos + Thaw is pretty close). It feels a little safer in the longer term than Blazor IMO and again, RSC for React feels icky at best.

vbezhenar 12/12/2025|||
I saw this kind of interactivity in the Apache Wicket Java framework. It's a very interesting approach.
pjmlp 12/11/2025|||
Until they discovered why so many of us have kept with server side rendering, and only as much JS as needed.

Then they rediscovered PHP, Rails, Java EE/Spring, ASP.NET, and rebooted SPAs into fullstack frameworks.

sangeeth96 12/11/2025|||
> Then they rediscovered PHP, Rails, Java EE/Spring, ASP.NET, and rebooted SPAs into fullstack frameworks.

I can understand the dislike for Next but this is such a poor comparison. If any of those frameworks at any point did half the things React + Next-like frameworks accomplished and the apps/experiences we got since then, we wouldn't be having this discussion.

acdha 12/11/2025|||
> If any of those frameworks at any point did half the things React + Next-like frameworks accomplished and the apps/experiences we got since then, we wouldn't be having this discussion.

This is interesting because every Next/React project I see has a slower velocity than the median Rails/Django product 15 years ago. They’re just as busy, but pushing so much complexity around means any productivity savings is cancelled out by maintenance and how much harder state management and security are. Theoretically performance is the justification for this but the multi-second page load times are unconvincing.

From my perspective, it really supports the criticism about culture in our field: none of this is magic, we can measure things like page-weight, response times, or time to complete common tasks (either for developers or our users), but so much of it is driven by what’s in vogue now rather than data.

ricardobeat 12/12/2025||
+1 to this. I seriously believe frontend was more productive in the 2010-2015 era than now, despite the flaws in legacy tech. Projects today have longer timelines, are more complex, slower, harder to deploy, and a maintenance nightmare.
c-hendricks 12/12/2025|||
I'm not so sure those woes are unique to frontend development.
chuckadams 12/12/2025|||
I remember maintaining webpack-based projects, and those were not exactly a model of simplicity. Nor was managing a fleet of pet dev instances with Puppet.
acdha 12/12/2025|||
Puppet isn't a front end problem, but I do agree on Webpack - which is one reason it wasn't super common. A lot of sites either didn't try to bundle things or had simple Make-level workflows, and at the time I noted that these often performed similarly. People did, and still do, want to believe there's a magic go-faster switch for their front end which obviates the need to reconsider their architectural choices, but anyone who actually measured it knew that bundlers just didn't deliver savings on that scale.
chuckadams 12/12/2025||
I do kind of miss gulp and wish there was a modern TS version. Vite is mighty powerful, but pretty opaque.
ricardobeat 12/12/2025|||
Webpack came out in late 2012 and took a few years to take over, thankfully. I was lucky to avoid it at dayjob™ until around 2019.
seer 12/12/2025||||
I still remember the joy of using the flagship rails application - basecamp. Minimal JS, at least compared to now, mostly backend rendering, everything felt really fast and magical to use.

Now they accomplished this by imposing a lot of constraints on what you could do, but honestly it was solid UX at the time so it was fine.

Like the things you could do were just sane things to do in the first place, thus it felt quite ok as a dev.

React apps, _especially_ ones hosted on Next.js rarely feel as snappy, and that is with the benefit of 15 years of engineering and a few order of magnitude perf improvement to most of the tech pieces of the stack.

It’s just wild to me that we had faster web apps, with better organization, better dev ex, faster to build and easier to maintain.

The only “wins” I can see for a nextjs project is flexibility, animation (though this is also debatable), and maybe deployment cost, but again I’m comparing to deploying rails 15 years ago, things have improved there as well I’m sure.

I know react can accomplish _a ton_ more on the front end but few projects actually need that power.

tacker2000 12/11/2025||||
How does Next accomplish more than a PHP/Ruby/whatever backend with a React frontend?

If anything the latter is much easier to maintain and to develop for.

Atotalnoob 12/11/2025||||
Blazor? Razor pages?
brazukadev 12/11/2025||||
We are having this discussion because at some point, the people behind React decided it should be profitable and made it become the drug gateway for NextJS/Vercel
pjmlp 12/12/2025||
Worse, because Vercel then started its marketing wave, thus many SaaS products only support React/Next.js as extension points.

Using anything else requires yak shaving instead of coding the application code.

That is the only reason I get to use them.

pjmlp 12/12/2025|||
They weren't the new shiny to pump up the CV and fill the GitHub repo for HR applications.
whizzter 12/11/2025|||
I sometimes feel like I go on and on about this... but there is a difference between applications and pages (even if blurry at times), and Next is the result of people building pages adopting React, which was designed for applications, when they shouldn't have.
tshaddox 12/11/2025|||
That was indeed one of the main points of SPAs, but React Server Components are generally not used for pure SPAs.
reactordev 12/11/2025|||
Correct, their main purpose is ecosystem lock-in. Because why return JSON when you can return HTML? Why even build a SPA when the old school model of server-side includes and PHP worked just fine? TS with koa and htmx if you must, but server-side React components are kind of a waste of time. Give me one example where server-side React components are the answer over a fetch and JSON, or just fetching an HTML page?
nawgz 12/11/2025|||
The only example that has any traction in my view are web-shops, which claim that time-to-render and time-to-interactivity are critical for customer retention.

Surely there are not so many people building e-commerce sites that server components should have ever become so popular.

skydhash 12/12/2025||
The thing is, time to render and interactivity depend much more on the database queries and the user's internet connection than anything else. Instead of a spinner or a progress bar in the toolbar of the browser, now I get skeleton loaders and half a GB of memory for one tab.
nawgz 12/12/2025||
Not to defend the practice, I’ve never partaken, but I think there’s some legit timing arguments that a server renderer can integrate more requests faster thanks to being collocated with services and dbs.
reactordev 12/12/2025||
which brings me back to my main point of the web 1.0 architecture. Serving pages from the server-side, where the data lives, and we've come full circle.
tshaddox 12/11/2025|||
I like RSCs and mostly dislike SPAs, but I also understand your sentiment.
robertoandred 12/12/2025|||
Sure they are. Next sites are SPAs.
rustystump 12/11/2025|||
It also decoupled frontend and backend. You could use the same APIs for, say, mobile, desktop and web. Teams didn't have to cross streams, allowing for deeper expertise on each side.

Now they are shoving server rendering into react native…

epolanski 12/12/2025|||
Yeah, but then people started building bloated static websites with those libraries instead of using a saner template engine + javascript approach which is fast, easy to cache, debug, and has stellar performance and SEO.

Little it helped that even React developers were saying that it was the wrong tool for plenty use cases.

Worst of all?

The entire nuance of choosing the right tool for the job has been long lost on most developers. Even the comments I read on HN make me question where the engineering part of the job starts.

CodingJeebus 12/12/2025||
It also doesn't help that non-technical stakeholders sometimes want a say in a tech stack conversation as well. I've been at more than one company where either the product team or the acquiring firm wanted us to migrate away from a tried and true Rails setup to a fullstack JS platform simply because they either wanted the UI development flexibility or to not have to hire Ruby devs.

Non-technical MBAs seem to have a hard time grasping that a JS-only platform is not a panacea and comes with serious tradeoffs.

hedayet 12/11/2025|||
I'd be interested in adopting a sole-purpose framework like that.
moomoo11 12/11/2025||
I think people just never understood SPA.

Like with almost everything people then shit on something they don’t understand.

tagraves 12/11/2025||
It's really concerning that the biggest, most eye-grabbing part of this posting is the note with the following: "It’s common for critical CVEs to uncover follow‑up vulnerabilities."

Trying to justify the CVE before fully explaining the scope of the CVE, who is affected, or how to mitigate it -- yikes.

treesknees 12/11/2025||
What’s concerning about it? The first thing I thought when I read the headline was “wow, another react CVE?” It’s not a justification, it’s an explanation to the most obvious immediate question.
vcarl 12/11/2025|||
It's definitely a defensive statement, proactively covering the situation as "normal". Normal it may be, but emphasizing that in the limited space of a tweet thread definitely indicates where their mind is on this, I'd think.
treesknees 12/11/2025||
Are you reading a different link? This statement is on a React blog post, not a Twitter thread.
tom1337 12/11/2025||||
But it is another React CVE. Doesn't really matter why it was uncovered, it's bad that it existed either way
brazukadev 12/11/2025|||
insecure software will have multiple CVEs, not necessarily related to each other. Those 3 are probably not the only ones.
rickhanlonii 12/11/2025|||
Thanks for the feedback, I adjusted it here so the first note is related to the impacted versions:

https://github.com/reactjs/react.dev/pull/8195

tagraves 12/11/2025||
I appreciate the follow up! I think it looks great now and doesn’t read as defensively anymore!
rickhanlonii 12/11/2025||
Yeah agreed, thanks again for the feedback. The priority here is clear disclosure and upgrade steps.
haileys 12/11/2025|||
Perception management

https://en.wikipedia.org/wiki/Perception_management

samdoesnothing 12/11/2025|||
Also kind of funny that they're comparing it to Log4Shell. Maybe not the best sort of company to be keeping...
everfrustrated 12/12/2025||
React is the new JavaBean
zwnow 12/11/2025|||
Welcome to the React, Next, Vercel ecosystem. Our tech may be shite but we look fancy.
brazukadev 12/11/2025||
The Vercel CEO post congratulating his team for how they managed the vulnerability was funny
hitekker 12/11/2025|||
There are a lot of careers riding on the optics here.
IceDane 12/12/2025||
No, there aren't. The react team isn't going to axe half the team because there's a high severity CVE.
0xblinq 12/12/2025|||
I think the same. To me it looks like a Vercel marketing employee wrote that.
TZubiri 12/11/2025||
Very standard in security, announcements always always always try to downplay their severity.
rickhanlonii 12/12/2025||
fwiw, the goal here wasn't to downplay the severity, but to explain the context to an audience who might not be familiar with CVEs and what's considered normal. I moved the note down so the more important information like severity, impacted versions, and upgrade instructions are first.
isodev 12/12/2025|||
> an audience who might not be familiar with CVEs

If there are so many React developers out there using server side components while not familiar with the concept of CVEs, we’re in very serious trouble.

TZubiri 12/12/2025|||
It's ok, you gotta play the game. I'm more concerned about the fact that the downtime issue ranks higher than the security issue. But I'm assuming it relates to the specifics of the issue rather than reflecting on the priorities of the project as a whole.
hbbio 12/12/2025||
We pioneered a lot of things with Opa, 15 years ago now. Opa featured automatic code "splitting" between client and server, introduced the JSX syntax although it wasn't called that way (Jordan at Facebook used Opa before creating React, but the discussions around the syntax happened at W3C notably with another Facebook employee, Tobie).

Since the Opa compiler was implemented in OCaml (we were looking more like Svelte than React as a pure lib), we performed a lot of static analysis to prevent the wide range of attacks on frontend code (XSS, CSRF, etc.) and backend code. The Opa compiler became a huge beast in part because of that.

In retrospect, better separation of concerns and completely foregoing the idea of automatic code splitting (what React Server Components is) or even having a single app semantics is probably better for the near future. Our vision (way too early) was that we could design a simple language for the semantics and a perfect advanced compiler that would magically output both the client and the server from that specification. Maybe it's still doable with deterministic methods. Maybe LLMs will get to automatic code generation of all parts in one shot before.

danabramov 12/12/2025||
Note that the exploits so far haven’t had much to do with “server code/data getting bundled into the client code” or similar which you’re alluding to. Also, RSC does not try to “guess” how to split code — it is deterministic and always user-controlled.

The vulnerabilities so far were weaknesses in the (de)serializer stemming from the dynamism of JavaScript: the ability to hijack the root object prototype, to call toString on functions to get their code, to override a Promise's then implementation, and to construct a function from a string. The patches harden the (de)serializer against those dynamic pieces of JavaScript to close those gaps. This is similar to mistakes in parsers that are fooled by properties called hasOwnProperty/constructor/etc.

The serialization format is essentially “JSON with Promises and code chunk references”, and it seems there were enough places where the dynamic nature of JS could leak that they needed to be plugged. Hopefully with more scrutiny on the protocol, these will be well-understood by the team. The surface area there isn’t growing much anymore (it’s close to being feature-complete), and the (de)serializers themselves are roughly 5 kloc each.
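To illustrate the class of parser mistake being described — this is a generic sketch, not React's actual (de)serializer code — here is how a naive recursive merge of untrusted JSON can be fooled by a key named "__proto__", versus a version that filters the keys with special meaning in JavaScript:

```javascript
// Illustrative only: a naive deep merge of untrusted JSON. Keys named
// "__proto__", "constructor", or "prototype" have special meaning, so
// recursing into target["__proto__"] walks into Object.prototype and
// lets the payload pollute every object in the runtime.
function naiveDeepMerge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (typeof value === "object" && value !== null) {
      if (typeof target[key] !== "object" || target[key] === null) {
        target[key] = {};
      }
      naiveDeepMerge(target[key], value);
    } else {
      target[key] = value;
    }
  }
  return target;
}

// Guarded version: skip the dangerous keys before merging.
const DANGEROUS_KEYS = new Set(["__proto__", "constructor", "prototype"]);

function safeDeepMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (DANGEROUS_KEYS.has(key)) continue;
    const value = source[key];
    if (typeof value === "object" && value !== null) {
      if (typeof target[key] !== "object" || target[key] === null) {
        target[key] = {};
      }
      safeDeepMerge(target[key], value);
    } else {
      target[key] = value;
    }
  }
  return target;
}
```

The same shape of mistake — trusting attacker-controlled keys in a dynamic language — underlies prototype-pollution bugs across the JS ecosystem, which is why deserializers of untrusted input need this kind of explicit denylist or null-prototype objects.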

The problem you had in Opa is solved in RSC with build-time assertions (import "server-only" is the server environment poison pill, and import "client-only" is the client environment poison pill). These poison pills work transitively up the module import stack and are statically enforced and prevent code (eg DB code, secrets, etc) from being pulled into the wrong environment. Of course this doesn’t prevent bugs in the (de)serializer but it’s why the overall approach is sound, in the absence of (de)serialization vulnerabilities.
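The poison-pill convention can be sketched as a module fragment like the following (not runnable standalone; the file layout and `db` helper are illustrative, while `server-only` is the actual package used for this in Next.js-style RSC setups):

    // lib/secrets.js -- illustrative server-side module
    // Importing "server-only" makes the build fail if this module, or
    // anything that transitively imports it, is pulled into client code.
    import "server-only";

    import { db } from "./db"; // hypothetical database client

    export async function getUserSecrets(userId) {
      // Safe to touch secrets here: this module can never ship to the browser.
      return db.query("SELECT * FROM secrets WHERE user_id = $1", [userId]);
    }

A client module would use `import "client-only"` the same way, poisoning the server bundle instead.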

hbbio 12/14/2025||
The problem we tried to solve with Opa was more general than RSC, probably too general.

    // Opa decides
    function client_or_server(x, y) { ... }
    // Client-side
    client function client_function(x, y) { ... }
    // Server-side
    server function server_function(x, y) { ... }
Without the optional side inference (which could also use both), it seems we had similar side constraints, and serializers/sanitizers. Probably with the same flaws as the recent vulnerabilities... Like all the OWASP AppSec circa 2013-2015 range of exploits in browser countermeasures when the browsers were starting to roll out defense in depth with string matching :)
Philpax 12/12/2025|||
You might be interested in Electric Clojure [1], although I must admit that I have not used it myself.

[1]: https://github.com/hyperfiddle/electric

yawaramin 12/12/2025||
Ocsigen Eliom did it before Opa, no?
hollowturtle 12/11/2025||
Wouldn't it make more sense to keep React smaller and leave those features to frameworks? I liked it more when it was marketed as the View in MVC. Surely it can still be used like that today, but it still feels bloated
TZubiri 12/11/2025||
But the React server components are a separate library; they are not installed by default
hollowturtle 12/12/2025||
? afaik react server components made it to core
silverwind 12/12/2025||
They shouldn't be loaded in a React SPA at least, e.g. `react-dom` and `react` packages should be unaffected.
TZubiri 12/12/2025||
So they are part of the standard distribution (like through npm install react), but are unused by default? Something like that?
danabramov 12/12/2025||
This code doesn’t exist in `react` or `react-dom`, no. Packages are released in lockstep to avoid confusion which is why everything got a version bump.

The vulnerable packages are the ones starting with `react-server-` (like `react-server-dom-webpack`) or anything that vendors their code (like `next` does).

ivanjermakov 12/11/2025||
git checkout v15.0.0

There we go.

hollowturtle 12/11/2025||
Can I have v15 with the rendering optimizations of further versions?
dizlexic 12/12/2025||
I'm not going to let go of my argument with Dan Abramov on X 3 years ago, where he held up RSC as an amazing feature and I told him over and over he was making a foot gun. Tahdah!

I'm a nobody PHP dev. He's a brilliant developer. I can't understand why he couldn't see this coming.

danabramov 12/12/2025||
For what it’s worth, I’ve just built an app for myself with RSC, and I’m still a huge fan of this way of building and structuring web software.

I agree I underestimated the likelihood of bugs like this in the protocol, though that’s different from most discussions I’ve had about RSC (where concerns were about user code). The protocol itself has a fairly limited surface area (the serializer and deserializer are a few kloc each), and that’s where all of the exploits so far have concentrated.

Vulnerabilities are frustrating, and this seems to be the first time the protocol is getting a very close look from the security community. I wish this was something the team had done proactively. We’ll probably hear more from the team after things stabilize a bit.

brazukadev 12/20/2025||
RSC is not a protocol; that is probably one of the reasons it is bad and affected only NextJS - most other server frameworks struggled with and gave up on this mistake that was React Server.
locallost 12/12/2025|||
I'm not defending React and this feature, and I also don't use it, but when making a statement like that the odds are stacked in your favor. It's much more likely that something's a bad idea than a good idea, just as a baseball player will at best fail just 65-70% of the time at the plate. Saying for every little thing that it's a bad idea will make you right most of the time.

But sometimes, occasionally, a moonshot idea becomes a home run. That's why I dislike cynicism and grizzled veterans for whom nothing will ever work.

dizlexic 12/12/2025||
You're probably right. This one just felt like Groundhog Day, but I can't argue with "nothing ventured nothing gained".
jdkoeck 12/12/2025|||
A tale as old as time: hubris. A successful system is destined to either stop growing or morph into a monstrosity by taking on too many responsibilities. It's hard to know when to stop.

React lost me when it stopped being a rendering library and became a "runtime" instead. What do you know, when a runtime starts collapsing rendering, data fetching, caching, authorization boundaries, server and client into a single abstraction, the blast radius of any mistake becomes enormous.

peacebeard 12/12/2025|||
You might be more brilliant than you think.
hu3 12/12/2025||
I never saw brilliance in his contributions. Especially as React keeps being duct-taped.

Making complex things complex is easy.

Vue on the other hand is just brilliant. No wonder its creator, Evan You, went on to also create Vite. A creation so superior that it couldn't be confined to Vue, and the React community adopted it.

https://evanyou.me

epolanski 12/12/2025||
There's no need to take down and diminish others' contributions, especially in open source where everybody's free to bring a better solution to the table.

Or just fork if the maintainers want to go their way. If your solution has its merits it will find its fans.

hu3 12/13/2025||
That's utopia.

While everyone is free to fork and maintain React, it's by no means an easy task, especially if it's not their job like Dan's is.

Plus, the industry tends to gravitate towards what is popular. Network effects and all. So if a massively popular tool is subpar, the complications of it aren't without impact.

And no one is immune to criticism. LLMs are criticised for their sycophancy but some humans are no different when it comes to gatekeeping criticism.

sangeeth96 12/11/2025||
Next team just published this: https://nextjs.org/blog/security-update-2025-12-11

Seems to affect 14.x, 15.x and 16.x.

_heimdall 12/11/2025||
I do hope this means we can finally stop hearing about RSC. The idea is an interesting solution to problems that should never have existed in the first place.
delifue 12/12/2025|
React Server Components are the frontend's attempt at "eating" the backend.

On the contrary, HTMX is the backend's attempt at "eating" the frontend.

HTMX preserves the boundary between client and server, so it's safer on the backend but less safe on the frontend (risk of XSS).

yawaramin 12/12/2025|
Htmx doesn't really have an XSS problem; this was solved by templating languages long ago. See https://htmx.org/essays/web-security-basics-with-htmx/#alway...
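The auto-escaping that templating languages provide can be sketched like this. A hand-rolled escaper stands in here for what template engines do automatically; the partial-rendering function is invented for illustration.

```javascript
// Escape the five HTML-significant characters so user data is rendered
// as text, not markup. Template engines apply this by default.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// An htmx-style partial: the server returns ready-to-swap HTML, with any
// user-supplied value escaped before interpolation.
function renderCommentPartial(comment) {
  return `<div class="comment">${escapeHtml(comment)}</div>`;
}
```

A payload like `<script>alert(1)</script>` comes back as inert text, which is why the boundary-preserving model pushes the XSS problem onto a well-understood, decades-old mitigation.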
More comments...