Posted by sangeeth96 3 days ago
It turns out this introduces another problem too: in order to get that to work you need to implement some kind of DEEP serialization RPC mechanism - which is kind of opaque to the developer and, as we've recently seen, is a risky spot in terms of potential security vulnerabilities.
What app router has become has its ideal uses, but if you explicitly preferred the DX of the pages router, you might enjoy TanStack Router/Start even more.
I don't care about having things simple to get started the first time, because soon I will have to start things a second or third time. If I have a little bit more complexity to get things started because routing is handled by code and not filesystem placement then I will pretty quickly develop templates to handle this, and in the end it will be easier to get things started the nth time than it is with the simple version.
Do I like the app router? No, Vercel does a crap job on at least two things - routing and building (server code etc. can be considered a subset of the routing problem) - but saying I dislike the app router would give the pages router more credit than it deserves.
Align early on wrt values of a framework and take a closer look at the funder's incentives.
It's the first stack that allows me to avoid REST or GraphQL endpoints by default, which was the main source of frontend overhead before RSC. Previously I had to make choices on how to organize API, which GraphQL client to choose (and none of them are perfect), how to optimize routes and waterfalls, etc. Now I just write exactly what I mean, with the very minimal set of external helper libs (nuqs and next-safe-action), and the framework matches my mental model of where I want to get very well.
Anti-React and anti-Next.js bias on HN is something that confuses me a lot; for many other topics here I feel pretty aligned with the crowd opinion on things, but not on this.
You still need API routes for stuff like data-heavy async dropdowns, or anything else that's hard to express as a pure URL -> HTML, but it cuts down the number of routes you need by 90% or more.
Personally I don’t like it but I do understand the appeal.
Not to mention the whole middleware story, and being able to access the incoming request wherever you like.
But I really, really do not like React Server Components as they work today. I think it's probably better to strip them out in favor of just a route.ts file in the directory, rather than the actions files with "use server" and all the associated complexity.
Technically, you can build apps like that using App Router by just not having "use server" anywhere! But it's an annoying, sometimes quite dangerous footgun to have all the associated baggage there waiting for an exploit... The underlying code is there even if you aren't using it.
I think my ideal setup would be:
1. route.ts for RESTful routes
2. actions/SOME_FORM_NAME.ts for built-in form parsing + handling. Those files can only expose a POST, and are basically a named route file that has form data parsing. There's no auto-RPC, it's just an HTTP handler that accepts form data at the named path.
3. no other built-in magic.
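The second idea can be sketched with nothing but web-standard APIs - no RPC layer, no "use server". The handler name and the validation below are made up for illustration; `Request`/`Response`/`FormData` are globals in Node 18+:

```javascript
// Hypothetical sketch of idea 2: a named form handler that is just an HTTP
// endpoint with built-in form parsing. It only accepts POST, parses the body
// as form data, and from there on it's ordinary server code.
async function handleContactForm(req) {
  if (req.method !== "POST") {
    return new Response("Method Not Allowed", { status: 405 });
  }
  const form = await req.formData();
  const email = form.get("email");
  if (typeof email !== "string" || !email.includes("@")) {
    return new Response("Invalid form data", { status: 400 });
  }
  // ...persist, enqueue an email, etc.
  return new Response("OK", { status: 200 });
}
```

Nothing here is auto-generated or serialized: the attack surface is exactly the form fields you chose to read.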
RSCs are React components that call server side code. https://react.dev/reference/rsc/server-components
Actions/"use server" functions are part of RSC: https://react.dev/reference/rsc/server-functions. They're the RPC system used by client components to call server functions.
And they're what everyone here is talking about: the vulnerabilities were all in the action/use server codepaths. I suppose the clearest thing I could have said is that I like App Router + route files, but I dislike the magic RPC system: IMO React should simplify to JSON+HTTP and forms+HTTP, rather than a novel RPC system that doesn't interoperate with anything else and is much more difficult to secure.
An example of this is filesystem routing. Started off great, but now most Next projects look like the blast radius of a shell script gone terribly wrong.
There's also a(n in)famous GitHub response from one of the maintainers backwards-rationalising tech debt and accidental complexity as necessary. They're clearly smart, but the feeling I got from reading that comment was that they developed Stockholm syndrome towards their own codebase.
Vercel has become a merchant of complexity, as DHH likes to say.
I do respect the things React + Next team is trying to accomplish and it does feel like magic when it works but I find myself caring more and more about predictability when working with a team and with every major version of Next + React, that aspect seems to be drifting further and further away.
I can't recommend it enough. If you never tried/learnt about it, check it out. Unless you're building an offline first app, it's 100% the safest way to go in my opinion for 99.9% of projects.
Instead of creating routes and using fetch() you just pass the data directly to the client-side React JSX template; Inertia automatically injects the needed data as JSON into the client page.
Have a Landing/marketing page? Then, yes, by all means render on the server (or better yet statically render to html files) so you squeeze every last millisecond you can out of that FCP. Also easy to see the appeal for ecommerce or social media sites like facebook, medium, and so on. Though these are also use cases that probably benefit the least from React to begin with.
But for the "app" part of most online platforms, it's like, who cares? The time to load the JS bundle is a one time cost. If loading your SaaS dashboard after first login takes 2 seconds versus 3 seconds, who cares? The amount of complexity added by SSR and RSC is immense, I think the payout would have to be much more than it is.
They may get other vulnerabilities, as they are also in JS, but RSC-class vulnerabilities won't be there.
In retrospect I should have given it more thought since React Server Components are punted in many places!
You have two poison pills (`import "server-only"` and `import "client-only"`) that cause a build error when transitively imported from the wrong environment. This lets you, for example, constrain that a database layer or an env file can never make it into the client bundle (or that some logic that requires client state can never be accidentally used from the stateless request/response cycle). You also have two directives that explicitly expose entry points between the two worlds.
The vulnerabilities in question aren't about wrong code/data getting pulled into a wrong environment. They're about weaknesses in the (de)serialization protocol which relied on dynamic nature of JavaScript (shared prototypes being writable, function having a string constructor, etc) to trick the server into executing code or looping. These are bad, yes, but they're not due to the client/server split being implicit. They're in the space of (de)serialization.
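A classic standalone illustration of that class of weakness - this is a toy deep-merge, not the React code or the actual exploits - is prototype pollution:

```javascript
// Illustrative only, NOT the RSC protocol. JSON.parse creates an own
// "__proto__" property, and a naive merge that walks into it ends up
// writing to Object.prototype for the whole process.
function naiveMerge(target, src) {
  for (const key of Object.keys(src)) {
    const val = src[key];
    if (typeof val === "object" && val !== null) {
      if (typeof target[key] !== "object" || target[key] === null) {
        target[key] = {};
      }
      naiveMerge(target[key], val); // recursing into "__proto__" reaches Object.prototype
    } else {
      target[key] = val;
    }
  }
  return target;
}

const payload = JSON.parse('{"settings": {"theme": "dark"}, "__proto__": {"isAdmin": true}}');
naiveMerge({}, payload);
console.log({}.isAdmin); // true - every object in the process is now "polluted"
```

The fix is the same in spirit as the RSC patches: treat keys like `__proto__`, `constructor`, and `prototype` as data, never as paths to walk.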
So I ran a static analysis (grep) on the generated apk and
points light at face dramatically
the credentials were inside the frontend!
Most frameworks also by default block ALL environment variables on the client side unless the name starts with a specific prefix, like NEXT_PUBLIC_*
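A rough sketch of the rule, assuming the Next.js `NEXT_PUBLIC_` convention (real frameworks enforce this at build time by inlining the values, not with a runtime filter like this):

```javascript
// Only env vars with an explicit public prefix are visible to client code;
// everything else stays server-side. Prefix names are framework-specific
// (Next.js: NEXT_PUBLIC_, Vite: VITE_).
const PUBLIC_PREFIX = "NEXT_PUBLIC_";

function clientVisibleEnv(env) {
  return Object.fromEntries(
    Object.entries(env).filter(([name]) => name.startsWith(PUBLIC_PREFIX))
  );
}

const exposed = clientVisibleEnv({
  DATABASE_URL: "postgres://user:pass@host/db", // never reaches the client
  NEXT_PUBLIC_API_BASE: "/api",                 // safe to inline into the bundle
});
console.log(Object.keys(exposed)); // only NEXT_PUBLIC_API_BASE survives
```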
I’ve been out of full stack dev for ~5 years now, and this statement is breaking my brain
These kinds of Node + mobile apps typically use an embedded browser like Electron or a built-in browser; it's not much different from a web app.
The React team reinvents the wheel again and again, and now we're back to Laravel.
In fairness, React presents it as an "experimental" library, although that didn't stop Next.js from widely deploying it.
I suspect there will be many more security issues found in it over the next few weeks.
Nextjs ups the complexity by orders of magnitude; I couldn't even figure out how to set any breakpoints on the RSC code within Next.
Next vendors most of their dependencies, and they have an enormously complex build system.
The benefits that next and RSC offer, really don't seem to be worth the cost.
I had moved off nextjs for reasons like these, the mind load was getting too heavy for not too much benefit
DISCLAIMER: After years of using Angular/Ember/Jquery/VanillaJs, jumping into React's functional components made me enjoy building front-ends again (and still remains that way to this very day). That being said:
This has been maybe the biggest issue in React land for the last 5 years at least. And not just for RSC, but across the board.
It took them forever to put out clear guidance on how to start a new React project. They STILL refuse to even acknowledge CRA exist(s/ed). The maintainers have actively fought with library makers on this exact point, over and over and over again.
The new useEffect docs are great, but years late. It'll take another 3-4 years before the code LLMs spit out even resembles that guidance because of it.
And like sure, in 2020 maybe it didn't make sense to spell out the internals of RSC because it was still in active development. But it's 2025. And people are using it for real things. Either you want people to be successful or you want to put out shiny new toys. Maybe Guillermo needs to stop palling around with war criminals and actually build some shit.
It might be one of the most absurd things about React's team: their constitutional refusal to provide good docs until they're backed into a corner.
Every action, every button click, basically every input is sent to the server, and the changed dom is sent back to the client. And we're all just supposed to act like this isn't absolutely insane.
The problem with API + frontend is:
1. You have two applications you have to ensure are always in sync and consistent.
2. Code is duplicated.
3. Velocity decreases because in order to implement almost anything, you need buy-in from the backend AND frontend team(s).
The idea of Blazor Server or Phoenix live view is "the server runs the show". There's now one source of truth, and you don't have to spend time making sure it's consistent.
I would say, really, 80% of bugs in web applications come from the client and server being out of sync. Even if you think about vulnerability like unauthorized access, it's usually just this. If you can eliminate those 80% or mitigate them, then that's huge.
Oh, and that's not even touching on the performance implications. APIs can be performant, but they usually aren't. Usually adding or editing an API is treated as such a high-risk activity that people just don't do it - so instead they contort, like, 10 API calls together and discard 99% of the data to get the thing they want on the frontend.
A web application should either be server-generated HTML with a little JS sprinkled in, or a client-side application with traditional RPC-like calls when necessary.
Blazor (and old-school .NET Web Forms) do a lot more back-and-forth than either of those two approaches.
I've been loosely following the Rust equivalents (Leptos, Yew, Dioxus) for a while in the hopes that one of them would see a component library near the level of Mantine or MUI (Leptos + Thaw is pretty close). It feels a little safer in the longer term than Blazor IMO and again, RSC for React feels icky at best.
Basically you write only backend code, with all the tools available there, and a thin library makes sure to stitch the user input to your backend functions and the output to the front end code.
Honestly it is kinda nice.
WebSockets + thin JS are better for real-time stuff than for standard CRUD forms. They fill in for a ton of high-interactivity use cases where people often reach for React/Vue (and then end up pushing absolutely everything needlessly into JS), while keeping most important logic on the server with far less duplication.
For simple forms personally I find the server-by-default solution of https://turbo.hotwired.dev/ to be far better where the server just sends HTML over the wire and a JS library morph-replaces a subset of the DOM, instead of doing full page reloads (ie, clicking edit to in-place change a small form, instead of redirecting to one big form).
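The server side of that pattern is roughly "render a fragment keyed by a frame id" (illustrative markup and function name; real Turbo matches the `<turbo-frame>` id in the response and swaps only that element; HTML escaping omitted for brevity):

```javascript
// The server answers a request for the edit form with just this fragment,
// not a full page; the client library replaces the matching frame in place.
function renderEditFrame(post) {
  return `<turbo-frame id="post_${post.id}">
    <form action="/posts/${post.id}" method="post">
      <input name="title" value="${post.title}">
      <button>Save</button>
    </form>
  </turbo-frame>`;
}
```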
LiveView does provide the tools to simulate latency and move some interactions to be purely client side, but it's the developers' responsibility to take advantage of those and we know how that usually goes...
It's extremely nice! Coming from the React and Next.js world there is very little that I miss. I prefer to obsess over tests, business logic, scale and maintainability, but the price I pay is that I am no longer able to obsess over frontend micro-interactions.
Not the right platform for every product obviously, but I am starting to believe it is a very good choice for most.
Server side rendering has been with us since the beginning, and it still works great.
Client side page manipulation has its place in the world, but there's nothing wrong with the server sending page fragments, especially when you can work with a nice tech stack on the backend to generate it.
For instance, I've seen pages with a server-linked HTML button that would open a details panel. That button should open the panel without resorting to sending the event and waiting for a response from the server, unless there is a very, very specific reason for it.
Main downside is the hot reload is not nearly as nice as TS.
But the coding experience with a C# BE/stack is really nice for admin/internal tools.
This is insane to you only if you didn't experience the emergence of this technique 20-25 years ago. Almost all server-side templates were already partials of some sort in almost all the server-side environments, so why not just send the filled in partial?
Business logic belongs on the server, not the client. Never the client. The instant you start having to make the client smart enough to think about business logic, you are doomed.
Then they rediscovered PHP, Rails, Java EE/Spring, ASP.NET, and rebooted SPAs into fullstack frameworks.
I can understand the dislike for Next but this is such a poor comparison. If any of those frameworks had at any point done half the things React + Next-like frameworks accomplished, with the apps/experiences we've gotten since then, we wouldn't be having this discussion.
This is interesting because every Next/React project I see has a slower velocity than the median Rails/Django product 15 years ago. They’re just as busy, but pushing so much complexity around means any productivity savings is cancelled out by maintenance and how much harder state management and security are. Theoretically performance is the justification for this but the multi-second page load times are unconvincing.
From my perspective, it really supports the criticism about culture in our field: none of this is magic, we can measure things like page-weight, response times, or time to complete common tasks (either for developers or our users), but so much of it is driven by what’s in vogue now rather than data.
Now they accomplished this by imposing a lot of constraints on what you could do, but honestly it was solid UX at the time so it was fine.
Like the things you could do were just sane things to do in the first place, thus it felt quite ok as a dev.
React apps, _especially_ ones hosted on Next.js, rarely feel as snappy, and that is with the benefit of 15 years of engineering and a few orders of magnitude of perf improvement to most of the tech pieces of the stack.
It’s just wild to me that we had faster web apps, with better organization, better dev ex, faster to build and easier to maintain.
The only “wins” I can see for a nextjs project is flexibility, animation (though this is also debatable), and maybe deployment cost, but again I’m comparing to deploying rails 15 years ago, things have improved there as well I’m sure.
I know react can accomplish _a ton_ more on the front end but few projects actually need that power.
If anything the latter is much easier to maintain and to develop for.
Using anything else requires yak shaving instead of coding the application code.
That is the only reason I get to use them.
It helped little that even React developers were saying it was the wrong tool for plenty of use cases.
Worst of all?
The entire nuance of choosing the right tool for the job has been long lost on most developers. Even the comments I read on HN make me question where the engineering part of the job starts.
Non-technical MBAs seem to have a hard time grasping that a JS-only platform is not a panacea and comes with serious tradeoffs.
Surely there are not so many people building e-commerce sites that server components should have ever become so popular.
Now they are shoving server rendering into react native…
Like with almost everything, people then shit on something they don’t understand.
Trying to justify the CVE before fully explaining the scope of the CVE, who is affected, or how to mitigate it -- yikes.
If there are so many React developers out there using server side components while not familiar with the concept of CVEs, we’re in very serious trouble.
Since the Opa compiler was implemented in OCaml (we were looking more like Svelte than React as a pure lib), we performed a lot of static analysis to prevent the wide range of attacks on frontend code (XSS, CSRF, etc.) and backend code. The Opa compiler became a huge beast in part because of that.
In retrospect, better separation of concerns and foregoing completely the idea of automatic code splitting (what React Server Components is) or even having a single app semantics is probably better for the near future. Our vision (way too early), was that we could design a simple language for the semantics and a perfect advanced compiler that would magically output both the client and the server from that specification. Maybe it's still doable with deterministic methods. Maybe LLMs will get to automatic code generation of all parts in one shot before.
The vulnerabilities so far were weaknesses in the (de)serializer stemming from the dynamism of JavaScript — ability to hijack root object prototype, ability to toString functions to get their code, ability to override a Promise then implementation, ability to construct a function from a string. The patches are patching the (de)serializer to work around those dynamic pieces of JavaScript to avoid those gaps. This is similar to mistakes in parsers where they’re fooled by properties called hasOwnProperty/constructor/etc.
The serialization format is essentially “JSON with Promises and code chunk references”, and it seems like there’s enough pieces where dynamic nature of JS can leak that needed to be plugged. Hopefully with more scrutiny on the protocol, these will be well-understood by the team. The surface area there isn’t growing much anymore (it’s close to being feature-complete), and the (de)serializers themselves are roughly 5 kloc each.
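Two of those dynamic behaviors can be demonstrated in isolation (toy code, not the RSC deserializer):

```javascript
// 1. Any function's source is recoverable as a string via toString():
function helper() {
  const token = "not-so-secret";
  return token;
}
console.log(helper.toString().includes("not-so-secret")); // true

// 2. Any object with a then() method runs its own code when awaited,
// so a deserializer that awaits untrusted values executes attacker logic:
const thenable = {
  then(resolve) {
    resolve("attacker-controlled");
  },
};
(async () => {
  const value = await thenable; // await triggers the custom then()
  console.log(value); // "attacker-controlled"
})();
```

Individually these are well-known language features; the hard part, per the comment above, is making sure none of them can leak through the "JSON with Promises and code chunk references" boundary.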
The problem you had in Opa is solved in RSC with build-time assertions (import "server-only" is the server environment poison pill, and import "client-only" is the client environment poison pill). These poison pills work transitively up the module import stack and are statically enforced and prevent code (eg DB code, secrets, etc) from being pulled into the wrong environment. Of course this doesn’t prevent bugs in the (de)serializer but it’s why the overall approach is sound, in the absence of (de)serialization vulnerabilities.
The vulnerable packages are the ones starting with `react-server-` (like `react-server-dom-webpack`) or anything that vendors their code (like `next` does).
There we go.
I'm a nobody PHP dev. He's a brilliant developer. I can't understand why he couldn't see this coming.
I agree I underestimated the likelihood of bugs like this in the protocol, though that’s different from most discussions I’ve had about RSC (where concerns were about user code). The protocol itself has a fairly limited surface area (the serializer and deserializer are a few kloc each), and that’s where all of the exploits so far have concentrated.
Vulnerabilities are frustrating, and this seems to be the first time the protocol is getting a very close look from the security community. I wish this was something the team had done proactively. We’ll probably hear more from the team after things stabilize a bit.
React lost me when it stopped being a rendering library and became a "runtime" instead. What do you know, when a runtime starts collapsing rendering, data fetching, caching, authorization boundaries, server and client into a single abstraction, the blast radius of any mistake becomes enormous.
Making complex things complex is easy.
Vue on the other hand is just brilliant. No wonder its creator, Evan You, went on to also create Vite: a creation so superior that it couldn't be confined to Vue, and the React community adopted it.
Or just fork if the maintainers want to go their way. If your solution has its merits it will find its fans.
But sometimes, occasionally, a moonshot idea becomes a home run. That's why I dislike cynicism and grizzled veterans for whom nothing will ever work.
Seems to affect 14.x, 15.x and 16.x.
On the contrary, HTMX is the attempt of backend "eating" frontend.
HTMX preserves the boundary between client and server, so it's safer on the backend but less safe on the frontend (risk of XSS).