Posted by rayhaanj 9 hours ago
The server was deserializing untrusted input from the client directly into module+export name lookups, and then invoking whatever the client asked for (without verifying that metadata.name was an own property).
return moduleExports[metadata.name]
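To make the failure mode concrete, here's a minimal sketch of why that lookup is dangerous without an own-property check (made-up names, not React's actual code):

    // Minimal sketch, not React's actual code.
    const moduleExports = {
      updateProfile: async (data) => { /* legitimate server function */ },
    };

    // BAD: plain property access also walks the prototype chain, so a
    // client-supplied name can resolve to things that were never exported.
    function lookup(metadata) {
      return moduleExports[metadata.name];
    }
    lookup({ name: "constructor" });    // -> Object constructor, not an export
    lookup({ name: "hasOwnProperty" }); // -> inherited function

    // SAFER: only own properties of the exports object are reachable.
    function lookupSafe(metadata) {
      if (Object.prototype.hasOwnProperty.call(moduleExports, metadata.name)) {
        return moduleExports[metadata.name];
      }
      return undefined;
    }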
We can patch hasOwnProperty and tighten the deserializer, but there is a deeper issue: React never really acknowledged that it was building an RPC layer. If you look at actual RPC frameworks like gRPC, or even old-school SOAP, they all start with schemas, explicit service definitions, and a bunch of tooling to prevent boundary confusion. React went the opposite way: the API surface is whatever your bundler can see, and the endpoint is whatever the client asks for.

My guess is this won't be the last time we see security fallout from that design choice. Not because React is sloppy, but because it's trying to solve a problem category that traditionally requires explicitness, not magic.
The fact that React embodies an RPC scheme in disguise is quite obvious if you look at the kind of functionality it implements; some of it simply cannot be done any other way. But then you should own that decision and add all of the safeguards such a mechanism requires; you can't bolt those on after the fact.
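For what it's worth, owning that decision could look like an explicit registry with up-front validation. A hypothetical sketch in plain JS (none of these names come from React):

    // Hypothetical sketch, not anything React ships: the callable
    // surface is declared explicitly instead of being "whatever the
    // bundler can see".
    const registry = new Map();

    function expose(name, validate, handler) {
      registry.set(name, { validate, handler });
    }

    function dispatch(name, payload) {
      const entry = registry.get(name);
      if (!entry) throw new Error(`Unknown server function: ${name}`);
      const args = entry.validate(payload); // reject bad input before invoking
      return entry.handler(args);
    }

    // Example registration; saveProfile is a stand-in handler.
    async function saveProfile({ name }) { /* persist the profile */ }
    expose("updateProfile", (p) => ({ name: String(p.name) }), saveProfile);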
A similar bug could be introduced in the implementation of other RPC systems too. It's not entirely specific to this design.
(I contribute to React but not really on RSC.)
Imagine these dozens of people working at Meta.
They sit at the table, they agree to call eval(), and never think "what could go wrong?"
Building a private, out-of-date repo doesn't seem great either.
The problem is this specific "call whatever server code the client asks" pattern. Traditional APIs with defined endpoints don’t have that issue.
Architecturally, an increasingly insecure attack surface seems to be appearing in JavaScript at large, rooted in the insecurity of mandatory dependencies.
If React's foundation and dependencies have vulnerabilities, React will have security issues, both directly and indirectly.
This specific issue seems to be a head-scratcher. How could something so basic exist for so long?
Again I ask about React and Next.js from their position of leadership in the JavaScript ecosystem. I don't think this is a standard anyone wants.
Could LLM-driven code reviews be created to search codebases for issues like this once they're discovered?
If you remember "mashups": those were basically just using the fact that you can load any code from any remote server and run it alongside your code, and code from other servers, while sharing credentials between all of them. But hey, it is very useful to let Stripe run their stripe.js on your domain. And AdSense. And Mixpanel. And while we're at it, let's let npm install 1000 packages for a single-dependency project. It's bad.
> A pre-authentication remote code execution vulnerability exists in React Server Components versions 19.0.0, 19.1.0, 19.1.1, and 19.2.0 including the following packages: react-server-dom-parcel, react-server-dom-turbopack, and react-server-dom-webpack. The vulnerable code unsafely deserializes payloads from HTTP requests to Server Function endpoints.
React's own words: https://react.dev/blog/2025/12/03/critical-security-vulnerab...
> React Server Functions allow a client to call a function on a server. React provides integration points and tools that frameworks and bundlers use to help React code run on both the client and the server. React translates requests on the client into HTTP requests which are forwarded to a server. On the server, React translates the HTTP request into a function call and returns the needed data to the client.
> An unauthenticated attacker could craft a malicious HTTP request to any Server Function endpoint that, when deserialized by React, achieves remote code execution on the server. Further details of the vulnerability will be provided after the rollout of the fix is complete.
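In other words, the round trip is roughly this (an illustrative sketch, not React's actual wire format or internals; the endpoint path and loadModule() are made up):

    // Illustrative sketch only, NOT React's actual wire format.
    // Client: a server-function call becomes an HTTP POST.
    async function callServerFunction(id, args) {
      const res = await fetch("/server-function-endpoint", {
        method: "POST",
        body: JSON.stringify({ id, args }),
      });
      return res.json();
    }

    // Server: the payload is deserialized into a module + export
    // lookup, and the result is invoked. That resolution step is
    // where the vulnerability lives.
    function handleRequest(body, loadModule) {
      const { id, args } = JSON.parse(body);
      const [moduleId, exportName] = id.split("#");
      const fn = loadModule(moduleId)[exportName]; // attacker controls both names
      return fn(...args);
    }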
I think deserialization that relies on blacklists of properties is a dangerous game.
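The classic illustration: a naive deep-merge deserializer that denylists "__proto__" but forgets that Object.prototype is reachable another way.

    // Sketch: a naive deep merge with a denylist.
    function merge(target, source) {
      for (const key of Object.keys(source)) {
        if (key === "__proto__") continue; // naive denylist
        const val = source[key];
        if (val && typeof val === "object") {
          if (!target[key]) target[key] = {};
          merge(target[key], val); // recurse without reassigning
        } else {
          target[key] = val;
        }
      }
      return target;
    }

    // Bypass: "constructor.prototype" reaches Object.prototype anyway.
    merge({}, JSON.parse('{"constructor":{"prototype":{"polluted":true}}}'));
    console.log({}.polluted); // true: every object is now "polluted"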
I think rolling your own object deserialization in a library that isn’t fully dedicated to deserialization is about as dangerous as writing your own encryption code.
Intentionally? That's a scary feature
What do Server Components do so much better than SSR? What minute performance gain do they achieve over client-side rendering?
Why won't they invest more in solving the developer experience that took a nosedive when hooks were introduced? They finally added a compiler, but instead of going the Svelte route of handling the entire state, it only adds memoization?
If I could send a direct message to the React team, it would be: abandon all current plans and work on allowing users to write native JS control flows in their component logic.
sorry for the rant.
I like to think of Server Components as componentized BFF ("backend for frontend") layer. Each piece of UI has some associated "API" with it (whether REST endpoints, GraphQL, RPC, or what have you). Server Components let you express the dependency between the "backend piece" and the "frontend piece" as an import, instead of as a `fetch` (client calling server) or a <script> (server calling client). You can still have an API layer of course, but this gives you a syntactical way to express that there's a piece of backend that prepares data for this piece of frontend.
This resolves tensions between evolving both sides: each piece of backend always prepares the exact data the corresponding piece of frontend needs, because they're literally bound by a function call (or rather, JSX). This also lets you load data as granularly as you want without blocking (very nice when you have a low-latency data layer).
Of course you can still have a traditional REST API if you want. But you can also have UI-specific server computation in the middle. There's inherent tension between the data needed to display the UI (a view model) and the way the data is stored (database model); RSC gives you a place to put UI-specific logic that should execute on the backend but keeps composability benefits of components.
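A minimal sketch of the idea (`db` is a hypothetical data layer and NoteEditor would be a client component; this is illustrative, not a complete app):

    // Server Component: the componentized "backend piece".
    // `db` is a hypothetical low-latency data layer.
    async function NoteView({ id }) {
      const note = await db.notes.get(id); // direct server-side data access
      // The "frontend piece" receives exactly the data it needs,
      // bound by JSX instead of a fetch() from the client.
      return <NoteEditor initialText={note.text} />;
    }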
I understand the logic, but there are several issues I can think of.
1 - As I said, SSR and API layers are good enough, so investing heavily in RSC when the hooks development experience is still so lacking seems weird to me. React always hailed itself as the "just JS" framework, but you can't actually write regular JS in components, since hooks have so many rules that bind the developer to a very specific way of writing code.
2 - As React was always celebrated as an unopinionated framework, RSC creates a deep coupling between two layers which were classically very far apart.
Here is a list of things I would rather have React provide:
- advanced form functionality that binds to a model and supports validation
- i18n: Angular has translations compiled into the application, so fetching a massive JSON of translations is not needed
- signals, for proper reactive state
- better templating ability for control flows
- a native animation library
All of these things are important, so I wouldn't have to learn each new project's permutation of the libraries du jour.
I agree that the developer experience provided by the compiler model used in Svelte and React is much nicer to work with.
And what DO they add? Only things that improve the dev experience.
The 1 to 2 transition was one hell of a burn though; people are probably still smarting...
Having both old and new Angular running in one project is super weird, but everything worked out in the end.
The migration path between Angular 1 and 2 is the same as between React and Angular: it's just glue holding two frameworks together.
And that change happened 10 years ago
AngularJS and Angular. That's not confusing at all :-)
So yes, people got burnt (when we were told that there would be a migration path), and I will never rely on another Google-backed UI framework.
Lit is pretty good :-) Though it was never positioned as a framework. And it was recently liberated from Google.
RSC is their solution to not being able to figure out how to make SSR faster and an attempt to reduce client-side bloat (which also failed)
Ultimately that so-called "isomorphism" causes more numerous and more difficult problems than it solves.
A purist approach with short term thinking got everyone deep in a rabbit hole with too many pitfalls.
https://www.arrow-js.com/docs/
It makes it easy to have a central JSON-like state object representing what's on the page, then have components watch that for changes and re-render. That avoids the opaqueness of Redux and promise chains, which can be difficult to examine and debug (unless we add browser extensions for that stuff, which feels like a code smell).
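The core pattern is roughly this (a framework-agnostic sketch of the idea, not ArrowJS's actual API):

    // Sketch of the pattern, not ArrowJS's actual API: a central
    // JSON-like state object wrapped in a Proxy, with components
    // re-rendering whenever it changes.
    const watchers = new Set();

    const state = new Proxy({ count: 0 }, {
      set(target, key, value) {
        target[key] = value;
        watchers.forEach((render) => render()); // notify every component
        return true;
      },
    });

    function counter(el) {
      const render = () => { el.textContent = `Count: ${state.count}`; };
      watchers.add(render);
      render();
    }

    counter(document.querySelector("#counter"));
    state.count++; // a state change triggers a re-render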
I've also heard good things about Astro, which can wrap components written in other frameworks (like React) so that a total rewrite can be avoided:
https://docs.astro.build/en/guides/imports/
I'm way outside my wheelhouse on this as a backend developer, so if anyone knows the actual names of the frameworks I'm trying to remember (hah), please let us know.
IMHO React creates far more problems than it solves:
- Virtual DOM: just use Facebook's vast budget to fix the browser's DOM so it renders 1000 fps using the GPU, memoization, caching, etc and then add the HTML parsing cruft over that
- Redux: doesn't actually solve state transfer between backend and frontend like, say, Firebase
- JSX: do we really need this when JavaScript has template literals now? (see the sketch after this list)
- Routing: so much work to make permalinks when file-based URLs already worked fine 30 years ago and the browser was the V in MVC
- Components: steep learning curve (but why?) and they didn't even bother to implement hooks for class components, instead putting that work onto users, and don't tell us that's hard when packages like react-universal-hooks and react-hookable-component do it
- Endless browser console warnings about render changing state and other errata: just design a unidirectional data flow that detects infinite loops so that this scenario isn't possible
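On the JSX point above, the template-literal alternative looks roughly like this (a naive sketch in the spirit of libraries like htm or lit-html; real ones escape values and diff the DOM):

    // Naive tagged-template renderer; real libraries escape values
    // and diff the DOM instead of re-parsing markup.
    function html(strings, ...values) {
      const markup = strings.reduce(
        (out, s, i) => out + s + (i < values.length ? String(values[i]) : ""),
        ""
      );
      const tpl = document.createElement("template");
      tpl.innerHTML = markup;
      return tpl.content;
    }

    const name = "world";
    document.body.append(html`<h1>Hello, ${name}!</h1>`);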
I'll just stop there. The more I learn about React, the less I like it. That's one of the primary ways that I know that there's no there there when learning new tools. I also had the same experience with the magic convention-over-configuration in Ruby.

What's really going on here, and what I would like to work on if I ever win the internet lottery (unlikely now with the arrival of AI since app sales will soon plummet along with website traffic), is a distributed logic flow. In other words, a framework where developers write a single thread of execution that doesn't care if it's running on backend or frontend, that handles all state synchronization, preferably favoring a deterministic fork/join runtime like Go over async behavior with promise chains. It would work a bit like a conflict-free replicated data type (CRDT) or software transactional memory (STM) but with full atomicity/consistency/isolation/durability (ACID) compliance. So we could finally get back to writing what looks like backend code in Node.js, PHP/Laravel, whatever, but have it run in the browser too so that users can lose their internet connection and merge conflicts "just work" when they go back online.
Somewhat ironically, I thought that was how Node.js worked before I learned it, where maybe we could wrap portions of the code to have @backend {} or @frontend {} annotations that told it where to run. I never dreamed that it would go through so much handwaving to even allow module imports in the browser!
But instead, it seems that framework maintainers that reached any level of success just pulled up the ladder behind them, doing little or nothing to advance the status quo. Never donating to groups working from first principles. Never rocking the boat by criticizing established norms. Just joining all of the other yes men to spread that gospel of "I've got mine" to the highest financial and political levels.
So much of this feels like having to send developers to the ends of the earth to cater to the runtime that I question whether it's even programming anymore. It would be like having people write low-level RTF codewords in MS Word rather than just typing documents via WYSIWYG. We seem to have all lost our collective minds... the emperor has no clothes.
I'm not sure what this is a reference to? Is it actually about Rails?
https://github.com/facebook/react/commit/bbed0b0ee64b89353a4...
and it looks like it's been squashed with some other stuff to hide it, or maybe there are other problems as well.
This pattern appears 4 times and looks like it is reducing the exposed functions to a 'whitelist' of own properties. I presume the modules have dangerous functions in the prototype chain and clients were able to invoke them.
- return moduleExports[metadata.name];
+ if (hasOwnProperty.call(moduleExports, metadata.name)) {
+   return moduleExports[metadata.name];
+ }
+ return (undefined: any);

So I don't think this mechanism is exactly correct. Can you demo it with an actual Next.js project instead of your mock server?
I'm debugging it currently; maybe I'm not on the right path after all.
https://vercel.com/changelog/cve-2025-55182
> Cloudflare WAF proactively protects against React vulnerability
We still strongly recommend that everyone upgrade their Next.js, React, and other React meta-framework (peer) dependencies immediately.
and Deno Deploy/Subhosting: https://deno.com/blog/react-server-functions-rce
I guess now we'll see more bots scanning websites for "/_next" path rather than "/wp-content".
> Experimental React Flight bindings for DOM using Webpack.
> Use it at your own risk.
311,955 weekly downloads though :-|
Beyond that, I think there is good reason to believe that the number is inflated due to automated downloads from things like CI pipelines, where hundreds or thousands of downloads might only represent a single instance in the wild.
The above could be seen as spin too. How could CVSS be more accurate so you'd feel better?