I think your checklist of characteristics frames things well. It reminds me of Remix's introduction to the library:
https://remix.run/docs/en/main/discussion/introduction

> Building a plain HTML form and server-side handler in a back-end heavy web framework is just as easy to do as it is in Remix. But as soon as you want to cross over into an experience with animated validation messages, focus management, and pending UI, it requires a fundamental change in the code. Typically, people build an API route and then bring in a splash of client-side JavaScript to connect the two. With Remix, you simply add some code around the existing "server side view" without changing how it works fundamentally
It was this argument (and a lot of playing around with challengers like htmx and JSX-like syntax for Python / Go) that brought me round to the idea that RSCs, or something similar, might well be the way to go.
Bit of a shame seeing how poor some of the engagement has been on here and Reddit, though. I thought the structure and length of the article were justified and helpful. It's concerning how many people's responses are quite clearly covered in TFA, which they didn't read...
In 2D, it seems like you're just reinventing the wheel. But in 3D, you can see that some hack or innovation allowed you to take a new stab at the problem.
Other times I imagine trilemmas, as depicted in Scott McCloud's awesome book Understanding Comics.
There's a bounded design (solution) space, with concerns anchoring each corner. Like maybe fast, simple, and correct. Or functional, imperative, and declarative. Or weight, durability, and cost. Or...
Our job is to divine a solution that lands somewhere in that space, balancing those concerns, as best appropriate for the given context.
By extension, there's no one-size-fits-all perfect solution. (Though there are "good enough" general-purpose solutions.)
The beauty of experiencing many, many different cuts at a problem is that one can start to intuit things: quickly understanding how a new product fits in the space, quickly narrowing the likely solution space for the current project, comparing and contrasting stuff in an open-minded, semi-informed way.
Blah, blah, blah.
Vercel fixes this for a fee: https://vercel.com/docs/skew-protection
I do wonder how many people will use the new React features and then have short outages during deploys, like the FOUC of the past. Even their Pro plan has only 12 hours of protection, so if you leave a tab open for 24 hours and then click a button, it might hit a server where the server components and functions are incompatible.
If your rollout times are very short then skew is not a big concern for you, because it will impact very few users. If it lasts hours, then you have to solve it.
After the rollout is complete, then reload is fine. It's a bit user hostile but they will reload into a usable state.
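A minimal sketch of that reload-on-skew mitigation, with all names invented: bake a build id into the client bundle, have the server reject calls from a different build, and let stale tabs refresh into a consistent state.

```javascript
// Hypothetical version-skew guard (names invented, not Vercel's actual API):
// the deployed build id is baked into the client bundle at build time; the
// server rejects calls from any other build so stale tabs reload instead of
// invoking server functions that may no longer exist.
const SERVER_BUILD = "2025-04-15.3";

function handleRpc(clientBuild) {
  if (clientBuild !== SERVER_BUILD) {
    // A mismatched tab could call into incompatible server components.
    return { status: 409, action: "reload" };
  }
  return { status: 200, action: "proceed" };
}
```

Skew-protection services essentially replace the 409-and-reload with routing the request to the old deployment for a grace period.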
JSX is a descendant of a PHP extension called XHP [1] [2]
[1] https://legacy.reactjs.org/blog/2016/09/28/our-first-50000-s...
When you'd annotate a React component with ReactXHP (if I remember correctly), some codegen would generate an equivalent XHP component that takes the same props and can be used anywhere in XHP. It worked very well when I last used it!
Slightly less related but still somewhat, they have an extension to GraphQL as well that allows you to call/require React components from within GraphQL. If you look at a random GraphQL response there's a good chance you will see things like `"__dr": "GroupsCometHighlightStoryAlbumAttachmentStyle.react"`. I never looked into the mechanics of how these worked.
Fascinating, I didn't know there was such a close integration between XHP and React. I imagined the history like XHP being a predecessor or prior art, but now I see there was an overlap of both being used together, long enough to have special language constructs to "bind" the two worlds.
"ReactXHP" didn't turn up anything, but XHP-JS sounds like it.
> We have a rapidly growing library of React components, but sometimes we’ll want to render the same thing from a page that is mostly static. Rewriting the whole page in React is not always the right decision, but duplicating the rendering code in XHP and React can lead to long-term pain.
> XHP-JS makes it convenient to construct a thin XHP wrapper around a client-side React element, avoiding both of these problems. This can also be combined with XHPAsync to prefetch some data, at the cost of slightly weakening encapsulation.
https://engineering.fb.com/2015/07/09/open-source/announcing...
This is from ten years ago, and it's asking some of the same big questions as the posted article, JSX over the Wire. How to efficiently serve a mixture of static and dynamic content, where the same HTML templates and partials are rendered on server and client side. How to fetch, refresh data, and re-hydrate those templates.
With this historical context, I can understand better the purpose of React Server Components, what it's supposed to accomplish. Using the same language for both client/server-side rendering solves a large swath of the problem space. I haven't finished reading the article, so I'll go enjoy the rest of it.
The bigger issue is the changes to events and how they get fired, some of which make sense, others of which just break people's expectations of how JavaScript should work when they move to non-React projects.
The bigger difference between React and other frameworks, and the DOM itself, is when it comes to events, in particular events like `onChange` actually behaving more like the `onInput` event.
That said, "class" shows up a lot more in most html than "input", so I can see the advantage of being consistent with html there.
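To make the `onChange` surprise concrete, here's a simplified illustration (this is not React's real internal table, just the observable behavior for text inputs): React's `onChange` fires on every keystroke, like the native `input` event, rather than on blur/commit like the native `change` event.

```javascript
// Simplified illustration of React's event naming for text inputs (not
// React's actual internals): onChange maps to the native "input" event,
// which fires per keystroke, not the native "change" event.
const reactTextInputEvents = {
  onChange: "input", // the surprising one: NOT the native "change" event
  onInput: "input",
  onBlur: "blur",
};

function nativeEventFor(reactProp) {
  return reactTextInputEvents[reactProp];
}
```

Developers coming back to vanilla DOM often reach for `change` expecting per-keystroke updates and get the blur-time behavior instead.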
Ultimately this really just smooshed the interface around without solving the problem it sets out to solve: it moves the formatting of the markup to the server, but you can't move all of it unless your content is entirely static (and if you're getting it from the server, SOMETHING has to be interactive).
Consider making a list of posts from some sort of feed. If you need to handle any events in an item, the component representing that item can't be a server component. So now you're limited to just making the list component itself a server component. Well, what good is that?
The whole point of this is to move stuff off of the client. But it's not even clear that you're saving any bytes at all in this scenario, because if there are any props duplicated across items in the list, you've got to duplicate the data in the JSON: the shallower the returned JSX, the more raw data you send instead of JSX data. Which completely defeats the point of going through all this trouble in the first place.
...have a client component inside the post. For example, for each post, have a server component, that contains a <ClientDeleteButton postId={...} />.
...have a wrapper client component that takes a server components as a child. Eg. if you want to show a hover-card for each post:
<ClientHoverCard preview={<Preview />}>
<ServerPost />
</ClientHoverCard>
https://nextjs.org/docs/app/building-your-application/render...

> props duplicated across items in the list, you've got to duplicate the data in the JSON
I'm pretty sure gzip would just compress that.
Bytes on the wire aren't nearly as important in this case. That value still has to be decompressed into a string and that string needs to be parsed into objects and that's all before you pump it into the renderer.
> have a wrapper client component that takes a server components as a child.
That doesn't work for the model defined in this post. Because now each post is a request to the server instead of one single request that returns a rendered list of posts. That's literally the point of doing this whole roundabout thing: to offload as much work as possible to the server.
> For example, for each post, have a server component, that contains a <ClientDeleteButton postId={...} />.
And now only the delete button reacts to being pressed. You can't remove the post from the page. You can't make the post semi transparent. You can't disable the other buttons on the post.
Without making a mess with contexts, state and interactivity can only happen in the client component islands.
And you know what? If you're building a page that's mostly static on a site that sees almost no code changes or deployments, this probably works great for certain cases. But it's far from an ideal practice for anything that's even mildly interactive.
Even just rendering the root of your render tree is problematic, because you probably want to show loading indicators and update the page title or whatever, and that means loading client code to load server code that runs more client code. At least with good old fashioned SSR, by the time code in the browser starts running, everything is already ready to be fully interactive.
That's where you're wrong. The JSX snippet that I posted above gets turned into:
{
  type: "src/ClientHoverCard.js#ClientHoverCard",
  props: {
    preview: /* this is already rendered on the server */,
    children: /* this is already rendered on the server */
  }
}
If you wanted to fade the entire post when pressing the delete button without contexts, you'd create a client component like this:

"use client"

function DeletablePost({ children }: { children: ReactNode }) {
  const [isDeleted, setDeleted] = useState(false)
  return (
    <div style={{ opacity: isDeleted ? 0.5 : 1 }}>
      {children}
      <DeleteButton onChange={setDeleted} />
    </div>
  )
}
And pass it a server component like this:

<DeletablePost>
  <ServerPost />
</DeletablePost>
One way to decide if this architecture is for you, is to consider where your app lands on the curve of “how much rendering code should you ship to client vs. how much unhydrated data should you ship”. On that curve you can find everything from fully server-rendered HTML to REST APIs and everything in between, plus some less common examples too.
Fully server-rendered HTML is among the fastest to usefulness, relying only on the browser to render HTML. By contrast, in traditional React, server rendering is only half of the story: after the layout is sent, a great many API calls have to happen to produce a fully hydrated page.
Your sweet spot on that curve is different for every app and depends on a few factors - chiefly, your app’s blend of rate-of-change (maintenance burden over time) and its interactivity.
If the app will not be interactive, take advantage of fully-backend rendering of HTML since the browser’s rendering code is already installed and wicked fast.
If it’ll be highly interactive with changes that ripple across the app, you could go all the way past plain React to a Redux/Flux-like central client-side data store.
And if it’ll be extremely interactive client-side (eg. Google Docs), you may wish to ship all the code to the client and have it update its local store then sync to the server in the background.
But this React Server Components paradigm is surprisingly suited to a great many CRUD apps. Definitely will consider it for future projects - thanks for such a great writeup!
Fully server-rendered HTML is the REST API. Anything feeding back json is a form of RPC call, the consumer has to be deeply familiar with what is in the response and how it can be used.
In the previous article, I was annoyed a bit by some of the fluffiness and redefinition of concepts that I was already familiar with. This one, however, felt much more concrete, and grounded in the history of the space, showing the tradeoffs and improvements in certain areas between them.
The section that amounted to "I'm doing all of this other stuff just to turn it into HTML. With nice, functional, reusable JSX components, but still." really hit close to how I've felt.
My question is: When did you first realize the usefulness of something like RSC? If React had cooked a little longer before gaining traction as the client-side thing, would it have been for "two computers"?
I'm imagining a past where there was some "fuller stack" version that came out first, then there would've been something that could've been run on its own. "Here's our page-stitcher made to run client-side-only".
So, let's assume the alternative universe, where we did not mess up and get REST wrong.
There's no constraint saying a resource (in the hypermedia sense) has to have the same shape as your business data, or anything else really. A resource should have whatever representation is most useful to the client. If your language is "components" because you're making an interactive app – sure, go ahead and represent this as a resource. And we did that for a while, with xmlhttprequest + HTML fragments, and PHP includes on the server side.
What we were missing all along was a way to decouple the browser from a single resource (the whole document), so we could have nested resources, and keep client state intact on refresh?
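The old fragment-over-the-wire style can be sketched in a few lines (all names invented): the resource's representation is ready-to-insert, UI-shaped HTML rather than raw business data, and the client just fetches it and swaps it into the document.

```javascript
// Hypothetical fragment endpoint (names invented): the resource is shaped
// for the UI, not for the business schema. In the xmlhttprequest era a
// client would fetch this string and assign it to an element's innerHTML.
function renderPostFragment(post) {
  return [
    `<article class="post" id="post-${post.id}">`,
    `  <h2>${post.title}</h2>`,
    `  <p>${post.body}</p>`,
    `</article>`,
  ].join("\n");
}

const fragment = renderPostFragment({ id: 7, title: "Hello", body: "REST, but for UI." });
```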
It has also sparked a strong desire to see RSCs compared and contrasted with Phoenix LiveView.
The distinction between RSCs sending "JSX" over the Wire, and LiveViews sending "minimal HTML diffs"[0] over the wire is fascinating to me, and I'm really curious how the two methodologies compare/contrast in practice.
It'd be especially interesting to see how client-driven mutations are handled under each paradigm. For example, let's say an "onClick" is added to the `<button>` element in the `LikeButton` client component -- it immediately brings up a laundry list of questions for me:
1. Do you update the client state optimistically?
2. If you do, what do you do if the server request fails?
3. If you don't, what do you do instead? Intermediate loading state?
4. What happens if some of your friends submit likes the same time you do?
5. What if a user accidentally "liked", and tries to immediately "unlike" by double-clicking?
6. What if a friend submitted a like right after you did, but theirs was persisted before yours?
(I'll refrain from adding questions about how all this would work in a globally distributed system (like BlueSky) with multiple servers and DB replicas ;))
Essentially, I'm curious whether RSCs offer potential solutions to the same sorts of problems Jose Valim identified here[1] when looking at Remix Submission & Revalidation.
Overall, LiveView & RSCs are easily my top two most exciting "full stack" application frameworks, and I love seeing how radically different their approaches are to solving the same set of problems.
[0]: <https://www.phoenixframework.org/blog/phoenix-liveview-1.0-r...>
[1]: <https://dashbit.co/blog/remix-concurrent-submissions-flawed>
1./2.: You can update it optimistically. [0]
3.: Depends on the framework's implementation. In Next.js, you'd invalidate the cache. [1][2]
4.: In the case of the like button, it would be a "form button" [3] which would have different ways [4] to show a pending state. It can be done with useFormStatus, useTransition or useActionState depending on your other needs in this component.
5.: You block the double request with useTransition [5] to disable the button.
6.: In Next, you would invalidate the cache and would see your like and the like of the other user.
[0] https://react.dev/reference/react/useOptimistic
[1] https://nextjs.org/docs/app/api-reference/functions/revalida...
[2] https://nextjs.org/docs/app/api-reference/directives/use-cac...
[3] https://www.robinwieruch.de/react-form-button/
[4] https://www.robinwieruch.de/react-form-loading-pending-actio...
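Framework aside, the optimistic-update-with-rollback pattern behind questions 1 and 2 can be sketched in plain JavaScript (all names here are invented):

```javascript
// Hypothetical optimistic like-toggle (names invented): flip the local state
// immediately so the UI responds before the round trip, persist in the
// background, and roll back if the server rejects the change.
async function toggleLike(state, persist) {
  const previous = state.liked;
  state.liked = !state.liked; // optimistic: UI updates first
  try {
    await persist(state.liked);
  } catch {
    state.liked = previous; // server failed: restore the old state
  }
  return state.liked;
}
```

In React terms this is roughly what `useOptimistic` packages up; the double-click in question 5 is handled by making `persist` idempotent or by disabling the control while a request is in flight.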
I don’t see a point in making that a server-side render. You are now coupling backend to frontend, and forcing the backend to do something that is not its job (assuming you don’t do SSR already).
One can argue that it's useful if you would use the endpoint for ESI/SSI (I loved it in my Varnish days), but that's only a sane option if you are doing server-side renders for everything. Mixing CSR and SSR is OK, but that's a huge amount of extra complexity you could avoid by just picking one, and adding SSR is mostly for SEO purposes, from which session-dependent content is excluded anyway.
My brain much prefers the separation of concerns. Just give me a JSON API, and let the frontend take care of representation.
* that the code which fetches data required for UI is much more efficiently executed on the server-side, especially when there's data dependencies - when a later bit of data needs to be fetched using keys loaded in a previous load
* that the code which fetches and assembles data for the UI necessarily has the same structure as the UI itself; it is already tied to the UI semantically. It's made up out of front end concerns, and it changes in lockstep with the front end. Logically, if it makes life easier / faster, responsibility may migrate between the client and server, since this back end logic is part of the UI.
The BFF thing is a place to put this on the server. It's specifically a back end service which is owned by the front end UI engineers. FWIW, it's also a pattern that you see a lot in Google. Back end services serve up RPC endpoints which are consumed by front end services (or other back end services). The front end service is a service that runs server-side, and assembles data from all the back end services so the client can render. And the front end service is owned by the front end team.
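The data-dependency point above is easiest to see in code. Here's a minimal BFF-style sketch (service calls stubbed with invented data; in practice they'd be RPC/REST calls to the backend teams' services): the author fetch needs a key from the post fetch, so doing both hops server-side saves the client a sequential round trip.

```javascript
// Stubbed backend services (invented data; really these would be RPC calls).
async function fetchPost(id) {
  return { id, title: "JSX over the Wire", authorId: 7 };
}
async function fetchAuthor(id) {
  return { id, name: "Ada" };
}

// The BFF endpoint: shaped for one screen, owned by the front-end team.
// The dependent fetch happens over the fast server-side network instead of
// forcing the client through two sequential round trips.
async function postScreenData(postId) {
  const post = await fetchPost(postId);
  const author = await fetchAuthor(post.authorId); // needs a key from the first call
  return { title: post.title, byline: `by ${author.name}` };
}
```

When the UI changes shape, this function changes with it, which is exactly why it belongs to the front-end team even though it runs on a server.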
Dan's post somehow reinforces the opinion that SSR frameworks are not full-stack; they can at most do some BFF jobs, and you still need an actual backend.
Usually the endpoints get too fat, then there's a performance push to speed them up, then you start thinking about fat and thin versions. I've seen it happen repeatedly.
Congratulations, you reinvented GraphQL. /s
Jokes aside, I don't care much about the technology, but what exactly are we optimizing here? Does this BFF connect directly to the (relational / source-of-truth) DB to fetch the data with a massaged-up query, or does it just use the REST API that the backend team provides? If the latter, we're just shifting complexity around; and if the former, even if it connects to a read replica, you still have to coordinate schema upgrades (which is harder than coordinating a JSON endpoint).
Just let the session-dependent endpoint live in the backend. If the data structure needs changes, the backend team is in the best position to keep it up to date, and they can do it without waiting for the front-end team to be ready to handle it in their BFF. A strong contract between both ends (ideally with an OpenAPI spec) goes a really long way.
https://overreacted.io/react-for-two-computers/ https://news.ycombinator.com/item?id=43631004 (66 points, 6 days ago, 54 comments)