(a bit sad to see all the commenters who clearly haven't read the article, though)
Inertia is "dumb" in that a component can't request data; it must rely on the API knowing which data it needs.
RSC is "smarter", but also to its detriment in my opinion. I have yet to see a "clean" Next project using RSC. Developers end up confused about which kind each component should be (and that some can be both), and "use client" becomes a crutch of sorts, making the projects messy.
Ultimately I think most projects would be better off with Inertia's (BFF) model, because of its simplicity.
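For anyone who hasn't used Inertia, a rough sketch of that model (the route, ORM call, and component names here are all illustrative, not Inertia's actual API): the server decides which page component renders and exactly which props it receives; the component never fetches anything for itself.

  // Server (Express-style, illustrative): the endpoint picks the page's data.
  app.get('/posts/:id', async (req, res) => {
    const post = await db.posts.find(req.params.id); // hypothetical ORM call
    res.json({
      component: 'PostPage', // which page component the client should mount
      props: { post, likeCount: post.likes.length },
    });
  });

  // Client: the named component just receives whatever the server decided.
  function PostPage({ post, likeCount }) {
    return (
      <article>
        <h1>{post.title}</h1>
        <p>{likeCount} likes</p>
      </article>
    );
  }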
And every interaction is server-driven.
The old way was to return HTML fragments and add them to the DOM. There was still a separation of concerns, as the presentation layer on the server didn't care about the interface presented on the client; it was just data, generally composed by a template library. The advent of SPAs means we can reunite the presentation layer (with its template library) on the frontend and just send down the data to be composed with the request's response.
The issue with this approach is that it splits the frontend again, except now you have two template libraries to take care of (in this case one library, but on two sides). The main advantage of having a boundary is that you can have the best representation of the data for each side's logic, converting only when needed. And the conversion layer needs to be simple enough not to introduce complexity of its own. JSON is fine because it's easy to audit a parser, and HTML is fine because it's mostly used as-is on the other side. There are also binary representations, but they have their own strong arguments for their use.
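To make the contrast concrete, a minimal sketch of both transports (endpoint names invented): in the old style the server's template library produces the final markup and the client injects it verbatim; in the SPA style the server ships data and the client's template library composes it.

  // Old way: server-rendered fragment, injected as-is.
  const html = await (await fetch('/comments/fragment?post=42')).text();
  document.querySelector('#comments').innerHTML = html;

  // SPA way: the server sends data; the client owns the presentation.
  const comments = await (await fetch('/api/comments?post=42')).json();
  renderComments(comments); // hypothetical client-side render function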
With JSX on the server side, it's an abstraction where there's no need for one. And in the wrong place to boot.
>The old way was to return HTML fragments and add them to the DOM.
Yes, and the problem with that is described at the end of this part: https://overreacted.io/jsx-over-the-wire/#async-xhp
>JSON is fine [..] With JSX on the server side, it's abstraction when there's no need to be. And in the wrong place to boot.
I really don't know what you mean; the transport literally is JSON. We're not literally sending JSX anywhere. That's also in the article. The JSON output is shown about a dozen times throughout, especially in the third part. You can search for "JSON" on the page. It appears 97 times.
Replacing innerHTML wasn’t working out particularly well—especially for the highly interactive Ads product—which made an engineer (who was not me, by the way) wonder whether it’s possible to run an XHP-style “tags render other tags” paradigm directly on the client computer without losing state between the re-renders.
HTML is still a document format, and while a lot of features have been added to browsers over the years, it remains the core of any web page. It's a given that state doesn't survive renders. In desktop software, the process is alive while the UI is shown, which is great for keeping state, but web pages started as documents, and the API reflects that. So calling this an issue is like saying a fork is not great for cutting.

React is an abstraction over the DOM that gives you a better API when you're trying not to re-render. And you can then simplify the format for transferring data between server and client. Net win on both sides.
But the technique described in the article is like having a hammer and seeing nails everywhere. I don't see the advantage of having a JSX representation of JSON objects on the server side.
That's not what we're building towards. I'm just using "breaking JSON apart" as a narrative device to show that Server Components componentize the UI-specific parts of the API logic (which previously lived in ad-hoc ViewModel-like parts of REST responses, or in the client codebase where REST responses get massaged).
The change-up happens at this point in the article: https://overreacted.io/jsx-over-the-wire/#viewmodels-revisit...
If you're interested in the "final" code, it's here: https://overreacted.io/jsx-over-the-wire/#final-code-slightl....
It blends the previous "JSON-building" into components.
If you have a full-blown SPA on the client side, you shouldn't use ViewModels, as that will tie your backend API to the client. If you go for a mixed approach, then your presentation layer is on the server and it's not an API.
HTMX is cognizant of this fact. What it adds are useful and nice abstractions on the basis that the interface is constructed on one end and used on the other. RSC is a complex solution for a simple problem.
It doesn’t, because you can do this as a layer in front of the backend, as argued here: https://overreacted.io/jsx-over-the-wire/#backend-for-fronte...
Note “instead of replacing your existing REST API, you can add…”. It’s a thing people do these days! Recognizing the need for this layer has plenty of benefits.
As for HTMX, I know you might disagree, but I think it’s actually very similar in spirit to RSC. I do like it. Directives are like very limited Client components, server partials of your choice are like very limited Server components. It’s a good way to get a feel for the model.
I personally find HTMX pairs well with web components for client components since their lifecycle runs automatically when they get added to the DOM.
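A minimal sketch of that pairing (the tag name is made up): because custom element lifecycle callbacks fire whenever the element is attached to the DOM, any fragment HTMX swaps in boots itself with no extra wiring.

  // Client component as a custom element; HTMX just needs to insert the tag.
  class LikeCounter extends HTMLElement {
    connectedCallback() { // fires each time the element is inserted
      this.count = Number(this.getAttribute('count') ?? 0);
      this.onclick = () => {
        this.count += 1;
        this.render();
      };
      this.render();
    }
    render() {
      this.textContent = `Likes: ${this.count}`;
    }
  }
  customElements.define('like-counter', LikeCounter);

This sketch also shows the caveat raised below: if a swap replaces the element, connectedCallback runs again and this.count resets to whatever the server-rendered attribute says, so internal JS state only survives if the swap targets around the component.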
Wouldn't an HTMX update stomp over it and reset the component to its initial state?
What about internal JS state that isn't reflected in the DOM?
That said, I did include recaps of the three major sections at their end:
- https://overreacted.io/jsx-over-the-wire/#recap-json-as-comp...
- https://overreacted.io/jsx-over-the-wire/#recap-components-a...
- https://overreacted.io/jsx-over-the-wire/#recap-jsx-over-the...
That is not a good reason to make the content unnecessarily difficult for its target audience. Being smart also means being able to communicate with those who aren't as brilliant (or just don't have the time).
That's unfair.
If anything you're the one dumbing down what I wrote for your personal taste.
Quite often people read the forum thread first before wasting their life on some large corpus of text that might be crap. High-quality discussions can point out poor-quality (or at least fundamentally incorrect) posts and the reasons behind them, enlightening the rest of the readers.
RSC doesn't impede this. In fact it improves it. Instead of having your ORM's objects converted to JSON, sent, parsed, and finally manipulated to your UI's needs, you skip the whole "convert to JSON" part. You can go straight from your ORM objects (best for data operations) to UI (best for rendering) and skip having to think about how the heck this will be serialized over the wire.
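A sketch of what that looks like with an async Server Component (the Prisma-style query and schema are invented for illustration): the ORM object flows directly into JSX, and React takes care of serializing the rendered tree for the client.

  // Server Component: async, never ships to the client bundle.
  async function PostPreview({ postId }) {
    const post = await db.post.findUnique({ where: { id: postId } }); // illustrative ORM call
    // No hand-written toJSON() contract: the ORM object is used in place.
    return (
      <article>
        <h2>{post.title}</h2>
        <p>{post.body.slice(0, 140)}</p>
      </article>
    );
  }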
> With JSX on the server side, it's abstraction when there's no need to be. And in the wrong place to boot.
JSX is syntactic sugar for a specific format of JavaScript object. It's a pretty simple format really. From ReactJSXElement.js, L242 [1]:
  element = {
    // This tag allows us to uniquely identify this as a React Element
    $$typeof: REACT_ELEMENT_TYPE,
    // Built-in properties that belong on the element
    type,
    key,
    ref,
    props,
  };
As far as I'm aware, TC39 hasn't yet specified which shape of literal is "ok" and which one is "wrong" to run on a computer, depending on whether that computer has a screen or not. I imagine this is why V8, JSC, SpiderMonkey, etc. let you create objects of any shape you want in any environment. I don't understand what's wrong about using this shape on the server.

[1] https://github.com/facebook/react/blob/e71d4205aed6c41b88e36...
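Concretely, the sugar desugars to an ordinary function call that returns an object of that shape; here's the classic createElement form:

  import { createElement } from 'react';

  // These two lines produce equivalent objects.
  const viaJsx = <a href="/feed">Feed</a>;
  const viaCall = createElement('a', { href: '/feed' }, 'Feed');

  // Both are roughly:
  // { $$typeof: Symbol.for('react.element'), type: 'a',
  //   props: { href: '/feed', children: 'Feed' }, key: null, ref: null }

There's nothing screen-specific about building such objects, which is the point: constructing them on a server is as legal as constructing them in a browser.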
I can't let go of the fact that you get the exact same thing if you just render HTML on the server. It's driving me crazy. We've really gone full circle, and I'm not sure for what benefit.
I doubt there were many systems where the server-generated HTML fragments were generic enough that the server and client HTML documents didn't need to know anything about each other's HTML. It's conceivable to build such a system, particularly if it's intended for a screen-reader or an extremely thinly-styled web page, but in either of those cases HTML injection over AJAX would have been an unlikely architectural choice.
In practice, all these systems that did HTML injection over AJAX were tightly coupled. The server made strong assumptions about the HTML documents that would be requesting HTML fragments, and the HTML documents made strong assumptions about the shape of the HTML fragments the server would give it.
> all these systems that did HTML injection over AJAX were tightly coupled
That's because the presentation layer originated on the server. What the server didn't care about was the transformations that alter the display of the HTML on the client. So you can add an extension to your browser that translates the text to another language and it wouldn't matter to the server. Or inject your own styles. Even when you make an AJAX request, you can add JS code that discards the response.
An age ago I took interest in KnockoutJS, based on Model-View-ViewModel, and found it pragmatic and easy to use. It was, however, at the beginning of the mad JavaScript framework-hopping marathon, so it was considered 'obsolete' after a few months. I just peeked; Knockout still exists.
Btw, I wouldn't hop back, but better hop forward, like with Datastar that was on HN the other day: https://news.ycombinator.com/item?id=43655914
Compared to GraphQL, Server Components are a big step back: you have to do manually on the server what was given by default by GraphQL.
The GraphQL which ‘elegantly’ returns a 200 on errors? The GraphQL which ‘elegantly’ encodes idempotent reads as mutating POSTs? The GraphQL which ‘elegantly’ creates its own ad hoc JSON-but-not-JSON language?
The right approach, of course, is HTMX-style real REST (incidentally there needs to be a quick way to distinguish real REST from fake OpenAPI-style JSON-as-a-service). E.g., the article says: ‘your client should be able to request all data for a specific screen at once.’ Yes, of course: the way to request a page is to (wait for it, JavaScript kiddies): request a page.
The even better approach is to advance the state of the art beyond JavaScript, beyond HTML and beyond CSS. There is no good reason for these three to be completely separate syntaxes. Fortunately, there is already a good universal syntax for trees of data: S-expressions. The original article mentions SDUI as ‘essentially it’s just JSON endpoints that return UI trees’: in a sane web development model the UI trees would be S-expressions macro-expanded into SHTML.
Not sure about caching; if anything, GraphQL offers a more granular level of caching, so results can be reused even more?
The only issue I see with GraphQL is that the tooling makes it much harder to get started on a new project, but recent projects such as gql.tada make it much easier, though it could still be easier.
For N+1 I just remember the dataloader (https://github.com/graphql/dataloader). Is it still used?
What about the other things? I remember that Stitching and E2E type safety, for example, were pretty brittle in 2018.
E2E type safety in our case is handled by TypeScript code generation. It works very well. I also happen to have to work in a NextJS codebase, which is the worst piece of technology I have ever had the displeasure of working with, and I don't really see any meaningful difference on a day-to-day basis between the type sharing in the NextJS codebase (where server/client is a very fuzzy boundary) and the other codebase, which just uses code generation and is a client-only SPA.
For stitching we use Nautilus and I've never observed any issues with it. We had one outage because of some description that was updated in some dependency and that sucked but for the most part it just works. Our usage is probably relatively simple though.
Or Prisma.
N+1 and nested queries etc were still problems last I checked (few years ago).
I’m sure there are solutions. Just that it’s not as trivial as “use graphql” and your problems are solved.
Please correct me if I’m wrong
On top of HTTP-level caching, you can do any type of caching (Redis / fs / etc.) just like with a regular REST API, but at a granular level. For example: user { comments(threadId: abc, page: 1, limit: 20) { body, postedAt } } is requested and then cached; when another request comes in for thread(id: abc) { comments(page: 1, limit: 20) { body, postedAt } }, you can share the cache.
N+1 is solved by a tighter integration with the database, for example: https://www.graphile.org/postgraphile/performance/#how-is-it... or a more popular flavor using Prisma: https://pothos-graphql.dev/docs/plugins/prisma/relations#:~:...
but of course, there is always the classic dataloader as well.
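For reference, the dataloader pattern batches every load() queued during one tick into a single lookup; the ORM call below is Prisma-flavored and purely illustrative.

  import DataLoader from 'dataloader';

  // The batch function receives all keys requested in this tick and must
  // return values in the same order as the keys.
  const userLoader = new DataLoader(async (userIds) => {
    const rows = await db.users.findMany({ where: { id: { in: [...userIds] } } }); // illustrative
    const byId = new Map(rows.map((u) => [u.id, u]));
    return userIds.map((id) => byId.get(id) ?? null);
  });

  // In a resolver: N load() calls collapse into one query.
  // comments.map((c) => userLoader.load(c.authorId));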
I am not saying that if you use GraphQL all the problems will be solved, but I am saying that the problem OP proposed has been solved in an arguably "better" way, as it does not tie the presentation (HTML) to the data for cases of multiplatform apps like web or native apps.
Obviously if you have a GraphQL backend, you couldn't care less, and the only benefit you'd get is reducing bundle size, e.g. for content-heavy static pages. But you'll lose client-side caching, so you can't have your cake and eat it too.
Just a matter of trade-offs
I assumed RSC was more concerned with which end does the rendering, and GraphQL with how to fetch just the right data in one request.
Anyway, it's hard to deny that React dev nowadays is an ugly mess. Have you given any thought to what a next-gen framework might look like (I'm sure you have)?
When you have a post with a like button and the user presses the like button, how do the like button props update? I assume that it would be a REST request to update the like model. You could make the like button refetch the like view model when the button is clicked, but then how do you tie that back to all the other UI elements that need to update as a result? E.g. what if the UI designer wants to put a highlight around posts which have been liked?
On the server, you've already lost the state of the client after that first render, so doing some sort of reverse dependency trail seems fragile. So the only option would be to have the client do it, but then you're back to the waterfall (unless you somehow know the entire state of the client on the server for the server to be able to fully re-render the sub-tree, and what if multiple separate subtrees are involved in this?). I suppose that it is do-able if there exists NO client side state, but it still seems difficult. Am I missing something?
Right, so there's actually a few ways to do this, and the "best" one kind of depends on the tradeoffs of your UI.
Since Like itself is a Client Component, it can just hit the POST endpoint and update its state locally. I.e. without "refreshing" any of the server stuff. It "knows" it's been liked. This is the traditional Client-only approach.
Another option is to refetch UI from the server. In the simplest case, refetching the entire screen. Then yes, new props would be sent down (as JSON) and this would update both the Like button (if it uses them as its source of truth) and other UI elements (like the highlights you mentioned). It'll just send the entire thing down (but it will be gracefully merged into the UI instead of replacing it). Of course, if your server always returns an unpredictable output (e.g. a Feed that's always different), then you don't want to do that. You could get more surgical with refreshing parts of the tree (e.g. a subroute) but going the first way (Client-only) in this case would be easier.
In other words, the key thing that's different is that the client-side things are highly dynamic so they have agency in whether to do a client change surgically or to do a coarse roundtrip.
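A rough sketch of both strategies in one component (the endpoint is made up; useRouter/router.refresh follow Next.js App Router conventions):

  'use client';
  import { useState } from 'react';
  import { useRouter } from 'next/navigation';

  export function LikeButton({ postId, initialLiked }) {
    const [liked, setLiked] = useState(initialLiked);
    const router = useRouter();

    async function onClick() {
      await fetch(`/api/posts/${postId}/like`, { method: 'POST' }); // hypothetical endpoint
      setLiked(true);   // option 1: surgical, client-only update
      router.refresh(); // option 2: re-run the server tree; new props
                        // (highlights etc.) merge in without losing client state
    }

    return <button onClick={onClick}>{liked ? 'Liked' : 'Like'}</button>;
  }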
I do like scrappy Rails views that can be assembled fast, but the React views our FE dev is putting on top of existing Rails controllers have a much better UX.
Feels like HTMX, feels like we've come full circle.
And yep, you're right! If we need a lot more client-side interactivity, just rendering JSX on the server side won't cut it.
Fresh/Partials
Astro/HTMX with Partials