In my experience, a lot of SPAs transfer more data than the front-end actually needs. One team I worked on was sending 4MB over the wire to render 14KB of actual HTML. (No, there wasn't some processing happening on the front-end that needed the extra data.) And that was with GraphQL; some dev had just plunked every field into the GraphQL query instead of only the ones actually needed. I've seen that pattern a lot, although in some cases it's been to my benefit, like finding more details on a tracking website than the UI presented.
Just look at the source code of amazon.com. It's a mess. But I bet it is more of an organizational problem than a tech stack problem, for a website worked on by literally hundreds of teams (if not more) where everyone crams their little feature into the home page.
I find that some techs tend to cause badly written code. I have junior coworkers that can write clear Python after a short intro, but can't write clean R after a year using it daily. I don't know if it is caused by the philosophy behind the language, the community, the tutorials and docs...
However, I've recently made the difficult decision to rewrite the frontend in React (specifically React/TS, TanStack Query, Orval, and Shadcn). In a perfect world, I'd rewrite the Python backend in Go, but I have to table that idea for now.
The reason? The "LLM tax." While HTMX is a joy for manual development, my experience over the last year is that LLMs struggle with the "glue" required for complex UI items in HTMX/Alpine. Conversely, the training data for React is so massive and the patterns so standardized that the AI productivity gains are impossible to ignore.
Recently, I used Go/React for a microservice that has actually grown to a similar scale of complexity as the Python/HTMX app I focused on for most of the year, and it was so much more productive than Python/HTMX. In a month of work I got done what took me about 4-5 months in Python/HTMX. I assume that's because of Go's typing, and because the LLM could generate perfectly typed hooks from my OpenAPI spec via Orval and build out Shadcn components without hallucinating.
I still love the HTMX philosophy for its simplicity, but in 2024/2025, I've found that I'm more productive choosing the stack that the AI "understands" best. For new projects, Go/React will now be my default. If I have to write something myself again (God, I hope not) I may use HTMX.
Any big reason to use HTMX instead? Is Turbo not really discussed much because of its association with RoR?
I get server-side rendering. I can boot my server, and everything is there. If my model changes, I can update the view. It's cohesive.
I get client-side rendering. The backend returns data, the frontend decides what to do with it. It's a clear separation. The data is just data, my mobile app can consume the same "user" endpoint.
This seems like a worst-of-both-worlds paradigm, where the backend needs to be intimately aware of the frontend context. Am I not getting it or is there a massive implicit coupling?
Now suppose I need to display the same "user" data twice, in different formats, on my website: say, as a table in "My account", and as a dropdown in the menu bar. Do I need to write two endpoints in the backend, returning the same data in two different formats?
Imagine you need firstName/email in one place, firstName/email in another, and firstName/D.O.B in another.
In a plain JSON world, I'd craft a single "user" endpoint, returning those three datapoints, and I would let the frontend handle it. My understanding is that with HTMX, I'd have to craft (and maintain/test) three different endpoints, one per component.
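Roughly the contrast I mean, as a Flask-style sketch (the routes, fields, and lookup are all made up):

    from flask import Flask, jsonify, render_template_string

    app = Flask(__name__)

    def load_user(user_id):
        # Stand-in for the real lookup.
        return {"firstName": "John", "email": "john@example.com", "dob": "1990-01-01"}

    # JSON world: one endpoint, each frontend component picks the fields it needs.
    @app.get("/api/users/<int:user_id>")
    def user_json(user_id):
        return jsonify(load_user(user_id))

    # Fragment world: one endpoint (and template) per rendered view of the same data.
    @app.get("/users/<int:user_id>/contact-card")
    def user_contact_card(user_id):
        u = load_user(user_id)
        return render_template_string("<p>{{ u.firstName }} ({{ u.email }})</p>", u=u)

    @app.get("/users/<int:user_id>/profile-row")
    def user_profile_row(user_id):
        u = load_user(user_id)
        return render_template_string("<tr><td>{{ u.firstName }}</td><td>{{ u.dob }}</td></tr>", u=u)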
I feel like you would quickly end up in a combinatorial explosion for anything but the simplest page. I really don't get the appeal at all. Of course everything can be very simple and lightweight if you hide the complexity under the bed!
via 3 different pieces of rendering logic (such as JSX templates), same as the server
> if you hide the complexity under the bed!
which is what you just did by dismissing the reality that client-side requires the same 3 renderers that server-side requires! (plus serialization and deserialization logic - not a big deal with your simple example but can be a major bottleneck with complex, nested, native objects)
In a classic app, there's one entity that keeps the state (the server), and one entity that decides how it is rendered (the frontend). This is very easy to reason about, and the contract is very clear. If I want to understand what happens, I can open my frontend app and see "Hello <b>{{name}}</b>".
In HTMX, the logic is spread. What I see is a construct that says "Replace this with the mystery meat from the backend, whatever that is".
Assume there's a styling issue. The name looks too big. Is it because the frontend is adding a style that cascades, or is it because the backend returns markup with a class? Now any issue has an extra level of indirection, because you've spread your rendering logic into two places.
> which is what you just did by dismissing the reality that client-side requires the same 3 renderers that server-side requires!
But what's complex isn't the number of renderers, it's where the logic happens. The HTMX website is full of examples where the header of a table is rendered in the frontend, and the body is rendered in the backend. How's that considered sane when changing the order of the columns turns into a real ordeal?
> In HTMX, the logic is spread
I disagree... with React, by definition, the logic is spread. Persistent data, and usually, business logic, is in some data store accessible via the app backend. And then a totally different entity, the front-end, renders that data (often implementing additional business logic) and manages state (which is often not yet recorded in the data store until various updates can be performed).
HTMX helps keep everything aligned. All the rendering logic is right there along with the data and the business logic. If I'm looking for a renderer, not only is it easy to find the template that produced "Hello <b>{{name}}</b>" but it is also easy to find the source of {{name}}. Which also makes it easy to alter {{name}}, say, from Smith, John to Mr. John Smith - because the data store and business logic are right there, it is low effort to switch the order and to begin including the salutation.
Your "front-end" is still all in one place, except it's on the server, and typically rendered via templates instead of React components. But the templates can often access native objects (including their properties and functions) instead of solely relying on JSON objects.
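As a sketch of what that looks like in practice (Flask/Jinja, made-up model), the template calls straight into the object, so switching from "Smith, John" to "Mr. John Smith" is a one-line change sitting right next to the business logic:

    from dataclasses import dataclass
    from flask import Flask, render_template_string

    app = Flask(__name__)

    @dataclass
    class User:
        salutation: str
        first: str
        last: str

        def display_name(self) -> str:
            # Was: f"{self.last}, {self.first}" -- flipping the order and adding the
            # salutation happens here, next to the data, not in a separate frontend.
            return f"{self.salutation} {self.first} {self.last}"

    @app.get("/greeting")
    def greeting():
        user = User("Mr.", "John", "Smith")  # stand-in for the real lookup
        return render_template_string("Hello <b>{{ user.display_name() }}</b>", user=user)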
This comment is already long but regarding data tables... yea, highly dynamic pure data-based UI's such as charts and tables aren't HTMX's forte. But even then there are ways... the data- attribute is very useful, and since you are already using JS to handle sorting, filtering, re-ordering, showing tooltips, etc, it's very possible to render valid HTML fragments that can be properly rendered via that same JS (or contain data which can be).
any factoring you do on the front end you can do on the back end too; there's nothing magic about it, and you don't need different endpoints: that can be a query parameter or whatever (if it's even a request; in most hypermedia-based apps you'd just render what you need, when you need it, inline with a larger request)
it's a different way of organizing things, but there are plenty of tools for organizing hypermedia on the server well, you just need to learn and use them
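For example, the query-parameter version could look something like this (a Flask-style sketch; the route, templates, and data are just illustrative):

    from flask import Flask, request, render_template_string

    app = Flask(__name__)

    # Same data, different renderings, selected by ?view=
    TEMPLATES = {
        "table-row": "<tr><td>{{ u.firstName }}</td><td>{{ u.email }}</td></tr>",
        "menu-item": '<option value="{{ u.id }}">{{ u.firstName }}</option>',
    }

    @app.get("/users/<int:user_id>")
    def user_fragment(user_id):
        u = {"id": user_id, "firstName": "John", "email": "john@example.com"}  # stand-in lookup
        view = request.args.get("view", "table-row")
        return render_template_string(TEMPLATES[view], u=u)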
The main problem is that this is extremely, extremely expensive in practice. You end up in Big Webapp hell where you're returning 4MB of data to display a 10-byte string on the top right of the screen with the user's name. And then you need to do this for ALL objects.
What happens if a very simple page needs tiny bits of data from numerous objects? It's slow as all hell, and now your page takes 10 seconds to load on mobile. If you just rendered it server-side, all the data is in reach and you just... use what you need.
And that's not even taking into account the complexity. Everything becomes much more complex because the backend returns everything. You need X + 1, but you have to deal with X + 1,000.
And then simple optimization techniques just fall flat on their face, too. What if we want to do a batch update? Tsk tsk, that's not RESTful. No, instead send out 100 requests.
What about long running tasks? Maybe report generation? Tsk tsk, that's not RESTful. No, generate the report on the frontend using a bajillion different objects from god knows where across the backend. Does it even make sense with the state and constraints of the data? Probably not, that's done at a DB level and you're just throwing all that away to return JSON.
I mean, consider such a simple problem. I have a User object, the user has a key which identifies their orders, and each order has a key which identifies the products in that order. Backend-driven? Just throw that in HTML, boom, 100 lines of code.
RESTful design? First query for the User object. Then, extract their orders. Then, query for the order object. For each of those, query for their products. Now, reconstruct the relationship on the frontend, being careful to match the semantics of the data. If you don't, then your frontend is lying and you will try to persist something you can't, or display things in a way they aren't stored.
The backend went from one query to multiple endpoints, multiple queries, and 10x the amount of code. The frontend ballooned, too, and we're now essentially doing poor man's SQL in JS. But does the frontend team get the bliss of not dealing with the backend? No, actually - because they need to check the database and backend code to make sure their semantics match the real application semantics.
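For comparison, the backend-driven version is roughly this shape (a Flask/Jinja sketch with stand-in models; the real thing would run one joined query):

    from flask import Flask, render_template_string

    app = Flask(__name__)

    PAGE = """
    <h1>{{ user.name }}</h1>
    {% for order in user.orders %}
      <h2>Order {{ order.id }}</h2>
      <ul>
        {% for product in order.products %}<li>{{ product.name }}</li>{% endfor %}
      </ul>
    {% endfor %}
    """

    def load_user_with_orders(user_id):
        # Stand-in for one joined query against the real data store.
        return {"name": "John Smith",
                "orders": [{"id": 1, "products": [{"name": "Widget"}, {"name": "Gadget"}]}]}

    @app.get("/users/<int:user_id>/orders")
    def user_orders(user_id):
        # One handler walks user -> orders -> products and hands it all to the template.
        return render_template_string(PAGE, user=load_user_with_orders(user_id))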
You went on a long tirade against REST, which nobody mentioned. Just ... write an endpoint returning the data you need, as JSON? But write it once, instead of once per view variant?
> Just throw that in HTML, boom, 100 lines of code.
Now you need the exact same data but displayed differently. Boom, another 100 lines of code? Multiply by the number of times you need that same data? Boom indeed, it just blew up.
It isn't 2004 anymore - all of the server-side frameworks have components.
Except, now instead of using serialization and JSON, it's a real API. In code. I can click and go to definition.
> Just ... write an endpoint returning the data you need, as JSON? But write it once, instead of once per view variant?
What you just said directly contradicts itself.
If each view variant needs slightly different data, or ordering, or whatever, we now need to make N APIs. Or we don't. And now we're back at square one and everything I said is valid.
The more modular and reusable your API is, the less performant it will be and the more bugs it will introduce. I'm all for the God API that has 1 million endpoints each doing one specific thing. But it seems nobody else is, so instead we get the fucked-ass RESTful APIs that are so bad and lead to such overly complex code that we're pushed into shipping critical CVEs trying to work around them (looking at you, Next).
Ultimately the frontend state cannot exist without the backend, where data is persisted. Most apps don't need the frontend state, all it really gives you is maybe a better UX? But in most cases the tradeoff in complexity isn't worth it.
I don't see how it's any simpler to shift partial presentation duties to the backend. Consider this example:
https://htmx.org/examples/active-search/
The backend is supposed to respond with the rows to fill a table. You have an extremely tight coupling between the two. Something as simple as changing the order of the columns would require three releases:
- A new version of the backend endpoint
- A frontend change to consume the new endpoint
- A version to delete the old endpoint
I'm not trying to be obtuse, but I fail to see how this makes any sense.
Consider something as simple as an action updating content in multiple places. It happens all the time: changing your name and also updating the "Hello $name" in the header, cancelling an order that updates the order list but also the order count ...
There's _four_ ways to do it in HTMX. Each "more sophisticated" than the previous one. Which is what one really wants, sophistication, isn't it?
https://htmx.org/examples/update-other-content/
I really struggle to decide which example is worse. Not only does the backend need to be aware of the immediate frontend context, it also needs to be aware of the entire markup.
In example two, a seemingly innocuous renaming of the id of the table would break the feature, because the backend uses it (!) to update the view "out of band".
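Concretely, a response in that style has roughly this shape (a sketch, not the actual markup from the htmx docs):

    from flask import Flask

    app = Flask(__name__)

    @app.post("/contacts")
    def add_contact():
        # ... persist the contact ...
        return (
            # Normal swap target: a confirmation message for the form.
            "<p>Saved.</p>"
            # Out-of-band swap: replaces whatever element on the page has id="contacts-table".
            # Rename that id in the page template and this response silently stops updating it.
            '<table id="contacts-table" hx-swap-oob="true">'
            "<tr><td>Joe Smith</td><td>joe@smith.org</td></tr>"
            "</table>"
        )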
I'm really trying to be charitable here, but I really wonder what niche this is useful for. It doesn't seem good for anything complex, and if you can only use it for simple things, what value does it bring over plain javascript?
No, there's two states here: the frontend state, and the backend state.
The name example is trivial, but in real applications, your state is complex and diverges. You need to constantly sync it, constantly validate it, and you need to do that forever. Most of your bugs will be here. The frontend will say the name is name, but actually it's new_name. How do you fix that? Query the backend every so often? Maybe have the backend send push updates over a websocket? These are hard problems.
Ultimately, there is only one "true" state, the backend state, but this isn't the state your users see. You can see this in every web app today, even multi-billion-dollar ones. I put in some search criteria, maybe check a few boxes. Refresh? All gone. 90% of the time I'm not even on the same page. Push the back button? Your guess is as good as mine where I go. The backend thinks I'm one place, but the frontend clearly disagrees.
SSR was so simple because it FORCES the sync points. It makes them explicit and unavoidable. With a SPA, YOU have to make the sync points. And keep them in sync and perfect forever. Otherwise your app will just be buggy, and they usually are.
I fail to see how HTMX helps. I fail to see how SSR necessarily helps too. You could be serving a page for an order that's been cancelled by the time the user sees it.
> I put in some search criteria, maybe a check a few boxes. Refresh? All gone
You could see that 20 years ago too, unless you manually stored the state of the form somewhere. Again, what does it have to do with HTMX, or React, or SSR?
For your name example, you could use hx-swap-oob to update multiple elements. However if you're submitting a form element, I would just re-render the page.
See, that's my point - it's NOT manual, it's explicit. There's a difference. A form submission and page refresh is just that. It's very clear WHEN the sync happens and WHAT we are syncing.
With a SPA, you throw that all away and you have to do it yourself. And it's almost always done poorly and inconsistently.
Just change that example to return the entire table element on /search. You could even add/remove columns with a single route response change, vs the multiple changes required to sync the front/backend with a JS framework.
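Something like this, as a Flask/Jinja sketch (the template and data are made up; the point is that the whole <table> comes back, so a column change touches only this one place):

    from flask import Flask, request, render_template_string

    app = Flask(__name__)

    TABLE = """
    <table id="search-results">
      <thead><tr><th>First</th><th>Last</th><th>Email</th></tr></thead>
      <tbody>
        {% for c in contacts %}<tr><td>{{ c.first }}</td><td>{{ c.last }}</td><td>{{ c.email }}</td></tr>{% endfor %}
      </tbody>
    </table>
    """

    def find_contacts(q):
        # Stand-in for the real query.
        data = [{"first": "Joe", "last": "Smith", "email": "joe@smith.org"}]
        return [c for c in data if q.lower() in c["last"].lower()]

    @app.post("/search")
    def search():
        return render_template_string(TABLE, contacts=find_contacts(request.form.get("search", "")))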
The "two endpoints" concern assumes you're fetching fragments independently. If you're composing a full page server-side, the data is already there.
There's an example on their website where the header of a table is defined in the frontend, and the body is returned by the backend. If I wanted something as simple as switching the order of the columns, I'd actually need to create a new version of my backend endpoint, release it, change the frontend to use the new endpoint, then delete the old one. That sounds crazy to me.
There are so many gains from not having a separate frontend. You've greatly reduced your site size, removed duplicated logic, a shitload of JS dependencies, and an unnecessary build step.
The "header frontend / body backend" split is a choice, not a requirement. I wouldn't make that choice.
I mention it because that's the first example on the official website, so I'd assume this is the right way.
Browsers keep changing along with people's expectation of what a website or app should be capable of.
Meanwhile, in backend land, the same MVC framework from two decades ago can still deliver acceptable results.
Why not "just use HTML"?
First - simple use cases? Sure, great. But imagine you have to update some element outside the form tree. Now you need OOB swaps, and your HTML must contain that fragment.
Not just that: your server template code now has to determine whether it is an HTMX request and only render the OOB fragments if so.
Even in a decent-sized app, this soon turns super brittle.
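The kind of branching I mean, sketched in Flask (the route, fragments, and cart count are all made up):

    from flask import Flask, request, render_template_string

    app = Flask(__name__)

    @app.post("/cart/items")
    def add_to_cart():
        # ... add the item ...
        count = 3  # stand-in for the real cart count
        if request.headers.get("HX-Request"):
            # htmx call: return the triggering fragment plus an out-of-band cart badge.
            return render_template_string(
                "<li>Item added</li>"
                '<span id="cart-count" hx-swap-oob="true">{{ count }}</span>',
                count=count,
            )
        # Plain browser request: fall back to rendering the full page.
        return render_template_string("<p>Cart has {{ count }} items (full page here)</p>", count=count)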
And that's before we even get to complicated interfaces. Let's not go complicated; just think of variants in an e-commerce admin panel.
3 variants with 5 values each: that's 125 SKU rows that must be collapsed group-wise.
htmx can do it but it's going to be very very difficult and brittle.
So it is surely very useful but it is NOT the only tool for all use cases.
I really liked HTMX, and I thank the authors for this marvelous library!
I switched from Turbo to HTMX because the latter is much more flexible, and I try to avoid Node.js as much as possible, only using it to compile some JavaScript code for Stimulus.
I finally moved from HTMX to Unpoly for the following reasons:
1. Layer support: Unpoly makes it easy to create layers and modal overlays, saving a lot of time and JavaScript code. You can achieve the same functionality with HTMX, but you have to write more code.
2. JavaScript code is better organized thanks to up.compiler() hooks.
3. HTMX and Unpoly treat fragments slightly differently. With HTMX, you have to use an out-of-band feature to update multiple fragments together. With Unpoly, you can easily add them to the response (and declare them in the front end, of course).
In my opinion, Unpoly has a better-organized approach to everything. On the other hand, apart from the official documentation, it is difficult to find examples for some edge-case features.