In general Next.js has so many layers of abstraction that 99.9999% of projects don't need. And the ones that do are probably better off building a bespoke solution from lower level parts.
Next.js is easily the worst technology I've ever used.
Pocketbase was the ONLY good thing about this journey. Everything else sucked just so terribly.
Infinite complexity everywhere, breaking changes CONSTANTLY, impenetrable documentation everywhere.
It is just so, so awful. If we rewound the last five years of FE trends and instead focused on teaching the stuff that existed at the time properly, we'd be in a much better position.
I've also built a very complex React frontend (few thousand users, pretty heavy visual computation required in many places). And while I don't particularly like React either, Next.js was even worse.
And lastly, built a CMS in Go, with vanilla JS. And while the DX sometimes feels lacking, I just can't help but feel that I actually know wtf is going to happen when I do something. Why is that so hard?
In React and Next.js I am STILL, AFTER SIX YEARS, constantly guessing what might happen. Yes, I can fix just about anything these frameworks throw at me, thanks to all the experience I've gathered about their quirks, but it all just feels too messy and badly designed.
In Go, the last time I guessed what might happen was in the first six months of learning it. No surprises since. Codebases from years ago are still rock-solid.
Why can't we do this at the frontend, goddammit?
nvm use && npm i && npm run dev
Canvas instead of DOM -> :(
EDIT: Gave it another try and more issues appear within seconds of use. The left side has a rendering bug where the selected areas are cut off sometimes, and ctrl+zoom does not zoom the page as it does on all normal websites. I can still zoom via the menu. Middle-clicking to open a link in a new tab doesn't work. Z-layer bugs everywhere. I expect more the longer I'd look.
The “static preview” it shows while it loads (for like 10-15 seconds!) is so much smoother and nicer to scroll around than the actual thing. On mobile, every third scroll attempt actually opens the right click context menu. It’s a stuttering mess on my high refresh rate phone. Nobody should ever make websites like this.
After a week of futzing with it I just threw up my hands and said 'no can do'. I couldn't untangle the spaghetti JS and piles of libraries. 'Compiling' would complete and if you looked at the output it was clearly missing tons of bits but never threw an error. Just tons of weirdness from the toolchain to the deployment platform.
What's the story here? I assume this group was chosen for a reason and didn't meet expectations.
If they had brought me in beforehand I could have saved them a lot of work by asking the hard questions and reining in the tech overspend.
They paid them on the strength of seeing it working, but then the consulting group basically ghosted when the customer asked to adjust it to run on cheaper hosting (probably because they couldn't). Then the site got shut off because the hosting was all in the consulting group's name and they stopped paying for it. Digital Ocean nuked the database for non-payment and they lost tons and tons of manual work putting in data.
Open up Vercel, point it at my repo, set the environment variables, and it starts building and… build error. I search up the strange error log on the Next.js issues page, and find a single issue from 3 years ago with one upvote and no response or resolution.
So I threw it on a VPS and just built it with whatever the “prod build” command is, and it totally worked fine.
So in my limited anecdotal experience, hosting it on vercel won’t save you either lol
That is my opinion as well. Things like SSR are forced onto users with a very smooth onboarding, but I'm concerned that in practical terms this perceived smoothness can only persist if the likes of us pay the likes of Vercel for hosting our work.
To some degree I feel the whole React ecosystem might have ended up being captured by a corporation. Hopefully it wasn't. Let's see.
https://react.dev/learn/creating-a-react-app
It throws you straight at Next.js
That capture happened... two years ago? (Perhaps there's a good blog post there, if it doesn't exist already)
I actually wrote exactly that blog post and did a conf talk on it earlier this year. I covered why the React team switched to directing users to use "frameworks" to build React apps, the development influences behind React Server Components, why the React docs didn't list tools like Vite as viable options until just a couple months ago, and various other related topics:
- https://blog.isquaredsoftware.com/2025/06/react-community-20...
- https://blog.isquaredsoftware.com/2025/06/presentations-reac...
Objectively that sadness does not change reality however. At least within my own professional network no-one seems comfortable starting a new project using React today. Almost 100% of the paid front end work I've been involved with myself or discussed with others recently is now using alternatives - most often Vue though I've seen other choices at least seriously considered. I've even had a couple of recruiters I haven't worked with for years suddenly reappear desperately looking for someone to take on React work and openly admit it's because they are struggling to find anyone good who wants to go near it. All of this is a sharp contrast with the market of the early 2020s when React was clearly the preferred front end choice. And all of this is surely a direct response to the push to make React a full stack framework, the added complexity that has introduced, and the apparent capture of official React development by Vercel.
I don't know much about Next.js and whether it was ever as open as SvelteKit currently is.
To me, nextjs (I think) was always meant to favour vercel but sveltekit has a rich history of managing multiple adapters.
Now, that being said there are still some chances of a rugpull that might happen but if that ever happens, I am staying on the last sveltekit that worked with cf and other cloud providers.
Rich and Simon are incredibly important, but they're in it for Svelte and the community more so than a paycheck from Vercel. Tee has been doing most of the maintenance on SvelteKit currently funded by community donations. And this isn't counting other infrastructure like vite-plugin-svelte or the Svelte CLI which are entirely maintained by volunteers. I don't think Vercel funds a majority of the work on Svelte even if it might be close to it.
I'm sure commercial incentives would lead issues that affect paying (hosted) customers to have better resolutions than those self-hosting, but that's not enough to explain this level of pain, especially not in issues that would affect paying customers just as much.
Better use something else
I heard this excuse for Next.js and thought I’d get around it by using Vercel, which was fine for my project. It didn’t seem to make a difference.
Things will get far worse before they get better. Right now, online courses such as the ones in PluralSight are pushing Next.js on virtually all courses related to React. I have no idea what ill-advised train of thought resulted in this sad state of affairs but here we are.
It's pretty absurd to have such a broad range of web solutions, and think the same solution can cover everything.
One of the factors is that web dev pushes for a complete separation of concerns, and thus allows frontend developers to specialize in front end development. Therefore it becomes far easier to hire someone to do frontend work with a webdev background than a win32/MFC background.
Number of applicants is also a big factor. There is far more demand for webdev than pure GUI programming. You can only hire people who show up, and if no one shows up then you need to scramble.
Frontend development is also by far the most expensive part of a project. In projects which use low-level native frameworks you are forced to hire a team for each target platform. Adopting technologies that implement GUIs with webpages running in a WebView allow projects to halve the cost. This is also why technologies like React Native shine.
Also, apps like Visual Studio Code prove that webview-based apps can be both nice to look at and be performant.
It's not capabilities. It's mainly the economics.
Then there came small web applications, and still no "front-end developers", since functionality could only work on the server.
It's only when AJAX was introduced in the mid 2000's that you could start to talk about "front-end developers".
By that time, win32 and MFC was old. We had Java, C# with .net framework, etc.
So you agree both solve different problems. Well, those are 2 use cases of front-end right now.
I'm not so sure about that. We're seeing Next.js being pushed as the successor of create-react-app even on react.dev[1], which as a premise is kind of stupid. There is definitely something wrong going on.
We do a 30-min tops exercise where you create a React project to show how to use useState and useEffect, etc. I help with whatever command they want to use and allow Google/ChatGPT.
More than half of the candidates had no idea how to use React without Next.js, and some argued it was impossible, even after I told them the opposite.
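For what it's worth, the kind of answer the exercise is looking for is nothing exotic; a minimal sketch (e.g. scaffolded with Vite, component name made up):

```tsx
// Plain React, no framework: useState for local state, useEffect for a side effect.
import { useEffect, useState } from "react";

export function Clock() {
  const [now, setNow] = useState(() => new Date());

  useEffect(() => {
    const id = setInterval(() => setNow(new Date()), 1000);
    return () => clearInterval(id); // clean up the interval on unmount
  }, []);

  return <p>{now.toLocaleTimeString()}</p>;
}
```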
For me, lately, the interview question is "here's code that ChatGPT generated for (previous interview question as related to the role we're hiring for that we could do)", what's wrong with it? What do now? (ChatGPT may or may not have actually generated the code in question.)
It's like not knowing how to write a for loop or how to access an object's property in JavaScript.
It is more like test on whether or not you can figure out random React minutiae (with Google/ChatGPT, if needed) when presented with a need. Which isn't a bad approximation for how well you will do at finding any random minutiae as needs present themselves. React-based development doesn't require much original thought — the vast majority of the job really is just figuring out the minutiae of your dependencies to fit your circumstantial need.
For fun, I asked ChatGPT for an answer and it gave a perfectly good one back without hesitation. Even if you had no idea what React was beyond knowing it is a library for developing web components, you should still be able to answer that particular question with ease.
When you're in a work meeting, do you just put ChatGPT up on one laptop and Claude on another and just sit back for 30 minutes to an hour?
To many people, it's just basic logic: "everyone must want the latest React features, and the only way to get those is with Next, so everyone must want Next".
That is extremely fishy, isn't it?
Next.js is essentially the reference and test bed impl.
Where people go wrong is thinking they need to default to the inherently complex niche feature of client hydration which is a niche optimization enabled by a quirk of web tech.
My point is that it's fishy how they push features that just so happen to be the value proposition of the only corporation that just so happens to be able to implement them.
It's also dismissive of market forces, i.e. developers have to pay bills and therefore are easier to hire if they know the skillset that is in wide use.
I've never worked with or interviewed a single senior who wanted to use Next.
It surely was a development platform, but wasn't supposed to be one.
The reason for this: IT had contracted for a content management system from a Microsoft shop, because the CIO was a former Accenture/Avanade consultant. But the brochure-ware website had already been contracted to some random NYC-based web firm, and the CIO didn't want multiple usernames/passwords, so after the WordPress site had been built, they hired the SharePoint consultants to build out the CMS that the employees would use. It still didn't hook up to WordPress, so then it became another contractor's job (me) to join the two.
I had worked on WordPress, I even had a few decently popular plugins, but I had never seen the absolute hellscape that was SharePoint before. I wrote a codegen tool that would read the WSDL and create a library with all the classes and calls needed to use it without any SharePoint experience, and wrote some simple ETLs for the handful of "buckets". It was a 2-3 month long journey, but those libraries and my code are still in place today, where they still use WordPress for the front-end and SharePoint as the backend (or at least did in 2022, the last time I talked to anyone still working there).
I recently rewrote my auth to use better-auth (as a separate service), which has allowed me to start moving entirely off Next.js (looking at either React Router 7 or Tanstack Router).
Back when I started, Next.js made server side rendering incredibly easy, but it turns out I didn't need it. My marketing site is entirely static, and my app is entirely client rendered.
And I have some use cases where I want to have a headless crm/api “hidden” behind the front end. So in these cases using next as a backend proxy works well for me.
Sadly TanStack releases a new version every other day, and React Router was complete 5 versions ago but cannot seem to stop changing its API to stay relevant in the never-ending JS relevancy war.
TanStack seems to be following Next.js in that they're just over-complicating everything, and their docs felt lacking for most of their features.
By comparison, DIY SSR with Express takes a few days to get working and has run quietly for multiple projects for years on end.
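For context, the DIY version really is small; a minimal sketch (assuming Express and react-dom/server, with a hypothetical App component and client bundle):

```tsx
// server.tsx — render the React tree to HTML on each request, then let
// the client bundle hydrate it. App and /client.js are placeholders.
import express from "express";
import { renderToString } from "react-dom/server";
import { App } from "./App";

const app = express();
app.use(express.static("public")); // serves the built client.js

app.get("/", (_req, res) => {
  const html = renderToString(<App />);
  res.send(`<!doctype html>
<div id="root">${html}</div>
<script src="/client.js"></script>`);
});

app.listen(3000);
```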
I suppose the overly complicated ENV/.env loading hierarchy is (partly) needed because Windows doesn't (didn't?) have ENV vars. Same for inotify, port detection, thread management: *nix does it well, consistent-ish. But when you want an interface or feature that works on both *nix and Windows, in the same way, you'll end up with Next.js-like piles of reinvented wheels and abstractions (which in the end are always leaking anyway).
Nope, windows has had perfectly standard environment variables since the DOS days
Windows' command prompt requires two separate invocations:

```
set KEY=value
./myApp
```

PowerShell also:

```
$env:KEY='value'
./myApp
```

Or more "verbosely/explicitly":

```
[System.Environment]::SetEnvironmentVariable('KEY', 'value')
./myApp
```
Regardless, all those methods aren't "scoped".
How PowerShell ever got popular is beyond me.
Anyone who has ever maintained a semi complex set of bash invocations and pipes knows it's a fragile incantation that breaks anytime you look at it funny, or something in your chain produces unexpected output.
Powershell, while absolutely horrible to read and only slightly less horrible to write (hey look, proper auto completion instead of trying to cut on the 4th, wait no sorry 5th, ah fuck it's the 6th field, there's an invisible space) at least produces consistent and reproducible results.
No, your Python script doesn't count, it makes me do a pip install requests. Oh, sorry, pip can't be used like that, gotta run apt install python3-pip or my whole system breaks.
As long as I can remember in my career, Windows had environment variables. So that's at least 25 years. It's both available to view/edit in the GUI and at the prompt.
Windows has pretty much everything you can dream of (although sometimes in the form of complete abominations), it's just that the people employed by Vercel don't give a shit about using native APIs well, and will map everything towards a UNIX-ish way of doing things.
Or, if you insist, that Unix is inconsistent with how windows does it.
Which is what those wrappers and abstractions do: they expose a single api to e.g. detect file changes that works with inotify, readdirectorychanges, etc.
So, yeah, speaking in hindsight is really easy.
PS: no, the UNIX way is also shit, just in a different way.
I switched to Astro from Next for most projects and haven't looked back. It's such a breath of fresh air to use.
I was part of a successful large project where we did our own SSR implementation, and we were always tinkering with it. It wasted a lot of time. Next.js "just worked". I've used Next with the pages router on two significant and complex projects and it was a great choice. I have no regrets choosing it.
Framework-defined infrastructure is a seriously cool and interesting idea, but nowadays Next feels more like an Infrastructure-defined framework; the Vercel platform's architecture and design are why things in Next are the way they are. It was supposed to be "Vercel adapts to Next", but instead we got four different kinds of subtly different function runtimes. My usage dashboard says my two most-used things are "Fluid Active CPU" and "ISR Writes". I just pay them $20/mo and pray none of those usages go over 100% because I wouldn't have the first clue why I'm going over if it does.
Half the labels on there are Star Trek technobabble, which I would take the effort of learning, except I'm convinced they're all going to change with the next major release anyway. Partly because I keep hoping & praying they will. I know a concerning number of former die-hard Zeit fans who've taken their projects and customers elsewhere. At the end of the day, if they were to ask me what they need to address in the next major release, I seriously do not know how to answer that question besides "practically every major and minor decision y'all have made since and including the App Router was the wrong one". How do you recover from that? Idk.
Java doesn't offer isomorphic React SSR, but in most cases that is a questionable feature. Most SPAs don't need or want search-engine indexing or require instantaneous-seeming load times.
It just adds a lot of complexity even if you don't explicitly opt in to it or need it.
And while Spring has its rough edges and quirks, it is still an incredibly stable framework. Next, on the other hand, is a box of surprises that keeps on giving even when you think you've seen it all.
This insanity of server side react introduces all kinds of unnecessary quirks.
Also, the VC-funded Vercel is of course purposely dumbing down Next.js so that everyone pays them. It's a trap everyone should be aware of.
>This insanity of server side react introduces all kinds of unnecessary quirks.
I'm kind of confused by what you mean here. You can use PHP, Java, Ruby, etc. for the backend with Next.js. You can even use it for the SSR server if you want.
I guess you are actually talking about simply not doing server side rendering at all? I'm just clarifying because I think that still constitutes using "React for the frontend." I mean, how could React be used for anything BUT the frontend?
The server side rendering is actually one of the objective goods Next.js offers (not necessarily that it handles the complexity of it well). I mean, if you don't need that, sure... that's one more reason to just use Vite. But the backend choice is irrelevant.
Vendor lock in. Magic leaky abstractions are great until you need to debug something a few layers down when the magic stops working.
> how else do you want framework development to happen?
Loosely affiliated open source efforts maybe. If that doesn't work, I would prefer to have none at all.
While we would all like to retire to a cabin in the woods and be a carpenter, and for corporations not to exist, that seems unrealistic.
Magic leaky abstractions are orthogonal to vendor-lock in, and the source is open, so I'm not seeing the lock-in part. The "hey it's easier and cheaper to smash the deploy-to-vercel"-in, sure, but things cost money. Either to a developer, or to a company.
Stuff costs money, sure. But I don't think it's that simple. Next and Vercel come from the same organization. I have no objection to a paid hosting solution making it operationally simpler. However when that same org has control over the free thing, they can make it even more easier (probably grammatical! who knows) that it would have "naturally" been.
The only "weakness" is that it doesn't have guard rails, so may not be great for larger teams with mixed experience.
If I had to create something that has a UI I'd just go with a bog standard server rendered multi page app, built using really boring technology as well. If you like Javascript and friends, go with Express. Nowadays you can run Typescript files directly and the built-in test runner is quite capable.
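As a small illustration of that last point, here's roughly what the built-in runner looks like with a TypeScript file (assumes a Node version new enough to run .ts directly, e.g. via type stripping):

```ts
// slug.test.ts — run with: node --test slug.test.ts
import { test } from "node:test";
import assert from "node:assert/strict";

function slugify(title: string): string {
  return title.toLowerCase().trim().replace(/\s+/g, "-");
}

test("slugify collapses whitespace", () => {
  assert.equal(slugify("  Hello   World "), "hello-world");
});
```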
If a single page application makes sense, then go with vanilla React. For a highly interactive application that's potentially behind a log in anyway, you probably don't need React Server Components.
react-router if you just want a simple React frontend, write your backend in something else.
I also had the impression they would probably follow the Vercel style, framework as a business model, with it being sold to Shopify. I don't really know where it's all going, but it is not the sort of thing I would tie myself to.
To be fair, this is partly on the kind of people who use it. E.g. if you're trying to build something that's intended to last for 10+ years but you don't think it's worth it to spend the 20 hours watching the Udemy course on Angular, then your technology is going to be a complete dumpster fire no matter which stack you choose.
* You sell a B2C product to a potentially global audience, so edge semantics actually help with latency issues
* You're willing to pay Vercel a high premium for them to host
* You have no need for background task processing (Vercel directs you to marketplace/partner services), so your architecture never pushes you to host on another provider.
Otherwise, just tread the well-trod path and stick to either a react-vite SPA or something like Rails doing ordinary SSR.
I use Gleam's Lustre and am not looking back. Elm's founder had a really good case study keynote that Next.js is basically the opposite of:
There's some good stuff there
Serverless framework attempted to make this stack to run yourself for Next but it is buggy due to the complexity of Next. Open source includes being able to run it. Releasing the framework and funding OSS that also enhances NextJS is nice, but it is a trap because if it comes time to seriously run it, your only option is vercel.
Annoying, obnoxious, and always trying to get your email but god damn do they get your attention.
They hire the core contributors of all major web frameworks to continue development under their roof. Suddenly, ongoing improvement of the web platform is largely dependent on the whims of Vercel investors.
They pretend to cater to all hosting providers equally, but just look at Next, which will always be tailored toward Vercel. When will it happen to Nuxt? Sveltekit? Vercel is in a position to make strategic moves across the entire SSR market now. Regardless of whether they make use of that power, it’s bad enough they wield it at all.
When has this ever been a good idea? When has it produced a good outcome? It never has, and it never will.
The fact you use the term "SSR market" is a key indicator of how effective Vercel's marketing has been via techfluencers. There isn't a market for SSR in the way you use it, only web hosting.
There is a world of engineering outside JS framework relevancy wars. It isn't only Vercel pushing the framing, btw. Other players are trying to replicate the Vercel marketing playbook: use/hire techfluencers/OSS devs to push frameworks/stacks on devs, who then push it up the company stack.
Usually it doesn't work for a startup, but Vercel proved it can.
Enter a million lite framework wrappers around well known tech, like supabase and postgres, upstash and redis, vercel and aws.
And I think their paid hosting was actually really good, up until they switched their $20/month plan to a whatever-it-may-cost and we-send-you-10-cryptic-emails-about-your-usage-every-month plan. That's when they lost me, not because it got more expensive but because it became opaque, unpredictable, and annoying instead of carefree.
With all the other crazy shit people are doing (multi-megabyte bundle sizes, slow API calls with dozens of round-trips to the DB, etc) doing the basics of profiling, optimizing, simplifying seems like it'd get you much further than changing to a more complex architecture.
I used to think Javascript everywhere was an advantage, and this is exactly why I now think it's a bad idea.
My company uses Inertia.js + Vue and it's a significantly better experience. I still get all the power of modern frontend rendering but the overall architecture is so much simpler. The routing is 100% server-side and there's no need for a general API. (Note: Inertia works with React and Svelte too)
We tried Nuxt at first, but it was a shit show. You end up having _two_ servers instead of one: the actual backend server, and the server for your frontend. There was so much more complexity because we needed to figure out a bunch of craziness about where the code was actually being run.
Now it's dead simple. If it's PHP it's on the server. It's JS it's in the browser. Never needing to question that has been a huge boon for us.
It's positioned as a ramp up for companies where frontend and backend devs work at loggerheads and the e-commerce / product teams need some escape hatch to build their own stateless backend functions
In what way has that been a boon? Context switching between languages, especially PHP, seems like an even bigger headache. Is it strlen($var) or var.length or mb_strlen($var)?
Do you ever output JavaScript from PHP?
My biggest question though is how do you avoid ever duplicating logic between js and PHP? Validation logic, especially, but business logic leaks between the two, I've found. Doing it all in Next saves me from that particular problem.
why would anyone send JavaScript from the PHP? why care about duplicating a couple of JSON translations and null checks... it's all code today anyway.
and switching languages? you can't use most of JS as it is. even something as simple as split() has so many weird quirks that everyone just codes against a utils lib anyway.
spoken like someone who's not experienced enough to realize that duplicated code needs to be kept in sync, and then when it inevitably isn't, it'll lead to incidents, and also can't write JavaScript without using leftpad.
After looking through the 20 different popular front end frameworks and getting confused by SSR, CSR, etc. I decided to use Nuxt. I thought oh this is great, Vue makes a lot of sense to me and this seems like it makes it easer to make Vue apps. I could not have been more wrong. I integrated it with Supabase + Vercel and I had so many random issues I almost scrapped the entire thing to just build it with squarespace.
Just write your SPA the grown up way. Write your APIs in a language and framework well suited to such work (pick your poison, Rails, Spring, whatever Microsoft is calling this year's .NET web technology). And write your front-end in Typescript.
There's absolutely no reason to tightly couple your front-end and backend, despite how many Javascript developers learned the word "isomorphic" in 2015.
However, at a certain point, you're better off not writing a web app anymore, just an app with a somewhat wonky, imprecise runtime, one that lacks any sort of speed and has many drawbacks.
And you lose one of the most fundamentally important parts of the web, interop. I'm sure other langs can be made to speak your particular object dialect, however the same problems that plague those other type systems will still plague yours.
Which circles back to my issue, no, sticking your head in the sand and proclaiming nothing else exists, does not, in fact, make things better.
You can write your front-end and back-end in the same language.
No shade to you for finding a productive setup, but Next.js tightly couples your front-end and back-end, no question.
I'd question that statement, since it's wrong. There's no requirement to connect your NextJS server to your backend databases; you can have it only interact with your internal APIs, which are the "real backends". You can have your NextJS server in a monorepo alongside your APIs, which are standalone projects, and Next could exist solely to produce optimized payloads or to perform render caching (being the head of a headless CMS). It seems like a weird choice to make, but you could also build almost a pure SPA and have Next only serve client components. The tightness of the coupling is entirely up to the implementor.
I used to use Django and there were so many issues that arose from having to duplicate everything in JS and Python.
The issue with mixing languages is that they have different data models, even simple things like strings and integers are different in Python and JS, and the differences only increase the more complex the objects get.
Sometimes I write some code and I realise that this code actually needs to execute on the client instead of the server (e.g. for performance) or the server instead of the client (e.g. for security). Or both. Using one language means that this can be a relatively simple change, whereas using two different languages guarantees a painful rewrite, which can be infectious and lead to code duplication.
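A tiny sketch of what makes that move cheap in a single-language setup (a hypothetical validator shared by both sides):

```ts
// validation.ts — imported unchanged by the browser bundle and the server,
// so deciding where the check runs (or running it in both places) is just
// an import, not a rewrite in a second language.
export function isValidUsername(name: string): boolean {
  return /^[a-z0-9_]{3,20}$/.test(name);
}
```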
I'll give you one reason: Gel [1] and their awesome TypeScript query builder [2].
[1] https://www.geldata.com/ [2] https://www.geldata.com/blog/designing-the-ultimate-typescri...
This is the exact problem with the App Router. It makes it extremely difficult to figure out where your code is running. The Pages Router didn't have this issue.
"use client" does NOT mean it only renders on the client! The initial render still happens on the server. Additionally, all imports and child components inherit the "use client" directive even when it's not explicitly added in those files. So you definitely cannot just look for "use client".
See what I mean now?
From the docs:
```
On the server, Next.js uses React's APIs to orchestrate rendering. The rendering work is split into chunks, by individual route segments (layouts and pages):
Server Components are rendered into a special data format called the React Server Component Payload (RSC Payload).
Client Components and the RSC Payload are used to prerender HTML.
```
HUH?
```
On the client (first load)

Then, on the client:

HTML is used to immediately show a fast non-interactive preview of the route to the user.
RSC Payload is used to reconcile the Client and Server Component trees.
```
HUH? What does it mean to reconcile the Client and Server Component trees? How does that affect how I write code or structure my app? No clue.
```
Subsequent Navigations

On subsequent navigations:

The RSC Payload is prefetched and cached for instant navigation.
Client Components are rendered entirely on the client, without the server-rendered HTML.
```
Ok... something something initial page load is (kind of?) rendered on the server, then some reconciliation (?) happens, then after that it's client rendered... except it's not: it actually does prefetching and caching under the hood. Surprise.
It's insanely hard to figure out and keep track of what is happening when, and on which machine it's actually happening.
If you try to use browser functionality in a component without 'use client' or to use server functionality in a client component, you'll get an error.
Hum... You make an entire app in node, load the UI over react, pile layers and more layers of dynamicity on top (IMO, if next.js didn't demonstrate those many layers, I wouldn't believe anybody made them work), eschew the standard CDN usage, and then want distributed execution to solve your latency issues?
If I take a look at other languages, these kind of multi-threading issues are usually represented by providing a separate context or sync package (that handle mutexes and atomics) in the stdlib.
And I guess that's what's completely missing in Node.js and browser-side JS environments: a stdlib that allows you not to fall into these traps, and which is kind of enforced for better quality of downstream packages and libraries.
If the handle() method of the middleware API would have provided, say, a context.Context parameter, most of the described debugging issues would have been gone, no?
If I went back in time, I would have called it Routing Middleware or Routing Handler. A specific hook to intercept during the routing phase, which can be delivered to the CDN edge for specialized providers. It’s also a somewhat advanced escape hatch.
Since OP mentions logging, it’s worth noting that for instrumentation and observability we’ve embraced OpenTelemetry and have an instrumentation.ts convention[2]
[1] https://nextjs.org/blog/next-15-5#nodejs-middleware-stable
[2] https://nextjs.org/docs/app/api-reference/file-conventions/i...
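For anyone unfamiliar with the convention in [2], it boils down to a single file at the project root; a minimal sketch (the @vercel/otel helper and its serviceName option are what the docs describe, but verify against them):

```ts
// instrumentation.ts — Next.js calls register() once when the server starts.
import { registerOTel } from "@vercel/otel";

export function register() {
  registerOTel({ serviceName: "my-next-app" });
}
```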
> Since OP mentions logging, it’s worth noting that for instrumentation and observability we’ve embraced OpenTelemetry and have an instrumentation.ts convention
That makes it sound as though the answer to a clumsy logging facility is simply to add another heavy layer of complexity. Surely not every application needs OpenTelemetry. Why can’t logger().info() just work in a sensible way? This can't be such a hard problem, can it? Every other language and framework does it!
I think OTEL is pretty sensible as a vendor-free option, and if you want to have a console logger you can use the console exporter[0] for debug mode during local development. Also, if Next is designed as a framework to make it easy to build production-grade apps, having a standardized way to implement o11y with OTEL is a worthwhile tradeoff?
If you view that as being overkill, perhaps you're not the target audience of the framework
[0] https://opentelemetry.io/docs/languages/js/exporters/#consol...
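A sketch of that console-exporter route from [0] (standard OTel JS packages; treat exact package versions as an assumption):

```ts
// otel-local.ts — spans are printed to stdout instead of shipped to a vendor.
import { NodeSDK } from "@opentelemetry/sdk-node";
import { ConsoleSpanExporter } from "@opentelemetry/sdk-trace-node";

const sdk = new NodeSDK({
  traceExporter: new ConsoleSpanExporter(),
});
sdk.start();
```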
If you wanted "dead simple" text-based logging in a situation where a service is deployed in multiple places you'd end up writing a lot of fluff to get the same log correlation abilities that most OTEL drivers provide (if you can even ship your logs off the compute to begin with)
Which again comes back to the "maybe the framework isn't for you" if you're building an application that's a monolith deployed on a single VPC somewhere. But situations where you're working on something distributed or replicated, OTEL is pretty simple to use compared to past vendor-specific alternatives
Most frameworks have powerful loggers out of the box, like Monolog in the PHP world.
There's even a handler for monolog in PHP - they are not necessarily mutually exclusive
https://github.com/open-telemetry/opentelemetry-php/blob/mai...
The fact that Monolog has a handler for this tool isn't relevant, but it shows that there is one more layer of complexity tacked on.
You can still log to a text file if you want to run locally, but for something like next.js where you're intended to deploy production to some cloud somewhere (probably serverless) the option of _just_ writing to a text file doesn't really exist. So having OTEL as an ootb supported way to do o11y is much better than the alternative of getting sucked into some vendor-specific garbage like datadog or newrelic
I think a big part of the negative sentiment derives from the fact that detailed reference documentation is almost non-existent. The documentation mostly tells you what exists, but not how to use things, how they get executed, common pitfalls and gotchas, etc.
The documentation is written to be easy and friendly to newcomers, but is really missing the details and nuances of whatever execution context a given api is in and does not touch on derived complexities of using react in a server environment etc.
This is a trend across a lot of projects these days - often missing all the nuances and details - writing good documentation is really hard. Finding the balance between making things user friendly and detailed is hard.
Keep it up
Thanks for the note! Indeed, it is also challenging when experience hides what things are not obvious or necessary to make further connections when reading the docs. It is an area of continuous improvement.
> The documentation is written to be easy and friendly to newcomers, but is really missing the details and nuances of whatever execution context a given api is in and does not touch on derived complexities of using react in a server environment etc.
I think on this particular topic, there had been an assumption made on the docs side, that, listing Edge runtime (when middleware was introduced), as its own thing, that might as well run in another computer, would also communicate that it does not share the same global environment as the underlying rendering server.
I'll do some updates to narrow this down again.
> The documentation mostly tells you what exists, but not how to use them, how they get executed, common pitfalls and gotchas etc etc.
Do you have any more examples of this? I have been improving revalidateTag/Path, layouts, fetch, hooks like useSearchParams, gotchas with Response.next, etc.
I know the OP post does talk about issues not being responded to, but that trend has been changing. If you do find/remember something as you describe, please do open a documentation issue, pointing to the docs page and the confusion/gotcha - we have been addressing these over the past months.
`npx @next/codemod@canary upgrade latest`
Here in this article, the author, failing to comprehend the domain differences, is applying the same approach to call a function everywhere. Of course it won't work.
The fallacy of nextjs is attempting to blend function domains that are inherently different. Stop doing that and you will be fine. Documentation won't work, it will be just more confusing. Blending edge and ssr and node and client-side into one is a mess, and the attempt to achieve that only results in layers upon layers of redundant framework complexity.
I spent a similar amount of time setting up opentelemetry with Next and while it would have been titled differently, I would have likely still written a blog post after this experience too.
This isn't your fault, but basically every opentelemetry package I had to setup is marked as experimental. This does not build confidence when pushing stuff to production.
Then, for the longest time I couldn't get the pino instrumentation working. I managed to figure it out eventually, but it was a pain.
First, pino has to be added to serverExternalPackages. If it's not, the OTel instrumentation does not work.
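In config terms that first point looks something like this (serverExternalPackages is the current option name; older versions used experimental.serverComponentsExternalPackages, so check your Next version):

```ts
// next.config.ts — keep pino out of the server bundle so the OTel
// instrumentation can patch the real module at require time.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  serverExternalPackages: ["pino"],
};

export default nextConfig;
```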
Second, the automatic instrumentation is extremely allergic to import order. And also for whatever reason, only the pino default export is instrumented. Again, this took a while to figure out.
Module local variables don't work how I would expect. I had to use globalThis instead.
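The globalThis workaround ends up looking roughly like this (a sketch; the logger name is made up):

```ts
// logger.ts — a module-level variable can be re-created when the module is
// evaluated again, so the singleton is pinned onto globalThis instead.
import pino, { type Logger } from "pino";

const globalForLogger = globalThis as unknown as { logger?: Logger };

export const logger: Logger = globalForLogger.logger ?? pino();
globalForLogger.logger = logger;
```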
And after all that I was still hit by this: https://github.com/vercel/next.js/issues/80445
It does work, but it was not great to set up. Granted, I went with the manual router (eg. not using vercel/otel).
People expect "middleware" to mean a certain thing and work a certain way.
middleware = fn(req) → next(req).
express/koa give you the use() chain.
next.js gives you one root, but nothing stops you from chaining yourself. same semantics, just manual wiring.

```ts
type mw = (req: Request, next: () => Response) => Response;

const logger: mw = (req, next) => {
  console.log(req.url);
  return next();
};

const auth: mw = (req, next) => {
  if (!req.headers.get("x-auth")) return new Response("forbidden", { status: 403 });
  return next();
};

function chain(mws: mw[]) {
  return (req: Request) =>
    mws.reduceRight((next, mw) => () => mw(req, next), () => new Response("ok"))();
}

export function middleware(req: Request) {
  return chain([logger, auth])(req);
}
```

root is given, chain is trivial. that's middleware.
I expect these things to be standardized by the framework and all the sharp edges filed off; that's why I go to a framework in the first place.
(My username has never been more appropriate!)
I really hate this stuff. Users raise feedback for something they need, the dev team considers the feedback, they spend a really long time thinking about the most perfect abstraction, scope the problem way out to some big fundamental system, and come up with an extremely complicated solution that is "best". The purist committee-approved solution could technically be used to address what the user asked for, with a lot of work, but that's no longer the focus. Pragmatism goes out the window; it's all about inventing fun abstract puzzles.
All the while, the user just wanted to log things.
Not saying that's the exact situation here, but the phrasing in the comment was all too real to me.
But these solutions keep coming up because they bring one thing: self-contained / "batteries included". Just the other day there was a thread on Hacker News about Laravel vs Symfony and it was the same thing: shit breaks once complexity comes in.
If you compare those solutions with the old model that made NodeJS / React SPA get so popular, so fast: Buffet-style tooling/libraries. You basically build your own swiss army knife out of spare parts. Since all the spare parts are self-contained they have to target really low abstraction levels (like React as a component library, HTTP+Express as a backend router, Postgres as DB).
This approach has many disadvantages but it really keeps things flexible and avoids tower-of-babel style over-engineering. As in a lot of layers stacked on top of each other. Not that the complexity goes away, but instead you have a lot of layers sibling to each other and it is more doable to replace one layer with another if things aren't working well.
It is understandable why "batteries included" is so popular, it is really annoying to stitch together a bunch of tools and libraries that are slightly incompatible with each other. It definitely needs people with more experience to set up everything.
You get a very batteries included approach(es) but you can always punch out of it and override it. I've never got into a situation where I'm feeling like I'm fighting the framework.
I also really like both Blazor Server and Blazor WebAssembly, which allow you to write the frontend in C# too. Blazor Server is great for internal admin-panel-style apps, Blazor WebAssembly is good for SaaS apps, and for everything else plain old server-side rendering works great.
I'd really recommend anyone who is annoyed with their web framework to give it a go. It's extremely cross platform now (the huge drawback until about a decade ago was it was very hard to run on Linux, which isn't the case at all now - in fact, it's the opposite, harder to run on Windows), very fast and very easy to use. It takes a while to figure out the mental model of the design in your head but once it comes together you can quickly figure out how to override everything when you hit limitations (which tbh, is pretty rare compared to every other framework).
I agree people really need to update their mental model of where dotnet is at. I worked with it on Linux and it's a great experience
https://en.wikipedia.org/wiki/Nominal_type_system
https://en.wikipedia.org/wiki/Structural_type_system
Although nominal typing doesn't necessarily mean OOP-ish (inheritance-heavy) code, it is a prerequisite for inheritance-heavy code.
The distinction between the two is not a black/white thing, but (modern) TypeScript (and Flow as well) is heavily focused on structural typing while C# is heavily focused on nominal typing. In fact the whole composition vs inheritance discussion is fundamentally about making types that behave in a more structural manner.
As old school as it may be, I can accomplish basically everything my users need with just vanilla JS and .fetch() requests.
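Something like this, for instance (a sketch; the endpoint and element id are made up):

```ts
// Fetch JSON from the backend and render it straight into the page.
async function loadTodos() {
  const res = await fetch("/api/todos");
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const todos: { id: number; title: string }[] = await res.json();

  const list = document.querySelector("#todo-list")!;
  list.innerHTML = todos.map((t) => `<li>${t.title}</li>`).join("");
}

loadTodos();
```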
I've been playing with Blazor, and it's been great so far. However, like everything, I know it's not perfect.
Blazor Server uses websockets and is just a whole other bag of hurt. You'll have to deal with disconnects even if you can stomach the increased cloud costs.
You can (and I have) definitely rendered huge data grids efficiently with Blazor.
The biggest drawback with wasm is no proper multithreading support which has been delayed for years.
On blazor server; I totally agree, it's a pain. But for 'intranet' style apps which are used internally it's by far the most productive development environment I've used for web. I wouldn't use it for anything that wasn't a total MVP for public use but it's pretty great for internal apps like admin panels.
I have done a few Angular apps and the experience/setup quoted above is basically foreign to me. I know that it is a framework and not a library, but it is a very well designed framework (at least Angular 2 onwards; I used Angular v20 for my latest component). Basically most of the commonly needed stuff is included in the framework (I just added NGXLogger for logging) and the abstractions are pretty nice and fairly similar to a backend service (services wrap libraries and components rely on services). RxJS can be a bit of a learning curve but once you are comfortable with the basics, it can take you quite far. At least I rarely had to fight with the framework for typical SPAs. Also, the documentation along with tutorials is great - I learned using the Tour of Heroes application[0] but it seems angular.dev[1] is the new home for v20 docs.
[0]: https://v17.angular.io/tutorial/tour-of-heroes
[1]: https://angular.dev/
This seems to be built into the culture of companies which have those ridiculous whiteboard leetcode interviews. You find people who can produce very clever complex solutions in their sleep, and then they do that. Interviews aren't selecting for people whose strength is simplicity and clarity instead. So you get a lot of tight-loop optimizers and they tight-loop optimize everything... not just the tight loops. But if your product is a library/framework being consumed by mere mortals, you probably want something simple if you want to succeed in the long run. The supercar's performance is meaningless to you if you can't drive stick.
Only my own code is allowed to be clever!
This is my job. We're a small team and my job is to keep things up to date. Insanely time consuming. Packages with hard dependencies and packages that stopped being supported 5 years ago.
Fact is, the only way around this in the frontend without a monolithic "batteries-included" all-encompassing all-knowing all-mighty framework is through standardization, which can only be pushed by the browsers. Like if browsers themselves decided how bundlers should work and didn't have them be extensible.
And this tooling hell is not only a browser frontend problem either; it is also quite common in game development, where you also have monstrosities like Unreal Engine that "include batteries" but make it really hard to troubleshoot problems because they are so massively big and complex. A game engine is basically a bundler too: it combines assets and code into a runnable system to be run on top of a platform.
The vast majority of web dev projects have zero need for an SPA framework these days, and all this pain is self-inflicted for little benefit.
Those tools still have good use cases, but the chance that your project is one of them is shrinking all the time.
Browser standards have come a long way in filling the holes that caused react to be written in the first place.
Minifying is also somewhat of a hurdle, I guess it could be done at the CDN level on-the-fly+cache, but that is also its own nest of complexity.
SPA frameworks have a place, if anything I think they will become more prevalent, but I can foresee WASM opening the door for non-JS language stacks. However they will need bundlers as well and some languages are just not built around giving ways to minimize binary size and lazy-load code. Just try to compile some C++ to wasm, you end up with 10+mb .wasm files.
I probably wasn't clear enough when I said this.
If you're talking about waterfall requests in module loading, you've missed what I said and are likely sending orders of magnitude more JS to clients than you need to.
It's really worthwhile looking at all the new features in browsers over the last 5-10 years and asking yourself if you really can't do what you need just with vanilla HTML and CSS at this point. You can always sprinkle in a bit of JS to fill in some gaps if needed. My team usually has a 200-300 line JS file in each project. No bundlers or modules ever required at that scale.
I mean it's not just the experience - it's the upfront time cost and then ongoing maintenance.. and for what? It's really easy to underestimate how much effort this will be
Having done both I genuinely think Rails is a 10x productivity boost over stitching your own mishmash of libraries together in Node
The only lack of flexibility you run into is if you really disagree with the fundamentals of the framework. If you hate the ActiveRecord pattern for example you need to stay away
"shit breaks once complexity comes in" is a skill issue
Surely this can't be right?
https://nextjs.org/docs/messages/nested-middleware

> If you have more than one Middleware, you should combine them into a single file and model their execution depending on the incoming request.
By Talos, this can't be happening.
> Previously, Next.js middleware only supported the Edge Runtime, which provided better performance and isolation but had limitations when integrating with Node.js-specific libraries and APIs.
That's not something that can be resolved with a library abstraction. That was an architectural decision.
But whenever I work with Next, I feel like we lost the plot somewhere. I try a lot of frameworks and I like esoteric programming languages, but somehow Next.js, the frontier JavaScript framework embraced by React, is the only experience where half the time I have no idea what its error messages (if I get any to begin with) are trying to tell me. I can't even count the hours I spent with weird hydration issues.
I was somewhat surprised when I noticed simple Next.js landing pages would break in Firefox. Worse yet, the failure mode was to overlay all of the content with a black screen and white text, "An application client side error has occurred". It was surprising in that a simple landing page couldn't render, but when I discovered that the cause was a JS frontend framework, I felt that it was par for the course.
Perhaps it makes sense to the advocates, but for those of us not on the bandwagon, it can be sincerely baffling.
If you don't mind my asking, what sort of applications have you worked on, how many contributors were there, how long was their lifespan, and how long did you work on them for? Personally, I've found the type of "vanilla" JS approach to be prohibitively difficult to scale. I've nearly exclusively worked on highly interactive SaaS apps. Using a substantial amount of JS to stitch together interactions or apply updates from the server has been unavoidable.
The engineering organizations at companies I've worked at have ranged in size from three devs to over 20,000. Projects I've worked on have ranged from three devs to maybe 500-1,000 (it's sometimes hard for me to keep track at a giant company). I've worked on projects using "vanilla" JS, Knockout, Backbone, Vue, and React[0]. The order in which I listed those technologies is also roughly how quickly the code became hard to maintain.
[0] This is not an exhaustive list of which frontend frameworks/libraries I've used, but it's the ones I have enough experience with to feel comfortable speaking of the long term support of[1]. For example, I used Ember heavily for about a year, but that year was split between two projects I spent six months each on. Similarly, I've used Next.js, but only for prototyping a few times and never deployed with it to anything other than a private server.
[1] Except Lightning Web Components, which I've used a lot but hate so much that I don't want to dishonor those other technologies by listing it alongside them.
I am happy for them and their money, but I can't use this anymore. I take Vite as the default option now, but I would prefer something more lightweight.
Aside from the abysmal middleware api you also have the dubious decision to replace having a request parameter with global functions like cookies() and headers().
Perhaps there is some underlying design constraint that I'm missing where all of these decisions make sense but it really does look like they threw out every hard fought lesson and decided to make every mistake again.
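For reference, a sketch of the pattern being described (App Router; in recent versions cookies() and headers() are async):

```tsx
// app/page.tsx — no request parameter anywhere; the component reaches
// for ambient, request-scoped globals instead.
import { cookies, headers } from "next/headers";

export default async function Page() {
  const theme = (await cookies()).get("theme")?.value ?? "light";
  const userAgent = (await headers()).get("user-agent") ?? "unknown";
  return <p>{theme} / {userAgent}</p>;
}
```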
The post's author seems to conflate the edge runtime with the server runtime. They’re separate environments with different constraints and trade-offs.
I struggled with Next.js at first for the same reason: you have to know what runs where (edge, server, client). Because it’s all JavaScript, the boundaries can blur. So having a clear mental model matters. But blaming Next.js for that complexity is like blaming a toolbox for having more than a hammer.
The biggest issue is that the complexity is self-inflicted. The term middleware has a pretty well understood meaning if you've worked with basically any other framework in any language: it's a function or list of functions that are called at runtime before the request handler, and it is assumed those functions run in the same process. The fact that Next.js puts it on the edge and only allows one is breaking that assumption, and further, most applications do not need the additional complexity. To go back to your toolbox analogy, more tools mean more complexity (and money), so you wouldn't get a new tool simply because you might need it, you get it because you do need it, and the same applies to edge functionality. If Next.js wants to allow you to run code on the edge before your app is called, that's fine, but it should be opt-in, so you don't need to worry about it when you don't need it, and it shouldn't be called "middleware".
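For comparison, the conventional shape being described, shown with Express: an ordered chain of functions in the same process, each choosing whether to pass control on.

```ts
import express from "express";

const app = express();

// Runs before every handler, in-process, in declaration order.
app.use((req, _res, next) => {
  console.log(`${req.method} ${req.url}`);
  next();
});

// A second middleware: short-circuit the chain when auth is missing.
app.use((req, res, next) => {
  if (!req.headers["x-auth"]) return res.status(403).send("forbidden");
  next();
});

app.get("/", (_req, res) => res.send("ok"));

app.listen(3000);
```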
> you wouldn't get a new tool simply because you might need it
No but you get a framework precisely because it's "batteries included": many apps will need those tools. You don’t have to use all of them, but having them available reduces friction when you do.
> If Next.js wants to allow you to run code on the edge before your app is called, that's fine, but it should be opt-in
It already is. Nothing runs at the edge unless you add a middleware.ts. You can build a full app without any middleware. I'm surprised the author of the article fails to acknowledge this, given how much time was spent on finding alternative solutions and writing the article.
> If you learn what a package/module is in Python, then try to apply that in Go without any brain power, you will complain that Go is bad. If you are using any technology, you should have some knowledge about that technology.
The problem is probably that Next.js makes it very easy to move between front and back end, but people think this part is abstracted away.
It's actually a pretty complex system, and you need to be able to handle that complexity yourself. But complexity does not mean it makes you slower or less productive.
A system with a clearly separated front- and back-end is easier to reason about, but it's also more cumbersome to get things done.
So to anyone who knows React and wants to move to Next.js, I would warn that even though you know React, Next.js has a pretty steep learning curve, and some things you will have to experience yourself and figure out. But once you do, it's a convenient system to easily move between front- and back-end without too much hassle.
I like to learn and improve. A lot of comments here are just baseless negative comments. Please let’s have a real discussion.
If you learn what a package/module is in Python, then try to apply that in Go without any brain power, you will complain that Go is bad. If you are using any technology, you should have some knowledge about that technology.
Not Next though. We built a pretty large app on Next and it was painful from start to finish. Every part of it was either weird, slow, cumbersome or completely insane.
We still maintain the app and it is the only "thing" I hate with a passion at this point. I understand that the ecosystem is pretty good and people seem to be happy with the results given that it is extremely popular. But my own experience has been negative beyond redemption. It's weird.
Personally I'd rather go in the direction of having code that's explicitly server-side, explicitly client-side, or explicitly shared utilities. But you'd need more of a type-safe mentality to take that approach, and you'd probably scare off the majority who prefer runtime errors over build-time errors.