The key is generating something shareable — for me that's either a URL or nothing at all. This is a small hobby project; I am not in IT.
Storing the entire state in the hash component of the URL: since the fragment is entirely client-side and never sent to the server, you can pretty much bypass all of the server-side length limits.
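A minimal sketch of the fragment-state idea — function names are mine, and the serialization (JSON, then percent-encoding) is just one obvious choice:

```javascript
// Serialize app state into the URL fragment ("hash"). The fragment is
// never sent to the server, so server-side URL limits don't apply
// (browsers still impose their own, much larger, caps).
function stateToHash(state) {
  return '#' + encodeURIComponent(JSON.stringify(state));
}

function hashToState(hash) {
  if (!hash || hash === '#') return null;
  return JSON.parse(decodeURIComponent(hash.slice(1)));
}

// In a browser you'd wire it up roughly as:
//   location.hash = stateToHash(state);   // write
//   const s = hashToState(location.hash); // read (e.g. on 'hashchange')
```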
One place I've seen this used is the Azure portal: (payload | gzip | b64). Make of that what you will.
I design my SSR apps so that as much state as possible lives in the server. I find the session cookie to be far more critical than the URL. I could build most of my apps to be URL agnostic if I really wanted to. The current state of the client (as the server sees it) can determine its logical location in the space of resources. The URL can be more of an optional thing for when we do need to pin down a specific resource for future reference.
Another advantage of not urlizing everything is that you can implement very complex features without a torturous taxonomy. "/workflow/18" is about as detailed as I'd like to get in the URL scheme of a complex back office banking product.
Basically, your approach is easier to code, and worse to use. Bookmarks, multiple tabs, the back button, sharing URLs with others, it all becomes harder for users to do with your design. I mean feel free, because with many tech stacks it is indeed easier, but don't pretend it's not a tradeoff. It's easier and worse.
But something that can bite you with these solutions is that browsers allow you to duplicate tabs, so you also need some inter-tab mechanism (like the BroadcastChannel API, or localStorage with polling) to resolve duplicate ids.
Sure, in the prismjs.com case, I have one of those comments in my code too. But I expect it to break one day.
If a site is a content generator and essentially idempotent for a given set of parameters, and you think the developer has a long-term commitment to the URL parameters, then it's a reasonable strategy (and they should probably formalise it).
Perhaps you implement an explicit "save to URL" in that case.
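A hypothetical "save to URL" helper along those lines — serialize just the parameters that fully determine the generated content into a shareable link (the base URL and parameter names here are placeholders):

```javascript
// Build a shareable URL from the parameters that determine the output.
function saveToUrl(base, params) {
  const u = new URL(base);
  for (const [k, v] of Object.entries(params)) {
    u.searchParams.set(k, String(v));
  }
  return u.toString();
}
```

Making the save step explicit, rather than mirroring every state change into the URL, also sidesteps the leakage concerns below: the user decides which state becomes a durable, shareable artifact.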
But generally speaking, we eliminated complex variable state from URLs for good reasons to do with state leakage: logged-in or identifying state ending up in search results and forwarded emails, leaking out via referrer headers into other sites' logs, and all that stuff.
It would be wiser to assume that the complete list of possible ways that user- or session-identifying state in a URL could leak has not yet been written, and to use volatile non-URL-based state until you are sure you're talking about something non-volatile.
Search keywords? Obviously. Search result filters? Yeah. Sort direction? Probably. Tags? Ehh — as soon as you see [] in a URL it's probably bad code; think carefully about how you represent tags. Presentation customisation? No. A backlink? No.
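One way to avoid the `tags[]=a&tags[]=b` pattern is a single comma-separated parameter — a sketch, assuming tag names themselves never contain commas:

```javascript
// Encode a list of tags as one query-string parameter: ?tags=a,b
function tagsToQuery(tags) {
  const p = new URLSearchParams();
  if (tags.length) p.set('tags', tags.join(','));
  return p.toString();
}

// Decode it back; absent parameter means no tags.
function tagsFromQuery(qs) {
  const t = new URLSearchParams(qs).get('tags');
  return t ? t.split(',') : [];
}
```

Note that URLSearchParams percent-encodes the commas on output; they decode transparently on the way back in.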
It's also wiser to assume people want to hack on URLs and cut bits out, to reduce them to the bit they actually want to share.
So you should keep truly persistent, identifying aspects in the path, and at least try not to merge trivial/ephemeral state into the path when it can be left in the query string.
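That split can be made mechanical — persistent identity in the path, ephemeral view state in the query string, so users can trim a URL back to the part worth sharing. A sketch using the earlier "/workflow/18" example (the host is a placeholder):

```javascript
// Path carries the persistent, identifying part; query string carries
// ephemeral view state that can be deleted without losing the resource.
function workflowUrl(id, view = {}) {
  const u = new URL(`https://bank.example/workflow/${id}`);
  for (const [k, v] of Object.entries(view)) {
    u.searchParams.set(k, String(v));
  }
  return u.toString();
}
```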