Posted by thm 1 day ago

URLs are state containers(alfy.blog)
481 points | 210 comments
jorl17 1 day ago|
When I get my way reviewing a codebase, I make sure that as much state as possible is saved in a URL, sometimes (though rarely) down to the scroll position.

I genuinely don't understand why people don't get more upset over hitting refresh on a webpage and ending up in a significantly different place. It's mind-boggling and actually insulting as a user. Or grabbing a URL and sending to another person, only to find out it doesn't make sense.

Developing like this on small teams also tends, in my experience, to lead to better UX, because it makes you much more aware of how much state you're cramming into a view. I'll admit it makes development slower, but I'll take the hit most days.

I've seen some people in this thread comment on how having state in a URL is risky because it then becomes a sort of public API that limits you. While I agree this might be a problem in some scenarios, I think there are many others where that is not the case, as copied URLs tend to be short-lived (bookmarks and "browser history" are an exception), mostly used for refreshing a page (which will later be closed) or for sharing. In the remaining cases, you can always plug in some code to migrate from the old URL to the new URL on load, which will actually solve the issue if you got there via browser history (it won't fix bookmarks, though).

thijsvandien 1 day ago||
While I like this approach as well, these URLs ending up in the browser history isn’t ideal. Autocomplete when just trying to go to the site causes some undesired state every now and then. Maybe query params offer an advantage over paths here.
DrewADesign 1 day ago|||
I think it’s a “use the right tool for the job” thing. Putting ephemeral information like session info in URLs sucks and should only be done if you need to pass it in a GET request from a non-browser program or something, and even then I think you should redirect or rewrite the URL after the initial request. But I think actual navigational data, or some sort of state if the user is in the middle of an important action, is acceptable.

But if you really just want your users to be able to hit refresh and not have their state change for non-navigational stuff like field contents or whatever, unless you have a really clear use case where you need to maintain state while switching devices and don’t want to do it server-side, local storage seems like the idiomatic choice.

linked_list 1 day ago||||
JS does have features for editing the history, but it's a trade-off between not polluting the history too much and still letting the user navigate back and forth.
orphea 1 day ago||
I'm glad to see that prismjs site mentioned by the blog is doing the right thing - when it updates the URL, it replaces the current history item.
embedding-shape 1 day ago||
Does that handle the back button correctly? Nothing is more annoying than sites/apps that overwrite the history incorrectly, so that when you press the back button it goes to the entry before you entered the website/app, rather than back into what you were doing in the website/app.

Both approaches (appending/rewriting) have their uses; the tricky part is using the right one for the right action – fuck up either and the experience is abysmal.
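A rough sketch of the two History API calls (the function and parameter names here are made up for illustration, not from any particular framework):

    // New logical view: push an entry so Back returns to the previous view.
    function openProduct(productId: string): void {
      const url = new URL(window.location.href);
      url.pathname = `/products/${productId}`;
      history.pushState({ productId }, "", url.toString());
    }

    // Tweak within the same view (e.g. a filter): rewrite the current entry,
    // so Back leaves the page instead of undoing every little change.
    function setFilter(name: string, value: string): void {
      const url = new URL(window.location.href);
      url.searchParams.set(name, value);
      history.replaceState(history.state, "", url.toString());
    }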

macNchz 1 day ago|||
It’s definitely possible to make a really stellar experience, but that winds up being the exception. The URL and history state are sort of “invisible” elements of the user experience but require thoughtful care and attention to what the user expects/wants at each step, a level of attention which is already a rarity in web development even in the most visible parts of a page…so frequently the history/back button stuff just totally sucks.
embedding-shape 1 day ago||
Yeah, in my experience you only get great stuff when both product and engineering have equal care for the final experience. If either party lacks care, you'll miss stuff, particularly things that are "invisible", as you say.
LegionMammal978 1 day ago|||
It's pretty weird, my impression is that the APIs are flexible enough to implement most sane behaviors, but websites keep managing to mess it all up. Perhaps it's just one of those things that no one bothers re-testing as the codebase changes.
embedding-shape 1 day ago|||
In my experience, the problem is two-fold. First, product managers/owners don't consider the URIs, so it ends up not being specified. They say "We should have a page when the user clicks X, and then on that page, the user can open up modal Y", but none of it is specified in terms of what happens with the URIs and history.

Then a developer gets the task to create this, and they too don't push back on what exact URIs are being used, nor on how the history is being treated. Either they don't have time, don't have the power to send tasks back to product, simply don't care, or just don't think of it. They happily carry on creating whatever URIs make sense to them.

No one is responsible for URLs, and no one considers them part of UX and design, so no one ends up thinking about them; people implement things as they feel is right, without having a full overview of how things are supposed to fit together.

Anyways, that's just based on my experience; I'm sure there are other holes in the process that also exacerbate the issue.

nkrisc 1 day ago|||
As a UX designer, this is a failure of the UX designers, IMO. If you're a UX designer for the web, you should be aware of web technology and be thinking about these things. Even if you don't know enough to fully specify it, you should know enough to have conversations with a developer and work together to fully spec it out.

That said, I've also worked with some developers who didn't like me intruding on their turf, so to speak. Though I've also worked with others who were more than happy to collaborate and very proactive about these sorts of things.

Furthermore, as a UX designer, this is the sort of topic we're unlikely to be able to meaningfully discuss with PMs and other stakeholders, as it's completely non-visual; trying to bring it up and discuss it often ends up feeling like pulling teeth, with them wondering why we're even spending time on it. So usually it just ended up being a discussion between me and the developers with no PM oversight.

_heimdall 15 hours ago|||
Web developers should make it a habit to ask/require URL structures be part of the spec.

I've had people be surprised by the request because it's something they don't usually consider, but I've never had anyone actually push back on it.

moritzwarhier 1 day ago|||
Nothing weird about it, you see people arguing right here whether a site should add a new history entry when a filter is set.

Interacting with the URL from JS within the page load cycle is inherently complex.

For what it's worth, I'd also argue that the right behavior here is to replace.

But that of course also means that now the URL on the history stack for this particular view will always have the filter in it (as opposed to an initial visit without having touched anything).

Of course the author's case is the good/special one where they already visited the site with a filter in the URL.

But when you might be interested in using the view/page with multiple queries/filters/parameters, it might also be unexpected: for example, developers not having a dedicated search results page and instead updating the query parameters of the current URL.

Also, from the history APIs perspective, path and query parameters are interchangeable as long as the origin matches, but user expectations (and server behavior) might assign them different roles.

Still, we're commenting on a site where the main view parameter (item ID, including submission pages) is a query parameter. So this distinction is pretty arbitrary.

And the most extreme case of misusing pushState (instead of replaceState) is sites where each keystroke in some typeahead filter creates a new history entry.

All of this doesn't even touch the basic requirement that is most important and addressed in the article: being able to refresh the page without losing state and being able to bookmark things.

Manually implementing stuff like this on top of basic routing functionality (which should use pushState) in an SPA gets complex very quickly.

hdjrudni 1 day ago||
> But that of course also means that now the URL on the history stack for this particular view will always have the filter in it (as opposed to an initial visit without having touched anything).

I would have one state for when the user first entered the page, and then the first time they modify a filter, add a second state. From then on, keep updating/replacing that state.

This way if the user clicks into the page, and modifies a dozen things they can

1. Refresh and keep all their filters, or share with a friend
2. Press back to basically clear all their filters (get back to the initial state of the page)
3. Only one more press of back to get back to wherever they came from
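A minimal sketch of that push-once-then-replace pattern (the flag and handler names are made up for illustration):

    let filterEntryPushed = false;

    function onFilterChange(filters: URLSearchParams): void {
      const url = new URL(window.location.href);
      url.search = filters.toString();
      if (!filterEntryPushed) {
        // First modification: add one entry on top of the clean page.
        history.pushState({ filtered: true }, "", url.toString());
        filterEntryPushed = true;
      } else {
        // Later modifications: keep rewriting that same entry.
        history.replaceState({ filtered: true }, "", url.toString());
      }
    }

    // Back/forward tells us which of the two entries we landed on.
    window.addEventListener("popstate", (e) => {
      filterEntryPushed = e.state?.filtered === true;
    });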

Dwedit 1 day ago||||
My personal take would be if it takes you to what's basically another page (such as the entire page being rewritten), then involve browser history.
hamdingers 1 day ago||||
Browser autocomplete behavior is reliably incorrect and infuriating either way, so it's not a good reason to avoid the utility of having bookmarkable/sharable urls.
SoftTalker 1 day ago||
Yeah it's an annoyance more than it helps. I always disable it.
noir_lord 1 day ago||
I do as well - it's just irritating.

Same with search ahead.

porridgeraisin 22 hours ago|||
Yeah, lichess does this.

On lichess.org/analysis, each move you make adds a history item, lichess.org/analysis#1, #2, and so on.

Pretty annoying.

SoftTalker 1 day ago|||
Yeah I use a web app regularly for work where they have implemented their own "back" button in the app. The app maintains its own state and history so the browser back button is totally broken.

The problem here is that they've implemented an application navigation feature with the same name as a browser navigation feature. As a user, you know you need to click "Back", and your brain has that wired to clicking the browser back button.

Very annoying.

Having "Refresh" break things is (to me) a little more tolerable. I have the mental association of "refresh" as "start over" and so I'm less annoyed when that takes me back to some kind of front page in the app.

apitman 1 day ago|||
> I make sure that as much state as possible is saved in a URL, sometimes (though rarely) down to the scroll position.

If your page is server-rendered, you get saved scroll position on refresh for free. One of many ways using JS for everything can subtly break things.

endless1234 1 day ago|||
Still leaves the problem of not being able to simply send the current URL to someone else and know they'll see the same thing. Of course anchors can solve this, but not automatically
MrJohz 20 hours ago|||
You probably don't want that most of the time, though. The time I'm most likely to send someone an article is once I've got to the end of it, but I don't want them to jump to the end of the article, I want them to start at the beginning again.

There are situations where you want to link to a specific part of a page, and for that anchors and text anchors work well. But in my experience it isn't the default behaviour that I want for most pages.

pests 1 day ago||||
Chrome (at least?) solves this via Text Fragments[0], which are a pure client-side thing and require no server or site support.

This URI for example:

https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...

Links to an instance of "The Referer" narrowed down via a start prefix ("downgrade:") and end suffix ("to origins").

These are used across Google I believe so many have probably seen them.

[0] https://developer.mozilla.org/en-US/docs/Web/URI/Reference/F...
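For reference, the fragment syntax looks roughly like this (a made-up example; the optional prefix and suffix are marked with `-,` and `,-`):

    https://example.com/page#:~:text=some%20prefix-,target%20text,-some%20suffix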

IanCal 20 hours ago|||
Scroll position doesn’t do this because it’s not portable between devices.
o11c 1 day ago||||
Even with JS, classical synchronous JS is much better than the modern blind push for async JS, which causes the browser to try to restore the scroll position before the JS has actually created the content.
nextaccountic 1 day ago||
isn't there a way to instruct the browser to restore the position only after a certain async thing has finished?
kuekacang 1 day ago||
I think the hack is to store the HTML height/width locally and restore it as early as possible, so the content will then load under the scrolled view.
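There is also a partial knob for this: history.scrollRestoration. A sketch of combining it with an async load (the storage key and loader function are made up for illustration):

    // Tell the browser not to restore scroll on its own.
    if ("scrollRestoration" in history) {
      history.scrollRestoration = "manual";
    }

    // Remember the position continuously so a refresh can restore it.
    window.addEventListener("scroll", () => {
      sessionStorage.setItem("scrollY", String(window.scrollY));
    }, { passive: true });

    // Restore only after the async content exists and has its final height.
    async function loadAndRestore(loadContent: () => Promise<void>): Promise<void> {
      await loadContent();
      const saved = sessionStorage.getItem("scrollY");
      if (saved !== null) {
        window.scrollTo(0, Number(saved));
      }
    }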
fithisux 19 hours ago||||
True
divan 1 day ago|||
Also a reminder that "refresh" is just a code word for "restart (and often redownload) the whole bloody app". It's funny how in the web world people are so used to "refreshing" apps that they assume it's normal functionality (and not a failure mode).
nextaccountic 1 day ago||
The web is similar to Android, and unlike desktop apps, in that restarting the whole thing is meant to not lose (much) state.

Actually, it would be amazing if desktop applications were like this too, and we had a separate way to go back to the initial screen.

divan 1 day ago||
Restoring state is just one feature among many that can be implemented in any app if needed, with all the baggage that comes with a feature – testing, maintaining, etc. It's just that if a desktop app becomes so broken/unresponsive that the only way out is to restart it, we consider it a bad experience and bad software. On the web, "restarting the app" is a normal daily activity when something goes wrong with state/layout/fields/forms, etc.
nextaccountic 1 day ago||
Most desktop apps are buggy enough to occasionally require restarts or even crash. I don't currently use any program that has never crashed on me. On the web, "restarting the app" is seamless and doesn't imply anything went wrong. It's like the Erlang approach to errors, but on steroids.

The trouble with leaving state restoration for applications to do as they wish is that most of the time they will get it wrong. Also, most of them don't do any of this and never will. Good defaults matter.

divan 10 hours ago||
My experience has been different – and increasingly so over the past 30 years. Crashing or leaking desktop apps are a rare experience nowadays. When it happens, it’s always an "oh, really?" moment. On the web… I often can’t even write a Facebook comment without refreshing the page.

Good defaults definitely matter. But not overloading an app with functionality matters as well. Matching feature sets to actual user needs also matters.

The problem with state restoration is that it’s one of those features that looks simple, yet can be extremely tricky to implement correctly – the point you already made. And there’s no single solution that will fit all cases, or even 80% of them. Restoring scroll position is one thing, but restoring an unfinished video editor timeline is another. Both look deceptively simple ("I just reopened the crashed app and it opened at the exact same state"), but the internal mechanics require wildly different mechanisms and trade-offs.

I do agree, however, that frameworks and SDKs should provide properly designed mechanisms for state restoration – and they often do (like the State Restoration API on iOS/macOS).

But the argument that "state restoration should be default and provided by the environment" feels like post-rationalization of the existing mechanics.

> It’s like the Erlang approach to errors, but on steroids

The Erlang approach was intentionally designed that way. Web apps’ normalization of "restarting" is just a testament to how normal buggy software has become in the web ecosystem. Anyone who has ever tried to buy tickets online or register through a simple form on a government website knows that even for such common use cases, it’s extremely hard to create a good user experience. There are some fantastic web apps nowadays, and government-backed design systems and frameworks that sometimes match native apps’ experience – but that only proves the point. It takes an enormous amount of effort to make even simple things work reliably on the web stack.

The core reason, of course, is that the "web stack" is a typesetting engine from the ’80s that was never designed for modern UI apps’ needs in the first place. Why we still use a markup language to build sophisticated UIs and think it’s fine is beyond me. I recently saw an experiment where someone played a video in Excel, using spreadsheet cells as pixels and a lot of harness code to make it work as an output device. It’s doable, but Excel was never designed for that. No matter how many layers of abstraction we put on top – or how many ExcelReact frameworks we create – the foundation is simply not right for the task.

And yet people continue to justify the “defaults” of the web stack as if they were deliberate design choices rather than byproducts. Like, "it’s so good that everything is zoomable," or "I like that everything is selectable". Which sounds fine – until it doesn’t. Why on earth would I need to select half my widget tree with a 3-pixel mouse shift? And when I really do need to select something, it often doesn’t work properly because developers take it for granted and never verify or test it.

Or zooming – whenever I zoom a Facebook page to write a comment, the view keeps jumping around because some amazing piece of JS crapcode decides to realign the interface on a timer (to show ads?). Nobody on Facebook’s QA team probably even tests how the comment section works when zoomed in Safari. The web app experience is simply one of the worst, due to this messy feature set people call "good defaults". And as someone who also has to write web apps from time to time, I can’t stress enough how disproportionately more effort it takes to make an app with sane, good default behavior.

(P.S. There are some good things in the current state of the web stack – but they’re mostly the product of the industry’s sheer size, not the stack itself.)

divan 3 hours ago||
I was just replying to someone on Messenger (the React Native app) and needed to paste a Unicode character via copy-paste. For some reason, the input field kept inserting it with a prepended space. I double- and triple-checked – copied it from different places – but nothing helped. It just kept adding that space. I ended up using the Drafts editor to write the full message and then pasted it into this crappy piece of software made by the very company that created the framework it’s built on. And the thing is, it’s not even surprising.
fittom 1 day ago|||
I completely agree. In fact, I believe URL design should be part of UX design, and although I've worked with 30+ UX designers, I've never once received guidance on URLs.
mrexroad 1 day ago||
As a UX designer who always gives guidance on URL design/strategy, I’ll say it’s not always well received. I’ve run into more than a few engineering or PM teams who feel that’s not within the scope of design.
franciscop 1 day ago|||
As a dev who cares about UX, this is crazy to hear but resonates. I've gotten a few weird looks from people whenever I mentioned URL improvements. I've also worked with people who understood it. I've seen a correlation, though: when people cared enough, I could share freely about this; when I did both the designer's and the dev's work, I would just add it in (I'm def not a designer, so if I'm doing design work, that means the owner doesn't care about design, let alone URLs).

I can imagine how you got that reaction in your situation as a pure designer, though. Sorry to hear that, and I wish other devs cared more. I've been mentoring people to care about it, so I hope others do too.

pyrolistical 1 day ago|||
As a dev mentor one of my first lesson is what everybody has in common is design.

We all are trying to understand a problem and trying to figure out the best solution.

How each role approaches this has some low level specializations but high level learnings can be shared.

latexr 16 hours ago|||
> I genuinely don't understand why people don't get more upset over hitting refresh on a webpage and ending up in a significantly different place. It's mind-boggling and actually insulting as a user. Or grabbing a URL and sending to another person, only to find out it doesn't make sense.

I do dislike those cases. But I also dislike being two-thirds through a video or page, thinking “I’ve got to share this with <friend>, it’s right up their alley”, then hitting my fast combination of keys to share a URL and realising the link shared my exact place, which will make the person think I’m sharing a snippet and not the whole thing, so now I need to send another message to clarify.

I like being able to have URLs reproduce a specific state, but I also want that to be a specific decision and not something I can share or save to a bookmark by mistake.

cassepipe 14 hours ago||
I understand the inconvenience of having to leave a keyboard-driven workflow, but I think the Share button --> Copy link flow is common enough now that it shouldn't be an issue. I know Firefox also has "Copy clean link" if you right-click on the URL bar.

I did not find an extension that does just that but it should be trivial to create one and assign a shortcut to it.

latexr 14 hours ago|||
Whenever I try that flow, it either copies the link with the extra details or it screws up the link entirely (e.g. removing the `?v=` from a YouTube link). In other words, it’s extra work for worse results.
ringer 9 hours ago|||
Except when it's not implemented properly and it breaks other workflows. For example, if it only shows a button (not a link or a tag) and copies the link to the clipboard via JavaScript, consider this scenario: I want to send this "link" to my other computer using Firefox's built-in Send Page to Device feature. I have to click Share, click the copy to clipboard button, open a new page, paste the URL, and only then can I share it.

If the state were stored in the URL, I could do it in two steps: open context menu -> Send Page to Device, and I'm done.

MattDaEskimo 1 day ago|||
I can understand "shareable" state (scroll position), but _as much as possible_ seems like overkill.

Why not just use localStorage?

layer8 1 day ago||
> Why not just use localStorage?

So that I can operate two windows/tabs of the same site in parallel without them stealing each other’s scroll position. In addition, the second window/tab may have originated from duplicating the first one.

mejutoco 1 day ago|||
You could work around that if needed with a unique id per tab (I was curious myself)

https://stackoverflow.com/questions/11896160/any-way-to-iden...

layer8 1 day ago||
Yes, but how do you garbage-collect the stored per-tab state from local storage? Note that it’s not just per tab, but per history entry of the tab. (When the user goes back, they want the respective state to be restored, and again when going forward.) Then there are browser features like “reopen closed tab” on top. Better to let the browser manage the state implicitly by managing the URLs.
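For what it's worth, the history API already keys a small state object to each entry, which sidesteps the garbage-collection problem; a sketch (illustrative code, not any particular library):

    // Stash per-entry state on the history entry itself instead of localStorage;
    // the browser discards it together with the entry.
    function rememberScroll(): void {
      history.replaceState({ ...history.state, scrollY: window.scrollY }, "");
    }

    window.addEventListener("popstate", (e) => {
      // Fires on back/forward; e.state is this entry's own saved object.
      const y = e.state?.scrollY;
      if (typeof y === "number") window.scrollTo(0, y);
    });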
MattDaEskimo 1 day ago||
Scroll position is _kind of_ fine. Typically I can link the ID in the URL as "state".

I was referring to mostly everything else

phillipseamore 1 day ago|||
sessionStorage should treat the windows/tabs as separate
DanielHB 11 hours ago|||
First SPA I built (without frameworks) I actually wrote my own router that stored most client-side state in the URL as a hash. I remember back then having some problems with IE6 4kb limit on URL length.

It actually worked really well, but obviously I had very little state. The only things I didn't store in the hash were form state and raw visualization data (like chart data).

makeitdouble 1 day ago|||
> I genuinely don't understand why people don't get more upset over hitting refresh on a webpage and ending up in a significantly different place.

The web has evolved a lot; as users we're seeing an incredible amount of UX behaviors which make any single action take on different semantics depending on context.

When on mobile in particular, there are many cases where going back to the page's initial state is just a PITA the regular way, and refreshing the page is the fastest and cleanest action.

Some implementations of infinite scroll won't get you to the top of the content in any simple way. Some sites are a PITA regarding filtering and ordering, and you're stuck with some of the choices that sit inside collapsible blocks you don't even remember where they were. And there are myriad other situations where you just want the current page in a new and blank state.

The more you keep in the URL, the more resetting the UX is a chore. Sometimes just refreshing is enough, sometimes cleaning the URL is necessary, sometimes you need to go back to the top and navigate back to the page you were on. And those are situations where the user is already frustrated over some other UX issue, so needing additional effort just to reset is adding insult to injury IMHO.

jraph 1 day ago|||
> I make sure that as much state as possible is saved in a URL

Do you have advice on how to achieve this (for purely client-side stuff)?

- How do you represent the state? (a list of key=value pair after the hash?)

- How do you make sure it stays in sync?

-- do you parse the hash part in JS to restore some stuff on page load and when the URL changes?

- How do you manage previous / next?

- How do you manage server-side stuff that can be updated client side? (a checkbox that's by default checked and you uncheck it, for instance)

MPSimmons 1 day ago|||
One example I think is super interesting is the NWS Radar site, https://radar.weather.gov/

If you go there, that's the URL you get. However, if you do anything with the map, your URL changes to something like

https://radar.weather.gov/?settings=v1_eyJhZ2VuZGEiOnsiaWQiO...

Which, if you take the base64-encoded string, strip off the v1_ prefix, and pad it out to a valid base64 string, you get

"eyJhZ2VuZGEiOnsiaWQiOm51bGwsImNlbnRlciI6Wy0xMTUuOTI1LDM2LjAwNl0sImxvY2F0aW9uIjpudWxsLCJ6b29tIjo2LjM1MzMzMzMzMzMzMzMzMzV9LCJhbmltYXRpbmciOmZhbHNlLCJiYXNlIjoic3RhbmRhcmQiLCJhcnRjYyI6ZmFsc2UsImNvdW50eSI6ZmFsc2UsImN3YSI6ZmFsc2UsInJmYyI6ZmFsc2UsInN0YXRlIjpmYWxzZSwibWVudSI6dHJ1ZSwic2hvcnRGdXNlZE9ubHkiOmZhbHNlLCJvcGFjaXR5Ijp7ImFsZXJ0cyI6MC44LCJsb2NhbCI6MC42LCJsb2NhbFN0YXRpb25zIjowLjgsIm5hdGlvbmFsIjowLjZ9fQ==", which decodes into:

{"agenda":{"id":null,"center":[-115.925,36.006],"location":null,"zoom":6.3533333333333335},"animating":false,"base":"standard","artcc":false,"county":false,"cwa":false,"rfc":false,"state":false,"menu":true,"shortFusedOnly":false,"opacity":{"alerts":0.8,"local":0.6,"localStations":0.8,"national":0.6}}

I only know this because I've spent a ton of time working with the NWS data - I'm founding a company that's working on bringing live local weather news to every community that needs it - https://www.lwnn.news/
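A sketch of the same idea in the abstract – JSON settings, base64url-encoded behind a version prefix in one query parameter (the v1_ prefix and the "settings" parameter name just mirror the NWS URL above; this is an approximation, not their code):

    function writeSettings(settings: object): void {
      // btoa assumes the JSON is ASCII/Latin-1, which plain settings usually are.
      const encoded = btoa(JSON.stringify(settings))
        .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
      const url = new URL(window.location.href);
      url.searchParams.set("settings", "v1_" + encoded);
      history.replaceState(null, "", url.toString());
    }

    function readSettings(): unknown {
      const raw = new URL(window.location.href).searchParams.get("settings");
      if (raw === null || !raw.startsWith("v1_")) return null;
      let b64 = raw.slice(3).replace(/-/g, "+").replace(/_/g, "/");
      while (b64.length % 4 !== 0) b64 += "=";   // re-pad, as described above
      return JSON.parse(atob(b64));
    }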

asielen 1 day ago|||
In this case, why encode the string instead of just having the options as plain text parameters?
qdotme 1 day ago||
Nesting, mostly (having used that trick a lot, though I usually sign that record if originating from server).

I've almost entirely moved to Rust/WASM for browser logic, and I just use serde crate to produce compact representation of the record, but I've seen protobufs used as well.

Otherwise you end up with parsing monsters like ?actions[3].replay__timestamp[0]=0.444 vs {"actions": [,,,{"replay":{"timestamp":[0.444, 0.888]}}]}

toxik 1 day ago|||
Sorry, but this is legitimately a terrible way to encode this data. The number 0.8 is encoded as base64-encoded ASCII decimals, and the boolean flags similarly. URLs should not be long, for many reasons, like sharing and preventing them from being cut off.
capecodes 1 day ago||
The “cut off” thing is generally legacy thinking, the web has moved on and you can reliably put a lot of data in the URI… https://stackoverflow.com/questions/417142/what-is-the-maxim...
domga 1 day ago||
Links with lots of data in them are really annoying to share. I see the value in storing some state there, but I don’t think there is room for much of it.
nozzlegear 1 day ago||
What makes them annoying to share? I bet it's more an issue with the UX of whatever app or website you're sharing the link in. Take that stackoverflow link in the comment you're replying to, for example: you can see the domain and most of the path, but HN elides link text after a certain length because it's superfluous.
esafak 12 hours ago||
SO links require just the question ID; short enough to memorize.
nozzlegear 8 hours ago||
Sure, but the SO link was just an example. HN does it with any link, like this one which is 1000 characters long:

https://example.com/some/path?foo=bar&baz=bat&foo=bar&baz=ba...

If the website or app has a good UX for displaying/sharing URLs, the length doesn't really matter.

linked_list 1 day ago||||
The URL spec already takes care of a lot of this, for example /shopping/shirts?color=blue&size=M&page=3 or /articles/my-article-title#preface
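A sketch of the usual read/write cycle for that kind of URL (the parameter names follow the example above; the helper functions are made up):

    function readState() {
      const params = new URLSearchParams(window.location.search);
      return {
        color: params.get("color"),
        size: params.get("size"),
        page: Number(params.get("page") ?? "1"),
      };
    }

    function writeState(next: Record<string, string>): void {
      const url = new URL(window.location.href);
      for (const [key, value] of Object.entries(next)) {
        url.searchParams.set(key, value);
      }
      history.replaceState(null, "", url.toString());
    }

    // On load and on back/forward, re-render the page from the URL.
    window.addEventListener("popstate", () => {
      console.log(readState()); // replace with your render function
    });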
yawaramin 1 day ago|||
The OP gives great guidance on these questions.
Waterluvian 1 day ago|||
The URL is a public facing interface. If anything goes into the URL, it should already be detailed in the design that the PR’d code is implementing.
eru 1 day ago|||
> I genuinely don't understand why people don't get more upset over hitting refresh on a webpage and ending up in a significantly different place. It's mind-boggling and actually insulting as a user. Or grabbing a URL and sending to another person, only to find out it doesn't make sense.

The two use cases are in slight conflict: most of the time, when I share a URL, I don't want to share a specific scroll position (which probably doesn't even make sense, if the other guy has a different screen size.)

paulddraper 1 day ago||
Scroll, as parent said, is usually not included.

Obviously the URL is not all state, it doesn’t save your cursor or IME input. So there is some distinction between “important” and “unimportant” state.

eru 1 day ago||
Perhaps a better example: should video URLs (like on youtube) include a timestamp or not?

Youtube gives you both options, and either can be what you want. Youtube also seems to be smart enough to roughly remember where you were in the video, when you are reloading the page.

VikingCoder 11 hours ago|||
I worked at a company that worked hard to make urls do heavy lifting for so many tasks, and it was freaking great.
bgilroy26 1 day ago|||
To save on URL length, why not hash all possible states and have the value of the variable in the query string refer to that?
poncho_romero 1 day ago|||
This is a viable solution, but as the article mentions, you lose intent and readability (e.g. seeing a query parameter for “product=laptop” vs. “state=XBE4eHgU”). And in general, it’s unlikely you’ll run into issues with URL length. Two to eight thousand characters is a lot!
threetonesun 1 day ago||
I remember bouncing into this limit once on a project because we wanted to make a deeply customized interface shareable without a backend. While on the site itself we didn't hit a URL limit, when someone shared it via some email clients, the client added its own tracking redirect onto the URL, which caused it to hit the limit and break.
capecodes 1 day ago||
base64(zstd(big state))
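A sketch of that with APIs browsers actually ship – CompressionStream only offers gzip/deflate, not zstd, so gzip stands in here:

    async function stateToUrlParam(state: unknown): Promise<string> {
      const json = new TextEncoder().encode(JSON.stringify(state));
      const gzipped = new Blob([json]).stream()
        .pipeThrough(new CompressionStream("gzip"));
      const bytes = new Uint8Array(await new Response(gzipped).arrayBuffer());
      let binary = "";
      for (const b of bytes) binary += String.fromCharCode(b);
      // base64url so the result can sit in a query string without escaping
      return btoa(binary).replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
    }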
linked_list 1 day ago||||
Because a hash is by definition a one-way mapping, you'd have to keep a map of the reverse mapping (hash -> state), which obviously gets impractical with state such as page index or search terms. Better to just make a two-way "compression" mapping.
yreg 1 day ago||
They probably have meant something like base64 encode
linked_list 1 day ago||
If you base64 encode an ascii string it gets 33% longer
cyptus 1 day ago|||
and where is the hash mapped back again?
_the_inflator 11 hours ago|||
"hitting refresh"

You made my day. I totally agree with you: state, state management, UX/UI.

I am extremely proud that I lately implemented exactly this: What if... you pass a link, hit reload, or hit the back button in the browser?

I have a web app that features a table with a modal preview when hitting a row - boy am I proud to have invested 1 hour in this feature.

I like your reasoning: it ain't a technical "because I can dump anything in a url", nope, it is a means to an end, the user experience.

Convenience, whatever. I now have a pattern for adding more convenience like this, which should be pretty normal.

The only thing that remains and bothers me is the verbose URL - the utter mess and clutter in the browser's input field. I feel pain here, and there is a conflict inside me between URL aesthetics and flattering the user by providing convenience.

I am working on a solution, because this messy URL string hurts my eyes and takes away a little bit of the magic and beauty of the state transfer. This abstract mess should be taken care of, also in regard to obfuscation. It ain't clean to have full-text strings in the URL, with content that doesn't belong there.

But I am on it. I cannot leave the URL string out of the convenience debate, especially not on mobile. Also, it can happen that strings get stripped, or copy & paste accidentally cuts off parts. The shorter the better, and as we see, convenience is a brutally hard job to handle. Delicate at so many levels - here, error handling due to wrongly formatted strings, a field few people have ever entered.

My killer feature is the initial page load - it appears way faster, since there are no skeletons waiting for their fetch request to finish. I am extremely impressed by this little feature and its impact on so many levels.

Cheers!

DecoySalamander 18 hours ago|||
> I genuinely don't understand why people don't get more upset over hitting refresh on a webpage and ending up in a significantly different place.

I'm in the opposite camp - I find it extremely annoying when sites clutter up the browser history with unnecessarily granular state, e.g. when hitting the "back" button closes a modal instead of taking me to the previous page.

lvncelot 18 hours ago|||
You can achieve both a clean history and granular state in the URL by using history.replaceState() and history.pushState() where appropriate.
tecleandor 17 hours ago||||
I think that'd be too much. A modal is a subordinate thing to the current window, so I think it shouldn't merit a full url change by itself...
hk__2 17 hours ago|||
This is a completely different issue; you can replace history state in JS without adding new entries.
pbreit 21 hours ago|||
I would never structure my URLs for performance reasons. 100% for usability.
lenkite 1 day ago|||
To make this work better, URLs should standardize several common semantic query parameters and fragment identifiers (like lines, etc.). There is utterly no need for every website to reinvent the wheel here. It would also enable browsers to display long URLs better. It could also reduce the amount of client JS once browsers pick up the job of executing some of the client-side interactions on very common fragment changes.
stonecharioteer 21 hours ago|||
Would this hijack the back button though? Genuinely curious if modifying the URL adds to the location history.
rossant 19 hours ago||
I think you can customize this. You can decide whether each URL changes the location history.
smrtinsert 1 day ago|||
Url state should be descriptive not prescriptive. Either way it is important. Unfortunately my experience on several teams is that businesses never care about stuff like this but users do.
zwnow 16 hours ago||
I hate sharing links that are like two pages long in WhatsApp. Simple as that. If I hit refresh on a page, I do it for a reason, and I expect to be put back at the start of the page. It's no big deal to scroll to where I was. Bloated URLs are a pain to work with too. I highly prefer clean, short links. Just store state in local storage and recover it if necessary. If the user has JS disabled, it's kinda their issue that state isn't persisted.
padolsey 1 day ago||
I agree, and this reminds me: I really wish there was better URL (and DNS) literacy amongst the mainstream 'digitally literate'. It would help reduce the risk of phishing attacks, let people observe and control state meaningful to their experience (e.g. knowing what the '?t=_' does on YouTube), encourage trimming of personal info like tracking params (e.g. utm_) before sharing, and build the understanding that the https padlock doesn't mean trusted. Generally, even the most internet-savvy age group is vastly ill-equipped.
noctune 20 hours ago||
It doesn't help that URLs are badly designed. They're a mix of leftmost- and rightmost-significant notation, so the most significant part sits in the middle of the URL and is hard to spot for someone non-technical.

Really we should be going to com.ycombinator.news/item?id=45789474 instead.

jaza 17 hours ago|||
That's how it was in the good ol Usenet days! Eg alt.tv.simpsons. Not sure how URLs ended up being the other way round.
arielcostas 19 hours ago||||
I disagree. We write left to right, so it makes sense that when the URL is essentially two parts ("external" and "internal", i.e. "place on the network" and "location on the server"), they are written left to right and separated in the middle.

Plus it would make using autocomplete way harder, since I can write "news.y" and get already suggested this site, or "red" and get reddit. If you were to change that, you'd need to type _at least_ "com.yc" to maybe get HN, unless you create your own shortcuts.

Conveniently enough, my browser displays the URL omitting the protocol (assuming HTTPS) and only shows host and port in black, and path+query+fragment

thrance 13 hours ago|||
Damn, now I want something we'll never have.
weikju 1 day ago|||
> Generally, even the most internet-savvy age group, are vastly ill-equipped.

It’s a losing battle when even the tools (web browsers hiding URLs by default, heck even Firefox on iOS does it now!) and companies (making posters with nothing more than QR codes or search terms) are what they’re up against….

Lord-Jobo 1 day ago||
And with commercial software like Outlook being so ubiquitous and absolutely HORRENDOUS with url obfuscation, formatting, “in network” contacts, and seemingly random spam filtering.

Our company does phishing tests like most, and their checklist of suspicious behavior is 1 to 1 useless. Every item on the list is either 1: something that our company actually does with its real emails or 2: useless because outlook sucks a huge wang. So I basically never open emails and report almost everything I get. I’m sure the IT department enjoys the 80% false report rate.

chaboud 1 day ago||
If the URL is your state container, it also becomes a leakage mechanism for internals that, at the very least, turns into a versioning requirement (so an old bookmark won’t break things). It also means there’s some degree of implicit assumption about browsers and passing URLs between browsers. At some point, things might not hold up (authentication workflows, for example).

That said, I agree with the point and expose as much as possible in the URL, in the same way that I expose as much as possible as command line arguments in command line utilities.

But there are costs and trade offs with that sort of accommodation. I understand that folks can make different design decisions intentionally, rather than from ignorance/inexperience.

dzhar11 1 day ago||
Recommendation:

https://github.com/Nanonid/rison

Super old, but still a very functional library for saving state as JSON in the URL without all the usual JSON clutter. I first saw it used in Elastic's Kibana. I used it on a fancy internal React dashboard project around 2016, and it worked like a charm.

Sample: http://example.com/service?query=q:'*',start:10,count:10
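Rough usage sketch, assuming the library exposes encode/decode entry points (check the repo above for the exact API and package name before relying on this):

    // Hypothetical import; the repo linked above is the source of truth.
    import rison from "rison";

    const state = { query: "*", start: 10, count: 10 };
    const encoded = rison.encode(state);   // e.g. (count:10,query:'*',start:10)
    const url = `http://example.com/service?q=${encodeURIComponent(encoded)}`;
    const roundTripped = rison.decode(encoded);   // back to the original object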

callumgare 1 day ago|
Thank you!! There’s a ton of projects where I’ve wanted something like that. I’ve previously cobbled together something ad hoc myself, but this looks way more thought out and (slightly) more standard than me making up my own thing.
Natfan 1 day ago||
RQL[0][1] or FIQL[2] might be of interest to you as well, Callum.

[0]: https://github.com/persvr/rql

[1]: https://github.com/jirutka/rsql-parser

[2]: https://datatracker.ietf.org/doc/html/draft-nottingham-atomp...

azangru 1 day ago||
> Browsers and servers impose practical limits on URL length (usually between 2,000 and 8,000 characters) but the reality is more nuanced. As this detailed Stack Overflow answer explains, limits come from a mix of browser behavior, server configurations, CDNs, and even search engine constraints. If you’re bumping against them, it’s a sign you need to rethink your approach.

So what is the reality? The linked StackOverflow answer claims that, as of 2023, it is "under 2000 characters". How much state can you fit into under 2000 characters without resorting to tricks for reducing the number of characters for different parameters? And what would a rethought approach look like?

djoldman 1 day ago|
Each of those characters (aside from domain) could be any of 66 unique ones:

   Uppercase letters: A through Z (26 characters)

   Lowercase letters: a through z (26 characters)

   Digits: 0 through 9 (10 characters)

   Special: - . _ ~ (4 characters)
So you'd get a lot of bang for your buck if you really wanted to encode a lot of information.
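For a rough sense of capacity: each of those 66 characters carries log2(66) ≈ 6 bits, so a conservative 2,000-character budget holds on the order of 12,000 bits – roughly 1.5 KB of raw information before any domain-specific encoding or compression.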
croes 1 day ago||
Unless you have some kind of mapping to encode different states with different character blocks, your possibilities are much more limited - like storing product IDs or EANs plus the number of items. Just hope the user isn’t on a shopping spree.
flexagoon 20 hours ago||
Unfortunately, too many websites use tracking parameters in URLs, so when a URL is too long I tend to assume it's tracking and just remove all the extra parameters from it when saving or sending it to anyone.

Though I guess this won't happen if it's obvious at first glance what the parameters do and that they're all just plaintext, not b64 or whatever.

vbezhenar 1 day ago||
When the system evolves, you need to change things. State structure also evolves and you will refactor and rework it. You'll rename things, move fields around.

URL is considered a permanent string. You can break it, but that's a bad thing.

So keeping state in the URL will constrain you from evolving your system. That's a bad thing.

I think, that it's more appropriate to treat URL like a protocol. You can encode some state parameters to it and you can decode URL into a state on page load. You probably could even version it, if necessary.

For very simple pages, storing entire state in the URL might work.

oceanplexian 1 day ago||
I think it depends on the permanence of the thing you’re keeping state for. For example for a blog post, you might want to keep it around for a long time.

But sometimes it’s less obvious how to keep state encoded in a URL or otherwise (i.e for the convenience of your users do you want refreshing a feed to return the user to a marker point in the feed that they were viewing? Or do you want to return to the latest point in the feed since users expect a refresh action to give them a fresh feed?).

tomtomistaken 1 day ago|||
You can always do versioning.
caseysoftware 1 day ago||
HATEOAS never gets the love it deserves until you call it something else..

Probably because it sounds like the most poorly named breakfast cereal ever.

MyOutfitIsVague 1 day ago||
From a human user perspective, HATEOAS is effectively just the web. You follow links to get where you want, and forms let you send data where you want, all traversed from some root entrypoint.

From a machine client perspective, it's a different story. JSON-LD is more-or-less HATEOAS, and it works fine for ActivityPub. It's good when you want to talk to an endpoint that you know what data you want to get from it, but don't necessarily need to know the exact shape or URLs.

When you control both the server and client, HATEOAS is extra pain for little to no benefit, especially when it's implemented poorly (i.e. when the client still needs to know the exact shape of every endpoint anyway, and HATEOAS really just makes URLs opaque), and it interacts very badly when you need to parse the URL anyway to pull parts from it or add query parameters.

stronglikedan 13 hours ago|||
> HATEOAS ... sounds like the most poorly named breakfast cereal ever.

I think of flight stick controllers.

cluckindan 1 day ago|||
This has nothing to do with HATEOAS. Well, apart from both using URLs. But HATEOAS really isn’t about storing state in URLs.
naasking 11 hours ago||
> But HATEOAS really isn’t about storing state in URLs.

I think saying they are unrelated isn't correct either. In order for hypermedia to be the engine of application state, the continuations of your application must be reified as URLs, ie. they must be stateful. This state could be stored server-side or in the URL, it doesn't matter, as URLs are only meaningful to the server that generated and interprets them.

btown 1 day ago||
I mean, at the end of the day it is a cerealization format…
cluckindan 1 day ago||
Jokes aside, the crux of HATEOAS is having a dumb frontend which just displays content and links from backend responses. All logic is on the server side. It is more like a terminal connection than a browser based application.
tsimionescu 1 day ago||
Not at all. HATEOAS is about defining data formats that the client and server agree on ahead of time.

Browsers running JavaScript referenced from HTML are a perfect example of HATEOAS: browser and web server creators agreed on the semantics of these two data formats, and now any browser in the world can talk to any web server in the world and display what was intended to be displayed to the user.

If the web design hadn't been HATEOAS, you'd need server specific code in your browser, like AOL had a long time ago, where your browser would know how to look up specific parts of the AOL site and display them. This is also how most client apps are developed, since both the client and the server are controlled by the same entity, and there is no problem in hardcoding URLs in the client.

liampulles 17 hours ago||
You are still thinking of the web as a hyperlinked collection of information serving the betterment of human knowledge, rather than a set of SPAs where you, through trial and error, try to get whatever AI-enabled product you are now forced to use to do what you ask.
mexicocitinluez 16 hours ago|
Nothing of what you said has anything to do with storing state in the URL.
liampulles 14 hours ago||
My meaning is that good URL design was more prevalent when people consciously included more links to other websites within their own. Making well-formed URLs matters if you think people are actually going to take that URL and link it somewhere. The rest of my comment is snark about SPAs, because I think they conversely do not often do URL design well (manipulating the DOM off the back of JSON REST API calls, rather than deriving the state of the page from the URL, allows one not to think about it as much as one should).

I hope that clears things up.

mrbonner 1 day ago|
I believe draw.io achieves complete state persistence solely through the URL. This allows you to effortlessly share your diagrams with others by simply providing a link that contains an embedded Base64-encoded string representing the diagram’s data. However, I’m uncertain whether this approach would qualify as a “state container” according to the definition presented in the article.