You're doing two things:
1) You're moving state into an arbitrary, untrusted, easy-to-modify location.
2) You're allowing users to “deep link” to a page deep inside some funnel that may or may not be valid, or even exist, at some future point in time, never mind skipping the messages/whatever further up.
You probably don't want to do either of those things.
I hope that clears things up.
I actually implemented a comment system where users just pick any arbitrary URL on the domain, i.e., http://exampledomain.com/, and append /@say/ to the URL along with their comment, so the URL is the UI. An example comment would be typed in the URL bar like:
http://exampledomain.com/somefolder/somepage.html/@say/Hey! Cool somepage. - Me
Then my Perl script tailing the web server log file sees the line and appends the comment "Hey! Cool somepage. - Me" to the .html file on disk.
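For anyone curious, here's a minimal sketch of the log-watching half in Node.js rather than Perl; the log path, log format, and the idea of appending a <p> tag are assumptions, not the original script:

// Minimal sketch (Node.js) of the log-watching side. The original was a Perl
// script tailing the live access log; this just scans the log file once.
const fs = require("fs");
const readline = require("readline");

const LOG_FILE = "/var/log/nginx/access.log"; // assumed log location/format

async function processLog() {
  const rl = readline.createInterface({ input: fs.createReadStream(LOG_FILE) });
  for await (const line of rl) {
    // Match requests like: GET /somefolder/somepage.html/@say/Hey!%20Cool%20somepage.%20-%20Me
    const match = line.match(/"GET ([^"]+?)\/@say\/([^" ]+)/);
    if (!match) continue;
    const page = "." + match[1];                  // e.g. ./somefolder/somepage.html
    const comment = decodeURIComponent(match[2]); // the comment typed into the URL bar
    if (fs.existsSync(page)) {
      // A real version should escape the comment before writing it into HTML.
      fs.appendFileSync(page, "\n<p class=\"comment\">" + comment + "</p>\n");
    }
  }
}

processLog();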
I just used Pako.js, which accepts a `{ dictionary: string }` option. Concatenate a bunch of common URLs together, done.
The only downside (with both our approaches) is that if you add a substantial number of new fields / common values later on, you need to update the dictionary, which breaks old URLs, so you'd need some sort of versioning scheme and to use the right dictionary for the right version.
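Roughly what that looks like with pako, with a one-character version prefix so old links keep decoding; the dictionary contents and the base64 step are just illustrative:

import pako from "pako";

// Strings that show up constantly in our URLs/state (illustrative only).
const DICTIONARIES = {
  "1": "https://example.com/search?place=&max=&sort=price&currency=USD",
};
const CURRENT_VERSION = "1";

function compressState(state) {
  const dict = DICTIONARIES[CURRENT_VERSION];
  const bytes = pako.deflateRaw(new TextEncoder().encode(JSON.stringify(state)), { dictionary: dict });
  // Prefix with the dictionary version; note this base64 is not URL-safe yet.
  return CURRENT_VERSION + btoa(String.fromCharCode(...bytes));
}

function decompressState(encoded) {
  const dict = DICTIONARIES[encoded[0]]; // pick the dictionary matching the version prefix
  const bytes = Uint8Array.from(atob(encoded.slice(1)), c => c.charCodeAt(0));
  return JSON.parse(new TextDecoder().decode(pako.inflateRaw(bytes, { dictionary: dict })));
}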
// /some/path?name=Francisco
const [name, setName] = useQuery("name");
console.log(name); // Francisco
setName('whatever');
Here's a slightly more complex example with a CodeSandbox[2]:

export default function SearchForm() {
  const [place, setPlace] = useQuery("place");
  const [max, setMax] = useQuery("max");
  return (
    <form>
      <header>
        <h1>Search Trips</h1>
        <p>Start planning your holidays on a budget</p>
      </header>
      <TextInput
        label="Location:"
        name="place"
        placeholder="Paris"
        onChange={setPlace}
        value={place}
      />
      <NumberInput
        label="Max Price ($):"
        name="max"
        placeholder="0"
        onChange={setMax}
        value={max}
      />
    </form>
  );
}
[1] https://crossroad.page/
I think the fundamental issue here is that semantics matter and URLs in isolation don't make strong enough guarantees about them.
I'm all for elegant URL design, but URLs are just one part of the puzzle.
A few years back, I built a proof-of-concept PDF data extraction utility with the following characteristic: the "recipe" for extracting data from forms (think HIPAA etc.) can be developed independently of the confidential PDFs, signed by the server, and embedded in the URL on the client side.
The client can work entirely offline (save the HTML to disk, air-gap it if you want!) off the "recipe" contained in the URL itself and process the data in WASM, all client-side. It can be trivially audited that the server never receives any confidential information, but the software is still "web-based", "browser-based", and plays nicely with the online IDE - on dummy data.
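The client-side shape of that idea, very roughly; the fragment encoding, signature scheme, and function names below are hypothetical, not how pdfrobots actually does it:

// Hypothetical sketch: pull a signed "recipe" out of the URL fragment and
// verify it locally, so extraction can then run fully offline in WASM.
async function loadRecipeFromUrl(serverPublicKey) {
  // The fragment is never sent to the server in HTTP requests.
  const encoded = location.hash.slice(1);
  const { recipe, signature } = JSON.parse(atob(encoded)); // recipe kept as a JSON string

  const data = new TextEncoder().encode(recipe); // verify the exact bytes that were signed
  const sig = Uint8Array.from(atob(signature), c => c.charCodeAt(0));

  const ok = await crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" }, // assumed signature scheme
    serverPublicKey,
    sig,
    data
  );
  if (!ok) throw new Error("Recipe signature check failed");
  return JSON.parse(recipe); // hand off to the client-side extractor
}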
Found a working demo link - nothing gets sent to the server.
https://pdfrobots.com/robot/beta/#qNkfQYfYQOTZXShZ5J0Rw5IBgB...