The example URL here, though, is still not (helpfully) bookmarkable because the contents of page 2 will change as new items are added. To get truly bookmarkable list URLs, the best approach I've seen is ‘page starting from item X’, where X is an effectively-unique ID for the item (e.g. a primary key, or a timestamp to avoid exposing IDs).
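A sketch of that idea, with illustrative names (none of this is from the comment): resume the list from the bookmarked anchor item rather than from an offset.

```ts
// Sketch of keyset pagination over an in-memory list; in SQL this is roughly
// WHERE (price, id) < (:afterPrice, :afterId) ORDER BY price DESC, id DESC LIMIT :n.
type Item = { id: number; price: number };

function pageAfter(items: Item[], after: { price: number; id: number } | null, size = 20): Item[] {
  const sorted = [...items].sort(
    (a, b) => b.price - a.price || b.id - a.id // price desc, id desc as a stable tiebreaker
  );
  const start = after
    ? sorted.findIndex(
        (i) => i.price < after.price || (i.price === after.price && i.id < after.id)
      )
    : 0;
  return start === -1 ? [] : sorted.slice(start, start + size);
}

// The bookmarkable URL carries the anchor item instead of a page number,
// e.g. /items?sort=price_desc&after=129.99_448828
```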
bubblyworld · 1h ago
Yeah, solving this edge case properly can add a lot of complexity (your solution has the same problem, no? Deletes would mess it up, as would updates, technically). I've seen people use long-lived "idempotency tokens" pointing to an event log for this, but it's a bit nuts. Definitely worth considering not solving it, which might be a more intuitive UX anyway (e.g. for leaderboards).
crabmusket · 9m ago
Datomic + put the database version in the URL :)
wackget · 6h ago
Dunno why you've been voted down; you're totally right. The method you mention is called token/cursor/keyset-based pagination.
nebezb · 6h ago
He’s being downvoted because suggesting cursor pagination in an example describing sorting by price (descending) is plainly wrong. While neither is bookmarkable, cursor pagination is much worse.
The UX went from “show me _almost_ the most expensive items” to “show me everything less expensive than the last item on the page I was on previously — which may be stocked out, more expensive, or heavily discounted today”. The latter isn’t something you’d bookmark.
tossandthrow · 1h ago
Well, this really depends on the intention: are you looking for the cheapest items, excluding the first 20, or are you linking to a content list?
I use Occam's razor to decide this, and conceptually it is simpler to think that you are linking to a content list - so that is likely the right answer.
the_arun · 3h ago
I cannot think of any other way to bookmark anything static unless I convert it into a PDF/screenshot before sharing. Are there better ways to bookmark a list page that guarantee the same list forever?
scarmig · 3h ago
This depends on use case and who or what is actually consuming the pages. Most of the time, humans don't actually want the same list for all time (though what follows would work for them).
The only way to have a static list is to have an identifier for the state of the list at a certain time, or a field that allows you to reconstruct the list (e.g. a timestamp). This also means you need to store your items' data so the list of items can be reconstructed. Concretely, this might mean a query parameter for the list at a certain time (time=Xyz). When you paginate, either a cursor-based approach, an offset approach, or a page number approach would all work.
This is not what most human users want: they would see deleted items, wouldn't see added items, and changes to fields you might sort on wouldn't be reflected in the list ordering. But it's ideal for many automated use cases.
ETA: If you're willing to restrict users to a list of undeletable items that is always sorted, ascending, by time of item creation, you can also get by with any strategy for pagination. The last page might get new items appended, and you might get new pages, but any existing pages besides the last page won't change.
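A minimal sketch of that time=Xyz idea under the stated assumptions (item history is kept, here via a hypothetical soft-delete column):

```ts
// Reconstruct the list as it existed at the snapshot instant, then paginate.
// Any scheme (offset, page number, cursor) is stable because the input is frozen.
type Row = { id: number; createdAt: number; deletedAt: number | null };

function listAsOf(rows: Row[], time: number, page: number, size = 20): Row[] {
  const snapshot = rows
    .filter((r) => r.createdAt <= time && (r.deletedAt === null || r.deletedAt > time))
    .sort((a, b) => a.createdAt - b.createdAt); // deterministic ordering
  return snapshot.slice(page * size, (page + 1) * size);
}

// e.g. GET /items?time=1718000000&page=2 returns the same rows forever
```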
johnisgood · 1h ago
Someone said he is being downvoted for suggesting cursor-based pagination, yet one of your suggestions was the cursor-based approach, and thus I do not understand why he is being downvoted if it is a legitimate approach, which I believe it is.
I guess we would have to hear nebezb's solutions.
If you are already sorting by price and you bookmark the second page (whose items may since have shifted to the third), what would you do? I personally do not care about an item in a sorted list enough to expect a bookmarked URL to start from it, or I cannot remember when I did and why. Any ideas why one would want this? If I bookmark the second page, I know that the items on page 2 may not always be on page 2. Why would anyone expect differently? If you want to bookmark an item, just go to the product itself and bookmark that. I do not think I have ever bookmarked a specific page expecting it to never change.
oxidant · 3h ago
Not if the items change relative position over time.
bravesoul2 · 1h ago
I agree. Most people won't expect URLs to provide a Wayback Machine-style snapshot. Although you could add that as an option: "save results as link".
btown · 4h ago
> treating URL parameters as your single source of truth... a URL like /?status=active&sortField=price&sortDir=desc&page=2 tells you everything about the current view
Hard disagree that there can be a single source of truth. There are (at least) 3 levels of state for parameter control, and I don't like when libraries think they can gloss over the differences or remove this nuance from the developer:
- The "in-progress" state of the UI widgets that someone is editing (from radio buttons to characters typed in a search box)
- The "committed" state that indicates the snapshot of those parameters that is actively desired to be loaded from the server; this may be debounced, or triggered by a Search button
- The "loaded" state that indicates what was most recently loaded from the server, and which (most likely) drives the data visualized in the non-parameter-controlling parts of the UI
What if someone types in a search bar but then hits "next page" - do we forget what they typed? What happens if you've just committed changes to your parameters, but data subsequently loaded from a prior commit? Do changes fire in sequence? Should they cancel prior requests or ignore their results? What happens if someone clicks a back button while requests are inflight, or while someone's typed uncommitted values into a pre-committed search bar? How do you visualize the loaded parameters as distinct from the in-progress parameters? What if some queries take orders of magnitude longer than others, and you want to provide guidance about this?
All of those questions and more will vary between applications. One size does not fit all.
If this comment resonates with you, choose and advocate for tooling that gives you the expressivity you feel in your gut that you'll need. Especially in a world of LLMs, terse syntax and implicit state management may not be worth losing that expressivity.
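To make the three levels above concrete, here is one (of many possible) shapes for that state; the names are illustrative, not a prescription:

```ts
// Draft / committed / loaded as distinct fields, so the awkward questions above
// become explicit decisions instead of accidents.
type Params = { q: string; page: number };

type ParamState = {
  draft: Params;          // "in-progress": what the widgets currently show
  committed: Params;      // snapshot we actually asked the server for
  loaded: Params | null;  // what the visible data corresponds to
};

function commit(s: ParamState): ParamState {
  // called on debounce timeout or Search button; kick off the request here
  return { ...s, committed: s.draft };
}

function onResponse(s: ParamState, requestedWith: Params): ParamState {
  // drop stale responses: only accept data for the latest committed params
  // (reference check for brevity; real code would compare values or request ids)
  return requestedWith === s.committed ? { ...s, loaded: requestedWith } : s;
}
```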
chii · 1h ago
> All of those questions and more will vary between applications. One size does not fit all.
all of those come from the fundamental "requirement" set out earlier to have no in-page state, but still require the webpage to behave as though it did.
If you remove this requirement, then it will be like how it was back in the 2000s-era web pages! And the URL does indeed contain the single source of truth: there are no in-flight requests that are not also full page reloads.
tempfile · 26m ago
The example they used for "in progress" state was form inputs. Don't you count those as in-page state?
chii · 1m ago
Until you press enter, this progress is understood to be ephemeral. It is only recently that users have been 'conditioned' to expect form inputs to be retained when they click a link, and that is because the app is trying to retain the state of ephemeral progress.
So you cannot have a webpage that is not an app but still maintains app-like behaviour. Trying to do so is a cursed problem: it might succeed with high effort, but it is ultimately not worth it.
delifue · 3h ago
Yes, the simple solution is obviously not perfect in edge cases. It's a tradeoff between simplicity and edge-case perfection.
In my opinion, the higher-priority task is to optimize the query in the backend so that it refreshes quickly. If loading is fast enough, that edge case is less likely to happen.
DimmieMan · 5h ago
The JS world leaves me more and more perplexed. There's a similar rant about forms, but why is this so hard? A huge amount of dev time is spent on being able to execute asynchronous calls to the backend seamlessly, yet pretty much every major framework has you rawdog the URL string and deal with the URLSearchParams object yourself.
Tanstack router[1] provides first-class support for not only parsing params but giving you a typed URL helper; this should be the goal for the big meta-frameworks, but even tools like SvelteKit that advertise themselves on simplicity and web standards have next to zero support.
I've seen even non-JS frameworks with like fifteen lines of documentation for half-baked search param support.
The industry would probably be better off if even a tenth of the effort that goes into doing literally anything to avoid learning the platform were spent making this (and post-redirect-get for forms) the path of least resistance for the 90% of the time search params are perfectly adequate.
I don't use HTMX but I do love how it and its community are pushing the rediscovery of how much simpler things can be.
[1] https://tanstack.com/router/latest/docs/framework/react/guid...
Nuqs[0] does a very good job of parsing and managing search params. It's a complex issue that involves serialization and deserialization, as well as throttling URL updates. It's a wonderful library. I agree, though, that it would be nice to see more native framework support for this.
Forms are also hard because they involve many different data types, client-side state, client and server validation, crossing the network boundary, contextual UI, and so on. These are not simple issues, no matter how much the average developer would love them to be. It's time we accept the problem domain as complex.
I will say that React Server Components are a huge step towards giving power back to the URL, while also allowing developers to access the full power of both the client and the server, but the community at large has deemed the mental model too complex. Notably, they enable you to build nuanced forms that work with or without JavaScript enabled, and handle crossing the boundary rather gracefully. After working with RSCs for several years now, I can't imagine going back. I've written several blog posts about them[1][2] and feel the community should invest more time into understanding their ideas.
I have a post in my drafts about how taking proper advantage of URL params (with or without RSCs) gives our UIs object permanence, and how we as web developers should be relying on them more, using them to reflect "client-side" state. Not always, but more often. But it's a hard post to finish, as communicating and crystallizing these ideas is difficult. One day I'll get it out.
[0] https://nuqs.47ng.com
[1] https://saewitz.com/server-components-give-you-optionality
[2] https://saewitz.com/the-mental-model-of-server-components
Don’t get me wrong, I never meant it was easy to solve, just that things could be better if search parameters didn’t somehow become this niche legacy thing with minimal appetite to fix.
Thanks for the point on RSC, probably the first argument I’ve heard that helps me contextualise why this extreme paradigm shift and tooling complexity is being pushed as the default.
valenterry · 2h ago
> Tanstack router[1] provides first class support for not only parsing params but giving you a typed URL helper, this should be the goal for the big meta frameworks
Let's not pretend that the Tanstack solution is good. For example: what if my form changes and a new field is added, but someone still runs the old HTML/JS and sends their form from the old code? Does Tanstack have any support to 1) detect that situation, 2) analyze/monitor/log it (for easy debugging), 3) automatically resolve it (if possible) and 4) allow custom handling where automatic resolution isn't possible?
It doesn't look like it from the documentation.
Sorry, frustration is causing me to rant here, but it's a classic frontend-world thing and it causes so much frustration. In the backend world, many (maybe even most) libraries/frameworks/protocols have built-in support for this. See GraphQL with its default values and deprecation at least, or Avro and Protobuf with their support for versions, schema history, and even automatic migration.
When will I not have to deal with that by hand in my frontend code anymore?
PaulHoule · 7h ago
This is a classic pattern of web applications from the 1990s. Works amazingly well even w/o HTMX
nosefurhairdo · 6h ago
One of the legitimate grievances of SPAs is that they made this pattern less obvious.
PaulHoule · 6h ago
I find the whole thing where you configure your web server to serve the same thing from every URL to be absolutely nerve-wracking. Not hard to do, but it's just batshit crazy and breaks the whole idea of how web crawlers are supposed to work. On the other hand, we had trouble with people (who we know want to crawl us specifically) crawling a site where you visit
http://example.com/item/448828
and it loads an SPA which in turn fetches well-structured JSON documents, with no cache, so it downloads megabytes of HTML, JavaScript, images and who knows what. And if they want to deal with the content in a structured way and put it in a database, it's already in the exact format they want. But I guess it's easier to stand up a Rube Goldberg machine and write parsers when you could look at our site in the developer tools, figure out how it works in five minutes, and just load those JSON documents into a document database and be querying right out of the gate.
eadmund · 5h ago
What I would want is to GET http://example.com/item/448828 with an Accept header of ‘application/s-expression,application/json;q=0.1’ instead of retrieving the HTML representation of the resource. HTTP is the API.
I also want http://example.com/application/with/path?and=parameters and http://example.com/application to return Link headers with rel=canonical appropriately.
I’d also like world peace.
This is a great pattern to follow, and I highly recommend understanding it even to those working on projects that are full client-side SPAs.
It's too easy to jump right into React, Next.js, etc. Learning why URLs work the way they do is still extremely useful. Eventually you will want to support deep linking, and when you do, the URL suddenly becomes really important.
Remix has also been really interesting on this front. They leaned heavily into web standards, including URLs as a way of managing state.
TimTheTinker · 3h ago
I had a similar strategy when building early web apps with jQuery and ExtJS (but using the URL hash before the History API was available). Just read from location.hash during page load and write to it when the form state changes.
For more complex state (like table layout), I used to save it as a JSON object, then compress and base64 encode it, and stick it in the URL hash. Gave my users a bookmarklet that would create a shortened URL (like https://example.com/url/1afe9) from my server if they needed to share it.
https://github.com/Nanonid/rison
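A stripped-down sketch of that hash round-trip (compression omitted for brevity; today CompressionStream could fill that role):

```ts
// Round-trip complex UI state through location.hash.
type TableLayout = { cols: string[]; widths: number[] };

function saveToHash(state: TableLayout): void {
  // base64 of URI-encoded JSON; bare btoa() throws on non-Latin-1 characters
  location.hash = btoa(encodeURIComponent(JSON.stringify(state)));
}

function loadFromHash(): TableLayout | null {
  try {
    return JSON.parse(decodeURIComponent(atob(location.hash.slice(1))));
  } catch {
    return null; // empty or malformed hash
  }
}
```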
sghiassy · 3h ago
It’s still unfortunately contentious in many development circles :/
https://www.lexo.ch/blog/2025/01/highlight-text-on-page-and-...
In any case, yeah, what was suggested in the submission is nothing esoteric, but I guess everything can be new to someone.
vyrotek · 4h ago
The syncing of the state reminded me a lot of Datastar Signals. And a little bit of ASP.NET ViewState.
https://data-star.dev/guide/reactive_signals
Makes sense, HTMX and Datastar both have a common philosophical ancestor in HATEOAS
bravesoul2 · 1h ago
Please do! Dropping deep links with state is very useful.
It can't be used for everything, though. E.g. not dark mode!
Gilipe · 6h ago
Our dealership listings page is largely run with this pattern, as well as most of our plugins. Nothing new and very dependable. Forgo HTMX for one less dependency.
cloudking · 6h ago
I like the simplicity. I've been building some web apps with Alpine.js recently, another lightweight React alternative. It's pretty powerful and capable for building reactive SPAs, at only ~16kb.
https://alpinejs.dev/
https://github.com/alpinejs/alpine
https://alpine-ajax.js.org/
On a related note, I've found combining htmx with Parsley[0] to be very powerful for adding client-side validation to declarative server-rendered HTML form definitions. All that is needed is a simple htmx extension[1] and applicable data attribute use.
0 - https://parsleyjs.org/doc/index.html
1 - https://htmx.org/docs/#extensions
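I can't vouch for the exact extension wiring, but the core idea, gating htmx requests on Parsley validation via htmx's documented htmx:confirm event, might look roughly like this (jQuery and Parsley assumed as page globals):

```ts
declare const $: any; // jQuery global; Parsley attaches its API to it

// Hold each htmx-issued request until the enclosing form passes validation.
document.body.addEventListener("htmx:confirm", (evt) => {
  const e = evt as CustomEvent<{ issueRequest: () => void }>;
  const form = (e.target as Element | null)?.closest?.("form");
  if (!form) return;                    // not a form submission: let it proceed
  e.preventDefault();                   // pause the request
  if ($(form).parsley().validate()) {   // renders inline errors when invalid
    e.detail.issueRequest();            // htmx's hook to resume the request
  }
});
```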
Ideally, this is how state management should work all the time, regardless. Holding too much state server-side breaks bookmarks and shares.
EDIT: Hmm. Is this comment controversial? Obviously some people disagree strongly. Mind sharing why?
rorylaitila · 7h ago
I didn't see if you were doing this, but there is an additional use case that I had when using hot swapping like HTMX: updating other links on the page to reflect the URL state, when those links are outside of the swapped content.
While the server can use the URLs to update the links in the new HTML, if other links outside that content should also reflect the changed params, you need to manually update them.
In my progressive enhancement library I call this 'sync-params' https://github.com/roryl/zsx?tab=readme-ov-file#synchronize-...
You don't need to do anything different with the other URLs on the page; by default, all parameters are passed along in every request, so you just need to retrieve any expected URL parameters in the server code.
https://htmx.org/attributes/hx-params/
Maybe, I'm not an HTMX user, but looking at hx-swap-oob I think that solves another issue. My need was when other links can exist in any place, and they need to match the URL after it's clicked. I didn't want to have the performance hit or remember to add extra swaps just to get links up to date. The feature basically is "when a param is marked to be synced, ensure all links on the page are updated to match the changed param".
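A hand-rolled sketch of that feature (the data attribute name here is made up):

```ts
// After a URL change, rewrite opted-in links so they match the current params.
function syncParams(changedKeys: string[]): void {
  const current = new URLSearchParams(location.search);
  document.querySelectorAll<HTMLAnchorElement>("a[data-sync-params]").forEach((a) => {
    const url = new URL(a.href, location.origin);
    for (const key of changedKeys) {
      const value = current.get(key);
      if (value === null) url.searchParams.delete(key);
      else url.searchParams.set(key, value);
    }
    a.href = url.toString();
  });
}

// e.g. after history.pushState(...): syncParams(["sortField", "sortDir"]);
```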
trentnix · 7h ago
I’ve been building a Golang web platform for my own web apps and I wired up toaster notifications using hx-swap-oob. I just populate a ‘notifications’ slice in my view model and hx-swap-oob makes sure my toaster messages get loaded irrespective of what content is actually being swapped.
It sounds like a similar use case to yours.
ashwinsundar · 1h ago
I have something similar setup for toast notifications in Django (Python). I have a notifications "partial" defined, which gets returned as part of an out-of-band swap by any view function that desires to use it. This includes other partials as well. It's how I chain fragment replacements together.
As an aside, I love that we can have this conversation - people in entirely different stacks can talk a similar language, through the glue of HTMX. This is why htmx is good for web development
rorylaitila · 7h ago
Gotcha, I think I looked at hx-swap-oob before for inspiration, but didn't see it working for this case, I'll look again.
PenguinCoder · 7h ago
Are you able to share the code you have for this? I have a similar use case and am using Golang and HTMX for my app.
kazinator · 2h ago
URL-driven state? Sounds like a "take that" upper cut to the jaw of the RESTful opponent.
This pattern - saving the query to the URL with the history API - is fantastic UX but never gets implemented because there’s never time. Luckily an LLM can build this quickly as it’s straightforward and mostly boilerplate.
Still the boilerplate makes me wonder if it belongs in a library, eg. a React hook that’s a drop in replacement for `useState`. Backend logic would still need to be implemented. Does something like this exist?
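Something in the spirit of what's described, as a hypothetical sketch (not an existing library):

```tsx
import { useCallback, useState } from "react";

// Drop-in-ish replacement for useState that mirrors the value into the URL.
function useUrlState(key: string, fallback: string) {
  const [value, setValue] = useState<string>(
    () => new URLSearchParams(window.location.search).get(key) ?? fallback
  );

  const set = useCallback(
    (next: string) => {
      setValue(next);
      const params = new URLSearchParams(window.location.search);
      params.set(key, next);
      // replaceState avoids spamming history; use pushState for back-button steps
      window.history.replaceState(null, "", `?${params}`);
    },
    [key]
  );

  return [value, set] as const;
}
```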
franky47 · 36m ago
> Still the boilerplate makes me wonder if it belongs in a library, eg. a React hook that’s a drop in replacement for `useState`
That’s exactly what `nuqs` does (disclaimer: I’m the author).
> Backend logic would still need to be implemented
Assuming your backend is written in TypeScript, you can use nuqs loaders to reuse the same validation logic on both sides.
https://nuqs.47ng.com
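The basic hook looks roughly like this (per the pattern in the nuqs docs):

```tsx
import { useQueryState, parseAsInteger } from "nuqs";

function Pagination() {
  // Reads ?page=... and writes it back on set; survives reload and sharing.
  const [page, setPage] = useQueryState("page", parseAsInteger.withDefault(1));
  return <button onClick={() => setPage(page + 1)}>Next (page {page})</button>;
}
```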
> is fantastic UX but never gets implemented because there’s never time
Wouldn't the change take something like an hour the first time you implement it and then 10s of seconds for calling the centralized function henceforth?
I don't think the problem is "there's never time"; and if that is the problem, I don't think an LLM will "solve" that, especially since studies have shown developers are slower when they use LLMs to code for them.
monadoid · 7h ago
Agreed I love this pattern! I'm a big fan of using nuqs for this in nextjs, but really stoked to try out rust / loco / htmx for my next project.
crab_galaxy · 6h ago
> never gets implemented because there’s never time
In my experience that time is saved and more when you find you no longer need to manage Zustand/Redux stores to track application state. This pattern works beautifully when incorporating the query parameters as query keys with TanStack Query, too.
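For instance (a sketch assuming React Router's useSearchParams; adapt to your router):

```tsx
import { useQuery } from "@tanstack/react-query";
import { useSearchParams } from "react-router-dom";

// The URL is the store; TanStack Query caches one entry per distinct param set.
function useItems() {
  const [searchParams] = useSearchParams();
  const params = Object.fromEntries(searchParams); // { status, sortField, page, ... }
  return useQuery({
    queryKey: ["items", params], // params in the key => refetch on URL change
    queryFn: () => fetch(`/api/items?${searchParams}`).then((r) => r.json()),
  });
}
```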
c-hendricks · 5h ago
At work we have a hook for that with various backends (memory, local storage, redux, and search params). Supports page, offset, and cursor pagination.
Yep, `useSearchParams()`. At work I built a wrapper to incorporate zod schemas for typesafe search param state. Nuqs is the best for this if your application meets its prerequisites: https://nuqs.47ng.com/
https://stackblitz.com/edit/github-8ssor8-rqkyew8w?file=src%...
o11c · 7h ago
Note that you can store longer state (at least 64K; more not tested) in the fragment (`location.hash`); obviously only the client gets to see this, but it's better than nothing (and JS can send it to the server if really needed).
For parameters the server does need to see, remember that params need not be &-separated kv pairs; they can be arbitrary text. Keys can usually be eliminated and just hard-coded in the web page; this may make short params longer but likely makes long ones shorter.
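For example (an illustrative scheme, not a standard):

```ts
// Key-less positional params:  /search?desc,price,2
// instead of                   /search?sortDir=desc&sortField=price&page=2
const raw = decodeURIComponent(location.search.slice(1)); // "desc,price,2"
const [sortDir, sortField, page] = raw.split(",");
```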
You absolutely should not restore state based on LocalStorage; that breaks the whole advantage of doing this properly! If I wanted previous state, I would've navigated my history thereto. I hope this isn't as bad as sites that break with multiple open tabs at least ...
NegativeLatency · 7h ago
I've seen a largish company everyone here knows of try this and have it fail, because of various weird client things, and also eventually running out of space in the hash. It's a neat hack but I wouldn't rely on it.
ashwinsundar · 1h ago
They failed as a company?
wredcoll · 5h ago
Oh boy, hashbangs are back.
The first one that comes to mind was twitter...
mediumsmart · 1h ago
I think LinkedIn won the stuffedURLbufferOverflow Olympics back in the day.
mrits · 6h ago
When people start using or buying your software you are going to quickly find out that 64K won't work