If server resources and scalability are a concern, speculative fetching will add more load to those resources, for fetches that may or may not ever be used. Same deal on the end user's device. That's the trade-off. Also, this is basically a Blink-only feature so far.
The article provides a script that, for Safari and Firefox, tries to replicate the pre-rendering that speculation rules provide, but it's only pre-fetching. It doesn't do the full pre-render. Rendering is often half the battle when it comes to web performance.
Another limitation is that if the page is at all dynamic, such as a shopping cart, speculation rules will have the same struggles as caching does: you may serve a stale response
pyman · 7h ago
I've seen some sites, like Amazon, calculate the probability of a user clicking a link and preload the page. This is called predictive preloading (similar to speculative fetching). It means they load or prepare certain pages or assets before you actually click, based on what you're most likely to do next.
What I like about this is that it's not a guess like the browser makes, it's based on probability and real user behaviour. The downside is the implementation cost.
Just wondering if this is something you do too.
You can do this with speculation rules too. Your speculation rules simply encode where you think the user will navigate next, based on your own analytics data (or other heuristics).
Ultimately the pros/cons are similar. You just end up with potentially better (or worse) predictions. I suspect it isn’t much better than simple heuristics such as whether a cursor is hovering over a link or a link is in a viewport. You’d probably have to have a lot of data to keep your guesses accurate
Keep in mind that this will just help with the network load piece, not so much for the rendering piece. Often rendering is actually what is slowing down most heavy frontends. Especially when the largest above-the-fold content you want to display is an image or video
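Concretely, analytics-driven rules might just enumerate the most likely next URLs. A rough sketch (the URLs and eagerness values here are made up, not a recommendation):
<script type="speculationrules">
{
  "prerender": [
    { "source": "list", "urls": ["/checkout", "/bestsellers"], "eagerness": "moderate" }
  ],
  "prefetch": [
    { "source": "document", "where": { "href_matches": "/products/*" }, "eagerness": "conservative" }
  ]
}
</script>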
duxup · 9h ago
Definitely a balancing act where you consider how much work you might trigger.
But I can think of a few places where I would use this for quality-of-life enhancements for specific clients, etc.
autoexec · 6h ago
We train users to hover over links to see where they lead before clicking, because some websites link to malicious content/domains. Now I guess some of those users will end up silently browsing to, and executing code from, those sites in the background every time they do that.
Seems like a great way to track users too. Will hovering over ads count as a click through? Should users have to worry about where their mouse rests on a page or what it passes over?
MrJohz · 4h ago
In practice, this is almost entirely going to be used for internal links within a domain - you are not going to want to prerender domains you don't control, because you can't be sure they'll be prerender-safe. And I suspect most internal navigation will be obvious to the user - it's typically clear when I'm clicking links in a nav menu, or different product pages on a shopping site. So I suspect your first issue will not come up in practice - users will typically not need to check the sorts of links that will be prerendered.
Tracking is a legitimate concern, but quite frankly that's already happening, and at a much finer, more granular level than anything this feature can provide. Theoretically, this gives the possibility to add slightly more tracking for users that disable JS, but given the small proportion of such users, and the technical hoops you'd need to jump through to get useful tracking out of this, it's almost certainly not worth it.
In the above article, Harry gives a more nuanced and specific method using data attributes to target specific anchors in the document, one reason being that you don't want to prerender login or logout pages:
<a href data-prefetch>Prefetched Link</a>
<a href data-prefetch=prerender>Prerendered Link</a>
<a href data-prefetch=false>Untouched Link</a>
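One way to wire attributes like those up to the native API could be a document rule keyed off a selector. Just a sketch, not the article's actual code (and note selector_matches is a relatively recent addition, so support may vary):
<script type="speculationrules">
{
  "prefetch": [
    { "where": { "selector_matches": "a[data-prefetch]:not([data-prefetch=false])" }, "eagerness": "moderate" }
  ],
  "prerender": [
    { "where": { "selector_matches": "a[data-prefetch=prerender]" }, "eagerness": "moderate" }
  ]
}
</script>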
Don't features like this waste bandwidth and battery on mobile devices?
MrJohz · 4h ago
The point of this proposal is to bring this feature under the control of the browser, which is probably better placed to decide when preloading/prerendering should happen. It's already possible to do something very similar using JS, but it's difficult to control globally when that happens across different sites. Whereas when this feature is built into the browser, then the browser can automatically disable prerendering when the user (for example) has low battery, or is on a metered internet connection, or if the user simply doesn't want prerendering to happen at all.
So in theory, this should actually reduce bandwidth/battery wastage, by giving more control to the browser and user, rather than individual websites.
babanooey21 · 9h ago
It preloads pages on mouse hover over the link. On mobile there are no mouse hover events, but the page can be preloaded on the "touchstart" event, which almost always results in a page visit.
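Roughly this idea, without speculation rules; a hypothetical, untested sketch:
<script>
  // On the first touch of an internal link, drop a prefetch hint for it.
  document.addEventListener('touchstart', (event) => {
    const link = event.target instanceof Element && event.target.closest('a[href^="/"]');
    if (!link) return;
    if (document.querySelector(`link[rel="prefetch"][href="${link.href}"]`)) return; // already hinted
    const hint = document.createElement('link');
    hint.rel = 'prefetch';
    hint.href = link.href;
    document.head.append(hint);
  }, { passive: true });
</script>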
radicaldreamer · 9h ago
Not to mention laptops! Loads of people use those on battery power
youngtaff · 4h ago
Putting aside the lack of hover on mobile (there are other ways to trigger it)
It’s not clear it will waste battery on mobile… not sure if it’s still the case, but mobile radios go to sleep and waking them used a non-trivial amount of energy, so preloading a page was more efficient than letting the radio go to sleep and then waking it.
Need someone who’s more informed than I am to review whether that’s still the case.
rafram · 9h ago
> This includes fetching all sub-resources like CSS, JavaScript, and images, and executing the JavaScript.
So not necessarily any website, because that could cause issues if one of the prerendered pages runs side-effectful JavaScript.
echoangle · 4h ago
Then it’s a badly designed website. GET requests (and arguably the JS delivered with a GET-requested HTML page) should be side-effect free. Side effects should come from explicit user interaction.
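In other words, destructive actions belong behind a non-GET request. A minimal sketch (the paths are made up):
<!-- Risky: a plain GET link with a side effect, which a prefetcher could trigger -->
<a href="/items/42/delete">Delete</a>

<!-- Safer: an explicit POST tied to a deliberate user action -->
<form method="post" action="/items/42/delete">
  <button type="submit">Delete</button>
</form>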
rafram · 3h ago
> and arguably the JS delivered with a GET-requested HTMl page
That's pretty hard to achieve.
zersiax · 8h ago
From the article I'd assume this wouldn't work in any way for mobile given no hover, not for screen reader users because a website often has no idea where a screen reader's cursor is, and potentially not for keyboard users (haven't checked if keyboard focus triggers this prefetch/prerender or literally just mouse hover), so ... limited applicability, I'd say.
MrJohz · 3h ago
Part of the design of the feature is that the website doesn't have to specify "on hover" or "on focus", but instead they can just write "eagerness: moderate" (or "conservative", or "eager") and let the browser decide what the exact triggers are. If it turns out that keyboard focus is a useful predictor of whether a link is likely to be clicked, then browsers can add that as a trigger at one of the indicator levels, and websites will automatically get the new behaviour "for free".
Currently, "eagerness: conservative" activates on mouse/pointer down (as opposed to mouse/pointer up, which is when navigation normally happens), which will work for mobile devices as well. And "eagerness: moderate" includes the more conservative triggers, which means that even on devices with no hover functionality, prerendering can still kick in slightly before the navigation occurs.
Imustaskforhelp · 8h ago
Maybe it's the fact that it's really easy to add something like this, and this (I think), or something that basically accomplishes the same thing (but in a better way?), is used by some major meta-frameworks like Next.js etc.
I guess it has limited applicability, but maybe it's the small gains that add up to victories. I may be going off on a tangent, but I always used to think that hardware is boring / there aren't too many optimizations, it's all transistors with and, or, not
but then.. I read about all the crazy stuff like L1 cache and the marvel of machinery that is known as computers.
It blew my mind to shreds.
Compilers are some madman's work too; the amount of optimization is just bonkers for tiny gains, but those tiny performance boosts across the whole stack make everything run so fast. It's so cool.
So I was watching this YouTube short which said this is sorta how Instagram approached a similar problem.
The Instagram founders worked at Google, and they found that if you had typed in your username, you had an 80% (or some similarly high) chance of creating an account, since
(I think) the barrier of friction has been crossed and it's all easier from there, so why bail now, why do all that effort and then leave? I'm invested in this now and I will use it.
So the Insta founders basically made it so that whenever you upload a photo, it silently uploads in the background while you write a caption for the image, which takes some time too; in that time the picture gets loaded into the database, and that's how it felt so fast compared to its peers while using the same technology.
If someone scraps the picture/story and doesn't post it, they just delete it from the system.
I will link to the YouTube short since it explains this more clearly than me, but it's really nice how connected things are: what I watched on YouTube is helping on HN.
myflash13 · 9h ago
The main issue I had with TurboLinks and similar hacks was that it broke scripts like the Stripe SDK which expected to be loaded exactly once. If you preloaded a page with the Stripe SDK and then navigated away, your browser console would become polluted with errors. I'm assuming this doesn't happen with a browser-native preloader because execution contexts are fully isolated (I would hope).
pelagicAustral · 8h ago
Man, the amount of headaches Turbo gives me... I have ended up with apps polluted with "data-turbo=false" for this exact reason... But I also admit that when it works, it's a really nice thing to have.
nickromano · 8h ago
TurboLinks only replaces the <body> so you can put any scripts you'd like loaded exactly once into the <head> tag. You can use <script async> to keep it from blocking.
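For example, something along these lines; the Stripe URL shown is just the usual v3 script, included for illustration:
<head>
  <!-- Turbolinks only swaps the <body>, so this is evaluated once and survives navigations -->
  <!-- async keeps it from blocking the initial render -->
  <script src="https://js.stripe.com/v3/" async></script>
</head>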
myflash13 · 8h ago
yeah but I needed it loaded exactly once only on certain pages and not on others.
Off-topic (semi) but I'm a big fan of Docuseal - I use them for my client-contractor agreements without any issue. Pricing is unbeatable as well; other contract-signing services have completely lost the plot.
game_the0ry · 10h ago
Wonder if this will replace how Next.js and Nuxt do optimistic pre-fetching when users hover on links.
Also brings up the questions:
- should browsers do this by default?
- if yes, would that result in too many unnecessary requests (more $$)?
Either way, good to know.
babanooey21 · 10h ago
It probably won't replace Next.js/Nuxt's pre-fetching, as such websites function as SPAs, using internal JavaScript pushState navigation, which has become standard for those frameworks.
However, Next.js pre-fetching can't perform pre-rendering on hover, which can cause a noticeable lag during navigation. The native Chrome API allows not only pre-fetching, but also pre-rendering, enabling instant page navigation.
exasperaited · 9h ago
Link prefetching is generally something you would want a website to retain control over, because it can distort stats, cause resource starvation, and even (when web developers are idiots) cause things like deletions (when a clickable link has a destructive outcome).
I am reminded of the infamous time when DHH had to have it explained to him that GET requests shouldn’t have side effects, after the Rails scaffolding generated CRUD deletes on GET requests.
https://dhh.dk/arc/000454.html
Google were not doing anything wrong here, and DHH was merely trying to deflect blame for the incompetence of the Rails design.
But the fact remains, alas, that this kind of pattern of mistakes is so common, prefetching by default has risks.
radicaldreamer · 9h ago
Imagine mousing over a delete account button and having your browser render that page and execute JS in the background.
deanebarker · 8h ago
How does this affect analytics on the hosting site? Will they get phantom requests for pages that might not ever be viewed?
mpyne · 8h ago
Yes, but in concept it's always been true that they could get page views that wouldn't be viewed, whether due to bot scraping or even sometimes when a human clicks and then just... doesn't read it.
youngtaff · 4h ago
Analytics can detect prerendered pages and choose how they report them - GA ignores them I believe
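If you're wiring analytics up yourself, the page can also check for itself. A sketch, where sendAnalytics() is a placeholder for whatever your analytics library exposes:
<script>
  // Defer the pageview until a prerendered page is actually shown to the user.
  function recordPageview() {
    sendAnalytics('pageview', location.pathname); // placeholder call
  }
  if (document.prerendering) {
    document.addEventListener('prerenderingchange', recordPageview, { once: true });
  } else {
    recordPageview();
  }
</script>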
bobro · 9h ago
Can anyone give me a sense of how much load time this will really save? How much friction is too much?
pyman · 7h ago
It depends on how heavy the assets are and the user's connection.
youngtaff · 4h ago
On the right site it can make navigation feel instant
accrual · 9h ago
This is cool but man, it feels like we're pushing more and more complexity into the browser to build webpages that work like desktop apps.
Just reading "Chrome Speculation Rules API" makes my skin crawl a bit. We already have speculative CPU instructions, now we need to speculate which pages to preload in order to help mitigate the performance issues of loading megabytes of app in the browser?
I understand the benefits and maybe this is just me yelling at clouds, but it feels crazy coming from what the web used to be.
theZilber · 8h ago
It is less about the performance issues of loading megabytes in the browser (which is also an issue). It is about those cases where a fetch request may take a noticeable amount of time just because of server distance, or because the server needs to perform some work (SSR) to create the page (sometimes from data fetched from an external API).
If you have a desktop app, it will also have to do the same work by fetching all the data it needs from the server, and it might sometimes cache some of the data locally (like the user profile etc...). This allows developers to load the data on user intent (hover, and some other configurable logic) instead of when the application is loaded (slow preload), or when the user clicks (slow response).
Even if the target page is 1 byte, the network latency alone makes things feel sluggish. This allows low-effort, fast UI with a good opinionated API.
One of the reasons I can identify Svelte sites within 5 seconds of visiting a page is that they preload on hover, and navigating between pages feels instant. This is great, and fighting against it seems unreasonable.
But I agree that in other cases, where megabytes of data need to be fetched upon navigating, using these features will probably cause more harm than good, unless applied with additional intelligent logic (if these features allow such extension).
Edit: I addressed preloading; prerendering is a whole new set of issues which I am less experienced with. Making web apps became easier, but unfortunately their having slow rendering times and other issues... well, that's a case of unmitigated tech debt that comes from making web application building more accessible.
madduci · 9h ago
The lines are more than six for Firefox, since the option is not supported.
rafram · 9h ago
Hasn't been implemented yet, but Mozilla supports this proposal and plans to implement it:
https://bugzilla.mozilla.org/show_bug.cgi?id=1969396
Does this basically replace the need for `instant.page`?
babanooey21 · 9h ago
It does, but it's currently supported only in Chromium-based browsers. Also, with pre-rendering on hover, pages are displayed instantly, unlike with instant.page, where rendering happens on link click and might take a few hundred ms before the page is displayed.
Update: Actually instant.page also uses Speculation Rules API where it's supported
dlcarrier · 9h ago
Is it typical to count a single bracket as a line?
bberenberg · 8h ago
As someone who uses Docuseal: please don't focus on this; add UX improvements for end users instead. For example, filters for who has signed things.
ozgrakkurt · 9h ago
Or just put in some effort to make things actually more efficient and don’t waste resources on the user’s machine.
ashwinsundar · 9h ago
Those aren’t mutually exclusive goals. You can serve efficient pages AND enable pre-fetch/pre-render. Let’s strive for sub-50ms load times
tonyhart7 · 9h ago
Yeah, but it's a "fake" sub-50ms load when you load it up front, before it's shown.
dlivingston · 8h ago
I guess you could call it fake or cheating, but ahead-of-time preparation of resources and state is used all the time. Speculative execution [0], DNS prefetching [1], shader pre-compilation, ... .
[0]: https://en.wikipedia.org/wiki/Speculative_execution
[1]: https://www.chromium.org/developers/design-documents/dns-pre...
Also, DNS isn't changing every second in the way your website content might need to.
Yeah, but don't you only "count" it when it shows, though?
I'm not saying it's not "valid", but when you only count it once it's shown, aren't we missing part of the reason we need a "cache" in the first place?
dietr1ch · 8h ago
Every layer underneath tries really hard to cheat and keep things usable/fast.
This includes libraries, kernels, CPUs, devices, drivers and controllers. The higher the level at which you cheat, the higher the benefits.
dietr1ch · 8h ago
Idk, if you are starting from prerender/prefetch `where href_matches "/*"` maybe you are wasting resources like you are swinging at a piñata in a different room.
This approach will just force the pre-loader/renderer/fetcher to be cautious and just prepare a couple of items (in document order unless you randomise or figure out a ranking metric) and have low hit ratios.
I think existing preloading/rendering on hover works really well on desktop, but I'm not aware of an equivalent for mobile. Maybe you can just preload visible links as there's fewer of them? But tradeoffs on mobile are beyond just latency, so it might not be worth it.
jmull · 8h ago
Not mutually exclusive, but they compete for resources.
Prefetch/prerender use server resources, which costs money. Moderate eagerness isn’t bad, but also has a small window of effect (e.g. very slow pages will still be almost as slow, unless all your users languidly hover their mouse over each link for a while).
Creating efficient pages takes time from a competent developer, which costs money upfront, but saves server resources over time.
I don’t have anything against prefetch/render, but it’s a small thing compared to efficient pages (at which point you usually don’t need it).
ashwinsundar · 7h ago
> Creating efficient pages takes time from a competent developer, which costs money upfront, but saves server resources over time.
Not trying to be a contrarian just for the sake of it, but I don't think this has to be true. Choice of technology or framework also influences how easy it is to create an efficient page, and that's a free choice one can make*
* Unless you are being forced to make framework/language/tech decisions by someone else, in which case carry on with this claim. But please don't suggest it's a universal claim
jameslk · 9h ago
“Just” is doing a lot of heavy lifting there. There’s a lot that has to go on between the backend and frontend to make modern websites with all their dynamic moving pieces, tons of video/imagery, and heavy marketing/analytics scripts run on a single thread (yes I’m aware things can load/run on other threads but the main thread is the bottleneck). Browsers are just guessing how it will all come together on every page load using heuristics to orchestrate downloading and running all the resources. Often those heuristics are wrong, but they’re the best you can get when you have such an open ended thing as the web and all the legacy it carries with it
There’s an entire field called web performance engineering, with web performance engineers as a title at many tech companies, because shaving milliseconds here and there is both very difficult but easily pays those salaries at scale
giantrobot · 8h ago
> There’s a lot that has to go on between the backend and frontend to make modern websites with all their dynamic moving pieces, tons of video/imagery, and heavy marketing/analytics scripts run on a single thread
So there's a lot going on...with absolutely terrible sites that do everything they can to be user-hostile? The poor dears! We may need to break out the electron microscope to image the teeny tiny violin I will play for them.
All of that crap is not only unnecessary it's pretty much all under the category of anti-features. It's hard to read shit moving around a page or having a video playing underneath. Giant images and autoplaying video are a waste of my resources on the client side. They drain my battery and eat into my data caps.
The easiest web performance engineering anyone can do is fire any and all marketers, managers, or executives that ask for autoplaying videos and bouncing fade-in star wipe animations as a user scrolls a page.
jameslk · 6h ago
> The easiest web performance engineering anyone can do is fire any and all marketers, managers, or executives
Your solution to web performance issues is to fire people?
giantrobot · 5h ago
When they're the cause of the web performance problems it isn't the worst idea. The individual IC trying to get a site to load in a reasonable amount of time isn't pushing for yet another tracking framework loaded from yet another CDN or just a few more auto-playing videos in a pop-over window that can only be dismissed with a sub-pixel button.
pradn · 8h ago
You can't avoid sending the user photos in a news article, for example. So the best you can do is start fetching/rendering the page 200ms early.
cs02rm0 · 9h ago
There's only so much efficiency you can squeeze out, though, if, say, you're using AWS Lambda. I can see this helping mitigate those cold start times.
CyberDildonics · 9h ago
Instead of the clickbait title, this should have just been about using preloading instead of making your page load fast in the first place.
When you go to an old page with a modern computer and internet connection, it loads instantly.
drabbiticus · 8h ago
Can someone explain how this works with links that cause changes (e.g. changing the quantity of an item in a cart, or removing an item from a cart)?
I assume you would have to tailor the prefetch/prerender targets to avoid these types of links? In other words, take some care with these specific wildcard targets in the link depending on your site?
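From the docs, the "where" clause can take exclusions, so something like this (an untested sketch, with made-up paths and a made-up opt-out attribute) might be how you carve those links out:
<script type="speculationrules">
{
  "prerender": [
    {
      "where": {
        "and": [
          { "href_matches": "/*" },
          { "not": { "href_matches": "/cart/*" } },
          { "not": { "selector_matches": "a[data-no-prerender]" } }
        ]
      },
      "eagerness": "moderate"
    }
  ]
}
</script>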
jgalt212 · 8h ago
How does this not mess up your Apache logs? They'd just show what Chrome is guessing, not what content your users are consuming.
meindnoch · 9h ago
Ah, another web standard™ from Chrome. Just what we needed!
rafram · 9h ago
Like it or not, the current web standardization process requires implementations to be shipped in multiple browsers before something can enter the spec.
epolanski · 9h ago
In canary or experimental flags.
bornfreddy · 8h ago
Multiple browser engines or just multiple browsers?
leptons · 9h ago
Internet Explorer first implemented XMLHttpRequest, and then it became a standard. Without browser makers innovating, we'd be waiting a long, long time for the W3C to make any progress, if any at all.
giantrobot · 8h ago
A majority of Chrome's "standard" pushes have the alternate use of better browser fingerprinting. A few end users might use WebMIDI to work with musical devices, but every scammy AdTech company will use it to fingerprint users. Same with most of their other "standards". It's almost like Google is an AdTech firm that happens to make a web browser.
leptons · 8h ago
I honestly don't care about fingerprinting. It's too late to worry about that, because there will always be a way to do it, to some extent. And me using a browser with WebMIDI only means I'm one of the 3.4 billion people using Chrome. There are better ways to "fingerprint" people online. Detecting browser APIs is not a particularly good one.
giantrobot · 5h ago
It's not the presence of WebMIDI that's the problem. It's the devices it can enumerate. Same with their other Web* APIs that want to enumerate devices outside the browser's normal sandboxing.