14kB is a stretch goal, though trying to stick to the first 10 packets is a cool idea. A project I like that focuses on page size is 512kb.club [1], which is like a golf score for your site's page size. My site [2] came in at just over 71 kB (for all assets) when I measured it before getting added. This project also introduced me to Cloudflare Radar [3], which includes a great tool for site analysis/page sizing, but is mainly a general dashboard for the internet.
[1] https://512kb.club/
[2] https://anderegg.ca/
[3] https://radar.cloudflare.com/
If you want to have fun with this: the initial window (IW) is determined by the sender. So you can configure your server to the right number of packets for your website. It would look something like:
ip route change default via <gw> dev <if> initcwnd 20 initrwnd 20
A web search suggests CDNs are now at 30 packets for the initial window, so you get 45kb there.
sangeeth96 · 1h ago
> A web search suggests CDNs are now at 30 packets for the initial window, so you get 45kb there.
Any reference for this?
londons_explore · 1h ago
be a bad citizen and just set it to 1000 packets... There isn't really any downside apart from potentially clogging up someone who has a dialup connection and bufferbloat.
notpushkin · 45m ago
This sounds like a terrible idea, but can anybody pinpoint why exactly?
jeroenhd · 21m ago
Anything non-standard will kill shitty middleboxes, so I assume spamming packets faster than anticipated will have corporate networks block you off as a security threat of some kind. Mobile carriers also do some weird proxying hacks to "save bandwidth", especially on <4G, so you may also break some mobile connections. I don't have any proof, but shitty middleboxes have broken connections with much less obvious protocol features.
But in practice, I think this should work most of the time for most people. On slower connections, your connection will probably crawl to a halt due to retransmission hell, though. Unless you fill up the buffers on the ISP routers, making every other connection for that visitor slow down or get dropped, too.
buckle8017 · 24m ago
Doing that would basically disable the congestion control at the start of the connection.
Which would be kinda annoying on a slow connection.
Either you'd have buffer issues or dropped packets.
susam · 2h ago
I just checked my home page [1] and it has a compressed transfer size of 7.0 kB.
Not bad, I think! I generate the blog listing on the home page (as well as the rest of my website) with my own static site generator, written in Common Lisp [2]. On a limited number of mathematical posts [3], I use KaTeX with client-side rendering. On such pages, KaTeX adds a whopping 347.5 kB!
Perhaps I should consider KaTeX server-side rendering someday! This has been a little passion project of mine since my university dorm room days. All of the HTML content, the common HTML template (for a consistent layout across pages), and the CSS are entirely handwritten. Also, I tend to be conservative about what I include on each page, which helps keep them small.
[1] https://susam.net/
[2] https://github.com/susam/susam.net/blob/main/site.lisp
[3] https://susam.net/tag/mathematics.html
You could try replacing KaTeX with MathML: https://w3c.github.io/mathml-core/
I would love to use MathML, not directly, but automatically generated from LaTeX, for I find LaTeX much easier to work with than MathML. I mean, while I am writing a mathematical post, I'd much rather write LaTeX (which is almost muscle memory for me) than write MathML (which often tends to get deeply nested and tedious to write). However, the last time I checked, the rendering quality of MathML was quite uneven across browsers, both in terms of aesthetics as well as in terms of accuracy.
For example, if you check the default demo at https://mk12.github.io/web-math-demo/ you'd notice that the contour integral sign has a much larger circle in the MathML rendering (with most default browser fonts) which is quite inconsistent with how contour integrals are actually written in print.
Even if I agree to fix the above problem by loading custom web fonts, there are numerous other edge cases (spacing within subscripts, sizing within subscripts within subscripts, etc.) that need fixing in MathML. At that point, I might as well use full KaTeX. A viable alternative is to have KaTeX or MathJax generate the HTML and CSS on the server side and send that to the client, and that's what I meant by server-side rendering in my earlier comment.
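For what it's worth, the server-side route needs very little machinery. A rough sketch using KaTeX's renderToString API (the file names and options below are placeholders for illustration, not my actual setup):
  // minimal sketch of build-time rendering with KaTeX (placeholder file names)
  import katex from "katex";
  import { readFileSync, writeFileSync } from "node:fs";
  const tex = readFileSync("formula.tex", "utf8");
  const html = katex.renderToString(tex, {
    displayMode: true,        // render as block-level math
    output: "htmlAndMathml",  // HTML for visuals, MathML for accessibility
    throwOnError: false,
  });
  writeFileSync("formula.html", html); // paste this into the generated page
The generated markup only needs katex.min.css and the KaTeX fonts on the client, so none of the client-side JavaScript has to ship.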
mk12 · 39m ago
If you want to test out some examples from your website to see how they'd look in KaTeX vs. browser MathML rendering, I made a tool for that here: https://mk12.github.io/web-math-demo/
BlackFly · 1h ago
Katex renders to MathML (either server side or client side). Generally people want a slightly more fluent way of describing an equation than is permitted by a soup of html tags. The various tex dialects (generally just referred to as latex) are the preferred methods of doing that.
mr_toad · 51m ago
Server side rendering would cut out the 277kb library. The additional MathML being sent to the client is probably going to be a fraction of that.
djoldman · 1h ago
I never understood math / latex display via client side js.
Why can't this be precomputed into html and css?
mr_toad · 46m ago
It’s a bit more work: usually you’re going to have to install Node, Babel and some other tooling, and spend some time learning to use them if you’re not already familiar with them.
VanTodi · 1h ago
Another idea would be to load the heavy library only after the initial page is done. But it still gets loaded, and it's heavy nonetheless.
Or you could create SVGs for the formulas and load them when they are in the viewport. Just my 2 cents.
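Something like an IntersectionObserver plus a dynamic import would cover both ideas. A rough sketch (the selector and module path are made-up names):
  // load the heavy renderer only when a formula placeholder scrolls into view
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      obs.unobserve(entry.target); // only trigger once per element
      import("./heavy-math.js").then((mod) => mod.render(entry.target));
    }
  });
  document.querySelectorAll(".math-placeholder").forEach((el) => observer.observe(el));
For plain <img> SVGs, the built-in loading="lazy" attribute gets you most of the way without any script.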
9dev · 3h ago
The overlap of people that don’t know what TCP Slow Start is and those that should care about their website loading a few milliseconds faster is incredibly small. A startup should focus on, well, starting up, not performance; a corporation large enough to optimise speed on that level will have a team of experienced SREs that know over which detail to obsess.
elmigranto · 3h ago
Right. That’s why all the software from, say, Microsoft works flawlessly and at peak efficiency.
SXX · 2h ago
This. It's exactly why Microsoft use modern frameworks such as React Native for their Start Menu used by billions of people every day.
Nab443 · 1h ago
And probably the reason why I have to restart it at least twice a week.
chamomeal · 27m ago
Wait… please please tell me this is a weirdly specific joke
9dev · 3h ago
That’s not what I said. Only that the responsible engineers know which tradeoffs they make, and are competent enough to do so.
samrus · 2h ago
The decision to use React for the start menu wasn't out of competency. The guy said on Twitter that that's what he knew, so he used it [1]. Didn't think twice. Head empty, no thoughts.
[1] https://x.com/philtrem22/status/1927161666732523596
It is indeed an impressive feat of engineering to make the start menu take several seconds to launch in the age of 5 GHz many-core CPUs, unlimited RAM, and multi-GByte/s SSDs. As an added bonus, I now have to re-boot every couple of days or the search function stops working completely.
ldjb · 2h ago
Please do share any evidence to the contrary, but it seems that the Tweet is not serious and is not from someone who worked on the Start Menu.
bool3max · 1h ago
No way people on HN are falling for bait Tweets. We're cooked
9dev · 1h ago
That tweet is fake, and as repeatedly stated by Microsoft engineers, the start menu is written in C# of course, the only part using React native is a promotion widget within the start menu. While even that is a strange move, all the rest is just FUD spread via social media.
the_real_cher · 1h ago
Fair warning, X has more trolls than 4chan.
mnw21cam · 2h ago
Hahaha. Keep digging.
jeroenhd · 47m ago
When your approach is "I don't care because I have more important things to focus on", you never care. There's always something you can do that's more important to a company than optimising the page load to align with the TCP window size used to access your server.
This is why almost all applications and websites are slow and terrible these days.
nasso_dev · 2h ago
I agree, it feels like it should be how you describe it.
But if Evan Wallace didn't obsess over performance when building Figma, it wouldn't be what it is today. Sometimes, performance is a feature.
austin-cheney · 1h ago
I don’t see what size of corporation has to do with performance or optimization. Almost never do I see larger businesses doing anything to execute more quickly online.
zelphirkalt · 42m ago
Too many cooks spoil the broth. If you've got multiple people pushing an agenda to use their favorite new JS framework, disregarding simplicity in order to chase some imaginary goal or hip thing to bolster their CV, it's not gonna end well.
anymouse123456 · 59m ago
This idea that performance is irrelevant gets under my skin. It's how we ended up with Docker and Kubernetes and the absolute slop stack that is destroying everything it touches.
Performance matters.
We've spent so many decades misinterpreting Knuth's quote about optimization that we've managed to chew up 5-6 orders of magnitude in hardware performance gains and still deliver slow, bloated and defective software products.
Performance does in fact matter and all other things equal, a fast product is more pleasurable than a slow one.
Thankfully some people like the folks at Figma took the risk and proved the point.
Even if we're innovating on hard technical problems (which most of us are not), performance still matters.
zelphirkalt · 47m ago
Performance matters, but at least initially only as far as it doesn't complicate your code significantly. That's why a simple static website often beats some hyper modern latest framework optimization journey websites. You gotta maintain that shit. And you are making sacrifices elsewhere, in the areas of accessibility and possibly privacy and possibly ethics.
So yeah, make sure not to lose performance unreasonably, but also don't obsess with performance to the point of making things unusable or way too complicated for what they do.
mr_toad · 38m ago
Containers were invented because VMs were too slow to cold start and used too much memory. Their whole raison d'être is performance.
andrepd · 3h ago
> a corporation large enough will have a team of experienced SREs that know over which detail to obsess.
Ahh, if only. Have you seen applications developed by large corporations lately? :)
achenet · 2h ago
a corporation large enough to have a team of experienced SREs that know which details to obsess over will also have enough promotion-hungry POs and middle managers to tell them devs to add 50MB of ads and trackers in the web page. Maybe another 100MB for an LLM wrapper too.
> ... analysis [by Cloudflare] suggests that the throttling [by Russian ISPs] allows Internet users to load only the first 16 KB of any web asset, rendering most web navigation impossible.
firecall · 3h ago
Damn... I'm at 17.2KB for my home page!
(not including dependencies)
FWIW I optimised the heck out of my personal homepage and got 100/100 for all Lighthouse scores. Which I had not previously thought possible LOL
Built in Rails too!
It's absolutely worth optimising your site though. It just is such a pleasing experience when a page loads without any perceptible lag!
apt-apt-apt-apt · 2h ago
Yeah, the fact that news.ycombinator.com loads instantly pleases my brain so much I flick it open during downtime automonkey-ly
Alifatisk · 2h ago
Lobsters, Dlang's forum and HN are some of the few places I know that load instantly, and I love it. This is how it should be!
ghoshbishakh · 3h ago
rails has nothing to do with the rendered page size though. Congrats on the perfect lighthouse score.
Alifatisk · 2h ago
Doesn't Rails' asset pipeline have an effect on the page size, like if Propshaft is being used instead of Sprockets? From what I remember, Propshaft intentionally does not include minification or compression.
Alifatisk · 2h ago
I agree with the sentiment here, the thing is, I've noticed that the newer generations are using frameworks like Next.js as default for building simple static websites. That's their bare bone start. The era of plain html + css (and maybe a sprinkle of js) feels like it's fading away, sadly.
jbreckmckye · 2h ago
I think that makes sense.
I have done the hyper optimised, inline resource, no blocking script, hand minimised JS, 14kb website thing before and the problem with doing it the "hard" way is it traps you in a design and architecture.
When your requirements change all the minimalistic choices that seemed so efficient and web-native start turning into technical debt. Everyone fantasises about "no frameworks" until the project is no longer a toy.
Whereas the isomorphic JS frameworks let you have your cake and eat it: you can start with something that spits out compiled pages and optimise it to get performant _enough_, but you can fall back to thick client JavaScript if necessary.
austin-cheney · 1h ago
You have noticed that only just recently? This has been the case since jQuery became popular before 2010.
chneu · 15m ago
Arguably it's been this way since web 2.0 became a thing in like 2008?
fleebee · 1h ago
I think you're late enough for that realization that the trend already shifted back a bit. Most frameworks I've dealt with can emit static generated sites, Next.js included. Astro feels like it's designed for that purpose from the ground up.
hackerman_fi · 2h ago
The article has IMO two flawed arguments:
1. There is math for how long it takes to send even one packet over a satellite connection (~1600ms). It's a weak argument for the 14kb rule since there is no comparison with a larger website. 10 packets won't necessarily take 16 seconds.
2. There is a mention that images on the webpage are included in this 14kb rule. In what case are images inlined into a page's initial load? If this is a special case and 99.9% of images don't follow it, it should be mentioned at the very least.
throwup238 · 1h ago
> In what case are images inlined to a page’s initial load?
Low resolution thumbnails that are blurred via CSS filters over which the real images fade in once downloaded. Done properly it usually only adds a few hundred bytes per image for above the fold images.
I don’t know if many bloggers do that, though. I do on my blog and it’s probably a feature on most blogging platforms (like Wordpress or Medium) but it’s more of a commercial frontend hyperoptimization that nudges conversions half a percentage point or so.
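In case anyone wants to try it, the mechanics are roughly this (a generic sketch, not my exact implementation; the data attribute and blur values are made up):
  // blur-up sketch: img.src is a tiny inline thumbnail, the real image swaps in on load
  function blurUp(img: HTMLImageElement): void {
    const fullSrc = img.dataset.full;       // real image URL kept in a data-full attribute
    if (!fullSrc) return;
    img.style.filter = "blur(10px)";        // soften the few-hundred-byte placeholder
    img.style.transition = "filter 0.3s";
    const full = new Image();
    full.onload = () => {
      img.src = fullSrc;                    // swap in the real image once downloaded
      img.style.filter = "none";            // and fade the blur away
    };
    full.src = fullSrc;                     // kick off the background download
  }
  document.querySelectorAll<HTMLImageElement>("img[data-full]").forEach(blurUp);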
hsbauauvhabzb · 1h ago
Also the assumption that my userbase uses low latency satellite connections, and are somehow unable to put up with my website, when every other website in current existence is multiple megabytes.
ricardobeat · 1h ago
There was no such assumption, that was just the first example after which he mentions normal roundtrip latencies are usually in the 100-300ms range.
Just because everything else is bad, doesn't invalidate the idea that you should do better. Today's internet can feel painfully slow even on a 1Gbps connection because of this; websites were actually faster in the early 2000s, during the transition to ADSL, as they still had to cater to dial-up users and were very light as a result.
xg15 · 35m ago
> Also HTTPS requires two additional round trips before it can do the first one — which gets us up to 1836ms!
Doesn't this sort of undo the entire point of the article?
If the idea was to serve the entire web page in the first roundtrip, wouldn't you have lost the moment TLS is used? Not only does the TLS handshake send lots of stuff (including the certificate) that will likely get you over the 14kb boundary before you even get the chance to send a byte of your actual content - but the handshake also includes multiple request/response exchanges between client and server, so it would require additional roundtrips even if it stayed below the 14kb boundary.
So the article's advice only holds for unencrypted plain-TCP connections, which no one would want to use today anymore.
The advice might be useful again if you use QUIC/HTTP3, because that one ditches both TLS and TCP and provides the features from both in its own thing. But then, you'd have to look up first how congestion control and bandwidth estimation works in HTTP3 and if 14kb is still the right threshold.
the_precipitate · 3h ago
And you do know that .exe file is wasteful, .com file actually saves quite a few bytes if you can limit your executable's size to be smaller than 0xFF00h (man, I am old).
cout · 2h ago
And a.out format often saves disk space over elf, despite duplicating code across executables.
simgt · 3h ago
Aside from latency, reducing resource consumption to the minimum required should always be a concern if we intend to have a sustainable future. The environmental impact of our network is not negligible. Given the snarky comments here, we clearly have a long way to go.
EDIT: some replies missed my point. I am not claiming this particular optimization is the holy grail, only that I'd have liked the added benefit of reducing energy consumption to be mentioned.
FlyingAvatar · 3h ago
The vast majority of internet bandwidth is people streaming video. Shaving a few megs from a webpage load would be the tiniest drop in the bucket.
I am all for efficiency, but optimizing everywhere is a recipe for using up the resources to actually optimize where it matters.
schiffern · 3h ago
In that spirit I have a userscript, ironically called Youtube HD[0], that with one edit sets the resolution to 'medium' ie 360p. On a laptop it's plenty for talking head content (the softening is nice actually), and I only find myself switching to 480p if there's small text on screen.
It's a small thing, but as you say internet video is relatively heavy.
To reduce my AI footprint I use the udm=14 trick[1] to kill AI in Google search. It generally gives better results too.
For general web browsing the best single tip is running uBlock Origin. If you can master medium[2] or hard mode (which will require un-breaking/whitelisting sites) it saves more bandwidth and has better privacy.[3]
To go all-out on bandwidth conservation, LocalCDN [4] and CleanURLs [5] are good. "Set it and forget it," improves privacy and load times, and saves a bit of energy.
I've been using uBlock in advanced mode with 3rd party frames and scripts blocked. I recommend it, but it is indeed a pain to find the minimum set of things you need to unblock to make a website work, involving lots of refreshing.
Once you find it for a website you can just save it though so you don't need to go through it again.
LocalCDN is indeed a nobrainer for privacy! Set and forget.
OtherShrezzing · 3h ago
> but optimizing everywhere is a recipe for using up the resources to actually optimize where it matters.
Is it? My front end engineer spending 90 minutes cutting dependencies out of the site isn’t going to deny YouTube the opportunity to improve their streaming algorithms.
josephg · 2h ago
It might do the opposite. We need to teach engineers of all stripes how to analyse and fix performance problems if we’re going to do anything about them.
molszanski · 2h ago
If you turn this into an open problem, without hypothetical limits on what a frontend engineer can do, it would become more interesting and more impactful in real life. That said, the engineer is a human being who could use that time in myriad other ways that would be more productive in helping the environment.
simgt · 2h ago
That's exactly it, but I fully expected whataboutism under my comment. If I had mentioned video streaming as a disclaimer, I'd probably have gotten crypto or Shein as counter "arguments".
Everyone needs to be aware that we are part of an environment that has limited resources beyond "money" and act accordingly, whatever the scale.
jbreckmckye · 2h ago
I feel this way sometimes about recycling. I am very diligent about it, washing out my cans and jars, separating my plastics. And then I watch my neighbour fill our bin with plastic bottles, last-season clothes and uneaten food.
oriolid · 2h ago
> The vast majority of internet bandwidth is people streaming video. Shaving a few megs from a webpage load would be the tiniest drop in the bucket.
Is it really? I was surprised to see that surfing newspaper websites or Facebook produces more traffic per unit of time than Netflix or YouTube. Of course there's a lot of embedded video in ads and it could maybe count as streaming video.
danielbln · 2h ago
Care to share that article? I find that hard to believe.
oriolid · 1h ago
No article sorry, it's just what the bandwidth display on my home router shows. I could post some screenshots but I don't care for answering to everyone who tries to debunk them. Mobile version of Facebook is by the way much better optimized than the full webpage. I guess desktop browser users are a small minority.
pyman · 2h ago
Talking about video streaming, I have a question for big tech companies:
Why?
Why are we still talking about optimising HTML, CSS and JS in 2025? This is tech from 35 years ago. Why can't browsers adopt a system like video streaming, where you "stream" a binary of your site? The server could publish a link to the uncompressed source so anyone can inspect it, keeping the spirit of the open web alive.
Do you realise how many years web developers have spent obsessing over this document-based legacy system and how to improve its performance? Not just years, their whole careers!
How many cool technologies were created in the last 35 years? I lost count.
Honestly, why are big tech companies still building on top of a legacy system, forcing web developers to waste their time on things like performance tweaks instead of focusing on what actually matters: the product.
ozim · 2h ago
I see you mistake html/css for what they were 30 years ago „documents to be viewed”.
HTML/CSS/JS is the only fully open stack, free as in beer, not owned by a single entity, and standardized by multinational standardization bodies, for building application interfaces that are cross-platform, and it does that excellently. Especially with Electron you can build native apps with HTML/CSS/JS.
There are actually web apps, not „websites", that are built. Web apps are not HTML with some jQuery sprinkled around; they are actually heavy apps.
01HNNWZ0MV43FF · 1h ago
Practically it is owned by Google, or maybe Google + Apple
ahofmann · 2h ago
1. How does that help avoid wasting resources? It needs more energy and traffic.
2. Everything in our world is dwarves standing on the shoulders of giants. To rip everything up and create something completely new is, most of the time, an idea that sounds better than it really would be. Anyone who thinks otherwise is mostly too young to see this pattern.
hnlmorg · 2h ago
That’s already how it works.
The binary is a compressed artefact and the stream is a TLS pipe. But the principle is the same.
In fact videos streams over the web are actually based on how HTTP documents are chunked and retrieved, rather than the other way around.
01HNNWZ0MV43FF · 55m ago
> Why can't browsers adopt a system like video streaming, where you "stream" a binary of your site?
I'll have to speculate what you mean
1. If you mean drawing pixels directly instead of relying on HTML, it's going to be slower. (either because of network lag or because of WASM overhead)
2. If you mean streaming video to the browser and rendering your site server-side, it will break features like resizing the window or turning a phone sideways, and it will be hideously expensive to host.
3. It will break all accessibility features like Android's built-in screen reader, because you aren't going to maintain all the screen reader and braille stuff that everyone might need server-side, and if you do, you're going to break the workflow for someone who relies on a custom tweak to it.
4. If you are drawing pixels from scratch you also have to re-implement stuff like selecting and copying text, which is possible but not feasible.
5. A really good GUI toolkit like Qt or Chromium will take 50-100 MB. Say you can trim your site's server-side toolkit down to 10 MB somehow. If you are very very lucky, you can share some of that in the browser's cache with other sites, _if_ you are using the same exact version of the toolkit, on the same CDN. Now you are locked into using a CDN. Now your website costs 10 MB for everyone loading it with a fresh cache.
You can definitely do this if your site _needs_ it. Like, you can't build OpenStreetMap without JS, you can't build chat apps without `fetch`, and there are certain things where drawing every pixel yourself and running a custom client-side GUI toolkit might make sense. But it's like 1% of sites.
I hate HTML but it's a local minimum. For animals, weight is a type of strength, for software, popularity is a type of strength. It is really hard to beat something that's installed everywhere.
vouaobrasil · 3h ago
The problem is that a lot of people DO have their own websites for which they have some control over. So it's not like a million people optimizing their own websites will have any control over what Google does with YouTube for instance...
jychang · 2h ago
A million people is a very strong political force.
A million determined voters can easily force laws to be made which forces youtube to be more efficient.
I often think about how orthodoxical all humans are. We never think about different paths outside of social norms.
- Modern western society has weakened support for mass action to the point where it is literally an unfathomable "black swan" perspective in public discourse.
- Spending a few million dollars on TV ads to get someone elected is a lot cheaper than whatever Bill Gates spends on NGOs, and for all the money he spent it seems like aid is getting cut off.
- Hiring or acting as a hitman to kill someone to achieve your goal is a lot cheaper than the other options above. It seems like this concept, for better or worse, is not quite in the public consciousness currently. The 1960s 1970s era of assassinations have truly gone and past.
vouaobrasil · 2h ago
I sort of agree...but not really, because you'll never get a situation where a million people can vote on a specific law about making YT more efficient. One needs to muster some sort of general political will to even get that to be an issue, and that takes a lot more than a million people.
Personally, if a referendum were held tomorrow to disband Google, I would vote yes for that...but good luck getting that referendum to be held.
hnlmorg · 2h ago
It matters at web scale though.
Like how industrial manufacturing is among the biggest carbon emitters, and compared to that I'm just a drop in the ocean. But that doesn't mean I don't also have a responsibility to recycle, because the cumulative effect of everyone like me recycling quickly becomes massive.
Similarly, if every web host did their bit with static content, you’d still see a big reduction at a global scale.
And you're right, it shouldn't be the end of the story. However, that doesn't mean it's a wasted effort / irrelevant optimisation.
ofalkaed · 1h ago
I feel better about limiting the size of my drop in the bucket than I would feel about just saying my drop doesn't matter, even if it doesn't matter. I get my internet through my phone's hotspot with its 15gig a month plan, and I generally don't use the entire 15gigs. My phone and laptop are pretty much the only high tech I have; my audio interface is probably third in line and my oven is probably fourth (self cleaning). The furnace stays at 50 all winter long even when it is -40 out, and if it is above freezing the furnace is turned off. Never had a car, walk and bike everywhere including groceries and laundry, have only used motorized transport maybe a dozen times in the past decade.
A nice side effect of these choices is that I only spend a small part of my pay. Never had a credit card, never had debt, just saved my money until I had enough that the purchase was no big deal.
I don't really have an issue with people who say that their drop does not matter so why should they worry, but I don't understand it; it seems like they just needlessly complicate their life. Not too long ago my neighbor was bragging about how effective all the money he spent on energy efficient windows, insulation, etc. was, since he saved loads of money that winter. But his heating bill was still nearly three times what mine was, despite him using a wood stove to offset it, and despite my house being almost the same size, barely insulated, and having 70 year old windows. I just put on a sweater instead of turning up the heat.
atoav · 3h ago
Yes but drops in the bucket count. If I take anything away from your statement, it is that people should be selective where to use videos for communications and where not.
qayxc · 3h ago
It's not low-hanging fruit, though. While you try to optimise to save a couple of mWh in power use, a single search engine query uses 100x more and an LLM chat is another 100x of that. In other words: there's bigger fish to fry. Plus caching, lazy loading etc. mitigates most of this anyway.
vouaobrasil · 3h ago
Engineering-wise, it sometimes isn't. But it does send a signal that can also become a trend in society to be more respectful of our energy usage. Sometimes, it does make sense to focus on the most visible aspect of energy usage, rather than the most intensive. Just by making your website smaller and being vocal about it, you could reach 100,000 people if you get a lot of visitors, whereas Google isn't going to give a darn about even trying to send a signal.
qayxc · 3h ago
I'd be 100% on board with you if you were able to show me a single - just a single - regular website user who'd care about energy usage of a first(!) site load.
I'm honestly just really annoyed about this "society and environment" spin on advice that would have an otherwise niche, but perfectly valid reason behind it (TFA: slow satellite network on the high seas).
This might sound harsh and I don't mean it personally, but making your website smaller and "being vocal about it" (whatever you mean by that) doesn't make an iota of difference. It also only works if your site is basically just text. If your website uses other resources (images, videos, 3D models, audio, etc.), the impact of first load is just noise anyway.
You can have a bigger impact by telling 100,000 people to drive an hour less each month and if just 1% of your hypothetical audience actually does that, you'd achieve orders of magnitude more in terms of environmental and societal impact.
vouaobrasil · 2h ago
Perhaps you are right. But I do remember one guy who had a YouTube channel and he uploaded fairly low-quality videos at a reduced framerate to achieve a high level of compression, and he explicitly put in his video that he did it to save energy.
Now, it is true that it didn't save much, because many people were probably uploading 8K videos at the time, so it was a drop in the bucket. But personally, I found it quite inspiring, and his decision was instrumental in my deciding to never upload 4K. And in general, I will say that people like that do inspire me and keep me going to be as minimal as possible when I use energy in all domains.
For me at least, trying to optimize for using as little energy as possible isn't an engineering problem. It's a challenge to do it uniformly as much as possible, so it can't be subdivided. And I do think every little bit counts, and if I can spend time making my website smaller, I'll do that in case one person gets inspired by that. It's not like I'm a machine and my only goal is time efficiency....
simgt · 2h ago
Of course, but my point is that it's still a constraint we should have in mind at every level. DuPont poisoning public water with PFAS does not make you less of an arsehole if you toss your old iPhone in a pond for the sake of convenience.
timeon · 3h ago
Sure, there are more resource-heavy places, but I think the problem is the general approach.
Neglect of performance and our overall approach to resources brought us to these resource-heavy tools.
It just seems dismissive when people point to places where bigger cuts could be made and call it a day.
If we want to really fix the places with bigger impact, we need to change this approach in the first place.
qayxc · 2h ago
Sure thing, but it's not low-hanging fruit. The impact is so minuscule that the effort required is too high when compared to the benefit.
This is micro-optimisation for a valid use case (slow connections in bandwidth-starved situations), but in the real world, a single hi-res image, short video clip, or audio sample would negate all your text-squeezing, HTTP header optimisation games, and struggle for minimalism.
So for the vast majority of use cases it's simply irrelevant. And no, your website is likely not going to get 1,000,000 unique visitors per hour so you'd have a hard time even measuring the impact whereas simply NOT ordering pizza and having a home made salad instead would have a measurable impact orders of magnitude greater.
Estimating the overall impact of your actions and non-actions is hard, but it's easier and more practical to optimise your assets, remove bloat (no megabytes of JS frameworks), and think about whether you really need that annoying full-screen video background. THOSE are low-hanging fruit with lots of impact. Trying to trim down a functional site to <14kB is NOT.
quaintdev · 2h ago
LLM companies should show how much energy was consumed processing a user's request. Maybe people would think twice before generating AI slop.
vouaobrasil · 3h ago
Absolutely agree with that. I recently visited the BBC website the other day and it loaded about 120MB of stuff into the cache - for a small text article. Not only does it use a lot of extra energy to transmit so much data, but it promotes a general atmosphere of wastefulness.
I've tried to really cut down my website as well to make it fairly minimal. And when I upload stuff to YouTube, I never use 4K, only 1080P. I think 4K and 8K video should not even exist.
A lot of people talk about adding XYZ megawatts of solar to the grid. But imagine how nice it could be if we regularly had efforts to use LESS power.
I miss the days when websites were very small in the days of 56K modems. I think there is some happy medium somewhere and we've gone way past it.
iinnPP · 2h ago
You'll find that people "stop caring" about just about anything when it starts to impact them. Personally, I agree with your statement.
Since a main argument is seemingly that AI is worse, let's remember that AI is querying these huge pages as well.
Also that the 14kb size is less than 1% of the current average mobile website payload.
lpapez · 2h ago
Being concerned about page sizes is 100% wasted effort.
Calculate how much electricity you personally consume in total browsing the Internet for a year. Multiply that by 10 to be safe.
Then compare that number to how much energy it takes to produce a single hamburger.
Do the calculation yourself if you do not believe me.
On average, we developers can make a bigger difference by choosing to eat salad one day instead of optimizing our websites for a week.
zigzag312 · 2h ago
So, anyone serious about sustainable future should stop using Python and stop recommending it as introduction to programming language? I remember one test that showed Python using 75x more energy than C to perform the same task.
mnw21cam · 2h ago
I'm just investigating why the nightly backup of the work server is taking so long. Turns out Python (as conda, anaconda, miniconda, etc.) has dumped 22 million files across the home directories, and this takes a while to just list, let alone work out which files have changed and need archiving. Most of these are duplicates of each other, and files that should really belong to the OS, like bin/curl.
I myself have installed one single package, and it installed 196,171 files in my home directory.
If that isn't gratuitous bloat, then I don't know what is.
hiAndrewQuinn · 2h ago
Do we? Let's compare some numbers.
Creating an average hamburger requires an input of 2-6 kWh of energy, from start to finish. At 15¢ USD/kWh, this gives us an upper limit of about 90¢ of electricity.
The average 14 kB web page takes about 0.000002 kWh to serve. You would need to serve that web page about 1 to 3 million times (2-6 kWh divided by 0.000002 kWh per page) to create the same energy demands as a single hamburger. A 14 MB web page, which would be a pretty heavy JavaScript app these days, would need about 1,000 to 3,000.
I think those are pretty good ways to use the energy.
swores · 44m ago
If Reddit serves 20 billion page views per month, at an average of 5MB per page (these numbers are at least in the vicinity of being right), then reducing the page size by 10% would by your maths be worth 238,000 burgers, or a 50% reduction worth almost 1.2million burgers per month. That's hardly insignificant for a single (admittedly, very popular) website!
(In addition to what justmarc said about accounting for the whole network. Plus I suspect between feeding them and the indirect effects of their contribution to climate change, I suspect you're being generous about the cost of a burger.)
justmarc · 42m ago
Slightly veering off topic but I honestly wonder how many burgers will I fry if I ask ChatGPT to make a fart app?
justmarc · 2h ago
Just wondering, how did you reach the energy calculation for serving that 14k page?
For a user's access to a random web page anywhere, assuming it's not on a CDN near the user, you're looking at ~10 routers/networks on the way involved in the connection. Did you take that into account?
ajsnigrutin · 2h ago
Now open an average news site, with 100s of request, tens of ads, autoplaying video ads, tracking pixels, etc., using gigabytes of ram and a lot of cpu.
Then multiply that by the number of daily visitors.
Without "hamburgers" (food in general), we die, reducing the size of usesless content on websites doesn't really hurt anyone.
hiAndrewQuinn · 2h ago
Now go to an average McDonalds, with hundreds of orders, automatically added value meals, customer rewards, etc. consuming thousands of cows and a lot of pastureland.
Then multiply that by the number of daily customers.
Without web pages (information in general), we return to the Dark Ages. Reducing the number of hamburgers people eat doesn't really hurt anyone.
ajsnigrutin · 2h ago
Sure, but you've got to eat something.
Now, if McDonald's padded the 5 kB of calories in a cheeseburger with 10,000 kilobytes of calories in wasted food, like news sites do, it would be a different story. The ratio would be 200 kilos of wasted food for 100 grams of usable beef.
spacephysics · 2h ago
This is one of those things that is high effort, low impact. Similar to recycling in some cities/towns where it just gets dumped in a landfill.
Instead we should be looking to nuclear power solutions for our energy needs, and not waste time with reducing website size if its purely a function of environmental impact.
presentation · 2h ago
Or we can just commit to building out solar infrastructure and not worry about this rounding error anymore
sylware · 2h ago
A country where 10 million people play their fav greedy 3D game in the evening, with state-of-the-art 400W GPUs, all at the same time...
noduerme · 2h ago
Yeah, the environmental impact of jackasses mining jackass coin, or jackasses training LLMs is not insignificant. Are you seriously telling me now that if my website is 256k or 1024k I'm responsible for destroying the planet? Take it out on your masters.
And no, reducing resource use to the minimum in the name of sustainability does not scale down the same way it scales up. You're just pushing the idea that all human activity is some sort of disease that's best disposed of. That's essentially just wishing the worst on your own species for being successful.
It's never clear to me whether people who push this line are doing so because they're bitter and want to punish other humans, or because they hate themselves. Either way, it evinces a system of thought that has already relegated humankind to the dustbin of history. If, in the long run, that's what happens, you're right and everyone else is wrong. Congratulations. It will make little difference in that case to you if the rest of us move on for a few hundred years to colonize the planets and revive the biosphere. Comfort yourself with the knowledge that this will all end in 10 or 20 thousand years, and the world will go back to being a hot hive of insects and reptiles. But what glory we wrought in our time.
simgt · 2h ago
> the environmental impact of jackasses mining jackass coin, or jackasses training LLMs is not insignificant
> You're just pushing the idea that all human activity is some sort of disease that's best disposed of. That's essentially just wishing the worst on your own species for being successful.
Every bloody mention of the environmental impact of our activities gets at least a reply like yours that ticks one of these boxes.
noduerme · 2h ago
>> Every bloody mention of the environmental impact of our activities gets at least a reply like yours that ticks one of these boxes.
That's a sweeping misunderstanding of what I wrote, so I'd ask that you re-read what I said in response to the specific quote.
noduerme · 2h ago
> the environmental impact of jackasses mining jackass coin, or jackasses training LLMs is not insignificant
(this was actually stated in agreement with the original poster, who you clearly misunderstood, so there's no "what-about" involved here. They were condemning all kinds of consumption, including the frivolous ones I mentioned).
But I'm afraid you've missed both my small point and my wider point.
My small point was to argue against the parent's comment that
>>reducing ressources consumption to the minimum required should always be a concern if we intend to have a sustainable future
I disagree with this concept on the basis that nothing can be accomplished on a large scale if the primary concern is simply to reduce resource consumption to a minimum. If you care to disagree with that, then please address it.
The larger point was that this theory leads inexorably to the idea that humans should just kill themselves or disappear; and it almost always comes from people who themselves want to kill themselves or disappear.
simgt · 1h ago
> if the primary concern is simply to reduce resource consumption to a minimum
..."required".
That allows you to fit pretty much everything in that requirement. Which actually makes my initial point a bit weak, as some would put "delivering 4K quality tiktok videos" as a requirement.
The point is that energy consumption and broad environmental impact have to be a constraint on how we design our systems (and businesses).
I stand by my accusations of whataboutism and strawmanning, though.
noduerme · 40m ago
Carelessly thrown-about accusations of whataboutism and strawmanning are an excellent example of whataboutism and strawmanning. I was making a specific point, directly to the topic, without either putting words in their mouth or addressing an unrelated issue. I'll stand by my retort.
mikl · 2h ago
How relevant is this now, if you have a modern server that supports HTTP/3?
HTTP/3 uses UDP rather than TCP, so TCP slow start should not apply at all.
A lot of people don't realize that all these so-called issues with TCP, like slow-start, Nagle, window sizes and congestion algorithms, are not there because TCP was badly designed, but rather that these are inherent problems you get when you want to create any reliable stream protocol on top of an unreliable datagram one. The advantage of QUIC is that it can multiplex multiple reliable streams while using only a single congestion window, which is a bit more optimal than having multiple TCP sockets.
One other advantage of QUIC is that you avoid some latency from the three-way handshake that is used in almost any TCP implementation. Although technically you can already send data in the first SYN packet, the three-way handshake is necessary to avoid confusion in some edge cases (like a previous TCP connection using the same source and destination ports).
hulitu · 2h ago
> How relevant is this now
Very relevant. A lot of websites need 5 to 30 seconds or more to load.
throwaway019254 · 2h ago
I have a suspicion that the 30 second loading time is not caused by TCP slow start.
ajross · 56m ago
Slow start is about saving small-integer-numbers of RTT times that the algorithm takes to ramp up to line speed. A 5-30 second load time is an order of magnitude off, and almost certainly due to simple asset size.
ksec · 3h ago
Missing 2021 in the title.
I know it is not the exact topic, but sometimes I think we don't need the fastest response time but a consistent response time. Like every single page within the site being fully rendered in exactly 1s. Nothing more, nothing less.
sangeeth96 · 1h ago
I think the advice is still very relevant though. Plus, the varying network conditions mentioned in the article would ensure it's difficult, if not impossible, to guarantee consistent response time. As someone with spotty cellular coverage, I can understand the pains of browsing when you're stuck with that.
ksec · 1h ago
Yes. I don't know how it could be achieved other than having JS render the whole thing and wait until the designated time before showing it all. And that time could be dependent on the network connection.
But this sort of goes against my no / minimal JS front end rendering philosophy.
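A crude version of that idea, purely as a sketch (the 1s target is arbitrary, and hiding content until then is exactly the kind of JS I'd rather avoid):
  // sketch: reveal the page only once both the load event and the 1s mark have passed
  const start = performance.now();
  document.documentElement.style.visibility = "hidden";
  window.addEventListener("load", () => {
    const remaining = Math.max(0, 1000 - (performance.now() - start));
    setTimeout(() => {
      document.documentElement.style.visibility = "visible";
    }, remaining);
  });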
zelphirkalt · 56m ago
My plain HTML alone is 10kB and it is mostly text. I don't think this is achievable for most sites, even the ones limiting themselves to only CSS and HTML, like mine.
3cats-in-a-coat · 55m ago
This is about your "plain HTML". If the rest is in cache, then TCP concerns are irrelevant.
MrJohz · 40m ago
Depending on who's visiting your site and how often, the rest probably isn't in cache though. If your site is a product landing page or a small blog or something else that people are rarely going to repeatedly visit, then it's probably best to assume that all your assets will need to be downloaded most of the time.
silon42 · 51m ago
You must also be careful not to generate "If-Modified-Since" or similar conditional-request checks.
smartmic · 2h ago
If I understood correctly, the rule is dependent on web server features and/or configuration. In that case, an overview of web servers which have or have not implemented the slow start algorithm would be interesting.
gammalost · 2h ago
If you care about reducing the amount of back and forth then just use QUIC.
There is an example link in the article. Listing more examples would serve no purpose apart from web design perspective
justmarc · 1h ago
Well, exactly that, I'm looking for inspiration.
palata · 4h ago
Fortunately, most websites include megabytes of bullshit, so it's not remotely a concern for them :D.
Hamuko · 3h ago
I recently used an electric car charger where the charger is controlled by a mobile app that's basically a thin wrapper over a website. Unfortunately I only had a 0.25 Mb/s Internet plan at the time and it took me several minutes just staring at the splash screen as it was downloading JavaScript and other assets. Even when I got it to load, it hadn't managed to download all fonts. Truly an eye-opening experience.
fouronnes3 · 3h ago
Why can't we just pay with a payment card at electric chargers? Drives me insane.
DuncanCoffee · 2h ago
It wasn't required by law, and the OCPP charging protocol, used to manage charge sessions at a high level between the charger and the service provider (not the vehicle), did not include payments management. Everybody just found it easier to manage payments using apps and credits. But I think Europe is going to make it mandatory soon(ish).
Hamuko · 3h ago
These chargers have an RFID tag too, but I'd forgotten it in my jacket, so it was mobile app for me.
There are some chargers that take card payments though. My local IKEA has some. There's also EU legislation to mandate payment card support.
35kB after it's uncompressed. On my end it sends 13.48kB.
LAC-Tech · 1h ago
This looks like such an interesting article, but it's completely ruined by the fact that every sentence is its own paragraph.
I swear I am not just trying to be a dick here. If I didn't think it had great content I wouldn't have commented. But I feel like I'm reading a LinkedIn post. Please join some of those sentences up into paragraphs!
youngtaff · 2h ago
It’s not really relevant in 2025…
The HTTPS negotiation is going to consume the initial roundtrips which should start increasing the size of the window
Modern CDNs start with larger initial windows and also pace the packets onto the network to reduce the chances of congestion.
There’s also a question as to how relevant the 14kb rule has ever been… HTML renders progressively so as long as there’s some meaningful content in the early packets then overall size is less important
zevv · 3h ago
And now try to load the same website over HTTPS
mrweasel · 3h ago
I know some people who are experimenting with using shorter certificates, i.e. shorter certificate chains, to reduce traffic. If you're a large enough site, then you can save a ton of traffic every day.
tech2 · 2h ago
Please though, for the love of dog, have your site serve a complete chain and don't have the browser or software stack do AIA chasing.
jeroenhd · 31m ago
With half of the web using Let's Encrypt certificates, I think it's pretty safe to assume the intermediates are in most users' caches. If you get charged out the ass for network bandwidth (i.e. you use Amazon/GCP/Azure) then you may be able to get away with shortened chains as long as you use a common CA setup. It's a hell of a footgun and will be a massive pain to debug, but it's possible as a traffic shaving measure if you don't care about serving clients that have just installed a new copy of their OS.
There are other ways you can try to optimise the certificate chain, though. For instance, you can pick a CA that uses ECC rather than RSA to make use of the much shorter key sizes. Entrust has one, I believe. Even if the root CA has an RSA key, they may still have ECC intermediates you can use.
xrisk · 3h ago
Yeah I think this computation doesn’t work anymore once you factor in the tls handshake.
aziaziazi · 3h ago
From TFA:
> Also HTTPS requires two additional round trips before it can do the first one — which gets us up to 1836ms!
supermatt · 3h ago
This hasn’t been the case since TLS1.3 (over 5 years ago) which reduced it to 1-RTT - or 0-RTT when keys are known (cached or preshared). Same with QUIC.
aziaziazi · 2h ago
Good to know. However, "when keys are known" refers to a second visit (or request) to the site, right? That isn't helpful for the first data packets - at least that's what I understand from the site.
jeroenhd · 27m ago
Without cached data from a previous visit, 1-RTT mode works even if you've never visited the site before (https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/#1-rtt-mode). It can fall back to 2-RTT if something funky happens, but that shouldn't happen in most cases.
0-RTT works after the first handshake, but enabling it allows for some forms of replay attacks so that may not be something you want to use for anything hosting an API unless you've designed your API around it.
moomoo11 · 3h ago
I’d care about this if I was selling in India or Africa.
If I’m selling to cash cows in America or Europe it’s not an issue at all.
As long as you have >10mbps download across 90% of users I think it’s better to think about making money. Besides if you don’t know that lazy loading exists in 2025 fire yourself lol.
flohofwoe · 3h ago
I wouldn't be surprised if many '3rd world' countries have better average internet speeds than some developed countries by leapfrogging older 'good enough' tech that's still dominating in the developed countries, e.g. I've been on a 16 MBit connection in Germany for a long time simply because it was mostly good enough for my internet consumption. One day my internet provider 'forcefully' upgraded me to 50 MBit because they didn't support 16 MBit anymore ;)
mrweasel · 3h ago
For the longest time I tried arguing with my ISP that I only needed around 20Mbit. They did have a 50Mbit plan at the time, but the price difference between 50, 100 and 250 meant that you basically got ripped off for anything but the 100Mbit. It's the same now: I can get 300Mbit, but the price difference between 300 and 500 is too small to be viewed as an actual saving; similarly, you can get 1000Mbit, but I don't need it and the price difference is too high.
And in the last few years, access has grown tremendously, a big part of which has been Jio's aggressive push with ultra-cheap plans.
jofzar · 3h ago
It really depends on who your clients are and where they are.
https://www.mcmaster.com/ was found last year to be doing some real magic to make it load literally as fast as possible for the crappiest computers possible.
A_D_E_P_T · 3h ago
Do you have any idea what they actually did? It would be interesting to study. That site really is blazing fast.
_nivlac_ · 52m ago
I am SO glad jofzar posted this - I remember this website but couldn't recall the company name. Here's a good video on how the site is so fast, from a frontend perspective:
Quick look: GSLB (via Akamai) for low latency, tricks like using CSS sprite to serve a single image in place of 20 or so for fewer round-trips, heavy use of caching, possibly some service worker magic but I didn't dig that far. :)
Basically, looks like someone deliberately did many right things without being lazy or cheap to create a performant web site.
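If anyone wants to poke at the service-worker part, the basic cache-first pattern is only a few lines. A generic sketch (not what McMaster actually ships; the cache name and asset list are made up):
  // generic cache-first service worker: pre-cache the shell and sprite, then serve from cache
  const ASSETS = ["/", "/styles.css", "/sprite.png"];
  self.addEventListener("install", (event: any) => {
    event.waitUntil(caches.open("static-v1").then((cache) => cache.addAll(ASSETS)));
  });
  self.addEventListener("fetch", (event: any) => {
    // answer from cache when possible, fall back to the network otherwise
    event.respondWith(
      caches.match(event.request).then((hit) => hit ?? fetch(event.request))
    );
  });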
kosolam · 3h ago
The site is very fast indeed
actionfromafar · 3h ago
I want to buy fasteners now.
kosolam · 3h ago
Fasterners, as fast as possible
mrweasel · 3h ago
Hope you're not selling to the rural US then.
masklinn · 3h ago
There's plenty of opportunities to have slow internet (and especially long roundtrips) in developed countries e.g.
- rural location
- roommate or sibling torrent-ing the shared connection into the ground
- driving around on a road with spotty coverage
- places with poor cellular coverage (some building styles are absolutely hell on cellular as well)
paales2 · 3h ago
Or maybe we shouldn't. A good experience doesn't have to load under 50ms; it is fine for it to take a second. 5G is common and people with slower connections accept longer waiting times. Optimizing is good but fixating isn't.
austin-cheney · 3h ago
It seems the better solution is to not use HTTP server software that employs this slow start concept.
Using my own server software I was able to produce a complex single page app that resembled an operating system graphical user interface and achieve full state restoration as fast as 80ms from localhost page request according to the Chrome performance tab.
mzhaase · 3h ago
TCP settings are OS level. The web server does not touch them.
austin-cheney · 3h ago
The article says this is not a TCP layer technology, but something employed by servers as a bandwidth estimating algorithm.
You are correct in that TCP packets are processed within the kernel of modern operating systems.
Edit for clarity:
This is a web server only algorithm. It is not associated with any other kind of TCP traffic. It seems from the down votes that some people found this challenging.
Even if I agree to fix the above problem by loading custom web fonts, there are numerous other edge cases (spacing within subscripts, sizing within subscripts within subscripts, etc.) that need fixing in MathML. At that point, I might as well use full KaTeX. A viable alternative is to have KaTeX or MathJax generate the HTML and CSS on the server side and send that to the client; that's what I meant by server-side rendering in my earlier comment.
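For what it's worth, a minimal sketch of that server-side approach, assuming the katex npm package and a build step of your own (the file name and formula are just placeholders):

  // build-math.js -- run at build time; inline the printed HTML into the generated page
  const katex = require("katex");
  const html = katex.renderToString("\\oint_C f(z)\\, dz", { displayMode: true });
  console.log(html);

The generated markup still needs katex.min.css (and its fonts) on the page, but the client no longer has to download or execute KaTeX's JavaScript.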
Why can't this be precomputed into html and css?
This is why almost all applications and websites are slow and terrible these days.
But if Evan Wallace didn't obsess over performance when building Figma, it wouldn't be what it is today. Sometimes, performance is a feature.
Performance matters.
We've spent so many decades misinterpreting Knuth's quote about optimization that we've managed to chew up 5-6 orders of magnitude in hardware performance gains and still deliver slow, bloated and defective software products.
Performance does in fact matter and all other things equal, a fast product is more pleasurable than a slow one.
Thankfully some people like the folks at Figma took the risk and proved the point.
Even if we're innovating on hard technical problems (which most of us are not), performance still matters.
So yeah, make sure not to lose performance unreasonably, but also don't obsess with performance to the point of making things unusable or way too complicated for what they do.
Ahh, if only. Have you seen applications developed by large corporations lately? :)
:)
> ... analysis [by Cloudflare] suggests that the throttling [by Russian ISPs] allows Internet users to load only the first 16 KB of any web asset, rendering most web navigation impossible.
FWIW I optimised the heck out of my personal homepage and got 100/100 for all Lighthouse scores. Which I had not previously thought possible LOL
Built in Rails too!
It's absolutely worth optimising your site though. It just is such a pleasing experience when a page loads without any perceptible lag!
I have done the hyper-optimised, inlined-resources, no-blocking-scripts, hand-minimised-JS, 14 kB website thing before, and the problem with doing it the "hard" way is that it traps you in a design and architecture.
When your requirements change all the minimalistic choices that seemed so efficient and web-native start turning into technical debt. Everyone fantasises about "no frameworks" until the project is no longer a toy.
Whereas the isomorphic JS frameworks let you have your cake and eat it: you can start with something that spits out compiled pages and optimise it to get performant _enough_, but you can fall back to thick client JavaScript if necessary.
1. There is math for how long it takes to send even one packet over a satellite connection (~1600 ms). It's a weak argument for the 14 kB rule, since there is no comparison with a larger website; 10 packets won't necessarily take 16 seconds.
2. There is a mention that images on a web page are included in this 14 kB rule. In what case are images inlined into a page's initial load? If this is a special case and 99.9% of images don't follow it, it should be mentioned at the very least.
Low resolution thumbnails that are blurred via CSS filters over which the real images fade in once downloaded. Done properly it usually only adds a few hundred bytes per image for above the fold images.
I don’t know if many bloggers do that, though. I do on my blog and it’s probably a feature on most blogging platforms (like Wordpress or Medium) but it’s more of a commercial frontend hyperoptimization that nudges conversions half a percentage point or so.
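A minimal sketch of that thumbnail-blur-and-fade approach, with made-up class and file names (the tiny thumbnail would normally be inlined as a base64 data URI so it costs no extra request):

  <div class="blurup">
    <img class="thumb" src="thumb-20px.jpg" alt="">
    <img class="full" src="photo-1600px.jpg" alt="A photo"
         loading="lazy" onload="this.classList.add('loaded')">
  </div>
  <style>
    .blurup { position: relative; overflow: hidden; }
    .blurup img { display: block; width: 100%; }
    .thumb { filter: blur(12px); transform: scale(1.05); } /* blur hides the low resolution; scale hides blurred edges */
    .full  { position: absolute; inset: 0; opacity: 0; transition: opacity .3s; }
    .full.loaded { opacity: 1; }
  </style>

The onload handler just fades the full image in once it has arrived; a production version would also want a no-JS fallback.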
Just because everything else is bad, doesn't invalidate the idea that you should do better. Today's internet can feel painfully slow even on a 1Gbps connection because of this; websites were actually faster in the early 2000s, during the transition to ADSL, as they still had to cater to dial-up users and were very light as a result.
Doesn't this sort of undo the entire point of the article?
If the idea was to serve the entire web page in the first roundtrip, wouldn't you have lost the moment TLS is used? Not only does the TLS handshake send lots of stuff (including the certificate) that will likely get you over the 14 kB boundary before you even get the chance to send a byte of your actual content, but the handshake also includes multiple request/response exchanges between client and server, so it would require additional roundtrips even if it stayed below the 14 kB boundary.
So the article's advice only holds for unencrypted plain-TCP connections, which no one would want to use today anymore.
The advice might be useful again if you use QUIC/HTTP3, because QUIC replaces TCP and folds the TLS handshake into its own transport. But then, you'd have to look up first how congestion control and bandwidth estimation work in HTTP/3 and whether 14 kB is still the right threshold.
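(For reference, QUIC keeps congestion control of its own: RFC 9002's recommended initial congestion window is

  min(10 * max_datagram_size, max(2 * max_datagram_size, 14720))

bytes, so the first-flight budget ends up in roughly the same ~14 kB ballpark as TCP's ten-segment initial window.)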
EDIT: some replies missed my point. I am not claiming this particular optimization is the holy grail, only that I'd have liked the added benefit of reduced energy consumption to be mentioned.
I am all for efficiency, but optimizing everywhere is a recipe for using up the resources to actually optimize where it matters.
It's a small thing, but as you say internet video is relatively heavy.
To reduce my AI footprint I use the udm=14 trick[1] to kill AI in Google search. It generally gives better results too.
For general web browsing the best single tip is running uBlock Origin. If you can master medium[2] or hard mode (which will require un-breaking/whitelisting sites) it saves more bandwidth and has better privacy.[3]
To go all-out on bandwidth conservation, LocalCDN[4] and CleanURLs[5] are good. "Set it and forget it," improves privacy and load times, and saves a bit of energy.
Sorry this got long. Cheers
[0] https://greasyfork.org/whichen/scripts/23661-youtube-hd
[1] https://arstechnica.com/gadgets/2024/05/google-searchs-udm14...
[2] https://old.reddit.com/r/uBlockOrigin/comments/1j5tktg/ubloc...
[3] https://github.com/gorhill/ublock/wiki/Blocking-mode
[4] https://www.localcdn.org/
[5] https://github.com/ClearURLs/Addon
Once you find it for a website you can just save it though so you don't need to go through it again.
LocalCDN is indeed a nobrainer for privacy! Set and forget.
Is it? My front end engineer spending 90 minutes cutting dependencies out of the site isn’t going to deny YouTube the opportunity to improve their streaming algorithms.
Everyone needs to be aware that we are part of an environment that has limited resources beyond "money" and act accordingly, whatever the scale.
Is it really? I was surprised to see that browsing newspaper websites or Facebook produces more traffic per unit of time than Netflix or YouTube. Of course there's a lot of embedded video in ads, and that could maybe count as streaming video.
HTML/CSS/JS is the only fully open stack for building application interfaces that is free as in beer, not owned by a single entity, standardized by multinational standards bodies, cross-platform, and excellent at all of that. Especially with Electron you can build native apps with HTML/CSS/JS.
A lot of what gets built these days are web apps, not "websites". Web apps are not HTML with some jQuery sprinkled around; they are genuinely heavy applications.
2. Everything in our world is dwarfs standing on the shoulders of giants. Ripping everything up and creating something completely new is, most of the time, an idea that sounds better than it really would be. Anyone who thinks otherwise is mostly too young to have seen this pattern.
The binary is a compressed artefact and the stream is a TLS pipe. But the principle is the same.
In fact, video streams over the web are actually based on how HTTP documents are chunked and retrieved, rather than the other way around.
I'll have to speculate about what you mean:
1. If you mean drawing pixels directly instead of relying on HTML, it's going to be slower (either because of network lag or because of WASM overhead).
2. If you mean streaming video to the browser and rendering your site server-side, it will break features like resizing the window or turning a phone sideways, and it will be hideously expensive to host.
3. It will break all accessibility features like Android's built-in screen reader, because you aren't going to maintain all the screen reader and braille support that everyone might need server-side, and if you do, you're going to break the workflow for someone who relies on a custom tweak to it.
4. If you are drawing pixels from scratch you also have to re-implement things like selecting and copying text, which is possible but not practical.
5. A really good GUI toolkit like Qt or Chromium will take 50-100 MB. Say you can trim your site's server-side toolkit down to 10 MB somehow. If you are very very lucky, you can share some of that in the browser's cache with other sites, _if_ you are using the same exact version of the toolkit, on the same CDN. Now you are locked into using a CDN. Now your website costs 10 MB for everyone loading it with a fresh cache.
You can definitely do this if your site _needs_ it. Like, you can't build OpenStreetMap without JS, you can't build chat apps without `fetch`, and there are certain things where drawing every pixel yourself and running a custom client-side GUI toolkit might make sense. But it's like 1% of sites.
I hate HTML but it's a local minimum. For animals, weight is a type of strength, for software, popularity is a type of strength. It is really hard to beat something that's installed everywhere.
A million determined voters can easily force laws to be made which force YouTube to be more efficient.
I often think about how orthodox we humans are. We never consider paths outside of social norms.
- Modern western society has weakened support for mass action to the point where it is literally an unfathomable "black swan" perspective in public discourse.
- Spending a few million dollars on TV ads to get someone elected is a lot cheaper than whatever Bill Gates spends on NGOs, and for all the money he spent it seems like aid is getting cut off.
- Hiring or acting as a hitman to kill someone to achieve your goal is a lot cheaper than the other options above. It seems like this concept, for better or worse, is not quite in the public consciousness currently. The 1960s-1970s era of assassinations has truly come and gone.
Personally, if a referendum were held tomorrow to disband Google, I would vote yes for that...but good luck getting that referendum to be held.
It's like how industrial manufacturers are the biggest carbon emitters and, compared to them, I'm just a drop in the ocean. But that doesn't mean I don't also have a responsibility to recycle, because the cumulative effect of everyone like me recycling quickly becomes massive.
Similarly, if every web host did their bit with static content, you’d still see a big reduction at a global scale.
And you're right, it shouldn't be the end of the story. However, that doesn't mean it's a wasted effort or an irrelevant optimisation.
A nice side effect of these choices is that I only spend a small part of my pay. Never had a credit card, never had debt, just saved my money until I had enough that the purchase was no big deal.
I don't really have an issue with people who say that their drop does not matter so why should they worry, but I don't understand it; it seems like they just needlessly complicate their lives. Not too long ago my neighbor was bragging about how effective all the money he spent on energy-efficient windows, insulation, etc. was, and how much he saved that winter. Even so, his heating bill was still nearly three times what mine was, despite him using a wood stove to offset it, with my house being almost the same size, barely insulated, and having 70-year-old windows. I just put on a sweater instead of turning up the heat.
Edit: Sorry about that sentence, not quite awake yet and doubt I will be awake enough to fix it before editing window closes.
I'm honestly just really annoyed about this "society and environment" spin on advice that would otherwise have a niche but perfectly valid reason behind it (TFA: slow satellite network on the high seas).
This might sound harsh and I don't mean it personally, but making your website smaller and "being vocal about it" (whatever you mean by that) doesn't make an iota of difference. It also only works if your site is basically just text. If your website uses other resources (images, videos, 3D models, audio, etc.), the impact of first load is just noise anyway.
You can have a bigger impact by telling 100,000 people to drive an hour less each month and if just 1% of your hypothetical audience actually does that, you'd achieve orders of magnitude more in terms of environmental and societal impact.
Now, it is true that it didn't save much, because many people were probably uploading 8K videos at the time, so it was a drop in the bucket. But personally, I found it quite inspiring, and his decision was instrumental in my deciding never to upload 4K. And in general, I will say that people like that do inspire me and keep me going to be as minimal as possible when I use energy in all domains.
For me at least, trying to optimize for using as little energy as possible isn't an engineering problem. It's a challenge to do it uniformly as much as possible, so it can't be subdivided. And I do think every little bit counts, and if I can spend time making my website smaller, I'll do that in case one person gets inspired by that. It's not like I'm a machine and my only goal is time efficiency....
If we want to really fix the places with bigger impact, we need to change this approach in the first place.
This is micro-optimisation for a valid use case (slow connections in bandwidth-starved situations), but in the real world, a single hi-res image, short video clip, or audio sample would negate all your text-squeezing, HTTP header optimisation games, and struggle for minimalism.
So for the vast majority of use cases it's simply irrelevant. And no, your website is likely not going to get 1,000,000 unique visitors per hour, so you'd have a hard time even measuring the impact, whereas simply NOT ordering pizza and having a homemade salad instead would have a measurable impact orders of magnitude greater.
Estimating the overall impact of your actions and non-actions is hard, but it's easier and more practical to optimise your assets, remove bloat (no megabytes of JS frameworks), and think about whether you really need that annoying full-screen video background. THOSE are low-hanging fruit with lots of impact. Trying to trim down a functional site to <14kB is NOT.
I've tried to really cut down my website as well to make it fairly minimal. And when I upload stuff to YouTube, I never use 4K, only 1080P. I think 4K and 8K video should not even exist.
A lot of people talk about adding XYZ megawatts of solar to the grid. But imagine how nice it could be if we regularly had efforts to use LESS power.
I miss the days when websites were very small in the days of 56K modems. I think there is some happy medium somewhere and we've gone way past it.
Since a main argument is seemingly that AI is worse, let's remember that AI is querying these huge pages as well.
Also, the 14 kB size is less than 1% of the current average mobile website payload.
Calculate how much electricity you personally consume in total browsing the Internet for a year. Multiply that by 10 to be safe.
Then compare that number to how much energy it takes to produce a single hamburger.
Do the calculation yourself if you do not believe me.
On average, we developers can make a bigger difference by choosing to eat salad one day instead of optimizing our websites for a week.
I myself have installed one single package, and it installed 196,171 files in my home directory.
If that isn't gratuitous bloat, then I don't know what is.
Creating an average hamburger requires an input of 2-6 kWh of energy, from start to finish. At 15¢ USD/kWh, this gives us an upper limit of about 90¢ of electricity.
The average 14 kB web page takes about 0.000002 kWh to serve. You would need to serve that web page about 1 to 3 million times to create the same energy demand as a single hamburger. A 14 MB web page, which would be a pretty heavy JavaScript app these days, would need about 1,000 to 3,000 loads.
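Rough arithmetic with those figures:

  2 kWh / 0.000002 kWh per load = 1,000,000 loads;  6 kWh / 0.000002 kWh = 3,000,000 loads
  14 MB page ≈ 0.002 kWh per load:  2 kWh / 0.002 kWh = 1,000 loads;  6 kWh / 0.002 kWh = 3,000 loads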
I think those are pretty good ways to use the energy.
(In addition to what justmarc said about accounting for the whole network. Plus, between feeding them and the indirect effects of their contribution to climate change, I suspect you're being generous about the cost of a burger.)
For a user's access to a random web page anywhere, assuming it's not on a CDN near the user, you're looking at ~10 routers/networks involved in the connection. Did you take that into account?
Then multiply that by the number of daily visitors.
Without "hamburgers" (food in general), we die, reducing the size of usesless content on websites doesn't really hurt anyone.
Then multiply that by the number of daily customers.
Without web pages (information in general), we return to the Dark Ages. Reducing the number of hamburgers people eat doesn't really hurt anyone.
Now, if McDonald's padded 5 kB of calories of a cheeseburger with 10,000 kilobytes of calories in wasted food like news sites do, it would be a different story. The ratio would be 200 kilos of wasted food for 100 grams of usable beef.
Instead we should be looking to nuclear power solutions for our energy needs, and not waste time with reducing website size if it's purely a function of environmental impact.
And no, reducing resource use to the minimum in the name of sustainability does not scale down the same way it scales up. You're just pushing the idea that all human activity is some sort of disease that's best disposed of. That's essentially just wishing the worst on your own species for being successful.
It's never clear to me whether people who push this line are doing so because they're bitter and want to punish other humans, or because they hate themselves. Either way, it evinces a system of thought that has already relegated humankind to the dustbin of history. If, in the long run, that's what happens, you're right and everyone else is wrong. Congratulations. It will make little difference in that case to you if the rest of us move on for a few hundred years to colonize the planets and revive the biosphere. Comfort yourself with the knowledge that this will all end in 10 or 20 thousand years, and the world will go back to being a hot hive of insects and reptiles. But what glory we wrought in our time.
Whataboutism. https://en.m.wikipedia.org/wiki/Whataboutism
> You're just pushing the idea that all human activity is some sort of disease that's best disposed of. That's essentially just wishing the worst on your own species for being successful.
Strawmanning. https://en.m.wikipedia.org/wiki/Straw_man
Every bloody mention of the environmental impact of our activities gets at least a reply like yours that ticks one of these boxes.
That's a sweeping misunderstanding of what I wrote, so I'd ask that you re-read what I said in response to the specific quote.
(this was actually stated in agreement with the original poster, who you clearly misunderstood, so there's no "what-about" involved here. They were condemning all kinds of consumption, including the frivolous ones I mentioned).
But
I'm afraid you've missed both my small point and my wider point.
My small point was to argue against the parent's comment that
>>reducing ressources consumption to the minimum required should always be a concern if we intend to have a sustainable future
I disagree with this concept on the basis that nothing can be accomplished on a large scale if the primary concern is simply to reduce resource consumption to a minimum. If you care to disagree with that, then please address it.
The larger point was that this theory leads inexorably to the idea that humans should just kill themselves or disappear; and it almost always comes from people who themselves want to kill themselves or disappear.
..."required".
That allows you to fit pretty much everything in that requirement. Which actually makes my initial point a bit weak, as some would put "delivering 4K quality tiktok videos" as a requirement.
Point is that energy consumption and broad environmental impact has to be a constraint in how we design our systems (and businesses).
I stand by my accusations of whataboutism and strawmanning, though.
HTTP/3 uses UDP rather than TCP, so TCP slow start should not apply at all.
One other advantage of QUIC is that you avoid some latency from the three-way handshake that is used in almost any TCP implementation. Although technically you can already send data in the first SYN packet, the three-way handshake is necessary to avoid confusion in some edge cases (like a previous TCP connection using the same source and destination ports).
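For the curious, that data-in-the-first-SYN mechanism is TCP Fast Open (RFC 7413); a rough way to experiment with it on Linux, assuming a curl build with TFO support:

  # allow TFO for outgoing (1) and incoming (2) connections; 3 = both
  sysctl -w net.ipv4.tcp_fastopen=3
  # ask curl to attempt a Fast Open connect (the server must support it too)
  curl --tcp-fastopen -o /dev/null https://example.com/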
Very relevant. A lot of websites need 5 to 30 seconds or more to load.
I know it is not the exact topic, but sometimes I think we don't need the fastest response time but a consistent response time. Like every single page within the site being fully rendered in exactly 1 s. Nothing more, nothing less.
But this sort of goes against my no / minimal JS front end rendering philosophy.
Would love it if someone kept a list.
https://250kb.club/
Hopefully you'll find some of them aesthetically pleasing
There are some chargers that take card payments though. My local IKEA has some. There's also EU legislation to mandate payment card support.
https://electrek.co/2023/07/11/europe-passes-two-big-laws-to...
I swear I am not just trying to be a dick here. If I didn't think it had great content I wouldn't have commented. But I feel like I'm reading a LinkedIn post. Please join some of those sentences up into paragraphs!
The HTTPS negotiation is going to consume the initial roundtrips, which should start increasing the size of the window.
Modern CDNs start with larger initial windows and also pace the packets onto the network to reduce the chances of causing congestion.
There's also a question as to how relevant the 14 kB rule has ever been… HTML renders progressively, so as long as there's some meaningful content in the early packets, the overall size is less important.
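If you're curious what your own server is doing, a couple of stock Linux tools will show it (a rough sketch; field names vary a bit between kernel and iproute2 versions):

  # initcwnd appears here only if it has been set explicitly on the route
  ip route show
  # per-connection congestion window while a transfer is in flight
  ss -tin | grep -o 'cwnd:[0-9]*'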
There are other ways you can try to optimise the certificate chain, though. For instance, you can pick a CA that uses ECC rather than RSA to make use of the much shorter key sizes. Entrust has one, I believe. Even if the root CA has an RSA key, they may still have ECC intermediates you can use.
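For example, requesting an ECC certificate usually just means generating a P-256 key and CSR along these lines (file names and the subject are placeholders; your CA's own process is what matters):

  openssl ecparam -name prime256v1 -genkey -noout -out example.com.key
  openssl req -new -key example.com.key -out example.com.csr -subj "/CN=example.com"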
> Also HTTPS requires two additional round trips before it can do the first one — which gets us up to 1836ms!
0-RTT works after the first handshake, but enabling it allows for some forms of replay attacks so that may not be something you want to use for anything hosting an API unless you've designed your API around it.
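As a concrete illustration, in nginx this is a single switch plus a variable you can forward so the backend can treat early-data requests as potentially replayed (a sketch, assuming nginx 1.15.4+ with TLS 1.3 enabled; certificate paths are placeholders):

  server {
      listen 443 ssl;
      ssl_protocols TLSv1.3;
      ssl_certificate     /etc/ssl/example.com.pem;
      ssl_certificate_key /etc/ssl/example.com.key;
      ssl_early_data on;                       # enables 0-RTT
      location / {
          proxy_pass http://backend;
          # "1" while the request arrives as early data, so the app can reject non-idempotent requests
          proxy_set_header Early-Data $ssl_early_data;
      }
  }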
If I’m selling to cash cows in America or Europe it’s not an issue at all.
As long as you have >10 Mbps download across 90% of users I think it's better to think about making money. Besides, if you don't know that lazy loading exists in 2025, fire yourself lol.
https://www.mcmaster.com/ was found last year to be doing some real magic to make it load literally as fast as possible, even on the crappiest computers.
https://youtu.be/-Ln-8QM8KhQ