Software developers should be more aware of the media layer. I appreciate the author's post about 3G/5G reliability and latency. Radio links almost always retry, and with most HTTP your packets need to arrive in order.
A single REST request is only truly a single packet if the request and response are both under ~1400 bytes. Any more than that and your "single" request is now multiple packets in each direction. Any one of them may need a retry, and they all need to arrive in order before the UI can update.
For practical experiments, try Chrome DevTools in 3G throttling mode with some packet loss; you can see even "small" optimizations improving UI responsiveness dramatically.
This is one of the most compelling reasons to make APIs and UIs as small as possible.
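To make the packet arithmetic concrete, here is a rough back-of-the-envelope sketch in JavaScript. It assumes a typical MSS of ~1460 bytes (a 1500-byte Ethernet MTU minus IP and TCP headers); real numbers vary with path MTU, TLS record overhead, and HTTP framing.

    // Rough estimate of how many TCP segments a payload needs,
    // assuming an MSS of ~1460 bytes (1500-byte MTU minus 40 bytes of IP/TCP headers).
    const MSS = 1460;
    const packetsFor = (bytes) => Math.ceil(bytes / MSS);

    console.log(packetsFor(1200));   // 1   -> fits in a single segment
    console.log(packetsFor(14000));  // 10  -> roughly the classic slow-start initial window
    console.log(packetsFor(300000)); // 206 -> several round trips before the page settles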
susam · 14h ago
I just checked my home page [1] and it has a compressed transfer size of 7.0 kB.
Not bad, I think! I generate the blog listing on the home page (as well as the rest of my website) with my own static site generator, written in Common Lisp [2]. On a limited number of mathematical posts [3], I use KaTeX with client-side rendering. On such pages, KaTeX adds a whopping 347.5 kB!
Perhaps I should consider KaTeX server-side rendering someday! This has been a little passion project of mine since my university dorm room days. All of the HTML content, the common HTML template (for a consistent layout across pages), and the CSS are entirely handwritten. Also, I tend to be conservative about what I include on each page, which helps keep them small.
Not trying to discourage you from adopting better solutions, but the delayed client-side rendering of a dynamic component like a LaTeX expression is almost (or sometimes literally) imperceptible to the average user.
There is such a thing as over-optimization. All this SEO-driven performance chasing is really only worthwhile if the thing you're building is getting click-through traffic in the millions of views.
It's a bit like worrying about the aerodynamics of a rowboat when you're the only one in it, and you're lost at sea, and you've got to fish for food and make sure the boat doesn't spring any leaks.
Yes, in the abstract, it's a worthwhile pursuit. But when you factor in the ratio of resources required vs. gain received, it is hardly ever a wise use of your energy.
welpo · 14h ago
> That said, I do use KaTeX with client-side rendering on a limited number of pages that have mathematical content
I would love to use MathML, not directly, but automatically generated from LaTeX, since I find LaTeX much easier to work with than MathML. I mean, while I am writing a mathematical post, I'd much rather write LaTeX (which is almost muscle memory for me), than write MathML (which often tends to get deeply nested and tedious to write). However, the last time I checked, the rendering quality of MathML was quite uneven across browsers, both in terms of aesthetics as well as in terms of accuracy.
For example, if you check the default demo at https://mk12.github.io/web-math-demo/ you'd notice that the contour integral sign has a much larger circle in the MathML rendering (with most default browser fonts) which is quite inconsistent with how contour integrals actually appear in print.
Even if I decide to fix the above problem by loading custom web fonts, there are numerous other edge cases (spacing within subscripts, sizing within subscripts within subscripts, etc.) that need fixing in MathML. At that point, I might as well use full KaTeX. A viable alternative is to have KaTeX or MathJax generate the HTML and CSS on the server side and send that to the client; that's what I meant by server-side rendering in my earlier comment.
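In case it is useful, a minimal sketch of that server-side step with KaTeX in Node (assuming npm install katex) could look like the following; renderToString emits plain HTML plus a MathML annotation, so the client only needs katex.css and no JavaScript.

    // Minimal sketch: render a TeX expression to HTML at build time with KaTeX.
    const katex = require('katex');

    const html = katex.renderToString('\\oint_C \\mathbf{F} \\cdot d\\mathbf{r}', {
      displayMode: true,   // block-level display math
      throwOnError: false, // render errors in red instead of throwing
    });

    console.log(html); // embed this string into the generated page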
AnotherGoodName · 9h ago
Math expressions are like regex to me nowadays. I ask the LLM coding assistant to write them and it's very, very good at it. I'll probably forget the syntax soon, but no big deal.
"MathML for {very rough textual form of the equation}" seems to give a 100% hit rate for me. Even when I want some formatting change, I can ask the LLM and it pretty much always has a solution (MathML can render symbols and subscripts in numerous ways, but the syntax is deep). It'll even add the CSS needed to change things up if asked.
BlackFly · 13h ago
KaTeX renders to MathML (either server-side or client-side). Generally people want a slightly more fluent way of describing an equation than is permitted by a soup of HTML tags. The various TeX dialects (generally just referred to as LaTeX) are the preferred way of doing that.
mr_toad · 12h ago
Server-side rendering would cut out the 277 kB library. The additional MathML sent to the client is probably going to be a fraction of that.
mk12 · 12h ago
If you want to test out some examples from your website to see how they'd look in KaTeX vs. browser MathML rendering, I made a tool for that here: https://mk12.github.io/web-math-demo/
em3rgent0rdr · 8h ago
Nice tool! It seems "New Computer Modern" is the font whose native MathML rendering looks closest to standard LaTeX output, I guess because LaTeX uses Computer Modern by default. But I notice extra space around the parentheses, which annoys me because LaTeX math lets you be so precise about how wide your spaces are (e.g. \, \: \; \!). Is there a way to get the spaces around the parentheses to be just as wide as in standard LaTeX math? And the ^ hat above f(x) isn't nicely positioned above just the top part of the f.
djoldman · 12h ago
I never understood math / latex display via client side js.
Why can't this be precomputed into html and css?
creata · 3m ago
I usually prefer compiling it to HTML or SVGs, but sometimes, if you have a lot of math on your page, bundling MathJax can take up less space. (Not sure if that'd still be true after compression.)
susam · 11h ago
> I never understood math / latex display via client side js. Why can't this be precomputed into html and css?
It can be. But like I mentioned earlier, my personal website is a hobby project I've been running since my university days. It's built with Common Lisp (CL), which is part of the fun for me. It's not just about the end result, but also about enjoying the process.
While precomputing HTML and CSS is definitely a viable approach, I've been reluctant to introduce Node or other tooling outside the CL ecosystem into this project. I wouldn't have hesitated to add this extra tooling on any other project, but here I do. I like to keep the stack simple here, since this website is not just a utility; it is also my small creative playground, and I want to enjoy whatever I do here.
whism · 7h ago
Perhaps you could stand up a small service on another host using headless Chrome or similar to render, and fall back to client-side rendering if the service is down and you don't already have the pre-rendered result stored somewhere. I suggest this only because you mentioned not wanting to pollute your current server environment, and I enjoy seeing these kinds of optimizations done :^)
dfc · 10h ago
Is it safe to say the website is your passion project?
mr_toad · 12h ago
It's a bit more work: usually you're going to have to install Node, Babel, and some other tooling, and spend some time learning to use them if you're not already familiar with them.
creata · 6m ago
For rendering math to HTML+CSS or SVGs, you can just use Node.js and MathJax. I'm not sure why you'd want Babel.
(You can probably use KaTeX, too, but I prefer the look of MathJax's output.)
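For reference, a rough sketch of that with the MathJax v3 npm package might look like this; the init/tex2svg calls follow MathJax's published Node demos, but treat it as a starting point rather than a definitive recipe.

    // Sketch: render a TeX expression to an SVG string at build time with MathJax 3.
    require('mathjax')
      .init({ loader: { load: ['input/tex', 'output/svg'] } })
      .then((MathJax) => {
        const node = MathJax.tex2svg('\\int_0^1 x^2\\,dx = \\tfrac{1}{3}', { display: true });
        // Serialize the SVG and write it into the static page.
        console.log(MathJax.startup.adaptor.outerHTML(node));
      })
      .catch((err) => console.error(err));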
marcthe12 · 9h ago
Well, there is MathML, but it had poor support in Chrome until recently. That is the web's native equation formatting.
VanTodi · 13h ago
Another idea would be to load the heavy library only after the initial page is done. But it's still loaded, and still heavy, nonetheless.
Or you could create SVGs for the formulas and load them when they come into the viewport. Just my 2 cents.
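A minimal sketch of that viewport-based loading with IntersectionObserver, assuming each formula image keeps its real URL in a data-src attribute (the native loading="lazy" attribute gets you most of the way there for plain images, too):

    // Sketch: start downloading each SVG only when it scrolls into view.
    const observer = new IntersectionObserver((entries, obs) => {
      for (const entry of entries) {
        if (!entry.isIntersecting) continue;
        const img = entry.target;
        img.src = img.dataset.src; // kick off the download now
        obs.unobserve(img);        // each image only needs this once
      }
    });

    document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));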
GavinAnderegg · 12h ago
14kB is a stretch goal, though trying to stick to the first 10 packets is a cool idea. A project I like that focuses on page size is 512kb.club [1] which is like a golf score for your site’s page size. My site [2] came in just over 71k when I measured before getting added (for all assets). This project also introduced me to Cloudflare Radar [3] which includes a great tool for site analysis/page sizing, but is mainly a general dashboard for the internet.
What are you doing with the extra 500kB for me, the user?
> 90% of the time I'm interested in text. Most of the remainder, vector graphics would suffice.
14 kB is a lot of text and graphics for a page. What is the other 500 for?
filleduchaos · 8h ago
Text, yes. Graphics? SVGs are not as small as people think, especially if they're any more complex than basic shapes, and there are plenty of things that simply cannot be represented as vector graphics anyway.
It's fair to prefer text-only pages, but the "and graphics" is quite unrealistic in my opinion.
LarMachinarum · 3h ago
How much is gained by using SVG (as opposed to a raster graphics format) varies a lot depending on the content. For some files (even ones with complex shape paths, depending on a couple of details) it can be an enormous gain, and for others it can indeed be disappointing.
That being said, while raw SVG suffers in that respect from the verbosity of the format (being XML-based and designed to be human-readable and editable as text), it would be unfair to compare, for the purpose of HTTP transmission, the size of a heavily compressed raster image with the size of an uncompressed SVG file, as one would for desktop use. SVG tends to lend itself very well to compressed transmission, even with high-performance compression algorithms like Brotli (which is supported by all relevant browsers and lots of HTTP servers), and you can use pre-compressed files (e.g. for nginx with the ngx_brotli module) so the server doesn't have to handle compression ad hoc.
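As a concrete illustration, pre-compression can be a small build step; the sketch below uses Node's built-in zlib to write .svg.br files next to the originals so a server configured for static Brotli (e.g. nginx with ngx_brotli and brotli_static on) never has to compress on the fly. The flat directory layout here is an assumption.

    // Sketch: pre-compress every SVG in the current directory with Brotli at quality 11.
    const fs = require('fs');
    const zlib = require('zlib');

    for (const file of fs.readdirSync('.').filter((f) => f.endsWith('.svg'))) {
      const raw = fs.readFileSync(file);
      const compressed = zlib.brotliCompressSync(raw, {
        params: { [zlib.constants.BROTLI_PARAM_QUALITY]: 11 },
      });
      fs.writeFileSync(file + '.br', compressed);
      console.log(file + ': ' + raw.length + ' -> ' + compressed.length + ' bytes');
    }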
mousethatroared · 4h ago
By vector graphics I meant primitive graphics.
Outside of YouTube and... Twitter? I really don't need fancy stuff. HN is literally the web ideal for me, and I'd think for most users too, if given the option.
ssernikk · 6h ago
I use it for fonts. My website [0] consists of about 15kB of compressed HTML + CSS and 200kB of fonts.
Why do I care about fonts? Honestly, if my browser had an option not to load fonts and to use my defaults to save load time, I'd choose that 19 times out of 20.
nicce · 9h ago
If you want a fancy syntax highlighter for code blocks with multiple languages on your website, that alone is about that size, e.g. the regex rules plus the regex engine.
masfuerte · 9h ago
As an end user I want a website that does the highlighting once on the back end.
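For what it's worth, a sketch of doing that once at build time with highlight.js (assuming npm install highlight.js and its v11 API) might look like this; the client then ships only a small CSS theme instead of the regex engine.

    // Sketch: highlight a code block server-side and inline the resulting HTML.
    const hljs = require('highlight.js');

    const code = 'const answer = 6 * 7;';
    const { value } = hljs.highlight(code, { language: 'javascript' });

    // `value` is plain HTML with <span class="hljs-..."> markup.
    console.log('<pre><code class="hljs">' + value + '</code></pre>');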
FlyingSnake · 10h ago
Second this. I also find 512 kB to be a more realistic benchmark, and I use it for my website.
The modern web crossed the Rubicon on 14 kB websites a long time ago.
Brajeshwar · 9h ago
512kb is pretty achievable for personal websites. My next target is to stay within 99kb (100kb as the ceiling). Should be pretty trivial on a few weekends. My website is in the Orange on 512kb.
crawshaw · 15h ago
If you want to have fun with this: the initial window (IW) is determined by the sender. So you can configure your server to the right number of packets for your website. It would look something like:
ip route change default via <gw> dev <if> initcwnd 20 initrwnd 20
A web search suggests CDNs are now at 30 packets for the initial window, so you get 45kb there.
nh2 · 2h ago
13 years ago, 10 packets was considered "cheating":
We are in a strange world today because our MTU was decided for 10 Mbps Ethernet (MTU/bandwidth on a hub controls latency). It is strange because 10 Mbps is still common for end-user network connections, while 10 Gbps is common for servers, and a good number of consumers have 1 Gbps.
That range means the MTU goes from reasonable, where you can argue that an IW of anything from 1 to 30 packets is good, to ridiculously small, where the IW is similarly absurd.
We would probably be better off if consumers on >1 Gbps links got higher MTUs; then an IW of 10-30 could be reasonable everywhere. The MTU inside cloud providers is already higher (AWS uses 9001 bytes), so it is very possible.
sangeeth96 · 13h ago
> A web search suggests CDNs are now at 30 packets for the initial window, so you get 45kb there.
I'm not going to dig it up for you, but this is in line with what I've read and observed. I set this to 20 packets on my personal site.
londons_explore · 13h ago
Be a bad citizen and just set it to 1000 packets... There isn't really any downside, apart from potentially clogging up someone on a dial-up connection, and bufferbloat.
notpushkin · 12h ago
This sounds like a terrible idea, but can anybody pinpoint why exactly?
jeroenhd · 12h ago
Anything non-standard will kill shitty middleboxes, so I assume spamming packets faster than anticipated will have corporate networks block you as a security threat of some kind. Mobile carriers also do some weird proxying hacks to "save bandwidth", especially on <4G, so you may also break some mobile connections. I don't have any proof, but shitty middleboxes have broken connections over much less obvious protocol features.
But in practice, I think this should work most of the time for most people. On slower connections, your connection will probably crawl to a halt due to retransmission hell, though. Unless you fill up the buffers on the ISP routers, making every other connection for that visitor slow down or get dropped, too.
r1ch · 8h ago
Loss-based TCP congestion control and especially slow start are a relic from the 80s when the internet was a few dialup links and collapsed due to retransmissions. If an ISP's links can't handle a 50 KB burst of traffic then they need to upgrade them. Expecting congestion should be an exception, not the default.
Disabling slow start and using BBR congestion control (which doesn't rely on packet loss as a congestion signal) makes a world of difference for TCP throughput.
buckle8017 · 12h ago
Doing that would basically disable the congestion control at the start of the connection.
Which would be kinda annoying on a slow connection.
Either you'd have buffer issues or dropped packets.
> ... analysis [by Cloudflare] suggests that the throttling [by Russian ISPs] allows Internet users to load only the first 16 KB of any web asset, rendering most web navigation impossible.
9dev · 15h ago
The overlap of people that don't know what TCP slow start is and those that should care about their website loading a few milliseconds faster is incredibly small. A startup should focus on, well, starting up, not performance; a corporation large enough to optimise speed on that level will have a team of experienced SREs that know which details to obsess over.
jeroenhd · 12h ago
When your approach is "I don't care because I have more important things to focus on", you never care. There's always something you can do that's more important to a company than optimising the page load to align with the TCP window size used to access your server.
This is why almost all applications and websites are slow and terrible these days.
sgarland · 9h ago
This. A million times this.
Performance isn’t seen as sexy, for reasons I don’t understand. Devs will be agog about how McMaster-Carr manages to make a usable and incredibly fast site, but they don’t put that same energy back into their own work.
People like responsive applications - you can't tell me you've never seen a non-tech person tapping their screen repeatedly in frustration because something is slow.
whoisyc · 6h ago
> This is why almost all applications and websites are slow and terrible these days.
The actual reason is almost always some business bullshit. Advertising trackers, analytics etc. No amount of trying to shave kilobytes off a response can save you if your boss demands you integrate code from a hundred “data partners” and auto play a marketing video.
Blaming bad web performance on programmers not going for the last 1% of optimization is like blaming climate change on Starbucks not using paper straws. More about virtue signaling than addressing the actual problem.
keysdev · 11h ago
That, and SPAs.
andix · 9h ago
SPAs are great for highly interactive pages. Something like a mail client. It's fine if it takes 2-3 seconds extra when opening the SPA, it's much more important to have instant feedback when navigating.
SPAs are really bad for mostly static websites. News sites, documentation, blogs.
marcosdumay · 8h ago
Well, half of a second is a small difference. So yeah, there will probably be better things to work on up to the point when you have people working exclusively on your site.
> This is why almost all applications and websites are slow and terrible these days.
But no, there are way more things broken on the web than lack of overoptimization.
hinkley · 7h ago
> half a second is a small difference
I don’t even know where to begin. Most of us are aiming for under a half second total for response times. Are you working on web applications at all?
marcosdumay · 6h ago
> Most of us are aiming for under a half second total for response times.
I know people working on that exist. "Most of us" absolutely are not; if there were that many, the web wouldn't be like it is now.
Anyway, most people working towards instantaneous response aren't optimizing the very-high latency case where the article may eventually get a 0.5s slowdown. And almost nobody gets to the extremely low-gain kinds of optimizations there.
Mawr · 2h ago
"More than 10 years ago, Amazon found that every 100ms of latency cost them 1% in sales. In 2006, Google found an extra .5 seconds in search page generation time dropped traffic by 20%."
elmigranto · 15h ago
Right. That’s why all the software from, say, Microsoft works flawlessly and at peak efficiency.
SXX · 14h ago
This. It's exactly why Microsoft use modern frameworks such as React Native for their Start Menu used by billions of people every day.
chamomeal · 12h ago
Wait… please please tell me this is a weirdly specific joke
kevindamm · 11h ago
Only certain live portions of it, and calling it React is a stretch but not entirely wrong:
the notion was popularized as an explanation for a CPU core spiking whenever the start menu opens on Win11
Nab443 · 12h ago
And probably the reason why I have to restart it at least twice a week.
hinkley · 7h ago
And this is why SteamOS is absolutely kicking Windows’ ass on handhelds.
9dev · 14h ago
That’s not what I said. Only that the responsible engineers know which tradeoffs they make, and are competent enough to do so.
samrus · 14h ago
The decision to use React for the Start Menu wasn't out of competency. The guy said on Twitter that that's what he knew, so he used it [1]. Didn't think twice. Head empty, no thoughts.
It is indeed an impressive feat of engineering to make the start menu take several seconds to launch in the age of 5 GHz many-core CPUs, unlimited RAM, and multi-GByte/s SSDs. As an added bonus, I now have to re-boot every couple of days or the search function stops working completely.
ldjb · 13h ago
Please do share any evidence to the contrary, but it seems that the Tweet is not serious and is not from someone who worked on the Start Menu.
bool3max · 13h ago
No way people on HN are falling for bait Tweets. We're cooked
I googled the names of the people holding the talk and they're both employed by Microsoft as software engineers; I don't see any reason to doubt what they're presenting. Not the whole Start Menu is React Native, but parts are.
hinkley · 7h ago
Why is that somehow worse?
hombre_fatal · 11h ago
"Hi it's the guy who did <thing everyone hates>" is a Twitter meme.
hinkley · 7h ago
Orange Cat Programmer.
9dev · 13h ago
That tweet is fake. As repeatedly stated by Microsoft engineers, the Start Menu is of course written in C#; the only part using React Native is a promotional widget within the Start Menu. While even that is a strange move, all the rest is just FUD spread via social media.
the_real_cher · 13h ago
Fair warning, X has more trolls than 4chan.
Henchman21 · 9h ago
Please, it has more trolls than Middle Earth
nasso_dev · 14h ago
I agree, it feels like it should be how you describe it.
But if Evan Wallace didn't obsess over performance when building Figma, it wouldn't be what it is today. Sometimes, performance is a feature.
andersmurphy · 10h ago
Doesn't have to be a choice; it could just be the default. My billion cells/checkboxes [1] demos both use datastar and so are just over 10 kB. It can make a big difference on mobile networks and 3G. I did my own tests, and being over 14 kB often meant an extra 3 s of load time on bad connections. The nice thing is I got this for free, because the datastar maintainer cares about TCP slow start even when I might not.
I don't see what the size of a corporation has to do with performance or optimization. Almost never do I see larger businesses doing anything to execute more quickly online.
zelphirkalt · 12h ago
Too many cooks spoil the broth. If you've got multiple people pushing an agenda to use their favorite new JS framework, disregarding simplicity in order to chase some imaginary goal or hip thing to bolster their CV, it's not gonna end well.
anymouse123456 · 12h ago
This idea that performance is irrelevant gets under my skin. It's how we ended up with Docker and Kubernetes and the absolute slop stack that is destroying everything it touches.
Performance matters.
We've spent so many decades misinterpreting Knuth's quote about optimization that we've managed to chew up 5-6 orders of magnitude in hardware performance gains and still deliver slow, bloated and defective software products.
Performance does in fact matter and all other things equal, a fast product is more pleasurable than a slow one.
Thankfully some people like the folks at Figma took the risk and proved the point.
Even if we're innovating on hard technical problems (which most of us are not), performance still matters.
mr_toad · 12h ago
Containers were invented because VMs were too slow to cold start and used too much memory. Their whole raison d'être is performance.
anymouse123456 · 8h ago
That's another reason they're so infuriating. Containers are intended to make things faster and easier. But the allure of virtualization has made most work much, much slower and much, much worse.
If you're running infra at Google, of course containers and orchestration make sense.
If you're running apps/IT for an SMB or even small enterprise, they are 100% waste, churn and destruction. I've built for both btw.
The contexts in which they are appropriate and actually improve anything at all are vanishingly small.
9dev · 4h ago
I have wasted enough time caressing Linux servers to accommodate for different PHP versions that I know what good containers can do. An application gets tested, built, and bundled with all its system dependencies, in the CI; then pushed to the registry, deployed to the server. All automatic. Zero downtime. No manual software installation on the server. No server update downtimes. No subtle environment mismatches. No forgotten dependencies.
I fail to see the churn and destruction. Done well, you decouple the node from the application, even, and end up with raw compute that you can run multiple apps on.
hinkley · 7h ago
Part of why I adopted containers fairly early was inspired by the time we decided to make VMs for QA with our software on it. They kept fucking up installs and reporting ghost bugs that were caused by a bad install or running an older version and claiming the bugs we fixed weren’t fixed.
Building disk images was a giant pain in the ass but less disruptive to flow than having QA cry wolf a couple times a week.
I could do the same with containers, and easier.
bobmcnamara · 9h ago
Can you live fork containers like you can VMs?
VM clone time is surprisingly quick once you stop copying memory, after that it's mostly ejecting the NIC and bringing up the new one.
9dev · 4h ago
Why would you, if you can simply start replacement containers in another location and reroute traffic there, then dispose of the old ones?
marcosdumay · 8h ago
You mean creating a different container that is exactly equal to the previous one?
It's absolutely possible, but I'm not sure there's any tool out there with that command... because why would you? You'll get about the same result as forking a process inside the container.
mort96 · 9h ago
I can't say I've ever cared about live forking a container (or VM, for that matter)
hinkley · 7h ago
Your cloud provider may be doing it for you. Ops informed me one day that AWS was pushing out a critical security update to their host OS. So of course I asked if that meant I needed to redeploy our cluster, and they responded no, and in fact they had already pushed it.
Our cluster keeps stats on when processes start. So we can alert on crashes, and because new processes (cold JIT) can skew the response numbers, and are inflection points to analyze performance improvements or regressions. There were no restarts that morning. So they pulled the tablecloth out from under us. TIL.
mort96 · 4h ago
None of this is making live forking a container desirable to me, I'm not a cloud hosting company (and if I was, I'd be happy to provide a VPS as a VM rather than a container)
hinkley · 4h ago
There’s using a feature, having a vendor use it for you, or denying its worth.
Anything else is dissonant.
mort96 · 4h ago
For the VM case, I'm sure I might have benefited from it, if Digital Ocean had been able to patch something live without restarting my VPS. Great. Nothing I need to care about, so I have never cared about live forking a VM. It hasn't come up in my use of VMs.
It's not a feature I miss in containers, is what I'm saying.
anonymars · 11h ago
Yeah, I think Electron would be the poster child
sgarland · 9h ago
Agreed, though containers and K8s aren’t themselves to blame (though they make it easier to get worse results).
Debian Slim is < 30 MB. Alpine, if you can live with musl, is 5 MB. The problem comes from people not understanding what containers are, and how they’re built; they then unknowingly (or uncaringly) add in dozens of layers without any attempt at reducing or flattening.
Similarly, K8s is of course just a container orchestration platform, but since it’s so easy to add to, people do so without knowing what they’re doing, and you wind up with 20 network hops to get out of the cluster.
zelphirkalt · 12h ago
Performance matters, but at least initially only as far as it doesn't complicate your code significantly. That's why a simple static website often beats some hyper modern latest framework optimization journey websites. You gotta maintain that shit. And you are making sacrifices elsewhere, in the areas of accessibility and possibly privacy and possibly ethics.
So yeah, make sure not to lose performance unreasonably, but also don't obsess with performance to the point of making things unusable or way too complicated for what they do.
anymouse123456 · 8h ago
This kind of thinking is exactly the problem.
Yes, at the most absurd limits, some autists may occasionally obsess and make things worse. We're so far from that problem today, it would be a good one to have.
IME, making things fast almost always also makes them simpler and easier to understand.
Building high-performance software often means building less of it, which translates into simpler concepts, fewer abstractions, and shorter times to execution.
It's not a trade-off, it's valuable all the way down.
Treating high performance as a feature and low performance as a bug impacts everything we do and ignoring them for decades is how you get the rivers of garbage we're swimming in.
raekk · 4h ago
> It's not a trade-off, it's valuable all the way down.
This.
sgarland · 9h ago
> way too complicated for what they do
Notably, this is subjective. I’ve had devs tell me that joins (in SQL) are too complicated, so they’d prefer to just duplicate data everywhere. I get that skill is a spectrum, but it’s getting to the point where I feel like we’ve passed the floor, and need to firmly state that there are in fact some basic ideas that are required knowledge.
01HNNWZ0MV43FF · 9h ago
Docker good actually
anymouse123456 · 8h ago
nah - we'll look back on Docker the same way many of us are glaring at our own sins with OO these days.
hinkley · 7h ago
Docker is just making all the same promises we were made in 1991 that never came to fruition. Preemptive multitasking OSes with virtual memory were supposed to solve all of our noisy-neighbor problems.
hinkley · 7h ago
If you’re implying that Docker is the slop, instead of an answer to the slop, I haven’t seen it.
andrepd · 15h ago
> a corporation large enough will have a team of experienced SREs that know over which detail to obsess.
Ahh, if only. Have you seen applications developed by large corporations lately? :)
achenet · 14h ago
a corporation large enough to have a team of experienced SREs that know which details to obsess over will also have enough promotion-hungry POs and middle managers telling the devs to add 50 MB of ads and trackers to the web page. Maybe another 100 MB for an LLM wrapper too.
:)
hinkley · 7h ago
Don’t forget adding 25 individual Google Tag Managers to every page.
sgarland · 9h ago
Depending on the physical distance, it can be much more than a few msec, as TFA discusses.
CyberDildonics · 10h ago
If you make something that, well, wastes my time because you feel it is, well, not important, then, well, I don't want to use it.
exiguus · 11h ago
I think this is just an art project.
firecall · 14h ago
Damn... I'm at 17.2KB for my home page!
(not including dependencies)
FWIW I optimised the heck out of my personal homepage and got 100/100 for all Lighthouse scores. Which I had not previously thought possible LOL
Built in Rails too!
It's absolutely worth optimising your site though. It just is such a pleasing experience when a page loads without any perceptible lag!
apt-apt-apt-apt · 13h ago
Yeah, the fact that news.ycombinator.com loads instantly pleases my brain so much I flick it open during downtime automonkey-ly
Alifatisk · 13h ago
Lobsters, Dlang's forum and HN are some of the few places I know that load instantly, and I love it. This is how it should be!
leptons · 6h ago
I did a lot of work optimizing the template code we use on thousands of sites to get to 100/100/100/100 scores on Lighthouse. We also score perfect 100s on mobile too. It was a wild adventure.
Our initial page load is far bigger than 17.2KB, it's about 120KB of HTML, CSS, and JS. The big secret is eliminating all extra HTTP requests, and only evaluating JS code that needs to run for things "above the fold" (lazy-evaluating any script that functions below the fold, as it scrolls into view). We lazy-load everything we can, only when it's needed. Defer any script that can be deferred. Load all JS and CSS in-line where possible. Use 'facade' icons instead of loading the 3rd-party chat widget at page load, etc. Delay loading tracking widgets if possible. The system was already built on an SSR back-end, so SSR is also a big plus here. We even score perfect 100s with full-page hires video backgrounds playing at page load above-the-fold, but to get there was a pretty big lift, and it only works with Vimeo videos, as Youtube has become a giant pain in the ass for that.
The Google Lighthouse results tell you everything you need to know to get to 100 scores. It took a whole rewrite of our codebase to get there, the old code was never going to be possible to refactor. It took us a whole new way of looking at the problem using the Lighthouse results as our guide. We went from our customers complaining about page speeds, to being far ahead of our competition in terms of page speed scores. And for our clients, page speed does make a big difference when it factors into SEO rankings (though it's somewhat debatable if page speed affects SEO, but not with an angry client that sees a bad page speed score).
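For anyone curious, the facade idea mentioned above can be as small as the sketch below: a lightweight placeholder button that injects the real third-party script only on first interaction. The element id and widget URL here are stand-ins, not real endpoints.

    // Sketch: load the heavy chat widget only when the user clicks the facade.
    document.querySelector('#chat-facade').addEventListener('click', () => {
      const s = document.createElement('script');
      s.src = 'https://example.com/chat-widget.js'; // placeholder URL
      s.async = true;
      document.head.appendChild(s);
    }, { once: true }); // only ever inject it once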
ghoshbishakh · 14h ago
Rails has nothing to do with the rendered page size, though. Congrats on the perfect Lighthouse score.
Alifatisk · 13h ago
Doesn't the Rails asset pipeline have an effect on the page size, like if Propshaft is being used instead of Sprockets? From what I remember, Propshaft intentionally does not include minification or compression.
firecall · 9h ago
It’s all Rails 8 + Turbo + Stimulus JS with Propshaft handling the asset bundling / pipeline.
All the Tailwind building and so on is done using common JS tools, which are mostly standard out of the box Rails 8 supplied scripts!
Sprockets used to do the SASS compilation and asset bundling, but the Rails standard now is to facilitate your own preferences around compilation of CSS/JS.
firecall · 9h ago
Indeed it does not :-)
It was more of a quick promote-Rails comment, as Rails can get dismissed as not something to build fast websites in :-)
hackerman_fi · 13h ago
The article has IMO two flawed arguments:
1. There is math for how long it takes to send even one packet over a satellite connection (~1600 ms). It's a weak argument for the 14 kB rule since there is no comparison with a larger website. 10 packets won't necessarily take 16 seconds.
2. There is a mention that images on the webpage are included in this 14 kB rule. In what case are images inlined into a page's initial load? If this is a special case and 99.9% of images don't follow it, it should be mentioned at the very least.
throwup238 · 13h ago
> In what case are images inlined to a page’s initial load?
Low resolution thumbnails that are blurred via CSS filters over which the real images fade in once downloaded. Done properly it usually only adds a few hundred bytes per image for above the fold images.
I don’t know if many bloggers do that, though. I do on my blog and it’s probably a feature on most blogging platforms (like Wordpress or Medium) but it’s more of a commercial frontend hyperoptimization that nudges conversions half a percentage point or so.
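A minimal sketch of that blur-up swap, assuming the inlined low-res placeholder carries the full image's URL in a data-full attribute and is blurred with a CSS filter:

    // Sketch: replace each tiny inlined thumbnail with the full image once it loads.
    document.querySelectorAll('img[data-full]').forEach((placeholder) => {
      const full = new Image();
      full.src = placeholder.dataset.full;       // start fetching the real image
      full.addEventListener('load', () => {
        placeholder.src = full.src;              // swap in the full-resolution image
        placeholder.style.filter = 'none';       // drop the blur
      });
    });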
hinkley · 7h ago
Inlined SVG as well. It's a mess.
hsbauauvhabzb · 13h ago
Also the assumption that my user base is on high-latency satellite connections and is somehow unable to put up with my website, when every other website in existence is multiple megabytes.
ricardobeat · 12h ago
There was no such assumption, that was just the first example after which he mentions normal roundtrip latencies are usually in the 100-300ms range.
Just because everything else is bad, doesn't invalidate the idea that you should do better. Today's internet can feel painfully slow even on a 1Gbps connection because of this; websites were actually faster in the early 2000s, during the transition to ADSL, as they still had to cater to dial-up users and were very light as a result.
sgarland · 9h ago
> Just because everything else is bad, doesn't invalidate the idea that you should do better.
I get this all the time at my job, when I recommend a team do something differently in their schema or queries: “do we have any examples of teams currently doing this?” No, because no one has ever cared to try. I understand not wanting to be guinea pigs, but you have a domain expert asking you to do something, and telling you that they’ll back you up on the decision, and help you implement it. What more do you want?!
Alifatisk · 13h ago
I agree with the sentiment here. The thing is, I've noticed that the newer generations use frameworks like Next.js as the default for building simple static websites. That's their bare-bones starting point. The era of plain HTML + CSS (and maybe a sprinkle of JS) feels like it's fading away, sadly.
jbreckmckye · 13h ago
I think that makes sense.
I have done the hyper optimised, inline resource, no blocking script, hand minimised JS, 14kb website thing before and the problem with doing it the "hard" way is it traps you in a design and architecture.
When your requirements change all the minimalistic choices that seemed so efficient and web-native start turning into technical debt. Everyone fantasises about "no frameworks" until the project is no longer a toy.
Whereas the isomorphic JS frameworks let you have your cake and eat it: you can start with something that spits out compiled pages and optimise it to get performant _enough_, but you can fall back to thick client JavaScript if necessary.
fleebee · 13h ago
I think you're late enough for that realization that the trend already shifted back a bit. Most frameworks I've dealt with can emit static generated sites, Next.js included. Astro feels like it's designed for that purpose from the ground up.
austin-cheney · 12h ago
You have noticed that only just recently? This has been the case since jQuery became popular before 2010.
chneu · 11h ago
Arguably it's been this way since web 2.0 became a thing in like 2008?
zos_kia · 10h ago
Next.js bundles the code and aggressively minifies it, because their base use case is to deploy on lambdas or very small servers. A static website using next would be quite optimal in terms of bundle size.
simgt · 15h ago
Aside from latency, reducing resource consumption to the minimum required should always be a concern if we intend to have a sustainable future. The environmental impact of our network is not negligible. Given the snarky comments here, we clearly have a long way to go.
EDIT: some replies missed my point. I am not claiming this particular optimization is the holy grail, only that I'd have liked the added benefit of reducing energy consumption to be mentioned.
FlyingAvatar · 14h ago
The vast majority of internet bandwidth is people streaming video. Shaving a few megs from a webpage load would be the tiniest drop in the bucket.
I am all for efficiency, but optimizing everywhere is a recipe for using up the resources to actually optimize where it matters.
schiffern · 14h ago
In that spirit I have a userscript, ironically called Youtube HD[0], that with one edit sets the resolution to 'medium' ie 360p. On a laptop it's plenty for talking head content (the softening is nice actually), and I only find myself switching to 480p if there's small text on screen.
It's a small thing, but as you say internet video is relatively heavy.
To reduce my AI footprint I use the udm=14 trick[1] to kill AI in Google search. It generally gives better results too.
For general web browsing the best single tip is running uBlock Origin. If you can master medium[2] or hard mode (which will require un-breaking/whitelisting sites) it saves more bandwidth and has better privacy.[3]
To go all-out on bandwidth conservation, LocalCDN[4]
and CleanURLs[5] are good. "Set it and forget it," improves privacy and load times, and saves a bit of energy.
I've been using uBlock in advanced mode with 3rd party frames and scripts blocked. I recommend it, but it is indeed a pain to find the minimum set of things you need to unblock to make a website work, involving lots of refreshing.
Once you find it for a website you can just save it though so you don't need to go through it again.
LocalCDN is indeed a nobrainer for privacy! Set and forget.
OtherShrezzing · 14h ago
> but optimizing everywhere is a recipe for using up the resources to actually optimize where it matters.
Is it? My front end engineer spending 90 minutes cutting dependencies out of the site isn’t going to deny YouTube the opportunity to improve their streaming algorithms.
josephg · 14h ago
It might do the opposite. We need to teach engineers of all stripes how to analyse and fix performance problems if we’re going to do anything about them.
molszanski · 14h ago
If you turn this into an open problem, without hypothetical limits on what a frontend engineer can do, it would become more interesting and more impactful in real life. That said, the engineer is a human being who could use that time in myriad other ways that would be more productive in helping the environment.
simgt · 14h ago
That's exactly it, but I fully expected whataboutism under my comment. If I had mentioned video streaming as a disclaimer, I'd probably have gotten crypto or Shein as counter "arguments".
Everyone needs to be aware that we are part of an environment that has limited resources beyond "money" and act accordingly, whatever the scale.
jbreckmckye · 13h ago
I feel this way sometimes about recycling. I am very diligent about it, washing out my cans and jars, separating my plastics. And then I watch my neighbour fill our bin with plastic bottles, last-season clothes and uneaten food.
yawaramin · 6h ago
Recycling is mostly a scam. Most municipalities don't bother separating out the plastics and papers that would be recyclable, decontaminating them, etc. because it would be too expensive. They just trash them.
extra88 · 9h ago
At least you and your neighbor are operating on the same scale. Don't stop those individual choices, but more members of the populace making those choices is not how the problem gets fixed; businesses and whole industries are the real culprits.
oriolid · 14h ago
> The vast majority of internet bandwidth is people streaming video. Shaving a few megs from a webpage load would be the tiniest drop in the bucket.
Is it really? I was surprised to see that surfing newspaper websites or Facebook produces more traffic per time than Netflix or Youtube. Of course there's a lot of embedded video in ads and it could maybe count as streaming video.
danielbln · 14h ago
Care to share that article? I find that hard to believe.
oriolid · 12h ago
No article sorry, it's just what the bandwidth display on my home router shows. I could post some screenshots but I don't care for answering to everyone who tries to debunk them. Mobile version of Facebook is by the way much better optimized than the full webpage. I guess desktop browser users are a small minority.
Capricorn2481 · 3h ago
Well Facebook has video on it. Highly unlikely that a static site is going to even approach watching a video.
hxorr · 2h ago
It may surprise you how heavy Facebook is these days
vouaobrasil · 14h ago
The problem is that a lot of people DO have their own websites for which they have some control over. So it's not like a million people optimizing their own websites will have any control over what Google does with YouTube for instance...
jychang · 14h ago
A million people is a very strong political force.
A million determined voters can easily force laws to be made which forces youtube to be more efficient.
I often think about how orthodoxical all humans are. We never think about different paths outside of social norms.
- Modern western society has weakened support for mass action to the point where it is literally an unfathomable "black swan" perspective in public discourse.
- Spending a few million dollars on TV ads to get someone elected is a lot cheaper than whatever Bill Gates spends on NGOs, and for all the money he spent it seems like aid is getting cut off.
- Hiring or acting as a hitman to kill someone to achieve your goal is a lot cheaper than the other options above. It seems like this concept, for better or worse, is not quite in the public consciousness currently. The 1960s 1970s era of assassinations have truly gone and past.
vouaobrasil · 14h ago
I sort of agree...but not really, because you'll never get a situation where a million people can vote on a specific law about making YT more efficient. One needs to muster some sort of general political will to even get that to be an issue, and that takes a lot more than a million people.
Personally, if a referendum were held tomorrow to disband Google, I would vote yes for that...but good luck getting that referendum to be held.
hnlmorg · 13h ago
It matters at web scale though.
Like how industrial manufacturers are the biggest carbon emitters and, compared to them, I'm just a drop in the ocean. But that doesn't mean I don't also have a responsibility to recycle, because the cumulative effect of everyone like me recycling quickly becomes massive.
Similarly, if every web host did their bit with static content, you’d still see a big reduction at a global scale.
And you're right, it shouldn't be the end of the story. However, that doesn't mean it's a wasted effort or an irrelevant optimisation.
ofalkaed · 13h ago
I feel better about limiting the size of my drop in the bucket than I would feel about just saying my drop doesn't matter even if it doesn't matter. I get my internet through my phone's hotspot with its 15gig a month plan, I generally don't use the entire 15gigs. My phone and and laptop are pretty much the only high tech I have, audio interface is probably third in line and my oven is probably fourth (self cleaning). Furnace stays at 50 all winter long even when it is -40 out and if it is above freezing the furnace is turned off. Never had a car, walk and bike everywhere including groceries and laundry, have only used motorized transport maybe a dozen times in the past decade.
A nice side effect of these choices is that I only spend a small part of my pay. Never had a credit card, never had debt, just saved my money until I had enough that the purchase was no big deal.
I don't really have an issue with people who say that their drop does not matter, so why should they worry; but I don't understand it, since it seems like they just needlessly complicate their lives. Not too long ago my neighbor was bragging about how effective all the money he spent on energy-efficient windows, insulation, etc. was; he saved loads of money that winter. Yet his heating bill was still nearly three times what mine was, despite his using a wood stove to offset it, with my house being almost the same size, barely insulated, and having 70-year-old windows. I just put on a sweater instead of turning up the heat.
Edit: Sorry about that sentence, not quite awake yet and doubt I will be awake enough to fix it before editing window closes.
atoav · 14h ago
Yes but drops in the bucket count. If I take anything away from your statement, it is that people should be selective where to use videos for communications and where not.
pyman · 14h ago
Talking about video streaming, I have a question for big tech companies:
Why?
Why are we still talking about optimising HTML, CSS and JS in 2025? This is tech from 35 years ago. Why can't browsers adopt a system like video streaming, where you "stream" a binary of your site? The server could publish a link to the uncompressed source so anyone can inspect it, keeping the spirit of the open web alive.
Do you realise how many years web developers have spent obsessing over this document-based legacy system and how to improve its performance? Not just years, their whole careers!
How many cool technologies were created in the last 35 years? I lost count.
Honestly, why are big tech companies still building on top of a legacy system, forcing web developers to waste their time on things like performance tweaks instead of focusing on what actually matters: the product.
ozim · 13h ago
I see you mistake HTML/CSS for what they were 30 years ago: "documents to be viewed".
HTML/CSS/JS is the only fully open stack for building application interfaces: free as in beer, not owned by a single entity, standardized by multinational standardization bodies, cross-platform, and excellent at it. Especially with Electron, you can build native apps with HTML/CSS/JS.
There are actual web apps, not just "websites", being built. Web apps are not HTML with jQuery sprinkled around; they are genuinely heavy applications.
pyman · 6h ago
I'm talking about big ideas. Bigger than WebAssembly. My message was about the future of the www, the next‑gen web, not the past.
ozim · 4h ago
OK now I think you don't understand all the implications of the status quo.
Everyone writing about a "future view" or "next gen" would have to prove to me that they really understand the current state of things.
pyman · 58m ago
I spent 10 years teaching computer science and some of my ex-students now work at Google, Amazon, Uber, and Microsoft. They still come by to say hi. It's teachers who inspire change, and we do it by talking about the past, present, and future of tech in classrooms, not online forums.
I like reminding students about Steve Jobs. While others pushed for web apps, he launched the App Store alongside the iPhone in 2008 and changed everything. I ask my students, why not stick with web apps? Why go native?
Hopefully, these questions get them thinking about new platforms, new technologies and solutions, and maybe even spark ideas that lead to something better.
Just picture this: it's 2007, and you're standing in front of Steve Jobs telling him: “You don't understand anything about the web, Steve.”
Yeah, good luck with that. For that reason I'll politely decline your offer to prove how much I know about this topic, but you're more than welcome to share your own perspective.
01HNNWZ0MV43FF · 12h ago
Practically it is owned by Google, or maybe Google + Apple
hnlmorg · 13h ago
That’s already how it works.
The binary is a compressed artefact and the stream is a TLS pipe. But the principle is the same.
In fact videos streams over the web are actually based on how HTTP documents are chunked and retrieved, rather than the other way around.
pyman · 6h ago
I see, I didn't know this
ahofmann · 13h ago
1. How does that help avoid wasting resources? It needs more energy and traffic.
2. Everything in our world is dwarfs standing on the shoulders of giants. Ripping everything up and creating something completely new is, most of the time, an idea that sounds better than it really would be. Anyone who thinks otherwise is mostly too young to have seen this pattern.
Naru41 · 8h ago
The ideal HTML I have in mind is a DOM tree represented entirely in TLV binary -- and a compiled .so file instead of .js, with the unpacked data used directly as C data structures. Zero copy, no parsing; (data validation is unavoidable, but) that's certainly fast.
01HNNWZ0MV43FF · 12h ago
> Why can't browsers adopt a system like video streaming, where you "stream" a binary of your site?
I'll have to speculate what you mean
1. If you mean drawing pixels directly instead of relying on HTML, it's going to be slower. (either because of network lag or because of WASM overhead)
2. If you mean streaming video to the browser and rendering your site server-side, it will break features like resizing the window or turning a phone sideways, and it will be hideously expensive to host.
3. It will break all accessibility features like Android's built-in screen reader, because you aren't going to maintain all the screen reader and braille stuff that everyone might need server-side, and if you do, you're going to break the workflow for someone who relies on a custom tweak to it.
4. If you are drawing pixels from scratch you also have to re-implement stuff like selecting and copying text, which is possible but not feasible.
5. A really good GUI toolkit like Qt or Chromium will take 50-100 MB. Say you can trim your site's server-side toolkit down to 10 MB somehow. If you are very very lucky, you can share some of that in the browser's cache with other sites, _if_ you are using the same exact version of the toolkit, on the same CDN. Now you are locked into using a CDN. Now your website costs 10 MB for everyone loading it with a fresh cache.
You can definitely do this if your site _needs_ it. Like, you can't build OpenStreetMap without JS, you can't build chat apps without `fetch`, and there are certain things where drawing every pixel yourself and running a custom client-side GUI toolkit might make sense. But it's like 1% of sites.
I hate HTML but it's a local minimum. For animals, weight is a type of strength, for software, popularity is a type of strength. It is really hard to beat something that's installed everywhere.
pyman · 6h ago
Thanks for explaining this in such detail
qayxc · 15h ago
It's not low-hanging fruit, though. While you try to optimise to save a couple of mWh in power use, a single search engine query uses 100x more and an LLM chat is another 100x of that. In other words: there's bigger fish to fry. Plus caching, lazy loading etc. mitigates most of this anyway.
vouaobrasil · 15h ago
Engineering-wise, it sometimes isn't. But it does send a signal that can also become a trend in society to be more respectful of our energy usage. Sometimes, it does make sense to focus on the most visible aspect of energy usage, rather than the most intensive. Just by making your website smaller and being vocal about it, you could reach 100,000 people if you get a lot of visitors, whereas Google isn't going to give a darn about even trying to send a signal.
qayxc · 14h ago
I'd be 100% on board with you if you were able to show me a single - just a single - regular website user who'd care about energy usage of a first(!) site load.
I'm honestly just really annoyed about this "society and environment" spin on advice that would have an otherwise niche, but perfectly valid, reason behind it (TFA: slow satellite network on the high seas).
This might sound harsh and I don't mean it personally, but making your website smaller and "being vocal about it" (whatever you mean by that) doesn't make an iota of difference. It also only works if your site is basically just text. If your website uses other resources (images, videos, 3D models, audio, etc.), the impact of first load is just noise anyway.
You can have a bigger impact by telling 100,000 people to drive an hour less each month and if just 1% of your hypothetical audience actually does that, you'd achieve orders of magnitude more in terms of environmental and societal impact.
Mawr · 3h ago
Correct. It's even worse than that, they'll say they optimized the energy usage of their website by making it 1kb smaller and then fly overseas for holiday. How many billions of page loads would it take to approximate the environmental impact of a single intercontinental flight?
vouaobrasil · 14h ago
Perhaps you are right. But I do remember one guy who had a YouTube channel and he uploaded fairly low-quality videos at a reduced framerate to achieve a high level of compression, and he explicitly put in his video that he did it to save energy.
Now, it is true that it didn't save much, because many people were probably uploading 8K videos at the time, so it was a drop in the bucket. But personally, I found it quite inspiring, and his decision was instrumental in my deciding never to upload 4K. And in general, I will say that people like that do inspire me and keep me going to be as minimal as possible when I use energy in all domains.
For me at least, trying to optimize for using as little energy as possible isn't an engineering problem. It's a challenge to do it uniformly as much as possible, so it can't be subdivided. And I do think every little bit counts, and if I can spend time making my website smaller, I'll do that in case one person gets inspired by that. It's not like I'm a machine and my only goal is time efficiency....
Mawr · 3h ago
Youtube's compression already butchers the quality of anything 1080p and below. Uploading in 1440p or 4K is the only way to get youtube to preserve at least some of the bitrate. There's a 1080p extra bitrate option available on some videos, but it's locked behind premium, so I'm not sure how well it works.
Depending on the type of video this may not matter, but it often does. For example, my FPS gaming and dashcam footage gets utterly destroyed if uploaded to youtube at 1080p. Youtube's 4K seems roughly equivalent to my high bitrate 1080p recordings.
whoisyc · 5h ago
Realistically “my website fits in 14kb” is a terrible signal because it is invisible to 99.99% of the population. How many HNers inspect the network usage when loading a random stranger’s website?
Plus, trying to signal your way to societal change can have unintended downsides. It makes you feel you are doing something when you are actually not making any real impact. It attracts the kind of people who care more about signaling the right signals than doing the right thing into your camp.
marcosdumay · 8h ago
So, literally virtue signaling?
And no, a million small sites won't "become a trend in society".
vouaobrasil · 6h ago
You really don't know if it could become a trend or not. Certainly trends happen in the opposite direction, such as everyone using AI. I think every little difference you can make is a step in the right direction, and is not virtue signalling if you really apply yourself across all domains of life. But perhaps it is futile, given that there are so many defeatist individuals such as yourself crowding the world.
victorbjorklund · 9h ago
On the other hand, it's kind of like saying we don't need to drive environmentally friendly cars because they are a drop in the bucket compared to container ships, etc.
simgt · 13h ago
Of course, but my point is that it's still a constraint we should have in mind at every level. Dupont poisoning public water with pfas does not make you less of an arsehole if you toss your old iPhone in a pond for the sake of convenience.
timeon · 14h ago
Sure, there are more resource-heavy places, but I think the problem is the general approach.
Neglecting performance, and our overall approach to resources, is what brought us to these resource-heavy tools.
It just seems dismissive to point to the places where bigger cuts could be made and call it a day.
If we want to really fix the places with bigger impact, we need to change this approach in the first place.
qayxc · 14h ago
Sure thing, but it's not low-hanging fruit. The impact is so minuscule that the effort required is too high compared to the benefit.
This is micro-optimisation for a valid use case (slow connections in bandwidth-starved situations), but in the real world, a single hi-res image, short video clip, or audio sample would negate all your text-squeezing, HTTP header optimisation games, and struggle for minimalism.
So for the vast majority of use cases it's simply irrelevant. And no, your website is likely not going to get 1,000,000 unique visitors per hour, so you'd have a hard time even measuring the impact, whereas simply NOT ordering pizza and having a home-made salad instead would have a measurable impact orders of magnitude greater.
Estimating the overall impact of your actions and non-actions is hard, but it's easier and more practical to optimise your assets, remove bloat (no megabytes of JS frameworks), and think about whether you really need that annoying full-screen video background. THOSE are low-hanging fruit with lots of impact. Trying to trim down a functional site to <14kB is NOT.
quaintdev · 14h ago
LLM companies should report how much energy was consumed processing a user's request. Maybe people would think twice before generating AI slop.
lpapez · 13h ago
Being concerned about page sizes is 100% wasted effort.
Calculate how much electricity you personally consume in total browsing the Internet for a year. Multiply that by 10 to be safe.
Then compare that number to how much energy it takes to produce a single hamburger.
Do the calculation yourself if you do not believe me.
On average, we developers can make a bigger difference by choosing to eat salad one day instead of optimizing our websites for a week.
Mawr · 2h ago
Or how much energy it took to even get to work by car that day.
vouaobrasil · 15h ago
Absolutely agree with that. I visited the BBC website the other day and it loaded about 120MB of stuff into the cache - for a small text article. Not only does it use a lot of extra energy to transmit so much data, but it promotes a general atmosphere of wastefulness.
I've tried to really cut down my website as well to make it fairly minimal. And when I upload stuff to YouTube, I never use 4K, only 1080P. I think 4K and 8K video should not even exist.
A lot of people talk about adding XYZ megawatts of solar to the grid. But imagine how nice it could be if we regularly had efforts to use LESS power.
I miss the days when websites were very small in the days of 56K modems. I think there is some happy medium somewhere and we've gone way past it.
raekk · 4h ago
Let's take it further: That atmosphere of wastefulness not only concerns bandwidth and energy use but also architectural decisions. There's technology that punches far above its weight class in terms of efficiency and there's the opposite. It seems like a collective form of learned helplessness, on both sides, the vendors and users. IMHO, the only real reason for slow, JavaScript-heavy sites is surveillance and detailed, distributed profiling of users. The other would be animated UI giving dopamine hits, but that could totally be confined to entertainment and shouldn't be a cue for "quality" software.
iinnPP · 13h ago
You'll find that people "stop caring" about just about anything when it starts to impact them. Personally, I agree with your statement.
Since a main argument is seemingly that AI is worse, let's remember that AI is querying these huge pages as well.
Also that the 14kb size is less than 1% of the current average mobile website payload.
zigzag312 · 14h ago
So, anyone serious about a sustainable future should stop using Python and stop recommending it as an introductory programming language? I remember one test that showed Python using 75x more energy than C to perform the same task.
mnw21cam · 14h ago
I'm just investigating why the nightly backup of the work server is taking so long. Turns out Python (as conda, anaconda, miniconda, etc.) has dumped 22 million files across the home directories, and this takes a while to just list, let alone work out which files have changed and need archiving. Most of these are duplicates of each other, and files that should really belong to the OS, like bin/curl.
I myself have installed one single package, and it installed 196,171 files in my home directory.
If that isn't gratuitous bloat, then I don't know what is.
sgarland · 9h ago
Conda is its own beast tbf. Not saying that Python packaging is perfect, but I struggle to imagine a package pulling in 200K files. What package is it?
hiAndrewQuinn · 14h ago
Do we? Let's compare some numbers.
Creating an average hamburger requires an input of 2-6 kWh of energy, from start to finish. At 15¢ USD/kWh, this gives us an upper limit of about 90¢ of electricity.
The average 14 kB web page takes about 0.000002 kWh to serve. You would need to serve that web page about 1,000,000-3,000,000 times to create the same energy demands as a single hamburger. A 14 MB web page, which would be a pretty heavy JavaScript app these days, would need about 1,000 to 3,000 serves.
I think those are pretty good ways to use the energy.
justmarc · 12h ago
Slightly veering off topic, but I honestly wonder how many burgers I will fry if I ask ChatGPT to make a fart app?
hombre_fatal · 11h ago
A tiny fraction of a burger.
ajsnigrutin · 14h ago
Now open an average news site, with hundreds of requests, tens of ads, autoplaying video ads, tracking pixels, etc., using gigabytes of RAM and a lot of CPU.
Then multiply that by the number of daily visitors.
Without "hamburgers" (food in general), we die, reducing the size of usesless content on websites doesn't really hurt anyone.
hiAndrewQuinn · 14h ago
Now go to an average McDonalds, with hundreds of orders, automatically added value meals, customer rewards, etc. consuming thousands of cows and a lot of pastureland.
Then multiply that by the number of daily customers.
Without web pages (information in general), we return to the Dark Ages. Reducing the number of hamburgers people eat doesn't really hurt anyone.
ajsnigrutin · 14h ago
Sure, but you've got to eat something.
Now, if McDonald's padded 5 kB of calories of a cheeseburger with 10,000 kilobytes of calories in wasted food like news sites do, it would be a different story. The ratio would be 200 kilos of wasted food for 100 grams of usable beef.
hombre_fatal · 11h ago
You don't need to eat burgers though. You can eat food that consumes a small fraction of energy, calorie, land, and animal input of a burger. And we go to McDonalds because it's a dopamine luxury.
It's just an inconvenient truth for people who only care about the environmental impact of things that don't require a behavior change on their part. And that reveals an insincere, performative, scoldy aspect of their position.
Sure, but beef tastes good. I mean.. there are better ways to eat beef than mixed with soy at mcdonalds, but still...
What benefit does an individual get from downloading tens of megabytes of useless data to get ~5kB of useful data in an article? It wastes download time, bandwidth, users time (having to close the autoplaying ad), power/battery, etc.
justmarc · 14h ago
Just wondering how you arrived at the energy calculation for serving that 14k page?
For a user's access to a random web page anywhere, assuming it's not on a CDN near the user, you're looking at ~10 routers/networks involved in the connection along the way. Did you take that into account?
swores · 12h ago
If Reddit serves 20 billion page views per month, at an average of 5MB per page (these numbers are at least in the vicinity of being right), then reducing the page size by 10% would by your maths be worth 238,000 burgers, and a 50% reduction would be worth almost 1.2 million burgers per month. That's hardly insignificant for a single (admittedly, very popular) website!
(In addition to what justmarc said about accounting for the whole network. Plus, between feeding them and the indirect effects of their contribution to climate change, I suspect you're being generous about the cost of a burger.)
presentation · 14h ago
Or we can just commit to building out solar infrastructure and not worry about this rounding error anymore
spacephysics · 14h ago
This is one of those things that is high effort, low impact. Similar to recycling in some cities/towns where it just gets dumped in a landfill.
Instead we should be looking to nuclear power solutions for our energy needs, and not waste time with reducing website size if it's purely about environmental impact.
sylware · 14h ago
A country where 10 million people play their favorite greedy 3D game in the evening, with state-of-the-art 400W GPUs, all at the same time...
noduerme · 14h ago
Yeah, the environmental impact of jackasses mining jackass coin, or jackasses training LLMs is not insignificant. Are you seriously telling me now that if my website is 256k or 1024k I'm responsible for destroying the planet? Take it out on your masters.
And no, reducing resource use to the minimum in the name of sustainability does not scale down the same way it scales up. You're just pushing the idea that all human activity is some sort of disease that's best disposed of. That's essentially just wishing the worst on your own species for being successful.
It's never clear to me whether people who push this line are doing so because they're bitter and want to punish other humans, or because they hate themselves. Either way, it evinces a system of thought that has already relegated humankind to the dustbin of history. If, in the long run, that's what happens, you're right and everyone else is wrong. Congratulations. It will make little difference in that case to you if the rest of us move on for a few hundred years to colonize the planets and revive the biosphere. Comfort yourself with the knowledge that this will all end in 10 or 20 thousand years, and the world will go back to being a hot hive of insects and reptiles. But what glory we wrought in our time.
simgt · 14h ago
> the environmental impact of jackasses mining jackass coin, or jackasses training LLMs is not insignificant
> You're just pushing the idea that all human activity is some sort of disease that's best disposed of. That's essentially just wishing the worst on your own species for being successful.
Every bloody mention of the environmental impact of our activities gets at least a reply like yours that ticks one of these boxes.
noduerme · 13h ago
>> Every bloody mention of the environmental impact of our activities gets at least a reply like yours that ticks one of these boxes.
That's a sweeping misunderstanding of what I wrote, so I'd ask that you re-read what I said in response to the specific quote.
noduerme · 14h ago
> the environmental impact of jackasses mining jackass coin, or jackasses training LLMs is not insignificant
(this was actually stated in agreement with the original poster, who you clearly misunderstood, so there's no "what-about" involved here. They were condemning all kinds of consumption, including the frivolous ones I mentioned).
But I'm afraid you've missed both my small point and my wider point.
My small point was to argue against the parent's comment that
>>reducing resource consumption to the minimum required should always be a concern if we intend to have a sustainable future
I disagree with this concept on the basis that nothing can be accomplished on a large scale if the primary concern is simply to reduce resource consumption to a minimum. If you care to disagree with that, then please address it.
The larger point was that this theory leads inexorably to the idea that humans should just kill themselves or disappear; and it almost always comes from people who themselves want to kill themselves or disappear.
simgt · 13h ago
> if the primary concern is simply to reduce resource consumption to a minimum
..."required".
That allows you to fit pretty much everything in that requirement. Which actually makes my initial point a bit weak, as some would put "delivering 4K quality tiktok videos" as a requirement.
Point is that energy consumption and broad environmental impact has to be a constraint in how we design our systems (and businesses).
I stand by my accusations of whataboutism and strawmanning, though.
noduerme · 12h ago
Carelessly thrown-about accusations of whataboutism and strawmanning are an excellent example of whataboutism and strawmanning. I was making a specific point, directly on the topic, without either putting words in their mouth or addressing an unrelated issue. I'll stand by my retort.
Capricorn2481 · 3h ago
Wow, are low-effort comments like this really welcome here?
Why don't you read this comment and see if you have the same energy for hamburger eaters that you do for people with websites over 14kb. Because if you don't, it's obvious you're looking to sweat people who actually care about their environmental impact over absolutely nothing.
FYI, it's not Whataboutism to say there are more effective things to focus on.
ksec · 15h ago
Missing 2021 in the title.
I know it is not the exact topic, but sometimes I think we don't need the fastest response time but a consistent response time. Like every single page within the site being fully rendered in exactly 1s. Nothing more, nothing less.
sangeeth96 · 13h ago
I think the advice is still very relevant though. Plus, the varying network conditions mentioned in the article would ensure it's difficult, if not impossible, to guarantee consistent response times. As someone with spotty cellular coverage, I can understand the pains of browsing when you're stuck with that.
ksec · 12h ago
Yes. I don't know how it could be achieved other than having JS render the whole thing and wait until a designated time before showing it all. And that time could be dependent on the network connection.
But this sort of goes against my no / minimal JS front end rendering philosophy.
the_precipitate · 15h ago
And you do know that .exe file is wasteful, .com file actually saves quite a few bytes if you can limit your executable's size to be smaller than 0xFF00h (man, I am old).
cout · 14h ago
And a.out format often saves disk space over elf, despite duplicating code across executables.
mikl · 14h ago
How relevant is this now, if you have a modern server that supports HTTP/3?
HTTP/3 uses UDP rather than TCP, so TCP slow start should not apply at all.
A lot of people don't realize that all these so-called issues with TCP, like slow-start, Nagle, window sizes and congestion algorithms, are not there because TCP was badly designed, but rather that these are inherent problems you get when you want to create any reliable stream protocol on top of an unreliable datagram one. The advantage of QUIC is that it can multiplex multiple reliable streams while using only a single congestion window, which is a bit more optimal than having multiple TCP sockets.
One other advantage of QUIC is that you avoid some latency from the three-way handshake that is used in almost any TCP implementation. Although technically you can already send data in the first SYN packet, the three-way handshake is necessary to avoid confusion in some edge cases (like a previous TCP connection using the same source and destination ports).
gbuk2013 · 10h ago
They also tend to focus on bandwidth and underestimate the impact of latency :)
Interesting to hear that QUIC does away with the 3WHS - it always catches people by surprise that it takes at least 4 x latency to get some data on a new TCP connection. :)
hulitu · 14h ago
> How relevant is this now
Very relevant. A lot of websites need 5 to 30 seconds or more to load.
throwaway019254 · 14h ago
I have a suspicion that the 30 second loading time is not caused by TCP slow start.
ajross · 12h ago
Slow start is about saving small-integer-numbers of RTT times that the algorithm takes to ramp up to line speed. A 5-30 second load time is an order of magnitude off, and almost certainly due to simple asset size.
> Also HTTPS requires two additional round trips before it can do the first one — which gets us up to 1836ms!
Doesn't this sort of undo the entire point of the article?
If the idea was to serve the entire web page in the first roundtrip, wouldn't you have lost the moment TLS is used? Not only does the TLS handshake send lots of stuff (including the certificate) that will likely get you over the 14kb boundary before you even get the chance to send a byte of your actual content - but the handshake also includes multiple request/response exchanges between client and server, so it would require additional roundtrips even if it stayed below the 14kb boundary.
So the article's advice only holds for unencrypted plain-TCP connections, which no one would want to use today anymore.
The advice might be useful again if you use QUIC/HTTP3, because that one ditches both TLS and TCP and provides the features from both in its own thing. But then, you'd have to look up first how congestion control and bandwidth estimation works in HTTP3 and if 14kb is still the right threshold.
toast0 · 7h ago
Modern TLS adds one round trip, unless you have TCP fast open or 0-RTT resumption; neither of which are likely in a browser case, so call it 1 extra round trip. Modern TLS includes TLS 1.3 as well as TLS 1.2 with TLS False Start (RFC 7918, August 2016).
And TLS handshakes aren't that big, even with certificates... Although you do want to use ECC certs if you can, the keys are much smaller. The client handshake should fit in 1-2 packets, the server handshake should fit in 2-3 packets. But more importantly, the client request can only be sent after receiving the whole server handshake, so the congestion window will be refreshed. You could probably calculate how much larger the congestion window is likely to be, and give yourself a larger allowance, since TLS will have expanded your congestion window.
Otoh, the important concept is that early throughput is limited by latency and congestion control, and it takes many round trips to hit connection limits.
One way to apply that: if you double your page weight at the same time as you add many more service locations and traffic direction, you can see page load times stay about the same.
zelphirkalt · 12h ago
My plain HTML alone is 10kB and it is mostly text. I don't think this is achievable for most sites, even the ones limiting themselves to only CSS and HTML, like mine.
3cats-in-a-coat · 12h ago
This is about your "plain HTML". If the rest is in cache, then TCP concerns are irrelevant.
MrJohz · 12h ago
Depending on who's visiting your site and how often, the rest probably isn't in cache though. If your site is a product landing page or a small blog or something else that people are rarely going to repeatedly visit, then it's probably best to assume that all your assets will need to be downloaded most of the time.
3cats-in-a-coat · 9h ago
While it'd be fun to try, I doubt you can produce any page at all that's 14kb in total with assets, even back at the dawn of the web in the 90s, aside from the spartan, minimal academic pages some have. And for those, loading faster is completely irrelevant.
MrJohz · 5h ago
The homepage for my blog is apparently 9.95kB, which includes all styles, some JS, and the content. There is an additional 22kB font file that breaks the rule, but when I first designed the site I used built-in browser fonts only, and it looked fine. There are no images on the homepage apart from a couple of inlined SVG icons in the footer.
Looking at the posts themselves, they vary in size but the content/styles/JS probably average around 14kB. You've also got the font file, but again a more minimal site could strip that. Finally, each post has a cover image that makes up the bulk of the content size. I don't think you're ever going to get that under 14kB, but they're also very easy to load asynchronously, and with a CSS-rendered blur hash placeholder, you could have an initial page load that looks fairly good where everything not in the initial 14kB can be loaded later without causing FOUCs/page layout shifts/etc.
For a magazine site or a marketing site, the 14kB thing is almost certainly impossible, but for blogs or simple marketing pages where the content is more text-based or where there are minimal above-the-fold images, 14kB is pretty viable.
You must also be careful not to generate "get-if-modified" requests or similar conditional checks.
gammalost · 14h ago
If you care about reducing the amount of back and forth then just use QUIC.
mikae1 · 11h ago
> Once you lose the autoplaying videos, the popups, the cookies, the cookie consent banners, the social network buttons, the tracking scripts, javascript and css frameworks, and all the other junk nobody likes — you're probably there.
How about a single image? I suppose a lot of people (visitors and webmasters) like to have an image or two on the page.
coolspot · 6h ago
As long as your page doesn’t wait for that image, your page is going to be shown faster if it is 14kb.
justmarc · 14h ago
Does anyone have examples of tiny, yet aesthetically pleasing websites or pages?
There is an example link in the article. Listing more examples would serve no purpose apart from a web design perspective.
justmarc · 13h ago
Well, exactly that, I'm looking for inspiration.
smartmic · 14h ago
If I understood correctly, the rule is dependent on web server features and/or configuration. In that case, an overview of web servers which have or have not implemented the slow start algorithm would be interesting.
paales2 · 15h ago
Or maybe we shouldn't. A good experience doesn't have to load under 50ms; it is fine for it to take a second. 5G is common and people with slower connections accept longer waiting times. Optimizing is good but fixating isn't.
The geostationary satellite example, while interesting, is kinda obsolete in the age of Starlink
theandrewbailey · 11h ago
Starlink is only one option in the satellite internet market. There are too many embedded systems and too much legacy infrastructure for it to be reasonable to assume that 'satellite internet' means Starlink. Maybe in 20 years, but not today.
maxlin · 10h ago
That's like saying vacuum tubes are only one option in the radio market.
The quality of the connection is so much better, and as you can get a Starlink Mini with a 50GB plan for very little money, it's already in the zone where just one worker could grab his own, bring it onto the rig, and use it in his free time and share it.
Starlink terminals aren't "infrastructure". Campers often toss one on their roof without even leaving the vehicle. Easier than moving a chair. So, as I said, the geostationary legacy system immediately becomes entirely obsolete other than for redundancy, and is kinda irrelevant for uses like browsing the web.
3cats-in-a-coat · 9h ago
"Obsolete" suggests Starlink is clearly better and sustainable, and that's a very bold statement to make at this point. I suspect in few decades the stationary satellites will still be around, while Starlink would've either evolved drastically or gone away.
youngtaff · 13h ago
It’s not really relevant in 2025…
The HTTPS negotiation is going to consume the initial roundtrips which should start increasing the size of the window
Modern CDNs start with larger initial windows and also pace the packets onto the network to reduce the chances of congestion.
There’s also a question as to how relevant the 14kb rule has ever been… HTML renders progressively so as long as there’s some meaningful content in the early packets then overall size is less important
nottorp · 12h ago
So how bad is it when you add https?
palata · 15h ago
Fortunately, most websites include megabytes of bullshit, so it's not remotely a concern for them :D.
Hamuko · 15h ago
I recently used an electric car charger where the charger is controlled by a mobile app that's basically a thin wrapper over a website. Unfortunately I only had a 0.25 Mb/s Internet plan at the time and it took me several minutes just staring at the splash screen as it was downloading JavaScript and other assets. Even when I got it to load, it hadn't managed to download all fonts. Truly an eye-opening experience.
fouronnes3 · 15h ago
Why can't we just pay with a payment card at electric chargers? Drives me insane.
DuncanCoffee · 14h ago
It wasn't required by law, and the OCPP charging protocol, which is used to manage charge sessions at a high level between the charger and the service provider (not the vehicle), did not include payment management. Everybody just found it easier to manage payments using apps and credits. But I think Europe is going to make it mandatory soon(ish).
Hamuko · 15h ago
These chargers have an RFID tag too, but I'd forgotten it in my jacket, so it was mobile app for me.
There are some chargers that take card payments though. My local IKEA has some. There's also EU legislation to mandate payment card support.
I know some people who are experimenting with using shorter certificates, i.e. shorter certificate chains, to reduce traffic. If you're a large enough site, then you can save a ton of traffic every day.
tech2 · 14h ago
Please though, for the love of dog, have your site serve a complete chain and don't have the browser or software stack do AIA chasing.
jeroenhd · 12h ago
With half of the web using Let's Encrypt certificates, I think it's pretty safe to assume the intermediates are in most users' caches. If you get charged out the ass for network bandwidth (i.e. you use Amazon/GCP/Azure) then you may be able to get away with shortened chains as long as you use a common CA setup. It's a hell of a footgun and will be a massive pain to debug, but it's possible as a traffic shaving measure if you don't care about serving clients that have just installed a new copy of their OS.
There are other ways you can try to optimise the certificate chain, though. For instance, you can pick a CA that uses ECC rather than RSA to make use of the much shorter key sizes. Entrust has one, I believe. Even if the root CA has an RSA key, they may still have ECC intermediates you can use.
tech2 · 6h ago
The issue with the lack of intermediates in the cert isn't browsers (they'll just deal with it). Sure, if they aren't already in the cache then there's a small hit first time. The problem is that if your SSL endpoint is accessed by any programming language (for example, you offer an image URL to a B2B system to download so they can perform image resizing for you, or somesuch) then there's a chance the underlying platform doesn't automatically do AIA chasing. Python is one such system I'm aware of, but there are others that will be forced to work around this for no net benefit.
mrweasel · 10h ago
That is a really good point. Google's certificate service can issue a certificate signed directly by Google, but not even Google themselves are using it. They use the one that's cross-signed by GlobalSign (I think).
But yes, ensure that you're serving the entire chain, but keep the chain as short as possible.
xrisk · 15h ago
Yeah I think this computation doesn’t work anymore once you factor in the tls handshake.
aziaziazi · 15h ago
From TFA:
> Also HTTPS requires two additional round trips before it can do the first one — which gets us up to 1836ms!
supermatt · 14h ago
This hasn’t been the case since TLS1.3 (over 5 years ago) which reduced it to 1-RTT - or 0-RTT when keys are known (cached or preshared). Same with QUIC.
aziaziazi · 14h ago
Good to know. However, "when the keys are known" refers to a second visit (or request) to the site, right? That isn't helpful for the first data packets - at least that's what I understand from the site.
jeroenhd · 12h ago
Without cached data from a previous visit, 1-RTT mode works even if you've never vistited the site before (https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/#1-rtt-mode). It can fall back to 2-RTT if something funky happens, but that shouldn't happen in most cases.
0-RTT works after the first handshake, but enabling it allows for some forms of replay attacks so that may not be something you want to use for anything hosting an API unless you've designed your API around it.
LAC-Tech · 13h ago
This looks like such an interesting articles, but it's completely ruined by the fact that every sentence is its own paragraph.
I swear I am not just trying to be a dick here. If I didn't think it had great content I wouldn't have commented. But I feel like I'm reading a LinkedIn post. Please join some of those sentences up into paragraphs!
eviks · 14h ago
Has this theory been tested?
adastra22 · 14h ago
The linked page is 35kB.
fantyoon · 13h ago
35kB is the uncompressed size. On my end it sends 13.48kB.
adastra22 · 11h ago
Makes sense, thanks!
moomoo11 · 15h ago
I’d care about this if I was selling in India or Africa.
If I’m selling to cash cows in America or Europe it’s not an issue at all.
As long as you have >10mbps download across 90% of users I think it’s better to think about making money. Besides if you don’t know that lazy loading exists in 2025 fire yourself lol.
flohofwoe · 14h ago
I wouldn't be surprised if many '3rd world' countries have better average internet speeds than some developed countries by leapfrogging older 'good enough' tech that's still dominating in the developed countries, e.g. I've been on a 16 MBit connection in Germany for a long time simply because it was mostly good enough for my internet consumption. One day my internet provider 'forcefully' upgraded me to 50 MBit because they didn't support 16 MBit anymore ;)
mrweasel · 14h ago
For the longest time I tried arguing with my ISP that I only needed around 20Mbit. They did have a 50Mbit plan at the time, but the price differences between 50, 100 and 250 meant that you basically got ripped off for anything but the 100Mbit. It's the same now: I can get 300Mbit, but the price difference between 300 and 500 is too small to be viewed as an actual saving; similarly, you can get 1000Mbit, but I don't need it and the price difference is too high.
jofzar · 15h ago
It really depends on who your clients are and where they are.
https://www.mcmaster.com/ was found last year to be doing some real magic to make it load literally as fast as possible for the crappiest computers possible.
A_D_E_P_T · 15h ago
Do you have any idea what they actually did? It would be interesting to study. That site really is blazing fast.
_nivlac_ · 12h ago
I am SO glad jofzar posted this - I remember this website but couldn't recall the company name. Here's a good video on how the site is so fast, from a frontend perspective:
I was intrigued that they request pages in the background on mouse-over, then swap on click. I decided to do likewise on my blog, since my pages are about a dozen kb of HTML, and I aggressively cache things.
gbuk2013 · 14h ago
Quick look: GSLB (via Akamai) for low latency, tricks like using CSS sprite to serve a single image in place of 20 or so for fewer round-trips, heavy use of caching, possibly some service worker magic but I didn't dig that far. :)
Basically, looks like someone deliberately did many right things without being lazy or cheap to create a performant web site.
And in the last few years, access has grown tremendously, a big part of which has been Jio's aggressive push with ultra-cheap plans.
mrweasel · 14h ago
Hope you're not selling to the rural US then.
masklinn · 14h ago
There's plenty of opportunities to have slow internet (and especially long roundtrips) in developed countries e.g.
- rural location
- roommate or sibling torrent-ing the shared connection into the ground
- driving around on a road with spotty coverage
- places with poor cellular coverage (some building styles are absolutely hell on cellular as well)
austin-cheney · 15h ago
It seems the better solution is to not use HTTP server software that employs this slow start concept.
Using my own server software I was able to produce a complex single page app that resembled an operating system graphical user interface and achieve full state restoration as fast as 80ms from localhost page request according to the Chrome performance tab.
mzhaase · 15h ago
TCP settings are OS level. The web server does not touch them.
austin-cheney · 14h ago
The article says this is not a TCP layer technology, but something employed by servers as a bandwidth estimating algorithm.
You are correct in that TCP packets are processed within the kernel of modern operating systems.
Edit for clarity:
This is a web server only algorithm. It is not associated with any other kind of TCP traffic. It seems from the down votes that some people found this challenging.
[1] https://susam.net/
[2] https://github.com/susam/susam.net/blob/main/site.lisp
[3] https://susam.net/tag/mathematics.html
You could try replacing KaTeX with MathML: https://w3c.github.io/mathml-core/
“MathML for {very rough textual form of the equation}” seems to give a 100% hit rate for me. Even when I want some formatting change, I can ask the LLM and it pretty much always has a solution (MathML can render symbols and subscripts in numerous ways, but the syntax is deep). It'll even add the CSS needed to change it up in some way if asked.
Why can't this be precomputed into html and css?
It can be. But like I mentioned earlier, my personal website is a hobby project I've been running since my university days. It's built with Common Lisp (CL), which is part of the fun for me. It's not just about the end result, but also about enjoying the process.
While precomputing HTML and CSS is definitely a viable approach, I've been reluctant to introduce Node or other tooling outside the CL ecosystem into this project. I wouldn't have hesitated to add this extra tooling on any other project, but here I do. I like to keep the stack simple here, since this website is not just a utility; it is also my small creative playground, and I want to enjoy whatever I do here.
(You can probably use KaTeX, too, but I prefer the look of MathJax's output.)
What are you doing with the extra 500kB for me, the user?
> 90% of the time I'm interested in text. For most of the remainder, vector graphics would suffice.
14 kB is a lot of text and graphics for a page. What is the other 500 for?
It's fair to prefer text-only pages, but the "and graphics" is quite unrealistic in my opinion.
That being said, while raw SVG suffers in that respect from the verbosity of the format (being XML-based and designed so as to be humanly readable and editable as text), it would be unfair to compare, for the purpose of HTTP transmission, the size of the raster format image (heavily compressed) with the size of the SVG file (uncompressed) as one would if it were for desktop use. SVG tends to lend itself very well to compressed transmission, even with high-performance compression algorithms like brotli (which is supported by all relevant browsers and lots of HTTP servers), and you can use pre-compressed files (e.g. for nginx with the module ngx_brotli) so that the server doesn't have to handle compression ad hoc.
Outside of youtube and... twitter? I really don't need fancy stuff. HN is literally the web ideal for me, and I'd think for most users too, if given the option.
The modern web crossed the Rubicon on 14kb websites a long time ago.
https://news.ycombinator.com/item?id=3632765
https://web.archive.org/web/20120603070423/http://blog.benst...
The range means MTU varies from reasonable, where you can argue that an IW of anything from 1-30 packets is good, to a world where the MTU is ridiculously small and the IW is similarly absurd.
We would probably be better off if consumers on >1gbps links got higher MTUs, then an IW of 10-30 could be reasonable everywhere. MTU inside cloud providers is higher (AWS uses 9001), so it is very possible.
Any reference for this?
* https://www.cdnplanet.com/blog/initcwnd-settings-major-cdn-p...
But in practice, I think this should work most of the time for most people. On slower connections, your connection will probably crawl to a halt due to retransmission hell, though. Unless you fill up the buffers on the ISP routers, making every other connection for that visitor slow down or get dropped, too.
Disabling slow start and using BBR congestion control (which doesn't rely on packet loss as a congestion signal) makes a world of difference for TCP throughput.
Which would be kinda annoying on a slow connection.
Either you'd have buffer issues or dropped packets.
> ... analysis [by Cloudflare] suggests that the throttling [by Russian ISPs] allows Internet users to load only the first 16 KB of any web asset, rendering most web navigation impossible.
This is why almost all applications and websites are slow and terrible these days.
Performance isn’t seen as sexy, for reasons I don’t understand. Devs will be agog about how McMaster-Carr manages to make a usable and incredibly fast site, but they don’t put that same energy back into their own work.
People like responsive applications - you can’t tell me you’ve never seen a non-tech person frustratingly tapping their screen repeatedly because something is slow.
The actual reason is almost always some business bullshit. Advertising trackers, analytics etc. No amount of trying to shave kilobytes off a response can save you if your boss demands you integrate code from a hundred “data partners” and auto play a marketing video.
Blaming bad web performance on programmers not going for the last 1% of optimization is like blaming climate change on Starbucks not using paper straws. More about virtue signaling than addressing the actual problem.
SPAs are really bad for mostly static websites. News sites, documentation, blogs.
> This is why almost all applications and websites are slow and terrible these days.
But no, there are way more things broken on the web than lack of overoptimization.
I don’t even know where to begin. Most of us are aiming for under a half second total for response times. Are you working on web applications at all?
I know people working on that exist. "Most of us" absolutely are not; if there were that many, the web wouldn't be like it is now.
Anyway, most people working towards instantaneous response aren't optimizing the very-high latency case where the article may eventually get a 0.5s slowdown. And almost nobody gets to the extremely low-gain kinds of optimizations there.
https://news.ycombinator.com/item?id=44124688#:~:text=Just%2...
the notion was popularized as an explanation for a CPU core spiking whenever the start menu opens on Win11
1 https://x.com/philtrem22/status/1927161666732523596
I googled the names of the people holding the talk and they're both employed by Microsoft as software engineers, I don't see any reason to doubt what they're presenting. Not the whole start menu is React Native, but parts are.
But if Evan Wallace didn't obsess over performance when building Figma, it wouldn't be what it is today. Sometimes, performance is a feature.
Performance matters.
We've spent so many decades misinterpreting Knuth's quote about optimization that we've managed to chew up 5-6 orders of magnitude in hardware performance gains and still deliver slow, bloated and defective software products.
Performance does in fact matter and all other things equal, a fast product is more pleasurable than a slow one.
Thankfully some people like the folks at Figma took the risk and proved the point.
Even if we're innovating on hard technical problems (which most of us are not), performance still matters.
If you're running infra at Google, of course containers and orchestration make sense.
If you're running apps/IT for an SMB or even small enterprise, they are 100% waste, churn and destruction. I've built for both btw.
The contexts in which they are appropriate and actually improve anything at all are vanishingly small.
I fail to see the churn and destruction. Done well, you decouple the node from the application, even, and end up with raw compute that you can run multiple apps on.
Building disk images was a giant pain in the ass but less disruptive to flow than having QA cry wolf a couple times a week.
I could do the same with containers, and easier.
VM clone time is surprisingly quick once you stop copying memory, after that it's mostly ejecting the NIC and bringing up the new one.
It's absolutely possible, but I'm not sure there's any tool out there with that command... because why would you? You'll get about the same result as forking a process inside the container.
Our cluster keeps stats on when processes start. So we can alert on crashes, and because new processes (cold JIT) can skew the response numbers, and are inflection points to analyze performance improvements or regressions. There were no restarts that morning. So they pulled the tablecloth out from under us. TIL.
Anything else is dissonant.
It's not a feature I miss in containers, is what I'm saying.
Debian Slim is < 30 MB. Alpine, if you can live with musl, is 5 MB. The problem comes from people not understanding what containers are, and how they’re built; they then unknowingly (or uncaringly) add in dozens of layers without any attempt at reducing or flattening.
Similarly, K8s is of course just a container orchestration platform, but since it’s so easy to add to, people do so without knowing what they’re doing, and you wind up with 20 network hops to get out of the cluster.
So yeah, make sure not to lose performance unreasonably, but also don't obsess with performance to the point of making things unusable or way too complicated for what they do.
Yes, at the most absurd limits, some autists may occasionally obsess and make things worse. We're so far from that problem today, it would be a good one to have.
IME, making things fast almost always also makes them simpler and easier to understand.
Building high-performance software often means building less of it, which translates into simpler concepts, fewer abstractions, and shorter times to execution.
It's not a trade-off, it's valuable all the way down.
Treating high performance as a feature and low performance as a bug impacts everything we do and ignoring them for decades is how you get the rivers of garbage we're swimming in.
This.
Notably, this is subjective. I’ve had devs tell me that joins (in SQL) are too complicated, so they’d prefer to just duplicate data everywhere. I get that skill is a spectrum, but it’s getting to the point where I feel like we’ve passed the floor, and need to firmly state that there are in fact some basic ideas that are required knowledge.
Ahh, if only. Have you seen applications developed by large corporations lately? :)
:)
FWIW I optimised the heck out of my personal homepage and got 100/100 for all Lighthouse scores. Which I had not previously thought possible LOL
Built in Rails too!
It's absolutely worth optimising your site though. It just is such a pleasing experience when a page loads without any perceptible lag!
Our initial page load is far bigger than 17.2KB, it's about 120KB of HTML, CSS, and JS. The big secret is eliminating all extra HTTP requests, and only evaluating JS code that needs to run for things "above the fold" (lazy-evaluating any script that functions below the fold, as it scrolls into view). We lazy-load everything we can, only when it's needed. Defer any script that can be deferred. Load all JS and CSS in-line where possible. Use 'facade' icons instead of loading the 3rd-party chat widget at page load, etc. Delay loading tracking widgets if possible. The system was already built on an SSR back-end, so SSR is also a big plus here. We even score perfect 100s with full-page hires video backgrounds playing at page load above-the-fold, but to get there was a pretty big lift, and it only works with Vimeo videos, as Youtube has become a giant pain in the ass for that.
The Google Lighthouse results tell you everything you need to know to get to 100 scores. It took a whole rewrite of our codebase to get there, the old code was never going to be possible to refactor. It took us a whole new way of looking at the problem using the Lighthouse results as our guide. We went from our customers complaining about page speeds, to being far ahead of our competition in terms of page speed scores. And for our clients, page speed does make a big difference when it factors into SEO rankings (though it's somewhat debatable if page speed affects SEO, but not with an angry client that sees a bad page speed score).
All the Tailwind building and so on is done using common JS tools, which are mostly standard out of the box Rails 8 supplied scripts!
Sprockets used to do the SASS compilation and asset bundling, but the Rails standard now is to facilitate your own preferences around compilation of CSS/JS.
It was more a quick promote-Rails comment, as Rails can get dismissed as not something to build fast websites in :-)
1. There is math for how long it takes to send even one packet over a satellite connection (~1600ms). It's a weak argument for the 14kb rule since there is no comparison with a larger website. 10 packets won't necessarily take 16 seconds.
2. There is a mention that images on a webpage are included in this 14kb rule. In what case are images inlined into a page's initial load? If this is a special case and 99.9% of images don't follow it, it should be mentioned at the very least.
Low resolution thumbnails that are blurred via CSS filters over which the real images fade in once downloaded. Done properly it usually only adds a few hundred bytes per image for above the fold images.
I don’t know if many bloggers do that, though. I do on my blog and it’s probably a feature on most blogging platforms (like Wordpress or Medium) but it’s more of a commercial frontend hyperoptimization that nudges conversions half a percentage point or so.
Just because everything else is bad, doesn't invalidate the idea that you should do better. Today's internet can feel painfully slow even on a 1Gbps connection because of this; websites were actually faster in the early 2000s, during the transition to ADSL, as they still had to cater to dial-up users and were very light as a result.
I get this all the time at my job, when I recommend a team do something differently in their schema or queries: “do we have any examples of teams currently doing this?” No, because no one has ever cared to try. I understand not wanting to be guinea pigs, but you have a domain expert asking you to do something, and telling you that they’ll back you up on the decision, and help you implement it. What more do you want?!
I have done the hyper optimised, inline resource, no blocking script, hand minimised JS, 14kb website thing before and the problem with doing it the "hard" way is it traps you in a design and architecture.
When your requirements change all the minimalistic choices that seemed so efficient and web-native start turning into technical debt. Everyone fantasises about "no frameworks" until the project is no longer a toy.
Whereas the isomorphic JS frameworks let you have your cake and eat it: you can start with something that spits out compiled pages and optimise it to get performant _enough_, but you can fall back to thick client JavaScript if necessary.
EDIT: some reply missed my point, I am not claiming this particular optimization is the holy grail, only that I'd have liked for added benefit of reducing the energy consumption to be mentioned
I am all for efficiency, but optimizing everywhere is a recipe for using up the resources to actually optimize where it matters.
It's a small thing, but as you say internet video is relatively heavy.
To reduce my AI footprint I use the udm=14 trick[1] to kill AI in Google search. It generally gives better results too.
For general web browsing the best single tip is running uBlock Origin. If you can master medium[2] or hard mode (which will require un-breaking/whitelisting sites) it saves more bandwidth and has better privacy.[3]
To go all-out on bandwidth conservation, LocalCDN[4] and CleanURLs[5] are good. "Set it and forget it," improves privacy and load times, and saves a bit of energy.
Sorry this got long. Cheers
[0] https://greasyfork.org/whichen/scripts/23661-youtube-hd
[1] https://arstechnica.com/gadgets/2024/05/google-searchs-udm14...
[2] https://old.reddit.com/r/uBlockOrigin/comments/1j5tktg/ubloc...
[3] https://github.com/gorhill/ublock/wiki/Blocking-mode
[4] https://www.localcdn.org/
[5] https://github.com/ClearURLs/Addon
Once you find it for a website you can just save it though so you don't need to go through it again.
LocalCDN is indeed a nobrainer for privacy! Set and forget.
Is it? My front end engineer spending 90 minutes cutting dependencies out of the site isn’t going to deny YouTube the opportunity to improve their streaming algorithms.
Everyone needs to be aware that we are part of an environment that has limited resources beyond "money" and act accordingly, whatever the scale.
Is it really? I was surprised to see that surfing newspaper websites or Facebook produces more traffic per unit of time than Netflix or YouTube. Of course, there's a lot of embedded video in ads, which could maybe count as streaming video.
A million determined voters can easily force laws to be made which force YouTube to be more efficient.
I often think about how orthodoxical all humans are. We never think about different paths outside of social norms.
- Modern western society has weakened support for mass action to the point where it is literally an unfathomable "black swan" perspective in public discourse.
- Spending a few million dollars on TV ads to get someone elected is a lot cheaper than whatever Bill Gates spends on NGOs, and for all the money he spent it seems like aid is getting cut off.
- Hiring or acting as a hitman to kill someone to achieve your goal is a lot cheaper than the other options above. It seems like this concept, for better or worse, is not quite in the public consciousness currently. The 1960s 1970s era of assassinations have truly gone and past.
Personally, if a referendum were held tomorrow to disband Google, I would vote yes for that...but good luck getting that referendum to be held.
It's like how industrial manufacturers are the biggest carbon emitters and, compared to them, I'm just a drop in the ocean. But that doesn't mean I don't also have a responsibility to recycle, because the cumulative effect of everyone like me recycling quickly becomes massive.
Similarly, if every web host did their bit with static content, you’d still see a big reduction at a global scale.
And you're right, it shouldn't be the end of the story. However, that doesn't mean it's a wasted effort or an irrelevant optimisation.
A nice side effect of these choices is that I only spend a small part of my pay. Never had a credit card, never had debt, just saved my money until I had enough that the purchase was no big deal.
I don't really have an issue with people who say that their drop does not matter so why should they worry, but I don't understand it, seems like they just needlessly complicate their life. Not too long ago my neighbor was bragging about how effective all the money he spent on energy efficient windows, insulation, etc, was, he saved loads of money that winter; his heating bill was still nearly three times what mine was despite using a wood stove to offset his heating bill, my house being almost the same size, barely insulated and having 70 year old windows. I just put on a sweater instead of turning up the heat.
Edit: Sorry about that sentence, not quite awake yet and doubt I will be awake enough to fix it before editing window closes.
HTML/CSS/JS is the only fully open stack for building cross-platform application interfaces: free as in beer, not owned by a single entity, and standardized by multinational standardization bodies. And it does that excellently. Especially with Electron you can build native apps with HTML/CSS/JS.
What is actually being built these days are web apps, not "websites". Web apps are not HTML with some jQuery sprinkled around; they are actually heavy apps.
Everyone writing about a "future view" or "next gen" would have to prove to me that they really understand the current state of things.
I like reminding students about Steve Jobs. While others pushed for web apps, he launched the App Store alongside the iPhone in 2008 and changed everything. I ask my students, why not stick with web apps? Why go native?
Hopefully, these questions get them thinking about new platforms, new technologies and solutions, and maybe even spark ideas that lead to something better.
Just picture this: it's 2007, and you're standing in front of Steve Jobs telling him: “You don't understand anything about the web, Steve.”
Yeah, good luck with that. For that reason I'll politely decline your offer to prove how much I know about this topic, but you're more than welcome to share your own perspective.
The binary is a compressed artefact and the stream is a TLS pipe. But the principle is the same.
In fact videos streams over the web are actually based on how HTTP documents are chunked and retrieved, rather than the other way around.
2. Everything in our world is dwarfs standing on the shoulders of giants. Ripping everything up and creating something completely new is, most of the time, an idea that sounds better than it really would be. Anyone who thinks otherwise is mostly too young to see this pattern.
I'll have to speculate what you mean
1. If you mean drawing pixels directly instead of relying on HTML, it's going to be slower. (either because of network lag or because of WASM overhead)
2. If you mean streaming video to the browser and rendering your site server-side, it will break features like resizing the window or turning a phone sideways, and it will be hideously expensive to host.
3. It will break all accessibility features like Android's built-in screen reader, because you aren't going to maintain all the screen reader and braille stuff that everyone might need server-side, and if you do, you're going to break the workflow for someone who relies on a custom tweak to it.
4. If you are drawing pixels from scratch, you also have to re-implement things like selecting and copying text, which is possible but rarely practical.
5. A really good GUI toolkit like Qt or Chromium will take 50-100 MB. Say you can trim your site's server-side toolkit down to 10 MB somehow. If you are very very lucky, you can share some of that in the browser's cache with other sites, _if_ you are using the same exact version of the toolkit, on the same CDN. Now you are locked into using a CDN. Now your website costs 10 MB for everyone loading it with a fresh cache.
You can definitely do this if your site _needs_ it. Like, you can't build OpenStreetMap without JS, you can't build chat apps without `fetch`, and there are certain things where drawing every pixel yourself and running a custom client-side GUI toolkit might make sense. But it's like 1% of sites.
I hate HTML but it's a local minimum. For animals, weight is a type of strength, for software, popularity is a type of strength. It is really hard to beat something that's installed everywhere.
I'm honestly just really annoyed by this "society and environment" spin on advice that otherwise has a niche but perfectly valid reason behind it (TFA: slow satellite network on the high seas).
This might sound harsh and I don't mean it personally, but making your website smaller and "being vocal about it" (whatever you mean by that) doesn't make an iota of difference. It also only works if your site is basically just text. If your website uses other resources (images, videos, 3D models, audio, etc.), the impact of first load is just noise anyway.
You can have a bigger impact by telling 100,000 people to drive an hour less each month and if just 1% of your hypothetical audience actually does that, you'd achieve orders of magnitude more in terms of environmental and societal impact.
Now, it's true that it didn't save much, because many people were probably uploading 8K videos at the time, so it was a drop in the bucket. But personally, I found it quite inspiring, and his decision was instrumental in my deciding to never upload 4K. And in general, I will say that people like that do inspire me and keep me going to be as minimal as possible when I use energy in all domains.
For me at least, trying to optimize for using as little energy as possible isn't an engineering problem. It's a challenge to do it uniformly as much as possible, so it can't be subdivided. And I do think every little bit counts, and if I can spend time making my website smaller, I'll do that in case one person gets inspired by that. It's not like I'm a machine and my only goal is time efficiency....
Depending on the type of video this may not matter, but it often does. For example, my FPS gaming and dashcam footage gets utterly destroyed if uploaded to youtube at 1080p. Youtube's 4K seems roughly equivalent to my high bitrate 1080p recordings.
Plus, trying to signal your way to societal change can have unintended downsides. It makes you feel you are doing something when you are actually not making any real impact. It attracts the kind of people who care more about signaling the right signals than doing the right thing into your camp.
And no, a million small sites won't "become a trend in society".
If we really want to fix the places with bigger impact, we need to change this approach in the first place.
This is micro-optimisation for a valid use case (slow connections in bandwidth-starved situations), but in the real world, a single hi-res image, short video clip, or audio sample would negate all your text-squeezing, HTTP header optimisation games, and struggle for minimalism.
So for the vast majority of use cases it's simply irrelevant. And no, your website is likely not going to get 1,000,000 unique visitors per hour so you'd have a hard time even measuring the impact whereas simply NOT ordering pizza and having a home made salad instead would have a measurable impact orders of magnitude greater.
Estimating the overall impact of your actions and non-actions is hard, but it's easier and more practical to optimise your assets, remove bloat (no megabytes of JS frameworks), and think about whether you really need that annoying full-screen video background. THOSE are low-hanging fruit with lots of impact. Trying to trim down a functional site to <14kB is NOT.
Calculate how much electricity you personally consume in total browsing the Internet for a year. Multiply that by 10 to be safe.
Then compare that number to how much energy it takes to produce a single hamburger.
Do the calculation yourself if you do not believe me.
On average, we developers can make a bigger difference by choosing to eat salad one day instead of optimizing our websites for a week.
I've tried to really cut down my website as well to make it fairly minimal. And when I upload stuff to YouTube, I never use 4K, only 1080P. I think 4K and 8K video should not even exist.
A lot of people talk about adding XYZ megawatts of solar to the grid. But imagine how nice it could be if we regularly had efforts to use LESS power.
I miss the days when websites were very small in the days of 56K modems. I think there is some happy medium somewhere and we've gone way past it.
Since a main argument is seemingly that AI is worse, let's remember that AI is querying these huge pages as well.
Also, 14 kB is less than 1% of the current average mobile website payload.
I myself have installed one single package, and it installed 196,171 files in my home directory.
If that isn't gratuitous bloat, then I don't know what is.
Creating an average hamburger requires an input of 2-6 kWh of energy, from start to finish. At 15¢ USD/kWh, this gives us an upper limit of about 90¢ of electricity.
The average 14 kB web page takes about 0.000002 kWh to serve. You would need to serve that web page about 1,000,000 to 3,000,000 times to match the energy demands of a single hamburger. A 14 MB web page, which would be a pretty heavy JavaScript app these days, would need about 1,000 to 3,000.
I think those are pretty good ways to use the energy.
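For anyone who wants to check the arithmetic, here's a rough back-of-the-envelope sketch in Python using the figures above (2-6 kWh per burger, ~0.000002 kWh per 14 kB page served; the per-page figure is the estimate quoted above, not a measurement):

    # Back-of-the-envelope check of the burger-vs-web-page energy comparison.
    # Figures are the ones quoted above: 2-6 kWh per hamburger and roughly
    # 2e-6 kWh to serve a 14 kB page (an assumed, not measured, value).

    BURGER_KWH_LOW, BURGER_KWH_HIGH = 2.0, 6.0
    PAGE_14KB_KWH = 2e-6                  # energy to serve one 14 kB page
    PAGE_14MB_KWH = PAGE_14KB_KWH * 1000  # a 14 MB page is ~1000x the bytes

    for label, per_page in [("14 kB page", PAGE_14KB_KWH), ("14 MB page", PAGE_14MB_KWH)]:
        low = BURGER_KWH_LOW / per_page
        high = BURGER_KWH_HIGH / per_page
        print(f"{label}: {low:,.0f} to {high:,.0f} servings per hamburger")

    # Output:
    # 14 kB page: 1,000,000 to 3,000,000 servings per hamburger
    # 14 MB page: 1,000 to 3,000 servings per hamburger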
Then multiply that by the number of daily visitors.
Without "hamburgers" (food in general), we die, reducing the size of usesless content on websites doesn't really hurt anyone.
Then multiply that by the number of daily customers.
Without web pages (information in general), we return to the Dark Ages. Reducing the number of hamburgers people eat doesn't really hurt anyone.
Now, if McDonald's padded the 5 kB of "calories" in a cheeseburger with 10,000 kB of calories in wasted food, the way news sites do, it would be a different story. The ratio would be 200 kilos of wasted food for every 100 grams of usable beef.
It's just an inconvenient truth for people who only care about the environmental impact of things that don't require a behavior change on their part. And that reveals an insincere, performative, scoldy aspect of their position.
https://ourworldindata.org/land-use-diets
What benefit does an individual get from downloading tens of megabytes of useless data to get ~5kB of useful data in an article? It wastes download time, bandwidth, users time (having to close the autoplaying ad), power/battery, etc.
For a user's access to a random web page anywhere, assuming it's not on a CDN near the user, you're looking at at ~10 routers/networks on the way involved in the connection. Did you take that into account?
(In addition to what justmarc said about accounting for the whole network: between feeding them and the indirect effects of their contribution to climate change, I suspect you're being generous about the cost of a burger.)
Instead we should be looking to nuclear power solutions for our energy needs, and not waste time reducing website size if it's purely about environmental impact.
And no, reducing resource use to the minimum in the name of sustainability does not scale down the same way it scales up. You're just pushing the idea that all human activity is some sort of disease that's best disposed of. That's essentially just wishing the worst on your own species for being successful.
It's never clear to me whether people who push this line are doing so because they're bitter and want to punish other humans, or because they hate themselves. Either way, it evinces a system of thought that has already relegated humankind to the dustbin of history. If, in the long run, that's what happens, you're right and everyone else is wrong. Congratulations. It will make little difference in that case to you if the rest of us move on for a few hundred years to colonize the planets and revive the biosphere. Comfort yourself with the knowledge that this will all end in 10 or 20 thousand years, and the world will go back to being a hot hive of insects and reptiles. But what glory we wrought in our time.
Whataboutism. https://en.m.wikipedia.org/wiki/Whataboutism
> You're just pushing the idea that all human activity is some sort of disease that's best disposed of. That's essentially just wishing the worst on your own species for being successful.
Strawmanning. https://en.m.wikipedia.org/wiki/Straw_man
Every bloody mention of the environmental impact of our activities gets at least a reply like yours that ticks one of these boxes.
That's a sweeping misunderstanding of what I wrote, so I'd ask that you re-read what I said in response to the specific quote.
(this was actually stated in agreement with the original poster, who you clearly misunderstood, so there's no "what-about" involved here. They were condemning all kinds of consumption, including the frivolous ones I mentioned).
But I'm afraid you've missed both my small point and my wider point.
My small point was to argue against the parent's comment that
>>reducing resource consumption to the minimum required should always be a concern if we intend to have a sustainable future
I disagree with this concept on the basis that nothing can be accomplished on a large scale if the primary concern is simply to reduce resource consumption to a minimum. If you care to disagree with that, then please address it.
The larger point was that this theory leads inexorably to the idea that humans should just kill themselves or disappear; and it almost always comes from people who themselves want to kill themselves or disappear.
..."required".
That allows you to fit pretty much everything in that requirement. Which actually makes my initial point a bit weak, as some would put "delivering 4K quality tiktok videos" as a requirement.
Point is that energy consumption and broad environmental impact has to be a constraint in how we design our systems (and businesses).
I stand by my accusations of whataboutism and strawmanning, though.
Why don't you read this comment and see if you have the same energy for hamburger eaters that you do for people with websites over 14kb. Because if you don't, it's obvious you're looking to sweat people who actually care about their environmental impact over absolutely nothing.
https://news.ycombinator.com/item?id=44614291
FYI, it's not Whataboutism to say there are more effective things to focus on.
I know it's not exactly the topic, but sometimes I think we don't need the fastest response time so much as a consistent response time. Like every single page within the site being fully rendered in exactly 1 s. Nothing more, nothing less.
But this sort of goes against my no / minimal JS front end rendering philosophy.
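For what it's worth, a fixed response time can be enforced entirely server-side, so it needn't conflict with a no/minimal-JS philosophy. A minimal sketch in Python, where render_page is a hypothetical page-rendering function:

    import time

    TARGET_SECONDS = 1.0  # fixed latency budget for every page

    def render_with_constant_latency(render_page, *args, **kwargs):
        """Run a (hypothetical) page-rendering function, then sleep for
        whatever time remains so every response takes ~TARGET_SECONDS."""
        start = time.monotonic()
        body = render_page(*args, **kwargs)
        elapsed = time.monotonic() - start
        if elapsed < TARGET_SECONDS:
            time.sleep(TARGET_SECONDS - elapsed)
        return body

Of course this only pads fast responses up to the budget; pages that take longer than 1 s to render would still overshoot it.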
HTTP/3 runs over QUIC on UDP rather than TCP, so TCP slow start as such doesn't apply, although QUIC implements its own congestion control with a similar slow-start phase.
One other advantage of QUIC is that you avoid some latency from the three-way handshake that is used in almost any TCP implementation. Although technically you can already send data in the first SYN packet, the three-way handshake is necessary to avoid confusion in some edge cases (like a previous TCP connection using the same source and destination ports).
Interesting to hear that QUIC does away with the 3WHS - it always catches people by surprise that it takes at least 4 x latency to get some data on a new TCP connection. :)
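To make the round-trip arithmetic concrete, here's a rough sketch of time-to-first-response-byte on a fresh connection under the commonly cited handshake counts (ignoring congestion control and server processing; the 100 ms RTT is just an assumed example):

    # Rough round-trip arithmetic for time-to-first-response-byte on a fresh
    # connection (ignoring congestion control and server think time).
    # Counts are the commonly cited ones; treat them as an illustration.

    RTT_MS = 100  # assumed round-trip time, e.g. a mediocre mobile link

    scenarios = {
        "TCP only":             2,  # 1 RTT handshake + 1 RTT request/response
        "TCP + TLS 1.2":        4,  # + 2 RTT TLS handshake
        "TCP + TLS 1.3":        3,  # + 1 RTT TLS handshake
        "QUIC (1-RTT)":         2,  # transport + TLS combined in one handshake
        "QUIC 0-RTT (resumed)": 1,  # request rides along with the handshake
    }

    for name, rtts in scenarios.items():
        print(f"{name:22s} ~{rtts} RTT = {rtts * RTT_MS} ms before the first byte arrives")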
Very relevant. A lot of websites need 5 to 30 seconds or more to load.
A 14kb page can load much faster than a 15kb page - https://news.ycombinator.com/item?id=32587740 - Aug 2022 (343 comments)
Doesn't this sort of undo the entire point of the article?
If the idea was to serve the entire web page in the first roundtrip, wouldn't you have lost the moment TLS is used? Not only does the TLS handshake send lots of stuff (including the certificate) that will likely get you over the 14kb boundary before you even get the chance to send a byte of your actual content - but the handshake also includes multiple request/response exchanges between client and server, so it would require additional roundtrips even if it stayed below the 14kb boundary.
So the article's advice only holds for unencrypted plain-TCP connections, which no one would want to use today anymore.
The advice might be useful again if you use QUIC/HTTP/3, because QUIC replaces TCP and folds the TLS 1.3 handshake into its own, providing the features of both in one protocol. But then you'd first have to look up how congestion control and bandwidth estimation work in HTTP/3 and whether 14 kB is still the right threshold.
And TLS handshakes aren't that big, even with certificates... Although you do want to use ECC certs if you can, the keys are much smaller. The client handshake should fit in 1-2 packets, the server handshake should fit in 2-3 packets. But more importantly, the client request can only be sent after receiving the whole server handshake, so the congestion window will be refreshed. You could probably calculate how much larger the congestion window is likely to be, and give yourself a larger allowance, since TLS will have expanded your congestion window.
OTOH, the important concept is that early throughput is limited by latency and congestion control, and it takes many round trips to hit connection limits.
One way to apply that: if you double your page weight but at the same time add many more serving locations and better traffic direction, page load times can stay about the same.
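As an illustration of how round trips dominate early throughput, here's a rough slow-start model (assuming a 1460-byte MSS, an initial window of 10 segments as on modern Linux, and no loss; real stacks differ, and TLS bytes count against the window too):

    def round_trips_to_send(total_bytes, mss=1460, initcwnd=10):
        """Rough slow-start model: the sender may have `cwnd` segments in
        flight per round trip, and cwnd doubles each RTT (no loss assumed)."""
        segments_left = -(-total_bytes // mss)  # ceiling division
        cwnd, rtts = initcwnd, 0
        while segments_left > 0:
            rtts += 1
            segments_left -= cwnd
            cwnd *= 2
        return rtts

    for size_kb in (14, 15, 100, 500):
        print(f"{size_kb:4d} kB -> {round_trips_to_send(size_kb * 1024)} round trip(s)")

    # With initcwnd=10 and a 1460-byte MSS, ~14 kB fits in the first round trip,
    # while 15 kB needs a second one -- hence the "14 kB rule".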
Looking at the posts themselves, they vary in size but the content/styles/JS probably average around 14kB. You've also got the font file, but again a more minimal site could strip that. Finally, each post has a cover image that makes up the bulk of the content size. I don't think you're ever going to get that under 14kB, but they're also very easy to load asynchronously, and with a CSS-rendered blur hash placeholder, you could have an initial page load that looks fairly good where everything not in the initial 14kB can be loaded later without causing FOUCs/page layout shifts/etc.
For a magazine site or a marketing site, the 14kB thing is almost certainly impossible, but for blogs or simple marketing pages where the content is more text-based or where there are minimal above-the-fold images, 14kB is pretty viable.
For reference, my blog is https://jonathan-frere.com/, and you can see a version of it from before I added the custom fonts here: https://34db2c38.blog-8a1.pages.dev/ I think both of these versions are not "spartan minimal academic pages".
How about a single image? I suppose a lot of people (visitors and webmasters) like to have an image or two on the page.
Would love it if someone kept a list.
https://250kb.club/
Hopefully you'll find some of them aesthetically pleasing
The quality of connection is so much better, and since you can get a Starlink Mini with a 50 GB plan for very little money, it's already at the point where a single worker could grab his own, bring it onto the rig, use it in his free time, and share it.
Starlink terminals aren't "infrastructure". Campers often toss one on their roof without even leaving the vehicle. Easier than moving a chair. So, as I said, the geostationary legacy system immediately becomes entirely obsolete other than for redundancy, and is kinda irrelevant for uses like browsing the web.
The HTTPS negotiation is going to consume the initial round trips, which should start increasing the size of the congestion window.
Modern CDNs start with larger initial windows and also pace packets onto the network to reduce the chance of causing congestion.
There's also a question of how relevant the 14 kB rule has ever been… HTML renders progressively, so as long as there's some meaningful content in the early packets, overall size matters less.
There are some chargers that take card payments though. My local IKEA has some. There's also EU legislation to mandate payment card support.
https://electrek.co/2023/07/11/europe-passes-two-big-laws-to...
There are other ways you can try to optimise the certificate chain, though. For instance, you can pick a CA that uses ECC rather than RSA to make use of the much shorter key sizes. Entrust has one, I believe. Even if the root CA has an RSA key, they may still have ECC intermediates you can use.
But yes, ensure that you're serving the entire chain, but keep the chain as short as possible.
> Also HTTPS requires two additional round trips before it can do the first one — which gets us up to 1836ms!
0-RTT works after the first handshake, but enabling it allows for some forms of replay attacks so that may not be something you want to use for anything hosting an API unless you've designed your API around it.
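If you do enable 0-RTT in front of an API, the usual mitigation (RFC 8470) is to have the proxy mark early-data requests and have the application refuse non-idempotent ones with 425 Too Early until the handshake completes. A minimal sketch in Flask, assuming the proxy forwards an "Early-Data: 1" header (nginx, for example, can forward its $ssl_early_data variable this way):

    # Minimal sketch (assumes a front-end proxy forwards an "Early-Data: 1"
    # header for requests received as TLS 0-RTT data, per RFC 8470).
    from flask import Flask, request

    app = Flask(__name__)
    SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

    @app.before_request
    def reject_replayable_early_data():
        # Refuse state-changing requests that arrived as 0-RTT early data;
        # the client will retry them once the handshake has completed.
        if request.headers.get("Early-Data") == "1" and request.method not in SAFE_METHODS:
            return "Too Early", 425

    @app.route("/api/things", methods=["GET", "POST"])
    def things():
        return {"ok": True}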
I swear I am not just trying to be a dick here. If I didn't think it had great content I wouldn't have commented. But I feel like I'm reading a LinkedIn post. Please join some of those sentences up into paragraphs!
If I’m selling to cash cows in America or Europe it’s not an issue at all.
As long as 90% of your users have >10 Mbps download, I think it's better to think about making money. Besides, if you don't know that lazy loading exists in 2025, fire yourself lol.
https://www.mcmaster.com/ was found last year to be doing some real magic to make it load as fast as possible, even on the crappiest computers.
https://youtu.be/-Ln-8QM8KhQ
Basically, it looks like someone deliberately did a lot of things right, without being lazy or cheap, to create a performant web site.
- rural location
- roommate or sibling torrenting the shared connection into the ground
- driving around on a road with spotty coverage
- places with poor cellular coverage (some building styles are absolutely hell on cellular as well)
Using my own server software, I was able to produce a complex single-page app that resembled an operating system GUI and achieve full state restoration in as little as 80 ms from a localhost page request, according to the Chrome performance tab.
You are correct in that TCP packets are processed within the kernel of modern operating systems.
Edit for clarity:
This is a web-server-only algorithm; it is not associated with any other kind of TCP traffic. Judging from the downvotes, some people found this challenging.