Just want to say how grateful I am to YCom for not f'ing up the HN interface, and for keeping it fast.
I distinctly remember when Slashdot committed suicide. They had an interface that was very easy for me to scan and find high value comments, and in the name of "modern UI" or some other nonsense needed to keep a few designers employed, completely revamped it so that it had a ton of whitespace and made it basically impossible for me to skim the comments.
I think I tried it for about 3 days before I gave up, and I was a daily Slashdot reader before then.
Eji1700 · 2h ago
Information density and ease of identification are the antithesis of "engagement", which usually means some time-on-site metric they're hunting.
If you can find what you want and read it, you won't spend 5 extra seconds lost on their page letting them pad their stats for advertisers. Bonus points if the stupid page loads in such a way that you accidentally click on something and give them a "conversion".
Sadly financial incentive is almost always towards tricking people into doing something they don't want to do instead of just actually giving them what they fucking want.
andsoitis · 9m ago
> Sadly financial incentive is almost always towards tricking people into doing something they don't want to do instead of just actually giving them what they fucking want.
The north star should be user satisfaction. For some products that might be engagement (e.g. an entertainment service), while for others it is accomplishing a task as quickly as possible and exiting the app.
KPGv2 · 1h ago
The one and only thing I'd do is make the font bigger and increase padding. There's overwhelming consensus that you should have (for English) about 50–70 characters per line of text for the best, fastest, most accurate readability. That's why newspapers pair a small font with multiple columns: to limit the number of characters per line of text.
HN might have over 100 chars per line of text. It could be better. I know I could do it myself, and I do. But "I fixed it for me" doesn't fix it for anyone else.
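For anyone who wants the "fixed it for me" version without an extension, here's a sketch you can paste into the devtools console (assuming HN still wraps comment text in spans with the class `commtext`):

```ts
// Cap HN comment text at ~70 characters per line.
// "commtext" is HN's comment-text class; adjust if the markup changes.
document.querySelectorAll<HTMLElement>(".commtext").forEach((el) => {
  el.style.display = "inline-block"; // spans ignore max-width otherwise
  el.style.maxWidth = "70ch";        // ~50-70ch is the readability sweet spot
});
```

The same two declarations in a user stylesheet make it permanent.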
stevage · 1h ago
Increased padding comes at the cost of information density.
I think low density UIs are more beginner friendly but power users want high density.
jorvi · 41m ago
High information density, not high UI density.
Having 50 buttons and 10 tabs shoved in your face just makes for opaqueness, power user or not.
portaouflop · 33m ago
There are dozens of alternative HN front ends that would satisfy your needs
FlyingSnake · 1h ago
HN is literally the website I open to check if I have internet connectivity. HN is truly a shining beacon in the trashy landscape of web bloat.
throwawayexmple · 7m ago
I find pinging localhost a bit more reliable, and faster too.
I blame HN switching to AWS. Downtime also increased after the switch.
kikoreis · 1h ago
Oh it's lwn.net for me!
postalcoder · 33m ago
It’s not modern UIs that prevent websites from being performant. Look at old.reddit.com, for instance. It’s the worst of both worlds. An old UI that, although much better than its newer abomination, is fundamentally broken on mobile and packed to the gills with ad scripts.
HarHarVeryFunny · 59m ago
I don't think it was UI that killed Slashdot. The value was always in the comments, and in the very early years often there would be highly technical SMEs commenting on stories.
The site seemed to start to go downhill when it was sold, and got into a death spiral of less informed users, poor moderation, people leaving, etc. It's amazing that it's still around.
ChrisMarshallNY · 1h ago
That, and all the trolls that piled on when CNN and YouTube started policing their comment sections.
MawKKe · 1h ago
Similar thing happened (to me) with Hackaday around 2010-2011. I used to check it almost daily, and then never again after the major re-design.
nashashmi · 2h ago
Is that when they went fully xhtml?
cyanydeez · 1h ago
I've wanted to poll HN about how many people actively track usernames.
With IRC it's basically part of the task, but on every forum I read, it's rare that I ever consider who's saying what.
stevage · 1h ago
Yep. Dang is basically the only one I notice.
PaulHoule · 4h ago
Kinda funny but I think LLM-assisted workflows are frequently slow -- that is, if I use the "refactor" features in my IDE it is done in a second, if I ask the faster kind of assistant it comes back in 30 seconds, if I ask the "agentic" kind of assistant it comes back in 15 minutes.
I asked an agent to write an http endpoint at the end of the work day when I had just 30 min left -- my first thought was "it took 10 minutes to do what would have taken a day", but then I thought, "maybe it was 20 minutes for 4 hours worth of work". The next day I looked at it and found the logic was convoluted, it tried to write good error handling but didn't succeed. I went back and forth and ultimately wound up recoding a lot of stuff manually. In 5 hours I had it done for real, certainly with a better test suite than I would have written on my own and probably better error handling.
I've already written about this several times here. I think the current trend of LLMs chasing benchmark scores is going in the wrong direction, at least for programming tools. In my experience they get it wrong with enough probability that I always need to check the work. So I end up in a back and forth with the LLM, and because of the slow responses it becomes a really painful process; I could often have done the task faster if I sat down and thought about it. What I want is an agent that responds immediately (and I mean in subseconds), even if some benchmark score is 60% instead of 80%.
pron · 1h ago
Programmers (and I'm including myself here) often go to great lengths to not think, to the point of working (with or without a coding assistant) for hours in the hope of avoiding one hour of thinking. What's the saying? "An hour of debugging/programming can save you minutes of thinking," or something like that. In the end, we usually find that we need to do the thinking after all.
I think coding assistants would end up being more helpful if, instead of trying to do what they're asked, they would come back with questions that help us (or force us) to think. I wonder if a context prompt that says, "when I ask you to do something, assume I haven't thought the problem through, and before doing anything, ask me leading questions," would help.
I think Leslie Lamport once said that the biggest resistance to using TLA+ - a language that helps you, and forces you, to think - is that thinking is the last thing programmers want to do.
PaulHoule · 1h ago
Sometimes thinking and experimenting go together. I had to do some maintenance on some Typescript/yum code that I didn't write but had done a little maintenance on before.
TypeScript can produce astonishingly complex error messages when types don't match up, so I went through a couple of rounds of showing the errors to the assistant and getting suggestions that were wrong but gave me ideas for more experiments. Over the course of two days (making desired changes along the way) I figured out what was going wrong and cleaned up the use of types to the point that I was really happy with my code. When I saw a red squiggle I usually knew right away what was wrong, and if I did ask the assistant, it would also get it right immediately.
I think there's no way I would have understood what was going on without experimenting.
ChrisMarshallNY · 1h ago
I do both. I like to develop designs in my head, and there’s a lot of trial and error.
I think the results are excellent, but I can hit a lot of dead ends, on the way. I just spent several days, trying out all sorts of approaches to PassKeys/WebAuthn. I finally settled on an approach that I think will work great.
I have found that the old-fashioned “measure twice, cut once” approach is highly destructive. It was how I was trained, so walking away from it was scary.
markasoftware · 39m ago
GitHub copilot's inline completions still exist, and are nearly instant!
citizenpaul · 2h ago
The only way I've found that LLMs speed up my work is a sort of advanced find-and-replace.
A prompt like " I want to make this change in the code where any logic deals with XXX. To be/do XXX instead/additionally/somelogicchange/whatever"
It has been pretty decent at these types of changes and saves the time of poking through and finding all the places I would have updated manually, in a way that find/replace never could. Though I've never tried this on a huge code base.
zahlman · 42m ago
> A prompt like " I want to make this change in the code where any logic deals with XXX. To be/do XXX instead/additionally/somelogicchange/whatever"
If I reached a point where I would find this helpful, I would take this as a sign that I have structured the code wrongly.
skydhash · 2h ago
I suppose you haven't tried emacs grep mode or vim quickfix? If the change is mechanical, you create a macro and are done in seconds. If it's not, you still get the high-level overview and quick navigation.
kfajdsl · 2h ago
Finding and jumping to all the places is usually easy, but non trivial changes often require some understanding of the code beyond just line based regex replace. I could probably spend some time recording a macro that handles all the edge cases, or use some kind of AST based search and replace, but cursor agent does it just fine in the background.
skydhash · 51m ago
Code structure is simple; semantics is where it gets tough. So if you have a good understanding of the code (and even when you don't), the overview you get from one of those tools (and the added interactivity) is nice for confirming the actions that need to be taken.
> cursor agent does it just fine in the background
That's a very broad definition of fine. And you still need to review the diff and check the surrounding context of each chunk. I don't see the improvement in metrics like productivity and cognitive load, especially if you need to do several rounds.
Karrot_Kream · 2h ago
emacs macros aren't the same. You need to look at the file, observe a pattern, then start recording the macro and hope the pattern holds. An LLM can just do this.
skydhash · 35m ago
And that's why I mentioned grep-mode and other such tools. There are some videos about what I'm talking about.
Standard search and replace in other tools pales in comparison.
Karrot_Kream · 2h ago
I guess it depends? The "refactor" stuff, if your IDE or language server can handle it, then yeah, I find the LLM slower for sure. But there are other cases where an LLM helps a lot.
I was writing some URL canonicalization logic yesterday. Because we rolled this out as an MVP, customers put URLs in all sorts of ways and we stored them in the DB. My initial pass at the logic failed on some cases. Luckily URL canonicalization is pretty trivially testable. So I took the most-used customers from our DB, sent them to Claude, and told Claude to come up with the "minimum spanning test cases" that cover this behavior. This took maybe 5-10 sec. I then told Zed's agent mode using Opus to make me a test file and use these test cases to call my function. I audited the test cases and ended up removing some silly ones. I iterated on my logic and that was that. Definitely faster than having to do this myself.
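For flavor, a minimal sketch of the shape of this (the real rules and cases were different; everything here is invented):

```ts
// Hypothetical canonicalizer: lowercase host, drop "www.", fragments,
// and trailing slashes. Real rules would be driven by the customer data.
function canonicalizeUrl(raw: string): string {
  const u = new URL(raw.includes("://") ? raw : `https://${raw}`);
  const host = u.hostname.toLowerCase().replace(/^www\./, "");
  const path = u.pathname.replace(/\/+$/, "");
  return `${host}${path}${u.search}`;
}

// "Minimum spanning" style cases: each one exercises a single rule.
const cases: Array<[input: string, want: string]> = [
  ["HTTP://WWW.Example.com/", "example.com"],
  ["example.com/shop/", "example.com/shop"],
  ["https://example.com/a?b=1#frag", "example.com/a?b=1"],
];

for (const [input, want] of cases) {
  console.assert(canonicalizeUrl(input) === want, `${input} -> ${canonicalizeUrl(input)}`);
}
```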
tomrod · 2h ago
I'm consistently seeing personal and shared anecdotes of a 40%-60% speedup on targeted senior work.
As much as I like agents, I am not convinced the human using them can sit back and get lazy quite yet!
stavros · 1h ago
Eeeh, I spend less time writing code, but way more time reviewing and correcting it. I'm not sure I come ahead overall, but it does make development less boilerplaty and more high level, which leads to code that otherwise wouldn't have been written.
michaelsalim · 1h ago
Curious, what do you count as senior work?
old-gregg · 2h ago
Fun story time!
Early in my career as a software engineer, I developed a reputation for speeding things up. This was back in the day when algorithm knowledge was just as important as the ability to examine the output of a compiler, every new Intel processor was met with a ton of anticipation, and Carmack and Abrash were rapidly becoming famous.
Anyway, the 22 year old me unexpectedly gets invited to a customer meeting with a large multinational. I go there not knowing what to expect. Turns out, they were not happy with the speed of our product.
Their VP of whatever said, quoting: "every saved second here adds $1M to our yearly profit". I was absolutely floored. Prior to that moment I couldn't even dream of someone placing a dollar amount on speed, and so directly. Now 20+ years later it still counts as one of the top 5 highlights of my career.
P.S. Mentioning as a reaction to the first sentence in the blog post. But the author is correct when she states that this happens rarely.
P.P.S. There was another engineer in the room, who had the nerve to jokingly ask the VP: "so if we make it execute in 0 seconds, does it mean you're going to make an infinite amount of money?". They didn't laugh, although I thought it was quite funny. Hey, Doug! :)
felideon · 2h ago
So, did you make it faster?
old-gregg · 1h ago
Unfortunately, there wasn't a single bottleneck. A bunch of us, not just me, worked our asses off improving performance by a little bit in several places. The compounded improvement IIRC was satisfactory to the customer.
adwn · 2h ago
> "so if we make it execute in 0 seconds, does it mean you're going to make an infinite amount of money?"
I don't get it. Wouldn't going from 1 second to 0 seconds add the same amount of money to the yearly profit as going from 2 seconds to 1 second did? Namely, $1M.
old-gregg · 7m ago
> I don't get it. Wouldn't going from 1 second to 0 seconds add the same amount of money to the yearly profit as going from 2 seconds to 1 second did? Namely, $1M
Of course the joke was silly. But perhaps I should have provided some context. We were making industrial automation software. This stuff runs in factories. Every saved second shrinks the manufacturing time of a part, increasing total factory output. Extrapolated to absurd levels, zero time to manufacture means infinite output per factory (sans raw materials).
stronglikedan · 2h ago
yeah it's one of those things that are funny to the people saying it because they don't yet realize it doesn't make sense. I bet they felt that later, in the hotel room, in the shower, probably with a bottle of scotch.
Otek · 1h ago
> I bet they felt that later, in the hotel room, in the shower, probably with a bottle of scotch.
Geez, life in my opinion is not so serious. It’s okay to say stupid things and not feel bad about it, as long as you are not trying to hurt anyone.
I bet they felt great and immediately forgot about this bad joke.
andsoitis · 3m ago
Their joke could have also been interpreted as sarcasm and when you’re going to be sarcastic you want to be doubly sure that you’re correct.
But I also concur with you that it is good to bring some levity to “serious” conversations!!
betterhealth12 · 1h ago
earlier in my career it'd be appealing to make jokes like that, or include a comment in an email. eventually you realize that people - especially "older" or those already a few years into their career - mostly don't want to joke around and just want to actually get the thing done you are meeting about.
flobosg · 2h ago
A process taking 0 seconds means that, in one year, it can be run 31540000 sec/0 sec = ∞ times, multiplying the profit by ∞.
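Spelled out, the joke's arithmetic (taking one year as roughly 3.154 × 10⁷ seconds):

```latex
\lim_{t \to 0^{+}} \frac{3.154 \times 10^{7}\ \text{s/yr}}{t\ \text{s/run}} = \infty\ \text{runs/yr}
```

The linear reading ($1M per saved second) stays finite; the joke only works if profit scales with runs per year.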
willsmith72 · 1h ago
Since when is the constraint "how many times can I run this thing"?
zahlman · 40m ago
In principle, the reason that "every second saved here is worth $x" is because running the thing generates money, and saving time on it allows for running it more often.
lblume · 1h ago
At least in theoretical computer science, often, but that's another matter entirely.
ensemblehq · 2h ago
RE: P.P.S... God I love that humour. Actually was very funny.
9rx · 4h ago
> Rarely in software does anyone ask for “fast.”
They don't explicitly ask for it, but they won't take you seriously if you don't at least pretend to be. "Fast" is assumed. Imagine if Rust had shown up, identical in every other way, but said "However, it is slower than Ruby". Nobody would have given it the time of day. The only reason it was able to gain attention was because it claimed to be "Faster than C++".
Watch HN for a while and you'll start to notice that "fast" is the only feature that is necessary to win over mindshare. It is like moths to a flame as soon as something says it is faster than what came before it.
didibus · 18m ago
You're drawing the wrong conclusion: "fast" is a winning differentiator only when you offer the same feature-set, but faster.
Your example says it, people will go, this is like X (meaning it does/has the same features as X), but faster. And now people will flock from X to your X+faster thing.
Which tells us nothing about whether people would also move to an X+more-features, or an X+nicer-ux, or an X+cheaper, etc., without it being any faster than X, or even possibly slower.
mvieira38 · 2h ago
Only in the small subset of programmers that post on HN is that the case. Most users or even most developers don't mind slow stuff or "getting into flow state" or anything like that, they just want a nice UI. I've seen professional data scientists using Github Desktop on Windows instead of just learning to type git commands for an easy 10x time save
Dylan16807 · 4h ago
Maybe for languages, but fast is easily left behind when looking for frameworks. People want features, people want compatibility, people will use electron all over.
9rx · 4h ago
> fast is easily left behind when looking for frameworks.
Nah. React, for example, only garnered attention because it said "Look how much faster the virtual DOM is!". We could go on all day.
> People want features, people want compatibility
Yes, but under the assumption that it is already built to be as "fast" as possible. "Fast" is assumed. That's why "faster" is such a great marketing trick, as it tunes people into "Hold up. What I'm currently using actually sucks. I'd better reconsider."
"Fast" is deemed important, but it isn't asked for as it is considered inconceivable that you wouldn't always make things "fast". But with that said, keep in mind that the outside user doesn't know what "fast" is until there is something to compare it with. That is how some products can get away with not being "fast" — until something else comes along to show that it needn't be that way.
"Look how quickly it can render the component 50 times!"
tobyhinloopen · 3h ago
"Look, it can render the whole app really quickly every time the user presses a key!"
PaulHoule · 2h ago
That gets into a very interesting question of controlled vs. uncontrolled components.
On one hand I like controlled components because there is a single source of truth for the data (a useState() somewhere in the app), but you are forced to re-render on each keypress. With uncontrolled components, on the other hand, there's the possible anarchy of having state both in React and in the actual form, though there are libraries with a rational answer to the problems that turn up with uncontrolled forms.
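A minimal sketch of the two styles (assuming React; components invented for illustration):

```tsx
import React, { useRef, useState } from "react";

// Controlled: useState is the single source of truth,
// at the cost of a re-render on every keypress.
function ControlledName() {
  const [name, setName] = useState("");
  return <input value={name} onChange={(e) => setName(e.target.value)} />;
}

// Uncontrolled: the DOM input owns the state; React reads it
// only when needed (e.g. on blur/submit), so typing re-renders nothing.
function UncontrolledName() {
  const ref = useRef<HTMLInputElement>(null);
  const onSubmit = () => console.log(ref.current?.value);
  return <input defaultValue="" ref={ref} onBlur={onSubmit} />;
}
```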
atq2119 · 3h ago
And yet we live in a world of (especially web) apps that are incredibly slow, in the sense that an update in response to user input might take multiple seconds.
Yes, fast wins people over. And yet we live in a world where the actual experience of every day computing is often slow as molasses.
9rx · 1h ago
The trouble is that "fast" doesn't mean anything without a point of comparison. If all you have is a slow web app, you have to assume that the web app is necessarily slow — already as fast as it can be. We like to give people the benefit of the doubt, so there is no reason to think that someone would make something slower than is necessary.
"Fast" is the feature people always wanted, but absent better information, they have to assume that is what they already got. That is why "fast" marketing works so well. It reveals that what they thought was pretty good actually wasn't. Adding the missing kitchen sink doesn't offer the same emotional reaction.
renlo · 1h ago
> The trouble is that "fast" doesn't mean anything without a point of comparison.
This is what people are missing. Even those "slow" apps are faster than their alternatives. People demand and seek out "fast", and I think the OP article misses this.
Even the "slow" applications are faster than their alternatives or have an edge in terms of speed for why people use them. In other words, people here say "well wait a second, I see people using slow apps all the time! People don't care about speed!", without realizing that the user has already optimized for speed for their use case. Maybe they use app A which is 50% as fast as app B, but app A is available on their toolbar right now, and to even know that app B exists and to install it and learn how to use it would require numerous hours of ramp up time. If the user was presented with app A and app B side by side, all things equal, they will choose B every time. There's proficiency and familiarity; if B is only 5% faster than A, but switching to B has an upfront cost in days to able to utilize that speed, well that is a hidden speed cost and why the user will choose A until B makes it worth it.
Speed is almost always the universal characteristic people select for, all things equal. Just because something faster exists, and it's niche, and hard to use (not equal for comparison to the common "slow" option people are familiar with), it doesn't mean that people reject speed, they just don't want to spend time learning the new thing, because it is _slower_ to learn how to use the new thing at first.
lblume · 1h ago
> you have to assume
We don't have to assume. We know that JavaScript is slow in many cases, that shipping more bundles instead of less will decrease performance, and that with regard to the amount of content served generally less is more.
Whether this amount of baggage every web app seems to come with these days is seen as "necessary" or not is subjective, but I would tend to agree that many developers are ignorant of different methods or dislike the idea of deviating from the implied norms.
underdeserver · 1h ago
Eh, I think the HN crowd likes fast because most tech today is unreasonably slow, when we know it could be fast.
asa400 · 1h ago
To a first approximation HN is a group of people who have convinced themselves that it's a high quality user experience to spend 11 seconds shipping 3.8 megabytes of Javascript to a user that's connected via a poor mobile connection on a cheap dual-core phone so that user can have a 12 second session where they read 150 words and view 1 image before closing the tab.
Fast is _absolutely not_ the only thing we care about. Not even top 5. We are addicted to _convenience_.
lblume · 1h ago
The fact that this article and similar ones get upvoted very frequently on this platform is strong evidence against this claim.
Considering the current state of the Web and user application development, I tend to agree with regard to its developers, but HN seems to still abide by other principles.
ilyakaminsky · 2h ago
Fast is also cheap. Especially in the world of cloud computing, where you pay by the second. The only way I could create a profitable transcription service [1] that undercuts the rest was by optimizing every little thing along the way. For instance, just yesterday I learned that the image size I've put together is 2.5× smaller than the next open source variant. That means faster cold boots, which reduces the cost (and provides a better service).
Is S3 slow or fast? It’s both, as far as I can tell and represents a class of systems (mine included) that go slow to go fast.
S3 is “slow” at the level of a single request. It’s fast at the level of making as many requests as needed in parallel.
Being “fast” is sometimes critical, and often aesthetic.
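Concretely, the "go slow to go fast" pattern looks something like this (a sketch assuming the AWS SDK v3; names illustrative):

```ts
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

// One GET pays the full per-request latency. N concurrent GETs
// finish in roughly the time of the slowest one, not the sum,
// so aggregate throughput scales with the fan-out.
async function fetchAll(bucket: string, keys: string[]) {
  return Promise.all(
    keys.map((key) => s3.send(new GetObjectCommand({ Bucket: bucket, Key: key })))
  );
}
```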
claytonjy · 2h ago
We have common words for those two flavors of “fast” already: latency and throughput. S3 has high latency (arguable!), but very very high throughput.
zahlman · 38m ago
Yep. I'm hoping that installed copies of PAPER (at least on Linux) will be somewhere under 2MB total (including populating the cache with its own dependencies etc). Maybe more like 1, although I'm approaching that line faster than I'd like. Compare 10-15 for pip (and a bunch more for pipx) or 35 for uv.
sipjca · 1h ago
I've approached the same thing but slightly differently. I can run it on consumer hardware for vastly cheaper than the cloud and don't have to worry about image sizes at all (bare metal is 'faster'), offering 20,000 minutes of transcription for free up to the rate limit (1 request every 5 seconds).
If you ever want to chat about making transcription virtually free or extremely cheap for everyone, let me know. I've been working on various projects related to it for a while, including an open source, cross-platform superwhisper alternative: https://handy.computer
HarHarVeryFunny · 1h ago
Fast doesn't necessarily mean efficient/lightweight and therefore cheaper to deploy. It may just mean that you've thrown enough expensive hardware at the problem to make it fast.
b_e_n_t_o_n · 2h ago
Your CSS is broken fyi
willsmith72 · 1h ago
Not in development and maintenance dollars it's not
ilyakaminsky · 1h ago
Hmm… That's a good point. I recall a few instances where I went too far to the detriment of production. Having a trusty testing and benchmarking suite thankfully helped with keeping things more stable. As a solo developer, I really enjoy the development process, so while that bit is costly, I didn't really consider that until you mentioned it.
Liftyee · 3h ago
I always have to remind myself of the bank transfer situation in the US whenever I read an article complaining about it. Here in the UK, bank transfers are quick and simple (the money appears to move virtually instantly). Feel free to enlighten me to why they're so slow in the US.
bobtheborg · 2h ago
"Community banks mostly don’t have programmers on staff, and are reliant on the so-called “core processors” ...
This is the largest reason why in-place upgrades to the U.S. financial system are slow. Coordinating the Faster ACH rollout took years, and the community bank lobby was loudly in favor of delaying it, to avoid disadvantaging themselves competitively versus banks with more capability to write software (and otherwise adapt operationally to the challenges same-day ACH posed)."
For ACH, it's the scheduling and batching that makes it slow. The transfer itself should be instant but often my bank sends it out around midnight. This is why Venmo and Zelle are so popular. You can also modify/cancel a bank transfer before it goes through, which is nice.
This is the same in Switzerland. If you request an IBAN transfer, it's never instant. The solution there for fast payments is called TWINT, which works at almost any POS terminal (you take a picture of the displayed QR code).
I think BACS is similarly "slow" due to the settlement process.
silotis · 1h ago
These days ACH settlement runs multiple times a day. The biggest source of delay for ACH transfers is your bank delaying release of the funds for risk management. ACH transfers can be reversed even after they have "settled" and if the receiving bank has already disbursed the funds then they have to eat the cost of reimbursing the sender. Reversals are more likely to happen soon after the transfer completes, so delaying release of the funds makes it less likely the bank will be left holding the bag.
maccard · 1h ago
People are almost always talking about Faster Payments [0] rather than BACS. It really is instant.
I was pleasantly surprised when I bought a house that I could just transfer everything instantly with faster payments. I was fully expecting to deal with CHAPS, etc.
But the faster payments ceiling is large enough that buying a house falls under the limit.
Aurornis · 2h ago
The US actually has two real-time payment systems: RTP and FedNow. The number of participating banks is growing rapidly.
Prior to that, you could get instant transfers but it came with a small fee because they were routed through credit card networks. The credit card networks took a fee but credit card transactions also have different guarantees and reversibility (e.g. costing the bank more in cases of fraud)
aidenn0 · 2h ago
From the linked RTP site: "Because of development and operational costs most banks and credit unions will offer "send only" capabilities"
Which means nobody can send me money.
FedNow on the backend is supported by fewer banks than Zelle is, which is probably why hardly any banks expose a front-end for it.
kccqzy · 1h ago
I am convinced that this is in some cases a pro-consumer behavior. A credit card company once pulled money from my bank via ACH due to the automatic payment feature I set up, but that bank account didn't have enough money in it. The bank sent me at least two emails about the situation. I finally noticed that second email and wired myself more money from a different account. The credit card company didn't notice anything wrong and didn't charge any late fees or payment returned fees. The bank didn't charge any overdraft fees or insufficient funds fees. And the wire transfer didn't have a fee due to account balance. (Needless to say, from then on I no longer juggle multiple bank accounts like that.)
The bank had an opportunity to notify me precisely because ACH is not real time. And I had an opportunity to fix it because wire transfers are almost real time (they finish in minutes, not days). I appreciate that when companies pull money from my account I get days of notice, but if I need to move money quickly I can do that too.
IshKebab · 54m ago
In most cases it's definitely better for it to be fast. For example, I sold a buggy face-to-face today and they paid me by bank transfer; the reason we could do that was that I had high confidence it would turn up quickly and that they weren't trying to scam me. It actually took around 1 second, which is really quite fast.
You don't need slow transfers to get an opportunity to satisfy automatic payments. I don't know how it works but in the UK direct debits (an automatic "take money from my account for bills" system) gives the bank a couple of days notice so my banking app warns me if I don't have enough money. Bank transfers are still instant.
Isn't there a law in the UK which says it must be fast?
nu11ptr · 3h ago
This is interesting. It got me to think. I like it when articles provoke me to think a bit more on a subject.
I have found this true for myself as well. I changed back over to Go from Rust mostly for the iteration speed benefits. I would replace "fast" with "quick", however. It isn't so much I think about raw throughput as much as "perceived speed". That is why things like input latency matter in editors, etc. If something "feels fast" (ie Go compiles), we often don't even feel the need to measure. Likewise, when things "feel slow" (ie Java startup), we just don't enjoy using them, even if in some ways they actually are fast (like Java throughput).
christophilus · 2h ago
I feel the same way about Go vs Rust. Compilation speed matters. Also, Rust projects resemble JavaScript projects in that they pull in a million deps. Go projects tend to be much less dependency happy.
nu11ptr · 1h ago
And that leads to dependency hell once you realize that those dependencies all need different versions of the same crate. Most of the time this "just works" (at the cost of more dependencies, longer compile time, bigger binary)... until it doesn't then it can be tough to figure out.
In general, I like cargo a lot better than the Go tooling, but I do wish the Rust stdlib was a bit more "batteries included".
asa400 · 1h ago
This is all well and good that we developers have opinions on whether Go compiles faster than Rust or whatever, but the real question is: which is faster for your users?
nu11ptr · 1h ago
...and that sounds nice to me as well, but if I never get far enough to give it to my users then what good is fast binaries? (implying that I quit, not that Rust can't deliver). The holy grail would be to have both. Go is generally 'fast enough', but I wish the language was a bit more expressive.
> Rarely in software does anyone ask for “fast.” We ask for features, we ask for volume discounts, we ask for the next data integration. We never think to ask for fast.
Almost everywhere I’ve worked, user-facing speed has been one of the highest priorities. From the smallest boring startup to the multi billion dollar company.
At companies that had us target metrics, speed and latency was always a metric.
I don’t think my experience has been all that unique. In fact, I’d be more surprised if I joined a company and they didn’t care about how responsive the product felt, how fast the pages loaded, and any weird lags that popped up.
SatvikBeri · 2h ago
At 6 out of 8 companies I've worked at (mostly a mixture of tech & finance) I have always had to fight to get any time allotted for performance optimization, to the point where I would usually just do it myself under the radar. Even at companies that measured latency and claimed it was important, it would usually take a backseat to adding more features.
codazoda · 3h ago
My experience has been that people sometimes obsess over speed for things like how fast a search result returns, but not over things like how fast a page renders or how many bytes we send the user.
calibas · 3h ago
Efficient code is also environmentally friendly.
First, efficient code is going to use less electricity, and thus, fewer resources will need to be consumed.
Second, efficient code means you don't need to be constantly upgrading your hardware.
breuleux · 1h ago
Well, that depends. Very inefficient code tends to only be used when absolutely needed. If an LLM becomes ten times faster at answering simple prompts, it may very well be used a hundred times more as a result, in which case electricity use will go up, not down. Efficiency gains commonly result in doing way more with more, not more with less.
lblume · 1h ago
Correct. This is also known as a rebound effect [1], or, specifically with regard to technological improvements, as the Jevons paradox [2].
Indeed, that is a common occurrence, called the Jevons paradox.
yogishbaliga · 2h ago
Very true, but in recent years feature development has taken precedence over efficiency. VP of whatever says hardware is cheap, software engineers are not.
dist-epoch · 2h ago
Energy used for lighting didn't decrease when the world moved to LED lights which use much less energy - instead we just used more lighting everywhere, and now cities are white instead of yellow.
SatvikBeri · 3h ago
I've noticed over and over again at various jobs that people underestimate the benefit of speed, because they imagine doing the same workflow faster rather than doing a different workflow.
For example, if you're running experiments in one big batch overnight, making that faster doesn't seem very helpful. But with a big enough improvement, you can now run several batches of experiments during the day, which is much more productive.
IshKebab · 52m ago
I think people also vastly underestimate the cost of context switching. They look at a command that takes 30 seconds and say "what's the point of making it take 3 seconds? you only run it 10 times in a day; it's only 5 minutes". But the cost is definitely way more than that.
01HNNWZ0MV43FF · 2h ago
Me, looking at multi-hour CI pipelines, thinking how many little lint warnings I'd fix up if CI could run in like 20 minutes
colton_padden · 3h ago
I was going to say one of the more recent times fast software excited me was with `uv` for Python packaging, and then I saw that op had a link to Charlie Marsh in the footnote. :)
chamomeal · 3h ago
Only sorta related, but it’s crazy that to me how much our standards have dropped for speed/responsiveness in some areas.
I used to play games on N64 with three friends. I didn’t even have a concept of input lag back then. Control inputs were just instantly respected by the game.
Meanwhile today, if I want to play rocket league with three friends on my Xbox series S (the latest gen, but the less powerful version), I have to deal with VERY noticeable input lag. Like maybe a quarter of a second. It’s pretty much unplayable.
Aurornis · 3h ago
> I have to deal with VERY noticeable input lag. Like maybe a quarter of a second. It’s pretty much unplayable
Your experience is not normal.
If you’re seeing that much lag, the most likely explanation is your display. Many TVs have high latency for various processing steps that doesn’t matter when you’re watching a movie or TV, but becomes painful when you’re playing games.
fouronnes3 · 3h ago
This does not undermine chamomeal's argument. The whole point is that back in the N64 days, they could not possibly have had that experience. There was no way to even make it happen. The fact that today it's a real possibility when you've done nothing obviously wrong is a definite failure.
edwcross · 2h ago
TVs back then supported a given standard (NTSC, PAL) and a lower resolution. CRTs couldn't "buffer" the image. Several aspects made it so that "cheating" was not possible.
It was either fast, or nothing. Image quality suffered, but speed was not a parameter.
With LCDs, lag became a trade-off parameter. Technology enabled something to become worse, so economically it was bound to happen.
jonny_eh · 3h ago
Luckily newer TVs and console can negotiate a low-latency mode automatically. It's called ALLM (Auto-Low Latency Mode).
chamomeal · 2h ago
It's possible, but it seems to specifically be a Rocket League on Xbox Series S problem, not a display problem. Other games run totally fine on the same display with no lag!
izzydata · 3h ago
That may be an issue of going from a CRT tv to an LCD tv. As far as I am aware there was no software manipulation of the video input on a CRT. It just took the input and displayed it on the screen in the only way it could. Newer tvs have all kinds of settings to alter the video which takes processing time. They also typically have a game mode to turn off as much of it as it will allow.
abdullahkhalids · 3h ago
Why should the user care whether the lag is introduced by the software in the controller, the software in the gaming console, or the software in the TV?
The lag is due to some software. So the problem is with how software engineering as a field functions.
PaulHoule · 3h ago
I hear it claimed that you're only supposed to enable game mode for competitive multiplayer games -- but I've found that many single player games like Sword Art Online: Fatal Bullet are unplayable without game mode enabled.
It could be my unusual nervous system. I'm really good at rhythm games, often clearing a level on my first try and amazing friends who can beat me at other genres. But when I was playing League of Legends, which isn't very twitchy, it seemed like I would just get hit and there was nothing I could do about it when I played on a "gaming" laptop but found I could succeed at the game when I hooked up an external monitor. I ran a clock and took pictures showing that the external monitor was 30ms faster than the built-in monitor.
Or how channel surfing now requires a 1-2 second latency per channel, versus the way it was seemingly instant from the invention of television through the early 1990s.
tobyhinloopen · 3h ago
How about you enable game mode on the TV you're using
chamomeal · 2h ago
Game mode is on! The input lag is not with the display. Other games run fine.
zavg · 4h ago
Pavel Durov (founder of Telegram) totally nailed this concept.
He pays special attention to the speed of the application. The Russian social network VK worked blazingly fast. The same goes for Telegram.
I always noticed it but not many people verbalized it explicitly.
But I am pretty sure that people realize it subconsciously and it affects user behaviour metrics positively.
dominicq · 3h ago
Telegram is pretty slow, both the web interface and the Android app. For example, reactions to a message always take a long time to load (both when leaving one, and when looking at one). Just give me emoji, I don't need your animated emoji!
hu3 · 1h ago
Can't agree.
These operations are near instant for me on telegram mobile and desktop.
It's the fastest IM app for me by an order of magnitude.
bravesoul2 · 1h ago
I find that at most jobs I've had, "fast" only becomes a big issue once things are too slow. Or too expensive.
It's a retroactively fixed thing. Like imagine forgetting to make a UI, shipping just an API to a customer, then thinking "oh shit, they need a UI, they're not programmers", and only noticing from customer complaints. That is how performance is often treated.
This is probably because performance problems usually require load or unusual traffic patterns, which require sales, which require demos, which don't require performance tuning as there is only one user!
If you want to speed your web service up, the first thing is to invest time, and maybe money, in really good observability. It should be easy for anyone on the team to find a log, see what the CPU is at, etc. Then set up proxy metrics around the speed you care about, talk about them every week, and take action.
Proxy metrics means you likely can't (well, probably shouldn't) check every minute the speed at which Harold sums his spreadsheet, but you can check the latency of the major calls involved. If something is slow but the metrics look good, then profiling might be needed.
Sometimes there is an easy speed up. Sometimes you need a new architecture! But at least you know what's happening.
esafak · 4h ago
A lot of people have low expectations from having to use shit products at work, and generally not being discerning.
Speed is what made Google, which was a consumer product at the time. (I say this because it matters more in consumer products.)
raincole · 25m ago
> Fast is relative
I once used Notion at work and for personal note taking. I'd never felt it was "slow." Until later I moved my notes to Obsidian. Now when I have to use Notion at my job it feels sluggish.
Night_Thastus · 2h ago
I wish I could live in a world of fast.
C++ with no forward decls, no clang to give data about why the compile is taking so long. 20 minute compiles. The only git tool I like (git-cola) is written in Python and slows to a crawl. gitk takes a good minute just to start up. My only environments are MSYS, which is slow due to Windows, and WSL, which isn't slow but can't do DPI scaling, so I squint at everything.
pyman · 2h ago
The web is fast.
> Rarely in software does anyone ask for “fast.”
I don't think I can relate this article to what actually happened to the web. It went from being an unusable 3D platform to a usable 2D one. The web exploded with creativity and was out of control thanks to Flash and Director, but speeds were unacceptable. Once Apple stopped supporting it, the web became boring, and fast, very fast. A lot of time and money went into optimising the platform.
So the article is probably more about LLMs being the new Flash. I know that sounds like blasphemy, but they're both slow and melt CPUs.
thfuran · 2h ago
The web might be fast compared to in 2005 but only if you don't normalize for average CPU performance and bandwidth. Websites that are mostly text often still manage to take remarkable amounts of time to finish rendering and stop moving things around.
taylorallred · 1h ago
What's amazing to me is that often all it takes to go fast is to keep things simple. JBlow once said that software should be treated like a rocket ship: every thing you add contributes weight.
OsrsNeedsf2P · 3h ago
This is one of the reasons I switched from Unity to Godot. There is something about Godot loading fast and compiling fast that makes it so much more immersive to spend hours chugging away at your projects.
PaulHoule · 3h ago
My son told me to not develop a game with Unity because, as a player, he thought Unity games took way too long to load.
01HNNWZ0MV43FF · 2h ago
There might be some selection bias - Experienced programmers who care a lot about engine technology are more likely to use Godot and also optimize their load times. Unity includes a lot of first-time programmers who just want to get something shipped
ksec · 4h ago
>> Rarely in software does anyone ask for “fast.”
I have been asking about Latency-Free Computing for a very long time. Every Computing now is slow.
I once accidentally blocked TCP on my laptop and found out "google.com" runs on UDP, it was a nice surprise.
baba is fast.
I sometimes get calls like "You used to manage a server 6 years ago and we have an issue now" so I always tell the other person "type 'alias' and read me the output", this is how I can tell if this is really a server I used to work on.
fast is my copilot.
Jtsummers · 4h ago
Specifically HTTP/3 and QUIC (which came out of Google):
They don't require you to use QUIC to access Google, but it is one of the options. If you use a non-supporting browser (Safari prior to 2023, unless you enabled it), you'd access it with a standard TCP-based HTTP connection.
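You can see which protocol your browser actually negotiated from the devtools console; `nextHopProtocol` reports "h3" for HTTP/3 over QUIC:

```ts
// Log the negotiated protocol for each resource on the current page.
for (const entry of performance.getEntriesByType("resource")) {
  const { nextHopProtocol, name } = entry as PerformanceResourceTiming;
  console.log(nextHopProtocol, name); // "h3" = HTTP/3 (QUIC), "h2" = HTTP/2
}
```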
quesera · 2h ago
> this is how I can tell if this is really a server I used to work on
Hm, shell environment is fairly high on the list of things I'd expect the next person to change, even assuming no operational or functional changes to a server.
And of course they'd be using a different user account anyway.
trhaynes · 1h ago
Interesting that onkernel.com intentionally animates and slows down the loading of the web interface, making it harder to scroll and scan the site. Irony or good design?
stevage · 1h ago
> When was the last time you used airplane WiFi and actually got a lot done?
The greatest day of productivity in my life was a flight from Melbourne to New York via LAX. No wifi on either flight, but a bit in transit. Downloaded everything I needed in advance. Coded like a mofo for like 16 hours.
Fast internet is great for distractions.
cheema33 · 27m ago
Same here. I am more productive on a plane than anywhere else. And for the reasons you describe.
bitpush · 2h ago
> Superhuman's sub-100ms rule—plus their focus on keyboard shortcuts—changed the email game in a way that no one's been able to replicate, let alone beat.
I often hear this sort of thing "Facebook was a success using PHP therefore language choice isn't important" or in this case "superhuman made their product fast and they still failed so speed isn't important".
It's obviously wrong if you think about it for more than a second. All it shows is that speed isn't the only thing that matters but who was claiming that?
Speed is important. Being slow doesn't guarantee failure. Being fast doesn't guarantee success. It definitely helps though!
brailsafe · 1h ago
> Instagram usually works pretty well—Facebook knows how important it is to be fast.
Fast is indeed magical, that's why I exclusively browse Instagram from the website; it's so slow I dip out before they get me with their slot machine.
brap · 53m ago
Yep. I often choose LLM apps not because of how great the model is, but how snappy the UI feels. Similarly I might choose the more lightweight models because they’re faster.
DrewADesign · 1h ago
Speed is an important usability consideration that gets disproportionate focus among many developers because it’s one of the few they can directly address with their standard professional toolkit. I think it’s a combo of a Hammer/Nail thing, and developers loving the challenge of code performance golf. (Though I never loved stuff like that, which is in part why I’m not a developer anymore.)
Figuring out the best way to present logically complex content, controls and state to users that don’t wrangle complexity for a living is difficult and specialized work, and for most users, it’s much more important than snappy click response.
Both of these things obviously exist on a spectrum and are certainly not mutually exclusive. (ILY McMaster-Carr) But projects rarely have enough time for either complete working features, or great thoughtful usability, and if you’re weighing the relative importance of these two factors, consider your audience, their goals, and their technical savvy before assuming good performance can make up for a suboptimal user workflow.
Thanks for enduring my indulgent rant.
foxrider · 1h ago
Adding to this - that's why I insist all my students learn touch-typing, for at least 10 minutes per lesson. It really changes how you interact with your computer: being able to type as fast as you can think changes your approach to automating things in a quick script or doing some bash-fu. A very underrated skill in today's world.
pclowes · 4h ago
Highly Agree.
Speed of all kinds is incredibly important. Give me all of it.
- Fast developers
- Fast test suites
- Fast feedback loops
- Fast experimentation
Someone (Napoleon?) is credited with saying "quantity has a quality all its own", in software it is "velocity has a quality all its own".
As long as there is some rigor and you aren't shipping complete slop, consistently moving very quickly fixes almost every other deficiency.
- It makes engineering mistakes cheaper (just fix them fast)
- It make product experimentation easy (we can test this fast and revert if needed)
- It makes developers ramp up quickly (shipping code increases confidence and knowledge)
- It actually makes rigor more feasible, as the most effective rigorous processes are lightweight and built-in.
Every line of code is a liability, the system that enables it to change rapidly is the asset.
Side note: every time I encounter JVM test startup lag I think someday I am going to die and will have spent time doing _this_.
b_e_n_t_o_n · 2h ago
Another benefit of speed in this regard is that it lets you slow down a bit more and appreciate other things in life.
According to Wikiquotes, this is a common misattribution, and the first known record is Ruth M. Davis from 1978, who attributes it to Lenin: https://en.wikiquote.org/wiki/Quantity
swinglock · 3h ago
Speed is the most fundamental feature. Otherwise we could do everything by hand and need no computers.
codingclaws · 3h ago
Fast and light weight. That's why I love vim/cli over IDEs.
Btw, cool site design.
chaosprint · 36m ago
> Speed conquers all in martial arts.
burnte · 2h ago
> Rarely in software does anyone ask for “fast.”
That's because it's understood that things should work as quickly as possible, and not be slow on purpose (generally). No one asks that modern language be used in the UI as opposed to Sanskrit or hieroglyphs, because it's understood.
That's why we all code with voice now, because it's faster, right? Right?
crawshaw · 3h ago
This is a great blog post. I have seen internal studies at software companies that demonstrate this, i.e. reducing UI latency encourages more software use by users. (Though a quick search suggests none are published.)
jpb0104 · 3h ago
Agree. One of my favorite tropes to product and leadership is that “performance is a feature”.
monkeydust · 2h ago
Trading software by its nature has to be fast: fast to display new information and fast to act on it per the user's intent.
I'll apply this to the next interfaces I'm going to build.
danielmarkbruce · 2h ago
Google talked about this for years.
betterhealth12 · 1h ago
Linear vs JIRA described in 1 word
beepbooptheory · 4h ago
> Asking an LLM to research for 6 minutes is already 10000x faster than asking for a report that used to take days.
Assuming, like, three days, 6 minutes is 720x faster. 10000x faster than 6 minutes is like a month and a half!
pron · 1h ago
More like 300x if you count working hours. Although I've yet to see anything that would take a person a few days (assuming the task is worth spending a few days on) and that an LLM could do in six minutes, even with human assistance.
shayief · 1h ago
I got so tired of waiting for GitHub Pages taking ~600ms to load (uncached) that I decided to build my own Git hosting service with Go and HTMX.
And there is no page cache. Sub-100ms is just a completely different experience.
constantcrying · 3h ago
I think that people generally underestimate what even small increases in the interaction time between human and machine cost. Interacting with sluggish software is exhausting, clicking a button and being left uncertain whether it did anything is tedious and software being fast is something you can feel.
Windows is the worst offender here; the entire desktop is sluggish even though there is no computational task which justifies those delays.
PaulHoule · 3h ago
There's that wondering if the UI input was registered at all and the mental effort to suppress clicking again when you expect a delayed response.
globular-toast · 1h ago
Linus Torvalds said exactly that in a talk about git years ago. It's crazy to think back how people used to use version control before git. Git totally changed how you can work by being fast.
chaps · 2h ago
This is such an important principle to me that I've spent a lot of effort developing tooling and mental models to help with it. Biggest catalyst? Being on-call and being woken up at 3am when you're still waking up... in that state, you really don't want things to go slowly. You just want to fix the damn thing and get back to sleep.
For example, looking up command flags within man pages is slooooow and becomes agonizingly frustrating when you're waking up and people are waiting for you so that they can also go back to sleep. But if you've spent the time to learn those flags beforehand, you'll be able to get back to sleep sooner.
devmor · 4h ago
> Rarely in software does anyone ask for “fast.”
> But software that's fast changes behavior.
I wonder if the author stopped to consider why these opposing points make sense, instead of ignoring one to justify the other.
My opinion is that "fast" only becomes a boon when features are robust and reliable. If you prioritize going twice as "fast" over rooting out the problems, you get problems at twice the rate too.
christophilus · 2h ago
Software that goes fast changes human behavior. It seems you’re thinking it changes the software’s behavior. Not sure. Either that, or I don’t follow your comment at all.
devmor · 21m ago
I'm not really sure how to rephrase it, so I can try an example.
Lets say that the author has a machine that is a self contained assembly line, it produces cans of soup. However, the machine has a problem - every few cans of soup, one can comes out sideways and breaks the machine temporarily, making them stop and unjam it.
The author suggests to double the speed of the machine without solving that problem, giving them their soup cans twice as fast, requiring that they unjam it twice as often as well.
I believe that (with situational exceptions) this is a bad approach, and I would address the problem causing the machine to get jammed before I doubled the speed of the machine.
That being said, this is a very simplistic view of the situation, in a real situation either of these solutions has a number of variables that may make it preferable over the other - my gripe with the piece is that the author suggests the "faster" approach is a good default that is "simple", "magical" and "fun". I believe it is shortsighted, causes compounding problems the more it is applied in sequence, and is only "magical" if you bury your head in the sand and tell yourself the problems are for someone else to figure out - which is exactly what the author handwaves away at the end, with a nebulous allusion to some future date when these tools that we should accept because they are fast will eventually be made good by some unknown person.
bithive123 · 4h ago
"The only way to go fast, is to go well." ― Robert C. Martin
agcat · 3h ago
Beautiful software is fast! Love the blog.
SideburnsOfDoom · 3h ago
> Instant settle felt surprising in a world where bank transfers usually take days.
Yeah, that's not "a world" it's just the USA. Parts of the world - EU, UK etc have already moved on from that. Don't assume that the USA is leading edge in all things.
christophilus · 2h ago
You’re not wrong. The US banks all suck. I’m willing to bet that every single one of them suck, though I’ve only tried a handful.
aprilthird2021 · 3h ago
I feel like this should have some kind of "promotional" or "ad" label. I agree wholeheartedly with the words here, but I also note that the author is selling the fast developer tools she laments the dearth of: https://www.catherinejue.com/kernel
Again, no ill will intended at all, but I think it straddles the promotional angle here a bit and maybe people weren't aware
devmor · 16m ago
The tool in question is also a fairly unethical scraping and botting tool, advertising defrauding services to bypass captchas and scrape websites against the owners' wishes.
andrewmcwatters · 4h ago
Conversely, we have a whole generation of entry-level developers who think 250ms is "fast," when doing on-device processing work on computers that have dozens of cores.
keybored · 4h ago
> But software that's fast changes behavior.
(Throw tomatoes now but) Torvalds said the same thing about Git in his Google talk.
rustybolt · 4h ago
> Rarely in software does anyone ask for “fast.”
Are you kidding me? My product owner and management ask me all the time to implement features "fast".
jasonjmcghee · 4h ago
Not sure if this is sardonic obstinacy... But assuming face value - that's not what the statement is about.
I disagree with the statement too, as people definitely ask for UX / products to be "snappy", but this isn't about speed of development.
PaulHoule · 4h ago
I remember the time they cracked down on me because I had entered 90%+ of the tickets in the ticket system (the product manager didn't write tickets) and told me that "every ticket has to explain why it is good for the end user".
I put in a ticket to speed up the 40-minute build and was asked "How does this benefit the end user?" I said, "The end user would have had the product six months ago if the build was faster."
rustybolt · 2h ago
Yeah, this was an attempt at humor. But it is quite easy to misunderstand the title.
dkarl · 3h ago
These days metrics are so ubiquitous that many internal back-end systems have SLAs for tail latencies as well.
65 · 3h ago
Premium Silicon Valley slop right here.
jrm4 · 4h ago
Ew.
Genuinely hard to read this and think anything other than, "oh look, another justification for low-quality software."
jebarker · 4h ago
I think you misunderstood the use of "fast" in the article? They mean that the software should run fast, not be produced fast necessarily. In my experience software that truly runs fast is usually much higher quality.
With IRC it's basically part of the task, but on every forum I read, it's rare that I ever consider who's saying what.
I asked an agent to write an http endpoint at the end of the work day when I had just 30 min left -- my first thought was "it took 10 minutes to do what would have taken a day", but then I thought, "maybe it was 20 minutes for 4 hours worth of work". The next day I looked at it and found the logic was convoluted, it tried to write good error handling but didn't succeed. I went back and forth and ultimately wound up recoding a lot of stuff manually. In 5 hours I had it done for real, certainly with a better test suite than I would have written on my own and probably better error handling.
See https://www.reddit.com/r/programming/comments/1lxh8ip/study_...
I think coding assistants would end up being more helpful if, instead of trying to do what they're asked, they would come back with questions that help us (or force us) to think. I wonder if a context prompt that says, "when I ask you to do something, assume I haven't thought the problem through, and before doing anything, ask me leading questions," would help.
I think Leslie Lamport once said that the biggest resistance to using TLA+ - a language that helps you, and forces you to think - is because that's the last thing programmers want to do.
TypeScript can produce astonishingly complex error messages when types don't match up, so I went through a couple of rounds of showing the errors to the assistant and getting suggested fixes that were wrong. But I got some ideas, did more experiments, and over the course of two days (making desired changes along the way) figured out what was going wrong and cleaned up the use of types. In the end I was really happy with my code: when I saw a red squiggle I usually knew right away what was wrong, and if I did ask the assistant, it would also get it right right away.
I think there's no way I would have understood what was going on without experimenting.
I think the results are excellent, but I can hit a lot of dead ends, on the way. I just spent several days, trying out all sorts of approaches to PassKeys/WebAuthn. I finally settled on an approach that I think will work great.
I have found that the old-fashioned “measure twice, cut once” approach is highly destructive. It was how I was trained, so walking away from it was scary.
A prompt like " I want to make this change in the code where any logic deals with XXX. To be/do XXX instead/additionally/somelogicchange/whatever"
It has been pretty decent at these types of changes and saves time of poking though and finding all the places I would have updated manually in a way that find/replace never could. Though I've never tried this on a huge code base.
If I reached a point where I would find this helpful, I would take this as a sign that I have structured the code wrongly.
> cursor agent does it just fine in the background
That's a very broad definition of "fine". And you still need to review the diff and check the surrounding context of each chunk. I don't see the improvement in metrics like productivity and cognitive load, especially if you need to do several rounds.
https://youtu.be/f2mQXNnChwc?t=2135
https://youtu.be/zxS3zXwV0PU
And for Vim
https://youtu.be/wOdL2T4hANk
Standard search and replace in other tools pales in comparison.
I was writing some URL canonicalization logic yesterday. Because we rolled this out as an MVP, customers put URLs in all sorts of ways and we stored them in the DB. My initial pass at the logic failed on some cases. Luckily, URL canonicalization is pretty trivially testable. So I took the URLs of our most-used customers from the DB, sent them to Claude, and told it to come up with the "minimum spanning test cases" that cover this behavior. This took maybe 5-10 sec. I then told Zed's agent mode using Opus to make me a test file and use these test cases to call my function. I audited the test cases and ended up removing some silly ones. I iterated on my logic and that was that. Definitely faster than having to do this myself.
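For flavor, here's roughly what that looks like - a minimal TypeScript sketch where the canonicalization rules, function name, and test cases are all hypothetical, not the commenter's actual code:

    // Hypothetical canonicalizer: lowercases the host, strips "www.",
    // drops fragments and trailing slashes. (Node's global URL class
    // already normalizes default ports and percent-encoding.)
    function canonicalizeUrl(raw: string): string {
      const url = new URL(raw.includes("://") ? raw : `https://${raw}`);
      const host = url.hostname.toLowerCase().replace(/^www\./, "");
      const path = url.pathname.replace(/\/+$/, "");
      return `${url.protocol}//${host}${path}${url.search}`;
    }

    // "Minimum spanning" cases: one per rule, rather than one per customer.
    const cases: Array<[string, string]> = [
      ["HTTP://WWW.Example.com/", "http://example.com"],
      ["example.com/a/", "https://example.com/a"],
      ["https://example.com:443/a?q=1#frag", "https://example.com/a?q=1"],
    ];

    for (const [input, expected] of cases) {
      console.assert(canonicalizeUrl(input) === expected, `${input} failed`);
    }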
As much as I like agents, I am not convinced the human using them can sit back and get lazy quite yet!
Early in my career as a software engineer, I developed a reputation for speeding things up. This was back in the day when algorithm knowledge was just as important as the ability to examine the output of a compiler, every new Intel processor was met with a ton of anticipation, and Carmack and Abrash were rapidly becoming famous.
Anyway, the 22 year old me unexpectedly gets invited to a customer meeting with a large multinational. I go there not knowing what to expect. Turns out, they were not happy with the speed of our product.
Their VP of whatever said, quoting: "every saved second here adds $1M to our yearly profit". I was absolutely floored. Prior to that moment I couldn't even dream of someone placing a dollar amount on speed, and so directly. Now 20+ years later it still counts as one of the top 5 highlights of my career.
P.S. Mentioning as a reaction to the first sentence in the blog post. But the author is correct when she states that this happens rarely.
P.P.S. There was another engineer in the room, who had the nerve to jokingly ask the VP: "so if we make it execute in 0 seconds, does it mean you're going to make an infinite amount of money?". They didn't laugh, although I thought it was quite funny. Hey, Doug! :)
I don't get it. Wouldn't going from 1 second to 0 seconds add the same amount of money to the yearly profit as going from 2 seconds to 1 second did? Namely, $1M.
Of course the joke was silly. But perhaps I should have provided some context. We were making industrial automation software. This stuff runs in factories. Every saved second shrinks the manufacturing time of a part, increasing the total factory output. When extrapolated to absurd levels, zero time to manufacture means infinite output per factory (sans raw materials).
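One way to square the VP's "$1M per second" with the joke - this model is my assumption, not anything from the meeting: if a part takes \(T\) seconds, output scales like \(1/T\), so the dollar figure is a linearization around the current cycle time.

    \[
    \text{output}(T) \propto \frac{1}{T}, \qquad
    \frac{d}{dT}\,\frac{1}{T} = -\frac{1}{T^{2}},
    \]

Each saved second is worth a roughly constant amount while \(T\) stays near its current value, but the cumulative gain \(\frac{1}{T-\Delta} - \frac{1}{T}\) diverges as \(\Delta \to T\) - which is exactly the joke's "infinite money" limit.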
Geez, life in my opinion is not so serious. It’s okay to say stupid things and not feel bad about it, as long as you are not trying to hurt anyone.
I bet they felt great and immediately forgot about this bad joke.
But I also concur with you that it is good to bring some levity to “serious” conversations!!
They don't explicitly ask for it, but they won't take you seriously if you don't at least pretend to be. "Fast" is assumed. Imagine if Rust had shown up, identical in every other way, but said "However, it is slower than Ruby". Nobody would have given it the time of day. The only reason it was able to gain attention was because it claimed to be "Faster than C++".
Watch HN for a while and you'll start to notice that "fast" is the only feature that is necessary to win over mindshare. It is like moths to a flame as soon as something says it is faster than what came before it.
Your example says it: people will go "this is like X (meaning it does/has the same features as X), but faster", and now people will flock from X to your X+faster thing.
Which tells us nothing about if people would also move to a X+more-features, or a X+nicer-ux, or a X+cheaper, etc., without them being any faster than X or even possibly slower.
Nah. React, for example, only garnered attention because it said "Look how much faster the virtual DOM is!". We could go on all day.
> People want features, people want compatibility
Yes, but under the assumption that it is already built to be as "fast" as possible. "Fast" is assumed. That's why "faster" is such a great marketing trick, as it tunes people into "Hold up. What I'm currently using actually sucks. I'd better reconsider."
"Fast" is deemed important, but it isn't asked for as it is considered inconceivable that you wouldn't always make things "fast". But with that said, keep in mind that the outside user doesn't know what "fast" is until there is something to compare it with. That is how some products can get away with not being "fast" — until something else comes along to show that it needn't be that way.
https://krausest.github.io/js-framework-benchmark/current.ht...
On one hand I like controlled components because there is a single source of truth for the data (a useState()) somewhere in the app, but you are forced to re-render for each keypress. With uncontrolled components on the other hand, there's the possible anarchy of having state in React and in the actual form.
I really like this library
https://react-hook-form.com/
which has a rational answer to the problems that turn up with uncontrolled forms.
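For anyone who hasn't hit this trade-off, a minimal sketch of the two approaches (the component names are made up; react-hook-form's useForm/register API is real):

    import { useState } from "react";
    import { useForm } from "react-hook-form";

    // Controlled: the value lives in React state, so every keypress
    // calls setQuery and re-renders the component.
    function ControlledSearch() {
      const [query, setQuery] = useState("");
      return <input value={query} onChange={(e) => setQuery(e.target.value)} />;
    }

    // Uncontrolled via react-hook-form: the DOM holds the value and the
    // library reads it on submit, so typing triggers no React re-renders.
    function UncontrolledSearch() {
      const { register, handleSubmit } = useForm<{ query: string }>();
      return (
        <form onSubmit={handleSubmit((data) => console.log(data.query))}>
          <input {...register("query")} />
        </form>
      );
    }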
Yes, fast wins people over. And yet we live in a world where the actual experience of every day computing is often slow as molasses.
"Fast" is the feature people always wanted, but absent better information, they have to assume that is what they already got. That is why "fast" marketing works so well. It reveals that what they thought was pretty good actually wasn't. Adding the missing kitchen sink doesn't offer the same emotional reaction.
This is what people are missing. Even those "slow" apps are faster than their alternatives. People demand and seek out "fast", and I think the OP article misses this.
Even the "slow" applications are faster than their alternatives or have an edge in terms of speed for why people use them. In other words, people here say "well wait a second, I see people using slow apps all the time! People don't care about speed!", without realizing that the user has already optimized for speed for their use case. Maybe they use app A which is 50% as fast as app B, but app A is available on their toolbar right now, and to even know that app B exists and to install it and learn how to use it would require numerous hours of ramp up time. If the user was presented with app A and app B side by side, all things equal, they will choose B every time. There's proficiency and familiarity; if B is only 5% faster than A, but switching to B has an upfront cost in days to able to utilize that speed, well that is a hidden speed cost and why the user will choose A until B makes it worth it.
Speed is almost always the universal characteristic people select for, all things equal. Just because something faster exists, and it's niche, and hard to use (not equal for comparison to the common "slow" option people are familiar with), it doesn't mean that people reject speed, they just don't want to spend time learning the new thing, because it is _slower_ to learn how to use the new thing at first.
We don't have to assume. We know that JavaScript is slow in many cases, that shipping more bundles instead of less will decrease performance, and that with regard to the amount of content served generally less is more.
Whether the amount of baggage every web app seems to come with these days is "necessary" or not is subjective, but I would tend to agree that many developers are ignorant of different methods or dislike the idea of deviating from the implied norms.
Fast is _absolutely not_ the only thing we care about. Not even top 5. We are addicted to _convenience_.
Considering the current state of the Web and user application development, I tend to agree with regard to its developers, but HN seems to still abide by other principles.
[1] https://speechischeap.com
S3 is “slow” at the level of a single request. It’s fast at the level of making as many requests as needed in parallel.
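To make that concrete, a minimal sketch with the AWS SDK for JavaScript v3 (the region, bucket, and keys are placeholder assumptions):

    import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

    const s3 = new S3Client({ region: "us-east-1" });

    // A single GET carries tens of ms of latency, but N GETs issued in
    // parallel take roughly the wall-clock time of one, so aggregate
    // throughput is high.
    async function fetchAll(bucket: string, keys: string[]): Promise<string[]> {
      return Promise.all(
        keys.map(async (key) => {
          const res = await s3.send(
            new GetObjectCommand({ Bucket: bucket, Key: key })
          );
          return res.Body!.transformToString();
        })
      );
    }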
Being “fast” is sometimes critical, and often aesthetic.
https://geppetto.app
I contributed "whisperfile" as a result of this work:
* https://github.com/Mozilla-Ocho/llamafile/tree/main/whisper....
* https://github.com/cjpais/whisperfile
If you ever want to chat about making transcription virtually free, or at least very cheap, for everyone, let me know. I've been working on various projects related to it for a while, including an open source/cross-platform superwhisper alternative: https://handy.computer
"This is the largest reason why in-place upgrades to the U.S. financial system are slow. Coordinating the Faster ACH rollout took years, and the community bank lobby was loudly in favor of delaying it, to avoid disadvantaging themselves competitively versus banks with more capability to write software (and otherwise adapt operationally to the challenges same-day ACH posed)."
From the great blog Bits About Money: https://www.bitsaboutmoney.com/archive/community-banking-and...
This is the same in Switzerland. If you request an IBAN transfer, it's never instant. The solution there for fast payments is called TWINT, which works at almost any POS terminal (you take a picture of the displayed QR code).
I think BACS is similarly "slow" due to the settlement process.
[0] https://en.m.wikipedia.org/wiki/Faster_Payment_System_(Unite...
But the faster payments ceiling is large enough that buying a house falls under the limit.
https://real-timepayments.com/Banks-Real-Time-Payments.html
Prior to that, you could get instant transfers but it came with a small fee because they were routed through credit card networks. The credit card networks took a fee but credit card transactions also have different guarantees and reversibility (e.g. costing the bank more in cases of fraud)
Which means nobody can send me money.
FedNow on the backend is supported by fewer banks than Zelle is, which is probably why hardly any banks expose a front-end for it.
The bank had an opportunity to notify me precisely because ACH is not real-time. And I had an opportunity to fix it because wire transfers are almost real-time (they finish in minutes, not days). I appreciate that when companies pull money from my account I get days of notice, but if I need to move money quickly I can do that too.
You don't need slow transfers to get an opportunity to satisfy automatic payments. I don't know how it works, but in the UK, direct debits (an automatic "take money from my account for bills" system) give the bank a couple of days' notice, so my banking app warns me if I don't have enough money. Bank transfers are still instant.
I have found this true for myself as well. I changed back over to Go from Rust mostly for the iteration speed benefits. I would replace "fast" with "quick", however. It isn't so much that I think about raw throughput as "perceived speed". That is why things like input latency matter in editors, etc. If something "feels fast" (ie Go compiles), we often don't even feel the need to measure. Likewise, when things "feel slow" (ie Java startup), we just don't enjoy using them, even if in some ways they actually are fast (like Java throughput).
In general, I like cargo a lot better than the Go tooling, but I do wish the Rust stdlib was a bit more "batteries included".
Almost everywhere I’ve worked, user-facing speed has been one of the highest priorities. From the smallest boring startup to the multi billion dollar company.
At companies that had us target metrics, speed and latency was always a metric.
I don’t think my experience has been all that unique. In fact, I’d be more surprised if I joined a company and they didn’t care about how responsive the product felt, how fast the pages loaded, and any weird lags that popped up.
First, efficient code is going to use less electricity, and thus, fewer resources will need to be consumed.
Second, efficient code means you don't need to be constantly upgrading your hardware.
[1]: https://en.wikipedia.org/wiki/Rebound_effect_(conservation)
[2]: https://en.wikipedia.org/wiki/Jevons_paradox
For example, if you're running experiments in one big batch overnight, making that faster doesn't seem very helpful. But with a big enough improvement, you can now run several batches of experiments during the day, which is much more productive.
I used to play games on N64 with three friends. I didn’t even have a concept of input lag back then. Control inputs were just instantly respected by the game.
Meanwhile today, if I want to play rocket league with three friends on my Xbox series S (the latest gen, but the less powerful version), I have to deal with VERY noticeable input lag. Like maybe a quarter of a second. It’s pretty much unplayable.
Your experience is not normal.
If you’re seeing that much lag, the most likely explanation is your display. Many TVs have high latency for various processing steps that doesn’t matter when you’re watching a movie or TV, but becomes painful when you’re playing games.
It was either fast, or nothing. Image quality suffered, but speed was not a parameter.
With LCDs, lag became a trade-off parameter. Technology enabled something to become worse, so economically it was bound to happen.
The lag is due to some software. So the problem is with how software engineering as a field functions.
It could be my unusual nervous system. I'm really good at rhythm games, often clearing a level on my first try and amazing friends who can beat me at other genres. But when I was playing League of Legends, which isn't very twitchy, it seemed like I would just get hit with nothing I could do about it when I played on a "gaming" laptop, yet I could succeed at the game when I hooked up an external monitor. I ran a clock and took pictures showing that the external monitor was 30ms faster than the built-in monitor.
He pays special attention to the speed of applications. The Russian social network VK worked blazingly fast. The same goes for Telegram.
I always noticed it but not many people verbalized it explicitly.
But I am pretty sure that people realize it subconsciously and it affects user behaviour metrics positively.
These operations are near instant for me on telegram mobile and desktop.
It's the fastest IM app for me by an order of magnitude.
It's a retroactively fixed thing. Imagine forgetting to make a UI, shipping just an API to a customer, then thinking "oh shit, they need a UI, they're not programmers" - and only noticing from customer complaints. That is how performance is often treated.
This is probably because performance problems usually require load or unusual traffic patterns, which require sales, which require demos, which don't require performance tuning, as there is only one user!
If you want to speed your web service up, the first thing is to invest time, and maybe money, in really good observability. It should be easy for anyone on the team to find a log, see what the CPU is at, etc. Then set up proxy metrics around the speed you care about, talk about them every week, and take action.
Proxy metrics means you likely can't (well, probably shouldn't) check the speed at which Harold can sum his spreadsheet every minute, but you can check the latency of the major calls involved. If something is slow but the metrics look good, then profiling might be needed.
Sometimes there is an easy speed-up. Sometimes you need a new architecture! But at least you know what's happening.
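As one possible starting point - a sketch, not a prescription - here's a per-route latency proxy metric as Express middleware; a real setup would export to Prometheus or similar rather than logging:

    import express from "express";

    const app = express();

    // Record how long each request takes; this is the "proxy metric" for
    // the user-visible speed you actually care about.
    app.use((req, res, next) => {
      const start = process.hrtime.bigint();
      res.on("finish", () => {
        const ms = Number(process.hrtime.bigint() - start) / 1e6;
        console.log(`${req.method} ${req.path} ${res.statusCode} ${ms.toFixed(1)}ms`);
      });
      next();
    });

    app.get("/health", (_req, res) => {
      res.send("ok");
    });
    app.listen(3000);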
Speed is what made Google, which was a consumer product at the time. (I say this because it matters more in consumer products.)
I once used Notion at work and for personal note taking. I'd never felt it was "slow." Until later I moved my notes to Obsidian. Now when I have to use Notion at my job it feels sluggish.
C++ with no forward decls, and no clang to give data about why the compile is taking so long. 20-minute compiles. The only git tool I like (git-cola) is written in Python and slows to a crawl. gitk takes a good minute just to start up. The only environments are MSYS, which is slow due to Windows, and WSL, which isn't slow but can't do DPI scaling, so I squint at everything.
> Rarely in software does anyone ask for “fast.”
I don't think I can relate this article to what actually happened to the web. It went from being an unusable 3D platform to a usable 2D one. The web exploded with creativity and was out of control thanks to Flash and Director, but speeds were unacceptable. Once Apple stopped supporting it, the web became boring, and fast, very fast. A lot of time and money went into optimising the platform.
So the article is probably more about LLMs being the new Flash. I know that sounds like blasphemy, but they're both slow and melt CPUs.
I have been asking about Latency-Free Computing for a very long time. All computing now is slow.
baba is fast.
I sometimes get calls like "You used to manage a server 6 years ago and we have an issue now" so I always tell the other person "type 'alias' and read me the output", this is how I can tell if this is really a server I used to work on.
fast is my copilot.
https://en.wikipedia.org/wiki/HTTP/3
https://en.wikipedia.org/wiki/QUIC
They don't require you to use QUIC to access Google, but it is one of the options. If you use a non-supporting browser (Safari prior to 2023, unless you enabled it), you'd access it with a standard TCP-based HTTP connection.
Hm, shell environment is fairly high on the list of things I'd expect the next person to change, even assuming no operational or functional changes to a server.
And of course they'd be using a different user account anyway.
The greatest day of productivity in my life was a flight from Melbourne to New York via LAX. No wifi on either flight, but a bit in transit. Downloaded everything I needed in advance. Coded like a mofo for like 16 hours.
Fast internet is great for distractions.
https://blog.superhuman.com/superhuman-is-being-acquired-by-...
Being fast helps, but is rarely a product.
It's obviously wrong if you think about it for more than a second. All it shows is that speed isn't the only thing that matters - but who was claiming that?
Speed is important. Being slow doesn't guarantee failure. Being fast doesn't guarantee success. It definitely helps though!
Fast is indeed magical, that's why I exclusively browse Instagram from the website; it's so slow I dip out before they get me with their slot machine.
Figuring out the best way to present logically complex content, controls and state to users that don’t wrangle complexity for a living is difficult and specialized work, and for most users, it’s much more important than snappy click response.
Both of these things obviously exist on a spectrum and are certainly not mutually exclusive. (ILY McMaster-Carr) But projects rarely have enough time for either complete working features, or great thoughtful usability, and if you’re weighing the relative importance of these two factors, consider your audience, their goals, and their technical savvy before assuming good performance can make up for a suboptimal user workflow.
Thanks for enduring my indulgent rant.
Speed of all kinds is incredibly important. Give me all of it.
- Fast developers
- Fast test suites
- Fast feedback loops
- Fast experimentation
Someone (Napoleon?) is credited with saying "quantity has a quality all its own"; in software it is "velocity has a quality all its own".
As long as there is some rigor and you aren't shipping complete slop, consistently moving very quickly fixes almost every other deficiency.
- It makes engineering mistakes cheaper (just fix them fast)
- It make product experimentation easy (we can test this fast and revert if needed)
- It makes developers ramp up quickly (shipping code increases confidence and knowledge)
- It actually makes rigor more feasible, as the most effective rigorous processes are lightweight and built-in.
Every line of code is a liability, the system that enables it to change rapidly is the asset.
Side note: every time I encounter JVM test startup lag I think someday I am going to die and will have spent time doing _this_.
Joe Stalin, I believe. It's a grim metaphor regarding the USSR's army tactics in WW2.
https://www.goodreads.com/quotes/795954-quantity-has-a-quali...
Btw, cool site design.
That's because it's understood that things should work as quickly as possible, and not be slow on purpose (generally). No one asks that modern language be used in the UI as opposed to Sanskrit or hieroglyphs, because it's understood.
Assuming, like, three days, 6 minutes is a 720x speedup. And for 6 minutes to be a 10000x speedup, the original would have to take about a month and a half!
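Checking the arithmetic (taking "a few days" as three):

    \[
    \frac{3 \times 24 \times 60}{6} = \frac{4320}{6} = 720,
    \qquad
    6\ \text{min} \times 10000 = 60000\ \text{min} \approx 41.7\ \text{days},
    \]

so 6 minutes is a ~720x speedup over three days, and a 10000x speedup down to 6 minutes implies a starting point of about a month and a half.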
I know this is a completely different scale, but compare: [1] https://github.com/git/git [2] https://gitpatch.com/gitpatch/git-demo
And there is no page cache. Sub 100ms is just completely different experience.
Windows is the worst offender here; the entire desktop is sluggish even though there is no computational task that justifies those delays.