It's well known, but this video[1] is a proof-of-concept demonstration from 4 years ago. Casey Muratori called out Microsoft's new Windows Terminal for slow performance, and people argued that it wasn't possible, practical, or maintainable to make a faster terminal, that his claims of "thousands of frames per second" were hyperbolic, and one person said it would be a "PhD-level research project".
In response, Casey spent under a week making a single-threaded, unprofiled, untuned skeleton terminal that handled more Unicode and escape codes than Windows Terminal did at the time, used a Least Recently Used (LRU) glyph cache, ran at 6,000+ fps on a 7th-gen Intel i7 from 2017, and had 10x the throughput of Windows Terminal, all while working within the constraints of the Windows APIs, DirectDraw, etc. And in around five screens of commented, basic, maintainable-looking code.
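The LRU glyph cache idea is straightforward: render each glyph once, reuse the cached bitmap on later frames, and evict the least recently used entry when the cache fills. A minimal sketch of that idea (illustrative Python only, not Casey's actual code):

```python
# Illustrative sketch of an LRU glyph cache -- the general idea only.
from collections import OrderedDict

class GlyphCache:
    def __init__(self, capacity=4096):
        self.capacity = capacity
        self._entries = OrderedDict()  # (codepoint, style) -> rendered bitmap

    def get(self, key, rasterize):
        if key in self._entries:
            self._entries.move_to_end(key)     # mark as most recently used
            return self._entries[key]
        bitmap = rasterize(key)                # slow path: render the glyph once
        self._entries[key] = bitmap
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)  # evict the least recently used
        return bitmap
```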
This video[2] is by Jason Booth, who talks about his experience in game development, with practical examples of changing data layout and C++ code to make it do less work, be more cache friendly, have better memory access patterns, and run orders of magnitude faster without adding much complexity, sometimes even removing it.
Kinda funny but I think LLM-assisted workflows are frequently slow -- that is, if I use the "refactor" features in my IDE it is done in a second, if I ask the faster kind of assistant it comes back in 30 seconds, if I ask the "agentic" kind of assistant it comes back in 15 minutes.
I asked an agent to write an HTTP endpoint at the end of the work day when I had just 30 min left -- my first thought was "it took 10 minutes to do what would have taken a day", but then I thought, "maybe it was 20 minutes for 4 hours' worth of work". The next day I looked at it and found the logic was convoluted; it tried to write good error handling but didn't succeed. I went back and forth and ultimately wound up recoding a lot of stuff manually. In 5 hours I had it done for real, certainly with a better test suite than I would have written on my own and probably better error handling.
As a counter example (re: agents), I routinely delegate simple tasks to Claude Code and get near-perfect results. But I've also had experiences like yours where I ended up wasting more time than saved. I just kept trying with different types of tasks, and narrowed it down to the point where I have a good intuition for what works and what doesn't. The benefit is I can fire off a request on my phone, stick it in my pocket, then do a code review some time later. This process is very low mental overhead for me, so it's a big productivity win.
DHRicoF · 9m ago
The cost is in the context switching. Throw out 3 tasks, and they come back 15, 20, and 30 minutes later. The first is mostly OK; you finish it by hand. The second has some problems, so you ask for a rework. Then the third comes back and, while OK, has some design problems, so you ask for another rework. When the second comes back, you have to remember the original task and what changes you asked for.
SchemaLoad · 6h ago
Sounds like a slot machine. Insert api tokens, get something that's pretty close to right, insert more tokens and hope it works this time.
resonious · 5h ago
Except the tokens you insert have meaning, and some yield better results than others. Not like a slot machine at all, really. Last I checked, those only have 1 possible input, no way to improve your odds.
saagarjha · 4h ago
Ok so it's poker rather than a slot machine
resonious · 3h ago
Yes I accept this analogy!
kmacdough · 3h ago
Not really, it's not a zero-sum game. You're not competing against anything, you're working with something. It's just a tool that takes practice, has some variability and isn't free. Like most things in life. More like buying corn or having friends.
speed_spread · 1h ago
Poker takes practice, has variability and isn't free. In fact it's the only game I know of that's pointlessly boring without money on the table.
LLM workflow is competing with other ways of writing code. DIY, stack overflow, paired, offshored...
TimTheTinker · 53m ago
> pointlessly boring without money on the table.
I bought a bunch of poker chips and taught Texas Hold'em to my kids. We have a fantastic time playing with no money on the line, just winning or losing the game based on who wins all the chips.
speed_spread · 37m ago
Give them enough time and they'll realize they can trade poker chips for other things.
Aeolun · 1h ago
That's fine if your expectations are commensurate.
xyzzy123 · 10h ago
That's cool, how are you integrating your phone with your Claude workflow?
I’ve also seen people assign Claude code issues on GitHub and then use the GitHub mobile app on their phone to get notifications and review PRs.
ChadNauseam · 9h ago
I don't know how to do it with Claude Code, but I was at a beach vacation for the past few days and I was studying French on my phone with a web app that I made. Sometimes I'd notice something bug me, and I used Cursor's "background agents" tool to ask it to make a change. This is essentially just a website where you can type in your request, and they allocate a VM, check out your repository, then run the Cursor LLM agent inside that VM to implement your requested changes, then push it and create a pull request to your repo. Because I have CI/CD set up, I then just merged the change and waited for it to deploy (usually going for a swim in-between).
I realized as I was doing it that I wouldn't be able to tell anyone about it because I would sound like the most obnoxious AI bro ever. But it worked! (For the simple requests I used it on.) The most annoying part was that I had to tell it to run rustfmt every time, because otherwise it would fail CI and I wouldn't be able to merge it. And then it would take forever to install a rust toolchain and figure out how to run clippy and stuff. But it did feel crazy to be able to work on it from the beach. Anyway, I'm apparently not very good at taking vacations, lol
Aeolun · 1h ago
I just SSH into my CC machine from the phone, then use CC.
resonious · 5h ago
My dev environment works perfectly on Termux, and so does Claude Code. So I just run `claude` like normal, and everything is identical to how I do it on desktop.
Edit: clarity
oblio · 4h ago
Do you use it on a phone or on a tablet?
resonious · 4h ago
Phone. One of those foldy ones though so pretty big screen.
cycomanic · 15h ago
I've already written about this several times here. I think the current trend of LLMs chasing benchmark scores is going in the wrong direction, at least for programming tools. In my experience they get it wrong often enough that I always need to check the work. So I end up in a back and forth with the LLM, and because of the slow responses it becomes a really painful process; I could often have done the task faster if I had sat down and thought about it. What I want is an agent that responds immediately (and I mean in subseconds), even if some benchmark score is 60% instead of 80%.
pron · 15h ago
Programmers (and I'm including myself here) often go to great lengths to not think, to the point of working (with or without a coding assistant) for hours in the hope of avoiding one hour of thinking. What's the saying? "An hour of debugging/programming can save you minutes of thinking," or something like that. In the end, we usually find that we need to do the thinking after all.
I think coding assistants would end up being more helpful if, instead of trying to do what they're asked, they would come back with questions that help us (or force us) to think. I wonder if a context prompt that says, "when I ask you to do something, assume I haven't thought the problem through, and before doing anything, ask me leading questions," would help.
I think Leslie Lamport once said that the biggest resistance to using TLA+ - a language that helps you, and forces you to think - is because that's the last thing programmers want to do.
cycomanic · 10h ago
> Programmers (and I'm including myself here) often go to great lengths to not think, to the point of working (with or without a coding assistant) for hours in the hope of avoiding one hour of thinking. What's the saying? "An hour of debugging/programming can save you minutes of thinking," or something like that. In the end, we usually find that we need to do the thinking after all.
This is such a great observation. I'm not quite sure why this is. I'm not a programmer, but a signal-processing/systems engineer/researcher. The weird thing is that it seems to be the process of programming itself that causes the "not-thinking" behaviour. E.g. when I program a simulation and find that I must have a sign error somewhere in my implementation (sometimes you can see this from the results), I end up switching every possible sign around instead of taking pen and paper and comparing theory and implementation. If I do other work, e.g. theory, that's not the case. I suspect we try to avoid the cost of the context switch and try to stay in the "programming flow".
polskibus · 7h ago
This is your brain trying to conserve your energy/time by recollecting/brute-forcing/following known patterns instead of diving into the unknown. Otherwise known as "being lazy" / procrastinating.
nine_k · 7h ago
There is an illusion that the error is tiny and its nature is obvious, so it could be fixed by an instant, effortless tweak. Sometimes it is so (when the compiler complains about a forgotten semicolon), sometimes it may be arbitrarily deeply wrong (even if it manifests just as a reversed sign).
ChrisMarshallNY · 15h ago
I do both. I like to develop designs in my head, and there’s a lot of trial and error.
I think the results are excellent, but I can hit a lot of dead ends, on the way. I just spent several days, trying out all sorts of approaches to PassKeys/WebAuthn. I finally settled on an approach that I think will work great.
I have found that the old-fashioned “measure twice, cut once” approach is highly destructive. It was how I was trained, so walking away from it was scary.
rablackburn · 8h ago
> I have found that the old-fashioned “measure twice, cut once” approach is highly destructive. It was how I was trained, so walking away from it was scary.
To be fair it’s great advice when you’re dealing with atoms.
Mutable patterns of electrons, not so much (:
creamyhorror · 6h ago
> "An hour of debugging/programming can save you minutes of thinking,"
I get what you're referring to here, when it's tunnel-vision debugging. Personally I usually find that coding/writing/editing is thinking for me. I'm manipulating the logic on screen and seeing how to make it make sense, like a math problem.
LLMs help because they immediately think through a problem and start raising questions and points of uncertainty. Once I see those questions in the <think> output, I cancel the stream, think through them, and edit my prompt to answer the questions beforehand. This often causes the LLM's responses to become much faster and shorter, since it doesn't need to agonise over those decisions any more.
PaulHoule · 15h ago
Sometimes thinking and experimenting go together. I had to do some maintenance on some Typescript/yum that I didn't write but had done a little maintenance on before.
Typescript can produce astonishingly complex error messages when types don't match up, so I went through a couple of rounds of showing the errors to the assistant and getting suggestions to fix them that were wrong, but I got some ideas and did more experiments. Over the course of two days (making desired changes along the way) I figured out what was going wrong and cleaned up the use of types such that I was really happy with my code. When I saw a red squiggle I usually knew right away what was wrong, and if I did ask the assistant it would also get it right away.
I think there's no way I would have understood what was going on without experimenting.
xyzzy123 · 10h ago
Agree. Also, LLMs change the balance of plan vs. do for me; sometimes it's cheaper to do & review than to plan up front.
When you can see what goes wrong with the naive plan you then have all the specific context in front of you for making a better plan.
If something is wrong with the implementation then I can ask the agent to then make a plan which avoids the issues / smells I call out. This itself could probably be automated.
The main thing I feel I'm "missing" is, I think it would be helpful if there were easier ways to back up in the conversation such that the state of the working copy was restored also. Basically I want the agent's work to be directly integrated with git such that "turns" are commits and you can branch at any point.
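A rough sketch of what "turns as commits" could look like as a thin wrapper (hypothetical; `run_turn` is a stand-in for whatever actually invokes the agent):

```python
# Hypothetical sketch: commit the working copy after every agent turn so the
# conversation history maps onto git history.
import subprocess

def git(*args):
    subprocess.run(["git", *args], check=True)

def agent_session(prompts, run_turn, branch="agent/session"):
    git("checkout", "-b", branch)
    for i, prompt in enumerate(prompts, start=1):
        run_turn(prompt)                            # agent edits files in the working copy
        git("add", "-A")
        git("commit", "-m", f"agent turn {i}: {prompt[:60]}")
    # Backing up to turn N is then just branching from that turn's commit:
    #   git checkout -b retry <sha-of-turn-N>
```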
pjmlp · 6h ago
I agree with your comment in general, however I would say that in my field the resistance to TLA+ isn't having to think, rather having to code twice without any guarantee that the code actually maps to the theoretical model.
Tools like Lean and Dafny are much more appreciated, as they generate code from the model.
pron · 2h ago
But both Dafny and Lean (which are really hard to put in the same category [1]) are used even less than TLA+, and the problem of formally tying a spec to code exists only when you specify at a level that's much higher than the code, which is what you want most of the time because that's where you get the most bang for your buck. It's a little like saying that the resistance to blueprints is that a rolled blueprint makes a poor hammer.
TLA+ is for when you have a 1MLOC database written in Java or a 100KLOC GC written in C++ and you want to make sure your design doesn't lead to lost data or to memory corruption/leak (or for some easier things, too). You certainly can't do that with Dafny, and while I guess you could do it in Lean (if you're masochistic and have months to spare), it wouldn't be in a way that's verifiably tied to the code.
There is no tool that actually formally ties spec to code in any affordable way and at real software scale, and I think the reason people say they want what doesn't exist is precisely because they want to avoid the thinking that they'll have to do eventually anyway.
[1]: Lean and TLA+ are sort-of similar, but Dafny is something else altogether.
sirwhinesalot · 1h ago
Architectural blueprints are very precise. What gets built is a more detailed form of what is in the blueprint.
That is not the case for the TLA+ spec and your 1MLOC Java Database. You hope with fingers crossed that you've implemented the design, but have you?
I can measure that a physical wall has the same dimensions as specified in the blueprint. How do I know my program follows the TLA+ spec?
I'm not being facetious, I think this is a huge issue. While Dafny might not be the answer we should strive to find a good way to do refinement.
And the thing is, we can do it for hardware! Software should actually be easier, not harder. But software is too much of a wild west.
That problem needs to be solved first.
panarky · 7h ago
> assume I haven't thought the problem through
This is the essence of my workflow.
I dictate rambling, disorganized, convoluted thoughts about a new feature into a text file.
I tell Claude Code or Gemini CLI to read my slop, read the codebase, and write a real functional design doc in Markdown, with a section on open issues and design decisions.
I'll take a quick look at its approach and edit the doc to tweak its approach and answer a few open questions, then I'll tell it to answer the remaining open questions itself and update the doc.
When that's about 90% good, I'll tell the local agent to write a technical design doc to think through data flow, logic, API endpoints and params and test cases.
I'll have it iterate on that a couple more rounds, then tell it to decompose that work into a phased dev plan where each phase is about a week of work, and each task in the phase would be a few hours of work, with phases and tasks sequenced to be testable on their own in frequent small commits.
Then I have the local agent read all of that again, the codebase, the functional design, the technical design, and the entire dev plan so it can build the first phase while keeping future phases in mind.
It's cool because the agent isn't only a good coder, it's also a decent designer and planner too. It can read and write Markdown docs just as well as code and it makes surprisingly good choices on its own.
And I have complete control to alter its direction at any point. When it methodically works through a series of small tasks it's less likely to go off the rails at all, and if it does it's easy to restore to the last commit and run it again.
oblio · 4h ago
1. Shame on you, that doesn't sound like fun vibe coding, at all!
2. Thank you for the detailed explanation, it makes a lot of sense. If AI is really a very junior dev that can move fast and has access to a lot of data, your approach is what I imagine works - and crucially - why there is such a difference in outcomes using it. Because what you're saying is, frankly, a lot of work. Now, based on that work you can probably double your output as a programmer, but considering the many code bases I've seen that have 0 documentation, 0 tests, I think there is a huge chunk of programmers that would never do what you're doing because "it's boring".
3. Can you share maybe an example of this, please:
> and write a real functional design doc in Markdown, with a section on open issues and design decisions.
Great comment, I've favorite'd it!
makeitdouble · 13h ago
In general agreement about the need to think it through, and she should be careful not to praise the other extreme.
> "An hour of debugging/programming can save you minutes of thinking"
The trap so many devs fall into is assuming code behaves the way they think it does. Or believing documentation or seemingly helpful comments. We really want to believe.
People's mental image is more often than not wrong, and debugging tremendously helps bridge the gap.
alfalfasprout · 8h ago
it's funny, I feel like I'm the opposite and it's why I truly hate working with stuff like claude code that constantly wants to jump into implementation. I want to be in the driver's seat fully and think about how to do something thoroughly before doing it. I want the LLM to be, at most, my assistant. Taking on the task of being a rubber duck, doing some quick research for me, etc.
It's definitely possible to adapt these tools to be more useful in that sense... but it definitely feels counter to what the hype bros are trying to push out.
cruffle_duffle · 13h ago
I like that prompt idea, because I hatehatehate when it just starts "doing work". Those things are much better as a sounding board for ideas and for clarifying my thinking than for writing one-shot code.
quarkcarbon279 · 9h ago
World of LLMs or not, development should always strive to be fast. In the LLM world, users should always have control over accuracy vs. speed (though we can try to improve both rather than trading one for the other). For example, at rtrvr.ai we use Gemini Flash as our default and benchmarked on Flash too, at 0.9 min per task, while still yielding top results. That said, I have to accept there are certain web tasks on tail-end sites that need Pro to navigate accurately at this point. This is the limitation of relying on Gemini models straight up; once we move to our own models trained on web trajectories, this hopefully won't be a problem.
If you use off-the-shelf LLMs, you will always be bottlenecked by their speed.
markasoftware · 14h ago
GitHub copilot's inline completions still exist, and are nearly instant!
citizenpaul · 17h ago
The only way I've found that an LLM speeds up my work is as a sort of advanced find-and-replace.
A prompt like " I want to make this change in the code where any logic deals with XXX. To be/do XXX instead/additionally/somelogicchange/whatever"
It has been pretty decent at these types of changes and saves the time of poking through and finding all the places I would have updated manually, in a way that find/replace never could. Though I've never tried this on a huge code base.
zahlman · 14h ago
> A prompt like " I want to make this change in the code where any logic deals with XXX. To be/do XXX instead/additionally/somelogicchange/whatever"
If I reached a point where I would find this helpful, I would take this as a sign that I have structured the code wrongly.
baq · 6h ago
You would be right about the code but probably wrong about the you. I’ve done such requests to clean up code written over the years by dozens of other people copying patterns around because ship was king… until it wasn’t. (They worked quite well, btw.)
rtpg · 6h ago
sometimes you want a cutpoint for a refactor and only that refactor. And it turns out that there is no nice abstraction that is useful beyond that refactor.
skydhash · 16h ago
I suppose you haven't tried emacs grep-mode or vim quickfix? If the change is mechanical, you create a macro and are done in seconds. If it's not, you still get the high-level overview and quick navigation.
citizenpaul · 10h ago
I'm decent at that kind of stuff. However, that's not really what I'm talking about. For instance, today I needed two logic flows: one for data flowing in one direction, then a basically-but-not-quite-reversed version of the same logic for when the data comes back. I was able to write the first version, then tell the LLM
"Now duplicate this code but invert the logic for data flowing in the opposite direction."
I'm simplifying this whole example obviously, but that was the basic task I was working on. It was able to spit out in a few seconds what would have taken me probably more than an hour and at least one tedium headache break. I'm not aware of any pre-LLM way to do something like that.
Or a little while back I was implementing a basic login/auth for a website. I was experimenting with high-output-token LLMs (I'm not sure that's the technical term) and asked it to make a very comprehensive login handler. I had to stop it somewhere in the triple digits of cases and functions. Perhaps not a great "pro" example of LLMs, but even though it was a hilariously over-complex setup, it did give me some ideas I hadn't thought about. I didn't use any of the code though.
It's far from the magic LLM sellers want us to believe in, but it can save time, the same as various emacs/vim tricks can for devs who want to learn them.
kfajdsl · 16h ago
Finding and jumping to all the places is usually easy, but non-trivial changes often require some understanding of the code beyond just line-based regex replace. I could probably spend some time recording a macro that handles all the edge cases, or use some kind of AST-based search and replace, but cursor agent does it just fine in the background.
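For reference, a minimal sketch of the AST-based route using only the Python standard library (illustrative; one reason it's less convenient than it sounds is that `ast.unparse` drops comments and formatting):

```python
# Illustrative sketch of an AST-based rename. Unlike a line-based regex, this
# won't touch matches inside string literals, but ast.unparse discards
# comments and original formatting.
import ast

class RenameIdentifier(ast.NodeTransformer):
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        if node.id == self.old:
            node.id = self.new
        return node

def rename_in_source(source, old, new):
    tree = ast.parse(source)
    tree = RenameIdentifier(old, new).visit(tree)
    return ast.unparse(tree)

print(rename_in_source("total = old_helper(x) + old_helper(y)",
                       "old_helper", "new_helper"))
# -> total = new_helper(x) + new_helper(y)
```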
skydhash · 15h ago
Code structure is simple. Semantics is where it gets tough. So if you have a good understanding of the code (and even when you don't), the overview you get from one of those tools (and the added interactivity) is nice for confirming (or building) an understanding of the actions that need to be taken.
> cursor agent does it just fine in the background
That's for a very broad definition of fine. And you still need to review the diff and check the surrounding context of each chunk. I don't see the improvement in metrics like productivity and cognitive load. Especially if you need to do several rounds.
kfajdsl · 14h ago
You mentioned grep-mode, which to my knowledge is just bringing up a buffer with all the matches for a regex and easily jumping to each point (I use rg.el myself). For the record, this is basically the same thing as VSCode's search tool.
Now, once you have that, to actually make edits, you have to record a macro to apply at each point or just manually do the edit yourself, no? I don't pretend LLMs are perfect, but I certainly think using one is a much better experience for this kind of refactoring than those two options.
skydhash · 13h ago
Maybe it's my personal workflow, but I either have sweeping changes (variable names, removing dependencies), which are easily macroable, or very targeted ones (extracting functions, decoupling stuff, ...). For both, this navigation is a superpower, and coupled with the other tools of emacs/vim, editing is very fast. That relies on a very good mental model of the code, but any question can be answered quickly with the above tools.
For me, it's like having a moodboard with code listings.
Karrot_Kream · 6h ago
Yes, I've done this kind of refactoring for ages using emacs macros and grep. Language Server and tree-sitter in emacs have made this faster (when I can get all the dependencies set up correctly, that is). Variable name edits and function extraction are pretty much table stakes in most modern editors like IntelliJ, VSCode, Zed, etc. IIRC Eclipse had this capability 15-20 years ago.
I used to have more patience for doing it the grep/macro way in emacs. It used to feel a bit zen, like going through the code and changing all the call-sites to use my new refactor or something. But I've been coding for too long to feel this zen any longer, and my own expectations for output have gotten higher with tools like language-server and tree-sitter.
The kind of refactorings I turn to an LLM for are different, like creating interfaces/traits out of structs or joining two different modules together.
Karrot_Kream · 16h ago
emacs macros aren't the same. You need to look at the file, observe a pattern, then start recording the macro and hope the pattern holds. An LLM can just do this.
skydhash · 14h ago
And that's why I mentioned grep-mode and other such tools. Here are some videos about what I'm talking about
Standard search and replace in other tools pales in comparison.
Karrot_Kream · 13h ago
I am familiar with grep-mode and have used that and macro recording for years. I've been using emacs for 20 years. grep-mode (these days I use rg) just brings up all the matches which lets me use a macro that I recorded. That's not the same as telling Claude Code to just make the change. Macros aren't table stakes but find-replace across projects is table stakes in pretty much any post-emacs/vim code editor (and both emacs and vimlikes obviously have plenty of support for this.)
Karrot_Kream · 16h ago
I guess it depends? The "refactor" stuff, if your IDE or language server can handle it, then yeah, I find the LLM slower for sure. But there are other cases where an LLM helps a lot.
I was writing some URL canonicalization logic yesterday. Because we rolled this out as an MVP, customers put in URLs in all sorts of formats and we stored them in the DB. My initial pass at the logic failed on some cases. Luckily URL canonicalization is pretty trivially testable. So I took the most-used customer URLs from our DB, sent them to Claude, and told Claude to come up with the "minimum spanning test cases" that cover this behavior. This took maybe 5-10 sec. I then told Zed's agent mode using Opus to make me a test file and use these test cases to call my function. I audited the test cases and ended up removing some silly ones. I iterated on my logic and that was that. Definitely faster than having to do this myself.
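A minimal sketch of the kind of canonicalization function and table-driven test cases being described (the rules and names here are illustrative assumptions, not the commenter's actual code):

```python
# Illustrative sketch only -- the canonicalization rules and names here are
# assumptions for the example, not the actual code.
from urllib.parse import urlsplit, urlunsplit

def canonicalize_url(raw: str) -> str:
    raw = raw.strip()
    if "://" not in raw:
        raw = "https://" + raw                 # assume https when no scheme is given
    scheme, netloc, path, query, _fragment = urlsplit(raw)
    netloc = netloc.lower().removeprefix("www.")
    path = path.rstrip("/")
    return urlunsplit((scheme.lower(), netloc, path, query, ""))

# Table-driven test cases, easy to generate and audit in bulk.
TEST_CASES = [
    ("Example.com/", "https://example.com"),
    ("https://www.example.com/shop/", "https://example.com/shop"),
    ("http://example.com/a?b=1#frag", "http://example.com/a?b=1"),
]

def test_canonicalize_url():
    for raw, expected in TEST_CASES:
        assert canonicalize_url(raw) == expected, (raw, canonicalize_url(raw))
```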
roncesvalles · 11h ago
All the references to LLMs in the article seemed out-of-place like poorly done product placement.
LLMs are the antithesis of fast. In fact, being slow is a perceived virtue with LLM output. Some sites like Google and Quora (until recently) simulate the slow typed-output effect for their pre-cached LLM answers, just for credibility.
pjmlp · 6h ago
Not only that, I am already typing enough for coding; I don't want to type in chat windows as well, and so far the voice assistance is so-so.
tomrod · 16h ago
I'm consistently seeing personal and shared anecdotes of a 40%-60% speedup on targeted senior work.
As much as I like agents, I am not convinced the human using them can sit back and get lazy quite yet!
stavros · 15h ago
Eeeh, I spend less time writing code, but way more time reviewing and correcting it. I'm not sure I come out ahead overall, but it does make development less boilerplate-y and more high-level, which leads to code that otherwise wouldn't have been written.
tomrod · 13h ago
I wonder if you observe this when you use it in a domain you know well versus a domain you know less well.
I think LLM assistants help you become functional across a broader context -- and I completely agree that testing and reviewing become much, much more important.
E.g - a front end dev optimizing database queries, but also being given nonsensical query parameters that don't exist.
stavros · 9h ago
Oh yes, of course, if I don't know a domain well, I can't review it. That doesn't mean the LLM makes fewer mistakes there, though.
toenail · 5h ago
That sounds plausible if the senior did lots of simple coding tasks and moved that work to an agent. Then the senior basically has to be a team lead and do code reviews/QA.
michaelsalim · 16h ago
Curious, what do you count as senior work?
tomrod · 13h ago
Roughly:
A senior can write, test, deploy, and possibly maintain a scalable microservice or similar sized project without significant hand-holding in a reasonable amount of time.
A junior might be able to write a method used by a class but is still learning significant portions and concepts either in the language, workflow orchestration, or infrastructure.
A principal knows how each microservice fits into the larger domain it serves, whether or not they understand all services and all domains.
A staff has significant principal understanding across many or all domains an organization uses, builds, and maintains.
AI code assistants help increase breadth and, with oversight, improve depth. One can move from the "T"-shaped to the "V"-shaped skillset far more easily, but one must never fully trust AI code assistants.
cornfieldlabs · 11h ago
I switch from Cursor to VS Code many times a day just to use its Python refactoring feature. The Pylance server that comes with Cursor doesn't support refactoring.
old-gregg · 17h ago
Fun story time!
Early in my career as a software engineer, I developed a reputation for speeding things up. This was back in the day when algorithm knowledge was just as important as the ability to examine the output of a compiler, every new Intel processor was met with a ton of anticipation, and Carmack and Abrash were rapidly becoming famous.
Anyway, the 22 year old me unexpectedly gets invited to a customer meeting with a large multinational. I go there not knowing what to expect. Turns out, they were not happy with the speed of our product.
Their VP of whatever said, quoting: "every saved second here adds $1M to our yearly profit". I was absolutely floored. Prior to that moment I couldn't even dream of someone placing a dollar amount on speed, and so directly. Now 20+ years later it still counts as one of the top 5 highlights of my career.
P.S. Mentioning as a reaction to the first sentence in the blog post. But the author is correct when she states that this happens rarely.
P.P.S. There was another engineer in the room, who had the nerve to jokingly ask the VP: "so if we make it execute in 0 seconds, does it mean you're going to make an infinite amount of money?". They didn't laugh, although I thought it was quite funny. Hey, Doug! :)
emmelaich · 9h ago
Working with a task scheduling system, we were told that every minute an airplane is delayed costs $10k. This was back in the 90s, so adjust accordingly.
asimovDev · 2h ago
if you ever remember that engineer's name you should tell them that I found the joke funny
felideon · 16h ago
So, did you make it faster?
old-gregg · 15h ago
Unfortunately, there wasn't a single bottleneck. A bunch of us, not just me, worked our asses off improving performance by a little bit in several places. The compounded improvement IIRC was satisfactory to the customer.
adwn · 16h ago
> "so if we make it execute in 0 seconds, does it mean you're going to make an infinite amount of money?"
I don't get it. Wouldn't going from 1 second to 0 seconds add the same amount of money to the yearly profit as going from 2 seconds to 1 second did? Namely, $1M.
old-gregg · 14h ago
> I don't get it. Wouldn't going from 1 second to 0 seconds add the same amount of money to the yearly profit as going from 2 seconds to 1 second did? Namely, $1M
Of course the joke was silly. But perhaps I should have provided some context. We were making industrial automation software. This stuff runs in factories. Every saved second shrinks the manufacturing time of a part, leading to an increase in total factory output. When extrapolating to absurd levels, zero time to manufacture means infinite output per factory (sans raw materials).
stronglikedan · 16h ago
yeah it's one of those things that are funny to the people saying it because they don't yet realize it doesn't make sense. I bet they felt that later, in the hotel room, in the shower, probably with a bottle of scotch.
Otek · 15h ago
> I bet they felt that later, in the hotel room, in the shower, probably with a bottle of scotch.
Geez, life in my opinion is not so serious. It’s okay to say stupid things and not feel bad about it, as long as you are not trying to hurt anyone.
I bet they felt great and immediately forgot about this bad joke.
andsoitis · 14h ago
Their joke could have also been interpreted as sarcasm and when you’re going to be sarcastic you want to be doubly sure that you’re correct.
But I also concur with you that it is good to bring some levity to “serious” conversations!!
Not in front of an executive of an important customer, it isn't. They are remarkably humorless about making money.
betterhealth12 · 15h ago
earlier in my career it'd be appealing to make jokes like that, or include a comment in an email. eventually you realize that people - especially "older" or those already a few years into their career - mostly don't want to joke around and just want to actually get the thing done you are meeting about.
singpolyma3 · 14h ago
Yikes. I hope to never need to work with such people
flobosg · 16h ago
A process taking 0 seconds means that, in one year, it can be run 31540000 sec/0 sec = ∞ times, multiplying the profit by ∞.
willsmith72 · 16h ago
Since when is the constraint "how many times can I run this thing"?
zahlman · 14h ago
In principle, the reason that "every second saved here is worth $x" is because running the thing generates money, and saving time on it allows for running it more often.
lblume · 15h ago
At least in theoretical computer science, often, but that's another matter entirely.
ensemblehq · 16h ago
RE: P.P.S... God I love that humour. Actually was very funny.
9rx · 18h ago
> Rarely in software does anyone ask for “fast.”
They don't explicitly ask for it, but they won't take you seriously if you don't at least pretend to be. "Fast" is assumed. Imagine if Rust had shown up, identical in every other way, but said "However, it is slower than Ruby". Nobody would have given it the time of day. The only reason it was able to gain attention was because it claimed to be "Faster than C++".
Watch HN for a while and you'll start to notice that "fast" is the only feature that is necessary to win over mindshare. It is like moths to a flame as soon as something says it is faster than what came before it.
mvieira38 · 17h ago
Only in the small subset of programmers that post on HN is that the case. Most users or even most developers don't mind slow stuff or "getting into flow state" or anything like that, they just want a nice UI. I've seen professional data scientists using Github Desktop on Windows instead of just learning to type git commands for an easy 10x time save
lukevp · 7h ago
GitHub Desktop is way better for reviewing diffs than the git CLI. Everyone I've ever worked with who preferred CLI tools also just added and committed everything, and their PRs always had more errors overall that would have been caught before even being committed if they had reviewed visual diffs while committing.
jeremyjh · 1h ago
The best interface is magit, IMO. I use a clone of it in VS Code that is nearly as good. But you get the speed of CLI while still being very easy to stage/unstage individual chunks, which is probably the piece that does not get done enough by CLI users.
sebmellen · 1h ago
Sublime Merge gets you all those benefits, PLUS it’s really fast!
0wis · 4h ago
Not everyone is conscious about it but I feel like it’s something that people will always want.
Like the « evergreen » things Amazon decided to focus on : faster delivery, greater selection, lower cost.
SchemaLoad · 6h ago
They do mind, which is why we see such a huge drop-off in retention if pages load even seconds too slowly. They just don't describe it in the same way.
They don't say they buy the iPhone because it has the fastest CPU and most responsive OS, they just say it "just works".
didibus · 14h ago
You're drawing the wrong conclusion: "Fast" is a winning differentiator only when you offer the same feature set, but faster.
Your example says it, people will go, this is like X (meaning it does/has the same features as X), but faster. And now people will flock from X to your X+faster thing.
Which tells us nothing about if people would also move to a X+more-features, or a X+nicer-ux, or a X+cheaper, etc., without them being any faster than X or even possibly slower.
gherkinnn · 6h ago
I hate it but it's true. Look at me, my fridge has an integrated tablet that tells me the weather outside. Never mind that it is a lil louder and the doors are creaky. It tells me the weather!
willvarfar · 6h ago
And is your fridge within line of sight of a window? :)
emmelaich · 9h ago
Really not sure about that. People will give up features for speed all the time. See git vs bzr/hg/svn/darcs/monotone,...
didibus · 6h ago
Hum, personally I've always found git to have more features than those, though I don't know them all. At least when git was released it distinguished itself mostly by its features, specifically the distributed nature and rebase. And hg/bzr never looked to me like they had more features, more like similar features +/-, so they'd be a good example of git having the same features + faster, so it won.
Dylan16807 · 18h ago
Maybe for languages, but fast is easily left behind when looking for frameworks. People want features, people want compatibility, people will use electron all over.
9rx · 18h ago
> fast is easily left behind when looking for frameworks.
Nah. React, for example, only garnered attention because it said "Look how much faster the virtual DOM is!". We could go on all day.
> People want features, people want compatibility
Yes, but under the assumption that it is already built to be as "fast" as possible. "Fast" is assumed. That's why "faster" is such a great marketing trick, as it tunes people into "Hold up. What I'm currently using actually sucks. I'd better reconsider."
"Fast" is deemed important, but it isn't asked for as it is considered inconceivable that you wouldn't always make things "fast". But with that said, keep in mind that the outside user doesn't know what "fast" is until there is something to compare it with. That is how some products can get away with not being "fast" — until something else comes along to show that it needn't be that way.
NohatCoder · 4h ago
It is only fast compared to a really dumb baseline. But you are right that the story of React being fast was a big part of selling it.
PaulHoule · 18h ago
"Look how quickly it can render the component 50 times!"
tobyhinloopen · 17h ago
"Look, it can render the whole app really quickly every time the user presses a key!"
PaulHoule · 16h ago
That gets into a very interesting question of controlled vs. uncontrolled components.
On one hand I like controlled components because there is a single source of truth for the data (a useState()) somewhere in the app, but you are forced to re-render for each keypress. With uncontrolled components on the other hand, there's the possible anarchy of having state in React and in the actual form.
Reactivity as an idea allowed you to manage data and dom/UI updates in a more performant way than the approach prior to React being popular.
But React started a movement where frontend teams were isolated from backend teams (who tend to be more conservative and performance-minded), tons of the view logic was needlessly pushed into browser rendering, and every page started using 20 different JSON endpoints that are often polling/pushing, adding overhead, etc. So by every measure it made the Web slower and more complicated, in exchange for somewhat easier/more cohesive design management (that needs changing yearly).
The particulars of the vdom framework itself are probably not that important in the grand scheme. Unless its design encourages doing less of those things (which many newer ones do, but React is flexible).
atq2119 · 17h ago
And yet we live in a world of (especially web) apps that are incredibly slow, in the sense that an update in response to user input might take multiple seconds.
Yes, fast wins people over. And yet we live in a world where the actual experience of every day computing is often slow as molasses.
benrutter · 7h ago
Molasses can be fast if you leave it in the packet and hurl it!
Seriously though, you're so right. I often wonder why this is: is it that people genuinely don't care, or is it more that, say, ecommerce websites already compete on so many things (or in some cases maintain monopolies) that fast doesn't come into the picture?
9rx · 16h ago
The trouble is that "fast" doesn't mean anything without a point of comparison. If all you have is a slow web app, you have to assume that the web app is necessarily slow — already as fast as it can be. We like to give people the benefit of the doubt, so there is no reason to think that someone would make something slower than is necessary.
"Fast" is the feature people always wanted, but absent better information, they have to assume that is what they already got. That is why "fast" marketing works so well. It reveals that what they thought was pretty good actually wasn't. Adding the missing kitchen sink doesn't offer the same emotional reaction.
renlo · 15h ago
> The trouble is that "fast" doesn't mean anything without a point of comparison.
This is what people are missing. Even those "slow" apps are faster than their alternatives. People demand and seek out "fast", and I think the OP article misses this.
Even the "slow" applications are faster than their alternatives or have an edge in terms of speed for why people use them. In other words, people here say "well wait a second, I see people using slow apps all the time! People don't care about speed!", without realizing that the user has already optimized for speed for their use case. Maybe they use app A which is 50% as fast as app B, but app A is available on their toolbar right now, and to even know that app B exists and to install it and learn how to use it would require numerous hours of ramp up time. If the user was presented with app A and app B side by side, all things equal, they will choose B every time. There's proficiency and familiarity; if B is only 5% faster than A, but switching to B has an upfront cost in days to able to utilize that speed, well that is a hidden speed cost and why the user will choose A until B makes it worth it.
Speed is almost always the universal characteristic people select for, all things equal. Just because something faster exists, and it's niche, and hard to use (not equal for comparison to the common "slow" option people are familiar with), it doesn't mean that people reject speed, they just don't want to spend time learning the new thing, because it is _slower_ to learn how to use the new thing at first.
lblume · 15h ago
> you have to assume
We don't have to assume. We know that JavaScript is slow in many cases, that shipping more bundles instead of less will decrease performance, and that with regard to the amount of content served generally less is more.
Whether this amount of baggage every web app seems to come with these days is seen as "necessary" or not is subjective, but I would tend to agree that many developers are ignorant of different methods or dislike the idea of deviating from the implied norms.
philipwhiuk · 1h ago
The slow web app is probably still faster than the previous solution.
whartung · 7h ago
I’ll tell you what fast is.
I’ve mentioned this before.
Quest Diagnostics, their internal app used by their phlebotomists.
I honestly don’t know how this app is done, I can only say it appears to run in the tab of a browser. For all I know it’s a VB app running in an ActiveX plugin, if they still do that on Windows.
The L&F looks like a classic Windows GUI app; it interfaces with a signature pad, a scanner, and a label printer.
And this app flies. Dialogs come and go, the operator rarely waits on this UI, when she is keying in data (and they key in quite a bit), the app is waiting for the operator.
Meanwhile, if I want to refill a prescription, it's fraught with beach balls, those shimmering boxes, and, of course, lots of friendly whitespace and scrolling. All to load a med name and a drugstore address, and to ask 4 yes/no questions.
I look at that Quest app mouth agape, it’s so surprisingly fast for an app in this day and age.
atq2119 · 11h ago
This is a disingenuous response because I made it plenty clear what I meant with "fast": interactive response times.
And for that, we absolutely do have points of comparison, and yeah, pretty much all web apps have bad interactivity because they are limited too much by network round trip times. It's an absolute unicorn web app that does enough offline caching.
It's also absurd to assume that applications are as fast as they could be. There is basically always room for improvement, it's just not being prioritised. Which is the whole point here.
underdeserver · 16h ago
Eh, I think the HN crowd likes fast because most tech today is unreasonably slow, when we know it could be fast.
RandomBacon · 13h ago
It's infuriating when I have to use a chatbot, and it pretends to be typing (or maybe looking up a pre-planned generic response or question)...
I'm already pissed I have to use the damn thing, please don't piss me off more.
FridgeSeal · 11h ago
Press enter.
Wait.
Wait for typing indicator.
Wait for cute text-streaming.
Skip through the paragraph of restating your question and being pointlessly sycophantic.
Finally get to the meat of the response.
It’s wrong.
qingcharles · 10h ago
What's sad is that I always open grok.com if it's a quick simple query because their UI loads about 10X faster than GPT/Gemini/Claude.
blub · 5h ago
The claim was not that Rust was faster than C++, they said it’s about as fast.
C and C++ were and are the benchmark, it would have been revolutionary to be faster and offer memory safety.
Today, in some cases Rust can be faster, in others slower.
asa400 · 15h ago
To a first approximation HN is a group of people who have convinced themselves that it's a high quality user experience to spend 11 seconds shipping 3.8 megabytes of Javascript to a user that's connected via a poor mobile connection on a cheap dual-core phone so that user can have a 12 second session where they read 150 words and view 1 image before closing the tab.
Fast is _absolutely not_ the only thing we care about. Not even top 5. We are addicted to _convenience_.
lblume · 15h ago
The fact that this article and similar ones get upvoted very frequently on this platform is strong evidence against this claim.
Considering the current state of the Web and user application development, I tend to agree with regard to its developers, but HN seems to still abide by other principles.
alt227 · 2h ago
This kind of slop is often imposed on developers by execs demanding things.
I imagine a large chunk of us would gladly throw all that out the window and only write super fast efficient code structures, if only we could all get well paid jobs doing it.
hn_throwaway_99 · 16h ago
Just want to say how much I thank YCom for not f'ing up the HN interface, and keeping it fast.
I distinctly remember when Slashdot committed suicide. They had an interface that was very easy for me to scan and find high value comments, and in the name of "modern UI" or some other nonsense needed to keep a few designers employed, completely revamped it so that it had a ton of whitespace and made it basically impossible for me to skim the comments.
I think I tried it for about 3 days before I gave up, and I was a daily Slashdot reader before then.
FlyingSnake · 15h ago
HN is literally the website I open to check if I have internet connectivity. HN is truly a shining beacon in the trashy landscape of web bloat.
theandrewbailey · 2h ago
I usually load my blog to check internet connectivity.
I work at an e-waste recycling company. Earlier this week, I had to test a bunch of laptop docking stations, so I kept force refreshing my blog to see if the Ethernet port worked. Thing is, it loads so fast, I kept the dev tools open to see if it actually refreshed.
inopinatus · 12h ago
I like to use example.com/net/org
Bonus: these have both http & https endpoints if you need a differential diagnosis or just a means to trip some shitty airline/hotel walled garden into saying hello.
yep, I do exactly the same thing. If HN isn't loading, something is definitely fckd.
dang · 6h ago
Except when HN itself is fckd.
It does happen less than it used to, but still.
kikoreis · 15h ago
Oh it's lwn.net for me!
throwawayexmple · 14h ago
I find pinging localhost a bit more reliable, and faster too.
I blame HN switching to AWS. Downtime also increased after the switch.
dang · 6h ago
When did you notice HN switching to AWS, and what changed?
(Those are trick questions, because we haven't switched to AWS. But I genuinely would like to hear the answers.)
(We did switch to AWS briefly when our hosting provider went down because of a bizarre SSD self-bricking incident a few years ago..but it was only for a day or two!)
frutiger · 13h ago
The HN UI could do with some improvements, especially on mobile devices. The low contrast and small tap areas for common operations make it less than ideal, as well as the lack of dark mode.
I wrote my take on an ideal UI (purely clientside, against the free HN firebase API, in Elm): https://seville.protostome.com/.
hn_throwaway_99 · 11h ago
To each their own, but I find the text for the number of points and "hours ago" extremely low contrast and hard to read on your site. More importantly, I think it emphasizes the wrong thing. I almost never really care who submitted a post, but I do care about its vote count.
frutiger · 11h ago
That’s all totally fair.
I actually never care about the vote count but have been on this site long enough to recognise the names worth paying attention to.
Also the higher contrast items are the click/tap targets.
dang · 6h ago
Anyone who goes to the trouble of making their own HN front end is entitled to complain as much as they want, in my book! Nicely done.
apaprocki · 5h ago
It’s hilarious to me that I find this thread. I read the comment you’re replying to before I saw who wrote it. I exclusively read HN on iOS using https://hackerweb.app/ in dark mode precisely because I found it to be the most pleasing mobile experience. And here’s dang replying to my co-worker who commented that he wrote his own HN reader because the actual site isn’t the best on mobile. I could literally reach out my hand, show my phone and share my mobile HN experience with him, except I’m 99% remote. (But I did sit at his desk just last Thursday when he was remote.)
Just goes to show that all of us reading HN don’t actually share with each other how we’re reading HN :)
Too funny… thank you!!
Eji1700 · 16h ago
Information density and ease of identification are the antithesis of "engagement", which often has some time-on-site metric they're hunting.
If you can find what you want and read it, you won't spend 5 extra seconds lost on their page letting them pad their stats for advertisers. Bonus points if the stupid page loads in such a way that you accidentally click on something and give them a "conversion".
Sadly financial incentive is almost always towards tricking people into doing something they don't want to do instead of just actually giving them what they fucking want.
andsoitis · 14h ago
> Sadly financial incentive is almost always towards tricking people into doing something they don't want to do instead of just actually giving them what they fucking want.
Northstar should be user satisfaction. For some products that might be engagement (eg entertainment service) while for others it is accomplishing a task as quickly as possible and exiting the app.
KPGv2 · 15h ago
The one and only thing I'd do is make the font bigger and increase padding. There's overwhelming consensus that you should have (for English) about 50–70 characters per line of text for the best, fastest, most accurate readability. That's why newspapers pair a small font with multiple columns: to limit number of characters per line of text.
HN might have over 100 chars per line of text. It could be better. I know I could do it myself, and I do. But "I fixed it for me" doesn't fix it for anyone else.
amiga-workbench · 12h ago
I use HN zoomed in at 133%. It's a lot more comfortable even when I'm wearing my glasses.
AnonC · 9h ago
I agree. In my experience, the default HN is terrible for accessibility (in many ways). I’ve just been waiting for dang and tomhow to get a lot older so that they face the issues themselves enough times to care.
stevage · 15h ago
Increased padding comes at the cost of information density.
I think low density UIs are more beginner friendly but power users want high density.
jorvi · 14h ago
High information density, not high UI density.
Having 50 buttons and 10 tabs shoved in your face just makes for opaqueness, power user or not.
NobodyNada · 13h ago
A narrow column of text can make it easier to read individual sentences, but it does so by sacrificing vertical space, which makes it harder to skim a page for relevant content and makes it easier for me to lose track of my place since I can't see as much context, images, and headings on screen all at once. I also find it much harder to read text when the paragraphs form monotonous blocks spanning 10 lines of text rather than being irregularly shaped and covering 3-5 lines. I find Wikipedia articles much harder to read in "standard" mode compared to "wide" mode for this reason.
Different people process visual information differently, and people reading articles have different goals, different eyesight, and different hardware setups. And we already have a way for users to tell a website how wide they want its content to be: resizing their browser window. I set the width of my browser window based on how wide I want pages to be; and web designers who ignore this preference and impose unreadable narrow columns because they read about the "optimal" column width in some study or another infuriate me to no end. Optimal is not the same for everyone, and pretending otherwise is the antithesis of accessibility.
ryandrake · 9h ago
The user should have the choice. If I wanted my browser to display text in a tiny column on my monitor because I thought it would be easier to read, I would... resize my browser to be a tiny column on my monitor!
accoil · 11h ago
Why would shorter lines be regular? I use hn with `max-width: 60rem;`, and I get a ragged right (which I very much prefer over justification), while also getting a line length easier for my eyes to follow.
NobodyNada · 8h ago
My eyes seem to navigate by paragraph more so than by line. It's hard to try to overanalyze how I read, but I think "corners" of a paragraph are landmarks that I latch onto, and when I reach the end of a line of text I don't scan back along the line horizontally to the left, I "jump" back, using the boundaries of the paragraph to estimate the start of the next line, and continue reading from there.
This means that I have a difficult time reading text with very large paragraphs. If a paragraph goes on for 10+ lines, I'll start to lose my place at the end of most lines. This is infuriating and drastically impairs my ability to read and comprehend the text.
It's interesting to me that you mention preferring a ragged right over justification, because I literally do not notice the difference. This suggests to me that we read in different ways -- perhaps you focus on the shape and boundaries of a line more than the shape of a paragraph. This makes intuitive sense to me as to why you would prefer narrower columns.
I don't think that I'm "right" for preferring wider columns or that you or anyone else are "wrong" for preferring narrower columns. I think it's just how my brain learned to process text.
I have pretty strong opinions on what's too wide of a column and what's too narrow of a column, so I won't fullscreen a browser window on anything larger than a laptop. Rather, I'll set it for a size that's comfortable for me. If some web designer decides "actually, your preferred text width is wrong, use mine instead" then I'm gonna be pretty annoyed, and I think rightfully so, because what those studies say is "optimal" for the average person is nigh unreadable for me. (Daring Fireball is the worst offender I can think of off the top of my head. I also find desktop Wikipedia's default view pretty hard to read, but the toggleable "wide" mode is excellent).
gherkinnn · 6h ago
Naturally. Centuries of typography as a field and your anecdote obliterates it.
KronisLV · 5h ago
I’d very much prefer more padding between the clickable UI elements on mobile in particular, because the zoom in -> click upvote -> zoom out, or the click downvote by accident -> try to unvote -> try to upvote again, well, it gets pretty old pretty fast.
The text density, however, I rather like.
portaouflop · 14h ago
There are dozens of alternative HN front ends that would satisfy your needs
HarHarVeryFunny · 15h ago
I don't think it was UI that killed Slashdot. The value was always in the comments, and in the very early years often there would be highly technical SMEs commenting on stories.
The site seemed to start to go downhill when it was sold, and got into a death spiral of less informed users, poor moderation, people leaving, etc. It's amazing that it's still around.
phkahler · 14h ago
It's not bad. I still read it, but less than HN.
cruffle_duffle · 13h ago
For me, Slashdot became full of curmudgeons. It's pretty tiring when every "+5 Insightful" on a hard drive article is questioning why you'd ever want so big a drive, or why you'd need more than 256 colors, or whatever the new thing was… like, why are you even on a technology enthusiast site if you bitterly complain about every new thing? Basically, either accept change or get left in the dust, and Slashdot's crowd seemed determined to be left in the dust… forever losing its relevance in the tech community.
Plus Rusty just pushed out Kuro5hin and it felt like “my scene” kind of migrated over.
As an aside, Kuro5hin was the only “large” forum that I ever bothered remembering people’s usernames. Every other forum it’s all just random people. (That isn’t entirely true, but true enough)
sien · 13h ago
Kuro5hin was far less about technology though.
It was interesting in a different way though.
Like Adequacy.
Did you also move over to MetaFilter ?
wldlyinaccurate · 6h ago
It brings me genuine joy to use websites like HN or Rock Auto that haven't been React-ified. The lack of frustration I feel when using fast interfaces is noticeable.
I don't really get why so many websites are slow and bloated these days. There are tools like SpeedCurve which have been around for years yet hardly anyone I know uses them.
postalcoder · 14h ago
It’s not modern UIs that prevent websites from being performant. Look at old.reddit.com, for instance. It’s the worst of both worlds. An old UI that, although much better than its newer abomination, is fundamentally broken on mobile and packed to the gills with ad scripts.
qingcharles · 10h ago
What changes have been made to the HN design since it was launched?
I know there are changes to the moderation that have taken place many times, but not to the UI. It's one of the most stable sites in terms of design that I can think of.
What other sites have lasted this long without giving in to their users' whims?
Over the last 4 years my whole design ethos has transformed to "WWHND" (What Would Hacker News Do?) every time I need to make any UI changes to a project.
jq-r · 5h ago
Slashdot looked a lot like HN, with high information density. It was fast and easy to read all the comments. Then a redesign happened because of web 2.0 or "mobile-first" hype, and most of the comments got hidden/collapsed by default, sorted almost randomly, etc. A new user would come there and say "wtf, this is a dead conversation", or would have to click too many times to get to the full conversation. So new users would leave, and so would the old ones, because the page was so hard to use. It just lost users and that was that. All because of the redesign, which they never wanted to revert. Sad really, because I still think it had/has the best comment moderation by far.
SatvikBeri · 8h ago
The only one I remember is adding the ability to collapse comment threads
MawKKe · 15h ago
Similar thing happened (to me) with Hackaday around 2010-2011. I used to check it almost daily, and then never again after the major re-design.
ChrisMarshallNY · 15h ago
That, and all the trolls that piled on, when CNN and YouTube started policing their comment sections.
nashashmi · 16h ago
Is that when they went fully xhtml?
fHr · 13h ago
HN interface is goated
cyanydeez · 15h ago
I've wanted to poll HN about how many people actively track usernames.
With IRC it's basically part of the task, but on every forum I read, it's rare that I ever consider who's saying what.
Doesn't really help a ton with recognizing but it makes it easier to track within a thread.
tialaramex · 13h ago
I routinely notice a handful of people, such as Thomas Ptacek, whose opinions I have opinions about, and then in context I notice e.g. Martin Uecker for C and especially the pointer provenance problem (on which he has been diligently working for some years), or Walter Bright (for the D language), or Steve Klabnik (Rust)
There are people who show up much less often and have less obvious usernames, Andrew Ayer is agwa for example, and I'm sure there are people I blank on entirely.
Once in a while I will read something and realise oh, that "coincidental" similarity of username probably isn't a coincidence, I believe the first time I realised it was Martin Uecker was like that for example. "Hey, this HN person who has strong opinions about the work by Uecker et al has the username... oh... Huh. I guess I should ask"
stevage · 15h ago
Yep. Dang is basically the only one I notice.
fuzzfactor · 13h ago
It helps having the username in a lighter font than the comment.
jandrese · 11h ago
HN goes to some lengths to de-emphasize the usernames, leaving them small and medium grey against a light grey background. It's not easy to track usernames here. Some other forums put far more emphasis on them, even letting users upload icons so you can tell who is who at a glance.
spangry · 10h ago
For me it's more a recognition after the fact thing: "Oh that was a good comment who said that? Oh that guy, yeah not surprised."
altairprime · 12h ago
I don’t even slightly.
riffic · 12h ago
orange site still doesn't support markdown link tags though.
dang · 6h ago
What's a markdown link tag?
ilyakaminsky · 17h ago
Fast is also cheap. Especially in the world of cloud computing where you pay by the second. The only way I could create a profitable transcription service [1] that undercuts the rest was by optimizing every little thing along the way. For instance, just yesterday I learned that the image size I've put together is 2.5× smaller than the next open source variant. That means faster cold boots, which reduces the cost (and provides a better service).
Fast is cheap everywhere. The only reasons software isn’t faster:
* developer insecurity and pattern lock in
* platform limitations. This is typically software execution context and tool chain related more than hardware related
* most developers refuse to measure things
Even really slow languages can result in fast applications.
sipjca · 15h ago
I've approached the same thing but slightly differently. I can run it on consumer hardware for vastly cheaper than the cloud and don't have to worry about image sizes at all. (Bare metal is 'faster'.) I'm offering 20,000 minutes of transcription for free up to the rate limit (1 request every 5 seconds).
If you ever want to chat about making transcription virtually free or very cheap for everyone, let me know. I've been working on various projects related to it for a while, including an open source, cross-platform superwhisper alternative: https://handy.computer
ilyakaminsky · 13h ago
> i can run it on consumer hardware for vastly cheaper than the cloud
Woah, that's really cool, CJ! I've been toying with the idea of standing up a cluster of older iPhones to run Apple's Speech framework. [1] The inspiration came from this blog post [2] where the author is using it for OCR. A couple of things are holding me back: (1) the OSS models are better according to the current benchmarks and (2) I have customers all over the world, so geographical load-balancing is a real factor. With that said, I'll definitely spend some time checking out your work. Thanks for sharing!
Is S3 slow or fast? It’s both, as far as I can tell and represents a class of systems (mine included) that go slow to go fast.
S3 is “slow” at the level of a single request. It’s fast at the level of making as many requests as needed in parallel.
Being “fast” is sometimes critical, and often aesthetic.
claytonjy · 16h ago
We have common words for those two flavors of “fast” already: latency and throughput. S3 has high latency (arguable!), but very very high throughput.
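To make the two flavors concrete, here's a minimal sketch in Python (the `fetch_object` helper is hypothetical, simulated with a sleep rather than a real S3 GET): each request keeps the same ~100 ms latency, but issuing them in parallel multiplies throughput.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def fetch_object(key: str) -> bytes:
        # Hypothetical stand-in for one high-latency request (e.g. an S3 GET),
        # simulated here with a fixed 100 ms sleep.
        time.sleep(0.1)
        return b"..."

    def fetch_serial(keys):
        return [fetch_object(k) for k in keys]

    def fetch_parallel(keys, workers=64):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(fetch_object, keys))

    if __name__ == "__main__":
        keys = [f"obj-{i}" for i in range(256)]

        start = time.perf_counter()
        fetch_serial(keys)
        print(f"serial:   {time.perf_counter() - start:.1f}s")  # latency adds up (~26 s)

        start = time.perf_counter()
        fetch_parallel(keys)
        print(f"parallel: {time.perf_counter() - start:.1f}s")  # same per-request latency, ~0.5 s total

The per-request latency never improves; only the aggregate rate does, which is the sense in which systems like S3 "go slow to go fast".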
zahlman · 14h ago
Yep. I'm hoping that installed copies of PAPER (at least on Linux) will be somewhere under 2MB total (including populating the cache with its own dependencies etc). Maybe more like 1, although I'm approaching that line faster than I'd like. Compare 10-15 for pip (and a bunch more for pipx) or 35 for uv.
HarHarVeryFunny · 15h ago
Fast doesn't necessarily mean efficient/lightweight and therefore cheaper to deploy. It may just mean that you've thrown enough expensive hardware at the problem to make it fast.
b_e_n_t_o_n · 16h ago
Your CSS is broken fyi
willsmith72 · 16h ago
Not in development and maintenance dollars it's not
ilyakaminsky · 15h ago
Hmm… That's a good point. I recall a few instances where I went too far to the detriment of production. Having a trusty testing and benchmarking suite thankfully helped with keeping things more stable. As a solo developer, I really enjoy the development process, so while that bit is costly, I didn't really consider that until you mentioned it.
nu11ptr · 17h ago
This is interesting. It got me to think. I like it when articles provoke me to think a bit more on a subject.
I have found this true for myself as well. I changed back over to Go from Rust mostly for the iteration speed benefits. I would replace "fast" with "quick", however. It isn't so much I think about raw throughput as much as "perceived speed". That is why things like input latency matter in editors, etc. If something "feels fast" (ie Go compiles), we often don't even feel the need to measure. Likewise, when things "feel slow" (ie Java startup), we just don't enjoy using them, even if in some ways they actually are fast (like Java throughput).
christophilus · 16h ago
I feel the same way about Go vs Rust. Compilation speed matters. Also, Rust projects resemble JavaScript projects in that they pull in a million deps. Go projects tend to be much less dependency happy.
kettlecorn · 11h ago
One of the Rust ecosystem's biggest mistakes, in my opinion, was not establishing a fiercely defensive mindset around dependency-bloat and compilation speed.
As much as Rust's strongest defenders like to claim otherwise, compilation speed and avoiding bloat just really weren't goals. That's cascaded down into most of the ecosystem's most used dependencies, and so most Rust ecosystem projects just adopt the mindset of "just use the dependency". It's quite difficult to build a substantial project without pulling in hundreds of dependencies.
I went on a lengthy journey of building my own game engine tools to avoid bloat, but it's tremendously time consuming. I reinvented the Mac / Windows / Web bindings by manually extracting auto-generated bindings instead of using crates that had thousands of them, significantly cutting compile time. For things like derive macros and serialization I avoided using crates like Serde that have a massive parser library included and emit lots of code. For web bindings I sorted out simpler ways of interacting with Javascript that didn't require a heavier build step and separate build tool. That's just the tip of the iceberg I can remember off the top of my head.
In the end I had a little engine that could do 3D scenes, relatively complex games, and decent GPU driven UI across Mac, Windows, and Web that built in a fraction of the time of other Rust game engines. I used it to build a bunch of small game jam entries and some web demos. A clean release build on the engine on my older laptop was about 3-4 seconds, vastly faster than most Rust projects.
The problem is that it was just a losing battle. If I wanted Linux support or to use pretty much any other crate in the Rust ecosystem, I'd have to pull in dependencies that alone would multiply the compile time.
In some ways that's an OK tradeoff for an ecosystem to make, but compile times do impede iteration loops and they do tend to reflect complexity. The more stuff you're building on top of the greater the chances are that bugs are hard to pin down, that maintainers will burn out and move on, or that you can't reasonably understand your stack deeply.
Looking completely past the languages themselves I think Zig may accrue advantages simply because its initial author so zealously defined a culture that cares about driving down compile times, and in turn complexity. Pardon the rant!
dist1ll · 3h ago
It's fascinating to me how the values and priorities of a project's leaders affect the community and its dominant narrative. I always wondered how it was possible for so many people in the Rust community to share such a strong view on soundness, undefined behavior, thread safety etc. I think it's because people driving the project were actively shaping the culture.
Meanwhile, compiler performance just didn't have a strong advocate with the right vision of what could be done. At least that's my read on the situation.
speed_spread · 59m ago
As OP demonstrated, Rust compiler performance is not the problem, it's actually quite fast for what it does. Slow builds are rather caused by reliance on popular over-generic crates that use metaprogramming to generate tons of code at compile time. It's not a Rust specific tradeoff but a consequence of the features it offers and the code style it encourages. An alternative, fast building crate ecosystem could be developed with the same tools we have now.
By comparison, Go doesn't have _that_ problem because it just doesn't have metaprogramming. It's easy to stay fast when you're dumb. Go is the Forrest Gump of programming languages.
nu11ptr · 15h ago
And that leads to dependency hell once you realize that those dependencies all need different versions of the same crate. Most of the time this "just works" (at the cost of more dependencies, longer compile time, bigger binary)... until it doesn't then it can be tough to figure out.
In general, I like cargo a lot better than the Go tooling, but I do wish the Rust stdlib was a bit more "batteries included".
noisy_boy · 13h ago
I feel like Rust could have added commonly used stuff as extensions and provided separate builds that have them baked in for those that want to avoid dependency hell while still providing the standard builds like they currently do. Sure the versions would diverge somewhat but not sure how big of a problem that would be.
asa400 · 16h ago
This is all well and good that we developers have opinions on whether Go compiles faster than Rust or whatever, but the real question is: which is faster for your users?
nu11ptr · 15h ago
...and that sounds nice to me as well, but if I never get far enough to give it to my users then what good is fast binaries? (implying that I quit, not that Rust can't deliver). The holy grail would be to have both. Go is generally 'fast enough', but I wish the language was a bit more expressive.
bodhi_mind · 40m ago
I'm the senior developer on a feature-bloated civil engineering web app that has 2 back end servers (one just proxies to the other), 8k lines of stored procedures as the data layer, and many multi-thousand-line React components that intentionally break React best practices.
I loathe working on it but don’t have the time to refactor legacy code.
———————-
I have another project that I am principal engineer and it uses Django, nextjs, docker compose for dev and ansible to deploy and it’s a dream to build in and push features to prod. Maybe I’m more invested so it’s more interesting to me but also not waiting 10 seconds to register and hot reload a react change is much more enjoyable.
nakedneuron · 6h ago
Website is superfast. Reason I usually go for the comments first on HN is exactly this: they're fast. THIS is notably different.
On interfaces:
It's not only the slowness of the software or machine we have to wait for, it's also the act of moving your limb that adds a delay. Navigating a button (mouse) adds more friction than having a shortcut (keyboard). It's a needless feedback loop. If you master your tool all menus should go away. People who live in the terminal know this.
As a personal anecdote, I use custom rofi menus (think raycast for Linux) extensively for all kinds of interaction with data or file system (starting scripts, opening notes, renaming/moving files). It's notable how your interaction changes if you remove friction.
Venerable tools in this vein: vim, i3, kitty (former tmux), ranger (on the brim), qutebrowser, visidata, nsxiv, sioyek, mpv...
Essence of these tools is always this: move fast, select fast and efficiently, ability to launch your tool/script/function seamlessly. Be able to do it blindly. Prefer peripheral feedback.
I wish more people saw what could be and built more bicycles for the mind.
Cthulhu_ · 5h ago
The website is fast because it's minimal, just under 80 kB of which 55 is the custom font; this is fine for plain content sites, but others will have other requirements.
There's never a reason to make a content website use heavyweight JS or CSS though.
nvarsj · 4h ago
That’s actually why I don’t like discourse at all. If your community site needs loading icons I don’t want to use it.
chasing0entropy · 3h ago
The industry (hint: this forum's readers) has replaced "fast" software with "portable", meaning:
- universally addressable libraries that must load from discrete and often remote sources,
- zero hang time in programming language evolution (leaving no time for experts to discover, document, and implement optimizations),
- insistence on "the latest version" of software with no emphasis on long term code stability
SatvikBeri · 17h ago
I've noticed over and over again at various jobs that people underestimate the benefit of speed, because they imagine doing the same workflow faster rather than doing a different workflow.
For example, if you're running experiments in one big batch overnight, making that faster doesn't seem very helpful. But with a big enough improvement, you can now run several batches of experiments during the day, which is much more productive.
IshKebab · 15h ago
I think people also vastly underestimate the cost of context switching. They look at a command that takes 30 seconds and say "what's the point of making it take 3 seconds? you only run it 10 times in a day; it's only 5 minutes". But the cost is definitely way more than that.
owlbite · 8h ago
Whenever we make our code faster the users just run bigger models :P.
01HNNWZ0MV43FF · 16h ago
Me, looking at multi-hour CI pipelines, thinking how many little lint warnings I'd fix up if CI could run in like 20 minutes
zavg · 18h ago
Pavel Durov (founder of Telegram) totally nailed this concept.
He pays special attention to the speed of the application. The Russian social network VK worked blazingly fast. The same goes for Telegram.
I always noticed it but not many people verbalized it explicitly.
But I am pretty sure that people realize it subconsciously and it affects user behaviour metrics positively.
dominicq · 17h ago
Telegram is pretty slow, both the web interface and the Android app. For example, reactions to a message always take a long time to load (both when leaving one, and when looking at one). Just give me emoji, I don't need your animated emoji!
hu3 · 16h ago
Can't agree.
These operations are near instant for me on telegram mobile and desktop.
It's the fastest IM app for me by an order of magnitude.
bravesoul2 · 15h ago
At most jobs I've had, fast becomes a big issue only once things are too slow. Or too expensive.
It's a retroactively fixed thing. Imagine forgetting to make a UI, shipping just an API to a customer, then thinking "oh shit, they need a UI, they're not programmers", and only noticing from customer complaints. That is how performance is often treated.
This is probably because performance problems usually require load or unusual traffic patterns, which require sales, which require demos, which don't require performance tuning as there is only one user!
If you want to speed your web service up, the first thing is to invest time, and maybe money, in really good observability. It should be easy for anyone on the team to find a log, see what CPU is at, etc. Then set up proxy metrics around the speed you care about, talk about them every week, and take actions.
Proxy metrics means you likely can't (well, probably should not) check the speed at which Harold can sum his spreadsheet every minute, but you can check the latency of the major calls involved. If something is slow but metrics look good then profiling might be needed.
Sometimes there is an easy speed up. Sometimes you need a new architecture! But at least you know what's happening.
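A minimal sketch of what a proxy metric can look like (plain Python, with a made-up call name; a real setup would export these samples to whatever observability stack is already in place):

    import time
    from collections import defaultdict
    from functools import wraps

    # Latency samples per call name, kept in-process for the sketch.
    latency_samples = defaultdict(list)

    def timed(name):
        """Record how long each call to the wrapped function takes."""
        def decorate(fn):
            @wraps(fn)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return fn(*args, **kwargs)
                finally:
                    latency_samples[name].append(time.perf_counter() - start)
            return wrapper
        return decorate

    @timed("sum_spreadsheet")  # hypothetical "major call" behind Harold's spreadsheet
    def sum_spreadsheet(rows):
        return sum(rows)

    if __name__ == "__main__":
        for _ in range(1000):
            sum_spreadsheet(range(10_000))
        samples = sorted(latency_samples["sum_spreadsheet"])
        p95 = samples[int(0.95 * (len(samples) - 1))]
        print(f"p95 latency: {p95 * 1000:.2f} ms")

If the proxy metric looks healthy but users still feel slowness, that's the cue to reach for a profiler.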
FridgeSeal · 10h ago
In addition to all this, I’m also of the opinion that most users just have software “lumped on them” and have little to no recourse for complaint, so they’re just forced/trained to put-up-and-shut-up about it.
As a result, performance (and a few other things) functionally never gets “requested”. Throw in the fact that for many mid-to-large orgs, software is not bought by the people who are forced to use it and you have the perfect storm for never hearing about performance complaints.
This in turn, justifies never prioritising performance.
calibas · 17h ago
Efficient code is also environmentally friendly.
First, efficient code is going to use less electricity, and thus, fewer resources will need to be consumed.
Second, efficient code means you don't need to be constantly upgrading your hardware.
breuleux · 16h ago
Well, that depends. Very inefficient code tends to only be used when absolutely needed. If an LLM becomes ten times faster at answering simple prompts, it may very well be used a hundred times more as a result, in which case electricity use will go up, not down. Efficiency gains commonly result in doing way more with more, not more with less.
lblume · 15h ago
Correct. This is also known as a rebound effect [1], or, specifically with regard to technological improvements, as the Jevons paradox [2].
Indeed, that is a common occurrence, called the Jevons paradox.
monkeyelite · 7h ago
Unless your code is running on a large number of machines across data centers, that energy is about 2-3 figures a month in total utilization.
So if we use cost as a proxy for environment impact it’s not saving much at all.
I think this is a meme to help a different audience care about performance.
yogishbaliga · 17h ago
Very true, but in recent years feature development has taken precedence over efficiency. VP of whatever says hardware is cheap, software engineers are not.
dist-epoch · 16h ago
Energy used for lighting didn't decrease when the world moved to LED lights which use much less energy - instead we just used more lighting everywhere, and now cities are white instead of yellow.
kristianp · 12h ago
I know what you mean, but do you have a citation for that? LEDs are so much more efficient that I wonder if it's true.
Aurornis · 17h ago
> Rarely in software does anyone ask for “fast.” We ask for features, we ask for volume discounts, we ask for the next data integration. We never think to ask for fast.
Almost everywhere I’ve worked, user-facing speed has been one of the highest priorities. From the smallest boring startup to the multi billion dollar company.
At companies that had us target metrics, speed and latency was always a metric.
I don’t think my experience has been all that unique. In fact, I’d be more surprised if I joined a company and they didn’t care about how responsive the product felt, how fast the pages loaded, and any weird lags that popped up.
SatvikBeri · 17h ago
At 6 out of 8 companies I've worked at (mostly a mixture of tech & finance) I have always had to fight to get any time allotted for performance optimization, to the point where I would usually just do it myself under the radar. Even at companies that measured latency and claimed it was important, it would usually take a backseat to adding more features.
noisy_boy · 13h ago
That is how it is most of the time. If you want to experience the other extreme, go to HFT or low-latency projects.
codazoda · 17h ago
My experience has been that people sometimes obsess over speed for things like how fast a search result returns, but not over things like how fast a page renders or how many bytes we send the user.
saagarjha · 3h ago
I have been paid to make things fast. Sometimes that was the explicit reason I was hired!
kristianp · 12h ago
No mention of google search itself being fast. It's one of the poster children of speed being part of the interface.
Microsoft needs to take heed, for example Explorer's search, Teams, make your computer seem extremely slow. VS Code on the other hand is fast enough, while slower than native editors such as Sublime Text.
psanchez · 14h ago
Fast is a distinctive feature.
For what it's worth, I built myself a custom Jira board last month, so I could instantly search, filter and group tickets (by title, status, assignee, version, ...)
Motivation: Running queries and finding tickets on JIRA kills me sometimes.
The board is not perfect, but works fast and I made it superlightweight. In case anybody wants to give it a try:
Don't try it on mobile, use desktop. Unfortunately it uses a proxy and requires an API key, but it doesn't store anything in the backend (it just proxies the request because of CORS). Maybe there is an API or a way to query the Jira cloud instance directly from the browser; I just tried the first approach and moved on. It even crossed my mind to add it somehow to the Jira marketplace...
Anyway, caches stuff locally and refreshes often. Filtering uses several tricks to feel instant.
UI can be improved, but uses a minimalistic interface on purpose, like HN.
If anybody tries it, I'll be glad to hear your thoughts.
chamomeal · 18h ago
Only sorta related, but it's crazy to me how much our standards have dropped for speed/responsiveness in some areas.
I used to play games on N64 with three friends. I didn’t even have a concept of input lag back then. Control inputs were just instantly respected by the game.
Meanwhile today, if I want to play rocket league with three friends on my Xbox series S (the latest gen, but the less powerful version), I have to deal with VERY noticeable input lag. Like maybe a quarter of a second. It’s pretty much unplayable.
Aurornis · 17h ago
> I have to deal with VERY noticeable input lag. Like maybe a quarter of a second. It’s pretty much unplayable
Your experience is not normal.
If you’re seeing that much lag, the most likely explanation is your display. Many TVs have high latency for various processing steps that doesn’t matter when you’re watching a movie or TV, but becomes painful when you’re playing games.
fouronnes3 · 17h ago
This does not undermine chamomeal's argument. The whole point is that back in the N64 days, they could not possibly have had that experience. There was no way to even make it happen. The fact that today it's a real possibility when you've done nothing obviously wrong is a definite failure.
edwcross · 17h ago
TVs back then supported a given standard (NTSC, PAL) and a lower resolution. CRTs couldn't "buffer" the image. Several aspects made it so that "cheating" was not possible.
It was either fast, or nothing. Image quality suffered, but speed was not a parameter.
With LCDs, lag became a trade-off parameter. Technology enabled something to become worse, so economically it was bound to happen.
Luckily newer TVs and console can negotiate a low-latency mode automatically. It's called ALLM (Auto-Low Latency Mode).
chamomeal · 17h ago
it's possible, but it seems to specifically be a rocket league on xbox series s problem, not a display problem. Other games run totally fine on the same display with no lag!
izzydata · 17h ago
That may be an issue of going from a CRT tv to an LCD tv. As far as I am aware there was no software manipulation of the video input on a CRT. It just took the input and displayed it on the screen in the only way it could. Newer tvs have all kinds of settings to alter the video which takes processing time. They also typically have a game mode to turn off as much of it as it will allow.
abdullahkhalids · 17h ago
Why should the user care whether the lag is introduced by the software in the controller, the software in the gaming console, or the software in the TV?
The lag is due to some software. So the problem is with how software engineering as a field functions.
PaulHoule · 17h ago
I hear it claimed that you're only supposed to enable game mode for competitive multiplayer games -- but I've found that many single player games like Sword Art Online: Fatal Bullet are unplayable without game mode enabled.
It could be my unusual nervous system. I'm really good at rhythm games, often clearing a level on my first try and amazing friends who can beat me at other genres. But when I was playing League of Legends, which isn't very twitchy, it seemed like I would just get hit and there was nothing I could do about it when I played on a "gaming" laptop but found I could succeed at the game when I hooked up an external monitor. I ran a clock and took pictures showing that the external monitor was 30ms faster than the built-in monitor.
How about the speed of going from a powered off console to playing the actual game? Sleep mode helps with resuming on console, but god forbid you’re on a pc with a game that has anti cheat, or comped menus. You will sit there, sometimes for a full minute waiting. I absolutely cannot stand these games.
qingcharles · 10h ago
My buddy booted up his PC after gaming on his PS5 for two weeks and every single app needed multi-gig updates. Xbox app, Logitech app, Discord, Windows 11, Chrome, Steam. The whole enchilada. Rage inducing compared to sticking a cart in an N64.
raldi · 16h ago
Or how channel surfing now requires a 1-2 second latency per channel, versus the way it was seemingly instant from the invention of television through the early 1990s.
RandomBacon · 13h ago
Having a lot more channels is cool I guess, but it was much better to watch and listen to a staticy analog channel 20 years ago, than a digital channel today where there is no audio and the image freezes.
jaza · 8h ago
Heck yes! I recently dusted off (had to literally dust the inside of the cartridges to get past a black screen, lol) my old Sega Genesis (and bought an HDMI adaptor for it), and have been letting my school age sons play it. They haven't even commented on the basic graphics. They're like "wow dad, no boot time, no connecting to server time, no waiting to skip ads time". They love it.
tobyhinloopen · 17h ago
How about you enable game mode on the TV you're using
chamomeal · 16h ago
Game mode is on! The input lag is not with the display. Other games run fine.
alliao · 9h ago
make fast sexy again... please
growing up, I thoroughly enjoyed seeing workers tapping away at registers that didn't have a mouse, all muscle memory and layers and layers of menus accessible by key taps, whether it was an airline, a clothing store, or a restaurant with those dimly lit terminals glowing green or orange with just a bunch of text and a well versed operator chatting while getting their work done. the keys were commercial grade mechanical and made a pleasing sound.
nowadays it's fancy touch displays that require concentration and are often sluggish, and the machines often feel cheap and make a cheap sound when tapped on. I don't think the operators ever enjoy interacting with them, and the software's often slow across the network....
I'm all for fast. It shows no matter what, at least somebody cared enough for it to be blazing fast.
colton_padden · 17h ago
I was going to say one of the more recent times fast software excited me was with `uv` for Python packaging, and then I saw that op had a link to Charlie Marsh in the footnote. :)
esafak · 18h ago
A lot of people have low expectations from having to use shit products at work, and generally not being discerning.
Speed is what made Google, which was a consumer product at the time. (I say this because it matters more in consumer products.)
hinkley · 8h ago
I don't think people realize how much working with bad tools inspires you to write equally bad applications.
Beautiful tools make you stretch to make better things with them.
doubleorseven · 18h ago
I once accidentally blocked TCP on my laptop and found out "google.com" runs on UDP, it was a nice surprise.
baba is fast.
I sometimes get calls like "You used to manage a server 6 years ago and we have an issue now" so I always tell the other person "type 'alias' and read me the output", this is how I can tell if this is really a server I used to work on.
fast is my copilot.
Jtsummers · 18h ago
Specifically HTTP/3 and QUIC (which came out of Google):
They don't require you to use QUIC to access Google, but it is one of the options. If you use a non-supporting browser (Safari prior to 2023, unless you enabled it), you'd access it with a standard TCP-based HTTP connection.
quesera · 16h ago
> this is how I can tell if this is really a server I used to work on
Hm, shell environment is fairly high on the list of things I'd expect the next person to change, even assuming no operational or functional changes to a server.
And of course they'd be using a different user account anyway.
> it's obvious to anyone that writes code that we're very far from the standards that we're used to
This is true, but I also think there's a backlash now and therefore some really nice mostly dev-focused software that is reeaaaaly fast. Just to name a few:
That's a tiny subsection of the mostly bloated software that exists. But it makes me happy when I come across something like that!
Also, browsers seem to be really responsive despite being among the most feature-bloated products on earth thanks to expanding web standards. I'm not really counting this though, because while Firefox and Chrome might rarely lag, the websites I view with them often do, so it's not really a fast experience.
CraigJPerry · 6h ago
>> Rarely in software does anyone ask for “fast.”
I don't know, there is a sizeable subset of folks who value fast, and it's a big subset, it's not niche.
Search for topics like turning off animations or replacing core user space tools with various go and rust replacements, you'll find us easily enough.
I'm generally a pretty happy macOS user, especially since M1 came along. But I am seriously considering going back to Linux again. I maintain a parallel laptop with NixOS and I'm finding more and more niggles on the Mac side where I can prioritise lower friction on Linux.
ksec · 18h ago
>> Rarely in software does anyone ask for “fast.”
I have been asking about latency-free computing for a very long time. Everything in computing is slow now.
This is one of the reasons I switched from Unity to Godot. There is something about Godot loading fast and compiling fast that makes it so much more immersive to spend hours chugging away at your projects.
PaulHoule · 17h ago
My son told me to not develop a game with Unity because, as a player, he thought Unity games took way too long to load.
01HNNWZ0MV43FF · 16h ago
There might be some selection bias - Experienced programmers who care a lot about engine technology are more likely to use Godot and also optimize their load times. Unity includes a lot of first-time programmers who just want to get something shipped
Liftyee · 17h ago
I always have to remind myself of the bank transfer situation in the US whenever I read an article complaining about it. Here in the UK, bank transfers are quick and simple (the money appears to move virtually instantly). Feel free to enlighten me to why they're so slow in the US.
bobtheborg · 16h ago
"Community banks mostly don’t have programmers on staff, and are reliant on the so-called “core processors” ...
This is the largest reason why in-place upgrades to the U.S. financial system are slow. Coordinating the Faster ACH rollout took years, and the community bank lobby was loudly in favor of delaying it, to avoid disadvantaging themselves competitively versus banks with more capability to write software (and otherwise adapt operationally to the challenges same-day ACH posed)."
UK Banks use FiServ too. So that can't be the only reason.
joshvm · 17h ago
For ACH, it's the scheduling and batching that makes it slow. The transfer itself should be instant but often my bank sends it out around midnight. This is why Venmo and Zelle are so popular. You can also modify/cancel a bank transfer before it goes through, which is nice.
This is the same in Switzerland. If you request an IBAN transfer, it's never instant. The solution there for fast payments is called TWINT, which works at almost any POS terminal (you take a picture of the displayed QR code).
I think BACS is similarly "slow" due to the settlement process.
silotis · 15h ago
These days ACH settlement runs multiple times a day. The biggest source of delay for ACH transfers is your bank delaying release of the funds for risk management. ACH transfers can be reversed even after they have "settled" and if the receiving bank has already disbursed the funds then they have to eat the cost of reimbursing the sender. Reversals are more likely to happen soon after the transfer completes, so delaying release of the funds makes it less likely the bank will be left holding the bag.
maccard · 15h ago
People are almost always talking about Faster Payments [0] rather than BACS. It really is instant.
I was pleasantly surprised when I bought a house that I could just transfer everything instantly with faster payments. I was fully expecting to deal with CHAPS, etc.
But the faster payments ceiling is large enough that buying a house falls under the limit.
Aurornis · 16h ago
The US actually has two real-time payment systems: RTP and FedNow. The number of participating banks is growing rapidly.
Prior to that, you could get instant transfers but it came with a small fee because they were routed through credit card networks. The credit card networks took a fee but credit card transactions also have different guarantees and reversibility (e.g. costing the bank more in cases of fraud)
aidenn0 · 16h ago
From the linked RTP site: "Because of development and operational costs most banks and credit unions will offer "send only" capabilities"
Which means nobody can send me money.
FedNow on the backend is supported by fewer banks than Zelle is, which is probably why hardly any banks expose a front-end for it.
kccqzy · 15h ago
I am convinced that this is in some cases a pro-consumer behavior. A credit card company once pulled money from my bank via ACH due to the automatic payment feature I set up, but that bank account didn't have enough money in it. The bank sent me at least two emails about the situation. I finally noticed that second email and wired myself more money from a different account. The credit card company didn't notice anything wrong and didn't charge any late fees or payment returned fees. The bank didn't charge any overdraft fees or insufficient funds fees. And the wire transfer didn't have a fee due to account balance. (Needless to say, from then on I no longer juggle multiple bank accounts like that.)
The bank had an opportunity to notify me precisely because ACH is not real time. And I had an opportunity to fix it because wire transfers are almost real time (they finish in minutes, not days). I appreciate that when companies pull money from my account I get days of notice, but if I need to move money quickly I can do that too.
IshKebab · 15h ago
In most cases it's definitely better for it to be fast. For example I sold a buggy face to face today and they paid me by bank transfer, and the reason we could do that was that I had a high confidence it would turn up quickly and they weren't trying to scam me. It actually took around 1 second which is really quite fast.
You don't need slow transfers to get an opportunity to satisfy automatic payments. I don't know how it works, but in the UK direct debits (an automatic "take money from my account for bills" system) give the bank a couple of days' notice, so my banking app warns me if I don't have enough money. Bank transfers are still instant.
bjackman · 8h ago
Here in Switzerland bank transfers only take place during business hours
I believe this is because Ürs has to load my silver pieces onto the donkey and drive it to the other bank.
jaza · 8h ago
We're pretty lucky here in Australia. Over the past decade or so, PayID has been successfully rolled out to virtually all banks, giving us free and (usually) instant money transfers - I'd say more than half of all personal payments are now done with PayID. Old-skool bank transfers are still the norm for business and administrative payments, but that's changing too, and in any case, those transfers are increasingly being executed behind the scenes over Osko (aka PayID), so they end up settling in seconds (or at least hours) instead of days.
alliao · 9h ago
IS IT NOW!? last time I visited (long time ago) it was BACS, and the bank clerk told me it takes one day to "properly" register they've received my fund, one day to make sure it transferred and on the third and final day, the other bank can "properly" acknowledge they've received the fund thus why it took 3 FRIGGIN DAYS. I used so much cash back then.
Isn't there a law in the UK which says it must be fast?
Night_Thastus · 17h ago
I wish I could live in a world of fast.
C++ with no forward decls, no clang to give data about why the compile time is taking so long. 20 minute compiles. Only git tool I like (git-cola) is written in Python and slows to a crawl. gitk takes a good minute just to start up. Only environments are MSYS which is slow due to Windows, and WSL which isn't slow but can't do DPI scaling so I squint at everything.
arunc · 10h ago
I might get down voted for saying this on HN, but I'll still say it.
As C++ devs we used to complain a lot about its compilation speed. Now, after moving to Rust, sometimes we wish we could just go back to C++ due to Rust's terrible compilation speeds! :-)
pyman · 17h ago
The web is fast.
> Rarely in software does anyone ask for “fast.”
I don't think I can relate this article to what actually happened to the web. It went from being an unusable 3D platform to a usable 2D one. The web exploded with creativity and was out of control thanks to Flash and Director, but speeds were unacceptable. Once Apple stopped supporting it, the web became boring, and fast, very fast. A lot of time and money went into optimising the platform.
So the article is probably more about LLMs being the new Flash. I know that sounds like blasphemy, but they're both slow and melt CPUs.
thfuran · 16h ago
The web might be fast compared to in 2005 but only if you don't normalize for average CPU performance and bandwidth. Websites that are mostly text often still manage to take remarkable amounts of time to finish rendering and stop moving things around.
fasteo · 2h ago
> Rarely in software does anyone ask for “fast.”
It is implicit, in the same way that in a modern car you expect electric windows and air-conditioning (yes, back in the day, those were premium extras)
jspaetzel · 8h ago
Why's this so highly rated? Y'all don't know that fast is good?
alliao · 6h ago
guess everybody misses software that you can tell the maker cares...
grodes · 1h ago
What about centered text?
lmm · 7h ago
Very much the opposite in my experience. People, especially on this site, ask for "fast" regardless of whether they need it. If asked "how fast?" the answer is always "as fast as possible". And they make extremely poor choices as a result. Fast is useful up to a point, but faster than that is useless - maybe actively detrimental if you can e.g. generate research reports faster than you can read them.
You make much better code, and much better products, if you remove "fast" from your vocabulary. Instead, set specific, concrete latency budgets (e.g. 99.99% within x ms). You'll definitely end up with fewer errors and better maintainability than the people who tried to be "fast". You'll often end up faster than them too.
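A minimal sketch of what checking such a budget could look like (Python, with made-up numbers; nearest-rank is just one reasonable way to compute the percentile):

    import math

    def percentile(samples, q):
        """Nearest-rank percentile; samples is a non-empty list of latencies in ms."""
        ordered = sorted(samples)
        rank = max(1, math.ceil(q / 100 * len(ordered)))
        return ordered[rank - 1]

    def within_budget(samples, budget_ms=200.0, q=99.99):
        """True if the q-th percentile latency fits inside the budget."""
        return percentile(samples, q) <= budget_ms

    if __name__ == "__main__":
        # Hypothetical measurements: mostly ~50 ms with a slow tail.
        samples = [50.0] * 9_990 + [450.0] * 10
        print(percentile(samples, 99.99))  # the slow tail shows up here
        print(within_budget(samples))      # False: the tail blows the 200 ms budget

The point is that the target is explicit: once the tail fits the budget, further optimization stops being a requirement.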
gregorvand · 5h ago
Superhuman achieved their sub-something speed maybe (has anyone measured it except them? Genuinely, post a link, appreciated)
However the capital required will probably never happen again in relation to the return for any investor involved in that product.
Props to them for pushing the envelope, but they did it in the zero interest era and it's a shame this is never highlighted by them. And now the outcome is pretty clear in terms of where the company has ended up.
brailsafe · 15h ago
> Instagram usually works pretty well—Facebook knows how important it is to be fast.
Fast is indeed magical, that's why I exclusively browse Instagram from the website; it's so slow I dip out before they get me with their slot machine.
ygritte · 7h ago
How did over a thousand people upvote this hollow article? Am I the only one who was looking for substance in vain?
dang · 6h ago
The substance is in the audience, in the sense that a lot of people resonate with what the article is saying.
bitpush · 17h ago
> Superhuman's sub-100ms rule—plus their focus on keyboard shortcuts—changed the email game in a way that no one's been able to replicate, let alone beat.
I often hear this sort of thing "Facebook was a success using PHP therefore language choice isn't important" or in this case "superhuman made their product fast and they still failed so speed isn't important".
It's obviously wrong if you think about it for more than a second. All it shows is that speed isn't the only thing that matters but who was claiming that?
Speed is important. Being slow doesn't guarantee failure. Being fast doesn't guarantee success. It definitely helps though!
fuzzfactor · 13h ago
>Being fast helps, but is rarely a product.
>Being fast doesn't guarantee success.
Sometimes it can be a deciding factor though.
Also, sometimes speediness or responsiveness beyond nominal isn't as much of a "must have" as nominally fast performance in place of sluggishness.
stevage · 15h ago
> When was the last time you used airplane WiFi and actually got a lot done?
The greatest day of productivity in my life was a flight from Melbourne to New York via LAX. No wifi on either flight, but a bit in transit. Downloaded everything I needed in advance. Coded like a mofo for like 16 hours.
Fast internet is great for distractions.
cheema33 · 14h ago
Same here. I am more productive on a plane than anywhere else. And for the reasons you describe.
twodave · 11h ago
I work on optimization a large fraction of my time. It is not something learned in a week, month or even a year.
At least in B2B applications that rely heavily on relational data, the best developers are the ones who can optimize at the database level. Algorithmic complexity pretty much screams at me these days and is quickly addressed, but getting the damned query plan into the correct shape for a variety of queries remains a challenge.
Of course, knowing the correct storage medium to use in this space is just as important as writing good queries.
pclowes · 18h ago
Highly Agree.
Speed of all kinds is incredibly important. Give me all of it.
- Fast developers
- Fast test suites
- Fast feedback loops
- Fast experimentation
Someone (Napoleon?) is credited with saying "quantity has a quality all its own", in software it is "velocity has a quality all its own".
As long as there is some rigor and you aren't shipping complete slop, consistently moving very quickly fixes almost every other deficiency.
- It makes engineering mistakes cheaper (just fix them fast)
- It make product experimentation easy (we can test this fast and revert if needed)
- It makes developers ramp up quickly (shipping code increases confidence and knowledge)
- It actually makes rigor more feasible, as the most effective rigorous processes are lightweight and built-in.
Every line of code is a liability, the system that enables it to change rapidly is the asset.
Side note: every time I encounter JVM test startup lag I think someday I am going to die and will have spent time doing _this_.
According to Wikiquotes, this is a common misattribution, and the first known record is Ruth M. Davis from 1978, who attributes it to Lenin: https://en.wikiquote.org/wiki/Quantity
yellowapple · 12h ago
The flip-side of this is that if something is too fast, it raises doubts about whether it actually happened at all. I'm reminded of the TurboTax case, where Intuit found that adding a bunch of artificial loading screens to make it look like TurboTax was taking its time to really pore over customers' tax returns ended up being more appealing to users than not doing so. The actual "analyses" happen within less than a second, but that was (allegedly) too fast for users to believe.
emmelaich · 9h ago
A ticket booking system I was familiar with added latency after upgrades to maintain a particular experience for the operators.
I guess they were used to typing stuff then inspecting paperwork or other stuff waiting for a response. Plus, it avoided complaints when usage inevitably increased over time.
monkeyelite · 7h ago
That’s an unusual case because most customers use it once a year, and speed is number 3 or 4 on their priorities behind getting it right (not getting in trouble), and understanding wtf is going on.
pmarreck · 10h ago
Fast is why, after decades doing high-level scripting, I'm now exploring lower-level languages that live closer to the metal...
swinglock · 18h ago
Speed is the most fundamental feature. Otherwise we could do everything by hand and need no computers.
taylorallred · 15h ago
What's amazing to me is that often all it takes to go fast is to keep things simple. JBlow once said that software should be treated like a rocket ship: everything you add contributes weight.
marcus_holmes · 7h ago
Back in the 90's I ran a dev team building Windows applications in VB, and had the rule that the dev machines had to be lower-specced than the user machines they were programming for.
It was unpopular, because devs love the shiny. But it worked - we had nice quick applications. Which was really important for user acceptance.
I didn't make this rule because I hated devs (though self-hatred is a thing ofc), or didn't want to spend the money on shiny dev machines. I made it because if a process worked acceptably quickly on a dev machine then it never got faster than that. If the users complained that a process was slow, but it worked fine on the dev's machine, then it proved almost impossible to get that process faster. But if the dev experience of a process when first coding it up was slow, then we'd work at making it faster while building it.
I often think of this rule when staring at some web app that's taking 5 minutes to do something that appears to be quite simple. Like maybe we should have dev servers that are deliberately throttled back, or introduce random delays into the network for dev machines, or whatever. Yes, it'll be annoying for devs, but the product will actually work.
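For web apps, one cheap way to approximate the same discipline without buying slower hardware is to inject artificial latency in development only. A minimal sketch as WSGI middleware (the delay numbers are arbitrary, and the usage line assumes a Flask-style app exposing `app.wsgi_app`):

    import random
    import time

    class ArtificialLatency:
        """Delay every response in development so devs feel what users on
        slow networks or throttled servers will feel."""

        def __init__(self, app, min_ms=200, max_ms=800):
            self.app = app
            self.min_ms = min_ms
            self.max_ms = max_ms

        def __call__(self, environ, start_response):
            time.sleep(random.uniform(self.min_ms, self.max_ms) / 1000.0)
            return self.app(environ, start_response)

    # Hypothetical usage, guarded so it never runs in production:
    #   if app.debug:
    #       app.wsgi_app = ArtificialLatency(app.wsgi_app)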
kristianp · 7h ago
> Like maybe we should have dev servers that are deliberately throttled back
This is a good point. Often datasets are smaller in dev. If a reasonable copy of live data is used, devs would have an intuition of what is making things slow. Doesn't work for live data that is too big to replicate on a developer's setup though.
codingclaws · 17h ago
Fast and light weight. That's why I love vim/cli over IDEs.
Btw, cool site design.
trhaynes · 15h ago
Interesting that onkernel.com intentionally animates and slows down the loading of the web interface, making it harder to scroll and scan the site. Irony or good design?
jkubicek · 13h ago
Browse the HTML. This site looks hand-coded. The Google fonts and some light CSS are the only imported stuff. No javascript.
It's gorgeous
foopod · 13h ago
I was curious, so I checked, it is raw html. And yes it is beautiful.
Adding to this - that's why I insist all my students should learn touch-typing, for at least 10 minutes per lesson. It really changes how you interact with your computer, and how quickly touch typing lets you type as fast as you can think changes your approach to automating things in a quick script or doing some bash-fu. A very underrated skill in today's world.
RandomBacon · 13h ago
> students should learn touch-typing
I agree, but I wonder how not knowing how to spell would affect that. The high school kids I work with are not great spellers (nor do they have good handwriting).
Angostura · 4h ago
I'll take reliable over fast almost every time.
DrewADesign · 15h ago
Speed is an important usability consideration that gets disproportionate focus among many developers because it’s one of the few they can directly address with their standard professional toolkit. I think it’s a combo of a Hammer/Nail thing, and developers loving the challenge of code performance golf. (Though I never loved stuff like that, which is in part why I’m not a developer anymore.)
Figuring out the best way to present logically complex content, controls and state to users that don’t wrangle complexity for a living is difficult and specialized work, and for most users, it’s much more important than snappy click response.
Both of these things obviously exist on a spectrum and are certainly not mutually exclusive. (ILY McMaster-Carr) But projects rarely have enough time for either complete working features, or great thoughtful usability, and if you’re weighing the relative importance of these two factors, consider your audience, their goals, and their technical savvy before assuming good performance can make up for a suboptimal user workflow.
Thanks for enduring my indulgent rant.
abathologist · 12h ago
Finally, someone has thought about the importance of making things go faster!
Is the most pressing problem facing the world that we are not doing enough things fast enough? Seems a bit off the mark, IMO.
croes · 1h ago
I like fast, but more and more I get slow web applications where every click comes with a delay.
burnte · 17h ago
> Rarely in software does anyone ask for “fast.”
That's because it's understood that things should work as quickly as possible, and not be slow on purpose (generally). No one asks that modern language be used in the UI as opposed to Sanskrit or hieroglyphs, because it's understood.
sb057 · 12h ago
This page (consisting of 693 words) took a full second to load for me because it had to import multiple fonts from Google (which also constitute over 70% of the page's size).
dmix · 12h ago
Do you mean finish loading?
The Google webfont loader is (usually) non-blocking when done right, and the text should appear fine before the fonts load
The page loaded instantly for me
raincole · 14h ago
> Fast is relative
I once used Notion at work and for personal note taking. I'd never felt it was "slow." Until later I moved my notes to Obsidian. Now when I have to use Notion at my job it feels sluggish.
dml2135 · 12h ago
Notion just seems to get worse and worse. I used to love it but now I find it infuriatingly slow.
Glad to hear Obsidian is better as I’ve been considering it as an alternative.
dtkav · 9h ago
Obsidian's local first editing experience makes a huge difference to creativity and flow.
I've been working on making Obsidian real-time collaborative with the Relay [0] plugin. In combination with a few other plugins (and the core Bases plugin) you can build a pretty great Notion alternative.
I'm bullish on companies using Obsidian for knowledge management and AI driven workflows. It's pretty reasonable to build custom plugins for a specific vertical.
And there is no page cache. Sub-100ms is just a completely different experience.
emmelaich · 9h ago
Very nice. Also a plea. Don't animate the >. Or, don't wait for the animation to finish before showing the contents.
shayief · 6h ago
ah, interesting. It starts fetching tree items on mousedown (vs onclick) to load them faster, so > starts moving a bit too early.
jpb0104 · 17h ago
Agree. One of my favorite tropes to product and leadership is that “performance is a feature”.
1970-01-01 · 9h ago
Fast is dead. The only software that keeps getting faster are emulators to run legacy, bloat-free code.
crawshaw · 17h ago
This is a great blog post. I have seen internal studies at software companies that demonstrate this, i.e. reducing UI latency encourages more software use by users. (Though a quick search suggests none are published.)
brap · 15h ago
Yep. I often choose LLM apps not because of how great the model is, but how snappy the UI feels. Similarly I might choose the more lightweight models because they’re faster.
monkeydust · 16h ago
Trading software by its nature has to be fast: fast to display new information and fast to act on it per the user's intent.
beepbooptheory · 18h ago
> Asking an LLM to research for 6 minutes is already 10000x faster than asking for a report that used to take days.
Assuming, like, three days, 6 minutes is 720x faster. 10000x faster than 6 minutes is like a month and a half!
pron · 15h ago
More like 300x if you count working hours. Although I've yet to see anything that would take a person a few days (assuming the task is worth spending a few days on) and that an LLM could do in six minutes, even with human assistance.
w10-1 · 12h ago
True, but not fast. More fun than fast.
Fast reading does not just enumerate examples.
Fast reading does not straw-man.
Fun conveys opportunity and emotion: "changing behavior", "signals simplicity", "fun". Fun creates an experience, a mode, and stickiness. It's good for marketing, but a drag on operations.
Fast is principles with methods that just work. "Got it."
Fast has a time-to-value of now.
Fast is transformative when it changes a background process requiring all the infrastructure of tracking contingencies to something that can just be done. It changes system-2 labor into system-1 activity -- like text reply vs email folders, authority vs discussion, or take-out vs. cooking.
When writers figure out how to monetize fast - how to get recurrent paying users (with out-of-band payment) just from delivering value - then we'll no longer be dragged through anecdotes and hand-waving and all the salience-stretching manipulations that tax our attention.
Imagine an AI paid by time to absorb and accept the answer instead of by the token.
Fast is better than fun -- assuming it's all good, of course :)
stickfigure · 14h ago
Would someone please forward this article to the folks that work on Jira?
mpaepper · 15h ago
That's why we all code with voice now, because it's faster, right? Right?
danielmarkbruce · 17h ago
Google talked about this for years.
topspin · 9h ago
Yes. The same insight periodically appears. Also, often it's done with high contrast text, which is great.
ryan_lane · 7h ago
> Developers ship more often when code deploys in seconds (or milliseconds) instead of minutes.
I don't want my code deployed in seconds or milliseconds. I'm happy to wait even an hour for my deployment to happen, as long as I don't have to babysit it.
I want my code deployed safely, rolled out with some kind of sane plan (like staging -> canary -> 5% -> 20% -> 50% -> 100%), ideally waiting long enough at each stage of the plan to ensure the code is likely being executed with enough time for alerts to fire (even with feature flags, I want to make sure there's no weird side effects), and for a rollback to automatically occur if anything went wrong.
I then want to enable the feature I'm deploying via a feature flag, with a plan that looks similar to the deployment. I want the enablement of the feature flag, to the configured target, to be as fast as possible.
I want rollbacks to be fast, in-case things go wrong.
Another good example is UI interactions. Adding short animations to actions makes the UI slower, but can considerably improve the experience, by making it more obvious that the action occurred and what it did.
So, no, fast isn't always better. Fast is better when the experience is directly improved by making it fast, and you should be able to back that up with data.
Oh yeah, back in the late 80s we (for some finite and not so big values of "we") were counting MOS6502/6510 cycles to catch the electron beam on a display and turn on some nice/nasty visual effects.
Tell me "fast" again!
atoav · 4h ago
> Rarely in software does anyone ask for “fast.”
As someone working on embedded audio DSP code, I just had to laugh a little.
Yes, there is a ton of code that has a strict deadline. For audio that may be determined by your buffer size — don't write your samples to that buffer fast enough and you will hear it in potentially destructively loud fashion.
This changes the equation, since faster code now just means you are able to do more within that timeframe on the same hardware. Or you could do the same on cheaper hardware. Either way, it matters.
Similar things apply to shader coding, game engines, control code for electromechanical systems (there, missing the deadline can be even worse).
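To make the deadline concrete, here's a tiny back-of-the-envelope sketch in TypeScript. The numbers are illustrative assumptions (a 256-sample buffer at 48 kHz), not anyone's real system: the callback has roughly 5.3 ms to produce its samples, and whatever you shave off per sample is headroom for more processing on the same hardware.

    // Rough deadline math for an audio callback (numbers are illustrative assumptions).
    const sampleRate = 48_000;  // samples per second
    const bufferSize = 256;     // samples the callback must fill each time
    const deadlineMs = (bufferSize / sampleRate) * 1000;
    console.log(`~${deadlineMs.toFixed(2)} ms to fill the buffer`); // ~5.33 ms
    // Miss this budget and the hardware plays whatever is in the buffer: a click,
    // a repeat, or worse. Faster DSP code means more effects (or cheaper hardware)
    // inside the same fixed window.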
tylerflick · 9h ago
Slow is smooth, smooth is fast.
constantcrying · 17h ago
I think that people generally underestimate what even small increases in the interaction time between human and machine cost. Interacting with sluggish software is exhausting, clicking a button and being left uncertain whether it did anything is tedious and software being fast is something you can feel.
Windows is the worst offender here; the entire desktop is sluggish even though there is no computational task which justifies those delays.
deergomoo · 14h ago
Apple software, especially lately, can be really bad for it too. Single-core perf is slightly better on my iPad than on my MacBook Pro, and yet everything feels an order of magnitude slower. If I am impatiently tapping the spot where I know a button will appear, waiting for an animation to finish, some aspect of software design has gone horribly awry.
erwincoumans · 13h ago
iOS (iPhone, iPad) UI is typically smooth and fast though. If only car navigation and UI could be as responsive.
PaulHoule · 17h ago
There's the wondering whether the UI input registered at all, and the mental effort to suppress clicking again when you expect a delayed response.
henriquegodoy · 16h ago
I will apply this to the next interfaces I'm going to build.
devmor · 18h ago
> Rarely in software does anyone ask for “fast.”
> But software that's fast changes behavior.
I wonder if the author stopped to consider why these opposing points make sense, instead of ignoring one to justify the other.
My opinion is that "fast" only becomes a boon when features are robust and reliable. If you prioritize going twice as "fast" over rooting out the problems, you get problems at twice the rate too.
bithive123 · 18h ago
"The only way to go fast, is to go well." ― Robert C. Martin
christophilus · 16h ago
Software that goes fast changes human behavior. It seems you’re thinking it changes the software’s behavior. Not sure. Either that, or I don’t follow your comment at all.
devmor · 14h ago
I'm not really sure how to rephrase it, so I can try an example.
Let's say that the author has a machine that is a self-contained assembly line; it produces cans of soup. However, the machine has a problem - every few cans of soup, one can comes out sideways and breaks the machine temporarily, making them stop and unjam it.
The author suggests to double the speed of the machine without solving that problem, giving them their soup cans twice as fast, requiring that they unjam it twice as often as well.
I believe that (with situational exceptions) this is a bad approach, and I would address the problem causing the machine to get jammed before I doubled the speed of the machine.
That being said, this is a very simplistic view of the situation; in a real situation either of these solutions has a number of variables that may make it preferable over the other. My gripe with the piece is that the author suggests the "faster" approach is a good default that is "simple", "magical" and "fun". I believe it is shortsighted, causes compounding problems the more it is applied in sequence, and is only "magical" if you bury your head in the sand and tell yourself the problems are for someone else to figure out - which is exactly what the author handwaves away at the end, with a nebulous allusion to some future date when these tools, which we should accept because they are fast, will eventually be made good by some unknown person.
chaosprint · 14h ago
> Speed conquers all in martial arts.
TZubiri · 11h ago
"Fast signals simplicity"
Bookmarking this one
chaps · 17h ago
This is such an important principle to me that I've spent a lot of effort developing tooling and mental models to help with it. Biggest catalyst? Being on-call and being woken up at 3am when you're still waking up... in that state, you really don't want things to go slowly. You just want to fix the damn thing and get back to sleep.
For example, looking up command flags within man pages is slooooow and becomes agonizingly frustrating when you're waking up and people are waiting for you so that they can also go back to sleep. But if you've spent the time to learn those flags beforehand, you'll be able to get back to sleep sooner.
agcat · 18h ago
Beautiful software is fast! Love the blog.
betterhealth12 · 15h ago
Linear vs JIRA described in 1 word
deergomoo · 14h ago
One of these days I’m going to get around to writing a little bash script or something that will let me take a plain-ish text file and upload it into Jira via the API.
I should be able to create a Jira ticket in however long it takes me to type the acceptance criteria plus a second or two. Instead I’ve got slow loading pages, I’ve got spinners, I’ve got dropdowns populating asynchronously that steal focus from what I’m typing, I’ve got whatever I was typing then triggering god knows what shortcuts causing untold chaos.
For a system that is—at least how we use it at my job—a glorified todo list, it is infuriating. If I’m even remotely busy lately I just add “raise a ticket for x” to my actual todo list and do it some other time instead.
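For what it's worth, the "plain text in, ticket out" idea is only a few lines against Jira's REST API. A rough sketch in TypeScript (Node 18+); the site URL, project key, and issue type below are placeholders, and the endpoint assumed is Jira Cloud's documented "POST /rest/api/2/issue" with an email + API token:

    // Sketch: create a Jira issue from a summary and plain-text acceptance criteria.
    const JIRA_BASE = "https://your-company.atlassian.net"; // placeholder site
    const auth = Buffer.from(
      `${process.env.JIRA_EMAIL}:${process.env.JIRA_TOKEN}` // email + API token
    ).toString("base64");

    async function createTicket(summary: string, description: string): Promise<string> {
      const res = await fetch(`${JIRA_BASE}/rest/api/2/issue`, {
        method: "POST",
        headers: {
          Authorization: `Basic ${auth}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          fields: {
            project: { key: "PROJ" },      // placeholder project key
            summary,
            description,                   // acceptance criteria go here
            issuetype: { name: "Task" },   // placeholder issue type
          },
        }),
      });
      if (!res.ok) throw new Error(`Jira returned ${res.status}`);
      const issue = await res.json();
      return issue.key; // e.g. "PROJ-123"
    }

    createTicket("Add rate limiting to /login", "Given ... When ... Then ...")
      .then((key) => console.log(`Created ${key}`));

No spinners, no focus-stealing dropdowns: just however long it takes you to type the text, plus one round trip.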
hayroh · 10h ago
Facebook is fast?
nicoco · 1h ago
The article loses its credibility because of this; my thoughts too. Facebook and Instagram websites are among the worst offenders when it comes to "time-to-content" or whatever metric the cool kids use these days. Maybe the apps are faster, but I'd rather avoid spyware on my pocket computers. The author is probably running a $3k+ laptop and renewing it every year?
aprilthird2021 · 17h ago
I feel like this should have some kind of "promotional" or "ad" label. I agree wholeheartedly with the words here, but I also note that the author is selling the fast developer tools she laments the dearth of: https://www.catherinejue.com/kernel
Again, no ill will intended at all, but I think it straddles the promotional angle here a bit and maybe people weren't aware
devmor · 14h ago
The tool in question is also a fairly unethical scraping and botting tool, advertising defrauding services to bypass captchas and scrape websites against the owners' wishes.
ropable · 12h ago
"Fast" is just another optimisation goal in a spectrum. First make it correct, then make it good, then make it fast (is a reasonable rubric).
abdellah123 · 7h ago
small (font)
EGreg · 13h ago
My sites. In order of increasing complexity. Are they fast?
One of the biggest things our framework does as opposed to React, Angular, Vue etc. is we lazyload all components as you need them. No need for tree-shaking or bundling files. Just render (static, cached) HTML and CSS, then start to activate JS on top of it. Also helps massively with time to first contentful paint.
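That framework's actual code isn't shown here, but the general "serve static HTML, activate JS on demand" pattern looks roughly like this sketch; the data-component attribute and module paths are made up for illustration:

    // The server ships plain, cacheable HTML/CSS; each element that needs behaviour
    // declares it, e.g. <div data-component="photo-gallery">...</div>.
    // The component's JS is only fetched and initialised when it scrolls into view.
    const observer = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        if (!entry.isIntersecting) continue;
        const el = entry.target as HTMLElement;
        observer.unobserve(el);
        const name = el.dataset.component!;
        // Dynamic import: no bundle on initial load, no tree-shaking needed.
        import(`/components/${name}.js`).then((mod) => mod.activate(el));
      }
    });

    document
      .querySelectorAll<HTMLElement>("[data-component]")
      .forEach((el) => observer.observe(el));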
Linus Torvalds said exactly that in a talk about git years ago. It's crazy to think back how people used to use version control before git. Git totally changed how you can work by being fast.
andrewmcwatters · 18h ago
Conversely, we have a whole generation of entry-level developers who think 250ms is "fast," when doing on-device processing work on computers that have dozens of cores.
keybored · 18h ago
> But software that's fast changes behavior.
(Throw tomatoes now but) Torvalds said the same thing about Git in his Google talk.
riffic · 12h ago
this would align well with the concept of Permacomputing
> Instant settle felt surprising in a world where bank transfers usually take days.
Yeah, that's not "a world", it's just the USA. Parts of the world (the EU, UK, etc.) have already moved on from that. Don't assume that the USA is leading edge in all things.
dqv · 4h ago
> Yeah, that's not "a world" it's just the USA.
"In a world" is a figure of speech which acknowledges the non-universality of the statement being made. And no it is not "just the USA". Canada and Mexico are similarly slow to adopt real-time payments.
It is wild to tell someone "don't assume" when your entire comment relies on your own incorrect assumption about what someone meant.
SideburnsOfDoom · 1h ago
There is better commentary on the same basic point regarding SEPA / Faster Payments / FedNow and how the US lags world-leading practice in the other thread here: https://news.ycombinator.com/item?id=44738579
It's a bit more substantial, with fewer complaints about the semantics of the wording.
christophilus · 16h ago
You’re not wrong. The US banks all suck. I’m willing to bet that every single one of them sucks, though I’ve only tried a handful.
nobodywillobsrv · 6h ago
Is it just me or is the premise of this the opposite of your work life as well? I have worked in the space of "fast" primarily and that is the main objective. Fast, iterate ... Don't be like "IT" (the slow team nobody can fire who never finishes anything).
Of course fast has downsides but it's interesting this pitch is here. Must have occurred many times in the past.
"Fast" was often labeled "tactical" (as opposed to "strategic" in institutions). At the time I remember thinking a lot about how delays plus uncertainty meant death (nothing gets done, or worse). Even though Fast is often at the cost of noise and error, there is some principle that it can still improve things if not "too far".
Anyone know deeper writings on this topic?
rustybolt · 18h ago
> Rarely in software does anyone ask for “fast.”
Are you kidding me? My product owner and management ask me all the time to implement features "fast".
jasonjmcghee · 18h ago
Not sure if this is sardonic obstinacy... but taking it at face value: that's not what the statement is about.
I disagree with the statement too, as people definitely ask for UX / products to be "snappy", but this isn't about speed of development.
PaulHoule · 18h ago
I remember the time they were cracking down because I had entered 90%+ of the tickets into the ticket system (the product manager didn't write tickets) and told me that "every ticket has to explain why it is good for the end user".
I put it in a ticket to speed up the 40 minutes build and was asked "How does this benefit the end user?" and I said "The end user would have had the product six months ago if the build was faster."
dkarl · 17h ago
These days metrics are so ubiquitous that many internal back-end systems have SLAs for tail latencies as well.
rustybolt · 16h ago
Yeah, this was an attempt at humor. But it is quite easy to misunderstand the title.
jrm4 · 18h ago
Ew.
Genuinely hard to read this and think little more than, "oh look, another justification for low quality software."
jebarker · 18h ago
I think you misunderstood the use of "fast" in the article? They mean that the software should run fast, not be produced fast necessarily. In my experience software that truly runs fast is usually much higher quality.
[1] https://www.youtube.com/watch?v=hxM8QmyZXtg - "How fast should an unoptimized terminal run?"
[2] https://www.youtube.com/watch?v=NAVbI1HIzCE - "Practical Optimizations"
See https://www.reddit.com/r/programming/comments/1lxh8ip/study_...
The LLM workflow is competing with other ways of writing code: DIY, Stack Overflow, pairing, offshoring...
I bought a bunch of poker chips and taught Texas Hold'em to my kids. We have a fantastic time playing with no money on the line, just winning or losing the game based on who wins all the chips.
And use something like ntfy to get notifications on your phone:
https://ntfy.sh/
I’ve also seen people assign Claude code issues on GitHub and then use the GitHub mobile app on their phone to get notifications and review PRs.
I realized as I was doing it that I wouldn't be able to tell anyone about it because I would sound like the most obnoxious AI bro ever. But it worked! (For the simple requests I used it on.) The most annoying part was that I had to tell it to run rustfmt every time, because otherwise it would fail CI and I wouldn't be able to merge it. And then it would take forever to install a rust toolchain and figure out how to run clippy and stuff. But it did feel crazy to be able to work on it from the beach. Anyway, I'm apparently not very good at taking vacations, lol
Edit: clarity
I think coding assistants would end up being more helpful if, instead of trying to do what they're asked, they would come back with questions that help us (or force us) to think. I wonder if a context prompt that says, "when I ask you to do something, assume I haven't thought the problem through, and before doing anything, ask me leading questions," would help.
I think Leslie Lamport once said that the biggest resistance to using TLA+ - a language that helps you, and forces you to think - is because that's the last thing programmers want to do.
This is such a great observation, and I'm not quite sure why it happens. I'm not a programmer, but a signal-processing/systems engineer and researcher. The weird thing is that it seems to be the process of programming itself that causes the "not-thinking" behaviour. E.g. when I program a simulation and find that I must have a sign error somewhere in my implementation (sometimes you can see this from the results), I end up flipping every possible sign around instead of taking pen and paper and comparing theory with implementation; if I do other work, e.g. theory, that's not the case. I suspect we try to avoid the cost of the context switch and try to stay in the "programming flow".
I think the results are excellent, but I can hit a lot of dead ends along the way. I just spent several days trying out all sorts of approaches to PassKeys/WebAuthn. I finally settled on an approach that I think will work great.
I have found that the old-fashioned “measure twice, cut once” approach is highly destructive. It was how I was trained, so walking away from it was scary.
To be fair it’s great advice when you’re dealing with atoms.
Mutable patterns of electrons, not so much (:
I get what you're referring to here, when it's tunnel-vision debugging. Personally I usually find that coding/writing/editing is thinking for me. I'm manipulating the logic on screen and seeing how to make it make sense, like a math problem.
LLMs help because they immediately think through a problem and start raising questions and points of uncertainty. Once I see those questions in the <think> output, I cancel the stream, think through them, and edit my prompt to answer the questions beforehand. This often causes the LLM's responses to become much faster and shorter, since it doesn't need to agonise over those decisions any more.
TypeScript can produce astonishingly complex error messages when types don't match up, so I went through a couple of rounds of showing the errors to the assistant and getting suggestions that were wrong. But I got some ideas and did more experiments, and over the course of two days (making desired changes along the way) I figured out what was going wrong and cleared up the use of types such that I was really happy with my code. When I saw a red squiggle I usually knew right away what was wrong, and if I did ask the assistant it would also get it right right away.
I think there's no way I would have understood what was going on without experimenting.
When you can see what goes wrong with the naive plan you then have all the specific context in front of you for making a better plan.
If something is wrong with the implementation then I can ask the agent to then make a plan which avoids the issues / smells I call out. This itself could probably be automated.
The main thing I feel I'm "missing" is, I think it would be helpful if there were easier ways to back up in the conversation such that the state of the working copy was restored also. Basically I want the agent's work to be directly integrated with git such that "turns" are commits and you can branch at any point.
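In the meantime this is easy to approximate with a thin wrapper: snapshot the working copy after every agent turn so the conversation history and the git history line up. A sketch in TypeScript (Node); this is not any existing tool's API, just the idea:

    import { execFileSync } from "node:child_process";

    // Call this after each agent response that may have edited files.
    function commitTurn(turnNumber: number, promptSummary: string): void {
      execFileSync("git", ["add", "-A"]);
      // --allow-empty keeps the turn history linear even when the agent changed nothing.
      execFileSync("git", [
        "commit",
        "--allow-empty",
        "-m",
        `agent turn ${turnNumber}: ${promptSummary}`,
      ]);
    }

    // Rewinding the conversation then becomes an ordinary git operation:
    //   git checkout -b retry-from-turn-3 <sha-of-turn-3>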
Tools like Lean and Dafny are much more appreciated, as they generate code from the model.
TLA+ is for when you have a 1MLOC database written in Java or a 100KLOC GC written in C++ and you want to make sure your design doesn't lead to lost data or to memory corruption/leak (or for some easier things, too). You certainly can't do that with Dafny, and while I guess you could do it in Lean (if you're masochistic and have months to spare), it wouldn't be in a way that's verifiably tied to the code.
There is no tool that actually formally ties spec to code in any affordable way and at real software scale, and I think the reason people say they want what doesn't exist is precisely because they want to avoid the thinking that they'll have to do eventually anyway.
[1]: Lean and TLA+ are sort-of similar, but Dafny is something else altogether.
That is not the case for the TLA+ spec and your 1MLOC Java Database. You hope with fingers crossed that you've implemented the design, but have you?
I can measure that a physical wall has the same dimensions as specified in the blueprint. How do I know my program follows the TLA+ spec?
I'm not being facetious, I think this is a huge issue. While Dafny might not be the answer we should strive to find a good way to do refinement.
And the thing is, we can do it for hardware! Software should actually be easier, not harder. But software is too much of a wild west.
That problem needs to be solved first.
This is the essence of my workflow.
I dictate rambling, disorganized, convoluted thoughts about a new feature into a text file.
I tell Claude Code or Gemini CLI to read my slop, read the codebase, and write a real functional design doc in Markdown, with a section on open issues and design decisions.
I'll take a quick look at its approach and edit the doc to tweak its approach and answer a few open questions, then I'll tell it to answer the remaining open questions itself and update the doc.
When that's about 90% good, I'll tell the local agent to write a technical design doc to think through data flow, logic, API endpoints and params and test cases.
I'll have it iterate on that a couple more rounds, then tell it to decompose that work into a phased dev plan where each phase is about a week of work, and each task in the phase would be a few hours of work, with phases and tasks sequenced to be testable on their own in frequent small commits.
Then I have the local agent read all of that again, the codebase, the functional design, the technical design, and the entire dev plan so it can build the first phase while keeping future phases in mind.
It's cool because the agent isn't only a good coder, it's also a decent designer and planner too. It can read and write Markdown docs just as well as code and it makes surprisingly good choices on its own.
And I have complete control to alter its direction at any point. When it methodically works through a series of small tasks it's less likely to go off the rails at all, and if it does it's easy to restore to the last commit and run it again.
2. Thank you for the detailed explanation, it makes a lot of sense. If AI is really a very junior dev that can move fast and has access to a lot of data, your approach is what I imagine works - and crucially - why there is such a difference in outcomes using it. Because what you're saying is, frankly, a lot of work. Now, based on that work you can probably double your output as a programmer, but considering the many code bases I've seen that have 0 documentation, 0 tests, I think there is a huge chunk of programmers that would never do what you're doing because "it's boring".
3. Can you share maybe an example of this, please:
> and write a real functional design doc in Markdown, with a section on open issues and design decisions.
Great comment, I've favorite'd it!
> "An hour of debugging/programming can save you minutes of thinking"
The trap so many devs fall into is assuming code behaves the way they think it does. Or believing documentation or seemingly helpful comments. We really want to believe.
People's mental image is more often than not wrong, and debugging tremendously helps bridge the gap.
It's definitely possible to adapt these tools to be more useful in that sense... but it definitely feels counter to what the hype bros are trying to push out.
If you're using off-the-shelf LLMs, you always have the bottleneck of their speed.
A prompt like " I want to make this change in the code where any logic deals with XXX. To be/do XXX instead/additionally/somelogicchange/whatever"
It has been pretty decent at these types of changes and saves the time of poking through and finding all the places I would have updated manually, in a way that find/replace never could. Though I've never tried this on a huge code base.
If I reached a point where I would find this helpful, I would take this as a sign that I have structured the code wrongly.
"Now duplicate this code but invert the logic for data flowing in the opposite direction."
I'm simplifying this whole example obviously but that was the basic task I was working on. It was able to spit out in a few seconds what would have taken me probably more than an hour and at least one tedium headache break. I'm not aware of any pre LLM way to do something like that.
Or a little while back I was implementing a basic login/auth for a website. I was experimenting with high-output-token LLMs (I'm not sure that's the technical term) and asked one to make a very comprehensive login handler. I had to stop it somewhere in the triple digits of cases and functions. Perhaps not a great "pro" example of LLMs, but even though it was a hilariously overcomplex setup, it did give me some ideas I hadn't thought about. I didn't use any of the code though.
It's far from the magic the LLM sellers want us to believe in, but it can save time, same as various emacs/vim tricks can for devs who want to learn them.
> cursor agent does it just fine in the background
That's for a very broad definition of fine. And you still need to review the diff and check the surrounding context of each chunk. I don't see the improvement in metrics like productivity and cognitive load, especially if you need to do several rounds.
Now, once you have that, to actually make edits, you have to record a macro to apply at each point or just manually do the edit yourself, no? I don't pretend LLMs are perfect, but I certainly think using one is a much better experience for this kind of refactoring than those two options.
For me, it's like having a moodboard with code listings.
I used to have more patience for doing it the grep/macro way in emacs. It used to feel a bit zen, like going through the code and changing all the call-sites to use my new refactor or something. But I've been coding for too long to feel this zen any longer, and my own expectations for output have gotten higher with tools like language-server and tree-sitter.
The kind of refactorings I turn to an LLM for are different, like creating interfaces/traits out of structs or joining two different modules together.
https://youtu.be/f2mQXNnChwc?t=2135
https://youtu.be/zxS3zXwV0PU
And for Vim
https://youtu.be/wOdL2T4hANk
Standard search and replace in other tools pales in comparison.
I was writing some URL canonicalization logic yesterday. Because we rolled this out as an MVP, customers put URLs in all sorts of ways and we stored them in the DB. My initial pass at the logic failed on some cases. Luckily URL canonicalization is pretty trivially testable. So I took the most-used customers' URLs from our DB, sent them to Claude, and told Claude to come up with the "minimum spanning test cases" that cover this behavior. This took maybe 5-10 sec. I then told Zed's agent mode using Opus to make me a test file and use these test cases to call my function. I audited the test cases and ended up removing some silly ones. I iterated on my logic and that was that. Definitely faster than having to do this myself.
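The nice part of this kind of problem is that it collapses into a pure function plus a table of cases. A hedged sketch of what that can look like in TypeScript; the specific canonicalization rules and cases below are assumptions for illustration, not the actual logic described above:

    // Illustrative canonicalization: force https, strip "www.", drop fragments and
    // trailing slashes. Real rules would come from your own data.
    function canonicalizeUrl(raw: string): string {
      const url = new URL(raw.includes("://") ? raw : `https://${raw}`);
      url.protocol = "https:";
      url.hostname = url.hostname.toLowerCase().replace(/^www\./, "");
      url.hash = "";
      if (url.pathname.length > 1 && url.pathname.endsWith("/")) {
        url.pathname = url.pathname.slice(0, -1);
      }
      return url.toString();
    }

    // "Minimum spanning" cases: each row exercises one rule.
    const cases: Array<[string, string]> = [
      ["HTTP://WWW.Example.com",       "https://example.com/"],
      ["example.com/pricing/",         "https://example.com/pricing"],
      ["https://example.com/#section", "https://example.com/"],
    ];
    for (const [input, expected] of cases) {
      console.assert(canonicalizeUrl(input) === expected, `${input} -> ${canonicalizeUrl(input)}`);
    }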
LLMs are the antithesis of fast. In fact, being slow is a perceived virtue with LLM output. Some sites like Google and Quora (until recently) simulate the slow typed-output effect for their pre-cached LLM answers, just for credibility.
As much as I like agents, I am not convinced the human using them can sit back and get lazy quite yet!
I think LLM assistants help you become functional across a more broad context -- and completely agree that testing and reviewing becomes much, much more important.
E.g - a front end dev optimizing database queries, but also being given nonsensical query parameters that don't exist.
A senior can write, test, deploy, and possibly maintain a scalable microservice or similar sized project without significant hand-holding in a reasonable amount of time.
A junior might be able to write a method used by a class but is still learning significant portions and concepts either in the language, workflow orchestration, or infrastructure.
A principal knows how each microservice fits into the larger domain it serves, whether or not they understand all the services and all the domains involved.
A staff has significant principal understanding across many or all domains an organization uses, builds, and maintains.
AI code assistants help increase breadth and, with oversight, improve depth. One can move from the "T" shape to the "V" shape skillset far more easily, but one must never fully trust AI code assistants.
Early in my career as a software engineer, I developed a reputation for speeding things up. This was back in the day when algorithm knowledge was just as important as the ability to examine the output of a compiler, every new Intel processor was met with a ton of anticipation, and Carmack and Abrash were rapidly becoming famous.
Anyway, the 22 year old me unexpectedly gets invited to a customer meeting with a large multinational. I go there not knowing what to expect. Turns out, they were not happy with the speed of our product.
Their VP of whatever said, quoting: "every saved second here adds $1M to our yearly profit". I was absolutely floored. Prior to that moment I couldn't even dream of someone placing a dollar amount on speed, and so directly. Now 20+ years later it still counts as one of the top 5 highlights of my career.
P.S. Mentioning as a reaction to the first sentence in the blog post. But the author is correct when she states that this happens rarely.
P.P.S. There was another engineer in the room, who had the nerve to jokingly ask the VP: "so if we make it execute in 0 seconds, does it mean you're going to make an infinite amount of money?". They didn't laugh, although I thought it was quite funny. Hey, Doug! :)
I don't get it. Wouldn't going from 1 second to 0 seconds add the same amount of money to the yearly profit as going from 2 seconds to 1 second did? Namely, $1M.
Of course the joke was silly. But perhaps I should have provided some context. We were making industrial automation software. This stuff runs in factories. Every saved second shrinks the manufacturing time of a part, leading to an increase in total factory output. When extrapolating to absurd levels, zero time to manufacture means infinite output per factory (sans raw materials).
Geez, life in my opinion is not so serious. It’s okay to say stupid things and not feel bad about it, as long as you are not trying to hurt anyone.
I bet they felt great and immediately forgot about this bad joke.
But I also concur with you that it is good to bring some levity to “serious” conversations!!
Required reading for internet comedians.
They don't explicitly ask for it, but they won't take you seriously if you don't at least pretend to be. "Fast" is assumed. Imagine if Rust had shown up, identical in every other way, but said "However, it is slower than Ruby". Nobody would have given it the time of day. The only reason it was able to gain attention was because it claimed to be "Faster than C++".
Watch HN for a while and you'll start to notice that "fast" is the only feature that is necessary to win over mindshare. It is like moths to a flame as soon as something says it is faster than what came before it.
Like the "evergreen" things Amazon decided to focus on: faster delivery, greater selection, lower cost.
They don't say they buy the iPhone because it has the fastest CPU and most responsive OS, they just say it "just works".
Your example says it, people will go, this is like X (meaning it does/has the same features as X), but faster. And now people will flock from X to your X+faster thing.
Which tells us nothing about whether people would also move to an X+more-features, or an X+nicer-UX, or an X+cheaper, etc., without them being any faster than X, or even possibly slower.
Nah. React, for example, only garnered attention because it said "Look how much faster the virtual DOM is!". We could go on all day.
> People want features, people want compatibility
Yes, but under the assumption that it is already built to be as "fast" as possible. "Fast" is assumed. That's why "faster" is such a great marketing trick, as it tunes people into "Hold up. What I'm currently using actually sucks. I'd better reconsider."
"Fast" is deemed important, but it isn't asked for as it is considered inconceivable that you wouldn't always make things "fast". But with that said, keep in mind that the outside user doesn't know what "fast" is until there is something to compare it with. That is how some products can get away with not being "fast" — until something else comes along to show that it needn't be that way.
On one hand I like controlled components because there is a single source of truth for the data (a useState()) somewhere in the app, but you are forced to re-render for each keypress. With uncontrolled components on the other hand, there's the possible anarchy of having state in React and in the actual form.
I really like this library
https://react-hook-form.com/
which has a rational answer to the problems that turn up with uncontrolled forms.
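For anyone who hasn't run into this trade-off, here's a minimal sketch of the two approaches in plain React with TypeScript (not react-hook-form's API; the component names are made up): the controlled input re-renders the component on every keystroke, while the uncontrolled one lets the DOM own the value until submit.

    import { useRef, useState } from "react";

    // Controlled: React state is the single source of truth, but the component
    // re-renders on every keystroke.
    function ControlledName() {
      const [name, setName] = useState("");
      return <input value={name} onChange={(e) => setName(e.target.value)} />;
    }

    // Uncontrolled: the DOM holds the value; it is only read on submit, so typing
    // never re-renders the component.
    function UncontrolledName({ onSubmit }: { onSubmit: (name: string) => void }) {
      const ref = useRef<HTMLInputElement>(null);
      return (
        <form onSubmit={(e) => { e.preventDefault(); onSubmit(ref.current?.value ?? ""); }}>
          <input ref={ref} defaultValue="" />
          <button type="submit">Save</button>
        </form>
      );
    }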
https://krausest.github.io/js-framework-benchmark/current.ht...
But React started a movement where frontend teams were isolated from backend teams (who tend to be more conservative and performance-minded), tons of the view was needlessly pushed into browser rendering, and every page started using 20 different JSON endpoints that are often polling/pushing, adding overhead, etc. So by every measure it made the Web slower and more complicated, in exchange for some slightly easier, more cohesive design management (that needs changing yearly).
The particulars of the vdom framework itself are probably not that important in the grand scheme, unless its design encourages doing less of those things (which many newer ones do, but React is flexible).
Yes, fast wins people over. And yet we live in a world where the actual experience of every day computing is often slow as molasses.
Seriously though, you're so right- I often wonder why this is. If it's that people genuinely don't care, or that it's more that say ecommerce websites compete on so many things already (or in some cases maintain monopolies) that fast doesn't come into the picture.
"Fast" is the feature people always wanted, but absent better information, they have to assume that is what they already got. That is why "fast" marketing works so well. It reveals that what they thought was pretty good actually wasn't. Adding the missing kitchen sink doesn't offer the same emotional reaction.
This is what people are missing. Even those "slow" apps are faster than their alternatives. People demand and seek out "fast", and I think the OP article misses this.
Even the "slow" applications are faster than their alternatives or have an edge in terms of speed for why people use them. In other words, people here say "well wait a second, I see people using slow apps all the time! People don't care about speed!", without realizing that the user has already optimized for speed for their use case. Maybe they use app A which is 50% as fast as app B, but app A is available on their toolbar right now, and to even know that app B exists and to install it and learn how to use it would require numerous hours of ramp up time. If the user was presented with app A and app B side by side, all things equal, they will choose B every time. There's proficiency and familiarity; if B is only 5% faster than A, but switching to B has an upfront cost in days to able to utilize that speed, well that is a hidden speed cost and why the user will choose A until B makes it worth it.
Speed is almost always the universal characteristic people select for, all things equal. Just because something faster exists, and it's niche, and hard to use (not equal for comparison to the common "slow" option people are familiar with), it doesn't mean that people reject speed, they just don't want to spend time learning the new thing, because it is _slower_ to learn how to use the new thing at first.
We don't have to assume. We know that JavaScript is slow in many cases, that shipping more bundles instead of less will decrease performance, and that with regard to the amount of content served generally less is more.
Whether this amount of baggage every web app seems to come with these days is seen as "necessary" or not is subjective, but I would tend to agree that many developers are ignorant of different methods or dislike the idea of deviating from the implied norms.
I’ve mentioned this before.
Quest Diagnostics, their internal app used by their phlebotomists.
I honestly don’t know how this app is done, I can only say it appears to run in the tab of a browser. For all I know it’s a VB app running in an ActiveX plugin, if they still do that on Windows.
L&F looks classic Windows GUI app, it interfaces with a signature pad, scanner, and a label printer.
And this app flies. Dialogs come and go, the operator rarely waits on this UI, when she is keying in data (and they key in quite a bit), the app is waiting for the operator.
Meanwhile, if I want to refill a prescription, it's fraught with beach balls, those shimmering boxes, and, of course, lots of friendly whitespace and scrolling. All to load a med name and a drugstore address, and ask 4 yes/no questions.
I look at that Quest app mouth agape, it’s so surprisingly fast for an app in this day and age.
And for that, we absolutely do have points of comparison, and yeah, pretty much all web apps have bad interactivity because they are limited too much by network round trip times. It's an absolute unicorn web app that does enough offline caching.
It's also absurd to assume that applications are as fast as they could be. There is basically always room for improvement, it's just not being prioritised. Which is the whole point here.
I'm already pissed I have to use the damn thing, please don't piss me off more.
Wait.
Wait for typing indicator.
Wait for cute text-streaming.
Skip through the paragraph of restating your question and being pointlessly sycophantic.
Finally get to the meat of the response.
It’s wrong.
C and C++ were and are the benchmark, it would have been revolutionary to be faster and offer memory safety.
Today, in some cases Rust can be faster, in others slower.
Fast is _absolutely not_ the only thing we care about. Not even top 5. We are addicted to _convenience_.
Considering the current state of the Web and user application development, I tend to agree with regard to its developers, but HN seems to still abide by other principles.
I imagine a large chunk of us would gladly throw all that out the window and only write super fast efficient code structures, if only we could all get well paid jobs doing it.
I distinctly remember when Slashdot committed suicide. They had an interface that was very easy for me to scan and find high value comments, and in the name of "modern UI" or some other nonsense needed to keep a few designers employed, completely revamped it so that it had a ton of whitespace and made it basically impossible for me to skim the comments.
I think I tried it for about 3 days before I gave up, and I was a daily Slashdot reader before then.
I work at an e-waste recycling company. Earlier this week, I had to test a bunch of laptop docking stations, so I kept force refreshing my blog to see if the Ethernet port worked. Thing is, it loads so fast, I kept the dev tools open to see if it actually refreshed.
bonus, these have both http & https endpoints if you needed a differential diagnosis or just a means to trip some shitty airline/hotel walled garden into saying hello.
It does happen less than it used to, but still.
I blame HN switching to AWS. Downtime also increased after the switch.
(Those are trick questions, because we haven't switched to AWS. But I genuinely would like to hear the answers.)
(We did switch to AWS briefly when our hosting provider went down because of a bizarre SSD self-bricking incident a few years ago..but it was only for a day or two!)
I wrote my take on an ideal UI (purely clientside, against the free HN firebase API, in Elm): https://seville.protostome.com/.
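For reference, the free HN API mentioned there is just Firebase-hosted JSON, so any client can do what that Elm UI does. A tiny TypeScript sketch (Node 18+; the endpoints are the real public ones, error handling omitted):

    // Fetch the current front page from the public Hacker News API.
    const API = "https://hacker-news.firebaseio.com/v0";

    async function frontPage(count = 30): Promise<void> {
      const ids: number[] = await (await fetch(`${API}/topstories.json`)).json();
      const items = await Promise.all(
        ids.slice(0, count).map((id) => fetch(`${API}/item/${id}.json`).then((r) => r.json()))
      );
      for (const item of items) console.log(`${item.score}\t${item.title}`);
    }

    frontPage();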
I actually never care about the vote count but have been on this site long enough to recognise the names worth paying attention to.
Also the higher contrast items are the click/tap targets.
Just goes to show that all of us reading HN don’t actually share with each other how we’re reading HN :)
Too funny… thank you!!
If you could find what you want and read it quickly, you might not spend 5 extra seconds lost on their page, and it's those extra seconds that pad their stats for advertisers. Bonus points if the stupid page loads in such a way that you accidentally click on something and give them a "conversion".
Sadly financial incentive is almost always towards tricking people into doing something they don't want to do instead of just actually giving them what they fucking want.
Northstar should be user satisfaction. For some products that might be engagement (eg entertainment service) while for others it is accomplishing a task as quickly as possible and exiting the app.
HN might have over 100 chars per line of text. It could be better. I know I could do it myself, and I do. But "I fixed it for me" doesn't fix it for anyone else.
I think low density UIs are more beginner friendly but power users want high density.
Having 50 buttons and 10 tabs shoved in your face just makes for opaqueness, power user or not.
Different people process visual information differently, and people reading articles have different goals, different eyesight, and different hardware setups. And we already have a way for users to tell a website how wide they want its content to be: resizing their browser window. I set the width of my browser window based on how wide I want pages to be; and web designers who ignore this preference and impose unreadable narrow columns because they read about the "optimal" column width in some study or another infuriate me to no end. Optimal is not the same for everyone, and pretending otherwise is the antithesis of accessibility.
This means that I have a difficult time reading text with very large paragraphs. If a paragraph goes on for 10+ lines, I'll start to lose my place at the end of most lines. This is infuriating and drastically impairs my ability to read and comprehend the text.
It's interesting to me that you mention preferring a ragged right over justification, because I literally do not notice the difference. This suggests to me that we read in different ways -- perhaps you focus on the shape and boundaries of a line more than the shape of a paragraph. This makes intuitive sense to me as to why you would prefer narrower columns.
I don't think that I'm "right" for preferring wider columns or that you or anyone else are "wrong" for preferring narrower columns. I think it's just how my brain learned to process text.
I have pretty strong opinions on what's too wide of a column and what's too narrow of a column, so I won't fullscreen a browser window on anything larger than a laptop. Rather, I'll set it for a size that's comfortable for me. If some web designer decides "actually, your preferred text width is wrong, use mine instead" then I'm gonna be pretty annoyed, and I think rightfully so, because what those studies say is "optimal" for the average person is nigh unreadable for me. (Daring Fireball is the worst offender I can think of off the top of my head. I also find desktop Wikipedia's default view pretty hard to read, but the toggleable "wide" mode is excellent).
The text density, however, I rather like.
The site seemed to start to go downhill when it was sold, and got into a death spiral of less informed users, poor moderation, people leaving, etc. It's amazing that it's still around.
Plus Rusty just pushed out Kuro5hin and it felt like “my scene” kind of migrated over.
As an aside, Kuro5hin was the only “large” forum that I ever bothered remembering people’s usernames. Every other forum it’s all just random people. (That isn’t entirely true, but true enough)
It was interesting in a different way though.
Like Adequacy.
Did you also move over to MetaFilter ?
I don't really get why so many websites are slow and bloated these days. There are tools like SpeedCurve which have been around for years yet hardly anyone I know uses them.
I know there are changes to the moderation that have taken place many times, but not to the UI. It's one of the most stable sites in terms of design that I can think of.
What other sites have lasted this long without giving in to their users' whims?
Over the last 4 years my whole design ethos has transformed to "WWHND" (What Would Hacker News Do?) every time I need to make any UI changes to a project.
With IRC it's basically part of the task, but in every forum I read, it's rare that I ever consider who's saying what.
Doesn't really help a ton with recognizing but it makes it easier to track within a thread.
There are people who show up much less often and have less obvious usernames, Andrew Ayer is agwa for example, and I'm sure there are people I blank on entirely.
Once in a while I will read something and realise oh, that "coincidental" similarity of username probably isn't a coincidence, I believe the first time I realised it was Martin Uecker was like that for example. "Hey, this HN person who has strong opinions about the work by Uecker et al has the username... oh... Huh. I guess I should ask"
[1] https://speechischeap.com
* developer insecurity and pattern lock in
* platform limitations. This is typically software execution context and tool chain related more than hardware related
* most developers refuse to measure things
Even really slow languages can result in fast applications.
https://geppetto.app
I contributed "whisperfile" as a result of this work:
* https://github.com/Mozilla-Ocho/llamafile/tree/main/whisper....
* https://github.com/cjpais/whisperfile
if you ever want to chat about making transcription virtually free or so cheap for everyone let me know. I've been working on various projects related to it for a while. including open source/cross-platform superwhisper alternative https://handy.computer
Woah, that's really cool, CJ! I've been toying the with idea of standing up a cluster of older iPhones to run Apple's Speech framework. [1] The inspiration came from this blog post [2] where the author is using it for OCR. A couple of things are holding me back: (1) the OSS models are better according to the current benchmarks and (2) I have customers all over the world, so that geographical load-balancing is a real factor. With that said, I'll definitely spend some time checking out your work. Thanks for sharing!
[1] https://developer.apple.com/documentation/speech
[2] https://terminalbytes.com/iphone-8-solar-powered-vision-ocr-...
S3 is “slow” at the level of a single request. It’s fast at the level of making as many requests as needed in parallel.
Being “fast” is sometimes critical, and often aesthetic.
I have found this true for myself as well. I changed back over to Go from Rust mostly for the iteration speed benefits. I would replace "fast" with "quick", however. It isn't so much I think about raw throughput as much as "perceived speed". That is why things like input latency matter in editors, etc. If something "feels fast" (ie Go compiles), we often don't even feel the need to measure. Likewise, when things "feel slow" (ie Java startup), we just don't enjoy using them, even if in some ways they actually are fast (like Java throughput).
Whatever Rust's strongest defenders like to claim, compilation speed and avoiding bloat just really weren't goals. That's cascaded down into most of the ecosystem's most used dependencies, and so most Rust ecosystem projects just adopt the mindset of "just use the dependency". It's quite difficult to build a substantial project without pulling in hundreds of dependencies.
I went on a lengthy journey of building my own game engine tools to avoid bloat, but it's tremendously time consuming. I reinvented the Mac / Windows / Web bindings by manually extracting auto-generated bindings instead of using crates that had thousands of them, significantly cutting compile time. For things like derive macros and serialization I avoided using crates like Serde that have a massive parser library included and emit lots of code. For web bindings I sorted out simpler ways of interacting with Javascript that didn't require a heavier build step and separate build tool. That's just the tip of the iceberg I can remember off the top of my head.
In the end I had a little engine that could do 3D scenes, relatively complex games, and decent GPU driven UI across Mac, Windows, and Web that built in a fraction of the time of other Rust game engines. I used it to build a bunch of small game jam entries and some web demos. A clean release build on the engine on my older laptop was about 3-4 seconds, vastly faster than most Rust projects.
The problem is that it was just a losing battle. If I wanted Linux support or to use pretty much any other crate in the Rust ecosystem, I'd have to pull in dependencies that alone would multiple the compile time.
In some ways that's an OK tradeoff for an ecosystem to make, but compile times do impede iteration loops and they do tend to reflect complexity. The more stuff you're building on top of the greater the chances are that bugs are hard to pin down, that maintainers will burn out and move on, or that you can't reasonably understand your stack deeply.
Looking completely past the languages themselves I think Zig may accrue advantages simply because its initial author so zealously defined a culture that cares about driving down compile times, and in turn complexity. Pardon the rant!
Meanwhile, compiler performance just didn't have a strong advocate with the right vision of what could be done. At least that's my read on the situation.
By comparison, Go doesn't have _that_ problem because it just doesn't have metaprogramming. It's easy to stay fast when you're dumb. Go is the Forrest Gump of programming languages.
In general, I like cargo a lot better than the Go tooling, but I do wish the Rust stdlib was a bit more "batteries included".
I loathe working on it but don’t have the time to refactor legacy code.
———————-
I have another project where I am principal engineer; it uses Django, Next.js, Docker Compose for dev, and Ansible to deploy, and it's a dream to build in and push features to prod. Maybe I'm more invested so it's more interesting to me, but also, not waiting 10 seconds to register and hot-reload a React change is much more enjoyable.
On interfaces:
It's not only the slowness of the software or machine we have to wait for; the act of moving your limb also adds a delay. Navigating to a button with the mouse adds more friction than hitting a shortcut on the keyboard. It's a needless feedback loop. If you master your tool, all menus should go away. People who live in the terminal know this.
As a personal anecdote, I use custom rofi menus (think raycast for Linux) extensively for all kinds of interaction with data or file system (starting scripts, opening notes, renaming/moving files). It's notable how your interaction changes if you remove friction.
Venerable tools in this vein: vim, i3, kitty (former tmux), ranger (on the brim), qutebrowser, visidata, nsxiv, sioyek, mpv...
Essence of these tools is always this: move fast, select fast and efficiently, ability to launch your tool/script/function seamlessly. Be able to do it blindly. Prefer peripheral feedback.
I wish more people saw what could be and built more bicycles for the mind.
There's never a reason to make a content website use heavyweight JS or CSS though.
For example, if you're running experiments in one big batch overnight, making that faster doesn't seem very helpful. But with a big enough improvement, you can now run several batches of experiments during the day, which is much more productive.
He pays special attention to the speed of his applications. The Russian social network VK worked blazingly fast. The same goes for Telegram.
I always noticed it but not many people verbalized it explicitly.
But I am pretty sure that people realize it subconsciously and it affects user behaviour metrics positively.
These operations are near instant for me on telegram mobile and desktop.
It's the fastest IM app for me by a magnitude.
It's a retroactively fixed thing. Like imagine forgetting to make a UI, shipping just an API to a customer, then thinking "oh shit, they need a UI, they are not programmers", and only noticing from customer complaints. That is how performance is often treated.
This is probably because performance problems usually require load or unusual traffic patterns, which require sales, which require demos, which don't require performance tuning as there is only one user!
If you want to speed your web service up, the first thing is to invest time, and maybe money, in really good observability. It should be easy for anyone on the team to find a log, see what the CPU is at, etc. Then set up proxy metrics around the speed you care about, talk about them every week, and take action.
Proxy metrics means you likely can't (and probably should not) check the speed at which Harold can sum his spreadsheet every minute, but you can check the latency of the major calls involved. If something is slow but the metrics look good, then profiling might be needed.
Sometimes there is an easy speed up. Sometimes you need a new architecture! But at least you know what's happening.
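As a starting point, even a hand-rolled proxy metric beats nothing: record the duration of the calls you care about and look at the tail, not the average. A minimal TypeScript sketch; the operation name in the usage comment is invented:

    // Record durations per operation and report p50/p99 once a minute.
    const samples = new Map<string, number[]>();

    function record(op: string, ms: number): void {
      if (!samples.has(op)) samples.set(op, []);
      samples.get(op)!.push(ms);
    }

    function percentile(values: number[], p: number): number {
      const sorted = [...values].sort((a, b) => a - b);
      return sorted[Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1)];
    }

    // Wrap the major calls you care about with this helper.
    async function timed<T>(op: string, fn: () => Promise<T>): Promise<T> {
      const start = performance.now();
      try { return await fn(); } finally { record(op, performance.now() - start); }
    }

    setInterval(() => {
      for (const [op, values] of samples) {
        console.log(`${op}: p50=${percentile(values, 50).toFixed(1)}ms p99=${percentile(values, 99).toFixed(1)}ms`);
      }
    }, 60_000);

    // Usage (hypothetical call):
    //   await timed("sum_spreadsheet", () => sumSpreadsheet(sheetId));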
As a result, performance (and a few other things) functionally never gets “requested”. Throw in the fact that for many mid-to-large orgs, software is not bought by the people who are forced to use it and you have the perfect storm for never hearing about performance complaints.
This in turn, justifies never prioritising performance.
First, efficient code is going to use less electricity, and thus, fewer resources will need to be consumed.
Second, efficient code means you don't need to be constantly upgrading your hardware.
[1]: https://en.wikipedia.org/wiki/Rebound_effect_(conservation)
[2]: https://en.wikipedia.org/wiki/Jevons_paradox
So if we use cost as a proxy for environmental impact, it's not saving much at all.
I think this is a meme to help a different audience care about performance.
Almost everywhere I’ve worked, user-facing speed has been one of the highest priorities. From the smallest boring startup to the multi billion dollar company.
At companies that had us target metrics, speed and latency was always a metric.
I don’t think my experience has been all that unique. In fact, I’d be more surprised if I joined a company and they didn’t care about how responsive the product felt, how fast the pages loaded, and any weird lags that popped up.
Microsoft needs to take heed: Explorer's search and Teams, for example, make your computer seem extremely slow. VS Code, on the other hand, is fast enough, though still slower than native editors such as Sublime Text.
For what it's worth, I built myself a custom Jira board last month so I could instantly search, filter and group tickets (by title, status, assignee, version, ...).
Motivation: Running queries and finding tickets on JIRA kills me sometimes.
The board is not perfect, but works fast and I made it superlightweight. In case anybody wants to give it a try:
https://jetboard.pausanchez.com/
Don't try it on mobile; use desktop. Unfortunately it uses a proxy and requires an API key, but it doesn't store anything in the backend (it just proxies the request because of CORS). Maybe there's an API or a way to query a Jira Cloud instance directly from the browser; I just tried the first approach and moved on. It even crossed my mind to add it to the Jira marketplace somehow...
Anyway, caches stuff locally and refreshes often. Filtering uses several tricks to feel instant.
UI can be improved, but uses a minimalistic interface on purpose, like HN.
If anybody tries it, I'll be glad to hear your thoughts.
I used to play games on N64 with three friends. I didn’t even have a concept of input lag back then. Control inputs were just instantly respected by the game.
Meanwhile today, if I want to play rocket league with three friends on my Xbox series S (the latest gen, but the less powerful version), I have to deal with VERY noticeable input lag. Like maybe a quarter of a second. It’s pretty much unplayable.
Your experience is not normal.
If you’re seeing that much lag, the most likely explanation is your display. Many TVs have high latency for various processing steps that doesn’t matter when you’re watching a movie or TV, but becomes painful when you’re playing games.
It was either fast, or nothing. Image quality suffered, but speed was not a parameter.
With LCDs, lag became a trade-off parameter. Technology enabled something to become worse, so economically it was bound to happen.
Sounds reasonable, but no.
https://www.extron.com/article/ntscdb4
The lag is due to some software. So the problem is with how software engineering as a field functions.
It could be my unusual nervous system. I'm really good at rhythm games, often clearing a level on my first try and amazing friends who can beat me at other genres. But when I was playing League of Legends, which isn't very twitchy, it seemed like I would just get hit and there was nothing I could do about it when I played on a "gaming" laptop but found I could succeed at the game when I hooked up an external monitor. I ran a clock and took pictures showing that the external monitor was 30ms faster than the built-in monitor.
Nowadays it's a fancy touch display that requires concentration and is often sluggish, and the machine often feels cheap and makes a cheap sound when tapped. I don't think the operators ever enjoy interacting with it, and the software is often slow across the network...
I'm all for fast. It shows no matter what, at least somebody cared enough for it to be blazing fast.
Speed is what made Google, which was a consumer product at the time. (I say this because it matters more in consumer products.)
Beautiful tools make you stretch to make better things with them.
baba is fast.
I sometimes get calls like "You used to manage a server 6 years ago and we have an issue now", so I always tell the other person "type 'alias' and read me the output"; that's how I can tell whether it's really a server I used to work on.
fast is my copilot.
https://en.wikipedia.org/wiki/HTTP/3
https://en.wikipedia.org/wiki/QUIC
They don't require you to use QUIC to access Google, but it is one of the options. If you use a non-supporting browser (Safari prior to 2023, unless you enabled it), you'd access it with a standard TCP-based HTTP connection.
Hm, shell environment is fairly high on the list of things I'd expect the next person to change, even assuming no operational or functional changes to a server.
And of course they'd be using a different user account anyway.
Working quickly is more important than it seems (2015) - https://news.ycombinator.com/item?id=36312295 - June 2023 (183 comments)
Speed matters: Why working quickly is more important than it seems (2015) - https://news.ycombinator.com/item?id=20611539 - Aug 2019 (171 comments)
Speed matters: Why working quickly is more important than it seems - https://news.ycombinator.com/item?id=10020827 - Aug 2015 (139 comments)
jsomers gets a lot of much-deserved love here!
This is true, but I also think there's a backlash now and therefore some really nice mostly dev-focused software that is reeaaaaly fast. Just to name a few:
- Helix editor - Ripgrep - Astral python tools (ruff, uv, ty)
That's a tiny subsection of the mostly bloated software that exists. But it makes me happy when I come across something like that!
Also, browsers seem to be really responsive despite being some of the most feature-bloated products on earth thanks to ever-expanding web standards. I'm not really counting this, though: while Firefox and Chrome themselves rarely lag, the websites I view with them often do, so it's not really a fast experience.
I don't know, there are a sizeable subset of folks who value fast, and it's a big subset, it's not niche.
Search for topics like turning off animations or replacing core user space tools with various go and rust replacements, you'll find us easily enough.
I'm generally a pretty happy macOS user, especially since M1 came along. But I am seriously considering going back to Linux. I maintain a parallel laptop with NixOS, and I'm finding more and more niggles on the Mac side where I could have lower friction on Linux.
I have been asking about latency-free computing for a very long time. Everything in computing now is slow.
"This is the largest reason why in-place upgrades to the U.S. financial system are slow. Coordinating the Faster ACH rollout took years, and the community bank lobby was loudly in favor of delaying it, to avoid disadvantaging themselves competitively versus banks with more capability to write software (and otherwise adapt operationally to the challenges same-day ACH posed)."
From the great blog Bits About Money: https://www.bitsaboutmoney.com/archive/community-banking-and...
This is the same in Switzerland. If you request an IBAN transfer, it's never instant. The solution there for fast payments is called TWINT, which works at almost any POS terminal (you scan the displayed QR code).
I think BACS is similarly "slow" due to the settlement process.
[0] https://en.m.wikipedia.org/wiki/Faster_Payment_System_(Unite...
But the faster payments ceiling is large enough that buying a house falls under the limit.
https://real-timepayments.com/Banks-Real-Time-Payments.html
Prior to that, you could get instant transfers but it came with a small fee because they were routed through credit card networks. The credit card networks took a fee but credit card transactions also have different guarantees and reversibility (e.g. costing the bank more in cases of fraud)
Which means nobody can send me money.
FedNow on the backend is supported by fewer banks than Zelle is, which is probably why hardly any banks expose a front-end for it.
The bank had an opportunity to notify me precisely because ACH is not real time, and I had an opportunity to fix it because wire transfers are almost real time (they finish in minutes, not days). I appreciate that when companies pull money from my account I get days of notice, but if I need to move money quickly I can do that too.
You don't need slow transfers to get an opportunity to satisfy automatic payments. I don't know how it works but in the UK direct debits (an automatic "take money from my account for bills" system) gives the bank a couple of days notice so my banking app warns me if I don't have enough money. Bank transfers are still instant.
I believe this is because Ürs has to load my silver pieces onto the donkey and drive it to the other bank.
C++ with no forward decls, and no clang to give data about why compile times are so long. 20-minute compiles. The only git tool I like (git-cola) is written in Python and slows to a crawl; gitk takes a good minute just to start up. The only environments are MSYS, which is slow because of Windows, and WSL, which isn't slow but can't do DPI scaling, so I squint at everything.
As C++ devs we used to complain a lot about its compilation speed. Now, after moving to Rust, we sometimes wish we could go back to C++ because of Rust's terrible compile times! :-)
> Rarely in software does anyone ask for “fast.”
I don't think I can relate this article to what actually happened to the web. It went from being an unusable 3D platform to a usable 2D one. The web exploded with creativity and was out of control thanks to Flash and Director, but speeds were unacceptable. Once Apple stopped supporting it, the web became boring, and fast, very fast. A lot of time and money went into optimising the platform.
So the article is probably more about LLMs being the new Flash. I know that sounds like blasphemy, but they're both slow and melt CPUs.
It is implicit, in the same way that in a modern car you expect electric windows and air-conditioning (yes, back in the day, those were premium extras)
You make much better code, and much better products, if you drop "fast" from your vocabulary. Instead, set specific, concrete latency budgets (e.g. 99.99% of requests within x ms). You'll definitely end up with fewer errors and better maintainability than the people who tried to be "fast". You'll often end up faster than them too.
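As a rough illustration of what a concrete budget looks like in practice, here is a minimal Python sketch that checks a made-up 50 ms p99 budget against a synthetic handler; the percentile, the budget, the sample count and the handler are all placeholder choices, not a recommendation:

    # A minimal sketch of enforcing a latency budget instead of chasing "fast".
    # The 50 ms budget, the p99 target and the handler are example values only.
    import time

    BUDGET_SECONDS = 0.050
    PERCENTILE = 0.99
    N_SAMPLES = 1_000

    def handler():
        # stand-in for the operation under test
        return sum(range(10_000))

    samples = []
    for _ in range(N_SAMPLES):
        start = time.perf_counter()
        handler()
        samples.append(time.perf_counter() - start)

    samples.sort()
    p99 = samples[int(PERCENTILE * (N_SAMPLES - 1))]
    assert p99 <= BUDGET_SECONDS, f"p99 {p99 * 1000:.2f} ms blows the {BUDGET_SECONDS * 1000:.0f} ms budget"
    print(f"p99 = {p99 * 1000:.2f} ms, within budget")

A check like this can run in CI, which turns "fast" from a vibe into a regression you can catch.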
However the capital required will probably never happen again in relation to the return for any investor involved in that product.
Props to them for pushing the envelope, but they did it in the zero interest era and its a shame this is never highlighted by them. And now the outcome is pretty clear in terms of where the company has ended up.
Fast is indeed magical, that's why I exclusively browse Instagram from the website; it's so slow I dip out before they get me with their slot machine.
https://blog.superhuman.com/superhuman-is-being-acquired-by-...
Being fast helps, but is rarely a product.
It's obviously wrong if you think about it for more than a second. All it shows is that speed isn't the only thing that matters but who was claiming that?
Speed is important. Being slow doesn't guarantee failure. Being fast doesn't guarantee success. It definitely helps though!
>Being fast doesn't guarantee success.
Sometimes it can be a deciding factor though.
Also, sometimes speed or responsiveness beyond the nominal level is not as much of a "must have" as simply having nominally fast performance instead of sluggishness.
The greatest day of productivity in my life was a flight from Melbourne to New York via LAX. No wifi on either flight, but a bit in transit. Downloaded everything I needed in advance. Coded like a mofo for like 16 hours.
Fast internet is great for distractions.
At least in B2B applications that rely heavily on relational data, the best developers are the ones who can optimize at the database level. Algorithmic complexity pretty much screams at me these days and is quickly addressed, but getting the damned query plan into the correct shape for a variety of queries remains a challenge.
Of course, knowing the correct storage medium to use in this space is just as important as writing good queries.
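As a toy illustration of shaping a query plan, here is a Python sketch using the standard-library sqlite3 module; the orders table and the covering index are invented for the example, and real B2B schemas and planners are of course messier:

    # Inspect the plan before and after adding a covering index.
    # Table, data and index are made up for illustration.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
    con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                    [(i % 1000, i * 1.5) for i in range(100_000)])

    query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"

    # Before: the planner has no choice but a full table scan.
    print(con.execute("EXPLAIN QUERY PLAN " + query).fetchall())

    # After: a covering index lets the query be answered from the index alone.
    con.execute("CREATE INDEX idx_orders_customer_total ON orders (customer_id, total)")
    print(con.execute("EXPLAIN QUERY PLAN " + query).fetchall())

The habit that matters is looking at the plan, not trusting that the ORM or the optimizer did the right thing.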
Speed of all kinds is incredibly important. Give me all of it.
- Fast developers
- Fast test suites
- Fast feedback loops
- Fast experimentation
Someone (Napoleon?) is credited with saying "quantity has a quality all its own", in software it is "velocity has a quality all its own".
As long as there is some rigor and you aren't shipping complete slop, consistently moving very quickly fixes almost every other deficiency.
- It makes engineering mistakes cheaper (just fix them fast)
- It make product experimentation easy (we can test this fast and revert if needed)
- It makes developers ramp up quickly (shipping code increases confidence and knowledge)
- It actually makes rigor more feasible, as the most effective rigorous processes are lightweight and built-in.
Every line of code is a liability, the system that enables it to change rapidly is the asset.
Side note: every time I encounter JVM test startup lag I think someday I am going to die and will have spent time doing _this_.
Joe Stalin, I believe. It's a grim metaphor regarding the USSR's army tactics in WW2.
https://www.goodreads.com/quotes/795954-quantity-has-a-quali...
I guess they were used to typing stuff then inspecting paperwork or other stuff waiting for a response. Plus, it avoided complaints when usage inevitably increased over time.
It was unpopular, because devs love the shiny. But it worked - we had nice quick applications. Which was really important for user acceptance.
I didn't make this rule because I hated devs (though self-hatred is a thing ofc), or didn't want to spend the money on shiny dev machines. I made it because if a process worked acceptably quickly on a dev machine then it never got faster than that. If the users complained that a process was slow, but it worked fine on the dev's machine, then it proved almost impossible to get that process faster. But if the dev experience of a process when first coding it up was slow, then we'd work at making it faster while building it.
I often think of this rule when staring at some web app that's taking 5 minutes to do something that appears to be quite simple. Like maybe we should have dev servers that are deliberately throttled back, or introduce random delays into the network for dev machines, or whatever. Yes, it'll be annoying for devs, but the product will actually work.
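One cheap way to approximate the "deliberately throttled dev environment" idea without touching hardware is to inject latency in the dev stack itself. Here is a minimal Python WSGI-middleware sketch; the DEV_SLOWDOWN variable name and the delay range are made-up choices, not a standard:

    # A minimal sketch: add random latency to every request, but only when
    # the (hypothetical) DEV_SLOWDOWN environment variable is set.
    import os
    import random
    import time

    class SlowdownMiddleware:
        def __init__(self, app, min_ms=100, max_ms=400):
            self.app = app
            self.min_ms = min_ms
            self.max_ms = max_ms
            self.enabled = os.environ.get("DEV_SLOWDOWN") == "1"

        def __call__(self, environ, start_response):
            if self.enabled:
                time.sleep(random.uniform(self.min_ms, self.max_ms) / 1000.0)
            return self.app(environ, start_response)

    # usage (dev config only): app = SlowdownMiddleware(app)

The point is the same as the slow-dev-machine rule: if every request hurts a little during development, slow paths get noticed and fixed before users ever see them.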
This is a good point. Often datasets are smaller in dev. If a reasonable copy of live data is used, devs would have an intuition of what is making things slow. Doesn't work for live data that is too big to replicate on a developer's setup though.
Btw, cool site design.
It's gorgeous
https://github.com/juecd/juecd.github.io
I agree, but I wonder how not knowing how to spell would affect that. The high-school kids I work with are not great spellers (nor do they have good handwriting).
Figuring out the best way to present logically complex content, controls and state to users that don’t wrangle complexity for a living is difficult and specialized work, and for most users, it’s much more important than snappy click response.
Both of these things obviously exist on a spectrum and are certainly not mutually exclusive. (ILY McMaster-Carr) But projects rarely have enough time for either complete working features, or great thoughtful usability, and if you’re weighing the relative importance of these two factors, consider your audience, their goals, and their technical savvy before assuming good performance can make up for a suboptimal user workflow.
Thanks for enduring my indulgent rant.
Is the most pressing problem facing the world really that we are not doing enough things fast enough? Seems a bit off the mark, IMO.
That's because it's understood that things should work as quickly as possible, and not be slow on purpose (generally). No one asks that modern language be used in the UI as opposed to Sanskrit or hieroglyphs, because it's understood.
The Google Webfont loader is (usually) non-blocking when done right, but the text should appear fine before it loads.
The page loaded instantly for me
I once used Notion at work and for personal note taking. I'd never felt it was "slow." Until later I moved my notes to Obsidian. Now when I have to use Notion at my job it feels sluggish.
Glad to hear Obsidian is better as I’ve been considering it as an alternative.
I've been working on making Obsidian real-time collaborative with the Relay [0] plugin. In combination with a few other plugins (and the core Bases plugin) you can build a pretty great Notion alternative.
I'm bullish on companies using Obsidian for knowledge management and AI driven workflows. It's pretty reasonable to build custom plugins for a specific vertical.
[0] https://relay.md
I know this is a completely different scale, but compare: [1] https://github.com/git/git [2] https://gitpatch.com/gitpatch/git-demo
And there is no page cache. Sub-100ms is just a completely different experience.
Assuming, like, three days, 6 minutes is 720x faster. 10000x faster than 6 minutes is like a month and a half!
Fast reading does not just enumerate examples.
Fast reading does not straw-man.
Fun conveys opportunity and emotion: "changing behavior", "signals simplicity", "fun". Fun creates an experience, a mode, and stickiness. It's good for marketing, but a drag on operations.
Fast is principles with methods that just work. "Got it."
Fast has a time-to-value of now.
Fast is transformative when it changes a background process requiring all the infrastructure of tracking contingencies to something that can just be done. It changes system-2 labor into system-1 activity -- like text reply vs email folders, authority vs discussion, or take-out vs. cooking.
When writers figure out how to monetize fast - how to get recurrent paying users (with out-of-band payment) just from delivering value - then we'll no longer be dragged through anecdotes and hand-waving and all the salience-stretching manipulations that tax our attention.
Imagine an AI paid by time to absorb and accept the answer instead of by the token.
Fast is better than fun -- assuming it's all good, of course :)
I don't want my code deployed in seconds or milliseconds. I'm happy to wait even an hour for my deployment to happen, as long as I don't have to babysit it.
I want my code deployed safely, rolled out with some kind of sane plan (like staging -> canary -> 5% -> 20% -> 50% -> 100%), ideally waiting long enough at each stage of the plan to ensure the code is likely being executed with enough time for alerts to fire (even with feature flags, I want to make sure there's no weird side effects), and for a rollback to automatically occur if anything went wrong.
I then want to enable the feature I'm deploying via a feature flag, with a plan that looks similar to the deployment. I want the enablement of the feature flag, to the configured target, to be as fast as possible.
I want rollbacks to be fast, in case things go wrong.
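A rough sketch of that staged, bake-and-verify rollout, with deploy_to, check_alerts and rollback as placeholders for whatever your deployment platform actually provides (the stage list and bake time are example values only):

    # A minimal sketch of a staged rollout with bake time and automatic rollback.
    # deploy_to, check_alerts and rollback stand in for your platform's APIs.
    import time

    STAGES = [("staging", 100), ("canary", 1), ("prod", 5), ("prod", 20), ("prod", 50), ("prod", 100)]
    BAKE_SECONDS = 30 * 60  # long enough for real traffic to hit the code and alerts to fire

    def deploy_to(env, percent):
        print(f"deploying to {env} at {percent}%")   # placeholder

    def check_alerts():
        return []                                    # placeholder: query your alerting system

    def rollback():
        print("rolling back")                        # placeholder

    for env, percent in STAGES:
        deploy_to(env, percent)
        time.sleep(BAKE_SECONDS)                     # bake: let traffic exercise the new code
        alerts = check_alerts()
        if alerts:
            rollback()
            raise SystemExit(f"rollout aborted at {env}/{percent}%: {alerts}")
    print("rollout complete; now flip the feature flag with a similar plan")

Here the only step that needs to be fast is the rollback and the flag flip; the rest is deliberately slow.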
Another good example is UI interactions. Adding short animations to actions makes the UI slower, but can considerably improve the experience, by making it more obvious that the action occurred and what it did.
So, no, fast isn't always better. Fast is better when the experience is directly improved by making it fast, and you should be able to back that up with data.
Tell me "fast" again!
As someone working on embedded audio DSP code, I just had to laugh a little.
Yes, there is a ton of code that has a strict deadline. For audio that may be determined by your buffer size — don't write your samples to that buffer fast enough and you will hear it in potentially destructively loud fashion.
This changes the equation, since faster code now just means you are able to do more within that timeframe on the same hardware. Or you could do the same on cheaper hardware. Either way, it matters.
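For a sense of the numbers, here is a back-of-the-envelope sketch in Python; the buffer size and sample rate are just common example values, not anyone's actual setup:

    # Deadline for an audio callback: a 256-sample buffer at 48 kHz must be
    # filled in ~5.3 ms, every single time, or the output glitches audibly.
    sample_rate = 48_000   # Hz
    buffer_size = 256      # samples per callback

    deadline_ms = buffer_size / sample_rate * 1000
    print(f"deadline per callback: {deadline_ms:.2f} ms")  # ~5.33 ms

    # Faster DSP code doesn't move the deadline; it buys headroom, i.e. more
    # processing per callback on the same hardware, or the same load on cheaper hardware.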
Similar things apply to shader coding, game engines, control code for electromechanical systems (there, missing the deadline can be even worse).
Windows is the worst offender here; the entire desktop is sluggish even though there is no computational task that justifies those delays.
> But software that's fast changes behavior.
I wonder if the author stopped to consider why these opposing points make sense, instead of ignoring one to justify the other.
My opinion is that "fast" only becomes a boon when features are robust and reliable. If you prioritize going twice as "fast" over rooting out the problems, you get problems at twice the rate too.
Let's say the author has a machine that is a self-contained assembly line; it produces cans of soup. However, the machine has a problem: every few cans of soup, one can comes out sideways and breaks the machine temporarily, making them stop and unjam it.
The author suggests to double the speed of the machine without solving that problem, giving them their soup cans twice as fast, requiring that they unjam it twice as often as well.
I believe that (with situational exceptions) this is a bad approach, and I would address the problem causing the machine to jam before I doubled its speed.
That said, this is a very simplistic view of the situation; in a real scenario either solution has a number of variables that may make it preferable over the other. My gripe with the piece is that the author suggests the "faster" approach is a good default that is "simple", "magical" and "fun". I believe it is shortsighted, causes compounding problems the more it is applied in sequence, and is only "magical" if you bury your head in the sand and tell yourself the problems are for someone else to figure out, which is exactly what the author handwaves away at the end with a nebulous allusion to some future date when these tools we should accept because they are fast will eventually be made good by some unknown person.
Bookmarking this one
For example, looking up command flags within man pages is slooooow and becomes agonizingly frustrating when you're waking up and people are waiting for you so that they can also go back to sleep. But if you've spent the time to learn those flags beforehand, you'll be able to get back to sleep sooner.
I should be able to create a Jira ticket in however long it takes me to type the acceptance criteria plus a second or two. Instead I’ve got slow loading pages, I’ve got spinners, I’ve got dropdowns populating asynchronously that steal focus from what I’m typing, I’ve got whatever I was typing then triggering god knows what shortcuts causing untold chaos.
For a system that is—at least how we use it at my job—a glorified todo list, it is infuriating. If I’m even remotely busy lately I just add “raise a ticket for x” to my actual todo list and do it some other time instead.
Again, no ill will intended at all, but I think it straddles the promotional angle here a bit and maybe people weren't aware
https://magarshak.com
https://miracles.community
https://qbix.com
https://intercoin.app
Here is some extensive advice for making complex websites load extremely quickly:
https://community.qbix.com/t/qbix-websites-loading-quickly/2...
Here is also how to speed up APIs:
https://community.qbix.com/t/building-efficient-apis-with-qb...
One of the biggest things our framework does as opposed to React, Angular, Vue etc. is we lazyload all components as you need them. No need for tree-shaking or bundling files. Just render (static, cached) HTML and CSS, then start to activate JS on top of it. Also helps massively with time to first contentful paint.
https://community.qbix.com/t/designing-tools-in-qbix-platfor...
All this evolved from 2021 when I gave this talk:
http://www.youtube.com/watch?v=yKPKuH6YCTc
(Throw tomatoes now but) Torvalds said the same thing about Git in his Google talk.
https://permacomputing.net/
Yeah, that's not "a world" it's just the USA. Parts of the world - EU, UK etc have already moved on from that. Don't assume that the USA is leading edge in all things.
"In a world" is a figure of speech which acknowledges the non-universality of the statement being made. And no it is not "just the USA". Canada and Mexico are similarly slow to adopt real-time payments.
It is wild to tell someone "don't assume" when your entire comment relies on your own incorrect assumption about what someone meant.
It's a bit more substantial, and less a complaint about the semantics of the wording.
Of course fast has downsides but it's interesting this pitch is here. Must have occurred many times in the past.
"Fast" was often labeled "tactical" (as opposed to "strategic" in institutions). At the time I remember thinking a lot about how delays plus uncertainty meant death (nothing gets done, or worse). Even though Fast is often at the cost of noise and error, there is some principle that it can still improve things if not "too far".
Anyone know deeper writings on this topic?
Are you kidding me? My product owner and management ask me all the time to implement features "fast".
I disagree with the statement too, as people definitely ask for UX / products to be "snappy", but this isn't about speed of development.
I put in a ticket to speed up the 40-minute build and was asked "How does this benefit the end user?" I said "The end user would have had the product six months ago if the build was faster."
Genuinely hard to read this and think little more than, "oh look, another justification for low quality software."