The Therac-25 Incident (2021)
294 points by lemper on 8/27/2025 | 161 comments | thedailywtf.com
If you only take one thing away from this article, it should be this one! The Therac-25 incident is a horrifying and important part of software history. It's really easy to think that type systems, unit testing, and defensive coding can solve all software problems. They can definitely help a lot, but the real failure in the story of the Therac-25, as I understand it, is that it took far too long for incidents to be reported, investigated, and fixed.
There was a great Cautionary Tales podcast about the device recently[0]. One thing mentioned was that, even aside from the catastrophic accidents, Therac-25 machines were routinely seen by users to show unexplained errors, but these issues never made it to the desk of someone who might fix them.
[0] https://timharford.com/2025/07/cautionary-tales-captain-kirk...
> It's the end result of a process
In my experience, it's even more than that. It's a culture.
They don't need to be especially talented engineers, but, in my experience (and I actually have quite a bit of it, in this area), they need to be dedicated to a culture of Quality.
And it is entirely possible for very talented engineers to produce shite. I've seen exactly that.
> software quality doesn't appear because you have good developers
Good developers are a necessary ingredient of a much larger recipe.
People think that a good process means you can toss in crap developers, or that great developers mean that you can have a bad process.
Case in point: I worked for a 100-year-old Japanese engineering company that had a decades-long culture of Quality. People stayed at that company for their entire career, and most of them were top-shelf people. They had entire business units dedicated to process improvement and QA.
It was a combination of good talent, good process, and good culture. If any one of them sucks, so does the product.
Another interesting fact mentioned in the podcast is that the earlier (manually operated) version of the machine did have the same fault. But it also had a failsafe fuse that blew so the fault never materialized. Excellent demonstration of the Swiss Cheese Model: https://en.wikipedia.org/wiki/Swiss_cheese_model
> the earlier (manually operated) version of the machine did have the same fault. But it also had a failsafe fuse that blew so the fault never materialized.
#1 virtue of electromechanical failsafes is that their conception, design, implementation, and failure modes tend to be orthogonal to those of the software. One of the biggest shortcomings of Swiss Cheese safety thinking is that you too often end up using "neighbor slices from the same wheel of cheese".
#2 virtue of electromechanical failsafes is that running into them (the fuse blew, or whatever) is usually more difficult for humans to ignore. Or at least it's easier to create processes and do training that actually get the errors reported up the chain. (Compared to software, where the worker bees all know you gotta "ignore, click 'OK', retry, reboot" all the time if you actually want to get anything done.)
But, sadly, electromechanical failsafes are far more expensive than "we'll just add some code to check that" optimism. And PHBs all know that picking up nickels in front of the steamroller is how you get to the C-suite.
Wrong. Any software failure can have huge consequences for someone's life or company: preventing some critical flow from taking place, corrupting data tied to someone's personal, professional, or medical records, or blocking a payment for goods that had to be acquired at that moment or never, and so on.
It is the business that requests features ASAP to cut costs, and then there are customers who don't want to pay for "ideal software" but would rather have every piece of software for free.
Most devs and QA workers I know want to deliver the best quality software and are usually gold-plating stuff anyway.
Also, speaking out when the train is visibly heading for a wall.
https://edition.cnn.com/2025/08/27/us/alaska-f-35-crash-acci...
If you want professional quality, we're the first line of actually making it happen, blaming others won't change anything.
Despite all the procedures and tests, the software still managed to endanger the lives of the passengers.
That may seem a bit hypothetical, but it can easily happen at a company that systematically underpays (which I'm sure many of us don't need to think hard to imagine): such a company will systematically hire poor developers, because those are the only ones who ever apply.
It used to be that the poor performers (dangerous hip-shootin', code-committin' cowpokes) were limited in the amount of code they could produce per unit of time, leaving enough time for others to correct course. Now the cowpokes are producing ridiculous amounts of code that you just can't keep up with.
A lot of the time that means boring meetings to discuss the simplification.
I can extend the same analogy to all the gen ai bs that’s floating around right now as well.
But the average construction worker is also average, and so is the average doctor.
The world cannot run on the "best of the best" - just wrap your head around the fact that the whole economy and most human activity are run by average people doing average stuff.
That's not to say talent is unimportant. However, I'd need to see some real examples of high-talent, no-process teams compared to low-talent, high-process teams, then some mixture of the groups, to make a fair statement. Even then, how do you measure talent? I think I'm talented, but I wouldn't be surprised to learn others think I'm an imbecile who only knows Python!
What I've been saying is methodology is mostly irrelevant, not that waterfall is specifically better than agile. Talent wins over the process but I can see how this idea is controversial.
> I'd need to see some real examples of high-talent, no-process teams compared to low-talent, high-process teams, then some mixture of the groups, to make a fair statement. Even then, how do you measure talent?
Yep, even if I made it my life's mission to run a formal study on programmer productivity (which I clearly won't) that wouldn't save the argument from nitpicking.
> Yep, even if I made it my life's mission to run a formal study on programmer productivity (which I clearly won't) that wouldn't save the argument from nitpicking.
I didn't ask for this, I just asked for sensible examples, either from your experience or from publicly available information.
> Throughout the 80s and 90s there was just a feeling in medicine that computers were dangerous <snip> This is why, when I was a resident in 2002-2006 we still were writing all of our orders and notes on paper.
I was briefly part of an experiment with electronic patient records in an ICU in the early 2000s. My job was to basically babysit the server processing the records in the ICU.
The entire staff hated the system. They hated having to switch to computers (this was many years pre-iPad and similarly sleek tablets) to check and update records. They were very much used to writing medications (what, when, which dose, etc.) onto bedside charts, which were very easy to consult and very easy to update. Any kind of data loss in those records could have fatal consequences. Any delay in getting to the information could be bad.
This was *not* just a case of doctors having unfounded "feelings" that computers were dangerous. Computers were very much more dangerous than pen and paper.
I haven't been involved in that industry since then, and I imagine things have gotten better since, but still worth keeping in mind.
Those who want to escape the office altogether, can hop on one of the company’s 600 cow-print bikes to take meetings from a treehouse, slide down a rabbit hole or grab lunch in a train car.
https://www.cnbc.com/2024/09/01/inside-epic-systems-mythical...
These are commercial products being deployed.
The other theory is that there are so many bureaucratic hoops to jump through in order to make anything in the medical space that no one does it willingly.
I am a developer and whatever software system I touch breaks horribly. When my family wants to use an ATM, they tell me to stand at a distance, so that my aura doesn't break things. This is why I will not get into a self-driving car in the foreseeable future — I think we place far too much confidence in these complex software systems. And yet I see that the overwhelming majority of HN readers are not only happy to be beta-testers for this software as participants in road traffic, but also are happy to get in those cars. They are OK with trusting their life to new, complex, poorly understood and poorly tested software systems, in spite of every other software system breaking and falling apart around them.
[anticipating immediate common responses: 1) yes, I know that self-driving car companies claim that their cars are statistically safer than human drivers; this is beside the point here. One, they are "safer" largely because they drive so badly that other road participants pay extra attention and accommodate their weirdness, and two, they are still new, complex and poorly understood systems. 2) "you already trust your life to software systems" — again, beside the point, and not quite true, as many software systems are built to have human supervision and override capability (think airplanes), and others are built to strict engineering requirements (think brakes in cars), while self-driving cars are not built that way.]
Because the alternative isn't bug-free driving -- it's a human being. Who maybe didn't sleep last night, who might have a heart attack while their foot is on the accelerator, who might pull over and try to sexually assault you.
You don't need to "place confidence in these complex software systems" -- you just need to look at their safety stats vs e.g. regular Uber. It's not a matter of trust; it's literally just a matter of statistics, and choosing the less risky option.
I think in this case, the thought process was based on the experience with older, electro-mechanical machines, where the most common failure mode was parts wearing out.
Since software can, indeed, not "wear out", someone made the assumption that it was therefore inherently more reliable.
Bureaucracy being (per Graeber 2006) something like the ritual whereby, by means of a set of pre-fashioned artifacts maintained for each other's sake, we all operate at 2% of our normal mental capacity; that's how modern data-driven, conflict-averse societies organize work and distribute resources without anyone's complaints being listened to.
>Bureaucracies public and private appear—for whatever historical reasons—to be organized in such a way as to guarantee that a significant proportion of actors will not be able to perform their tasks as expected. It also exemplifies what I have come to think of the defining feature of a utopian form of practice, in that, on discovering this, those maintaining the system conclude that the problem is not with the system itself but with the inadequacy of the human beings involved.
In most places where a computer system is involved in the administration of a public service or something of that caliber, has it been a grassroots effort, a "hey, computers are cool and awesome, let's see what they change"? No, it's something that's been imposed in the definitive top-down manner of XX-century bureaucracies. Remember the cohort of people who used to become stupid the moment a "thinking machine" was powered on within line of sight (before the last uncomputed generation retired and got their excuse to act dumb for the rest of it)? Consider them in view of the literally incomprehensible number of layers that any "serious" piece of software consists of; layers which we're stuck producing more of, when any software professional knows the best kind of software is less of it.
But at least it saves time and the forest, right? Ironically, getting things done in a bureaucratic context with less overhead than filling out paper forms or speaking to human beings, makes them even easier to fuck up. And then there's the useful fiction of "the software did it" that e.g. "AI agents" thing is trying to productize. How about they just give people a liability slider in the spinup form, eh, but nah.
Wanna see a miracle? A miracle is when people hype each other into pretending something impossible happened. To the extent user-operated software is involved in most big-time human activities, the daily miracle is how it seems to work well enough, for people to be able to pretend it works any good at all. Many more than 3 such cases. But of course remembering the catastrophal mistakes of the past can be turned into a quaint fun-time activity. Building things that empower people to make less mistakes, meanwhile, is a little different from building artifacts for non-stop "2% time".
>One failure occurred when a particular sequence of keystrokes was entered on the VT100 terminal that controlled the PDP-11 computer: If the operator were to press "X" to (erroneously) select 25 MeV photon mode, then use "cursor up" to edit the input to "E" to (correctly) select 25 MeV Electron mode, then "Enter", all within eight seconds of the first keypress and well within the capability of an experienced user of the machine, the edit would not be processed and an overdose could be administered. These edits were not noticed as it would take 8 seconds for startup, so it would go with the default setup
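The edit-lost-during-setup failure is easier to see in miniature. Below is a hedged sketch in Python (purely illustrative; the real Therac-25 ran hand-written PDP-11 assembly): a slow setup routine snapshots the selected mode when it starts, so a correction made inside the eight-second window never reaches the hardware.

    # Illustrative race, not the original Therac-25 code: the setup task reads
    # the selected mode once, then spends ~8 seconds configuring the beam, so an
    # edit made during that window is silently lost.
    import threading
    import time

    selected_mode = "X"          # operator's first (erroneous) keystroke: photon mode
    beam_configured_for = None   # what the hardware actually ends up set for

    def setup_beam():
        global beam_configured_for
        mode_snapshot = selected_mode         # read once, at the start of setup
        time.sleep(8)                         # magnet/turntable positioning takes ~8 s
        beam_configured_for = mode_snapshot   # the operator's correction is ignored

    setup = threading.Thread(target=setup_beam)
    setup.start()

    time.sleep(1)
    selected_mode = "E"   # operator notices the mistake and corrects it within 8 s

    setup.join()
    print("operator selected:  ", selected_mode)        # E (electron mode)
    print("beam configured for:", beam_configured_for)  # X -- mismatch, overdose risk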
Kinda reminds me of how everything is touchscreen nowadays, from car interfaces to industry-critical software.
Personally, I've found even the latest batch of agents fairly poor at embedded systems, and I shudder at the thought of giving them the keys to the kingdom to say... a radiation machine.
The core takeaway developers should have from Therac-25 is not that this happens just on "really important" software, but that all software is important, and all software can kill, and you need to always care.
If software "engineers" want to be taken seriously, then they should also have the obligation to report unsafe/broken software and refuse to ship unsafe/broken software. The developers are just as much to blame as the post office:
> Fujitsu was aware that Horizon contained software bugs as early as 1999 [2]
[1] https://engineerscanada.ca/news-and-events/news/the-duty-to-...
[2] https://en.wikipedia.org/wiki/British_Post_Office_scandal
That law is irrelevant to this situation, except in that the lawyers for Fujitsu / the Post Office used it to imply their code was infallible.
In the Therac-25 case, the killing was quite immediate and it would have happened even if the correct radiation dose was recorded.
> Provenance and proper traceability would have allowed
But there weren't those things, so they couldn't, and so people were driven to suicide.
Bad software killed people. It being slow or fast doesn't seem to matter.
Which is not to say that software hasn't killed people before (Horizon, Boeing, probably loads of industrial accidents and indirect process control failures leading to dangerous products, etc, etc). Hell, there's a suspicion that austerity is at least partly predicated on a buggy Excel spreadsheet, and with about 200k excess deaths in a decade (a decade not including Covid) in one country, even a small fraction of those being laid at the door of software is a lot of Theracs.
AI will probably often skate away from responsibility in the same way that Horizon does: by being far enough removed and with enough murky causality that they can say "well, sure, it was a bug, but them killing themselves isn't our fault"
I also find AI copilot things do not work well with embedded software. Again, people YOLOing embedded isn't new, but it might be about to get worse.
Agreed on the future but I think we were headed there regardless.
I am pretty confident they won't let Claude touch it if they don't even let deterministic automations run...
That being said, maybe there are places. But this is always the sentiment I got: no automating, no scanning, no patching. The device is delivered certified, and any modifications will invalidate that. Any changes need to be validated and certified.
It's a different world than making apps, that's for sure.
Not to say mistakes aren't made and change doesn't happen, but I don't think people designing medical devices will be going yolo mode on their dev cycle anytime soon... give the folks in safety-critical system engineering some credit.
I don't have the same faith in corporate leadership as you, at least not when they see potentially huge savings by firing some of the expensive developers and using AI to write more of the code.
I mean, even in simple CRUD web apps where the data models are more complex, and where the same data has multiple structures, the LLMs get confused after the second data transformation (at the most).
E.g. You take in data with field created_at, store it as created_on, and send it out to another system as last_modified.
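For concreteness, a toy version of that renaming chain (field names from the example above; the ingest/export functions are hypothetical):

    # Hypothetical sketch: one timestamp, three names, depending on which hop you're at.
    def ingest(payload: dict) -> dict:
        """Incoming API payload -> internal storage record."""
        return {"id": payload["id"], "created_on": payload["created_at"]}

    def export(record: dict) -> dict:
        """Internal storage record -> downstream system's schema."""
        return {"id": record["id"], "last_modified": record["created_on"]}

    incoming = {"id": 42, "created_at": "2025-08-27T06:57:56Z"}
    print(export(ingest(incoming)))  # {'id': 42, 'last_modified': '2025-08-27T06:57:56Z'}

Keeping those mappings straight is exactly the kind of bookkeeping an LLM tends to fumble once the chain is more than one hop long.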
> Taking a couple of programming courses or programming a home computer does not qualify anyone to produce safety-critical software. Although certification of software engineers is not yet required, more events like those associated with the Therac-25 will make such certification inevitable. There is activity in Britain to specify required courses for those working on critical software. Any engineer is not automatically qualified to be a software engineer — an extensive program of study and experience is required. Safety-critical software engineering requires training and experience in addition to that required for noncritical software.
After 32 years, this didn't go the way the report's authors expected, right?
Two decades ago there was a lot of talk about turning software development into a structured engineering discipline, but that plan seems to have largely been abandoned.
I was taught about this in engineering school, as part of a general engineering course also covering things like bathtub reliability curves and how to calculate the number of redundant cooling pumps a nuclear power plant needs. But it's a long time since I was in college.
Is this sort of thing still taught to engineers and developers in college these days?
Analog systems do not behave like computers.
https://strawpoll.com/NMnQNX9aAg6
If not, why not hardware limit the power input to the machine, so even if the software completely failed, it would not be physically capable of delivering a fatal dose like this?
While the cause is noble, the medical detection of child abuse faces serious issues with undetected and unacknowledged false positives [2], since ground truth is almost never knowable. The prevailing idea is that certain medical findings are considered proof beyond reasonable doubt of violent abuse, even without witnesses or confessions (denials are extremely common). These beliefs rest on decades of medical literature regarded by many as low quality because of methodological flaws, especially circular reasoning (patients are classified as abuse victims because they show certain medical findings, and then the same findings are found in nearly all those patients—which hardly proves anything [3]).
I raise this point because, while not exactly software bugs, we are now seeing black-box AIs claiming to detect child abuse with supposedly very high accuracy, trained on decades of this flawed data [4, 5]. Flawed data can only produce flawed predictions (garbage in, garbage out). I am deeply concerned that misplaced confidence in medical software will reinforce wrongful determinations of child abuse, including both false positives (unjust allegations potentially leading to termination of parental rights, foster care placements, imprisonment of parents and caretakers) and false negatives (children who remain unprotected from ongoing abuse).
[1] https://hs.memberclicks.net/executive-committee
[2] https://news.ycombinator.com/item?id=37650402
[3] https://pubmed.ncbi.nlm.nih.gov/30146789/
[4] https://rdcu.be/eCE3l
[5] https://www.sciencedirect.com/science/article/pii/S002234682...
The Therac-25 was a radiation therapy machine built by Atomic Energy of Canada Limited in the 1980s. It was the first to rely entirely on software for safety controls, with no hardware interlocks. Between 1985 and 1987, at least six patients received massive overdoses of radiation, some fatally, due to software flaws.
One major case in March 1986 at the East Texas Cancer Center involved a technician who mistyped the treatment type, corrected it quickly, and started the beam. Because of a race condition, the correction didn’t fully register. Instead of the prescribed 180 rads, the patient was hit with up to 25,000 rads. The machine reported an underdose, so staff didn’t realize the harm until later.
Other hospitals reported similar incidents, but AECL denied overdoses were possible. Their safety analysis assumed software could not fail. When the FDA investigated, AECL couldn’t produce proper test plans and issued crude fixes like telling hospitals to disable the “up arrow” key.
The root problem was not a single bug but the absence of a rigorous process for safety-critical software. AECL relied on old code written by one developer and never built proper testing practices. The scandal eventually pushed regulators to tighten standards. The Therac-25 remains a case study of how poor software processes and organizational blind spots can kill—a warning echoed decades later by failures like the Boeing 737 MAX.
Engineers in other fields need to sign off on designs, and can be held liable if something goes wrong. Software hasn't caught up to that yet.
In my experience, hardware people really dis software. It's hard to get them to take it seriously.
When something like this happens, they tend to double down on shading software.
I have found it very, very difficult to get hardware people to understand that software has a different ruleset and workflow, from hardware. They interpret this as "cowboy software," and think we're trying to weasel out of structure.
The Therac-25 incident was a radiation overdose in Texas.
Cheers
Critical issues happen with customers, blame gets shifted, a useless fix is proposed in the post mortem and implemented (add another alert to the waterfall of useless alerts we get on call), and we continue to do ineffective testing. Procedural improvements are rejected by the original authors who were then promoted and want to keep feeling like they made something good and are now in a position to enforce that fiction.
So IMO the lesson here isn't that everyone should focus on culture and process; it's that you won't have the right culture and process, and (apparently) laws and regulation can overcome that lack.
They can't. There was a single developer, he left, no tests existed, no one understood the mess to confidently make changes. At this point you can either lie your way through the regulators or scrap the product altogether.
I've seen these kinds of devs and companies running their software in regulated industries, just like in the Therac incident, only now it's the year 2025. I left because I understood that it's a criminal charge waiting to happen.
Which makes me very nervous about AI-generated code and people who don't claim human authorship. Scapegoating the AI for a bug that creeps in isn't gonna cut it in a safety situation.
> A commission attributed the primary cause to generally poor software design and development practices, rather than singling out specific coding errors.
Which to me reads as "this entire codebase was so awful that it was bound to fail in some or other way".
By focusing on particular errors, there's the possibility you'll think "problem solved".
By focusing on process, you hope to catch mistakes as early as possible.
> That standard [IEC 62304] is surrounded by other technical reports and guidances recognized by the FDA, on software risk management, safety cases, software validation. And I can tell you that the FDA is very picky, when they review your software design and testing documentation. For the first version and for every design change.
> That’s good news for all of us. An adverse event like the Therac 25 is very unlikely today.
This is a case where regulation is a good thing. Unfortunately, I see a trend lately where almost any regulation is seen as something stopping innovation and business growth. There is room for improvement, and some areas are over-regulated, but we don't want a "DOGE" chainsaw taken to regulations without knowing what the consequences are.
https://www.youtube.com/watch?v=7EQT1gVsE6I
I've gone off Kyle Hill after a lot of people pointed out that he was promoting a scam (BetterHelp) on his video about fraud and his response was just to tell people to deal with it
Unfortunately, Computer Science is still in its too-cool-for-school phase; see OpenAI recently being sued over encouraging a suicidal teenager to kill themself. You'd think it would be common sense for that to be a hard stop, taken out of the LLM's hands the moment a conversation turns to subjects like that, but nope.
Can't imagine that radiation might be a factor here...
BTW: Relevant XKCD: https://xkcd.com/2347/
https://xkcd.com/2030/
This says it all.
https://www.theguardian.com/uk-news/2024/jan/09/how-the-post...
One member of the development team, David McDonnell, who had worked on the Epos system side of the project, told the inquiry that “of eight [people] in the development team, two were very good, another two were mediocre but we could work with them, and then there were probably three or four who just weren’t up to it and weren’t capable of producing professional code”.
What sort of bugs resulted?
As early as 2001, McDonnell’s team had found “hundreds” of bugs. A full list has never been produced, but successive vindications of post office operators have revealed the sort of problems that arose. One, named the “Dalmellington Bug”, after the village in Scotland where a post office operator first fell prey to it, would see the screen freeze as the user was attempting to confirm receipt of cash. Each time the user pressed “enter” on the frozen screen, it would silently update the record. In Dalmellington, that bug created a £24,000 discrepancy, which the Post Office tried to hold the post office operator responsible for.
Another bug, called the Callendar Square bug – again named after the first branch found to have been affected by it – created duplicate transactions due to an error in the database underpinning the system: despite being clear duplicates, the post office operator was again held responsible for the errors.
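The Dalmellington bug is, at its core, a non-idempotent confirmation behind a frozen screen. A minimal sketch of that failure pattern (hypothetical code and amounts, not Horizon's):

    # Each press of Enter on the frozen screen silently books the receipt again,
    # because confirming is not idempotent.
    ledger: list[float] = []

    def confirm_cash_receipt(amount: float) -> None:
        ledger.append(amount)   # every call appends a fresh entry, retried or not

    actual_cash = 8000.0
    confirm_cash_receipt(actual_cash)       # the real confirmation -- then the screen freezes
    for _ in range(3):                      # operator presses Enter three more times
        confirm_cash_receipt(actual_cash)   # ...and each press records the cash again

    recorded = sum(ledger)
    print(f"recorded {recorded:.0f}, received {actual_cash:.0f}, "
          f"discrepancy {recorded - actual_cash:.0f}")   # discrepancy 24000

An idempotent design (confirming a specific receipt ID at most once) would have made the retries harmless.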
It's an archetypal example of 'one law for the connected, another law for the proles'.
http://www0.cs.ucl.ac.uk/staff/a.finkelstein/papers/lascase....
Killing 20 innocents and one Hamas member is not a bug - it is callous, but that's a policy decision and the software working as intended. But when it is a false positive (10% of the time), due to inadequate/outdated data and inadequate models, that could reasonably be classified as a bug - so all 21 deaths for each of those bombings would count as deaths caused by a bug. Apparently Gospel (at least earlier versions) was trained on positive examples of what indicates someone is a member of Hamas, but not on negative examples; other problems could be due to, for example, insufficient data and interpolation outside the valid range (e.g. using pre-war data about how quickly cell phones are traded, or people's movements, when behaviour is different post-war).
I'd therefore estimate that deaths due to classification errors from those systems is likely in the thousands (out of the 60k+ Palestinian deaths in the conflict). Therac-25's bugs caused 6 deaths for comparison.
It's almost never just software. It's almost never just one cause.
They deliberately designed it to only look at one of the angle-of-attack sensors, because if they had designed it to look at both, then they would have had to implement a warning message for conflicting data.
And if they had implemented a warning message, they would have had to tell the pilots about the new system, and train them how to deal with it.
It wasn't a mistake in logic either. This design went through their internal safety certification, and passed.
As far as I'm aware, MCAS functioned exactly as designed, zero bugs. It's just that the design was very bad.
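A hedged sketch of that trade-off (illustrative Python, not Boeing's code; the 15-degree trigger and 5.5-degree disagree threshold are assumptions for illustration only):

    DISAGREE_THRESHOLD_DEG = 5.5   # assumed value, for illustration only
    TRIGGER_AOA_DEG = 15.0         # assumed value, for illustration only

    def command_single_sensor(aoa_left: float) -> str:
        # Single-sensor design: conflicting data is impossible, so no new alert
        # or training is needed -- but one bad vane drives the whole system.
        return "nose-down trim" if aoa_left > TRIGGER_AOA_DEG else "no action"

    def command_cross_checked(aoa_left: float, aoa_right: float) -> str:
        # Cross-checked design: a disagreement has to be surfaced to the crew,
        # which means a new warning, new procedures, and new training.
        if abs(aoa_left - aoa_right) > DISAGREE_THRESHOLD_DEG:
            return "AOA DISAGREE alert, trim command inhibited"
        avg = (aoa_left + aoa_right) / 2
        return "nose-down trim" if avg > TRIGGER_AOA_DEG else "no action"

    # One failed vane reading 22 degrees while the other reads 2:
    print(command_single_sensor(22.0))       # nose-down trim (erroneous)
    print(command_cross_checked(22.0, 2.0))  # AOA DISAGREE alert, trim command inhibited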
Conflict resolution in redundant systems seems to be one of the weakest spots in modern aircraft software.
Inputs were averaged, but supposedly there’s at least a warning: Confused, Bonin exclaimed, "I don't have control of the airplane any more now", and two seconds later, "I don't have control of the airplane at all!"[42] Robert responded to this by saying, "controls to the left", and took over control of the aircraft.[84][44] He pushed his side-stick forward to lower the nose and recover from the stall; however, Bonin was still pulling his side-stick back. The inputs cancelled each other out and triggered an audible "dual input" warning.
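A toy version of the arbitration described there (illustrative only, not Airbus's actual control law):

    # Simultaneous side-stick inputs are summed/clipped and a "DUAL INPUT"
    # warning is raised; opposite full deflections cancel out.
    def resolve_sidesticks(left: float, right: float):
        """Inputs range from -1.0 (full nose-down) to +1.0 (full nose-up)."""
        warning = "DUAL INPUT" if left != 0.0 and right != 0.0 else None
        command = max(-1.0, min(1.0, left + right))   # summed, clipped to normal range
        return command, warning

    # One pilot pulling full back while the other pushes forward to break the stall:
    print(resolve_sidesticks(+1.0, -1.0))   # (0.0, 'DUAL INPUT') -- the inputs cancel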
Worryingly, a lack of e2e / full integration testing was also the main cause of other Boeing blunders, like the Starliner capsule.
[edit as I can't reply to the child comment]: The FAA and EASA both looked into the stall characteristics afterwards and concluded that the plane was stable enough to be certified without MCAS; while it did have more of a tendency to pitch up at high angles of attack, it was still an acceptable amount.
The Patriot missile system converted its clock ticks to seconds using a chopped 24-bit fixed-point constant for 1/10, so as uptime extended the accumulated timing error grew, eventually to the point where the computed time was off far enough that the target fell outside the range gate.
The fix was being deployed earlier that year but this unit hadn't been updated yet.
https://www.cs.unc.edu/~smp/COMP205/LECTURES/ERROR/lec23/nod...
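The arithmetic is small enough to redo directly. A sketch assuming the register layout described in the linked notes (23 effective fractional bits for the 0.1 conversion constant):

    FRACTION_BITS = 23   # assumed effective fractional bits of the stored constant

    tenth_chopped = int(0.1 * 2**FRACTION_BITS) / 2**FRACTION_BITS
    error_per_tick = 0.1 - tenth_chopped        # ~9.5e-8 s lost every 0.1 s tick

    uptime_hours = 100                          # roughly the battery's uptime at Dhahran
    ticks = uptime_hours * 3600 * 10            # one tick per tenth of a second
    drift = ticks * error_per_tick
    print(f"{error_per_tick:.2e} s per tick -> {drift:.2f} s after {uptime_hours} h")
    # ~0.34 s of clock drift; a Scud covers over half a kilometre in that time,
    # so the target fell outside the shifted range gate and was never engaged.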
https://www.androidauthority.com/psa-google-pixel-911-emerge...
There's a chart here that shows it clearly for Toyota's rollout:
https://www.embedded.com/unintended-acceleration-and-other-e...
Not a "bug" per se, but texting while driving kills ~400 people per year in the US. It's a bug at some level of granularity.
To be tongue in cheek a bit, buggy JIRA latency has probably wasted 10,000 human years. Those are many whole human lives if you count them up.
These kinds of calculations always make me wonder... say someone wasted one minute of everybody's life; is the cost ~250 lives? One minute? Somewhere in between?
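For scale, a quick back-of-the-envelope (assuming roughly 8 billion people and 80-year lifetimes):

    population = 8_000_000_000
    minutes_lost = population * 1                   # one minute each
    years_lost = minutes_lost / (60 * 24 * 365)     # person-years
    lifetimes = years_lost / 80
    print(f"{years_lost:,.0f} person-years, about {lifetimes:,.0f} 80-year lifetimes")
    # ~15,000 person-years, on the order of 190 whole lifetimes -- yet nobody died,
    # which is exactly why the two framings feel so different.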
Take this post-mortem [1] as a great warning; it also highlights exactly what can go horribly wrong when the LLM misreads comments.
What's even scarier is that each time I stumble across a freshly minted project on GitHub with a considerable amount of attention, not only is it 99% vibe-coded (very easy to detect), but it completely lacks any tests.
Makes me question whether the user prompting the code even understands how to write robust, battle-tested software in the first place.
[0] https://news.ycombinator.com/item?id=44764689
[1] https://sketch.dev/blog/our-first-outage-from-llm-written-co...