Telum II at Hot Chips 2024: Mainframe with a Unique Caching Strategy
74 points by rbanffy on 5/19/2025, 10:27:34 AM | 19 comments | chipsandcheese.com ↗
This doesn't really detract from the overall point - stacking a huge per-core L2 cache and using cross-chip reads to emulate L3, with clever saturation metrics and management, is very different from what any x86 CPU I'm aware of has ever done, and I wouldn't be surprised if it works extremely well in practice. It's just that it'd have made a stronger article IMO if it had instead compared dedicated L2 + shared L2 (IBM) against dedicated L2 + shared L3 (Intel), instead of dedicated L2 + sharded L3 (AMD).
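To make the two hierarchies concrete, here's a back-of-the-envelope sketch. All capacities here are illustrative assumptions, not figures from the article; the point is only that a huge private L2 that can spill into idle neighbors' L2s reaches far more cache in the best case than a small private L2 plus a fair share of a common L3.

```java
public class CacheSketch {
    public static void main(String[] args) {
        // Illustrative numbers only (assumptions, not from the article):
        int cores = 8;
        int ibmL2PerCoreMB = 36;  // big private L2; idle copies double as "virtual L3"
        int x86L2PerCoreMB = 2;   // small private L2
        int x86SharedL3MB  = 96;  // shared/sharded L3 split across all cores

        // IBM-style: a core can spill evicted lines into whatever neighbor
        // L2 is idle, so the best case is the whole chip's L2 pool.
        int ibmBestCaseMB = cores * ibmL2PerCoreMB;

        // x86-style: private L2 plus a uniform share of the shared L3.
        int x86PerCoreMB = x86L2PerCoreMB + x86SharedL3MB / cores;

        System.out.println("IBM best-case reachable per core: " + ibmBestCaseMB + " MB"); // 288 MB
        System.out.println("x86 fair-share per core: " + x86PerCoreMB + " MB");           // 14 MB
    }
}
```

The best case for the IBM scheme only materializes when neighbor caches really are idle, which is exactly what the saturation metrics and management the article describes have to decide.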
For applications, COBOL and Java. For middleware and systems and utilities etc, assembly, C, C++, probably still some PL/X going on too.
Running Linux, from a user's perspective, it feels just like a normal server with a fast CPU and extremely fast IO.
I wonder how big a competitive edge that will remain in an era where ordinary cloud VMs can do 10 GB/s to zone-redundant remote storage.
Speed is not the only reason why an org/business would have Big Iron in its closet.
Whilst cloud platforms are the new mainframe (so to speak) and have all made great strides in improving their SLA guarantees, storage is still accessed over the network (plus extra moving parts: coordination, consistency, etc.). They will get there, though.
As one greybeard put it to me: Java is loosely typed and dynamic compared to COBOL/DB2/PL/SQL. He was particularly annoyed that the smallest numerical type in Java, a ‘byte’, was, quote, “a waste of bits”, and that Java was full of “useless bounds checking”, both of which were causing “performance regressions”.
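For context on the ‘byte’ complaint: in Java, `byte` is a signed 8-bit type, but arithmetic on bytes is promoted to 32-bit `int` and has to be cast back (wrapping on overflow), and every array access is bounds-checked at runtime. A minimal sketch:

```java
public class ByteDemo {
    public static void main(String[] args) {
        byte a = 100, b = 28;
        // byte + byte promotes to int; casting back wraps on overflow
        byte sum = (byte) (a + b);  // 128 wraps to -128
        int wide = a + b;           // stays 128 when kept as int
        System.out.println("sum=" + sum + " wide=" + wide); // sum=-128 wide=128

        // Every array index is bounds-checked; out-of-range throws
        int[] arr = new int[4];
        try {
            int x = arr[4];
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("bounds check caught index 4");
        }
    }
}
```

Whether those checks actually cost anything is another matter: the JIT elides bounds checks it can prove redundant, which is part of why the complaint reads as folklore today.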
The way mainframe programs are written is: the entire thing is statically typed.
sounds like "don't try to out-optimize the compiler."
Other than that, there are Java, C, and C++ implementations for mainframes; for a while IBM even had a JVM implementation for IBM i (AS/400) that would translate JVM bytecodes into IBM i ones.
Additionally, all of them have POSIX environments (think WSL, but for mainframes), where anything that goes into AIX can run, as can a few selected enterprise distros like Red Hat and SUSE.
I would enjoy an ELI5 for the market differences between commodity chips and these mainframe grade CPUs. Stuff like design, process, and supply chain, anything of interest to a general (nerd) audience.
IBM sells hundreds of Z mainframes per year, right? Each can have a bunch of CPUs, right? So Samsung is producing thousands of Telums per year? That seems incredible.
Given such low volumes, that's a lot more verification and validation, right?
Foundries have to keep running to be viable, right? So does Samsung bang out all the Telums for a year in one burst, then switch to something else? Or do they keep producing a steady trickle?
Not that this info would change my daily work or life in any way. I'm just curious.
TIA.