Jam – Zero Hallucination Big Data Storage Engine
1 point by cithorum | 8/29/2025, 3:41:50 PM | cithorum.ca
What if you could offload training and inferencing onto the SSD’s SATA controller instead of a GPU?
Well, it turns out you can, and the result is:

- lossless hyper-compression with 100x smaller files,
- a nearly 50,000x cost reduction in terms of terabytes learned per watt,
- a 44x reduction in upfront hardware costs,
- indirect encryption that resists man-in-the-middle attacks,
- and the ability to train and infer in real time on mission-critical datasets such as VMs and DNA.
By pushing AI compute to the edge of the SSD controller, away from CPU and GPU memory, we unlocked a new type of storage computing engine with broad practical uses.
JAM was originally developed to speed up transfer times in industrial site-to-site datacenter backups and to significantly reduce the memory footprint of cross-domain learning operations, but it has since evolved into a more general-purpose archival tool.
This post is our first formal announcement since the inception of our JAM product almost 10 years ago; a public demo is available on our website.
We are eager to hear your thoughts.
JAM is simply an archiver, like TAR: you put files in and take files out. The difference is that JAM performs large-scale pattern matching between regions up to 18 exabytes apart while using sub-linear memory, which is what addresses the big data storage problem.
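The post doesn't describe how JAM finds matches that far apart. For context, long-range redundancy detection in archival tools is commonly built on content-defined chunking plus a hash index, whose memory cost scales with the number of unique chunks rather than with the distance between matching regions. A minimal Python sketch of that generic technique (not JAM's actual algorithm; all names here are illustrative):

```python
import hashlib

def chunks(data, mask=0x3FF, window=16):
    """Split bytes into content-defined chunks using a simple rolling hash.

    A boundary is declared wherever the hash of the stream so far matches
    `mask` (on average roughly every mask+1 bytes). Boundaries depend only
    on content, so identical regions yield identical chunks no matter how
    far apart they sit in the stream.
    """
    start, h = 0, 0
    for i, b in enumerate(data):
        h = (h * 31 + b) & 0xFFFFFFFF
        if i - start + 1 >= window and (h & mask) == mask:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]

def dedup_store(data):
    """Keep one copy of each unique chunk; repeats cost only a reference."""
    store, recipe = {}, []
    for c in chunks(data):
        key = hashlib.sha256(c).hexdigest()
        store.setdefault(key, c)   # first occurrence stores the bytes
        recipe.append(key)         # ordered references rebuild the input
    return store, recipe

def restore(store, recipe):
    """Reassemble the original byte stream from the chunk index."""
    return b"".join(store[k] for k in recipe)
```

In this scheme the index holds one digest per unique chunk, so memory grows with the amount of *distinct* content, not with how far apart duplicates are, which is the property the sub-linear claim above would hinge on.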
The original idea was to "train on anything" and then "infer on anything", and JAM was architected to fit that requirement.