What if you could train and run inference without any hallucinations?
What if you could offload training and inference onto the SSD's SATA controller instead of a GPU?
It turns out you can, and the result is: lossless hyper-compression, files 100x smaller, a nearly 50,000x cost reduction in terms of terabytes learned per watt, a 44x reduction in upfront hardware costs, indirect encryption that resists man-in-the-middle attacks, and the ability to train and infer on mission-critical datasets such as VMs and DNA in real time.
By pushing AI compute to the edge of the SSD controller (away from CPU and GPU memory), we unlocked a new type of storage computing engine with wide-ranging practical uses.
JAM was originally developed to speed up transfer times in industrial site-to-site datacenter backups and to significantly reduce the memory footprint required for cross-domain learning operations, but it has grown into a more general-purpose archival tool.
This post is our first formal announcement since the inception of our JAM product almost 10 years ago; a public demo is available on our website.
We are eager to hear your thoughts.
anon84873628 · 4h ago
Honestly, my thought is that you shoved AI buzzwords into something that doesn't need them. I can't figure out what the product is really supposed to do... Enable rolling upgrades to the JVM or some AI magic? Those sound like awfully different domains for one product to be good at both.
cithorum · 4h ago
Thank you for the honest feedback. This is quite a bespoke product with a wide range of use cases, so the cross-domain features can sound hard to believe at first.
JAM is merely an archiver, like TAR: you put files into it and take files out of it, except that JAM performs large-scale pattern matching across spans of up to 18 exabytes while using sub-linear memory, thus solving the big-data storage problem.
The original idea was to "train on anything" and then "infer on anything," and JAM was architected to fit that requirement.