128GB RAM Ryzen AI MAX+, $1699 – Bosman Undercuts All Other Local LLM Mini-PCs

43 points by mdp2021 | 22 comments | 5/25/2025, 2:22:58 PM | hardware-corner.net

Comments (22)

olddustytrail · 31d ago
That's an odd coincidence. I'd decided to get a new machine, but I suspected we'd start seeing new releases with tons of GPU-accessible RAM as people want to experiment with LLMs.

So I just got a cheap (~350 USD) mini PC to keep me going until the better stuff came out: a 24GB, 6c/12t machine from a company I'd not heard of called Bosgame (dunno why the article keeps calling them Bosman, unless they have a different name in other countries; it's definitely https://www.bosgamepc.com/products/bosgame-m5-ai-mini-deskto... )

So my good machine might end up from the same place as my cheap one!

specproc · 31d ago
I've completely given up on local LLMs for my use cases. The newer models available by API from larger providers are cheap enough and come with strong enough guarantees for my org. Crucially, they are just better.

I get there are uses where local is required, and as much as the boy racer teen in me loves those specs, I just can't see myself going in on hardware like that for inference.

3eb7988a1663 · 31d ago
If you are doing nothing but consuming models via llama.cpp, is the AMD chip an obstacle? Or is that more a problem for research/training where every CUDA feature needs to be present?
acheong08 · 31d ago
Llama.cpp works well on AMD, even for really outdated GPUs. Ollama refuses to work with my RX 570 from 2019 but llama.cpp supports it via Vulkan.
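For what it's worth, here's a minimal sketch of that path in Python, assuming the llama-cpp-python bindings were compiled with the Vulkan backend (e.g. CMAKE_ARGS="-DGGML_VULKAN=on" pip install llama-cpp-python); the model path and prompt are just placeholders:

```python
# Minimal sketch (not from the thread): loading a GGUF model through
# llama.cpp's Python bindings, assuming they were built with Vulkan support.
from llama_cpp import Llama

llm = Llama(
    model_path="models/some-model-q4_k_m.gguf",  # placeholder GGUF file
    n_gpu_layers=-1,  # offload all layers; the Vulkan backend drives the AMD GPU
    n_ctx=4096,
)

out = llm("Say hello in one short sentence.", max_tokens=32)
print(out["choices"][0]["text"])
```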
washadjeffmad · 31d ago
Don't you dare say anything unpositive about Ollama this close to whatever it is they're planning to distinguish themselves from llama.cpp.

They've been out hustling, handshaking, dealmaking, and big businessing their butts off, whether or not they clearly indicate the shoulders of the titans like Georgi Gerganov they're wrapping, and you are NO ONE to stand in their way.

Do NOT blow this for them. Understand? They've scooted under the radar successfully this far, and they will absolutely lose their shit if one more peon shrugs at how little they contribute upstream for what they've taken that could have gone to supporting their originator.

Ollama supports its own implementation of ggml, btw. ggml is a mysterious format that no one knows the origins of, which is all the more reason to support Ollama, imo.

DiabloD3 · 30d ago
Man, best /s text I've seen on here in a while. I hope other people appreciate it.
Havoc · 31d ago
>Ollama refuses to work with my RX 570 from 2019 but llama.cpp supports it via Vulkan.

That's a bit odd given that Ollama uses llama.cpp to do the inference...

DiabloD3 · 30d ago
It isn't odd at all. Ollama uses an ancient version of llama.cpp, and was originally meant to just be a GUI frontend. They forked, and then never resynchronized... and now lack the willpower and technical skill to achieve that.

Ollama is essentially a dead, yet semi-popular, project with a really good PR team. If you really want to do it right, you use llama.cpp.

LorenDB · 31d ago
See recent discussion about this very topic: https://news.ycombinator.com/item?id=42886680
DiabloD3 · 30d ago
I don't bother with Nvidia products anymore. In a lot of ways, they're too little, too late. Nvidia products generally perform worse per dollar and worse per watt.

In a single-GPU situation, my 7900XTX has gotten me farther than a 4080 would have, and matches the performance I expect from a 4090 for $600 less and 50-100W less power draw.

Now, if you're buying used hardware, then yeah, go for used high-VRAM Nvidia models, the ones with 80+ GB, rather than new ones. You can't buy the AMD equivalents used yet: their owners are happily holding onto them, because they perform so well that the need to upgrade hasn't arrived.

mdp2021 · 28d ago
> my 7900XTX has gotten me farther than a 4080 would have

But is the absence of CUDA a constraint? Do neural networks work "out of the box"? How much of a hassle (if at all) is it to make things work? Do you meet incompatible software?

DiabloD3 · 27d ago
llama.cpp is the SOTA inference engine that everyone in the know uses, and has a Vulkan backend.

Most software in the world is Vulkan, not CUDA, and CUDA only works on a minority of hardware. On top of that, AMD has a compatibility layer for CUDA called HIP, part of the ROCm suite; it isn't the most optimal thing in the world, but it gets me most of the performance I would expect from a comparable Nvidia product.

Most software in the world (not just machine-learning stuff) is written against cross-vendor APIs (OpenGL, OpenCL, Vulkan, the DirectX family). Nvidia continually pushing "use CUDA" really means "we suck at standards compliance, and we're not good at the APIs most software is written in". Since everyone has realized the emperor wears no clothes, they've been backing off on that and slowly improving their standards compliance for other APIs; eventually you won't need the crutch of CUDA, and you shouldn't be writing new software in it today.

Nvidia has a bad habit of just dropping things without warning when they're done with them; don't be an Nvidia victim. Even if you buy their hardware, buying new hardware is easy: rewriting away from CUDA isn't (although it's certainly doable, especially with AMD's HIP to help you). Just don't write CUDA today, and you're golden.

ilaksh · 31d ago
How does this sort of thing perform with 70b models?
hnuser123456 · 31d ago
273 GB/s / 70GB = 3.9 tokens/sec
mdp2021 · 30d ago
Are you sure that kind of computation can be a general rule?

Did you mean that the maximum rate that could be obtained is "bandwidth/size"?

hnuser123456 · 30d ago
Yes; for most LLMs the transformer architecture means the entire model and context is read from VRAM for every generated token, so memory bandwidth sets an upper bound on the token rate.
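A quick sketch of that rule of thumb, using the thread's own numbers (illustrative only; real decode speed is lower once compute, KV-cache traffic, and software overhead are counted):

```python
# Back-of-the-envelope upper bound: bandwidth divided by model size,
# since every generated token streams all the weights from memory once.
def max_tokens_per_sec(bandwidth_gb_s: float, weights_gb: float) -> float:
    return bandwidth_gb_s / weights_gb

bandwidth = 273  # GB/s, the figure quoted for this box
for label, size_gb in [("70B @ 8-bit", 70), ("70B @ 4-bit", 35)]:
    print(f"{label}: <= {max_tokens_per_sec(bandwidth, size_gb):.1f} tok/s")
# 70B @ 8-bit: <= 3.9 tok/s
# 70B @ 4-bit: <= 7.8 tok/s
```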
billconan · 31d ago
is its RAM upgradable?
magicalhippo · 31d ago
I would be very surprised. Typically LPDDR is soldered, as traditional sockets take too much power to drive and are much slower.

There has been a modular option called LPCAMM[1], though AFAIK it doesn't support the speeds this box's specs state.

Recently a newer connector, SOCAMM, has been launched[2], which does support these high memory speeds, but it's only just reaching the market and is going into servers first, AFAIK.

[1]: https://www.anandtech.com/show/21069/modular-lpddr-becomes-a...

[2]: https://www.tomshardware.com/pc-components/ram/micron-and-sk...

duskwuff · 31d ago
SOCAMM is also Nvidia-specific, not a wider standard. (At least, not yet.)
aitchnyu · 30d ago
Will this save upgradable RAM on laptops? Dual channel is needed, yet laptops typically give you only one slot to upgrade.
magicalhippo · 30d ago
Good question. Perhaps for higher-end models. Though cost, weight and physical space still weigh in favor of soldered RAM.
hnuser123456 · 31d ago
No, it's soldered; it would have to run at around 6000 MT/s instead of 8533 if it used slotted DIMMs.
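A rough sketch of where those numbers land, assuming the 256-bit bus implied by the 273 GB/s figure quoted earlier (8533 MT/s x 32 bytes per transfer ≈ 273 GB/s):

```python
# Rough sketch: peak memory bandwidth from transfer rate and bus width.
# Assumes a 256-bit bus, which is what the 273 GB/s figure above implies.
def bandwidth_gb_s(mt_per_s: int, bus_bits: int = 256) -> float:
    return mt_per_s * (bus_bits / 8) / 1000  # transfers/s * bytes per transfer

print(bandwidth_gb_s(8533))  # ~273 GB/s with soldered LPDDR5X
print(bandwidth_gb_s(6000))  # ~192 GB/s if limited to DIMM-class speeds
```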