I generally download the safetensors and make my own GGUFs, usually at Q8_0.
Is there any measurable benefit to your dynamic quants at that quant level?
I looked at your dynamic quant 2.0 page, but all the charts and graphs appear to cut off at Q4.
danielhanchen · 10h ago
Oh, I also upload Q8_K_XL for example, which upcasts important layers to BF16 / F16 as well! The blog at https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs also talks about the 1, 2, 3, 4, 5, 6 and 8bit dynamic GGUFs.
There definitely is a benefit to dynamically selecting layers to be at different bit rates - I wrote about the difference between naively quantizing and selectively quantizing: https://unsloth.ai/blog/deepseekr1-dynamic
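A rough sketch of the idea (not our actual code - the layer names, importance scores and thresholds below are invented for illustration):

    # Conceptual sketch only: more important layers keep more bits.
    def pick_bits(importance: float) -> int:
        if importance > 0.9:
            return 8
        if importance > 0.7:
            return 6
        if importance > 0.5:
            return 4
        if importance > 0.3:
            return 3
        return 2

    # per-layer importance would come from calibration data
    layer_importance = {"attn.q_proj": 0.95, "mlp.gate_proj": 0.40, "mlp.down_proj": 0.85}
    plan = {name: pick_bits(score) for name, score in layer_importance.items()}
    print(plan)  # {'attn.q_proj': 8, 'mlp.gate_proj': 3, 'mlp.down_proj': 6}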
DrPhish · 8h ago
Thanks Daniel. I know you upload them, but I was hoping for some solid numbers on your dynamic q8 vs a naive quant. There doesn't seem to be anything on either of those links to show improvement at those quant levels.
My gut feeling is that there's not enough benefit to outweigh the risk of putting a middleman in the chain of custody from the original model to my NVMe.
However, I can't know for sure without more testing than I have the time or inclination for, which is why I was hoping there had been some analysis you could point me to.
arcanemachiner · 19h ago
Would I notice a difference between the Q2_K and Q2_K_XL variants?
danielhanchen · 18h ago
Oh, I would always use Q2_K_XL :) It uses our dynamic methodology to quantize certain layers at different bit widths, i.e. 2, 3, 4, 5, 6 or 8 bits - the more important the layer, the more bits it keeps.
Squeeze2664 · 16h ago
How do you determine the importance of a layer in this case?
Afaik they have a test bench that they run the model against, and they take the activation data from that.
danielhanchen · 10h ago
Yes, we have around 1 to 3 million tokens of high-quality, self-verified data that we use to calibrate models!
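In spirit, the scoring works something like this (a simplified sketch, not our exact pipeline - here "importance" is just the mean absolute activation of each Linear layer over the calibration batches):

    import torch

    def score_layers(model, calib_batches):
        scores, hooks = {}, []

        def make_hook(name):
            def hook(module, inputs, output):
                # accumulate mean |activation| as a crude importance signal
                scores[name] = scores.get(name, 0.0) + output.detach().abs().mean().item()
            return hook

        for name, module in model.named_modules():
            if isinstance(module, torch.nn.Linear):
                hooks.append(module.register_forward_hook(make_hook(name)))

        with torch.no_grad():
            for batch in calib_batches:   # tokenized calibration text
                model(batch)

        for h in hooks:
            h.remove()
        return scores                     # higher score -> keep more bits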
mycpuorg · 15h ago
@danielhanchen, can we use these steps to fine-tune other Qwen3 models too? Like the 480B Coder or the embedding models?
danielhanchen · 10h ago
Oh for finetuning - we do have some code for MoE finetuning for Qwen at https://github.com/unslothai/unsloth, but we haven't announced it yet!
lostmsu · 13h ago
How usable are 1 and less than 1 bit quants?
danielhanchen · 10h ago
Oh, reminder that 1bit isn't actually pure 1bit, but a mixture of 1 to 8bit! I would still use 2bit dynamic - 1bit dynamic can sometimes go into weird repetitive loops, but it still produces reasonable output.
Larger models with 1bit do better - e.g. 480B Coder at 1bit actually does very well!
aliljet · 14h ago
I see the term 'local inference' everywhere. It's an absurd misnomer without hardware and cost defined. I can also run a coal-fired power plant in my backyard, but in practice there's no reasonable way to make that economical beyond being a toy.
(And I should add, you are a hero for doing this work, only love in my comment, but still a demand for detail$!)
danielhanchen · 10h ago
The trick with llama.cpp and our dynamic quants is that you can actually offload the model to RAM / even an SSD! If you have GPU VRAM + RAM + SSD > the model size (say 90GB for the dynamic 2bit quant), then it'll run well!
I.e. you can actually run it on a local desktop or even your laptop now! You don't need a 90GB GPU, for example - say a 24GB GPU plus 64GB to 128GB of RAM is enough.
The speeds are around 3 to 5 tokens / second, so still ok! I write more about improving speed for local devices here: https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tun...
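For example with llama-cpp-python (just a minimal sketch - the GGUF filename is hypothetical and n_gpu_layers should be tuned to whatever fits your VRAM):

    from llama_cpp import Llama

    llm = Llama(
        model_path="Qwen3-235B-A22B-Instruct-2507-UD-Q2_K_XL.gguf",  # hypothetical filename
        n_gpu_layers=30,  # layers kept in VRAM; the rest stay in RAM (mmap'd, so SSD-backed)
        n_ctx=8192,
    )
    out = llm("Q: What is 2 + 2? A:", max_tokens=8)
    print(out["choices"][0]["text"])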
Hardware and cost is assumed to be approximately desktop-class. If you've got a gaming rig with an RTX 4090 and 128MB RAM, you can run this if you pick the right quant.
cmpxchg8b · 14h ago
128MB? Quantization has come a long way!
danielhanchen · 10h ago
I think they mis-spoke 128GB* :)
regularfry · 9h ago
Wishful thinking there on my part.
danielhanchen · 8h ago
Though technically < 1GB is enough - one can offload it to the SSD, albeit with very slow speeds!
christianqchung · 16h ago
For what it's worth, the Qwen team misreported an ARC-AGI benchmark score for the non-thinking model by a factor of 4, which has not been explained yet. They claimed a score of 41.8% on ARC-AGI 1 [0], which is much higher than what non-chain-of-thought models have been able to achieve (GPT 4.5 got 10%). The ARC team later benchmarked it at 11% [1], which is still a high score, but not the same as 41.8%. It's still probably a significant update on the model though.
[0] https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507
[1] https://x.com/arcprize/status/1948453132184494471
Maybe 41.8% is the score of Qwen3-235B-A22B-Thinking-2507, lol.
11% for the non-thinking model is pretty high
jug · 15h ago
Makes sense, it's in line with Gemini 2.5 Pro in that case. It aligns with their other results in the post.
christianqchung · 14h ago
They made it very clear that they were reporting that score for the non-thinking model [0]. I still don't have any guesses as to what happened here, maybe something format related. I can't see a motivation to blatantly lie on a benchmark which would very obviously be publicly corrected.
[0] https://x.com/JustinLin610/status/1947836526853034403
Could it be the public eval set vs the private eval the ARC team has? The public eval set is slightly easier and may have had some unintentional data leakage, since it was released before their training data cutoff.
sophia01 · 17h ago
If this is actually competitive with Gemini 2.5 Pro, that would be insane, especially for a truly open-weights Apache 2.0 model - let's hope it's not too hacked to shine on benchmarks!
lvl155 · 17h ago
Qwen3 models are solid and, at such a low cost, it doesn’t hurt to pair them with something like Sonnet 4 as a check. I mean it does eliminate a lot of Claude’s “You’re absolutely right!” loops.
apwell23 · 16h ago
> I mean it does eliminate a lot of Claude’s “You’re absolutely right!” loops.
not as scary as "Let me try a completely different approach". Now you have to throw out all the AI slop and start from scratch.
cma · 16h ago
If you aren't using source control
sophia01 · 11h ago
Have been using this all morning for some integral-heavy math for my PhD (trying to bound certain analytically intractable integrals). It's a bit hit-or-miss: it's been able to come up with some pretty impressive bounds, but it also feels like more than half the time it does some really dumb stuff. Compared to Gemini 2.5 Pro it's pretty solid. Its thought traces are really silly sometimes, though: it'll pretend to check websites or "pull out a calculator".
pama · 17h ago
Does anyone here have tips for the code and hardware setup to get the best per-GPU throughput on H200 or B200 hardware for large reasoning traces and inputs of around 10k–40k tokens? Is there an equivalent effort to sglang’s optimization of V3/R1 throughput for this class of models?
Put this prompt into qwen3-thinking, and then compare with gemini 2.5 pro:
---
As candidates for creators, we should first address chaos. What is chaos? If for a given event X in A, all possible events can occur in B, and if such independence is universal, we are faced with chaos. If, however, event X in A limits in some way what can occur in B, a relationship exists between A and B. If X in A limits B unequivocally (we flip a switch, the lamp turns on), the relationship between A and B is deterministic. If X in A limits B in such a way that after X in A, events Y or Z can occur in B, where Y occurs 40 times out of 100 after X in A, while Z occurs 60 times, then the relationship between A and B is probabilistic.
---
You have to rewrite the above acting as David Foster Wallace in 2025. Don't mention the year. Make it postmodern. Refer to current and projected events and trends. AI, robotics, etc. you have full creative control. you can make it long if you wish. change every word. make it captivating and witty. You are acting as a demiurge DFW. You need to pass the Turing test here. Sell it to the reader. Write good, high-brow fiction. Avoid phrases that are typical to LLMs/AI writers.
adamredwoods · 15h ago
Interesting, Qwen won't answer questions about specific historical events (Tiananmen Square).
yunohn · 13h ago
Is it really that interesting to point out for every Chinese oss model release?
ondra · 10h ago
Yes. The original DeepSeek-R1 answered those questions just fine. The newer models seem to be much more brainwashed.
mceachen · 13h ago
Is it not relevant to reiterate the bias (or lobotomization) for people new to this space?
lurking_swe · 1h ago
No, it’s not really relevant. Should I point out that all the models from providers in the west are very “left-leaning” every time one is released? Is that helpful to the technical discussion, in any way?
If you are using an LLM for historical knowledge, questions, or research, then the Chinese censorship is relevant. Or for questions about geopolitics.
OldfieldFund · 14h ago
It's made by Alibaba :)
donedanadone · 15h ago
Evals aside, why are American labs not able to release open-source models at the same speed?
ttul · 15h ago
The Chinese labs can’t compete on inference scale because they have been prevented from accessing the most efficient chips. But since training is a mere fraction of inference these days, they can at least hurt the American companies that are generating billions via inference services.
If you can’t beat ‘em, at least pour some sand into their moat, giving China some time to perfect its own nanometer-scale fabrication. It’s a society-wide effort.
bugglebeetle · 14h ago
They could, they’re just greedy, self-serving, and short-sighted. China’s doing the AI equivalent of Belt and Road to reap tremendous strategic advantages, as well as encourage large-scale domestic innovation.
Eisenstein · 15h ago
They don't release such huge open-weights models because people who run open weights don't have the capability to run them effectively. Instead they concentrate on models like Gemma 3, which ranges from 1B to 27B and, when quantized, fits perfectly into the VRAM you can get on a consumer GPU.
regularfry · 14h ago
That shouldn't be the case here. Yes, it's memory-bandwidth-limited, but this is an MoE with 22B active parameters. As long as the whole thing fits in RAM, it should be tolerable. It's right at the limit, though.
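Back-of-envelope (every number here is an assumption, especially the average bits per weight and the RAM bandwidth):

    total_params  = 235e9   # Qwen3-235B-A22B
    active_params = 22e9    # parameters touched per token (MoE)
    bits_per_w    = 4.5     # rough average for a ~Q4 dynamic quant
    bandwidth     = 60      # GB/s, typical dual-channel DDR5

    ram_gb      = total_params  * bits_per_w / 8 / 1e9   # ~132 GB to hold the weights
    read_gb_tok = active_params * bits_per_w / 8 / 1e9   # ~12 GB read per token
    print(f"~{ram_gb:.0f} GB resident, ~{bandwidth / read_gb_tok:.1f} tok/s upper bound")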
lossolo · 13h ago
> They don't release such huge open weights models because people who run open weights don't have the capability to run them effectively
This is a naive take. There are multiple firms that can host these models for you, or you can host them yourself by renting GPUs. Thousands of firms could also host open-source models independently. They don’t release them because they fear competition and losing their competitive advantage. If it weren’t for Chinese companies open-sourcing their models, we’d be limited to using closed-source, proprietary models from the U.S., especially considering the recent LLaMA fiasco.
Eisenstein · 11h ago
Given the assumption that Google has Google's own interests at heart, the question isn't 'why doesn't Google release models that allow other companies to compete with them' but 'what is the reasoning behind the models they release' and that reasoning is 'for research and for people to use personally on their own hardware'.
We should be asking why Meta released the large Llama models and why the Chinese are releasing large models. I can't figure out a reason for it except prestige.
tosh · 19h ago
If the evals hold up, this is a mind-blowing new weight-to-capability ratio.
edit: afaiu deepseek r1 was 671B with 37B active params
energy123 · 16h ago
Am I the only one who ignores evals, unless they're holdout datasets like a new IMO competition, or at a minimum evals with a semi-private test set like ARC-AGI 2? How can we trust that these companies don't put copies of these benchmarks in their training data? They can get whatever score they want, up to 100%, easily, by just training on that data sufficiently many times.
christianqchung · 16h ago
There is something of a middle ground here for benchmark skepticism. Big companies wouldn't want a massive divergence between benchmarks and real performance that people could actually notice, and I'd argue for the most part that this hasn't happened too much (although above I posted a problem with Qwen and ARC). However, finetunes by random people/groups don't carry the same downside so I'm basically skeptical of all finetunes before using them for a particular case.
energy123 · 16h ago
I don't believe these companies see their customers as being able to tell the difference between a real GPQA score and a GPQA score that's fudged upwards by 10%. Look at Elon Musk presenting himself to the world as a Path of Exile expert when in reality he likely hired an expert to level up his account while he himself is an amateur. They think we are idiots and will lie to us to capture market share and lock us into their ecosystem.
christianqchung · 14h ago
That's true, I certainly wouldn't be able to tell. I was thinking on the order of a 20% score vs 70%, but I realize that's not a very compelling range for my point when people are boasting about <5% shifts.
osti · 14h ago
For the coding benchmarks, does anyone know what are OJBench and CFEval?
nonhaver · 18h ago
impressive evals. i wonder how much of that can be attributed to the enhanced context understanding. i feel like context handling / length is the bottleneck for the majority of commercial models.
Eisenstein · 15h ago
I don't know, I think that extending context windows is actually detrimental because people assume they can just dump things in there until it fills up. You still have to deal with the limited attention that the models have, and only filling the context with things relevant to the particular thing you are trying to solve is going to be the most effective approach. If you have too much information for it to fit into a 128K window, I think you just have too much information. The entirety of Don Quixote at over 1000 pages is less than 64,000 tokens.
CamperBob2 · 14h ago
That sounds low by about 10x, assuming Don Quixote has 430k words (per Google).
Still, yes, I don't know of a single model that doesn't go off the rails if you actually try to take advantage of its context length specification.
Eisenstein · 13h ago
Well, I loaded up Llama 3 and downloaded the novel, and for the English translation we get 545,997 tokens and in the original Spanish 653,981 tokens. So my estimate was indeed off by an order of magnitude. Thanks for the correction.
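For anyone who wants to check this sort of thing themselves, something along these lines works (the tokenizer repo and file path are assumptions, not exactly what I ran):

    from transformers import AutoTokenizer

    # Llama 3's tokenizer is gated on Hugging Face; any tokenizer gives the same order of magnitude
    tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
    text = open("don_quixote_en.txt", encoding="utf-8").read()  # local copy of the novel
    print(len(tok(text)["input_ids"]))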
Alifatisk · 16h ago
Alibaba has been on fire lately, do they even sleep?