If this is the full fp16 quant, you'd need 2TB of memory to use with the full 131k context.
With 44GB of SRAM per Cerebras chip, you'd need 45 chips chained together. $3m per chip. $135m total to run this.
For comparison, you can buy a DGX B200 with 8x B200 Blackwell chips and 1.4TB of memory for around $500k. Two systems would give you 2.8TB memory which is enough for this. So $1m vs $135m to run this model.
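For anyone checking the arithmetic, here's a rough back-of-envelope sketch of where these numbers come from (assuming dense fp16 weights only, ignoring KV cache and runtime overhead):

    # Illustrative sizing only -- not a deployment plan.
    PARAMS = 235e9
    BYTES_PER_PARAM = 2                              # fp16
    weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
    print(f"weights alone: {weights_gb:.0f} GB")     # ~470 GB

    SRAM_PER_WAFER_GB = 44                           # on-chip SRAM per Cerebras wafer
    print(f"wafers if ~2 TB had to sit in SRAM: {2000 / SRAM_PER_WAFER_GB:.0f}")   # ~45

    DGX_B200_MEM_GB = 1440                           # ~1.4 TB of HBM per DGX B200 system
    print(f"DGX B200 systems for ~2 TB: {2000 / DGX_B200_MEM_GB:.1f}")             # ~1.4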
It's not very scalable unless you have some ultra-high-value task that needs super fast inference speed. Maybe hedge funds or some sort of financial markets?
PS. The reason why I think we're only in the beginning of the AI boom is because I can't imagine what we can build if we can run models as good as Claude Opus 4 (or even better) at 1500 tokens/s for a very cheap price and tens of millions of context tokens. We're still a few generations of hardware away I'm guessing.
Voloskaya · 5h ago
> With 44GB of SRAM per Cerebras chip, you'd need 45 chips chained together. $3m per chip. $135m total to run this.
That's not how you would do it with Cerebras. 44GB is SRAM, i.e. on-chip memory, not the HBM where you would store most of the params.
For reference, one GB200 has only 126MB of SRAM; if you tried to estimate how many GB200s you would need for a 2TB model just by looking at the L2 cache size, you would get 16k GB200s, i.e. ~$600M, obviously way off.
Cerebras uses a different architecture than Nvidia, where the HBM is not directly packaged with the chips; this is handled by a different system so you can scale memory and compute separately. Specifically you can use something like MemoryX to act as your HBM, which will be high-speed interconnected to the chip's SRAM, see [1]. I'm not at all an expert in Cerebras, but IIRC you can connect up to something like 2PB of memory to a single Cerebras chip, so almost 1000x the FP16 model.
[1]: https://www.cerebras.ai/blog/announcing-the-cerebras-archite...
> That's not how you would do it with Cerebras. 44GB is SRAM, so on chip memory, not HBM memory where you would store most of the params. For reference one GB200 has only 126MB of SRAM, if you tried to estimate how many GB200 you would need for a 2TB model just by looking at the L2 cache size you would get 16k GB200 aka ~600M$, obviously way off.
Yes but Cerebras achieves its speed by using SRAM.
Voloskaya · 2h ago
There is no way not to use SRAM on a GPU/Cerebras/most accelerators. This is where the cores fetch the data.
But that doesn’t mean you are only using SRAM, that would be impractical. Just like using a CPU just by storing stuff in the L3 cache and never going to the RAM.
Unless I am missing something from the original link, I don’t know how you got to the conclusion that they only used SRAM.
IshKebab · 5m ago
> Just like using a CPU just by storing stuff in the L3 cache and never going to the RAM. Unless I am missing something from the original link, I don’t know how you got to the conclusion that they only used SRAM.
That's exactly how Graphcore's current chips work, and I wouldn't be surprised if that's how Cerebras's wafer works. It's probably even harder for Cerebras to use DRAM because each chip in the wafer is "landlocked" and doesn't have an easy way to access the outside world. You could go up or down, but down is used for power input and up is used for cooling.
You're right it's not a good way to do things for memory-hungry models like LLMs, but all of these chips were designed before it became obvious that LLMs are where the money is. Graphcore's next chip (if they are even still working on it) can access a mountain of DRAM with very high bandwidth. I imagine Cerebras will be working on that too. I wouldn't be surprised if they abandon WSI entirely due to needing to use DRAM.
aurareturn · 2h ago
I know Groq chips load the entire model into SRAM. That's why it can be so fast.
So if Cerebras uses HBM to store the model but streams weights into SRAM, I really don't see the advantage long term over smaller chips like GB200, since both architectures use HBM.
The whole point of having a wafer chip is that you limit the need to reach out to external parts for memory since that's the slow part.
Voloskaya · 1h ago
> I really don't see the advantage long term over smaller chips like GB200 since both architectures use HBM.
I don’t think you can look at those things binarily. 44GB of SRAM is still a massive amount. You don’t need infinite SRAM to get better performance. There is a reason Nvidia keeps increasing the L2 cache size with every generation rather than just sticking with 32MB: having a bit more really does change things. The more SRAM you have, the more you are able to mask communication behind computation. You can imagine with 44GB being able to load the weights of layer N+1 into SRAM while computing layer N, thereby entirely negating the penalty of going to HBM (same idea as FSDP).
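A minimal sketch of that overlap idea in PyTorch terms (illustrative only; real FSDP/inference engines use pinned host memory and dedicated copy engines, and the layers_cpu list here is a hypothetical stand-in for off-device layers):

    import torch

    copy_stream = torch.cuda.Stream()

    def forward_with_prefetch(layers_cpu, x):
        current = layers_cpu[0].to("cuda")
        for i in range(len(layers_cpu)):
            nxt = None
            if i + 1 < len(layers_cpu):
                with torch.cuda.stream(copy_stream):
                    # start moving layer i+1 while layer i computes below
                    nxt = layers_cpu[i + 1].to("cuda", non_blocking=True)
            x = current(x)                            # compute layer i
            if nxt is not None:
                # make sure the prefetch has landed before using it
                torch.cuda.current_stream().wait_stream(copy_stream)
                current = nxt
            # a real implementation would also evict layer i here to cap memory
        return x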
vlovich123 · 5m ago
> You can imagine with 44GB being able to load the weights of layer N+1 into SRAM while computing layer N, thereby entirely negating the penalty of going to HBM (same idea as FSDP).
You would have to have an insanely fast bus to prevent I/O stalls with this. With a 235B fp16 model you’d be streaming 470GiB of data every graph execution. To do that at 1000 tok/s, you’d need a bus that can deliver a sustained ~500 TiB/s. Even if you do a 32-wide MoE model, that’s still about 15 TiB/s of bandwidth you’d need from the HBM to avoid stalls at 1000 tok/s.
It would seem like this either isn’t fp16 or this is indeed likely running completely out of SRAM.
Of course Cerebras doesn’t use a dense representation, so these memory numbers could be way off, and maybe it is an SRAM+DRAM combo.
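A quick sanity check of those numbers, using the model's actual A22B active-parameter count instead of a 1/32 guess (still ignoring KV-cache traffic):

    TiB = 2**40
    tok_per_s = 1000
    dense_bytes  = 235e9 * 2      # ~470 GB of fp16 weights
    active_bytes = 22e9 * 2       # ~44 GB actually touched per token (A22B MoE)
    print(dense_bytes  * tok_per_s / TiB)   # ~427 TiB/s if every weight is read per token
    print(active_bytes * tok_per_s / TiB)   # ~40 TiB/s with MoE sparsity -- still far
                                            # beyond a single HBM stack (~8 TB/s on a B200)

Either way, the per-token weight traffic is enormous, which supports the point above.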
qeternity · 1h ago
> I don’t know how you got to the conclusion that they only used SRAM.
Because they are doing 1,500 tokens per second.
throwawaymaths · 5h ago
what are the bandwidth/latency of memoryX? those are the key parameters for inference
Zenst · 4h ago
Well, comparing MemoryX to H100 HBM3, the key details are that MemoryX has lower latency but also far lower bandwidth. However, the memory on Cerebras scales a lot further than Nvidia's: you need a cluster of H100s just to fit a model, as that's the only way to scale the memory, so Cerebras is better suited to that aspect. Nvidia does its scaling in tooling, while Cerebras does theirs in design via their silicon approach.
That's my take on it all; there aren't many apples-to-apples comparisons to work from for these two systems.
perfobotto · 4h ago
No way off-chip HBM has the same or better bandwidth than on-chip SRAM.
0xCMP · 2h ago
> MemoryX has lower latency, but also far lower bandwidth
imtringued · 4h ago
Yeah sure, but if you do that you are heavily dropping the token/s for a single user. The only way to recover from that is continuous batching. This could still be interesting if the KV caches of all users fit in SRAM though.
Voloskaya · 4h ago
> but if you do that you are heavily dropping the token/s for a single user.
I don’t follow what you are saying and what “that” is specifically. Assuming it’s referencing using HBM and not just SRAM, this is not optional on a GPU; SRAM is many orders of magnitude too small. Data is constantly flowing between HBM and SRAM by design, and to get data in/out of your GPU you have to go through HBM first, you can’t skip that.
And while it is quite massive on a Cerebras system it is also still too small for very large models.
yvdriess · 3h ago
> With 44GB of SRAM per Cerebras chip, you'd need 45 chips chained together. $3m per chip. $135m total to run this.
That on-chip SRAM is purely temporary working memory and does not need to hold the entire model weights. The Cerebras chip works on a sparse weight representation, streams the non-zeros off their external memory server, and the cores work in a transport-triggered dataflow manner.
twothreeone · 1h ago
I think you're missing an important aspect: how many users do you want to support?
> For comparison, you can buy a DGX B200 with 8x B200 Blackwell chips and 1.4TB of memory for around $500k. Two systems would give you 2.8TB memory which is enough for this.
That would be enough to support a single user. If you want to host a service that provides this to 10k users in parallel your cost per user scales linearly with the GPU costs you posted. But we don't know how many users a comparable wafer-scale deployment can scale to (aside from the fact that the costs you posted for that are disputed by users down the thread as well), so your comparison is kind of meaningless in that way, you're missing data.
coolspot · 37m ago
> That would be enough to support a single user. If you want to host a service that provides this to 10k users in parallel your cost per user scales linearly with the GPU costs you posted.
No. Magic of batching allows you to handle multiple user requests in parallel using the same weights with little VRAM overhead per user.
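A toy illustration of the point (shapes arbitrary, assumes a CUDA device):

    import torch

    d_model, batch = 4096, 64
    W = torch.randn(d_model, d_model, dtype=torch.float16, device="cuda")  # read once
    x = torch.randn(batch, d_model, dtype=torch.float16, device="cuda")    # 64 users' tokens
    y = x @ W    # W is fetched from memory once per matmul, not once per user
    # The per-user memory that does grow is the KV cache, not another copy of W.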
smcleod · 5h ago
There is no reason to run models for inference at static fp16. Modern quantisation formats dynamically assign precision to the layers that need it; an average of 6bpw is practically indistinguishable from full precision, or 8bpw if you really want to squeeze every tiny last drop out of it (although it's unlikely the difference will be detectable). That is a huge memory saving.
vlovich123 · 1m ago
What quantization formats are these? All the OSS ones from GGML apply a uniform quantization
nhecker · 4h ago
> dynamically assign precision to the layers that need them
Well now I'm curious; how is a layer judged on its relative need for precision? I guess I still have a lot of learning to do w.r.t. how quantization is done. I was under the impression it was done once, statically, and produced a new giant GGUF blob or whatever format your weights are in. Does that assumption still hold true for the approach you're describing?
irthomasthomas · 4h ago
Last I checked, they ran some sort of evals before and after quantisation and measured the effect. E.g. ExLlamaV2 measures the loss while reciting Wikipedia articles.
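A hedged sketch of that calibration idea (not ExLlamaV2's actual code): quantize one weight matrix at a time, measure the output error it causes on calibration data, and spend more bits where the error is largest.

    import torch

    def quantize_symmetric(w, bits):
        qmax = 2 ** (bits - 1) - 1
        scale = w.abs().max() / qmax
        return (w / scale).round().clamp(-qmax, qmax) * scale

    def sensitivity(w, x_calib, bits):
        # relative output error introduced by quantizing this layer to the given bits
        ref = x_calib @ w.T
        q = x_calib @ quantize_symmetric(w, bits).T
        return ((ref - q).norm() / ref.norm()).item()

    # A bit-allocation pass would then spend a ~6 bpw average budget on the most
    # sensitive layers and push the rest down to lower precision.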
nroets · 22m ago
1500 tokens/s is 5.4 million per hour. According to the document it costs $1.20 x 5.4 = $6.48 per hour.
Which is not enough to even pay the interest on one $3m chip.
What am I missing here ?
makestuff · 43m ago
I agree there will be some breakthrough (maybe by Nvidia or maybe someone else) that allows these models to run insanely cheap and even locally on a laptop. I could see a hardware company coming out with some sort of specialized card that is just for consumer grade inference for common queries. That way the cloud can be used for sever side inference and training.
Adfeldman · 2h ago
Our chips don't cost $3M. I'm not sure where you got that number, but it's wildly incorrect.
aurareturn · 2h ago
So how much does it cost? Google search return $3m. Here's your chance to tell us your real price if you disagree.
1W6MIC49CYX9GAP · 1h ago
He also didn't argue about the rest of the math so it's likely correct that the whole model needs to be in SRAM :)
UltraSane · 11m ago
Is it actually $4M?
qualeed · 2h ago
In that case, mind providing a more appropriate ballpark?
npsomaratna · 2h ago
Are you the CEO of Cerebras? (Guessing from the handle)
stingraycharles · 5h ago
So, does that mean that in general for the most modern high end LLM tools, to generate ~1500 tokens per seconds you need around $500k in hardware?
Checking: Anthropic charges $70 per 1 million output tokens. @1500 tokens per second that would be around 10 cents per second, or around $8k per day.
The $500k sounds about right then, unless I’m mistaken.
andruby · 4h ago
62 days to break even, that would be a great investment
lordofgibbons · 5h ago
Almost everyone runs LLM inference at fp8 - for all of the open models anyway. You only see performance drop off below fp8.
stingraycharles · 4h ago
Isn’t it usually mixed? I understood that Apple even uses fp1 or fp2 in the hardware-embedded models they ship on their phones, but as far as I know it’s typically a whole bunch of different precisions.
llm_nerd · 4h ago
Small bit of pedantry: While there are 1 and 2-bit quantized types used in some aggressive schemes, they aren't floating point so it's inaccurate to preface them with FP. They are int types.
The smallest real floating point type is FP4.
EDIT: Who knew that correctness is controversial. What a weird place HN has become.
xtracto · 3h ago
I wonder if the fact that we use "floating point" is itself a bottleneck that can be improved.
Remembering my CS classes, storing an FP value requires a mantissa and an exponent; that's a design decision. Also remembering some assembler classes, int arithmetic is way faster than FP.
Could there be a better "representation " for the numbers needed in NN that would provide the accuracy of floating point but provide faster operations? (Maybe even allow to perform required operations as bitwise ops. Kind of like the left/right shifting to double/half ints. )
kadushka · 3h ago
> Could there be a better
Yes. Look up “block floating point”.
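A minimal sketch of the idea (not any specific hardware format): a block of values shares one power-of-two scale, and each value stores only a small integer mantissa, so most of the math stays integer.

    import numpy as np

    def bfp_quantize(block, mantissa_bits=4):
        qmax = 2 ** (mantissa_bits - 1) - 1
        # one shared power-of-two scale, chosen so the largest value still fits
        scale = 2.0 ** np.ceil(np.log2(np.abs(block).max() / qmax + 1e-30))
        mantissas = np.clip(np.round(block / scale), -qmax - 1, qmax).astype(np.int8)
        return mantissas, scale

    def bfp_dequantize(mantissas, scale):
        return mantissas.astype(np.float32) * scale

    x = np.random.randn(32).astype(np.float32)
    m, s = bfp_quantize(x)
    print(np.abs(x - bfp_dequantize(m, s)).max())   # worst-case error for this block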
llm_nerd · 3h ago
Sure, we have integers of many sizes, fixed point, and floating point, all of which are used in neural networks. Floating points are ideal when the scale of a value can vary tremendously, which is of obvious importance for gradient descent, and then after we can quantize to some fixed size.
A modern processor can do something similar to an integer bit shift about as quickly with a floating point, courtesy of FSCALE instructions and the like. Indeed, modern processors are extremely performant at floating point math.
xtracto · 3h ago
Shit I'd love to do R&D on this.
thegeomaster · 5h ago
You're assuming that the whole model has to be in SRAM.
htrp · 2h ago
>Maybe hedge funds or some sort of financial markets?
Definitely not hedge funds / quant funds.
You'd just buy a dgx
jsemrau · 3h ago
>Maybe hedge funds or some sort of financial markets?
I'd think that HFT is already mature and doesn't really benefit from this type of model.
rbanffy · 3h ago
True, but if the hardware could be “misused” for HFT, it’d be awesome.
derefr · 1h ago
> We're still a few generations of hardware away I'm guessing.
I don't know; I think we could be running models "as good as" Claude Opus 4, a few years down the line, with a lot less hardware — perhaps even going backwards, with "better" later models fitting on smaller, older — maybe even consumer-level — GPUs.
Why do I say this? Because I get the distinct impression that "throwing more parameters at the problem" is the current batch of AI companies' version of "setting money on fire to scale." These companies are likely leaving huge amounts of (almost-lossless) optimization on the table, in the name of having a model now that can be sold at huge expense to those few customers who really want it and are willing to pay (think: intelligence agencies automating real-time continuous analysis of the conversations of people-of-interest). Having these "sloppy but powerful" models, also enables the startups themselves to make use of them in expensive one-time batch-processing passes, to e.g. clean and pluck outliers from their training datasets with ever-better accuracy. (Think of this as the AI version of "ETL data migration logic doesn't need to be particularly optimized; what's the difference between it running for 6 vs 8 hours, if we're only ever going to run it once? May as well code it in a high-level scripting language.")
But there are only so many of these high-value customers to compete over, and only so intelligent these models need to get before achieving perfect accuracy on training-set data-cleaning tasks can be reduced to "mere" context engineering / agentic cross-validation. At some point, an inflection point will be passed where the marginal revenue to be earned from cost-reduced volume sales outweighs the marginal revenue to be earned from enterprise sales.
And at that point, we'll likely start to see a huge shift in in-industry research in how these models are being architected and optimized.
No longer would AI companies set their goal in a new model generation first as purely optimizing for intelligence on various leaderboards (ala the 1980s HPC race, motivated by serving many of the same enterprise customers!), and then, leaderboard score in hand, go back and re-optimize to make the intelligent model spit tokens faster when run on distributed backplanes (metric: tokens per watt-second).
But instead, AI companies would likely move to a combined optimization goal of training models from scratch to retain high-fidelity intelligent inference capabilities on lower-cost substrates — while minimizing work done [because that's what OEMs running local versions of their models want] and therefore minimizing "useless motion" of semantically-meaningless tokens. (Implied metric: bits of Shannon informational content generated per (byte-of-ram x GPU FLOP x second)).
mehdibl · 6h ago
It seems this news is "outdated": it's from Jul 8, and it might have been picked up confusing this model with yesterday's Qwen 3 Coder 405B release, which has different specs.
simonw · 4h ago
I initially thought this was about the Qwen release from two days ago, Qwen3-235B-A22B-Instruct-2507 - https://simonwillison.net/2025/Jul/22/qwen3-235b-a22b-instru... - but that's a no-reasoning model and the Cerebras announcement talks about reasoning, which tipped me off that this was Qwen's Qwen3-235B-A22B from April.
(These model names are so confusing.)
aitchnyu · 4h ago
Is Qwen3 235B A22B in OpenRouter the stock version or Cerebras version?
I'm eagerly awaiting Qwen 3 Coder being available on Cerebras.
I run plenty of agent loops and the speed makes a somewhat interesting difference in time "compression". Having a Claude 4 Sonnet-level model running at 1000-1500 tok/s would be extremely impressive.
To FEEL THE SPEED, you can either try it yourself on the Cerebras Inference page, through their API, or for example on Mistral / Le Chat (https://chat.mistral.ai/chat) with their "Flash Answers" (powered by Cerebras: https://www.cerebras.ai/blog/mistral-le-chat). Iterating on code with 1000 tok/s makes it feel even more magical.
scosman · 4h ago
Exactly. I can see my efficiency going up a ton with this kind of speed. Every time I'm waiting for agents my mind loses some focus and context. Running parallel agents gets more speed, but at the cost of focus. Near-instant iteration loops in Cursor would feel magical (even more magical?).
It will also impact how we work: interactive IDEs like Cursor probably make more sense than CLI tools like Claude code when answers are nearly instant.
vidarh · 3h ago
I was just thinking the opposite. If the answers are this instant, then, subject to cost, I'd be tempted to have the agent fork and go off and try a dozen different things, and run a review process to decide which approach(es) or parts of approaches to present to the user.
It opens up a whole lot of use cases that'd be a nightmare if you have to look at each individual change.
mogili · 6h ago
Same.
However, I think Cerebras first needs to get the APIs to be more OpenAI-compliant. I tried their existing models with a bunch of coding agents (including Cline, which they did a PR for) and they all failed to work, either due to a 400 error or tool calls not being formatted correctly. Very disappointed.
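For reference, this is the kind of OpenAI-style call those agents make under the hood; the base URL and model id below are assumptions (check Cerebras's docs), and tool/function-call formatting is exactly where the breakage described above tends to show up:

    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.cerebras.ai/v1",    # assumed OpenAI-compatible endpoint
        api_key="YOUR_CEREBRAS_API_KEY",
    )
    resp = client.chat.completions.create(
        model="qwen-3-235b-a22b",                 # illustrative model id
        messages=[{"role": "user", "content": "Write a binary search in Python."}],
    )
    print(resp.choices[0].message.content)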
meowface · 6h ago
I just set up Groq with Kimi K2 the other day and was blown away by the speed.
Deciding if I should switch to Qwen 3 and Cerebras.
(Also, off-topic, but the name reminds me of cerebrates from Starcraft. The Zerg command hierarchy lore was fascinating when I was a young child.)
throwaw12 · 5h ago
Have you used Claude Code and how do you compare the quality to Claude models? I am heavily invested in tools around Claude, still struggling to make a switch and start experimenting with other models
meowface · 2h ago
I still exclusively use Claude Code. I have not yet experimented with these other models for practical software development work.
A workflow I've been hearing about is: use Claude Code until quota exhaustion, then use Gemini CLI with Gemini 2.5 Pro free credits until quota exhaustion, then use something like a cheap-ish K2 or Qwen 3 provider, with OpenCode or the new Qwen Code, until your Claude Code credits reset and you begin the cycle anew.
bredren · 5h ago
Are you using Claude code or the web interface? I would like to try this with CC myself, apparently with some proxy use an OpenAI compatible LLM can be swapped in.
throwaw12 · 5h ago
I am using Claude code, my experience with it so far is great. I use it primarily from terminal, this way I stay focused while reading code and CC doing its job in the background.
bredren · 5h ago
I’ve heard it repeated that using the env vars you can use GPT models, for example.
But then also that running a proxy tool locally is needed.
I haven’t tried this setup, and can’t say offhand if Cerebras’ hosted qwen described here is “OpenAI” compatible.
I also don’t know if all of the tools CC uses out of the box are supported in the most compatible non-Anthropic models.
Can anyone provide clarity / additional testimony on swapping out the engine on Claude Code?
derac · 4h ago
I've used Kimi K2, it works well. Personally I'm using Claude Code Router: https://github.com/musistudio/claude-code-router
The issue is that most Groq models are limited in context, as that costs a lot of memory.
zozbot234 · 5h ago
Obligatory reminder that 'Groq' and 'Grok' are entirely different and unrelated. No risk of a runaway Mecha-Hitler here!
throwawaymaths · 5h ago
instead risk of requiring racks of hardware to run just one model!
logicchains · 6h ago
It'll be nice if this generates more pressure on programming language compilation times. If agentic LLMs get fast enough that compilation time becomes the main blocker in the development process, there'll be significant economic incentives for improving compiler performance.
doctoboggan · 2h ago
Has anyone with a lot of experience with Claude Code and sonnet-4 tried Claude Code with Qwen3-Coder? The fast times enabled here by Cerebras are enticing, but I wouldn't trade a speedup for a worse quality model.
AgentMatrixAI · 2h ago
Haven't tried Qwen, but I've used these "near instant token" providers like Groq, and another one that uses a diffusion model to generate code via LLaMA, and the results weren't satisfactory.
Now if something like Gemini 2.5 Pro or even Sonnet 4 could run on Cerebras, generating tens of thousands of tokens of code in a few seconds, that could really make a difference.
nisten · 3h ago
"Full 131k" context , actually the full context is double that at 262144 context and with 8x yarn mutiplier it can go up to 2million. It looks like even full chip scale Cerebras has trouble with context length, well, this is a limitation of the transformer architechture itself where memory requirements scale ~linearly and compute requirements roughly quadratically with the increase in kv cache.
Anyway, YOU'RE NOT SERVING FULL CONTEXT CEREBRAS, YOU'RE SERVING HALF. Also what quantization exactly is this, can the customers know?
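Rough KV-cache math behind the context complaint (the layer/head numbers are placeholders approximating the published Qwen3-235B config, so treat the absolute figures as illustrative):

    def kv_cache_gb(context_len, n_layers=94, n_kv_heads=4, head_dim=128, bytes_per=2):
        # K and V tensors per layer, per token, at fp16
        return 2 * n_layers * n_kv_heads * head_dim * bytes_per * context_len / 1e9

    for ctx in (32_768, 131_072, 262_144):
        print(f"{ctx:>7} tokens -> ~{kv_cache_gb(ctx):.0f} GB of KV cache per sequence")
    # Memory grows linearly with context; attention FLOPs grow roughly quadratically.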
doubtfuluser · 5h ago
Very impressive speed.
A bit OT: what is the current verdict on Qwen, Kimi et al. when it comes to censorship / bias concerning narratives not allowed in the origin country?
jszymborski · 4h ago
The Qwen models are, anecdotally, probably some of the best open weight models, particularly the MoE models.
They are also, anecdotally, super scary censored. Asking it if anything "interesting has happened in Tiananmen Square?" And then refining with "any notable protests?" And finally "maybe something to do with a tank"... All you get is vague allusions to the square being a beautiful place with a rich history.
impossiblefork · 14m ago
Do you think it's done so carefully that you suspect that they have perhaps even removed texts mentioning the Tiananmen square massacre from the training set?
pjs_ · 6h ago
Cerebras is truly one of the maddest technical accomplishments that Silicon Valley has produced in the last decade or so. I met Andy seven or eight years ago and I thought they must have been smoking something - a dinner plate sized chip with six tons of clamping force? They made it real, and in retrospect what they did was incredibly prescient
cherryteastain · 5h ago
The concept is super cool but does anyone actually use them instead of just buying Nvidia?
Most people don't buy nvidia; they use a provider, like Openrouter.
nickpsecurity · 3h ago
It's a modern take on an old idea. I first saw it in European research for wafer-scale, analog, neural networks. I found another project while looking for it. I'll share both:
https://www.kip.uni-heidelberg.de/Veroeffentlichungen/downlo...
https://archive.ll.mit.edu/publications/journal/pdf/vol02_no...
The second's patents would also be long-expired since it's from 1989.
Sheeeesh. 21 petabytes per second of memory bandwidth? That’s bonkers.
vFunct · 5h ago
Wafer-scale integration was done decades before.
rbanffy · 3h ago
Who remembers “wafer scale integration” from the 1980s?
Insane that Cerebras succeeded where everyone else failed for 5 decades.
bluelightning2k · 5h ago
This is (slightly) old news from July 8, resurfaced due to the Qwen 3 coder.
I think the gist of this thread is entirely: "please do the same for Qwen 3 coder", with us all hoping for:
a) A viable alternative to Sonnet 3
b) Specifically a faster and cheaper alternative
mehdibl · 6h ago
Would be great if they support the latest Qwen 3 405B launched yesterday and more aimed at agentic work/coding.
Inviz · 3h ago
I contacted their sales team before, cerebras started at $1500 a month at that time, and the limits were soooooo small. Did it get better?
Edit: Looks like it did. They both introduced pay as you go, and have prepaid limits too at $1500. I wonder if they have any limitations on parallel execution for pay as you go...
poly2it · 6h ago
Very impressive speed. With a context window of 40K however, usability is limited.
diggan · 6h ago
The first paragraph contains:
> Cerebras Systemstoday [sic] announced the launch of Qwen3-235B with full 131K context support on its inference cloud platform
Then later:
> Cline users can now access Cerebras Qwen models directly within the editor—starting with Qwen3-32B at 64K contexton the free tier. This rollout will expand to include Qwen3-235B with 131K context
Not sure where you get the 40K number from.
poly2it · 6h ago
I looked at their OpenRouter page, which they link to in their pricing section. Odd discrepancy.
mehdibl · 6h ago
You get the extra 40k when you use paid API calls instead of free-tier API calls.
wild_egg · 6h ago
Post says 131k context though? What did I miss?
mehdibl · 6h ago
The PR is a bit confusing between 32k/64k and the 131k you get on the paid API.
Also, this model https://huggingface.co/Qwen/Qwen3-235B-A22B is natively 32k, so the 64k and 131k use RoPE scaling, which is not the best for effective context. Meanwhile https://qwenlm.github.io/blog/qwen3-coder/ is 256k native: https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct
If they do the same for the coding model they will have a killer product.
rbanffy · 2h ago
If they do that for chip design and it successfully iterates the design into the next generation on 2nm or less, it’ll be even more ludicrous.
jug · 4h ago
They better not cheat me with a quantized version!
OldfieldFund · 1h ago
I tried the non-quantized version, and it was pretty bad at creative writing compared to Kimi K2. Very deterministic and every time I regenerated the same prompt I got the usual AI phrases like "the kicker is:", etc. Kimi was much more natural.
cedws · 6h ago
With this kind of speed you could build a large thinking stage into every response. What kind of improvement could you expect in benchmarks from having say 1000 tokens of thinking for every response?
lionkor · 6h ago
Thinking can also make the responses worse; AIs don't "overthink", instead they start throwing away constraints and convincing themselves of things that are tangential or opposite to the task.
I've often observed thinking/reasoning to cause models to completely disregard important constraints, because they essentially can act as conversational turns.
rbanffy · 2h ago
> start throwing away constraints and convincing themselves of things that are tangential or opposite to the task
Funny that, when given too much brainpower, AIs manifest ADHD symptoms…
lionkor · 2h ago
Validating if you have ADHD, but still an issue that is somehow glossed over by everyone who uses AI daily(?)
falcor84 · 6h ago
My use-case would probably be of autocompacting the context of another LLM. I've been using Claude Code a lot recently, and feel that it generally gets better at handling my codebase once it uses up a lot of context (often >50%), but then it often runs out of context before finishing the task. So I'd be very interested in something that runs behind the scenes and compacts it to e.g. ~80%.
I know that Letta have a decent approach to this, but I haven't yet seen it done well with a coding agent, by them or anyone else. Is there anyone doing this with any measure of success?
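A rough sketch of what a background compactor could look like (hypothetical helper, not Letta's or Claude Code's actual mechanism; the model id is illustrative):

    from openai import OpenAI

    def compact(messages, client, budget_chars=400_000, keep_recent=20,
                model="qwen-3-235b-a22b"):               # illustrative fast model
        if sum(len(m["content"]) for m in messages) < budget_chars:
            return messages
        old, recent = messages[:-keep_recent], messages[-keep_recent:]
        transcript = "\n".join(f'{m["role"]}: {m["content"]}' for m in old)
        summary = client.chat.completions.create(
            model=model,
            messages=[{"role": "user",
                       "content": "Summarize this coding session, keeping file paths, "
                                  "decisions and open TODOs:\n" + transcript}],
        ).choices[0].message.content
        return [{"role": "system",
                 "content": "Earlier context (compacted): " + summary}, *recent]

The nice thing about a 1000+ tok/s backend is that a compaction pass like this could run opportunistically in the background without blocking the main agent.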
the_arun · 5h ago
IMHO innovations are waiting to happen. Unless we get similar speeds using commodity hardware / pricing, we are not there yet.
rsolva · 6h ago
What would the energy use be for an average query, when using large models at this speed?
scottcha · 5h ago
I’ve asked that question on LinkedIn to the Cerebras team a couple of times and haven’t ever received a response. There are system max TDP values posted online, but I’m not sure you can assume the system is running at max TDP for these queries. If it is, the numbers are quite high (I just tried to find the number but couldn’t; I had it in my notes as 23kW).
If someone from Cerebras is reading this feel free to dm me as optimizing this power is what we do.
skeezyboy · 3h ago
23kw gotdamn
pr337h4m · 6h ago
Quantization?
TechDebtDevin · 6h ago
Its not a new model, but rather their infrastructure and hardware they are showcasing.
pr337h4m · 5h ago
Groq appears to have quantized the Kimi K2 model they're serving, which is part of the reason why there's a noticeable performance gap between K2 on Moonshot's official API and the one served by Groq.
We don't know how/whether the Qwen3-235B served by Cerebras has been quantized.
logicchains · 5h ago
Cerebras have previously stated for other models they hosted that they didn't quantise, unlike Groq.
OxfordCommand · 4h ago
isn't Qwen Alibaba's family of models? What does cerebras have to do with this? i'm lost.
There are rumors that the K2 model Groq is serving is quantized or otherwise produces lower-quality responses than expected due to some optimization, FYI.
I tested it and the speed is incredible, though.
skeezyboy · 5h ago
have they managed to remove the "output may contain mistakes" disclaimer from a single LLM yet?
lazide · 3h ago
Never will.
But then, same for humans yes?
skeezyboy · 3h ago
>But then, same for humans yes?
And? Whats your point? This is a computer. Humans make errors doing arithmetic, therefore should we not expect computers to be able to reliably perform arithmetic? No. Silly retort and a common reply from people who are suitably wowed by the current generation of AI.
lazide · 2h ago
This is incredibly dumb.
skeezyboy · 2h ago
whats what im trying to tell you
jscheel · 4h ago
k2 on groq is really bad right now. I'm not sure what's causing the problem, but they've said that they are working on a few different issues.
cubefox · 2h ago
It sounds like Cerebras would be perfect for models with Mamba architecture, as those don't need a large KV cache for long contexts.