Deepseek R1-0528

374 points by error404x · 205 comments · 5/28/2025, 5:59:02 PM · huggingface.co ↗

Comments (205)

jacob019 · 14h ago
Well that didn't take long, available from 7 providers through openrouter.

https://openrouter.ai/deepseek/deepseek-r1-0528/providers

May 28th update to the original DeepSeek R1. Performance on par with OpenAI o1, but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass.

Fully open-source model.

jazzyjackson · 13h ago
No sign of what source material it was trained on though right? So open weight rather than reproducible from source.

I remember there's a project "Open R1" that last I checked was working on gathering their own list of training material, looks active but not sure how far along they've gotten:

https://github.com/huggingface/open-r1

pradn · 11h ago
Isn't it basically impossible for the input datasets to be listed? It's an open secret that all these labs are using immense amounts of copyrighted material.

There are a few efforts at fully open data / open weight / open code models, but none of them has reached leading-edge performance.

3abiton · 2h ago
The only way this would work is with "leaks". But even then, as we saw with everything on the internet, it just added more guardrails on content. Now I can't watch YouTube videos without logging in, and on nearly every website I need to solve some weird-ass captchas. It's becoming easier to interact with these chatbots than to search for a solution online. And with Veo 4 copycats, I wonder whether it might become even easier to prompt for a video than to search for one.
prmoustache · 7h ago
That doesn't mean it isn't possible.
bee_rider · 7h ago
“Not possible” = “a business-destroying level of honesty”?
rcxdude · 4h ago
Even if training on the copyrighted material is OK, just providing a data dump of it almost certainly is not.
alpaca128 · 4h ago
No need for a data dump; just list all the URLs or whatever else identifies their training data sources. AFAIK that's how the LAION training dataset was published.
anonymoushn · 3h ago
Providing a large list of bitrotted URLs, plus titles of books the user is expected to OCR themselves before attempting to reproduce the model, doesn't seem very useful.
echoangle · 2h ago
Aren't the datasets mostly shared in torrents? They probably won't bitrot for some time.
tokioyoyo · 7h ago
There is a "keep doing what you're doing, as we would want one of our companies to be on top of the AI race" signal from the governments. It could've been stopped, maybe, 5 years ago. But now we're way past it, so nobody cares about these sort of arguments.
behnamoh · 12h ago
> No sign of what source material it was trained on though right?

out of curiosity, does anyone do anything "useful" with that knowledge? it's not like people can just randomly train models..

marci · 8h ago
When you're truly open source, you can make things like this:

Today we introduce OLMoTrace, a one-of-a-kind feature in the Ai2 Playground that lets you trace the outputs of language models back to their full, multi-trillion-token training data in real time. OLMoTrace is a manifestation of Ai2’s commitment to an open ecosystem – open models, open data, and beyond.

https://allenai.org/blog/olmotrace

kreijstal · 6h ago
You could do the same, except you would need to be a pirate website. It would even be better, except illegal. But it would be better.
marci · 1h ago
That is why the others can't provide stuff like this. RAG/Hallucination check. I just wish Allen.AI models had bigger context, 4k is too small nowadays.
ToValueFunfetti · 12h ago
Would be useful for answering "is this novel or was it in the training data", but that's not typically the point of open source.
anonymoushn · 3h ago
If labs provided the corpus and source code for training their tokenizers, it would be a lot easier to produce results about tokenizers. As it is, they provide neither, so it is impossible to compare different algorithms running on the same data if you also want to include the vocabs that are commonly used.
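
As a rough sketch of what that would unlock (the corpus path and vocab size here are placeholders, nothing any lab has actually released), retraining a tokenizer on a shared corpus is only a few lines with the `tokenizers` library, so apples-to-apples comparisons would be cheap:

```python
# Hypothetical: retrain a BPE tokenizer on a released corpus so that different
# algorithms and vocab sizes can be compared on identical data.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tok = Tokenizer(models.BPE(unk_token="[UNK]"))
tok.pre_tokenizer = pre_tokenizers.ByteLevel()

trainer = trainers.BpeTrainer(vocab_size=32000, special_tokens=["[UNK]"])
tok.train(["shared_corpus.txt"], trainer=trainer)  # the corpus nobody publishes

tok.save("reproduced-tokenizer.json")
```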
DANmode · 4h ago
Depending on how you use "randomly", they absolutely can..?
m00x · 7h ago
Many are speculating it was trained on o1/o3 outputs for some of the initial reasoning.
fulafel · 7h ago
Are there any widely used models that publish this? If not, then no I guess.
chrsw · 12h ago
Based on the commit history, Open R1 is still active and they're still making progress. Long may it continue; it's an ambitious project.
make3 · 12h ago
I don't think people make the distinction like that. The open source vs. non open source distinction usually boils down to whether you can use it commercially.

What you're saying is just that it's not reproducible, which is a completely valid but separate issue.

alpaca128 · 4h ago
There are already established terms and licenses for non-commercial use, like "open weights".

Open source has the word "source" in it for a reason, and those models ain't open source and have nothing to do with it.

piperswe · 12h ago
But where's the source? I just see a binary blob, what makes it open source?
1una · 9h ago
I wouldn't call it a "binary blob". Safetensors is just a simple format for storing tensors safely: https://huggingface.co/docs/safetensors/index
jacob019 · 9h ago
The weights are the source. It isn't as though something was compiled into weights; they're trained directly. But I know what you mean, it would be more open to have the training pipeline and source dataset available.
timschmidt · 8h ago
The weights seem much more like a binary to me, the training pipeline the compiler, and the training dataset the source.
jumski · 5h ago
Came here to write this: perfect analogy!
reedciccio · 5h ago
It's a very imperfect analogy, though: these things can't be rebuilt "from scratch" like a program, and the training process doesn't seem to be replicable anyway. Nonetheless, full data disclosure is necessary, according to the result of the years-long consultation led by the Open Source Initiative: https://opensource.org/ai
timschmidt · 5h ago
> the training process doesn't seem to be replicable anyway

The training process is fully deterministic. It's just an algorithm. Feed the same data in and you'll get the same weights out.

If you're speaking about the computational cost, it used to be that way for compilers too. Give it 20 years and you'll be able to train one of today's models on your phone.

kouteiheika · 3h ago
> The training process is fully deterministic. It's just an algorithm. Feed the same data in and you'll get the same weights out.

No it is not. The training process is non-deterministic: given exactly the same data, the same code and the same seeds, you'll get different weights. Even the simplest operations like matrix multiplication will give you slightly different results depending on the hardware you're using (e.g. on CPU, on GPUs from vendor #1 and vendor #2, probably on different GPUs from the same vendor, and on different CUDA versions). You'll also get different results depending on the dimensions of the matrices (e.g. if you fuse the QKV weights of modern transformers into a single matrix and do one multiplication instead of multiplying each separately), and some algorithms (e.g. the backward pass of Flash Attention) are explicitly non-deterministic in order to be faster.
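
The floating-point part is easy to see in a few lines of Python: addition isn't associative, so any change in reduction order (a different kernel, a different fusion, different hardware) shifts the low-order bits, and those shifts compound over billions of updates. A toy illustration, not anyone's actual training pipeline:

```python
import random

random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(1_000_000)]

a = sum(xs)                # summed left to right
b = sum(reversed(xs))      # the same numbers, opposite order
print(a == b, abs(a - b))  # typically: False, with a tiny non-zero difference
```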

timschmidt · 3h ago
> Even the simplest operations like matrix multiplication will give you slightly different results depending on the hardware you're using

That has everything to do with implementation, and nothing to do with algorithm. There is an important difference.

Math is deterministic. The way [random chip] implements floating point operations may not be.

Lots of scientific software has the ability to use IEEE-754 floats for speed or to flip a switch for arbitrary precision calculations. The calculation being performed remains the same.

kouteiheika · 1h ago
> Math is deterministic.

The point is none of these models are trained with pure "math". It doesn't matter that you can describe a theoretical training process using a set of deterministic equations, because in practice it doesn't work that way. Your claim that "the training process is fully deterministic" is objectively wrong in this case, because none of the non-toy models use (nor can they practically use) such a deterministic process. There is a training process which is deterministic, but no one uses it (for good reasons).

If you had infinite budget, exactly the same code, the same training data, and even the same hardware you would not be able to reproduce the weights of Deepseek R1, because it wasn't trained using a deterministic process.

willmarch · 4h ago
I'm pretty sure the initial weights are randomized, meaning no two models will train the same way twice. The order in which you feed training data to the model would also add an element of randomness. Model training is closer to growing a plant than running a compiler.
timschmidt · 4h ago
That's still a deterministic algorithm. The random data and the order of feeding training data into it are part of the data which determines the output. Again, if you do it twice the same way, you'll get the same output.
willmarch · 4h ago
If they saved the initial randomized model and released it, and there was no random bit flipping during copying, then possibly, but it would still be difficult when you factor in the RLHF that comes about through random humans interacting with the model to tweak its workings. If you preserved that data as well, and got all of the initial training correct... maybe. But I'd bet against it.
timschmidt · 4h ago
So long as the data provided was identical, and sources of error like floating point errors due to hardware implementation details are accounted for, I see no reason output wouldn't be identical.

Where would other non-determinism come from?

I'm open to there being another source. I'd just like to know what it would be. I haven't found one yet.

reedciccio · 4h ago
> if you do it twice the same way, you'll get the same output

Point at the science that says that, please: Current scientific knowledge doesn't agree with you.

timschmidt · 4h ago
> Current scientific knowledge doesn't agree with you.

I'd love a citation. So far you haven't even suggested a possible source for this non-determinism you claim exists.

desdenova · 4h ago
What makes models non-deterministic isn't the training algorithm, but the initial weights being random.

Training is reproducible only if, besides the pipeline and data, you also start from the same random weights.
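
A minimal PyTorch sketch of the "same seed, same init" part; bitwise-identical training additionally needs deterministic kernels, identical hardware, and identical library versions, which is the part nobody pays for at scale:

```python
import torch

torch.manual_seed(1234)
a = torch.nn.Linear(4096, 4096)   # initial weights drawn from the seeded RNG

torch.manual_seed(1234)
b = torch.nn.Linear(4096, 4096)   # same seed, identical initialization

print(torch.equal(a.weight, b.weight))  # True

# Going beyond init: make PyTorch error out on known-nondeterministic ops,
# at a (sometimes large) performance cost.
torch.use_deterministic_algorithms(True)
```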

timschmidt · 4h ago
That would fall under "Feed the same data in and you'll get the same weights out." Lots of deterministic algorithms use a random seed.
alfiedotwtf · 4h ago
So is there no "introduce randomness" step afterwards? If not, I would guess these models would get stuck in a local maximum.
reedciccio · 5h ago
Can you point at the research that says that the training process of a LLM at least the size of OLMo or Pythia is deterministic?
timschmidt · 5h ago
Can you point to something that says it's not? The only source of non-determinism I've read of affecting LLM training is floating point error which is well understood and worked around easily enough.
reedciccio · 4h ago
Search more; there is a lot of literature discussing how hard the problem of reproducibility of GenAI/LLMs/deep learning is, how far we are from solving it for trivial/small models (let alone for beasts the size of the most powerful ones), and even how pointless the whole exercise is.
timschmidt · 4h ago
If there's a lot, then it should be easy for you to link an example right? One that points toward something other than floating point error.

There simply aren't that many sources of non-determinism in a modern computer.

Though I'll grant that if you've engineered your codebase for speed and not for determinism, error can creep in via floating point error, sloppy ordering of operations, etc. These are not unavoidable implementation details, however. CAD kernels and other scientific software do it every day.

When you boil down what's actually happening during training, it's just a bunch of matrix math. And math is highly repeatable. Size of the matrix has nothing to do with it.

I have little doubt that some implementations aren't deterministic, due to software engineering choices as discussed above. But the algorithms absolutely are. Claiming otherwise seems equivalent to claiming that 2 + 2 can sometimes equal 5.

kouteiheika · 3h ago
> I have little doubt that some implementations aren't deterministic

Not some of them; ALL OF THEM. Engineering training pipelines for absolute determinism would be, quite frankly, extremely dumb, so no one does it. When you need millions of dollars' worth of compute to train a non-toy model, are you going to double or triple your cost just so that the process is deterministic, without actually making the end result perform any better?

timschmidt · 3h ago
Depends on how much you value repeatability in testing, and how much compute you have. It's a choice which has been made often in the history of computer science.

The cost of adaptive precision floats can be negligible depending on application. One example I'm familiar with from geometry processing: https://www.cs.cmu.edu/~quake/robust.html

Integer math often carries no performance penalty compared to floating point.

I guess my takeaway from this conversation is that there's a market for fast high-precision math techniques in the AI field.

danieldk · 5h ago
There is work to try to reproduce (the original) R1: https://huggingface.co/open-r1
otabdeveloper4 · 7h ago
You can fine-tune their weights and release your own take.

E.g. see all the specialized third-party models out there based on Qwen.
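
A minimal sketch of what that looks like in practice; the base model, target modules and hyperparameters here are just illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-0.5B"  # placeholder: any small open-weight base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters; only these are trained, the base weights stay frozen.
config = LoraConfig(r=16, lora_alpha=32,
                    target_modules=["q_proj", "v_proj"],
                    task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()

# ... run a training loop on your own data here ...

# The resulting adapter (a few MB) is the part you redistribute.
model.save_pretrained("my-qwen-finetune")
```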

"Open-source" is the wrong word here, what they mean is "you can modify and redistribute these weights".

yetihehe · 6h ago
You can also reverse engineer and modify closed-source programs (see mods for games). Weights are like a compiled version of the source data.
otabdeveloper4 · 3h ago
Finetuning isn't reverse engineering. Finetuning is a standard supported workflow for these models.

Also, the "redistribute" part is key here.

yetihehe · 2h ago
> Finetuning isn't reverse engineering

Fully agree, it isn't. Reverse engineering isn't necessary for modifying compiled program behaviour, so comparing it to finetuning is not applicable. Finetuning applied to program domain would be more like adding plugins or patching in some compiled routines. Reverse-engineering applied to models would be like extracting source documents from weights.

> Finetuning is a standard supported workflow for these models.

Yes, so is adding mods for some games: just put your files in a designated folder and the game automatically picks them up and applies the required modifications.

> Also, the "redistribute" part is key here.

It is not. Redistributability and being open source are orthogonal. You can have the source for a program and not be able to redistribute the source or the program, or you can redistribute a compiled program but not have its source (freeware).

macrolime · 5h ago
Not legally. That's the difference.
timschmidt · 5h ago
Sure you can. It's often legally protected activity. You're just limited to distributing your modifications without the original work.
JKCalhoun · 11h ago
Is there a downloadable model? (Not familiar with openrouter and not seeing the model on ollama.)
zargon · 10h ago
This HN submission goes directly to the downloadable model.
fragmede · 13h ago
cavisne · 8h ago
"knowing why a model refuses to answer something matters"

The companies that create these models can't answer that question! Models get jailbroken all the time to ignore alignment instructions. The robust refusal logic normally sits on top of the model, i.e. looking at the responses and flagging anything that they don't want to show to users.

The best tool we have for understanding whether a model is refusing to answer or actually doesn't know is mechanistic interp, which you only need the weights for.

This whole debate is weird; even with traditional open source code you can't tell the intent of a programmer, what sources they used to write that code, etc.

echelon · 12h ago
Open source is a crazy new beast in the AI/ML world.

We have numerous artifacts to reason about:

- The model code

- The training code

- The fine tuning code

- The inference code

- The raw training data

- The processed training data (which might vary across various stages of pre-training and potentially fine-tuning!)

- The resultant weights

- The inference outputs (which also need a license)

- The research papers (hopefully it's described in literature!)

- The patents (or lack thereof)

The term "open source" is wholly inadequate here. We need a 10-star grading system for this.

This is not your mamma's C library.

AFAICT, DeepSeek scores 7/10, which is better than OpenAI's 0/10 (they don't even let you train on the outputs).

This is more than enough to distill new models from.

Everybody is laundering training data, and it's rife with copyrighted data, PII, and pilfered outputs from other commercial AI systems. Because of that, I don't expect we'll see much legally open training data for some time to come. In fact, the first fully open training data of adequate size (not something like LJSpeech) is likely to be 100% synthetic or robotically-captured.

Tepix · 7h ago
I think you're trying to make it look more complex than it is. Put the amount of data next to every entry in that list of yours.
echelon · 7h ago
Most of those items map to a job description.

If you think the data story isn't a complicated beast, then consider:

If you wanted an "open" dataset, would you want it before or after it was processed? There are a lot of cleaning, categorizing, feature extraction steps. The data typically undergoes a lot of analysis, extra annotation, bucketing, and transformation.

If the pre-train was done in stages, and the training process was complicated, how much hand-holding do you need to replicate that process?

Do you need all of the scripts to assist with these processes? All of the infra and MLOps pieces? There's a lot of infrastructure to just move the data around and poke it.

Where are you going to host those terabytes or petabytes of data? Who is going to download it? How often? Do you expect it to be downloaded as frequently as the Linux kernel sources?

Did you scrub it of PII? Are you sure?

And to clarify, we're not even talking about trained models at this point.

reedciccio · 5h ago
https://opensource.org/ai ... Lots of reasoning has been done on those artifacts.
xnickb · 7h ago
I'd argue we don't need a 10-star system. The single bit we have now is enough. And the question is also pretty clear: did $company steal other people's work?

The answer is also known. So the reason one would want an open-source model (read: a reproducible model) would be one of ethics.

selfhoster11 · 6h ago
We use pop-cultural references to communicate all the time these days. Those references don't necessarily come from only the most commonly known sections of these works, so the AI would need the full work (or a functional transformation of it) to hit the theoretical maximum of its ability to decode and reason about such references. To exclude copyrighted works from the training set is to expect it to decode from the outside what amounts to humanity's own in-group jokes.

That's my formal argument. The less formal one is that copyright protection is something that smaller artists deserve more than rich conglomerates, and even then, durations shouldn't be "eternity and a day". A huge chunk of what is being "stolen" should be in the commons anyway.

echelon · 7h ago
I truthfully cannot think of a single model that satisfies your criteria.

And if we wait for the internet to be wholly eaten by AI, if we accept perfect as the enemy of good, then we'll have nothing left to cling to.

> And the question is also pretty clear: did $company steal other peoples work?

Who the hell cares? By the time this is settled - and I'd argue you won't get a definitive agreement - the internet will be won by the hyperscalers.

Accept corporate gifts of AI, and keep pushing them forward. Commoditize. Let there be no moat.

There will be infinite synthetic data available to us in the future anyway. And none of this bickering will have even mattered.

behnamoh · 12h ago
it's got more 'source' than whatever OpenAI provides for their models.
stavros · 12h ago
No it doesn't; it has exactly the same amount of source: zero. It just has more downloadable binary.
Aeolun · 8h ago
That’s the ‘source’ for what the model spits out though, if not the source for what spits out the model.
prmoustache · 7h ago
It is just freeware, not open source.
numpad0 · 12h ago
less alcoholic beverages are fully alcoholic beverages
fragmede · 11h ago
but they're not bleach, and no amount of adding or removing alcohol can transmute the alcohol into something else.
quarters · 13h ago
stavros · 12h ago
Slapping an MIT license on a compiled binary doesn't make it open source.
quarters · 11h ago
They're keeping some stuff to themselves, which is fine. I don't expect anyone to have to fully release everything they've got, especially considering the vast costs associated with researching and developing these models.

What they have released has been distilled into many new models that others have been using for commercial benefit, and I appreciate the contributions they have made.

alpaca128 · 4h ago
> I don't expect anyone to have to fully release everything they've got

I also don't expect Microsoft to release their full Windows 11 source code, but that also means it's not open source. And that's okay, because Microsoft doesn't call it open source.

acheong08 · 18h ago
No information to be found about it. Hopefully we get benchmarks soon. Reminds me of the days when Mistral would just tweet a torrent magnet link
chvid · 16h ago
Benchmarks seem like a fool's errand at this point; models get overly tuned to specific, already-published tests rather than focusing on generalization.

Hugging Face has a leaderboard, and it seems dominated by models that are finetunes of various common open-source models, yet don't seem to be broadly used:

https://huggingface.co/open-llm-leaderboard

EvgeniyZh · 15h ago
There are quite a few benchmarks for which that's not the case:

- live benchmarks (livebench, livecodebench, matharena, SWE-rebench, etc)

- benchmarks that do not have a fixed structure, like games or human feedback benches (balrog, videogamebench, arena)

- (to some extent) benchmarks without existing/published answers (putnambench, frontiermath). You could argue that someone could hire people to solve those or pay off the benchmark devs, but it's much more complicated.

Most of the benchmarks that don't try to tackle future contamination are much less useful, that's true. Unfortunately, HLE kind of ignored it (they plan to add a hidden set to test for contamination, but once the answers are there, it's a lost game IMHO); I really liked the concept.

Edit: it is true that these benchmarks are focusing only on a fairly specific subset of the model capabilities. For everything else vibe check is your best bet.

chvid · 5h ago
I agree with you.

Of course, some benchmarks are still valid and will remain valid. Ie. we can make the models play chess against each other and score them on how well they do. But those benchmarks are in general fairly narrow. They don't really measure the "broader" intelligence we are after. And often, LLMs perform worse than specialized models. Ie. I don't think there is any LLM out there that can beat a traditional chess program (surely not using the same computing power).

What is really bad are the QA benchmarks, which leak over time into the training data of the models. And sometimes one can suspect that even big labs have an economic incentive to score well on popular benchmarks, which causes them to manipulate the models way beyond what is reasonable.

And taking a bunch of flawed benchmarks, combining them into indexes, and saying this model is 2% better than that model is just completely meaningless, but of course it's fun and draws a lot of attention.

So, yes, we are kind of left with vibe checks, but in theory, we could do more; take a bunch of models, double-blind, and have a big enough, representative group of human evaluators score them against each other on meaningful subjects.

Of course, done right, that would be really expensive. And those sponsoring might not like the result.

EvgeniyZh · 4h ago
> But those benchmarks are in general fairly narrow. They don't really measure the "broader" intelligence we are after.

I think a general model that can

- finish nethack, doom, zelda and civilization,

- solve the hardest codeforces/atcoder problems,

- formally prove putnam solution with high probability, not given the answer

- write a PR to close a random issue on github

is likely to have some broader intelligence. I may be mistaken, since there were tasks in the past that appeared to be unsolvable without human-level intelligence, but in fact weren't.

I agree that such benchmarks are limited to either environments with well-defined feedback and rules (games) or easily verifiable ones (code/math), but I wouldn't say it's super narrow, and there are no non-LLM models that perform significantly better on these (except in some games), though specialized LLMs work better. Finding other examples, I think, is one of the important problems in AI metrology.

> So, yes, we are kind of left with vibe checks, but in theory, we could do more; take a bunch of models, double-blind, and have a big enough, representative group of human evaluators score them against each other on meaningful subjects.

You've invented an arena (which just raised quite a lot of money). One can argue about "representative," of course. However, I think the SNR in the arena is not too high now; it turns out that the average arena user is quite biased, most of their queries are trivial for LLMs, and for non-trivial ones they cannot necessarily figure out which answer is better. MathArena goes in the opposite direction: narrow domain, but expert evaluation. You could imagine a bunch of small arenas, each with its own domain experts. I think it may happen eventually if the flow of money into AI continues.

chvid · 1h ago
A couple of things:

I wasn't trying to invent anything. Just describing what you would obviously have to do if you were to take a "scientific" or "objective" approach: Sound experiments, reproducible, free of financial incentives.

As far as I can tell, no one is doing that at a significant scale. Everything is buried in hype and marketing.

Now for that broad set of benchmarks (PRs to GitHub, Putnam, Zelda). There is something to that, but it depends on the model. A lot of what is out there are "mixtures of experts", either by implicit or explicit design. So there is a mechanism that looks at the problem and then picks the subsystem to delegate it to. Is it a game of chess? Boot up the chess program. Is it poetry? Boot up the poetry generator.

That sort of thing is no more a sign of broad intelligence than a person who knows both a chess player and a poet has broad intelligence.

Deepseek is, as far as I can tell, the leading open-source model; and in some way, that makes it the leading model. I don't think you can fairly compare a model that you can run locally with something that is running behind a server-side API - because who knows what is really going on behind the API.

Deepseek being Chinese makes it political and even harder to have a sane conversation about; but I am sure that had it been China that did mostly closed models and the US that did open ones, we would hold that against them, big time.

behnamoh · 12h ago
right, all benchmarks collapse once you go beyond 32K tokens. I've rarely seen any benchmarks focusing on long range, which is where most programming needs are at.
lossolo · 15h ago
The only benchmarks that match my experience with different models are here https://livebench.ai/#/
ribelo · 12h ago
LiveBench was good, but now it's a joke. Gemini Flash is better at coding than Pro and Sonnet 3.7. And this is only the beginning of the weird results.
pdimitar · 6h ago
Flash is better than Pro in coding? Whoa... [makes a note to try a few things later this day]

Out of curiosity, how did you gauge that?

code_biologist · 4h ago
I think your parent comment is citing that as an example of why livebench is no longer a good benchmark. That said, the new Flash is very good for what it is, and IMO after the Pro 05-06 nerfs the two models are much closer in performance for many tasks than they really should be — Pro should be / was way better (RIP 03-25 release). That livebench result may be wrong about the specific ranking, but I think it's right that Flash is in the same class of coding strength as Sonnet 3.7.
pdimitar · 4h ago
Thanks, that's very informative.

My ignorance is showing here: why is the Pro 05-06 a nerf?

halyconWays · 12h ago
>overly tuning models just to specific test already published tests, rather than focusing on making them generalize.

I think you just described SATs and other standardized tests

Mistletoe · 10h ago
SAT has a correlation to IQ of 0.82 to 0.86 and I do think IQ is very useful in judging intelligence.

https://gwern.net/doc/iq/high/smpy/2004-frey.pdf

tptacek · 8h ago
It's a useful diagnostic when used in a battery of diagnostic tests of cognitive function, but to the point of this thread: it is notoriously not a good ranking mechanism.
kbumsik · 13h ago
Artificial Analysis is the only stable source. Don't look at others like HF Leaderboard.

https://artificialanalysis.ai/

z2 · 16h ago
There's a table here showing some "Overall" and "Median" scores, but no context on what exactly was tested. It appears to be in the ballpark of the latest models, with some cost advantages but the downside of being just as slow as the original R1 (likely lots of thinking tokens). https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd....
xelos · 15h ago
It’s appeared on the Livecodebench leaderboard too. Performance on par with O4 Mini - https://livecodebench.github.io/leaderboard.html
swyx · 16h ago
i think usually deepseek posts a paper after a model release about a day later.

no idea why they can't just wait a bit to coordinate stuff. bit messy in the news cycle.

Destiner · 16h ago
honestly a power move.

it's almost as if they don't care about creating a proper buzz.

wyre · 15h ago
From what I understand, isn't DeepSeek just a pet project of a Chinese hedge fund? They have much less reason to create a buzz compared to OpenAI, Anthropic, or Google.
TeMPOraL · 14h ago
None of those players you mention actually need to create a buzz. People will do it for them for free. DeepSeek joined this group after releasing R1.

Despite constant protestations of hype among the tech crowd, GenAI really is big enough of a deal that new developments don't need to be pushed onto market; people are voluntarily seeking them out.

janalsncm · 11h ago
Well OpenAI is constantly asking for and raising money. If “buzz” isn’t the right word maybe mystique? Because “race to the bottom in a hyper-commoditized space” probably doesn’t get you billions. No, Sam Altman wants people to believe they are very close to AGI and a TAM of global GDP.
wongarsu · 14h ago
OpenAI does a lot of work hyping themselves up and creating buzz around things they do or have a vague idea that they might try to do in the future.

Not to make people aware of GenAI, but to make sure OpenAI continues to be perceived as the AI company. The company that leads and revolutionizes, with everyone just copying them and trying to match them. That perception is a significant part of their value and probably their biggest moat

ktallett · 13h ago
Considering just how quickly others followed, that's also obviously not the case. In fact, the best AI software, as in the most useful, is not theirs. Claude is far more reliable.
TeMPOraL · 6h ago
Only goes to show how strong their brand already is in the eyes of the public. I question how much active maintenance it takes, though. In my eyes, they already won big with ChatGPT - the name is still synonymous with LLMs to the general population. Everyone knows what ChatGPT is. Few know what Claude or Gemini is; arguably, more people know what Deepseek is thanks to the splash they made tanking Nvidia stock and becoming part of general news coverage for a few days. Still, for regular folks (including business folks in the tech industry, too), they all are, respectively, "that other ChatGPT", "ChatGPT from Google" and "that Chinese ChatGPT". It's a pretty sticky perception.
aibrother · 17h ago
getting a similar vibe yeah. given how adjacent they are, wouldn't be surprised if this was an intentional nod from DeepSeek
willchen · 18h ago
I love how Deepseek just casually drops new updates (that deliver big improvements) without fanfare.
doctoboggan · 17h ago
Honest question, how do you know this is a big improvement? Are there any benchmarks anywhere?
KeyBoardG · 16h ago
There will be a video from FireShip if its a big one. /s
sundarurfriend · 5h ago
Ah FireShip, I forgot that channel existed at all. I asked YouTube to not recommend that channel after every vaguely AI-related news was "BIG NEWS!!!", the videos were also thin on actual content, and there were repeated factual errors over multiple videos too. At that point, the only thing it's good for is to make yourself (falsely) feel like you're keeping up.
therein · 12h ago
Much preferable to what OpenAI has always done and Anthropic recently started doing: just write some complicated narrative about how scary this new model is and how it tried to escape and deceive and hack the mainframe while telling the alignment operators bedtime stories.
camkego · 7h ago
Really? I missed this. The new hype trick is implying the new LLM releases are almost AGI? Love it.
modeless · 17h ago
I like it too, but some benchmark numbers would be nice at least.
ilaksh · 16h ago
I think they did make an announcement on WeChat.
hd4 · 18h ago
On the day Nvidia reports earnings, too. Pretty sure it's just a coincidence, bro.
margorczynski · 17h ago
Yeah, the timing seems strange. Considering how much money will change hands based on those results, this might be some kind of play to manipulate the market, at least a bit.
consumer451 · 17h ago
I believe that they are funded by a hedge fund. So, there are no coincidences here.
rwmj · 15h ago
Is releasing a better product really "market manipulation"? It seems to me like regular, good competition.
FirmwareBurner · 15h ago
It's "manipulating the market" only when your geopolitical adversary brings the competition.
Maxatar · 17h ago
How does releasing it today affect the market compared to releasing it last week?
doctoboggan · 17h ago
Hard to say exactly how it will affect the market, but IIRC when deepseek was first released Nvidia stock took a big hit as people realized that you could develop high performing LLMs without access to Nvidia hardware.
jimmyl02 · 17h ago
I thought the reaction was more so that you can train SOTA models without an extremely large quantity of hyper-expensive GPU clusters?

But I would say that the reaction was probably vastly overblown as what Deepseek really showed was there are much more efficient ways of doing things (which can also be applied with even larger clusters).

If this checkpoint was trained using non-Nvidia GPUs, that would definitely be a much bigger deal, but there don't seem to have been any associated announcements.

TeMPOraL · 14h ago
Plans take time to adjust; I imagine a big part of the impact was companies realizing that they need to buy/rent much less expensive GPU compute to realize the plans they've already committed to for the next couple years. Being able to spend less to get the same results is an immediate win; expanding the plan to make use of suddenly available surplus money/compute takes some time.

And then part of the impact was just "woah, if some noname team from China can casually leapfrog major western players on a tiny budget and kill one of their moats in the same move, what other surprises like this are possible?". The event definitely invalidated a lot of assumptions investors had about what is or isn't possible near-term; the stock market reacted to suddenly increased uncertainty.

lexandstuff · 11h ago
Except that all DeepSeek models so far have been trained on Nvidia hardware. For DeepSeek V3, they literally mention that they used 2,048 NVIDIA H800 GPUs right in the abstract: https://arxiv.org/html/2505.09343v1
hbbio · 11h ago
Actually, the "narrative" crashed Nvidia for no reason.

Not only does DeepSeek use a lot of Nvidia hardware for training.

Even more so, by releasing an open-weight frontier model, they ensure people around the world need more Nvidia chips than ever for inference.

lvturner · 9h ago
I know of enterprises in APAC now spending millions of dollars on Huawei GPUs; while they might not be as efficient, they are seen as geopolitically more stable (especially given the region).

DeepSeek helped "prove" to a lot of execs that "good" is "good enough" and that there are viable alternatives with less perceived risk of supply chain disruption, even if the facts may differ from this narrative.

hbbio · 1h ago
Yes, I know them too, I live there!

The hardware is great, CANN is not CUDA yet.

kreijstal · 6h ago
Someone has not heard about Huawei GPUs.
belter · 17h ago
Plenty of manipulation to go around..

"Tech Chip software stocks sink on report Trump ordered halt to China sales" - https://www.cnbc.com/2025/05/28/chip-software-trump-china.ht...

dyauspitr · 6h ago
What big improvements?
esafak · 17h ago
Anyone got benchmarks?
transcriptase · 18h ago
Out of sheer curiosity: What’s required for the average Joe to use this, even at a glacial pace, in terms of hardware? Or is it even possible without using smart person magic to append enchanted numbers and make it smaller for us masses?
danielhanchen · 17h ago
We made DeepSeek R1 run on a local device via offloading and 1.58bit quantization :) https://unsloth.ai/blog/deepseekr1-dynamic

I'm working on the new one!

behnamoh · 12h ago
> 1.58bit quantization

Of course we can run any model if we quantize it enough, but I think the OP was talking about the unquantized version.

danielhanchen · 11h ago
Oh you can still run them unquantized! See https://docs.unsloth.ai/basics/llama-4-how-to-run-and-fine-t... where we show you can offload all MoE layers to system RAM, and leave non MoE layers on the GPU - the speed is still pretty good!

You can do it via `-ot ".ffn_.*_exps.=CPU"`

behnamoh · 9h ago
Thanks, I'll try it! I guess "mixing" GPU+CPU would hurt the perf tho.
CamperBob2 · 17h ago
Your 1.58-bit dynamic quant model is a religious experience, even at one or two tokens per second (which is what I get on my 128 GB Raptor Lake + 4090). It's like owning your own genie... just ridiculously smart. Thanks for the work you've put into it!
nxobject · 13h ago
Likewise - for me, it feels how I imagined getting a microcomputer in the 70s was like. (Including the hit to the wallet… an Apple II cost the 2024 equivalent of ~$5k, too.)
danielhanchen · 11h ago
:) The good ol days!
danielhanchen · 17h ago
Oh thank you! :) Glad they were useful!
screaminghawk · 17h ago
I use this a lot! Thanks for your work and looking forward to the next one
danielhanchen · 17h ago
Thank you!! New versions should be much better!
terhechte · 18h ago
You can run the 4-bit quantized version on an M3 Ultra with 512GB. That's quite expensive, though. Another alternative is a fast CPU with 500GB of DDR5 RAM; that, of course, is also not cheap, and it's slower than the M3 Ultra. Or you buy multiple Nvidia cards to reach ~500GB of VRAM. That is probably the most expensive option, but also the fastest.
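
(Back-of-the-envelope: 671B parameters at 4 bits is roughly 335GB for the weights alone, before KV cache and activations, which is why these ~500GB configurations keep coming up.)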
lodovic · 16h ago
If you use the excess memory for AI only, it's cheaper to rent. A single H100 costs less than $2 per hour (incl. power).
diggan · 16h ago
Vast.ai has a bunch of 1x H100 SXM available, right now the cheapest at $1.554/hr.

Not affiliated, just a (mostly) happy user, although don't trust the bandwidth numbers, lots of variance (not surprising though, it is a user-to-user marketplace).

omneity · 14h ago
Worth mentioning that a single H100 (80-96GB) is not enough to run R1. You're looking at 6-8 GPUs on the lower end, and factor in the setup and download time.

An alternative is to use serverless GPU or LLM providers which abstract some of this for you, albeit at a higher cost and slow starts when you first use your model for some time.

zackangelo · 6h ago
Yeah, to run the full precision model you need either two 8xH100 nodes connected via Infiniband or one 8xH200 node or one 8xB200 node.

Not for the GPU poor, to be sure.

girvo · 12h ago
It is enough to run the dynamically quantised 1.58-bit version, I believe, which is fun to play around with.
behohippy · 18h ago
About 768GB of DDR5 RAM in a dual-socket server board with 12-channel memory, plus an extra 16GB or better GPU for prompt processing. It's a few grand just to run this thing at 8-10 tokens/s.
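
(That roughly checks out: 12 channels of DDR5-4800, to pick a common speed, is about 460GB/s of memory bandwidth, and with ~37B active parameters at 8 bits you read ~37GB per token, so ~12 tokens/s is the theoretical ceiling before any overhead.)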
wongarsu · 14h ago
About $8000 plus the GPU. Let's throw in a 4080 for about $1k, and you have the full setup for the price of 3 RTX5090. Or cheaper than a single A100. That's not a bad deal.

For the hobby version you would presumably buy a used server and a used GPU. DDR4 ECC Ram can be had for a little over $1/GB, so you could probably build the whole thing for around $2k

JKCalhoun · 11h ago
Been putting together a "mining rig" [1] (or rather I was before the tariffs, ha ha.) Going to try to add a 2nd GPU soon. (And I should try these quantized versions.)

Mobo was some kind of mining rig from AliExpress for less than $100. GPU is an inexpensive NVIDIA TESLA card that I 3D printed a shroud for (added fans). Power supply a cheap 2000 Watt Dell server PS off eBay....

[1] https://bsky.app/profile/engineersneedart.com/post/3lmg4kiz4...

phonon · 13h ago
This is the state of the art for such a setup. Really good performance!

https://github.com/kvcache-ai/ktransformers

mechagodzilla · 17h ago
I have a $2k used dual-socket xeon with 768GB of DDR4 - It runs at about 1.5 tokens/sec for the 4-bit quantized version.
SkyPuncher · 18h ago
Practically, smaller, quantized versions of R1 can be run on a pretty typical MacBook Pro setup. Quantized versions are definitely less performant, but they will absolutely run.

Truthfully, it's just not worth it. You either run these things so slowly that you're wasting your time, or you have to buy 4 or 5 figures' worth of hardware that's going to sit mostly unused.

hu3 · 18h ago
It's probably going to be free at OpenRouter.

There's already a 685B parameter DeepSeek V3 for free there.

https://openrouter.ai/deepseek/deepseek-chat-v3-0324:free

latchkey · 18h ago
It is free to use, but you're feeding OR data and someone is profiting off that.
ankit219 · 17h ago
That's how a lot of application-layer startups are going to make money. There is a bunch of high-quality usage data. Either you monetize it yourself (Cursor), get acquired (Windsurf), or provide that data to others for a fee (LMSYS, Mercor). This is inevitable, and the market for it is just going to increase. If you want to prevent this as an org, there aren't many ways out: either use open-source models you can deploy yourself, or deal directly with model providers where you can sign specific contracts.
85392_school · 17h ago
You're actually sending data to random GPUs connected to one of the Bittensor subnets that run LLMs.
latchkey · 17h ago
That can, today, collect that data and sell it. There is work being done to add TEE, but it isn't live yet.
dist-epoch · 17h ago
Not every prompt is privacy sensitive.

For example you could use it to summarize a public article.

latchkey · 17h ago
Every prompt is valuable.
criddell · 16h ago
And you are getting something valuable in return. It's probably a good trade for many, especially when they are doing something like summarizing a public article.
jacob019 · 14h ago
I'm not so sure. I have agents that do categorization work. Take a title, drill through a browse tree to find the most applicable leaf category. Lots of other classification tasks that are not particularly sensitive and it's hard to imagine them being very good for training. Also transformations of anonymized numerical data, parsing, etc.
dist-epoch · 16h ago
Using an AI for free is also valuable. Seems win/win.
hadlock · 18h ago
As mentioned, you can run this on a server board with 768+ GB of memory in CPU mode. The average Joe is going to be running quantized 30B (not 600B+) models on a $300/$400/$900 8/12/16GB GPU.
rahimnathwani · 18h ago
I'm not sure that's enough RAM to run it at full precision (FP8).

This guy ran a 4-bit quantized version with 768GB RAM: https://news.ycombinator.com/item?id=42897205

jazzyjackson · 13h ago
You can pay Amazon to do it for you at about a penny per 10 thousand tokens.

There's a couple of guides for setting it up "manually" on ec2 instances so you're not paying the Bedrock per-token-prices, here's [1] that states four g6e.48xlarge instances (192 vCPUs, 1536GB RAM, 8x L40S Tensor Core GPUs that come with 48 GB of memory per GPU)

Quick google tells me that g6e.48xlarge is something like 22k USD per month?

[0] https://aws.amazon.com/bedrock/deepseek/

[1] https://community.aws/content/2w2T9a1HOICvNCVKVRyVXUxuKff/de...

jacob019 · 18h ago
I'm sure it will be on OpenRouter within the next day or so. Not really practical to run a 685B param model at home.
z2 · 15h ago
Hardware: any computer from the last 20 or so years.

Software: client of choice to https://openrouter.ai/deepseek/deepseek-r1-0528

Sorry, I'm being cheeky here, but realistically, unless you want to shell out ~$10k for the equivalent of a Mac Studio with 512GB of RAM, you are best off using other services or a small distilled model based on this one.
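
If you go the "other services" route, a minimal sketch against OpenRouter's OpenAI-compatible endpoint (the API key is a placeholder):

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # your OpenRouter key
)

resp = client.chat.completions.create(
    model="deepseek/deepseek-r1-0528",
    messages=[{"role": "user", "content": "Why is the sky blue? Think it through."}],
)
print(resp.choices[0].message.content)
```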

threeducks · 16h ago
> even at a glacial pace

If speed is truly not an issue, you can run Deepseek on pretty much any PC with a large enough swap file, at a speed of about one token every 10 minutes assuming a plain old HDD.

Something more reasonable would be a used server CPU with as many memory channels as possible and DDR4 ram for less than $2000.

But before spending big, it might be a good idea to rent a server to get a feel for it.

karencarits · 10h ago
What use cases are people using local LLMs for? Have you created any practical tools that actually increase your efficiency? I've been experimenting a bit but find it hard to get inspiration for useful applications
jsemrau · 10h ago
I have a signal tracer that evaluates unusual trading volumes. Given those signals, my local agent pulls news items through an API to assess what's happening. This helps me tremendously. If I did this through a remote app, I'd have to spend several dollars per day. So I run this on existing hardware.
karencarits · 8h ago
Thank you, this is a great example!
dyauspitr · 5h ago
Do you want to share it?
codedokode · 8h ago
Anyone who does not want to leak their data? I am actually surprised that people are ok with trusting their secrets to a random foreign company.
karencarits · 8h ago
But what do you do with these secrets? Like tagging emails, summarizing documents?
nprateem · 6h ago
No one cares about your 'secrets' as much as you think. They're only potentially valuable if you're doing unpatented research or they can tie them back to you as an individual. The rest is paranoia.

Having said that, I'm paranoid too. But if I wasn't they'd have got me by now.

rurban · 4h ago
A random foreign company is far better than one in a big Five Eyes country, which siphons everything to the NSA to be used against you.

The Chinese intelligence agencies, meanwhile, won't have much power over you.

itsmevictor · 3h ago
I do a lot of data cleaning as part of my job, and I've found that small models could be very useful for that, particularly in the face of somewhat messy data.

You can for instance use them to extract some information such as postal codes from strings, or to translate and standardize country names written in various languages (e.g. Spanish, Italian and French to English), etc.

I'm sure people will have more advanced use cases, but I've found them useful for that.

lvturner · 10h ago
Also worth it for the speed of AI autocomplete in coding tools; the round trip to my graphics card is much faster than going out over the network.
sudomarcma · 10h ago
Any company with any kind of sensitive data will love to have anything LLM-related done locally.
thenameless7741 · 9h ago
A recent example: a law firm hired this person [0] to build a private AI system for document summarization and Q&A.

[0] https://xcancel.com/glitchphoton/status/1927682018772672950

bcoates · 9h ago
I use the local LLM-based autocomplete built into PyCharm and I'm pretty happy with it
jacob019 · 18h ago
Not much to go off of here. I think the latest R1 release should be exciting. 685B parameters. No model card. Release notes? Changes? Context window? The original R1 has impressive output but really burns tokens to get there. Can't wait to learn more!
mickey475778 · 6h ago
The focus on DeepSeek-R1-0528's reasoning capabilities, especially for code generation (LiveCodeBench performance near o3!), is really exciting. The Reddit thread mentions a 'thinking' phase before responses, suggesting a chain-of-thought (CoT) approach. This aligns with recent research emphasizing explicit reasoning steps for complex tasks. It'll be interesting to see how it handles novel code challenges beyond standard benchmarks, and whether this CoT is as robust as in models that explicitly expose their thought process.
AJAlabs · 9h ago
671B parameters! Well, it doesn't look like I'll be running that locally.
amy_petrik · 9h ago
There is a small community of people who do indeed run this locally, typically on CPU/RAM (lots and lots of RAM), insofar as that's cheaper than GPU(s).
mjcohen · 11h ago
Deepseek seems to be one of the few LLMs that run on an iPod Touch, because of the older version of iOS.
cropcirclbureau · 8h ago
Hey! You! You can't just say that and not explain. Come back.
MrPowerGamerBR · 8h ago
If I had to guess, they were talking about the DeepSeek iOS app: https://apps.apple.com/br/app/deepseek-assistente-de-ia/id67...
titaniumtown · 9h ago
... What?
deepsquirrelnet · 10h ago
I think it’s cool to see this kind of international participation in fierce tech competition. It’s exciting. It’s what I think capitalism should be.

This whole “building moats” and buying competitors fascination in the US has gotten boring, obvious and dull. The world benefits when companies struggle to be the best.

cesarvarela · 13h ago
About half the price of o4 mini high for not that much worse performance, interesting

edit: most providers are offering a quantized version...

htrp · 17h ago
You're gonna need at least 8x H100 80GB for this...
overfeed · 14h ago
That's about $16-24 per hour - depending on the number of tokens you're slinging in that period, it may be much cheaper than paying OpenAI for similar functionality.
vietvu · 12h ago
Or paying deepseek for slightly cheaper and worse performance than OpenAI.
canergly · 16h ago
I want to see it on Groq ASAP!
porphyra · 15h ago
Groq doesn't even have any true DeepSeek models; I thought they only had `deepseek-r1-distill-llama-70b`, which was distilled onto Llama 70B [1].

[1] https://console.groq.com/docs/models

jacob019 · 14h ago
Groq has a weak selection of models, which is frustrating because their inference speed is insane. I get it though, selection + optimization = performance.
jbentley1 · 12h ago
From conversation with someone from Groq, they have a custom compiler and runtime for the models to run on their custom hardware, which is why the selection is poor. For every model type they need to port the architecture to run on their compiler beforehand.
boroboro4 · 12h ago
They can't host DeepSeek because it's too big. Their chips have 230MB of memory, so it would take them ~3,000 chips to host the model, plus a (possibly large) number of chips to hold the KV cache. I bet it's just too hard to bring such a topology online at all, and impossible to make it anywhere near profitable.
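
(Back-of-the-envelope: ~671GB of weights at 8 bits per parameter, divided by ~0.23GB of SRAM per chip, is roughly 2,900 chips before any KV cache.)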
sergiotapia · 14h ago
The only reason they are fast is that the models they host are severely quantized, or so I've heard.
jacob019 · 14h ago
Huh. I heard a podcast with the founder talking about their custom hardware, but quantization would explain it.
christianqchung · 13h ago
Quantization alone does not explain it. It's mostly custom hardware[0].

[0] https://groq.com/the-groq-lpu-explained/

zargon · 7h ago
Why repeat this nonsense when it's so trivial to just check? The reason Groq is fast is that they employ absolutely ludicrous amounts of SRAM (which is 10 times faster than the fastest VRAM).
behnamoh · 12h ago
they responded to my tweet last year and said they didn't quantize the models.
boroboro4 · 12h ago
It's very hard to find right now, but I'm sure they said they don't quantize the KV cache, while their weights are in FP8.
heyhuy · 17h ago
Deepseek bought some NVidia puts last night
