Diffusion Models Explained Simply

111 points by onnnon | 21 comments | 5/19/2025, 1:06:55 PM | seangoedecke.com ↗

Comments (21)

ActorNightly · 5h ago
The thing to understand about any model architecture is that there isn't really anything special about one or the other: as long as the process is differentiable, ML can learn it.

You can build an image generator that basically renders each word on one line in an image, and then uses a transformer architecture to morph the image of the words into what the words are describing.

The only big difference is really efficiency, but we are just taking stabs in the dark at this point. There is work Google is doing that will eventually result in the optimal model for a certain type of task.
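A minimal sketch of that "render the words, then morph them" idea (my own illustration, not an existing system; the class, shapes, and hyperparameters are all made up). The point is just that every step is differentiable, so the whole pipeline is trainable end to end, however inefficient it might be:

    # Hypothetical "render words, then morph" generator: the prompt is
    # rasterized into an image, rows become tokens, and a transformer
    # learns to morph them into the described picture.
    import torch
    import torch.nn as nn

    class WordsToImage(nn.Module):
        def __init__(self, d_model=256, size=64):
            super().__init__()
            self.proj_in = nn.Linear(size, d_model)   # one image row -> one token
            self.encoder = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
                num_layers=6,
            )
            self.proj_out = nn.Linear(d_model, size)

        def forward(self, rendered_text):  # (batch, size, size) rasterized prompt
            tokens = self.proj_in(rendered_text)        # rows as a token sequence
            return self.proj_out(self.encoder(tokens))  # "morphed" image rows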

noosphr · 11m ago
Without going into too much detail: the complexity space of tensor operations is for all practical purposes infinite. The general tensor that captures all interactions between all elements of an input of length N has on the order of N^N entries.

This is worse than exponential and means we have nothing but tricks to try and solve any problem that we see in reality.

As an example, solving MNIST and its 28x28-pixel variants this way will be impossible until the 2100s, because we don't have enough memory to store the general tensor that captures the interactions between each group of pixels and every other group of pixels.
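A quick back-of-envelope check of that claim (my own arithmetic, taking the N^N count above at face value):

    # Entries in a "general tensor" over a 28x28 input.
    N = 28 * 28                  # 784 pixels
    entries = N ** N             # superexponential in N
    digits = len(str(entries))   # number of decimal digits
    print(f"~10^{digits - 1} entries")   # ~10^2269: far beyond any memory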

fisian · 3h ago
I found this course very helpful if you're interested in a bit of math (but all very well explained): https://diffusion.csail.mit.edu/

It is short, has good lecture notes, and includes hands-on examples that are very approachable (with solutions available if you get stuck).

woolion · 2h ago
Discussed on HN: https://news.ycombinator.com/item?id=43238893

I found it to be the best resource to understand the material. That's certainly a good reference to delve deeper into the intuitions given by OP (it's about 5 hours of lectures, plus exercises).

bcherry · 6h ago
"The sculpture is already complete within the marble block, before I start my work. It is already there, I just have to chisel away the superfluous material."

- Michelangelo

g42gregory · 4h ago
One of the key intuitions: if you take a natural image and add random noise, you will get a different random noise image every time you do this. However, all of these (different!) noisy images will be lined up in the direction perpendicular to the natural image manifold.

So you will always know where to go to restore the original image: take the shortest distance to the natural image manifold.

How do all these random images end up perpendicular to the manifold? High-dimensional statistics, plus the fact that the natural image manifold has much lower dimension than the overall space.
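A tiny numerical illustration of that high-dimensional effect (my own sketch; an arbitrary fixed vector stands in for a direction along the manifold):

    # In high dimensions, Gaussian noise is almost orthogonal to any fixed
    # low-dimensional direction, which is why added noise points
    # (near-)perpendicular to the image manifold.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 28 * 28                          # ambient dimension of a small image
    tangent = rng.standard_normal(d)     # stand-in for a manifold direction
    noise = rng.standard_normal(d)       # additive noise

    cos = noise @ tangent / (np.linalg.norm(noise) * np.linalg.norm(tangent))
    print(f"cosine similarity: {cos:+.3f}")   # ~0, fluctuating at ~1/sqrt(d)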

yubblegum · 40m ago
TIL.

Generative Visual Manipulation on the Natural Image Manifold

https://arxiv.org/abs/1609.03552

For me, the most intriguing aspect of LLMs (and friends) is the embedding space and the geometry of the embedded manifolds. Curious if anyone has looked into a comparative analysis of the geometry of the manifolds corresponding to distinct languages. Intuitively, I see translation as a mapping from one language manifold to another, with expressions being paths on that manifold, which makes me wonder if there is a universal narrative language manifold that captures 'human expression semantics' in the same way as a "natural image manifold".

porphyra · 6h ago
Meanwhile, if you want diffusion models explained with math for a graduate student, there's Tony Duan's Diffusion Models From Scratch [1].

[1] https://www.tonyduan.com/diffusion/index.html

user14159265 · 8h ago
https://lilianweng.github.io/posts/2021-07-11-diffusion-models/
Philpax · 7h ago
Notably, Lilian did not explain diffusion models simply. This is a fantastic resource that details how they actually work, but your casual reader is unlikely to develop any sort of understanding from this.
Y_Y · 1h ago
> your casual reader is unlikely to develop any sort of understanding [from this]

"Hell, if I could explain it to the average person, it wouldn't have been worth the Nobel prize." - Richard Feynman

kmitz · 7h ago
Thanks, I was looking for an article like this, with a focus on the differences between generative AI techniques. My guess is that since LLMs and image generation became mainstream at the same time, most people don't have the slightest idea they are based on fundamentally different technologies.
cubefox · 6h ago
It's nice that this contains a comparison between diffusion models that are used for image models, and the autoregressive models that are used for LLMs.

But recently there was a new paper on autoregressive image modelling (the 2024 NeurIPS best paper) that apparently outperforms diffusion models: https://arxiv.org/abs/2404.02905

The innovation is that it doesn't predict image patches (like older autoregressive image models) but somehow does some sort of "next scale" or "next resolution" prediction.

In the past, autoregressive image models did not perform as well as diffusion models, which meant that most image models used diffusion. Now it seems autoregressive techniques have a strict advantage over diffusion models. Another advantage is that they can be integrated with autoregressive LLMs (multimodality), which is not possible with diffusion image models. In fact, the recent GPT-4o image generation is autoregressive according to OpenAI. I wonder whether diffusion models still have a future now.

og_kalu · 59m ago
>The innovation is that it doesn't predict image patches (like older autoregressive image models) but somehow does some sort of "next scale" or "next resolution" prediction.

It still predicts image patches, left to right and top to bottom. The main difference is that we start with patches at a low resolution.
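A hedged sketch of that coarse-to-fine loop (my own pseudocode, not the paper's actual implementation; model.predict_next is a hypothetical API):

    # "Next-scale" autoregression as described above: raster-order patch
    # prediction, but over a pyramid of resolutions from coarse to fine.
    def generate(model, scales=(1, 2, 4, 8, 16)):
        tokens = []                              # flat sequence fed to the model
        for s in scales:                         # coarse -> fine grids (s x s)
            for row in range(s):                 # still left to right,
                for col in range(s):             # top to bottom within a scale
                    patch = model.predict_next(tokens)   # hypothetical API
                    tokens.append(((s, row, col), patch))
        return tokens                            # decode to pixels separately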

earthnail · 3h ago
From what I can tell, it doesn't look like the recent GPT-4o image generation includes the research of the NeurIPS paper you cited. If it did, we wouldn't see the line-by-line generation that GPT-4o currently shows, but rather a decoding similar to progressive JPEG.

I'm not 100% convinced that diffusion models are dead. That paper fixes autoregression for 2D spaces by basically turning the generation problem from pixel-by-pixel to iterative upsampling, but if 2D was the problem (and 1D was not), why don't we have more autoregressive models in 1D spaces like audio?

og_kalu · 59m ago
>From what I can tell, it doesn't look like the recent GPT-4o image generation includes the research of the NeurIPS paper you cited. If it did, we wouldn't see a line-by-line generation of the image, which we do currently in GPT-4o, but rather a decoding similar to progressive JPEG.

You would, because it's still autoregressive. It still generates patches left to right, top to bottom. It's just that we're not starting with patches at the target resolution.

IncreasePosts · 1h ago
Are there any diffusion models for text? I'd imagine they'd be very fast if the whole result can be processed simultaneously, instead of outputting a linear series of tokens that each depend on the last.
woadwarrior01 · 18m ago
Diffusion for text is a nascent field. There are a few pretrained models; here's one [1]. AFAIK it's currently the largest open-weights text diffusion model.

[1]: https://ml-gsai.github.io/LLaDA-demo/
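For intuition, a toy sketch of the masked-diffusion style of text generation (my own simplification, not LLaDA's actual code; model.predict_masked is a hypothetical API). The whole sequence starts masked and is filled in over a few parallel steps, rather than one token at a time:

    # Start fully masked, then unmask the most confident predictions each
    # step; every position is predicted in parallel, unlike left-to-right
    # autoregression.
    def sample(model, length=16, steps=4, mask="[MASK]"):
        seq = [mask] * length
        per_step = length // steps
        for _ in range(steps):
            slots = [i for i, t in enumerate(seq) if t == mask]
            # hypothetical API: (position, token, confidence) per masked slot
            preds = model.predict_masked(seq, slots)
            preds.sort(key=lambda p: -p[2])      # most confident first
            for i, tok, _ in preds[:per_step]:
                seq[i] = tok
        return seq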

imbnwa · 1h ago
Need a text diffusion model to output a version of Eden!Eden!Eden!
jdthedisciple · 5h ago
Not to be that guy, but an article on diffusion models with only one image ... and that one is just noise?
cubefox · 7h ago
That's a nice high-level explanation: short and easy to understand.