I found it to be the best resource to understand the material. That's certainly a good reference to delve deeper into the intuitions given by OP (it's about 5 hours of lectures, plus exercises).
ActorNightly · 4h ago
The thing to understand about any model architecture is that there isn't really anything special about one or the other - as long as the process is differentiable, ML can learn it.
You can build an image generator that basically renders each word on one line in an image, and then uses a transformer architecture to morph the image of the words into what the words are describing.
The only big difference is really efficiency, but we are just taking stabs in the dark at this point - there is work that Google is doing that is eventually going to result in the optimal model for a certain type of task.
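To make the "anything differentiable is learnable" point concrete, here is a minimal PyTorch sketch (my own toy, not the commenter's actual proposal): a fixed, arbitrary differentiable stage sits in the middle of the pipeline, and autograd still trains the network around it.

    import torch

    # Fixed, arbitrary differentiable op standing in for e.g. "render words
    # to pixels" -- not learned, just another node in the computation graph.
    def fixed_stage(u):
        return torch.sin(3.0 * u) + 0.5 * u

    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)

    x = torch.linspace(-2.0, 2.0, 256).unsqueeze(1)
    target = x ** 2  # arbitrary behavior we want the whole pipeline to have

    for _ in range(2000):
        loss = torch.mean((fixed_stage(net(x)) - target) ** 2)
        opt.zero_grad()
        loss.backward()  # gradients flow straight through the fixed stage
        opt.step()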
bcherry · 5h ago
"The sculpture is already complete within the marble block, before I start my work. It is already there, I just have to chisel away the superfluous material."
- Michelangelo
porphyra · 5h ago
Meanwhile, if you want diffusion models explained with math for a graduate student, there's Tony Duan's Diffusion Models From Scratch [1].
One of the key intuitions: if you take a natural image and add random noise, you will get a different random noise image every time you do this. However, all of these (different!) random noise images will be lined up in directions perpendicular to the natural image manifold.
So you will always know where to go to restore the original image: shortest distance to the natural image manifold.
How do all these random images end up perpendicular to the manifold? High-dimensional statistics, plus the fact that the natural image manifold has much lower dimension than the overall space.

[1] https://www.tonyduan.com/diffusion/index.html
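That perpendicularity claim is easy to check numerically. A minimal NumPy sketch (my own illustration, not from the comment): in high dimensions, independent Gaussian noise vectors are nearly orthogonal to each other and to any fixed "image" direction.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 100_000                  # dimension of a flattened image, ~316x316

    x = rng.standard_normal(d)   # stand-in for a natural image
    n1 = rng.standard_normal(d)  # two independent noise draws
    n2 = rng.standard_normal(d)

    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    print(cos(n1, n2))  # ~0: different noise draws are near-orthogonal
    print(cos(n1, x))   # ~0: noise is near-orthogonal to the image, too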
Notably, Lilian did not explain diffusion models simply. This is a fantastic resource that details how they actually work, but your casual reader is unlikely to develop any sort of understanding from this.
Y_Y · 8m ago
> your casual reader is unlikely to develop any sort of understanding [from this]
"Hell, if I could explain it to the average person, it wouldn't have been worth the Nobel prize." - Richard Feynman
kmitz · 6h ago
Thanks, I was looking for an article like this, with a focus on the differences between generative AI techniques.
My guess is that since LLMs and image generation became mainstream at the same time, most people don't have the slightest idea they are based on fundamentally different technologies.
IncreasePosts · 56m ago
Are there any diffusion models for text? I'd imagine they'd be very fast, if the whole result can be processed simultaneously, instead of outputting a linear series of tokens that each depend on the last
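Diffusion-style text generation is an active research direction, and one recipe that gets exactly this parallelism is iterative unmasking: every position is predicted in one pass per step, and only low-confidence positions stay masked. A toy sketch of that sampler loop (my own illustration; `model` is a random stand-in, not any real library):

    import numpy as np

    rng = np.random.default_rng(0)
    VOCAB, MASK, LENGTH, STEPS = 1000, 0, 16, 8

    def model(tokens):
        # Placeholder network: fake per-position logits over the vocabulary.
        return rng.standard_normal((LENGTH, VOCAB))

    tokens = np.full(LENGTH, MASK)  # start from an all-mask "noise" sequence
    for step in range(STEPS):
        logits = model(tokens)      # one parallel pass over EVERY position
        probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
        preds, conf = probs.argmax(-1), probs.max(-1)
        keep = LENGTH * (step + 1) // STEPS  # unmask more positions each step
        cutoff = np.sort(conf)[::-1][keep - 1]
        tokens = np.where(conf >= cutoff, preds, MASK)  # re-mask the rest

The speed intuition: the loop runs STEPS times regardless of sequence length, instead of once per token.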
imbnwa · 32m ago
Need a text diffusion model to output a version of Eden!Eden!Eden!
cubefox · 6h ago
It's nice that this contains a comparison between the diffusion models used for image generation and the autoregressive models used for LLMs.
But recently there was a new paper on autoregressive image modelling (the NeurIPS 2024 best paper) that apparently outperforms diffusion models: https://arxiv.org/abs/2404.02905
The innovation is that it doesn't predict image patches (like older autoregressive image models) but somehow does some sort of "next scale" or "next resolution" prediction.
In the past, autoregressive image models did not perform as well as diffusion models, which meant that most image models used diffusion. Now it seems autoregressive techniques have a strict advantage over diffusion models. Another advantage is that they can be integrated with autoregressive LLMs (multimodality), which is not possible with diffusion image models. In fact, the recent GPT-4o image generation is autoregressive according to OpenAI. I wonder whether diffusion models still have a future now.
og_kalu · 5m ago
>The innovation is that it doesn't predict image patches (like older autoregressive image models) but somehow does some sort of "next scale" or "next resolution" prediction.
It still predicts image patches, left to right and top to bottom. The main difference is that we start with patches at a low resolution.
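A toy sketch of that loop (my own illustration of the paper's idea; `model` is a random stand-in): one autoregressive step per scale, each emitting a full token map conditioned on all the coarser maps generated so far.

    import numpy as np

    rng = np.random.default_rng(0)
    SCALES = [1, 2, 4, 8, 16]  # token-map side lengths, coarse to fine

    def model(context):
        # Placeholder: the real model predicts all tokens of the next scale
        # in one forward pass, attending to every previously generated map.
        side = SCALES[len(context)]
        return rng.integers(0, 4096, size=(side, side))  # fake token ids

    maps = []
    for _ in SCALES:
        maps.append(model(maps))  # one step per *scale*, not per patch
    # maps[-1] (16x16 tokens here) would then be decoded to pixels.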
earthnail · 2h ago
From what I can tell, it doesn't look like the recent GPT-4o image generation includes the research of the NeurIPS paper you cited. If it did, we wouldn't see a line-by-line generation of the image, which we do currently in GPT-4o, but rather a decoding similar to progressive JPEG.
I'm not 100% convinced that diffusion models are dead. That paper fixes autoregression for 2D spaces by basically turning the generation problem from pixel-by-pixel to iterative upsampling, but if 2D was the problem (and 1D was not), why don't we have more autoregressive models in 1D spaces like audio?
og_kalu · 6m ago
>From what I can tell, it doesn't look like the recent GPT-4o image generation includes the research of the NeurIPS paper you cited. If it did, we wouldn't see a line-by-line generation of the image, which we do currently in GPT-4o, but rather a decoding similar to progressive JPEG.
You would, because it's still autoregressive. It still generates patches left to right, top to bottom. It's just that we're not starting with patches at the target resolution.
jdthedisciple · 4h ago
Not to be that guy, but an article on diffusion models with only one image ... and even that one just noise?
cubefox · 6h ago
That's a nice high-level explanation: short and easy to understand.
It is short, has good lecture notes, and includes hands-on examples that are very approachable (with solutions available if you get stuck).