On a tangentially related note: does anyone have a good intuition for why ChatGPT-generated images (like the one in this piece) are getting increasingly yellow? I often see explanations attributing this to a feedback loop in training data, but I don't see why that would persist for so long and not be corrected at generation time.
minimaxir · 1h ago
They aren't getting increasingly yellow (I don't think the base model has been updated since the release of GPT-4o Image Generation), but the fact that they are always so yellow is bizarre, and I am still shocked OpenAI shipped it knowing that the effect exists, especially since it makes the output instantly clockable as an AI-generated image.
Generally, when training image encoders/decoders, the input images are normalized so that there's some baseline commonality across them (when playing around with Flux Kontext image-to-image I've noticed subtle adjustments in image temperature), but the fact that it's piss yellow is baffling. The autoregressive nature of the generation would not explain it either.
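To be clear about what I mean by normalization: it's just a per-channel affine transform, and a mismatch between the statistics used on the way in and the way out would show up as exactly this kind of consistent color cast. A rough sketch - the means/stds and the "warm" offset below are made-up illustrative numbers, not anything from OpenAI's (unpublished) pipeline:

    import numpy as np

    # Hypothetical per-channel stats - illustrative only, not OpenAI's actual values.
    MEAN = np.array([0.48, 0.46, 0.41])   # R, G, B
    STD = np.array([0.27, 0.26, 0.28])

    def normalize(img, mean=MEAN, std=STD):
        # Map an HxWx3 uint8 image to roughly [-1, 1] per channel.
        return (img.astype(np.float32) / 255.0 - mean) / std

    def denormalize(x, mean=MEAN, std=STD):
        # Invert the transform; round-tripping only works with the *same* stats.
        return np.clip((x * std + mean) * 255.0, 0, 255).astype(np.uint8)

    img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    round_trip = denormalize(normalize(img))
    print(np.abs(round_trip.astype(int) - img.astype(int)).max())  # ~0-1, i.e. lossless-ish

    # Decode with slightly "warmer" stats than were used to encode, and every
    # image drifts toward yellow (more red/green, less blue):
    warm = MEAN + np.array([0.02, 0.02, -0.02])
    tinted = denormalize(normalize(img), mean=warm)

That kind of encode/decode mismatch is only one speculative explanation, of course, but it would at least be consistent, which random training-data feedback wouldn't be.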
Workaccount2 · 2h ago
Man, people in the "it's just maths and probability" camp are in for a world of hurt when they learn that everything is just maths and probability.
The observation that LLMs are just doing math gets you nowhere; everything is just doing math.
perching_aix · 1h ago
I largely agree, and having read it, this article is sadly also in that camp of applying this perspective dismissively.
However, I find it incredibly valuable generally to know things aren't magic, and that there's a method to the madness.
For example, I had a bit of a spat with a colleague who was 100% certain that AI models are unreliable not only because insignificant changes to their inputs (from a human perspective) can cause significant changes to their outputs, but because, in his view, they are actually random in the nondeterministic sense. When I took issue with this, he said I was speaking in hypotheticals, recalled my beliefs about superdeterminism, and inferred that "yeah, if you know where every atom in your processor is and what state it's in, then sure, maybe they're deterministic, but that's not a useful definition of deterministic".
"Knowing" that these models are no more special than any other program - that it's just a bunch of matrix math - gave me the confidence and resilience to reason my colleague out of his position, including busting out a local model to demonstrate the reproducibility of model interactions firsthand, which he was then able to replicate on his end on completely different hardware. I even learned a bit about the "magic" involved myself along the way (that different versions of ollama may give different results, although not necessarily).
captn3m0 · 1h ago
I also had to argue with a lawyer on the same point - he held a firm belief that “Modern GenAI systems” are different from older ML systems in that they are non-deterministic and random, and that this inherent randomness is what makes them both unexplainable (you can’t guarantee what they will output) and useful (they can be creative).
perching_aix · 18m ago
I honestly find this kinda stuff more terrifying than the models themselves.
pxc · 1h ago
> [The] article is sadly also in that camp of applying this perspective to be dismissive.
TFA literally and unironically includes such phrases as "AI is awesome".
It characterizes AI as "useful", "impressive" and capable of "genuine technological marvels".
In what sense is the article dismissive? What, exactly, is it dismissive of?
perching_aix · 25m ago
> TFA literally and unironically includes such phrases as "AI is awesome". It characterizes AI as "useful", "impressive" and capable of "genuine technological marvels".
This does not contradict what I said.
> In what sense is the article dismissive? What, exactly, is it dismissive of?
Consider the following exact quotes:
> It’s like having the world’s most educated parrot: it has heard everything, and now it can mimic a convincing answer.
or
> they generate responses using the same principle: predicting likely answers from huge amounts of training text. They don’t understand the request like a human would; they just know statistically which words tend to follow which. The result can be very useful and surprisingly coherent, but it’s coming from calculation, not comprehension
I believe these examples are self-evidently dismissive, but to further put it into words: the article - ironically - rides on the idea that there's more to understanding than just pattern recognition at a large scale, something mystical and magical, something beyond the frameworks of mathematics and computing, and thus that these models are no true Scotsmen. I wholeheartedly disagree with this idea; I find the sheer capability of higher-level semantic information extraction and manipulation to already be clear and undeniable evidence of understanding. This is one thing the article is dismissive of (in my view).
They even put it into words:
> As impressive as the output is, there’s no mystical intelligence at play – just a lot of number crunching and clever programming.
Implying that real intelligence is mystical, and not just in the epistemological sense but in the ontological one, too.
> But here at Zero Fluff, we don’t do magic – we do reality.
Please.
It also blatantly contradicts very easily accessible information on how a typical modern LLM works; no, they are not just spouting off a likely series of words (or tokens) in order, as if they were reciting from somewhere. This is also a common lie that this article just propagates further. If that's really how they worked, they'd be even less useful than they presently are. This is another thing the article is dismissive of (in my view).
ninetyninenine · 37m ago
Yeah, your brain is also maths and probability.
It’s called mathematical modeling, and anything we understand in the universe can be modeled. If we don’t understand something that we feel a model should exist for, we just don’t know the model yet.
For AI we don’t have such a model. We have a model for atoms, and we know the human brain is made of atoms, so in that sense the brain can be modeled, but we don’t have a high-level model that can explain things in a way we understand.
It’s the same with AI: we understand it at the lowest level, as prediction and best-fit curves, but we don’t fully understand what’s going on at a higher level.
4b11b4 · 1h ago
You're just mapping from distribution to distribution
- one of my professors
blahburn · 1h ago
Yeah, but it’s kinda magic
hackinthebochs · 1h ago
LLMs are modelling the world, not just "predicting the next token". They are not akin to "stochastic parrots". Some examples here[1][2][3]. Anyone claiming otherwise at this point is not arguing in good faith. There are so many interesting things to say about LLMs, yet somehow the conversation about them is stuck in 2021.
[1] https://arxiv.org/abs/2405.15943
[2] https://x.com/OwainEvans_UK/status/1894436637054214509
[3] https://www.anthropic.com/research/tracing-thoughts-language...
LLMs are still trained to predict the next token: gradient descent just inevitably converges on building a world model as the best way to do it.
Masked language modeling, with its need to understand inputs both forwards and backwards, is a more intuitive way to have a model learn a representation of the world, but causal language modeling goes brrrrrrrr.
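To make "trained to predict the next token" concrete: the causal objective really is just cross-entropy on shifted targets. A toy sketch, with a random stand-in for the model and no attention - purely to show the shape of the objective, not any real architecture:

    import numpy as np

    rng = np.random.default_rng(0)
    vocab, d = 50, 16                        # toy vocabulary and embedding size
    tokens = rng.integers(0, vocab, size=12) # a "sentence" of 12 token ids

    # Causal LM objective: input is tokens[:-1], target is tokens[1:].
    inputs, targets = tokens[:-1], tokens[1:]

    # Stand-in "model": random embeddings plus a random projection to the vocab.
    E = rng.normal(size=(vocab, d))
    W = rng.normal(size=(d, vocab))
    logits = E[inputs] @ W                   # (seq_len - 1, vocab)

    # Softmax + cross-entropy against the shifted targets. This one scalar is,
    # conceptually, all gradient descent ever gets to push on.
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    loss = -np.log(probs[np.arange(len(targets)), targets]).mean()
    print(loss)

    # MLM hides random positions instead and predicts them from both directions:
    # same math, different mask.

The interesting question is what the model has to build internally to drive that loss down, not the loss itself.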
ninetyninenine · 17m ago
In theory one can make a token predictor virtually indistinguishable from a human. In fact… I myself am a best token predictor.
I and all humans fit the definition of what a best token predictor is. Think about it.
israrkhan · 1h ago
A computer (or a phone) is not magic, it's just billions of transistors.
or perhaps we can further simplify and call it just sand?