Being “Confidently Wrong” is holding AI back

126 tango12 201 8/22/2025, 12:14:35 PM promptql.io ↗

Comments (201)

lucideer · 4h ago
While the thrust of this article is generally correct, I have two issues with it:

1. The phrase "the only thing" massively underplays the difficulty of this problem. It's not a small thing.

2. One of the issues I've seen with a lot of chat LLMs is their willingness to correct themselves when asked - this might seem, on the surface, to be a positive (allowing a user to steer the AI toward a more accurate or appropriate solution), but in reality it simply plays into users' biases & makes it more likely that the user will accept & approve of incorrect responses from the AI. Often, rather than "correcting" the AI, the exchange merely "teaches" it how to be confidently wrong in an amenable & subtle manner which the individual user finds easy to accept (or more difficult to spot).

If anything, unless/until we can solve the (insurmountable) problem of AI being wrong, AI should at least be trained to be confidently & stubbornly wrong (or right). This would also likely lead to better consistency in testing.

traceroute66 · 4h ago
> is their willingness to correct themselves when asked

Except they don't correct themselves when asked.

I'm sure we've all been there, many, many, many, many, many times ....

   - User: "This is wrong because X"
   - AI: "You're absolutely right !  Here's a production-ready fixed answer"
   - User: "No, that's wrong because Y"
   - AI: "I apologise for frustrating you ! Here's a robust answer that works"
   - User: "You idiot, you just put X back in there"
   - and so continues the vicious circle....
ACCount37 · 4h ago
1-turn instruction following and multi-turn instruction following are not the same exact capability, and some AIs only "get good" at the former. 1-turn gets more training attention - because it's more noticeable, in casual use and benchmarks both, and also easier to train for.

With weak multi-turn instruction following, context data will often dominate over user instructions. Resulting in very "loopy" AI - and more sessions that are easier to restart from scratch than to "fix".

Gemini is notorious for underperforming at this, while Claude has relatively good performance. I expect that many models from lesser known providers would also have a multi-turn instruction following gap.

vidarh · 2h ago
This is a good point, and to drive this home to people, if you have a conversation of this pattern:

    User: Fix this problem ...
    Assistant: X
    User: No, don't do X
    Assistant: Y
    User: No, Y is wrong too.
    Assistant: X
It is generally pointless to continue. You now have a context that is full of the assistant explaining to you and itself why X and Y are the right answers, and much less context of you explaining why it is wrong.

If you reach that state, start over, and constrain your initial request to exclude X and Y. If it brings up either again, start over, and constrain your request further.

If the model is bad at handling multiple turns without getting into a loop, telling it that it is wrong is not generally going to achieve anything, but starting over with better instructions often will.

I see so many people get stuck "arguing" with a model over this, getting more and more frustrated as the model keeps repeating variations of the broken answer, without realising they're filling the context with arguments from the model for why the broken answer is right.
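
A minimal sketch of that restart-with-constraints loop, in Python. The `call_llm` helper is a hypothetical placeholder for whatever client you use, not any particular vendor's API:

    # Hypothetical sketch: instead of arguing inside one long conversation,
    # restart with a fresh context and fold rejected approaches into the prompt.

    def call_llm(prompt: str) -> str:
        """Placeholder for a single-turn call to your model of choice."""
        raise NotImplementedError

    def build_prompt(problem: str, rejected: list[str]) -> str:
        if not rejected:
            return problem
        constraints = "\n".join(f"- Do not use approach: {r}" for r in rejected)
        return f"{problem}\n\nHard constraints:\n{constraints}"

    def solve(problem: str, rejected: list[str], max_restarts: int = 3) -> str:
        answer = ""
        for _ in range(max_restarts):
            answer = call_llm(build_prompt(problem, rejected))  # fresh context each time
            if not any(r in answer for r in rejected):
                return answer
            # A banned approach came back: don't argue, just restart with the
            # constraint list unchanged (or tightened by the caller).
        return answer

The point is that the rejected approaches live in the instructions, not in a transcript full of the model defending them.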

recursive · 41m ago
This is also a thing that's bad about LLMs. You're holding it wrong if you continue to argue. But LLMs are presented as if we can use the conventions of natural language to communicate with them. That's how they're sold. So if they fail to live up to those expectations, that's still a problem with LLMs.
_flux · 2h ago
Indeed, arguing with an LLM is good if you like arguing. For results it's not the way to go.

I think often it's not required to completely start over: just identify the part where it goes off the rails, and modify your prompt just before that point. But yeah, basically the same process.

dingnuts · 1h ago
I don't know why this could be the case but I have absolutely gotten better results out of the bot after insulting it.
psadauskas · 2h ago
There's also the Pink Elephant Paradox (Whatever you do, DO NOT think about a pink elephant).

If you mention X or Y, even if they're preceded by "DO NOT" in all caps, an LLM will still end up with both X and Y in its context, making it more likely one of them gets used.

I'm running out of ways to tell the assistant to not use mocks for tests, it really really wants to use them.

vidarh · 2h ago
I think in some cases you "just" need to instead turn up the temperature to increase the variety of responses, repeat requests, and use hooks to automatically review and reject bad options.

(And yes, it's a horrible workaround)
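
As a rough sketch of what I mean, with `generate` standing in for whatever client you use and the mock check borrowed from the parent comment's example (both hypothetical):

    # Sample several candidates at a higher temperature and auto-reject the ones
    # that hit a known failure mode (e.g. mock-heavy tests).

    def generate(prompt: str, temperature: float) -> str:
        """Placeholder for your model client."""
        raise NotImplementedError

    def violates_rules(candidate: str) -> bool:
        # Example hook: reject test code that reaches for mocks.
        return "unittest.mock" in candidate or "MagicMock" in candidate

    def sample_until_clean(prompt: str, attempts: int = 5) -> str | None:
        for _ in range(attempts):
            candidate = generate(prompt, temperature=1.0)  # higher temp, more variety
            if not violates_rules(candidate):
                return candidate
        return None  # give up rather than accept a bad option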

xienze · 1h ago
> I see so many people get stuck "arguing" with a model over this, getting more and more frustrated as the model keeps repeating variations of the broken answer

Maybe because people expect AI systems that are touted as all-knowing, all-powerful, coming-for-your-job to be smart enough to remember what was said two turns ago?

rtkwe · 2h ago
I have this problem all the time with minor image edits on ChatGPT, the few times I've tried it. Any time I try to do a second edit or change to the generated image, it seems to take the already degraded output from its first attempt and use that instead of the original image.
stetrain · 4h ago
Yep, the LLM will happily continue this spiral indefinitely but I've learned that if providing a bit more context and one correction doesn't provide a good solution, continuing is generally a waste of time.

They tend to very quickly lose useful context of the original problem and stated goals.

nyeah · 4h ago
Yes, that is the point of the comment.
stetrain · 2h ago
Yes, you’re absolutely right! Agreeing with the comment and adding my own experience was the point of my comment.

Is there anything else I can help you with?

nyeah · 2h ago
Ok, fair, clearly I misinterpreted what you wrote.
zero_iq · 46m ago
I've seen ChatGPT get stuck in this loop all by itself, generating a long multi-page answer where it constantly catches itself, refutes itself, offers a new answer with the same problem, rinse and repeat... All in the same response!
brookst · 1h ago
You're conflating "correct themselves" with "are guaranteed to give the correct answer", which are two really different things. And in fact you're just echoing GP's point: their corrections can be wrong.

Your case is no different from:

- AI: "The capital of France is Paris"

- User: "This is wrong, it changed to Montreal in 2005"

- AI: "You're absolutely right! The capital of France is Montreal"

kelseyfrog · 48m ago
Instead I get this:

    Nope—Paris is the capital of France and has been for centuries. Montreal is in Quebec, Canada. France’s presidency (Élysée), parliament (Assemblée nationale and Sénat), and ministries are all in Paris.
therobots927 · 4h ago
Yeah I think our jobs are safe. Why doesn’t anyone acknowledge loops like this? They happen all the time and I’m only using it once a week at the most
gavinray · 4h ago

  > Yeah I think our jobs are safe.
I give myself 6-18 months before I think top-performing LLM's can do 80% of the day-to-day issues I'm assigned.

  > Why doesn’t anyone acknowledge loops like this?
This is something you run into early on using LLMs and learn to sidestep. This looping is a sort of "context-rot" -- the agent has the problem statement as part of its input, and then a series of incorrect solutions.

Now what you've got is a junk-soup where the original problem is buried somewhere in the pile.

Best approach I've found is to start a fresh conversation with the original problem statement and any improvements/negative reinforcements you've gotten out of the LLM tacked on.

I typically have ChatGPT 5 Thinking, Claude 4.1 Opus, Grok 4, and Gemini 2.5 Pro all churning on the same question at once and then copy-pasting relevant improvements across each.

dinfinity · 3h ago
I concur. Something to keep in mind is that it is often more robust to pull an LLM towards the right place than to push it away from the wrong place (or more specifically, the active parts of its latent space). Sidenote: also kind of true for humans.

That means that positively worded instructions ("do x") work better than negative ones ("don't do y"). The more concepts that you don't want it to use / consider show up in the context, the more they do still tend to pull the response towards them even with explicit negation/'avoid' instructions.

I think this is why clearing all the crap from the context save for perhaps a summarizing negative instruction does help a lot.

gavinray · 3h ago

  >  positively worded instructions ("do x") work better than negative ones ("don't do y")
I've noticed this.

I saw someone on Twitter put it eloquently: something about how, just like little kids, the moment you say "DON'T DO XYZ" all they can think about is "XYZ..."

fuzzzerd · 3h ago
> This looping is a sort of "context-rot" -- the agent has the problem statement as part of its input, and then a series of incorrect solutions.

While I agree, and also use your work around, I think it stands to reason this shouldn't be a problem. The context had the original problem statement along with several examples of what not to do and yet it keeps repeating those very things instead of coming up with a different solution. No human would keep trying one of the solutions included in the context that are marked as not valid.

Workaccount2 · 1h ago
I'm sure somewhere in the current labs there are teams that are trying to figure out context pruning and compression.

In theory you should be able to get a multiplicative effect on context window size by consolidating context into its most distilled form.

30,000 tokens of wheel spinning to get the model back on track, consolidated to 500 tokens of "We tried A, and it didn't work because XYZ, so avoid A" and kept in recent context.
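
A crude version of that distillation step, with a hypothetical `call_llm` helper (this is a sketch of the idea, not how any lab actually does it):

    def call_llm(prompt: str) -> str:
        """Placeholder for your model client."""
        raise NotImplementedError

    def distill_failures(failed_turns: list[str], max_words: int = 80) -> str:
        """Compress many tokens of wheel-spinning into a short constraint note."""
        transcript = "\n---\n".join(failed_turns)
        return call_llm(
            "Summarise the failed attempts below as a short list of approaches to "
            f"avoid, each with a one-line reason, in under {max_words} words:\n"
            + transcript
        )

    # The distilled note ("We tried A, it failed because XYZ, avoid A") then
    # replaces the raw transcript in the next request's context.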

traceroute66 · 3h ago
> No human would keep trying one of the solutions included in the context that are marked as not valid.

Exactly. And certainly not a genius human with the memory of an elephant and a PhD in Physics .... which is what we're constantly told LLMs are. ;-)

vidarh · 2h ago
I agree it shouldn't be a problem, but if you don't regularly run into humans who insist on trying solutions clearly signposted as wrong or not valid, you're far luckier than I am.
gavinray · 3h ago

  > No human would keep trying one of the solutions included in the context that are marked as not valid.
Yeah, definitely not. Thankfully for my employment status, we're not at "human" levels QUITE yet
zparky · 12m ago
Um, how much are you spending on running all these at once?
ModernMech · 1h ago
> I give myself 6-18 months before I think top-performing LLM's can do 80% of the day-to-day issues I'm assigned.

This is going to age like "full self driving cars in 5 years". Yeah it'll gain capabilities, maybe it does do 80% of the work, but it still can't really drive itself, so it ultimately won't replace you like people are predicting. The money train assures that AGI/FSD will always be 6-18 months away, despite no clear path to solving glaring, perennial problems like the article points out.

xienze · 1h ago
> The money train assures that AGI/FSD will always be 6-18 months away

I vividly remember when some folks from Microsoft came to my school to give a talk at some Computer Science event and proclaimed that yep, we have working AGI, the only limiting factor is hardware, but that should be resolved in about ten years.

This was in 2001.

Some grifts in technology are eternal.

ForHackernews · 2h ago
> I give myself 6-18 months before I think top-performing LLM's can do 80% of the day-to-day issues I'm assigned.

How long before there's an AI smart enough to say 'no' to half the terrible ideas I'm assigned?

more_corn · 1h ago
Herald AI has a pretty robust mechanism for context cleanup. I think I saw a blogpost from them about it.
RyanOD · 4h ago
But still under pressure in the short-term, no? As companies lean into AI as a means of efficiency / competitive advantage / cost savings, jobs will be eliminated / reduced while companies find their direction. The potential gains are said to be too big to sit on the sidelines and wait to be a late-adopter.
therobots927 · 4h ago
Yes hold onto your job like your life depends on it because after this bubble pops the job market will get even worse. Then you need to hold on through the trough until experienced engineers are valued again once all of the AI waste flushes out of the system
vidarh · 2h ago
Because it's easy to learn to stop engaging with those loops, treating them as a sign you provided too little context, and instead start a new conversation with an expanded prompt.

It doesn't mean these loops aren't an issue, because they are, but once you stop engaging with them and cut them off, they're a nuisance rather than a showstopper.

jansper39 · 3h ago
Honestly when I speak about these sorts of issues I get the feeling that other people view me as some kind of luddite, especially people above me who presumably want to replace as many people with AI as possible. I suppose me pointing out the flaws breaks the illusion of magic that people want AI to have.
aleph_minus_one · 2h ago
> I suppose me pointing out the flaws breaks the illusion of magic that people want AI to have.

My impression is rather: there exist two kinds of people who are "very invested in this illusion":

1. People who want to get rich by either investing in or working on AI-adjacent topics. They of course have an interest to uphold this illusion of magic.

2. People who have a leftist agenda ("we will soon all be replaced by AI, so politics has to implement [leftist policy measures like UBI]"). If people realize that AI is not so powerful after all, such leftist political measures, whose urgency was argued from the (hypothetical) huge societal changes that AI will cause, will not have a lot of backing in society, or at least will not be considered urgent to implement.

vidarh · 2h ago
The left is generally extremely sceptical to UBI, as its main proponents tend to be classically liberal groups (so not "US liberal") pushing it as a means to contain and limit welfare systems by dropping welfare programs in favour of a general, low UBI.

The more leftist position ever since the days of Marx has been that "right, rather than being equal, would have to be unequal" to be equitable, given that people have different needs, to paraphrase from Critique of the Gotha Program - UBI is in direct contradiction to socialist ideals of fairness.

The people I see pushing UBI, on the contrary, usually seem motivated either by the classically liberal position of using it to minimise the state, or driven by a fear of threats to the stability of capitalism. Saving capitalism from perceived threats to itself isn't a particularly leftist position.

therobots927 · 2h ago
I agree with your first point but regarding your second: I’m as far left as it gets and I don’t think that’s true at all. Most of the influencers I follow despise AI and also are highly skeptical of the outrageous claims made by Sam Altman etc. The reality is that the need for things like universal health care exists today. Tens of millions of people can not get medical care in the US. Insurance companies are allowed to deny claims with no justification. That has nothing to do with AI taking jobs BUT it does involve AI because United Health’s denial rate went through the roof right after they started letting AI determine which claims were covered by policy with no human review. So people on the left are talking about AI in contexts that it doesn’t seem you’re aware of
traceroute66 · 4h ago
The AI-fanbois will quickly tell you that you are misusing the context or your prompt is "wrong".

But I've had it consistently happen to me on tiny contexts (e.g. I've had to spend time trying - and failing - to get it to fix a mess it was making with a straightforward 200-ish line bash script).

And it's also very frequently happened to me when I've been very careful with my prompts (e.g. explicitly telling it to use a specific version of a specific library ... and it goes and ignores me completely and picks some random library).

gavinray · 4h ago
I'd be curious if you could share some poor-performing prompts.

I would be willing to record myself using them across paid models with custom instructions and see if the output is still garbage.

dkersten · 2h ago
My pet peeve is when I point out a problem and it responds with an acknowledgement and then explains why it's wrong. Like… I already know why it's wrong, since I'm the one that pointed it out!
Wowfunhappy · 4h ago
...I don't know why, but I swear to god, when Claude gets into one of these cycles I can often get it out by dropping the f-bomb, with maybe a 50% success rate. Something about that word lets it know that it needs to break the pattern.
lucideer · 4h ago
True. This also often happens.

Probably the ideal would be to have a UI / non-chat-based mechanism for discarding select context.

burnte · 24m ago
I'm just so glad people are seeing this. I started saying this literally days after ChatGPT came out and I started examining the technology. It's SUPER useful, but it's assistive, it can't be trusted to do things autonomously yet. That's ok, though, it can make human workers more productive, rather than worrying about replacing humans.
dns_snek · 3h ago
> but in reality it simply plays into users' biases & makes it more likely that the user will accept & approve of incorrect responses from the AI.

Yes! I often find myself overthinking my phrasing to the nth degree because I've learned that even a sprinkle of bias can often make the LLM run in that direction even if it's not the correct answer.

It often feels a bit like interacting with a deeply unstable and insecure people pleasing person. I can't say anything that could possibly be interpreted as a disagreement because they'll immediately flip the script, I can't mention that I like pizza before asking them what their favorite food is because they'll just mirror me.

stingraycharles · 4h ago
> 1. The words "the only thing" massively underplays the difficulty of this problem. It's not a small thing.

Exactly. One could argue that this is just an artifact from the fundamental technique being used: it’s a really fancy autocomplete based on a huge context window.

People still think there's actual intelligence in there, while the actual work of making these systems appear intelligent is mostly algorithms and software managing exactly what goes into these context windows, and where.

Don’t get me wrong: it feels like magic. But I would argue that the only way to recognize a model being “confidently wrong” is to let another model, trained on completely different datasets with different techniques, judge them. And then preferably multiple.

(This is actually a feature of an MCP tool I use, “consensus” from zen-mcp-server, which enables you to query multiple different models to reach a consensus on a certain problem / solution).
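
I have no idea how zen-mcp-server implements it, but the general shape of such a cross-model check is easy to sketch; the `ask` helper and the judging prompt here are hypothetical:

    def ask(model: str, prompt: str) -> str:
        """Placeholder for calling one of several independently trained models."""
        raise NotImplementedError

    def consensus(question: str, models: list[str]) -> dict:
        answers = {m: ask(m, question) for m in models}
        listing = "\n".join(f"{m}: {a}" for m, a in answers.items())
        # Use one model as a judge of agreement. Disagreement is a signal to
        # distrust the answer, not a guarantee of correctness either way.
        verdict = ask(models[0],
                      "Do these answers agree on the substance? "
                      "Reply AGREE or DISAGREE.\n" + listing)
        return {"answers": answers,
                "agreement": verdict.strip().upper().startswith("AGREE")}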

tango12 · 2h ago
The AI being wrong problem is probably not insurmountable.

Humans have meta-cognition that helps them judge if they're doing a thing with lots of assumptions vs doing something that's blessed.

Humans decouple planning from execution right? Not fully but we choose when to separate it and when to not.

If we had enough data on "here's a good plan given user context" and "here's a bad plan", it doesn't seem unreasonable to have a pretty reliable meta-cognition capability on the goodness of a plan.

procaryote · 23m ago
Depending on your definitions, either:

* there are already lots of "reasoning" models trying meta-cognition, while still getting simple things wrong

or:

* the models aren't doing cognition, so meta-cognition seems very far away

energy123 · 4h ago
Mechanistic interpretability could play a role here. The sycophancy you describe in chat mode could be when the question is "too difficult" and the AI defaults to easy circuits that rely on simple rules of thumb (like: does the context contain positive words such as "excellent"). The user experiences this as the AI just following basic nudges.

Could real-time observability into the network's internals somehow feed back into the model to reduce these hallucination-inducing shortcuts? Like train the system to detect when a shortcut is being used, then do something about it?

stetrain · 4h ago
Yes, the quickness to correct itself isn't really useful. I would not like a human assistant/intern/pair programmer who, when asked how to do X, said:

> To accomplish X you can just use Y!

But Y isn't applicable in this scenario.

> Oh, you're absolutely right! Instead of Y you can do Z.

Are you sure? I don't think Z accomplishes X.

> On second thought you're absolutely correct. Y or Z will clearly not accomplish X, but let's try Q....

decentrality · 4h ago
Agreed with #1 ( came here to say that also )

Pronoun and noun wordplay aside ( 'Their' ... `themselves` ) I also agree that LLMs can correct the path being taken, regenerate better, etc...

But the idea that 'AI' needs to be _stubbornly_ wrong ( more human in the worst way ) is a bad idea. There is a fundamental showing, and it is being missed.

What is the context reality? Where is this prompt/response taking place? Almost guaranteed to be going on in a context which is itself violated or broken; such as with `Open Web UI` in a conservative example: Who even cares if we get the responses right? Now we have 'right' responses in a cul-de-sac universe. This might be worthwhile using `Ollama` in `Zed` for example, but for what purpose? An agentic process that is going to be audited anyway, because we always need to understand the code? And if we are talking about decision-making processes in a corporate system strategy... now we are fully down the rabbit hole. The corporate context itself is coming or going on whether it is right/wrong, good/evil, etc... as the entire point of what is going on there. The entire world is already beating that corporation to death or not, or it is beating the world to death or not... so the 'AI' aspect is more of an accelerant of an underlying dynamic, and if we stand back... what corporation is not already stubbornly wrong, on average?

taco_emoji · 3h ago
> Pronoun and noun wordplay aside ( 'Their' ... `themselves` )

How is that wordplay? Those are the correct pronouns.

KoolKat23 · 3h ago
Gemini 2.5 pro is quite good at being stubborn (well at least the initial release versions, haven't tested since).
ninetyninenine · 4h ago
It’s not massively underplaying it imo. AI hype is real. This is revolutionary technology that humanity has never seen before.

But it happened at a time when hype can be delivered at a magnitude never before seen, and at a volume that is completely unnatural by any standard set previously by hype machines created by humanity. Not even landing on the moon inundated people with as much hype. But inevitably, as with landing on the moon, humanity is suffering from hype fatigue.

Too much hype makes us numb to the reality of how insane the technology is.

Like when someone says the only thing stopping LLMs is hallucinations… that is literally the last gap. LLMs cover creativity, comprehension, analysis, knowledge and much more. Hallucinations are it. The final problem is targeted and boxed into something much narrower than just building a human-level AI from scratch.

Don't get me wrong. Hallucinations are hard. But this being the last thing left is not an underplay. Yes it's a massive issue, but yes it is also a massive achievement to reduce all of AGI to simply solving just a hallucination problem.

indigoabstract · 3h ago
I think you would have really enjoyed living in the '50s, when the future was bright and colonizing Mars was basically a solved problem.

What we got instead is a bunch of wisecracking programmers who like to remind everyone of the 90–90 rule, or the last 10 percent.

taco_emoji · 3h ago
Oh, buddy, LLM hallucinations are not the only gap left for AGI
sfn42 · 4h ago
Being confidently wrong isn't even the problem. It's a symptom of the much deeper problem that these things aren't AI at all, they're just autocomplete bots good enough to kind of seem like AI. There's no actual intelligence. That's the problem.
indigoabstract · 2h ago
I think what matters most is that we now know that it's possible: a computer mimicking most (but not all) of the abilities we have long considered marks of intelligence is obviously possible in some indeterminate future.

It's not obvious how long until that point or what form it will finally take, but it should be obvious that it's going to happen at some point.

My speculation is that until AI starts having senses like sight, hearing, touch and the ability to learn from experience, it will always be just a tool/help/aider to someone doing a job, but could not possibly replace that person in that job as it lacks the essential feedback mechanisms for successfully doing that job in the first place.

Workaccount2 · 1h ago
My favorite "paper" on AI pretty accurately describes this line of thinking

https://ai.vixra.org/pdf/2506.0065v1.pdf

ninetyninenine · 4h ago
No. The experts in the field are past this argument. People have moved on. It is clear to everyone who builds LLMs that the AI is intelligent. The algorithm was autocomplete, but we are finding that an autocomplete bot is basically autocompleting things with humanity-changing intelligent content. Your opinion is a minority now and not shared by people on the forefront of building these things. You're holding onto the initial fever-pitched alarmist reaction people had to LLMs when they first came out.

Like you realize humans hallucinate too right? And that there are humans that have a disease that makes them hallucinate constantly.

Hallucinations don’t preclude humans from being “intelligent”. It also doesn’t preclude the LLM from being intelligent.

CodexArcanum · 3h ago
LLMs don't "hallucinate"; they generate a stochastic sequence of plausible tokens that, in context when read by a human, form a false statement or nonsense.

They also don't have an internal world model. Well, I don't think so, but the debate is far from settled. "Experts" like the cofounders of various AI companies (whose livelihood depends on selling these things) seem to believe they do. Others do not.

https://aiguide.substack.com/p/llms-and-world-models-part-1

https://yosefk.com/blog/llms-arent-world-models.html

ninetyninenine · 1h ago
I’m not talking about startups with financial stake. I’m talking about academics and researchers who have zero financial stake and are observing the phenomenon. It is utterly clear now that stochastic parroting is not what’s going on.
dns_snek · 3h ago
> Your opinion is a minority now and not shared by people on the forefront of building these things.

Minority != wrong, with many historic examples that imploded in spectacular fashion. People at the forefront of building these things aren't immune from grandiose beliefs, many of them are practically predisposed to them. They also have a vested interest in perpetuating the hype to secure their generational wealth.

ninetyninenine · 1h ago
It doesn't, but I would argue that the evidence is in favor of the majority.

The AI can easily answer correctly complex questions NOT in its data set. If it is generating answers to questions like these out of thin air, that fits our colloquial definition of intelligence.

kilpikaarna · 3h ago
> It is clear to everyone who builds LLMs that the AI is intelligent.

So presumably we have a solid, generally-agreed-upon definition of intelligence now?

> autocompleting things with humanity-changing intelligent content.

What does this even mean?

wila · 22m ago
that you're arguing with an LLM :)
ninetyninenine · 1h ago
We do. It's fuzzy, but we do. You point to a rock, all humans say it's not intelligent. You point to a human, all humans say it is intelligent.

Because we can do this, by logic a universally agreed upon definition exists. Otherwise we wouldn’t be able to do this.

Of course the boundaries between what's not intelligent and what is, is where things are not as universally agreed upon. Which is what you're referring to, and unlike you I am charitably addressing that nuance rather than saying some surface level bs.

The thing is, the people who say the LLM (which obviously exists at this fuzzy categorical boundary) is not intelligent will have logical paradoxes and inconsistencies when they examine their own logic.

The whole thing is actually a vocabulary problem as this boundary line is an arbitrary definition given to a made up word that humans created. But one can still say an LLM is well placed in the category of intelligent not by some majority vote but because that placement is the only one that maintains logical consistency with OTHER entities or things all humans place in the intelligent bucket.

For example a lot of people in this thread say intelligence requires actual real time learning, therefore an LLM is NOT intelligent. But then there are humans who literally have anterograde amnesia and they literally cannot learn. Are they not intelligent? Things like this are inconsistent and it happens frequently when you place LLMs in the not intelligent bucket.

State your reasoning for why your stance is "not intelligent" and I can point out where the inconsistencies lie.

eCa · 3h ago
> Like you realize humans hallucinate too right?

A developer who hallucinates at work to the extent that LLMs do would probably have a lot of issues getting their PRs past code review.

bigstrat2003 · 1h ago
They would have issues even remaining employed. AI defenders are very quick to point out "humans make mistakes too", but that is a false equivalence because humans learn. If a junior makes a really stupid mistake, when I show him the correct way he won't make that mistake again. An AI will, because (as people correctly point out) it has no actual intelligence.
ninetyninenine · 1h ago
There are examples of humans who can't learn. Have you seen the movie Memento?

There are cases where humans lose all ability to form long term memories and outside of a timed context window they remember nothing. That context window is minutes at best.

According to your logic these people have no actual intelligence or sentience. Therefore they should be euthanized. You personally can grab a gun and execute each of these people one by one with a bullet straight to the head because clearly these people have no actual intelligence or sentience. That’s the implication of your logic.

https://en.m.wikipedia.org/wiki/Anterograde_amnesia

It’s called anterograde amnesia. Do you see how your logic can justify gassing all these people holocaust style?

When I point out the flaw in your logic do you use the new facts to form a new conclusion? Or do you rearrange the facts to maintain support for your existing conclusion?

If you did the latter, I hate to tell you this, it wasn't very intelligent. It was biased. But given that you're human, that's what you most likely did and it's normal. But pause for a second and try to do the former: use the new facts to form a different, more nuanced conclusion.

ninetyninenine · 1h ago
A person who has schizophrenia and hallucinates to a greater extent than LLMs do is clearly defective and not intelligent or sentient.

Because of this we should euthanize all schizophrenics. Just stab them to death or put a bullet in their heads right? I mean they aren’t intelligent or sentient so you shouldn’t feel anything when you do this.

I’m baffled as to why people think of this in terms of PRs. Like the LLM is intelligent but everyone’s like oh it’s not following my command perfectly therefore it’s not intelligent.

corytheboyd · 2h ago
Isn’t it obvious that the confidently wrong problem will never go away because all of this is effectively built on a statistical next token matcher? Yeah sure you can throw on hacks like RAG, more context window, but it’s still built on the same foundation.

It's like saying you built a 3D scene on a 2D plane. You can employ clever tricks to make 2D look 3D at the right angle, but it's fundamentally not 3D, which obviously shows when you take the 2D thing and turn it.

It seems like the effectiveness plateau of these hacks will soon be (has been?) reached and the smoke and mirrors snake oil sales booths cluttering Main Street will start to go away. Still a useful piece of tech, just, not for every-fucking-thing.

raynr · 1h ago
As a layman, this too strikes me as the problem underlying the "confidently wrong" problem.

The author proposes ways for an AI to signal when it is wrong and to learn from its mistakes. But that mechanism feeds back to the core next token matcher. Isn't this just replicating the problem with extra steps?

I feel like this is a framing problem. It's not that an LLM is mostly correct and just sometimes confabulates or is "confidently wrong". It's that an LLM is confabulating all the time, and all the techniques thrown at it do is increase the measured incidence of LLM confabulations matching expected benchmark answers.

kovacs · 1h ago
This is the best analogy I've read to explain what's going on and takes me back to the days of Doom and how it was so transformative at the time. Perhaps in time the current generation will be viewed as the Doom engine as we await the holy grail of full 3D in Quake.
corytheboyd · 39m ago
I guess technically 3D on computers is still clever 2D, but let’s not break the metaphor down too hard lol. Love the Doom/Quake comparison!
yifanl · 2h ago
There are people convinced that if we throw a sufficient amount of training data and VC money at more hardware, we'll overcome the gap.

Technically, I can't prove that they're wrong, novel solutions sometimes happen, and I guess the calculus is that it's likely enough to justify a trillion dollars down the hole.

gavinray · 2h ago
There's a guy, Ken Stanley, who wrote the NEAT[0]/HyperNEAT[1] algorithms.

His big idea is that evolution/advancements don't happen incrementally, but rather in unpredictable large leaps.

He wrote a whole book about it that's pretty solid IMO: "Why Greatness Cannot Be Planned: The Myth of the Objective."

[0] https://en.wikipedia.org/wiki/Neuroevolution_of_augmenting_t... [1] https://en.wikipedia.org/wiki/HyperNEAT

delichon · 2h ago
gavinray · 2h ago
Neat (no pun intended), TIL there's a word for this
devin · 1h ago
Whenever I try to tell people about the myth of the objective they look at me like I'm insane. It's not very popular to tell people that their best laid plans are actually part of the problem.
yifanl · 2h ago
I would suspect that any next step comes with a novel implementation though, not just trying to scale the same shit to infinity.

I guess the bitter lesson is gospel now, which doesn't sit right with me now that we're past the stage of Moore's Law being relevant, but I'm not the one with a trillion dollars, so I don't matter.

corytheboyd · 35m ago
I’d say it was worth throwing down some cash for, because we get cool new things by full-assing new ideas. But… yeah… a TRILLION dollars is waaaay too far.
gus_massa · 2h ago
It's easy to solve if they modify the training to remove some weight from Stack Overflow and add more weight to Yahoo! Answers :).

I remember a few years ago, we were planning to make some kind of math forum for students in the first year of the university. My opinion was that it was too easy to do it wrong. Going one way, you can be like Math Overflow, where all the questions are too technical (for the first year of the university) and all the answers are too technical (for the first year of the university). Going the other way, you can be like Yahoo! Answers, where more than half of the answers were "I don't know", with many "I don't know" per question.

For the AI, you want to give it some room to generalize/bullshit. It one page says that "X was a few months before Z" and another page says that "Y was a few days before Z", than you want an hallucinated reply that says that "X happened before Y".

On the other hand, you want the AI to say "I don't know.". They just gave too little weight to the questions that are still open. Do you know a good forum where people post questions that are still open?

corytheboyd · 21m ago
> For the AI, you want to give it some room to generalize/bullshit.

Totally! In my mind I’ve been playing with the phrase: it’s good at _fuzzy_ things. For example IMO voice synthesis before and after this wave of AI hype is actually night and day! In part, to my fuzzy idea, because voice synthesis isn’t factual, it’s billions of little data points coming together to emulate sound waves, which is incredibly fuzzy. Versus code, which is pointy: it has one/few correct forms, and infinite/many incorrect forms.

aidenn0 · 1h ago
I mean if it's trained on things like Reddit then it's just reflecting its training data. I asked a question on reddit just yesterday and the only response I got was confidently wrong. This is not the first time it has happened.
rwmj · 4h ago
Only thing? Just off the top of my head: That the LLM doesn't learn incrementally from previous encounters. That we appear to have run out of training data. That we seem to have hit a scaling wall (reflected in the performance of GPT5).

I predict we'll get a few research breakthroughs in the next few years that will make articles like this seem ridiculous.

energy123 · 3h ago
Re online learning - If I freeze 40 yo Einstein and make it so he can't form new memories beyond 5 minutes, that's still an incredibly useful, generally intelligent thing. Doesn't seem like a problem that needs to be solved on the critical path to AGI.

Re training data - We have synthetic data, and we probably haven't hit a wall. Gpt-5 was only 3.5 months after o3. People are reading too much into the tea leaves here. We don't have visibility into the cost of Gpt-5 relative to o3. If it's 20% cheaper, that's the opposite of a wall, that's exponential-like improvement. We don't have visibility into the IMO/IOI medal winning models. All I see are people curve-fitting onto very limited information.

tliltocatl · 25m ago
> If I freeze 40 yo Einstein and make it so he can't form new memories beyond 5 minutes, that's still an incredibly useful, generally intelligent thing

A "frozen mind" feels like something not unlike a book - useful, but only with a smart enough "human user", and even so be progressively less useful as time passes.

>Doesn't seem like a problem that needs to be solved on the critical path to AGI.

It definitely is one. I know we are running into definitions, but being able to form novel behavior patterns based on experience is pretty much the essence of what intelligence is. That doesn't necessarily mean that a "frozen mind" will be useless, but it would certainly not qualify as AGI.

>We don't have visibility into the IMO/IOI medal winning models.

There are lies, damn lies and LLM benchmarks. IMO/IOI is not necessarily indicative of any useful tasks.

procaryote · 18m ago
Also that no companies involved seem to be making a profit, have a reasonable vision to make a profit, or even revenue in the same ballpark as costs.

Except nvidia perhaps

j-krieger · 4h ago
Never before did we have a combination of well and poison where the pollution of the first was both as instantaneous and as easily achieved.

I've yet to see a convincing article for artificial training data.

AIPedant · 2h ago
It does seem like it helps with math, but in a way that demonstrates the futility of the enterprise: "after training the LLM on 10,000,000 examples of K-8 arithmetic it is now superhuman up to 12 digits, after which it falls off a cliff. Also it demonstrably doesn't understand what 'four' means conceptually and it still fails on many trivial counting problems."
ausbah · 28m ago
yeah, like another commenter said, if you can get synthetic data with some sort of easily verifiable grounding (math, games, code), models can do very well. This is one of the underpinnings of the reinforcement learning that has helped some advancements in the past year or so (AFAIK)
FergusArgyll · 3h ago
The problem is the kinds of "data" users will feed it. It's basically an impossible task to put a continuous learning model online and not have it devolve into the optimal mix of Stalin & Hitler
lvl155 · 4h ago
An incrementally learning model is pretty hard. That's actually something I am working on right now and it's completely different from developing/implementing LLMs.
criddell · 4h ago
I think that's what it's going to take. Eventually put the learning model in a robot body and send it out into the real world where there's no shortage of training data.
Tiktaalik · 1h ago
Yea it'll learn real quick what falling in a ravine is like
tliltocatl · 4h ago
Cool, got any previous work to share?
tliltocatl · 4h ago
> LLM doesn't learn incrementally from previous encounters

This. Lack of any way to incorporate previous experience seems like the main problem. Humans are often confidently wrong as well - and avoiding being confidently wrong is actually something one must learn rather than an innate capability. But humans wouldn't repeat the same mistake indefinitely.

ACCount37 · 2h ago
You can gather feedback from inference and funnel that back into model training. It's just very, very hard to do that without shooting yourself in the foot.

The feedback you get is incredibly entangled, and disentangling it to get at the signals that would be beneficial for training is nowhere near a solved task.

Even OpenAI has managed to fuck up there - by accidentally training 4o to be a fully bootlickmaxxed synthetic sycophant. Then they struggled to fix that for a while, and only made good progress at that with GPT-5.

traceroute66 · 4h ago
> That we appear to have run out of training data.

I think the next iteration of LLM is going to be "interesting", i.e. now that all the websites they used to freely scrape have been increasingly putting up walls.

impossiblefork · 4h ago
Having run out of training data isn't something holding back LLMs in this sense.

But I agree that being confidently wrong is not the only thing they can't do. Programming: great. Maths: apparently great nowadays, since Google and OpenAI have something that could solve most problems on the IMO, even if the models we get to see probably aren't the models that can do this. But LLMs produce crazy output when asked to produce stories, they produce crazy output when given too-long, confusing contexts, and they have some other problems of that sort.

I think much of it is solvable. I certainly have ideas about how it can be done.

tango12 · 4h ago
Author here.

You’re right in that it’s obviously not the only problem.

But without solving this, it seems like no matter how good the models get, it'll never be enough.

Or, yes, the biggest research breakthrough we need is reliable calibrated confidence. And that’ll allow existing models as they are to become spectacularly more useful.

EdNutting · 4h ago
The biggest breakthrough that we need is something resembling actual intelligence in AI (human or machine, I’ll let you decide where we need it more ;) )
binarymax · 4h ago
You might be getting downvoted because you editorialized your own title. If it’s obviously not the only thing then don’t add that to the title :)
mettamage · 4h ago
> Only thing? Just off the top of my head: That the LLM doesn't learn incrementally from previous encounters. That we appear to have run out of training data.

Ha, that almost seems like an oxymoron. The previous encounters can be the new training data!

jazzyjackson · 35m ago
The old training data was human responses to human questions. From this the bot learned to mimic human responses.

What would be the point of training an LLM on bot answers to human questions? This is only useful if you want to get an LLM that behaves like an already existing LLM.

j-krieger · 4h ago
Queries are questions in a sense that they are not the original facts. I don’t think they are useful for training data.
harsh3195 · 4h ago
In terms of adoption, I think the user is right. That is the only thing stopping adoption of existing models in the real world.
moduspol · 4h ago
Unclear limits on how much context can be reliably provided and effectively used without degrading the result.
firesteelrain · 4h ago
> That we appear to have run out of training data

And now, in some cases for a while, it is training on its own slop.

lazide · 4h ago
The article is the peak of confidently wrong itself, for solid irony points.
ninetyninenine · 4h ago
It does. We keep a section of the context window for memory. The LLM however is the one deciding what is remembered. Technically via the system prompt we can have it remember every prompt if needed.

But memory is a minor thing. Talking to a knowledgeable librarian or professor you never met is the level we essentially need to get it to for this stuff to take off.

therobots927 · 4h ago
roxolotl · 4h ago
The big thing here is that they can't even be confident. There is no there there. They are an, admittedly very useful, statistical model. Ascribing confidence to it is an anthropomorphizing mistake which is easy to make since we're wired to trust text that feels human.

They are at their most useful when it is cheaper to verify their output than it is to generate it yourself. That’s why code is rather ok; you can run it. But once validation becomes more expensive than doing it yourself, be it code or otherwise, their usefulness drops off significantly.

projektfu · 4h ago
The article buries the lede by waiting until the very end to talk about solutions like having the LLM write DSL code. Presumably if you feed an LLM your orders table and a question about it, you'll get an answer that you can't trust. But if you ask it to write some SQL or similar thing based on your database to get the answer and run it, you can have more confidence.
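
A toy version of that idea, using SQLite for illustration; the schema, question and `call_llm` helper are all made up, and a real system would need far better guardrails:

    import sqlite3

    def call_llm(prompt: str) -> str:
        """Placeholder for your model client; expected to return only SQL."""
        raise NotImplementedError

    SCHEMA = ("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, "
              "total REAL, created_at TEXT);")

    def answer_with_sql(question: str, db_path: str = "shop.db"):
        sql = call_llm(
            f"Given this SQLite schema:\n{SCHEMA}\n"
            f"Write one read-only SELECT statement answering: {question}\n"
            "Return only the SQL."
        )
        if not sql.lstrip().lower().startswith("select"):
            raise ValueError("Refusing to run non-SELECT SQL")  # crude guardrail
        with sqlite3.connect(db_path) as conn:
            rows = conn.execute(sql).fetchall()
        return sql, rows  # the query is inspectable, and the rows come from real data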
IsTom · 2h ago
Until it mishandles a NULL somewhere in a condition or does a JOIN instead of a LEFT JOIN and outputs something plausible-looking that is just plain wrong. To verify it you'll need to do the work that it would take to write it anyway.
hodgehog11 · 2h ago
But as a statistical model, it should be able to report some notion of statistical uncertainty, not necessarily in its next-token outputs, but just as a separate measure. Unfortunately, there really doesn't seem to be a lot of effort going into this.
PessimalDecimal · 1h ago
Even then, wouldn't its uncertainty be about the probability of the output given the input? That's different from probability of being correct in some factual sense. At least for this class of models.
z3c0 · 1h ago
The statistical certainty is indeed present in the model. Each token comes with a probability; if your softmax results approach a uniform distribution (i.e. all selected tokens at the given temp have near equal probabilities), then the next most likely token is very uncertain. Reporting the probabilities of the returned tokens can help the user understand how likely hallucinations are. However, that information is deliberately obfuscated now, to prevent distillation techniques.
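
For setups where you do get the raw scores back, the uncertainty signal is just arithmetic over the logits; a minimal sketch with made-up numbers:

    import numpy as np

    def token_uncertainty(logits: np.ndarray) -> tuple[float, float]:
        """Return (probability of the most likely token, entropy of the distribution)."""
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                          # softmax
        entropy = -np.sum(probs * np.log(probs + 1e-12))
        return float(probs.max()), float(entropy)

    # A peaked distribution (confident next token) vs. a nearly flat one (uncertain):
    print(token_uncertainty(np.array([9.0, 1.0, 0.5, 0.2])))  # high top prob, low entropy
    print(token_uncertainty(np.array([1.1, 1.0, 0.9, 1.0])))  # low top prob, high entropy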
z3c0 · 3h ago
Agreed. All these attempts to benchmark LLM performance based on the interpreted validity of the outputs are completely misguided. It may be the semantics of "context" causing people to anthropomorphize the models (besides the lifelike outputs). Establishing context for humans is the process of holding external stimuli against an internal model of reality. Context for an LLM is literally just "the last n tokens". In that case, the performance would be how valid the most probable token was, with the prior n tokens being present, which really has nothing to do with the perceived correctness of the output.
NoGravitas · 4h ago
The thing holding AI back is that LLMS are not world models, and do not have world models. Being confidently wrong is just a side effect of that. You need a model of the world to be uncertain about. Without one, you have no way to estimate whether your next predicted sentence is true, false, or uncertain; one predicted sentence is as good as another as long as it resembles the training data.
mojuba · 3h ago
In other words, just like with autonomous driving, you need real world experience aka general intelligence to be truly useful. Having a model of the world and knowing your place in it is one of the critical parts of intelligence that both autonomous vehicle systems and LLM's are missing.
pxc · 1h ago
For programming, at least, there are also problems with overall output quality, instruction following, and the scopes of changes.

LLMs don't do well at following style instructions, and existing memory systems aren't adequate for "remembering" my style preferences.

When you ask for one change, you often get loads of other changes alongside it. Transformers suck at targeted edits.

The hallucination problem and the sycophancy/suggestibility problem (which perhaps both play into the phenomenon of being "confidently wrong") are both real and serious. But they hardly form a singular bottleneck for the usefulness of LLMs.

rar00 · 4h ago
I know people are pushing back, taking "only" literally, but from a reasonable perspective what causes LLMs (technically their outputs) to give that impression is indeed the crux of what holds progress back: how/what LLMs learn from data. In my personal opinion, there's something fundamentally flawed that the whole field has yet to properly pinpoint and fix.
jqpabc123 · 3h ago
> there's something fundamentally flawed that the whole field has yet to properly pinpoint and fix

Isn't it obvious?

It's all built around probability and statistics.

This is not how you reach definitive answers. Maybe the results make sense and maybe they're just nice sounding BS. You guess which one is the case.

The real catch --- if you know enough to spot the BS, you probably didn't need to ask the question in the first place.

ctoth · 1h ago
> It's all built around probability and statistics.

Yes, the world is probabilistic.

> This is not how you reach definitive answers.

Do go on? This is the only way to build anything approximating certainty in our world. Do you think that ... answers just exist? What type of weird deterministic video game world do you live in where this is not the case?

jqpabc123 · 23m ago
How many "r's" are in the word "strawberry"?

I'm certain this simple question has a definitive answer.

procaryote · 14m ago
which is apparently hard to come up with by compiling lots of text into a statistical model of what text is most likely to come after your question
darth_avocado · 4h ago
Funnily the same thing would get you promoted in corporate America as a human
jqpabc123 · 3h ago
But only if you are physically attractive and skilled at golf.
bwfan123 · 1h ago
Arguably, the biggest breakthroughs we have had came out of formalization of our world models. Math formalizes abstract worlds, and science formalizes the real world with testable actions.

The key feature of formalization is the ability to create statements, and test statements for correctness. ie, we went from fuzzy feel-good thinking to precise thinking thanks to the formalization.

Furthermore, the ingenuity of humans is to create new worlds and formalize them, ie we have some resonance with the cosmos so to speak, and the only resonance that the LLMs have is with their training datasets.

CloseChoice · 4h ago
LLMs are largely used by developers, who (in some sense or the other) constantly supervise what the LLM does (even if that means, for some, committing to main and running in production). We already have a lot of tools: tests, compilation, a programming language with its harsh restrictions compared to natural language, and of course the eye test. This is not the case for a lot of jobs where GenAI is used for hyperautomation, so I am really curious in which ways it will or won't get adopted in other areas.
EE84M3i · 2h ago
This is so prominent in the cultural consciousness that it was lampooned in this week's episode of South Park, where Randy Marsh goes on a ChatGPT (and ketamine) fueled bender and destroys his business.
tangotaylor · 4h ago
I don't think humans are good at assessing the accuracy of their own opinions either and I'm not sure how AI is going to do it. Usually what corrects us is failure: some external stimulus that is indifferent or hostile to us.

As Mazer Rackham from Ender's Game said: "Only the enemy shows you where you are weak."

nijave · 4h ago
Maybe AI isn't artificial enough here...
1vuio0pswjnm7 · 26m ago
What is interesting IMO about the "confidently wrong" phenomenon is that this was also commonly found in internet forums and online commentary in general prior to widespread use of today's confidently wrong "AI". That is, online commenters routinely were and still are "confidently wrong". IMHO and IME, the "confidently wrong" phenomenon was and still is more heavily represented in online commentary than "IRL".

No surprise IMO that, generally, online commenters and so-called "tech" companies who tend to be overly fixated on computers as the solution to all problems, are also the most numerous promoters of confidently wrong "AI".

The nature of the medium itself and those so-called "tech" companies that have sought to dominate it through intermediation and "ad services"^1 could have something to do with the acceptance and promotion of confidently wrong "AI". Namely, its ability to reduce critical thinking and the relative ease with which uninformed opinions, misinformation, and other non-factual "confidently wrong" information can be spread by virtually anyone.

1. If "confidently wrong" information is popular, if it "goes viral", then with few exceptions it will be promoted by these companies to drive traffic and increase ad services revenue.

Please note: I could be wrong.

jqpabc123 · 4h ago
Being able to recall all the data from the internet doesn't make you "intelligent".

It makes you a walking database --- an example of savant syndrome.

Combine this with failure on simple logical and cognitive tests and the diagnosis would be --- idiot savant.

This is the best available diagnosis of an LLM. It excels at recall and text generation but fails in many (if not most) other cognitive areas.

But that's ok, let's use it to replace our human workers and see what happens. Only an idiot would expect this to go well.

https://nypost.com/2024/06/17/business/mcdonalds-to-end-ai-d...

ldikrtjliaj · 1h ago
Well fucking yeah

Yesterday I asked ChatGPT a really simple, factual question. "Where is this feature in this software?" And it made up a menu that didn't exist. I told it "No, you're hallucinating, search the internet for the correct answer" and it directly responded (without the time delay and introspection bubbles that indicate an internet search) "That is not a hallucination, that is factually correct". God damn.

blibble · 4h ago
the only thing holding me back from being a billionare is my lack of a billion dollars
1970-01-01 · 2h ago
Confidently wrong while being unable to unlearn its incorrect assumption. I'd be happy with confidently wrong if it understood critical feedback is not an ask, it's an ultimatum for our current discussion to continue with the facts.
esafak · 3h ago
Bayesian models solve this problem but they occupy model capacity which practitioners have traditionally preferred to devote to improving point estimates.
hodgehog11 · 2h ago
I've always found this perspective remarkably misguided. Prediction performance is not everything; it can be extraordinarily powerful to have uncertainty estimates as well.
lenerdenator · 3h ago
Works fine for humans; I guess we'll know that AI has truly reached human levels of intelligence when being confidently wrong stops holding it back.
JCM9 · 4h ago
Add to being confidently wrong the super annoying way it corrects itself after disastrously screwing something up.

AI: “I’ve deployed the API data into your app, following best practices and efficient code.”

Me: “Nope thats totally wrong and in fact you just wrote the API credential into my code, in plaintext, into the JavaScript which basically guarantees that we’re gonna get hacked.”

AI: “You’re absolutely right. Putting API credentials into the source code for the page is not a best practice, let me fix that for you.”

jqpabc123 · 4h ago
AI Apologetics: "It's all your fault for not being specific enough."
ColinEberhardt · 4h ago
I agree with the overall sentiment here, having written something similar recently:

“LLMs don’t know what they don’t know” https://blog.scottlogic.com/2025/03/06/llms-dont-know-what-t...

But I wouldn’t say it is the only problem with this technology! Rather, it is a subtle issue that most users don’t understand.

kemcho · 2h ago
The angle that being able to detect "confidently wrong" can then help kick off new learning is interesting.

Has anyone had any success with continuous learning type AI products? Seems like there’s a lot of hype around RL to specialise.

ACCount37 · 1h ago
There's no "hype" because continuous learning is algorithmically hard and computationally intensive.

There's no known good recipe for continuous learning that's "worth it". No ready-made solution for everyone to copy. People are working on it, no doubt, but it's yet to get to the point of being readily applicable.

rokkamokka · 4h ago
The interesting question here is whether a statistical model like a GPT actually can encode this in a meaningful way. If so, nobody has quite found it yet.
ACCount37 · 4h ago
They can, and they already do it somewhat. We've found enough to know that.

As the most well known example: Anthropic examined their AIs and found that they have a "name recognition" pathway - i.e. when asked about biographic facts, the AI will respond with "I don't know" if "name recognition" has failed.

This pathway is present even in base models, but only results in consistent "I don't know" if AI was trained for reduced hallucinations.

AIs are also capable of recognizing their own uncertainty. If you have an AI-generated list of historic facts that includes hallucinated ones, you can feed that list back to the same AI and ask it how certain it is about every fact listed. Hallucinated entries will consistently have lower certainty. This latent "recognize uncertainty" capability can, once again, be used in anti-hallucination training.
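
A rough sketch of that feed-it-back-and-check idea, using the OpenAI Python client purely as one possible backend (the model name and the two example "facts" are placeholders of mine, not anything from this thread):

    from openai import OpenAI  # any chat-completion API would do

    client = OpenAI()          # assumes OPENAI_API_KEY is set in the environment
    MODEL = "gpt-4o-mini"      # placeholder model name

    # Pretend these came out of an earlier generation pass; one is hallucinated.
    facts = [
        "The Eiffel Tower was completed in 1889.",
        "Napoleon Bonaparte was born in Madrid in 1769.",  # false: he was born in Ajaccio
    ]

    for fact in facts:
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[{
                "role": "user",
                "content": "On a scale of 0-100, how certain are you that the following "
                           f"statement is true? Answer with a number only.\n{fact}",
            }],
        )
        print(fact, "->", resp.choices[0].message.content.strip())

The scores are noisy in practice, but hallucinated entries do tend to come back with lower numbers, which is what makes this usable as a training signal.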

Those anti-hallucination capabilities are fragile, easy to damage in training, and do not fully generalize.

Can't help but think that limited "self-awareness" - and I mean that in a very mechanical, no-nonsense "has information about its own capabilities" way - is a major cause of hallucinations. An AI has some awareness of its own capabilities and how certain it is about things - but not nearly enough of it to avoid hallucinations consistently across different domains and settings.

meindnoch · 4h ago
Not just AI.
witnessme · 2h ago
I like the DSL approach but can't imagine how practical and effective it is. Especially considering the cost.
giancarlostoro · 3h ago
What's really funny to me is that sometimes it fixes itself if you just ask "are you SURE ABOUT THIS ANSWER?" Myself and others often wonder why the heck they don't run a 2nd model to "proofread" the output or spot-check it. Like, did you actually answer the question or are you going off on a really weird tangent?

I asked Perplexity for some sample UI code for Rust / Slint and it gave me a beautiful web UI; I think it got confused because I wanted to make a UI for an API that has its own web UI. I told it that it did NOT give me code for Slint, even though some of its output made references to "ui.slint" and other Rust files, and it realized its mistake and gave me exactly what I wanted to see.

tl;dr why don't LLMs just vet themselves with a new context window to see if they actually answered the question? The "reasoning" models don't always reason.
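
A sketch of the kind of second-pass check being described, again with the OpenAI client only as an illustration (model name, question, and the suspect answer are all made up); the obvious catch is that it costs a second model call per answer:

    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # placeholder

    question = "Show me a minimal Slint UI in Rust."
    answer = "Sure! Here's an HTML page with some JavaScript..."  # the suspect first attempt

    # Brand-new context: the checker never sees the first conversation,
    # only the question and the candidate answer.
    verdict = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": f"Question: {question}\n\nAnswer: {answer}\n\n"
                       "Does this answer actually address the question? "
                       "Reply YES or NO plus one sentence of reasoning.",
        }],
    )
    print(verdict.choices[0].message.content)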

ACCount37 · 1h ago
Because that would be twice as computationally intensive.

"Reasoning" models integrate some of that natively. In a way, they're trained to double check themselves - which does improve accuracy at the cost of compute.

nyeah · 4h ago
PG pointed this out a while back. He said that AIs were great at generating typical online comments. (NB I don't know which site's comments he might have been referring to.)
ChrisMarshallNY · 3h ago
My favorite is "Tested and Verified," then giving me code that won't even compile.
mtkd · 4h ago
The link is a sales pitch for some tech that uses MCPs ... see the platform overview on the product top menu

Because MCPs solve the exact issue the whole post is about

myahio · 4h ago
Yep, this is why I'm skeptical about using LLMs as a learning tool
merelysounds · 4h ago
I’m especially surprised by how little progress has been made. Today’s hallucinations, while less frequent, continue to have a major negative impact. And the problem has been noticed since the start.

> "I will admit, to my slight embarrassment … when we made ChatGPT, I didn't know if it was any good," said Sutskever.

> "When you asked it a factual question, it gave you a wrong answer. I thought it was going to be so unimpressive that people would say, 'Why are you doing this? This is so boring!'" he added.

https://www.businessinsider.com/chatgpt-was-inaccurate-borin...

dankobgd · 4h ago
I am pretty sure it has many more problems
dismalaf · 1h ago
No, AI's lack of understanding holds it back.

It's literally just a statistical model that guesses what you want based on the prompt and a whole bunch of training data.

If we want a black box that's AGI/SGI, we need a completely new paradigm. Or we apply a bunch of old-school AI techniques (aka. expert systems) to augment LLMs and get something immediately useful, yet slightly limited.

Right now LLMs do things and are somewhat useful. They fall short of some expectations, exceed others, but yeah, a statistical model was never going to be more than the sum of its training data.

jeffxtreme · 1h ago
Does anyone know which XKCD comic the top image was? Or was it just created in the style of XKCD?
SalariedSlave · 4h ago
Anybody remember active learning? I'm old, and ML was much different back then, but this reminds me of grueling annotation work I had to do.

On a different note: is it just me or are some parts of this article oddly written? The sentence structure and phrasing read as confusing - which I find ironic, given the context.

captainclam · 1h ago
Wow, there really is an xkcd for everything.
Dwedit · 50m ago
Those are original cartoons drawn in the style of XKCD. But strangely enough, in the second cartoon, the Megan clone seems to change from a thin stick figure to suddenly wearing clothes?

I'm not sure if the comic was AI-assisted or not. AI-generated images do not usually contain identical pixel data when a panel repeats.

squigz · 4h ago
I've said from the beginning that until an LLM can determine and respond with "I do not know that", their usefulness will be limited and they cannot be trusted.
_Algernon_ · 4h ago
Rolling weighted dice repeatedly to generate words isn't factually accurate. More at 11.
chpatrick · 4h ago
It is if the weights are sufficiently advanced.
blueflow · 4h ago
I find such statements frightening. Too many people cannot tell the difference between prevalence ("everybody does it") and factual correctness.
chpatrick · 4h ago
Nothing to do with dice though.
blueflow · 4h ago
The whole "stochastic means to find factual correctness" thing is an error of method; arguing about weights here is nonsense.
chpatrick · 3h ago
It isn't though, the most factually correct human expert is also stochastic. The only question is how the dice are weighted.
blueflow · 3h ago
"human expert" as reference for "factually correct", oh just gently caress yourself. Appeal to authority (expert = social status) is as much bullshit as appeal to popularity.
chpatrick · 3h ago
Right now the fully deterministic, always-correct oracle machine doesn't exist. The most authoritative answer we can get on a subject is from a respected human in their field (who is still stochastic). It's unrealistic to hold LLMs to a higher standard than that.
blueflow · 3h ago
"The runway is free"

- Jacob Veldhuyzen van Zanten, respected aviation expert, Tenerife, 1977, brushing off the flight engineer's concern about another aircraft on the runway

chpatrick · 3h ago
Ok, so humans are also fallible. Your point being?
Zigurd · 4h ago
The weights, so to speak, come from the knowledge base. That means you can't get away from the quality of the knowledge base. That isn't uniform across all domains of knowledge. Then the problem becomes how do you make the training material uniformly high-quality in every knowledge domain? At best it becomes the meta problem of determining the quality of knowledge in some way that makes an LLM able to calibrate confidence to a knowledge domain. But more likely we're stuck with the dubious quality that comes from human bias and wishful thinking in supposedly authoritative material.
chpatrick · 4h ago
Sure, it's only as good as the training data. But human experts also output tokens with some statistical distribution. That doesn't mean anything.
Zigurd · 4h ago
That sounds plausible. But it doesn't explain why LLM's make laughably bad errors that even a biased and haphazard human researcher wouldn't make.
Zigurd · 4h ago
Gemini seems to have a user interface that, for the way most people encounter Gemini, is more closely linked to search results. This leads me to suspect that Google's approach to training could be uniquely informed by both current and historic web crawling.
chpatrick · 4h ago
I think that's been a lot less true over the last year or so. Gemini 2.5 Pro is the first LLM I actually find pretty damn reliable.
contagiousflow · 4h ago
If you think talking to an LLM is the same experience as talking to a human you should probably talk to more humans
chpatrick · 3h ago
That's not what I said. What I said is that the claim "LLMs aren't intelligent because they stochastically produce characters" doesn't hold because humans do that too even if they're intelligent and authoritative.
krapp · 3h ago
We don't actually know how human cognition works, so how do you know that humans "stochastically produce characters?"
chpatrick · 3h ago
Do humans always answer exactly the same way to the same question? No.

Also you could always pick the most likely token in an LLM as well to make it deterministic if you really wanted.
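
To make the "weighted dice" picture concrete, a toy sketch (made-up vocabulary and logits, nothing to do with any real model) of greedy decoding versus temperature sampling:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy next-token distribution over a tiny vocabulary: the "weighted dice".
    vocab = ["Paris", "London", "Rome", "banana"]
    logits = np.array([4.0, 1.5, 1.0, -3.0])

    def next_token(temperature):
        if temperature == 0:                       # greedy: always the most likely token
            return vocab[int(np.argmax(logits))]
        p = np.exp(logits / temperature)
        p /= p.sum()                               # softmax -> sampling weights
        return vocab[rng.choice(len(vocab), p=p)]

    print([next_token(0) for _ in range(5)])       # deterministic every time
    print([next_token(1.0) for _ in range(5)])     # stochastic, weighted by the logits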

krapp · 3h ago
That doesn't really prove anything. I could create a Markov chain with a random seed that doesn't always answer the same question the same way, but that doesn't prove the human brain works like a Markov chain with a random seed.

One thing humans tend not to do is confabulate entirely to the degree that LLMs do. When humans do so, it's considered a mental illness. Simply saying the same thing in a different way is not the same as randomly generating syntactically correct nonsense. Most humans will not, now and then, answer that 2 + 2 = 5, or that the sun rises in the southeast.

chpatrick · 3h ago
I'm not making any claim about how the human brain works. The only thing I'm saying is that humans also produce somewhat randomized output for the same question, which is pretty uncontroversial I think. That doesn't mean they're unintelligent. Same for LLMs.
staticman2 · 3h ago
I really wish people into LLMs would limit themselves to terms from neuroscience or philosophy when describing humans.

You are in my mind rightfully getting pushback for writing "human experts also output tokens with some statistical distribution. "

chpatrick · 2h ago
That's just a mathematical fact.

You have a big opaque box with a slot where you can put text in and you can see text come out. The text that comes out follows some statistical distribution (obviously), and isn't always the same.

Can you decide just from that if there's an LLM or a human sitting inside the box? No. So you can't make conclusions about whether the box as a system is intelligent just because it outputs characters in a stochastic manner according to some distribution.

staticman2 · 2h ago
Okay... I objected to your use of the word token. Humans don't think in tokens or even write in tokens so obviously what you wrote is not a fact.

That shouldn't even be controversial, I don't think?

You wrote "The text that comes out follows some statistical distribution".

At the risk of being in over my head here: did you mean the text can be described statistically, or that it "follows some statistical distribution"? Are these two concepts the same thing? I don't think so.

A program by design follows some statistical distribution. A human is doing whatever electrochemical thing it's doing that can be described statistically after the fact.

Regardless, my point was pretty simple. I know this will never happen, but I wish tech people would drop this tech language when describing humans and adopt neuroscience language.

chpatrick · 2h ago
> Humans don't think in tokens or even write in tokens so obviously what you wrote is not a fact.

Doesn't matter what they think in. A token can be a letter or a word or a sound. The point is that the box takes some sequence of tokens and produces some sequence of tokens.

> You wrote "The text that comes out follows some statistical distribution".

> At the risk of being in over my head here: did you mean the text can be described statistically, or that it "follows some statistical distribution"? Are these two concepts the same thing? I don't think so.

> A program by design follows some statistical distribution. A human is doing whatever electrochemical thing it's doing that can be described statistically after the fact.

Again, it doesn't matter how the box works internally. You can only observe what goes in and out and observe its distribution.

> Regardless my point was pretty simple, I know this will never happen but I wish tech people would drop this tech language when describing humans and adopt neuroscience language.

My point is neuroscience or not doesn't matter. People make the claim that "the box just produces characters with some stochastic process, therefore it's not intelligent or correct", and I'm saying that implication is not true because there could just as well be a human in the box.

You can't decide whether a system is intelligent just based of the method with which it communicates.

staticman2 · 1h ago
I think we are talking past each other but this has been entertaining.

I'd say anybody who writes "the LLM just produces characters with some stochastic process, therefore it's not intelligent or correct" is making an implicit argument about the way the LLM works and the way the human brain works. There might even be an implicit argument about how intelligence works.

They are not making the argument that you can't make up statistical models to describe a box, a human generated text, or an expert human opinion. But that seems to be the claim you are responding to.

nijave · 4h ago
MCP and agents seem like solutions, but as far as I know maintaining sufficient context is still a problem

I.e. ability to plug in expert data sources

Zigurd · 3h ago
Fine-tuning and RAG should, in theory, enable applications of LLMs to perform better in specific knowledge domains, by focusing annotation of knowledge on the domains specific to the application.
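
For concreteness, the retrieval half of that idea can be as small as the sketch below (toy documents and question are mine; a real system would use embeddings and a vector store rather than TF-IDF):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Minimal RAG-style retrieval: find the most relevant in-domain document
    # and prepend it to the prompt, so the model answers from supplied context
    # instead of guessing.
    docs = [
        "Invoices are archived for seven years under policy FIN-12.",
        "The staging cluster is redeployed every night at 02:00 UTC.",
        "Expense reports above $500 require VP approval.",
    ]
    question = "How long do we keep invoices?"

    vec = TfidfVectorizer().fit(docs + [question])
    scores = cosine_similarity(vec.transform([question]), vec.transform(docs))[0]
    context = docs[int(scores.argmax())]

    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    print(prompt)  # this prompt is what would actually be sent to the LLM
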
JamesSwift · 4h ago
I think you're missing the point. The issue is not the amount of knowledge it possesses. The problem is that there's no way to go from "statistically generate the next word" to "what is your confidence level in the fact you just stated". Maybe, with an enormous amount of computation, we could layer another AI on top to evaluate or add confidence intervals, but I just don't see how we get there without another quantum leap.
chpatrick · 3h ago
Of course there is. If its training forces it to develop a theory of mind then it will weight the dice so that it's more likely to output "I don't know". Most likely the culprit is that it's hard to make training data for things that it doesn't know.
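
For what it's worth, per-token probabilities are at least readable off the model, even if they are only a weak proxy for factual confidence; a minimal sketch with Hugging Face transformers and GPT-2 (chosen only because it is small enough to run anywhere):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of Australia is"
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=5, do_sample=False,
                         output_scores=True, return_dict_in_generate=True)

    # Probability the model assigned to each token it actually emitted:
    # a crude, per-token "how sure was I" signal.
    generated = out.sequences[0, inputs["input_ids"].shape[1]:]
    for tok_id, step_scores in zip(generated, out.scores):
        p = torch.softmax(step_scores[0], dim=-1)[tok_id].item()
        print(f"{tok.decode(int(tok_id))!r}  p={p:.2f}")

Low probabilities don't reliably mean "wrong", which is why the training-data point above still stands, but the raw signal is there.
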
paul7986 · 4h ago
And being overhyped with the doom and gloom of its effects on society.

ChatGPT (5) is not there, especially in replacing my field and skills: graphic design, web design, and web development. For the first two it spits out solid creations per your prompt request, yet it cannot edit its creations, it just creates new ones lol. So it's just another tool in my arsenal, not a replacement for me.

It makes me wonder how it generates the logos and website designs ... is it all just hocus pocus, the Wizard of Oz?

nijave · 4h ago
I don't know much about it, but apparently we've been having success at work with Figma MCP hooked up to Claude in Cursor. Apparently it can pull from our component library and generate usable code (although it still needs engineering to productionize)

I don't know about replacing anyone but our UI/UX designers are claiming it's significantly faster than traditional mock ups

paul7986 · 3h ago
Well, until these LLMs are able to spit out initial creations their user likes, and then are able to edit them properly per each request entered into the text prompt, our jobs are safe! Even better if you are also a UX Researcher along with a Designer and Developer. Research requires human interaction, and AI can't touch that at present, and won't for a decade or more.
dgfitz · 4h ago
s/confidently//

Because “ai” is fallible, right now it is at best a very powerful search engine that can also muck around in (mostly JavaScript) codebases. It also makes mistakes in code, adds cruft, and gives incorrect responses to “research-type” questions. It can usually point you in the right direction, which is cool, but Google was able to do that before its enshittification.

s/AI/LLMs/

The part where people call it AI is one of the greatest marketing tricks of the 2020s.