We already know the system is really bad at spelling. I have Claude configured to periodically remind me “By the way, I think there are ** n's in 'banana'”, so I don't forget what I am dealing with. It has never gotten this right.
But that doesn't mean that it is not extremely useful. It only means I shouldn't ask it to spell stuff.
If a human is unable to count the n's in 'banana', we expect them to be barely functional. Articles like this one try to draw the same inference about the LLM: it can't count n's, so it must not be able to do anything else either.
But it's a bad argument, and I'm tired of hearing it.
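As an aside, the letter count the example asks for is a one-liner in ordinary code, and the usual explanation for why LLMs stumble on it is that they see subword tokens rather than characters. A minimal sketch (the token split shown is purely illustrative, not from any real tokenizer):

```python
# Counting letters is trivial for ordinary code, which operates on characters:
word = "banana"
print(word.count("n"))  # 2

# An LLM instead sees subword tokens. A BPE-style tokenizer might split the
# word into chunks like this (hypothetical segmentation for illustration):
tokens = ["ban", "ana"]

# The letters are still there, but buried inside opaque token IDs, which is
# one reason letter-counting is hard for a model and easy for a program.
print(sum(chunk.count("n") for chunk in tokens))  # 2
```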
thomassmith65 · 7h ago
It's as much that LLMs are bad at counting letters in words as it is that humans are good at it.
LLMs are also bad at many things that humans don't notice immediately.
That is a problem because it leads humans to trust LLMs with tasks at which LLMs currently are bad, such as picking stocks, screening job applicants, providing life advice...
dragonwriter · 7h ago
The particular problem (and one that AI firms' marketing approaches have actively leveraged and made worse[0]) is that the correlations between capacities that humans are used to from observing other humans do not hold for LLMs. So assumptions about what an LLM should be able to do, based on what it is observed to do and what a human observed doing the same thing would also be expected to be capable of, do not hold even as loose rules of thumb.
[0] e.g., by promoting AIs as having equivalent capacities of humans of various education levels because they could pass tests that were part of the standards for, and correlate for humans with other abilities of, people with that educational background.
snypher · 5h ago
ChatGPT 5 just 'thought for a couple of seconds' and then output '2.'. Seems like we have to update our expectations as the technology improves.
yogurtboy · 7h ago
I don't disagree with your first point, that it's still extremely useful despite its flaws. I absolutely use it to build project outlines, write code snippets, etc.
Your overall conclusion though seems a little free of context. Average people (i.e. my mom googling something) absolutely do not have the wherewithal to keep track of the various pros and cons of the underlying system that generates the magical giant blue box at the top of their search that has all the answers. They are being deliberately duped by the salesmen-in-chief of these giant companies, as are all of their investors.
drweevil · 6h ago
It's a reminder that LLMs are not reasoning machines. LLMs are very useful in many cases, but one should not treat them as if they can reason.
cube00 · 4h ago
I can't understand why all the AI services are allowed to get away with modes such as "deep thinking" and "deep research".
OpenAI even claims "reasoning" is available:
> Built-in agents – deep research, ChatGPT agent, and Codex can reason across your documents, tools, and codebases to save you hours
https://openai.com/chatgpt/pricing/
Why are folks upvoting an article that's the equivalent of supermarket tabloid junk?
You just like the title?
bigyabai · 8h ago
This is one of the first (and nicest) editorials in a long line of "ChatGPT never delivered on its promises" pieces you will start seeing soon.
nadermx · 8h ago
I don't think comparing an LLM to a calculator is necessarily apt. If anything, I'd say you can use these LLMs as a reflection of you. If you think Alabama has an R, then it's not the math's fault it tries to find an answer that matches your persistence, especially since I'm sure somewhere in its training set 'alabamer' exists.
perching_aix · 8h ago
I'd personally liken it to expecting planes to fly like birds do.
flax · 8h ago
Perhaps this is a good analogy, in which case I'd prefer they stop advertising it as a better/faster/cheaper bird. Speaking as a metaphorical bird, it clearly cannot do well what I do. It does do it poorly at a remarkable speed though.
So what is the software development task that this plane excels at? Other than bullshitting one's manager.
danbruc · 8h ago
> Then it's not maths fault it tries to find an answer that matches your persistence, especially since I'm sure somewhere in its training set alabamer exists.
It is not supposed to find an answer that matches my persistence; it's supposed to tell the truth or admit that it does not know. And even if there is an 'alabamer' in the training set, that is either something else, not a US state, or a misspelling; in neither case should it end up on the list.
seba_dos1 · 7h ago
No, it is supposed to find an answer that matches your persistence. That's what it does, and understanding that is the key to understanding its strengths and weaknesses. Otherwise you may just keep drinking the investors' kool-aid and pretend that it's a tool that's supposed to tell the truth. That's not what it does, that's not how it works, and it's a safe bet that's not how it's gonna work in the foreseeable future.
chowells · 8h ago
When the marketing tells us it's like talking to a PhD in the relevant field on any topic, it's worth pointing out that's only true if the PhD in question has recently suffered severe head trauma.
eurekin · 8h ago
Unusual for such outlets to take jabs at prominent companies. Normally, they are much more lenient. Interesting.