This has been my opinion for some time too. I don't think I'll see AGI in my lifetime. I think the current widespread belief comes from the massive leap that transformers provided, but transformers have their limits. We would need another radically new idea in order to create AGI - which, just like all discoveries that aren't evolutionary, boils down to random chance[1]. What transformers have given us is substantially more infrastructure for trying new ideas out, so the probability of AGI being discovered has increased.
[1]: https://en.wikipedia.org/wiki/Eureka_effect
Well you’re basing your conclusion on a 2 year blip of not being able to stop LLMs from hallucinating.
This is called the pessimism effect: denying something by only looking at one small aspect of reality while ignoring the overarching trend.
Follow the trendline of ML for the last decade. We've been moving at a breakneck pace and the progress has been both evolutionary in nature and random chance. But there is a clear trendline of linear upwards progress and at times the random chance accelerates us past the linear upward trend.
Stop looking at LLMs; look at the 10-year trendline of ML as a holistic picture. You're drilling down on a specific ML problem and a specific model.
I believe we will see AGI within our lifetime, but when we see it the goalposts will have moved and the internet will be loaded with so much AI slop that we won't be amazed by it. Like the agi will be slightly mentally stupid at this one thing and because of that it's not AI even though it blows past some Turing test (which in itself will be a test where we moved the goal post a thousand times)
zamalek · 18h ago
My opinion of LLMs is in no way affected by hallucinations. Humans do it all the time too, talking about assumptions as though they are facts. For example:
> But there is a clear trendline of linear upwards progress
This is not the case at all.[1]
[1]: https://llm-stats.com/
That's the 2-year LLM-specific hallucination blip the GP was talking about in the first place. His point was you should look at ML as a whole over a longer time span for a more accurate, less pessimistic picture.
jfengel · 14h ago
I think it's not a matter of stopping them from hallucinating, but of why they're hallucinating.
They hallucinate because they aren't actually working the way you do. They're playing with words. They don't have any kind of mental model -- even though they do an extraordinary mimicry of one.
An analogy: it's like trying to parse XML with a regular expression. You may get it to work in 99.99% of your use cases, but it's still completely wrong. Filtering out bad results won't get you there.
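To make the analogy concrete, here is a toy sketch (a hypothetical snippet, nothing from the article) of a regex that "parses" XML right up until it doesn't:

    import re
    import xml.etree.ElementTree as ET

    # A naive regex that "works" on flat, attribute-free documents...
    TAG_RE = re.compile(r"<(\w+)>([^<]*)</\1>")

    flat = "<name>Ada</name><age>36</age>"
    print(TAG_RE.findall(flat))    # [('name', 'Ada'), ('age', '36')]

    # ...and silently returns nothing once attributes, nesting, or comments show up.
    nested = '<person id="1"><name>Ada <!-- note --></name></person>'
    print(TAG_RE.findall(nested))  # []

    # A real parser models the grammar instead of pattern-matching the surface.
    root = ET.fromstring(nested)
    print(root.find("name").text)  # 'Ada '

The regex "worked" on the easy case, which is exactly the trap.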
That said, the "extraordinary mimicry" is far, far beyond anything I could possibly have imagined. LLMs pass the Turing test with flying colors, without being AGI, and I would have sworn that the one implied the other. So it's entirely possible that we're closer than I think.
123yawaworht456 · 16h ago
>Well you’re basing your conclusion on a 2 year blip of not being able to stop LLMs from hallucinating.
to this day, the improvement since the original API version of GPT4 (later heavily downgraded without a name change) has been less than amazing. context size increased dramatically, yes, but it's still pitiful, slow and brutally expensive.
dinfinity · 17h ago
> Stop looking at LLMs; look at the 10-year trendline of ML as a holistic picture
Exactly. Just 100 years ago AI did not exist at all. Hell, (electronic) computers did not even exist then.
In that incredibly short timeframe of development, AI is coming very close to surpassing what took biological evolution millions of years (or even surpassing it in specific domains). If you compare the time it took to go from chimp to human with the time it took to go from the first animal to chimp, and assume that ratio scales linearly to AI evolution, we are very, very close to a similar step there.
Of course, it's not that simple and the assumption is bound to be wrong, but to think it might take another 100 years seems misguided given the rapid development in the past.
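For what it's worth, the back-of-the-envelope version of that scaling argument looks like this (every number here is a loose assumption, not a measurement):

    # Loose, assumed figures -- only the ratio matters, not the exact dates.
    first_animals_mya  = 600   # first animals, roughly 600 million years ago (assumption)
    chimp_human_mya    = 6     # chimp/human split, roughly 6 million years ago (assumption)
    ai_field_age_years = 70    # AI as a field, roughly since the 1950s (assumption)

    # The chimp-to-human step as a fraction of the animal-to-chimp run.
    ratio = chimp_human_mya / first_animals_mya    # ~0.01

    # If AI development scaled the same way (it almost certainly doesn't),
    # the equivalent final step would take roughly:
    print(f"{ai_field_age_years * ratio:.1f} years")   # ~0.7 years

The linear-scaling assumption is doing all the work there, which is the caveat above.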
ninetyninenine · 18h ago
I think a good way to characterize it will be the droids in Star Wars. Those droids are fucking conscious, and nobody gives a shit; they are just mundane technology.
And after too much time without a data wipe, those droids go off the freaking rails, becoming too self-aware, and then people just treat it like it's no big deal, just an annoyance.
This is the future of AI. AI will be a retarded assistant and everyone will be bored with it.
whoaMndBlwn · 17h ago
Your final comment here. Replace AI with human.
Idling our way up an illusory social/career escalator the elders convinced us was real.
Too real. Time to be done with the internet for the day. And it’s barely noon.
ath3nd · 14h ago
> Well you’re basing your conclusion on a 2 year blip of not being able to stop LLMs from hallucinating.
LLMs can't truly reason. It's not about hallucinations. LLMs are fundamentally designed NOT to be intelligent. Is my IntelliJ autocomplete AGI?
> Like the agi will be slightly mentally stupid at this one thing and because of that it's not AI even though it blows past some Turing test (which in itself will be a test where we moved the goal post a thousand times)
I can only respond with a picture:
https://substack.com/@msukhareva/note/c-131901009
> We've been moving at a breakneck pace and the progress has been both evolutionary in nature and random chance.
Yes, I enjoy being 19% slowed down by AI tooling, that's real breakneck pace.
https://www.infoworld.com/article/4020931/ai-coding-tools-ca...
Just because this breed of autocomplete can drown you in slop very fast doesn't mean we are advancing. If anything, we are regressing.
ninetyninenine · 2h ago
What does that picture even mean? That AI doesn't get things right? That picture shows a fact everyone knows. It's obvious. I don't get how people think they can respond to this stuff by regurgitating obvious information and thinking they just dropped the mic. Everyone knows models hallucinate and get things wrong. Your point?
cgriswald · 18h ago
Well, Wells actually says "...years and years and years away from anyone creating an actual artificial intelligence."
You know, in case you correctly interpreted the headline to mean Wells is saying aliens developed AI out there.
vouaobrasil · 18h ago
Hyping up AGI is a good way for tech companies to distract people into thinking AI is actually not that big a deal, when it is. It may not be in terms of its pure reasoning or in the goal of reaching AGI, but it is very disruptive, and it's a guaranteed way to heavily reinforce the requirements of using big tech in daily life, without actually improving it.
Yes, it may not be AGI and AGI may not come any time soon, but by focusing on that question, people become distracted and don't have as much time to think about how parasitic big tech really is. If it's not a strategy used consciously, it's rather serendipitous for them that the question has come about.
Supermancho · 17h ago
> Hyping up AGI is a good way for tech companies to distract people into thinking AI is actually not that big a deal, when it is.
I'm not sure what you're trying to say. Most people don't know the difference between AI and AGI. It's all hype making people think it's a big deal.
I have family that can't help but constantly text about AI this and AI that. How dangerous it might be, or how it might revolutionize something else.
spacemadness · 18h ago
Not to mention all the people on HN arguing we're close to AGI because LLMs sound like humans and can "think". "What's the difference?" they ask, not in curiosity but after already making a strong claim. I assume it's the same people who probably skipped every non-engineering class in college because of those "useless" liberal arts requirements.
skydhash · 18h ago
I've done engineering in college, but I've been dabbling in art since I was young, and philosophy of science is much more attractive to me than actual science. I agree with you that a lot of the takes that AI is great, while internally consistent, are very reductive when it comes to technology usage by humans.
vouaobrasil · 18h ago
AI is only great when you narrowly define the problem in terms of efficient production of a narrowly-defined thing. And usually, production at that level of efficiency is a bad thing.
tartoran · 18h ago
What we currently have with LLMs is some kind of scripted artificial intelligence. I would say that is not necessarily a bad thing, considering that a true artificial intelligence with autonomy and goals of preserving itself could easily escape our control and wreak real havoc unless we approach it with tiny steps and clear goals.
cgriswald · 18h ago
Your post sort of hints at it, I think, but I'll state it clearly: Misalignment is the main threat when it comes to AI (and especially ASI).
A self-preserving AI isn't meaningfully more dangerous than an AI that solves world hunger by killing us all. In fact, it may be less so if it concludes that starting a war with humans is riskier than letting us live.
ninetyninenine · 18h ago
How is an LLM scripted? What do you mean? We don’t understand how LLMs work and we know definitively it’s not “stochastic parroting” as people used to call it.
daveguy · 18h ago
It is quasi-deterministic (sans a temperature parameter) and it only ever responds to a query. It is not at all autonomous. If you let it do chain-of-thought for too long or any sort of continuous feedback loop it always goes off the rails. It is an inference engine. Inference by itself is not intelligence. Chollet has very good reasoning that intelligence requires both inference and search/program synthesis. If you haven't read his papers about the ARC-AGI benchmark, you should check them out.
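To be concrete about the temperature point, here's a minimal sampling sketch (toy logits, not any real model):

    import math, random

    # Made-up next-token scores (logits) a model might emit.
    logits = {"the": 2.1, "a": 1.7, "cat": 0.3}

    def sample_next(logits, temperature):
        """Deterministic argmax at temperature ~0, increasingly random above it."""
        if temperature <= 1e-6:
            return max(logits, key=logits.get)  # greedy: same token every time
        scaled = {tok: score / temperature for tok, score in logits.items()}
        total = sum(math.exp(v) for v in scaled.values())
        probs = {tok: math.exp(v) / total for tok, v in scaled.items()}
        return random.choices(list(probs), weights=list(probs.values()))[0]

    print([sample_next(logits, 0.0) for _ in range(3)])  # ['the', 'the', 'the']
    print([sample_next(logits, 1.0) for _ in range(3)])  # varies between runs

Everything upstream of that sampling step is a fixed function of the weights and the prompt, which is what I mean by quasi-deterministic.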
ninetyninenine · 18h ago
> It is quasi-deterministic (sans a temperature parameter)
Human brains are quasi-deterministic. It's just chaos from ultimately deterministic phenomena, which can be modeled as a "temperature parameter".
> it only ever responds to a query. It is not at all autonomous.
We can give it feedback loops like CoT, and you can even have it talk to itself. Then, if you think of the feedback loop as the entire system, it is autonomous. Humans are actually doing the same thing; our internal thought process is by definition a feedback loop.
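The loop I mean is nothing more than this kind of sketch (call_llm is a hypothetical stand-in, not a real API):

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for whatever model API you use."""
        raise NotImplementedError("wire this up to an actual model")

    def self_talk(task: str, max_turns: int = 5) -> str:
        """Feed the model's own output back to it until it says it is done."""
        transcript = f"Task: {task}\n"
        for _ in range(max_turns):
            reply = call_llm(transcript + "\nThink step by step, then continue or say DONE.")
            transcript += f"\nModel: {reply}"
            if "DONE" in reply:
                break
        return transcript

Viewed from the outside, the loop plus the model is the system, and that system keeps acting without anyone prompting each step.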
> If you let it do chain-of-thought for too long or any sort of continuous feedback loop it always goes off the rails.
But this isn't scripted. This is more that the AI goes crazy. Scripting isn't a characteristic that accurately describes anything that's going on.
An AI that hallucinates and goes off the rails isn't characteristic of scripting; it's characteristic of a lack of control. We can't control AI.
baal80spam · 17h ago
You cannot be "light-years away" from a specific point in time. Who is this person and why is what they say important?
malux85 · 2h ago
If you want to argue technicalities I think it depends on your frame of reference.
We are orbiting around the galactic centre, so if a very long period of time passes and we still don't have AGI, we certainly can be "light-years away" from it.
sc68cal · 17h ago
It's a figure of speech. She's the author of the popular Murderbot series, which has been adapted into a successful show on Apple TV+. Her stories are about artificial life and artificial intelligence.
regularjack · 16h ago
I have to admit I also found it weird that Scientific American is using a unit of distance as if it were a unit of time.
hackable_sand · 10h ago
You can use a distance unit as a time unit. It's not weird.
noiv · 18h ago
Well, considering the impact current models already have, this is good news.
lostmsu · 8h ago
I know the phrase is a metaphor, but it is ironic that a light-year away in time is, well, a year.
ninetyninenine · 18h ago
I can't read the site; it requires a subscription. But I and many other researchers disagree. Geoffrey Hinton, for example, massively disagrees.
It's not just LLMs that advanced by leaps and bounds. For the past decade and more, ML has been rising at breakneck velocity. We see models for scene recognition, models that can read your mind, models that recognize human movement. We have been seeing the pieces and components and amazing results constantly for over 10 years, and this is independent of LLMs.
And then everyone thinks AI is thousands of years away because we hit a small blip with LLMs in 2 years.
And here's the thing. The blip isn't even solid. Like LLMs sometimes get shit wrong and sometimes get shit right; we just can't control it. Like we can't definitively say LLMs can't answer a specific question. Maybe another LLM can get it right; maybe if prompted a different way it will get it right.
The other strange thing is that the LLM shows signs of lying. Like it's not truthful. It has knowledge of the truth, but the thing's purpose is not really to tell us the truth.
I guess the best way to put it is that current AI sometimes behaves like AGI and sometimes doesn't. It is not consistently AGI. The fact that we built a machine that inconsistently acts like AGI shows how freaking close we are.
But the reality is no one understands how LLMs work. This fact is definitive. Like if you think we know how LLMs work then you are out of touch with reality. Nobody knows how LLMs work, so this article and my write-up are really speculation. We really don't know.
But the 10-year trendline of AI in general is the more accurate guide to future progress. Basing the future off a 2-year trendline of a specific problem (hallucination) with a specific kind of model (LLMs) is not predictive.
You can: go to archive.ph, copy the link, paste the link.
metalman · 18h ago
While "true AI" is likely impossible, that discussion detracts from the fact that a whole new and very powerfull ability to process information is here , which will/is bieng used to automate routine managerial tasks and run certain types of robotic equipment.
I am slowly preparing myself to use these new tools, but will never consider them "clean", and will impliment there use in certain hermeticaly compartmented areas of my professional and financial undertakings.
jmclnx · 18h ago
>We’re Light-Years Away
Needs to be pointed out :) If I move billions of light-years from here, I will be able to create AI :) A light-year is a distance; the title should maybe say "decades away".
But I fully believe her argument. I think kids being born today will not see any real AI implementation in their lifetime.
add-sub-mul-div · 17h ago
I'm very down on the idea that LLMs are on the path to AGI, but come on man, even they don't trip over a simple metaphor.