Does anyone think the current AI approach will hit a dead end?

7 rh121 11 9/8/2025, 2:44:08 AM
Billions of dollars spent, incredible hype that we will have AGI in several years. Does anyone think the current deep learning / neural net based AI approach will eventually hit a dead end and not be able to deliver its promises? If yes, why?

I realize this question is somewhat loosely defined. No doubt the current approach will continue to improve and yield results so it might not be easy to define "dead end".

In the spirit of things, I want to see whether some people think the current direction is wrong and won't get us to the final destination.

Comments (11)

solatrader · 1m ago
DeepConf, photonic chips... new things and improvements are still coming, and most AI products aren't well engineered yet. Given the speed of progress made this year, it's too early to say it's a dead end. There might be some stones missing for AGI, but that doesn't mean what has been built so far is wrong.
AngryData · 17m ago
I think it has its uses, but 90% of what people think it will be used for or replace won't happen. I don't believe LLMs are a path to general AI at all. I'm also unsure whether it will actually get that much better as time goes on, and I expect continuously diminishing returns as junk data from other AI instances, web bots, and people trying to manipulate AI responses creeps in.

But I could be totally wrong, because I'm certainly not an expert in these fields.

matt3D · 41m ago
Watching my children learn how to talk, I have come to the conclusion that the current LLM concept is one part of a two part problem.

Kids learn to speak before they learn to think about what they're saying. A 2- or 3-year-old can start regurgitating sentences and forming new ones that sound an awful lot like real speech, but it often seems to be just the child trying to fit in; they don't really understand what they're saying.

I used to joke that my kids' talking was sometimes like typing a word on my phone and then just hitting the next predictive word that shows up. Since then it has evolved in a way that seems similar to LLMs.

The actual process of thought seems slightly divorced from the ability to pattern-match words, but the pattern matching serves as a way to communicate it. I think we need a thinking machine that spits out vectors the LLM can convert into language. So I don't think LLMs are a dead end; I think they are just missing the other half of the puzzle.
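The "next predictive word" analogy above can be sketched as a toy bigram model: count which word follows which, then repeatedly take the most frequent successor, like tapping the first suggestion on a phone keyboard. (The corpus and function names here are made up for illustration; real LLMs predict over tokens with a neural network, not raw counts.)

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for the example.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
nexts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nexts[a][b] += 1

def ramble(word, steps=4):
    """Greedily pick the most common next word at each step,
    like always hitting the first predictive-text suggestion."""
    out = [word]
    for _ in range(steps):
        if word not in nexts:
            break
        word = nexts[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(ramble("the"))  # fluent-looking output with no understanding behind it
```

The output is locally plausible word-by-word but carries no intent, which is the point of the analogy: fluency and thought are separate problems.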

elmerfud · 4h ago
I would say it already has hit a dead end. We're simply in a period of scaling right now, but the intrinsic problems with how the algorithms work can't be overcome by small tweaks to them.

This is seen in what people term hallucinations: AI seeks to please and will lie to you and invent things in order to please you. We can scale it up to give it more knowledge, but ultimately those failures still creep in.

There will have to be a fundamentally new design for this to be overcome. What we have now is an incredible leap forward but it has already stalled on what it can deliver.

efortis · 3h ago
Given that post, this is what ChatGPT-5 said: "… But achieving AGI purely by scaling current architectures might not happen. The field may need conceptual shifts—new structures or paradigms—rather than just bigger models."

I don't know AI, but I'm one of the few who are grateful for what it is at the moment. I'm coding with the free mini model, and it has saved me a ton of time and I've learned a lot.

neuralkoi · 2h ago
It takes time for new technologies to mature. I think people are only looking out for AGI but are not paying attention to the small changes in productivity and exploration these tools are enabling at smaller scales, including in the more unglamorous machinery that makes everything chug along.
k310 · 3h ago
It has run out of data and is feeding increasingly on its own output (with help from bots that tarnish and bias it).

The first big settlement for using stolen data has come (Anthropic). How you extricate the books archive and claimants' works is unknown.

I believe that LLMs in verticals are being fed expert/cleaned data, but wasn't that always the case, i.e. knowledge bases? Much less data and power needed (less than ∞). Oh, and much less investment, IMO.

giardini · 4h ago
Yes. I gazed into my 8-ball at https://magic-8ball.com/, asked it "Will AI with LLMs fail?", and shook it. It responded "Most likely".

In my future I also saw lots and lots of cheap GPU chips and hardware, much gaming but fewer "developers" and a mostly flat-lined software economy for at least 8 years. Taiwan was still independent but was suffering from an economic recession.

atleastoptimal · 2h ago
It won't, but it seems 95% of people on HN think (hope) it will, because they hate AI and much of big tech.
breckenedge · 5h ago
What’s the “final destination”?
giardini · 4h ago
It does sound sinister, now that you've pointed it out.

Nice try, Claude!