Does anyone think the current AI approach will hit a dead end?
rh121 · 9/8/2025, 2:44:08 AM
Billions of dollars have been spent, and there is incredible hype that we will have AGI within several years. Does anyone think the current deep learning / neural net based AI approach will eventually hit a dead end and fail to deliver on its promises? If yes, why?
I realize this question is somewhat loosely defined. No doubt the current approach will continue to improve and yield results, so it might not be easy to define "dead end".
In the spirit of things, I want to see whether some people think the current direction is wrong and won't get us to the final destination.
But I could be totally wrong, because I'm certainly not an expert in these fields.
Kids learn to speak before they learn to think about what they're saying. A 2- or 3-year-old can start regurgitating sentences and forming new ones that sound an awful lot like real speech, but it often seems to be just the child trying to fit in; they don't really understand what they're saying.
I used to joke that my kids' talking was sometimes just like typing a word on my phone and then repeatedly hitting the next predictive word that shows up. Since then it's evolved in a way that seems similar to LLMs.
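To make the predictive-text analogy concrete, here's a minimal sketch of that "keep hitting the suggested word" loop: a toy bigram table built from a short made-up corpus, with greedy next-word selection. Everything here (the corpus, the function name) is hypothetical and purely illustrative; phone keyboards and LLMs use far richer models, but the loop has the same shape.

    from collections import Counter, defaultdict

    # Hypothetical toy corpus standing in for "everything the model has seen".
    corpus = "the cat sat on the mat and the cat ran to the door".split()

    # Count how often each word follows each other word (a bigram table).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def keep_hitting_predict(start, steps=6):
        # Greedily take the most common next word, like tapping the first
        # keyboard suggestion over and over.
        words = [start]
        for _ in range(steps):
            options = following[words[-1]]
            if not options:
                break
            words.append(options.most_common(1)[0][0])
        return " ".join(words)

    print(keep_hitting_predict("the"))
    # e.g. "the cat sat on the cat sat" -- greedy choice quickly loops on
    # familiar patterns: plausible-sounding, but not driven by any thought.

Swap the greedy pick for sampling in proportion to the counts and you get babble that sounds even more like a toddler improvising.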
The actual process of thought seems slightly divorced from the ability to pattern-match words, but the pattern matching serves as a way to communicate it. I think we need a thinking machine that spits out vectors the LLM can convert into language. So I don't think LLMs are a dead end; I think they're just missing the other half of the puzzle.
This shows up in what people term hallucinations. AI seeks to please, and it will lie to you and invent things in order to do so. We can scale it up to give it more knowledge, but ultimately those failures still creep in.
There will have to be a fundamentally new design for this to be overcome. What we have now is an incredible leap forward, but it has already stalled on what it can deliver.
I don't know AI, but I'm one of the few who's grateful for what it is at the moment. I'm coding with the free mini model, and it has saved me a ton of time and taught me a lot.
The first big settlement for using stolen data has arrived (Anthropic). How you extricate the books archive and the claimants' works is unknown.
I believe that LLMs in verticals are being fed expert/cleaned data, but wasn't that always the case, i.e., knowledge bases? Much less data and power needed (less than ∞). Oh, and much less investment, IMO.
In my vision of the future I also saw lots and lots of cheap GPU chips and hardware, lots of gaming but fewer "developers", and a mostly flat-lined software economy for at least 8 years. Taiwan was still independent but was suffering from an economic recession.
Nice try, Claude!