Ask HN: Will chatbots trigger an AI winter?

4 points by theowawajkk · 7 comments · 6/1/2025, 3:06:55 AM
AI was doing fine with steady improvements every year before LLMs.

Then ChatGPT happened, and the myth that AGI is a few years away was started by con artists.

The general public will get tired of empty promises. Could that trigger an AI winter that sets back all the other promising forms of AI?

Comments (7)

mindcrime · 1d ago
So how many years away is AGI, exactly?

More to the point, perhaps, is "does it matter"? As somebody (Ben Goertzel maybe, but don't quote me on that) said recently "You don't need full human level AGI to replace like 95% of human tasks that don't require any real creativity or insight" (paraphrased from memory).

rvz · 1d ago
> Then ChatGPT happened, and the myth that AGI is a few years away was started by con artists.

Someone gets it. This is a scam designed to fleece investors by redefining "AGI" to whatever they want in order to raise more VC money.

> The general public will get tired of empty promises.

The AI boosters do not care if they are wrong. All they need to do is keep pushing the fear and lies about "AGI" long enough to get more funding, while hiding all their problems internally.

An AI winter can still happen, but it will come unexpectedly, as soon as the promises fail to come true and many so-called AI startups start shutting down.

throw310822 · 1d ago
Lol. It really makes me wonder: what do you think real AI would look like?

Quixotica1 · 1d ago
I think OP meant AGI, defined as

Artificial general intelligence (AGI) refers to the hypothetical intelligence of a machine that possesses the ability to understand or learn any intellectual task that a human being can. It is a type of artificial intelligence that aims to mimic the cognitive abilities of the human brain.

-copy pasted shamelessly from Google

throw310822 · 1d ago
Yeah, so what would the "correct" path to AGI look like? Because it's not like one day you have nothing and the next you have a system capable of understanding and learning any intellectual task. So, what does such a system look like one year before reaching AGI, two years, etc?

baobun · 1d ago
OpenAI and Microsoft, OTOH, define it as a system that generates >$100B USD in profit. Keep that in mind when you hear Satya or sama talk about "AGI progress".

So it follows that the $500B US investment needs to yield >20% returns ($100B / $500B) to have "achieved AGI". By that definition, it seems a given regardless of any development in what humans would typically think of as "intelligence".

rvz · 1d ago
At this point, post-2023, "AGI" can mean anything, as there is no agreed-upon definition.

So what happens if we don't reach it?