Ask HN: What would convince you to take AI seriously?
Now, success in a tough math exam isn't "automating all human labor," but it is certainly a benchmark many thought AI would not reach easily. Even so, many are claiming it isn't really a big deal, and that humans will still be far smarter than AIs for the foreseeable future.
My question is: if you are in the aforementioned camp, what would it take for you to adopt a frame of mind roughly analogous to "It is realistic that AI systems will become smarter than humans, and could automate all human labor and cognitive output within a single-digit number of years"?
Would it require seeing a humanoid robot perform some difficult task? (The Metaculus definition of AGI requires that a robot be able to satisfactorily assemble a circa-2021 Ferrari 312 T4 1:8-scale automobile model, or the equivalent.) Would it involve a Turing test of sufficient rigor? I'm curious what people's personal definition of "ok, this is really real" is.
We aren't getting that with next-token generators. I don't think we'll get there by throwing shit at the wall and seeing what sticks, either; I think we'll need a deeper understanding of the mind and of what intelligence actually is before we can implement it on our own, virtually or otherwise.
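To make "next-token generator" concrete, here's a minimal sketch of the autoregressive loop the term refers to. The bigram table is a toy stand-in for a learned model, purely an illustrative assumption, not how any real LLM works internally:

    # A toy "next-token generator": repeatedly pick the most likely
    # continuation and append it. The bigram table is a hypothetical
    # stand-in for a real model's learned distribution.
    BIGRAMS = {
        "<s>": {"the": 0.6, "a": 0.4},
        "the": {"cat": 0.5, "dog": 0.5},
        "a":   {"cat": 0.7, "dog": 0.3},
        "cat": {"</s>": 1.0},
        "dog": {"</s>": 1.0},
    }

    def generate(max_tokens=10):
        token, output = "<s>", []
        for _ in range(max_tokens):
            # Greedily take the highest-probability next token.
            token = max(BIGRAMS[token], key=BIGRAMS[token].get)
            if token == "</s>":
                break
            output.append(token)
        return " ".join(output)

    print(generate())  # -> "the cat"

Real systems sample from a transformer's distribution over tens of thousands of tokens conditioned on the whole prefix, but the loop has the same shape: predict one token, append it, repeat.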
Similarly, we're pretty good at creating purpose-built machines, but general-purpose ones are still in their infancy. The hand is still the most useful universal tool we have. It's hard to compete with the human mind and body when it comes to adapting to and manipulating the environment with purpose. There are quite literally billions of them, they create themselves, and their labor is really cheap, too.
There's my serious answer.
And let’s say AI does automate all human labor… what’s the plan? Without some massive changes in how society is organized and functions, that will lead to chaos, and the change itself will be chaotic and massively disruptive and painful for millions. If someone can’t answer that question, they have no business hyping up the end of human involvement in civilization.
Capitalism is driving this hype around cost-cutting with AI, but capitalism requires that people have capital to buy various goods and services. Where is that going to come from when unemployment hits 100%? Who are the customers?
Why would anyone be excited about this future before solving that problem?
The last 50-80 years have been an aberration in terms of the distribution of wealth, income, and power. What AI owners want is a return to a world of lords and peasants, and with that comes a shift from an economy that serves the needs of consumers to one that suits the needs of those with incredible wealth.
Institutional investors will leave the middle and lower classes behind in favor of making a ton of money serving the needs of the incredibly rich, their families and their friends, and that will be the new formal economy. Everyone else will be served by informal economies that don't see institutional investment.
See also: Citigroup's plutonomy paper[1].
[1] https://delong.typepad.com/plutonomy-1.pdf
With the automation of labor and cognitive effort, MONEY won't matter. They don't need customers; they only need the automation required to produce. And that automation will be broadly and cheaply available all the way to the end, because people will be competing for disappearing jobs.
There is no precedent for this kind of change; think the Internet, computers, and the assembly line all packed together into a five-year window, globally. And consider that there's no apparent end to the level of development and impact. Using historical metrics (like customer base or resource availability) is not going to help anyone understand what's coming.
From OP's other comments:
> A lot of people here have an emotional aversion to accepting AI progress. They’re deep in the bargaining/anger/denial phase.
I too wish AI progress weren't happening as fast as it is. As a software developer, I want to imagine a future where my skills are useful. However, I haven't seen much convincing evidence or argument on this site that critiques short-term AI timelines without resorting to logical fallacies, name-calling, ad hominems, or other tired attempts at zingers that contribute nothing to the discourse.
Defined as what, by whom?
> a constrained problem that someone made up
How is that different from what humans do when asking questions?
This isn't an exhaustive list; it's just intended to illustrate when I'd start seriously thinking AGI was possible with incremental improvements.
I take AI seriously in the sense that it's useful, can solve problems, and represents a lot of value for a lot of businesses, but not in the sense that I think the current methods are headed for AGI without further breakthroughs. I'm also not an expert.
You just know when it's good.
Idk though, I'm not sure you could ever convince me to agree that anything can replace all human labor and cognitive output.
Why do I even need to make up my mind about it?
But call me when you can load in all human knowledge circa 1905 and have it spit out the Theory of Relativity.
And even then I might shift my goalposts.
There are many tasks AI genuinely improves, e.g. generic writing, brainstorming, research, and simple chores. However, I keep finding that it struggles with more complicated or open-ended problems, making obvious mistakes. If it stops making obvious mistakes, or even if we simply discover automated ways to correct them, I'd take it more seriously.
AI is also not creative, IMO: I find AI-generated art noticeably lower quality than real art, even though it looks better on the surface, because it doesn't have as much semantic detail. If I found AI-generated art that was almost as good as the human-made equivalent, or an especially impressive piece that would be very hard to produce without AI, I'd also take it more seriously.
The moment it can do that is the moment we reach the singularity.
We are not there yet. Everyone will know when we hit that point.
For now, it’s a great tool for helping with thought work, and that’s about it.
For original work: solving some well-known but unsolved problem, like the Collatz conjecture.
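For reference, a minimal sketch of what the Collatz conjecture asserts: iterating "halve if even, else 3n+1" from any positive integer eventually reaches 1. The loop below assumes that claim holds; it has been verified far beyond any input you'd try here, but never proved:

    def collatz_steps(n: int) -> int:
        # Count iterations of the Collatz map until n reaches 1.
        # The conjecture says this loop halts for every positive
        # integer n; nobody has proved it, which is the point above.
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    print(collatz_steps(27))  # a famously long orbit: 111 steps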