Stop treating "AGI" as the north-star goal of AI research

35 todsacerdoti 18 5/3/2025, 4:24:24 AM arxiv.org ↗

Comments (18)

YetAnotherNick · 10h ago
I highly doubt any company is just focusing on AGI, including OpenAI. Otherwise they wouldn't keep releasing 5 versions of 4o with different "personality".
ivape · 5h ago
We can't.

I'll explain why very simply. The visions of AI and of Virtual Reality both existed well before the technology. We envisioned humanoid robots well before we ever had a chance of making them. We also envisioned an all-knowing AI well before we had our current technology. We will continue to envision the end state because it is the most natural conclusion. No human can help imagining the inevitable. Every human, technical or not, has the capacity to fully imagine this future, which means the entirety of the human race will be directed toward this foregone conclusion.

Like God and Death (and taxes). *shrugs*

Smith: It is inevitable, Mr. Anderson.

yupitsme123 · 19m ago
Some marketer decided to call this stuff AI precisely because they wanted to make the connection to those grand visions that you're talking about.

If instead we called them what they are, Large Language Models, would you still say that they were hurtling inevitably towards Generalized Intelligence?

ivape · 8m ago
Yeah.
Nullabillity · 1h ago
We don't have to build the torment nexus.
bmacho · 7h ago
Is it just me, or is this title gross and annoying to the point of being straight-up trolling?
tim333 · 5h ago
It is kinda. And reading the abstract it's maybe worse.
adityamwagh · 4h ago
Yeah. I also don’t understand why it’s an arXiv article rather than a blog post.
umbra07 · 4h ago
Because papers are increasingly written to catch the attention of news publications/blogs/social media instead of professors/academics/researchers.
cwillu · 3h ago
Position papers are not a recent phenomenon.
ivape · 4h ago
The talent pool has thinned due to oversaturation, isn't that obvious?
belter · 7h ago
AGI persists not because it’s a coherent scientific objective, but because it functions as a lucrative mythology perfectly aligned with VC expectations...Next step...analyze AGI not just as bad science, but as good branding.
tim333 · 5h ago
It seems like something of a scientific objective: to understand human thinking, try making a machine that can do it.
chunkmonke99 · 3h ago
Are we sure that is what is happening? Can you really do any meaningful "science" when the subject under study is a black box shrouded in secrecy? What has been learned from LLMs regarding human cognition, and is there broad convergence on that view?
tim333 · 15m ago
It's not the main driver of what's happening, but it's an aspect of it that goes back a long way. For example, Turing writing in 1946:

>I am more interested in the possibility of producing models of the action of the brain than in the applications to practical computing...although the brain may in fact operate by changing its neuron circuits by the growth of axons and dendrites, we could nevertheless make a model... https://en.wikipedia.org/wiki/Unorganized_machine

Der_Einzige · 12h ago
No.
az09mugen · 10h ago
Yes.
ashoeafoot · 7h ago
Introducing the two-bit weight! Now you can pack all your uniform grey zones into the variable name. Save memory, process your data faster on smaller chips! We can retrain them; we have the technology!