AGI Overhyped?
Firstly, what is AGI? I've never heard a decent definition. Some say it's an AI that is as smart or as general as humans; some say it's an AI that's conscious. I don't see how it could be a "step" by these definitions because nothing actually changes between a regular AI and an AGI. To make AI more general, you just make the input tokenization more granular, change the training method, and perhaps add some kind of iterative framework on top to make it do more stuff. Also, AI is already more capable than humans in almost every way except scale.
I personally think that it's just a hype word that Altman spammed to get more funding and interest in OpenAI. Even if you snapped your fingers and had the right training mechanism and the right networks etc for an AGI/ASI, I get the feeling it wouldn't even be smarter than people in the technical sense. AI already blows most people out of an IQ test at a fraction of the computational power of a brain, but that's because IQ tests compare competence to get relative intelligence; they don't test computation.
With that assumption, if AI can't be computationally stronger than humans, it's safe to say we won't have conscious computers for a while, but instead computers that act consciously. Does that mean that AI from here on out is a waste that does nothing but take control from people while benefiting us the same? What is ChatGPT going to look like in 5 years? Am I just going to type in "do my taxes," and it's just going to do whatever it wants on my pc until my taxes are done? Why would I ever want that over a system designed to do my taxes correctly EVERY TIME by accountants? One thing I know about AI is that it is slow as a mf. AI is great, but you really have to think: we really just built a giant dictionary guy who's going to have the same problems human employees have.
I don't know, just kind of spewing thoughts. I'd love to hear from people who are actual experts in designing these things.
Not at all, AI still ridiculously fails at tasks that are quite simple for humans/children.
It's probably a bigger step than that. For example humans learn from experience, current AIs are trained offline and then frozen and can only "learn" by stuffing their short term memory with notes, like someone with anterograde amnesia.
AI technology is changing each year, and AGI will probably be more different from today's transformer-based systems than those are from convolutional nets.
But it will probably incorporate ideas and components from today's LLMs, and just as importantly, its development will probably be paid for from the pockets of investors who have been tantalized by today's AI and think AGI must soon follow.
I personally hope it's a long way off but it might only be a few key insights away at this point.
I would say AGI can do every intellectual task a human can do. Maybe the raw cognitive power is already there but scale matters. If it can't work on a task for months on end, it's not AGI. Because humans can do it.
> AI already blows most people out of an IQ test at a fraction of the computational power of a brain
Does it? I thought the brain is much more energy efficient.
It strongly depends on what you're trying to do with the AI. Consider "G" in "AGI" as being a dot-product over the quality of results for all the things (I_a) that some AI can do and the things (I_h) a human can do.
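Purely as an illustration of that framing (the tasks and 0-1 quality scores below are made up, not measurements):

    # Illustration only: "generality" as overlap between what an AI and a human each do well.
    quality_ai    = {"translation": 0.9, "chess": 1.0, "taxes": 0.4, "laundry": 0.0}
    quality_human = {"translation": 0.7, "chess": 0.3, "taxes": 0.8, "laundry": 0.9}

    # Dot product over tasks: only large where *both* the AI and the human score well.
    generality = sum(quality_ai[t] * quality_human[t] for t in quality_ai)
    print(generality)  # 0.63 + 0.30 + 0.32 + 0.00 = 1.25, out of a possible 4.0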
For stuff where the AI is competent enough to give an answer, it's often (but not always) lower-energy than a human.
As an easy example of mediocre AI with unambiguously low power: think models that run on a high-end Mac laptop, in the cases where such models produce good-enough answers and do so fast enough that it would have been like asking a human.
More ambiguously: if OpenAI's prices even closely resemble the cost of electricity, then GPT-5-nano is similar to a human's energy cost if you include our bodies, and beats us by a lot if you also account for humans having a 25% duty cycle when we're employed and a lifetime duty cycle of 10-11%.
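Back-of-envelope on those duty cycle figures, assuming roughly a 40-hour week, ~46 working weeks a year, ~42 working years, and an ~80-year lifespan (my assumptions, not the parent's):

    hours_per_week = 40
    employed_duty = hours_per_week / (7 * 24)              # ~0.24: the "25% while employed"
    hours_worked_lifetime = 40 * 46 * 42                   # hours/week * working weeks/year * working years
    hours_alive = 80 * 365 * 24                            # ~80-year lifespan
    lifetime_duty = hours_worked_lifetime / hours_alive    # ~0.11: the "10-11% lifetime"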
Stuff where the AI isn't competent enough to give an answer… well, there's theoretical reasons to think you can make a trade-off for more competence by having it "think" for longer, but it's an exponential increase in runtime for linear performance improvements, so you very quickly reach a point where the AI is far too energy-intensive to bother with.
The systems appear smart because they're language models trained on quality tested text.
AFAIK, IQ tests used in psychological evaluations do not contain any randomness so exact answers are almost always in distribution. I haven't seen someone compare AI to an IQ test that is not in distribution.
On ARC-AGI, which is mildly similar to a randomly generated IQ test, humans still are much better than LLMs. https://arcprize.org/ (scroll down for chart)
The only chart I found was comparing the costs of different models.
For any AI concept, I think it might be instructive to consider the human equivalent.
For example, what is the definition of a genius? We don't have one, but genius certainly is a thing that exists. We might argue about who is/isn't a genius because of the missing definition, but we probably agree that exceptional people are a thing.
With AGI, it seems the expectation is to cover all subjects. Which I think is more like god. You either believe it or you don't. No one has definite proof of its existence or non-existence.
That doesn't mean it's all nonsense. I think we are objectively getting quite a few definitions emerging from established fact and technology that narrow this down. LLMs seem to be part of the solution on a path to some form of artificial intelligence that can keep up with us and that we might struggle to keep up with. But LLMs are not the whole solution. Though what they can't do keeps shifting: from glorified autocomplete to solving mathematical problems that were previously unsolved, in the space of less than 3 years since the launch of ChatGPT in November 2022. If the math wasn't solved before, there has to be a bit more to it than a glorified autocomplete.
Of course there are also counterpoints to that. But it at least challenges the notion that LLMs can't come up with new stuff. Even the notion that they can is deeply upsetting to some people, because it challenges their world views. This debate is as much about that as it is about coming up with workable definitions.
LLMs are obviously lacking in a lot of ways. Perhaps the most obvious thing is that after their training is completed, they stop learning and adapting. They have no memory beyond their prompts and what they learned during training. A ChatGPT conversation is just a large prompt with the entire history of the conversation and some clever engineering to load that into GPU memory relatively quickly. There are a lot of party tricks that involve elaborate system prompts and other workarounds.
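A minimal sketch of what that looks like from the caller's side (call_model below is a stand-in for whatever completion endpoint is in use, not a real API):

    # The "memory" of a chat is just a list that grows each turn; the model
    # itself is frozen and re-reads the whole history on every call.
    history = [{"role": "system", "content": "You are a helpful assistant."}]

    def chat(user_text, call_model):
        history.append({"role": "user", "content": user_text})
        reply = call_model(history)   # hypothetical stateless completion call
        history.append({"role": "assistant", "content": reply})
        return reply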
The ability to remember and learn seems pretty fundamental to AGI, whatever that is. It's also a thing that doesn't sound like it's unsolvable. Some kind of indexed DB external to the AI for RAG kind of works but it does not solve the learning problem. And it shifts the problem to how to query and filter the data, which might not be optimal. And it's more similar to us using Google to find something than it is to us remembering information.
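Roughly what that retrieval step does, with a toy word-overlap score standing in for real embeddings (illustration only):

    # Toy RAG: look notes up by similarity to the query and paste the best ones
    # into the prompt. Nothing in the model's weights changes; it only
    # "remembers" what gets retrieved and re-read each time.
    notes = [
        "The user's tax documents are due in April.",
        "The user prefers answers in metric units.",
        "The user's dog is named Bruno.",
    ]

    def score(query, note):
        return len(set(query.lower().split()) & set(note.lower().split()))

    def retrieve(query, k=2):
        return sorted(notes, key=lambda n: score(query, n), reverse=True)[:k]

    question = "what units does the user prefer?"
    prompt = "Context:\n" + "\n".join(retrieve(question)) + "\n\nQuestion: " + question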
Also, it's not like we're particularly good at cramming large amounts of information down or learning. Learning is a slow process. And we kind of get set in our ways as we age. Meaning we're reluctant to learn more stuff and less capable of doing so. That suggests that even modest improvements might have dramatic results. LLMs + some short term memory and ability to learn might end up being a lot more useful.
I like duck typing in programming and that's also my mental model for AGIs. The Turing test is kind of obsolete. But if it quacks like a duck and walks like a duck, it probably is a duck. Turing was onto something. Once we have a hard time telling apart the average person in society from an AI over days/weeks/years of interaction, we'll have AGIs. I think that's doable. But I'm not an ML engineer. A lot of those seem to believe it's doable too, though.
The rest of society, in the form of self-appointed armchair professors, religious leaders, eminent philosophers, and (to put this politely) lesser-qualified individuals with strong opinions, makes a lot of confused noises. But it does not generate a lot of definitions that have broad consensus.
Fundamentally for me I can't get over the idea that extraordinary claims require extraordinary evidence, and so far I haven't really seen any evidence. Certainly not any that I would consider extraordinary.
It's like saying that if a magician worked _really really hard_ on improving, evolving and revolutionising their "cut the assistant in half and put them back together" trick, they'll eventually be able to actually cut them in half and put them back together.
I have not seen a convincing reason to think that the path that is being walked down ends up at actual intelligence, in the same way that there is no convincing reason to think the path magicians walk down ends up in actual magic.
So, surgery?
As the stage magicians Penn and Teller (well, just Penn) said, stage magic is about making something very hard look easy, so much so that your audience simply doesn't even imagine the real effort you put into it.
Better analogy here would be asking if we're trying to summon cargo gods with hand-carved wooden "radios".
We never expected that there even could be a magic trick that came so close to mimicking human intelligence without actually being it. I think there's only so many ways that matter can be arranged to perform such tricks, we're putting lots of work into exploring that design space more thoroughly than ever before, and sooner or later we'll stumble on the same magic tricks that we humans are running under the hood.
Right, so this is the extraordinary claims bit. I'm not an expert in any of the required fields, to be clear, but I'm just not seeing the path, and no one as yet has written a clear and concise explainer on it.
So my presumption, given past experience, is that it is hype designed to drive investment plus hopium, not something that is based on any actual reasoned thought process.
To be fair, there is obviously some economic value in the fungibility of crypto-currency. The political and technical aspects are dubious.
> extraordinary claims require extraordinary evidence
Agreed, the only extraordinary achievement for this magic act so far is market capitalisation.
Even some cryptocurrencies like Monero have value if you consider "making digital transactions anonymously" to have value. I definitely do.
Correct, but many have noticed this, including Sam Altman in some interviews.
Everyone disagrees about all three letters of the initialism, plus whether the whole even means anything implied by those letters at all, plus whether to treat each one as a boolean rather than a number.
Also some people loudly reject this observation.
The definition I was using for "AGI" before ChatGPT came out was met by ChatGPT-3.5: it's an AI that's general-purpose, it doesn't just e.g. play chess. But there's people who reject that it even counts as "AI" at all, despite, well, the Turing Test.
Anyway.
> I don't see how it could be a "step" by these definitions because nothing actually changes between a regular AI and an AGI. To make AI more general, you just make the input tokenization more granular, change the training method, and perhaps add some kind of iterative framework on top to make it do more stuff.
That's a lot of things.
> Also, AI is already more capable than humans in almost every way except scale.
No, not really. There's specific metrics where some AI can beat a lot of humans, like playing chess, or how many languages it speaks to the level of a competent adult learner, and they can pay attention to a lot more than we can, and they can process data faster than we can, but…
…but LLMs* are only giving the illusion of intelligence by using superhuman speed and superhuman attention to make up for having a mind with the complexity of a squirrel's that's had half a million years of experience reading everything it can lay its metaphorical hands on.
* not VLMs, they're not fast enough or smart-looking enough yet
> AI already blows most people out of an IQ test at a fraction of the computational power of a brain, but that's because IQ tests compare competence to get relative intelligence; they don't test computation.
IQ tests are indeed bad; they can be learned. This is demonstrable because the graph you may have seen for AI IQ ratings has two variants from the same people, one with a public IQ test and one with a private IQ test, and most of the AIs do much, much worse on the private test: https://trackingai.org/home
The problem you're pointing at is also why the ARC-AGI test exists, and indeed this shows that current AI aren't anything close to human performance.
> With that assumption, if AI can't be computationally stronger than humans, it's safe to say we won't have conscious computers for a while, but instead computers that act consciously.
If you think "AGI" is poorly defined, you'll be horrified to learn that "consciousness" has something like 40 definitions. Nobody knows where it comes from in our brain structures or how much structure we need to have it. Is a mouse conscious? If so, there's enough compute going on for many AI models to be also. But the quantity of compute is likely a red herring, and the compute exists even if it's not running an AI, just as our brain chemistry still goes on while we're sleeping.
> Does that mean that AI from here on out is a waste that does nothing but take control from people while benefiting us the same?
Yes, but such people will take any opportunity or tech to do so, we've seen that since "automation" meant "steam engine".
> What is ChatGPT going to look like in 5 years? Am I just going to type in "do my taxes," and it's just going to do whatever it wants on my pc until my taxes are done?
If you're lucky. I think probably not, but 5 years is too long to rule out an architectural breakthrough that makes them much less error-prone; the Transformer models in 2020 were unimpressive for anything except translation.
> Why would I ever want that over a system designed to do my taxes correctly EVERY TIME by accountants?
$$$; plus accountants aren't perfect, they're only human.
> One thing I know about AI is that it is slow as a mf. AI is great, but you really have to think: we really just built a giant dictionary guy who's going to have the same problems human employees have.
If we're lucky.
Most likely it will continue to have new and exciting problems.
AGI is the most successful scam in human history, one Sam Altman came up with to get the insane investment and hype they are getting. They have intentionally not defined what it is or when it will be achieved, making it a never-reachable goal to keep the flow of money going. "We will be there in a couple of years" and "this feels like AGI" were told at every fucking GPT release.
It's in the best interest of every AI lab to keep this lie going. They are not stupid; they know it can't be reached with the current state-of-the-art techniques, transformers, not even with recent groundbreaking techniques like reasoning, and I think we are not even close.