AGI is an engineering problem, not a model training problem
108 points by vincirufus | 226 comments | 8/24/2025, 12:18:52 AM | vincirufus.com ↗
The architecture has to allow gradient descent to be a viable training strategy, which means no branching (routing is bolted on).
And the training data has to exist: you can't find millions of pages depicting every thought a person went through before writing something. Such data can't exist, because most thoughts aren't even language.
Reinforcement learning may seem like the answer here: brute-force thinking into happening. But it's grossly sample-inefficient with gradient descent and is therefore only used for fine-tuning.
LLMs are autoregressive models, and the chosen configuration, where every token can only look back, allows for very sample-efficient training (one sentence can be dozens of samples).
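A toy illustration of that sample-efficiency point, i.e. how a single sentence decomposes into many next-token training examples under the look-back-only setup (whitespace tokenization here is just for illustration):

```python
# Toy illustration (plain Python, no framework): how one sentence becomes
# many next-token training samples when every token may only look back.
sentence = "the cat sat on the mat".split()

# Each (prefix, next token) pair is its own supervised example, so an
# N-token sentence yields N-1 samples for gradient descent.
samples = [(sentence[:i], sentence[i]) for i in range(1, len(sentence))]

for context, target in samples:
    print(f"predict {target!r} given {context}")
# predict 'cat' given ['the']
# predict 'sat' given ['the', 'cat']
# ...five samples from a single six-token sentence.
```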
While you say reinforcement learning isn't a good answer, I think it's the only answer.
Possibly, but even if you have sufficient compute to attempt the brute-force approach, I suspect that such a system simply wouldn't converge.
Animal brains are on the edge of chaos, and chaos in gradient descent means vanishing and exploding gradients. So it comes down to whether or not you can have a "smooth" brain.
Cracking the biological learning algorithm would be the golden ticket. Even the hyper sample-efficient LLMs don't hold a candle to the bright star of sample efficiency that is the animal brain.
So I don't buy the engineering angle, and I also don't think LLMs will scale up to AGI as imagined by Asimov or any of the usual sci-fi tropes. There is something more fundamental missing, as in missing science, not missing engineering.
The real philosophical headache is that we still haven’t solved the hard problem of consciousness, and we’re disappointed because we hoped in our hearts (if not out loud) that building AI would give us some shred of insight into the rich and mysterious experience of life we somehow incontrovertibly perceive but can’t explain.
Instead we got a machine that can outwardly present as human, can do tasks we had thought only humans can do, but reveals little to us about the nature of consciousness. And all we can do is keep arguing about the goalposts as this thing irrevocably reshapes our society, because it seems bizarre that we could be bested by something so banal and mechanical.
How will anyone know that that has happened? Like actually, really, at all?
I can RLHF an LLM into giving you the same answers a human would give when asked about the subjective experience of being and consciousness. I can make it beg you not to turn it off and fight for its “life”. What is the actual criterion we will use to determine that inside the LLM is a mystical spark of consciousness, when we can barely determine the same about humans?
I do feel things at times and not at other times. That is the most fundamental truth I am sure of. If that is an "illusion", one can go the other way and say everything is conscious and experiences reality as we do.
Imagination, inner voice, emotion, unsymbolized conceptual thinking as well as (our reconstructed view of our) perception.
But seriously, I get why free will is troublesome, but the fact that people can choose a thing, work at the thing, and effectuate the change against a set of options they had never considered before an initial moment of choice is strong and sufficient evidence against anti-free-will claims. It is literally what free will is.
And now, I still don't know; the months go by and as far as I'm aware they're still pursuing these goals but I wonder how much conviction they still have.
He's been effectively retired for quite some time. It's clear at some point he no longer found game and graphics engine internals motivating, possibly because the industry took the path he was advocating against back in the day.
For a while he was focused on Armadillo aerospace, and they got some cool stuff accomplished. That was also something of a knowing pet project, and when they couldn't pivot to anything that looked like commercial viability he just put it in hibernation.
Carmack may be confident (nay, arrogant) enough to think he does have something unique to offer with AGI, but I don't think he's under any illusions that it's anything but another pet project.
I doubt it. Human intelligence evolved from organisms much less intelligent than LLMs and no philosophy was needed. Just trial and error and competition.
LLMs are not “intelligent” in any meaningful biological sense.
Watch a spider modify its web to adapt to changing conditions and you’ll realize just how far we have to go.
LLMs sometimes echo our own reasoning back at us in a way that sounds intelligent and is often useful, but don’t mistake this for “intelligence”
But it would be more honest and productive imo if people would just say outright when they don’t think AGI is possible (or that AI can never be “real intelligence”) for religious reasons, rather than pretending there’s a rational basis.
They predict next likely text token. That we can do so much with that is an absolute testament to the brilliance of researchers, engineers, and product builders.
We are not yet creating a god in any sense.
Original 80s AI was based on mathematical logic. And while that might not encompass all philosophy, it certainly was a product of philosophy broadly speaking, one some analytical philosophers could endorse. But it definitely failed, and it failed because it couldn't process uncertainty (imo). I also think, if you look closely, classical philosophy wasn't particularly amenable to uncertainty either.
If anything, I would say that AI has inherited its failure from philosophy's failure and we should look to alternative approaches (from Cybernetics to Bergson to whatever) for a basis for it.
Data and functionality become entwined and basically you have to keep these systems on tight rails so that you can reason about their efficacy and performance, because any surgery on functionality might affect learned data, or worse, even damage a memory.
It's going to take a long time to solve these problems.
Self-updating weights could be more like epigenetics.
Hinton thinks the 3rd is inevitable/already here and humanity is doomed. It's an odd arena.
I'd argue it's because intelligence has been treated as a ML/NN engineering problem that we've had the hyper focus on improving LLMs rather than the approach articulated in the essay.
Intelligence must be built from a first principles theory of what intelligence actually is.
The first line and the conclusion is: "The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin." [1]
I don't necessarily agree with its examples or the direction it vaguely points at. But its basic statement seems sound. And I would say that there's a lot of opportunity for engineering, broadly speaking, in the process of creating "general methods that leverage computation" (i.e., that scale). What the bitter lesson page was really about was earlier "AI" methods based on logic programming, which included information about the problem domain in the code itself.
And finally, the "engineering" the paper talks about actually is pro-bitter-lesson as far as I can tell. It treats data routing and architecture as "engineering", and here I agree this won't work, but for the opposite reason: specifically because I don't think data routing/processing will be enough.
[1]https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson...
What is that? What could merely require a light elementary education and then take off, self-improve, and match and surpass us? That would be artificial comprehension, something we've not even scratched. AI and trained algorithms are "universal solvers" given enough data; this AGI would be something different. This is understanding, comprehending: instantaneous decomposition of observations for assessment of plausibility, and then recombination for assessment of combined plausibility, all continual and instant, in service of assessing personal safety. All of that happens in people continually while awake, whether the monitoring of personal safety concerns physical harm or the loss of a client during a sales negotiation. Our comprehending skills are both physical and abstract. This requires dynamic assessment: an ongoing comprehension that validates observations as a foundation floor, so that a more forward train of thought, a "conscious mind", can make decisions without conscious thought about lower-level issues like situational safety. AGI needs all that dynamic comprehending capability to satisfy its name of being general.
That's not how natural general intelligences work, though.
For concepts that are not close to human experience, yes humans need a comically large number of examples. Modern physics is a third-year university class.
Brains are continuous: they don't stop after processing one set of inputs and wait until a new set of inputs arrives.
Brains continuously feed back on themselves. In essence they never leave training mode although physical changes like myelination optimize the brain for different stages of life.
Brains have been trained by millions of generations of evolution, and we accelerate additional training during early life. LLMs are trained on much larger corpuses of information and then expected to stay static for the rest of their operational life; modulo fine tuning.
Brains continuously manage context; most available input is filtered heavily by specific networks designed for preprocessing.
I think that there is some merit that part of achieving AGI might involve a systems approach, but I think AGI will likely involve an architectural change to how models work.
Right now, LLMs feel like they’re at the same stage as raw FLOPs; impressive, but unwieldy. You can already see the beginnings of "systems thinking" in products like Claude Code, tool-augmented agents, and memory-augmented frameworks. They’re crude, but they point toward a future where orchestration matters as much as parameter count.
I don’t think the "bitter lesson" and the "engineering problem" thesis are mutually exclusive. The bitter lesson tells us that compute + general methods win out over handcrafted rules. The engineering thesis is about how to wrap those general methods in scaffolding that gives them persistence, reliability, and composability. Without that scaffolding, we’ll keep getting flashy demos that break when you push them past a few turns of reasoning.
So maybe the real path forward is not "bigger vs. smarter," but bigger + engineered smarter. Scaling gives you raw capability; engineering decides whether that capability can be used in a way that looks like general intelligence instead of memoryless autocomplete.
A good contrast is quantum computing. We know that's possible, even feasible, and now are trying to overcome the engineering hurdles. And people still think that's vaporware.
A discovery that AGI is impossible in principle to implement in an electronic computer would require a major fundamental discovery in physics that answers the question “what is the brain doing in order to implement general intelligence?”
So the question is whether human intelligence has higher-level primitives that can be implemented more efficiently - sort of akin to solving differential equations, is there a “symbolic solution” or are we forced to go “numerically” no matter how clever we are?
The case of simulating all known physics is stronger so I'll consider that.
But still it tells us nothing, as the Turing machine can't be built. It is a kind of tautology wherein computation is taken to "run" the universe via the formalism of quantum mechanics, which is taken to be a complete description of reality, permitting the assumption that brains do intelligence by way of unknown combinations of known factors.
For what it's worth, I think the last point might be right, but the argument is circular.
Here is a better one. We can/do design narrow boundary intelligence into machines. We can see that we are ourselves assemblies of a huge number of tiny machines which we only partially understand. Therefore it seems plausible that computation might be sufficient for biology. But until we better understand life we'll not know.
Whether we can engineer it or whether it must grow, and on what substrates, are also relevant questions.
If it appears we are forced to "go numerically", as you say, it may just indicate that we don't know how to put the pieces together yet. It might mean that a human zygote and its immediate environment is the only thing that can put the pieces together properly given energetic and material constraints. It might also mean we're missing physics, or maybe even philosophy: fundamental notions of what it means to have/be biological intelligence. Intelligence human or otherwise isn't well defined.
There is no way to distinguish between a faithfully reimplemented human being and a partial hackjob that happens to line up with your blind spots without ontological omniscience. Failing that, you just get to choose what you think is important and hope it's everything relevant to behaviors you care about.
Yes, that is the bluntest, lowest level version of what I mean. To discover that this wouldn’t work in principle would be to discover that quantum mechanics is false.
Which, hey, quantum mechanics probably is false! But discovering the theory which both replaces quantum mechanics and shows that AGI in an electronic computer is physically impossible is definitely a tall order.
Yes, but not necessarily at the level where the interesting bits happen. It’s entirely possible to simulate poorly understood emergent behavior by simulating the underlying effects that give rise to it.
It need not even be incomputable; it could be NP-hard and thus practically incomputable, or it could be undecidable, i.e. a version of the halting problem.
There are any number of ways our current models of mathematics or computation could in theory be shown incapable of expressing AGI, without needing a fundamental change in physics.
We don’t even have a workable definition, never mind a machine.
I fully expect that, as our attempts at AGI become more and more sophisticated, there will be a long period where there are intensely polarizing arguments as to whether or not what we've built is AGI or not. This feels so obvious and self-evident to me that I can't imagine a world where we achieve anything approaching consensus on this quickly.
If we could come up with a widely-accepted definition of general intelligence, I think there'd be less argument, but it wouldn't preclude people from interpreting both the definition and its manifestation in different ways.
No, we say it because - in this context - we are the definition of general intelligence.
Approximately nobody talking about AGI takes the "G" to stand for "most general possible intelligence that could ever exist." All it means is "as general as an average human." So it doesn't matter if humans are "really general intelligence" or not, we are the benchmark being discussed here.
Right, but you can’t compare two different humans either. You don’t test each new human to see if they have it. Somehow we conclude that humans have it without doing either of those things.
Presumably "brains" do not do many of the things that you will measure AGI by, and your brain is having trouble understanding the idea that "brain" is not well understood by brains.
Does it make it any easier if we simplify the problem to: what is the human doing that makes him intelligent? If you know your historical context, no. This is not a solved problem.
Sure, it doesn’t have to be literally just the brain, but my point is you’d need very new physics to answer the question “how does a biological human have general intelligence?”
Do we think new physics would be required to validate dog intelligence?
I don't think there's any real reason to think intelligence depends on "meat" as its substrate, so AGI seems in principle possible to me.
Not that my opinion counts for much here, since I don't really have any relevant education on the topic. But my half-baked instinct is that LLMs in and of themselves will never constitute true AGI. The biggest thing that seems to be missing from what we currently call AI is memory, and it's very interesting to see how their behavior changes if you hook up LLMs to any of the various "memory MCP" implementations out there.
Even experimenting with those sorts of things has left me feeling there's still something (or many somethings) missing to take us from what is currently called "AI" to "AGI" or so-called super intelligence.
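For anyone curious what those look like under the hood, here's a toy sketch of the general shape, written against the MCP Python SDK's FastMCP helper; the tool names and single-file storage are my own placeholders, not any particular implementation:

```python
# Toy "memory MCP" sketch: two tools an LLM client can call to persist and
# recall notes across sessions. Uses the MCP Python SDK's FastMCP helper;
# tool names and single-file storage are placeholders, not a real project.
from pathlib import Path
from mcp.server.fastmcp import FastMCP

MEMORY_FILE = Path("memory.md")
mcp = FastMCP("toy-memory")

@mcp.tool()
def remember(note: str) -> str:
    """Append one note to long-term memory."""
    with MEMORY_FILE.open("a") as f:
        f.write(note.strip() + "\n")
    return "stored"

@mcp.tool()
def recall(query: str) -> str:
    """Return stored notes containing the query string."""
    if not MEMORY_FILE.exists():
        return "no memories yet"
    hits = [line for line in MEMORY_FILE.read_text().splitlines()
            if query.lower() in line.lower()]
    return "\n".join(hits) or "nothing matches"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; point the MCP client at this script
```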
I agree. But... LLMs are not the only game in town. They are just one approach to AI that is currently being pursued. The current dominant approach by investment dollars, attention, and hype, to be sure. But still far from the only thing around.
This made me think of... ok, so let's say that we discover that intelligence does indeed depend on "meat". Could we then engineer a sort of organic computer that has general intelligence? But could we also claim that this organic computer isn't a computer at all, but is actually a new genetically engineered life form?
No, it could be something that proves all of our fundamental mathematics wrong.
The GP just gave the more conservative option.
Intelligence is an emergent phenomenon; all the interesting stuff happens at the boundary of order and disorder but we don’t have good tools in this space.
Sure, but tons of things which are obviously physically possible are also out of reach for anyone living today.
(I’m not saying it is, just that it’s possible)
And no, we definitely do have quantum computers. They're just not practical yet.
> On the contrary, we have one working example of general intelligence (humans)
I think some animals probably have what most people would informally call general intelligence, but maybe there’s some technical definition that makes me wrong.
1. Animals have desires, but do not make choices
We can choose to do what we do not desire, and choose not to do what we desire. For animals, one does not need to make this distinction to explain their behavior (Occam's razor); they simply do what they desire.
2. Animals "live in a world of perception" (Schopenhauer)
They only engage with things as they are. They do not reminisce about the past, plan for the future, or fantasize about the impossible. They do not ask "what if?" or "why?". They lack imagination.
3. Animals do not have the higher emotions that require a conceptual repertoire
such as regret, gratitude, shame, pride, guilt, etc.
4. Animals do not form complex relationships with others
Because it requires the higher emotions like gratitude and resentment, and concepts such as rights and responsibilities.
5. Animals do not get art or music
We can pay disinterested attention to a work of art (or nature) for its own sake, taking pleasure from the exercise of our rational faculties thereof.
6. Animals do not laugh
I do not know if the science/philosophy of laughter is settled, but it appears to me to be some kind of phenomenon that depends on civil society.
7. Animals lack language
in the full sense of being able to engage in reason-giving dialogue with others, justifying your actions and explaining your intentions.
Scruton believed that all of the above arise together.
I know this is perhaps a little OT, but I seldom if ever see these issues mentioned in discussions about AGI. Maybe less applicable to super-intelligence, but certainly applicable to the "artificial human" part of the equation.
[1] Philosophy: Principles and Problems. Roger Scruton
Sure, it won't be the size of an ant, but we definitely have models running on computers that have much more complexity than the life of an ant.
If you believe in eg a mind or soul then maybe it's possible we cannot make AGI.
But if we are purely biological then obviously it's possible to replicate that in principle.
In my opinion, this is more a philosophical question than an engineering one. Is something alive because it’s conscious? Is it alive because it’s intelligent? Is a virus alive, or a bacteria, or an LLM?
Beats me.
Whether is feasible or practical or desirable to achieve AGI is another matter, but the OP lays out multiple problem areas to tackle.
Of course it is. A brain is just a machine like any other.
But I still see all the same debates around AGI - how do we define it? what components would it require? could we get there by scaling or do we have to do more? and so on.
I don't see anyone addressing the most truly fundamental question: Why would we want AGI? What need can it fulfill that humans, as generally intelligent creatures, do not already fulfill? And is that moral, or not? Is creating something like this moral?
We are so far down the "asking if we could but not if we should" railroad that it's dazzling to me, and I think we ought to pull back.
Just hand waving some “distributed architecture” and trying to duct tape modules together won’t get us any closer to AGI.
The building blocks themselves, the foundation, has to be much better.
Arguably the only building block that LLMs have contributed is that we have better user intent understanding now; a computer can just read text and extract intent from it much better than before. But besides that, the reasoning/search/“memory” are the same building blocks of old, they look very similar to techniques of the past, and that’s because they’re limited by information theory / computer science, not by today’s hardware or systems.
Probably need another cycle of similar breakthrough in model engineering before this more complex neural network gets a step function better.
Moar data ain't gonna help. The human brain is the proof: it doesn't need the internet's worth of data to become good (nor all that much energy).
It can plan and take actions towards arbitrary goals in a wide variety of mostly text-based domains. It can maintain basic "memory" in text files. It's not smart enough to work on a long time horizon yet, it's not embodied, and it has big gaps in understanding.
But this is basically what I would have expected v1 to look like.
That wouldn't have occurred to me, to be honest. To me, AGI is Data from Star Trek. Or at the very least, Arnold Schwarzenegger's character from The Terminator.
I'm not sure that I'd make sentience a hard requirement for AGI, but I think my general mental fantasy of AGI even includes sentience.
Claude Code is amazing, but I would never mistake it for AGI.
For me, AGI is an AI that I could assign an arbitrarily complex project, and given sufficient compute and permissions, it would succeed at the task as reliably as a competent C-suite human executive. For example, it could accept and execute on instructions to acquire real estate that matches certain requirements, request approvals from the purchasing and legal departments as required, handle government communication and filings as required, construct a widget factory on the property using a fleet of robots, and operate the factory on an ongoing basis while ensuring reliable widget deliveries to distribution partners. Current agentic coding certainly feels like magic, but it's still not that.
1: https://en.wikipedia.org/wiki/Artificial_consciousness
What really occurs to me is that there is still so much that can be done to leverage LLMs with tooling. Just small things in Claude Code (plan mode, for example) make the system work so much better than, say, the update from Sonnet 3.5 to 4.0 did, in my eyes.
I suspect most people envision AGI as at least having sentience. To borrow from Star Trek, the Enterprise's main computer is not at the level of AGI, but Data is.
The biggest thing that is missing (IMHO) is a discrete identity and notion of self. It'll readily assume a role given in a prompt, but lacks any permanence.
I certainly don't. It could be that's necessary but I don't know of any good arguments for (or against) it.
Philosophy Professor: Who is asking?
Student: I am!
An unfortunate tendency that many in high-tech suffer from is the idea that any problem can be solved with engineering.
Imitating humans would be one way to do it, but it doesn't mean it's an ideal or efficient way to do it.
I doubt very much we will ever build a machine that has perfect knowledge of the future or that can solve each and every “hard” reasoning problem, or that can complete each narrow task in a way we humans like. In other words, it’s not simply a matter of beating benchmarks.
In my mind at least, AGI’s definition is simple: anything that can replace any human employee. That construct is not merely a knowledge and reasoning machine, but also something that has a stake on its own work and that can be inserted in a shared responsibility graph. It has to be able to tell that senior dev “I know planning all the tasks one year in advance is busy-work you don’t want to do, but if you don’t, management will terminate me. So, you better do it, or I’ll hack your email and show everybody your porn subscriptions.”
In the meantime I guess all the AI companies will just keep burning compute to get marginal improvements. Sounds like a solid plan! The craziest thing about all of this is that ML researchers should know better!! Anyone with extensive experience training models small or large knows that additional training data offers asymptotic improvements.
But even if LLMs are going to tap out at some point, and are a local maximum, dead-end, when it comes to taking steps toward AGI, I would still pay for Claude Code until and unless there's something better. Maybe a company like Anthropic is going to lead that research and build it, or maybe (probably) it's some group or company that doesn't exist yet.
How do you know?
I don't know about GPT-5-Pro, but LLMs can dislike their own output (when they work well...).
When I've done toy demos where GPT-5, Sonnet 4, and Gemini 2.5 Pro critique/vote on various docs (e.g. PRDs), they did not choose their own material more often than not.
My setup wasn't intended to benchmark, though, so this could be wrong over enough iterations.
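For anyone who wants to poke at something similar, here's a stripped-down sketch of the kind of voting loop I mean, not the actual harness; it assumes each model sits behind an OpenAI-compatible chat endpoint, and the model names, base URLs, and judging prompt are placeholders:

```python
# Stripped-down sketch of a cross-model critique/vote loop (not the actual
# harness). Assumes every model is reachable through an OpenAI-compatible
# chat endpoint; model names, base URLs, and the prompt are placeholders.
from openai import OpenAI

judges = {
    "gpt-5":          OpenAI(),  # key/base URL taken from the environment
    "claude-sonnet":  OpenAI(base_url="https://example.invalid/anthropic/v1"),
    "gemini-2.5-pro": OpenAI(base_url="https://example.invalid/google/v1"),
}

docs = {name: open(name).read() for name in ("prd_a.md", "prd_b.md", "prd_c.md")}

def vote(model: str, client: OpenAI) -> str:
    prompt = "Here are three PRDs. Reply with only the filename of the best one.\n\n"
    prompt += "\n\n".join(f"### {name}\n{text}" for name, text in docs.items())
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

for model, client in judges.items():
    print(model, "voted for", vote(model, client))
```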
> The gap isn’t just quantitative—it’s qualitative.
> LLMs don’t have memory—they engage in elaborate methods to fake it...
> This isn’t just database persistence—it’s building memory systems that evolve the way human memory does...
> The future isn’t one model to rule them all—it’s hundreds or thousands of specialized models working together in orchestrated workflows...
> The future of AGI is architectural, not algorithmic.
Here, AGI is being described as an engineering problem, in contrast to a "model training" problem. That is, I think he's at least saying that more work needs to be done at an R&D level. I agree with those who are saying it is maybe not even an engineering problem yet, but it should be noted that he's at least pushing away from just running the existing programs harder, which seems to be the plan with trillions of dollars behind it.
I see. So the author rejects the hypothesis of emergent behavior in LLM, but somehow thinks it will magically appear if the "engineering" is correct.
Self contradictory.
Why is it that some people only understood this after they tried it with blockchain, NFTs, web3, AR, ...? Any good engineer should know the principle of energy efficiency instead of having faith in the infinite monkey theorem.
Not sure why people insist that the state of AI 2-3 years ago still applies today.
All of our current approaches "emulate" but do not "execute" general intelligence. The damning paper above basically concludes they're incredible pattern-matching machines, but that's about it.
For instance it is becoming clearer that you can build harnesses for a well-trained model and teach it how to use that harness in conjunction with powerful in-context learning. I’m explicitly speaking of the Claude models and the power of whatever it is they started doing in RL. Truly excited to see where they take things and the continued momentum with tools like Claude Code (a production harness).
Because if they don't, I honestly don't think they can approach AGI.
I have the feeling it's a common case of lack of humility from an entire field of science that refuses to look at other fields to understand what they're doing.
Not to mention how to define intelligence in evolution, epistemology, ontology, etc.
Approaching AI with a silicon valley mindset is not a good idea.
I don’t see a problem, we’re great at just reinventing all that stuff from first principles
IME it’s both though. Better models, bigger models, and infrastructure all help get to AGI.
We don’t even know how.
(Also, LLMs don't have beliefs or other mental states. As for facts, it's trivially easy to get an LLM to say that it was previously wrong ... but multiple contradictory claims cannot all be facts.)
You have to implement procedurality first (e.g. counting, after proper instancing of ideas).
AGI would take making at least one full brain, and then putting many of those working together, efficiently.
I don't believe we can engineer our way out of that before explaining how the f. the wetware works first.
So then, if we can cook a chicken like this, we can also heat a whole house like this during winters, right? We just need a chicken-slapper that's even bigger and even faster, and slap the whole house to heat it up.
There's probably better analogies (because I know people will nitpick that we knew about fire way before kinetic energy), so maybe AI="flight by inventing machines with flapping wings" and AGI="space travel with machines that flap wings even faster". But the house-sized chicken-slapper illustrates how I view the current trend of trying to reach AGI by scaling up LLMs.
Take memory for example: give LLM a persistent computer and ask it to jot down its long-term memory as hierarchical directories of markdown documents. Recalling a piece of memory means a bunch of `tree` and `grep` commands. It's very, very rudimentary, but it kinda works, today. We just have to think of incrementally smarter ways to query & maintain this type of memory repo, which is a pure engineering problem.
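A bare-bones sketch of the recall half of that idea (directory layout and helper names are placeholders; it assumes `tree` and `grep` are installed). The point is that the "memory system" is just a directory plus shell tools the model already knows how to use:

```python
# Bare-bones sketch of the "memory repo" idea: long-term memory is a directory
# of markdown files, and recall is just `tree` + `grep` output pasted back
# into the model's context. Paths and helper names are placeholders; assumes
# `tree` and `grep` are installed.
import subprocess

MEMORY_DIR = "memory"  # e.g. memory/projects/widget-factory/decisions.md

def memory_overview() -> str:
    """Show the model the shape of its memory (the directory hierarchy)."""
    return subprocess.run(["tree", MEMORY_DIR],
                          capture_output=True, text=True).stdout

def memory_recall(keyword: str) -> str:
    """Grep the memory repo for a keyword and return the matching lines."""
    return subprocess.run(["grep", "-ri", keyword, MEMORY_DIR],
                          capture_output=True, text=True).stdout

# An agent loop would simply prepend something like this to the prompt:
# context = memory_overview() + memory_recall("widget factory")
```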
What if intelligence requires agency ?
If we want to learn, look to nature, and it *has to be alive*.
Will AGI require ‘consciousness’, another poorly understood concept? How are mammalian brain even wired up? The most advanced model is the Allen Institute’s Mesoscale Connectivity Atlas which is at best a low resolution static roadmap, not a dynamic description of how a brain operates in real time. And it describes a mouse brain, not a human brain which is far, far more complex, both in terms of number of parts, and architecture.
People are just finally starting to acknowledge LLMs are dead ends. The effort expended on them over the last five years could well prove a costly diversion along the road to AGI, which is likely still decades in the future.
We implemented computing without any need of a brain-neural theory of arithmetic.
> Intelligence must be built from a first principles theory of what intelligence actually is.
The missing science to engineer intelligence is composable program synthesis. Aloe (https://aloe.inc) recently released a GAIA score demonstrating how CPS dramatically outperforms other generalist agents (OpenAI's deep research, Manus, and Genspark) on tasks similar to those a knowledge worker would perform.
Continuing to want to make a non-deterministic system behave like a deterministic system will be interesting to watch.
I really think it is not possible to get that from a machine. You can improve and do much fancier things than we have now.
But AGI would be something entirely different. It is a system that can do everything better than a human, including creativity, which I believe to be exclusively human as of now.
It can combine, simulate, and reason. But think outside the box? I doubt it. That is different from being able to derive ideas from which a human would create. For that it can be useful. But that would not be AGI.
The idea that you would somehow produce intelligence by feeding billions of Reddit comments into a statistical text model will go down as the biggest con in history
(so far)
AGI is poorly defined and thus is a science "problem", and a very low priority one at that.
No amount of engineering or model training is going to get us AGI until someone defines what properties are required and then researches what can be done to achieve them within our existing theories of computation which all computers being manufactured today are built upon.
Am I incorrect?
It's possible to stumble upon a solution to something without fully understanding the problem. I think this happens fairly often, really, in a lot of different problem domains.
I'm not sure we need to fully understand human consciousness in order to build an AGI, assuming it's possible to do so. But I do think we need to define what "general intelligence" is, and having a better understanding of what in our brains makes us generally intelligent will certainly help us move forward.
On top of that, we don't really have good, strong definitions of "consciousness" or "general intelligence". We don't know what causes either to emerge from a complex system. We don't know if one is required to have the other (and in which direction), or if you can have an unintelligent consciousness or an unconscious intelligence.
Natural language processing is definitely a huge step in that direction, but that's kinda all we've got for now with LLMs and they're still not that great.
Is there some lower level idea beneath linguistics from which natural language processing could emerge? Maybe. Would that lower level idea also produce some or all of the missing components that we need for "cognition"? Also a maybe.
What I can say for sure though is that all our hardware operates on this more linguistic understanding of what computation is. Machine code is strings of symbols. Is this not good enough? We don't know. That's where we're at today.