Neuromorphic computing

52 points by LAsteNERD | 38 comments | 6/5/2025, 6:37:25 PM | lanl.gov

Comments (38)

datameta · 13h ago
I could be mistaken with this nitpick but isn't there a unit mismatch in "...just 20 watts—the same amount of electricity that powers two LED lightbulbs for 24 hours..."?
rcoveson · 13h ago
Just 20 watts, the same amount of electricity that powers 2 LED lightbulbs for 24 hours, one nanosecond, or twelve-thousand years.


DavidVoid · 12h ago
There is indeed; watts aren't energy, and it's a common enough mistake that Technology Connections made a pretty good 52-minute video about it the other month [1].

[1]: https://www.youtube.com/watch?v=OOK5xkFijPc
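
(For reference, a minimal back-of-the-envelope sketch of the unit mismatch being discussed; the ~10 W per LED bulb figure is an assumption for illustration only.)

    # Power (watts) is a rate; energy is power integrated over time (watt-hours).
    brain_power_w = 20.0      # the article's figure for the brain: a rate, not an amount
    led_bulb_power_w = 10.0   # assumed typical LED bulb draw
    hours = 24.0

    # "20 watts for 24 hours" is an energy figure, not a power figure:
    brain_energy_wh = brain_power_w * hours               # 480 Wh = 0.48 kWh
    two_bulbs_energy_wh = 2 * led_bulb_power_w * hours    # also 480 Wh

    # The comparison only works once a time window is attached; without one,
    # "20 watts" already equals two 10 W bulbs for any duration whatsoever.
    print(brain_energy_wh, two_bulbs_energy_wh)           # 480.0 480.0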

quantum_state · 7h ago
Surprising that the article was not reviewed carefully enough to ensure accurate use of basic physics concepts ... from LANL!
kokanee · 13h ago
Philosophical thought: if the aim of this field is to create an artificial human brain, then it would be fair to say that the more advanced the field becomes, the less difference there is between the artificial brain and a real brain. This raises two questions:

1) Is the ultimate form of this technology ethically distinguishable from a slave?

2) Is there an ethical difference between bioengineering an actual human brain for computing purposes, versus constructing a digital version that is functionally identical?

layer8 · 12h ago
For most applications, we don’t want “functionally identical”. We do not want it to have its own desires and a will of its own, biological(-analogous) needs, a circadian rhythm, fatigue and a need for sleep, mood changes and emotional swings, pain, a sexual drive, a need for recognition and validation, and so on. So we don’t want to copy the neural and bodily correlates that give rise to those phenomena, which arguably are not essential to how the human brain manages to have the intelligence it has. That is likely to drastically change the ethics of it. We will have to learn more about how those things work in the brain to avoid the undesirables.
kokanee · 11h ago
If we back away from philosophy and think like engineers, I think you're entirely right and the question should be moot. I can't help but think, though, that in spite of it all, the Elon Musks and Sam Altmans of the future will not be stopped from attempting to create something indistinguishable from flesh and blood.
tough · 11h ago
I mean have you watched Westworld?
falcor84 · 12h ago
In my opinion, one of the best works of fiction exploring this is qntm's "Lena" - https://qntm.org/mmacevedo
dlivingston · 11h ago
To 1) and 2), assuming a digital consciousness capable of self-awareness and introspection, I think the answer is clearly 'no'.

But:

> it would be fair to say that the more advanced the field becomes, the less difference there is between the artificial brain and a real brain.

I don't think it would be fair to say this. LLMs are certainly not worthy of ethical considerations. Consciousness needs to be demonstrable. Even if the synaptic structure of the digital vs. human brain approaches 1:1 similarity, the program running on it does not deserve ethical consideration unless and until consciousness can be demonstrated as an emergent property.

energy123 · 12h ago
We should start by disambiguating intelligence and qualia. The field is trying to create intelligence, and kind of assuming that qualia won't be created alongside it.
falcor84 · 12h ago
How would you go about disambiguating them? Isn't that literally the "hard problem of consciousness" [0]?

[0] https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

feoren · 12h ago
"Qualia" is a meaningless term made up so that philosophers can keep publishing meaningless papers. It's completely unfalsifiable: there is no test you can even theoretically run to determine the existence or nonexistence of qualia. There's never a reason to concern yourself with it.
drdeca · 12h ago
The test I use to determine that there exist qualia is “looking”. Now, whether there is a test I can do to confirm that anything(/anyone) other than me experiences any is another question. (I don’t see how there could be such a test, but perhaps I just don’t see it.)

So, probably not really falsifiable in the sense you are considering, yeah.

I don’t think that makes it meaningless, nor a worthless idea. It probably makes it not a scientific idea?

If you care about subjective experiences, it seems to make sense that you would then concern yourself with subjective experiences.

For the great lookup table Blockhead, whose memory banks take up a galaxy’s worth of space, storing a lookup table of responses for any possible partial conversation history with it, should we value not “hurting its feelings”? If not, why not? It responds just like how a person in an online one-on-one chat would.

Is “Is this [points at something] a moral patient?” a question amenable to scientific study? It doesn’t seem like it to me. How would you falsify answers of “yes” or “no”? But, I refuse to reject the question as “meaningless”.

layer8 · 12h ago
The term has some validity as a word for what I take to be the inner perception of processes within the brain. The qualia of a scent, for example, can be taken to refer to the inner processing of scent perception giving rise to a secondary perception of that processing (or other side effects of that processing, like evoking associated memories). I strongly suspect that that’s what’s actually going on when people talk about what it feels like to see red, and the like.
balamatom · 8h ago
Except that philosophers can keep publishing meaningless papers regardless.
lo_zamoyski · 10h ago
Drinking from the eliminativist hose, are we?

You can't be serious. Whatever one wishes to say about the framing, you cannot deny conscious experience. Materialism painted itself into this corner through its bad assumptions. Pretending it hasn't produced this problem for itself, that it doesn't exist, is just plain silly.

Time to show some intellectual integrity and revisit those assumptions.

thinkingtoilet · 12h ago
I am certain this answer will change as generations pass. The current generation (us) will say that there is a difference. Once a generation of kids grows up with AI assistants/friends/partners/etc., they will have a different view. They will demand rights and protections for their AI.
russdill · 13h ago
Disagree. It would be like saying that the more advanced transportation becomes, the more like a horse it will be.
thechao · 13h ago
Shining-brass 25 ton, coal-powered, steam-driven autohorse! 8 legs! Tireless! Breathes fire!
antithesizer · 12h ago
*shower thought
ge96 · 12h ago
3) Can we use a dead person's brain, hook up wires and oxygen to it? Why not?
lo_zamoyski · 11h ago
The burden of proof is to show that there is any real or substantive similarity between the two beyond some superficial comparisons and numbers. If you can't provide that, then you can't answer those questions meaningfully.

(Frankly, this is all a category mistake. Human minds possess intentionality. They possess semantic apprehension. Computers are, by definition, abstract mathematical models that are purely syntactic and formal and therefore stripped of semantic content and intentionality. That is exactly what allows computation to be 'physically realizable' or 'mechanized', whether the simulating implementation is mechanical or electrical or whatever. There's a good deal of ignorant and wishy-washy magical thinking in this space that seems to draw hastily from superficial associations like "both (modern) computers and brains involve electrical phenomena" or "computers (appear to) calculate, and so do human beings", and so on.)

geeunits · 13h ago
I've been building a 'neuromorphic' kernel/bare-metal OS that operates on Mac hardware using APL primitives as its core layer. Time is considered another 'position', and the kernel itself is vector-oriented, using 4D addressing with a 32x32x32 'neural substrate'.

I am so ready and eager for a paradigm shift of hardware & software. I think in the future 'software' will disappear for most people, and they'll simply ask and receive.
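
(Purely as an illustration, not the poster's actual design: a minimal sketch of what 4D addressing into a 32x32x32 substrate could look like, with time treated as just another coordinate. The layout below is an assumption.)

    SIZE = 32  # one 32x32x32 "substrate" block per time step

    def flat_address(t: int, x: int, y: int, z: int) -> int:
        # Map a 4D (time, x, y, z) coordinate to a linear address.
        # Time is treated as just another position: each tick selects a
        # full 32x32x32 block, and (x, y, z) index a cell within it.
        assert 0 <= x < SIZE and 0 <= y < SIZE and 0 <= z < SIZE
        return ((t * SIZE + x) * SIZE + y) * SIZE + z

    print(flat_address(2, 3, 5, 7))  # cell (3, 5, 7) at tick 2 -> 68775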

JimmyBuckets · 11h ago
I'd love to read more about this. Do you have a blog?
stefanv · 13h ago
And still no mention of Numenta… I’ve always felt it’s an underrated company, built on an even more underrated theory of intelligence
kadushka · 4h ago
They pivoted to regular deep learning when Jeff stepped away from the company several years ago. It does not appear they're doing much brain modeling these days. Their last publication was 3 years ago.
esafak · 12h ago
I want them to succeed but it's been two decades already. Maybe they should have started with a less challenging problem to grow the company?
meindnoch · 12h ago
They will be right on time when the first Mill CPU arrives!
Footpost · 13h ago
Neuromorphic computation has been hyped up for ~20 years by now. So far it has dramatically underperformed, at least vis-a-vis the hype.

The article does not distinguish between training and inference. Google Edge TPUs (https://coral.ai/products/) are each capable of performing 4 trillion operations per second (4 TOPS) using 2 watts of power—that's 2 TOPS per watt. So inference already fits well within the 20-watt budget the paper attributes to the brain. To be sure, LLM training is expensive, but so is raising a child for 20 years. Unlike the child, LLMs can share weights and amortise the energy cost of training.
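
(A minimal sketch of the arithmetic, using only the figures cited above; whether raw TOPS within a 20 W budget is comparable to what the brain does is a separate question.)

    # Figures from the comment above plus the article's 20 W brain budget.
    edge_tpu_tops = 4.0    # trillion ops/sec per Edge TPU
    edge_tpu_watts = 2.0   # power draw per Edge TPU
    brain_watts = 20.0     # power the article attributes to the brain

    efficiency = edge_tpu_tops / edge_tpu_watts                    # 2.0 TOPS per watt
    tpus_in_brain_budget = brain_watts / edge_tpu_watts            # 10 Edge TPUs
    tops_in_brain_budget = tpus_in_brain_budget * edge_tpu_tops    # 40 TOPS within 20 W

    print(efficiency, tpus_in_brain_budget, tops_in_brain_budget)  # 2.0 10.0 40.0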

Another core problem with neuromorphic computation is that we currently have no meaningful idea how the brain produces intelligence, so it seems a bit premature to claim we can copy this mechanism. Here is what Nvidia Chief Scientist B. Dally (one of the main developers of modern GPU architectures) says about the subject: "I keep getting those calls from those people who claim they are doing neuromorphic computing and they claim there is something magical about it because it's the way that the brain works ... but it's truly more like building an airplane by putting feathers on it and flapping with the wings!" From the "Hardware for Deep Learning" HotChips 2023 keynote, https://www.youtube.com/watch?v=rsxCZAE8QNA at 21:28. The whole talk is brilliant and worth watching.

ge96 · 13h ago
Just searched HN; it seems this term is at least 8 years old.
lukeinator42 · 13h ago
The term neuromorphic? It was coined in 1990: https://ieeexplore.ieee.org/abstract/document/58356
newfocogi · 13h ago
Once again, I am quite surprised by the sudden uptick of AI content on HN coming out of LANL. Does anyone know if it's just getting posted to HN and staying on the front page suddenly, or is this a change in strategy for the lab? Even so, I don't see the other NatLabs showing up like this.
fintler · 12h ago
Probably because they're hosting an exascale-class cluster with a bazillion GH200s. Also, they launched a new "National Security AI Office".
gyrovagueGeist · 13h ago
I am not sure why HN has mostly LANL posts. Otherwise, though, it is a combination of things. Machine learning applications for NatSec & fundamental research have become more important (see FASST, proposed last year), the current political environment makes AI funding and applications more secure and easier to chase, and some of this is work that has already been going on but is getting greater publicity for both of those reasons.
ivattano · 12h ago
The primary pool of money for DOE labs is a program called "Frontiers in Artificial Intelligence for Science, Security and Technology (FASST)," which is replacing the Exascale Computing Project. Compared to other labs, LANL historically has not had many dedicated ML/AI groups, but it has recently spun up an entire branch to help secure as much of that FASST money as possible.
CamperBob2 · 13h ago
I imagine the mood at the national labs right now is pretty panicky. They will be looking to get involved with more real-world applications than they traditionally have been, and will also want to appear more engaged with trendy technologies.
random3 · 10h ago
memristors are back