There is huge pressure to prove and scale radical alternative paradigms: memory-centric compute such as memristors, spiking neural networks (SNNs), and so on. That's why I'm surprised we don't hear more about very large speculative investments in these directions aimed at dramatically multiplying AI compute efficiency.
But one has to imagine that seeing so many huge datacenters go up and not being able to do training runs etc. is motivating a lot of researchers to try things that are really different. At least I hope so.
It seems pretty short-sighted that funding for memristor startups (for example) has been so low so far.
Anyway, assuming that radically different AI hardware and architecture paradigms pay off in efficiency gains within the next several years, the current situation will change. Fully human-level AI will be commoditized, and training will be well within the reach of small companies.
I think we should anticipate this, given the strong need to increase efficiency dramatically, the number of existing research programs, the amount of investment in AI overall, and a history of computation full of dramatic paradigm shifts.
So anyway, I think "the rest of us" should be banding together and making much larger bets on proving and scaling radical new AI hardware paradigms.
marcosdumay · 4m ago
Memristors in particular just won't happen.
But memory-centric compute didn't happen because of Moore's law. (SNNs have the added problem that we don't actually know how to use them.) Now that Moore's law is gone, it may have a chance, but it still takes a large amount of money thrown at the idea, and the people with money are so risk-averse that they create entire new risks for themselves.
Feed-forward neural networks were very lucky that there was already a mainstream use for the kind of hardware they needed.
Even in that scenario, what would stop the likes of OpenAI from throwing $50M+ a day at the new way of doing things and still outcompeting the smaller fry?
sidewndr46 · 35m ago
I think a pretty good chunk of HP's history explains why memristors don't get used in a commercial capacity.
ofrzeta · 22m ago
You remember The Machine? I had a vague memory of it, but I had to look it up.
michelpp · 32m ago
Not sure why this is being downvoted, it's a thoughtful comment. I too see this crisis as an opportunity to push boundaries past current architectures. Sparse models for example show a lot of promise and more closely track real biological systems. The human brain has an estimated graph density of 0.0001 to 0.001. Advances in sparse computing libraries and new hardware architectures could be key to achieving this kind of efficiency.
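For a concrete sense of what that density range implies, here is a minimal sketch using scipy.sparse (the layer size and density are made-up illustration values, not figures from the comment):

    import numpy as np
    from scipy import sparse

    # Hypothetical "layer" with brain-like connection density (illustrative only).
    n = 10_000          # units on each side
    density = 0.001     # fraction of possible connections actually present

    # Random sparse weight matrix in CSR form; only ~n*n*density nonzeros are stored.
    w = sparse.random(n, n, density=density, format="csr",
                      dtype=np.float32, random_state=42)
    x = np.random.default_rng(0).standard_normal(n).astype(np.float32)

    y = w @ x  # sparse matrix-vector product only touches the stored weights

    dense_bytes = n * n * 4
    sparse_bytes = w.data.nbytes + w.indices.nbytes + w.indptr.nbytes
    print(f"nonzeros: {w.nnz:,} of {n*n:,} possible connections")
    print(f"storage: {sparse_bytes/1e6:.1f} MB sparse vs {dense_bytes/1e6:.0f} MB dense")

At 0.1% density, both the arithmetic and the memory traffic drop by roughly 1000x relative to a dense layer; the open question is hardware that can exploit the irregular access pattern.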
lazide · 15m ago
Memristors have been tried for literally decades.
If the poster's other guesses pay out at the same rate, this will likely never play out.
42lux · 23m ago
We haven't seen a proper NPU yet, and we're only at the launch of the first consumer-grade unified architectures from Nvidia and AMD. The battle of homebrew AI hasn't even started.
madars · 45m ago
The blog kept redirecting to the home page after a second, so here's an archive: https://archive.is/SE78v
latchkey · 38m ago
Not a fan of fear-based marketing: "The whole world is too big and expensive for you to participate in, so use our service instead."
I'd rather approach these things from the PoV of: "We use distillation to solve your problems today"
The last sentence kind of says it all: "If you have 30k+/mo in model spend, we'd love to chat."
This already exists: https://www.cerebras.ai/chip
They claim 44 GB of SRAM at 21 PB/s.
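For scale, a rough back-of-envelope comparison against a conventional HBM GPU (only the 44 GB / 21 PB/s figures come from the claim above; the GPU numbers below are approximate public specs assumed here for comparison):

    # Claimed Cerebras on-wafer figures (from the comment above).
    cerebras_sram_gb = 44
    cerebras_bw_pbs = 21

    # Assumed figures for a current datacenter HBM GPU: ~80 GB at ~3.35 TB/s.
    gpu_hbm_gb = 80
    gpu_bw_tbs = 3.35

    # How many times per second each device could stream its entire memory --
    # a rough ceiling for bandwidth-bound work like batch-1 token generation.
    cerebras_sweeps = cerebras_bw_pbs * 1e15 / (cerebras_sram_gb * 1e9)
    gpu_sweeps = gpu_bw_tbs * 1e12 / (gpu_hbm_gb * 1e9)

    print(f"Cerebras SRAM: ~{cerebras_sweeps:,.0f} full-memory sweeps/s")
    print(f"HBM GPU:       ~{gpu_sweeps:,.0f} full-memory sweeps/s")

The caveat, of course, is that 44 GB of SRAM is small relative to the weights of today's largest models, so the bandwidth advantage only applies to what fits on the wafer.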