There is huge pressure to prove and scale radical alternative compute paradigms, such as memory-centric compute (memristors, for example) or spiking neural networks (SNNs). That's why I'm surprised we don't hear more about very large speculative investments in these directions aimed at dramatically multiplying AI compute efficiency.
But one has to imagine that seeing so many huge datacenters go up, while being unable to run large training runs themselves, is motivating a lot of researchers to try things that are really different. At least I hope so.
It seems pretty short-sighted that funding for memristor startups (for example) has been so low so far.
Anyway, assuming that within the next several years radically different AI hardware and architecture paradigms pay off in efficiency gains, the current situation will change: fully human-level AI will be commoditized, and training will be well within the reach of small companies.
I think we should anticipate this, given the pressing need to increase efficiency dramatically, the number of existing research programs, the overall level of investment in AI, and a history of computation full of dramatic paradigm shifts.
So anyway, I think "the rest of us" should be banding together and making much larger bets on proving and scaling radical new AI hardware paradigms.
thekoma · 6m ago
Even in that scenario, what would stop the likes of OpenAI from throwing $50M+ a day at the new way of doing things and still outcompeting the smaller fry?
sidewndr46 · 14m ago
I think a pretty good chunk of HP's history explains why memristors don't get used in a commercial capacity.
michelpp · 10m ago
Not sure why this is being downvoted; it's a thoughtful comment. I too see this crisis as an opportunity to push past current architectures. Sparse models, for example, show a lot of promise and more closely track real biological systems: the human brain has an estimated graph density of 0.0001 to 0.001. Advances in sparse computing libraries and new hardware architectures could be key to achieving this kind of efficiency.
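To make the density figure concrete, here's a rough back-of-the-envelope sketch (the node and edge counts below are made-up toy values, not actual brain statistics) showing what a density in that 0.0001–0.001 range implies for dense vs. sparse storage:

```python
def graph_density(num_nodes: int, num_edges: int) -> float:
    """Density of a directed graph without self-loops:
    edges present divided by edges possible."""
    possible = num_nodes * (num_nodes - 1)
    return num_edges / possible

# Toy network: 10k nodes, ~5 outgoing connections per node on average.
n = 10_000
edges = 50_000
print(f"density = {graph_density(n, edges):.6f}")  # ~0.0005

# At that density, a dense n x n float32 weight matrix is >99.9% zeros;
# a sparse format stores only the ~50k nonzeros (value + column index each).
dense_bytes = n * n * 4
sparse_bytes = edges * (4 + 4)
print(f"dense: {dense_bytes / 1e6:.0f} MB, sparse: ~{sparse_bytes / 1e6:.1f} MB")
```

The three-orders-of-magnitude gap between the two representations is the kind of win that sparse libraries and sparsity-native hardware are chasing.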
42lux · 2m ago
We haven't seen a proper NPU yet, and we're only now seeing the launch of the first consumer-grade unified architectures from Nvidia and AMD. The battle of homebrew AI hasn't even started.
madars · 24m ago
The blog kept redirecting to the home page after a second, so here's an archive: https://archive.is/SE78v
latchkey · 16m ago
Not a fan of fear-based marketing: "The whole world is too big and expensive for you to participate in, so use our service instead."
I'd rather approach these things from the PoV of: "We use distillation to solve your problems today"
The last sentence kind of says it all: "If you have 30k+/mo in model spend, we'd love to chat."