SpikingBrain 7B – More efficient than classic LLMs

10 points by somethingsome · 9/14/2025, 5:49:42 AM · github.com ↗

Comments (5)

cpldcpu · 52m ago
>The current implementation adopts pseudo-spiking, where activations are approximated as spike-like signals at the tensor level, rather than true asynchronous event-driven spiking on neuromorphic hardware.

Isn't that in essence very similar to Quantization Aware Training (QaT)?
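To illustrate the comparison: in QAT the forward pass "fake-quantizes" activations to a small discrete set, and tensor-level pseudo-spiking likewise collapses continuous activations into discrete spike counts. A toy sketch (the thresholds and scaling here are illustrative, not SpikingBrain's actual scheme):

```python
import numpy as np

def fake_quantize(x, levels=4):
    # QAT-style fake quantization: snap activations to `levels` discrete
    # values in the forward pass (backward would use a straight-through
    # estimator during training).
    scale = (levels - 1) / max(x.max(), 1e-8)
    return np.round(x * scale) / scale

def pseudo_spike(x, threshold=0.25):
    # Tensor-level "pseudo-spiking": activations become integer spike
    # counts (threshold crossings), then are rescaled to activation units.
    counts = np.floor(np.clip(x, 0, None) / threshold)
    return counts * threshold

x = np.array([0.05, 0.3, 0.55, 0.9])
print(fake_quantize(x))  # [0.  0.3 0.6 0.9]
print(pseudo_spike(x))   # [0.   0.25 0.5  0.75]
```

Both maps send continuous activations onto a small discrete grid, which is the sense in which pseudo-spiking resembles QAT.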

spwa4 · 4m ago
Can you explain more? Why would that be the case? What is being passed from one layer to the next is not a linear value but the delay until the next spike, which is very different.
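For concreteness, the delay-based scheme described here is time-to-first-spike (latency) coding, where a larger input spikes earlier. A toy sketch (this is a generic latency code, not necessarily SpikingBrain's encoding):

```python
import numpy as np

def latency_encode(x, t_max=10.0, eps=1e-6):
    # Time-to-first-spike coding: the value is carried by the spike delay,
    # not by an amplitude. Larger activation -> earlier spike.
    x = np.clip(x, eps, 1.0)
    return t_max * (1.0 - x)

def latency_decode(t, t_max=10.0):
    # Recover the activation from the spike time.
    return 1.0 - t / t_max

x = np.array([0.1, 0.5, 0.9])
t = latency_encode(x)
print(t)  # [9. 5. 1.] -- the strongest input spikes first
```

This is what makes true event-driven spiking different from passing a dense tensor of values between layers.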
asdfasdf1 · 1h ago
SpikingBrain Technical Report: Spiking Brain-inspired Large Models https://arxiv.org/abs/2509.05276
bob1029 · 37m ago
cpldcpu · 18m ago
Well, it would still allow deploying the trained model to SNN hardware, if such hardware existed.