>The current implementation adopts pseudo-spiking, where activations are approximated as spike-like signals at the tensor level, rather than true asynchronous event-driven spiking on neuromorphic hardware.
Isn't that in essence very similar to Quantization-Aware Training (QAT)?
spwa4 · 8m ago
Can you explain more? Why would that be the case? What is being passed from one layer to the next is not a linear value but the delay until the next spike, which is very different.
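To illustrate the distinction being drawn here, a minimal sketch (names and parameters are mine, not from the article): QAT discretizes the *magnitude* of an activation, whereas latency coding maps the magnitude to a spike *time*, so the value lives in when the spike arrives rather than in its amplitude.

```python
import numpy as np

# QAT-style view: the activation's magnitude is rounded to a small
# set of discrete levels, but it is still a magnitude.
def quantize(x, levels=16):
    x = np.clip(x, 0.0, 1.0)
    return np.round(x * (levels - 1)) / (levels - 1)

# Latency (time-to-first-spike) view: a stronger activation fires
# earlier. The spike *time* carries the value; amplitude is binary.
def latency_encode(x, t_max=1.0):
    x = np.clip(x, 0.0, 1.0)
    return t_max * (1.0 - x)   # strong input -> early spike

def latency_decode(t, t_max=1.0):
    return 1.0 - t / t_max     # recover the magnitude from the time

x = np.array([0.1, 0.5, 0.93])
print(quantize(x))         # discretized magnitudes
print(latency_encode(x))   # spike times (earlier = stronger)
```

Both end up with a restricted signal, which is presumably why the QAT analogy comes up; the difference is in which dimension (amplitude vs. time) carries the information.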