UGMM-NN: Univariate Gaussian Mixture Model Neural Network

23 points · zakeria · 6 comments · 9/10/2025, 7:23:53 PM · arxiv.org

Comments (6)

magicalhippo · 56m ago
I'm having a very dense moment, I think, and it's been far too long since my statistics courses.

They state the output of a neuron j is a log density P_j(y), where y is a latent variable.

But how does the output from the previous layer, x, come into play?

I guess I was expecting some kind of conditional probability, i.e. the output is P_j(y | x) or something.

Again, perhaps trivial. Just struggling to figure out how it works in practice.

ericdoerheit · 1h ago
Thank you for your work! I would be interested to see what this means for a CNN architecture. Maybe the whole architecture wouldn't need to be based on uGMM-NNs, only the last layers?
zakeria · 52m ago
Thanks - good question. In theory, the uGMM layer could complement CNNs in different ways - for example, one could imagine (as you mentioned):

using standard convolutional layers for feature extraction,

then replacing the final dense layers with uGMM neurons to enable probabilistic inference and uncertainty modeling on top of the learned features.

My current focus, however, is exploring how uGMMs translate into Transformer architectures, which could open up interesting possibilities for probabilistic reasoning in attention-based models.

zakeria · 3h ago
uGMM-NN is a novel neural architecture that embeds probabilistic reasoning directly into the computational units of deep networks. Unlike traditional neurons, which apply weighted sums followed by fixed nonlinearities, each uGMM-NN node parameterizes its activations as a univariate Gaussian mixture, with learnable means, variances, and mixing coefficients.
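For readers wondering, like the question above, how the inputs actually reach a uGMM neuron, here is a minimal PyTorch sketch of one plausible reading: each neuron scores its inputs under per-input Gaussian components and mixes them in log space to produce a log density. The class name `UGMMLayer` and the exact way x enters the mixture are assumptions made for illustration, not the paper's definition.

```python
# Hedged sketch of a uGMM-style layer. The parameterization (learnable means,
# variances, mixing coefficients per neuron) follows the description above;
# how the previous layer's output x enters the mixture is a guess.
import math
import torch
import torch.nn as nn

class UGMMLayer(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # One univariate Gaussian component per (neuron, input) pair.
        self.means = nn.Parameter(torch.randn(out_features, in_features))
        self.log_vars = nn.Parameter(torch.zeros(out_features, in_features))
        self.mix_logits = nn.Parameter(torch.zeros(out_features, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) -> log densities: (batch, out_features)
        x = x.unsqueeze(1)                                   # (batch, 1, in)
        var = self.log_vars.exp()
        log_norm = -0.5 * (math.log(2 * math.pi) + self.log_vars)
        log_prob = log_norm - 0.5 * (x - self.means) ** 2 / var
        log_mix = torch.log_softmax(self.mix_logits, dim=-1)
        # Mixture in log space: log sum_i pi_ji * N(x_i; mu_ji, sigma_ji^2)
        return torch.logsumexp(log_mix + log_prob, dim=-1)

layer = UGMMLayer(in_features=4, out_features=8)
out = layer(torch.randn(32, 4))                              # (32, 8) log densities
```

In this reading, each neuron's output is a log density rather than a nonlinearly squashed weighted sum, which is what allows the network to be treated probabilistically end to end.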
vessenes · 1h ago
Meh. Well, at least, possibly “meh”.

Upshot: Gaussian sampling along the parameters of nodes rather than a fixed number. This might offer one of the following:

* Better inference time accuracy on average

* Faster convergence during training

It probably costs additional inference and training compute.

The paper demonstrates worse results on MNIST, and shows the architecture is more than capable of dealing with the Iris test (which I hadn’t heard of; categorizing types of irises, I presume the flower, but maybe the eye?)

The paper claims to keep the number of parameters and depth the same, but it doesn't report:

* training time/flops (probably more, I'd guess?)

* inference time/flops (almost certainly more)

Intuitively, if you've got a mean, variance, and mix coefficient, then you have triple the data space per parameter; there's no word as to whether the networks were normalized by the total data taken up by the NN or just by the number of “parameters”.
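To make the "triple the data space" point concrete, here is a back-of-the-envelope count under the assumed reading that every connection stores a mean, a variance, and a mixing coefficient instead of a single weight; the paper's own accounting may differ.

```python
# Rough parameter accounting for one fully connected layer, under the
# "three values per connection" reading above (an assumption, not the
# paper's stated accounting).
in_features, out_features = 784, 128

standard = in_features * out_features + out_features   # weights + biases
ugmm = 3 * in_features * out_features                  # mean, variance, mix weight per connection

print(standard)  # 100480
print(ugmm)      # 301056 -> roughly 3x the stored values
```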

Upshot - I don’t think this paper demonstrates any sort of benefit here or elucidates the tradeoffs.

Quick reminder, negative results are good, too. I’d almost rather see the paper framed that way.

zakeria · 1h ago
Thanks for the comment. Just to clarify, the uGMM-NN isn't simply "Gaussian sampling along the parameters of nodes."

Each neuron is a univariate Gaussian mixture with learnable mean, variance, and mixture weights. This gives the network the ability to perform probabilistic inference natively inside its architecture, rather than approximating uncertainty after the fact.

The work isn’t framed as "replacing MLPs." The motivation is to bridge two research traditions:

- probabilistic graphical models and probabilistic circuits (relatively newer)

- deep learning architectures

That's why the Iris dataset (despite being simple) was included - not as a discriminative benchmark, but to show the model can be trained generatively in a way similar to PGMs, something a standard MLP cannot do. Hence the other benefits of the approach mentioned in the paper.
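As a rough illustration of the generative-versus-discriminative distinction being made here, the sketch below contrasts the two objectives for a model whose C outputs are interpreted as per-class log joint densities log p(x, y=c). This is an assumption made for illustration, not the paper's exact training procedure.

```python
# Generative vs. discriminative objectives, assuming the network's outputs
# are log joint densities log p(x, y=c) (as a uGMM-style network with one
# root per class might provide).
import torch

def generative_loss(log_joint: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Maximize log p(x, y): take the output for the true class.
    return -log_joint.gather(1, labels.unsqueeze(1)).mean()

def discriminative_loss(log_joint: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Maximize log p(y | x) = log p(x, y) - log sum_c p(x, c).
    log_cond = log_joint - torch.logsumexp(log_joint, dim=1, keepdim=True)
    return -log_cond.gather(1, labels.unsqueeze(1)).mean()

log_joint = torch.randn(16, 3)            # e.g. 3 Iris classes
labels = torch.randint(0, 3, (16,))
print(generative_loss(log_joint, labels), discriminative_loss(log_joint, labels))
```

A standard MLP with softmax outputs only supports the second objective; a model whose outputs are densities can, in principle, be trained with either.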