Convolutions, Polynomials and Flipped Kernels

96 points by mfrw · 5/21/2025 · eli.thegreenplace.net · 34 comments

nayuki · 1m ago
You can also multiply polynomials by way of analogy with integer multiplication:

         3  1  2  1
       ×    2  0  6
       ------------
        18  6 12  6
      0  0  0  0
   6  2  4  2
  -----------------
   6  2 22  8 12  6
= 6x^5 + 2x^4 + 22x^3 + 8x^2 + 12x^1 + 6x^0.
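
As a quick sanity check (a minimal sketch, assuming numpy is available): multiplying polynomials is exactly a full convolution of their coefficient lists, so np.convolve reproduces the long-multiplication result above.

    import numpy as np

    # Coefficients, highest power first: 3x^3 + x^2 + 2x + 1 and 2x^2 + 0x + 6
    p = [3, 1, 2, 1]
    q = [2, 0, 6]

    # Polynomial multiplication is a full convolution of the coefficients
    print(np.convolve(p, q))  # [ 6  2 22  8 12  6], i.e. 6x^5 + 2x^4 + 22x^3 + 8x^2 + 12x + 6
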
stared · 3h ago
Beware - one step more and you get into the territory of generating functions. I recommend Herbert Wilf's book with the wonderful name of Generatingfunctionology (https://www2.math.upenn.edu/~wilf/gfology2.pdf).
esafak · 29m ago
I tip my hat to the person who invented that.
eliben · 3h ago
Indeed, generating functions are mentioned in a footnote :) Very interesting topic
stared · 3h ago
Saw that!

Sometimes it makes things simpler (quite a lot of things in combinatorics), other times it is a tool for nice tricks (I have no idea how I would have solved these equations if it were not for generating functions; see the appendix of a Mafia game paper, https://arxiv.org/abs/1009.1031).

srean · 2h ago
Ooh! Lovely. Thank you.

Generating functions and Z-transforms are indispensable in probability theory, physics, signal processing, and now, it seems, for a good round of Mafia while camping with friends.

incognito124 · 7h ago
My favourite use case for this: by the same derivation as this blog, one can prove that if you have any two independent random variables X and Y (their distributions can be different), the distribution of X+Y is the convolution of the PMFs/PDFs of X and Y.
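
A minimal sketch of that fact with two fair dice (my own toy example, assuming numpy): convolving the two PMFs gives the PMF of the sum.

    import numpy as np

    # PMFs of two independent fair dice over the outcomes 1..6
    pmf_x = np.full(6, 1 / 6)
    pmf_y = np.full(6, 1 / 6)

    # The PMF of X + Y over the sums 2..12 is the convolution of the two PMFs
    pmf_sum = np.convolve(pmf_x, pmf_y)
    print(pmf_sum[7 - 2])  # P(X + Y = 7) = 6/36 ≈ 0.1667
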
srean · 6h ago
On similar lines, the MAX operator on random variables becomes the PRODUCT operator on their distributions.

It's fun to play with the (Max, +) algebra of random variables and infer its distribution.

This turns out to be quite useful in estimating the completion time of dependent parallel jobs.

Spawning multiple parallel jobs becomes a Max operation and chaining sequential jobs becomes a '+' operation on the completion times. This expression tree of Max'es and Plus'es can be algebraically processed to obtain bounds on the completion time distribution.
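
A toy sketch of what that algebra can look like in code (my own example, with completion times discretized onto a common time grid and all jobs assumed independent): chaining jobs convolves their PMFs, running them in parallel multiplies their CDFs.

    import numpy as np

    # Toy completion-time PMFs of three jobs on a grid of 0, 1, 2, ... time units
    a = np.array([0.0, 0.5, 0.3, 0.2])
    b = np.array([0.0, 0.2, 0.5, 0.3])
    c = np.array([0.0, 0.6, 0.3, 0.1])

    def seq(p, q):
        """A then B: completion times add, so the PMFs convolve."""
        return np.convolve(p, q)

    def par(p, q):
        """A and B in parallel: completion time is the max, so the CDFs multiply
        (assuming the two branches are independent)."""
        n = max(len(p), len(q))
        P = np.cumsum(np.pad(p, (0, n - len(p))))
        Q = np.cumsum(np.pad(q, (0, n - len(q))))
        return np.diff(P * Q, prepend=0.0)  # back to a PMF

    # (A then B) in parallel with C, i.e. max(A + B, C)
    total = par(seq(a, b), c)
    print(total @ np.arange(len(total)))  # expected completion time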

One example is the straggler problem in mapreduce/Hadoop. In the naive case, the completion time is the max over the completion times of the parallel subtasks. (*)

If the tasks have a heavy tail, which sometimes they do, the straggler's completion time can be really bad. This can be mitigated by a k-out-of-n setup, where you encode the problem in such a way that only k out of n jobs need to finish to obtain the final result. One can play with this trade-off between potentially wasted computation and expected completion time.
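
To make the trade-off concrete, a small Monte Carlo sketch (my own numbers, assuming i.i.d. heavy-tailed task times): the completion time of a k-out-of-n scheme is the k-th order statistic of the task times.

    import numpy as np

    rng = np.random.default_rng(0)

    def completion_time(n, k, trials=100_000):
        """Time until k of n i.i.d. heavy-tailed tasks finish (the k-th order statistic)."""
        samples = rng.pareto(2.0, size=(trials, n)) + 1.0  # Pareto task times, shape 2
        return np.sort(samples, axis=1)[:, k - 1]

    naive = completion_time(n=10, k=10)  # wait for every task: the straggler problem
    coded = completion_time(n=12, k=10)  # two redundant tasks, only 10 need to finish
    print(np.percentile(naive, 99), np.percentile(coded, 99))  # coded tail is typically much shorter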

For heavy-tailed distributions another simplification is possible: the tails of Max and + become of the same order, so one can switch between convolutions and products.

(*) This shows up in microservices architectures also. The owner/maintainer of a microservice endpoint might be very happy with its tail latency. However, an end user who consumes and composes the results of many endpoints can experience really bad tail latency for the final output.

deepsun · 35m ago
Almost every probability theorem starts with "let's take independent random variables". But in reality almost nothing is independent. Superdeterminism even claims that exactly nothing is independent.
srean · 6m ago
You are right.

The degree of dependence matters though. Mutual information is one way to measure that.

Thankfully, some theorems remain valid even when independence is violated. The next stop after independence is martingale criteria. Martingale difference sequences can be quite strongly dependent yet allow some of the usual theorems to go through but with worse convergence rates.

incognito124 · 6h ago
Thanks for sharing the name of that problem! I've encountered it before while optimizing batched LLM inference. The whole batch would last until all queries in a batch were done, and by changing the batch size, you'd trade off per-query-speed (better in a larger batch) with overall performance (worse with a larger batch).

Nowadays I think this is solved in an entirely different way, though.

srean · 2h ago
It gets more entertaining.

It's common to wrap API calls with

retry on failure, or

spawn an identical request if taking longer than x, or

recursively spawn an identical request if taking longer than x, or

retry on failure but no more than k times.

All of these and similar patterns/decorators can be analysed using the same idea.
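
As one concrete instance (my own sketch, on a discretized latency grid with independent attempts): "spawn an identical request if taking longer than x" makes the overall latency min(T1, x + T2), and for a min the survival functions (1 - CDF) multiply.

    import numpy as np

    # Toy latency PMF of a single request on a grid of 0, 1, 2, ... time units
    p = np.array([0.0, 0.5, 0.2, 0.1, 0.1, 0.1])

    def hedge(p, delay):
        """Fire an identical request after `delay`; latency becomes min(T1, delay + T2).
        For a min of independent variables, the survival functions multiply."""
        q = np.concatenate([np.zeros(delay), p])    # delayed copy of the latency PMF
        s1 = 1.0 - np.cumsum(np.pad(p, (0, len(q) - len(p))))
        s2 = 1.0 - np.cumsum(q)
        return np.diff(1.0 - s1 * s2, prepend=0.0)  # back to a PMF

    hedged = hedge(p, delay=2)
    print(p @ np.arange(len(p)), hedged @ np.arange(len(hedged)))  # mean latency: plain vs hedged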

incognito124 · 1h ago
Oh wow, pretty cool stuff! If you have more to share, you can always dump it in my mail inbox
silloncito · 1h ago
You should be careful with your estimation. The events should be independent to apply those properties, but it is very common that one cause influences many factors, so they are not independent and all the beautiful math does not work as it does under independence. On the worst day, everything fails, because one resource can block others and the system gets strangled. There is the Black Swan book, where one rare event made the financial market realize what risk is.
srean · 1h ago
A load-balancer transparently sitting in front of the API endpoint (not an uncommon scenario) usually decouples things well enough to be practically independent.

That said, silloncito's warning does need to be heeded.

ttoinou · 4h ago
What? I've never heard of this math. Do you mean those are literally the resulting operations in the general case, or are those approximate explanations and we need to find more specific cases to make this true?
srean · 4h ago
The math is general and exact.

Max and plus in the space of random variables become product and convolution in the space of their distribution functions.

    Distr(X + Y) = Distr(X) ° Distr(Y)

    Distr(X ^ Y) = Distr(X) * Distr(Y)

where '^' denotes Max, '°' denotes convolution, and '*' denotes pointwise product.

Note that *, +, ° and ^ are commutative and associative, so they can be chained. One can also use their distributive properties. This is really the math of groups and rings.

However, one can and one does resort to approximations to compute the desired end results.

More specifically, people are often interested not in the distribution itself but in some statistics: for example, the mean, standard deviation, some tail percentile, etc. To compute those stats from the exact distributions, approximations can be employed.

gjm11 · 1h ago
Surely this isn't quite right.

Max of variables = product of cumulative distribution functions.

Sum of variables = convolution of probability density functions.

So both of the equations you write down are correct, but only if you interpret "Distr" as meaning different things in the two cases.

[EDITED to add:] Provided the random variables in question are independent, as mentioned elsewhere in the discussion; if they aren't then none of this works.

srean · 1h ago
The original post, to which I replied, is about the correspondence between summation of random variables and convolution of their distribution. Independence is sufficient for that.

I just carried through that assumption of independence in my own comment, thinking it was obvious to do that (carry over the assumptions).

pizza · 1h ago
Check out tropical algebras
silloncito · 2h ago
Let M = Max(X, Y). If X and Y are independent then

    F_M(k) = P(M <= k) = P(X <= k and Y <= k) = P(X <= k) * P(Y <= k) = F_X(k) * F_Y(k),

so F_M = F_X * F_Y.
srean · 2h ago
Thanks for making the assumption of independence explicit and welcome to HN.
silloncito · 1h ago
Thank you for your welcome, I must have been lurking here for around 30 years or more (always changing accounts). Anyway, in this specific case, since M = Max(X,X) = X, you can't have F(M) = F(X)*F(X) = F(X) except when F(X) is in {0,1}, so the independence property is essential. Welcome, fellow Lisper (judging by the txr and related submissions), math-inspired (this one and another related to statistical estimation), with OS-related interests (your HN account); OSes are not my cup of tea, but awk is not bad.

In another post there are some comments about topology and deep learning. I wonder if there is a notion similar to dimension in topology which would allow you to estimate the minimal size (number of parameters) of a neural network so that it is able to achieve a certain state (for example, the capacity for one-shot learning with high probability).
srean · 1h ago
Yes, independence is absolutely an assumption that I (implicitly) made. It's essential for the convolution identity to hold as well; I just carried through that assumption.

We share an interest in AWK (*) then :) I don't know OSes at all. Did you imply I know Lisp? I enjoy Scheme, but have never used it in anger. Big fan of The Little Schemer series of books.

(*) Have to find that Weinberger face Google-NY t-shirt. Little treasures.

Regarding your dimensions comment, this is well understood for a single layer, that is, for logistic regression. Lehmann's book will have the necessary material. With multiple layers it gets complicated real fast.

The best performance estimates, as in, within the realm of being practically useful, largely come from two approaches: one from PAC-Bayesian bounds, the other from statistical physics (but these bounds are data-distribution dependent). The intrinsic dimension of the data plays a fundamental role there.

The recommended place to dig around is JMLR (journal of machine learning research).

silloncito · 20m ago
Perhaps your txr submission suggests a Lisp flavor. The intrinsic dimension concept looks interesting, also the VC dimension, but both concepts are very general. Perhaps Lehmann's book is Elements of Large Sample Theory.
srean · 1m ago
I meant Lehmann's Theory of Point Estimation, but Elements of Large Sample Theory is a good book too. The newer editions are a tad hefty in number of pages. The earlier versions would serve you fine.

The generic idea is that the smaller these dimensions, the easier the prediction problem. Intrinsic dimension is the one that comes closest to topology. VC is very combinatorial.

whatshisface · 3h ago
That doesn't sound right. If P(X) is the vector {0.5,0,0.5} and P(Y) is {0.5,0.5,0}, P(X)P(Y) is {0.25,0,0} and that's both not normalized and clearly not the distribution for max(X,Y). Did you get that from an LLM?
srean · 3h ago
You are using PMFs. I meant and wrote distribution function aka cumulative distribution function. They are closed under products.

> Did you get it from LLM

LOL. There must be a fun and guilty story lurking inside the accusation.

On a more serious note, I would love it if LLMs could do such simplifications and estimations on their own.

whatshisface · 3h ago
Distributions can be either PDFs or CDFs. To be honest I'd never heard of assuming that a distribution was a CDF unless otherwise specified.
srean · 3h ago
May I raise you a

https://en.m.wikipedia.org/wiki/Distribution_function

It's right in the title.

In probability theory, integration theory, as well as electrical engineering, "distribution function", unless further clarified, means that cumulative thing.

In math, nomenclature overloading can be a problem, so context matters. In the context of the Dirac delta, distribution means something else entirely -- generalized functions.

Oh! So sad you deleted your point about densities. One can only laugh at and enjoy these idiosyncrasies of nomenclature.

In Electrical Engineering one uses j for imaginary numbers because i is taken (by current).

jerf · 2h ago
3blue1brown has a walkthrough of this: https://www.youtube.com/watch?v=IaSGqQa5O-M
Sourabhsss1 · 11h ago
The visualizations make the concept easy to grasp.
bjt12345 · 9h ago
This and complex analysis are fascinating topics in undergraduate studies.
esafak · 27m ago
Contour integrals still feel cool.