Convolutions, Polynomials and Flipped Kernels

91 points by mfrw | 5/21/2025, 4:35:21 AM | eli.thegreenplace.net

Comments (27)

stared · 2h ago
Beware - one step more and you get into the territory of generating functions. I recommend a book by Herbert Wilf with the wonderful name Generatingfunctionology (https://www2.math.upenn.edu/~wilf/gfology2.pdf).
eliben · 2h ago
Indeed, generating functions are mentioned in a footnote :) Very interesting topic
stared · 1h ago
Saw that!

Sometimes it makes things simpler (quite a lot of things in combinatorics); other times it is a tool for nice tricks (I have no idea how I would have solved these equations if it were not for generating functions; see the appendix of a Mafia game paper, https://arxiv.org/abs/1009.1031).

srean · 1h ago
Ooh! Lovely. Thank you.

Generating functions and Z-transforms are indispensable in probability theory, physics, signal processing, and now, it seems, for a good round of Mafia while camping with friends.

incognito124 · 6h ago
My favourite use case for this: by the same derivation as in this blog post, one can prove that for any two independent random variables X and Y (they can have different distributions), the distribution of X+Y is the convolution of the PMFs/PDFs of X and Y.
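A minimal NumPy sketch of that claim, with two fair dice as an illustrative choice:

    import numpy as np

    # PMF of a fair six-sided die over the values 1..6.
    die = np.full(6, 1 / 6)

    # PMF of the sum of two independent dice: convolve the PMFs.
    # np.convolve of two length-6 PMFs has length 11, covering sums 2..12.
    pmf_sum = np.convolve(die, die)

    # Monte Carlo check against the empirical distribution of the sum.
    rng = np.random.default_rng(0)
    samples = rng.integers(1, 7, 100_000) + rng.integers(1, 7, 100_000)
    empirical = np.bincount(samples, minlength=13)[2:] / len(samples)

    print(np.max(np.abs(pmf_sum - empirical)))  # small, on the order of 1e-3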
srean · 5h ago
On similar lines, the MAX operator on random variables becomes the PRODUCT operator on their distribution functions.

It's fun to play with the (Max, +) algebra of random variables and infer its distribution.

This turns out to be quite useful in estimating the completion time of dependent parallel jobs.

Spawning multiple parallel jobs becomes a Max operation and chaining sequential jobs becomes a '+' operation on the completion times. This expression tree of Maxes and Pluses can be processed algebraically to obtain bounds on the completion-time distribution.

One example is the straggler problem in MapReduce/Hadoop. In the naive case, the completion time is the max of the completion times of the parallel subtasks. (*)

If the task completion times have a heavy tail, which they sometimes do, the straggler's completion time can be really bad. This can be mitigated by a k-out-of-n setup, where you encode the problem in such a way that only k out of n jobs need to finish to obtain the final result. One can then play with the trade-off between potentially wasted computation and expected completion time.

For heavy-tailed distributions another simplification is possible: the tails of Max and + become of the same order, so one can switch between convolutions and products.

(*) This shows up in microservices architectures as well. The owner/maintainer of a microservice endpoint might be very happy with its tail latency, yet an end user who consumes and composes the results of many endpoints can experience really bad tail latency in the final output.
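A minimal Monte Carlo sketch of the straggler effect and the k-out-of-n mitigation (the Pareto latency model and the parameter choices are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    n_jobs, n_trials = 100, 10_000

    # Heavy-tailed per-task completion times: classical Pareto, shape 2.
    times = rng.pareto(2.0, size=(n_trials, n_jobs)) + 1.0

    # Naive: wait for all n tasks, so completion is the max (straggler-bound).
    all_n = times.max(axis=1)

    # k-out-of-n: only the k fastest tasks need to finish, i.e. the k-th
    # order statistic. Here k = 90, i.e. 10% redundant computation.
    k = 90
    k_of_n = np.sort(times, axis=1)[:, k - 1]

    for name, t in [("all 100", all_n), ("90 of 100", k_of_n)]:
        print(name, "median:", np.median(t), "p99:", np.quantile(t, 0.99))

The p99 of the k-out-of-n variant comes out far smaller, since it no longer waits on the heavy tail of the slowest stragglers.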

ttoinou · 3h ago
What? I've never heard of this math. Do you mean those are literally the resulting operations in the general case, or is that an approximate explanation and we need to find more specific cases to make it true?
pizza · 1m ago
Check out tropical algebras
srean · 3h ago
The math is general and exact.

Max and Plus in the space of random variables become product and convolution in the space of their distribution functions.

    Distr(X + Y) = DistrX ° DistrY

    Distr(X ^ Y) = DistrX * DistrY

where '^' denotes Max and '°' denotes convolution.

Note that *, +, ° and ^ are commutative and associative, so they can be chained. One can also use their distributive properties. This is really the math of groups and rings.

However, one can and one does resort to approximations to compute the desired end results.

More specifically, people are often interested not in the distribution itself but in some statistic of it: the mean, the standard deviation, some tail percentile, etc. To compute those statistics from the resulting distributions, approximations can be employed.

gjm11 · 36m ago
Surely this isn't quite right.

Max of variables = product of cumulative distribution functions.

Sum of variables = convolution of probability density functions.

So both of the equations you write down are correct, but only if you interpret "Distr" as meaning different things in the two cases.

[EDITED to add:] Provided the random variables in question are independent, as mentioned elsewhere in the discussion; if they aren't, then none of this works.
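Both identities are easy to check numerically; a sketch, with exponential variables as an arbitrary choice:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.exponential(1.0, 1_000_000)  # scale 1
    y = rng.exponential(2.0, 1_000_000)  # scale 2

    t = 2.5  # an arbitrary evaluation point

    # Max of independent variables <-> product of CDFs.
    cdf_x = lambda s: 1 - np.exp(-s / 1.0)
    cdf_y = lambda s: 1 - np.exp(-s / 2.0)
    print(np.mean(np.maximum(x, y) <= t), cdf_x(t) * cdf_y(t))  # ~equal

    # Sum of independent variables <-> convolution of PDFs (on a grid).
    grid, dt = np.linspace(0.0, 50.0, 5001, retstep=True)
    pdf_x = np.exp(-grid / 1.0) / 1.0
    pdf_y = np.exp(-grid / 2.0) / 2.0
    pdf_sum = np.convolve(pdf_x, pdf_y) * dt  # density of X + Y
    cdf_sum = np.cumsum(pdf_sum) * dt         # integrate to get a CDF
    print(np.mean(x + y <= t), cdf_sum[int(t / dt)])  # ~equal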

srean · 23m ago
The original post, to which I replied, is about the correspondence between the summation of random variables and the convolution of their distributions. Independence is sufficient for that.

I just carried that assumption of independence through in my own comment, thinking it was the obvious thing to do (carry over the assumptions).

silloncito · 1h ago
Let M = Max(X, Y). If X and Y are independent, then

    F_M(k) = P(M <= k) = P((X <= k) and (Y <= k))
           = P(X <= k) * P(Y <= k)
           = F_X(k) * F_Y(k)

so F_M = F_X * F_Y.
srean · 1h ago
Thanks for making the assumption of independence explicit and welcome to HN.
silloncito · 27m ago
Thank you for the welcome; I must have been lurking here for around 30 years or more (always changing accounts). Anyway, in this specific case, since M = Max(X, X) = X, you can't have F(M) = F(X)*F(X) = F(X) except when F(X) is in {0, 1}, so the independence property is essential. Greetings, fellow Lisper (going by the txr and related submissions) and math-inspired (this one and another related to statistical estimation) with OS-related interests (your HN account); OS are not my cup of tea, but awk is not bad.

In another post there are some comments about connections between topology and deep learning. I wonder if there is a notion similar to dimension in topology that would allow you to estimate the minimal size (number of parameters) of a neural network needed to achieve a certain state (for example, the capacity for one-shot learning with high probability).
srean · 14m ago
Yes, independence is absolutely an assumption that I (implicitly) made. It's essential for the convolution identity to hold as well; I just carried that assumption through.

We share an interest in AWK (*) then :) I don't know OS at all. Did you mean to imply I know Lisp? I enjoy Scheme, but have never used it in anger. Big fan of The Little Schemer series of books.

(*) Have to find that Weinberger face Google-NY t-shirt. Little treasures.

incognito124 · 5h ago
Thanks for sharing the name of that problem! I've encountered it before while optimizing batched LLM inference. The whole batch would last until all queries in the batch were done, and by changing the batch size you'd trade off per-query speed (better in a larger batch) against overall performance (worse with a larger batch).

Nowadays I think this is solved in an entirely different way, though.

srean · 1h ago
It gets more entertaining.

It's common to wrap API calls with:

retry on failure, or

spawn an identical request if one is taking longer than x, or

recursively spawn an identical request if taking longer than x, or

retry on failure, but no more than k times.

All of these and similar patterns/decorators can be analysed using the same idea.
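For instance, the "spawn an identical request if taking longer than x" pattern makes the completion time min(T1, x + T2), and for independent requests Min turns into a product of survival functions, the mirror image of Max. A minimal sketch (the lognormal latency model and parameters are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    x = 1.0  # hedge: fire a duplicate request after 1 time unit

    # Illustrative latency model with a fat right tail: lognormal.
    t1 = rng.lognormal(0.0, 1.0, n)
    t2 = rng.lognormal(0.0, 1.0, n)

    # Hedged completion: first of the original and the delayed duplicate.
    # Analytically: P(hedged > t) = P(T1 > t) * P(T2 > t - x) for t > x.
    hedged = np.minimum(t1, x + t2)

    for name, t in [("plain", t1), ("hedged", hedged)]:
        print(name, "p50:", np.quantile(t, 0.5), "p99:", np.quantile(t, 0.99))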

incognito124 · 48m ago
Oh wow, pretty cool stuff! If you have more to share, you can always dump it in my mail inbox
silloncito · 18m ago
You should be careful with your estimates. The events should be independent for those properties to apply, but it is very common for one cause to influence many factors, in which case they are not independent and all the beautiful math no longer works as it does under independence. On the worst day everything fails at once, because one resource can block others and the system gets strangled. There is the Black Swan book, about a rare event making the financial markets realize what risk is.
srean · 9m ago
A load-balancer sitting transparently in front of the API endpoint (not an uncommon scenario) usually decouples things well enough for them to be practically independent.

That said, silloncito's warning does need to be heeded.

whatshisface · 2h ago
That doesn't sound right. If P(X) is the vector {0.5, 0, 0.5} and P(Y) is {0.5, 0.5, 0}, then P(X)P(Y) is {0.25, 0, 0}, which is both not normalized and clearly not the distribution of max(X, Y). Did you get that from an LLM?
srean · 2h ago
You are using PMFs. I meant, and wrote, distribution function, a.k.a. the cumulative distribution function. CDFs are closed under products.

> Did you get it from LLM

LOL. There must be a fun and guilty story lurking behind that accusation.

On a more serious note, I would love it if LLMs could do such simplifications and estimations on their own.

whatshisface · 2h ago
Distributions can be either PDFs or CDFs. To be honest, I'd never heard of assuming that a distribution is a CDF unless otherwise specified.
srean · 2h ago
May I raise you a

https://en.m.wikipedia.org/wiki/Distribution_function

It's right in the title.

In probability theory and integration theory, as well as in electrical engineering, "distribution function", unless further qualified, means the cumulative thing.

In math, nomenclature overloading can be a problem, so context matters. In the context of the Dirac delta, "distribution" means something else entirely: generalized functions.

Oh! So sad you deleted your point about densities. One can only laugh at and enjoy these idiosyncrasies of nomenclature.

In electrical engineering one uses j for the imaginary unit because i is taken (by current).


jerf · 1h ago
3blue1brown has a walkthrough of this: https://www.youtube.com/watch?v=IaSGqQa5O-M
Sourabhsss1 · 10h ago
The visualizations make the concept easy to grasp.
bjt12345 · 8h ago
This and complex analysis are fascinating topics in undergraduate studies.