The new Gödel Prize winner tastes great and is less filling

82 points | baruchel | 23 comments | 6/9/2025, 2:42:33 PM | blog.computationalcomplexity.org

Comments (23)

NooneAtAll3 · 3m ago
I feel like this blogpost isn't filling either

What exactly was the awarded paper about? What does "extractor" and "resilient function" mean?

Why did the discussion shift towards Ramsey theory? Why waste half of the post arguing about something already discussed in linked previous blogposts?

HPMOR · 5h ago
Wow, crazy randomly seeing my Algo prof at the top of HN winning this! Congrats Eshan!!
bhasi · 4h ago
Eshan's Bachelor's thesis advisor from IIT Kanpur, Prof Manindra Agrawal, also won the Gödel prize in 2006. Wow.
antics · 5h ago
If I give you a biased coin can you simulate a truly random coin flip with it? The answer turns out to be yes. Flip the biased coin twice: HT = heads, TH = tails, and HH/TT = flip twice again.

The general study of such things is called “randomness extractors”. The new Gödel prize goes to this paper which shows how to extract a nearly perfect random bit out of two sources with low min-entropy.
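
A minimal sketch of that trick in Python, with a simulated biased coin (illustrative only; the function names are made up, and the prize-winning paper is about the much harder two-source setting, not this):

    import random

    def biased_flip(p_heads=0.9):
        # A biased coin: 'H' with probability p_heads, else 'T'.
        return 'H' if random.random() < p_heads else 'T'

    def fair_flip(flip=biased_flip):
        # Von Neumann's trick: flip in pairs and keep only HT/TH,
        # which are equally likely regardless of the bias.
        while True:
            a, b = flip(), flip()
            if a != b:
                return a  # 'H' for HT, 'T' for TH

    print([fair_flip() for _ in range(20)])

Since HT and TH each occur with probability p(1-p), conditioning on the two flips differing gives an exact 50/50 outcome, whatever p is (as long as it isn't 0 or 1).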

falcor84 · 5h ago
Yes, but - you need to replace "twice" there with "an unbounded number of times". If you apply this in an environment where the biased coin is coming from an external source, your system becomes susceptible to DoS attacks.
antics · 4h ago
While I obviously think randomness extractors over adversarial sources are very interesting, I think talking about them specifically in this example complicates the point I'm trying to make, which is that it's incredible it can be done at all.
dataflow · 4h ago
Note that adversarial is kind of a red herring, not sure why they mentioned that. The number of flips is unbounded regardless. Which is why it's not really incredible that it can be done: it can't, not as the problem was originally stated. What can be done is solving a different (but useful) problem than the one originally posed.

I realize this sounds like a minor detail to someone who finds this cool (and so do I), but I don't think it is. It's kind of frustrating to be told that your intuition is wrong by someone smarter than you, when your intuition is actually correct and the experts are moving the goalposts. IMO, it makes people lose respect for experts/authority.

dwattttt · 39m ago
I don't think anyone would be surprised to hear that if a biased coin can only give one result, you can't extract randomness from it.

And if it can only give the second result one in a million times, you could be flipping millions.

antics · 3h ago
So, the problem in its original framing is: can we simulate a fair coin flip with an unfair coin? As stated, I do think the von Neumann answer ("this is actually technically possible") is fair, in that if I wanted a solution in O(1), I think I should have to say so ahead of time.

I suppose we'll have to disagree about whether this is incredible. The response shows (1) that this can be done at all, and (2) that the probability of still not having an answer decays exponentially in the number of flips, not just asymptotically but for finite n. Incredible! You don't see clean finite-n bounds very often! If you don't think that's incredible, I invite you to ask a room full of people, even with the qualifications you deem appropriate, e.g., "solution does not need to be constant-time", or whatever.

QuesnayJr · 3h ago
I have the exact opposite reaction: if someone told me the answer is "no" because it requires an unbounded number of coin flips, I'd think they were the ones trying to bullshit me. In antics' formulation, nothing is said about requiring a bounded number of flips.
dataflow · 3h ago
"Simulate a truly random coin" implies it IMO. You're not simulating a truly random coin if you need unbounded time for a single flip. The truly random coin definitely doesn't need that. It just feels like a scam if someone sold me such a machine with that description - I'd want my money back. I don't expect everyone would feel the same, but I think a lot of people would.
antics · 1h ago
I'm not sure we're on the same page about what this result practically means, so let me re-state it a few different ways, so that people can draw their own conclusions:

* The von Neumann approach will "appear to be O(1) (constant-time)" for any particular biased coin, but that constant might be big if the coin is very, VERY biased in one direction.

* How can this be true? Every round of two flips shrinks the probability that you still have no answer by a constant factor, so it decays exponentially. The "concentration" around the mean is very sharp; e.g., at 275 discarded pairs (550 coin tosses) for P(H)=0.5 (the fair case), the probability of not having an answer is smaller than 1 divided by the number of atoms in the known universe. It is technically possible, but I think most people would say that it's "effectively constant time" in the sense that we'd expect the coin to phase-shift through the desk before we flipped it that many times in a row and got no answer. So even if it takes a few hundred flips, it's "constant" time! Interpret it how you like though.

* As you make the coin more and more biased, that "horizon" grows roughly linearly, in that 0.99^1,000 is approximately the same thing as 0.999^10,000. So, an order of magnitude increase in the bias (1 - P(H) shrinking tenfold) requires roughly an order of magnitude increase in the number of flips for the same confidence. This is why it's not useful for the adversarial case, and why adversarial extractors are held apart from normal extractors.

Whether this is a "give me my money back" type thing is for you to decide. I think for most people the claim that you can simulate a fair coin from a biased coin in, effectively, O(1), and that the constant grows only linearly with the bias, is plainly incredible. :)
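
To put rough numbers on the concentration argument (a back-of-the-envelope sketch, illustrative only): the chance of still having no answer after k HT/TH rounds is (p^2 + (1-p)^2)^k.

    def no_answer_probability(p_heads, pairs):
        # Probability that every one of `pairs` rounds came up HH or TT,
        # i.e. the von Neumann procedure has produced nothing yet.
        p_same = p_heads**2 + (1 - p_heads)**2
        return p_same**pairs

    print(no_answer_probability(0.5, 275))     # ~1.7e-83 after 550 fair flips
    print(no_answer_probability(0.99, 275))    # ~4e-3: a 99%-heads coin decays far more slowly
    print(no_answer_probability(0.99, 10_000)) # ~1e-87: more rounds buy the confidence back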

dooglius · 1h ago
A real coin could repeatedly land on its side over and over indefinitely, every time you flipped it.
thaumasiotes · 2h ago
You don't need unbounded time for a single flip, that's all in your imagination. The worst-case time is unbounded, but you can't achieve the worst case.
dataflow · 1h ago
> You don't need unbounded time for a single flip, that's all in your imagination. The worst-case time is unbounded, but you can't achieve the worst case.

There literally isn't a bound, it can be arbitrarily large. This isn't just in my head, it's a fact. If it's bounded in your mind then what is the bound?

Dylan16807 · 1h ago
It's not a fact of the real world, at least. "You can't achieve it" is true. A pretty small number of failures and you're looking at a trillion years to make it happen. And if you buffer some flips that number gets even smaller.
dataflow · 58m ago
> A pretty small number of failures and you're looking at a trillion years to make it happen.

This depends on the bias of the original coin. P(H) can be arbitrarily close to 1, making HH by far the likeliest outcome of every round, so even a trillion years of flipping might never produce a decision. "This wouldn't happen in the real world" would be a sorry excuse for the deliberate refusal to clearly state the problem assumptions upfront.

IMO, if you really want to pleasantly surprise people, you need to be forthcoming and honest with them at the beginning about all your assumptions. There's really no good excuse to obfuscate the question and then move the goalposts when they (very predictably) fall into your trap.

Dylan16807 · 27m ago
> This depends on the bias of the original coin. P(H) can be arbitrarily large

> There's really no good excuse to obfuscate the question and then move the goalposts when they (very predictably) fall into your trap.

Interesting. Because I see the guy pulling out the one-in-a-million coin and expecting it to run at a similar speed to be doing a gotcha on purpose, not falling into a trap and having the goalposts moved.

And I think "well if it's a million times less likely to give me a heads, then it takes a million times as many flips, but it's just as reliable" is an answer that preserves the impressiveness and the goalposts.

It's fast relative to the bias. Which seems like plenty to me when the original claim never even said it was fast.

(And if the coin never gives you a heads then I'd say it no longer qualifies as randomly flipping a coin.)
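
That intuition matches the expected cost: each HT/TH round succeeds with probability 2p(1-p), so a fair coin needs 4 flips per output bit on average, and a one-in-a-million coin needs about a million. A quick sketch (illustrative only):

    def expected_flips_per_bit(p_heads):
        # Each 2-flip round yields a bit with probability 2*p*(1-p),
        # so the expected number of flips per fair bit is 1/(p*(1-p)).
        return 1 / (p_heads * (1 - p_heads))

    print(expected_flips_per_bit(0.5))   # 4.0 flips for a fair coin
    print(expected_flips_per_bit(1e-6))  # ~1,000,001 flips for a one-in-a-million coin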

falcor84 · 1h ago
It's very easy to achieve if someone hands you a "coin" that is made such that it never lands on tails.

Sorry for still being in the adversarial mindset, but this means that you essentially have to hardcode a maximum number of same-side flips after which you stop trusting the coin.

Dylan16807 · 1h ago
That's not a situation of "can't be done", that's just a consequence of casually describing the problem instead of exhaustively specifying it.

Yes, the bias can't be total / the entropy can't be zero, and the slowdown is related to how big the bias is / how low the entropy is.

pbhjpbhj · 2h ago
Speaking from complete ignorance, with apologies to those whom that will annoy:

I'm sure it's possible to make a coin with what one might term "complex bias" where the bias extends over two events (thick gloop inside the coin, possibly heat activated or non-Newtonian).

This method sounds like the bias needs to be fixed ("simple bias" if you like)?

I guess that's just out of scope here.

Aside: There's a strong 'vibe' that HHHH HHHH with a single coin is somehow less random than HTHTHTHT HTHTHTHT with two coins when discarding HH and TT. I wonder what the origin of that feeling is - maybe just HT doesn't seem like it's a constant result simply due to nomenclature? If we call HT "type-A", then we just have HHHH HHHH vs. AAAA AAAA; and I _feel_ happier about the result!

Terr_ · 1h ago
I suspect this depends on where you draw the upper bound, since a really, really complex biased coin is one that spies on your thesis and data and is committed to making you suffer.

Descartes' evil demon, in numismatic form.

doctorpangloss · 2h ago
It’s tough. You take Math 55, you’re a smart kid, you learn all this math, and it opens up all these opportunities from VCs in the real world for you, so long as it has to do with payments.
