I think the summary at the beginning of your first video is misleading; it's not a way to "trade space for time", at least not in an arbitrary program. The real statement is a bit odder to wrap one's head around -- "every problem solvable in t time on a multitape Turing machine is also solvable in close to √t space".
For a Turing machine that already solves a problem in n time and √n space (in other words, a lot of them!), it doesn't say anything.
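For reference, here is the formal statement as I understand it (hedging slightly on the exact log factors; this is Williams' 2025 simulation result for multitape Turing machines):

```latex
% Any multitape TM running in time t(n) can be simulated in space
% roughly the square root of t(n), up to a logarithmic factor:
\[
  \mathsf{TIME}[t(n)] \subseteq \mathsf{SPACE}\!\left[\sqrt{t(n)\,\log t(n)}\right]
\]
```

Note it only bounds space, not time; the simulation itself can be vastly slower than the original machine.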
LegionMammal978 · 2h ago
When you convert a generic Turing machine into a Tree Evaluation instance, you end up with square-root space with respect to the original runtime t, but the new runtime will be far, far slower. IME, with these types of circuit reductions, the runtime typically becomes exponential in the space required, which is just about 'as long as possible'.
If we're being pedantic, it's trading time for the space guarantee.
From February 2025 fwiw. Same result that there have been multiple articles about here. I wonder how it would work for Haskell programs (no mutable memory).
HappMacDonald · 58m ago
I'd view "no mutable memory" as misleading, because immutable languages can still create a new variable and forget an old one, which has the same memory footprint as mutating a single variable.
Obvious example: the flickering stack frame of tail call elimination.
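A minimal sketch of the point (in Python rather than Haskell, and with a hypothetical `sum_to` helper): each iteration rebinds the name to a freshly built tuple and the old one becomes garbage, so peak memory stays constant even though no value is ever mutated in place.

```python
# Immutable-style update: every step constructs a NEW (accumulator, counter)
# tuple and drops the old one. Nothing is mutated, yet the live-memory
# footprint is the same as mutating two variables in place.
def sum_to(n):
    state = (0, n)  # treated as an immutable pair
    while state[1] > 0:
        acc, k = state
        state = (acc + k, k - 1)  # rebind; the old tuple is now unreachable
    return state[0]

print(sum_to(1_000_000))
```

This is the loop-shaped analogue of a tail call: the "flickering stack frame" is here a flickering tuple, created and discarded once per step.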
[1] https://www.youtube.com/watch?v=8JuWdXrCmWg
[2] https://scottaaronson.blog/?p=8680
For algorithms, a little memory outweighs a lot of time - https://news.ycombinator.com/item?id=44055347 - May 2025 (139 comments)