AGI Is Mathematically Impossible (3): Kolmogorov Complexity

ICBTheory | 7/13/2025, 9:10:34 AM
Hi folks. This is the third part of an ongoing series on a theory I've been developing over the last few years, the Infinite Choice Barrier (ICB). The core idea is simple:

General intelligence—especially AGI—is structurally impossible under certain epistemic conditions.

Not morally, not practically. Mathematically.

The argument splits across three barriers:

1. Computability (Gödel, Turing, Rice): You can't decide what your system can't see.
2. Entropy (Shannon): Beyond a certain point, signal breaks down structurally.
3. Complexity (Kolmogorov, Chaitin): Most real-world problems are fundamentally incompressible.

This paper focuses on (3): Kolmogorov Complexity. It argues that most of what humans care about is not just hard to model, but formally unmodellable—because the shortest description of a problem is the problem.
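For readers who want the formal object behind that claim, here is the standard textbook definition (general background, not a construction specific to the paper): the Kolmogorov complexity of a string x, relative to a fixed universal Turing machine U, is the length of the shortest program that outputs x.

```latex
% Kolmogorov complexity of a string x, relative to a fixed
% universal Turing machine U:
K_U(x) = \min \{\, |p| \;:\; U(p) = x \,\}

% A string is c-incompressible when no description saves even c bits:
K_U(x) \ge |x| - c

% K itself is uncomputable: no total computable f satisfies
% f(x) = K_U(x) for all x (a consequence of the halting problem).
```

The invariance theorem says that changing U shifts K_U by at most an additive constant, which is why people usually drop the subscript and just write K(x).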

In other words: you can’t generalize from what can’t be compressed.
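Since K(x) is uncomputable, any hands-on illustration has to fall back on a real compressor as an upper-bound proxy. Here is a minimal Python sketch (my example, not from the paper; zlib stands in for the "shortest description") showing the two regimes the slogan points at:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size / original size: a crude upper-bound proxy for K(x)/|x|."""
    return len(zlib.compress(data, 9)) / len(data)

structured = b"abc" * 10_000      # highly regular: a tiny program generates it
random_ish = os.urandom(30_000)   # incompressible with overwhelming probability

print(f"structured: {compression_ratio(structured):.3f}")  # far below 1.0
print(f"random:     {compression_ratio(random_ish):.3f}")  # ~1.0, no real savings
```

The repetitive string is the output of a tiny program, so its shortest description is far shorter than the string itself; the random bytes admit no such shortcut, and on the ICB account that is exactly the regime where generalization has nothing to compress.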

Here’s the abstract:

There is a common misconception that artificial general intelligence (AGI) will emerge through scale, memory, or recursive optimization. This paper argues the opposite: that as systems scale, they approach the structural limit of generalization itself. Using Kolmogorov complexity, we show that many real-world problems—particularly those involving social meaning, context divergence, and semantic volatility—are formally incompressible and thus unlearnable by any finite algorithm.

This is not a performance issue. It's a mathematical wall. And it doesn't care how many tokens you've got.
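The arithmetic behind "formally incompressible" in the abstract is the standard counting argument (again, textbook material rather than the paper's own derivation): programs are strings too, and there simply aren't enough short ones to go around.

```latex
% At most 2^{n-c} - 1 programs are shorter than n - c bits, so the
% fraction of n-bit strings compressible by even c bits is below 2^{-c}:
\frac{\bigl|\{\, x \in \{0,1\}^n : K(x) < n - c \,\}\bigr|}{2^n}
  \;<\; \frac{2^{n-c}}{2^n} \;=\; 2^{-c}
```

So fewer than one string in a thousand can be compressed by even ten bits. In that precise sense, almost everything is incompressible, no matter how many tokens you throw at it.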

The paper isn’t light, but it’s precise. If you’re into limits, structures, and why most intelligence happens outside of optimization, it might be worth your time.

https://philpapers.org/archive/SCHAII-18.pdf

Happy to read your views.
