I get the sentiment, but I think declaring scaling "dead" is premature and misses the point.
First, let's be honest about GPT-5. The article cherry-picks the failures. For 95% of my workflow -- generating complex standalone code, reviewing unfamiliar code for issues, drafting technical documentation, summarizing dense research papers -- it's a massive step up from GPT-4. The "AGI" narrative was always a VC-fueled fantasy. The real story is the continued, compounding utility as a tool. A calculator can't write a poem, but it was still revolutionary.
Second, "scaling" isn't just compute * data. It's also algorithmic improvements. Reasoning was a huge step forward. Maybe the next leap isn't just a 100x parameter increase, but a fundamental architectural shift we haven't discovered yet, which will then unlock the next phase of scaling. Think of it like the transition from single-core to multi-core CPUs. We hit a frequency wall, so we went parallel. We're hitting a density wall with LLMs, the next move is likely towards smarter, more efficient architectures.
The fever dream isn't superintelligence. The fever dream was thinking we'd get there on a single, straight-line trajectory with one architecture. Progress is still happening; it's just getting harder and demands more ingenuity, which is how every mature engineering field works.