The OP claims progress in AI is stagnating. The author is, no surprise, Gary Marcus. Even if you tend to disagree with him, as I do, the OP is worth reading.
My two cents: Progress in AI seems inevitable but its timing is unpredictable. There's no law in the universe guaranteeing that the rate of progress must be constant. It's possible that we will need new breakthroughs. No one knows for sure. In any case, breakthroughs do not come along on a nice, predictable schedule. It's possible we will go through a period of stagnation that lasts months, or even years. We cannot rule out another "AI winter" just because we don't want one.
Progress seems inevitable to me because the human brain is physical proof that an intelligent machine is possible: one that consumes roughly the same energy as an incandescent light bulb to power hundreds of trillions of interconnections between neurons. Today's largest AI models, by contrast, consume many orders of magnitude more energy to power only around 1 trillion interconnections, each with a parameter specifying its weight. We have a long way to go, but we now know a human-brain equivalent is physically possible.
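To put rough numbers on "many orders of magnitude," here's a back-of-the-envelope sketch in Python. The wattage and connection counts are my own order-of-magnitude assumptions (about 20 W and about 100 trillion synapses for the brain; about 1 trillion parameters and an assumed ~10 kW serving footprint for a large model), not figures from the OP:

    # Rough comparison of power per interconnection.
    # All values are order-of-magnitude assumptions for illustration only.

    BRAIN_POWER_W = 20        # roughly light-bulb power, commonly cited for the brain
    BRAIN_SYNAPSES = 1e14     # "hundreds of trillions" of interconnections, rounded down

    MODEL_PARAMS = 1e12       # ~1 trillion weighted interconnections in the largest models
    MODEL_POWER_W = 1e4       # assumed ~10 kW for a multi-GPU serving deployment

    brain_w_per_conn = BRAIN_POWER_W / BRAIN_SYNAPSES
    model_w_per_conn = MODEL_POWER_W / MODEL_PARAMS

    print(f"brain: {brain_w_per_conn:.1e} W per connection")
    print(f"model: {model_w_per_conn:.1e} W per connection")
    print(f"ratio: ~{model_w_per_conn / brain_w_per_conn:,.0f}x more power per connection")

Even with these generous assumptions for the model, the brain comes out tens of thousands of times more power-efficient per connection, and that's before accounting for it having roughly 100x more connections.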
Even if progress is punctuated by high peaks and deep valleys, I believe we'll get there.
The NYT had a much more engaging headline for this on the front page: "The Fever Dream of Imminent 'Superintelligence' Is Finally Breaking." "How to Rethink AI" is a terrible headline, and as we see here, a high-profile editorial by a pretty prominent figure in the AI field ends up as an empty backwater.