If we consider AI as a statistical topography of human thought, then we should expect AI transformer pathos to be human: people try wrong solutions before finding the right one, and people's thinking tends to collapse under the duress of complex problems.
As to the meaning of "complete accuracy collapse": this report on the Apple research doesn't even allude to what accuracy means.
As to language such as
//...“particularly concerning”.
Gary Marcus, a US academic who has become a prominent voice of caution on the capabilities of AI models, described the Apple paper as “pretty devastating”.//
Well that's just pretty f'ing fantastic.
//...a system is able to match a human at carrying out any intellectual task...//
Maybe the Turing Test always put the bar way too low. Just what we need: mechanically accelerated pathos!
The Illusion of Thinking: Strengths and limitations of reasoning models [pdf] - https://news.ycombinator.com/item?id=44203562 - June 2025 (258 comments)
The link appears to go to a catch-all for AI-related articles in the Guardian.