> Yet, as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable
This would involve proving that humans exceed the Turing computable, which would mean proving the Church-Turing thesis false.
Because if humans do not exceed the Turing computable, then every single human brain is an existence proof that AGI is computationally tractable: it demonstrates both that sufficient calculations can be done in a small enough computational device, and that the creation of such a device is possible.
Their paper accepts as true that Turing-completeness is sufficient to "computationally capture human cognition".
If we postulate that this is true (and we have no evidence to suggest it is not), and their "proof" shows that their chosen mechanism provably cannot allow for the creation of AGI, then all they have demonstrated is that their assumptions are wrong.
falcor84 · 2h ago
The proof seems sound, but the premises appear to me to be overly restrictive. In particular, given that an ML-based AI can write arbitrary code, there's nothing preventing these 2nd-generation AIs from being AGI.
vidarh · 1h ago
If such 2nd-generation AIs are AGI, then their claim that "as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable" is false.
Indeed, if their proof is true, they have a proof that the Church-Turing thesis is false, and that humans exceed the Turing computable, in which case they've upended the field of computational logic.
Yet they assert that they believe a Turing complete system "is expressive enough to computationally capture human cognition".
This would be a very silly belief to hold if their proof is correct: if that claim is true, then a human brain is an existence proof that a computational device capable of producing output identical to human cognition is possible, because a human brain would be one such device. They'd then need to explain why they think an equally powerful computational device can't be artificially created.
If they want to advance a claim like this, they need to address this issue. Not only does their paper not address it, but they make assertions about Turing-equivalence that are false if their conclusions are true, which suggests they don't even understand the implications.
Indeed, if they understood the implications of their claim, then a claim to have proven the Church-Turing thesis to be false and/or having proven that humans exceed the Turing computable ought to be front and center, as it'd be a far more significant finding than the one they claim.
The paper is frankly an embarrassment.
benreesman · 1h ago
I'd like to "reclaim" both AI and machine learning as relatively emotionally neutral terms of art for useful software we have today or see a clearly articulated path towards.
Trying to get the most out of tools that sit somewhere between "the killer robots will eradicate humanity", "there goes my entire career", "fuck that guy with the skill I don't want to develop, let's take his career", and "I'm going to be so fucking rich if we can keep the wheels on this" is exhausting.
And the cognitive science thing.
akoboldfrying · 49m ago
> Yet, as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable.
Well, pregnant women create such systems routinely.
Due to the presence of the weasel word "factual" (it's not in the sentence I quoted, but is in the lead-up), no contradiction actually arises. It may well be intractable to create a perfectly factual human(-like or -level) AI -- but then, most of us would find much utility in a human(-like or -level) AI that is only factual most of the time -- IOW, a human(-like or -level) AI.
vidarh · 46m ago
But a "perfectly factual" AI wouldn't be human-like at all, and notably they do appear to have actually tried to define what a "factual AI system" would mean.