The Catholic Church vs. the Turing Test

twitchard · 6/30/2025, 12:43:47 PM · twitchard.github.io ↗

Comments (4)

alganet · 3h ago
You don't need Catholics, or appeals to the beauty of sunsets, to question the "Turing Test" (there is no such thing, actually).

https://courses.cs.umbc.edu/471/papers/turing.pdf

Here's a quote from Turing:

> I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.

Let's think about this "70 per cent chance of making the right identification" thing and what Turing means by that.

What if I make a fake Turing test? A human interrogator and two humans as subjects. That 70% should remain unchanged. Meaning that in his hypothetical year 2000, about 30% of the time, the interrogator should say that the real human is a machine.
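To make that arithmetic concrete, here's a rough sketch (purely illustrative and not anything from Turing's paper; the 70% is just the figure under discussion):

    import random

    SESSIONS = 10_000      # hypothetical number of five-minute interrogations
    P_CORRECT = 0.70       # Turing's predicted rate of right identifications

    # Count sessions where the interrogator gets it wrong, i.e. labels the
    # real human as the machine (or the machine as the human).
    misses = sum(random.random() > P_CORRECT for _ in range(SESSIONS))
    print(f"{misses / SESSIONS:.0%} of sessions end in a misidentification")  # roughly 30%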

So, what was Turing smoking? Why is he predicting that in the 2000s, a human would fail to identify other humans?

He's not. If we believe he's a smart guy (I do), he is also saying that about the 50s. At the time, there simply weren't any machines to make humans think about this problem. It's a hard problem to even grasp.

I would argue that this is the 1950 version of philosophical zombies, and it is the exact same problem. https://en.wikipedia.org/wiki/Philosophical_zombie. He explicitly acknowledges this issue.

Turing, therefore, _was_ thinking of the hard problem of consciousness, before it was even defined.

In the same paragraph, he makes this remark:

> The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any unproved conjecture, is quite mistaken. Provided it is made clear which are proved facts and which are conjectures, no harm can result.

What Turing was trying to do is isolate this "hard problem of consciousness" and separate it from easier problems we can actually answer. He never said the problem doesn't exist (quite the contrary, as I demonstrated above).

He also says, quite explicitly, that this Imitation Game (or what we now call "the Turing Test") is a thought experiment, not a rule written in stone.

What is happening right now in the AI world is that people are taking these ideas from the 50s, pretending they're rules written in stone, and applying a version of the test as imagined in the 50s to machines made in 2025. In other words, they're using an obsolete version.

So, this "next Turing Test", where is it? How can I get the latest version?

You can't, mostly because AI companies have absolutely no interest in disproving themselves. Their benchmarks are tweaked to showcase their strengths and hide their shortcomings.

With the benefit of hindsight, we know more than Turing did. We know this particular class of machines is prone to hallucinations, and we know they work with a limited context. A skilled interrogator in 2025, therefore, knows that too.

We should update the thought experiment, then devise a "new hypothetical test" based on it.

twitchard · 1h ago
> What Turing was trying to do is isolate this "hard problem of consciousness" and separate it from easier problems we can actually answer.

Yes, exactly. As a computer scientist, I think this is a great thing to do; science is all about taking mushy concepts like "intelligence" and extracting simplified versions of them that are more tractable in technical settings. The trouble is, Turing doesn't seem to want to stop at merely arguing that setting aside interior consciousness is useful for technical discussions -- he seems to think that interior consciousness shouldn't be important for philosophical or popular notions of thinking and intelligence either, and that those notions should be updated to use something like his test.

So even if you updated the Turing Test for 2025, the Church would probably still be writing "Antiqua et Nova" to remind people that -- yes -- interior consciousness exists and is important, and robot intelligence really isn't the same as human intelligence without it.

lo_zamoyski · 3h ago
The cause of behavior is absolutely relevant. The same apparent effect can often be attained by a variety of causes, after all. Otherwise, there is no difference between simulation and the real McCoy, no difference between truth and deceit. A liar by definition is someone who does not communicate faithfully his true intentions or beliefs and intentionally creates the mere appearance of doing so in order to deceive.

It is not a mystery why computers are not intelligent: they lack semantics.

Take the concept of a triangle (let's call it "triangularity"). This concept has to be distinguished from concrete triangles, because the concept is what we predicate of all triangles, so we cannot identify triangularity with any one of these triangles or with all triangles. Why not? Well, if you identified triangularity with a particular triangle, it would follow that there exists only one triangle; you cannot predicate a concrete thing of other things. And you cannot identify triangularity with the class of all triangles, because this would make your reasoning circular: which objects are part of this class of triangles? Well, the triangles in the class, of course! Furthermore, you could not possibly know the entire class of triangles, and yet, through the analysis of the concept of triangularity, you can come to learn all there is to know about triangularity, and thus everything that can be known about what is common and essential to all triangles.

So triangularity cannot be a matter of image, as images are concrete instances of triangles. Might there be an encoding, then, that encodes this triangularity? The answer again is "no". Encodings are not carriers of meaning in the way concepts are. They are, in fact, devoid of the meaning they communicate; all the meaning they have is assigned by the intelligent observer.

Consider the encoding of this very block of text I've posted. Objectively, these funny little shapes do not contain the concepts they communicate. As physical artifacts, they have no semantic content apart from their identity as physical objects (in a book, the physical meaning of the blobs of variously shaped pigment on paper is just that; in a computer, perhaps the state of pixels or the state of an array of semiconductor cells). The words, the concepts — these belong to the writer (me) and to the reader (you). And if our language conventions are aligned, communication is possible. But it is always the writer and the reader who bring the semantics to a piece of writing. Without them, there is nothing. So concepts cannot be physical, as the physical is always concrete and particular. And in an interesting analogy: it is the human being who reads intelligence into the behavior of LLMs. There is none in the LLM.

Computing devices are, of course, entirely physical, but computers are, strictly speaking, purely mathematical formalisms that physical machines only simulate. But even if we reify these mathematical constructs or identify them with physical machines, we are left with, at best, syntactic machines. And no amount of syntax will ever amount to any semantics. It is magical thinking to believe — and this belief has no justification — that somehow, without explanation (and usually by appeal to ignorance), the lead of syntax will implode under its own weight into the gold of semantics. There is nothing in the nature of syntax that can accomplish this, and that is so by definition.

So the tl;dr is: computers lack semantics and intentionality, which means they cannot, even in principle, be intelligent.

twitchard · 12m ago
Why do you think that the human mind can contain semantics but a machine cannot? This argument needs some sort of dualism, or what Turing called the "argument from continuity in the nervous system", to account for this.

FWIW I don't think that the "triangularity" in my head is the true mathematical concept of "triangularity". When my son learned about triangles, for example, at first the concept was just a particular triangle in his set of toy shapes. Then eventually I pointed at more things and said "triangle", and now his concept of triangle is larger and includes multiple things he has seen and sentences that people have said about triangles. I don't see any difficulty with semantics being "a matter of image", really.

Why do we believe that semantics can exist in the human mind but cannot exist in the internals of a machine?

Really "semantics"

I had come across this Catholic philosopher: https://edwardfeser.blogspot.com/2019/03/artificial-intellig... who seems to make a similar argument, i.e. that it's humans who give meaning to things: "logical symbols on a piece of paper are just a bunch of meaningless ink marks".