CamperBob2 1 day ago

Exactly, the whole idea behind the test is flawed. ELIZA was already enough to fool some humans. Humans are very easy to fool.

See also the Chinese Room argument, which got a lot of airtime back in the day. It added no useful insight to questions about the nature of machine intelligence, but it did reveal how little we understood about the nature of language.

JKCalhoun 1 day ago

I must be dense, because I saw nothing useful at all in the Chinese Room argument.

Searle's translation book was essentially an LLM. Somehow (because it is a book?) we are to assume it cannot be in any way like human intelligence, despite its producing convincing responses.

myrmidon 1 day ago

I fully agree with your view.

My take is that the whole Chinese room argument rests on extremely nebulous and shaky definitions/assumptions, rendering it worthless.

It seems fairly obvious to me that the system of operator + rulebook does "understand" Chinese, for every practical definition of "understanding".

Another counterargument would be simple physical simulation: if you built a computer program that could simulate a human atom by atom, then you would either have to concede that the resulting machine does "understand" for all definitions that matter, or you would have to admit that you believe in magic [1].

[1]: or desperately grasp for loopholes, like nondeterministic physical micro-interactions, but one might as well call that magic.

mjburgess 1 day ago

It's meant to be obviously absurd that a person inside the room, merely substituting symbols, could in any sense understand the meaning of those symbols.

This may perhaps be more obvious to a naturalistic philosopher or natural scientist, than a computer "scientist" (ie., a mathematician).

The meaning of the term "pen" in "pass me that pen" includes the pen. So when this room is asked, "pass me the pen" and it replies "I cannot pass the pen" (or whatever it replies) -- it should be obvious that the person in the room, or any function of their activity, has never acquired any reference to "the pen". It is wholly unaware that there is a pen at all.

The purpose of this thought experiment is to show that syntactical correctness, or the apparent arrangement of symbols in the correct order, is radically insufficient to evidence semantic competence.

This, again, is perhaps more obvious to scientists -- the symbol order is only a proxy measure of semantic competence in people. It's trivial to come up with processes which clearly lack the capacity for such competence and yet are measured (/observed) to produce symbols in the right order.

In many ways, it's an over-engineered thought experiment. However, I'd say Searle was baffled that more obvious phrasings of the problem seemed to confuse others, i.e., that an observation of symbols isn't an observation of meanings -- one isn't a reliable measure of the other. Only under very many additional conditions does such a relationship hold in people.

Turing was not interested in producing systems that had such competence, so he may well agree with Searle in some ways at least. However, many students of computer science receive no empirical education whatsoever, and lack the basic vocabulary and understanding of the nature of the problem of meaning.

E.g., in order to mean "pass me the pen" one must be able to acquire a reference to "the pen", which, at the very least, any system unable to observe its environment cannot do.

Turing machines lack devices, and hence lack any in-principle capacity to refer to objects in the world. The only thing a Turing machine can be said to do is express an abstraction (a function nat -> nat) -- since it is an abstraction.

No capacities follow from expressing such a computational abstraction -- Searle thought the Chinese room made this obvious to those who didn't find it so. But he was baffled that anyone didn't already find it obvious.

One could make the same point with physics, rather than with meaning. E.g., the earth orbiting the sun computes +1, -1, +1, -1, ... and so does an infinite number of physical processes that share no properties with the earth, or the sun, etc. Thus, just because we observe +1, -1, +1, ... does not mean that "inside the Chinese physics room" there's an earth orbiting the sun. It could literally be anything.
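The multiple-realizability point can be sketched in a few lines of Python (a hypothetical illustration, not anything from the thread): two processes with nothing physically in common emit the identical symbol stream, so observing the stream alone tells you nothing about which mechanism is "inside the room".

```python
import math
from itertools import islice

def orbit_sign(omega=2 * math.pi, dt=0.5):
    """Sign of the x-coordinate of a body on a circular orbit,
    sampled every half period: +1, -1, +1, -1, ..."""
    t = 0.0
    while True:
        yield 1 if math.cos(omega * t) >= 0 else -1
        t += dt

def trivial_alternator():
    """A process with no orbit and no physics: it just flips a bit."""
    s = 1
    while True:
        yield s
        s = -s

# Both "rooms" produce the same observable symbol sequence,
# despite radically different internal mechanisms.
a = list(islice(orbit_sign(), 8))
b = list(islice(trivial_alternator(), 8))
assert a == b
```

The assertion passes: the observation +1, -1, +1, ... is compatible with both generators, which is exactly the underdetermination being described.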

kevinventullo 1 day ago

So what happens when I stuff an LLM inside a robot, and when I ask it to pass me the pen it passes me the pen?

mjburgess 1 day ago

Well, no amount of behavioural measurement implies a capacity for meaning -- behaviour is a proxy measure of mental capacities.

However we might, as a practical matter, have a large number of proxy tests and treat a system as meaning-capable if it passes them.

CamperBob2 1 day ago

At some point you have to stop waving your hands and hand him the pen, though. What question can be asked that can be used to distinguish genuine human intelligence from intelligence simulated by a machine?

Searle thought he had come up with just such a question, but it turned out that he hadn't.

og_kalu 1 day ago

>It's meant to be so obviously absurd that a person inside the room, merely substituting symbols, should in any sense understand the meaning of those symbols.

Alright, so what neuron in your brain "understands" English? Hell, feel free to name any part, regardless. This is why the Chinese Room is nonsensical. Either you admit the system can understand even when none of its constituents do, or you admit you don't understand anything at all either. At least either conclusion would be consistent.

Unfortunately, many people take the nonsensical middle road: "Oh, that doesn't understand, but I certainly do, just because."

throw4847285 1 day ago

I don't understand why people get so up in arms about the Chinese room. It's very clear that a major part of human intelligence is a mental model of the physical world, and linguistic concepts have an (often complex) relationship to that model. There's no magic here. Nothing about that argument implies anything about neurons. The process of forming a mental model of the world and mapping words onto it could easily take place across many, many neurons within the human brain, because it does! It does not take place in an LLM. That does not imply that nobody will ever develop a positronic brain that could do the same. We just clearly haven't done so yet.

Saying, "if you can't point to the neuron that does X, then you can't prove X happens" isn't a scientific perspective. It's a willfully ignorant one. If you're confident in the scientific process, then we will eventually understand how all kinds of human mental processes make sense in the context of neural networks.

og_kalu 1 day ago

The point is that the Chinese room is just an appeal to absurdity. The fact that opening the box reveals mechanisms we would not call understanding does not mean the system -- the Chinese room as a whole -- does not understand. The neuron comparison demonstrates that very fact. The brain is a Chinese room. It doesn't have to be relegated to a single neuron; feel free to open the box and show any of us what happens in there that we would call understanding.

>It does not take place in an LLM.

I don't know what else to tell you but LLMs absolutely model concepts and the physical world, separate from the words that describe them. This has been demonstrated several times.

mjburgess 1 day ago

The Chinese room does not aim to show, nor does it show, that part-whole relationships fail; nor is it even about part-whole relationships.

Yes, neurones do not understand "pen" -- but some highly particular whole bodies do (i.e., English-speaking people). That's because of highly particular relationships between those neurones, the body, the environment, and the history of that language user.

This is the comp-sci brain rot that Searle is baffled by. Symbol manipulation implies no relationships between wholes and parts. The capacity to understand meaning requires extraordinarily specific ones.

og_kalu 1 day ago

What is the difference between "English-speaking people" and "the Chinese Room"? The problem with Searle's argument is that the Chinese room is just an appeal to absurdity, a sleight of hand. I'm supposed to think, "Oh, this is so absurd, of course the room doesn't understand," but the appeal falls apart once you realize that the same logic could be applied to any computational process, including human cognition. The distinction Searle draws between a person who genuinely understands English and a system that mechanically manipulates symbols is, in essence, arbitrary. They are both systems that have demonstrated understanding.

JKCalhoun 1 day ago

> So when this room is asked, "pass me the pen" and it replies "i cannot pass the pen" (or whatever it replies) -- it should be obvious that the person in the room, or any function of their activity, has never acquired any reference to "the pen"

And yet Searle seems to pass the buck here to a book that actually "responds", not to the person in the room. I get it: the person is out of the loop.

But how does one explain the book that can answer so convincingly? That would appear to be where the "AI" resides.

mjburgess 1 day ago

Just in the same way a video game is convincing. Any experimental scientist knows that the measuring device isn't what's being measured.

The TV which displays a video game outputs images as if there were a whole world inside the TV box: there isn't.

CamperBob2 1 day ago

This point predates video games by a couple thousand years, of course.

nkali 1 day ago

Not just "some", but a whopping 23% of this test's participants.