No strong AI for you! Searle's Chinese Room Argument


f95toli

Golden Member
Nov 21, 2002
1,547
0
0
Originally posted by: CSMR
I am not interested in whether the monkey is intelligent or not in general; only whether it understands Shakespeare. Let us assume its typing is random and independent in our assessments. Now you say it does not manipulate the information in a sophisticated way and so does not understand it? On what basis are you saying that it does not process the information in a sophisticated way? On the basis of our probability assessments? If so, the understanding of the monkey is then a property of our expectations.

On the basis that a monkey typing random characters on a computer would not pass the Turing test. And of course it is a property of our expectations; the idea that an AI is a true AI ONLY if it behaves the way we expect a human to behave is the whole point of the Turing test.
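To put a rough number on why random typing fails such a test, here is a back-of-the-envelope sketch (my illustration, not from the thread; the keyboard size and target reply are assumptions):

```python
# Probability that a uniformly random typist produces one specific short
# reply. The 50-key keyboard and the target sentence are assumptions.
keys = 50
reply = "I think, therefore I am"
p = (1 / keys) ** len(reply)  # independent, uniform keystrokes
print(f"P(exactly this {len(reply)}-char reply) = {p:.2e}")  # ~8.4e-40
```

At roughly one chance in 10^39 per attempt, the monkey's answers are, for any practical judge, indistinguishable from noise.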



 

CSMR

Golden Member
Apr 24, 2004
1,376
2
81
Originally posted by: f95toli
Originally posted by: CSMR
Originally posted by: f95toli
And what makes you think that the symbols have a "meaning"?
If there is no meaning there is nothing to understand! So the thought experiment assumes there is a meaning.

What I was referring to was of course "meaning" in the metaphysical sense, which is what I believe he was referring to, i.e. the idea that ideas and meaning "exist" in a way which transcends mathematical description (something akin to an Idealist reading of Plato's cave).
Now, mathematics does not describe meaning or anything else. If there is no meaning that cannot be mathematically described, then there is no meaning, and no understanding or intelligence is possible. But you must have meant something other than mathematical: how do you want concepts to be described?
 

CSMR

Golden Member
Apr 24, 2004
1,376
2
81
Originally posted by: f95toli
On the basis that a monkey typing random characters on a computer would not pass the Turing test.
-I was interested in looking at your criterion for understanding, which did not involve Turing tests.
-We think (randomness assumption) that the monkey may pass the Turing test.
And of course it is a property of our expectations; the idea that an AI is a true AI ONLY if it behaves the way we expect a human to behave is the whole point of the Turing test.
Hmm. My monkey isn't artificial.
It would be an unusual notion of understanding for my monkey to understand or not understand something only with respect to the views of the onlooker. We should not then say "this computer understands" or "this computer is intelligent" as if it contradicted another person saying "this computer is not intelligent", if we are speaking not of the computer in itself but of its relation to our own varying and subjective expectations.
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
My point was that there CAN be understanding (in the sense of information manipulation) and intelligence without there being a metaphysical "meaning" of symbols; i.e. I was questioning whether or not there is such a thing as "non-symbolic" information, or in other words whether or not the grounding problem is really a scientific problem.
There is nothing stopping you from calling a mathematical description "meaning", but then you are not using the word in the way it was used by the Idealists (or indeed Berkeley, if I am not mistaken).

As I have already written, I am still not convinced that the "grounding problem" is a real, technical problem; to me it just seems like a modern version of a very old philosophical problem which is interesting but probably not relevant for computer science.



 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
Originally posted by: CSMR
Originally posted by: f95toli
On the basis that a monkey typing random characters on a computer would not pass the Turing test.
-I was interested in looking at your criterion for understanding, which did not involve Turing tests.
-We think (randomness assumption) that the monkey may pass the Turing test.
And of course it is a property of our expectations; the idea that an AI is a true AI ONLY if it behaves the way we expect a human to behave is the whole point of the Turing test.
Hmm. My monkey isn't artificial.
It would be an unusual notion of understanding for my monkey to understand or not understand something only with respect to the views of the onlooker. We should not then say "this computer understands" or "this computer is intelligent" as if it contradicted another person saying "this computer is not intelligent", if we are speaking not of the computer in itself but of its relation to our own varying and subjective expectations.

The problem is that the only way to test whether an AI (or in this case the monkey, or why not a student) understands something is to ask questions. Hence, whether or not something "understands" is, as you point out, up to the onlooker. So obviously my criterion for understanding DOES involve the Turing test; I don't think there is any other way to measure it.
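To make this operational picture concrete, here is a minimal sketch of the testing procedure as described, with purely hypothetical respondents and a deliberately crude judge (all names and criteria here are my assumptions, not anything from the thread):

```python
import random

def monkey(question: str) -> str:
    # Random, independent typing: the thread's random typist.
    return "".join(random.choice("abcdefghijklmnopqrstuvwxyz ,.") for _ in range(40))

def person(question: str) -> str:
    # Stand-in for a human respondent.
    return "That depends on what you mean by 'understand'."

def coherent(answer: str) -> bool:
    # Crude stand-in for the judge's assessment; a real Turing test
    # would use a human interrogator, not a keyword check.
    return any(w in answer for w in ("you", "mean", "depends", "think"))

def passes(respondent, questions) -> bool:
    # The judge sees only answers, never the respondent's internals.
    return all(coherent(respondent(q)) for q in questions)

questions = ["Do you understand Shakespeare?", "What is meaning?"]
print("monkey passes:", passes(monkey, questions))  # almost surely False
print("person passes:", passes(person, questions))  # True
```

The point the sketch makes is structural: the verdict depends only on question/answer behavior, which is exactly why the onlooker's expectations enter into it.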

 

CSMR

Golden Member
Apr 24, 2004
1,376
2
81
Originally posted by: f95toli
The problem is that the only way to test whether an AI (or in this case the monkey, or why not a student) understands something is to ask questions. Hence, whether or not something "understands" is, as you point out, up to the onlooker. So obviously my criterion for understanding DOES involve the Turing test; I don't think there is any other way to measure it.
I am not sure whether you have misunderstood me or whether you have a very radical take on the world. The question is whether understanding is an objective property of a person (or thing), or whether it is dependent in some way on the onlooker. Is there an objective fact against which an onlooker's opinion can be right or wrong? I was arguing that your views (not on testing, but your criterion for understanding as processing information) would imply a dependence on the onlooker. Now you seem to be going even further and saying the onlooker is necessarily right.
 

CSMR

Golden Member
Apr 24, 2004
1,376
2
81
Could you give us an example of what you would call a mathematical description of a meaning please?
 

CSMR

Golden Member
Apr 24, 2004
1,376
2
81
Originally posted by: f95toli
a very old philosophical problem which is interesting but probably not relevant for computer science.
Oh I'm sure none of all this is relevant to computer science!
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
Originally posted by: CSMR
Originally posted by: f95toli
The problem is that the only way to test whether an AI (or in this case the monkey, or why not a student) understands something is to ask questions. Hence, whether or not something "understands" is, as you point out, up to the onlooker. So obviously my criterion for understanding DOES involve the Turing test; I don't think there is any other way to measure it.
I am not sure whether you have misunderstood me or whether you have a very radical take on the world. The question is whether understanding is an objective property of a person (or thing), or whether it is dependent in some way on the onlooker. Is there an objective fact against which an onlooker's opinion can be right or wrong? I was arguing that your views (not on testing, but your criterion for understanding as processing information) would imply a dependence on the onlooker. Now you seem to be going even further and saying the onlooker is necessarily right.

I am saying that in order to be able to TEST whether or not something/someone understands something we need something like the Turing test. Going back to the Chinese Room problem, I would argue that whether or not the person in the room "understands" the symbols he/she is manipulating is irrelevant; as long as the room as a whole passes the test, we can only conclude that it understands Chinese.
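For readers unfamiliar with the thought experiment, a caricature of the room as pure symbol manipulation might look like this (the rulebook entries are my stand-ins for Searle's rules, not his formulation):

```python
# The room as a rulebook: input squiggles are matched and the prescribed
# output squiggles are copied out. The operator never interprets anything.
RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",   # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会.",   # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    # Blind rule-following; a default reply covers unmatched input.
    return RULEBOOK.get(symbols, "请再说一遍.")  # "Please say that again."

print(chinese_room("你好吗?"))  # fluent-looking output, no comprehension inside
```

From the outside only the input/output behavior is visible, which is the sense in which the room "as a whole" is what gets tested.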

I am also emphasizing the need for experimental verification (which is perhaps not surprising since I work in the field of metrology), and I guess I am also separating the problem into two parts: one scientific ("Did the room pass the test?") and one philosophical ("Does the person in the room understand the symbols?"). The former is the problem I think is relevant for computer science and for whether or not it is possible to build an AI; the latter is a philosophical question which probably does not have an answer, since it can't be tested, and if it can't be tested (even in principle) it is not a scientific theory.

I am not sure what you mean by "objective property". To me, asking for "objective properties" sounds a bit like asking through which slit the particle passes in a double-slit experiment: the question is meaningless, since if we try to measure it (e.g. by blocking one slit) we affect the outcome of the experiment.

 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
Originally posted by: CSMR
Could you give us an example of what you would call a mathematical description of a meaning please?

No; if I could, I would probably be famous, since I would already have invented an AI. ;)

I was using "mathematical" in the sense of "can be understood by a Turing machine". I.e. assume we have an AI (based on a Turing machine) which has passed a Turing test; this implies that we can have a conversation with it, and most people would probably say that the machine "understands the meaning" of the words (or symbols) used in the conversation, regardless of the kind of process that goes on inside the Turing machine.
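To pin down that sense of "mathematical", here is a minimal Turing machine simulator (the machine itself, a bit-flipper, is just my example): everything it does is exhausted by a finite transition table over symbols.

```python
# transitions: (state, symbol) -> (new_state, symbol_to_write, head_move)
FLIP = {
    ("scan", "0"): ("scan", "1", 1),
    ("scan", "1"): ("scan", "0", 1),
    ("scan", "_"): ("halt", "_", 0),  # "_" is the blank symbol
}

def run(transitions, tape_input, state="scan"):
    tape, head = list(tape_input) + ["_"], 0
    while state != "halt":
        state, tape[head], move = transitions[(state, tape[head])]
        head += move
    return "".join(tape).rstrip("_")

print(run(FLIP, "10110"))  # -> "01001"
```

Whether such a purely symbolic description can ever amount to "meaning" is, of course, exactly what the grounding-problem discussion above is about.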


 

EBH

Member
Aug 4, 2006
62
0
0
If intelligence has anything to do with being able to cooperate with another entity, such as another neuron, then yes, neurons have their own intelligence.