No strong AI for you! Searle's Chinese Room Argument


Dissipate

Diamond Member
Jan 17, 2004
6,815
0
0
Originally posted by: f95toli
I re-read the Chinese Room argument. As far as I can tell, the point is that the system is not self-consistent. However, there seem to be two implicit assumptions:
1) The symbols have a "meaning" in themselves, i.e. they are more than just representations of external information (as blackllotus has already pointed out).

HuH? No, the point is that the symbols are inherently meaningless. They only derive their meaning from an external non-symbolic system (i.e. the outside world). Inside a Turing Machine the only 'external information' available is yet more symbols.

2) That there really is a "mind" and that mind "understands" something in the way we have already discussed.
If you (like me) do not agree with those assumptions then the whole argument fails.

I don't see why there can't be a scientifically rational explanation for a 'mind.' To me a 'mind' can just be the non-symbol database within my brain.
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
Originally posted by: Dissipate
Originally posted by: f95toli
I re-read the Chinese Room argument. As far as I can tell, the point is that the system is not self-consistent. However, there seem to be two implicit assumptions:
1) The symbols have a "meaning" in themselves, i.e. they are more than just representations of external information (as blackllotus has already pointed out).

HuH? No, the point is that the symbols are inherently meaningless. They only derive their meaning from an external non-symbolic system (i.e. the outside world). Inside a Turing Machine the only 'external information' available is yet more symbols.

2) That there really is a "mind" and that mind "understands" something in the way we have already discussed.
If you (like me) do not agree with those assumptions then the whole argument fails.

I don't see why there can't be a scientifically rational explanation for a 'mind.' To me a 'mind' can just be the non-symbol database within my brain.

Sorry if I was unclear; what I meant was that the assumption is that the symbols transfer information which, for some reason, cannot somehow be transferred to a Turing machine. To me that looks like an assumption.
There is nothing preventing someone from connecting a video camera to a Turing machine.
As far as I understand it, the "grounding problem" stems from the idea that this connection cannot be made because it is non-symbolic.
In fact, it doesn't seem likely that an AI would be able to pass the Turing test unless it could receive input from the external world, making the whole argument somewhat irrelevant.

About the mind: what I meant was that another implicit assumption is that there really is a "mind" which "understands" concepts.
If you read the "systems reply" section in the wiki you linked to in your original post, you will see what I mean. I would argue that the person does actually understand Chinese regardless of what they say, but I assume you disagree(?).





 

fire400

Diamond Member
Nov 21, 2005
5,204
21
81
Originally posted by: f95toli
Originally posted by: Dissipate
Originally posted by: f95toli
I re-read the Chinese Room argument. As far as I can tell, the point is that the system is not self-consistent. However, there seem to be two implicit assumptions:
1) The symbols have a "meaning" in themselves, i.e. they are more than just representations of external information (as blackllotus has already pointed out).

HuH? No, the point is that the symbols are inherently meaningless. They only derive their meaning from an external non-symbolic system (i.e. the outside world). Inside a Turing Machine the only 'external information' available is yet more symbols.

2) That there really is a "mind" and that mind "understands" something in the way we have already discussed.
If you (like me) do not agree with those assumptions then the whole argument fails.

I don't see why there can't be a scientifically rational explanation for a 'mind.' To me a 'mind' can just be the non-symbol database within my brain.

Sorry if I was unclear; what I meant was that the assumption is that the symbols transfer information which, for some reason, cannot somehow be transferred to a Turing machine. To me that looks like an assumption.
There is nothing preventing someone from connecting a video camera to a Turing machine.
As far as I understand it, the "grounding problem" stems from the idea that this connection cannot be made because it is non-symbolic.
In fact, it doesn't seem likely that an AI would be able to pass the Turing test unless it could receive input from the external world, making the whole argument somewhat irrelevant.

About the mind: what I meant was that another implicit assumption is that there really is a "mind" which "understands" concepts.
If you read the "systems reply" section in the wiki you linked to in your original post, you will see what I mean. I would argue that the person does actually understand Chinese regardless of what they say, but I assume you disagree(?).

So how many minds do you perceive AI to be able to hold? You want a human mind, okay? Do you want a psychopath? What determines whether a particular mindset will be acceptable?
 

CSMR

Golden Member
Apr 24, 2004
1,376
2
81
Originally posted by: f95toli
And again we are back to the problem of defining "understand". To "understand" a piece of information, to me, means roughly "to be able to manipulate that piece of information in a constructive way," and an AI could certainly do that.
So we agree, then, that the Turing criterion is not a good one, at least.
A monkey who translates Shakespeare into French manipulates information in a constructive way (the French version being what is constructed), but does it understand the original text?
 

CSMR

Golden Member
Apr 24, 2004
1,376
2
81
Originally posted by: f95toli
And what makes you think that the symbols have a "meaning"?
If there is no meaning there is nothing to understand! So the thought experiment assumes there is a meaning.
 

Dissipate

Diamond Member
Jan 17, 2004
6,815
0
0
Originally posted by: f95toli
Sorry if I was unclear; what I meant was that the assumption is that the symbols transfer information which, for some reason, cannot somehow be transferred to a Turing machine. To me that looks like an assumption.

Here is another way of explaining the problem.

The grounding problem is, generally speaking, the problem of how to causally connect an artificial agent with its environment such that the agent's behaviour, as well as the mechanisms, representations, etc. underlying it, can be intrinsic and meaningful to itself, rather than dependent on an external designer or observer. It is, for example, rather obvious that your thoughts are in fact intrinsic to yourself, whereas the operation and internal representations of a pocket calculator are extrinsic, ungrounded and meaningless to the calculator itself, i.e. their meaning is parasitic on their interpretation through an external observer/user.

Text

To me, the problem here stems from the fact that the calculator's entire world consists solely of those meaningless symbols. It can only attach symbols to other symbols. So whatever information those symbols carry, it is only going to be other symbols, nothing more and nothing less. Humans, on the other hand, appear to be able to attach symbols to 'abstract concepts' which seem to transcend mere symbols.

This is not an assumption at all; it is patently true by the very definition of a Turing Machine. A Turing Machine is a seven-tuple, and it has two alphabets: an input alphabet and a tape alphabet. Both of these alphabets consist solely of symbols. The definition has no room in it for 'abstract conception' or 'impression' or 'idea.' It is just dots on a tape. That being the case, if someone were ever able to create a computer that contained within it an 'understanding' beyond mere symbols, that computer by definition would no longer be a Turing Machine. Perhaps it would be an eight-tuple containing some other element that it could write to its tape, consisting of who knows what. So while I cannot tell you exactly what the 8th element of the 8-tuple would be, I can tell you that it cannot just be more symbols.
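For concreteness, here is a minimal sketch (in Python, with invented names) of that standard seven-tuple definition; the point it illustrates is that every component is either a set of symbols or a table over symbols, so nothing non-symbolic ever appears anywhere in the machine:

```python
# A minimal sketch of the textbook 7-tuple Turing Machine
# (Q, Sigma, Gamma, delta, q0, q_accept, q_reject). The example machine
# simply scans right and accepts once it reads a '1'.

states       = {"q0", "q_accept", "q_reject"}   # Q
input_alpha  = {"0", "1"}                       # Sigma
tape_alpha   = {"0", "1", "_"}                  # Gamma (includes the blank '_')
start, accept, reject = "q0", "q_accept", "q_reject"

# delta: (state, tape symbol) -> (next state, symbol to write, head move)
delta = {
    ("q0", "0"): ("q0", "0", +1),
    ("q0", "1"): ("q_accept", "1", +1),
    ("q0", "_"): ("q_reject", "_", +1),
}

def run(tape_string, max_steps=1000):
    """Simulate the machine; everything it ever reads or writes is a symbol."""
    tape = dict(enumerate(tape_string))
    state, head = start, 0
    for _ in range(max_steps):
        if state in (accept, reject):
            return state
        symbol = tape.get(head, "_")
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += move
    return None  # did not halt within the step bound

print(run("0001"))  # -> q_accept
```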

There is nothing preventing someone from connecting a video camera to a Turing machine.
As far as I understand it, the "grounding problem" stems from the idea that this connection cannot be made because it is non-symbolic.
In fact, it doesn't seem likely that an AI would be able to pass the Turing test unless it could receive input from the external world, making the whole argument somewhat irrelevant.

You can get a camera attached to a Turing Machine, no doubt, but whatever information comes out of that camera is only going to be symbols. What I am trying to explain is that you would have to hook the camera up to some other kind of machine, theoretically more advanced than a Turing Machine, in order for the machine to be able to get past the symbolic merry-go-round.

About the mind: what I meant was that another implicit assumption is that there really is a "mind" which "understands" concepts.
If you read the "systems reply" section in the wiki you linked to in your original post, you will see what I mean. I would argue that the person does actually understand Chinese regardless of what they say, but I assume you disagree(?).

I disagree, because if the Chinese were, say, a story, and the person was then shown a series of images, they wouldn't have a clue whether the Chinese they had just processed described the images they were shown.

For instance, if they followed the rule book and processed a story about a man who went into the woods and picked some berries, and then they were shown pictures of a man going into the woods and picking berries, they wouldn't necessarily have any idea that these images represented the story they had just processed. A native Chinese speaker, of course, would have no trouble doing so.
 

smack Down

Diamond Member
Sep 10, 2005
4,507
0
0
Originally posted by: Dissipate
Originally posted by: blackllotus
Originally posted by: Dissipate
Originally posted by: blackllotus

That was my point. You still didn't answer my question.


It cannot be broken down into yet more tokens. Then it would be tokens all the way down, which are innately meaningless. In other words, in order for the symbols to have any kind of meaning, the meaning must come from outside the symbolic system itself, i.e. it must come from a non-symbol. Read the above document on the problem of learning Chinese from just a Chinese dictionary.

Harnad intentionally structures the "Dictionary-Go-Round" problem so that it is impossible to solve. Words have associated sounds, pictures, tastes, and feelings that a dictionary cannot convey. Language is a medium to convey these senses. For a machine to perfectly emulate a human's ability to speak, it must be "aware" of these senses as well. How would a program that does not understand the meaning of words in a language be able to understand complex commands like "write a program that makes loud noises"? A program that merely associated the word "loud" with phrases like "high volume" would not be able to perform this task.

But that is precisely the point! A computer that can only take symbols as input (not tastes, feelings, or impressions) really only has the dictionary to work with. And the dictionary's definitions contain only yet more symbols. So those symbols are not grounded in anything but other symbols. The central problem is how you get those symbols grounded to something other than symbols inside the computer. Humans are naturally gifted at doing this; Turing Machines, by definition, never will be.

It is the same problem that deaf people had before sign language. They had no way to understand the written symbols and were seen as deaf and dumb.

Any AI is going to need inputs from the world just like people do. For example, my thermostat can feel that it is cold, and when it feels cold it turns on the heat. I don't think computer science will bring us better AI; I think better AI is going to come from people working on control systems.

Every real system is just a set of outputs determined by the current and previous inputs.
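As a toy illustration of that claim (and of the thermostat example above), here is a hedged sketch in Python; the temperature thresholds are invented for the example:

```python
# "Outputs determined by current and previous inputs": a thermostat with
# hysteresis. Between the two thresholds, the output depends on history.

class Thermostat:
    def __init__(self, low=18.0, high=21.0):
        self.low, self.high = low, high
        self.heating = False          # previous state feeds into the next output

    def step(self, temperature):
        if temperature < self.low:
            self.heating = True
        elif temperature > self.high:
            self.heating = False
        # between the thresholds, keep doing whatever we did before
        return self.heating

t = Thermostat()
print([t.step(x) for x in (17.0, 19.0, 22.0, 19.0)])
# -> [True, True, False, False]: the same 19.0 input produces different
#    outputs, because the output also depends on the history of inputs.
```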
 

blackllotus

Golden Member
May 30, 2005
1,875
0
0
Originally posted by: Dissipate
But that is precisely the point! A computer that can only take symbols as input (not tastes, feelings, or impressions) really only has the dictionary to work with. And the dictionary's definitions contain only yet more symbols. So those symbols are not grounded in anything but other symbols. The central problem is how you get those symbols grounded to something other than symbols inside the computer. Humans are naturally gifted at doing this; Turing Machines, by definition, never will be.

A computer can be supplemented with tastes, feelings, impressions, etc. Remember that what we interpret as taste is really just a combination of inputs from different taste receptors on the tongue. Create a mechanical version of these taste receptors and you can easily create a program to recognize tastes.
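As a rough sketch of that idea (not an actual model of gustation; the receptor channels and reference vectors below are invented for illustration), a nearest-match classifier over receptor activations might look like this:

```python
# Toy "taste recognition" as pattern matching over receptor activations.
# The five channels and the reference patterns are made-up example values.

import math

# (sweet, salty, sour, bitter, umami) activation levels, each in 0..1
REFERENCES = {
    "sugar water": (0.9, 0.1, 0.1, 0.0, 0.1),
    "soy sauce":   (0.1, 0.8, 0.2, 0.1, 0.7),
    "lemon juice": (0.2, 0.1, 0.9, 0.1, 0.0),
}

def classify(sample):
    """Return the reference taste whose activation pattern is closest."""
    def distance(ref):
        return math.dist(sample, REFERENCES[ref])
    return min(REFERENCES, key=distance)

print(classify((0.8, 0.2, 0.2, 0.0, 0.1)))  # -> "sugar water"
```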

Also, what you should be focusing on is the Turing Test, not a Turing Machine. What is relevant is not whether a machine is a Turing Machine, but whether a machine can pass the Turing Test (the machine may have to be Turing complete to achieve this, but AFAIK it is not a proven requirement).
 

blackllotus

Golden Member
May 30, 2005
1,875
0
0
Originally posted by: CSMR
Originally posted by: f95toli
And again we are back to the problem of defining "understand". To "understand" a piece of information, to me, means roughly "to be able to manipulate that piece of information in a constructive way," and an AI could certainly do that.
So we agree, then, that the Turing criterion is not a good one, at least.
A monkey who translates Shakespeare into French manipulates information in a constructive way (the French version being what is constructed), but does it understand the original text?

One needs to understand the source language to create a decent translation. Certain idioms in one language make no sense when literally translated into another (e.g. saying "break a leg" in French makes no sense).
 

Dissipate

Diamond Member
Jan 17, 2004
6,815
0
0
Originally posted by: blackllotus
Originally posted by: Dissipate
But that is precisely the point! A computer that can only take symbols as input (not tastes, feelings, or impressions) really only has the dictionary to work with. And the dictionary's definitions contain only yet more symbols. So those symbols are not grounded in anything but other symbols. The central problem is how you get those symbols grounded to something other than symbols inside the computer. Humans are naturally gifted at doing this; Turing Machines, by definition, never will be.

A computer can be supplemented with tastes, feelings, impressions, etc. Remember that what we interpret as taste is really just a combination of inputs from different taste receptors on the tongue. Create a mechanical version of these taste receptors and you can easily create a program to recognize tastes.

Also, what you should be focusing on is the Turing Test, not a Turing Machine. What is relevant is not whether a machine is a Turing Machine, but whether a machine can pass the Turing Test (the machine may have to be Turing complete to achieve this, but AFAIK it is not a proven requirement).

So when you taste something, is all you are doing recognizing the taste?
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: Dissipate
So when you taste something, is all you are doing recognizing the taste?

(I haven't been around in a while, but I wandered back in and -- hey! -- a topic near and dear to my heart.)

If you break 'taste' down far enough, it consists of electrochemical inputs to your brain from specialized nerve receptor cells. You have probably learned to associate those inputs with other abstract concepts, related memories, etc. as well as certain 'hardwired' responses (ie, toxins and other 'bad' substances generally taste/smell 'bad' to you and may even force involuntary physiological reactions). I would argue that what your brain actually does is a form of symbolic processing on the inputs from those nerves.

You can argue that it is impossible to assign any intrinsic or 'external' meaning to internal semantics or 'experiences'. How you "experience" a sensory input (like the taste of a particular food) may be totally different from how I "experience" the same input -- and I'm not even sure how you could try to relate them directly.

Earlier, you mentioned the concept of experiencing pain. Well, there is a genetic disorder (the name escapes me at the moment) where a person does not have the nerve cells that enable you to feel physical pain. How would you explain what the "sensation" or "experience" of pain is to someone with that condition? You could describe it as "an unpleasant sensation you get when something is damaging your body" -- but then trying to break down "unpleasant sensation" will invariably lead you back to something like "a sensation like the one you get when you feel pain", or a relation to some other arbitrary 'sensation'. The actual "experience" you have is something that can only exist internally, as an internal state in your mind (which may or may not have any analog in the 'real world').

You can make a similar argument about trying to explain what a color is to someone who has been blind from birth. You can teach them all you want about electromagnetic radiation, how the optic nerve and visual cortex work, etc. -- but how you experience, say, "redness" has no real external meaning that can be translated objectively into 'reality'. It's merely a reaction elicited by a specific set of inputs to your visual cortex.

The Ziemke paper you linked above says:

the key problem in the attempt to create truly grounded and rooted AI systems is first and foremost the problem of 'getting there', i.e. the question how, if at all, artificial agents could construct and self-organize themselves and their own environmental embedding.

I would argue that humans and other apparently "intelligent" organisms haven't truly solved the grounding problem, either. It is impossible for anyone to truly know that their internal sensations/experiences actually correlate to anything in the 'real world' -- therefore, having such a relationship is not actually a prerequisite for "intelligence". And many of the "feelings" you have are largely hardwired into you by millions of years of evolution; you don't have to 'learn' how to be driven to want food/water, or to have a will to survive and reproduce, or how to hook your senses up in a way that presents the information usefully.

Edit:

I also saw this in your last reply:

I disagree, because if the Chinese were, say, a story, and the person was then shown a series of images, they wouldn't have a clue whether the Chinese they had just processed described the images they were shown.

For instance, if they followed the rule book and processed a story about a man who went into the woods and picked some berries, and then they were shown pictures of a man going into the woods and picking berries, they wouldn't necessarily have any idea that these images represented the story they had just processed. A native Chinese speaker, of course, would have no trouble doing so.

The way I've generally heard the Chinese Room problem stated, the 'operator' may actually change the content of later 'outputs' based on the 'inputs' being fed in (ie, the 'rule book' is not static). In this case, assuming appropriate 'programming', the Chinese Room system could correctly answer questions about earlier inputs, even though the 'operator' might not have that knowledge if the questions were presented directly to them in their native language. The problem is usually stated in terms of linguistic input/output, but you could extend it to deal with visual inputs as well; you just need some thicker 'rule books' and more sophisticated 'programming'. :p
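A toy sketch of that "non-static rule book" idea (Python; the tokens and rules are placeholders, not anything from Searle's paper): the operator only pattern-matches, yet the room as a whole can answer questions about earlier inputs.

```python
# The operator blindly applies rules; the rules may also read and write a
# scratch ledger, so the room can refer back to earlier inputs without the
# operator ever interpreting any of the tokens.

ledger = []  # state the rules are allowed to update

def rule_book(tokens):
    """Match token patterns mechanically; never interpret them."""
    if tokens[0] == "STORE":          # rule: remember whatever follows
        ledger.append(tokens[1:])
        return ["ACK"]
    if tokens[0] == "RECALL":         # rule: emit the n-th remembered item
        index = int(tokens[1])
        return list(ledger[index])
    return ["UNKNOWN"]

print(rule_book(["STORE", "man", "woods", "berries"]))  # -> ['ACK']
print(rule_book(["RECALL", "0"]))                       # -> ['man', 'woods', 'berries']
```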
 

Dissipate

Diamond Member
Jan 17, 2004
6,815
0
0
Originally posted by: Matthias99

(I haven't been around in a while, but I wandered back in and -- hey! -- a topic near and dear to my heart.)

If you break 'taste' down far enough, it consists of electrochemical inputs to your brain from specialized nerve receptor cells. You have probably learned to associate those inputs with other abstract concepts, related memories, etc. as well as certain 'hardwired' responses (ie, toxins and other 'bad' substances generally taste/smell 'bad' to you and may even force involuntary physiological reactions). I would argue that what your brain actually does is a form of symbolic processing on the inputs from those nerves.

You can argue that it is impossible to assign any intrinsic or 'external' meaning to internal semantics or 'experiences'. How you "experience" a sensory input (like the taste of a particular food) may be totally different from how I "experience" the same input -- and I'm not even sure how you could try to relate them directly.

Earlier, you mentioned the concept of experiencing pain. Well, there is a genetic disorder (the name escapes me at the moment) where a person does not have the nerve cells that enable you to feel physical pain. How would you explain what the "sensation" or "experience" of pain is to someone with that condition? You could describe it as "an unpleasant sensation you get when something is damaging your body" -- but then trying to break down "unpleasant sensation" will invariably lead you back to something like "a sensation like the one you get when you feel pain", or a relation to some other arbitrary 'sensation'. The actual "experience" you have is something that can only exist internally, as an internal state in your mind (which may or may not have any analog in the 'real world').

You can make a similar argument about trying to explain what a color is to someone who has been blind from birth. You can teach them all you want about electromagnetic radiation, how the optic nerve and visual cortex work, etc. -- but how you experience, say, "redness" has no real external meaning that can be translated objectively into 'reality'. It's merely a reaction elicited by a specific set of inputs to your visual cortex.

The Ziemke paper you linked above says:

the key problem in the attempt to create truly grounded and rooted AI systems is first and foremost the problem of 'getting there', i.e. the question how, if at all, artificial agents could construct and self-organize themselves and their own environmental embedding.

I would argue that humans and other apparently "intelligent" organisms haven't truly solved the grounding problem, either. It is impossible for anyone to truly know that their internal sensations/experiences actually correlate to anything in the 'real world' -- therefore, having such a relationship is not actually a prerequisite for "intelligence". And many of the "feelings" you have are largely hardwired into you by millions of years of evolution; you don't have to 'learn' how to be driven to want food/water, or to have a will to survive and reproduce, or how to hook your senses up in a way that presents the information usefully.

That's all fine and good, just as long as the internal representations of pain, color, and other sensations are not, and cannot be, represented by mere arbitrary symbols themselves. In fact, I suppose they could even be represented by non-symbols generated by experiences in virtual reality environments. The critical thing here is that at some point the brain is able to escape the endless arbitrary symbol loop. This doesn't have to do with other individuals, either. The grounding problem doesn't lay any claim to objective reality (as far as I can tell).

The way I've generally heard the Chinese Room problem stated, the 'operator' may actually change the content of later 'outputs' based on the 'inputs' being fed in (ie, the 'rule book' is not static). In this case, assuming appropriate 'programming', the Chinese Room system could correctly answer questions about earlier inputs, even though the 'operator' might not have that knowledge if the questions were presented directly to them in their native language. The problem is usually stated in terms of linguistic input/output, but you could extend it to deal with visual inputs as well; you just need some thicker 'rule books' and more sophisticated 'programming'. :p

You are right. My reply wasn't that good since the machine could be extended to include visual inputs. The underlying point of the Chinese Room argument is that the person who knows not a single word of Chinese could perfectly mimic the machine with the same rulebook at hand that the machine was using.

Edit:

BTW, more on the symbol grounding problem: Text

I think this is the best one yet. It covers different aspects of SGP.
 

CSMR

Golden Member
Apr 24, 2004
1,376
2
81
Originally posted by: blackllotus
One needs to understand the source language to create a decent translation. Certain idioms in one language make no sense when literally translated into another (e.g. saying "break a leg" in French makes no sense).
Some might say the monkeys are typing "randomly". Perhaps they have typed out millions of works that make no sense. Would you still say they "understand" the source language? In the second case, they do not even see the original work.
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: Dissipate
That's all fine and good, just as long as the internal representations of pain, color, and other sensations are not, and cannot be, represented by mere arbitrary symbols themselves. In fact, I suppose they could even be represented by non-symbols generated by experiences in virtual reality environments. The critical thing here is that at some point the brain is able to escape the endless arbitrary symbol loop. This doesn't have to do with other individuals, either. The grounding problem doesn't lay any claim to objective reality (as far as I can tell).

It doesn't necessarily lay a claim to 'objective' reality, but it does lay a claim to some 'external' reality that you interact with. It is assumed that there is something to be grounded to, and things in that external environment have some semantic meaning in and of themselves.

I would argue that when people complain that, say, a Turing Machine's internal representations are inadequate, they are just splitting hairs. Your internal representations of "feelings" or "experiences" are just as arbitrary -- albeit significantly more complex. They have no meaning outside of your own head.

The underlying point of the Chinese Room argument is that the person who knows not a single word of Chinese could perfectly mimic the machine with the same rulebook at hand that the machine was using.

...then wouldn't this imply that the operator/rulebook system as a whole does understand Chinese? It's not like every neuron in your head 'understands' English, but your brain as a functional whole does.

But trying to (as Searle discusses) learn Chinese as a first language from a Chinese dictionary is impossible because the language itself has an embedded semantic meaning that you have no access to -- not because it is impossible to learn external semantics when they are accessible. If you have the syntactical rules for the language, and you know what the tokens are, you can produce well-formed statements -- but it is impossible for such statements to have any externally recognizable semantic meaning if you are given nothing else. It's possible to create a semantic meaning for the tokens, and even to create a shared semantic meaning, but trying to figure out how to 'bootstrap' yourself from nothing to a preexisting semantic meaning is, IMO, asking the wrong question.
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
Originally posted by: CSMR
Originally posted by: f95toli
And again we are back to the problem of defining "understand". To "understand" a piece of information, to me, means roughly "to be able to manipulate that piece of information in a constructive way," and an AI could certainly do that.
So we agree, then, that the Turing criterion is not a good one, at least.
A monkey who translates Shakespeare into French manipulates information in a constructive way (the French version being what is constructed), but does it understand the original text?

No, not quite. You need to keep in mind that the basic premise of this discussion is that the AI (which is assumed to be a Turing machine) has already passed the Turing test. Now, according to the Chinese Room argument, passing the Turing test does NOT mean that the AI "understands" anything; but I would argue that in order to pass the test the AI must be able to manipulate information in a very sophisticated way, and therefore one can claim that the AI understands information. Hence, a random number generator (which I guess is what you are referring to with your monkey example; bad example btw, a monkey is definitely intelligent) does not "understand" information in this context.
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
Originally posted by: CSMR
Originally posted by: f95toli
And what makes you think that the symbols have a "meaning"?
If there is no meaning there is nothing to understand! So the thought experiment assumes there is a meaning.

What I was referring to was of course "meaning" in the metaphysical sense, which is what I believe he was referring to, i.e. the idea that ideas and meaning "exist" in a way which transcends mathematical description (something akin to an Idealist interpretation of Plato's cave).
 

blackllotus

Golden Member
May 30, 2005
1,875
0
0
How could a program that does not understand the underlying meaning behind English words perform a command like "invent a computer language that incorporates the best features from C++ and Java and is easily parsed"?

EDIT: While a machine does not necessarily have to be able to answer this question to pass the Turing Test, you would have a hard time convincing me that it is intelligent like humans if it craps out on complicated questions.
 

CycloWizard

Lifer
Sep 10, 2001
12,348
1
81
Originally posted by: Dissipate
It is not a matter of being an expert programmer; it is a matter of fact that there is no algorithm for determining whether or not a mathematical statement is true or false. This has been proven. If there were, then we wouldn't need mathematicians to verify the work of other mathematicians. We would just plug their proof into a program and see if the result is TRUE or FALSE.
There are actually algorithms that attempt to prove things on their own. One of my undergrad calc teachers (looks like she's at a different school now, but I think this is her) does research in this area. Even MATLAB has symbolic operator capability (as a toolbox add-on). Maybe I don't understand what you're trying to say, because it's pretty clear that algorithms can be employed to make deductions from purely symbolic statements. Instead, it seems to me that we don't have proof-checking software because of the difficulty in following human logic and the difficulty in programming more complex symbolic logic, not because it's impossible.
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: CycloWizard
Originally posted by: Dissipate
It is not a matter of being an expert programmer; it is a matter of fact that there is no algorithm for determining whether or not a mathematical statement is true or false. This has been proven. If there were, then we wouldn't need mathematicians to verify the work of other mathematicians. We would just plug their proof into a program and see if the result is TRUE or FALSE.
There are actually algorithms that attempt to prove things on their own. One of my undergrad calc teachers (looks like she's at a different school now, but I think this is her) does research in this area. Even MATLAB has symbolic operator capability (as a toolbox add-on). Maybe I don't understand what you're trying to say, because it's pretty clear that algorithms can be employed to make deductions from purely symbolic statements. Instead, it seems to me that we don't have proof-checking software because of the difficulty in following human logic and the difficulty in programming more complex symbolic logic, not because it's impossible.

His argument is probably an attempt at referencing Gödel's Incompleteness Theorem.

That (widely misunderstood) theorem states that in certain types of mathematical systems/languages, there are statements which are "true" (or acceptable in terms of the language) but that cannot be proven from other statements in the system/language. Plus, in many cases there is no way to know if you have come up with all the 'true' statements that are possible, and you often cannot even come up with an algorithm that will determine the truth of an arbitrary proof/statement and is guaranteed to halt.

In practice, you can solve or prove many useful things mechanically, but in general you can't find every possible mathematical proof by exhaustive enumeration. Or at least that's a grossly simplified version of it.
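For contrast, here is a small sketch of the part that is mechanical: propositional logic over finitely many variables is decidable by a brute-force truth-table check (the formula and helper below are purely illustrative); the Gödel-style limits only appear in richer systems such as arithmetic, which this toy does not touch.

```python
# Brute-force tautology checking for propositional formulas given as
# Python predicates over a dict of truth values.

from itertools import product

def is_tautology(formula, variables):
    """Evaluate the formula under every truth assignment to its variables."""
    return all(
        formula(dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# (p -> q) is equivalent to (not q -> not p), i.e. contraposition.
contraposition = lambda v: ((not v["p"]) or v["q"]) == (v["q"] or (not v["p"]))
print(is_tautology(contraposition, ["p", "q"]))  # -> True
```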
 

Jeff7

Lifer
Jan 4, 2001
41,596
19
81
Originally posted by: CycloWizard
I'm not an AI expert by any stretch, but I'm not sure that the logic there is sound. My understanding is that AI should be adaptive - that it essentially 'learns' based on its previous 'experiences'.

Or perhaps another test - ask it the same question over and over. A normal computer or low-level AI would simply continue to answer it time after time. A higher-level AI might finally notice that what it's doing seems quite repetitive, and would inquire on its own as to why it is being asked the same thing over and over again.
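A toy version of that repetition test might look like the following sketch (the threshold and the wording of the replies are arbitrary):

```python
# Count repeated questions and switch behavior once a question has been
# asked "too many" times.

from collections import Counter

class Responder:
    def __init__(self, repeat_threshold=3):
        self.seen = Counter()
        self.repeat_threshold = repeat_threshold

    def answer(self, question):
        self.seen[question] += 1
        if self.seen[question] >= self.repeat_threshold:
            return "You've asked me that %d times now. Why?" % self.seen[question]
        return "Answering: " + question

bot = Responder()
for _ in range(4):
    print(bot.answer("What time is it?"))
# The third and fourth replies switch from answering to asking why.
```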
 

Dissipate

Diamond Member
Jan 17, 2004
6,815
0
0
Originally posted by: Matthias99
It doesn't necessarily lay a claim to 'objective' reality, but it does lay a claim to some 'external' reality that you interact with. It is assumed that there is something to be grounded to, and things in that external environment have some semantic meaning in and of themselves.

Semantics are the non-symbols themselves. They are what is retrieved when processing a symbol that is attached to the non-symbol. There certainly is something to be grounded to: non-symbolic forms of information.

I would argue that when people complain that, say, a Turing Machine's internal representations are inadequate, they are just splitting hairs. Your internal representations of "feelings" or "experiences" are just as arbitrary -- albeit significantly more complex. They have no meaning outside of your own head.

Aha, they are more complex, which means that they are more than just mere symbols. Complaining that a Turing Machine's internal representation is inadequate is not splitting hairs. A Turing Machine has a precise definition which in no way leaves any room for any kind of processing of non-symbolic information. If you want to talk about 'upgrading' the Turing Machine to something that includes non-symbolic information, that's a different story.


It's possible to create a semantic meaning for the tokens, and even to create a shared semantic meaning, but trying to figure out how to 'bootstrap' yourself from nothing to a preexisting semantic meaning is, IMO, asking the wrong question.

I don't think it is asking the wrong question at all. The Symbol Grounding Problem is a well-known problem in cognitive science. It is a real problem and has real implications for overcoming the hurdles necessary for achieving strong AI. The symbolist position is simply untenable. I don't even think that symbolists believe their own position. If they did, their only consistent course of action would be to treat all others (and themselves) as mere symbol-processing automatons whose pain, feelings, writings, and speech are all just a bunch of arbitrary symbols. In other words, so many ridiculous conclusions follow from the symbolist position that it is safe to say we can throw it in the dustbin.
 

CycloWizard

Lifer
Sep 10, 2001
12,348
1
81
Originally posted by: Dissipate
Aha, they are more complex, which means that they are more than just mere symbols. Complaining that a Turing Machine's internal representation is inadequate is not splitting hairs. A Turing Machine has a precise definition which in no way leaves any room for any kind of processing of non-symbolic information. If you want to talk about 'upgrading' the Turing Machine to something that includes non-symbolic information, that's a different story.
Perhaps you covered this and I missed it, but what kind of thing cannot be represented as a symbol? I saw that you suggested pain, color, and sensations as things that cannot be represented by only arbitrary symbols, as they may differ internally. However, there is extensive evidence that the symbols (i.e. action potential trains) that you experience for a given wavelength of light are the same as the ones I do. Thus, externally, we cannot determine whether what I 'see' at a given wavelength is different from what you 'see' at the same wavelength, but our bodies represent them symbolically in almost exactly the same way. You may respond to a given color differently, but this response could also be governed by subconscious rules. The bottom line is that there isn't enough information available for you to claim that these phenomena are independent of their neurological encoding, just as there is insufficient data for the opposite claim. However, there is strong evidence that a computer can differentiate between different colors, encode them, and provide feedback based on them. So, I ask: what's the difference? You claim subjective differences, but can you objectively make a claim either way?
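On the last point, here is a minimal sketch of "differentiate, encode, and provide feedback on" colors; the band boundaries are approximate, illustrative values, not a claim about human vision:

```python
# Map a wavelength (nm) to a coarse color label.

BANDS = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def color_name(wavelength_nm):
    for low, high, name in BANDS:
        if low <= wavelength_nm < high:
            return name
    return "outside the visible range"

print(color_name(520))  # -> "green"
print(color_name(640))  # -> "red"
```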
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: Dissipate
Semantics are the non-symbols themselves. They are what is retrieved when processing a symbol that is attached to the non-symbol. There certainly is something to be grounded to: non-symbolic forms of information.

I.e., you are assuming there is actually some sort of arbitrary non-symbolic information for the system to be grounded to. If you're the only thinking entity in the world, the only semantics that matter are the ones you come up with.

I would argue that when people complain that, say, a Turing Machine's internal representations are inadequate, they are just splitting hairs. Your internal representations of "feelings" or "experiences" are just as arbitrary -- albeit significantly more complex. They have no meaning outside of your own head.

Aha, they are more complex, which means that they are more than just mere symbols.

I meant "more complex" as in "not as easy to measure", not "fundamentally different in some meaningful way".

A Turing Machine has a precise definition which in no way leaves any room for any kind of processing of non-symbolic information. If you want to talk about 'upgrading' the Turing Machine to something that includes non-symbolic information, that's a different story.

I would argue that other "intelligent" systems do not really process non-symbolic information either.

It's possible to create a semantic meaning for the tokens, and even to create a shared semantic meaning, but trying to figure out how to 'bootstrap' yourself from nothing to a preexisting semantic meaning is, IMO, asking the wrong question.

I don't think it is asking the wrong question at all. The Symbol Grounding Problem is a well-known problem in cognitive science. It is a real problem and has real implications for overcoming the hurdles necessary for achieving strong AI.

I think it is an interesting question, but more of a metaphysical or philosophical question than a practical one.

The symbolist position is simply untenable. I don't even think that symbolists believe their own position. If they did, their only consistent course of action would be to treat all others (and themselves) as mere symbol-processing automatons whose pain, feelings, writings, and speech are all just a bunch of arbitrary symbols. In other words, so many ridiculous conclusions follow from the symbolist position that it is safe to say we can throw it in the dustbin.

Again, I would argue that intelligence, at least to a very large extent, consists of sophisticated symbolic processing.

It is convenient for most people to assume that they are observing some external reality occupied by other thinking agents similar to themselves. It is impossible, however, to ever actually know that this is true, as uncomfortable as that seems. And the assumption that the entities you appear to be observing are feeling beings is not inconsistent with this view -- since you cannot prove they are actually unfeeling automata, you should err on the safe side in your interactions with them and assume they are actually other intelligent actors.
 

blackllotus

Golden Member
May 30, 2005
1,875
0
0
Originally posted by: Dissipate
I would argue that when people complain that, say, a Turing Machine's internal representations are inadequate, they are just splitting hairs. Your internal representations of "feelings" or "experiences" are just as arbitrary -- albeit significantly more complex. They have no meaning outside of your own head.

Aha, they are more complex, which means that they are more than just mere symbols.

They can be the result of a collection of symbols.

Originally posted by: Dissipate
Complaining that a Turing Machine's internal representation is inadequate is not splitting hairs. A Turing Machine has a precise definition which in no way leaves any room for any kind of processing of non-symbolic information. If you want to talk about 'upgrading' the Turing Machine to something that includes non-symbolic information, that's a different story.

It's more relevant to talk in terms of a hypothetical machine that passes the Turing Test. A Turing Machine will not pass the Turing Test on its own.
 

CSMR

Golden Member
Apr 24, 2004
1,376
2
81
Originally posted by: f95toli
No, not quite... I would argue that in order to pass the test the AI must be able to manipulate information in a very sophisticated way and therefore one can claim that the AI understands information.
I see. I was assuming that not every human processes information in a sophisticated way, and may well demonstrate stupidity rather than understanding in a Turing test, but that depends on how lenient we are being with words.
Hence, a random number generator (which I guess is what you are referring to with your monkey example; bad example btw, a monkey is definitely intelligent) does not "understand" information in this context.
I am not interested in whether the monkey is intelligent or not in general, only whether it understands Shakespeare. Let us assume its typing is random and independent in our assessments. Now you say it does not manipulate the information in a sophisticated way, and so does not understand it? On what basis are you saying that it does not process the information in a sophisticated way? On the basis of our probability assessments? Then the understanding of the monkey is a property of our expectations.