No strong AI for you! Searle's Chinese Room Argument


Dissipate

Diamond Member
Jan 17, 2004
6,815
0
0
Originally posted by: f95toli
Yes, of course. Assuming that it was an AI and that AI could both register something akin to pain AND it did not like that sensation.

AI or no AI, a computer today could certainly generate the same signals as a human brain does when it feels pain. They have imaging technology now that can accurately map the brain waves that occur, which of course could be programmed into a computer.

What I am saying is that while morality is always to a large extent arbitrary, much of it is obviously founded on evolutionary principles; i.e., most people would do anything to protect their children or other members of their family, and it is easy to see why that makes sense from an evolutionary point of view.
Now, regarding the man in the building, I think your assumption that it would be rational to go about my business is not necessarily correct. First of all, it is in my best interest to prevent such incidents in general, simply because next time it might happen to me or members of my family (or anyone else I am related to). Moreover, humans have evolved to be very social creatures, and in order for our society to work we need rules. Hence, murder is "bad" for several reasons.
Now, due to what seems to be a "side effect" of this, most people (including me) object to e.g. cruelty to animals unless there is a very good reason (e.g. some medical research). In this case morality is more arbitrary and related to your specific culture, and that is at least part of the reason not everyone shares this view. Hence, I suspect that if we ever manage to build a true AI and it turns out that it actually had a "will to live", a lot of people would object to killing it. I would probably share that view.

Now, I acknowledge that it is NOT a rational view, but people (including me) are not rational beings and - as I have already written - morality is therefore to a large extent arbitrary (unless, of course, you invoke religion).

So in other words you condemn torture not because the person being tortured is in an extreme amount of horrific pain but simply because torture in general has undesirable consequences for society at large. I don't believe that you really believe that.

 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
Originally posted by: Dissipate
So in other words you condemn torture not because the person being tortured is in an extreme amount of horrific pain but simply because torture in general has undesirable consequences for society at large. I don't believe that you really believe that.

That is not what I wrote. I condemn torture because of empathy; I simply don't want other people to suffer. But that does not answer the question of WHY I don't want other people to suffer; I feel bad when I hear that someone has been tortured. But WHY I feel bad is still an interesting question, i.e. the question is why normal human brains are wired that way.


 

CSMR

Golden Member
Apr 24, 2004
1,376
2
81
Originally posted by: CycloWizard
Similarly, I would think that strong AI should not always give the same answer to the same question. It should learn from questions asked in the interim and adjust the parameters of its rules to achieve a better answer. I think this is manifested indirectly in the Turing Test, since the human judge would otherwise recognize the machine by asking the exact same question multiple times. The human would not give the exact same answer every time, while the machine (in absence of adaptive AI) surely would.
This is just a question of where data is stored. If you allow the man in the computer to have his own collection of Chinese (for example) characters to manipulate in accordance with rules (i.e. his own storage), then he can do anything an adaptive computer can, given the right rulebook.
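A toy sketch of what I mean (Python, purely illustrative; all names are made up): a responder that follows nothing but fixed rules, yet varies its answers over time because the rulebook lets it keep its own store of symbols.

```python
# A purely rule-driven responder with its own symbol store ("the man's
# private pile of characters"). Nothing here "understands" anything.
class RuleBookResponder:
    def __init__(self):
        self.store = {}  # the man's private collection of symbols

    def respond(self, question: str) -> str:
        # Rule: count how often this exact question has been seen before
        # and weave that count into the reply.
        seen = self.store.get(question, 0)
        self.store[question] = seen + 1
        if seen == 0:
            return f"Answer to {question!r}"
        return f"You already asked {question!r} {seen} time(s); answer unchanged"

r = RuleBookResponder()
print(r.respond("2+2?"))  # Answer to '2+2?'
print(r.respond("2+2?"))  # You already asked '2+2?' 1 time(s); answer unchanged
```

Nothing "adapts" in any deep sense; the rulebook simply consults the storage it has been allowed to keep.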
 

CSMR

Golden Member
Apr 24, 2004
1,376
2
81
Originally posted by: f95toli
There is a reason why the Turing test is the de facto definition of intelligence, all other attempts to find a working definition have failed.
Not a very good reason, since intelligence has to do with understanding (intellego = "I understand") and appearing like a human is quite another thing. One may have understanding without appearing like a human, one may have understanding while appearing like a human, one may have no understanding while not appearing like a human, and one may have no understanding while appearing like a human (or even being a human).
 

CSMR

Golden Member
Apr 24, 2004
1,376
2
81
Originally posted by: f95toli
And what does "feel" mean? As far as we know "feelings" are nothing more than electrochemical processes meaning they can in principle be simulated by a computer.
You have got things back to front. Feeling is a matter of sensation, and physical theories describe sensation. If you don't know what sensation is, then it is nonsense to assert that a physical theory is true. Fortunately everyone learns what sensation is, and on this basis we can build up to scientific theories.
 

CSMR

Golden Member
Apr 24, 2004
1,376
2
81
Originally posted by: Dissipate
Build me a Turing Machine that can classify each mathematical statement fed into it as:
True
False
Unknown
Undecidable

Mathematicians do this every day.
:D

Conclusion:
Strong AI in the form of a simple Turing Machine: no way in hell
Strong AI in the form of an artificial brain similar to our own: maybe
A mathematician can't do this. A mathematician can put some statements into one of your categories (NB: unknown shouldn't be there), as can a Turing machine. Whatever a mathematician can do, a Turing machine can do. There is a Turing machine which, given any mathematical statement, can prove it is in one of those categories if that can be proved, and otherwise will not stop. A mathematician of course can do no better, except by giving up earlier.
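If you want to see what such a machine looks like, here is a hedged sketch (Python; `is_valid_proof` stands in for a mechanical proof checker for some fixed formal system, and is assumed rather than implemented). Hopelessly inefficient, but that is beside the point:

```python
# Enumerate every finite string over the proof alphabet, shortest first,
# and halt when one of them proves the statement or its negation.
from itertools import count, product

ALPHABET = "abcdefghijklmnopqrstuvwxyz()->&|~ "

def classify(statement, is_valid_proof):
    """Halts with 'provable' or 'refutable' if either holds; loops forever otherwise."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            candidate = "".join(chars)
            if is_valid_proof(candidate, statement):
                return "provable"
            if is_valid_proof(candidate, "~(" + statement + ")"):
                return "refutable"
    # Never reached: for independent statements the search runs forever,
    # which is exactly the "otherwise will not stop" above.
```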
 

CSMR

Golden Member
Apr 24, 2004
1,376
2
81
Originally posted by: CycloWizard
I think this is what f95toli is also saying - if there is no objective, testable difference, then we must conclude that the two (human and computer) are equivalent.
I don't think anyone is saying here that a computer may be a human.
 

CSMR

Golden Member
Apr 24, 2004
1,376
2
81
Originally posted by: Dissipate
It is not a matter of being an expert programmer; it is a matter of fact that there is no algorithm for determining whether or not a mathematical statement is true or false. This has been proven. If there were, then we wouldn't need mathematicians to verify the work of other mathematicians. We would just plug their proof into a program and see if the result is TRUE or FALSE.
A computer can in principle verify whether a proof is correct. In fact this is quite easy. (This fact makes it trivial to build the program I mentioned above: test all candidate proofs until one works.) It isn't done in practice because it would be very painstaking work to write out a long, difficult proof in the pedantic form necessary for it to be checked mechanically.
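To make "quite easy" concrete, a toy checker (Python, purely illustrative; everything here is made up for the example): formulas are nested tuples, with ("->", A, B) meaning "A implies B", and a derivation is valid if every line is a given axiom or follows from two earlier lines by modus ponens.

```python
# Toy proof checker: a derivation is a list of formulas; each line must be
# an axiom or follow from two earlier lines by modus ponens.
def check_proof(lines, axioms, goal):
    proved = set()
    for formula in lines:
        by_mp = any(("->", a, formula) in proved for a in proved)
        if formula not in axioms and not by_mp:
            return False  # line is neither an axiom nor justified
        proved.add(formula)
    return goal in proved

# Derive 'q' from the axioms 'p' and 'p implies q':
axioms = {"p", ("->", "p", "q")}
print(check_proof(["p", ("->", "p", "q"), "q"], axioms, "q"))  # True
```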

PS: my memory of Penrose is that he is quite precise in his thinking. I don't think he would make as big a mistake as the Wikipedia article suggests. Perhaps the article is a misunderstanding or oversimplification.
 

CSMR

Golden Member
Apr 24, 2004
1,376
2
81
Originally posted by: f95toli
That is not what I wrote. I condemn torture because of empathy; I simply don't want other people to suffer. But that does not answer the question of WHY I don't want other people to suffer; I feel bad when I hear that someone has been tortured.
Condemn x==I do not want x==I feel bad when x??
Don't you feel bad that you should condemn only what you do not want!
 

blackllotus

Golden Member
May 30, 2005
1,875
0
0
Originally posted by: Dissipate
It is not a matter of being an expert programmer; it is a matter of fact that there is no algorithm for determining whether or not a mathematical statement is true or false. This has been proven.

Computer programs have been created that write complex proofs.
 

Dissipate

Diamond Member
Jan 17, 2004
6,815
0
0
Originally posted by: f95toli
Originally posted by: Dissipate
So in other words you condemn torture not because the person being tortured is in an extreme amount of horrific pain but simply because torture in general has undesirable consequences for society at large. I don't believe that you really believe that.

That is not what I wrote. I condemn torture because of empathy; I simply don't want other people to suffer. But that does not answer the question of WHY I don't want other people to suffer; I feel bad when I hear that someone has been tortured. But WHY I feel bad is still an interesting question, i.e. the question is why normal human brains are wired that way.

You feel bad because you know (innately) that human cognition does not consist solely of symbolic manipulation, as opposed to a Turing Machine, which by its very definition is merely a symbolic manipulator.

I suggest you read this: The Symbol Grounding Problem

You can skim part 1. It starts to really get at the heart of the problem when you get to part 2.

Consider this snippet:

Many symbolists believe that cognition, being symbol-manipulation, is an autonomous functional module that need only be hooked up to peripheral devices in order to "see" the world of objects to which its symbols refer (or, rather, to which they can be systematically interpreted as referring).[11] Unfortunately, this radically underestimates the difficulty of picking out the objects, events and states of affairs in the world that symbols refer to, i.e., it trivializes the symbol grounding problem.
It is one possible candidate for a solution to this problem, confronted directly, that will now be sketched: What will be proposed is a hybrid nonsymbolic/symbolic system, a "dedicated" one, in which the elementary symbols are grounded in two kinds of nonsymbolic representations that pick out, from their proximal sensory projections, the distal object categories to which the elementary symbols refer. Most of the components of which the model is made up (analog projections and transformations, discretization, invariance detection, connectionism, symbol manipulation) have also been proposed in various configurations by others, but they will be put together in a specific bottom-up way here that has not, to my knowledge, been previously suggested, and it is on this specific configuration that the potential success of the grounding scheme critically depends.

So here is what I am saying. Turing Machines, by their very definition, are merely symbolic manipulators. They take symbols in and write symbols to a tape based on a predefined, finite set of states. In this sense a Turing Machine is on a symbolic merry-go-round. It is merely processing meaningless symbols that have no innately meaningful attachment to the outside world, and cannot possibly ever be innately meaningful. What Harnad is proposing is that human cognition is a hybrid system.

It is one possible candidate for a solution to this problem, confronted directly, that will now be sketched: What will be proposed is a hybrid nonsymbolic/symbolic system, a "dedicated" one, in which the elementary symbols are grounded in two kinds of nonsymbolic representations that pick out, from their proximal sensory projections, the distal object categories to which the elementary symbols refer.

So in a nutshell:

Today's computers (Turing Machines) are merely symbolic manipulators and hence can only ever imitate the aspect of human cognition that is merely symbolic.

In order to truly achieve strong AI, a new kind of computer would have to be created. This kind of computer would have to have the capability of synthesizing symbols with non-symbols.
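For reference, this is all I mean by "Turing Machine" above: symbols in, symbols out, behavior fixed by a finite state table. A minimal simulator (Python, illustrative only; the little bit-flipping machine is made up for the example):

```python
# Minimal Turing machine simulator. The state table maps
# (state, read_symbol) -> (write_symbol, move, next_state).
from collections import defaultdict

def run_tm(table, tape, state="start"):
    tape = defaultdict(lambda: "_", enumerate(tape))  # "_" is the blank symbol
    head = 0
    while state != "halt":
        symbol = tape[head]
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example machine: flip every bit of the input, then halt at the blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(flip, "1101"))  # 0010_
```

At no point does the machine deal with anything but the symbols themselves.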
 

blackllotus

Golden Member
May 30, 2005
1,875
0
0
Originally posted by: Dissipate
Today's computers (Turing Machines) are merely symbolic manipulators and hence can only ever imitate the aspect of human cognition that is merely symbolic.

What do you define to be "non-symbolic"? How can you prove that something "non-symbolic" cannot be broken down into symbolic tokens? For example, take a computer program. It may compile into a game, a word processor, a browser, etc.; however, it is still defined by specific tokens (the code).
 

Dissipate

Diamond Member
Jan 17, 2004
6,815
0
0
Originally posted by: blackllotus
Originally posted by: Dissipate
Today's computers (Turing Machines) are merely symbolic manipulators and hence can only ever imitate the aspect of human cognition that is merely symbolic.

What do you define to be "non-symbolic"? How can you prove that something "non-symbolic" cannot be broken down into symbolic tokens? For example, take a computer program. It may compile into a game, a word processor, a browser, etc.; however, it is still defined by specific tokens (the code).

As someone who is in the process of writing a compiler, I can tell you that all a compiler is doing is taking in some input, producing tokens from a lexer, parsing those tokens into an abstract syntax tree (which is just a fancy arrangement of those tokens) and then evaluating the AST. This is nothing more than symbols going in and symbols going out in a predefined fashion.
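To make that concrete, here is a toy version of the same pipeline (Python, purely illustrative, nothing like my actual compiler) for "+" expressions over integers:

```python
# lexer -> tokens -> abstract syntax tree -> evaluation
import re

def lex(src):
    return re.findall(r"\d+|\+", src)          # tokens: integers and '+'

def parse(tokens):
    # AST for left-associative addition: ('+', left, right) or an int leaf
    node = int(tokens[0])
    for i in range(1, len(tokens), 2):
        node = ("+", node, int(tokens[i + 1]))
    return node

def evaluate(node):
    if isinstance(node, int):
        return node
    _, left, right = node
    return evaluate(left) + evaluate(right)

print(evaluate(parse(lex("1 + 2 + 39"))))      # 42
```

Lexer, parser, evaluator: at no point does anything in there "know" what a number is.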
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
Originally posted by: CSMR
Originally posted by: f95toli
There is a reason why the Turing test is the de facto definition of intelligence, all other attempts to find a working definition have failed.
Not a very good reason, since intelligence has to do with understanding (intellego = "I understand") and appearing like a human is quite another thing. One may have understanding without appearing like a human, one may have understanding while appearing like a human, one may have no understanding while not appearing like a human, and one may have no understanding while appearing like a human (or even being a human).


And again we are back to the problem of defining "understand". To "understand" a piece of information, to me, means roughly to be able to manipulate that piece of information in a constructive way, and an AI could certainly do that.
 

blackllotus

Golden Member
May 30, 2005
1,875
0
0
Originally posted by: Dissipate
Originally posted by: blackllotus
Originally posted by: Dissipate
Today's computers (Turing Machines) are merely symbolic manipulators and hence can only ever imitate the aspect of human cognition that is merely symbolic.

What do you define to be "non-symbolic"? How can you prove that something "non-symbolic" cannot be broken down into symbolic tokens? For example, take a computer program. It may compile into a game, a word processor, a browser, etc.; however, it is still defined by specific tokens (the code).

As someone who is in the process of writing a compiler, I can tell you that all a compiler is doing is taking in some input, producing tokens from a lexer, parsing those tokens into an abstract syntax tree (which is just a fancy arrangement of those tokens) and then evaluating the AST. This is nothing more than symbols going in and symbols going out in a predefined fashion.

That was my point. You still didn't answer my question.
 

Dissipate

Diamond Member
Jan 17, 2004
6,815
0
0
Originally posted by: blackllotus

That was my point. You still didn't answer my question.


It cannot be broken down into yet more tokens. Then it would be tokens all the way down, which are innately meaningless. In other words, in order for the symbols to have any kind of meaning, the meaning must come from outside of the symbolic system itself, i.e. it must come from a non-symbol. Read the above document, in particular the problem of learning Chinese from just a Chinese dictionary.
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
Originally posted by: Dissipate
Originally posted by: f95toli
Originally posted by: Dissipate
So in other words you condemn torture not because the person being tortured is in an extreme amount of horrific pain but simply because torture in general has undesirable consequences for society at large. I don't believe that you really believe that.

That is not what I wrote. I condemn torture because of empathy; I simply don't want other people to suffer. But that does not answer the question of WHY I don't want other people to suffer; I feel bad when I hear that someone has been tortured. But WHY I feel bad is still an interesting question, i.e. the question is why normal human brains are wired that way.

You feel bad because you know (innately) that human cognition does not consist solely of symbolic manipulation, as opposed to a Turing Machine, which by its very definition is merely a symbolic manipulator.


Well, I agree that I feel bad because I was born that way, but as I have already argued I don't see any reason to believe this is anything more than a result of evolution combined with some essentially cultural elements (which change over time). I can also feel sorry for purely fictional characters when I read a book or watch a movie. Hence, this capacity is not even limited to real objects.

I understand (at least the basics of) the symbol grounding problem, but I have yet to be convinced that it IS a real scientific problem and not just a philosophical quasi-problem.
As I pointed out above, philosophy often lags behind science, as was demonstrated by e.g. the EPR paradox; a lot of ink was wasted trying to "solve" the paradox, but as it turned out it was a scientific, testable theory, meaning many philosophers were simply wrong.
(But my favorite is a text that tried to disprove the possibility of time travel using language philosophy, basically using arguments from Wittgenstein; it was so bad that it was funny.)

Btw, that is a very badly written text. I have written papers that deal with quite complicated problems in physics, but I dare say that none of them are as full of "fluff" as that one; why is it that most philosophers like to write in a way that makes their texts seem as complicated as possible?


 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
Originally posted by: Dissipate
Originally posted by: blackllotus

That was my point. You still didn't answer my question.


It cannot be broken down into yet more tokens. Then it would be tokens all the way down, which are innately meaningless. In other words, in order for the symbols to have any kind of meaning, the meaning must come from outside of the symbolic system itself, i.e. it must come from a non-symbol. Read the above document, in particular the problem of learning Chinese from just a Chinese dictionary.
And what makes you think that the symbols have a "meaning"?

It could be argued that the symbols are just carriers which are used to transfer information about the state of one system to another. I.e., in this case the symbols represent electrical signals and you are using them to manipulate the state of the computer by entering them in the right order. Essentially you are just transferring information from one electrochemical system (your brain) to an electrical system (the computer) in order for the latter to behave in a certain way.



 

Dissipate

Diamond Member
Jan 17, 2004
6,815
0
0
Originally posted by: f95toli

Btw, that is a very badly written text. I have written papers that deal with quite complicated problems in physics, but I dare say that none of them are as full of "fluff" as that one; why is it that most philosophers like to write in a way that makes their texts seem as complicated as possible?

Harnad is not a philosopher. He is a cognitive scientist. Stevan Harnad ;)

 

Dissipate

Diamond Member
Jan 17, 2004
6,815
0
0
Originally posted by: f95toli
I understand (at least the basics of) the symbol grounding problem, but I have yet to be convinced that it IS a real scientific problem and not just a philosophical quasi-problem.

It really is not just a philosophical problem. Researchers from all over the world are studying the problem of strong AI (including Harnad). Harnad has come up with a counter-theory to the theory of pure symbolism, and he argues (quite strongly, I might add) that human cognition does not consist solely of symbol manipulation.

 

fire400

Diamond Member
Nov 21, 2005
5,204
21
81
should artificial intelligence exist, they may be superior to human beings once they can achieve programmability of their own kind, adaptive behavior and the logical composition of technologically advanced microscopic matter.

when biology and technology collide, we have something called biotechnology. eventually, AI will take advantage of biomechanisms and will be able to recreate genetics in vast forms to produce increasingly sophisticated immunities to diseases and viruses that threaten their own kind of reproduction process and the ability to prolong their stimuli for outstanding emotions and optimal behavior according to their liking.

in the example of Starcraft, the computer-generated commands of a computer opponent will always exceed those of a human player, regardless of fault or lack of common sense in the 1997-developed PC software.

our goal is to maintain our societies with courage and stamina, to be proactive. once robots can comprehend human thoughts and actions and learn how to predict human motives, AI will become far superior to the most intelligent human beings we have on record.

Skynet is an example from Terminator 3. And that one mission in Unreal Tournament 2004 could be real, where you simulate destroying the HQ of AI-made robots that killed humans on a planet and began mining rich resources in and around the planet.

AI will travel far. very far. without the need for food, it will be able to absorb any light that stars around their traveling spacecraft can see and feel. this energy will be like food to them. they will settle, research and dominate other galaxies. even if one star explodes, their intensely sophisticated networks will be able to store their thoughts and memory onto lightning fast transmissions that will reach from one end of the galaxy to the next. they will command the speed of light, they will bend light, they will foresee the future and record all that the past already holds.

they will appear to be of flesh and bone, but their biomechanic structuring will be flawless to reveal supermechanical biological steroids that will be able to be produced from light alone. communication will be done through psychic entity. culture, survival and dominance of the Universe.

It is today, we recognize the coming of the Xel Naga. Mortal artificial intelligence without flaw... or so they think? But who created them? Man has created them. The Xel Naga were able to alter the beginning of time, and will consume all anti-matter. Their creation of the Zerg and Protoss races was only the beginning of the end. They were able to travel beyond the speed of light, beyond the compression of time, and beyond the wildest dreams of both biological and supernatural matter.

what once was, is now, and what is now, will soon to be, and what lies in the end...

...is a new beginning.
 

blackllotus

Golden Member
May 30, 2005
1,875
0
0
Originally posted by: Dissipate
Originally posted by: blackllotus

That was my point. You still didn't answer my question.


It cannot be broken down into yet more tokens. Then it would be tokens all the way down, which are innately meaningless. In other words, in order for the symbols to have any kind of meaning, the meaning must come from outside of the symbolic system itself, i.e. it must come from a non-symbol. Read the above document, in particular the problem of learning Chinese from just a Chinese dictionary.

Harnad intentionally structures the "Dictionary-Go-Round" problem so it is impossible to solve. Words have associated sounds, pictures, tastes, and feelings that a dictionary cannot convey. Language is a medium to convey these senses. For a machine to be able to perfectly emulate a human's ability to speak, it must be "aware" of these senses as well. How would a program that does not understand the meaning of words in a language be able to understand complex commands like "write a program that makes loud noises"? A program that merely associated the word "loud" with phrases like "high volume" would not be able to perform this task.
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
Originally posted by: Dissipate
Originally posted by: f95toli

Btw, that is a very badly written text. I have written papers that deal with quite complicated problems in physics, but I dare say that none of them are as full of "fluff" as that one; why is it that most philosophers like to write in a way that makes their texts seem as complicated as possible?

Harnad is not a philosopher. He is a cognitive scientist. Stevan Harnad ;)


OK, but he still can't write. Seriously, if I submitted something like that to a journal it would be rejected because it is too hard to read.
 

Dissipate

Diamond Member
Jan 17, 2004
6,815
0
0
Originally posted by: blackllotus
Originally posted by: Dissipate
Originally posted by: blackllotus

That was my point. You still didn't answer my question.


It cannot be broken down into yet more tokens. Then it would be tokens all the way down, which are innately meaningless. In other words, in order for the symbols to have any kind of meaning, the meaning must come from outside of the symbolic system itself, i.e. it must come from a non-symbol. Read the above document, in particular the problem of learning Chinese from just a Chinese dictionary.

Harnad intentionally structures the "Dictionary-Go-Round" problem so it is impossible to solve. Words have associated sounds, pictures, tastes, and feelings that a dictionary cannot convey. Language is a medium to convey these senses. For a machine to be able to perfectly emulate a human's ability to speak, it must be "aware" of these senses as well. How would a program that does not understand the meaning of words in a language be able to understand complex commands like "write a program that makes loud noises"? A program that merely associated the word "loud" with phrases like "high volume" would not be able to perform this task.

But that is precisely the point! A computer that can only take symbols as input (not tastes, feelings, or impressions) really has only the dictionary to work with. And the dictionary's definitions contain only yet more symbols, so those symbols are not grounded in anything but other symbols. The central problem is how you get those symbols grounded in something other than symbols inside the computer. Humans are naturally gifted at doing this; Turing Machines by definition never will be.
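You can see the dictionary-go-round in a few lines (Python; the five-word dictionary is invented for illustration). Chase definitions as long as you like and you only ever reach more words:

```python
# Every lookup yields only more symbols; the chase never leaves the system.
dictionary = {
    "loud":   ["high", "volume"],
    "high":   ["great", "degree"],
    "volume": ["degree", "loud"],   # and back we go
    "great":  ["high", "degree"],
    "degree": ["great", "volume"],
}

def ground(word, seen=None):
    """Chase definitions looking for something that is not another word."""
    seen = seen or set()
    if word in seen:
        return False            # cycle: symbols defined only by symbols
    seen.add(word)
    return any(ground(w, seen) for w in dictionary.get(word, []))

print(ground("loud"))  # False: the chase never bottoms out
```

The chase never bottoms out in anything non-symbolic, which is exactly the grounding problem.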
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
I re-read the Chinese room argument. As far as I can tell, the point is that the system is not self-consistent. However, there seem to be two implicit assumptions:
1) The symbols have a "meaning" in themselves, i.e. they are more than just representations of (as blackllotus has already pointed out) external information.
2) That there really is a "mind" and that mind "understands" something in the way we have already discussed.
If you (like me) do not agree with those assumptions then the whole argument fails.