
No strong AI for you! Searle's Chinese Room Argument

Dissipate

Diamond Member
Suppose that, many years from now, we have constructed a computer that behaves as if it understands Chinese. In other words, the computer takes Chinese characters as input and, following a set of rules (as all computers can be described as doing), correlates them with other Chinese characters, which it presents as output. Suppose that this computer performs this task so convincingly that it easily passes the Turing test. In other words, it convinces a human Chinese speaker that it is a Chinese speaker. All the questions the human asks are responded to appropriately, such that the Chinese speaker is convinced that he or she is talking to another Chinese speaker. The conclusion proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does.

Now, Searle asks us to suppose that he is sitting inside the computer. In other words, he is in a small room in which he receives Chinese characters, consults a rule book, and returns the Chinese characters that the rules dictate. Searle notes that he doesn't, of course, understand a word of Chinese. Furthermore, he argues that his lack of understanding goes to show that computers don't understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is, and they don't understand what they're 'saying', just as he doesn't.


Is Searle's Chinese Room Argument a devastating blow to strong AI? I've been thinking about this recently. All computers can do is manipulate symbols, which is different from actually deriving semantic meaning from those symbols.
 
I'm not an AI expert by any stretch, but I'm not sure that the logic there is sound. My understanding is that AI should be adaptive - that it essentially 'learns' based on its previous 'experiences'. The only example that comes to mind, though probably not a great one: finite element programs can automatically adjust their meshes and polynomial degrees to minimize and localize errors. This is based on mathematical rules, of course, but it's essentially a form of learning by the program. It learns more about the problem with every iteration and adapts its approach to achieve a better solution.
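To make that concrete, here's a minimal sketch of the kind of loop I mean (plain Python, purely illustrative; `midpoint_error` and `adapt` are made-up names, not any real finite element package):

```python
# Toy sketch of the adaptive idea above: estimate the error on each
# element, refine only where it is largest, repeat.
import math

def midpoint_error(f, a, b):
    # Crude error indicator: deviation of the midpoint value from the
    # linear interpolant of the endpoint values.
    return abs(f(0.5 * (a + b)) - 0.5 * (f(a) + f(b)))

def adapt(f, a, b, tol=1e-3, max_iter=200):
    intervals = [(a, b)]
    for _ in range(max_iter):
        errs = [midpoint_error(f, lo, hi) for lo, hi in intervals]
        worst = max(range(len(errs)), key=errs.__getitem__)
        if errs[worst] < tol:
            break                      # error small and localized everywhere
        lo, hi = intervals.pop(worst)  # "learn" where the problem is hard...
        mid = 0.5 * (lo + hi)
        intervals += [(lo, mid), (mid, hi)]  # ...and refine only there
    return sorted(intervals)

# The mesh ends up clustered around the sharp bump at x = 0.5.
mesh = adapt(lambda x: math.exp(-100 * (x - 0.5) ** 2), 0.0, 1.0)
print(len(mesh), "elements")
```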

Similarly, I would think that strong AI should not always give the same answer to the same question. It should learn from questions asked in the interim and adjust the parameters of its rules to achieve a better answer. I think this is manifested indirectly in the Turing Test, since the human judge would otherwise recognize the machine by asking the exact same question multiple times. The human would not give the exact same answer every time, while the machine (in the absence of adaptive AI) surely would.
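As a toy illustration of that giveaway (a deliberately silly sketch; both bot classes are made up):

```python
# Sketch of the repeated-question giveaway described above.
import random

class FixedBot:
    def reply(self, q):
        return f"canned answer to {q!r}"           # identical every time

class AdaptiveBot:
    def __init__(self):
        self.seen = {}                             # remembers past questions
    def reply(self, q):
        n = self.seen[q] = self.seen.get(q, 0) + 1
        if n == 1:
            return f"canned answer to {q!r}"
        return random.choice([                     # varies on repetition
            "You already asked me that.",
            "Same answer as before.",
            "Why do you keep asking?",
        ])

for bot in (FixedBot(), AdaptiveBot()):
    print([bot.reply("Are you human?") for _ in range(3)])
```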
 
Yes, as CycloWizard has already pointed out, a true AI needs to be adaptive, and you would also expect some randomness in its replies; otherwise it would never pass the Turing test.
Hence, the assumption that there must be a "fixed" set of rules is clearly wrong.

The second problem is that the argument assumes that intelligence IS something more than "mindless manipulation of symbols". You could easily apply the same argument to the human brain, which after all is just a collection of neurons that are not that different from transistors; I don't think anyone believes that individual neurons are intelligent.
The point is that we need to make sure we stay away from metaphysical arguments in these discussions. There is a reason why the Turing test is the de facto definition of intelligence: all other attempts to find a working definition have failed.

 
Originally posted by: f95toli
Yes, as CycloWizard has already pointed out, a true AI needs to be adaptive, and you would also expect some randomness in its replies; otherwise it would never pass the Turing test.
Hence, the assumption that there must be a "fixed" set of rules is clearly wrong.

I think that what Searle is saying, though, is that the Turing Test only establishes a weak/passive form of AI.

The second problem is that the argument assumes that intelligence IS something more than "mindless manipulation of symbols". You could easily apply the same argument to the human brain, which after all is just a collection of neurons that are not that different from transistors; I don't think anyone believes that individual neurons are intelligent.
The point is that we need to make sure we stay away from metaphysical arguments in these discussions. There is a reason why the Turing test is the de facto definition of intelligence: all other attempts to find a working definition have failed.

The argument doesn't assume that intelligence is more than mindless manipulation of symbols; it is trying to argue against that hypothesis. What Searle is saying is that computers (simple Turing Machines) cannot ever attach any semantic understanding to the symbols they are processing. Your brain, on the other hand, does this all the time. If you touch something hot, for instance, your brain is not just receiving symbols of 'hotness' and then sending signals to whatever extremity happened to touch the hot item, i.e. symbol -> action. You really do feel pain. And hence, your experiences are being fed into some kind of ultimate semantic interpretation.

This being the case, I shall propose a new kind of test. It is called the Torture Test. I say we get all of those who believe that consciousness and intelligence are merely the result of computation, string them up and subject them to undesirable forms of computation. 😉
 
Originally posted by: Dissipate
What Searle is saying is that computers (simple Turing Machines) cannot ever attach any semantic understanding to the symbols they are processing. Your brain, on the other hand, does this all the time.

Again, what does "semantic understanding" mean in this context? I don't see the difference between an AI and the brain. "Understand" is a very slippery word and it seems to be difficult to nail down a definition; "something that computers cannot do" is not a good definition in my view.

The whole point of the Turing test is that if two processes (the human brain and an AI) always react in a similar way to a given input, there is no practical difference. And, as I pointed out above, unless you invoke religious arguments there really IS no difference.

If you touch something hot, for instance, your brain is not just receiving symbols of 'hotness' and then sending signals to whatever extremity happened to touch the hot item, i.e. symbol -> action. You really do feel pain. And hence, your experiences are being fed into some kind of ultimate semantic interpretation.

And what does "feel" mean? As far as we know, "feelings" are nothing more than electrochemical processes, meaning they can in principle be simulated by a computer. Hence, a true AI would presumably also be able to "feel".


 
Originally posted by: f95toli
Again, what does "semantic understanding" mean in this context? I don't see the difference between an AI and the brain. "Understand" is a very slippery word and it seems to be difficult to nail down a definition; "something that computers cannot do" is not a good definition in my view.

You are right. A precise definition cannot be nailed down because of the difficulty of conveying the ultimate sensation that we experience without referring to other people's experience of the same thing. For instance, we cannot precisely define what it means to feel pain, but we are definitely aware of when we are feeling it. If I tell you that I am in a lot of pain, you will understand what I mean because you have experienced pain yourself. If you had never experienced pain, you wouldn't really know what I was talking about. Hence, while we cannot explain exactly what this semantic understanding is (what it ultimately means to be in a lot of pain), we know that it exists through our own experience of it.

The whole point of the Turing test is that if two processes (the human brain and an AI) always react in a similar way to a given input, there is no practical difference. And, as I pointed out above, unless you invoke religious arguments there really IS no difference.

Once again, however, Searle's point is that this is not the case. A machine that can speak perfect Chinese may have no ultimate semantic understanding of what the Chinese symbols mean.


And what does "feel" mean? As far as we know, "feelings" are nothing more than electrochemical processes, meaning they can in principle be simulated by a computer. Hence, a true AI would presumably also be able to "feel".

I cannot explain what it means precisely to 'feel.' I can only refer to our mutual experience of 'feeling.' This cannot just be electrochemical processes. If it were, I wouldn't ever have developed any preference for one feeling over another. If feeling were just the processing of information, then I wouldn't mind much at all if someone poked me with a hot iron. Perhaps my brain would have some automated responses such as twitching or pulling away, which might be somewhat inconvenient, but other than that I wouldn't really care. All I would be doing is taking symbols in and acting on them in a hard-wired fashion.

As I said before, let's run your idea through the Torture Test. I will string you up and do whatever I want to you until you recant your opinion. Believe me, I don't think it would take long.

By the way, I would like to add that I do not believe that there is necessarily anything mysterious or supernatural about the brain. An artificial brain that is in every way the same as ours may be developed one day. All I am saying is that I do not believe that the brain is just a Turing Machine on crack. There is something going on which is producing our ability to semantically interpret our world. This may be happening at the quantum level or through some other physical phenomenon yet to be discovered.
 
Originally posted by: Dissipate
This cannot just be electrochemical processes. If it were, I wouldn't have ever developed any preference for one feeling over another.

Yes you would. From an evolutionary point of view there is a good reason to have feelings. Ultimately, "good feelings" are just those that help you survive and reproduce in the long run.
A regulatory system which rewarded pain (which is a signal that something is wrong) wouldn't make much sense, would it?





 
Originally posted by: f95toli
Originally posted by: Dissipate
This cannot just be electrochemical processes. If it were, I wouldn't have ever developed any preference for one feeling over another.

Yes you would. From an evolutionary point of view there is a good reason to have feelings. Ultimately, "good feelings" are just those that help you survive and reproduce in the long run.
A regulatory system which rewarded pain (which is a signal that something is wrong) wouldn't make much sense, would it?

Evolution may account for the initial development of my ability to feel, but it certainly doesn't account for all of the preferences I have now. For instance, if you had put before me a book full of random strings and an excellent novel when I was 3 years old, my preference for either one probably would have been random. As I grew older and my faculty for interpreting the words in those books developed, so did the feelings I felt after reading one or the other. And hence, I would now choose the excellent novel over the random strings.

And yet the excellent novel contains nothing in it related to survival. It would not be useful for surviving in the wild.
 
This cannot just be electrochemical processes. If it were, I wouldn't have ever developed any preference for one feeling over another.

The first statement is wrong, based on the fallacy of the second one. There is an obvious evolutionary benefit to you preferring feelings that cause self-preservation. You can see this in action in every life form (even very simple ones).

There is something going on which is producing our ability to semantically interpret our world. This may be happening at the quantum level or through some other physical phenomenon yet to be discovered.

I would say no, it's just much more complicated than the models we are able to build to date (we can barely model the brain of a small worm, let alone a mammal). But over time, as that progresses, it may become much harder to hold the view you suggest (IMHO).
 
Originally posted by: bsobel
The first statement is wrong, based on the fallacy of the second one. There is an obvious evolutionary benefit to you preferring feelings that cause self-preservation. You can see this in action in every life form (even very simple ones).

But what you are assuming is that what evolved was merely an electrochemical process. What I am saying is that that is not the only thing that evolved. Our ability to semantically interpret our world could have been a side effect of evolution that acted as a sort of catch-all: not only did it help us to survive, but also to appreciate, say, classical music or a fudge Popsicle.


I would say no, it's just much more complicated than the models we are able to build to date (we can barely model the brain of a small worm, let alone a mammal). But over time, as that progresses, it may become much harder to hold the view you suggest (IMHO).

The model we have right now is the Turing Machine. Most computation theorists would agree that the Turing Machine is as good as it gets when it comes to computation. What I am saying is that we lack a model of semantic interpretation, which is an entirely different ball game than computation.

 
What I am saying is that we lack a model of semantic interpretation, which is an entirely different ball game than computation.

And others would say your tape just isn't long enough 😉
 
Originally posted by: bsobel
What I am saying is that we lack a model of semantic interpretation, which is an entirely different ball game than computation.

And others would say your tape just isn't long enough 😉

Touché.

Build me a Turing Machine that can classify each mathematical statement fed into it as:

True
False
Unknown
Undecidable

Mathematicians do this every day.

😀
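The best a Turing Machine can manage is something like the following sketch (`enumerate_proofs` and `proves` are hypothetical stand-ins for a formal proof system, not a real library): it can search for a proof or a refutation, but no branch can ever soundly return "Undecidable", and without a step bound it may simply never halt.

```python
# Sketch of the best a Turing Machine can do with the challenge above:
# dovetail through candidate proofs, looking for a proof or a refutation.
from itertools import count

def classify(statement, enumerate_proofs, proves, max_steps=None):
    for n in count():
        if max_steps is not None and n >= max_steps:
            return "Unknown"            # gave up; says nothing about truth
        candidate = enumerate_proofs(n)
        if proves(candidate, statement):
            return "True"
        if proves(candidate, "not (" + statement + ")"):
            return "False"
    # Note the missing case: no branch can ever soundly answer
    # "Undecidable", a verdict that Goedel/Church showed no algorithm
    # can produce in general.
```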

Conclusion:

Strong AI in the form of a simple Turing Machine: no way in hell

Strong AI in the form of an artificial brain similar to our own: maybe
 
Originally posted by: Dissipate
What I am saying is that we lack a model of semantic interpretation, which is an entirely different ball game than computation.
What do you mean exactly by "semantic interpretation"?
And is there any scientific evidence for the existence of such a process, which in this context is, according to your interpretation, a process that cannot be performed by a Turing machine?

Edit: As far as we know there is no reason why our brain could not (at least in principle) be simulated by a Turing machine. Imagine a very sophisticated model which models each individual cell of the brain in great detail.
If the model is good enough, this "artificial brain" would then react in exactly the same way as a physical brain.
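As a toy illustration of what modelling a single cell might look like (a leaky integrate-and-fire neuron; the parameters here are made up, not physiological):

```python
# Toy sketch: one leaky integrate-and-fire neuron stepped through time.
# A brain-scale model would need billions of these plus their couplings.
def simulate_lif(inputs, dt=1e-4, tau=0.02, v_thresh=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for step, i_in in enumerate(inputs):
        v += dt * (-v / tau + i_in)   # membrane leaks toward rest, input drives it up
        if v >= v_thresh:             # threshold crossed: fire and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant drive produces a regular spike train.
print(simulate_lif([100.0] * 2000)[:5])
```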







 
Originally posted by: f95toli
What do you mean exactly by "semantic interpretation"?

That very faculty you are using to interpret the words in this thread.

And is there any scientific evidence for the existence of such a process, which in this context is, according to your interpretation, a process that cannot be performed by a Turing machine?

Plenty. See the example above.

Edit: As far as we know there is no reason why our brain could not (at least in principle) be simulated by a Turing machine. Imagine a very sophisticated model which models each individual cell of the brain in great detail.
If the model is good enough, this "artificial brain" would then react in exactly the same way as a physical brain.

There are languages that are not even Turing Recognizable (this has been proven). Perhaps the quantum effects necessary for consciousness are in this category, and hence could not be modeled in the way you describe. In other words, you could create a model of neurons replete with electrochemical processes, but not the quantum effects generated by those processes necessary for full consciousness.







 
Originally posted by: Dissipate
Once again, however, Searle's point is that this is not the case. A machine that can speak perfect Chinese may have no ultimate semantic understanding of what the Chinese symbols mean.
But if it speaks perfect Chinese and responds perfectly to questions in Chinese, how can you say that it doesn't understand Chinese, short of a metaphysical explanation? I think this is what f95toli is also saying - if there is no objective, testable difference, then we must conclude that the two (human and computer) are equivalent.
 
Originally posted by: Dissipate
There are languages that are not even Turing Recognizable (this has been proven). Perhaps the quantum effects necessary for consciousness are in this category, and hence could not be modeled in the way you describe. In other words, you could create a model of neurons replete with electrochemical processes, but not the quantum effects generated by those processes necessary for full consciousness.
This is pretty much moot at this point, as we do not yet know the exact physiological origins of 'consciousness', so we cannot say one way or the other whether a computer could simulate it.
 
Originally posted by: Dissipate
That very faculty you are using to interpret the words in this thread.

And what makes you sure that I am really "interpreting" the words? Maybe I am just a very good AI which could pass a Turing test.
I.e. reacting to your input by using some complex set of rules.

Perhaps the quantum effects necessary for consciousness are in this category, and hence could not be modeled in the way you describe. In other words, you could create a model of neurons replete with electrochemical processes, but not the quantum effects generated by those processes necessary for full consciousness.

First of all, it is HIGHLY doubtful that you need to take any "real" quantum effects into account when describing our brain (and yes, I know Penrose disagrees; I just happen to think that he is wrong), even on a cellular level. There are a number of very good technical reasons for this (related to dissipation and decoherence of open quantum systems); suffice it to say that it turns out to be very hard to create the right circumstances for quantum mechanical processes to be observable on a macroscopic scale (I know, I do it for a living). It is possible, but generally only under extreme circumstances (e.g. at very low temperatures). As far as we know, the brain is therefore very well described by classical or semi-classical physics.
Secondly, and this is the main point, ALL known quantum mechanical processes can be modelled using a classical computer. The only reason why we need quantum computers to solve some problems is that classical computers are impractical because they are too slow to simulate e.g. large molecules or break codes; but it is entirely possible to simulate a quantum computer using a classical computer (i.e. a Turing machine) if you have enough time and patience.
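As a minimal sketch of that point, here is a classical state-vector simulation of a tiny quantum register (plain numpy). It is exact, but the state vector grows as 2^n, which is exactly why it becomes impractical at scale:

```python
# A classical program state-vector simulating a (tiny) quantum computer.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def apply_gate(state, gate, target, n_qubits):
    # Build I (x) ... (x) gate (x) ... (x) I acting on qubit `target`.
    op = np.array([[1.0]])
    for q in range(n_qubits):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

n = 3
state = np.zeros(2**n); state[0] = 1.0       # start in |000>
for q in range(n):
    state = apply_gate(state, H, q, n)       # uniform superposition
print(np.round(state**2, 3))                 # equal probabilities of 1/8
```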










 
Originally posted by: CycloWizard
Originally posted by: Dissipate
There are languages that are not even Turing Recognizable (this has been proven). Perhaps the quantum effects necessary for consciousness are in this category, and hence could not be modeled in the way you describe. In other words, you could create a model of neurons replete with electrochemical processes, but not the quantum effects generated by those processes necessary for full consciousness.
This is pretty much moot at this point, as we do not yet know the exact physiological origins of 'consciousness', so we cannot say one way or the other whether a computer could simulate it.

While it may be the case that we don't know the physiological origins of our consciousness, the human brain appears to have abilities beyond those of Turing Machines; for instance, the ability to classify a mathematical statement. The mathematical physicist Penrose talks about this in one of his books.

Penrose has written controversial books on the connection between fundamental physics and human consciousness. In The Emperor's New Mind (1989), he argues that known laws of physics are inadequate to explain the phenomenon of human consciousness. Penrose hints at the characteristics this new physics may have and specifies the requirements for a bridge between classical and quantum mechanics (what he terms correct quantum gravity, CQG). He argues against the viewpoint that the rational processes of the human mind are completely algorithmic and can thus be duplicated by a sufficiently complex computer -- this is in contrast to views, e.g., Biological Naturalism, that human behavior but not consciousness might be simulated. This is based on claims that human consciousness transcends formal logic systems because things such as the insolubility of the halting problem and Gödel's incompleteness theorem restrict an algorithmically based logic from traits such as mathematical insight. These claims were originally made by the philosopher John Lucas of Merton College, Oxford.

 
Originally posted by: f95toli
And what makes you sure that I am really "interpreting" the words? Maybe I am just a very good AI which could pass a Turing test.
I.e. reacting to your input by using some complex set of rules.

The same reason that we condemn torture. The person being tortured is not just reacting to input by using a complex set of rules. He really is feeling excruciating pain. This pain is not just another set of symbols. It is grounded in something else.

I could hardly imagine a scenario in which someone gives me a program for my computer one day and tells me to never feed it a certain kind of input because it really 'hurts' the program. :roll:

The meaning of meaning and the Symbol Grounding Problem

In his Chinese Room Argument, Searle shows that symbols on their own do not have any meaning. In other words, a computer that is a set of electrical charges or flowing steel balls is just a set of steel balls or electrical charges. Leibniz spotted this problem in the seventeenth century.

Searle's argument is also, partly, the Symbol Grounding Problem; Harnad (2001) defines this as:

"the symbol grounding problem concerns how the meanings of the symbols in a system can be grounded (in something other than just more ungrounded symbols) so they can have meaning independently of any external interpreter."

Harnad defines a Total Turing Test in which a robot connected to the world by sensors and actions might be judged to be indistinguishable from a human being. He considers that a robot that passed such a test would overcome the symbol grounding problem. Unfortunately, Harnad does not tackle Leibniz's misgivings about the internal state of the robot being just a set of symbols (cogs and wheels/charges etc.). The Total Turing Test is also doubtful if analysed in terms of information systems alone; for instance, Powers (2001) argues that an information system could be grounded in Harnad's sense if it were embedded in a virtual reality rather than the world around it.



Secondly, and this is the main point, ALL known quantum mechanical processes can be modelled using a classical computer. The only reason why we need quantum computers to solve some problems is that classical computers are impractical because they are too slow to simulate e.g. large molecules or break codes; but it is entirely possible to simulate a quantum computer using a classical computer (i.e. a Turing machine) if you have enough time and patience.

Emphasis on KNOWN quantum mechanical processes. Penrose and others say that perhaps consciousness is a phenomenon arising from as-yet-unknown physics.
 
Originally posted by: Dissipate

While it may be the case that we don't know the physiological origins of our consciousness, the human brain appears to have abilities beyond those of Turing Machines; for instance, the ability to classify a mathematical statement. The mathematical physicist Penrose talks about this in one of his books.
Great, but Hawking would argue the opposite (as he likes to do with Penrose). I'm not an expert programmer, but I'm sure it wouldn't take much to get a computer to classify a mathematical statement, given a sufficiently complex set of rules. Note that I'm not disagreeing with your viewpoint, only saying that there isn't really any evidence to support or contradict it, since we don't even know what kind of evidence we would need to do so at this point.
 
Originally posted by: CycloWizard
Great, but Hawking would argue the opposite (as he likes to do with Penrose). I'm not an expert programmer, but I'm sure it wouldn't take much to get a computer to classify a mathematical statement, given a sufficiently complex set of rules. Note that I'm not disagreeing with your viewpoint, only saying that there isn't really any evidence to support or contradict it, since we don't even know what kind of evidence we would need to do so at this point.


It is not a matter of being an expert programmer; it is a matter of fact that there is no algorithm for determining whether an arbitrary mathematical statement is true or false. This has been proven. If there were, we wouldn't need mathematicians to hunt for proofs at all; we would just feed each statement into a program and read off TRUE or FALSE.
 
Originally posted by: Dissipate
The same reason that we condemn torture. The person being tortured is not just reacting to input by using a complex set of rules.
He really is feeling excruciating pain. This pain is not just another set of symbols. It is grounded in something else.

Again, that is just an assumption. I also condemn torture, but I still don't see any reason why pain needs to be more than just a biochemical process, i.e. why there needs to be anything else.
I don't see why morality would need to have anything to do with the underlying process of intelligence; you don't need to believe in the existence of a soul to condemn torture.
I could hardly imagine a scenario in which someone gives me a program for my computer one day and tells me to never feed it a certain kind of input because it really 'hurts' the program. :roll:
And once again, why not? If we ever manage to build a true AI, that is exactly the kind of ethical consideration we will need to face.

Emphasis on KNOWN quantum mechanical processes. Penrose and others say that perhaps consciousness is a phenomenon arising from as-yet-unknown physics.

Yes, but the basic principles of quantum mechanics haven't really changed for about 70 years and we have so far never encountered a problem which could not be described using those principles (except gravity, but there is good reason to believe that the same principles will apply to a QM description of gravity); they are VERY general.
In this case Penrose does not know more than anyone else; the mere fact that he is, on unclear grounds, referring to "unknown physics" means that he is speaking not as a mathematician/physicist but as a philosopher. I and most experimentalists (and in this case also most theoreticians) believe that Penrose is wrong in this case.
Moreover, at least one of his attempts to prove his theory using mathematics was shown to be incorrect by others.

I should point out that one reason why I don't trust philosophy in this case is that philosophers are often ill-equipped to understand what is essentially a "hard science" problem. Just think of the EPR paradox and the vast amount of nonsense that was written about it when it was assumed to be a purely "philosophical" problem that could never be settled; that is, until we were able to perform experiments which tested Bell's inequalities and showed that Einstein & Co. were simply wrong.
I suspect that the question of AI will be an open question for a long time, but if I am right the question will be settled not by philosophy but by computer science.

 
Originally posted by: f95toli

Again, that is just an assumption. I also condemn torture, but I still don't see any reason why pain needs to be more than just a biochemical process, i.e. why there needs to be anything else.
I don't see why morality would need to have anything to do with the underlying process of intelligence; you don't need to believe in the existence of a soul to condemn torture.

It boggles my mind how you can say such a thing. If pain isn't any more than a biochemical process, then that is tantamount to saying that it isn't any more than an electrical circuit process. After all, a circuit just has a different material through which the electricity flows. Does that mean that if I were to set up a computer that generated the exact same signals as a person does when struck or damaged, you would condemn the 'torture' of the computer as well?

It also works the other way around: if torture only invokes a 'biochemical' process, you have no real grounds for condemning torture. If some guy were strung up in a building across the street and being beaten to death, it seems to me that under your logic it would be more rational to forget about it and just go about your business.

As for this pertaining to the underlying process of intelligence, I do not really know what that means. By this do you mean that it is bad to torture simply because you are destroying something/someone that happens to be intelligent?
 
Originally posted by: Dissipate
It boggles my mind how you can say such a thing. If pain isn't any more than a biochemical process, then that is tantamount to saying that it isn't any more than an electrical circuit process. After all, a circuit just has a different material through which the electricity flows. Does that mean that if I were to set up a computer that generated the exact same signals as a person does when struck or damaged, you would condemn the 'torture' of the computer as well?
Yes, of course, assuming that it was an AI, and that the AI could both register something akin to pain AND did not like that sensation.

It also works the other way around: if torture only invokes a 'biochemical' process, you have no real grounds for condemning torture. If some guy were strung up in a building across the street and being beaten to death, it seems to me that under your logic it would be more rational to forget about it and just go about your business.

As for this pertaining to the underlying process of intelligence, I do not really know what that means. By this do you mean that it is bad to torture simply because you are destroying something/someone that happens to be intelligent?

What I am saying is that while morality is always to a large extent arbitrary, much of it is obviously founded on evolutionary principles; i.e., most people would do anything to protect their children or other members of their family, and it is easy to see why that makes sense from an evolutionary point of view.
Now, regarding the man in the building: I think your assumption that it would be rational to go about my business is not necessarily correct. First of all, it is in my best interest to prevent such incidents in general, simply because next time it might happen to me or members of my family (or anyone else I am related to). Moreover, humans have evolved to be very social creatures, and in order for our society to work we need rules. Hence, murder is "bad" for several reasons.
Now, due to what seems to be a "side effect" of this, most people (including me) object to e.g. cruelty to animals unless there is a very good reason (e.g. some medical research). In this case morality is more arbitrary and related to your specific culture, and that is at least part of the reason not everyone shares this view. Hence, I suspect that if we ever managed to build a true AI and it turned out that it actually had a "will to live", a lot of people would object to killing it. I would probably share that view.

I acknowledge that this is NOT a rational view, but people (including me) are not rational beings, and, as I have already written, morality is therefore to a large extent arbitrary (unless, of course, you invoke religion).

 