John Searle's 'Chinese Room' Argument

The American philosopher John Searle's 'Chinese Room' analogy is a very simple argument designed to show up the limitations of the famous Turing Test, devised in 1950 by Alan Turing, an English mathematician. Turing's basic argument was that if a machine could give answers to a problem indistinguishable, to an impartial observer, from those given by a human being to the same problem - and thus 'fool' the observer into thinking it was human - it could therefore be said to be thinking. Searle's refutation of the Turing Test runs as follows.
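
To make the purely behavioural nature of the test concrete, here is a minimal sketch in Python (the human_reply and machine_reply functions are invented stand-ins, not anyone's actual test): the judge sees only the answers themselves, never how they were produced.

    import random

    # Hypothetical stand-ins: the judge has no access to how either
    # answer is produced, only to the answers themselves.
    def human_reply(question):
        return "I suppose that depends on what you mean by 'think'."

    def machine_reply(question):
        return "I suppose that depends on what you mean by 'think'."

    def judge(question):
        """The interrogator receives two unlabelled answers. If they are
        indistinguishable, guessing which came from the machine is a coin
        toss - and that, for Turing, is the whole criterion."""
        answers = [human_reply(question), machine_reply(question)]
        random.shuffle(answers)  # conceal which respondent gave which answer
        return 'machine passes' if answers[0] == answers[1] else 'machine detected'

    print(judge('Can machines think?'))  # -> machine passes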

Brief Exposition of Searle's Argument

Searle imagines himself shut up in a room with two or three batches of papers covered in Chinese writing. His understanding of Chinese is absolutely zero, just as a computer has no understanding of the problems it is asked to deal with. Fortunately, however, somebody has provided Searle with a list of instructions, or rules, which enable him to correlate the different batches of writing - this list is equivalent to a program being run on a computer. Following these rules, he is able to come up with a set of what he is told are answers to questions, which are perfectly correct and in effect impossible to distinguish from the answers that would have been given by a native Chinese speaker. This, then, is a version of the Turing Test, and Searle has passed it - according to the test, he can now be said to understand Chinese, because he can give answers to questions in the language that are indistinguishable from those given by a Chinese speaker. However, he has understood nothing of the answers he has given - he has simply been blindly following instructions.
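
To see just how mechanical this rule-following is, consider a minimal sketch in Python (the 'rule book' below is an invented, absurdly small one, for illustration only): the program hands back fluent Chinese answers while storing no representation whatever of what the symbols mean.

    # A hypothetical, trivially small 'rule book'. Each rule correlates one
    # batch of symbols (a question) with another (an answer). The strings are
    # opaque tokens to the program, just as the characters are to Searle.
    RULE_BOOK = {
        '你叫什么名字？': '我没有名字。',  # 'What is your name?' -> 'I have no name.'
        '你会说中文吗？': '我会说中文。',  # 'Do you speak Chinese?' -> 'I speak Chinese.'
    }

    def chinese_room(question):
        """Look the question up and return the correlated answer. Nothing
        here represents, or could represent, what any symbol means."""
        return RULE_BOOK.get(question, '对不起，我不明白。')  # 'Sorry, I do not understand.'

    print(chinese_room('你会说中文吗？'))  # fluent output, zero understanding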

Searle contrasts this with a similar hypothetical situation, in which he is answering questions in English instead of Chinese. He is still working with a set of rules in mind, of course - the rules of basic English grammar and so forth. Now, however, he not only answers the questions but - being a native English speaker - also understands them. From the point of view of the Turing Test, however, his answers are no better or worse than the ones he gave to the Chinese questions, and therefore, by the test's logic, he understands English no better than he understands Chinese. But the difference could not be more marked. Understanding, in this case, is not a matter of degree: he understood all of the English and none whatsoever of the Chinese.

The Limitations of Artificial Intelligence

Computer thinking, Searle argues, is a simulation of a certain, very limited, aspect of human thinking, not an emulation of it. We would hardly expect a computer simulation of a fire to burn anything, for example, or a computer simulation of a rainstorm to make us wet. What, then, are the grounds for assuming that a machine designed to process binary data could perform, in anything more than an utterly superficial form, the complex range of tasks that a human brain is capable of - understanding context-specific jokes, or thinking in terms of metaphor and analogy, to take just a couple of examples?

Searle's intention here is to illustrate the limited nature of mechanical ingenuity in comparison with the varied complexity of human intelligence. The logical/problem-solving approach is just one of many varieties of intellectual resource that the human mind is capable of drawing on. It is worth noting, for example, that Searle chose to express his argument by way of analogy. This is precisely the kind of intelligence - metaphorical intelligence, understanding something by way of comparison with other things - that the human mind uses habitually, but of which a machine has no grasp whatsoever. A machine would not even have any way of understanding the Chinese Room argument itself - the analogy simply wouldn't compute!

In order to argue that present-day computers can think, then, according to Searle, we would have to subscribe to a very narrow definition of what thinking actually means. Computers would need to develop a far wider range of intelligence before they could begin to compete with human beings at anything other than processing large amounts of information. In fact, the claims of artificial intelligence supporters rest on a rather one-sided view of intelligence - that of logic and problem-solving.

Concluding Thoughts

Searle's argument is not flawless. For example, many artificial intelligence researchers would contend that Searle has never fully answered the 'systems reply' - the argument that the constituent parts of a machine do not need to demonstrate intelligence in order for the system as a whole to be intelligent[1]. However, most people on either side of the artificial intelligence fence would acknowledge that no machine has yet managed to demonstrate intelligence conclusively. The Deep Blue computer that defeated Garry Kasparov in 1997 is a case in point. In spite of its remarkable achievement, this computer was little more than a highly efficient number-crunching machine that was able to use this one immense talent to defeat its rather more versatile, but all-too-human, and therefore error-prone, opponent.

It is, of course, safe to predict that the machines of the future will be so far advanced from current technology as to be virtually beyond the scope of our limited imaginations. It would, therefore, be foolish to make rash predictions either way about the likelihood or otherwise of artificial intelligence ultimately being a success. It is possible in principle that one day we will make machines that can satisfy a more complex, less behaviourist understanding of what it means to think. But if the machines of the future are to show intelligent life, they will have to be qualitatively different from anything we have ever attempted to build before. They will have to be self-organising and essentially 'animate' (not machines at all, really), capable of evolving to meet the changing needs of their environment. The difficulties encountered in developing neural networks, to take one current direction in artificial intelligence research, would seem to indicate that self-organising, evolving systems, if they ever happen at all, are a long way off.

Further Reading

  • John R Searle, 'Minds, Brains, and Programs', Behavioral and Brain Sciences, vol. 3 (1980); reprinted in Hofstadter and Dennett (see below)
  • John R Searle, 'Is the Brain's Mind a Computer Program?', Scientific American, vol. 262, no. 1 (January 1990)
  • Douglas R Hofstadter and Daniel C Dennett (eds), The Mind's I: Fantasies and Reflections on Self and Soul (Penguin, 1981)
[1] Although this argument, too, is not without its flaws. After all, as Searle points out, in his Chinese Room neither the system nor any of its constituent parts have any understanding at all of what they are doing.
