(According to its author, the Turing test justified attributing a form of mind to certain artificial systems, on the basis of a principle of indiscernibility of responses.)
The Chinese room is a thought experiment presented by John Searle in a 1980 article, shortly after Ned Block’s “China brain”. Its purpose is to show that the execution of a computer program by a system, however complex the program may be, is not enough to produce a real mind or a consciousness. In parallel, it aims to convince us that the logical syntax on which every computer program rests is not, on its own, enough to produce meaning, in other words semantics. “Artificial intelligence” can therefore be attributed to a machine only in a weak sense: a program can at most simulate intelligent behavior in it, behavior which, when genuine, is necessarily intentional and conscious. To establish this point, Searle ironically takes up the procedure of the Turing test, a test supposed to demonstrate that a sufficiently sophisticated computer program can be called intelligent in a strong sense. He designs a question-and-answer setup in which the program determining the answers is so sophisticated, and the person answering the questions so adept at manipulating the symbols, that by the end of the experiment the answers given to the questions cannot be distinguished from those a real interlocutor would give, even though the person producing them understands nothing of what he is saying.
The Chinese room procedure can be described as follows. A person who has no knowledge of Chinese (in this case, Searle himself) is locked in a room: the “Chinese room”. This person has at his disposal a large number of Chinese characters, which constitute the database of the Chinese room. He is also given an instruction book in English (his mother tongue) explaining precisely how to associate certain Chinese characters or symbols with others: this is the program of the Chinese room. From the outside, he is handed sets of Chinese symbols which (unknown to him) are called “questions”. In exchange for these symbols, and by following the program’s English instructions, Searle hands back other sets of Chinese symbols, which (again unknown to him) are called “answers to the questions”. From the point of view of the speaker asking the questions, the person locked in the room behaves like an individual who really speaks Chinese. Yet this person has no understanding of the meaning of the Chinese sentences he transforms. He merely follows predetermined rules.
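The procedure can be pictured, very loosely, as a pure symbol-matching program. The following minimal Python sketch is not Searle’s own formulation but an illustration of the point: the rulebook pairs below are invented for the example, and nowhere in the code is the meaning of any symbol represented.

```python
# Toy illustration of the Chinese room: blind symbol manipulation.
# The "rulebook" (the English instruction book) maps input symbol strings
# to output symbol strings. All question/answer pairs here are hypothetical.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I am fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(question: str) -> str:
    """Return an answer by lookup alone, as the person in the room follows
    the instruction book: no step of the computation consults meaning."""
    return RULEBOOK.get(question, "请再说一遍。")  # default: "Please say that again."

if __name__ == "__main__":
    # The output is indistinguishable from a fluent answer,
    # yet the program "understands" nothing.
    print(chinese_room("你好吗？"))
```

To the questioner outside, such a program answers like a Chinese speaker; internally, only string matching has taken place, which is exactly the gap between syntax and semantics that the thought experiment exploits.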
Such a situation is meant to convince us, first, that exactly reproducing the linguistic behavior of a speaker is not enough to speak a language, because speaking a language is not only producing the right verbal responses; it is also signifying or meaning what one says: a mastered use of language is thus coupled with an awareness of the meaning of what one says (“intentional awareness”). Because the presence of even highly sophisticated linguistic behavior is not sufficient to determine whether a system or organism possesses mental states of consciousness and intentionality, such behavior alone cannot establish the existence of genuine intelligence. Searle directs his thought experiment primarily against the “strong” version of the artificial intelligence thesis (“strong AI”), the version first championed by Alan Turing in the 1950s with his famous test. But it aims more broadly at the functionalist conception of the mind, in particular at computationalism, its most radical version but also the most widely accepted at the end of the 1970s.
The Chinese room’s anti-functionalist argument has been the subject of numerous articles and objections. The objection most frequently advanced by functionalists is what Searle, anticipating it, called “the systems reply”. According to this reply, the system of which the person following the manual’s instructions is a part does indeed understand Chinese, even though the person himself does not understand the language. In the system constituted by the Chinese room, the person plays the role of the central processing unit (the processor) of a computer. But the processor is only one of a computer’s many components. In the case of a computer sophisticated enough to think, it is not the processor in isolation that thinks but the whole system of which it is a part, because it is the whole system that makes it possible to produce the appropriate answers. For Searle, however, this objection is inadmissible because it implies the idea, absurd in his view, that the Chinese room would have a consciousness that does not exist at the level of the person providing the answers, even though this person is presumed to be the only conscious being in the room.
(Includes texts from Wikipedia translated and adapted by Nicolae Sfetcu)