Taking a break from the technical papers, I am going to
discuss "Minds, Brains, and Programs," a paper written in 1980 by John R. Searle at the
University of California, Berkeley.
First off, the thought experiment that Searle proposes
concerns whether an artificially intelligent machine can truly understand
anything in a psychological sense. He uses a proposed
experiment (the Chinese Room) as the basis for his argument. In this
experiment, a person who understands no written Chinese
sits in a closed room and receives an input of Chinese characters.
Then, following a set of instructions written in English, the person's native language,
the person manipulates the Chinese symbols into an output. The idea is
that this person could carry on a conversation in Chinese with a native Chinese
speaker, and the native speaker would not be able to tell that the person did
not know Chinese. Searle's point is that machines, at least at that point in time,
could only manipulate data according to a set of instructions, but could
never actually understand what the data or symbols really meant. The room
passes the Turing test in this sense, because a Chinese speaker would not be able
to tell which room had the computer and which room had the person, since both the
human and the computer were manipulating symbols according to the exact same set of
rules (translated into computer language, of course).
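The room's procedure can be caricatured in a few lines of code: a pure lookup table that produces fluent-looking replies while manipulating only uninterpreted symbols. The rulebook entries here are invented for illustration; Searle's point is that no matter how large the table gets, nothing in the program grasps what the symbols mean.

```python
# A toy "Chinese Room": the operator maps input symbols to output symbols
# purely by rule lookup, with no grasp of what either side means.
# The rulebook is entirely hypothetical, made up for this sketch.
RULEBOOK = {
    "你好": "你好！",          # greeting -> greeting
    "你会说中文吗？": "会。",   # "Do you speak Chinese?" -> "Yes."
}

def operator(symbols: str) -> str:
    """Return whatever output the rulebook dictates; understand nothing."""
    # Unrecognized input falls through to a stock reply: "Please say that again."
    return RULEBOOK.get(symbols, "请再说一遍。")

print(operator("你好"))  # a fluent-looking reply, produced by lookup alone
```

From the outside, the exchange looks like conversation; on the inside, it is only symbol shuffling, which is exactly the distinction Searle is after.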
Searle continues his paper
by explaining the difference between weak and strong AI, which is a useful
distinction: weak AI is merely a tool that performs a set of
instructions, whereas strong AI actually understands the instructions it is
performing. His argument is that an appropriately programmed computer cannot
literally have cognitive states or intentionality, at least given the
(1980) design of computers. He spends the rest of the paper
answering criticisms in what I found to be a blatantly biased manner.
He continues by arguing that understanding is black and white: you either understand or you don't. This is not the case, as shown by his critics' argument that a person who understands English can also partially understand French, understand German to a lesser degree, and understand nothing of Chinese. This argument holds true in my book because it applies to me directly, and the two-state definition of understanding makes no sense in my case.

The first argument against the thought experiment, the systems reply, hits the nail on the head. It states that while the person in the room manipulating the Chinese symbols does not understand them, the system as a whole does. Searle's rebuttal is that if the person internalizes the rules for symbol manipulation, then he encompasses the whole system, yet still does not understand Chinese. My response is that whoever wrote the set of rules must be included in the system for the system to function, and therefore the system as a whole would understand Chinese.

The most interesting point against Searle's argument is the "many mansions" reply, which proposes that with sufficient technology it is possible to build a machine with cognition and intentionality. Searle agreed with this proposition but undercut it by stating that if the definitions change, then the original question can no longer be answered. He ended by answering some questions about his belief that AI cannot progress to the point where a program could ever give a machine intentionality, cognition, and understanding.
Overall, this paper was interesting, and I think that, even though he was wrong, Searle advanced the philosophical side of AI and helped open up a world of research into human psychology and artificial intelligence.