A few days ago I picked up a copy of a paper by John R. Searle.
He has an analogy in it that is just full of holes (in my opinion). He calls it the Chinese Room analogy (or at least that's how it's now referred to), and he frames it in this rather verbose manner:
Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I'm not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that "formal" means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given to me in the third batch. Unknown to me, the people who are giving me all these symbols call the first batch "a script," they call the second batch "a story," and they call the third batch "questions." Furthermore, they call the symbols I give back in response to the third batch "answers to the questions," and the set of rules in English that they gave me, they call "the program." Now just to complicate the story a little, imagine that these people also give me stories in English, which I understand, and they then ask me questions in English about these stories, and I give them back answers in English. Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view--that is, from the point of view of somebody outside the room in which I am locked--my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese. From the external point of view--from the point of view of somebody reading my "answers"--the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program.

He goes on to explain how in the case of the Chinese questions, he doesn't know what his answers "mean," while in English he does know. Therefore, he argues, computers will never have strong AI unless they somehow prove to have bridged this mysterious gap.
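If it helps to see just how mechanical the setup is, here's how I'd sketch the room in code. This is my own toy version, not anything Searle gives; the symbol strings are placeholders standing in for shapes the person in the room can't read.

```python
# A toy sketch of the Chinese Room as pure symbol manipulation.
# The "rules in English" become a lookup table; the person in the room
# (or the program) just matches shapes and returns the prescribed shapes.
# The symbol strings here are placeholders, not real Chinese.

RULE_BOOK = {
    "SHAPES-Q1": "SHAPES-A1",   # "question" shapes -> prescribed "answer" shapes
    "SHAPES-Q2": "SHAPES-A2",
}

def chinese_room(question_symbols: str) -> str:
    """Return whatever shapes the rule book prescribes; meaning never enters into it."""
    return RULE_BOOK.get(question_symbols, "SHAPES-DEFAULT")

print(chinese_room("SHAPES-Q1"))  # -> SHAPES-A1, produced without any understanding
```

From the outside, a big enough rule book looks like fluent Chinese; from the inside, it's shape-matching all the way down. That's the whole trick of the analogy.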
My question is: what if the "instructions" that are passed to him are in fact lessons in Chinese? Wouldn't he then eventually be able to answer the questions while actually knowing what the symbols meant? It seems to me presumptuous to say that lessons are different from instructions, because in the end they're both simply a list of rules that get applied in a certain order... the lessons are designed to help you discover the pattern beneath the instructions, so that you can answer questions even when you haven't been given an instruction for that exact instance. Lessons help you find the rules for yourself.
Imagine that you were trying to prove that a computer didn't understand the Chinese answers it was giving. You would try to prove this by asking things like, "What does an elephant look like?" "What sound does it make?" "Is it heavier or lighter than a house?" "Does it smell more like caramel or salt water?" "If you pulled its tail, would it start crying?" You'd try to come up with questions that probe rules the computer is less and less likely to have been taught explicitly. You're trying to find out whether the computer can make its own rules about the things it claims to understand.
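Here's a hedged sketch of that probing test, again my own toy example with made-up numbers: one "room" can only recall answers it was explicitly given, while another has extracted a general rule and can handle a comparison it was never told about.

```python
# Contrast between recalling an answer (instructions) and deriving one (lessons).
# The weights are rough invented figures, used only to illustrate the difference.

MEMORIZED_ANSWERS = {
    "is an elephant heavier than a mouse?": "yes",
}

APPROX_WEIGHTS_KG = {"elephant": 5000, "mouse": 0.02, "house": 150000}

def lookup_room(question: str) -> str:
    # Pure instruction-following: if the question isn't in the book, there's no answer.
    return MEMORIZED_ANSWERS.get(question, "no rule for that")

def rule_making_room(thing_a: str, thing_b: str) -> str:
    # A system that has learned the general rule "compare weights"
    # can answer comparisons it was never explicitly given.
    return "heavier" if APPROX_WEIGHTS_KG[thing_a] > APPROX_WEIGHTS_KG[thing_b] else "lighter"

print(lookup_room("is an elephant heavier than a house?"))  # -> no rule for that
print(rule_making_room("elephant", "house"))                 # -> lighter
```

The novel question is exactly where the difference shows up: the lookup room stalls, the rule-making room generalizes.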
I think that the gap John R. Searle is trying to illustrate is really only a gap in the type of data we expect a computer to be able to know, store, and use. Can it find patterns in massive amounts of data, and thereby mimic the phenomenon of understanding that humans prize so dearly? Is comprehension really an all-or-nothing thing?
Really, how well do we understand the concept of an elephant? How many elephants have you actually seen in person versus pictures of elephants and movies of elephants and stories of elephants? In other words, do you really need to leave the Chinese Room in order to understand elephants, or would the lessons suffice just as well? When you saw the elephant in person, did you try to pick it up (to gain firsthand knowledge of its weight)? Did you put it on a scale with a house to see which is heavier? Did you hold a watermelon up to it to see which is greener? No, most of our data is acquired second-hand or by inference... making a best guess based on partial data about similar things. A lot of data is missing. How much does an elephant actually weigh? Are they generally heavier than a Honda Accord? What does their intestinal tract look like? What does their breath smell like? If you licked its eye, what would that taste like? If we had to explain an elephant to someone who had never seen or heard anything like it, would we be able to do anything beyond showing them a video clip of an elephant, telling stories about elephants, and explaining some novel traits that distinguish elephants from hippos and Hondas? To use a clichéd question, could we explain to our friend what "red" is without giving them an example and relating stories that give it an emotional context? Can you explain to an alien how to tie its shoes?
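That "best guess based on partial data about similar things" is something you can write down directly. A minimal sketch, with invented numbers and an invented similarity list: I've never weighed an elephant, but I can guess its weight from animals I already believe are alike.

```python
# Second-hand inference as a best guess over similar known things.
# The weights and the similarity judgment are made up purely for illustration.

KNOWN_WEIGHTS_KG = {"hippo": 1500, "rhino": 2300, "horse": 500}
JUDGED_SIMILAR_TO_ELEPHANT = ["hippo", "rhino"]

def guess_weight(similar_animals: list) -> float:
    """Average over the things we already believe are alike; a guess, not knowledge."""
    weights = [KNOWN_WEIGHTS_KG[animal] for animal in similar_animals]
    return sum(weights) / len(weights)

print(guess_weight(JUDGED_SIMILAR_TO_ELEPHANT))  # -> 1900.0 kg, inferred, never observed
```

Nearly everything we "know" about elephants is built this way, from other data we never verified firsthand either.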
Almost all of our data relies on other data... one could probably even say all of it does. Our understanding is a symbol-manipulation program at its core. There is no objective "understanding" that takes place outside of rules, patterns, and probabilities. We have rule-making capabilities, and therefore we think we understand. That is all.