In his article "Minds, Brains, and Programs," John Searle argues against the notion of strong AI, which holds that an appropriately programmed computer is equivalent to a human mind and has similar cognitive states (Searle, 349). Based on Searle’s definition of strong AI, I will argue that a computer is not equivalent to a human mind because it does not apply meaning to the information it processes.
Many critics of Searle claim that computers have the capacity to simulate the human ability to understand stories (Schank). One such critic, R. C. Schank, argues in favor of this notion with two premises: “(1) […] The machine can literally be said to understand the story and provide answers to the questions. (2) What the machine and its program do explains the human ability to understand the story and answer questions about it” (Searle, 350). To disprove this argument, Searle presents an analogy that simulates a computer program. The analogy involves a monolingual English speaker locked in a room with instructions on how to manipulate Chinese symbols. By following the instructions, the English speaker is ultimately able to answer questions that a native Chinese speaker could answer. However, the English speaker does not attach the same meaning to the Chinese symbols that a native speaker would when answering the questions. This analogy is meant to display the difference between a computer and a human mind with respect to intentionality, which entails the ability to apply meaning.
Machines do not truly grasp the meaning of their actions when instantiating programs. In contrast, a human mind applies meaning to every action that it performs.
Those in favor of strong AI claim that there is no difference between the instantiation of a program and the application of meaning when answering questions, whether by a human or a machine: “One of the claims made by the supporters of strong AI is that when I understand a story in English, what I am doing is exactly the same –or perhaps more of the same—as what I was doing in manipulating the Chinese symbols” (Searle, 351). In this quote Searle presents the view of his critics. However, it is incorrect to assume that instantiation and the application of meaning are one and the same when answering questions. Searle argues against this claim in favor of strong AI: “(1) […] It seems quite obvious to me in the example that I do not understand a word of Chinese stories… the computer has nothing more than I have in the case where I understand nothing. (2) […] We can see that the computer and its program do not provide sufficient conditions of understanding since the computer and program are functioning, and there is no understanding” (Searle, 351). With this counterargument Searle demonstrates that it is erroneous to assume that the English speaker in his analogy ties meaning to the answers he or she outputs. He then examines the conditions of sufficiency: the computer (the English speaker) has a working system for outputting answers, but lacks the conditions sufficient for intentionality (meaning). Computer systems can process information (think) just as biological systems (human minds) do; they simply lack intentionality toward their programs (thoughts). Searle’s argument against strong AI might hold. However, his premise concerning human intentionality versus machine intentionality invites debate over the meaning and source of intentionality as Searle defines it. Searle is vague on this definition, alluding only to the idea that intentionality is the capacity to add meaning.
However, I could argue that he is too vague with this concept and claim that computers do have intentionality (meaning) in their programs and can add meaning to their outputs. What distinctly separates the human ability to add meaning and the