Monday, August 30, 2021

3a. Searle, John R. (1980) Minds, brains, and programs

Searle, John R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3(3): 417-457

This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4.





see also:

Click here --> SEARLE VIDEO
Note: Use Safari or Firefox to view; does not work on Chrome.

77 comments:

  1. When reading this paper, I wasn't really sold on Searle's arguments and his refutations that programs couldn't have intentionality and understanding. It seemed to me like he was listing everything that understanding wasn't but never what it was, which made it seem like he believed understanding to be a sort of transcendent property that only humans could have - which would defeat the point of the whole paper and seemed to me like a sort of dualism.

    That was, however, until I got to the last part where he explains (from what I understood) that intentionality cannot come strictly from the brain and information processing, but has to be rooted in the biological/chemical properties of the body in the same way that the other capacities of our human bodies are. This was to me quite a convincing argument, and it made me think of the idea of embodied cognition. I think drawing from that concept allows us to go further than Searle, and argue that programs cannot think not only because cognition depends on the human body, but also because it depends on the environment surrounding that body.

    ReplyDelete
    Replies
    1. I agree with Louise that I found it a little difficult to follow his points through his refutations, but upon reading through the section of his paper where he asserts his claims about intentionality, summarized in the last sentence as: “whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality” I began to understand more fully the points he was making. Because there is a difference between inputs and outputs and “understanding” (as illustrated in the Chinese room example), there is no room for an understanding of intentionality within strong AI (which sees both examples as equal in terms of simulating understanding), rendering it unable to tell us about human cognition. It made sense to me how he connected this to behaviorism: "Since appropriately programmed computers can have input-output patterns similar to those of human beings, we are tempted to postulate mental states in the computer similar to human mental states." But my question then becomes, why is no program sufficient to simulate intentionality?

      To clarify, it is this point: "...no purely formal model will ever be sufficient by itself for intentionality because the formal properties are not by themselves constitutive of intentionality, and they have by themselves no causal powers except the power, when instantiated, to produce the next stage of the formalism when the machine is running. And any other causal properties that particular realizations of the formal model have, are irrelevant to the formal model because we can always put the same formal model in a different realization where those causal properties are obviously absent." that I am currently having trouble grasping. I will do my best to strengthen my understanding of Searle’s point, but if anyone has any thoughts I would love to hear them.

      Delete
    2. Louise, actually, Searle's point has nothing to do with embodiment (though it did help lead, years later, to embodiment as a proposed solution).

      It is much simpler:

      "When I read or write English, I know what it feels like to understand it, to understand what it means. When I read and write the squiggles and squoggles that go into and come out of the symbol-manipulations I do when I execute the Chinese T2 program, I know that I am not understanding any of it, as surely as Descartes knows that it feels like he has a headache (even if he can't be sure he has a head)."

      Madelaine, that Searle quote is one of the most un-kid-sibly quotes you could ever find. No one can be blamed for not understanding it!

      What does "intentionality" mean?

      And what are these mysterious "causal powers"?

      Read 3b...

      Delete
  2. “But precisely one of the points at issue is the adequacy of the Turing test. The example shows that there could be two "systems," both of which pass the Turing test, but only one of which understands;”
    When Searle addresses the adequacy of the Turing Test on page 6, I was struck by the idea that one system can pass the TT and understand while the other passes without understanding. This made me question how likely it is that the TT can actually be used to understand the processes of cognition. If it is possible for a system to pass the TT with no understanding, wouldn't that mean the two systems are indistinguishable in their capacities, whether at T2 or beyond, so we wouldn't be able to differentiate the understanding system from the non-understanding one? Thus, the test seems quite limited.

    ReplyDelete
    Replies
    1. Searle's Argument does not invalidate the Turing Test. It just shows that if T2 can be passed by computation alone (i.e., computationalism, aka "Strong AI"), then it is not a strong enough test for reverse-engineering cognitive capacity.

      (But if T2 can only be passed by a robot that can also pass T3, even if T3 is not tested directly by T2, then T2 is strong enough.)

      Think about it, then read 3b.

      Delete
  3. Before replying to the rebuttals, Searle once again elaborates on what he means by "understand," which makes me wonder whether understanding is the process that produces intentionality, and whether it is a necessary condition or a necessary and sufficient one. Or does the degree of understanding determine whether intentionality can be produced? I like this argument very much, but I may be a little off the point. I am curious about the rule book mentioned in the argument, in other words the instruction table. Is it that people naturally feel that if all the rules could be exhaustively listed, or a general rule found, the machine would then speak like a human? If we could analyze the specific process by which the human brain understands semantics and concepts, could we write such a book of rules for a foreigner to use? Certainly, the author does not think that looking things up in such a rule book helps the user establish a connection between an unfamiliar character and its corresponding representation, let alone understand the language. I wonder whether the difference between a human and a machine lies in what happens after executing each command or processing the information. Under any stimulation, the neural network of the human brain undergoes changes and is always being updated, whereas the machine has nothing of the sort after the operation ends. Is this when the process of understanding happens?

    ReplyDelete
    Replies
    1. What is "intentionality"?

      Forget that word.

      You know what it means to understand. You understand this sentence.

      You understand this sentence too: 你明白這個

      But you don't understand this one: Ezt nem érted (unless you look it up in google translate).

      Understanding is three things: (1) it is a T2 capacity to do things with words, and (2) it is a T3 capacity to do things with the things the words refer to, and (3) it is what it feels like to understand the words and be able to do (1) and (2).

      In week 5 (on symbol grounding) we'll discuss how and why T2 has to be "grounded" in T3. In week 10 we'll discuss how and why it is "hard" to explain (3).

      The "rule table" is just the (hypothetical) algorithm (program, software) that can successfully pass T2. If computationalism ("cognition is just computation" -- aka "Strong AI") is wrong then there can be no such "rule table." It requires a T3 robot, which is not just an algorithm.

      What's needed is not just "a connection between the unfamiliar character and its corresponding representation" but a connection between the symbols (words) and the things they refer to (both concrete objects in the world, like apples, and abstract combinations of their features, like "the kinds of things vegans don't eat")... (Week 5, on symbol grounding)

      "Representation" is as vague as "intentionality."

      What does "simulation" mean?

      About the third component of understanding (feeling), see replies above.

      Delete
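
      A minimal toy sketch of what such a purely formal "rule table" amounts to (the symbol strings and rules below are invented solely for illustration; no actual T2-passing program exists). The only thing the rules consult is the shape of the input symbols, never their meaning -- which is all that Searle, or a computer, executing the program would have to go on:

      # Hypothetical toy "rule table": maps input symbol shapes to output
      # symbol shapes. Nothing here consults what any symbol refers to.
      RULE_TABLE = {
          "squiggle squoggle": "squoggle squiggle squiggle",
          "squoggle": "squiggle",
      }

      def manipulate(input_symbols: str) -> str:
          # Return whatever output the rules dictate for this input shape;
          # if the shape is unknown, emit a default symbol.
          return RULE_TABLE.get(input_symbols, "squiggle")

      print(manipulate("squiggle squoggle"))  # -> squoggle squiggle squiggle
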
  4. One point that Searle raises that I found very interesting is the idea of literal versus non-literal understanding. He discusses how we often attribute “understanding” in a metaphorical way to inanimate or non-human items. He argues that we make this connection because we see things around us as extensions of our own minds, our purposes, and therefore our intentionality. He argues that this non-literal understanding is just different from how we as people understand English. While Searle does not clearly substantiate why this understanding is different, I think we as humans can sort of intuitively feel this difference and we know what Searle is getting at. Our idea of literal understanding encompasses many intangible, unspoken things like emotions and awareness of our consciousness. I think we equate understanding with consciousness, which makes it debatable whether a computer can have a conscious, intentional understanding of its actions. I think advocating for the presence of human-like understanding in a computer is still a metaphorical extension of our minds because we can’t pinpoint what consciousness is. Tests of intelligence, awareness, or the capacity to learn don’t truly capture the essence of consciousness. I get stuck in a circle trying to understand this, where we need a proper definition/mechanism of consciousness in order to prove a computer can do it, yet we need to make a computer do it in order to find the proper definition/mechanism.

    ReplyDelete
    Replies
    1. What do you mean by consciousness?

      Whether a sentence is literal ("the cat is on the mat") or metaphorical ("my love is a red, red rose") -- either way, understanding the words feels like something: see (1) - (3) in the replies above.

      Delete
    2. Hi Leah, I find this same problem of getting stuck in a circle trying to understand these concepts. There is no agreed upon definition of consciousness, but I think most of us would agree it has something to do with being self-aware. However, this may bring up the "other minds problem" we discussed earlier, where we don't know if other minds are conscious either, so we can't apply that argument to computers. We don't know if computers are conscious, but we don't know if other people are conscious either. Therefore, if we equate understanding of consciousness with intentional understanding, doesn't intentional understanding not matter?

      Delete
    3. "Consciousness" (feeling) is any felt state, from ouch to Cogito ergo sum. I can feel what it feels like to think about feeling, or about "me," feeling -- but just an amphyoxus, feeling pain, is already "conscious" -- full-blown "consciousness."

      "Intentionality" is just jargon for the fact that it feels like something to think (to have an apple "in mind" when you are saying something about "apples"). Forget about intentionality and just think about feeling. It will never mislead or confuse you on the kinds of questions we are discussing here.

      Delete
    4. Leah, I believe the same section stood out to me in the text. Searle says that we, “often attribute ‘understanding’ and other cognitive predicates by metaphor and analogy to cars, adding machines, and other artifacts…” (p. 5). Understanding is not naturally afforded by the appearance of non-human objects (and oftentimes understanding is not even obvious in other humans), so it must be that we ascribe understanding and cognitive abilities to non-human objects for reasons other than that they appear to be conscious in their actions. I can think of two reasons, the first being that we are linguistically limited in describing actions outside of human abilities. Consider the example Searle gives, "The thermostat perceives changes in the temperature." What other words can we place in lieu of “perceive”? Maybe a thermostat senses, measures, represents or detects temperature. Most of these words are still derived from human abilities, and so we are limited to describing what objects can do using only the words we’ve made to describe our own abilities. Additionally, perhaps it is easier to ascribe cognition or understanding to each tool, technology, animal, and object we encounter, than it is to empathize with non-understanding, non-consciousness, because consciousness is all we know for certain to be true. If we observe an animal in its natural environment, we may often anthropomorphize its understanding of the world around it, because the only way we are able to understand the world around us is through this one human lens. It is harder to put ourselves in the shoes of the animal who has no minute-to-minute understanding of the world around it, than to project our own understanding onto it. For this same reason, we cannot imagine computers computing without the understanding which we have when we compute.

      Delete
  5. I find Searle's argument strong overall and could see why it poses a major challenge to computationalism, although after last week's class I could be already influenced by criticisms that align with his.
    While I was reading there were minor points I was unconvinced by. In his reply to possible objection III (Brain Simulator), Searle uses the analogy of water pipes connected in the same way a brain's synapses are organized. I thought offering an inanimate, non-computational object was preying on our intuitions about what thinking/computational machines look like. (I also found this example unconvincing for the same reason, and because the hunk of metal outputs something very simplistic: "We are asked to accept it as a discovery of strong AI that the hunk of metal on the wall that we use to regulate the temperature has beliefs in exactly the same sense that we, our spouses, and our children have beliefs".)
    "And any other causal properties that particular realizations of the formal model have are irrelevant to the formal model because we can always put the same formal model in a different realization where those causal properties are obviously absent." (p.11) This could be a reply, but my doubt was that although the physical material could vary, carefully reconstructing the connections found among neurons could still matter for generating intentionality. But his response in this section does undermine the foundation of Strong AI around behavioural equivalence, since according to the original hypothesis of Strong AI we shouldn't need to replicate all the connections of the brain.
    Another mental debate I had while reading was over his assessment of the 'combination robot' that has pretty much everything. Searle says, "...as soon as we knew that the behavior was the result of a formal program, and that the actual causal properties of the physical substance were irrelevant we would abandon the assumption of intentionality" (p.9).
    "Isn't this because he assumes that a formal system ≠ intentionality?" I thought.
    However, he does reiterate that "The thrust of the argument is that it couldn't be just computational processes and their output", which "can exist without the cognitive state".
    To nitpick another minor point: "Indeed the belief as such hasn't even got a formal shape in this syntactic sense, since one and the same belief can be given an indefinite number of different syntactic expressions in different linguistic systems." I disagree with this reasoning, since the syntax of a language is not purely formal, and I do not think linguistic differences show that a belief has no formal shape.
    In the end, he asks a series of questions, the format of which reminded me of Turing's paper (could it be intentional?). Then, he brings us to focus on the central points of his argument and I would jot down the following phrase as the take-home-message of this paper. "To confuse simulation with duplication is the same mistake..."

    ReplyDelete
    Replies
    1. Good points, April. Read the other replies here and also read 3b. That should answer most of your questions.

      Delete
  6. Searle brings up in this text that the individual in the Chinese Room incorporates the entire system (which is believed to include pen, paper, etc.) and that because they do not understand Chinese, the system does not. I do not fully agree with that statement and I would like to point out that the symbol manipulation the person does in the Chinese Room does not stray too far from what people do when translating sentences, especially without knowledge of the original language. By doing translation (for example, translating Chinese to English by matching characters to words and finding patterns in sentence structure), the individual may gain what I interpret to be an “understanding” of Chinese. An interesting quote on page 5 is the following: “[…] while the man in the internalized systems example doesn't understand Chinese in the sense that a native Chinese speaker does (because, for example, he doesn't know that the story refers to restaurants and hamburgers, etc.), still ‘the man as a formal symbol manipulation system’ really does understand Chinese.” The person undertaking translation is gaining a semantic (and a bit syntactic) knowledge and understanding of Chinese, while the person in Searle’s room is only gaining a syntactic understanding. In my own definition of “understanding”, both individuals understand Chinese in their own ways via computations, but only one is able to manipulate the symbols to their liking because they were able to translate Chinese symbols into meaningful English symbols. A computer, in theory, could be programmed to master English and understand the semantics of it and could, in theory, understand another language via the same process. Its ability to manipulate and “understand” Chinese would therefore be the same as native Chinese speakers’. In this perspective, “understanding” cannot just be a “biological” or “animal” phenomenon. However, if we expand the definition of understanding to other phenomena and behaviours, would the previous assertion still hold?

    ReplyDelete
    Replies
    1. I had a similar thought while reading Searle's paper. He seems to assume that the person in the Chinese Room has no capacity for pattern recognition or deduction, and doesn't explicitly state this assumption. If that person had some basic understanding of linguistics, they could likely deduce some meaning from the combination of a full knowledge of syntax and a partial understanding of meaning in Chinese.
      However, I cannot agree that a computer could understand a language, since I do still agree with Searle’s overall point that computers are lacking understanding, which I take to be roughly synonymous with such concepts as qualia and consciousness. I believe these are inherently biological, and are required for semantic understanding. Even a computer with incredibly sophisticated pattern recognition and deduction would have no original point of understanding from which to produce meaning. Meaning is perceived by the brain with respect to subjective experience. Since no one has discovered the origin or nature of consciousness, which is at the origin of subjectivity, how low are the odds that a computer engineer could just create a computer that happens to be aware?

      Delete
    2. Camille, the computer that is executing the (hypothetical) T2-passing program, and any other thing, including Searle, that is executing that same program -- all are in the same boat. If one does not understand Chinese, none do. Searle says (correctly) that he does not understand Chinese, and would not if he were executing the T2 program. Do you not believe him? The program is not doing translation. It is doing whatever needs to be done to Chinese input to produce output indistinguishable from that of a real Chinese-understanding person, for a lifetime. What a person can learn from a translation is not relevant. Searle is just following orders, just as the computer does.

      Milo, yes, what is missing is what is described as (2) and (3) in the replies above: (2) robotic capacity and (3) feeling. All those other words -- qualia, consciousness, subjective experience -- are just redundant paranyms for feeling (or, in latinate form, sentience). See 3b and then week 5 for what's missing in T2 with the lack of (2): T3 grounding.

      But showing that the feeling of understanding is lacking does not explain how to produce feeling. That's the "hard problem."

      Delete
  7. It took me quite a long time to truly understand Searle's Chinese Room Argument and what it entails. After reading the text multiple times, and looking at some other work of Searle's, he convinced me of his argument.

    First, I think that we need to make it clear that Searle is in no way saying that a machine (even a man-made one) cannot think. It is attributing thinking to computation that is the issue here. He shows through the CRA that a Turing computational program requires semantic attribution (an observer) for any information to be extracted. In other words, computational information can only be epistemologically objective (it is observer-dependent).
    Thinking requires semantics (meaning attribution), which is not something that a Turing machine can do since it only works with syntax (symbol manipulation).
    Turing computation is extremely powerful, and it might be used as a tool to gain insight into consciousness (the weak AI argument), but it would be impossible for something merely composed of symbols (observer-dependent) to be able to think (Strong AI). In other words, Turing machines are not natural and cannot by themselves give us information on "consciousness/thinking/cognition/or any other word you want to use".

    -Elyass Asmar

    ReplyDelete
    Replies
    1. What do you mean by "semantics" and how can you get it into a machine of any kind (including us)?

      Delete
    2. I really like the point you made about computational information being observer dependent. In a way, I find this argument similar to the “if a tree falls and no one is there to hear it, does it make a sound?” question. If there is no observer to interact with the computational information, is it still considered information (stimuli that reduces uncertainty)?
      Furthermore, it does follow from the CRA that Turing machines are only syntactic as they function on the squiggles and squoggles supplied. The semantics of the squiggles and squoggles in question are only ‘information’ when someone/thing can decipher it and feel understanding. This leads to the other minds problem, as we are not able to know if anyone else feels understanding, let alone if a machine could as well.

      Delete
    3. AD, I replied to you about "observer-dependence" elsewhere.

      Delete
    4. Semantics would refer to the meaning of the information, which in the case of computation would have to be given by us (humans). The question of how we have semantics is a good one. I feel like we would need to first solve the "how" of consciousness to be able to answer that question.

      Delete
  8. I was not completely convinced by Searle's response to Berkeley's systems reply that the person in the room does not understand the story in Chinese but the system does understand. Searle suggested that the subsystem that understands English and the subsystem that understands Chinese are completely separate and "not even remotely alike". He illustrated that for "hamburgers" in the English subsystem, the Chinese subsystem knows only squiggles and squoggles. Yet, I was not fully convinced by him because it is natural for me to think that we need to have the semantic concept of "hamburger" in one of the systems for the other system to understand its meaning. In this way, the two systems seem not to be separate, as they are connected by the concept of "hamburger" in the "whole system".

    ReplyDelete
    Replies
    1. If Searle memorizes the (hypothetical) Chinese-TT-passing program and executes it on Chinese input, he, Searle, like the (hypothetical) TT-passing computer, is simply executing the code. He is the whole system. Searle's Argument is that executing the code does not produce understanding in him, so not in the computer either.

      What does not work for the whole system does not work for "subsystems" either, unless you believe that memorizing and executing a program could induce multiple personality disorder, with one "subsystem" unaware of or not understanding another...

      Read 3b.

      Delete
    2. I sort of agree with Xingti and I also doubt his argument towards the System Reply: If the person internalizes all elements of the system, how do we know he still doesn’t understand Chinese? It seems all human beings learn the language by memorizing rules of the language, by referencing words, for example, ‘hamburger’ to ‘汉堡’, the real entity in the world. I feel like since language is all about references, he should be able to know what ‘汉堡’ corresponds to in the real world, that is, ‘hamburger', and in this way he actually understands Chinese, no? Maybe memorization is where understanding happens. But on the other hand, the computer can memorize these rules too, so there must be something different going on between the computer and the person's memorization.

      Delete
    3. Upon thinking about it, an important difference is that humans have spontaneity while machines don't. Humans can act spontaneously, while machines can only be started and operated by something else (a program or a person). They can't start themselves or generate programs by themselves.

      Delete
    4. Zilong: Learning involves some remembering, but learning is not just memorization, unless all you are trying to learn is to repeat the words you have memorized.

      And don't forget that what Searle memorizes is the symbol-manipulation rules for passing T2.

      And real language learning does not just involve words; it also involves the world that the words are about. T2 can only connect words to words. It requires T3 to connect words to the world.

      Minds can't start themselves either; they need bodies. (Turing, 2a, discussed many of these questions.)

      Delete
    5. I believe that's the reason why Searle's argument that "memorizing the rulebook does not produce understanding of Chinese" only works for T2 Turing machines. T3 robots have an understanding of the world and can connect the word "hamburger" to a real-world hamburger, whereas T2 Turing machines only have the symbol-manipulation algorithms from memorizing the rules, without being able to understand what a hamburger is in the real world. Yet, does that mean T3 robots do have understanding of the real world (because they are able to interact with the real world, also learning languages by relating them to it)? If that's true, then Searle's Chinese Room Argument does not work for T3 robots.

      Delete
    6. Hi Xingti!

      I’m fairly sure that all Searle’s Chinese Room Argument shows is that even if computation is all that is going on in a robot passing T2, it still would not understand. He demonstrates this through his own inability to understand. As such, he proves that computation cannot be all there is to cognition. However, his argument would not work for T3 robots because there is no way for computation alone to pass T3 or T4.

      Delete
    7. Hi Bronwen,

      Yes I agree! And we can't test whether his argument works for a T3 robot because of the other-minds problem (we can never be the T3 robot, so we can't know whether it understands or not).

      Delete

  9. The thesis that cognition is computation holds that mental states are simply computational states, such that the hardware (the brain itself) is irrelevant; only the software, the algorithm, matters. This thesis is not dualism, contrary to what Searle wrongly assumes. As we have discussed in class, the theory that cognition is computation cannot explain everything; in particular, it feels like something to think. In the reading, the experimenter manipulates the Chinese symbols and produces answers that one could think were given by a human having a conversation. Even though an actual human is giving these answers, he does not understand any Chinese; the answers are produced by simple symbol manipulation. Searle still feels something as he does these manipulations, just not what one would feel when understanding, thinking and talking in Chinese. This is an example of a human doing computation, and it clearly shows that computation isn't representative of cognition. Searle wrongly argues that cognition cannot be computation because computation alone cannot think or understand; this doesn't fully demonstrate that cognition cannot be computation in part, only that it cannot be all computation. If cognition is a hybrid, partly computation and partly not, would T3 then be sufficient to explain cognition? Why would T4 be necessary? Is understanding absolutely necessary to explain cognition?

    ReplyDelete
    Replies
    1. I can't quite discern whether you are really understanding Searle's Argument or just expressing another Granny objection about computers and "programming"!

      Yes, explaining how and why organisms feel, including how they feel understanding, is part of the explanatory burden of cognitive science (which is to reverse-engineer cognitive capacity) since understanding is undeniably a part of cognitive capacity. But it is the "hard" part.

      Delete
  10. In this paper, Searle is undoubtedly expressing a strong belief that in order to understand cognition, one must look at how it is implemented dynamically, that is, how the brain works. Searle thinks computationalism is not just incomplete but quite wrong, that the only systems that can cognize are brains (T4), and that the only way to explain cognition is by studying and explaining brains. Using the Chinese room argument, he says that if there were some robot that could pass T3, it would not account for human (or animal) cognition. The Chinese Room Argument accomplishes the goal of demonstrating that programming a digital machine to understand language may give the illusion of comprehension, but is unlikely to achieve true comprehension. As a result, the "Turing Test" is not bad but just insufficient. The thought experiment, according to Searle, emphasizes the fact that computers only employ syntactic rules to rearrange symbol strings and have no concept of meaning or semantics.
    I agree that if a computer passes T2 it doesn't necessarily cognize, but I still find it unclear whether a computer that passes T3 cognizes.

    ReplyDelete
    Replies
    1. See other replies to understand how Searle's Argument only works for T2 (if passed by computation alone) and not for T3. (With T3, there's only the other-minds problem left, but Turing says that's as close as cogsci can get.)

      ["Stevan Says" that only a T3 could pass T2. (Why?) We will get to this in Week 5 on symbol grounding.]

      Delete
  11. I found one of Searle’s rebuttals against the Systems Reply difficult to grasp. In this particular argument, Searle argues against McCarthy, who stated that machines have beliefs. I was confused as to why beliefs were being discussed in this paper. To me, having “understanding” in the sense that we understand symbols and can attach meaning to them and having beliefs are two very different things. I believe that a definition of “beliefs” would be quite useful here. For example, a toddler can understand symbols and attach meaning to them, such as understanding that the letters c-a-t spell cat and being able to point out a cat in a set of images. However, I am not sure that a toddler understands enough to have true “beliefs”, as I think that this is a separate thing.

    ReplyDelete
    Replies
    1. "Beliefs" are philosophers' favored jargon.

      If I say (and mean, and understand) the sentence that "the cat is on the mat," then don't I also believe it's true (unless I'm lying)?

      All these states (saying, meaning, understanding, believing, seeing, wanting, knowing (that's a tricky one!)) are (among other things) felt states.

      Sentences on a page, or a computer screen, or executed in a computer's hardware, are no more felt than states of a rock, or a rocket, or a (toy) robot.

      But some of a T3's (Eric's) states may be felt ones. In any case, because of the other-minds barrier, there is no way to know for sure (other than "Searle's Periscope," which only works for T2, and only if it can be passed by computation alone).

      (Can you explain this to kid-sib? If so, you understand it.)

      Delete
  12. One response to Searle's Chinese Room argument that I found particularly useless (as, I believe, did Searle) is the combination reply out of Berkeley and Stanford. They proposed a robot with a computer in the cranial cavity programmed with all the synapses of a human brain, which behaves indistinguishably from a human, and altogether is a unified system. Searle says that we would have no more problem assigning intentionality to this robot than we do to a dog, UNTIL we find out that its behaviour is the result of a formal program. Though I think Searle's defence is definitely sufficient, I still struggle to find any point to this thought exercise in the first place. There are many other things that define what it means to be human other than intentionality and consciousness. If the goal is to understand the nature of intentionality, simulating everything that makes us human, I think, actually takes us farther from a solution and eliminates any deductive power.

    ReplyDelete
    Replies
    1. Searle's reply here is wrong, and that's important. A robot, unlike a computer, is T3, not T2, and cannot be implemented via computation alone. Read the other replies in weeks 1-3.

      Searle can become T2 (if it can be passed by a computer), and can show, thanks to his Periscope, that T2 does not understand (Chinese).

      Delete
  13. “No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding actually understood anything?”
    I found this idea of Searle’s eye opening. Indeed, what can tell us if a computer has the capacity to understand like we do based on intentionality? However, the idea of intentionality and the idea of the causal relationship between brain and mind sprout a lot of questions in my mind. Is the mind necessarily the result of the brain’s activity? Could the brain be just one part of a whole system involving other parts of our bodies producing the mind as a result? Could the brain be a transmitter for something unknown that makes the production of a mind possible? All these questions don’t necessarily refute the fact that a brain is necessary for the mind but what’s the relationship between the two? Not only that, but on the topic of intentionality, why can’t I control the cells in my body intentionally? Why can I focus my attention (attend to stimuli) willingly instead? In my opinion, the brain might be causal to the mind but what kind of mind was Searle describing? Would he mean feeling or intentionality or something else? This is what made his arguments less convincing to me because of the difficulty in grasping what the mind was to him.

    ReplyDelete
    Replies
    1. Cogsci is ("just") about reverse-engineering organisms' cognitive capacities -- to do and feel what organisms can do and feel. All these other questions are interesting, but first grasp what cogsci's basic questions are about and how it can go about its business trying to answer them.

      Delete
  14. When Searle writes "Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena", I am not surprised at this view. However, this makes me curious whether other thinkers in Cognitive Science agree or not. If I have this right, this means that to Searle, having a T3 robot isn't enough to prove it has intentionality like humans do. This means that even if it can seemingly do everything a human can, its sentience isn't at the same level because it isn't made up of biological parts.

    I get his reasoning and it makes sense, however, this has made me more curious about how widespread a belief it is that only biological beings can possess intentionality and whether there is a counter-position to Searle.

    ReplyDelete
    Replies
    1. Searle does not discuss T2 vs. T3. He jumps straight from T2 and computation to T4 and secretion.

      Delete
    2. Laurier, let me know what you think about this interpretation, but what I took Searle to mean when he said "Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena" was not that intentionality (aka consciousness, or the feeling of understanding) could only be manifested by a biological system, but just that the physical system has something to do with this manifestation. In other words, he was not saying that only a carbon-based machine (or however we’d like to define biological) could have intentionality, but that intentionality is not implementation-independent. And just as it would seem absurd to us to think that other things physical systems do, like producing sugar or milk, might be implementation-independent, it should seem strange to assume thinking (consciousness, expressing intentionality - there are too many words at this point haha) would be implementation-independent.

      So, in response to a computationalist who argues that consciousness = computation (or, in Searle’s terms, that intentionality could be a product of computation), and that this could be shown by running a computation-based program on a robot to produce everything a human can do, Searle’s response would not be “oh, but this can’t really be the same because the robot isn’t made of biological parts”, but rather that, if every part of human action (including intentionality, or the feeling of understanding) were produced in a robot, this would have something to do with the robot’s physical makeup, rather than being a product of the program itself. If a T3 robot had intentionality, it would not be solely because of the program it is running - would not be explained by the computations it is performing - so the computationalist could not say this T3 robot is proof that consciousness = computation.

      Delete
  15. Searle says that "no purely formal model will ever be sufficient by itself for intentionality because the formal properties are not by themselves constitutive of intentionality".
    Here, is Searle saying that formal programs can’t have perception, thinking, understanding, etc., or that if they do, they “perform” them unintentionally? As he says, formal models can only act in response to rules -- so does this mean that "intentionality" as Searle uses it is synonymous with free will? I think I'm having trouble understanding how intentionality relates to a program's ability to help us understand cognition (many cognitive functions are not intentional, which is why I think I'm misunderstanding Searle's definition of the word in connection to cognition).
    Moreover, Searle says we must assume a machine/robot that's indistinguishable from humans both externally and internally (T4) has intentionality until proven otherwise. This is a kind of machine that we will likely never be able to create -- so which, if any, kind of program would Searle consider useful for understanding cognition?

    ReplyDelete
    Replies
    1. I think this is a great point: "many cognitive functions are not intentional". If the goal of cognitive science is to reverse engineer the mind, why does intentionality matter?

      Delete
    2. Forget “intentionality” (see other replies) and focus on doing and feeling, because that’s what cogsci is about. (Searle’s “intentionality” does not refer to doing things intentionally [deliberately, by choice]. It refers to your intended meaning when you speak or think.)

      Understanding something and meaning something both have a doing component and a feeling component. What are they?

      “Free will” is a feeling. It’s the feeling that “I” am causing what I’m doing.

      But cogsci’s (“easy”) problem is to reverse engineer the causal mechanism that produces (1) organisms’ doings, and, more important, (2) their capacity to do what they can do. How and why can they do what they can do?

      And cogsci’s “hard” problem is to reverse engineer the causal mechanism that produces (3) organisms’ feelings and, more important, (4) their capacity to feel what they can feel. How can they feel what they can feel? And why can they feel at all?

      Delete
    3. Doesn’t cogsci’s “hard” problem run into the other minds problem? If a machine can do exactly what humans can do, how can we tell if it’s feeling? We don’t have a measure of whether or not humans are feeling, so how would we determine if a machine could ever have that capacity?

      Delete
    4. Melody, yes, the hard problem of cogsci does run into the other minds problem.
      However, in Searle’s scenario where he becomes the (T2) machine, he is literally in the machine’s shoes and can tell if it is understanding or not. Thus, just for this scenario, the other minds problem is penetrated.

      Searle tells us that after the hypothetical thought experiment of being the machine, "I don’t understand Chinese, yet I am able to pass T2."
      So just in the case of T2 (where T2 can be passed by computation alone), “Searle’s Periscope” makes it possible to determine whether the T2-passing machine is understanding, which in this case it is not.

      Delete
    5. There is a connection between them, but what is the other-minds problem? and what is the "hard" problem?

      Delete
  16. Searle says "Suppose we design a program that...simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them" as part of the brain simulator reply.

    He claims that this still isn't sufficient for understanding, which I don't understand. If we were to build a machine that simulated the exact same sequence of events down to the molecular or cellular level as a fluent Chinese speaker (that understood Chinese), then how can we say the machine doesn't understand Chinese? The machine would literally be doing the same steps as the Chinese speaker's brain.

    ReplyDelete
    Replies
    1. I think Searle is arguing that a simulation of the brain is not the same as the real thing (like the computation vs computational simulation argument we talked about in class). The simulated brain isn't a real brain. I'm still confused though because the mind and brain are separate, but the mind is derived from the brain (we have no thoughts without the neurophysiology), so if the simulated brain has the same neurophysiology, shouldn't it be able to derive a mind?

      Delete
    2. Melody, I’ve also been having a bit of trouble wrapping my head around this concept, but I think the video helped me get a little closer to understanding it. Searle says that “computation exists relative to the interpreter” and that “you cannot give a computational explanation of consciousness because consciousness is observer-relative.” If I am interpreting this correctly (for lack of a better word, since I do not think there is necessarily a “correct” answer to this problem), Searle is arguing that understanding can only be elicited from the mind, which cannot be simulated. If, like we’ve done in this class so far, we take “machine” to mean any kind of system with cause and effect, and I substitute myself as the machine in the Chinese room, I would argue that I would not understand Chinese, even after many days spent in there. Without that understanding, I cannot have intentionality. The same applies to a simulated brain — it may be taking in the correct inputs and spitting out the correct outputs, but without the intentionality that is seemingly intrinsic to humans (or even animals in general), there can be no mind.

      As for your second point, I think Searle tackles it in the video when he says that the syntax of a computer program is not enough for the semantics. In order to truly think independently, the simulated brain would need to be able to understand semantics (the interpretation of symbols), as syntax (the shape of symbols) is not sufficient. It may have the same neurophysiology and be able to “read” the same shapes (syntax), but until it can actually interpret the shapes it comes across, the mind cannot be derived from the simulated brain. So, since the simulated brain cannot “think,” it cannot have a mind as a byproduct of its existence.

      Delete
    3. A simulated brain cannot think any more than a simulated plane can fly. It’s just squiggles and squoggles. But it might provide the information – like a blueprint – for building a brain that can think or a rocket that can fly. T4 would test the brain blueprint (but is it better than T3 [Eric]? How and why?)

      Melody, what do you mean by “mind and brain are separate”? I know what you mean by brain, but what do you mean by “mind” – other than the brain’s capacity to do and to feel? Otherwise the brain and mind are no more separate than a plane and its capacity to fly.

      Emma, what’s important is not “observer-relativity” but the fact that the meaning of a computation is not in the computation but in the head of the user. So computation cannot just be what is in the head of the user! That would be worse than homuncular: it would be circular.

      What do you mean by “the mind… cannot be simulated”? What do you mean by the mind? And by simulation? (See replies.)

      Searle’s point is simple: if T2 does not feel understanding, it does not understand.

      Delete
    4. I think the claim that a simulated brain with the same neurophysiology should produce mental states is not the whole story. On a more neuropsychological account, the simulated brain would still be made up of the transmission of neurotransmitters across synapses. The transmission of dopamine between synapses and the chemical release that follows is just more squiggles and squoggles. They could be interpreted as symbols, inputs and outputs. This does not explain how we derive understanding, because these symbols hold no meaning.

      Delete
    5. Professor, I think today’s lecture helped clarify that point for me. If I understood correctly, what’s important is not the “observer-relativity,” but rather the fact that computation does not simulate — it just performs an operation. It is then up to the person (or animal or even object, maybe) to interpret the operation and the results of said operation, since computations are “semantically interpretable.” Like you said, the meaning of a computation is in the head of the interpreter. As for your question about what the mind is, I think my response would rest on the Other Minds Problem, wherein we cannot know if things other than ourselves feel or have minds. How can I be sure that the computations I am assigning meaning to in my head are also in the heads of others? A simulated brain (and therefore a computational model) would be taking principles of computation to simulate something real, but once again, a simulated brain is not a brain. It’s just squiggles and squoggles. Though it may act as a blueprint for a brain that can think, a simulation cannot (yet?) replace the real thing.

      Delete
    6. Emma, I think you have the wrong idea of what simulation is. Computation CAN simulate. For example, a computer can simulate an airplane flying. Virtual reality is all simulation.

      However, as you mention in the last part of your comment, simulation is NOT the real thing. Even if we can simulate a brain through a computer, it is NOT the real brain, thus it cannot provide us the mechanisms of how the brain performs its cognitive duties.

      Also, computations are NOT always semantically interpretable.
      In the lecture, we went over that computation is symbol manipulation that is (1) rule based, (2) shape based and (3) implementation independent.
      The part about “semantically interpretable” was to add that although symbols that are manipulated through computation are arbitrary, it MAY be that you can interpret those symbols to be meaningful.
      However, the interpretation itself is NOT in the computation.
      We are INTERESTED in meaningful computations because we can understand, and understanding is part of cognition. But meaningful in a sense that we as USERS can interpret the symbols, not meaningful in the sense that computation is interpreting it.

      I hope this is a bit clarifying!

      Delete
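
      To make the simulation point above concrete, here is a toy sketch (the function name and numbers are invented purely for illustration): a computation whose output a user can interpret as millimetres of rain in a simulated rainstorm. The computation itself is just number-crunching -- squiggles and squoggles -- and nothing gets wet; the interpretation is in the user's head, not in the code:

      # Hypothetical toy "rainstorm" simulation: the numbers are interpretable
      # as rainfall by a user, but no rain is duplicated by computing them.
      def simulate_rainstorm(hours: int, mm_per_hour: float = 5.0) -> float:
          # Accumulate "rainfall"; these are just numbers being added.
          total = 0.0
          for _ in range(hours):
              total += mm_per_hour
          return total

      print(simulate_rainstorm(3))  # prints 15.0 -- and leaves everyone dry
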
  17. “Chinese symbols that I am giving out serve to make the motors inside the robot move the robot’s legs or arms. It is important to emphasize that all I am doing is manipulating formal symbols: I know none of these other facts.
    I am receiving “information” from the robot’s “perceptual” apparatus, and I am giving out “instructions” to its motor apparatus without knowing either of these facts. I am the robot’s homunculus, but unlike the traditional homunculus, I don’t know what’s going on. I don’t understand anything except the rules for symbol manipulation. Now in this case I want to say that the robot has no intentional states at all; it is simply moving about as a result of its electrical wiring and its program. And furthermore, by instantiating the program I have no intentional states of the relevant type.”

    In reading this excerpt, it became increasingly apparent that Searle’s use of ‘intentionality’ was kept broad to mask a very apparent underlying meaning of consciousness or something akin to that sentiment. Later, in reading Harnad’s paper and coming across this: “The synonymy of the “conscious” and the “mental” is at the heart of the CRA (even if Searle is not yet fully conscious of it — and even if he obscured it by persistently using the weasel-word “intentional” in its place!)”, I was curious as to whether Searle was consciously describing consciousness with the ‘weasel-word’ of intentionality, or if he really thought intentionality was something entirely different. If it was different from what we perceive him to be talking about, what are the differentiators?

    ReplyDelete
  18. "The formal symbol manipulations go on, the input and output are correctly matched, but the only real locus of intentionality is the man, and he doesn't know any of the relevant intentional states; he doesn't, for example, see what comes into the robot's eyes..."

    I found this reply to the “combination” argument to be particularly interesting. It follows Searle’s argument that, even if a system did an enormously convincing job of ~pretending to be conscious~, if we were to know that the system is in fact only dealing with formal symbols, we would not believe that it feels/believes/senses/whatever. That is, if we were not convinced that the system’s responses were grounded in some sensory/emotional/intellectual response to actual stimuli (heat, an angry email, a difficult sudoku), we would not believe that it actually feels. This is, I suppose, the reason we’ve argued that T3 is a prerequisite to T2, and why we have done so much foreshadowing of the “symbol grounding problem”. Perhaps this is jumping the gun a bit, but I worry, with all the focus on the actual perception of the things that all the symbols refer to (the sweet red thing we’re calling an apple, or what have you), that perhaps the world isn’t so intersubjective as we might think. By intersubjective I mean that we would all be living in the same world, and that all our symbols refer to the same types of physical objects. Much theorising has been done on the possible “incompleteness” of language, or its inability to “get at” what is “actually” there (if there is anything at all). What if Searle’s pedestal of “intentional states”, or what is more simply just having a feeling about a certain thing (I feel that this apple is red and sweet), is shakier than he would admit?

    ReplyDelete
    Replies
    1. Sofia, the quote that you chose from the article I think explains why “Searle’s Periscope” penetrates the other minds problem for only a T2 machine and NOT a T3 robot. Once we have other sensorimotor capacities involved, we are met with the other minds problem again.

      I don’t think it is a matter of “believing” that the robot is feeling because it is only dealing with formal symbols; rather, we just wouldn’t know whether the robot is feeling even if it is only dealing with formal symbols. We don’t actually have that “man (homunculus)” inside the robot “intentionally manipulating the symbols” that Searle hypothetically posits to refute the combination reply.

      Professor Harnad did mention above that Searle did not discuss the T3 robot. He went straight from T2 to T4 (neuro). Nonetheless, we are met with the same other minds problem over and over again for all the T#-robots after T2.

      Delete
    2. As for intersubjectivity, I don’t think it follows from the fact that all humans have the capacity to feel (sentience), and that this mechanism is universal in human cognition, that our symbols (like language) all refer to the same types of physical objects.

      Having a feeling about something, as in the example you used, FEELING that this [apple is X and Y], is something that we can be CERTAIN of ourselves: the Cartesian certainty.
      Acknowledging that FEELING is CERTAIN, I think, is separate from being certain about a “THING”, like an apple actually being red and sweet. The apple’s actual state of being red and sweet does not interfere with the certainty that we can FEEL that it is.
      Just like Professor Harnad’s example in class: “He feels like he has a head. He might have a head, or he might not, but he can be certain that he feels that way.”

      Tying this back to the reply above, can we be certain that a T3 robot is feeling? NO, because we ourselves are not the robot. That’s why we cannot know for sure whether a T3 robot has the same certainty of sentience as we do when we cognize.

      Delete
    3. I appreciate your distinction between 'feeling-certainty' and 'thing-certainty'. I think this clarifies some of the problems I have, even with the T3 robot that Searle does not address. However, and perhaps this is a philosophical question (and therefore, unfortunately, irrelevant to this course), I have a sneaking suspicion that Searle's room will be harder to get out of than we might have originally thought. While, as addressed in class, his argument is put forth in order to disprove computationalist theories of mind, that is, that minds could be computing systems dealing strictly with formal symbols, I can't help but think that his argument extends further than he might have intended. If we do not, in fact, have access to the thing-in-itself, or 'thing-certainty', then how are we any different from the man in Searle's room, who does not have access to the real-world referents of his symbols?

      Delete
  19. I did not fully grasp the Chinese Room argument until I reached the point in Searle's article where he explains: "The aim of the Chinese room example was to try to show this by showing that as soon as we put something into the system that really does have intentionality (a man), and we program him with the formal program, you can see that the formal program carries no additional intentionality". I now understand the point of debate, where computation and "thinking" overlap: programs compute via symbol manipulation, and since the human computer inside the Chinese room does not understand but simply manipulates symbols, computation does not involve intentionality and thus does not involve "thinking".

    I wonder what some counters to Searle's theory are: which arguments or examples related to the Chinese Room Argument dispute Searle's thinking, which other pygmies or giants have debated these things, and what are their reasonings?

    ReplyDelete
  20. So are we saying that, for computationalism, Searle's periscope penetrates the other-minds barrier, but Searle shows that this does not hold up, because Searle doesn't understand Chinese, and if understanding Chinese were purely computational, then Searle should understand it just as the computer is considered to? Ultimately, Searle's periscope only penetrates the other-minds barrier if computationalism is taken to be true? (reposting here per Professor Harnad's request)

    ReplyDelete
    Replies
    1. Hi Grace! To try and clarify your question, I'm fairly sure that Searle's periscope refers to computationalists' belief that cognition, or thinking, is computation. So, if thinking is a computational state, then I could hypothetically get into the same computational state as, for example, my classmate. If I am in the same computational state as my classmate, I can observe whether my classmate is "feeling" as I am. This is basically the idea that the implementation-independence of computation penetrates the other-minds barrier. That's only according to Searle, though. I think Professor Harnad was saying in class that mental states are not implementation-independent or purely computational. As such, Searle's Periscope would only work for T2, and only in the case that T2 could be passed by computation alone.

      Delete
  21. The reason that Searle's CRA only works for T2 computers is that we cannot be sure whether a T3 robot can understand, due to the other-minds problem (because we can never be the T3 robot and know how it feels). Searle's periscope can only penetrate the other-minds barrier if he implements a T2 computer program: because of the implementation-independence of computation, he can execute the program himself and compare his feeling of "not being able to understand Chinese" to the T2 computer. Hence, he concludes that computation is not cognition, because he feels that he is not able to understand the squiggles and squoggles of Chinese after implementing the T2 computer program.

    Yet, if only T3 robots can pass T2 (e.g., they need to know how to reply to blank messages or newly coined buzzwords by simulating the real world in their language systems -- I believe this is related to the symbol grounding problem), then Searle's Chinese room argument is wrong.

    Also, I'm just a bit confused about whether touching on the hard problem of "feeling not being able to understand Chinese" is valid. Or is the "algorithm for feeling" implementation-independent as well?

    ReplyDelete
  22. When reading about Schank's program and the two prominent conclusions that partisans of AI tend to draw from it, I was immediately surprised by the first claim, that "the machine can be said to understand the story and provide answers to the questions". From my understanding, this assumes that "computing" is the same as "understanding", and I completely agree with Searle that conflating the two is a serious logical fallacy. He supports this idea by suggesting that intentionality/understanding are heavily dependent on the biological and chemical processes within the body, which enable us to feel emotion and mould our inferences/conclusions based on this consciousness. If this has to be rooted in our biology/chemistry, doesn't this then mean we need to place emphasis on the way our body interacts with the environment, because "understanding" would then also involve our sensorimotor processes, which shape how we generate action and modify our thoughts/behavior based on sensory signals about the environment?
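
    A minimal sketch of what such a program can amount to (my own toy caricature, not Schank's actual system; the story, keywords, and canned answers are all made up): the "answers" come from hand-written default inferences keyed to a word in the question, with nothing anywhere in the loop that understands the story.

    story = "A man ordered a hamburger. It arrived burned to a crisp. He stormed out without paying."

    # Hand-written "restaurant script" defaults, keyed by a word that may appear in the question.
    default_inferences = {
        "eat": "No, he probably did not eat the hamburger.",
        "pay": "No, he did not pay the bill.",
        "tip": "No, he probably did not leave a tip.",
    }

    def answer(question):
        # Pure keyword matching; note that the story text is never even consulted.
        for keyword, canned in default_inferences.items():
            if keyword in question.lower():
                return canned
        return "The script has no answer for that."

    print(answer("Did the man eat the hamburger?"))  # -> "No, he probably did not eat the hamburger."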

    ReplyDelete
  23. a. In this text, I think the point he makes at the end, where he addresses the issue of dualism, is that strong AI would have nothing to tell us about thinking because it would have nothing to tell us about machines. Since the program and the machine would be considered two different things, having no information about the machine (the brain) would mean we could not explain how we think, or which part of the machine (brain) could be linked to thinking itself.

    In his example of the Chinese room, he gives a great illustration of how this would only be passing T2, since he himself does not understand Chinese but is merely following instructions. Yet his output would be indistinguishable from that of a native Chinese speaker. In this case he is only showing that he is able to do something, not that he understands it or feels what it is like to know what he is producing.

    From my understanding, this then shows that simply programming something to do something (passing T2) tells us nothing about the way one thinks. There would be no link between the machine itself and the programming.

    ReplyDelete
  24. In this classical paper, Searle writes his objection to strong AI (the claim that computation is cognition) by emphasizing that computational programming could never initiate intentionality, nor "understand" anything. I'm convinced by him, and I too believe a computational system carries no human capacity for understanding. To comment further, I thought about how to define intentionality: when we perceive something from the sensory world, we have a sense of how we can deal with it. A mental process such as intentionality can exist without observable behavior; the fact that we intend to close the door is not observable to other people, who don't have access to our mind. His Chinese room argument exemplifies how a machine, an artificial intelligence, may be able to behave indistinguishably from a human yet never understand as humans do. Searle also replied to several objections to his argument. I'm particularly interested in the brain simulator reply. It states: "We can even imagine that the machine operates, not with a single serial program, but with a whole set of programs operating in parallel, in the manner that actual human brains presumably operate… At the level of the synapses, what would or could be different about the program of the computer and the program of the Chinese brain?" (p. 8). Searle replied that it is "simulating the wrong things about the brain". But the truth is, even if we simulated the right thing (we don't yet know what the right thing to simulate is), it would still be just a simulation. As we learned in the lectures, a simulated brain is never a real brain. It might be able to simulate some functions of the brain (although I also doubt this), but since we do not fully understand how intentionality arises from brain physiology, we cannot simulate a brain that does the same thing.

    ReplyDelete
    Replies
    1. Following my own writing, I would like to add that in the Chinese room described by Searle, as a scenario of having a conversation, language does not function as simply as input-algorithm-output. If his task is to respond indistinguishably from a Chinese speaker, without understanding, he could not pass the test. Because of the diversity of language use and individual differences (i.e., there is an infinite number of ways to convey the same meaning), I could imagine that it would be easy to realize that the person does not understand or show any competence in the Chinese language. Although I understand that the Chinese Room argument is just a thought experiment, I want to add this as a supporting argument against strong AI, because language capacity is also an important aspect of cognition.

      Delete
  25. I disagree with Searle's Chinese room argument about "understanding". On the contrary, I find that the Chinese room example serves as a good tool to demonstrate the parallel between humans and computers (i.e., "the mind is to the brain as the program is to the hardware").
    As illustrated by Searle, humans can learn to produce the correct output for a given input without any real understanding, which is just what a computer does.
    For example, Searle doesn't understand Chinese, yet by memorizing he could learn to answer "谢谢" with "不客气" (which is more like matching a pattern than really "answering"). A computer programmed with something like
    if input() == '谢谢':
        print('不客气')
    has exactly the same performance.
    However, a native speaker of Chinese does not need to memorize by rote and yet produces the same output. This does not mean that the underlying mechanism is different; rather, think about how they learned their language.
    From my point of view, there is no difference between the learning process of native speakers and that of non-native speakers. The differences lie in the timing (native speakers start learning from birth) and the amount of input (a language environment that helps build a "database"). I see it more like a compaction, a collection that piles up those "stupid" programs (I am not sure if my wording is clear). By memorizing tons of rules and building up our vocabulary, we seem to understand the language, but deep down it is the application of rules that gives us the sense of "understanding".
    If a computer is programmed in this way, it processes in the same way humans do. Why can't we say that it understands?
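
    To flesh out that "pile of stupid programs" picture, here is a slightly fuller sketch (my own toy illustration, with a made-up handful of exchanges): a lookup table of memorized Chinese replies. The program gives the right outputs without anything in it understanding a single symbol.

    # A few memorized exchanges; keys and values are just uninterpreted strings to the program.
    canned_replies = {
        "谢谢": "不客气",   # "thank you" -> "you're welcome"
        "你好": "你好",     # "hello" -> "hello"
        "再见": "再见",     # "goodbye" -> "goodbye"
    }

    def reply(message):
        # Pure symbol matching: look the message up, fall back to a stock phrase.
        return canned_replies.get(message, "请再说一遍")  # "please say that again"

    print(reply("谢谢"))  # -> 不客气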

    ReplyDelete
  26. Just to confirm, Searle is not saying that computation is not part of cognition, right? Because to me, the systems argument makes quite a lot of sense. Would we say that the prefrontal cortex, for example, actually ‘understands’ what it’s doing? Probably not. The understanding seems not to be coming from the system dedicated to the specific function of acquiring and processing the rules. Rather, it’s a result of a bunch of systems working together. When Searle is using the CRA, he is only pointing to the system that does the ‘rule-integrating’, ‘symbol-processing’ functions, but not so much the other systems involved in language, which I guess is kind of his point. Cognition comprises all these different systems, and saying that it involves only the system that does the computation is wrong.

    ReplyDelete
  27. I don't think Searle is saying that computation is not part of cognition. He actually believes that computation is a big part of cognition. The argument he is making is that cognition cannot be made of computation alone. He does that by acting as the ultimate computer and learning Chinese with only computations, but as we have seen, that is not enough to actually understand Chinese (even if he can speak just like a native Chinese speaker). -Elyass A.

    ReplyDelete
    Replies
    1. That's the thing though: I don't think we can say that he's actually "learning" Chinese through these computations. The point Searle is trying to make is that despite being able to manipulate the symbols like a native speaker, he still does not have the "understanding" of a native speaker. He can't learn anything through these manipulations of symbols because of what we discussed in the dictionary lecture: you can't learn a completely foreign language using a dictionary that defines each word using other words from that same language; you need to ground meaning somehow, and that never happens in Searle's Chinese Room. This is why Searle doesn't understand (and also why he can't learn) Chinese.
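
      A quick sketch of that dictionary point (my own toy example, with a made-up mini-lexicon): every word is defined only by other words from the same dictionary, so expanding the definitions only ever yields more ungrounded symbols, never a referent.

      # Toy "dictionary" in which every word is defined by other words in the same dictionary.
      toy_dictionary = {
          "apple": ["red", "sweet", "fruit"],
          "red": ["colour", "of", "apple"],
          "sweet": ["taste", "of", "fruit"],
          "fruit": ["sweet", "thing"],
          "colour": ["property"],
          "taste": ["property"],
          "of": ["belonging", "to"],
          "thing": ["apple"],
          "property": ["thing"],
          "belonging": ["of"],
          "to": ["of"],
      }

      def expand(word, depth):
          # Replace a word by its definition, repeatedly: always more symbols, never a thing.
          if depth == 0:
              return word
          return "(" + " ".join(expand(w, depth - 1) for w in toy_dictionary[word]) + ")"

      print(expand("apple", 3))  # nested symbols all the way down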

      Delete
  28. “Chinese symbols that I am giving out serve to make the motors inside the robot move the robot’s legs or arms. It is important to emphasize that all I am doing is manipulating formal symbols: I know none of these other facts. I am receiving “information” from the robot’s “perceptual” apparatus, and I am giving out “instructions” to its motor apparatus without knowing either of these facts. I am the robot’s homunculus, but unlike the traditional homunculus, I don’t know what’s going on. I don’t understand anything except the rules for symbol manipulation. Now in this case I want to say that the robot has no intentional states at all; it is simply moving about as a result of its electrical wiring and its program. And furthermore, by instantiating the program I have no intentional states of the relevant type.”

    In reading this excerpt, it became increasingly apparent that Searle's use of 'intentionality' was kept broad to mask an underlying meaning of consciousness, or something akin to that sentiment. Later, reading Harnad's paper and coming across this: "The synonymy of the "conscious" and the "mental" is at the heart of the CRA (even if Searle is not yet fully conscious of it — and even if he obscured it by persistently using the weasel-word "intentional" in its place!)", I was curious as to whether Searle was consciously describing consciousness with the 'weasel-word' of intentionality, or whether he really thought intentionality was something entirely different. If it was different from what we perceive him to be talking about, what are the differentiators?

    ReplyDelete
    Replies
    1. Hi Melissa, since I was also stuck on the word 'intentionality' when I was reading the article, I want to share my understanding of why and how Searle used it. At the beginning of the article, Searle argues that 'Intentionality in human beings (and animals) is a product of causal features of the brain.' The way I understand it, 'intentionality' here simply means what we feel when we are feeling; 'causal features of the brain' could be replaced by any other phrase meaning consciousness. However, as Prof. Harnad points out (in his reply to Eric and during the lecture), since we don't have a clear definition of intentionality (it means too many different things to different people), we should forget that word.

      Delete
