Monday, August 30, 2021

3b. Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument?

Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press.



Searle's Chinese Room Argument showed a fatal flaw in computationalism (the idea that mental states are just computational states) and helped usher in the era of situated robotics and symbol grounding (although Searle himself thought neuroscience was the only correct way to understand the mind).

60 comments:

  1. “But there is also a sense in which the Systems Reply is right, for although the CRA shows that cognition cannot be all just computational, it certainly does not show that it cannot be computational at all.”
    To me, this quote summarizes the article and my takeaways from both Searle's and Harnad's papers quite well. It made clear for me what the Systems Reply is (the objection from Berkeley and others on comp.ai that, in the CRA, the person is only one part of a larger system, and it is that larger system that does the understanding) and how it pokes holes in the CRA. Searle's mistake in the CRA then becomes setting things up so that he is embedded in a larger system, rather than containing the means for understanding within himself. If this is true of the CRA, then we must view cognition as something greater than computation, a more complex system, but not deny that the complex system also requires computation. We have seen this in class, and it makes intuitive sense that the answer to cognition is more complex than computation without disregarding computation as a necessary component.

    Replies
    1. The Berkeley "System" Reply is completely wrong. There is no other "system" in Searle that is understanding Chinese, when what is being tested is T2. But verbal interaction is only part of what a T3 robot can do, and Searle is only doing the (hypothetical) computational part (in memorizing and executing the T2 program). But T2-passing capacity is only a part of the whole "system," namely, a T3-passing robot (Eric). And Searle, just like the T2 computer, can "become" a T2-passer by executing the (hypothetical) program, because computation is implementation-independent, but he cannot "become" a T3-passer like Eric, because a T3 robot is not implementation-independent computation. It is embodied and grounded.

    2. Ahhh, that explanation helped; I think I'm beginning to understand. So the "System" Reply is flawed because it fails to consider that Searle can memorize the rules for Chinese, rather than relying on external rules (if I understand correctly, this is what you were saying here: “it is unfortunate that in the original formulation of the CRA Searle described implementing the T2-passing program in a room with the help of symbols and symbol-manipulation rules written all over the walls, for that opened the door to the Systems Reply.”). But when considering a T3-passer, there need to be mental representations and mobility, so the robot must be part of a larger system? Would that mean that the “System” Reply applies to T3 robots, but not to T2 computers? I’m still trying to piece it together in kid-sib terms.

    3. I am also a little confused about whether the “system” reply could apply to T3 robots. By my understanding, I believe it could. This is because Searle rooted his argument against the “system” reply in showing that he could essentially be the T2-passing computer himself. However, Searle could never be a T3 robot, because T3 is not just computation. So, based on this, I would think that you could argue that the “system” reply is true, since all the arguments made against it thus far would not work.

    4.       “SEARLE’S PERISCOPE”

      Madelaine, the System Reply is that, if Searle executes the same (imaginary) T2-passing program that the computer uses to pass T2, then Searle is only a part of the “System” that understands Chinese. The whole “System” -- of which Searle is just a part -- would be understanding Chinese.

      That Reply is wrong, because Searle could memorize the T2 program and execute it, and it would make no difference for the conclusion: Searle still would not understand Chinese, and there's no bigger "System," of which Searle would just be a part, that would be understanding Chinese:

      It feels like something to understand Chinese, and Searle is right to point out that he would not have that feeling, no matter what the symbols he was manipulating could be interpreted as meaning by someone who really does understand Chinese. It's as simple as that.

      And it shows that Searle's Argument is right, because of the implementation-independence of computation: Both the T2-passing computer and Searle are executing the T2 program, and if one is not understanding then the other one is not understanding either. That's "Searle's Periscope" on the (normally impenetrable) “other-minds barrier” (what is that?). (Do you understand Searle’s Periscope, and why it only works for computation, and nothing else?)

      But if the TT-passer were not just an (imaginary) program that could pass T2 but a sensorimotor robot (Eric) that could pass T3, then Searle’s Periscope would fail, because Searle can “become” T2, and feel for himself whether he understands Chinese, but he cannot become T3 (why not?).

      Mobility, yes, but what on earth are “mental representations,” other than what is going on in the brain when I am thinking a thought? And that’s what cogsci is supposed to explain, not use as an unexplained something-or-other in order to explain how and why thinkers can think – and feel what it feels-like to think. “Mental” just means felt. (How and why does the brain produce feeling: that’s cogsci’s “hard problem,” which is not the same as the other-minds problem [why? And what is it?]).

      Evelyn, yes, you are right. If there could be a T2-passing program, Searle could memorize and execute it and prove that it would not be understanding Chinese (because of the implementation-independence of computation). So “computationalism” (“cognition is just computation”) is wrong. But T3 is a robot, not a computer, and sensorimotor functions, a body, and possibly a lot of other things in a robot, are not computation, so not implementation-independent. So Searle could not “become” the T3 robot the way he could become the T2 computer; he could only become a part – the computational part – of the T3 “System.” So Searle’s Argument would not work to prove that T3 would not understand Chinese: maybe Eric understands (English), maybe he doesn’t. But that’s just the usual other-minds problem, which is, as usual, impenetrable – except to Eric himself (remember Descartes?).
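      (Purely as a toy illustration of my own of the implementation-independence point above -- keep in mind that there is no real T2-passing program, and Week 5 gives reasons to doubt there ever could be -- here is the kind of thing "executing a symbol-manipulation program" amounts to. The rule table below is made up; what matters is that it pairs input symbol-strings with output symbol-strings by their shapes alone:)

```python
# Toy sketch only: a tiny, made-up "rule table" standing in for the
# (hypothetical) T2 program. The rules pair input shapes with output shapes;
# nothing in the table, or in the code that applies it, is about meaning.

RULES = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "天气很好。",
}

def respond(symbols: str) -> str:
    """Return the output string paired with the input string, by shape alone."""
    return RULES.get(symbols, "对不起，我不明白。")

# Implementation-independence: it makes no difference whether this procedure
# is run on silicon hardware or memorized and stepped through by Searle.
# Either way the same shapes go in and come out, and any meaning is only in
# the head of the Chinese-speaking pen-pal who reads them.
print(respond("你好吗？"))
```

      That the rules here are a trivial lookup table is beside the point: a real T2-passing program (if there could be one) would be incomparably more complex, but it would still be nothing but rules operating on the shapes of symbols.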

    5. Professor Harnad,
      I keep seeing discussion of Searle’s Periscope, but I'm still not sure I understand exactly what it is. And it seems important so I don’t want to pass over it without really understanding.
      In your paper this idea is introduced in the following paragraph: "The critical property is transitivity: If all physical implementations of one and the same computational system are indeed equivalent, then when any one of them has (or lacks) a given computational property, it follows that they all do (and, by tenet (1), being a mental state is just a computational property). We will return to this. It is what I have dubbed "Searle's Periscope" on the normally impenetrable "other-minds" barrier (Harnad 1991a); it is also that soft underbelly of computationalism” (Page 4).

      Is Searle’s Periscope just a name for the idea that, by putting ourselves in the place of a T2-passing computer, we have a way to “see into” its head (like looking into something with a periscope), so rather than having to guess at what it is thinking/feeling/understanding, which is impossible due to the other minds problem, we can just introspect to see what we are thinking/feeling/understanding? We do not have to overcome the other minds problem to see what is going on in the T2-passing computer’s (potential) mind, because we can put ourselves in the place of this computer?

      And the reason we can put ourselves in the place of this computer and accurately see whatever it is doing that the computationalist claims to be consciousness is that, according to the computationalist, consciousness = computation, and so is implementation-independent. Meaning: anything that does or does not happen in a T2-passing computer making computations would also be happening in us making those computations (or, running the same program). Thus, the computationalist’s claim of implementation-independence gives us a way to actually see that the T2-passing computer is not understanding, and so that computationalism cannot be the whole picture?

      I feel like I’m misunderstanding something here, so any feedback on where I’m going wrong would be appreciated :)

    6. Caroline, yes, you've understood it. (What made you doubt you had?)

  2. While I found Searle’s arguments in the original paper well laid out and thought-provoking, I admit I found his use of “intentionality” difficult to understand given the context of the paper and the terminology we have used frequently in class so far. I found that Harnad’s commentary cleared things up for me in the following passage:
    “We're ready to believe that we "know" a phone number when, unable to recall it consciously, we find we can nevertheless dial it when we let our fingers do the walking.
    “It's definitely not what we mean by "understanding a language," which surely means CONSCIOUS understanding.”
    “The synonymy of the "conscious" and the "mental" is at the heart of the CRA (even if Searle is not yet fully conscious of it -- and even if he obscured it by persistently using the weasel-word "intentional" in its place!)”
    The replacement of “intentionality” with “the conscious” and Harnad’s phone-number example helped to clarify the meaning of understanding that the CRA relies on and allowed me to return to Searle’s text with a clearer perspective. In my eyes, the notion of “intentionality” fails to encapsulate understanding in the way that Searle intends. For example, you could intentionally attempt to recall the phone number (and perhaps fail). But when we reframe the idea in terms of conscious understanding, it is clear that there is a difference between understanding and being able to recall the number, and merely thinking you know it but not being able to recall it.

    Replies
    1. I agree that what Searle meant by "intentionality" was a bit hard for me to grasp when I read his paper, since he doesn't really define it and it's also not a notion that I've seen used elsewhere in cognitive science. I also wasn't clear on the relationship between intentionality and understanding: for Searle, could something reach true understanding without intentionality? Are they related causally, or are they only two examples of the kind of things that computation cannot create?

      But I'm also not sure I understand how consciousness can be used instead of intentionality; the two seem to me to be pretty different. Intentionality reminded me of the Lady Lovelace argument, which is that a machine cannot create anything new. Searle believes that a computational machine cannot intentionally choose to initiate anything, but only execute commands. How is consciousness related to this?

    2. I personally didn't take intentionality as meaning being able to initiate something.
      In that sense I think the argument would be somewhat weak, because a computer program could be able to initiate new things on its own. You would just need an AI complex enough to create things based on multiple algorithms.
      If you mean free will, someone could argue (quite easily) that human cognition is devoid of free will, and that we act based on the "programming" in our brain.
      How I saw Searle's argument is that computation (under Turing's definition) is simply syntax manipulation, and you need semantics to interpret symbols; symbols are not real on their own (not ontologically objective), they need to be observed. While human minds do provide semantics, they are not computation alone.

    3. The previous Unknown comment was made by Elyass Asmar.

    4. Grace, "intentionality" the way philosophers use it does not quite mean what you think. It's not just about doing something intentionally, or deliberately vs. unintentionally, accidentally, or reflexively. It is closer to when we say to someone who has misunderstood what we have said that "that was not what I had in mind. My intended meaning was ...."

      Louise, that's why it all boils down to feeling: It feels like something to mean what you mean when you say something. Otherwise it's just words that can be interpreted, but the meaning is not in the words themselves; it is in the head of the speaker (or the hearer).

      That should remind you of computation, and interpretability: The interpretation is in the head of the user, not in the hardware, nor in the software (see the toy example at the end of this reply). Cogsci is trying to reverse-engineer what's going on in the heads of users that produces that capacity, and Searle is showing that whatever that is, it can't be just the manipulation of meaningless symbols (as in computation).

      Elyass, I think you're on the right track. What do you mean by "semantics"? How do you get it into a head?
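      (And here is the toy example promised above -- mine, not from the reading -- of the fact that the interpretation of symbols is imposed from outside them. The very same four bytes can be read as text or as a number; nothing in the bytes themselves "contains" either reading:)

```python
# Toy example: one and the same four bytes, interpreted in two different ways.
import struct

raw = b"2B2B"                            # four arbitrary byte "shapes"

as_text = raw.decode("ascii")            # read as characters: "2B2B"
as_number = struct.unpack(">I", raw)[0]  # read as a single big-endian integer

# Both "readings" are supplied by the interpreter choosing how to decode;
# the bytes themselves carry neither one.
print(as_text, as_number)
```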

    5. Louise and Grace, I have heard the word 'intentionality' mentioned in the philosophy of mind. In addition to the example Prof. Harnad gave above, I took 'intention' to be something like a mental attitude toward another object (not necessarily a physical object).

    6. April: But kid-sib asks: what's an "attitude" and what's "mental"?

      For this course a "mental" state is a felt state: You don't need an "attitude" for that. You just have to feel. And thinking about an apple is feeling what it feels like to think about an apple.

      Cogsci needs to explain, first, how we know how to recognize and categorize apples as "apples," and how we know what to do with apples (including identifying them by name); and, last, how we can have apples "in mind," i.e., think about them. And that includes what it feels like to think about them.

    7. Professor Harnad,

      When you give a definition of "intentionality" as "closer to when we say to someone who has misunderstood what we have said that 'that was not what I had in mind. My intended meaning was ....'", does this mean that intentionality is meaning, as in, a person with intentionality is applying meaning to their words and actions?

      If this is so, then does intentionality = meaning, and thus computation could obviously never have intentionality, since computation is based on symbol manipulation, which depends not on the meaning of the symbols but on their shape?

  3. From my understanding, Searle’s Chinese Room Argument works for T2, however it cannot be applied to T3? As discussed in class, computation is implementation-independent, meaning that a program or algorithm with specific abilities can be implemented on any physical device that is able to understand the instructions, and will generate those same capacities. In the Chinese Room Argument, Searle becomes the device himself, executes the computation, and is able to pass the test without actually understanding any Chinese. As T2 is conducted only via messages and doesn’t include sensorimotor capacities, the “other minds” problem isn’t evoked, and Searle therefore concludes that the system is generating all its capabilities without understanding. He doesn’t acknowledge that it feels like something to think, to understand, to learn… Therefore, as Searle cannot “become the other mind,” the computational aspects of T2 can be implementation-independent, but, due to the ‘other minds problem,’ this isn’t possible for the entire system (T3). Nonetheless, without ‘being the other mind’ there is no way to know whether mental states are computable or just states. Perhaps they are just a function of sensorimotor capacities and computation? For instance, why do we need to feel pain, such as a burn, when feeling isn’t needed to detect what needs to be detected and do what needs to be done? Therefore, Searle wrongly concludes that cognition is not at all computation; part of cognition can still be computation. We can anyhow conclude that a T2 doesn’t understand, or feel, due to the other minds problem, but we cannot conclude as much for T3. Is T4 necessary for understanding cognition?

    Replies
    1. Yes, but: ...any physical device that is able to "execute" [not "understand"] the instructions...

      The rest sounds more or less right.

      But the question of whether T2 understands is already the other-minds problem, and Searle shows that if T2 is just computation, it cannot understand, because Searle, executing the same program, doesn't.

      And Searle can't show that a T3 would not be understanding, because, unlike in the special case of implementation-independent computation (a symbol system), Searle cannot "become" T3 (Eric) the way he can become T2: The other-minds barrier blocks the way.

      Feeling is essential to cognition, but it is "hard" to show how or why...

  4. I found it interesting to consider unconscious understanding. At first, this seemed like a much harder concept to use in argument, because it happens at a level that we are unaware of and cannot necessarily describe or define. For example, when this idea was brought up in the revised version of the Systems Reply, it seemed rather abstract, to the point where it would be too abstract to refute. However, after rethinking, I do not think this limits the argument from unconscious understanding, because we also cannot fully describe conscious understanding. Unconscious understanding is just another idea we must attempt to model. This makes the Systems Reply much more interesting, because it shows that conscious understanding is not required, as the Chinese Room Argument had assumed.

    Replies
    1. On the one hand, no one is conscious of (i.e., feels) how their brain produces understanding. (That was Hebb's 3rd grade schoolteacher example, and that's why even cogsci's "easy" problem will take a lot of work.)

      But understanding and meaning what you are saying is not just a matter of being able to (1) say whatever any normal person can say (T2), or (2) even also do whatever any normal person can do (T3) (Eric). (3) It also feels like something to mean and understand what you can say, and that's a lot harder to explain... and not just because of the other-minds problem.

    2. Searle implies that cognition is more than just computation. Even if computing accounts for a portion of cognition, the obvious issue remains: what is the remainder? This is the 'understanding' or 'feeling' component, according to Searle, and it is derived from brain neurochemistry. However, if a robot could sense something, does that mean it feels it? Is brain neurochemistry really needed? For example, if a robot senses a low temperature and says it’s a chilly day, does that mean it could understand that it is cold outside? If we could build a robot with many sensory modalities, would the robot be able to feel and understand the world around them like a human would?

    3. You should ask these questions at full Turing scale:

      "(i) if a T3 robot could sense something, does that mean it feels it?

      What do you mean "sense"?

      Or are you just raising the other-minds-problem?

      (ii) is T4 really needed?"

      Good question: if so, why? If not, why not?

      "if a T3 robot senses a low temperature, and says it’s a chilly day, does that mean it could understand that it is cold outside?"

      Eric?

      Or are you just raising the other-minds-problem?

      "If we could build a T3 robot with many sensory modalities would the robot be able to feel and understand the world around them like a human would?"

      Eric?

      Or are you just raising the other-minds-problem?

    4. Melissa, I think that if we build a robot with sensory modalities, it does not necessarily understand that it is cold out. For instance, we could attach a thermometer to a robot and include a program that tells the robot to shiver (or do other similar acts) when the temperature gets below a certain level. However, whether or not the robot can actually FEEL the cold and UNDERSTAND that it's cold can't be seen. I'm not sure how we could measure or detect whether the robot can understand things, because we can't do that with humans either. To me, this seems like the other-minds problem.
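      To make what I mean concrete, here is a minimal sketch (the threshold and action names are made up): the program maps a thermometer reading onto a "shiver" routine, and nothing in it settles whether anything is felt.

```python
# Toy robot sketch (hypothetical threshold and action names).
SHIVER_BELOW_C = 5.0   # assumed threshold, in degrees Celsius

def react(temperature_c: float) -> str:
    """Map a sensor reading onto an action label, by rule alone."""
    if temperature_c < SHIVER_BELOW_C:
        return "shiver"    # the robot does what a cold person does...
    return "carry on"      # ...but whether it FEELS cold is untouched by the
                           # code: that is the other-minds problem.

print(react(2.0))   # -> "shiver"
```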

    5. Melody, yes, it's the other-minds problem. (But your example is a toy robot, not a T3: Eric.)

  5. This topic was touched upon in class, but I wanted to discuss the claim that the distinction between T3 and T4 (D3 and D4 in this case) may rest upon a continuum (which I think is valid). In this text, Harnad uses the TT levels to illustrate the dependence of cognitive science on reverse engineering. In the case of a reverse-engineered duck, the duck would have the same external and internal structure as a “real” one, making it possible to understand the physiology and cognition of ducks. The reverse-engineered duck would be at T4, and an "artificial" duck at T3 would be able to replicate the sensorimotor and cognitive functions of a “real” duck, without the need for accurate structural features inside and outside. It is true that to replicate a duck’s gait, a machine would not need to look like a duck, and that to quack it would not need the facial structure of a duck. However, once we get into reproductive functions, health-related functions and more minute details of sensorimotor capacities (at the systems and cellular level), every single one of these details will impose constraints on the others and bring the robot closer and closer to resembling an actual duck, until it reaches T4. It would make sense to draw the line between T3 and T4 when basic performance capacities (like swimming) are reached, but these are often affected by other physiological components like the reproductive and immune systems. A variety of underlying conditions can be important factors, and ducks do not act in consistent manners. Simply taking into account basic physiological mechanics is merely creating a toy. Must we draw the line, then, at the point in the continuum where the insides and outsides of the “artificial” duck resemble those of an “actual” duck in full? Is there a simplified version of a duck that we would accept as T4?

    Replies
    1. I definitely agree with your point. There is a continuum in the distinction between T3 and T4, and in order to reverse-engineer an indistinguishable “duck,” the “duck” will end up needing to satisfy T4 rather than T3 in order to include all the relevant components that make a duck a duck. Additionally, I also think there is a continuum in the accepted variance of behaviour. Does an indistinguishable duck have to be a “perfect” duck? Indistinguishability would need to define a strict boundary on the amount of variance allowed before something is considered distinguishable, because, as you mentioned, ducks do not act in consistent manners. An example that comes to mind is speech mannerisms in humans. If Person A waves their hands in the air when speaking on a topic they are passionate about while Person B does not, that accounts for a variance in behaviour, but it does not make us question the inner mechanisms of Person A. I think a simplified version of a T4 duck would depend on the amount of variance indistinguishability would allow.

    2. You are both right that there may be a continuum between T3 and T4, just as there may be a continuum from cognitive to vegetative capacities. (Keep that in mind for exam questions about which is the right target for cogsci, T2/T3/T4.)

      But remember that the target is normal, generic cognitive capacities, not a particular person's, or Einstein's. Neither gesticulating nor stuttering would make you conclude that Eric was not a thinking being, just like the rest of us.

    3. Professor Harnad,
      If one accepts the existence of a continuum on which indistinguishability lies, how does one discern the boundary of distinguishability? I find it difficult to find a robust distinction between entirely human-like and not (or even between T3 and T4). I know this may be subjectively based on the human judge in the TT and their view of what makes a human indistinguishable from another. One may repeat the test many times, with different judges, to account for this confounding variable… Nevertheless, what percentage of people tricked into believing a robot is human would it take for it to be considered indistinguishable? 50%, 75%, 100%? Do we know of any scientifically rigorous ways to quantify this spectrum? If we reverse-engineer ourselves for an understanding of human consciousness, I think it might be insightful to know how close we are to a complete understanding of consciousness.

    4. Thank you Professor Harnad for the exam advice! That is definitely an interesting question to raise and its answer can go many ways. Bringing up the possibility of a continuum from cognitive to vegetative capacities is also interesting, as I wonder how it may or may not parallel the continuum of T-levels and how it could be scaled. Which cognitive abilities would we use to assess this? Would we be assessing cognition as a “whole” (and is that possible?) or would we be looking at it incrementally? I’d love to know if anyone has any ideas!

  6. This paper cleared up some confusion I had about the Turing Test and what it means. In particular, this quote: “This does not imply that passing the Turing Test (TT) is a guarantor of having a mind or that failing it is a guarantor of lacking one. It just means that we cannot do any better than the TT, empirically speaking”. This implies that while Searle’s paper argues against the Turing Test, it does not completely invalidate it or prove that it is false. Instead, the Turing Test captures what cognition can be, in terms of reverse-engineering both structure and function. However, I am still confused as to how implementation-independence fits into the Turing Test in terms of T3 and T4.

    Replies
    1. Evelyn, read "SEARLE'S PERISCOPE" above: Now is it kid-sibly enough?

  7. To me, the crux of this paper comes from the phrase: "But the TT is of course no guarantee; it does not yield anything like the Cartesian certainty we have about our own mental states." If I interpreted it correctly, this seems to be one of the points on which Searle and Harnad agree.

    Searle is essentially saying that our best bet in getting Cartesian certainty (intentionality) is through biological "robots" as computers may just be going through the motions. Harnad is essentially saying that we don't have any perfect test that's capable of reading minds, so our best bet is to use the Turing Test to determine if a robot is indistinguishable from a human.

    In both arguments, there is a consensus that we can't pinpoint "intentionality" or Cartesian certainty in our mental states as of yet.

    Replies
    1. You’re right, but it’s not “Searle and Harnad,” it’s “Searle and Turing.”

      And forget about the “intentionality” jargon: what’s at issue is the feeling of understanding. Yes, that’s Cartesian (i.e., the feeler knows for sure that the feeling is being felt, while it’s being felt). But cogsci is science (or at least reverse-bioengineering), not Cartesian certainty. Descartes points out that there’s never certainty that our causal explanation (of, for example, why apples fall down instead of up: Newton’s law of universal gravitation) is true: it’s just probably true… (Descartes died when Newton was only seven.)

  8. I was pleasantly surprised to find Harnad discuss something I had brought up in my previous blog post, which is the idea of conscious understanding in the CRA. Searle asserts that there is literal understanding, like understanding English, which is different from how we can say an inanimate object metaphorically “understands” when it performs a task that humans have designed it to do. I enjoyed Harnad’s quip (which I had not picked up on myself) about how Searle bypasses using the word “conscious” by using “intentional” instead, perhaps to avoid bringing the issue of consciousness into play. However, I get confused about where we go from here, with computationalism stuck at this crossroads of including or excluding the mental.

  9. In Harnad’s article, the sentence “even in conscious entities unconscious mental states had better be brief” caught my attention. At first, I thought that what was meant was the subconscious mind, where chaos and irrationality reign supreme. This is one quintessentially human characteristic that would seem to be lacking from computers, which are paragons of order and rationality. However, I found that Harnad was actually speaking of the related but not identical UNconscious mind. Specifically, as pertains to language and memory. The examples of unconscious actions which are given reveal a grey area between conscious and unconscious mental processes. Remembering a phone number through the motions of the fingers while dialing it is still remembering the number; it’s just not remembered in terms of symbols. I would think that such motor memory, though largely automatic when executed, could still be considered conscious. Another example which lies between the conscious and unconscious would be speech and action while in a semi-dissociated state. We’ve all been there: tired, almost speaking on autopilot. But we’re still somewhat conscious of what we’re saying, and have at least an inkling of understanding: semi-conscious understanding.

    The point I’m trying to make (apparently in a very unclear way, my apologies to whoever ends up reading this) is that there is no hard line between conscious and unconscious. There are conscious shades of grey which people move through every day, in constant motion. And here is where I realize that I’ve come to partially agree with the sentence which began this long, convoluted thought: all mental states are fleeting, not only unconscious ones.

    Replies
    1. If there's any hard line between anything and anything it's the (Cartesian) line between being in a state that it feels like something to be in (be it ever so faint a feeling) and being in a state that does not feel like anything to be in: an unfelt state, as in a rock, or a rocket, or a (toy) robot.

      But even in feeling organisms there are states the organism does not feel: the organism might be asleep, or anesthetized, or just feeling something else, but, by definition, not feeling the unfelt state, which could be cortisol level-control in the hypothalamus, or whatever process retrieved the name of your 3rd grade school teacher when you were asked...

      The unfelt processes that produce both organisms' capacity to do all the (cognitive) things they can do -- as well as the capacity of (sentient) organisms to feel -- are what cogsci is trying to explain causally, by reverse-engineering them.

    2. Reading this chain made me question the extent of my own conscious (or understood) states in my day-to-day. Often I find myself finishing a paragraph of a reading, and making notes, only to realize that I didn't actually "understand" or consciously take in what I just read. In this moment, am I like a T5 candidate? If this line between felt and unfelt states is so fleeting, but I am technically conscious for the duration of my reading, is consciousness even the right word (rather than intentionality) for this Cartesian idea of what it feels like to do or be something?

    3. Lucie, sometimes understanding something takes a while (during which unconscious consolidation is taking place in your brain, or you are still consciously reflecting on it).

      But understanding is not just the feeling of understanding. Understanding has a "doing" component too: You may feel you understand Turing, and then a question comes up that you can't answer (or answer wrongly), and you realize that your understanding was incomplete. Back to thinking more about it...

      The feeling of understanding can be illusory. Not for something as big as understanding a language (that would be delusional, like speaking in tongues); but for a simpler notion, like "intentionality," as used in philosophy, where it does not mean "what you do deliberately."

      See Bertrand Russell on understanding "the secret of the universe".

  10. “But one could ask for LESS, and a functionalist might settle for only the walking, the
    swimming and the quacking (etc., including everything else that a duck can DO), but
    ignoring the structure, i.e., what it looks like, on the inside or the outside, what
    material it is made of, etc. Let us call the first kind of reverse-engineered duck, the
    one that is completely indistinguishable from a real duck, both structurally and
    functionally, D4, and the one that is indistinguishable only functionally, D3.”
    If we think of the above-mentioned passage together with computationalism's claim that mental states are implementation-independent implementations of computer programs, I believe that means that, for computationalism, T3 would be all that is needed to produce mental states, given that physical implementation is necessary but its details are not directly relevant to mental states. Thus, in the case of humans, that would mean that a robot that can imitate our structure could have mental states without directly having a nervous system. To me, this makes mental states even more mysterious. Are mental states the result of a certain achieved complexity in an organism, with no hardware specificity? In that case, what do they add to said organism? I guess, intuitively, I’d say that mental states give us the freedom to reverse-engineer ourselves and do meta-cognition. Thus, they give us the power of giving meaning to our existence. In that sense, maybe consciousness is a result of a system’s complexification with the goal of reducing uncertainty (gathering info).

    Replies
    1. "Mental" means "felt" -- everything else going on in the brain, unfelt, is "cerebral" but not "mental."

      T3 (Eric) is definitely not the execution of a computer program. You are mixing up computation with computationalism. (Searle is executing the (hypothetical) T2-passing program.)

      Whether reverse-engineering cognition requires all the features of the brain, or just those that are necessary to produce our cognitive capacities, is about the T3/T4 distinction, not the software/hardware distinction.

      Rocket design can be done by computer modelling, but to get off the ground, rockets, even in a successful computational model, cannot be just squiggles and squoggles (symbols and symbol manipulations).

      Read the other replies on 1a, 1b, 2a, 2b, 3a, 3b

  11. This article helped me better understand the System Reply to Searle’s CRA, just like last week’s review article 2b helped me better understand article 2a. The two previous weeks’ readings also shed light on the irony of it all: decades of debate caused by simple misunderstandings, such as the meanings of “imitation game” or “intentional”. I, myself, misunderstood the main point of Searle’s article because I was caught up in understanding the meaning of “intentionality”. These “mistakes” are ridiculous in the humorous sense of the word, because of how trivial they seem when pointed out to us. But trivial they are not — where might the field of cognitive science be today, had these debates been settled and replaced by new ones decades earlier? Obviously, we aren’t formal programs (as we’ve learned), and can’t predict how others will interpret our words. But to me, this only confirms the importance of not divorcing pure science from “humanity” — scientists not only need to be able to communicate with and transmit their knowledge to the average person in a kid-sib-friendly way, but also need to do so efficiently with other experts.

    Replies
    1. Juliette, I agree with you that getting caught up on the different terminology is frustrating. I too spent quite some time trying to decipher those ambiguous terms. After reading Prof. Harnad’s article above, I realized that they are just misunderstandings, but throughout this confusing process I did eventually come to (my own) conclusion about what we can “gain” from Searle. So, I find the “decades of debate caused by misunderstandings” quite necessary.

      Below is my opinion on what we can gain from Searle and the CRA:

    2. Searle’s Big Question in his Thought Experiment is: “What if I, myself, went through the T2 experiment that we’re trying to perform on computers?” (in a room, receiving Chinese symbols and spitting out Chinese symbols according to the instructions)
      His reply to his own thought experiment question is: “Well I don’t understand Chinese, but I can still pass the test. Machines are like this; they can pass without understanding. Unlike humans who can understand, machines don’t. Hence computation is not cognition.”

      Although Searle’s thought experiment and response doesn’t disprove Strong AI (as Searle would have liked), it does show that T2, specifically, cannot be cognition.

      Searle’s question pokes a hole in the impenetrable other-minds problem. Professor Harnad terms this “Searle’s Periscope”. Searle’s thought experiment proposes that Searle, and we too, can implement the “symbolic” program that the T2 computer is implementing. In other words, we ourselves can become the computer and execute the program, as Searle does in the Chinese Room. If we become the computer, (voilà) we have cracked the other-minds problem, for T2 at least. We can now know what it feels like to be the machine.

      And being that T2 machine (hypothetically), we know that we can pass the test without understanding. We can execute the same program a T2 machine can execute with no understanding of Chinese words at all. So, in the case of T2, because the test can be passed without understanding, computation that exhibits verbal indistinguishability alone cannot explain human cognition, of which “understanding” is surely a part.

    3. One REALLY important thing to note is that Searle’s periscope is conditional only on computationalism (and, as others commented above, only for the T2 test), because computationalism is trying to model our own mind. We can (hypothetically) be the T2 computer under the condition that it is modeling our own minds. This is what Searle is doing with his Chinese Room experiment. We can’t (hypothetically) be a duck in any scenario. Nor can we (hypothetically) be a T3 robot.

    4. Juliette, I agree with both you and Iris. Though many of the misunderstandings that have occurred in these kinds of discussions could largely have been avoided if the authors had chosen to communicate in a more kid-sibly manner, it is still very important to note how far we have gotten in terms of understanding (or attempting to understand) the root of the discussion at hand. Without these seemingly trivial “mistakes,” as you call them, we would not be taking the time to thoroughly evaluate our own thoughts on the subject and form conclusions of our own.

      Iris, I wanted to thank you for your thorough posts! I think that what you’ve written has helped clarify some confusion I was having. “Becoming” a T2-passing computer can only be done if we accept the premise that the computer is simulating our own mind, and for this to be true, we have to first accept computationalism. Once we accept that cognition is computation, or that Strong AI is true, we see that machines do understand, and that what they do in order to understand explains the capacity that humans have to understand. However, Searle’s experiment was designed to argue against Strong AI. We can follow the algorithms used in the Chinese Room without understanding them by manipulating symbols that have no meaning to us. Therefore, a computer in our place would not actually understand. So, cognition cannot be only computation, and we arrive at a crossroads: Searle’s periscope can only exist under the assumption that computationalism is correct, but Searle attempts to argue that computationalism is not correct. I think (hope?) I’m starting to understand this better.

    5. And let's not forget that the idea that T2 could be passed by computation alone is just a hypothesis (based on the fact that computation is so powerful, and that it can produce so many of the (toy) parts of what thinking heads can do). There is no T2-passing computer program. (And there are reasons to believe that there never could be: Week 5, on the symbol grounding problem.)

  12. After reading Harnad's response to Searle's paper, I understand Searle's paper a lot better. From my understanding, Searle's argument was meant to invalidate functionalism because it ignores intentionality. Per the CRA, strong AI (which includes intentionality) is impossible because the TT can be passed without intentionality.

    Therefore, was Searle's intent to refute the TT, and in doing so, consequently refuted computationalism?

    Replies
    1. Searle only succeeded in refuting computationalism (cognition is just computation); he did not show that the TT (T2, T3, or T4) is not a valid test of whether you have successfully reverse-engineered cognitive capacity.

      (But it would all have been more easily understandable without the needless weasel-words "functionalism," "intentionality," and "Strong AI" (which is the same thing as computationalism).)

  13. I re-read Professor Harnad’s piece on Searle’s Chinese Room argument and one thing stuck out at me that I can’t find in the other skywritings - apologies if it has already been said and I missed it!

    Professor Harnad writes that we solve the other-minds problem with one another and with animal species that are “sufficiently like us” through “mind-reading,” which is attributed to Heyes. I admittedly don’t know much about Heyes or this topic in general, but I was wondering how the other-minds problem is solved for animals. I am just as sure that animals have minds as I am that human beings have minds. However, I was under the impression that we solve the other-minds problem between one another through communication via oral and written language (I am thinking of a T2-passing robot).

    I am wondering if Professor Harnad is saying that we solve the other-minds problem, or Turing-test other human beings and animals, through body language and facial expressions? If that is the case, does a T3 robot need to be able to communicate orally/through written language? Could a robot not show that it is conscious the same way that an animal or a small child who cannot yet speak demonstrates consciousness?

    Replies
    1. From what I understood, Harnad states that we solve the other-minds problem (with other humans and sufficiently similar animals -- further referred to as "others") through Turing-testing. We basically test and judge the "other" based on how similar they are to us in a variety of instances (i.e., can they do what I can do?). So, I think that you are on the right track when saying we solve the other-minds problem through body language and facial expression, but there may be additional characteristics we judge as well to conclude that "others" are conscious (for example, their ability to empathize).
      Your final question is quite interesting to consider. I interpreted it as: "because infants do not appear to demonstrate consciousness - but will eventually demonstrate it - could robots also have consciousness but be unable to show it?" If this is what you are asking, I think it is a definite possibility and that we are perhaps asking the wrong questions. Just as asking a toddler to do math computations would not give us any information about their consciousness, it could be that what we are asking the robots to be capable of showing will not reduce our uncertainty about their consciousness or lack thereof.

  14. “If there are indeed mental states that occur purely in virtue of being in the right computational state, then if we can get into the same computational state as the entity in question, we can check whether or not it's got the mental states imputed to it”

    This quote, or what Harnad calls “Searle’s Periscope”, leads me to a kind of conundrum about what it would mean to be “nonconscious”. If, as Harnad states, the only “way out” of the creeping eye of Searle’s Periscope is acknowledging that computational states are not, in fact, mental states (whatever those are -- presumably something like feeling a certain way about something), then they are nonconscious states, which sound like something a thermostat would be in when it senses the room to be a tad bit too chilly (as it would be wont to do right around this time of year!!). Nonconscious states -- can they be “about something”? What kind of status do the ~nonthoughts~ of, say, a thermostat have, vis-à-vis what we would consider to be actual mental states? Say we are dealing with a robot with a million times more sensory capacities than a thermostat -- all different kinds of sensory nerves and eyes that can “see” and ears that can “hear”. At what point does that integral of sensory capacities allow for something like what we’d call mental states? I suppose this is, asked once again, the question of symbol grounding. I’m not over it!

  15. I found this paper helpful in elucidating many of Searle’s ideas. For example, I now understand that implementation-independence means that we should look for the “mentality” in the computer program rather than the hardware, whereas I initially understood it to mean that the program itself doesn’t matter, and only the hardware does (which should be able to implement any program).

    Something that fascinated me in the discussion about the reverse-engineered D3 and D4 ducks was the realization that a D3 duck, even though it does not require exact structural equivalence to the D4 duck in order to perform the same actions, still requires some form of structural similarity for those actions to be possible at all. Furthermore, the more specific we try to get about function, the smaller the degrees of freedom for structural differences get. As written in the paper, this shows a clear “microfunctional continuum between D3 and D4”. Doesn’t this then mean that we can’t achieve the same function (indistinguishably) without the same structure? And doesn’t that then imply that structure and function go hand in hand, which would mean the distinction between D3 and D4 is ultimately futile?

  16. From my understanding of the paper, a main idea that Professor Harnad is trying to get across is that Searle’s CRA pushes too far in claiming that it has completely disproved computationalism. As mentioned in the paper, Harnad explains that Searle did not show that cognition is not computational at all, but rather that cognition cannot be all computational.

    Taking into consideration Searle’s point that cognition cannot be all computational, we would then have to question our intuition about a T3-passing robot. If a robot were to pass T3, we would have to assume that it is thinking, and we might therefore be inclined not to harm it in any way (not to kick it), since we would have to assume that it is not only computational but rather some sort of hybrid robot with non-computational mechanisms.

  17. In this paper, Prof. Harnad thoroughly decomposes the problem of computationalism and the Chinese Room Argument and makes the concept of Strong AI much clearer. I was really convinced by Searle’s CRA, and his reply to the ‘system reply’ was fairly simple and straightforward (for Searle, it is ridiculous to think that ‘if a person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese’). Prof. Harnad argues that it is true that a computation-executing system cannot be understanding semantics, but such an executing system can certainly be part of an understanding system. The analogy between linguistics and cognition helped me a lot in understanding this: computational rules are just syntax (which is interpretable, but the interpreted meaning is not part of the computation), while the understanding, meaning, and intentionality mentioned in Searle’s paper are the semantics. Can we say that the relationship between cognition and computation is analogous to the relationship between syntax and semantics? Then how does semantic meaning arise from syntax?

  18. I like the notion of Searle's Periscope mentioned in the article. Because of the "other-minds" problem, we can never experience the mental states of others, unless we become that entity and experience them ourselves. If mental states depend on physical states, then this is completely impossible. But thanks to the implementation-independence claimed by computationalism, we can access the mental states of others by being in the same computational state, which is something we can do. Then, we can use this to test any mental state which requires a conscious experience. If we find there is no conscious experience when we are in a certain computational state, then we can see that it does not give us the right mental state.
    I find this strategy very ingenious, because it uses one of the foundational principles of computationalism to attack its soft underbelly. Computationalism then faces an either/or: either forfeit implementation-independence to save computationalism, or admit that the computer cannot understand.
    I think the Chinese Room Argument also sheds some light on what understanding is and is not. Understanding cannot be merely executing instructions by following rules. When I learn to cook, I start by just following every step of the recipe. But after some time practicing, I start to understand cooking, and then it becomes something more than merely following instructions.
    Also, I think this difference between understanding and non-understanding can be discerned at a more behavioural level, without resorting to consciousness. If I really understand cooking, then I can deal with new situations and cook new dishes, while if I am just following instructions and have no real understanding, then I am much less flexible and easily stumble on new cases.

  19. When we talk about T3 and T4, and the analogous D3 and D4, one question haunts me: what is the standard for being indistinguishable? Yes, being indistinguishable means that you cannot distinguish them from real humans/ducks, but what exactly are the characteristics that have to match? For example, a duck can be deaf, which makes it unable to respond to auditory stimuli, or it can walk strangely because of a damaged webbed foot; these do not make it “not real”. However, when we talk about D3 and D4, do they need to be normal ducks to be indistinguishable? As I discussed in my previous skywriting, if it had the exact physical properties of a real duck then, without knowing in advance, when we see it walk strangely we might just think that this duck has some problem with its feet, or that this duck is interesting.

  20. I find Searle’s periscope a bit paradoxical in the sense that it is conditional on computationalism. Computation is implementation independent, which means the software can be implemented in any suitable hardware. If we take ourselves to be computers, the body is the hardware, and the mental state is the software, and since the software is implementation-independent, we can put ourselves in the place of another computer (person) and have ‘access’ to their mental states. It seems to me that since this is dependent on computation, ‘understanding’ would not be part of the process, since mental states are taken to be the ‘symbol manipulation’ part and Searle implies that cognition (which includes understanding) is not all computation.

    Replies
    1. Hi Shandra! I think that you are exactly right about Searle’s Periscope being paradoxical in the sense that it is conditional on computationalism. As you mentioned, mental states are not implementation-independent and purely computational. Searle’s Periscope would only work for T2, and only in the case that T2 could be passed by computation alone. In my opinion there are no such cases, as sensorimotor capabilities (a T3 robot) would be necessary to pass T2. A non-computational T2-passing system, or a T3, T4, etc., system, cannot be penetrated by Searle’s Periscope. Therefore, Searle’s Periscope would only work for an implementation-independent system that is computational.

  21. After reviewing this text in light of what we have learned throughout this class, there is something that I'm still having a hard time wrapping my mind around. It relates to the following excerpt from the reading: "Whatever cognition actually turns out to be -- whether just computation, or something more, or something else -- cognitive science can only ever be a form of 'reverse engineering' (Harnad 1994a)."

    Although I understand how and why cognitive science uses methods such as reverse engineering to understand cognition (it's impossible to recreate evolution, therefore we must rely on other methods to understand how and why we are the way we are), I don't understand why Prof. Harnad insists that cognitive science can *only ever be* a form of reverse engineering. After studying cognitive science for the better part of the last 5 years, and having used many different approaches (other than reverse engineering) to study cognition, I feel like reducing cognitive science to reverse engineering is a narrow way of defining this field, which makes me worry that maybe there is something that I still haven't understood about the definition of "reverse engineering" or, worse even, the definition of "cognitive science".

    Replies
    1. Hi Isabelle. From my understanding, 'reverse engineering' is just a general term that Prof. Harnad used to give us an idea of what kind of answer to 'how and why we can feel' we are looking for, or what would be acceptable. To reverse-engineer is to deconstruct something and extract its design information. One example that Prof. Harnad used in class is to think about how to reverse-engineer a vacuum cleaner. To fully understand the vacuum cleaner, we first need to understand how the device accomplishes its task, and then we need to think about how to gain a working knowledge of the original design by disassembling the product. I think using the term reverse engineering emphasizes that we should look for a causal explanation of the thinking capacity of humans/animals. Just as with reverse-engineering a vacuum cleaner, when we reverse-engineer our minds we should look not only for how and why we can do what we are capable of doing, but also for how and why we feel while we are doing it, which is more like the part about extracting design information in reverse engineering. Throughout this semester, different approaches were tested by asking the question: does this provide us with information to reverse-engineer our minds? Obviously, lots of 'scientific' paradigms fail under this criterion. At least they fail to reverse-engineer 'cognition.' I don't know if I am right, but so far, from my understanding, 'reverse engineering' just means solving the easy problem + the hard problem.

  22. In looking at the TT and what is considered the pen-pal version of the Turing Test, T2, I’d like to pose a question. Let’s say there are two Siri-like software systems — one that is solely AI-powered, and the other partially designed by a human (this is the role of a conversation designer, who manipulates the logic flows and checks behind a conversational AI-powered system). Both systems produce the exact same answers. However, they are considered slightly different, as one is human-out-of-the-loop and the other human-in-the-loop. If we keep things abstract and don't think about the hardware implementation, would both of these systems be considered T2, or would the latter be T3? I understand that a T3 would need to be just like us humans, but if a human were to intervene, what would this mean for the classification of the system? This question came up in my mind as I was reading this passage shortly after reading “An Inconvenient Truth About AI”, an article by famed roboticist Rodney Brooks: https://spectrum.ieee.org/rodney-brooks-ai

