Monday, August 30, 2021

1b. Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20

Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20. In Dedrick, D. & Trick, L. (Eds.), Cognition, Computation, and Pylyshyn. MIT Press.


Zenon Pylyshyn cast cognition's lot with computation, stretching the Church/Turing Thesis to its limit: We had no idea how the mind did anything, whereas we knew computation could do just about everything. Doing it with images would be like doing it with mirrors, and little men in mirrors. So why not do it all with symbols and rules instead? Everything worthy of the name "cognition," anyway; not what was too thick for cognition to penetrate. It might even solve the mind/body problem if the soul, like software, were independent of its physical incarnation. It looked like we had the architecture of cognition virtually licked. Even neural nets could be either simulated or subsumed. But then came Searle, with his sino-spoiler thought experiment, showing that cognition cannot be all computation (though not, as Searle thought, that it cannot be computation at all). So if cognition has to be hybrid sensorimotor/symbolic, it turns out we've all just been haggling over the price, instead of delivering the goods, as Turing had originally proposed 5 decades earlier.

68 comments:

  1. When the reading discusses Searle’s take on the Turing Test in the section labeled Turing Sets the Agenda, I was struck by the phrase, “it is basically a test – life-long if need be – of whether the system has the full performance capacity of a real pen pal, so much that we would not be able to tell it apart from a real human pen-pal.” In particular, if the test could be life-long, then isn’t it possible that we may never know whether a computer program passes the test? Isn’t it possible that there are still parts of human capacity that we aren’t able to recognize, so that if a computer program hypothetically passes the test today, it may actually fail in however many years?

    Replies
    1. I think in this case, the author considers the Turing test to be more theoretical than an actual test you would carry out. In theory, a machine that passes the Turing test would be indistinguishable from a human being over a lifetime, but in practice you would not test such a machine that way.
      I don't believe it would be particularly interesting or even surprising if an AI passed the Turing test (some have claimed they did). What is more interesting is whether we would consider such an AI to be capable of human-level thinking. So the practicality of the test is not as important as what it theoretically postulates. -Elyass Asmar

    2. The capacity has to be lifelong but the test does not.

      For Eric, our T3 robot, a few months in the course and in the residence with other students should be enough to see whether there is anything that suggests he is not thinking and feeling, just like the rest of us.

  2. "Behaviourists had rightly pointed out that sitting in an armchair and reflecting on how our mind works will not yield an explanation of how it works”
    Introspection can provide insight into the pathways we take to arrive at certain thoughts or ideas; however, it cannot uncover how we do what we can do. In that sense, I agree with the statement above, because introspection is uninformative. In some situations, we are able to retrace our thought pathways from one idea to another, which can give insight into how we arrived at a specific idea (e.g., I visualized my teachers in elementary school -> remembered what my grade 3 teacher looked like -> put a name to the visual picture), but it does not give us any information about how these ideas were generated, or by which mechanisms within the brain.
    Additionally, could it be possible that different brains arrive at certain conclusions differently? For example with the 3rd grade teacher experiment - some people may visualize their teacher and then place a name to the face. Others may think about their elementary school, then the names of the teachers they had, then which one was their teacher in grade 3, all without visualizing what they looked like. Although introspection is highly subjective by nature, would it be possible for there to be different strategies that different brains default to in certain situations?

    Replies
    1. Interesting thoughts AD! I personally think it is totally possible that different brains reach those seemingly instantaneous answers in different ways. Even though information like your third-grade teacher becomes semantic information, it is still tied to personal memories. As discussed by Prof. Harnad, arriving at the answer to a mathematical question is different - you can outline the steps that it takes to reach 9 from 3x3. Furthermore, these are well-defined steps, which are not tied down by subjective experience. However, if retrieving an answer such as Mrs. Sandall (for me) perhaps includes, at the subconscious level, associating other memories from that time, then the differences in pathways across people could be endless!

    2. The point of Hebb's example (for it was Hebb who asked these kinds of questions in PSYC 21, over 50 years ago) was not about how we remember names (which by itself is just a "toy" problem that can be solved in many different ways). The point was about just about anything and everything we can do: We cannot find out how we do it by introspection.

  3. "Rightly impressed by the power of computation and of the Church/Turing Thesis (Teuscher 2004)--that just about anything was computable, and hence computationally simulable to as close an approximation as one liked--Zenon relegated everything that was noncomputational to the 'noncognitive'...The criterion for what was to count as cognitive was what could be modified by what we knew explicitly"

    I'm a bit confused by this statement. Does this mean that Zenon thinks there are functions of the human brain that are not cognitive functions, or has he just narrowed the definition of "cognitive" to fit only things that we know how to compute? If there are brain functions/processes that we don't consider "cognitive", what would they be? I guess it doesn't really matter how we categorize them as long as we understand the processes, as is hinted at the end.

    Replies
    1. I think Zenon is just narrowing cognition to be a kind of computation, so that anything noncomputational can't be cognitive. Zenon suggested that those noncomputational (noncognitive) things occur below the level of the architecture of our cognitive machine (which does the computation) and are instead implemented in sensorimotor modules (which are more physiological or neuroscientific) that are "informationally encapsulated" and "cognitively impenetrable". That means they are "not modifiable by what we know and what we can explicitly state in propositions and computations".

    2. Zilong's answer to Melody's question is correct, but not very kid-sibly!

      Yes, Zenon was saying that things go on in the brain that we would not call "cognitive" (i.e., thinking). I gave some examples of "vegetative" functions in class: temperature regulation, balance, breathing, digestion. Sensorimotor function itself (input/output) is also a function of the brain (and the peripheral nervous system).

      But:

      -- (1) whether Zenon's notion that "cognition is just computation" ("computationalism") is correct,

      and

      -- (2) whether what is "cognitively impenetrable" (i.e., what cannot be changed by what you think, believe, or are told, such as the way the Mueller-Lyer illusion looks to you) marks the boundary between cognitive and vegetative brain functions,

      and

      -- (3) whether the notion that cognition occurs "at the level of the 'virtual machine'" that the brain is "simulating" is a coherent notion (rather than just another notion like the ones Zenon himself was rejecting as "homuncular", like mental imagery, or even mental propositions, calling for an unexplained little man in the head who is the "user" of the "virtual machine")

      are controversial matters.

      So if Zenon's computationalism (1) turns out to be wrong, then (2) and (3) are wrong too. We'll discuss this more in Week 3 (Searle) and Week 5 (the symbol grounding problem).

      (I think that if a cognitive/vegetative distinction in brain function makes any sense at all, the boundary between them is a fuzzy one...)

    3. If, say, cognition is computation (our mind is software encapsulated in our hardware body), then our thinking processes should not be influenced by our body (just as software cannot be modified by the hardware). Yet there are so many instances in neuroscience where simply stimulating certain brain areas can enhance cognitive ability (e.g., boost attention, memory, etc.). Does it mean that "cognition is just computation" is wrong, and that the boundary between cognitive and vegetative brain functions is a fuzzy one?

      In terms of (3), I think the idea that cognition occurs "at the level of the virtual machine" (that if the real architecture is a Mac, we are at the level of a PC) can also lead to an infinite regress. If we are at the level of the PC, who is the user that controls and creates input for the PC? It seems that we fall into the "homuncular regress" again. However, it seems that this problem arises because we are always eager to find a "cause" (an autonomous user controlling the software), or else it makes us question our free will. If we accept that there is no homunculus, does that mean our programs are pre-determined?

    4. 1. Software is executed by hardware. (I'm not sure what "encapsulated" means here.)
      2. Input (from outside the body, or from within the body) can modify software.
      3. Speeding up processing speed or increasing storage capacity modifies hardware, and thus the performance of its software.
      4. The "virtual machine" notion is homuncular: who/what is the "user"?
      5. There is no homunculus, but there is still me, and my capacities, that the causal mechanism must explain (if the reverse-engineering is correct).
      6. Computationalism is wrong (but we have not yet discussed how and why).

  4. When talking about Computation and Consciousness, a problem is brought up with the way symbols are interpreted: "But if symbols have meanings, yet their meanings are not in the symbol system itself, what is the connection between the symbols and what they mean?". This is a complicated question and likely highlights an important difference between human consciousness and a hypothetical disembodied consciousness.

    Rationally speaking, the arrow symbol (→) doesn't necessarily have any inherent meaning. It can be defined as a few straight lines put together in a specific order, which doesn't really say anything. However, it seems that as humans we can nearly universally and implicitly agree that an arrow points or aims at something. The last time you looked at an arrow sign, you probably didn't think it concerned your hunter-gatherer ancestors of thousands/millions of years ago. Yet they are partly involved in the meaning-making of what the arrow symbol represents to us (an arrow is a pointed weapon).

    So as it seems, we have a very specific relationship to the symbols we often see around us that a disembodied consciousness would not relate to or perceive.

    Replies
    1. I am not sure I completely agree with your final statement. I think it is an important fact that we all interpret a symbol (such as the arrow) in the same way, as you mention earlier. However, I don’t think that we have enough information to say that this shows that our relationship to these symbols is unique from a relationship a disembodied consciousness could form. As you said, the 2D symbol of an arrow is arbitrary, thus humans must also learn from other humans or experiences what it means, rather than inherently knowing its meaning. Because of this, I would think it is possible for a disembodied consciousness to learn the meaning of this symbol in the same manner that a child might learn it. So I think it is true that our process of interpreting symbols is different from simply computing meaning, but I don’t think that we can confirm that a disembodied consciousness would interpret symbols differently.

    2. (1) What is a symbol system (computation)?

      (2) What is a symbol in a symbol system?

      That's the kind of symbol we are talking about in Week 1.

      Neither an arrow, nor pointing, nor a green traffic light is a symbol in a symbol system.

  5. In this reading, Searle explains that a Turing Test-passing machine may compute problems without understanding symbols, and that: “it is just a bunch of symbols that are systematically interpretable by us -- by users with minds”.
    I found this quote to be particularly intriguing, as it implies that only humans with minds may systematically interpret symbols, and that machines compute symbols without knowing what they may represent. Searle then argues that because of this, the Turing Test is insufficient as a test of cognition and should be abandoned in favour of studying the brain.

    This left me with more questions regarding symbol-grounding and the Turing Test. For instance, while it is possible for the machines to compute symbols without understanding their meaning, could it also be possible for the machines to attach meaning? Furthermore, what does it mean to attach meaning to symbols in the world? If one does attach meaning to symbols are they therefore cognitive?

    Replies
    1. More on this in Week 3, 5, and the later parts of the course on category learning and language (6-9).

      But at this point it would be good if you explained to kid-sib what you mean by "understanding" and "meaning."

      Remember that the challenge of cogsci is to provide a causal explanation (not a homuncular one) of how and why organisms can do what they can do (as well as how and why they can feel).

    2. After reading the text and the skywritings, I was left with similar questions regarding the implications of Searle's arguments on the relationship between the Turing test and computers' "understanding" (or lack thereof). In hopes of understanding these concepts myself, I've broken down the reasons why the Turing test can't tell us much about whether or not machines can understand (or have the feeling of understanding as discussed in class).

      When discussing the Turing test in his paper, Prof. Harnad mentions the following: "If it [the system] passes the test, then it really cognizes; in particular, it really understands all the emails you have been sending it across the years, and the ones it has been sending you in reply" (Harnad 2007). In other words, passing the Turing test implies understanding the emails (which also implies being capable of *understanding*, however we choose to define this).

      But after reflection, I've realized that the only safe assumption we can make when it comes to the Turing test is the following implication : failing the test implies not understanding the emails (and vice versa).

      Indeed, Searle's thought experiment showed that it is possible to pass the test and not understand a thing. Although this is an interesting demonstration, it does not show that failing the Turing test implies not being capable of understanding in a broader sense. On the contrary, it's possible to not understand the test specifically but still be capable of general understanding. Searle's thought experiment actually demonstrates this, because one of the initial assumptions is that Searle is capable of understanding, yet he cannot understand the Chinese version of the Turing test.

      So a computer passing the test does not imply anything. It could understand it, like a human would in their native tongue, or it could succeed without understanding, like Searle would in his Chinese room experiment. On the other hand, failing the test implies that you didn't understand the test, but it's important to remember that it doesn't necessarily mean that you aren't capable of understanding at all.

      On that note, I am also having a hard time finding a kid-sibly definition of understanding, hopefully I'll have that sorted by my next Skywriting.

  6. “A more valid inference is that cognitive science cannot be done by introspection.”
    Using a computational view to try and understand cognitive processes – like the one that occurs when we are prompted with a question such as “what is the name of your 3rd grade teacher?” can prove to be quite limiting. If there is a form of computation that occurs in our brains when we are asked such a question, it occurs subconsciously, so that no amount of introspection would be able to reveal why we are actually able to get to the name or mental image of our teacher. We reach certain answers without being able to explain why we got there. This is what I am interpreting to be our cognitive blind spot – the “why” that we cannot understand about the way we reach certain thoughts/actions/answers/feelings. Since no amount of introspection will be able to reveal the “why” behind our answers, it seems as though computationalism is an imperfect analogy for the brain.
    I don’t think it has been given enough time, however. If it is true that cognitive science cannot be done by introspection, it must rely on neuroscience for explanations. I believe that once neuroscience has evolved a bit more (so that we can statistically state the likelihood of certain neurons firing in response to a particular stimulus, for example) we will be able to not only describe the pathway that got us to a certain answer, but predict (using a function) our thoughts/actions/answers/feelings as well.
    Regardless of my thoughts above, I find it fascinating that something as obvious as who my 3rd grade teacher is could prompt such a deep question about the brain’s function.

    Replies
    1. Naomi, I think you make a really good point! I know that my 3rd grade teacher was Ms. Silva without any hesitation, but I cannot seem to pinpoint why or how I know that — I just do. I find this phenomenon manifesting in many ways, and each time, it is baffling and still somehow fascinating that we still do not know “why” some things are the way that they are. I agree with you that no amount of introspection would help us deduce this answer.

      However, if we combine this reading with the ones from section 1a and consider a hypothetical PC-simulated brain, I am a little confused as to how your last point holds up. We would most likely need to know about every connection and wiring possible in order for us or our technology to best replicate or simulate a human brain. At what point do we know that we’ve evolved to the point where we fully “understand” neuroscience? It’s been well documented that the more we uncover, the more there is left to understand. Following this logic, it appears as though once we figure out more neuroscientific probabilities and likelihoods, more questions will arise. Would a simulated brain be able to generate thoughts/actions/answers/feelings as well as an actual human could? And if so, would we be able to distinguish these thoughts from the thoughts of an actual human? If we were able to clone each individual neuron and synapse and mold them into the patterns of an actual brain, would that be considered a real brain, or is a “real” brain one that DOES have all of these “why” questions unanswered? I’ve put the word “real” in quotes to emphasize that it’s hard to draw the line when there is so much we do not understand. Is a computer-generated brain able to have the same experience of remembering its 3rd grade teacher that a human brain can?

    2. Naomi & Emma: Cogsci is reverse-engineering, and it's more about how we can do what we can do than about why. (Part of the answer to why is Darwinian (Week 4): we can do what we can do because the genes that made it possible for our brains to do it helped those of our ancestors who had them to survive and reproduce. The other part is already cognitive, because evolution is "lazy": it doesn't code into genes what we can learn from experience and culture; it codes only enough to make us capable of learning from experience and culture. Cogsci has to explain that capacity too.)

      About whether learning where and when things happen in the brain can explain how and why we can do what we can do, that's Week 4.

      Emma: Please explain to kid-sib what you mean by "simulation". (See the discussion in 1a, and the 2nd (Week 1) video. You need to understand both what computation is and what the Strong Church/Turing Thesis is: try to explain those to kid-sib too.)

  7. In this reading, I was struck by the sentence “The buck must stop somewhere, and the homunculus must be discharged, replaced by a mindless, fully autonomous process”. Why must the explanation for cognition be mindless and fully autonomous? Could the true explanation for cognition—by the brain about the brain—not elude the capacities of language—maybe due to the impossibility of an entity fully understanding itself? The possibility should be considered that, as sophisticated as technology is today, we do not yet have the tools to achieve a complete understanding of cognition. And that if we do have the tools, the explanation could be somewhat irrational: elusive to the human mind because of its complexity, or perhaps its infinity. After all, the laws of nature need not conform to us; we are the ones whose success in life depends on living as closely in accordance with the laws of nature as we can. And as we have seen in such fields as quantum mechanics, sometimes nature acts irrationally, outside of humans’ logical, intuitive explanations.

    Replies
    1. What we mean by "having a mind" is being able to do and feel what an organism with a mind can do and feel. And reverse-bioengineering those capacities is what cogsci is trying to do: to explain causally how our brains (or any causal mechanism) can do and feel what organisms can do and feel.

      Why would we want to start trying to explain that by assuming it's not possible?

      But explaining it by assuming that an (unexplained) "homuncular mind" is doing it would be circular, and would beg the question. That's why explanations of the mechanism that causes a mind must be mindless.


  8. I appreciate the attention given to the symbol-grounding problem in this article. The problem of “where” (if that is the right way to conceptualise it) meaning exists is certainly a haunting one. I sit often with the idea of the unsignified (Lacan’s “Real”, maybe, or the traceless/apophatic trace). That is, whatever is left after all objects/ideas/whatevers have been parsed out and named. It is certainly interesting to think about whether that process by which, as Harnad phrases it “internal symbols” and “the external things its symbols are interpretable as being about” come to be connected. That this process is an embodied one (requiring some sensorimotor capacities) is compelling. But what of the imperceptible, the unnameable? A robot created to deal in symbols and provided with some sensory inputs and motor capacities, how would it gain a sense of the “whatever else” that is affect, G-d, Desire, that we spend so much time thinking on? At the risk of sounding melodramatic, who wants a pen-pal who has yet to sit with the nothingness?

    Replies
    1. I think you are underestimating what successfully passing T2 (or T3) really calls for:

      There are certainly plenty of real people who are not particularly interested (or gifted) in discussing "Lacan’s 'Real'... or the traceless/apophatic trace, or affect, G-d, Desire, or nothingness", but that would not make them fail the Turing Test (whether they were humans, robots, or just computers).

      Why not write Eric an email and ask him what he thinks of "nothingness"?

  9. I don’t personally believe this, but could an alternative conclusion from Searle’s argument be that there is in fact no meaning in anything? Imagine I were born as a highly complex computational system that could process anything I interact with in the world and produce a certain response to it. Who’s to say I do actually know the meaning of any of these things I interact with, that I actually understand anything? Perhaps the experience I take to be understanding is simply a part of the output of my complex computational system. For example, maybe I don’t actually know what an apple is, I’m just programmed to pick it up, to bite into it or whatnot, and to think of it as something I “understand”. I am not right or wrong about whether the apple is an apple when I think this as there is no meaning at any level of the process, one of my computational outputs just happens to be the experience that I identify as understanding that this thing is an apple.

    This conclusion seems implausible to me, but I’m not sure if it’s actually impossible for some reason I’m not seeing.

    Replies
    1. The conclusion is not only implausible, but it doesn't even make sense. Of course *I* can't know for sure whether you understand "an apple is a round red fruit," but I can certainly know for sure that *I* understand it (and that I don't understand 蘋果是圓形的紅色水果 till I google it), just as I can't know for sure whether you really feel pain when pinched, but I can certainly know that *I* do.

      The first of each of these pairs of cases is the "other minds problem" (there is no way to know for sure whether anyone else but you feels anything) but the second of each is Descartes' Cogito. (Can you explain that to kid-sib?)

      The error is the same as in "Matrix" thinking: "What if you're just a figment of my imagination?" (not true, but logically possible) versus "What if I'm just a figment in *your* imagination?" (meaningless nonsense).

  10. Reading this article, I thought, 'Aha, here is the 3rd-grade teacher example from class again'. Of course, it seems unacceptable now that Skinner thought it was okay to dismiss the how question (how the times have changed!). But is the visual imagery summoned an answer to the how question? The article says it is not. There are yet more layers of curtains to draw: the real 'how' behind the superficial 'how'. This made me think of the kinds of questions scientists ask, which were mentioned in our first lecture, and what counts as a valid explanation. The important thing is what we identify as result and cause. We are dissatisfied once we fathom that something is a result rather than a cause. And it seems that we need the confidence to propose that the result can be decomposed into further processes. How do we obtain that confidence?
    (Also, I was confused about the term 'dynamical')

    Replies
    1. Yes, cause-and-effect explanations can lead to more cause-and-effect questions (you plant a seed and give it water and sun, and it grows into a plant), but you can still ask "how do light and water make it grow" (and you get into photosynthesis).

      So, yes, every cause/effect explanation can lead to more cause/effect questions -- but that's why we have biology and chemistry and geography and physics.

      (The answer to every question can be reduced to a kid-sib explanation, but that does not answer every other question that could be asked! Usually, though, if our question is based on uncertainty among a small number of options, as in the (vegan) sandwich machine, a few questions and answers will be enough to get you lunch (that day)...)

    2. Oh, and "dynamic" means physical, material, analog. A car (or a cat) is a dynamical system; so is a computer, as a physical object. But the algorithm that the computer is executing is symbolic: the machine follows rules for manipulating arbitrarily shaped symbols on the basis of the symbols' shapes, not their meanings. And the same computation could have been done by a completely different dynamical system, with differently shaped symbols. Hence the independence of software (symbolic) from hardware (dynamic).
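      To make "rule-based manipulation of arbitrarily shaped symbols" concrete, here is a minimal toy sketch (my own example, not from the reading): a rule that does unary addition by deleting the "plus" token. The rule consults only the shapes of the tokens, never their meanings, and the same algorithm runs unchanged on differently shaped symbols -- which is the symbolic/dynamic (software/hardware) independence point.

      # Toy sketch: computation as shape-based rule-following (assumed example).
      # The rule deletes every "plus"-shaped token; WE interpret the result as addition.
      def add_unary(tape, plus='+'):
          return ''.join(t for t in tape if t != plus)

      print(add_unary('||+|||'))            # '|||||'  (we read it as 2 + 3 = 5)
      # Same algorithm, arbitrarily different symbol shapes:
      print(add_unary('@@#@@@', plus='#'))  # '@@@@@'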

    3. I see, thank you for that clarification!

  11. I was struck by the quote "The decorative phenomenology that accompanies the real work that is being done implicitly is simply misleading us, lulling us in our anosognosic delusion into thinking that we know what we are doing and how." I interpreted this to mean that despite what we believe or hypothesize to be happening to produce our conscious thoughts, the true mechanisms beyond the "homunculus" are still beyond our reach. The line between a cognitive process versus a computational process versus some other process we don't yet know is up in the air and subject to change. With this in mind, I think it would be interesting to examine how computational explanations of consciousness could be used to understand anomalies of consciousness and cognitive dysfunction. Most of the examples/thought experiments we have looked at so far are in the scope of "normal" cognitive functioning.

    Replies
    1. Yes, homuncular thinking gives us the false sense that we know how we're doing what we're doing.

      What do you mean by the line between a cognitive and a computational process? The fuzzy line is between processes that are cognitive, like perceiving, understanding and willing, and processes that are "vegetative," like temperature and balance regulation and breathing.

      Before we ask whether computation can explain cognitive dysfunction we have to see whether it can explain normal cognition. And that brings us back to T2 and T3...

  12. “... images can be “computed” too. Here Zenon would agree, but pointing out that a computation is a computation either way. He had famously argued that Shepard’s mental rotation task (Shepard & Cooper 1982) could in principle be performed computationally using something like discrete Cartesian coordinates and formulas rather than anything like continuous analog rotation.”
    But do we all think the same way? Could it be that there are individual differences in the means that lead us to similar ends? For example, when asked the 3rd grade teacher question, I did not picture her face but rather remembered her name in association with my 3rd grade experience. I can hardly recall her face and yet I still recalled her name. But some see the face of the teacher, which triggers the memory of the name. If this is true of other kinds of recall, perhaps there needs to be a more dynamic understanding of the ways we cognize our world. This quote captures some of the thoughts I am having: “But at that point the debate became one about optimality (which of the two ways was the most general and economic way to do it?) and about actuality (which of the two ways does the brain really do it?) and not about possibility in principle, or necessity. It had to be admitted that the processes going on in the head that got the job done did not have to be computational after all; they could be dynamical too. They simply had to do the job.” Perhaps it is an oversimplification to assume that there is one singular answer to the way we cognize. Maybe there are spectrums that exist that can help account for individual differences, or perhaps different models of cognition would be a more accurate way of exploring this question.
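    To make the "discrete Cartesian coordinates and formulas" alternative concrete, here is a hedged sketch (my own toy example; nothing in it settles which way the brain actually does it): the figure is stored as a list of (x, y) coordinates and "rotation" is just a formula applied to each pair, with no continuous analog rotation anywhere.

    # Hypothetical sketch of rotation done by formula over discrete coordinates.
    import math

    def rotate(points, degrees):
        a = math.radians(degrees)
        return [(x * math.cos(a) - y * math.sin(a),
                 x * math.sin(a) + y * math.cos(a)) for x, y in points]

    figure = [(0, 0), (0, 2), (1, 2), (1, 1)]   # a crude test shape
    print(rotate(figure, 90))                   # the "rotated" coordinates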

    Replies
    1. Well, yes, there are individual differences too, and those too will have individual causal explanations. But the Turing Test is about general, generic (human) cognitive capacity. The explanation of individual differences will come later. Eric passes the test regardless of whether he is a more visual or a more verbal thinker.

    2. I agree with you that the image of the 3rd grade teacher will not universally pop into people's heads, due to individual differences and the variety of processes that occur in the brain. In my experience, some information is stored as statements and as non-visual sensory stimuli. If I were asked a fact about ancient history, the information would probably come to me as a memorized sentence, and if I were asked about a party I went to in high school, I would only be able to list the highlights of the event as statements that I’ve extracted from short-term memory and consolidated over time. Yes, I do often remember people by recalling their faces first, but sometimes I also do so via association to spaces and events (I remembered my 3rd grade classroom before remembering my teacher). The fact that I often answer questions via statements reminded me of what we learned about Pylyshyn in class. He was initially critical of mental imagery and believed that we stored propositions, which was eventually scrapped and replaced by computations and their symbols. I agree that computations are more accurate for describing cognitive processes, but I would not undermine their links with visual imagery and propositions, as the 3rd grade teacher question can attest. As you have pointed out, there cannot be a singular answer to how people cognize, and I would be really interested to learn more about alternative models of cognition. If anything, the only commonality to cognitive processes is the anosognosia that accompanies them.

    3. Good points. But even for the things you describe -- remembering statements and short-term memory -- we don't have a working causal mechanism. Associative models there certainly are, but how they actually add up to the capacity to do what we can do is far from clear. The models we have as yet are just toys, and it is unlikely that they are part of the real thing.

    4. Through introspection, one cannot explain how we come up with the name of our 3rd grade teacher. An imagery theorist would claim that we come up with her image and identify it. Hebb proposed that neurons that fire together wire together, meaning that the memory of third grade is intrinsically linked with the memory of the third grade teacher, their face, voice, attributes, and ultimately their name. Nonetheless, this doesn’t resolve the problem of cognition: how did we come up with that image and identify it? If Hebb’s explanation were sufficient, one could argue that cognition and memory occur through consolidation, such that as one tries to remember the teacher, and as that memory is brought up, all of its associated memories are too. This approach, however, still doesn’t explain much of anything: how is it that we are able to do so? How can we form these memories and remember them when needed? How do we link these concepts together? How are the feelings about our third grade teachers formed?
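      For concreteness, "neurons that fire together wire together" is usually written as a simple weight update (a toy sketch with made-up numbers, assuming the textbook form dw = eta * pre * post). It also shows why the rule, by itself, leaves the question open: the association gets stronger, but nothing in it explains how the cue finds and reactivates the right memory.

      # Toy Hebbian update: strengthen a connection when two units are co-active.
      eta = 0.1
      w = 0.0                                   # "grade 3" -> "teacher's name"
      coactivations = [(1, 1), (1, 1), (0, 1)]  # (pre, post) activity pairs
      for pre, post in coactivations:
          w += eta * pre * post                 # dw = eta * pre * post
      print(w)  # ~0.2: the association is stronger, but retrieval itself is unexplained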

  13. “Behaviorists had rightly pointed out that sitting in an armchair and reflecting on how our mind works will not yield an explanation of how it works (except of course in the sense that explanation in all disciplines originates from human observation and reflection).”
    This section of the article made me acutely aware that we are the ones programming AI. How are we to program a robot to act just as illogically and creatively as a human being if we do not fully understand ourselves? Is it possible to program a robot to watch us and understand fully enough to take on the task of becoming as close to human as possible? Is that the goal? In short, is the goal of cognitive science to create a robot that is as close to human as possible in order to understand our own cognition, or is it to create a robot that can do everything that a human can do, but perfectly and without human error, in order to learn to optimize human cognition? Would it depend on who you ask, or does the field currently have one defined goal? If it were the former, isn't this method of exploration backwards? How could we expect to create an algorithm to answer questions we still have while creating that very algorithm? It was validating to read that this is a large question in the field of cognitive science and that I am not alone in feeling skeptical of computationalism as a singular answer.

    Replies
    1. What is "programming"? Computations are formal symbol-manipulation rules. Computers that implement them execute the rules. If the programme works, it gives the computer the capacity to pass T2 (not T3, because a robot is not just a computer executing programmes).

      But even a T2 (if it were possible to pass it through computation alone -- "Stevan Says" it's not possible) would no longer be the same computer that the "programmer" had programmed, once it got new inputs, and executed more computations, learning new things (T2 has to be able to learn) that the "programmer" had never dreamt of. In fact, in operating on inputs, computer programs can change themselves. They don't even have to be T2 to do that: toy programs can modify themselves based on their history of inputs (see the toy sketch at the end of this reply).

      "Stevan Says" computationalism is wrong -- but that has to be shown by something better than the notion that "computers only do what they're programmed to do." Inputs matter too.

      You'll see more of these easily answered objections in Turing's famous paper next week. I call them "Granny objections" to cognition = computation. "Cognition = computation" is wrong, but the Granny objections are wrong too!

      T2 is not just about answering questions. But you do come close to a valid reason for doubting that computation alone could ever pass T2 when you ask how an algorithm could anticipate all possible questions.
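      A toy sketch of the kind of self-modifying program mentioned above (my own minimal example, nothing more): it is "programmed" with no responses at all, and its input history rewrites its own response table, so after a while it is no longer doing only what its programmer explicitly put into it.

      # Toy program whose behaviour is changed by its input history.
      rules = {}                                # learned stimulus -> response table

      def respond(stimulus, feedback=None):
          if feedback is not None:
              rules[stimulus] = feedback        # input modifies the program's own table
          return rules.get(stimulus, "???")

      print(respond("bonjour"))                 # '???'  (nothing programmed in)
      respond("bonjour", feedback="hello")      # learning from input
      print(respond("bonjour"))                 # 'hello' (behaviour has changed)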

  14. Mediated symbol-grounding doesn’t work for cognitive science “for the same reason that homuncular explanations do not work in cognitive explanation, leading instead to an endless homuncular regress…the homunculus must be discharged”.
    In the reading, we learn that we must enhance the Turing test to avoid the need for mediation by external minds, so that internal processes can link symbols to things they symbolize. This is because having a hypothetical robot manage to pass the original Turing test through writing is insufficient to prove cognition, as it could well have done so through memorization and without understanding. But wouldn’t it still be impossible to know whether cognition is happening in a robot that passes the enhanced test? How would it be possible to know whether the robot truly understands links between symbols and meanings, and is not once again just memorizing how to meaninglessly perform an almost infinite set of sensorimotor actions?

    Replies
    1. Turing noted that there is no solution to the other-minds problem (not T2, T3 or T4). But T3 at least makes the connection between the capacity to say "an apple is a round, red fruit" and the capacity to point out which fruit it is, and to do the right thing with it (such as eat it). Once it becomes Eric, Turing asks, what is your basis for doubting that Eric really understands, since he's a T3 robot you can't tell apart from anyone else in the class (for a lifetime). We don't look into one another's heads to feel sure...

  15. In terms of the question of how we remember our third grade teacher, what prevents us from answering the questions "How did our brains find the right picture?" and "How did they identify whom it was a picture of?"? If we are able to introspect how we did long division in our heads, why aren't we able to introspect how we identified our third-grade teacher, how we had to reach into our memories and recover a mental image of their face? Is it due to the conjuring of the mental image out of seemingly thin air? Or perhaps it is because it is a process unique to each individual, and not easily replicated in another person.

    Replies
    1. If introspection does not tell you what causal mechanism can give you that output in response to that question (or to any other input, at T2 or T3 scale), then there is no point turning to introspection to figure out how to reverse-engineer all those capacities. About T4 (trying to figure out how to do it by studying the brain), we will be talking in Week 4.

  16. Like April mentioned above, this paper also made me wonder about the kind of questions scientists ask as well.

    Always having been fascinated by cognitive development, I was immediately reminded of the experiments I learned about in my Child Development course. The experiments that are designed to test children’s cognitive capabilities are perfect examples of Skinner’s behavioristic ideas: 1) The outcome variables are always observable behavior and 2) In the conclusion, researchers comment on the fact that children are able to do such things at a certain age. These conclusions arrange “cognition” in a hierarchical fashion, where one part of “cognition” becomes “activated” as babies develop.

    Now, I clearly understand that observing behavior explains nothing in terms of how babies do such things. However, knowing how early babies can understand causality or develop a sense of morality etc. from these experiments, the question of how even very young humans can do such cognition becomes even more intriguing.

    “Stop answering the functional questions in terms of their decorative correlates, but explain the functions themselves.”

    How can we explain the functions themselves? If I understood correctly, one method Pylyshyn believed in is the Turing Test, that “on the day we will have been able to put together a system that can do everything a human being can do, indistinguishably from the way a human being does it, we will have come up with at least one viable explanation of cognition”.

    In this statement, are these TT-passing robots readily made? Resembling adult forms of humans? In other words, when modeling a Turing robot, do we account for the fact that cognition can develop? Does cognition develop? It leads me to wonder whether cognition is an all or nothing phenomenon. Is the ability of cognition innate or do we update much like computers update?

    Many questions. Excited to read more about Chomsky and Universal Grammar.

    Replies
    1. How can we reverse-engineer the causal mechanism underlying organisms' cognitive capacities? One way is to try to model them computationally (which does not mean organisms' brains are doing computation, any more than a computational model of a vacuum cleaner means that vacuum cleaners are computing; see the sketch at the end of this reply). A computational model can discover and test (formally) the causal mechanism; but the real mechanism does not compute (unless what you are modelling is a computer!). Computation is just (interpretable) symbol manipulation.

      Yes, modelling cognition includes modelling development, and learning. But just observing the development and learning is not modelling it.

      This will become clearer as we go on in the course.
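      Here is a minimal sketch of what "modelling a non-computational system computationally" means (my own example, a falling ball rather than a vacuum cleaner): the program manipulates symbols (numbers) step by step to predict what the dynamical system will do, but the ball itself is not computing anything.

      # Toy computational model of a dynamical system (simple Euler integration).
      dt, g = 0.01, 9.81                 # time step (s), gravity (m/s^2)
      height, velocity, t = 10.0, 0.0, 0.0
      while height > 0:
          velocity += g * dt
          height -= velocity * dt
          t += dt
      print(f"simulated fall time: about {t:.2f} s")   # roughly 1.4 s for a 10 m drop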

  17. Even though we don’t know explicitly how we come up with our grade 3 teacher's name, we can still recall it. It seems we have an input (the question asking us to recall the name) and an output (recalling the name) even though we do not know how we come up with the output. It is as if our thoughts go through a black box in our brain: we do not consciously understand the exact thinking process. If cognition is computation, the "black box thinking process" seems to fit the definition of computation as something "systematically interpretable": the computer does not understand its program, but the user can interpret the outcome. If we are the user and our brain is the computer, then we should be able to interpret the meaning of our grade 3 teacher's name, even though our brain does not understand the thinking process, since it simply performs the algorithms. Yet, if the brain is the computer, what makes us a user? Which part of us issues commands to our brain?

    Replies
    1. The goal of cogsci is to find the causal mechanism (in the "black box") that explains how organisms can do all the kinds of things they can do. That's reverse engineering. Computation is just one candidate for that causal mechanism.

      If cognition is computation, and computation requires a user to interpret the symbols, who/what is the "user" when we are thinking? And what is the causal mechanism inside that user? (This is why this idea is homuncular.)

  18. I agree that the Turing Test is insufficient to demonstrate that a robot can think. In the conclusion, Harnad proposes how we might use reverse engineering to construct a symbol-grounded, cognizing, non-human creature. However, if awareness is considered part of cognition, then the suggested reverse engineering may not result in a humanly cognitive entity, because even if such a Turing test exists, it lacks the ability to assess whether a system is aware of what it is doing, which is an important aspect of cognition. It would be difficult to evaluate experimentally whether the robot has consciousness, as far as I know. So, if consciousness is a component of cognition, how would we determine whether robots can think like humans? Is passing the TT an accurate reflection of cognition?

    Replies
    1. Yes, cogsci has to explain how and why organisms can do what they can do (the "easy" problem of cogsci) as well as how and why (sentient) organisms can feel (the "hard" problem of cogsci).

      Doing is "easy" because you can observe and measure it directly.

      Feeling is hard, because you can't observe and measure it directly.

      But you can infer feeling indirectly. We are pretty sure other people, and many organisms feel (i.e., are sentient). Turing suggests we can be no more nor less sure of a TT-passer, once we have reverse-engineered a successful one. (Cogsci is still very far from that: it has only produced toys so far.)

  19. "Beware of the easy answers: rote memorization and association."
    This quote particularly struck me at first because when considering whether machines can think, I often get stuck on the fact that any response a machine has is simply a programmed response based on a specific input (for example, if A occurs > produce outcome B). For this reason, I find it hard to consider a machine, no matter how advanced, as being able to think on its own (as being "human"). However, the quote made me realize that humans often perform these same computations. If prompted to think about our 3rd grade teacher, the image of them is conjured into our brains. But as expressed in the quote, that would be too easy right? So, how then do we explain how and why we do what we do, if even introspection will not help us? I'm very interested in learning more about cognitive science, especially about the "cognitive blind spots" that Zenon made us aware of.

    Replies
    1. We'll start with computer-modelling as a method (Turing's method), but we have to distinguish (1) trying to solve the "easy" problem using computer-modelling (also called "Weak AI" by Searle, Week 3, which is the same thing as the Strong Church-Turing Thesis) from (2) computationalism (cognition = computation). (Do you understand the difference?)

  20. “The only way to do this, in my view, is if cognitive science hunkers down and sets its mind and methods on scaling up to the Turing Test, for all of our behavioral capacities….full robotic version of the TT, in which the symbolic capacities are grounded in sensorimotor capacities and the robot itself (Pylyshyn 1987) can mediate the connection, directly and autonomously, between its internal symbols and the external things its symbols are interpretable as being about, without the need for mediation by the minds of external interpreters.”
    Supposing that Cognitive Science does create this robot capable of perceiving all human sensory modalities, creating its own models of external stimuli, and in every way capable of replicating human symbol systems, a human judge of the new Turing Test may be thoroughly convinced that this hypothetical robot is human. But could this robot convince other AI? If we had a machine learning system that was tasked with determining differences between the hypothetical robot discussed above and a real human, could it do a better job than humanity could at finding the subtle tells that distinguish robot from human? Or are machine learning systems always biased and limited by the humans creating and “teaching” them? I am curious whether using AI as the judge in a Turing test has been considered before. Why are humans necessarily considered the best judges of consciousness when we can’t definitively determine the consciousness of other animals or even sometimes ourselves? Computers can compute much better than humans in many domains, so could they also outperform humans in computing whether something is conscious?

    Replies
    1. Genavieve, I thought this was a very interesting point! I feel like AI could be a better judge than a human in determining whether someone passes the T3 Turing test or not, since an AI that is focused on one task is often better at it than a human (if that task can be computed). We can imagine an AI that, after being presented with a ton of real humans and T3 robots and being given feedback (supervised learning), would get really good at differentiating the two.

      In this case, if we find a way more reliable than human judgement to check whether or not a robot acts like a human, I feel like this should become the new bar for passing T3. I feel like "fooling a human" is a pretty arbitrary and subjective way to assess a scientific feature, especially considering our tendency toward anthropomorphism and all our cognitive biases.
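      A minimal sketch of the kind of automated "judge" being imagined here (everything in it is invented for illustration: the features, the data, and the choice of classifier): supervised learning just means fitting a decision rule to transcripts that have already been labelled human vs. candidate, then applying it to new ones.

      # Hypothetical supervised "judge" (toy features and labels, made up).
      from sklearn.linear_model import LogisticRegression

      # e.g. [average reply delay (s), typo rate, mean reply length (words)]
      X = [[2.1, 0.03, 40], [1.8, 0.05, 35], [0.2, 0.00, 60], [0.3, 0.01, 58]]
      y = [0, 0, 1, 1]                          # 0 = human, 1 = candidate robot

      judge = LogisticRegression().fit(X, y)
      print(judge.predict([[1.9, 0.04, 38]]))   # the judge's guess for a new transcript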

  21. “But can introspection tell me how I recognize a bird as a bird, or a chair as a chair? How I play chess (not what the rules of chess are, but how, knowing them, I am able to play, and win, as I do)? How I learn from experience? How I reason? How I use and understand words and sentences?” (247)

    In this excerpt, I think that Professor Harnad is getting at the easy problem of cognitive science. What interests me in particular is the idea that sometimes we are able to think consciously and explicitly about the cognition we are doing, but there are other times (maybe most of the time?) when we are unaware of our own cognition (is cognition the correct word here? I thought to say “computation” but that isn’t really what I mean…).

    This makes me wonder why it is that we are able to feel cognition sometimes but other times not. I’m not sure if that is the best way to phrase it. I’m thinking of the difference between feeling yourself trying to solve a math problem and the example of passively coming up with the name and a mental image of your grade 3 teacher when someone asks.

    Replies
    1. I think you might be describing the difference between cognition and computation. In the math problem example, we are given a set of explicit rules which we can write down and use to solve the problem. We can describe what is going on because it is a process/problem that can occur independently from the brain. It is a computation and does not require cognition. The problem of remembering the name of your third grade teacher, however, is one that directly requires memory processing in the brain. Since we don’t know how the brain functions completely, this makes us blind to understanding this cognitive function.

  22. One of the overarching ideas that has guided the investigation of how we have the behavioural capacity to think what we think, and to do the things we do, is that the computations done in our heads are invisible and impossible to truly unravel by means of introspection. In the example of how we mentally trace back who our third grade teacher was, although I can superficially retrace the steps I went through to get to my answer, I’m not able to understand exactly how I’m able to do that.
    This is defined as an issue because in order for it to reach the standards of successful cognitive theory, “we must be able to make this implicit computation explicit so it can be tested computationally to see whether it works.”, however I’m not entirely sure if I believe in the logic behind this. Why exactly must the method be “tested” computationally to see “whether it works”, if we already know it works because of the fact we were able to come to the correct answer? What exactly does “work” mean in this instance; must it be the same exact procedure/mechanism across individuals? Since we are all so different and have learned to deduce/derive answers differently, wouldn’t this mechanism be subjective anyways?

    Replies
    1. The points you raise in this skywriting definitely pushed my thinking on the topic. Is the purpose of emulating brain functions through computations only to help us validate and understand cognitive processes? Even then, these computations would likely have to be personalized due to cognition's subjective nature. As you brought up, we know that “computations” already “work” on a biological basis. The difficulty lies in finding general processes across individuals (whose experiences and, perhaps, biology are unique) and recreating them computationally, not in proving their success. Why would we make such an effort? Are we hoping to generalize the brain into computations in order to design tools that can help reduce cognitive load and ultimately help us survive?

    2. I agree with your points! Like Adebimpe said, there must have been more to the process through which we recall who our third grade teacher was. As the Professor mentioned in the paper, memorisation cannot be generalised to account for all cases. Regarding Adebimpe’s question about what “work” really means, and whether it must be the same exact procedure and mechanism across individuals: if I understand correctly, I think each of us can give different meanings to referents, such that the way we categorise things must differ. Say, for example, a group of students is tested on recalling new vocabulary in a second language; the input and output are the same, but the procedures through which these inputs are processed in each person’s mind could vary, depending on the meaning each person gives each word.

  23. b. In the section regarding Computation and Consciousness, it is mentioned that the mental states which were thought to be physical states are actually not physical but computational states. Professor Harnad explains Zenon’s hardware/software distinction, where the computational level is comparable to the software of a computer whereas the physical level would be comparable to the hardware of a computer. Since software is able to run on computers with different hardware components, Professor Harnad shows that these two are independent of one another.

    While reading this, I found it very interesting to think about, because if a mental state is considered to be a computational state, it would then be possible to have an AI with some type of consciousness. I understand that the software would be comparable to a mental state, but I’m wondering whether that would apply to actual consciousness, where an AI would be able to tell good from bad and have empathy or an understanding of things. If this were the case, would the AI even know it was a computation, or would it be able to know that it is simply a simulation?

  24. Through cognitive science we are trying to generate cognition through reverse engineering, meaning that we are trying to replicate the capacity to do what thinking organisms can do: think and feel. We are not really conscious of how we can do what we can do; we are, however, conscious of doing it. This is why introspection cannot explain how and why we do the things we do.
    Computation (the rule-based manipulation of arbitrary symbols based on their arbitrary shapes) is like software (an algorithm, or program: the functional component) that can be executed by the hardware (the structural component). But if the hardware is independent of the software (the computation), how does this software implementation lead to feelings? How does individuality arise? This is further proof that computation isn't enough, as it doesn't explain the details of cognition, nor consciousness, feelings, inner thoughts: essentially what makes us human.

    ReplyDelete
    Replies
    1. I agree with you, Lola. While computation does pave the way to a better understanding of cognition, it would be wrong to jump to the conclusion that cognition is computation, as computationalists do. Searle tried to refute this with his Chinese Room Argument. However, he reached the wrong conclusion by saying that cognition is not computation at all. We, as cognizing beings, are always manipulating symbols according to algorithms, so it would be wrong to say that we're not performing computation. On the other hand, it would be equally wrong to ignore the parts of cognition that do not strictly adhere to computational principles. So we can say that cognition is not all computation.

      Delete
    2. I agree with what you’ve written. The inclusion of introspection also brings in the other-minds problem. Through introspection, we are only confident in saying that we are thinking or feeling, yet knowing we are thinking and feeling does not answer how we are thinking and feeling. The transfer of this problem onto another being adds another barrier in coming to a conclusion. We are unable to speak for others and say with certainty that they are also thinking and feeling and this is the other-minds problem.

      Delete
  25. In this paper, Prof. Harnad discusses the historical shifts in cognitive science. These include, but are not limited to, how cognitive science came to reject behaviorism, how computation can be a possible but insufficient candidate for explaining cognition, and, lastly, how our linguistic capacity serves as one of the cases against computationalism.

    It was Hebb who first questioned how behaviorism could ever explain the "how" problem in cognitive science, since behaviorism had been avoiding any discussion of the internal processes in the head. Likewise, the input-output description of computation can only explain what machines do, but not how. Computationalists (people who believe cognition is computation) could not explain how we think and reason, nor even the simplest case: how we speak using particular words in a particular language. Language learning in children is not like executing a program on a computer. It differs in that computational rules are at best an analogy for syntax, not for semantics. To master a language, memorizing all the grammatical rules and all the vocabulary is neither realistic nor sufficient. That leads to the famous Chinese Room argument and the symbol grounding problem.
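
    To illustrate the symbol grounding problem with a toy example of my own (a Python sketch, not something from the paper): if every symbol is defined only by other symbols, a purely computational "learner" just goes around in circles and never reaches the things in the world.

        # Hypothetical mini-dictionary: every definition is itself just more symbols.
        DICTIONARY = {
            "zebra": ["horse", "stripes"],
            "horse": ["animal", "mane"],
            "stripes": ["pattern", "lines"],
        }

        def unpack(word, depth=3):
            """Expand a word into its defining words, which are themselves only words."""
            if depth == 0 or word not in DICTIONARY:
                return [word]
            expanded = []
            for w in DICTIONARY[word]:
                expanded.extend(unpack(w, depth - 1))
            return expanded

        print(unpack("zebra"))  # more ungrounded symbols, never an actual zebra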

    ReplyDelete
  26. “Even “individuals” are not “stimuli,” but likewise kinds, detected through their sensorimotor invariants; there are sensorimotor “constancies” to be detected even for a sphere, which almost never casts the identical shadow onto our sensory surfaces twice. ”
    Why does it never cast "the identical shadow"? Since we can recognize kinds by detecting their constancies, there must be some central invariants that we rely on more. This reminds me of the defining-attributes approach to categorization, which suggests that a category is defined by a set of attributes. For instance, an attribute for the category "bird" can be "has wings", and a further one "can fly". These attributes have different priorities; some are crucial and some are trivial. In the "bird" example, an emu has wings but can't fly, and yet an emu is still considered a bird (at least by me). Those crucial attributes resemble the "constancies" mentioned in the quote, though they may vary for more complex categories. As for the sphere, although the light is changing, if you replicated the exact environment, wouldn't you get exactly the same shadow you once had before?
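
    A rough sketch of the defining-attributes idea (my own toy example in Python, with made-up weights, not anything from the readings): if the attributes are weighted, the crucial ones (wings, feathers) dominate and the trivial ones (flight) can be missing, so an emu still comes out as a bird.

        # Hypothetical attribute weights for the category "bird".
        BIRD_WEIGHTS = {"has_wings": 0.4, "has_feathers": 0.4, "lays_eggs": 0.15, "can_fly": 0.05}

        def is_bird(features, threshold=0.6):
            """Sum the weights of the attributes the candidate actually has."""
            score = sum(w for attr, w in BIRD_WEIGHTS.items() if features.get(attr, False))
            return score >= threshold

        emu = {"has_wings": True, "has_feathers": True, "lays_eggs": True, "can_fly": False}
        bat = {"has_wings": True, "has_feathers": False, "lays_eggs": False, "can_fly": True}
        print(is_bird(emu), is_bird(bat))  # True False: the emu still counts as a bird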

    “The answer in the case of syntax had been that we don’t really “learn” it at all; we are born with the rules of universal grammar already in our heads. In contrast, the answer in the case of vocabulary and categories is that we do learn the rules […]”
    This quote makes me think of how the parallel works in animals. Take carnivores: if they were raised entirely by humans, without any specific training, they might be bad at hunting and surviving in the wild, but they will still naturally eat meat (which is what the term "carnivore" stands for). Likewise, cats don't need to be trained to adjust their position when they fall from a height. Some things are built in.

    “It had to be admitted that the processes going on in the head that got the job done did not have to be computational after all; they could be dynamical.”
    I have read the discussion above of the term "dynamical", and yet it still confuses me. From my understanding, "computation" is the process and the dynamical system is where the processing happens, so the relation between the two is a kind of cooperation. Why, in the quote, does the relation seem more like an opposition?

    ReplyDelete
  27. “Skinner regarded such theories of learning as either unnecessary or the province of another discipline (physiology) hence irrelevant”. I would be interested to see an evaluation of the strength of Skinner's "learning = physiology" argument. Does it not imply that everyone learns in exactly the same manner, if all brains are composed of the same materials? To me, this argument falls apart simply on visiting an educational institution such as an elementary school: if it were true, all students would get the same grades and learn at the same rate. Additionally, even if there were no discrepancy in grades or evaluations, if the students' brains were examined they would have to be active in exactly the same way while learning. I am not sure this is true, because of things like learning disabilities. If we define learning by its result, as "learning = information X is gained", then that would contradict the reality of learning disabilities. Those with ADHD or ADD are still capable of learning; their route to gaining information X is just more "complicated" than that of someone without ADD. I would be interested in learning more about Skinner's argument for physiology.

    ReplyDelete
  28. I’m confused as to the relationship between computation and being dynamic. On one hand it’s stated that computations need to be dynamically implemented, and on the other we see that dynamical systems are irrelevant if they can’t be simulated computationally. Moreover, how are we to disregard the physiological homunculus or more generally the physical hardware if the computations are to be dynamically implemented? If computation is implementation-independent whereby the software and hardware can be considered independently, what is the meaning of a “dynamical relationship”? Two examples from the passage that give rise to my question are:

    “Computations need to be dynamically implemented in order to run and to do whatever they do, but that’s not the only computational–dynamical relationship; and it’s not the one we were looking for when we were asking about, for example, mental rotation.”

    The other is where Harnad mentions that Zenon and Fodor's celebrated paper classified neural nets, "like other dynamical systems," as irrelevant if they couldn't be simulated computationally.
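
    One way to see the contrast the passage is drawing (a small Python sketch of my own, not from the paper): the program below simulates a pendulum by updating numbers that describe its dynamics, but nothing in the hardware actually swings. A real pendulum is a dynamical system, whereas the code is only a computational description of one.

        import math

        g, length, dt = 9.81, 1.0, 0.01   # gravity (m/s^2), pendulum length (m), time step (s)
        theta, omega = 0.5, 0.0           # initial angle (rad) and angular velocity (rad/s)

        for _ in range(100):              # simple Euler integration of the pendulum dynamics
            alpha = -(g / length) * math.sin(theta)  # angular acceleration
            omega += alpha * dt
            theta += omega * dt

        print(round(theta, 3))  # a number describing an angle, not an angle of anything swinging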

    ReplyDelete
  29. In this article, Prof. Harnad proposes that the root of the problem is the symbol-grounding problem. From my understanding, to connect the symbols in a symbol system to the things in the world, we need not only the hardware to sense stimuli in the world but also an algorithm to process the sensory information, suggesting that a T3 robot needs both a computational internal structure and a dynamical internal structure. After reading this article, I feel that so far the only agenda set for figuring out who we are is to figure out how to build ourselves, or at least to build something that we as humans could not tell apart from us (behavioral equivalence). However, it makes me wonder what would happen if we actually built a T3 robot. I have no intention of discussing the philosophical implications or the ethics of having such a machine living or taking lectures with us. But I want to point out that even if a T3 robot were standing in front of a group of cognitive scientists who had just taken it out of its mold, they might still have no answer to how this robot could be observed to be sentient, or whether it feels and thinks as we do.

    I keep wondering whether our ability to introspect on our own thoughts, or on any other introspectable inner states, is already the answer humans are given to what cognition is. If we had a T3 robot with the potential to feel and think, assuming it has the same capabilities as humans, then it would have the same introspective ability. It would give "our answer": the robot would say the same things we do when asked how it recalled the face of its third-grade math teacher or how it raised its hand. I don't believe there is an actual answer to "how" and "why." I think that "how" and "why" we can feel or think will be reduced to different kinds of questions once scientists reverse-engineer our brain and we can put everything we know together into an answer. If there is an answer, it might come back in the form of introspection: we feel the other mind the same way we feel ourselves.

    ReplyDelete
  30. This comment has been removed by the author.

    ReplyDelete
