Monday, August 30, 2021

10c. Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling.

Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling. [In special issue: Turing Year 2012] Turing100: Essays in Honour of Centenary Turing Year 2012, Summer Issue.


The "easy" problem of cognitive science is explaining how and why we can do what we can do. The "hard" problem is explaining how and why we feel. Turing's methodology for cognitive science (the Turing Test) is based on doing: Design a model that can do anything a human can do, indistinguishably from a human, to a human, and you have explained cognition. Searle has shown that the successful model cannot be solely computational. Sensory-motor robotic capacities are necessary to ground some, at least, of the model's words, in what the robot can do with the things in the world that the words are about. But even grounding is not enough to guarantee that -- nor to explain how and why -- the model feels (if it does). That problem is much harder to solve (and perhaps insoluble).

47 comments:

  1. WARNING: Don't watch the trailer above of The Imitation Game or you'll spend the next 2 hours watching the whole movie again on Netflix.

    From - Your classmate who is supposed to be finishing up her skywriting for 9B on Monday night at 1:16 AM.

  2. I appreciate how succinctly Professor Harnad ties together much of what we have been discussing/reading in the course. Harnad argues that Turing wasn’t a computationalist, though he did believe anything could be simulated with computation (the Strong C/T Thesis). Turing understood computation as limited in answering the hard problem of cognition, and so he didn’t believe cognition was just computation. In reverse-engineering a robot that can do what we do, Turing is focused on doing-capacity. Turing avoids the problem of feeling in this case and posits that the best we can hope for is that, by simulating all of the doing capacities, the robot might be able to feel. But even so, knowing whether or not we have generated a feeling robot leads directly to the other-minds problem.

    Replies
    1. On reverse engineering, doing-capacity is not "simulated," it is generated, i.e., it is actually, observably, verifiably produced. In contrast, we can only hope that in producing all the doing capacity (whether T3 or T4) we have also, for some unexplained reason, produced feeling. Whether we have done so is the Other-Minds problem. How and why is the Hard Problem.

    2. Was Turing an epiphenomenalist? Believing that feeling will somehow be produced after replicating the physical abilities of what humans can do seems to suggest that the feeling mind will be produced from the doing body/brain. Or is he simply suggesting that feeling and doing are connected, not necessarily that one depends on the other?

    3. From what I understand, I don't think that Turing believed feeling could be produced by merely replicating the physical abilities of what humans can do, because he himself acknowledged that generating the capacity to do does not explain or generate the capacity to feel.

    4. I would agree with this last comment, because I also don’t believe that just because you have all the physical parts of something (a supposedly feeling robot), one can assume that you have somehow created its ability to feel. Referring to the reading in week 11, there are still a lot of unanswered questions about how the physical aspects of the human give rise to feelings such as pain. Furthermore, even if that were completely understood, one still could not be certain that having all these mechanisms will create feeling in the robot, because a feeling isn’t a physical trait but rather an experience. I think that by thinking of it in this manner one can better understand the extent of the hard problem: you are trying to create something that is not physical (even if it may emerge from physical mechanisms). Moreover, because it is not tangible, one can’t show that something or someone else has this capability.

  3. I found that this reading was a nice summary of everything we learned in this course so far. It highlights how even though Turing came up with the Turing test/Imitation Game to compare machines and humans, he does not think it tells us much about consciousness. He is not a computationalist, that is, he does not think cognition is just computation. The TT only assesses the ‘doing’ part of cognition, but not the ‘feeling’ part, which is the hard problem of cognitive science.
    By ‘feeling’, I suppose we mean ‘having subjective experiences’. Will it ever be possible to truly explain how and why cognizing beings experience what they subjectively experience? Probably, to some level, but not entirely. And this might seem a bit fatalistic, but does it matter, anyway, how phenomenal experiences exist? There are different theories of mind out there, illusionism being one of them. It highlights how there’s a difference between how the world really is and how, through various mechanisms, we are made to experience it, and what we need to investigate is how this illusion comes about, rather than whether everyone else experiences the phenomenon. And this is just one theory. So maybe, by exploring different perspectives, we will one day be able to shed some light on this matter.

    Replies
    1. I very much resonate with your reflection that Week 10 brings the course topics under a coherent frame: from Descartes's Cogito and the other-minds problem, to Turing's computation and the advent of cognitive science, to a method for letting us live our lives without too much doubt that other people are cognizant beings, to Searle's challenge to uncritically equating computation with cognition, from which follows the need for symbol grounding. We also discussed "doing capacity," like language and categorization, in which evolution must have played a role; yet "feeling" is the Hard Problem - although for myself I know it exists (returning to the Cogito). I also find it compelling that there is a continuation, perhaps even a progression, of ideas across time, from Descartes's period to now.

    2. Shandra, I resonate with your optimism about the potential to shed light on the problem of feeling or consciousness. I also think, as you said, that this calls for investigating different theories of mind, and bringing in different perspectives on the nature of feeling.
      In this reading, Harnad explains that through Turing, Cognitive Science came about as the project of reverse-engineering humans’ and animals’ capacity to think; yet, in my mind and as a cognitive science student, I can’t help but think of philosophy, linguistics, neuroscience, computer science and psychology when thinking about what cognitive science is. In fact, every time I get asked what cognitive science really is (happens very often!), my answer always mentions these subdisciplines.
      I think this interdisciplinarity and breadth give cognitive science its character, beyond seeking to reverse-engineer thinking — they go hand in hand. Professor Harnad is probably right when he says that the hard problem is insoluble; but I remain optimistic that with a broad perspective on feeling that stretches across domains and fields, we can get closer to solving it.

    3. Hi Shandra, I completely agree that this reading was a nice summary of what we've been discussing so far. This made me think back to when we were originally introduced to the TT and computationalism, and we debated whether Turing actually believed in computationalism or was simply assuming it to be true in order to create the TT.

      Harnad states: “Turing was perfectly aware that generating the capacity to do does not necessarily generate the capacity to feel. He merely pointed out that explaining doing power was the best we could ever expect to do, scientifically, if we wished to explain cognition.” Harnad’s argument that Turing was not a computationalist but believed in the Strong Church-Turing Thesis helps explain the reason for creating the TT--to get as close as possible to reverse-engineering cognition. However, if Turing knew computationalism was false, why would the TT be the best way to reverse engineer cognition? Is it because we have no other current theory of cognition?

    4. Hi, Melody. In Turing’s article, he proposes the ‘skin-of-an-onion’ analogy in replying to Lady Lovelace’s objection. From my understanding, Lady Lovelace suggests that the machine can’t ‘think’ or ‘feel’ since the machine is programmed by humans: the machine can only do what we tell it to do. In reply, Turing argues that an idea presented to a mind that can think and feel, rather than just map inputs to outputs, may ‘give rise to a whole “theory” consisting of secondary, tertiary and more remote ideas.’ From my understanding, Turing suggests that the machine we might be able to design and build to pass the TT might give rise to further theories that explain or demonstrate how and why we can feel (the hard problem). Turing was aware that cognition is not just computation: he suggests that only part of our cognition, ‘the functions of the mind or the brain,’ can be explained in purely mechanical terms, indicating that the TT is not the best way to reverse-engineer cognition, since a machine that passes the TT can only explain how humans can do what they do. However, Turing also points out that approaching the answer to what a real mind is is like stripping off the skins of an onion to find what is underneath. Passing the TT, or building a TT robot (a machine with the same doing-capacity as a human in verbal communication), is just a way to strip off the outermost skin, and he is not sure whether considering only the functional or behavioral aspects of the brain could strip off all the skins: if eventually there is nothing left under the skins, it means that cognition can be completely explained by computation, and the whole mind would be mechanical. I think this is also why Prof. Harnad specifies the hierarchy of Turing tests, from T2 to T4, which, along with the symbol grounding problem, demonstrates what kinds of problems we face when trying to strip off the skins of cognition.

  4. How I feel right now, whether I'm feeling this or that, and why I'm feeling it, is perhaps a question for a psychologist.

    But whether any organism is feeling anything at all is the other-minds problem.

    And how and why is the hard problem.

  5. Hi everyone! Looking for a little clarification on TT and the hard problem. Is this right?

    So, Turing did not include the hard problem in his Turing Test methodology, not because of any specific limitation of computation he had in mind, but rather because he knew that the Turing Test could not explain feeling. Turing knew that his TT methodology only goes as far as explaining what is observable, the easy problem (which is still a lot!). Professor Harnad expands on this fact, arguing that Turing wasn’t considering sensorimotor function because he did not believe that computation and cognition were the same thing.

    Thanks!

    Replies
    1. It seems like you have it right!
      As you note, Harnad believes that Turing was not a computationalist - but this is in relation to cognition. This is because Turing was aware that sensorimotor capabilities would be required in order to ground symbols, obtain meaning, feel understanding, and thus pass the TT. Turing was a computationalist only in the sense that he thought any physical, dynamical structure could be approximately simulated by computation. The difference lies in the fact that although everything can be computationally simulated, the computations doing the simulating are not cognizing - they are not demonstrating cognition.
      Regarding why Turing didn’t include the hard problem: you are also right. Harnad states that Turing didn’t include it because he knew that the TT could not explain feeling - that the capacity to “do what humans do” does not necessarily bring with it the capacity to “feel”. Turing also realized that we may never be able to explain anything beyond “doing” (i.e., the hard problem), which is another reason why he did not attempt to create a TT that assessed feeling.
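
      To make the Strong Church-Turing point above concrete, here is a minimal sketch (my own illustration, not from the reading) of a physical, dynamical system being approximately simulated by computation. The pendulum, the parameters, and the function name are all assumptions chosen for the example; the point is only that the approximation can be made arbitrarily close while the computation itself is not a pendulum, just as a simulated cognizer need not be cognizing.

```python
# Hypothetical example: approximating a damped pendulum's dynamics
# by stepwise computation (Euler integration).
import math

def simulate_pendulum(theta0, omega0, dt=0.001, steps=10000,
                      g=9.81, length=1.0, damping=0.1):
    """Computationally approximate a physical, dynamical process."""
    theta, omega = theta0, omega0
    for _ in range(steps):
        # Angular acceleration from gravity plus a damping term.
        alpha = -(g / length) * math.sin(theta) - damping * omega
        omega += alpha * dt   # update angular velocity
        theta += omega * dt   # update angle
    return theta, omega

# Shrinking dt tightens the approximation of the real dynamics,
# yet nothing here is swinging: simulation is not instantiation.
print(simulate_pendulum(theta0=math.pi / 4, omega0=0.0))
```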

  6. The sort of recap of the course that this paper does reminded me of a thought I had for the last few classes.

    As Prof. Harnad mentioned, the hard problem of cognitive science is figuring out how and why we feel. Particularly for the "why" part (as far as I understood), this problem is made harder by the answers to the easy problem, which is how and why we do what we do. Indeed, cognitive science has managed, or is on its way to managing, to explain how and why we do what we do, with an explanation that does not involve feeling. Then all the causality is "used up": we have a good chain of events that explains why, for example, we remove our hand from a flame, and it does not need to involve feeling. There is then no place in this chain of causality to insert feeling: why did we evolve to feel pain?

    If we really can explain most of our behavior without resorting to feelings, it seems to me that one option we should consider is that feelings do not have a function. I remember listening to a podcast with the neuroscientist Joseph LeDoux, and from what I remember, he was arguing that we didn't evolve to have emotions, that they are just a consequence of non-emotional processes. For example, he talked about the fact that many very simple organisms have a way to get away from danger; despite this, we do not consider them complex enough to have subjective experiences, and therefore we don't think they feel fear. The feeling of fear, then, is not needed to run away from danger; maybe it appeared in more developed animals just because of the increased complexity of the brain.

    It is impossible to prove that something does not have a function; however, I feel that the longer the hard problem remains unsolved, and the more explanations we can build using only mechanisms that don't involve feeling, the more plausible that answer will seem.

    Replies
    1. Great points! I want to touch on your point about the function of feeling. Something that has become clearer in the last couple of weeks of class is that an evolutionary explanation of feeling (and of anything, really) is insufficient and incomplete, but we don’t have much more to rely on at the moment. The fact that “feeling” or “consciousness” is not on the order of organic material, physical forces, etc., seems to complicate the picture, and is part of the reason why (if I understood correctly), if we ever solve the easy problem, there will be no causal forces left over to explain feeling.
      The way I see it, because feeling doesn’t seem to follow the evolution of species on the physiological and neural levels, it must either have emerged randomly, or serve some purpose that is not biological or evolutionary, and maybe cannot be explained by science. I think the gap between our four fundamental forces and consciousness is unbridgeable, because feeling seems to be distinct from “doing” (easy problem) in ways that we cannot explain through physical forces. I know this is a cognitive science class, but I think it’s interesting and important to remain open minded about the fact that science cannot and does not explain everything relating to human functioning (and feeling). Maybe feeling really does have no function; or maybe it simply has a function that can’t be explained in our current framework of thinking.

    2. Louise and Juliette, you both make excellent points regarding the function (or lack thereof) of feeling. Like Juliette says, the last few classes we’ve had have made it more clear that we cannot explain the existence of feeling as dependent on evolution, but once we’ve ruled that out, we do not really have a “backup answer.” To go back to your point about the four fundamental forces: I almost started writing that these forces would exist on Earth even if we as a human race went extinct. However, we would not be able to know if they did still exist, since we wouldn’t be here to see for ourselves! I see it as very much an if-a-tree-falls-in-the-forest kind of situation. That aside, I want to re-emphasize your last point about how science might not be able to explain everything. We’ve been told throughout most of our education that science is supposed to be an objective end-all-be-all answer to everything, but the more I think about problems we have yet to solve (namely the hard problem and the OMP (since I do not subscribe to computationalism)), the more I realize that a lot of things might just be unsolvable in their essence. Feeling having no function obviously sounds like a very unsatisfactory explanation, but I think you’re right in saying that our current framework, or current way we frame science (or explanation in general), is not enough to dissect the purpose of feeling. Louise says it very well in her last point: the longer we stay with the hard problem unsolved, the more plausible it seems that feeling just does not have a purpose.

    3. Louise, your thought that feeling may have no function got me thinking. Perhaps there is no particular reason why we feel. Maybe it really is just a property of minds which emerges from the interplay of all their cognitive functions. To echo Juliette’s points, we must keep our minds open to the possibility that it may just exist as something unexplainable by science in its current form and with current tools. However, I do feel that with rapid progress in machine learning and AI, if feeling is scientifically explainable, it’ll be explained within a few decades.
      Based on my extremely limited but curious viewpoint, these are my 2 favourite far-fetched theories of the hard problem:
      1. Feeling is synonymous with free will—basically, it is anti-causality. Without feeling, everything in the mind could be causally explained with the right tools. As I’ve heard some physicists say: if a giant computer simulation knew the exact position and velocity vectors of all the atoms in the universe, it could accurately predict the future and see back into the distant past (see the toy sketch after this comment). I do not totally agree with this statement, since, if I understood basic quantum physics correctly when it was taught to me, matter exists more as a probability field than in a solid form. However, the way I see it, feeling acts as an anti-causal mechanism. It is, in a sense, a manifestation of quantum randomness. It probably sounds insane, but whatever.
      2. Matter is itself conscious—in other words, panpsychism. By this, I don’t mean that matter actually feels things like we do. To piggyback on theory 1, I mean that the probabilistic properties of matter (such as electrons) are anti-causal—and this anti-causality is the most rudimentary form of consciousness (or feeling, if you prefer). Given that physicists have found new fundamental fields even in recent years (e.g., the Higgs field), I find it interesting to think of the possibility of even more fundamental fields existing, one of which is responsible for this strange property of matter that eventually gave rise to conscious beings.
      I realize that I probably didn’t formulate these in a proper manner, nor did I use the proper terminology. I haven’t done any physics in 3 years, after all. However, I do feel that if there is any hope of solving the hard problem, cognitive scientists are going to need to think outside the box.
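
      As a toy illustration of the Laplace-style claim in theory 1 above (my own sketch, with a made-up one-particle system standing in for "all the atoms in the universe"): in a deterministic, time-reversible simulation, exact knowledge of positions and velocities fixes both the future and the past, so reversing the velocities and stepping forward recovers the initial state.

```python
# Hypothetical example: a unit mass on a unit spring (force = -x),
# integrated with velocity Verlet, which is time-reversible.

def step(state, dt):
    """One velocity-Verlet step; reversing v runs the clock backward."""
    x, v = state
    a = -x
    x_new = x + v * dt + 0.5 * a * dt * dt
    v_new = v + 0.5 * (a + (-x_new)) * dt
    return (x_new, v_new)

state = (1.0, 0.0)              # exact initial conditions
for _ in range(1000):           # predict the future
    state = step(state, 0.01)

state = (state[0], -state[1])   # reverse all velocities
for _ in range(1000):           # "see back into the distant past"
    state = step(state, 0.01)

print(state)  # ~(1.0, -0.0): the initial state, up to float error
```

      Quantum randomness, as theory 1 notes, is exactly what would break this picture: the determinism holds only for the classical toy model.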

  7. I agree with the comments above that this paper was a nice overview of a lot of what we’ve covered. Something in particular it got me thinking about is why the ‘hard problem’ is something we’re interested in at all. Many scientists (and perhaps to an even greater extent analytic philosophers) are so allergic to “feeling” that the moment the word is mentioned they dismiss any related content as not only unimportant, but harmful to their "perfectly objective and unbiased” (insert eye roll) understanding of the world. So for many, including the question of how and why we feel in the study of cognition might seem pointless and misguided. Why do we care about anything other than performance capacity? What we do is what actually has practical relevance, so why not put aside this question of feeling altogether?

    But this would be strange, if you actually think about it. Our original interaction with cognition is our own experience of it: feeling what it feels like to cognize. This is where our primary understanding of cognition comes from. Just as sensorimotor experience grounds our understanding of the external world, being conscious of our own cognition, experiencing/feeling what it is to cognize, is the starting point for any other understanding we might build. As Harnad notes, in reference to Descartes’s famous cogito argument, "I can be absolutely certain that I am cognizing when I am cognizing. I can doubt anything else, including what my cognizing seems to be telling me about the world, but … I cannot doubt that what it feels like right now is what it feels like right now”.

    We have nothing to build an understanding of the world from if not experience (feeling what it feels like to interact with the world and think about it), so why would we ever discount this as an untrustworthy or unimportant aspect of cognition?

    How and why we feel, then, is certainly a worthwhile question to ask, but asking it does come with some interesting implications. For example, as Louise noted above, what if we can’t come up with a functional explanation for feeling? If so, what would the implications be for evolutionary theory, which only has explanatory power with respect to activity that has a functional use? Unless we were satisfied with a ‘non-functional byproduct’ explanation for feeling, we would have to figure out a way to nuance standard evolutionary theory so that it can accommodate an answer to the hard problem.

  8. This paper cleared things up for me. From what I understood, the hard problem is not the other-minds problem, and solving the latter would not solve the former. Even knowing what other people feel, and that they feel, would not give us any insight into how or why we ourselves feel. For example, even if T3/T4 feels, and even if we could "know" it, that still does not explain how or why it feels. Hope this helps anyone else who was confused :)

    Replies
    1. Hi Melissa! Yes, it seems you are right! The other-minds problem is the idea that we cannot know whether any other being feels, since we cannot be inside their minds. The hard problem is different, as it is concerned with how and why we feel. You are right that solving the other-minds problem will not solve the hard problem. If we were to figure out how to be inside another being’s mind and feel what they feel, this would not explain how or why we feel in the first place. Thus, these problems are quite separate. However, solving the other-minds problem might make it a little easier to solve the hard problem. This is because, if we were to solve the other-minds problem, we could use it in debates such as the T5 zombie. If a T5 zombie (a T5 robot that does not feel) truly has no feelings (and we would know this because we had somehow solved the other-minds problem), then this would tell us that feelings are not required to act completely indistinguishably from a human. Thus, while the other-minds problem and the hard problem are two distinct issues, they do impact each other.

    2. Hi Melissa, yes, I agree that solving the other-minds problem would still not solve the hard problem of consciousness. Even if we had a T4 robot, and we "knew" that it was feeling, we still wouldn't know why or how it feels. However, to build the T4 robot, we would have to know the mechanism behind the feeling, and theoretically we would then know how and why the robot feels, because we would have reverse-engineered it. In other words, I think the other-minds problem is a roadblock to solving the hard problem, because even if we were to figure out how to reverse-engineer feeling, we would never know it because of the other-minds problem.

    3. I agree, and as Melody points out, this distinction is important because the two problems are not one and the same. Solving the hard problem is complicated by the other-minds problem, and the other-minds problem is complicated by the hard problem, but the two are their own issues. The hard problem is how and why we experience consciousness. The other-minds problem asks whether other people feel.

    4. I agree with all your points. But I think solving the hard problem (if we can) would also solve the other-minds problem. If we can explain how and why we feel, the explanation applies to everyone. We would then be able to explain, deterministically and rigorously, how and why other people feel (a given input stimulus deterministically leading to a given output feeling), thereby solving the other-minds problem.

  9. This reading felt essentially like a summary of the whole course thus far. We continue to be stuck on this problem (we all understand now, I think, why it is called the “hard” problem of cognitive science): how and why we feel.

    I appreciate the distinction here: “Turing was perfectly aware that generating the capacity to do does not necessarily generate the capacity to feel.”

    This is a rejection of strong equivalency, the idea that anything which, given the same inputs, provides the same outputs, is essentially equivalent, or the same. It is, to me at least, intuitive and obvious that this is the case. But I have certainly heard the opposite argued. What are the stakes of that position, I wonder? Must one attribute small bits of consciousness to anything that has any input and output? Would a TV remote have some modicum of thought within that framework? I believe I’ve heard that kind of theory of the world described as panpsychism: that consciousness is an emergent property of matter, and that therefore everything, from tiny specks of dust to galaxies, has some measurable (and perhaps infinitesimally small) amount of consciousness. I wonder how the computationalists would feel if faced with the idea that the logical conclusion of their thought is an oddly spiritual and universalist one...

    Replies
    1. These questions are hard to answer, but remember that, for the purposes of this course, "consciousness" is a weasel word for feeling. This leads me to believe that we are able to answer the question about the TV remote by saying it doesn't feel and therefore isn't conscious.

  10. The way the Turing Test is described in this reading led me to reflect on how the test verifies whether or not a computer can build a relationship with a human that is indistinguishable from those between humans. I came to this interpretation when I read that a robot that passes the TT must be able to communicate “[…] with as many people as any human is able to communicate with.” Essentially, this would imply that having the same social network (virtual or not) as a regular human being is necessary to pass the TT. Socializing and group dynamics have not been discussed much in this course, and I am curious to know how each T-level relates to them.

    What is the maximum number of people humans/robots can establish correspondences/relationships with? Furthermore, would a T2 robot be able to respond to a group chat with multiple people for its entire lifetime? If a T4 robot attended a party, would it be able to adapt its behaviour and fool an entire group of humans into thinking that, just like them, it can have Dionysian experiences?

    Replies
    1. I think this is a really cool takeaway from the reading, and it highlights some of the nuances of communication. It might be one set of skills that enables a robot to communicate via infinite email exchange with one person, but another to communicate effectively within a group dynamic. As with people, many of these skills are learned and require exposure and practice to really master. Must the T4 robot have a sense of humor? The ability to use sarcasm or be self-deprecating? Maybe this is less important for a robot pretending to be a customer service representative, but I still think it is interesting to consider how this T4 might have to entertain a group at a party, and the skills it would need.

    2. I suppose that if we can accept other people are capable of having Dionysian experiences (getting drunk and acting drunk), we must extend that acceptance to T4 robots as they are no different from us. If by Dionysian "experience" you mean feeling, there is no way to be sure, but we grant the possibility that all other people are feeling without any proof out of politeness, so that would be extended to T4s as well.

    3. This chain made me think about the complicated process of understanding social cues (a department in which even many humans are lacking). I find it interesting to think about a T3 robot with sensorimotor interactions with the external world being trained to correlate the slightest positions of a specific individual's mouth, eyebrows, etc. with different behavioural patterns. Hypothetically, such a robot could maybe even learn to predict this individual's behaviour, based on their facial expressions, better than their own family and friends could. A deep understanding of the people closest to us is a major component of social and emotional relationships--the statement "I know them better than they know themselves" holds a lot of weight. How would we describe the relationship between this T3 robot and the person that they "know" to such a degree? This is a bit of a random tangent, but it got me thinking!

    4. To build off this discussion, I find it may be difficult to answer these questions for machines, for we can hardly answer for ourselves how many people we can communicate with and with whom we can hold a group dynamic. This, to me, is an unsolved easy problem, as we are not exactly sure how and why we can participate in social milieus. Mirror neurons have perhaps suggested some functional correlates of these behaviors, but they by no means capture the full scope and complexity of how and why humans can communicate, especially on such large scales. Perhaps a machine could be capable of “reading” human behavior and internal states through fine-grained machine learning even better than a person’s family and friends can. We could even know the how and why, since we created these machines, but the hard problem, whether the machine “feels” socially adept, would remain occluded to us!

  11. Reading this text (and some of the comments above) got me thinking about what it really means to solve the hard problem. I realized that we have read and discussed many "bad" methods of dealing with the hard problem (e.g., dismissing the importance of feelings, as Caroline mentioned earlier), but I still don't have a good sense of how we should be approaching this problem based on our current knowledge of brains, evolution, and computation.

    At the end of the abstract, Prof. Harnad mentions that the hard problem is perhaps insoluble. I think this is in part because of the lack of an obvious method to make progress towards solving the hard problem. Despite this, I was wondering if there is a way to define progress in this field, or at the very least if there exists some consensus on how to work towards solving the hard problem without running into the pitfalls we've discussed throughout this class with authors like Dennett.

    The only strategies I can think of are to create more consensus around operational definitions of words such as "feeling" and "consciousness", in order to avoid misunderstandings and equivocation in future debates. Another promising avenue would be to increase our knowledge of the brain through empirical research and use this to formulate better hypotheses on *how* we feel. Finally, we could come up with more adaptive problems (as discussed in reading 7a) in hopes that we might come across one that seems to be solvable only by the evolution of sentience in certain (or all) species, to answer the question of *why* we feel. Although none of these methods can single-handedly solve the hard problem, I think they are at least steps in the right direction.

    Replies
    1. Hi Isabelle, I think you did a great job of laying out some more practical methods cognitive scientists can use to further debate and discussion around the hard problem, even if the solubility of the hard problem is questionable at this point in time. I especially agree with the coming to a consensus on operational definitions, as throughout this course (especially the early parts) we saw miscommunication and discrepancies in definitions and terms used, and Professor Harnad has been consistent about hammering into our brains proper definitions... another example of a good step to take in discussing the hard problem, and catcomcon topics as a whole.

  12. Among Turing’s many contributions to cognitive science is the idea that the Turing Test is a way to approach the hard question (how and why we feel) by answering the easy question (how and why we do what we do), demonstrating it with a robot. If we can successfully do this, then we have landed on the causal mechanism for doing-capacity, including cognition and consciousness. If we cannot, then we must accept that our doing capacities are separate from our feeling capacities and that they are not causally linked.
    Harnad believes that Turing was not a computationalist with respect to cognition, in light of the work Searle has done to show that cognition is not computation. Computationalists may have taken Turing’s work at face value (that a Tx passer can operate strictly computationally), but Searle looked more critically at the implications of Turing’s design and demonstrated that he likely had more in mind than computation. However, Turing was a computationalist in the general sense: he thought that everything can be simulated by computation (the Strong Church/Turing Thesis). He mediates the two views by noting that although everything can be demonstrated or modelled strictly computationally, this does not imply that the computations themselves are cognizing (page 3).
    Lastly, Descartes’s Cogito holds that I can be positive that I am feeling (or cognizing) what I am feeling (or cognizing). This is what Searle drew on in his Chinese Room thought experiment.
    With all of this in mind, Turing knew that he might not be able to answer the hard question with his Turing Test, but it was a justified and concrete aim. This connects with what Harnad wrote in his abstract: that the hard problem may be insoluble.

    Replies
    1. I agree with much of what has been said in this thread so far, and I am also optimistic that by using the frameworks and knowledge from various other domains, we may think of multiple ways to tackle the easy and hard problems. Because of the limitations of the Turing Test, should we accept it as the ultimate goal of cognitive science? Or should we be attempting to develop other models of framing the questions? Could we (as cognitive science) do both at the same time?

  13. Something that stood out to me in this paper was the following: “Turing was perfectly aware that generating the capacity to do does not necessarily generate the capacity to feel”. I found this quote very interesting. We have been discussing the Turing Test for much of this course, and whether it can actually tell us something about cognition and help us reverse-engineer the human brain. However, in leaving feeling out of the Turing Test, a crucial part of human cognition goes unexplained. From my understanding of cognition and what we have discussed in this course, I do not think it would be possible to understand human cognition and to reverse-engineer the brain without considering feeling. Of course, this is the “hard” problem of cognition: how and why we feel. And solving the “easy” problem (how and why we can do what we can do) will not solve the hard problem. So while Turing’s contributions were substantial, particularly for the easy problem, they do not do much to solve the hard problem.

  14. The Turing Test is a test of indistinguishability for a lifetime, and given that a language’s semantics and syntax are not independent, a T2 computer cannot really pass the test because it has no way of grounding symbols. Based simply on the outputs of the Chinese Room Argument, Searle’s behaviour appears similar to that of someone who understands Chinese. Nonetheless, this would not be sustainable for a lifetime, as Searle does not understand. Correctly manipulating symbols using rules to produce a given output does not result in understanding. Giving a correct answer does not equal understanding; it feels like something to understand, to cognize. Because of that feeling, it is clear that cognition needs something more than just computation.
    The Turing Test helps us determine whether we have successfully reverse-engineered cognition, explaining how and why we can do what we do. This, however, remains in the realm of the easy problem. The hard problem of cognitive science remains: explaining how and why we feel. Neither the Chinese Room Argument nor the Turing Test helps solve the hard problem. Throughout the course we have hypothesized that a T3 robot would be able to pass the Turing Test; however, we have no way of knowing whether our T3 robot feels or doesn’t, as Searle’s periscope doesn’t extend to it.
    In class we compared this to reverse-engineering organs by mimicking their functioning. In such reverse-engineered models, if we removed some part and tested the model, we would clearly see the function of the missing element. For example, if we removed a valve from an engineered heart, there would be a drastic decrease in pressure, and blood would no longer be pumped. The causal role of that valve is clear: we can explicitly see what happens without it. This week’s papers and videos explain that we have been able to remove feeling from our models, yet we haven’t been able to ascribe a function to it; how, then, could we go about determining the function and origins of feeling?

    Replies
    1. Through the Chinese Room Argument, Searle demonstrated that cognition is not just computation, as producing the output does not amount to understanding, and it feels like something to understand. Harnad describes what is missing as the symbol grounding problem, which goes beyond formal symbol manipulation into sensorimotor interaction with the world. This shows that at least a robot is required (not just a computer, as proposed by computationalism).
      Nonetheless, I don’t think symbol grounding equates to meaning, because it doesn’t acknowledge the role of feeling. The symbol grounding problem doesn’t disprove computationalism for feeling-related reasons; it shows that sensorimotor interactions are needed to achieve the performance capacity (to ground symbols and produce language). In other words, you would need a T3 in order to pass T2. From my understanding, the categorization aspect of cognition would be doable without feeling. This would mean that, in theory, performance capacity would be explainable even if the hard problem is unsolvable. It doesn’t mean, however, that all of cognition would be explained, because we still haven’t explained feeling, which is exactly what distinguishes the easy problem from the hard problem.
      When discussing the Turing Test we do not talk about feelings: reverse-engineering cognition is about getting something to do everything we can do, without accounting for feelings. Even though there is obviously more to it than that (it feels like something), we do not yet have a way to measure feeling, nor would we even know where to start, so we have decided to take it out of the equation for now. But it may just be a matter of time: just as previous generations could not have imagined the scientific discoveries we’ve made today, the hard problem might one day be closer to being solved.

  15. "How does Searle know that he is not understanding Chinese when he is passing the Chinese TT by memorizing and executing the TT-passing computer program? It is because it feels like something to understand Chinese. And the only one who knows for sure whether that feeling (or any feeling at all) is going on is the cognizer"

    I believe that this point is key to how cognition, consciousness and 'feeling' all tie together, showing that 'feeling' is very, very important to the idea of consciousness. One knows that one does not understand something because one does not 'feel' like one understands it, and that can only be felt by the person themselves. I found this reading super helpful in terms of allowing me to understand that intangible aspect of myself that allows me to believe that I have consciousness, that I am not simply a robot taking inputs and giving outputs: the ability to feel, whether that is pain, emotion, or simply the feeling that I understand topics in class.

  16. “The contribution of Descartes' celebrated "Cogito" is that I can be absolutely certain that I am cognizing when I am cognizing. I can doubt anything else, including what my cognizing seems to be telling me about the world, but I can't doubt that I'm cognizing when I'm cognizing. That would be like doubting I'm feeling a toothache when I am feeling a toothache: I can doubt whether the pain is coming from my tooth -- it might be referred pain from my jaw -- I may not even have a tooth, or a mouth, or a body; there may be no outside world, nor any yesterday or tomorrow. But I cannot doubt that what it feels like right now is what it feels like right now.”

    I feel that many key concepts of this course–the hard and easy problems of consciousness, the other-minds problem, etc.–can be understood through Harnad’s description of Descartes's Cogito in this article. I think the quoted paragraph brings up a very interesting discussion of simulation and reality as they have been discussed in this class. Although this question may dive into phenomenological and philosophical territory that this course is not meant to cover, I nevertheless wonder how we can be certain of the distinction between what is real and what is not, reality against simulation. Harnad seems to suggest that the only certain truth is that it feels like something to feel. We are not even certain that our perception of tooth pain is real as we feel it, but we know that it feels like something in that moment. What can other branches of cognitive science tell us about truths beyond this fundamental truth of feeling?

  17. To conclude my skywritings from 10A and 10B, I would like to return to two giants: Descartes and Turing and their positions on the hard problem.

    TURING
    On the hard problem and the question of whether Turing Test-passing machines would feel, Turing states: “According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. … instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.”

    This is Turing’s way of saying that the TT, which measures doing-capacity, is the closest thing to a possible test we have for reverse-engineering cognition, or for an empirical study of cognition. The fact that Turing was aware of this limitation shows that he acknowledged that the causal explanation of doing-capacity does not thereby explain feeling-capacity. In other words, he did not deny the hard problem by providing a mechanism for only the easy problem. Turing just provided the best chance we’ve got.

    Not that it’s directly relevant, but Turing’s position is a bit like Chomsky’s, who, after proposing that UG must be innate, handed the evolutionary problem of UG to evolutionary psychologists. Moreover, I agree with Turing (and Prof. Harnad) on the Hard Problem, because if feeling is truly necessary (adaptive) for us, it would be necessary for passing the TT. An indistinguishable T3, T4, or T5 most likely would have the ability to feel, although we could not directly measure it or know for sure, because of the OMP. Although we call the easy problem “easy,” we must not forget the immensely difficult (but nonetheless still attainable) standard of indistinguishability.

    DESCARTES
    As for Descartes, he provides us with the referent of the hard problem, with certainty. Through the Cogito, we are certain that our mental states are felt states. It is through the Cogito that we are certain there is a hard problem beyond the easy problem, because we (I) know it feels like something to be in any functional state.

    Thus, when Dennett states that “Turing showed us how we could trade in the first-person perspective of Descartes and Kant for the third-person perspective of the natural sciences”, he is wrongly disregarding the first-person perspective, because Cartesian certainty exists only within the first-person perspective. He is also wrong about the TT because, as stated above, Turing did not intend for his test to explain feeling; but knowing that Dennett essentially denies the hard problem, it is understandable why he misreads Turing and his TT.

  18. I found this reading to be a very nice summary of what we’ve learned so far, and it helped me refine my understanding of cognition and the extent to which it is computation. Harnad makes it clear that Turing was not a traditional computationalist: he did not believe computation could explain everything about cognition. Turing was aware that generating the capacity to do does not explain or generate the capacity to feel, which resonates with what I felt while reading the articles in 10a and 10b. I do agree with Turing that any physical, dynamical structure or process can be approximated using computation, and that, in terms of cognition, the best we can do is explain our doing-capacity. Ultimately, I feel that being able to explain and generate how we arrive at certain states is sufficient; I no longer really see the need for an answer to the hard problem. It is enough to know how we get to these states, and the why question may be better answered on a subjective, individual basis.

  19. I quite thoroughly enjoyed Professor Harnad’s tribute to Turing. In his paper, he succinctly tied together the concepts that have been at the root of the discussions garnered most heavily in class. Additionally, he did so in a very kid-sib manner, and I am tempted to send it to my brother, who has been hounding me about what I’m studying this year. As the year draws to a close, this paper emphasizes how important the integration of these topics is. It is difficult to properly appreciate the depth of each topic without understanding the previous one, and the same holds for the next. Only together do they paint the landscape of the root questions surrounding consciousness and cognition. It is impossible to understand Searle’s periscope without knowledge of the Turing Test and its subsequent steps. Along this same thread, it is difficult to understand Descartes’s cogito without the use of Searle’s Chinese Room thought experiment, and Searle’s Chinese Room explains the block posed by the other-minds problem and the potential but fallible remedy in Searle’s periscope.

  20. This paper is a nice summary reviewing the course, and it clearly states what cognitive science is studying.
    Starting from the Turing Test (the best empirical way to study cognition is by examining what a TT candidate is capable of doing – the observable behavior),
    to Searle’s Chinese Room (what computation cannot do – the understanding; cognition cannot be just computation),
    which brings out the symbol grounding problem (how do we get meaning in the first place? Sensorimotor interactions.),
    to categorization (what we can do with the meaning we get from those sensorimotor interactions).

    So far, this discussion has only partially solved the easy problem of cognitive science. As for the hard problem, however computation works and whatever reverse-engineering is capable of, it does not give us an answer to what it means to feel, or why. Turing knew well that the TT could only explain observable behavior, and that it left feeling unexplained. Descartes, on the other hand, argued that everything can be deceiving, but that when we feel something (even the wrong thing) we can be certain that we are feeling/cognizing. This provides the ‘absolute certainty’ behind the hard problem of cognitive science, but the hard problem is yet to be solved.

  21. It is bizarre, but I see how "doing and feeling" links with "showing and telling". When you are showing (e.g., pointing to an apple), something worth telling (e.g., saying “that apple”) is implied in the showing, yet this implication is an interpretation added on top of the showing rather than included in it, and you cannot say that showing carries this interpretation and so is equivalent to telling. The relation of doing and feeling is somehow similar. The way I see it, doing is a product of feeling, though the process behind the product remains unclear, and doing may imply feeling (e.g., laughing from being happy); however, this is based on our own experience (empirically, so to speak, and since most humans process things likewise, we have reached a consensus), and hence it is an interpretation that we add onto what we observe, viz. doing. Following the same logic as "showing and telling", having this we-said implication in doing is not equivalent to doing indicating feeling.

  22. “Today we say that the test had to be conducted via email: Design a system that can communicate by email, as a pen-pal, indistinguishably from a human, to a human, and you have explained cognition. Questions arise: (1) Communicate about what? (2) how long? (3) with how many humans? The answers, of course, are: (1) Communicate about anything that any human can communicate about verbally via email, (2) for a lifetime, and (3) with as many people as any human is able to communicate with.”

    We’ve spoken about GPT-3 in class, and it’s something I’ve worked with adjacently for my own extracurricular projects. There is a start-up, co-founded by a McGill grad, that uses GPT-3 to generate copy for marketing purposes. With the input of a few pieces of information (demographic, purpose, market) and the click of a button on a simple-to-use interface, the AI can generate copy/content that is just as useful as what a copywriting intern could produce at $14/hour. This is text-based generation, much like email would be, and the audience members who visit a website would theoretically never know that it was machine-generated. While this is not necessarily a back-and-forth the way it would be in a T2 TT, GPT-3 has those applications as well. However, would we consider this machine/feature/startup to be cognizing? Would we say it arrived at the answer the same way the intern would have? The intern is T3 and GPT-3 is T2, and while GPT-3 has an intelligence of sorts that is a domain of cognitive science, it does not understand. This really connects with the fourth question that follows the quote above: how? And Searle’s periscope makes it obvious in this example that it does not extend to T3. We have to a certain extent reverse-engineered text-based generation, or some aspect of language (not to a satisfying extent, as seen through our UG/OG lectures), but there is no way to answer the hard problem with the inventions we have today. Perhaps ever, but I suppose that, as the problem has presented itself, we will continue to try.
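
    For concreteness, here is a minimal sketch of how such a copy generator might have wrapped GPT-3 through the OpenAI completions API as it existed in 2021. This is not the startup's actual code; the prompt template, parameter values, and function name are illustrative assumptions.

```python
# Hypothetical example: structured inputs in, marketing copy out,
# via a T2-style text generator (pure symbol manipulation).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_copy(demographic, purpose, market):
    prompt = (
        "Write short marketing copy.\n"
        f"Audience: {demographic}\n"
        f"Purpose: {purpose}\n"
        f"Market: {market}\n"
        "Copy:"
    )
    response = openai.Completion.create(
        engine="davinci",   # the original GPT-3 engine name
        prompt=prompt,
        max_tokens=80,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

# The output may be indistinguishable from an intern's copy, yet the
# system has no grounded idea what a "market" is -- the ungrounded-T2
# point made above.
print(generate_copy("students", "course signup", "Montreal"))
```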

  23. This is sort of adjacent to this reading, but I read a tweet the other day in which someone wrote: “Your job isn’t going to be automated by AI it’s going to be automated by a series of if/else statements don’t flatter yourself.” When I first read that, I found it funny that I immediately thought about this course, and the difference between T2 and T3. It’s true that much of what we do can be successfully executed by a hard-coded robot that is a series of if/else statements (a toy example follows below). However, there are also arguments by successful AI researchers, and more specifically roboticists, that AI will not take over our jobs, or not all of them at least. I’m not entirely sure whether this is the case, but the reason could very much be the human ability to feel. Our sentience is one thing, but our ability to understand, empathize, and feel is the essence of the hard problem, and perhaps it also works in our favour. This makes me think of what happens if we do solve the hard problem. What will happen then?
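
    As a toy illustration of the tweet's point (my own example, not from the reading): a narrow-domain responder that is literally a series of if/else statements can produce plausible T2-style output with no understanding, grounding, or feeling anywhere in it.

```python
# Hypothetical example: hard-coded customer-service replies.
def support_bot(message):
    """Rule-based responses via nothing but if/else."""
    text = message.lower()
    if "refund" in text:
        return "I've started your refund; it should arrive in 5-7 days."
    elif "password" in text:
        return "You can reset your password from the login page."
    elif "hours" in text:
        return "We're open 9-5, Monday to Friday."
    else:
        return "Could you tell me a bit more about the problem?"

print(support_bot("How do I get a refund?"))
```

    Scaled up, such rules can automate a surprising share of verbal work, which is exactly why the tweet says "don't flatter yourself"; but nothing in the code understands a word of it.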

