Monday, August 30, 2021

10a. Dennett, D. (unpublished) The fantasy of first-person science


Extra optional readings:
Harnad, S. (2011) Minds, Brains and Turing. Consciousness Online 3.
Harnad, S. (2014) Animal pain and human pleasure: ethical dilemmas outside the classroom. LSE Impact Blog, June 13, 2014


Dennett, D. (unpublished) The fantasy of first-person science. 
"I find it ironic that while Chalmers has made something of a mission of trying to convince scientists that they must abandon 3rd-person science for 1st-person science, when asked to recommend some avenues to explore, he falls back on the very work that I showcased in my account of how to study human consciousness empirically from the 3rd-person point of view. Moreover, it is telling that none of the work on consciousness that he has mentioned favorably addresses his so-called Hard Problem in any fashion; it is all concerned, quite appropriately, with what he insists on calling the easy problems. First-person science of consciousness is a discipline with no methods, no data, no results, no future, no promise. It will remain a fantasy."
Click here --> Dan Dennett's Video
Note: Use Safari or Firefox to view; does not work on Chrome.

Week 10 overview:





and also this (from week 10 of the very first year this course was given, 2011): 

Reminder: The Turing Test Hierarchy of Reverse Engineering Candidates

t1: a candidate that can do something a human can do

T2: a reverse-engineered candidate that can do anything a human can do verbally, indistinguishably from a human, to a human, for a lifetime

T3: a reverse-engineered candidate that can do anything a human can do verbally as well as robotically, in the external world, indistinguishably from a human, to a human, for a lifetime

T4: a reverse-engineered candidate that can do anything a human can do verbally as well as robotically, in the external world, and also internally (i.e., neurologically), indistinguishably from a human, to a human, for a lifetime

T5: a real human

(The distinction between T4 and T5 is fuzzy because the boundary between synthetic and biological neural function is fuzzy.)  

51 comments:

  1. On Chalmers' definition of a zombie: How do we know that the zombie won't have a phenomenal feel? The zombie is identical "molecule for molecule" to Chalmers, which sounds like a T5 robot, or at least a T4 that is also biologically identical in every part other than the brain. It does not seem logically inconceivable that it would feel, and this can remain an open-ended question. However, I could not see in the imported text where Chalmers states that the "zombie believes he has the evidence," which is Dennett's interpretation on p. 464. Is the belief one of the "internal states" (from Chalmers) that the zombie has? Does the zombie express this belief verbally for us to know that it believes so? If we begin by saying that the zombie does have a belief about his consciousness, I think it gets very confusing.

    ReplyDelete
    Replies
    1. April, you are right that if we read what Chalmers and Dennett say uncritically, we will be confused. So we have to read it critically:

      A “zombie” (a T5 that does not feel) is just a fantasy, like a “zombiverse,” molecule-for-molecule indistinguishable from the real universe, except that in a zombiverse there is no gravitation, but “zavitation.” Physicists would immediately burst out laughing, saying that you’re just re-naming gravity, since there’s no way to distinguish gravitation from zavitation (if the universe and zombiverse are indistinguishable).

      With the “zombie” fantasy it’s the same, except for one thing. There would really be a difference between a T5 zombie and us, namely, feeling. And each of us knows that feeling is real, but the only one who can feel it is the feeler. So it is the other-minds problem that would prevent being able to distinguish a T5 from a T5 zombie [if there could be a T5 zombie at all]. Otherwise, the fantasy of a T5 zombie is exactly the same as the fantasy of a zombiverse.

      This is also where the other-minds problem teams up with the “hard problem,” namely, the fact that we do not have a physical (causal) explanation of how or why any physical system feels at all. We can only reverse-engineer doing-capacity, not feeling-capacity. (That’s why we’re only considering T2 to T4: because fantasizing that there could be a T5 that was a zombie would be like fantasizing that there could be a zombiverse.)

      And that’s why you can’t “believe” that a T5 zombie would not feel, April: That just means you can’t believe the T5 zombie fantasy at all. Because the T5 zombie is defined as a T5 that does not feel. For in a fantasy you can conjure anything into existence by definition, whereas in reverse-engineering (cogsci) we are interested in actual T2’s-T4’s that we could model and explain causally, not ones that we can only fantasize. And Turing reminds us that that means we have to recognize that we can only explain doings, not feelings.

      [By the way, again, by definition, a T5 zombie could not “believe” anything at all: “Belief,” like “consciousness,” is a weasel-word, because it feels like something to believe something. Dan Dennett, too, trades on this weasel-word without realizing it. How?]

      Delete
    2. Professor, to continue your thought on feelings, I believe Dennett dismisses the importance of "feelings" since empirical testing would be difficult to do. The Zombic Hunch example is meant to demonstrate that even if you have an entity that is identical to you in every aspect (except for "feelings"), you wouldn't be able to identify the difference based on their actions, words, or behaviour (since we understand how others "feel" by what they tell us or show us). For example, a T4 robot may display indicators of pain and even inform you that it is in pain. But how can you tell the difference between a robot and a typical human that exhibits the same behaviours and responds in the same way? We have no way of knowing the difference; we must rely on the subjective reports of the person/robot providing them.
      However, I believe we can't just dismiss "experience" and "feelings" since they appear to play a part in consciousness. Why do we have feelings if they aren't vital to consciousness? We may have a system that doesn't have any sensations (like the T4), but would we be cognizant at all? Our bodies would react normally, but would we be able to tell when we were hurt? Would we be able to detect pain? Is it necessary to have feelings in order to know?

      Delete
    3. Just to clarify because I'm a bit confused: Chalmers believes that there's something missing from zombies because they lack feeling and experience, but Dennett is arguing that since the actions, words, and behavior of the zombie would be indistinguishable from a human's, we can "leap" over the Zombic Hunch, as he says (456)?

      Delete
    4. A question in response to Professor Harnad’s post here: In this definition, is a zombie just anything that does not feel? Is it only an insentient T5, or could it be something like a river, flower or rock? I ask this because if a rock counted as a T5, then the answer to whether a zombie can exist would be yes. The problem would be that you can never know for sure if anything is a zombie - you can never know whether something you believe is insentient actually feels. I can imagine the implications of this for reverse-engineering and cognitive science. Does that make sense?

      Delete
    5. Bronwen, I'll attempt to respond to your comment. If I understand correctly, the “Zombic Hunch” is our reliance on intuition to claim there is a difference (in terms of feeling) between us, as humans, and our “zombie twin”. In fact, this zombie is functionally identical to us in every respect — same actions, mannerisms, behaviours, use of language — except for the fact that our zombie twin lacks feeling, whereas we don’t — we know that we ourselves feel (Cogito), but can’t know (other-minds problem) whether the zombie feels. We can only make assumptions about its feelings based on its actions, and rely on our gut feeling to make a claim about its sentience/non-sentience.
      So, I think that the fact that our zombie twin looks and acts and does things like we do is a prerequisite/premise to talking about this “hunch” we have about the distinction between it and us. If the zombie was a rock, it would be much easier to claim its non-sentience — but our zombie is identical to us, making our intuition the only justification for claiming it is not sentient. In other words, there is no actual “evidence” for its lack of sentience besides our hunch.
      To be honest, I had a bit of trouble understanding this reading, so I might be off — Someone please correct me if I misunderstood!

      Delete
    6. “This speech act is curious, and when we set out to interpret it, we have to cast about for a charitable interpretation. How does Chalmers’ justification lie in his “direct evidence”? Although he says the zombie lacks that evidence, nevertheless the zombie believes he has the evidence, just as Chalmers does. Chalmers and his zombie twin are heterophenomenological twins: when we interpret all the data we have, we end up attributing to them exactly the same heterophenomenological worlds. Chalmers fervently believes he himself is not a zombie. The zombie fervently believes he himself is not a zombie. Chalmers believes he gets his justification from his “direct evidence” of his consciousness. So does the zombie, of course.”
      This quotation challenged my understanding of the concept of the zombie T5, because it feels like something to have a belief. So even in this example, would the zombie not have some amount of consciousness? If I am interpreting this correctly, wouldn't it add evidence to the fact that the zombie T5 can’t even exist as a logical possibility? In this case the zombie is unable to escape feeling, because it feels like something to believe yourself to not be a zombie.

      Delete
    7. The zombic narrative seems to be a particular way of dealing with the other-minds problem. If, when interacting with other humans, we make a leap of faith over the chasm of the other-minds problem, what is stopping us from making this same leap in the case of a T5? If it is on the basis of language, nonverbal behaviour, or mindreading, what is different about a T5's ability to produce these various evidences that we accept as proof of the minds of others? A T5 can produce language just as would another human, and it can produce facial expressions and autonomic responses just as would another human. Could there perhaps be a difference in our ability to 'read the minds' of these T5s? If the T5 is behaviourally and structurally indistinguishable from a human, it is difficult to conceive of what would provoke this 'zombification'. This raises the question of what exactly it is that we do when we attribute feelings to others. If we even do !!

      Delete
    8. From my understanding, Dennett's heterophenomenology seems to be quite similar to a kind of neuroscience geared toward understanding subjective experience. He thinks that once cognitive science characterises the patterns and relationships between beliefs and brain activity (or some other physical states), both cognition and feeling will be explained. In a rather more kid-sibly manner, he seems to think that Turing's engineering technique could then solve both the easy and the hard problems.

      I don’t think that the hard problem would be solved by reverse-engineering cognition, because the causal mechanical explanations that would allow cognitive scientists to construct a T3 robot are not congruent with the kind of explanation it would take to solve the hard problem. Even if we are capable of making a fully functional/feeling/passing T3 robot, we will still not be able to know for certain that the robot is actually feeling.

      In this case I would not only disagree with Dennett, but agree with Chalmers. With his philosophical zombie, it would also be impossible to establish that a T5 robot has feelings, for the same reasons as previously mentioned. One would think that an exact copy of one’s self would have all the exact same feelings as one would; however, I don’t believe you would ever be able to know for certain, as this is simply a model/copy trying to simulate you, in which case you can’t be 100% certain that it has been built in the proper way to actually experience what one would.

      Delete
    9. I felt that Dennett really focuses on correlations and on recording "raw data" relating to the subjective experience of the subjects. While interesting, Dennett's approach does not explain how/why we feel. Ultimately, what Dennett is trying to get at is that we can just use reverse-engineering and the TT to solve both the easy and hard problems.

      I disagree with this, and don’t think that the hard problem can be solved using engineering. While we can discover the causal mechanisms behind a T3 robot, these will not give us a completely satisfying or appropriate answer to the hard problem, because we have no metrics to determine whether the robot is actually feeling. The essence of the hard problem that makes this a lackluster solution is that we need to determine why our causal mechanisms give rise to consciousness and feeling while the robot’s don’t.

      Delete
  2. Dennett is arguing for the use of heterophenomenology to study consciousness. This method requires verbal recordings, turned into transcripts, turned into interpretations of the individual's verbal acts. These interpretations are meant to be used as the individual’s beliefs. The distinguishing factor, according to Dennett, is that these interpretations are made neutral - this is because of the false negative/positive issue. Dennett argues that with these neutral interpretations, you can derive what it is like to be the individual.

    I will admit I am having a hard time being convinced that by neutralizing the interpretations, you can know how and why we feel. Dennett argues that heterophenomenology provides us with more than just verbal aspects. But still, all it is doing is making interpretations to describe what is happening. It reminds me of the brain imaging issue where the “what” in terms of “when/where” doesn’t inform the “how/why.” The problem is that Dennett is using heterophenomenology to explain the wrong thing.

    ReplyDelete
    Replies
    1. Yes, "neutral" interpretations (besides being parasitic on the felt understanding in the head of the interpreter) are just weather-reports: squiggles and squoggles. If they are formally correct, then they are interpretable as evidence of what is going on in the head of the one whose verbal reports (and behavior, and neural activity) are being interpreted, and predicted, and explained. That would be useful toward solving the “easy” problem -- of reverse-engineering what is going on in a cognizer’s head that produces what he does and can do, and how and why. But it leaves the “hard” problem – that it feels like something to believe, understand, think or mean something, and how and why – completely untouched.

      Delete
    2. I would also like to add that Dennett does make a point to say that heterophenomenology does not simply include neutralized verbal reports but also unconscious responses that accompany these reports. He writes “But all other such data, all behavioral reactions, visceral reactions, hormonal reactions, and other changes in physically detectable state are included within heterophenomenology. I thought that went without saying, but apparently these additional data are often conveniently overlooked by critics of heterophenomenology.” Could we not say that these sorts of data, like galvanic skin response, hormonal changes, and outward behavior, allude to how it “feels” to be completing these verbal reports? That these sorts of things may be ambiguous, unnamable, lying just beneath the surface of our unconscious, does not mean that they are not felt. We all know what it feels like to be afraid, and this feeling is exhibited externally through objective data like behavior and pupil dilation, GSR, and heart rate. Does the combination of verbal reports and this additional external data provide deeper insight into the hard problem of consciousness, or is it simply illuminating tiny facets of the easy problem, the how and why?

      Delete
  3. When reading Dennett’s paper, I couldn’t help but agree with a lot of his statements. He described heterophenomenology as a way of getting access, as a 3rd party, to a person’s phenomenal experiences. From what I understand of heterophenomenology, it’s really just a description of how research is conducted on human participants in general. And while Chalmers and other people supporting category B might not agree with Dennett’s position on heterophenomenology, I think Dennett is right in saying that it is the best method we have of investigating phenomenal consciousness in other people, because what better method do we have? Isn’t that how research involving human participants in cognitive science occurs usually, by interpreting data from participants?
    Chalmers talks about the zombie, who is identical molecule-for-molecule to Chalmers, and yet Chalmers says that the zombie cannot truly know what it feels like to be himself. How does Chalmers know that, and why does it matter whether the zombie truly experiences phenomenal consciousness or not, if he can claim to be able to do so? Should we not trust the zombie’s statements? Even if we ‘cut up’ the zombie’s brain, we might not be able to know for sure whether he can have phenomenal experiences or not, because, as Chalmers says, he is made up of the exact same ‘stuff’ as Chalmers himself.
    At least Dennett is actually proposing an empirical way of investigating phenomenal consciousness, rather than just accepting the statement that we will never be able to know ‘what-it’s-like’ to be someone else.

    ReplyDelete
    Replies
    1. I tend to agree with you. One point that I found quite convincing is this: "I find it ironic that while Chalmers has made something of a mission of trying to convince scientists that they must abandon 3rd-person science for 1st-person science, when asked to recommend some avenues to explore, he falls back on the very work that I showcased in my account of how to study human consciousness empirically from the 3rd-person point of view." I feel like most of the experiments that contributed to the debate on consciousness are experiments that fit with heterophenomenology (for example, the split-brain experiment). I'm having a hard time trying to think of another way to study consciousness and even feeling without resorting to using humans' accounts of their feelings. It seems to me like the Turing test - it's not the perfect way to study consciousness, but it's the best one we have.

      Delete
    2. Shandra, in response to your line, "I think Dennett is right in saying that it is the best method we have of investigating phenomenal consciousness in other people, because what better method do we have”: I agree that there might not (currently) be a better option. Dennett’s approach of heterophenomenology certainly seems useful in that it looks both at functional properties and subjective experience (feelings), and then sees how they line up. And perhaps this is the best we’ve got (I’m not totally clear, based on Dennett’s descriptions, what exactly Chalmers' 1st-person approach is, so I'm not sure what exactly it can and cannot answer). But at the end of the day, I don’t think heterophenomenology has the resources to explain what Dennett thinks it can: feeling (though, honestly, I'm not even sure this is what he's claiming to be able to explain; he does a lot of talking about beliefs when answering what seem to be questions about feelings, which confused me). All heterophenomenology could give us, even in the best-case scenario, is a complete functional map of what is happening when someone feels. But this falls prey to the same issue Fodor identifies in thinking that brain imaging will tell us the how’s and why’s of subjective conscious experience (feeling). It would still just be the identification of functional correlates. And finding correlates of feeling doesn't tell us *why* there is feeling in the first place, why this second correlated piece is there at all.

      In response to your thought: "At least Dennett is actually proposing an empirical way of investigating phenomenal consciousness, rather than just accepting the statement that we will never be able to know ‘what-it’s-like’ to be someone else”, I would disagree that Dennett is acting as he should. I think dogmatically sticking to a method incapable of finding the answers you’re looking for is worse than admitting that something might be unknowable.

      All that is not to say heterophenomenology is useless, it’s just important to identify what it is and isn’t useful for. In terms of predictive power, it is likely quite useful, but prediction is just about what we’re going to *do*, about performance, so whether heterophenomenology has predictive utility isn’t really related to whether it can answer the “hard problem” of why and how we feel.

      It seems like you see the easy problem as what’s really important (and that’s fair, I think it is more related to practical benefit!), but from my understanding Dennett is claiming he can go further than this. Turing acknowledged that he was just sticking to the easy problem, partly because, as he noted, we don't *really* know whether we can trust any robot's OR other person’s claim to conscious experience; but, at the end of the day, what we mostly care about is how things act, so that's what we should (and, I would add, can) investigate. But Dennett seems to think he can go further (or, maybe, doesn’t recognize the hard problem at all? Again, I found all his talk about beliefs confusing with respect to seeing what he was actually claiming heterophenomenology could investigate). So as much as we should give the idea of heterophenomenology credit where credit is due, I think it’s just as important to recognize its inherent limits, and not say “we might as well try to use this to answer the hard problem because it’s all we have”.

      Delete
    3. I think you make really great points Caroline! Do we, a) accept heterophenomenology as an (unsatisfactory) explanation for the question of consciousness and move on, or b) remain hopeful and hungry for a better conclusion? On one hand, accepting defeat is beneficial for the investigation of other aspects of subjective human experience. On the other, could this abandonment prevent us from ever truly reaching an answer to the hard problem? I agree that Dennett attributes too much power to heterophenomenology. What should be a gracious acceptance of defeat, he actually sees as a victory.
      "What needs to be explained by theory is not the conscious experience but your belief in it."
      It may be true that belief in consciousness is a productive object of study, but it is still missing an understanding of actual consciousness. Dennett's suggestion of this method as an analysis of the hard problem, rather than the easy problem, is (I think) a bit overconfident and may stand in the way of a satisfactory answer.

      Delete
  4. The problem Chalmers brings up about heterophenomenology is that it is not a good enough empirical method of investigating phenomenal consciousness because it crucially misses out on the feeling of being conscious. You and I both know it feels like something to believe something, which is an important quality of our phenomenal consciousness that heterophenomenology misses out on.

    I believe this is a point of tension between Team A and B because although heterophenomenology could be described as the best method of investigating consciousness, it still misses out on the conscious quality of feeling, which makes it incomplete.

    Therefore, it seems disingenuous to prop up heterophenomenology as a scientific understanding of conscious experience when it doesn't attempt to address the quality of feeling, which we all individually acknowledge is central to our phenomenal reality.

    ReplyDelete
    Replies
    1. Although heterophenomenology certainly can't explain why or how we feel conscious, I wouldn't be so quick to say that it misses out on addressing the matter altogether. In the reading, Dennett describes heterophenomenology to include verbal reports of any accessible "convictions, beliefs, attitudes and emotional reactions". As a result, "feeling conscious" would likely be included in this verbal report for many subjects. If my understanding is correct, this means we have to dig a bit deeper to identify the point of tension between A team and B team.

      After reading Dennett's article, I think there are many different points of tension between A team and B team. One of their disagreements revolves around what should count as the data for research in cognitive science. From what I understand, A team argues that the data should include our verbal judgements about what we think and feel, whereas B team argues that our thoughts and feelings themselves (what Dennett refers to as conscious experience) should be the object of study.

      Another interesting point of tension between the A team and the B team relates to the Zombie Hunch that April and prof Harnad discussed earlier, and the list of disagreements between the two teams goes on if anyone else wants to pitch in more examples.

      Delete
  5. When reading that Searle aligned with Team B because he had "a gut intuition" or "can feel it himself", I was reminded of his Chinese Room Argument. By executing the T2 program himself, he cleverly penetrated the other-minds problem: he did not have to access others' feelings, but only had to rely on his own feeling that "he does not understand".

    However, I can't help but wonder to what extent we can trust the intuition brought by our feeling. There are indeed many philosophers who have questioned the validity of "intuition". It just seems a bit vague that he made his statement purely based on his feeling -- "I feel I don't understand" -- and that's it. Indeed, we can only access our own feelings; but how true are our feelings? (Sorry if this goes beyond cog sci's responsibility into philosophy, but I'm just curious :))

    ReplyDelete
    Replies
    1. I think that you bring up an interesting question of how true our felt states are. However, I don’t think that within these arguments, it is necessarily a matter of whether or not our feelings are true. I think that the important thing is that we know what it feels like to understand something and what it feels like to not understand something. For instance, I know what it feels like to read English and understand it, and I know what it feels like to read Greek and not understand anything. In either case, it produces a feeling that I personally can identify, whether or not this feeling is fully ‘true’ or not. This is what Searle was arguing in his Chinese Room Argument, and we have to take his word for it when he says he understands or doesn’t because of the other minds problem. The other minds problem prevents us from knowing how true another person’s thoughts or feelings are, because we cannot experience them ourselves. When it comes to the intuition discussed in this piece, I think that it was not explained in a very clear way. But by my understanding, he is also referring to the feeling of knowing versus not knowing, which is again, a felt state that is protected by the other minds problem.

      Delete
    2. I think these are both interesting points, and I would add to Evelyn's that the main focus of solving the hard problem that we talk about with T2, T3, etc. is execution and reverse engineering. While I agree it definitely feels vague and flighty for Searle to say that "he does not understand", I think this alludes to the idea that in a conscious being, understanding is a fundamental element of proper execution. If the machine is unable to execute this feeling of understanding, then it's not checking all the boxes required of consciousness.

      Delete
    3. Thanks for the replies!

      Upon reading next week's reading and reflecting on previous lectures, I realized what I mentioned here is actually related to Descartes' Cogito -- "I think, therefore I am". Building on this, and without falling into skepticism, we cannot be skeptical about the fact that we feel, or else we would be denying our own existence. Then, just as I can say I feel I understand Chinese, Searle can say he feels he does not understand Chinese -- and these are true for each of us.

      Delete
  6. The arguments presented in this reading seem to rely pretty heavily on the various weasel-words presented to us by prof Harnad last Friday, including consciousness, qualia, subjective states, intentional states, subjectivity, feeling, experience, mind, as well as first- and third-person perspective.

    After doing further research on what weasel-words are and how they are used, I still have some questions as to what this means within the scope of this course and also within cognitive science as a whole.

    My first question is "how do we identify weasel-words?" Which also ties in with the question "Can anyone identify weasel-words, or do we need to have experience and credibility within a given field of study to make this claim?"

    My second question is "What are some other examples of weasel-words that we use frequently in cognitive science?" I'm thinking maybe "intelligence" could be one of them; I feel like it's used in many arguments without being properly defined, but I can't think of any others.

    My third question is: when identifying weasel-words, how do we make sure that we aren't brushing over relevant nuances between certain terms? (For example, someone mentioned in class that there is a distinction to be made between first- and third-person view. Despite sentience being defined in the same way for both of them, it would be counter-productive to ban the use of these words on the basis of them meaning the same thing.) So I guess my question is: where do we draw the line to avoid falling into these traps?

    I'd be curious to know your thoughts on any of these questions; let me know what you think!

    ReplyDelete
    Replies
    1. Hi Isabelle! From my understanding, the examples of weasel-words that Prof. Harnad showed us are words that we don't have a clear definition of, and that therefore mean too many different things to different people. This becomes problematic because when theories or models explain feeling by replacing 'feeling' with a weasel-word, there is always somebody who has a different conception of that word and raises a counterexample based on their own understanding of it. For example, because of his misunderstanding of the concept of 'intentionality' in 3a, Eric raised a stupid question: whether understanding is the process that produces intentionality. Clearly, he did not understand what understanding is, and intentionality doesn't become consciousness unless it is felt. In reply, Prof. Harnad first asked Eric to forget that word, and then pointed out that the other word Eric used in his comment, 'representation', is as vague as 'intentionality'. At this point, Eric could argue, after taking Prof. Harnad's symbol grounding lecture, that what he meant by 'representation' was anything that stands for something other than itself. I might not be answering your question; however, as far as I can see, weasel-words are words that make people spend all their time and energy debating questions that are irrelevant to the topic -- as Dennett himself once said, 'hammering away on the definition and counterexamples instead of looking at the empirical work that could be done.'

    2. Hi Isabelle, I think cognitive science probably has many weasel-words, since many of the topics I've learned in cognitive science classes are based on debates over the definitions of certain words. "Intelligence" would definitely be one of those. Many studies measure human or animal intelligence, but there is no direct measure of intelligence because we cannot agree on a solid definition. Therefore, we use proxies.

      To answer your third question, I think we need to focus on the "why". Why are we researching intelligence? What is the end goal? For instance, maybe we're trying to figure out the extent of the human capacity to perform certain tasks. In that case, the exact definition wouldn't really matter, and we could ignore the problem of weasel-words. For consciousness, we are trying to reverse-engineer the brain and all its cognitive functions. Therefore, what consciousness actually IS (its definition) doesn't really matter as long as we get to it.

  7. This piece made me think a lot about the hard problem and the other-minds problem. In order to contradict the concept of a T5 zombie (an insentient robot indistinguishable from a person), one would need to show that feelings are necessary for a T5. In other words, there would need to be a complete causal explanation for how and why a T5 robot needs to feel. This would essentially be the same explanation as for how and why a human being feels (i.e., the answer to the hard problem!). Obviously, we do not yet have an answer to the hard problem, and therefore we cannot rule out the idea that a T5 zombie could exist. However, it is fair to say that we can never know for sure whether such a robot exists, because we can never know for sure that something is not feeling. This is true even for something like a rock or a river. I am almost totally sure that a rock or a river does not have sentience the way I do, but I can never know for certain (the other-minds problem). In the same way, you could be almost certain that a T5 robot is a zombie, but you can't know for sure. This is where I get a bit confused. To clarify: it seems like we are at a bit of an impasse here; we can't know for sure whether a T5 zombie exists or not. We can only say it may exist. Is that right?

    Replies
    1. You bring up some great points about the limitations of T5 robots and how these relate to the hard problem! It is indeed impossible to tell if a T5 robot can actually feel, despite having the same neural circuitry, organs and physical interactions as humans. As much as we would like to answer the hard problem (or, at least, get close to doing so), reverse-engineering might not be able to provide us with more clues than studying another human because of the other minds problem. Reverse-engineering cognition has its limits and I wonder if reaching T5 is useless given our utter inability to know if other humans, animals and objects cognize as we ourselves do. That being said, I do believe that passing the Turing Test up to a certain level (T3?) has its usefulness when it comes to solving the easy problem of cognition. I am curious to know if anyone else has some thoughts on the relevance of reaching T5 and on if other approaches to solving the hard problem should be considered.
      As for your question, a T5 zombie has yet to be conceived, but I believe that it could be, in theory. The text makes it clear that a T5 zombie lacks the extra component that makes beings "feel", which is what would ultimately solve the hard problem. Because a T5 zombie does not reach the same level of sentience/feeling as a human, there should be no issue with conceiving one, since generating feelings isn't necessary.

    2. Bronwen, I think you do a great job of summarizing how the hard problem and the OMP are related to this week’s reading. Throughout this class, I have found that a lot of the concepts we discuss hinge on “yet.” We have not “yet” found a solution to the hard problem, we cannot “yet” know what another organism feels (or whether it feels at all, for that matter), and so on. Similarly, a lot of the same concepts rely on “may” or “might.” Like you said, “we can only say [a T5 zombie] /may/ exist.” We can think of it as existing, but we cannot know for sure because, once again, we can never be entirely sure whether or not another thing is feeling. I am still trying to figure out if this is also a “yet” question, and I am leaning towards it being a conditional “yet”: if we ever solve the hard problem or the OMP, we would be more likely to know about the existence of a T5 zombie. For now, however, we still do not know. As for your point, Camille, I agree that it seems a little useless to dream up a T5 robot knowing that, due to the OMP, we would not be able to verify whether it can or cannot feel. Sure, a T5 zombie could “in theory” exist, but so could literally anything, no? Otherwise (from what I understand), all the things that could not in theory exist would be considered unthinkable thoughts. I am not sure if this skywriting has made much sense to anyone else, but I have found it helpful for gathering my thoughts on the subjects at hand.

  8. From what I understand here, the distinction between a T5 robot and a T5 zombie robot is very similar to the distinction we drew between a T2 and a T3 robot.

    We previously concluded that it would not be possible to truly pass T2 without T3. That is because sensorimotor input is required to ground what you say, and therefore to truly understand and interact with what is being said. Even if a T2 robot could trick us into thinking it passes T2, it would not have achieved cognition, since it does not possess what is required to ground what it is saying.

    In the case of the T5 zombie, we are looking at a T5 robot that does not feel. Here we are talking about a theoretical T5, since we do not even know whether it is possible for a T5 not to feel. This question is interesting, because you need to understand why we feel (the hard problem) to know whether a T5 zombie is possible. -elyass

    Replies
    1. Hi Elyass, I think that this is a very interesting comparison between the T2/T3 distinction and the T5 robot/T5 zombie distinction. I think that the symbol grounding problem can, in a way, be applied to a T5 robot, because I would think that some states and forms of understanding cannot be achieved without feeling. For example, I learned in another class about people who have a disorder that prevents them from feeling physical pain. This condition can be fatal, as these people cannot feel when they have an internal injury, or any of the other typical aches and pains that alert us that something isn’t right. This is an example involving physical pain, but I would imagine there are examples of how other feelings impact us as humans and animals. Without feeling, I imagine that T5 zombies would be lacking something, and so would not truly be indistinguishable from humans.

  9. I have an additional comment on this reading. Would it be correct to think that the “beliefs” the author refers to here are essentially the outcomes of feelings? The text goes on about what differentiates “zombies” from actual humans and explains that experiences are largely what lead to the creation of unique human beliefs. I understand that the terminology used here is different from what we are using (and was criticized in reading 10b), but I want to make sure I can apply and compare some of the ideas in this text to the rest of the course materials.

  10. Throughout the course, we discussed how there are two things we can be certain of: first, the truths of mathematics, and second, the fact that we feel the way we do. “Heterophenomenology” (or T3) does not solve the hard problem, nor does it prove that there is no feeling. It simply predicts what people feel from the brain and correlated behaviours (including verbal ones). Nonetheless, it gives value to human subjective experiences, acknowledging them as starting points and attempting to expand on them with more objective measures. If these subjective experiences are among the few things we can be certain about, it is important to consider them in some capacity.

    It should be noted that self-reports of subjective experiences are limited by our capacity to relate our feelings to others as we experience them, our ability to express them, and the limits of introspection (as demonstrated by behaviourism), such that self-report doesn’t get far in explaining the easy problem, let alone the hard one. The hard problem attempts to explain feeling causally, i.e., why and how we feel the way we do; it isn’t really about a 1st- or 3rd-person view, but a ‘felt’ view. The objective ‘behavioural’ measures that heterophenomenology (or T3) wants to apply will not be able to answer the hard problem. As with mirror neurons and neuroimaging, these methods simply provide correlational data on the fundamental questions, nothing really concrete or definitive, nor do they provide answers to the questions we are really interested in. In essence, I think heterophenomenology does no more for the hard problem than brain imaging does for the easy one. As expected, we are limited by the other-minds problem (i.e., we can never know for sure how and why an organism feels), and we rely on correlational techniques when attempting to solve it, which prevents us from ever solving it.

    Replies
    1. Hi Lola! I totally agree with your points. I think that it is important to gather correlational data to determine how human beings are aware of their own feelings, as well as to neutralize this data to the largest extent possible by gathering data relating to feelings that are not within conscious awareness. I do think this is a clever way of gathering data of this sort, but, as has been said before, it misses some crucial points. I agree that heterophenomenology has important limitations, as it by nature ignores the other-minds problem and the hard problem. If we don’t try to understand why or how we feel, focusing instead on what we feel (the easy part), we oversimplify the true nature of the question we have set out to answer. This makes the data unhelpful in answering the hard problem, like you said, mirroring how neuroimaging cannot exactly help to answer the easy problem.
      This reading also brought to my mind criticisms relating to behaviorism, and I agree with your points there as well. There are limits to relying on what people say they feel and to trying to infer what they feel from their physiology, and these limitations all trace back to the other-minds problem: the fact that all we know to be true are our own feelings. Thus, trying to determine the feelings of others could be likened to making sense out of squiggles and squoggles. We have no way to know for sure what the other is experiencing (this also links to the symbol grounding problem), and the measures taken by heterophenomenology are only as useful as we believe them to be in revealing what, how, or why people feel.

  11. "But if we keep looking, we will also presumably find yet other areas of the visual system that only come into synchrony after you’ve noticed.” (In reference to change blindness experiments)
    This statement is interesting to me, because the effect of “noticing” the change seems akin to the feeling of understanding. Does something in our brain synchronize when we have that “aha” moment, when things finally seem to fall into place? Or is it just the feeling of understanding that arises in this moment? Dennett appears to be claiming that the brain somehow changes in response to “noticing” that a change has occurred. Although there are apparently fMRI studies that confirm this, aren’t those also based on introspective reports of when the person *consciously* identifies that there is a change? It also appears to me that a lot of heterophenomenology relies on interpretations of introspection, which Dennett himself says could be wrong (“You should admit that some… of your beliefs about your conscious experiences might be wrong”). So, in sum, this argument seems to be lacking evidence that we are able to accurately understand what the experience of another is like through their (possibly incorrect) verbal descriptions.

  12. Dennett’s discussion of heterophenomenology reminded me of the distinction between actual things and computational simulations of those things. In class, we discussed the airplane example: a simulated airplane has the same formal properties as an actual airplane, but cannot take real passengers from Montreal to Chicago. It cannot actually land in Chicago the way a real airplane can. Similarly, in heterophenomenology, which takes the 3rd-person scientific method, raw data is extracted from a subject to construct his/her heterophenomenological world. However, I think this runs into the same problems as simulations, because the construction of the world is not the actual world itself. The constructed world would be like the squiggles and squoggles of the computer simulation. However, Dennett’s argument is convincing when he asks: “Is there anything about your experience of this motion capture phenomenon that is not explorable by heterophenomenology? I’d like to know what (461)”, since I’m not really sure how to describe the thing that’s missing besides just saying “consciousness”.

    Replies
    1. This is an interesting thought and a good comparison! I wanted to add that in creating this heterophenomenological world, interpreted data is used instead of raw data. So in this sense, the interpreted data might add even more to the representation of the world, separate from the world itself. However, I was wondering if we can truly think of the interpreted data as a representation that can be reduced to squiggles and squoggles. To me, it seems like the interpreted data still relies on the raw data and cannot be completely reduced to meaningless symbols.

  13. This reading brought to mind a few questions I have about the simplification involved in the attempt to understand what consciousness is, and how and why it is brought about. I am thinking in terms of individual differences in both the subjective and objective experience of feeling. Surely we can intuit (and I do recognize the danger of relying on intuitions) that there are individual differences in what we feel. This is subject to the limitations of the other-minds problem, of course, but as a thought experiment, say it were a fact that we all differ in what we feel, in how much we feel, and in the way we feel. In that case, what counts as human consciousness?

  14. Dennett's comparison of the "Zombie Hunch", as he calls it, to other intuitions people have had in the past, such as thinking the Earth doesn't revolve around the Sun, is quite absurd. This is because the intuition that you and I can feel is a feeling itself, while the archaic feeling that the Sun revolves around the Earth is not about feeling but a hunch about physical objects that exist outside ourselves. The very fact that the “Zombie Hunch” is a feeling means it (and everything else that we experience/feel) must be real. Of course, this depends on your definition of “real”. I won’t try to define what I mean by “real” here, because it would take quite a while. Feeling is the lens through which humans and animals see the world. To say that it is only an illusion is meaningless, since I can only say that anything is real based on my subjective experience. If feeling isn’t real, then what is?


  15. One example that Dennett uses in his article to illustrate what needs to be explained, the etiology of a false belief, confused me when I was reading it. Dennett proposes the example of people's visual fields: most people believe that their whole visual field is uniform in detail. As neuroscientists explain, this is due to the physiology of our visual perception system, which is neither a photometer (the perceived lightness of a patch is not directly related to its luminance) nor a spectroradiometer (our perceived colour does not change proportionally with the wavelength of light; instead, we use the pattern of cone activity to see colour). From my understanding, this example aims to clarify what the 'etiology of the false belief' is, and to further reveal that there is something psychological going on. However, as far as I can see, this gap between what we believe we see in our visual field and what we actually perceive is just an example of the mapping between stimuli and sensations not being one-to-one. It could be completely explained by answering the easy problem, that is, how and why our 'hardware' is doing what it is doing. In light of this, I wonder whether the example Dennett raises here is still appropriate.

  16. What I like most about Dennett's writing is his portrayal of the "Zombie Hunch", which acts as a central counterpoint to the idea that Turing has found a way to determine and theoretically recreate 'thinking'; the "Zombie Hunch" is simply our natural intuition that we ourselves have consciousness.
    Dennett attacks the legitimacy of the Zombie Hunch by targeting the idea of 'direct evidence' as the difference between myself and a zombie. The zombie example allows for an easy illustration that the zombie can procure "direct evidence" (as Chalmers describes it) to the exact same degree that I can. I am still considering whether I agree with Dennett, but I think he raises a good argument against some innate beliefs that I (and most of us) have held for as long as I can remember.

  17. "As I like to put it, we are robots made of robots— we’re each composed of some few trillion robotic cells, each one as mindless as the molecules they’re composed of, but working together in a gigantic team that creates all the action that occurs in a conscious agent"

    I found this another interesting idea that I had not yet considered in this course: at some point, our individual cells do not have consciousness (it can be argued whether or not they 'feel'); rather, it takes a collection of cells to have what we consider consciousness. Additionally, maybe I am completely off with this, but it is funny how this draws parallels with symbol grounding... to understand one term's definition, you need to understand others, and eventually 'ground' the symbols by matching them with something tangible. Maybe the brain is 'grounded' in a similar way, by clusters of neurons, with those clusters grounded by individual cells. But where did we lose 'consciousness' within those layers? Perhaps this is another way to look at the hard problem of consciousness.

  18. Although feeling is the hard problem, I was wondering which problem unfelt states are a part of. The hard problem, as Prof. Harnad defined it, targets only felt states, or feelings, and never mentions unfelt states. I assume unfelt states are part of the easy problem, which would mean reverse-engineering how and why there are unfelt states. In 10b, Prof. Harnad indeed mentioned that there are "Plenty of unfelt internal functions, from temperature regulation to perhaps semantic priming and blindsight". So are all unfelt states functions?

    Replies
    1. Zilong, I think you answer your question correctly by saying that unfelt states can be reverse-engineered to help us understand how and why these states exist, which would essentially mean they're part of what the easy problem aims to solve. As you mention, unfelt states include innate internal functions such as blindsight and other homeostatic processes; as these are all phenomena that can be reliably measured objectively, we can experimentally simulate the conditions under which they occur, which helps us answer when they occur, and thus how and why. However, I'm not sure I would agree that all unfelt states are functions, especially not physiological functions like those mentioned above. I think it's possible that unfelt states can be emotional or mental as well. I may be wrong, but wouldn't it be possible to subconsciously feel something that you're not able to correctly identify in the present moment? For example, as someone who suffers from borderline personality disorder, I often misidentify my emotions as I'm going through them, even though the subconscious emotion (what I later realize I was truly feeling) is the prominent driver of my actions and decisions.

  19. I would like to use my skywritings for Week 10 to attempt to summarize the different perspectives on the “Hard Problem” from Descartes, Turing and Dennett (TEAM A) and Prof. Harnad (TEAM B).

    Starting with Dennett on Team A: he believes that there is nothing more to explain once the easy problem is solved. How do we know this?

    His proposed way of empirically studying “consciousness”, heterophenomenology, consists of gathering data such as verbal reports and behavioral/physiological correlates to determine the first-person subjective experience from a third-person point of view. Dennett equates such behavioral/functional states with felt states, so heterophenomenology is his behaviorist approach to “do justice to the most private and ineffable subjective experiences”.

    Dennett’s position is further illustrated by his response to the Zombie Hunch. Chalmers, from Team B, states that a being identical to him molecule for molecule (T5) would only be identical functionally, and would be able to report its internal states without feeling. Dennett responds that if such a “zombie” is capable of reporting its internal states, that serves as justification for its consciousness.

    Dennett’s approach to the hard problem of how and why we feel does not go beyond the easy problem, because he simply does not believe there is anything more. Thus, the methodology of heterophenomenology is merely a proper subset of the Turing Test (T4, if neural measures are included). A subset of the Turing Test can only serve as part of the causal explanation of our doing-capacity.

    Continued on 10B…

  20. I agree with many of Dennett’s viewpoints surrounding the phenomenon of human consciousness and the extent to which we can use reverse-engineering to understand not just what and how people think but, more importantly, why they think and feel what they do. While I agree that Turing gives us a way to answer Kant’s question, I don’t entirely agree with his postulation that we are “able to trade in the first-person perspective of Descartes and Kant for the third-person perspective of natural sciences and answer all the questions without philosophically significant residue”. I am still not convinced that the third-person perspective of the natural sciences can objectively provide insight into the philosophically significant question of why we feel, as opposed to illuminating the different facets of how we come to these states of feeling and believing certain things. For this reason, I can understand why critics of heterophenomenology tend to ignore the other data, such as behavioural and visceral reactions; to me, these still don’t really penetrate the hard problem but merely dance around the questions of how we get to these states, leaving the “why” untouched.

  21. “Chalmers fervently believes he himself is not a zombie. The zombie fervently believes he himself is not a zombie.”
    Does the fact that the zombie fervently believes he is not a zombie in fact prove that he is not a zombie? The qualification for “zombie” is that it does not feel. However, believing that you are (or are not) something is a feeling in itself, so doesn’t this mean that the zombie is not a zombie? Essentially, if I said “I feel like I am not a zombie, therefore I am not a zombie”, and a “zombie” said the same thing, the two scenarios would be identical in terms of fact.
    Also, consider the reverse: if the zombie WAS a zombie, would its being able to say “I know I am a zombie based on the fact that I feel as if I am a zombie” also disprove this argument? Or, in this case, would a zombie not be able to say anything at all about what it is?

    Replies
    1. I'm not sure I agree with your statement that if you said you feel like you're not a zombie, and a real zombie said the same thing, that the two scenarios are necessarily identical to each other in terms of fact, because the reality still would be that you're the only one who truly is not a zombie (assuming the "zombie" would have the biological makeup of a real zombie).

    2. I also found some logical contradictions here. Since zombies don’t feel, they couldn’t have beliefs (I agree with you that belief is a form/result of feeling), and they couldn’t have internal states with content, since internal states with content are felt states. A zombie, by definition, cannot “fervently believe he himself is not a zombie”; it can only act as though it does, making people (non-zombies) feel that it fervently believes it. But deep down, it does not feel, and hence does not believe.
      The fact that Chalmers allows zombies to have internal states with content is indeed contradictory.

  22. Dennett proposes the method of ‘heterophenomenology’ (“the neutral path leading from objective physical science and its insistence on the third-person point of view, to a method of phenomenological description that can (in principle) do justice to the most private and ineffable subjective experiences, while never abandoning the methodological principles of science.” (p.456)) to try to study the subjective experience of consciousness and feeling empirically. Along with Dennett, groups of people are trying to solve the hard problem of consciousness from different approaches. Team A (Dennett) argues that by collecting behavioral and physiological data, and by using interviews and reports to get at first-person subjective experience, they can draw conclusions about feeling states, and that once we solve the easy problem, there is nothing left for the hard problem. On the other hand, Team B argues from the ‘gut intuition’ that I, as a person, can know that I am feeling, just as Searle in his Chinese Room is able to report, and be certain, that he does not understand Chinese. This extensive discussion about the hard problem of cognitive science really got me thinking about what exactly could serve as an indicator for judging whether other people are ‘conscious’ or not, and about how exactly introspection can be useful and reliable.

  23. In this paper, Dennett introduces heterophenomenology as the methodology of studying human consciousness empirically from the 3rd-person point of view, and he criticizes Chalmers's first-person science of consciousness as a discipline with no methods. Dennett describes heterophenomenology to be “the neutral path leading from objective physical science and its insistence on the third-person point of view, to a method of phenomenological description that can (in principle) do justice to the most private and ineffable subjective experiences, while never abandoning the methodological principles of science. “
    I think Dennett’s heterophenomenological method has its merits. After all, there are not many methods available to study “consciousness”. Dennett does provide an empirical methodology for studying consciousness, and it might be the best methodology we have right now if we want to study consciousness empirically. We cannot simply discard empirical methods to jump into some first-person science; that might turn the whole discipline into mere speculation.
    However, with that said, heterophenomenology is not enough to explain consciousness, as Dennett had hoped. Dennett’s whole effort here, with his heterophenomenology, is still confined to the easy problem. By restricting himself to what he calls “3rd-person science”, Dennett hopes to solve the whole enigma of “consciousness” within the easy problem, that is, by answering how and why we can do what we can do. But being conscious feels like something. The point is that we need an explanation of why we, as conscious beings, feel. By limiting himself to the scope of the easy problem, the problem of why we feel is necessarily left out of the explanation.


PSYC 538 Syllabus

Categorization, Communication and Consciousness 2021 Time : FRIDAYS 11:35-2:25  Place : ZOOM Instructors : Stevan Harnad & Fernanda Pere...