Monday, August 30, 2021

2b. Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence

Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (Eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer.


This is Turing's classic paper, with every passage quoted and commented to highlight what Turing said, might have meant, or should have meant. The paper was equivocal about whether the full robotic test was intended or only the email/pen-pal test; whether all candidates are eligible or only computers; and whether the criterion for passing is really total, lifelong equivalence and indistinguishability, or merely fooling enough people enough of the time. Once these uncertainties are resolved, Turing's Test remains cognitive science's rightful (and sole) empirical criterion today.

68 comments:

  1. Hi all: Please read the commentaries and replies before posting your own, to see whether someone has made or replied to your own point already.

    ReplyDelete
  2. "... if Turing's indistinguishability-criterion is to have any empirical substance, the performance of the machine must be totally indistinguishable from that of a human being -- to anyone and everyone, for a lifetime (Harnad 1989)."
    I'm just now grasping how large a task this truly is. Some humans are incredibly observant and can intuitively sense the energies and emotions of people with striking accuracy. It would be an incredible feat to create a machine that could blend in seamlessly forever, thus “fooling” us out of what we understand to be our own innate intuition. Furthermore, and I’m sure this requires extended reflection on my part, would creating a machine that passes the Turing Test suggest that the answers to the mysteries of metaphysics (i.e. intuition or “gut feelings”, feelings of deja vu, profoundly religious or spiritual experiences, etc.) are simpler than some of us think? Or could this machine reverse-engineer answers to these questions by using something like quantum physics? I understand that this is not within the scope of Turing’s paper or this annotation of the paper, but this quote brought to my attention how these concepts could theoretically inform one another. Additionally, these are some examples of the components of cognition that currently exist largely below the level of our own consciousness, and I am interested in reflecting and learning more about how and whether they fit within the aims of the Turing Test.

    ReplyDelete
    Replies
    1. I would like to take you to task on one thing. When you ask, "would creating a machine that passes the Turing Test suggest that the answers to the mysteries of metaphysics (i.e. intuition or “gut feelings”, feelings of deja vu, profoundly religious or spiritual experiences, etc.) are simpler than some of us think?", maybe this answers questions about intuition, gut feelings and deja vu, but I don't think it touches on religious/spiritual experiences.

      I'm only saying this and drawing the difference here because there's very promising and interesting research happening right now on religious/spiritual experiences (in humans), particularly as they relate to psychedelics. I know this sounds crazy, but I will point you to this study https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3050654/

      This is only one of the many new and ongoing studies discovering the link between psychedelics and mystical experiences. There are frequent cases of self-described atheists having these religious/spiritual experiences. If we accept that these experiences are real and can reliably be occasioned by psychedelics, I don't think creating a machine to pass the Turing Test answers any questions about the nature of these experiences. I also don't think the answer is simpler than we think.

      Delete
    2. The TT is not about "fooling" anyone. It is about really reverse-engineering human cognitive capacity. Text (T2) or interact with (T3) Eric in class if you want to set your intuitions about what this is really about.

      The task is to reverse-engineer the mind. Sci-fi, psychedelics, mindfulness training and QM do not help.

      Delete
    3. If historically under-researched phenomena such as meditation, psychedelic experiences, and mysticism can help us to better understand the workings of the mind (which is certainly possible, given the mystery underlying the mental mechanisms behind them), I don't see why they could not possibly help us reverse-engineer the mind. Of course, I say this as an undergraduate student who is very curious about these unexplained phenomena and not as a professor with years of research under my belt, so if I'm missing something important, please let me know. But I believe that these subjects should not be ruled out as being potentially helpful in furthering scientific understanding of the mind--and therefore, potentially reverse-engineering the mind. Especially with how unusually the mind acts and cognizes during such experiences. In order to accurately reverse-engineer the human mind, would the engineered mind not need to have the capacity to act as a human mind would act, and therefore have the ability to experience these mysterious phenomena?

      Delete
    4. Milo, please explain what you understand to be the reverse-engineering of cognitive performance capacity (i.e., finding and testing the causal mechanism that would explain how and why organisms can do what they can do) and then give us some idea of how you think mysticism could help us do it. Meditation and mysticism are fine (I too tried them when I was your age, though now I have some doubts): here is a talk from a few years ago, when another student in this course arranged a conference at McGill with a Buddhist monk on this question: “The Other-Minds Problem, or Orgasms (Mine) Versus Agony (Others)”.

      Delete
  3. This skywriting is partly in response to my previous paragraph in 2a. I do agree that referring to Turing’s work as the “imitation game” is misleading and undermines how the task is meant to assess the capabilities and T-level of a computer and its AI. I must highlight, however, that Turing’s efforts to illustrate his notions of AI using a popular party game and through metaphors made his work much more accessible, a rarity in scientific writing, despite the prior knowledge one must have to fully comprehend it. If I have understood correctly, the goal in assessing the level of intelligence of computers and the desire to produce T2+ robots is to further our understanding of human cognition via reverse-engineering (i.e. understanding what kind of machine humans are). I was perhaps too optimistic in thinking that we had reached T2, as Turing had predicted, and Harnad’s text makes the valid point that we are still not capable of building machines that can text back and forth with anyone about anything for a lifetime. The computers we are familiar with are indeed chatbots and have multiple limitations. They are incredibly easy to fool and are specialized for specific topics and audiences. Conversations with them are rarely of interest and they can barely simulate human conversations, let alone display all of their properties. Thinking further about the current limitations of AI, I am realizing that we are far from reaching T2+ levels and that most of Turing’s text remains theoretical. How long will it take until we perfect our chatbots and reach T2? What are the limitations to reaching T3 and T4, and will we ever reach these levels?

    ReplyDelete
    Replies
    1. You are right that the party-game was effective in popularizing the TT.

      But the result was also that almost everyone misunderstood what the TT really was about, and for!

      Cogsci is still far from reverse-engineering the cognitive capacity of the brain, but there's no reason to believe it's not possible. After all, it is about the capacity of a biological organ. And although organisms can do many kinds of things, it's not an infinite number of kinds of things (although language -- whose expressive power is rather like that of the Strong Church-Turing Thesis [How?] -- comes close, and that's why Turing started with T2).

      And think also about the relation between T2 (i.e., verbal) capacities and T3 (sensorimotor + verbal) capacities.

      ["Stevan Says" T2 could only be passed by a T3 robot. [Why might that be true?]

      But the TT is only about doing (the "easy" problem), whether the doing of the mouth (T2), the body (T3), or the brain (T4). Reverse-engineering feeling might turn out to be not just "hard" but perhaps even impossible (unless we luck out and the solution to the easy problem, which is testable, turns out also to be the solution to the hard problem, which is not testable because of the other-minds problem). So we may never know: not just not know "for sure," as with other scientific explanations...

      Delete
    2. As I review some skywritings before the midterm, I wanted to attempt an answer to the “How?” in bold. Because new words and “things” are consistently being added to our lives and, in theory, could be generated during the entirety of our existence (infinitely??), would it be possible to say that the brain may not have infinite functionalities, but that it has infinite uses for information, especially when it comes to language? How can we reconcile "finite" hardware with the infinite acquisition of new knowledge (as long as previous information is discarded)? Is this due to heuristics and propositions? How does this tie in with categorization, as seen during week 6?

      Delete
  4. “T2: Total indistinguishability in email (verbal) performance capacity. This seems like a self-contained performance module, for one can talk about anything and everything, and language has the same kind of universality that computers (Turing Machines) turned out to have. T2 even subsumes chess-playing. But does it subsume stargazing, or even food-foraging? Can the machine go and see and then tell me whether the moon is visible tonight and can it go and unearth truffles and then let me know how it went about it? These are things that a machine with email capacity alone cannot do, yet every (normal) human being can.”

    When reading the distinctions between T2, T3, etc., I also questioned where along this line the more nuanced parts of our language fall. How can a machine with only T2 email capacity pick up things like tone? Are these not considered in the realm of “verbal capacity”? I question the ability of a T2 machine to produce such responses without the additional information that we have (context of the environment, for example). I think that perhaps it is unnecessary for T2 to have this ability if it can produce responses somewhat similar to humans’, given that humans also make errors in the nuanced parts of language.

    ReplyDelete
    Replies
    1. Grace, I think this is a really good point! I am 99% sure I am human (the leftover 1% is how I feel after reading these papers and thinking I might be a T-something robot who has convinced herself she is human), but being able to convey tone over email or text is something I have a lot of trouble with. I find that a lot of tone is facilitated by facial expressions or bodily movements, which are obviously not present when interacting with what we’d typically call a T2 robot. However, if, as Harnad posits, there is “total indistinguishability,” I imagine that tone would have to be perfectly expressed over email. In this case, would “perfect” mean almost lacking, like a real human? I’d like to think that seeing an email with absolutely no errors in the nuances of language would raise some red flags in my mind, since no human I personally know is capable of writing emails that flow flawlessly and still read like regular human creations, but it’s hard to tell what I would think when faced with an actual T2 robot and unaware of it.

      Delete
    2. Grace, well, we do seem to manage to text one another, despite missing tone. Tone perception is a sensorimotor capacity, not just a verbal one (but that's not the reason I think only a T3 could pass T2: what might that reason be?)

      Emma, of course tone is important, and so is a lot of other nonverbal behavior we have and use in communicating. Nonhuman animals have only nonverbal capacities, yet they communicate. That's why T2 is non-total, and passing it requires T3 capacities, even though they are not directly tested by T2.

      (You are a robot, by the way, since a robot is a machine, which just means a cause-effect system, and every organism is a cause-effect system. But you are a T5 robot, able to do everything that a human can do verbally (T2) and sensorimotorically (T3); and your internal processes are not just a synthetic version of the real brain and body (T4), but a natural one (T5).)

      Delete
    3. To the question of why only a T3 robot could pass T2: might that be because the T3 robot experiences the world through its senses like we do, and therefore would be able to partake in the inevitable conversations about stimuli from the external world? Whereas the T2 robot would fail the moment it has to answer based on subjective experience?

      Delete
    4. Juliette, that's part of it. But what T3 can do is not just converse about things in the world, but learn what they are and what they're called. That's called symbol grounding, and a T2 can't even begin doing that: T2's connections are just between words (symbols), not between words and the things they refer to.

      Delete
    5. “...our verbal abilities may well be grounded in our nonverbal abilities”

      This quote made me think about this comment and the symbol-grounding problem: how many words, and which words, do you need to know the meaning of, so that you can get all the rest from T2?

      If you want to teach a toddler something, you likely demonstrate actions rather than verbally explaining. Moreover, there are some concepts that are almost impossible to describe with words alone (e.g. plans to build a house). Furthermore, in the case of the toddler, action-demonstrations may be necessary to teach language (e.g. pointing at an object and naming it). So, if this kind of action-demonstration or nonverbal ability is necessary for understanding the world (from language to building plans), a T3 robot is needed to pass a T2 test.

      Delete
  5. By my understanding, the value of the Turing test to cognitive scientists is that, in creating a machine that would pass TT, you establish a possible model for how we think. Consider humans to be a machine whose functioning we don’t understand. By building a non-human machine whose functioning we do understand, and that is indistinguishable from us, we would establish one possible explanation for how we function. In regards to T5, then - meaning indistinguishability down to the last molecule - it seems like it almost wouldn’t be a Turing test anymore. It is not looking at a non-human machine and comparing it to a human, it’s comparing a human machine - albeit one that has come into existence through a non-typical process, in a lab rather than being born to human parents and so on - to another human machine. It seems like T5 is just asking, “if we make a human, will it act like a human”, which isn’t really asking anything (this is setting aside, of course, the possibility that humans have some immaterial element like a soul).

    Perhaps I am misunderstanding T5. By my current understanding, though, it also seems like the concept of T5 isn’t practically useful, because, to make a T5 machine, it seems we would already need the knowledge we are trying to acquire (exactly how, down to the last molecule, our minds do what they do).

    ReplyDelete
    Replies
    1. Yes, I was somewhat puzzled by this as well. I don’t understand why Turing put forward tests beyond T2. We want to know if machines can think (like humans do, most probably), but why does it matter what structure the machine is made of, or what it looks like, and so on? Why do we need something to look even remotely physically human to be able to think of it as a thinking entity? If the goal is to find out whether machines can think, why bother ‘creating’ another human when software alone might do the job? Is it because we can only experience our own thinking, and so the only way we can conceptualize thinking in other things is to give them some form of human characteristics?

      Delete
    2. Caroline, you're right that T5 is superfluous for reverse-engineering the mechanism that produces cognitive capacity -- unless there's something about the synthetic chemistry of T4 that matters. But if it did matter, we would not know, because T5 is weakly equivalent to T4 and T3, so the I/O is indistinguishable. So it could not matter for doing; it could only matter for feeling. It's just mentioned to point out that reverse-bioengineering, just like the rest of science, is based on observation and measurement as data, and that includes biochemistry. (How is weak vs strong equivalence (which really only applies to computation) similar to the T3/T4 distinction?)

      But forget about the T4/T5 distinction. I never ask about it in exams; it's too indeterminate to matter in cogsci.

      Shandra: The above only applies to the T4/T5 distinction. The differences between T2, T3, and T4 do matter, a lot. So you should think about them (and I often ask about them on the exam.)

      And of course machines can think, since organisms, too, are machines. (What is a machine?) Cogsci's goal is to reverse-engineer what kind of machine we are.

      And as to software: that depends first on whether computationalism is correct: What is computationalism? And what difference does it make to the T2/T3/T4 question?

      Delete
    3. I think Turing agrees with you, actually. At the beginning of his article (page 2), he says, "We do not wish to penalise the machine for its inability to shine in beauty competitions", meaning that physical characteristics are not what we care about here. That said, the reason to go beyond T2 is that much of human thought is not just in how we talk to people; it's about how we take in and process information in the world around us, and how our thought and behaviour are connected. Can we say a machine thinks like a human if it can't touch a caterpillar and tell you whether it's fuzzy or not? If our machine isn't able to interact with its world as we do, it's harder to know if it is really performing all aspects of human thought. And I don't think Turing necessarily would have disagreed with this and said that T2 was the place to stop. In the line quoted above, I think he more meant that we shouldn't decide a robot isn't thinking like a human just because we look at it and it looks non-human to us, as you said. Keep in mind, though, that there are more differences between T2 and T2+ than just how human they look to us :)

      Delete
    4. Caroline, good points: surely human thought is not just about words but about the things the words refer to.

      Delete
    5. I am adding my comment here because it is an extension of the above conversation about Turing’s stance.

      First off, it was extremely clarifying to read this article, which annotates Turing in the terms that we (as a class) have been familiarizing ourselves with. Of the multiple distinctions and problems to be understood, the focus of the annotations, it seems, is on the easy problem: trying to reverse-engineer a machine with a performance capacity indistinguishable from humans'.

      From what I understood, Turing believes that a machine that can pass T2 “is enough”: such a machine can provide an explanation for how thinkers can think. One of Turing's reasons is that a T2 machine is universal and thus can fulfill the strong Church/Turing Thesis: it can simulate just about anything, including all the other T-robots. However, a simulated T3 robot in a T2 machine is NOT the real robot and cannot be given the Turing Test to check its indistinguishability.

      Another problem is that (as paradoxical as it sounds) it might be impossible for even a T2 machine to pass the T2 test because when “questioned too closely about the qualitative and practical details of sensorimotor experience”, verbal performance would break down and may be distinguishable. Thus, T2 might not be the place to stop as it might be practically impossible for a machine to pass T2 whereas a T3 robot could pass both T2 & T3.

      Additionally, Prof. Harnad continues to ask whether we would kick Eric as a T3 robot. Up until now I interpreted this question as “At what point (T2, T3, T4) would Eric feel pain?”. I initially thought that we would be able to figure it out once we understood the different T-machines better. However, after reading the article, I feel there is no way we can be sure that a machine can feel, other than by being that machine ourselves (the other-minds problem), just as, if I kick you (hypothetically), I cannot be SURE that you are feeling pain. I can only be 100% sure when someone kicks me and I, myself, feel the pain.

      So, the Turing tests/machines are modelled to explain how our own machinery works, and whether they can feel is unidentifiable through the Turing tests. Does this mean that although the Turing tests can explain how thinkers can think, they are irrelevant in explaining how thinkers can feel?

      Delete
    6. All your points were correct, but I don't understand your last paragraph. Turing's point was that you cannot do any better than the TT in explaining cognition, because each of the TTs, from T2 to T4 (or even T5) is based on equivalence (it can do it all) and indistinguishability (we can't tell any difference from any one of us).

      That's good enough for the other-minds problem, but not for the "hard problem," which is how and why organisms feel. No one yet has a clue about that (except clueless mistakes such as: "if organisms did not feel that fire burns, they could not detect, escape or learn to avoid fire." [How/why is that explanation wrong?])

      Delete
    7. In response to Prof. Harnad's comment: "if organisms did not feel that fire burns, they could not detect, escape or learn to avoid fire." [How/why is that explanation wrong?]

      To answer Prof. Harnad's question, I think this explanation is mistaken because, as we have seen in class, organisms do not have to feel pain in order to detect and learn to avoid things in their environment that are potentially damaging to them (such as fire). Indeed, very simple living organisms are equipped with nociceptors, a type of sensor cell that responds to potentially damaging stimuli. Although these receptor cells are not necessarily connected to a central nervous system that would allow for complex processing of feelings such as the sensation of pain, they can still give an organism the necessary information to detect threats in the environment and react accordingly, by escaping or even changing subsequent behaviours to avoid future encounters.

      Delete
  6. I think the statement “the view that machines cannot give rise to surprises…” is an interesting one to take into consideration. If we consider ourselves machines (aligned with Harnad’s definition of machine), then it is quite evidently a false statement. It is not uncommon for humans to give rise to surprises — but humans are not pre-programmed with ‘stores’ or instructions, nor do they have laws of output in the same way that robotic machines do. The robotic machines that humans create are subject to the constraints of what we have programmed into the store and what we want the machine to do. In this sense, it seems the statement holds up better for human-made machines. Seeing as humans are programming the machine, would we not be privy to all the capabilities of the machine beforehand, and thus nothing be a surprise?
    To reach an answer, we would need to define “surprise” — but I would argue that an error in the machine’s running is not a surprise in the sense that Turing and Harnad are discussing.

    ReplyDelete
    Replies
    1. About "programming" and surprise, see previous replies. Programmes can modify themselves based on input. The "programmer" cannot predict what input the programme will actually get, and hence what state it will be in after an unknown future history of input.

      "Surprise" is a matter of expectations and uncertainty. It is also a feeling. So apart from whether it is true that cognition is just computation (it's not) no one (human or nonhuman) can know (hence expect or predict) all of their possible future experiences, even if what they would do in response is already set in advance internally. Nor can anyone predict where the internal settings will go from there, since their future inputs cannot be predicted.

      But surprise is no problem; it happens all the time. Rarely do things go as expected or predicted, whether for human machines, for nonhuman-animal machines, or for human-made machines.
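
      As a toy illustration of the point above about unpredictable input histories (my own sketch, not anything from Turing or the paper; all names are made up): even a trivially simple programme ends up in an internal state its programmer could not have predicted, because that state depends on whatever inputs, and chance events, it happens to encounter.

```python
import random

# A trivial "self-modifying" rule table: the programme's future behaviour
# depends on an input history (and chance events) that the programmer
# cannot know in advance. All names here are made up for illustration.
rules = {}

def respond(stimulus: str) -> str:
    # First encounter: adopt a response at random and remember it.
    # Later encounters: reuse whatever past history installed.
    if stimulus not in rules:
        rules[stimulus] = random.choice(["approach", "avoid", "ignore"])
    return rules[stimulus]

# Stand-in for an unknown future stream of inputs.
for s in ["fire", "food", "fire", "stranger"]:
    print(s, "->", respond(s))

print(rules)  # the final internal state was not predictable in advance
```

      The point of the sketch is only that the final contents of the rule table were not set by the programmer; they were set by the (unknowable) history of inputs.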

      Delete
  7. While going through this reading I couldn’t quite get over my confusion about the “other minds” problem, or what is perhaps the difference between an AI and a CM.
    If “a device we built but without knowing how it works would suffice for AI but not for CM”, and CM is not necessary for AI, then how would there be any way of verifying whether something is “thinking” in the same way that we do? Of course, there is again the “other minds” problem, which has it that we never know, and never could, short of perhaps the advent of Turing’s telepathy. But doesn’t this pose a larger problem than just the circular “we never know”? Perhaps this is my flesh-and-bone supremacy talking, but I am certainly more inclined to believe that other fleshy beings think, or at least have grown accustomed to using that as a presupposition for relationality/intersubjectivity (fancy words for making friends!!). Does the Turing test require that we ‘get over’ the “other minds” problem, as a kind of prerequisite leap of faith? Somehow I am not satisfied with that conclusion!!

    ReplyDelete
    Replies
    1. Sofia, I find myself having the same thoughts! In trying to reconcile these ideas I was struck by the beginning of Turing's paper. When Turing replaces the question "Can machines think?" with the imitation game, I felt that this implied that maybe the "thinking" concept just isn't relevant to what we are trying to accomplish with these tests. When I think about it as the imitation game, I don’t think we need to just ‘get over’ the other-minds problem; instead I am led to think that solving the other-minds problem isn’t a necessary component in this situation: as long as the machine can convince us it can think, that is enough.

      Delete
    2. Here's a simpler way of thinking of it. It requires distinguishing (1) doing (i.e., behavioral performance that you can observe and measure) from (2) feeling, which no one but the feeler can feel.

      What Turing is saying is that his method can only explain doing-capacity, not feeling.

      (About the intuitions about "flesh" [actually, it's correlations with flesh]: consider those when thinking about T3 vs T4, or even T4 vs T5. But this course is mostly about T2 vs T3.)

      Delete
  8. I thought it was interesting when Harnad distinguishes the goals between artificial intelligence and cognitive modeling. He compares the field of artificial intelligence “whose goal is merely to generate a useful performance tool” to that of cognitive modeling “whose goal is to explain how human cognition is generated.” This is important because Turing wishes to allow a machine that functions but whose operations cannot be fully described to participate in the Turing Test. This means that the machine partaking in the Turing Test would only function as a performance marker for AI. With this understanding, it seems the only thing we can determine of the machine would be if it can think at the same level as humans or not. In contrast, it seems that a Turing Test made for CM purposes would allow us to draw bigger conclusions about how thought works. However, would this actually be helpful to describe how thought works generally, or would it just describe how the robot in the particular case functions?

    ReplyDelete
    Replies
    1. Remember the doing/feeling distinction and indistinguishable performance capacity. Cogsci (and hence CM) is about explaining how organisms can do all that they can do. Once you have reverse-engineered a machine whose performance capacity is equal to and indistinguishable from ours, for a lifetime, you have solved the easy problem. The rest of your questions are not about that, but about whether the TT machine feels (the other-minds problem), and about the hard problem (of explaining how and why the machine feels).

      Delete
  9. I was particularly compelled by the rebuttal to the Argument from Informality of Behaviour. This states that T3 robots would need more than just computation in order to cope with real-world situations. It is then argued that a rule-based system could describe what the T3 robot would do under all contingencies.

    However, this makes me think about the true implications of a Turing-passing robot. In particular, if a robot were to be indistinguishable from a human, it would have to have something of a personality. Thus, it would be necessary for a T3 machine not to be always restricted by rules, and sometimes to act irrationally or erratically, as many humans do. This makes me wonder whether this is something that would have to be programmed into the robot, or whether reverse-engineering the machine to be exactly like a human would already include some of these capabilities.

    ReplyDelete
    Replies
    1. All the TT calls for is complete indistinguishability in performance capacity. No constraints on personality.

      Delete
  10. In the section around the argument from consciousness, Turing writes, “...the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking…”. Harnad rebuts that this objection is correct but irrelevant, as “we have no more or less reason to worry about the minds of anything else that behaves just like us ... It is based only on what they do.” From this I feel the author argues that we can gather important information about human thinking from the Turing Test, because if something behaves like us, as other humans and machines may do, then we can “mind-read” its behaviour because it mimics our own. I feel the need to object, for how can we be certain that the system we create, convincingly indistinguishable from ourselves, is running on the same principles as us? We may have behavioural equivalence, arriving at the same solution - consciousness. However, if the internal symbols and methods used to achieve consciousness are vastly different, how can we come to know anything more about ourselves from creating this system? How can we be certain that we compute similarly to this machine, if we are unable to introspectively see our own computation for comparison? Can we rightfully assume that we will naturally create systems that mirror our own computation and abilities just as they are internally structured?

    ReplyDelete
    Replies
    1. Hi Genavieve,

      I agree with your point on behavior: if a computer and a human have behavioral equivalence, it doesn't necessarily mean we have the same internal processes. However, I think that whether or not a machine "thinks" doesn't have to involve the exact same process as a human's. As long as the output is the same, isn't that the goal? If the computer can come up with its own ideas/thoughts, isn't that thinking?

      Delete
    2. If cogsci reverse-engineers at least one T3, we will have at least one answer to how and why we can do all the things we can do. (We will be discussing "underdetermination" (more than one way to explain it all) later in the course.)

      Delete
  11. The idea that a T2 robot can play chess is very interesting to me. According to Lady Lovelace’s objection, “a machine can ‘never do anything new’”. To which Harnad replies, “all causal systems are describable by formal rules.” When a robot plays chess, it is bound by the formal rules of the game (e.g., the objective of the game and the limits on piece movement) but by nothing else that could influence a human’s game, like lack of sleep the night before or environmental distractions. From there it would be able to compute a strategy in order to achieve success. Very, very simplified: the input would be “capture the king” and the output would be the corresponding move. Subsequent inputs would be the opponent’s moves, and subsequent outputs would depend on those moves, until someone wins. While I agree that “it is not clear that anyone or anything has ‘originated’ anything new since the Big Bang”, I would be interested in testing a robot against a chess grandmaster. Grandmasters implement strategies in ways the average chess player does not, and I think this lends a certain “creative” quality to their skill. In this case, I wonder who would emerge victorious in this match: the robot with the power of computation, or the grandmaster with the power of “creativity”? (A toy sketch of what being "bound by the formal rules" looks like is below.)
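
    To make "bound by the formal rules of the game" concrete, here is a minimal sketch (mine, purely illustrative, and not anything from Turing or Harnad) of a machine player that knows nothing but the rules: it simply picks any legal move. It assumes the third-party python-chess package, which encodes chess's formal rules.

```python
import random
import chess  # assumes the third-party python-chess package is installed

board = chess.Board()
while not board.is_game_over():
    # The "machine" is constrained only by the formal rules of chess:
    # it may play any legal move, and nothing else influences it.
    move = random.choice(list(board.legal_moves))
    board.push(move)

print(board.result())  # e.g. "1-0", "0-1" or "1/2-1/2"
```

    A grandmaster's "creativity" would show up here only as a better rule for choosing among the legal moves, which is part of why chess-playing remains a toy capacity rather than a Turing Test.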

    ReplyDelete
    Replies
    1. Chess-playing capacity is a toy capacity. Can you translate your point at the T2 or T3 level (total, indistinguishable performance capacity)?

      Delete
    2. I’m not sure of specific examples, but I think there would need to be a method to turn the T2/T3 test into a zero-sum game. The T-x robot would be bound by its algorithms while playing against a human with considerable expertise, also bound by the same rules of the game, but who would make less expected or common moves than the average person. Forgive me if I’m out of my depth on this one.

      Delete
    3. What I meant was TT-scale, rather than toy-scale. Chess-playing or any other game is not a T-test.

      Delete
  12. First of all, I really liked the format of this paper; it is almost like reading annotations from an active reading session of Turing's paper. I agree that "robots can have analog components" and that "any dynamical system" is a candidate for thought. One doubt I had is whether thinking needs to do exactly all that humans do, be equipped with all the sensorimotor capacities, and interact with the physical world. If my limbs are tied, I can still think. If I am locked up in a room blindfolded, I can still think (but maybe go mad after a long time). Prof. Harnad does discuss disabilities and concludes that all humans have some sensorimotor capacity - this is true. Is this not closer to reverse-engineering human capacities that are 'more than' "thinking"? It does come back to how one thinks about thinking: "Yes, but only at a cost of demoting "thinking" to meaning only "information processing" rather than what you or I do when we think, and what that feels-like."

    ReplyDelete
    Replies
    1. It's a safer bet to produce full T3 capacity and then try to produce it with fewer senses (but not by blinding or paralyzing Eric!)

      Delete
  13. “his restriction of the test exclusively to what we would today call email interactions is, as noted, a reasonable way of preparing us for its eventual focus on performance capacity alone, rather than appearance, but it does have the unintended further effect of ruling out all direct testing of performance capacities other than verbal ones; and that is potentially a much more serious equivocation, to which we will return. For now, we should bear in mind only that if the criterion is to be Turing-indistinguishable performance-capacity, we can all do a lot more than just email!”
    In the book “Algorithms to Live By: The Computer Science of Human Decisions” by Brian Christian and Tom Griffiths, which I’m reading, one technique for dealing with a complicated math problem is simplification of the problem by relaxation. In other words, it is simplification by taking out variables and leaving only the core of the problem to work on; the removed constraints are subsequently added back. This concept came to mind while reading the above-mentioned quote, and it makes it interesting to think about Turing’s move of reducing human interaction to email communication in order to better understand performance.

    ReplyDelete
    Replies
    1. Trouble is that if you don't scale up to full T3 capacity, you could be building toys that have nothing to do with the way full capacity is produced, because toys can be built in so many different ways. (And then there's also the symbol grounding problem...)

      Delete
    2. This is a really interesting take. Would you consider a T2 robot passing the TT to be the first step in understanding human capacity and then scale up the TT by using a T3 robot? I’m curious if this would work in the same way as your book described, since in the book this technique is used for a computation (a math problem). However, since cognition is not computation, would scaling back yield the same results?

      Delete
  14. I am curious about the significance of having a difference between T4 and T5. Why bother creating an extra level in the hierarchy to distinguish the nervous system from the rest of the internal structure? Given that “Total indistinguishability in physical structure/function“ is extremely similar to “total indistinguishability in external performance capacity as well as in internal structure/function”, what is the significance of an actual nervous system instead of a synthetic one? I believe that T4 could include the nervous system as a part of the “internal structure/function” previously referred to.

    Perhaps the difference is that while a T4 robot would say “ow” in response to being hit, a T5 robot would actually feel the same pain a human would feel. But I believe that this is a strange little addition, not necessarily worth separating a T4 from a T5. What if you had a machine with a non-synthetic nervous system, but there were other internal aspects which were synthetic? Where would this lie on the Turing scale? Or perhaps it would be something else entirely.

    ReplyDelete
    Replies
    1. Hey Alex :) Something I'm curious about in your question: how could we know the difference between our machine saying "ow" and feeling pain? Since we're not the machine, we can't know if a T5 feels other than based on what it tells us, or what we can observe about its neural signalling. In both T4 and T5, you'd see the same neural function as a human who is experiencing pain, but we can't know if the machine *really* feels pain as we do. So I don't think that would be the T4/T5 difference, but I may be wrong :)

      Delete
    2. And what about when a T3 says "ow"?

      Delete
    3. Good points, Alex and Caroline :) I would assume there is a purpose in distinguishing T4 from T5 since we, as humans, are T5 machines. And while it may never be possible to create a T4 robot, this would stand as the last possible level a robot could reach, one of total indistinguishability from us. But managing to build such a robot means we would probably already have *at least* some of the answers we’re looking for in terms of thinking and consciousness, and maybe have transcended the other-minds problem such that the distinction between T4 and T5 becomes obsolete.
      I don’t think I would kick Eric if he “demonstrated” that he felt pain by saying “ouch” — not having absolute proof as to the contents of his mind would not justify me hurting him. By being a T3 robot, Eric would technically be human-passing enough that I would take for granted that he feels, as I do… But this brings me to my question.
      I wonder whether the T3 robot needs to keep its robot identity hidden, just as T2 needs to “fool” (although it’s not a game) humans into thinking it is also human. Is the point that I assume Eric is human, given that he passed T3? And if I already know he’s not human, doesn’t that defeat the purpose? And if we do assume T3 is human, who would think “he’s probably saying “ouch” but doesn’t feel anything, therefore I should kick him”? We don’t have this thought process when interacting with other humans.

      Delete
    4. This is getting a bit sci-fi-ish, but if cogsci did succeed in building a T3 then I think we'd get used to many of them among us and we would come to feel it would be racist to discriminate against them because of the metal under their skin...

      Delete
    5. That is a great point, Caroline; I did not think about how we cannot know from the neural signalling whether a T4 would still feel pain, because even if the nervous system is synthetic, the T4's head can be "opened up and still look exactly like a human", so I was likely incorrect in my interpretation of the difference between T4 and T5.

      Delete
  15. I am intrigued by the notion of “feeling” and how it complicates our understanding of consciousness, thinking, and intelligence. As Turing and Harnad note, many object to machines thinking without evidence that the machine can feel itself thinking. However, I don’t understand why it is argued that this is not a solipsistic argument, when it holds that thinking is dependent on one’s own mind and existence. Could this simultaneously be an issue of solipsism and the other-minds problem? Turing frames this argument as: one must be the machine to be aware of itself thinking, whereas Harnad frames it from a more social perspective, where we believe others think if they behave and communicate as we do. He argues that behavior is the only relevant determiner of recognizing “thinking”. I think solipsism would still apply here because there is no proof that other beings are also thinking, or rather I don’t see how it is substantiated that the objection being solipsistic is “dead wrong”.

    ReplyDelete
    Replies
    1. Solipsism is the belief that I am the only one who exists, all else is illusion. (It is a form of scepticism; you can also be sceptical about scientific findings. But there can be truths of which we cannot be certain but are nevertheless true.)

      Solipsistic scepticism is the wrong problem, and not relevant to the TT.

      Turing is right, though, that only feelers can feel, and know for sure that they are feeling. (That's actually Descartes' Cogito, including the other-minds problem.)

      It's a safe inference that other humans (and many animal species) also feel, even if it is not 100% sure.

      Turing does not say that performance capacity is the only "determiner" of feeling, but that it is the only way to infer whether anything other than me is thinking. (Maybe that's what you meant?)

      Delete
  16. During class, we talked about the easy and hard problems: how and why we do what we do, and how and why we feel what we feel. I was intrigued by the discussion of how it seems that we do not need feelings to evolve as Darwinian machines -- why didn't we just evolve to pull our hands away when there is fire, without feeling pain? In my opinion, feeling acts as a condition for humans to learn. Since there are too many uncertainties in the world, humans need a mechanism (such as pain) to actively update and remind themselves of what is dangerous. It helps us keep evolving. Relating this to Turing machines: do they also need feeling, or other similar mechanisms, to constantly learn and update?

    ReplyDelete
    Replies
    1. First, already today's toy robots can learn (and learn to escape and avoid damage) without feeling anything.

      Turing machines are just computers. Are you asking about computers or about T2,3,4?

      Turing's point is that if robots can do and say anything we can do and say, so we cannot tell them apart from us, then we have no better (or worse) reason for inferring that they feel than we do with one another.
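
      To illustrate the first point above (that a system can learn to escape and avoid damage without feeling anything), here is a minimal sketch of my own, not any specific robot's mechanism: a bare-bones trial-and-error learner with no feeling anywhere in the loop. All names and values are made up.

```python
import random

# Action-value estimates for a toy agent; "damage" is just a negative number.
values = {"touch_fire": 0.0, "stay_away": 0.0}

def consequence(action: str) -> float:
    return -1.0 if action == "touch_fire" else 0.0

for trial in range(200):
    # epsilon-greedy: mostly pick the currently best-valued action, sometimes explore
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    reward = consequence(action)
    values[action] += 0.1 * (reward - values[action])  # incremental update

print(values)  # "touch_fire" ends up valued lower, so the agent avoids it
```

      Whether anything like this is what feeling organisms are doing is exactly the open (hard) question; the sketch only shows that detection, escape and avoidance do not require feeling.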

      Delete
  17. One thing that I have struggled with, underlying both Turing's paper and Harnad's response, is the idea of a "perfect" human performance. How can a non-human machine playing the imitation game prove to be indistinguishable from a human 100% of the time, when I can imagine so much variability in actual human performance when put to this test? Harnad addresses the fact that physical disabilities are irrelevant because "we all have some sensorimotor capacity", but what about intellectual or social disabilities? If a neurodivergent individual has a manner of communication that comes off rather robotic and they lose the Turing test 100% of the time, what does that do to our definition of intelligence and "normal" human behaviour?

    ReplyDelete
    Replies
    1. Hi Lucie, I've been thinking about this comment for a while. I don't think Turing took neurodivergence into consideration in his paper, and neither he nor Harnad touches upon intellectual or social disabilities--only physical ones (e.g., the references to Helen Keller and Stephen Hawking). However, in the case of these disabilities, the mechanism of thinking would still be the same (or at least very similar). A neurodivergent individual may have a "robotic" form of communication, but they still perform cognitive functions such as thinking, learning, etc. Their minds are flexible, such that a Turing machine would need to be as well.

      Also, I think the same argument could be made that we all have some intellectual or social capacity.

      Delete
    2. Lucie, the purpose of the TT is to test whether we have successfully reverse-engineered human cognitive capacity. It is easy to make a robot indistinguishable from a dead person, or a sleeping person, or a paralyzed person, and it shows us nothing. To learn whether we have succeeded we need at least T2 or T3. This is not a clinical test or assessment, it is just what we do with one another every day. Consider Eric as your example.

      Melody, as you note, neurodivergence is irrelevant, especially because real neurodivergent people don’t fail T3!

      Delete
  18. In the reading, Harnad highlighted that "there is no one else we can know has a mind but our own private selves" (the other-minds problem). This statement was followed by the notion that it is irrelevant to inquire what machines are made out of, so long as they do what they are supposed to do. We cannot know "with certainty" that T3, T4 or anyone other than ourselves is thinking or feeling, because of the other-minds problem. Turing suggests we settle for whatever generates our full doing capacity. My question is: if machines do something formally equivalent to our doing capacity (e.g., virtually flying vs. actually flying a plane), can we consider machines and humans actually equivalent? I don’t think so; what do you think?

    ReplyDelete
    Replies
    1. That's an interesting point, but I think you're making a leap there. From what I understand, the Turing Test isn't only about checking whether or not a machine can do all that humans are capable of doing; the machine also needs to fool a human. Which means, in the case of T3, it cannot be a virtual simulation, because humans have to see it do sensorimotor tasks and be fooled into believing it's human. So a computer simulation might be "equivalent" to a human in the sense that it is a perfect model (and the Church-Turing Thesis says that anything can be simulated), but it will never be equivalent in the sense Turing meant with the Turing Test.

      Delete
    2. I do think this is an interesting point where it might be arguable that humans and machines are equivalent. However, as mentioned by Louise, the machine would need to mimic human behaviour, or “fool” humans into believing it is just as capable in every sense, and virtual vs. real flying, for example, would be a dead giveaway since they are completely different. Furthermore, from my understanding, the goal of building a “thinking machine”, as mentioned in the text, is to reverse-engineer a machine that allows us to understand and explain our thinking capacities.

      Therefore, it is more than just about having a machine “fool” humans and worrying about the actual aspects of the action (ex: virtual vs real life) but rather understanding and trying to explain the process of the action of “flying”.

      Delete
  19. By my understanding of Turing so far, he is arguing that a machine could hypothetically be made that could speak and act indistinguishably from a human. His initial example was a T2 machine, which could converse exactly like a human. However, in actuality, it is unlikely that having only verbal abilities (T2) would make the machine completely indistinguishable from a human. This is where T3 and T4 come into play, since they allow the machine to see and interact with the world around it, which is what humans do. The question of whether they can actually feel or attach meaning to the things they are seeing and saying is quite irrelevant because of the other-minds problem. Thus, we cannot tell if any other being is thinking or feeling since we cannot be inside its mind. We assume that other humans think since they act like thinkers, and thus if T3 or T4 were to act like a thinker, then whether or not it is actually thinking is moot. Either way, it would be like any other human, and thus we could look at the programming of the robot to reverse-engineer and understand how the human brain cognizes. Hopefully I am on the right path here in terms of Turing and his theories.

    ReplyDelete
  20. Turing's belief that in about fifty years’ time it would be possible to program computers to play the imitation game so well that an average interrogator would have no more than a 70% chance of making the right identification after five minutes of questioning is considered a weak criterion, because in order for the computer to have the performance capacity of a real human being, it must be totally indistinguishable from a real human being, for a lifetime if need be. I wonder whether making the length of time it can successfully imitate a human a restriction is beneficial or harmful, because if the performance capacity must remain consistent forever, doesn’t this neglect the way in which human beings tend to decline cognitively over time due to factors such as age and illness? What I mean is, to have a program that’s truly indistinguishable (indefinitely), wouldn’t it also have to be able to demonstrate how performance capacity can fluctuate over time?

    ReplyDelete
  21. In this paper, Prof. Harnad commented on the Turing 1950 paper and gave me much insight into the Turing Test. Prof. Harnad made a very clear distinction between Artificial Intelligence (AI) and cognitive modeling (CM), and stated that ‘a device we built but without knowing how it works would suffice for AI but not for CM’ (page 7), which explains how the Turing machine alone is not sufficient to explain how human cognition is generated. Prof. Harnad also responded to Lady Lovelace’s objection by stating that “it is not clear that anyone or anything has ‘originated’ anything new since the Big Bang” (page 15). I’m not sure if I understand correctly: of course the physical world has basically been settled since the Big Bang, and all physical objects are just reactions based on existing elements and engineered existing elements, but what about abstract thoughts, intellectual knowledge, etc.? As human beings we are able to invent such concepts, but how could computation ever support this invention?

    ReplyDelete
  22. “Just as (in the game) the difference, if any, between the man and the woman must be detected from what they do, and not what they look like, so the difference, if any, between human and machine must be detected from what they do, and not what they look like.”
    This quote enlightens me a lot. Although it is said that the difference should be detected from what they do, I feel that, in practice, people rely almost entirely on appearance, or more specifically, on the concept they form from the appearance. For example, if a girl with short hair looks like a boy and is dressed like a boy, at first sight you will think “oh, this is a boy”, because those characteristics conform with the traditional image of a boy (short hair, wearing pants, etc.). If someone tells you that she is a girl, the idea will switch to “she is a girl who looks like a boy”, without her really doing anything. Which makes me wonder: how could we draw the line based on “what they do”? When it comes to the difference between “human and machine”, if, hypothetically, there is a machine which has exactly the same appearance as a human, then without knowing that it is a machine, even if it has flaws in its performance, we may still consider it “someone weird” rather than “a robot which looks like a human”. Since everything on the outside conforms with the category “human”, “this is a human” becomes a premise underlying our reasoning, one which I believe people hardly ever challenge. Thus, when it has insufficient capacities compared to normal humans, people may well think that it has IQ problems or psychological problems.

    ReplyDelete
  23. In the section “Arguments from Various [In]abilities” Turing mentions: “I grant that you can make machines do all the things you have mentioned but you will never be able to make one to do X” [e.g.,] “Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new…”

    Harnad has dubbed these “Granny Objections”, and the slides that touched upon Lady Lovelace’s points were equally interesting to me. Examples mentioned by Turing, which Harnad said computers will eventually be shown to be able to do, are happening today everywhere. Particular examples that stood out include having a sense of humour, making mistakes, and learning from experience. In 2019, a company called Jibo, which distributed social robots, had its servers permanently shut down. At that point there was a real connection between the users and this beloved robot; some even held funerals for it: https://www.theverge.com/2019/6/19/18682780/jibo-death-server-update-social-robot-mourning
    This robot had perhaps a hard-wired programming of sets of emotive statements, jingles and associated gestures that brought it to life, but to the young child users, it felt all the same as a true playground pal. The other-minds problem applies to machines here, as Harnad mentions in other parts of this text, and I think this is a great example of how children really were not able to consider Jibo as anything other than a true friend in a different physical reality.

    ReplyDelete
    Replies
    1. This is not related to my earlier posting, but just a secondary skywriting/thought I had while reading this: "The Mathematical Objection: Godel's theorem[:] [A]lthough it is established that there are limitations to the powers of any particular machine, it has only been stated, without any sort of proof, that no such limitations apply to the human intellect."

      "Godel's theorem shows that there are statements in arithmetic that are true, and we know are true, but their truth cannot be computed. Some have interpreted this as implying that "knowing" (which is just a species of "thinking") cannot be just computation. Turing replies that maybe the human mind has similar limits, but it seems to me it would have been enough to point out that "knowing" is not the same as "proving." Godel shows the truth is unprovable, not that it is unknowable. (There are far better reasons for believing that thinking is not computation.)"

      In reading this, the difference between “knowing” and “proving” really stood out to me, as in Searle’s Chinese Room Argument, in which the interpreter can manipulate symbols/characters and produce a solution without truly knowing the semantics behind those symbols/characters. It’s really interesting that there is such an interdisciplinary contribution to the definition of computation. Or perhaps computation really does encapsulate much of what is mechanistic, such as arithmetic or, in a way, language learning.

      Delete
  24. From my understanding, the criteria for the different levels of Turing Test require different levels of observation: starting from evaluating a machine’s doing capacity when performing specific tasks, and moving up to its verbal performance capacity, the criterion for ‘thinking’ shifts from the capacity to simulate or perform a specific task to email performance capacity. As pointed out by Prof. Harnad, to test whether a machine could pass T0, we only need to test whether the device can perform a certain task; the only standard is whether it fails or not. However, when it comes to T2, there is no specific task for the machine to perform. The only task for the machine is to fool the person in the other room observing the conversation over text. It lacks an absolute criterion of whether the machine fails to perform a particular task or not (if we exclude ‘imitation’ from the tasks mentioned for T0). The higher we go in the hierarchy of the Turing Test, the more kinds of observation and interaction are involved, and the difficulties in judging whether we can move on to a higher level as we climb are apparent. As for T2, one of the difficulties that came immediately to my mind is how we could distinguish whether it is a machine emailing a person, or a real person who happens to be a foreigner. I use the example of a foreigner here because clearly a foreigner can think, and it is easy to imagine how and why a foreigner might fail the Turing Test. Similarly for machines: I wonder whether it is possible that we build a machine that can ‘think’ before we test it with the Turing Test, and yet we claim that it cannot ‘think’ since it is different from us. Still, this is just T0 to T1. From this point, I wonder whether the Turing Test can still be considered the agenda for cognitive science, since the ability to think might be independent of our understanding of the output of thinking, our doing capacity.

    ReplyDelete
