Monday, August 30, 2021

2a. Turing, A.M. (1950) Computing Machinery and Intelligence

Turing, A.M. (1950) Computing Machinery and Intelligence. Mind 59(236): 433-460

I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B.

We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"
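Read as a protocol, the game can be pinned down in a few lines. Here is a minimal sketch in Python (my own illustration; the player and interrogator interfaces are assumptions, not anything Turing specifies):

    import random

    def imitation_game(player_a, player_b, interrogator, n_questions=10):
        # Hide the two players behind the labels X and Y, in random order.
        hidden = {"X": player_a, "Y": player_b}
        if random.random() < 0.5:
            hidden = {"X": player_b, "Y": player_a}
        transcript = []
        for _ in range(n_questions):
            # The interrogator addresses a question to X or to Y...
            label, question = interrogator.ask(transcript)
            # ...and receives a typewritten answer, identity concealed.
            answer = hidden[label].answer(question)
            transcript.append((label, question, answer))
        # Saying "X is A and Y is B" amounts to guessing which label hides A.
        guess = interrogator.identify(transcript)
        return hidden[guess] is not player_a  # True iff C decided wrongly

Turing's question is then whether the rate at which this function returns True changes when a machine is substituted for player A.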




1. Video about Turing's work: Alan Turing: Codebreaker and AI Pioneer 
2. Two-part video about his life: The Strange Life of Alan Turing: BBC Horizon Documentary
3. Le modèle Turing (video, in French)

69 comments:

  1. Hi all: Please read the commentaries and replies before posting your own, to see whether someone has made or replied to your own point already.

  2. What struck me in particular in this text was Turing’s ability to predict that, towards the end of the 20th century, computers would have enough storage to successfully partake in the imitation game. With the rise of AI, it is now possible to create programs that can very accurately imitate humans and even fool their users. That being said, Turing makes the point that humans cannot imitate computers, despite his belief that computers could generate the same answers and mistakes as humans (via the imitation game). This is due, in my understanding, to the lack of rules of conduct that regulate human behaviours. Computers are coded to behave in specific ways, while humans are unpredictable and have yet to be understood in full. Humans, according to Turing, cannot therefore behave like machines. This is a very interesting observation, but I do not think that it will stand the test of time. Indeed, it is possible that AI will someday be capable of replicating entire individuals with their quirks and emotions. If that were the case, humans would be able to imitate computers (and vice versa).

    Replies
    1. 1. Despite the title, the Turing Test is not about imitation, nor is it a game.

      2. To reverse-engineer human cognitive capacities so as to design (or discover) a mechanism that can produce all of human cognitive capacity -- equal to and indistinguishable from that of any human -- is a (cognitive) scientific goal.

      3. To produce the capacities, not to imitate them.

      4. The verbal Turing Test (T2) is not just about asking and answering questions. It is about being able to text back and forth about anything, for a lifetime, without any indication that the texter is anything but another person. Our full verbal capacity. (Text Eric to get an idea of what T2 is like.)

      5. Turing's prediction about how far along the path to T2 cogsci would be in 50 years was about right (fool 3 out of 10 judges after 5 minutes), but that's not T2. It's a toy chatbot.

      6. Read again what Turing writes about errors, novelty, and randomness. Then read 2b.

      7. Computers are coded to respond to input in certain ways -- including the ability to learn to respond differently to later input, depending on what input they have encountered and how they have responded earlier. (Even the trivial capacity for Skinnerian learning -- "do more of what has led to reinforcement, less of what has not" -- is altered by input. And the input is not part of the internal code -- whether or not what is going on internally is computation. Input comes from the external world, not from internal code. A toy sketch of this follows point 8 below.)

      8. A machine is just a causal system. All organisms, human and nonhuman, are machines. The TT (and cogsci itself) is trying to discover what kind of machines organisms are. (Chiefly their cognitive capacities, not their vegetative ones -- if they are separable.)
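      To make point 7 concrete, here is a toy sketch of Skinnerian learning (my own illustration, not anything from the reading). The update rule is fixed internal code, but what the system ends up doing depends on the reinforcement it happens to receive from the external world:

          import random

          def skinnerian_agent(actions, reward, trials=200):
              # reward(action) -> bool stands in for the external world:
              # it is input to the system, not part of its internal code.
              weights = {a: 1.0 for a in actions}
              for _ in range(trials):
                  # Choose an action with probability proportional to its weight.
                  action = random.choices(list(weights), weights=list(weights.values()))[0]
                  if reward(action):
                      weights[action] *= 1.5   # do more of what has been reinforced,
                  else:
                      weights[action] *= 0.75  # less of what has not
              return weights

          # Identical code, different input histories, divergent behaviour:
          print(skinnerian_agent(["press", "wait"], lambda a: a == "press"))
          print(skinnerian_agent(["press", "wait"], lambda a: a == "wait"))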

    2. I'm a bit confused -- if it's not about imitation, why then does Turing say: "Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?" Isn't the point for a computer to "fool" the interrogator (by imitation) into thinking it's human, so as to find out how the brain does what it does?

    3. The "imitation game" is just a metaphor for indistinguishability. "If you can't tell them apart, you have no basis for affirming of one what you deny of the other." (Read 2b.)

    4. I had a question related to point 8, actually. I wanted a better grasp of Turing's perspective (and underlying assumptions) while reading his argument. In section 4, he uses the expression "human computer"...comput-er? A human that computes? I revisited section 3, where he excludes certain exceptional cases from counting as "machines", but could not pin down what things he considered 'computers'. So, did he consider thinking computational?
      One more question about machines - in Section 5: "Everything really moves continuously. But there are many kinds of machine which can profitably be thought of as being discrete-state machines." I had thought the contrary, that most machines were more discrete than continuous - how are machines continuous? Is it because I am thinking of digital machines that work in binary?

      As a side comment, in Section 6: "The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any improved conjecture, is quite mistaken. Provided it is made clear which are proved facts and which are conjectures, no harm can result. Conjectures are of great importance since they suggest useful lines of research."
      These days it seems like scientists need to temper public expectations around artificial intelligence; meanwhile, in Turing's time, when the idea was still novel, scientists needed to conceive of an intelligent machine and somewhat believe in it to make progress on its development. I am biased by the discourse I have heard before, of course.

    5. April, I think what Turing had in mind was mathematicians. The weak Church-Turing Thesis is that what mathematicians mean when they say they are computing something is what the Turing Machine does. That doesn't mean that computing is the only thing humans (or their brains) do.

      Most "classical" (i.e., Newtonian) physical systems are continuous (or close to it). Computation is discrete, but it can approximate continuity as closely as we want. (But the real divide between the dynamic world of physics and the symbolic world of computation is not that computation is discrete but that it is symbolic .)

    6. Reply to: “The "imitation game" is just a metaphor for indistinguishability. "If you can't tell them apart, you have no basis for affirming of one what you deny of the other."”

      These ideas made me think about how there are also many conscious human beings with rich internal lives who are unable to communicate verbally or through written language. That does not make them any less conscious (or less of a person). This makes me confused about the idea of “starting” with T2. Isn’t it possible for a robot to be indistinguishable from a human without passing T2?

  3. The Argument from Consciousness (4) is one objection that first struck me as interesting because it hinges on the valid point that we do not know that others think (the Other Minds Problem), though I find I am having a difficult time wrapping my head around Turing’s response. In particular, Turing is arguing that if the machine responds in a way similar to the viva voce example, wherein the answers appear to be more than an artificial response, this demonstrates that the consciousness objection does not hold. Based on my understanding of what we have talked about so far, is it the case that what we are really concerned with isn’t so much whether the machine can think (because, given the Other Minds Problem, we cannot know that), but rather whether or not the machine can be distinguished from us? And if this is the case, is Turing’s response to the objection simply showing that the objection is not relevant to the problem we are considering?

    Replies
    1. Grace, see reply to Juliette above.

      This is about reverse-engineering and causally explaining thinkers' capacity to do what thinkers can do. Turing swears off the other-minds problem, because his behavior-based method cannot solve it. Can any method solve it? T4?

    2. If I am understanding correctly, this means that the goal to reverse-engineer a machine that is indistinguishable from us is not concerned with knowing about whether or not the machine is feeling/thinking? In this case we are simply concerned with reverse-engineering a machine with the same performance capacity as us, and any attempt to understand the issues that arise with the other minds problem will have to be taken up in another way?

      Based on the distinctions made in 2b. which states that T4 is indistinguishable in external performance capacity plus internal structure/function, it seems possible that this would at least contribute to understanding other minds, unless of course the mechanism by which we feel has nothing to do with internal structure.

    3. Grace, I think you’re right in saying that the goal of reverse-engineering here is to produce a machine with the same performance capacity as a human, not necessarily a machine that is exactly the same as a human (i.e., the goal is for the machine to be indistinguishable from a human, not identical to one - for Turing, indistinguishable in a verbal test; for T3 and T4, indistinguishable on a behavioural and then structural level). So for Turing, whether or not a machine has consciousness is not relevant (or testable); we just care about whether this supposed difference (having consciousness or not) produces a noticeable change in performance.

      Additionally though, remember that Turing made the interesting (and amusing) point that external behaviour is usually enough to convince us that other humans are conscious (or, that they think). We could say to everyone we meet “hm, maybe you’re thinking, but maybe I’m just being tricked” and live life assuming we are the only thinking thing in the universe, but instead, as Turing puts it, "it is usual to have the polite convention that everyone thinks”. So in addition to pointing out that the consciousness objection isn’t directly relevant, he asks why we would hold a machine to a higher standard of proof than other humans when it comes to demonstrating their consciousness. I thought this was quite a clever response on his part :)

    4. Caroline, you freed me of the need to reply!

      Grace, since Eric is T3, not T4, do you think you've learned something that makes it OK to kick him? (I assume you would say no. My question to you: Why not?)

    5. When reading the other-minds argument that Caroline mentioned, it seemed to me that there is a rational reason to hold machines to a higher standard (or to be more doubtful of their consciousness) than other humans. Though you can never know it for sure, it is reasonable to assume that other humans have roughly the same internal structure as you do: we know we share very similar DNA, we can see that we grow and develop in the same way, etc. Because of this, and because you know that you, with your standard human brain, are conscious, it seems reasonable to infer that another human would have a brain with similar capacity and also be conscious. We can't make that assumption as easily about a machine whose creation process and internal structure are completely different.

      I know this answer isn't foolproof, because Turing's whole premise is that we should judge the machine only on its external behavior with no regard to internal structure (which would be more T4, I imagine), but I do feel this distinction is relevant to our common conception of the other-minds problem.

    6. Louise, good point, but T4 is a Turing Test too. So I ask you the same question I asked Grace above: Does the difference you point out justify kicking Eric? If so, why? and if not, why not?

      We're "Turing-testing" other people every day, in our everyday "solutions" to the other-minds problem. But are we doing T4? Do we need to? Why?

    7. Given that Eric is T3, I would not be able to justify kicking him. I say this because I would not kick another human and since Eric is indistinguishable from us in sensorimotor performance capacity I have no way to differentiate. Further, if Eric is indistinguishable, then I should be applying the same conventions to him as anyone else. Thus, as we tend to assume that others think even though we do not know, I would have to assume the same about Eric.

    8. In reading Louise's response to Caroline's point, I think it's really interesting that there are grounds to give humans the benefit of the doubt about consciousness based on physiological similarities or, more broadly, internal structures. In this sense, I wonder whether, if the robot were to pass T4, we could assume it to be conscious? Again, understanding that Turing positioned consciousness to be outside the scope of his behaviour-based test, I think it comes back to the question of how we define consciousness in the first place. In the paper 'Robots: Machines or Artificially Created Life?' Putnam attempted to answer the question "are robots conscious?". This problem, denoted the problem of Minds of Machines, provides us an interesting lens for approaching the Other Minds problem. Including robots in our linguistic community may mean that there are equivalences in their sensorimotor performance that are indicative of consciousness. And in that case, a robot being T3 would be enough to be considered conscious. All that is to say, I think kicking Eric is not justified.

    9. Grace, good response. But isn't it also true that this is not just how we "reason" about Eric, but the thought of kicking him feels just as unthinkable as with any other person? Turing made an intellectual point (about "intelligence") but it also leads to a moral point, about empathy and decency...

      Melissa, do you really think that not-kicking Eric is a matter of social or linguistic convention? Or might it come from the same source as whatever it is that makes us not want to kick humans (and other sentient creatures)? Doesn't that all come with the territory of indistinguishability when it comes to other-mind-reading?

    10. I think that even if we were to distill whether kicking Eric was morally right or wrong based on x, y, z, different people will subscribe to different reasons and we wouldn't come to a single, universally accepted belief about why not to kick Eric. However, sticking to your question, Professor (and slightly contradicting my initial point), I personally do not think that not kicking Eric is a matter of social or linguistic convention. And yes, it very much is in the territory of indistinguishability. I'll likely bring up this example in other skywritings because it's so pertinent to what we are studying in this class, but Jibo, a popular home robot whose servers went dark in 2019, had users/consumers/patrons who bought the family-friendly bot and were undeniably impacted beyond what I consider appropriate for an inanimate object or piece of software being discontinued or updated. This article describes the funerals that real people held for this robot: https://www.theverge.com/2019/6/19/18682780/jibo-death-server-update-social-robot-mourning. Back to the original point: if people were to feel so deeply about this bot (especially the children, whose perception of other minds, while developed, is not quite that of adults), surely the children would not be okay with kicking Jibo either? And the reason for this, I think, is very much indistinguishability when it comes to other-mind reading.

  4. Turing suggests that cognition follows successive serial algorithms to produce cognitive elements. In the text he uses that definition of cognition and wonders whether it is possible to reverse-engineer a machine that has all the necessary qualities to be mistaken for a human, totally and indistinguishably. If that were the case, would it mean that cognition is equivalent to computation? Nonetheless, which aspects could be non-differentiable between humans and machines? A machine could in theory replicate emotions, create new things… Turing argues that the machine would not “create” new things per se, but rather apply concepts and implicit knowledge that it hadn’t used until then. It is possible to create algorithms that learn from data and then do things their programmers didn’t expect. The Turing Test attempts to see whether we can explain, through reverse engineering, all of human cognition, while acknowledging that we will never know if the machine possesses the feeling of thinking.

    Replies
    1. "...all the necessary qualities to be..." indistinguishable from a human. Whether it's a mistake to assume that T3 feels is a different matter (the other-minds problem).

      T2 is another story, if it can be passed through computation alone. We'll get to that in Week 3. Searle's Chinese Room Argument is that a purely computational T2 would not understand (i.e., would not feel what it feels like to understand).

      ("Stevan Says": that (1) only a T3 could pass T2; that (2) a T3 robot is necessarily not just a computer, computing; and that (3) Turing was not a computationalist, so he would have agreed.)

      Was Turing -- the co-inventor of the computer and the proposer of the Turing Test as the test of whether cogsci has successfully reverse engineered cognition -- a computationalist? (What is computationalism?)

  5. What I quite enjoyed about Turing’s “Computing Machinery and Intelligence” was the inclusion of the arguments that could be used to dispute the validity of the Turing test and whether computers can “think”. In particular, “The Argument from Consciousness” stood out due to Jefferson’s exclusion of artificial signalling.
    When in class, we discussed the different T-levels and the implications of the progressing levels. “No mechanism could feel (and not artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.” We discussed artificial signalling in T3 robots and whether we would kick a robot that would release an electrical signal in response to being hit. Turing states that “the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking”. In short: if you were a computer, introspect on your current state. However, since this is not possible and we cannot be anything other than ourselves, this answer will remain elusive. But does this even matter in the end? In class we concluded that we would not kick a T3 robot if it signalled a pain response, regardless of whether it were artificial signalling or not. So when Jefferson says “[…] (and not merely artificially signal, an easy contrivance)” I don’t think he should have been as dismissive of artificial signalling as he was in his oration.

    Replies
    1. … since ultimately the determination comes down to ourselves and it has been established that from an ethical standpoint we would not kick a t-3 robot because of the presence of artificial signalling

    2. I think you are right in critiquing Jefferson’s dismissal of artificial signaling, and I believe that Turing also holds this view. By Jefferson’s standard, you would have to verify that a machine can think and feel in the ways you quoted in order to prove equal capacity between a human brain and a machine. However, according to Turing, knowing that a machine thinks under Jefferson’s criteria is not possible. Instead, Turing asks a different question to understand the capacity of machines. The Turing Test, rather than asking whether machines can think, asks what they can do, because this is a question we can actually answer without introspection. Our decision not to kick the T3 robot would be evidence bearing on the Turing Test, because it shows the behavior of the machine is indistinguishable from humans, even if it is only artificial signaling. How the robot behaves is the important part for Turing, not determining whether the robot has a mind. So to answer your question, I think Turing would argue that knowing whether a machine thinks does not ultimately matter.

    3. What is "artificial signalling"? A T3 robot is artificial, so everything it does is "artificial signalling." So what, if you can't tell it apart from a real thinking, feeling person?

      Turing doesn't say thinking, which is felt, doesn't matter; just that it is not directly testable by his method. Only doing (capacity) is.

  6. I was struck by Lady Lovelace's objection, and Hartree and Turing’s rebuttals. Hartree posits that the reason Lovelace could not imagine a machine originating anything, or following anything but explicit instructions that the user is fully aware of, is that she had not been exposed to the developments in machinery that would suggest it is possible for computers to surprise us. Turing supports this argument, citing his own experiences of being surprised by machines, and asserts that the notion that machines are incapable of this behavior is a fallacy. This makes me wonder whether we ourselves are close to a new perspective on the question of computation as a representation of cognition. Perhaps the more we understand about consciousness (whether via studying the minds of children, studies of the effects of mind-altering substances, the physical conditions necessary for flourishing, etc.), the closer we come to a realization of this nature. Or perhaps, at the same time, the more we understand about machines (including machine learning, AI, and expanding storage capacity), the closer we are to understanding what actually stands between our current machines and passing the Turing Test. I don’t mean to suggest that we need to understand everything to accomplish this, but we may need more exposure to different and creative ideas to attain a perspective that can help us create a machine capable of passing the Turing Test.

    Replies
    1. One of the things that stands in the way of passing T2 may be that computationalism is wrong, and thinking is not just computation...

      Did Turing think it was?

    2. To add on to that, I feel like another way to answer Lady Lovelace's objection, instead of saying that machines can originate something, is to dispute the (unsaid) claim that humans can originate anything. If you take a more deterministic approach, you could argue that nothing we do is fundamentally new, since it always derives from something we saw in our environment, or from the combination of two different ideas that leads to another one. In this sense, we are no different from machines, in that we are dependent on the stimuli we receive and the rules that guide cognition in our brains. And from that perspective there is no reason why machines couldn't produce ideas considered as "new" as the ones we produce.

      I don't think it really changes anything to the debate but that was a thought I had.

    3. Yes, the "something new" challenge is a red herring. The TT already has it covered: it asks no more nor less than what a real person can do.

      "we are not different [from] machines" What is a machine, other than a causal system, whether living or non-living, natural or human-made, human or nonhuman...?

  7. This is really just a personal-interest question: a part of Turing’s paper that made me go “huh?” was when he started talking about Extrasensory Perception (unless I'm misreading, he seems to think extrasensory perception is possible?). In his earlier section responding to theological objections, he points out that theological objections brought against Copernicus or Galileo in their time seem, in light of our current scientific knowledge, to hold little weight. With regard to ESP, though, it doesn’t seem to bother him that ESP existing would go against our current materialist scientific theory (though he does suggest there is some statistical evidence for telepathy - anyone know what he might be referring to here?). I’m curious how others took this section, since it threw me for a bit of a loop.

    Replies
    1. Yes Turing, a Giant, is not at his best, nor on the turf where he is the expert, when he talks about ESP. (Newton believed in alchemy -- another Giant, out of his element.)

      One of the films ventures a hypothesis about why Turing might have wanted to believe in it: to be able to communicate with a lost love who had died. The film even suggests that that was what inspired the Turing Test.

      But think of the many scientists who are believers, and feel they are doing their work "to the greater glory of God." (I think it's a bit like doing everything to please their parents, even after they are long gone...)

    2. I picked up on the same thing. To answer your first question, he wrote "Unfortunately the statistical evidence, at least for telepathy, is overwhelming.", which seems to suggest he believes telepathy and therefore some forms of ESP are real. I would be curious to see those studies with strong evidence too.

      With regards to it contradicting the materialist view and not bothering him, there's a quote that may suggest he doesn't want to make the jump to believing it yet because it seems too woo-woo: "It is very difficult to rearrange one's ideas so as to fit these new facts in. Once one has accepted them it does not seem a very big step to believe in ghosts and bogies." I am reading between the lines, so I'm not sure I got what he was trying to convey, but it sounds like he is hesitant to fully believe it because then things like ghosts and bogies could be real too.

    3. Laurier: Not worth puzzling over whether Turing believed in ghosts. But here is a fact: "the statistical evidence... for telepathy is" definitely not "overwhelming"!

      Alcock, J. E. (1987). Parapsychology: Science of the anomalous or search for the soul? Behavioral and Brain Sciences, 10(4), 553-565.

  8. After reading Turing’s paper, I enjoyed watching Jack Copeland’s lecture at MIT, which I felt contextualized well the work leading up to Turing’s ideas about computer intelligence. Copeland does an impressive job of breaking down Turing’s work and hypotheses. In particular, I think the unpacking of fiendish objections best clarifies and demystifies Turing’s work. Based on Copeland’s presentation, objections from influential figures like Ned Block, Robert French, and so on misinterpret much of Turing’s argument. For example, many wrongly take Turing to be defining intelligence, and then criticize the definition. However, Turing was not trying to define thinking, but rather arguing that a machine can carry out some kind of process that could be described as thinking, even if it is not identical to how a human thinks. I think this distinction is very important because it reinforces the idea that passing the Turing Test is not the sole condition for something to possess the ability to think; rather, it is one interpretation of how a machine could be used to emulate our notion of intelligence.

    Replies
    1. Being able to reverse-engineer a mechanism that can do everything we can do, indistinguishably from any of us, is rather more than what is conveyed by the vague notion of "emulating"...

  9. Among all of the objections that Turing lists as opposing his own opinions, "The Argument from Consciousness" is the one that makes me think the most. Mostly because, at first glance, I would definitely agree with Professor Jefferson's Lister Oration: how could a machine be constructed to understand and replicate all of the complex intricacies of the human mind? How could a machine truly feel pleasure, grief, or anger, rather than simply computing responses that have been programmed into its code? Turing does make a compelling argument by claiming that this is a solipsist point of view, in that the only true way of knowing how a machine thinks would be to be the machine itself. In the same way, in order to truly know the thoughts and feelings of a man, we would need to become the man himself. But even with this argument, I find it very difficult to leave Professor Jefferson's argument behind...

    Replies
    1. It's not solipsism (look up "solipsism"): no, it's the other-minds problem. There is no way to detect, measure or "prove" feeling directly (even Descartes pointed that out). But the Turing Test is an attempt to capture it indirectly, by reverse-engineering as much as can be reverse-engineered.

      In sentient organisms, feeling somehow "came with the territory," with the evolution of all that performance capacity. But the Blind Watchmaker (evolution) is just as blind as we are to whether organisms feel; the only thing it has to work with is what organisms turn out to need to be able to do (to survive and reproduce).

  10. After watching The Imitation Game when it first came out, the one line from Alan Turing’s character in the movie, “what if only a machine can defeat another machine?” kept resonating in my mind. What if only a machine can understand another machine? This now makes me wonder, are we doing the right thing by making an analogy from computers to how human cognition works? That is, computing machinery has its own, well, machinery, and the resulting intelligence is as different from human intelligence as it is similar.
    Are simulations with machinery that’s totally different from humans’, but give results that are quasi similar to human behavior truly telling us anything about how cognition works?

    Replies
    1. I think your final question, “Are simulations with machinery that’s totally different from humans’, but give results that are quasi similar to human behavior truly telling us anything about how cognition works?” Is interesting to consider. The results from simulations with non-biological systems are only so generalizable to humans due to the differing variables, such as the hardware (computer or brain) or the expected output. But this is also true of non-human animals. The results of animal studies are also only partially generalizable to humans — although the hardware is (debatably) more similar than that of a general purpose computer, we can have more ambiguous output behaviour. It could be possible that these studies with both non-human animals and non-biological systems are not nearly as useful as predicted, however I believe that the value in these sorts of simulations/studies lies in the ability to rule out possibilities of functioning, as well as observing the similarities present within all systems to arrive at a better understanding of what is and is not a part of how human cognition works.

    2. Hi Shandra,

      I found your comment very interesting because that quote also stood out to me from the trailer. If the only way to understand a machine is with another machine, we are actually going around understanding cognition. If humans can't understand the original machine, then using another machine to understand it would just create a second problem: understanding the second machine. This seems like it could go on endlessly: a third machine is built to understand the second, etc., and we never understand cognition.

    3. AD, there is a big difference between "animal models" (in biomedicine) and computational models in cogsci.

      Melody, if the goal is to reverse-engineer the causal mechanism of a machine you do not understand, and you build another machine that does the same thing, but you understand how, because you designed it, why would that be a problem?

    4. I found the discussion of “what if only a machine can defeat/understand another machine?” super interesting. After all, as we learned in class, any cause-effect system can be considered a machine. Thus, we are also machines, as we are governed by cause and effect in the universe. Also, if we create a Turing machine (one that can pass T2, let's say) by simulating our own thinking processes, we need first to understand the causal mechanism of our own thinking processes. Thus, I believe we can understand the Turing machine that we create.

      This may sound a bit sci-fi and irrelevant, but what if the Turing machines that we create keep running their programs, write their own code, and evolve into machines that are out of human control? If that happens, whether humans can still understand them may become a problem...

    5. Yes, that's sci-fi ("robots take over!"). And you're not even talking about robots, but about software.

      I think real people have already shown both what damage can already be done by software and what damage (to the planet and to all its sentient creatures) can already be done by real T5s (us).

  11. Out of Turing’s counter-arguments for the Turing Test, “The Argument from Consciousness” stood out to me as the one which comes closest to my own objections about the Turing Test and AI in general (other than the “Heads in the Sand” Objection, which, as Turing said, doesn’t merit much discussion, but which I felt very called out by). However, attempting to invalidate the Turing Test with “The Argument from Consciousness” is futile, since the Turing Test is not about consciousness, but a machine’s ability to fool a human into thinking it’s human itself.

    Turing knew that this argument was not actually about the Turing Test; it was merely a reaction to it. It is where the human mind naturally goes when discussing intelligent machines: the possibility of AI becoming conscious. And I’ve got to say that I’m quite skeptical of the concept of conscious machines. The fact that there are people out there who believe we can make conscious AI without even understanding consciousness in the first place is an expression of extreme hubris. And I am quite skeptical that it is even possible to understand consciousness in the first place. After all, as Turing puts it, "there is something of a paradox connected with any attempt to localize it." Or, to put it more poetically, a light cannot shine on itself.

    Replies
    1. I think your second paragraph raises a very interesting point - if we don't understand consciousness ourselves, how are we to identify it in an AI? Do we know if some animals have consciousness? I think this extreme hubris relates to Genevieve's point down below, where perhaps man is intimidated by machines, and needs to know they are superior to them. Who is to say we should create a conscious AI if we do not understand it completely? However, this may be paradoxical because as you say, we cannot shine a light on ourselves.

      Turing seems to brush away these worries, saying he does not think "these mysteries necessarily need to be solved before we can answer the question", the question being whether machines think. Is this hubris, a need to be superior to machines, or is there another reasoning here which he does not delve into?

    2. Milo, the TT is not about fooling; it is testing whether a causal explanation (of doing-capacity) actually works. Explaining consciousness (i.e., the capacity to feel, rather than just to do) is harder than just explaining doing-capacity, because you can see and measure doing, but you can't see and measure feeling; you can only measure its correlates -- and those are all doings, not feelings (even neural doings)!

      Alex, who is talking about trying to create a "conscious AI"? (What is that?) Turing is talking about trying to explain human performance capacity and testing whether your explanation works. That a model that produces total performance capacity also feels is just a hope (and not a causal explanation of feeling capacity).

  12. A quote from the paper that stood out to me was: “The “witnesses” can brag, if they consider it advisable, as much as they please about their charms, strength or heroism, but the interrogator cannot demand practical demonstrations”. I found this particularly interesting in comparison to a question that was posed to us in class. The question was: if there were a student in the class (Eric) who was actually a T3 robot, and thus physically and behaviourally indistinguishable from a human, would you kick him? Most of the class seemed to state that they would not, since he was indistinguishable from a human. This discussion makes me wonder about the machine used for the Turing Test. This machine would be considered T2, as it can be completely verbally indistinguishable from a human. However, it does not look like a human, nor does it possess the sensorimotor capabilities of one. Therefore, I would want to pose the same question to the interrogator who believed the T2 machine to be a human. Upon seeing the machine, it being completely distinguishable from a human, would you kick it?

    I believe that more people would agree to kick the T2 machine than the T3 machine, since it would be easy to look at the T2 machine and decide it is not human. I think this brings up an interesting comparison of how the physical appearance of a robot changes how human we regard it as being and how much empathy we are likely to display towards it.

  13. The objection in the text which I found most interesting was that of Lady Lovelace, or the problem of “anything new”, or the “surprise”. In Harnad’s response to this problem (correct me if I am wrong), the problem with this problem is a misdiagnosis of what ‘creativity’ is. If we believe that creativity stems effortlessly from us, as it sometimes seems, as an act of our free will, then how could a robot, unendowed with the gift from G-d that allows for self-determinacy, creation, etc., create something original? Obviously, this question takes a lot for granted: that creativity is effortless, that there is free will, that G-d exists, that ‘originality’ is possible. Even the art created by the quite simple AI bots that we have at the moment evokes emotional responses from people, sometimes even more intensely than paintings by the greats (although this may be just a testament to bad taste, having seen some of the paintings myself!). With no proper definition of creativity, and no means by which we can measure it, how could we hope to create an AI that has ‘it’, whatever ‘it’ is? Have we already? Is this a problem to be sent back to the aesthetic philosophy department before it can be undertaken by the AI magicians?

    Replies
    1. I had the same thought exactly! DALL-E first came to mind, the breakthrough artistic artificial intelligence program by OpenAI (https://openai.com/blog/dall-e/). Through their code, one can enter a text prompt and quickly receive several original images depicting that prompt. For example, one may write “an illustration of a baby daikon radish in a tutu walking a dog” and the author will receive more baby daikon radish illustrations than they know what to do with. I would like to think that we have already created AI that can “originate” with many examples outside DALL-E. One may argue that the creativity only exists under the direction of the prompt writer, and the inspiration of preexisting images in the AI image learning database. But the images it outputs are nevertheless new, never before drawn, photographed, or otherwise created. Is not all human art just drawing from our own dataset of model images? Through centuries of art we see humanity drawing inspiration from reality and external stimuli (nature, food, animals, other humans). We know artists draw inspiration from other artists as well so why is there creative gatekeeping when AI receives inspiration from the human experience? I believe it boils down to the root argument and objection to creative machine thinking, and that is man's desire for superiority, or the “Head in the Sand” objection. Computers can already compute far larger mathematical problems than we ever thought possible. They are approaching mastery of literature, art, music... all the things that we consider core to our humanity. Perhaps this threat to our understanding of the human condition causes the reactionary distrust of AI and machines. Cognitive scientists may need to better recognize this bias in order to create systems that inform us about our own consciousness.

    2. I had a similar reaction when reading the "heads in the sand" objection. I realized I was somewhat guilty of feeling such skepticism towards AI advancements. But I also think our superiority complex as humans goes well beyond thinking -- it's the creativity, and the "feeling" we feel is necessary to showcase creativity, that seem to be some of our most prized characteristics, and those which distinguish us the most from other animals. For me, the thought of AI taking over creative spheres of life is scary for many reasons, the biggest one being that expression through art, writing and music, to me, is something that has been critical not only for the well-being of the human species, but for our ability to retrace history and make sense of the world; removing the underlying operations that bring about creative output somewhat reduces this aspect of art. But I suppose this was irrelevant to Turing’s work.
      Turing said: “I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted”. But I wonder how this claim, and his response to the “heads in the sand” argument, would change were he here today to defend them, given current AI advancements and related ethical dilemmas.

    3. I am adding my comment here because it seems like an extension of the above conversation about Lovelace’s objection.

      This reading was most interesting in the sense that it touched upon a question I asked in my last skywriting: “When modelling a Turing robot, do we account for the fact that cognition can develop?”

      In the article by Turing (1950), Lady Lovelace objects to the question “Can Machines Think?”, stating that machines can do “whatever we know how to order it to perform” and never anything new. According to Lovelace, this inability to originate anything is her reason for why machines can NOT think.

      Turing presents his own view. Again, Turing’s objective is to model a machine (reverse engineer) that can imitate the mind.

      He proposes that when trying to imitate the adult mind, we should initially try to produce a child machine, which can develop into the adult mind through learning. Turing distinguishes between a child's and an adult’s mind and states that a machine too can undergo a process of evolution through learning. (Here he answers my question above.)

      In Learning Machines, rules of the machine can change during the learning process and its teachers (us studying the machine) will be ignorant of what is going on inside. “Processes that are learnt do not produce a 100% certainty of result.”

      Thus, I believe Turing is saying that by including the process of learning when creating a Turing Test passing machine, the objection that machines cannot produce surprises no longer becomes relevant.

      Turing ends his article by speaking of the advantages and disadvantages of a learning method and a systematic method when building the machine but advocates for BOTH approaches.

      I still don’t fully understand what Turing means by the “learning method & systematic method”. If we are unsure of what rules are changing inside the learning machine, how can we be certain that the machine is imitating (the randomness of) our minds?

      Also, (random question) is deep learning AI that is prevalent nowadays following the learning method or the systematic method?

  14. Seeing as this paper was written in 1950, it is very interesting to see how some of Turing’s points still hold up. In objection (5), Arguments from Various Disabilities, he discusses how people often say that “you will never be able to make [a machine] do X,” with “X” being anything from being resourceful to learning from experience to enjoying strawberries and cream. These arguments, which Turing contends are derived from scientific induction (the machines I saw yesterday are incapable, the machines I see today are incapable, therefore the machines I will see tomorrow and the day after will also be incapable), are fascinating to me, as even in the 20th century, people were seeing technology progress and change. Digital storage space has expanded so much that machines are now able to do things people could never have dreamed of. We have virtual assistants that can provide us with resources at the drop of a hat, chatbots that adapt as more people use them, and so much more. Though I have yet to hear of a machine that can enjoy berries and cream, it almost seems like science denial to say that it could never happen. I am unsure as to how we would be able to know or understand whether a machine was experiencing the sensation of enjoyment, but at this point in time, it does not seem like an impossibility.

    Replies
    1. Hi Emma, I also found it fascinating that we got to read Turing’s paper 70 years later, since he talks a lot about the end of the 20th century, which has already passed. For example, he states: “Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”
      At the time, it was strange to think that computers could one day have thoughts of their own, but I find that it’s a common fear today. With growing AI and robot technology, the old fear of “robots taking over the world” that’s often used as a movie or book trope is not so unreasonable anymore. Although I don’t think we’ll get to that point, I think many fewer people would question that a machine could one day think on its own than at the time Turing wrote this.

  15. After reading Turing’s article, I have to say I’m baffled by the ideas he developed at such an early stage of machine development. The following quote about digital computers particularly resonated with me: “By observing the results of its own behaviour, it can modify its own programmes so as to achieve some purpose more effectively. These are possibilities of the near future, rather than Utopian dreams.”
    I had this thought before: if a digital computer can adapt to some new situation and upgrade itself accordingly, then perhaps its potential for “thinking” could be bigger than we could ever imagine. Related to this subject, I liked the fact that Turing discussed the question of discrete versus continuous computation, which he showed couldn’t be told apart in a situation where the interrogator had to make a choice in Turing’s imitation game. This added a complexity to digital computers which I hadn’t thought existed.

  16. In Turing's 'argument for continuity in the nervous system', he states that the human nervous system cannot be considered a discrete-state machine because a small error may have a large effect on the output. Therefore, he says, it is reasonable to argue that a discrete-state machine cannot mimic the nervous system. His rebuttal claims that, by the nature of the imitation game, this does not matter - the interrogator would be constrained from using this difference to differentiate human from computer. Am I correct in my understanding that this rebuttal would no longer hold in T4 - if you cracked open the skull and found a discrete-state machine rather than a human nervous system? Furthermore, that this argument demonstrates Turing's favouring of weak equivalence over strong equivalence?
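    On the "small errors" point, here is a toy numeric illustration (my own, not Turing's): in a continuous-valued system, a one-in-a-million difference in input can grow into a completely different output, whereas a discrete-state machine's logical state is, by construction, immune to sufficiently small perturbations:

        def trajectory(x0, steps=40, r=3.9):
            # A continuous-valued system with sensitive dependence on input.
            x = x0
            for _ in range(steps):
                x = r * x * (1 - x)  # logistic map in its chaotic regime
            return x

        print(trajectory(0.500000))  # inputs differing by one part in a million...
        print(trajectory(0.500001))  # ...end up nowhere near each other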

  17. I found this text to be very enlightening on the topics discussed in class such as the Turing test. To link what we saw in class with the reading, I believe that if a human interrogator was not able to differentiate a machine from a human in the imitation game, then that same machine would pass the T2 level Turing test. Indeed, if I understood correctly, to pass the Turing Test, the system must be able to perform whatever an ordinary human being can do for a lifetime, indistinguishably from any other ordinary person. Similarly, the Imitation Game is not about imitating or believing; it's about developing genuine performance potential in order for machines to be thought of as human.

    I also really enjoyed reading about Turing’s view on machine learning. “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s?” This is a super interesting view on how to program machines to learn. I think machine learning is the future of engineering, so for Turing to have already started talking about how to do this in 1950 is very impressive.

  18. Something that quickly stood out to me when reading this paper was the way the new problem was defined: “will the interrogator decide wrongly as often when the game is played with the computer as he does when it is played between a real man and woman”. From my understanding, this is asking about the extent to which machines can imitate. If a machine's ability to “think” is determined simply by how well it can convince another human that it’s a human, is this question really worth answering, since it says nothing about the internal processes which enable the machine to come to its conclusions? In the “argument from consciousness”, Professor Jefferson suggests that a machine is not guided by any emotion such as joy, grief or anger when computing responses, and as such can never be built to truly mimic the human mind. This relates to my initial idea that a computer's ability simply to manipulate symbols consistently enough to resemble a human being should not be enough to define it as a “thinking machine”. The way humans are influenced by emotions such as disappointment, success, anger, and anxiety is essential to the way we interact with other people and our environment, and inevitably shapes the way we form conclusions.

    Replies
    1. I don’t think Turing is concerned with the machine's internal processes, only that its capabilities match those of a human. I think he is interested in weak AI, to support his claim that the brain can be simulated by a computer, but not necessarily duplicated. This would mean Turing is not concerned with a T4 computer and is not trying to learn how the brain does what it does, only what it does. I think this is still important to know because it shows what possibilities or limitations we might have in thought. Your interest in the internal processes of thought, on the other hand, seems to relate more to the goals of cognitive science, which is more concerned with how the brain works. This is also an important question, but different from what Turing wants to find out.

  19. This reading was very intriguing and surprisingly interesting, as a lot of Turing's ideas hold up quite firmly years after he wrote them. The argument that stood out the most to me was the argument from informality of behaviour. In this section, he talks about how it is not possible to produce a set of rules purporting to describe what a man should do in every conceivable set of circumstances, which would then suggest that we ourselves are not machines. From my understanding of this section, he explains that being regulated by laws of conduct is what would make us (humans) no better than a machine. He also mentions that there are laws of behaviour, which I understood as laws that simply appear to regulate a man’s life. Would this mean that things such as gravity or simple physical laws are the ones that regulate us under such a set of laws? But I guess since those laws aren’t the ones dictating our behaviour, they don’t count as the rules of conduct mentioned, which would then allow us to say that we may be no better than machines?

    He mentions that, were we to observe the existence of such laws, it should be possible to predict future behaviour based on the laws observed. However, he notes that this doesn’t seem to be the case, and that we wouldn’t be able to predict the responses of a discrete machine either. Would this then mean that machines themselves are just as unpredictable as we are, and therefore that men are no better than machines?

  20. In Turing’s 1950 paper, he proposed the famous question ‘Can machines think?’, testing whether a machine can be ‘indistinguishable’ from us based on computational rules (would a judgement based on ‘indistinguishability’ be too human-centric?). Here he was specifically looking for behavioral/weak equivalence. As with the same criticism directed at behaviorism, I think we could not answer this question simply on the basis of behavior (as in T2 & T3); that is, even if the machine passed T2 and T3, we still could not conclude that this machine has cognition.

    Of the other criticisms Turing discussed, I’m most convinced by the consciousness argument and the creativity argument (Arguments 4 and 6). As skywritings from 1a mentioned about emergence, we are still not sure how consciousness arises from brain anatomy, or whether machines can be conscious. This paradoxical question (which ultimately becomes the Other Minds problem) Turing simply avoided in his paper. Secondly, as Lady Lovelace stated, ‘the Analytical Engine has no pretensions to originate anything’. I find this particularly convincing, mostly because we acquire rules differently - humans are able to flexibly design whatever sets of rules we need, sometimes with mistakes; while rules in machines are initially programmed by people and can later be modified by machine learning (but even in the context of machine learning, the machine is not truly ‘autonomous’, so how can it create something new?).

    Someone mentioned in their comment that AI-generated art lacks human creativity and that “removing the underlying operations that bring about creative output somewhat reduces this aspect of art”. This might be off topic, but out of my own interest, I agree with this view as it relates to art. Empirical psychology/aesthetics studies have shown that viewers tend to lower their aesthetic judgement after learning which piece was generated by AI. I suppose the purpose of AI-generated art is not mainly to serve aesthetic experience anyway (based on the agreement that AI cannot really be creative and originate new pieces of art), but to standardize empirical aesthetics research.

    ReplyDelete
  21. I think what is ingenious about the idea of the Turing Test is that it uses the understanding we already have of what thinking is as a tool to test whether a machine can think.
    The difficulty of designing a test of thinking is that it is quite hard to give an explicit definition of what thinking is. Even if we had one, there would be the further difficulty of turning it into an objective criterion and developing a standard test procedure. However, although we cannot explicitly say what thinking is, we all understand it implicitly. We can distinguish a thinking being from a non-thinking being in our everyday life. We cannot tell how we do that or what standard we are following; it is something we know when we see it.
    Hence Turing's strategy: why not just use this implicit understanding to develop a test? Instead of seeking an objective criterion independent of us, let us be the judges: if I cannot tell a machine apart from a thinking being, then the machine can think. The sort of judgement we make every day is good enough. And it is reliable and universal: we are generally consistent in judging whether something can think, and we usually agree with one another on such judgements.

    ReplyDelete
  22. My first impression after reading may seem irrelevant and bold: why bother? What is the point of reverse-engineering human cognitive capacities when there are already so many humans in the world?
    For me, machines, at least in a narrow sense, are designed as tools and helpers able to perform tasks more efficiently, in order to save time and effort for humans. Nowadays people live in a world of technology; there are even plenty of people who cannot live without machines. What makes them so useful and so heavily relied upon is that their computation speed far exceeds that of humans, giving them the ability to perform tasks that humans normally cannot complete in the same amount of time.
    If a machine were indistinguishable from humans, meaning that it also made mistakes and had a considerable reaction time, I feel it would actually lose its original function. To illustrate, take arithmetic as an example:
    If you ask a human a very hard math question, even if they give an answer (without showing their work), you will remain dubious. But if a computer gives you the same answer, you will trust that it is correct.
    If computers were now on a par with humans, then just as it is hard to tell whether somebody is making mistakes or lying, the same would apply to computers. They would become as unpredictable as humans are. Just as you cannot really force a human to do something, you could no longer “force” computers to do their jobs (i.e., use them as tools), as they might have their own thoughts. Wouldn’t this dramatically lower their effectiveness?

    ReplyDelete
    Replies
    1. To answer your question "What is the point of reverse-engineering human cognitive capacities" as well as your comment "If now they are parallel to humans [...] Wouldn’t this dramatically lower the effectiveness?" I think that within the scope of this class, the goal isn't to create robots to help us perform tasks more effectively. The goal is merely to help us solve the easy problem, in other words to give us at least one possible template or explanation of how we can do everything that we can do. Certainly humans are "flawed" in the sense that we lie and we aren't as effective as many machines (for example in arithmetic, as you mentioned). As a result, attempting to replicate these flaws can seem counterproductive from a performance point of view. But as I mentioned earlier, brute productivity is not the objective of Turing machines, but rather advancing our knowledge of the possible mechanisms behind our human abilities.

      Delete
    2. Hi Zhiyuan, I think you actually bring up a good point that makes computers distinct from human cognition/performance: real cognition makes mistakes, and cognitive fatigue exists and is an important property of it. It can also be associated with the developmental aspect of cognition: cognitive abilities develop as infants grow up, and, on the other hand, cognitive resources get used up over a long day (i.e., people have difficulty making decisions, find it hard to concentrate, etc.), not to mention cognitive aging. Computers are nothing like this.

      However, as Isabella commented (and also as part of what I learnt from cognitive science), cognitive science researchers are focusing on how computational modelling can contribute to the understanding of human cognition (as a collective function, not at the level of individual cognitive differences, mistakes, etc.). It is a mutually beneficial approach: it advances our understanding of cognition (or at least gets people thinking about it), and it makes computers/AI more efficient and reliable.

      Delete
  23. One hypothesis I could think of for why a T3 is needed to pass T2 is that T3 capacity is required for the machine aiming to pass T2 to fully comprehend what the symbols it manipulates represent. This idea came from tracing back how, a few days ago, I found out through conversation alone (verbal interaction) that my roommate is colorblind. Being colorblind, my roommate sees colors differently from most people; still, most of the time he behaves the same as people with typical color vision, because he has learned from everyday life what objects' colors are supposed to be. Suppose we roughly treat his process of learning the relevant common sense (such as that a fridge is usually not red) as learning from a handbook, since the corresponding sensory input is missing. Then trying to distinguish a colorblind person from people with normal color vision takes on the feel of a 'Turing test'. Scale that knowledge up from color alone to common sense for humans in general, and the machine would have to actually feel things first in order to achieve behavioural equivalence. This is why I think a T3 is needed to pass T2.

    ReplyDelete
    Replies
    1. (Reply) Hi Peizhao! I think another argument for why a T3 robot is needed to pass T2 is the symbol-grounding problem. For symbol grounding to occur, an individual must have sensorimotor experiences (it can’t be pointing at things and naming/categorizing them all the way down, as Professor Harnad explained in class). Therefore, a robot could not seamlessly and indistinguishably communicate with a human over a lifetime without sensorimotor experiences (i.e., at least a T3 robot is needed to pass T2). I think that people who do not have typical sight, movement or hearing abilities, for instance, would certainly be able to pass all of these tests because of human language. That said, I think these discussions become a bit limited or even exclusionary when we treat sensorimotor skills as necessary for proving consciousness. However, the Turing Test is not (as I understand it) a test to prove humanity or consciousness. No person needs to prove their humanity, and any discussion around this fact is obviously deeply problematic. Rather, I think these tests are purely about the indistinguishability of robots from people (and they are flawed! They are the best we have at the moment, though).

      Delete
  24. I found this reading useful for understanding the T2 test: any robot that can mimic humans via text, i.e., whose verbal input and output are the same as a person’s, passes T2. And because T2 is indifferent to the exact procedures that happen in our minds to get us from input A to output B, and because of the hardware problem, such a robot would not thereby pass T4.
    I also found this reading useful, together with what has been said in class, for understanding Turing’s rebuttals. However, I wonder what Turing really meant by “thinking” in the remark “May not machines carry out something which ought to be described as thinking but which is very different from what a man does?”

    ReplyDelete
  25. Turing posed the interesting idea of simulating a learning machine, that is, simulating a child's mind, because a child's brain involves far fewer complications than an adult's. I agree with this idea very much. I also think it would be of great benefit if we could simulate not only a child's brain but also how it develops into an adult's. That way we would have the best chance of obtaining a simulated brain that corresponds most closely to the real one. (A toy sketch of the 'child machine' idea follows below.)
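    For what it is worth, here is a minimal, hypothetical sketch (in Python) of Turing's child-machine idea: start from a near-blank learner and shape its answers through rewards and punishments (Turing's 'education'), rather than programming the adult behaviour in directly. The task, the weights, and the update rule are my own toy choices, not anything specified in the paper.

        import random

        class ChildMachine:
            def __init__(self, questions, answers):
                # Initially every answer is equally plausible for every question:
                # the "child" starts with (almost) no built-in behaviour.
                self.weights = {q: {a: 1.0 for a in answers} for q in questions}

            def answer(self, question):
                # Sample an answer in proportion to its learned weight.
                options = self.weights[question]
                return random.choices(list(options), weights=list(options.values()))[0]

            def teach(self, question, given_answer, reward):
                # Reward strengthens the association; punishment weakens it.
                w = self.weights[question]
                w[given_answer] = max(0.01, w[given_answer] + (1.0 if reward else -0.5))

        random.seed(0)
        lessons = {"2+2": "4", "3+3": "6"}
        child = ChildMachine(lessons, answers=["4", "6", "8"])
        for _ in range(200):  # repeated schooling
            q = random.choice(list(lessons))
            a = child.answer(q)
            child.teach(q, a, reward=(a == lessons[q]))
        print({q: child.answer(q) for q in lessons})  # should now usually be correct

    Of course, nothing here addresses how the machine would come to understand its answers; it only illustrates Turing's proposal that educating a simple initial machine might be easier than directly programming the adult one.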

    ReplyDelete
