Monday, August 30, 2021

1a. What is Computation?



Optional Reading:
Pylyshyn, Z. W. (1989). Computation in cognitive science. In M. I. Posner (Ed.), Foundations of Cognitive Science. MIT Press.
Overview: Nobody doubts that computers have had a profound influence on the study of human cognition. The very existence of a discipline called Cognitive Science is a tribute to this influence. One of the principal characteristics that distinguishes Cognitive Science from more traditional studies of cognition within Psychology is the extent to which it has been influenced by both the ideas and the techniques of computing. It may come as a surprise to the outsider, then, to discover that there is no unanimity within the discipline on either (a) the nature (and in some cases the desirability) of the influence or (b) what computing is, or at least what its essential character is, as this pertains to Cognitive Science. In this essay I will attempt to comment on both these questions.


Alternative sources for points on which you find Pylyshyn heavy going. (Remember that you do not need to master the technical details for this seminar, you just have to master the ideas, which are clear and simple.)


Milkowski, M. (2013). Computational Theory of Mind. Internet Encyclopedia of Philosophy.


Pylyshyn, Z. W. (1980). Computation and cognition: Issues in the foundations of cognitive science. Behavioral and Brain Sciences, 3(1), 111-132.



Pylyshyn, Z. W. (1984). Computation and cognition. Cambridge, MA: MIT Press.

79 comments:

  1. "A physical symbol system has the necessary and sufficient means for general intelligent action." - What is a physical symbol system?

    The "necessary and sufficient means" in the quote is equivalent to "if and only if", therefore, the authors went on to explain both directions, that is, "It means that any intelligent agent is necessarily a physical symbol system. It also means that a physical symbol system is all that is needed for intelligent action".

    Replies
    1. A "physical symbol system" is a physical system that manipulates symbols, e.g., a computer. (Newell & Simon were among the first to propose "computationalism," the hypothesis that cognition is computation.

      The first few weeks of this course will be about what is right and what is wrong in computationalism.

      (What is "cognition"? what is a "symbol"? what is "symbol-manipulation"? and what is "computation"??)

  2. I find it interesting that in "What is Computation" the author says, “At first, it seems that we ought to be able to use a simulator to solve the halting problem. After all, a simulator tells us the exact output a program generates. That’s actually more information than we need: we just need to know whether there is an output, not what its value is.” My first instinct was to assume that a simulator would in fact simply simulate the original program. So, I guess I am confused about where the author takes this idea from. I would be interested to hear if someone could provide the reasoning for this!

    Replies
    1. The halting problem is interesting, but it can only be explained clearly in a technical computer science or logic/maths course. The halting problem is not about a particular computer program: the problem is whether there is a general way to compute whether any given computer program halts. There isn't. A simulator, by contrast, is for some particular computer program; there is no general simulator for every computer program. A simulator just executes the program, so for a non-halting computer program, the simulator does not halt either. (This question is not relevant to cognitive science.)
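
      To see why, here is the standard diagonalization sketch (my own illustration, not from the reading), in Python-flavored pseudocode; halts() and contrary() are hypothetical names, and halts() is the general decider assumed, for contradiction, to exist:

      # Assume (for contradiction) that halts(program, argument) returns
      # True iff program(argument) eventually stops.
      def contrary(program):
          if halts(program, program):
              while True:    # halts() predicted halting, so loop forever
                  pass
          else:
              return         # halts() predicted looping, so halt at once

      # Does contrary(contrary) halt? If halts(contrary, contrary) is True,
      # contrary(contrary) loops forever; if it is False, it halts at once.
      # Either way halts() answers wrongly, so no general halting decider
      # can exist.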

  3. As far as I understand, given the proposition: "A physical symbol system has the necessary and sufficient means for general intelligent action.", if we are able to build machines such as computers which can carry out intelligent actions (or computations), we can then consider these computers intelligent. I agree with this line of logic.

    However, I wonder if we can call these computers intelligent in the same way that we consider ourselves intelligent. In the physical symbol systems reading, a passage states "a physical symbol system is all that is needed for intelligent action; there is no magic or an as-yet-to-be-discovered quantum phenomenon required [...] Indeed, [this hypothesis] could be false". I do not intend to bring magic into the equation, though I wonder how some unconscious human symbols relate to the idea of a physical symbol system.

    There seem to be unconscious, embodied symbols that are built into the deepest levels of the human psyche, given our evolutionary past as tree-dwellers. One such example is the "world tree" or "tree of life" (https://en.wikipedia.org/wiki/World_tree). Even if we strip the symbol of all of its metaphysical connotations, we still can't dispense with the idea that, as beings who inhabited trees for millions of years, we have the "tree" symbol baked into our being. I wonder how the physical symbol system would account for seemingly unconscious, human symbols such as these.

    Replies
    1. What do you mean by "intelligence"? (Easier to stick to "cognition," which means thinking.)

      No, a physical symbol system (a computer, computing) does not think -- at least not just any computer doing any computation. Some people think that a computer running the right computer programme might be thinking. One candidate for the right computer program has been one that can pass the verbal version of the Turing Test (T2): What is that? And why do some people think it would be thinking?

      What are symbols, in computation? (The "tree of life" is not a symbol in this sense.) What is a symbol system (computation)?

  4. I think it is interesting how cognition can be studied through computation. I find it especially intriguing when Horswill explains how a theoretical computational simulation of the brain shows that there are uncomputable problems for the brain, and thus limitations to human knowledge. Although we know there are unsolvable problems, such as paradoxes, I think we attribute the impossibility as a characteristic of the problem rather than a limit of our own brains. Rarely, if at all, do we consider that there is an absolute ceiling on the knowledge the brain can acquire. In this way, relating computation to cognition has reframed the way cognition is studied. Cognitive science can be understood by what the human brain is not capable of, rather than just what it is capable of.

    Replies
    1. Computation is formal [manipulating symbols based on rules operating on the symbols' (arbitrary) form (shape) not their meaning (if any)].

      So if something is formally uncomputable, that's a limitation on computation, not on knowledge (except in the way that the shape of an object too far away to be observed by human senses or measuring instruments is a limitation on the instruments, not on knowledge).

      However, the limitations of computability might be the limits of human knowledge (cognition) if cognition is just computation. Is it? Why? Or why not?

  5. These readings had me consider the question: what does the notion of computation tell us about our own cognition and our abilities/limits as humans? The answer varies with how you define computation. In a somewhat misguided way, I used to view computers as an overly direct analog for human cognition, and assumed that they were developed to work the way we thought our own brains work, whether that be accurate or not. And in some ways, computer computation does reflect human cognition, as discussed by Horswill (2008), in that types of behavioral equivalence can be found between the two, and in how we create and encode inputs and outputs into representations. But it is interesting to consider the ways in which computers go beyond our own minds/cognitive capabilities, even though they are products of our creation. Some of the ways in which computers learn and compute are processes that we as humans can’t see or make out, known as the “black box”. In psychology, the human mind is also sometimes thought of as a black box, as we don’t fully understand how our consciousness arises. Is the “black box” a limit in our understanding of what computation is, or is it a feature of higher-order computational “machines” (I don’t know if the brain/mind should be called a machine)? Despite our creating and manipulating the notion of “computation”, it too has evolved into a “black box” that we don’t seem to have the tools to decipher. I think our inability to pinpoint computation lies partly in our lack of semantic vocabulary/ability to explain properly, and partly in the potential abilities of computation that are still unknown.

    Replies
    1. Computation was defined by Turing (and Church, and Gödel, and Kleene and Post and von Neumann...) and all the definitions turned out to be equivalent, and there have been no exceptions. So, so far there is only one definition of computation.

      Whether cognition is just computation is another story. Is it? Why or why not?

      And (for kid-sib) what is the definition of computation?

      And, while you're at it, what's cognition?

      ("Black box" just means we don't know -- or not yet.)

  6. Upon reading “What is a physical symbol system”, a point that I am having difficulty wrapping my head around is the section on models. It is stated that all models are “abstractions”, meaning they include select details and none encapsulates the entire dynamics of the world (to my knowledge). Further, models are judged by their usefulness, not by their accuracy. This leads me to wonder how the usefulness of a model is determined. My question is further complicated by the possibility of an agent holding multiple, contradictory models.
    Secondly, how can a single agent have contradictory models? In the delivery robot example, the article lists important details included in the abstraction that the robot needs for a successful delivery. These include steering angles, the size of the parcels, and dimensions of the surrounding area. If the robot had contradicting models, how could that lead to success?

    Replies
    1. Models are "abstract" (what does "abstract" mean?) because (1) they are formal, the way a verbal description of a pipe is not a pipe, and (2) they are approximate: no verbal description of an object is complete -- there's always more to be said about the object -- and even a picture or video of an object is not the real thing (although it's worth more than 1000 words...).

      Since models are just approximate and partial, several partially contradictory ones could still tell you more than any one of them. Contradictions themselves can of course only lead you astray.

    2. I am still confused by this idea. Why would an agent hold contradictory models? Since models are judged by whether they are useful, doesn’t having contradictions between models lead to a longer decision-making process? For the delivery robot example, if it has one model that tells it to turn left and another that tells it to turn right, it will spend more time choosing the route, which lowers its effectiveness and seems useless. It seems that this ineffectiveness could be prevented by creating, in advance, one model that integrates the information provided by the two contradictory models and takes one side on the contradictory part, which I think is what the human mind usually does.

  7. These readings got me thinking about the concept of emergence: the idea that a whole can be greater than the sum of its parts. Specifically, what it means that intelligence—as well as the phenomenon underlying it, consciousness—are emergent phenomena. Certainly, intelligence is emergent, since human intelligence emerges from unintelligent parts: subatomic particles form atoms, which form molecules, which form more complicated molecules, which then interact with other molecules, and so on, all the way up to cells and organs. Together, at a high level of complexity, these individually unintelligent parts form intelligent beings called humans. Computers, as symbol manipulators like ourselves, are intelligent, according to Newell and Simon in their physical symbol system hypothesis.
    What can we conclude from the starting point “computers are intelligent”? Well, that depends on how we define intelligence in the first place. In this case, “intelligence” is defined as symbol manipulation, which is only one of many aspects of human intelligence. It should be safe to conclude that the type of intelligence spoken about here is not human intelligence, but computational intelligence. However, the term “intelligent” in common parlance refers to human intelligence. An “intelligent” Turing Machine may be able to reason, but it is a purely logical entity. Generally, a pure symbol manipulator such as a Turing Machine lacks the irrational component of intelligence (found in such virtues as creativity) which constitutes an important part of human intelligence. Therefore, if the intelligence of a computer is defined as human intelligence, then most computers are certainly not intelligent. However, who knows what sort of intelligence would emerge in a computer built at an even higher level of complexity than modern ones?


    Replies
    1. What does "emergent" mean other than unexpected and unexplained?

      About "intelligence" see earlier replies above. What do you mean by "intelligence" (other than "smart things people can do"?)

      And what do you mean by "higher level of complexity"?

    2. This comment got me thinking as well about emergence, and about whether or not emergence can happen in computers or computational systems. All of the examples of emergence I'm aware of relate to the biological world (life emerges out of cells, consciousness emerges out of neurons), but I wonder if it could happen similarly with a complex enough artificial system.

      For me, this relates to the question of consciousness rather than intelligence: with a complex enough computer or simulator meant to imitate the brain (even if it just imitates the behavior of the brain and not its structure), could emergence happen in the same way and produce an artificial consciousness?

    3. About emergence, see reply to Milo above.

    4. Hi Milo,

      In my own skywriting for 1a, I talked about how studying cognition is like matching puzzle pieces without knowing what the complete picture is. After reading your comment (thank you for your thought bubble!) and getting reminded of the concept of emergence, I think cognition may be an emergent phenomenon. Computation is a part of cognition; thus, computation can be cognition, but cognition is not computation.

      I think we need to distinguish cognition and intelligence though. Under the physical symbol system hypothesis, intelligence is not as specific to humans as we think it is. One of the readings states that under this hypothesis, “any intelligent agent is necessarily a physical symbol system”. Thus anything -- thing, organism, whatever it may be and however arbitrary -- can still be intelligent as long as it can manipulate symbols. To me, it feels like this hypothesis conceptualizes intelligence and computation as close synonyms. So, intelligence is indeed a part of cognition, but it is not cognition itself.

      UPDATE: That was my initial thought after reading your comment but after seeing prof Harnad’s comment about emergence, now I’m thinking cognition may or may not be greater than the sum of its parts such as computation or intelligence (pieces in my puzzle metaphor) and we just don’t know yet because we don’t know the entirety of its whole.

    5. Iris, what is emergence, and what do you think it explains (and how)?

      Cognition (thinking) is something that organisms do, although we can't observe it directly, only through the observable evidence of what organisms can do (and their physiology).

      "Intelligence" is just a term of art, an amalgam of what organisms have the capability to do and a bit of theory-of-mind intuition that it feels like something to be able to do all that.

      "Artificial" Intelligence refers to devices we build, that almost certainly do not feel, yet have some of the (toy) capacities that feeling organisms.

  8. In "what is computation", it's stated that "By modeling neural systems as computational systems, we can better understand
    their function. And in some experimental treatments,
    such as cochlear implants, we can actually replace
    damaged components with computing systems that
    are, so much as possible, behaviorally equivalent."

    Although cochlear implants physically allow people to hear the same frequencies as those with proper hearing, I wonder how much of the auditory process is actually able to be "replaced". Our auditory experience is not just the stimulus that we hear but also the thoughts and reactions that come with it. For example, a specific sound could trigger a memory, or another sound could trigger fear. If the activation between the auditory reception and memory were damaged, would we one day be able to "replace" that function as well? If a sound no longer caused fear, would we be able to "replace" the effect of fear from that sound (e.g. a loud noise)? I can see how stimuli can be replaced by computational devices, but I'm hesitant to believe that psychological processes can be represented computationally in the near future (although I would be happily surprised).

    Replies
    1. Cochlear implants replace hearing capacity (acoustic sensors), not memories or emotions.

      And cochlear implants, though they contain some computational components, are not computers.

      What do you mean by "represented" computationally?

    2. Hi Professor, I meant that I wonder if we will be able to code/build a robot/machine that can carry out the same psychological processes.

    3. Melody, what do you mean "do the same psychological processes"? Some devices can do some of the same kind of things that sentient organisms can do, but for the processes that produce this capacity to be "psychological" means that the device has to be sentient (i.e., able to feel something), doesn't it?

    4. Looking back at this, I see how my argument doesn't make much sense. I think this eventually brings us back to the Other Minds Problem. I was thinking that we wouldn't know if the device can actually understand the sensory stimuli and process it the same way we do, but I guess we don't really know if other humans do as well so that doesn't actually matter.

    5. Hello Professor Harnad,
      Looking back at your replies to Melody's skywriting, I am a bit confused by what you mean by 'though [cochlear implants] contain some computational components, are not computers.' Isn't anything that can perform computation a computer, or is that just the definition of a machine?

      Also, I understand that the implants themselves might not be involved in the cognitive processes, but they do happen to be part of the sensory system, right? So, if the sensors don't transduce input the same way a normal cochlea would, that would lead to a completely different experience of the world, wouldn't it?

  9. Reading this collection of sources made me think more deeply into a point discussed briefly last class; that we cannot learn anything about most processes that occur in the brain through simply thinking about them (i.e. using our brains). One sentence that brought this to mind was in the second reading 'What is Computation', "since we know that computers can’t solve certain problems (the uncomputable problems), that must mean that brains can’t solve the uncomputable problems either". I find there is a funny relationship between the limits of the human brain and the limits of computers in that computers are essentially conceived by the power of human brains and there is a colossal list of things the human brain can do that computers cannot - at the same time, there are many things that modern computers can do that a human would not be able to complete in their lifetime. This becomes a question of optimality and in many problem spaces, computers are exponentially more optimal than humans. To return to the point from class, the brain does have substantial limits, not just in introspection but in time complexity. I think it's fair to say that we create computers largely as a crutch for our own limits, however, for all their different strengths and weaknesses, brains and computers are both stumped by uncomputable problems.

    Replies
    1. About uncomputability see replies above.

      Evolution does not optimize; neither does the Turing Test (whether purely verbal or verbal + robotic). It just does as well as any average human can do.

  10. I wanted to draw attention in my skywriting to how intelligence is defined in the text “What is Computation”. Resting upon Turing’s work and his namesake test, Horswill argues that intelligence “[…] is ultimately a behavioral phenomenon” (p.16) and that it is computational. This definition percolates throughout the entire text, and we aren’t given an alternative definition of intelligence that is proper to humans. Given how the term “intelligence” was historically used to differentiate our species from others, and how its association with humans gave rise to the term “artificial intelligence” for objects, I can’t help but wonder if the word “intelligence” is now obsolete when it comes to describing our species. If any object that can perform computations is deemed to be intelligent (to varying degrees), why does the adjective persist in our vernacular, and why do we not adopt another term more closely related to "consciousness" and other traits that are still inherently “human”? I would be interested to know if there are any words or concepts that are in the process of replacing the archaic use of intelligence to describe humans.

    Replies
    1. "Intelligence" (always vague) has been replace by "cognition" but still remains to be explained by cognitive science. It was anthropocentric hubris to suppose that only the human species is "intelligent'" and it's just off-the-mark to say that only the human species cognizes (i.e., thinks) or that only the human species is "conscious" (i.e., sentient, which means able to feel). More about this later.

    2. I believe that this will be touched upon later when we go over the Other Minds Problem, but would it be right to think that we will never reach consensus on the definition of intelligence and, more particularly, cognition for living species because of our innate incapacity to know if others cognize? Is there a possibility, in theory, for neuroscience to demonstrate that individuals within and across species think the same way? Is the definition of cognition inherently subjective?

  11. “Today, the overwhelming majority of what we consider to be computation is done by machines.   But at the same time, we still somehow associate it with thought…but many people argue that the brain is fundamentally a computer.  If that’s true, does it mean that (all) thought is computation?  Or that all computation is thought?”

    I find this an interesting question because it underpins the question of “what is cognition”, and as we discussed in the first lecture, there is no definitive answer to this question. Describing the brain as a computer is reductive, in a sense, as it designates all the processes of the brain as “just” computation. However, the brain is capable of many more types of computation than a computer (computing machine) is. For example, Horswill states that it is not fair to ask an adding machine what the capital of India is — but the brain is capable of both these types of computation, in addition to an array of other processes. Furthermore, it is important to clarify whether ‘thought’, in Horswill’s use, represents only the conscious processes within the brain, or whether it also includes the unconscious processes. I find it questionable to label some of the unconscious processes, such as memory consolidation/retrieval, object recognition or emotional responding, as computations, for they do not resemble a computational problem with a set of possible questions, each of which has a desired correct answer.

    Replies
    1. What are different kinds of "computation"? (What is computation? See replies above.)

      A computer is just a piece of hardware that can execute computations. No one is saying the brain is just a computer. But it can execute computations (since we can). The question is whether cognition is just the execution of computations.

      As in the class discussion of (someone's -- whose?) recalling the name of their 3rd grade school-teacher, we are unconscious of how we do it; introspection cannot discover how our brain does these things. But although we are unconscious of how we do things we are conscious while we are doing them. Important distinction. What our brain does while we are completely unconscious (e.g., during delta sleep or under general anesthesia) is even more of a mystery, whether or not we want to call that cognition too.

      But "unconscious processes such as emotional responding..."? We may be unconscious of the processes but we are certainly conscious of (i.e., feel) the emotion).

    2. AD, your comment is interesting and definitely reinforces Horswill's point that "it’s not entirely clear what computation really is”. If we look at the functional model of computation, then like you said, it is questionable to suggest that unconscious brain processes are computations, as they don't really work as computational problems. In fact, as Horswill says, the functional model describes computation pretty well in terms of math and programming, because they are mostly about deriving an output from an input, but it is restrictive in what it considers to be computation. The functional model is probably too strict to consider unconscious brain processes as computation, especially given that it is not always clear whether the behaviour or process is caused by a specific input, as is the case, for instance, for brain activity that works to maintain homeostasis. Even though the functional model may be too strict to consider this as computation, the goal of such activity is still a specific outcome or brain state, and there are pre-determined/evolutionarily determined procedures aimed at arriving at that outcome. Thus, a more flexible model may treat unconscious brain activity as computation.
      On the other hand, the functional model may treat conscious brain processes as computation given their more systematic/procedural nature — but now there is the question of whether irregularities or errors in them still count as computation. For instance, if the brain intends to make a specific limb movement but fails to do so because of a spontaneous dysfunction (or a neurological disorder), is this a failed or “bad” computation, given that it failed to arrive at the expected outcome/output? A computer program will run if it is built correctly, unless there are outside circumstances that alter its functioning — but that is not how living systems work. Does computation even account for these kinds of mistakes (whether spontaneous or not) that happen despite adherence to procedures?

    3. Juliette, in this course, computation means Turing computation, which means manipulating symbols on the basis of rules (algorithms) that operate on the (arbitrary) shape of the symbols, not their meanings (if any). Computation is purely formal; it has to be physically implemented in some way (e.g., by a Turing Machine, or computer) but the physical properties of the computer are not relevant to the computation (program): Another computer, with other physical properties, could have executed exactly the same computation.
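
      To make "rules operating on the shape of the symbols" concrete, here is a minimal sketch (my own illustration, not from the readings) of a Turing-style rule table in Python. The rules mention only symbol shapes ("1", "_") and states; the interpretation (unary numbers, addition) is entirely in our heads:

      # Each rule maps (state, symbol-shape) to (new state, symbol to
      # write, head move). These two rules append a '1' to a block of
      # 1s -- interpretable (by us) as "add 1" in unary notation.
      RULES = {
          ("scan", "1"): ("scan", "1", 1),   # skip over existing 1s
          ("scan", "_"): ("done", "1", 0),   # first blank: write 1, halt
      }

      def run(tape, state="scan", head=0):
          tape = list(tape)
          while state != "done":
              state, write, move = RULES[(state, tape[head])]
              tape[head] = write
              head += move
          return "".join(tape)

      print(run("111_"))  # -> "1111" (3 + 1, under our interpretation)

      Any physical device that follows the same rule table executes the same computation, which is the implementation-independence described above.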

      The brain does not just perform cognitive functions, but also "vegetative" ones, such as homeostasis, maintaining blood sugar levels, temperature, oxygen intake, balance, etc. These functions cannot be just computations, executed by a computer, though they can be simulated (modelled) by a computer. Movement is not computation; secretion is not computation. Chemical interactions are not computation.

      The interesting but difficult question is whether and why cognitive processes, although we may be just as unconscious of them as vegetative processes, nevertheless occur while we are conscious (thinking). We can call some processes that occur while we are not conscious (such as the consolidation of memories during dreamless sleep, if it occurs) "cognitive," but otherwise they are like vegetative processes. Consciousness (i.e., being in a felt state: a state that it feels like something to be in) is the mark of the cognitive, but we don't know why.

      As to whether cognition is just computation -- we'll get to that in Week 3 on Searle.

  12. After the reading ‘What is Computation’, I have been left with questions regarding consciousness. Specifically, the section about computational neuroscience claims that “…it should be possible in principle to simulate the whole brain by simulating the individual neurons and connecting the simulations together”. However, it is not mentioned or discussed what this would mean for consciousness and thought. If a computer can fully simulate a human brain, does that mean the computer would have consciousness and be able to think the same as a brain?

    This problem of consciousness also plays into the Imitation Game. Namely, if a computer can fool humans into thinking it is also a human, does that mean the computer has consciousness or thought? Humans view the ability to think as the facet that differentiates us from computers or animals, and it would therefore be a factor in determining whether you were talking to a human. But if the computer could fool a human, then it could be argued that the computer has consciousness or thought, or at the very least can simulate consciousness. However, this raises even more questions about consciousness, computation and the human brain.

    Replies
    1. If a computer can simulate a waterfall or an airplane, does that mean it can be wet, or fly? (A simulated brain is just symbols and symbol manipulations, running on some hardware. A brain isn't.) What does "simulate" mean? (And what does "compute" mean?)

      The Turing test is "just" behavioral: it is about producing the capacity to do anything and everything humans can do, indistinguishably (to humans) from the way humans do it. It is not a game, despite the title. It is a call to reverse-engineer human cognitive capacity, in order to explain the underlying mechanism that produces it. Turing stated explicitly that his Test cannot test whether (or what) people feel. Just what they can do.

      About whether computers (or robots) can feel and not just do we'll discuss more later. About anthropocentric hubris, see earlier replies.

  13. The level of abstraction at which a computer models an environment seems to me a very interesting concept. It could be said (accurately or not) that individual people also model the world in different ways. For example, myths and stories are a very high-level description of the world that serves (or served) a practical use to us; religion served as a moral compass and framework for human behavior.
    Science attempts to model the world at a lower level of abstraction (to make it more predictable and "real"), but even it is limited by the human brain and human tools (languages, our perception of the environment, computational technology, etc.).

    Another way to see it is that our brain also models the world in a high-level, very abstract description (i.e. the visual modeling of our environment created by the sensory cells in our eyes) that leaves out a lot of details, does not sense much of the actual real world (i.e. we only perceive a small portion of the color spectrum) and that ''outputs'' in a very practical way (at the expense of ''accuracy'').
    Another (maybe) interesting thought I had was that dreams are also a kind of very abstract, high-level modeling that our brain creates (perhaps a practical model of our subconscious). In that frame of thinking, might dreams be an environment that gives us insight into the internal processes of cognition?

    Replies
    1. I don't know why it shows as unknown. I am Elyass Asmar.

    2. Elyass, what do you mean by "modelling"?

      A computational model is a string of symbols that can be interpreted as meaning something (but the interpretation is not part of the computation).

    3. I should have used "model" instead of modeling. In the same way it is used in the text "An agent can use physical symbol systems to model the world." - Elyass

    4. I meant: "What do you understand "model/modeling" to mean? Explain to kid-sib...

    5. I would explain modeling as an estimation of the world: attached informational components that reduce uncertainty.

  14. Of the three articles of this set of readings, I found Ian Horswill’s “What is Computation” to be the most compelling. His description of computation as an “idea in flux” stuck with me, as I feel that any system purporting to deal with symbols must always be in flux -- a characteristic inherent to symbols, which are constantly shifting in meaning/value/use as the symbols related to them, too, shift. I was intrigued by Horswill’s “imperative model”, although I must confess I did not quite understand how it functions. If the imperative model is a moving-beyond from the principle of behavioural equivalence, which cathects to input and output as a means of determining sameness, then I am definitely all for it!

    Our discussion in class about whether or not one would kick a robot’s head asked this same question: does an entity that produces conscious-seeming responses albeit having a completely different physical structure from any animal have consciousness (whatever that means)? Or, would you kick it in the head? Unfortunately, if AI is being built with the same theory of symbol systems as held by the third reading, “Representations”, I cannot imagine that any resulting robot would have anything resembling consciousness (again, whatever that is). The idea of the “physical symbol system”, which seems to me to be essentially logocentrism, has been quite fervently called into question in the last few decades. For someone who has opted out of this conception of language/symbol systems, which purports each symbol to have a distinct physical or real referent, a robot relying on such a conception of language would not be a very engaging conversational partner...

    Replies
    1. What do you mean by "symbol"? In computation it is something purely formal: an arbitrary shape (like 0 or 1) that is manipulated on the basis of formal rules (algorithms, programs) like "if you see a 0, print a 1." The computations we are interested in are the ones that are also interpretable as meaning something, or solving some problem. But the interpretation is not in the symbol system; it is in the head of the user.

      As to whether a formal symbol-manipulating system can be conscious -- i.e., whether it can have felt states -- this will be discussed in weeks 3 and 5.

      But I can say already that a robot (which has sensors and effectors) is not just a formal symbol-manipulating system.

  15. The concept that stood out the most - and tied the readings' content together - was behavioural equivalence. I found that the readings corroborated my understanding of each other. For example, the section "Universal Turing Machines" on alanturing.net became clearer when paraphrased as: Turing machines have a behaviorally equivalent universal Turing machine (according to my understanding at least). Behavioural equivalence is also central to the question posed by Ian Horswill. I daresay the image of a concrete computer comes to the mind of most people when they hear the expression "the mind as a computer". But from an engineering perspective, are we not trying to build a computer that is like a mind? That is, a device that is made of a different material but produces the same answer. I do concede that the expression is misleading in popular language. But it seems that it is only the abstract version of computation, still an organic concept, that makes the question "is the mind a computer?" relevant.
    In addition, the point that "many programs don't return results in the usual sense" struck me. Because applications on a smartphone respond to my actions, such as clicking 'delete' for a photo in the gallery, I easily thought of them as always reacting, and returning a result. So, it was an insight that these programs do not terminate (and they should keep on running!)

    Replies
    1. Yes, a computer model of an organism's cognitive capacity (which also means behavioral capacity) is based on behavioral (input/output, or "weak") equivalence -- although you can make the equivalence stronger if you also require "strong equivalence," i.e., not just I/O equivalence between two computers, but the same algorithm or computer program.

      We will see that this is related to some of the differences between the T2, T3 and T4 versions of the Turing Test. In a more simplistic way it is also related to the hardware/software distinction: See replies above about the independence of formal symbol systems from the physical details of the physical system that is executing the symbol manipulations. It is also related to the question of whether "computationalism" (cognition = computation) is correct.

      But neither computer modellers nor computationalists are behaviorists. They don't just describe the behavior (i.e., the I/O): they provide a causal mechanism that can produce it. Hence a possible explanation.

      Smart phone apps have nothing to do with the "halting problem," but the halting problem has next to nothing to do with cognition or cognitive science...
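
      A toy illustration of weak vs. strong equivalence (my own sketch, not from the readings): the two Python functions below are input/output-equivalent on every input, so they are weakly equivalent, but they follow different algorithms, so they are not strongly equivalent.

      def sum_iterative(n):
          # Add 1 + 2 + ... + n one step at a time.
          total = 0
          for i in range(1, n + 1):
              total += i
          return total

      def sum_gauss(n):
          # Use Gauss's closed-form shortcut instead.
          return n * (n + 1) // 2

      # Same I/O behavior, different procedures:
      assert all(sum_iterative(n) == sum_gauss(n) for n in range(100))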

  16. In "Artificial Intelligence: Foundations of Computational Agents", something that got me thinking was when the authors state, in discussing different levels of abstraction useful for creating an non-human entity that emulates human activity, that "The knowledge level is a level of abstraction that considers what an agent knows and believes and what its goals are...Both human and robotic agents can be described at the knowledge level.” It is not immediately obviously to me what describing a robotic agent at the knowledge level would mean, specifically in terms of what its goals are. There would seem to be goals given to the robot by a human programmer, but not goals coming from within the robot itself. We may be able to explain how the robot interacts with the goals it is given, or what processes it goes through to fulfil these goals, but is it possible to have goals, or direction, that are not at some level grounded in an external source (grounded at some level in a goal created by a human)? This ties into the “mind as a computer” question for me, because while envisioning our neuronal connections through a computer simulation might give insight into how our brain functions under certain conditions, it seems not to capture the entire picture of cognition as it leaves out what is initially motivating these processes - where the goals come from that explain why these complex processes are set into motion. I suppose this is where one might turn to evolutionary theory for an explanation, saying that there are actually no consciously willed “goals” but simply certain brain behaviours favoured by natural selection. That our brain’s function the way they do because it has been fitness-enhancing throughout the course of evolutionary history, and that whatever we might experience as intentionality or meaning is just a part of an intentionless, deterministic process (everything, even my doubting that my mind acts like a computer, is a predetermined product of a computational system that has been shaped throughout evolution). As of yet, though, I’m not fully convinced by that answer.

    Replies
    1. What are levels? I understand the difference between the hardware and software level in computation, and I understand that the interpretation of the software is not in the software but in the head of the user. But what are all these other levels? Levels of abstraction are in our heads: This is an apple, a fruit, a food, a thing… But what has that to do with computation, which has only two levels, plus the interpretation?

      Robots are not computers, though they may contain some computers, computing. We can interpret robots as if they were thinking, as if they were in felt states, but are they? If so, how do we know? If not, how do we know?

      The question of what a robot can do, and how, is answerable, and it is irrelevant who “programmed” what. If an algorithm can do something, it can do it; if not, not. And this pertains only to the robot's computational parts, not to all of the robot, which is not just a computer executing an algorithm.

      By “goals” do you mean what the robot is interpretable (by us observers and users) as trying to do? or that the robot feels what it feels like to have and pursue a goal (or feels anything at all, for that matter)?

      “Evolution” has no goals. A plant has tropisms: its leaves grow toward light and its roots grow toward water, for biophysical and biochemical reasons. Plants don’t have goals; it’s just that we can interpret them as if they did. Sentient animals, on the other hand, really do have goals. And, more important, they are sentient!

      How can I know whether anything other than me feels (i.e.,has a mind)? That’s called the “other-minds problem.”

      How and why can organisms do what they can do? That’s called cogsci’s “easy” problem.

      How and why can (some) organisms feel? That’s called cogsci’s “hard” problem.


  17. I think it’s interesting to think of cognition as a form of computation. At first thought I felt there may be some differences, especially considering some of the innate reflexes and cognition that exist below the level of comprehension. Perhaps computationalism does account for this by using precise and vast algorithms to replicate cognition, but it is hard for me to envision because I find it difficult to grasp how much of my consciousness is unconscious.
    Additionally, is computationalism the result of advances in technology? Could it be that we are linking the two given our modern context rather than the most natural fit?
    Does computationalism account for the complexities of human relationships? Is this within the scope of computation? Is it possible to create an algorithm capable of relationship-building the way that humans are? Is there an algorithm that would make a machine feel all of the complexities of human emotion? Is there a way to check for this?
    Furthermore, is it possible to create an algorithm that mirrors the illogical behavior of humans? Although we are largely predictable, is there a way to randomize behavior in a way that is believable enough to reflect our own idiosyncrasies? How about personality? Is this a direction that the field is headed in --- to create AIs that have distinct personalities?
    This quote from the article by Michael Rescorla grounded and contextualized some of the questions I have been thinking about, as well as the questions that Turing was aiming to answer with his machine: “Some philosophers insist that computers, no matter how sophisticated they become, will at best mimic rather than replicate thought. A computer simulation of the weather does not really rain. A computer simulation of flight does not really fly. Even if a computing system could simulate mental activity, why suspect that it would constitute the genuine article?”

    Replies
    1. Whether computationalism is true and cognition is just computation will be discussed in week 3 and 5. Weeks 1 and 2 are about what computation is and the many things it can do (and how).

      Almost everything going on in your head is unconscious, in that you don’t know how it’s doing what you’re doing. But that’s neither here nor there on the question of whether how it’s doing what it’s doing is by computation.

      Computers may become faster and able to store more, but computation remains computation: Turing computation. Rule-based symbol-manipulation.

      What computation alone can do, or what robots (which are not just computers) can do, is a question about whether they can pass the Turing test. (Week 2.)

      No, cogsci is just trying to explain how and why ordinary people can do all the (cognitive) things they can do. Individual differences come later.

      Yes, if computation can do something, that doesn’t mean the brain is using computation to do it. But can computation alone pass the Turing Test (the verbal version, T2)? T3 and T4 are no longer just computers.

      A computer simulation (Strong Church/Turing Thesis) is like a verbal description of a thing: it is not the thing, but it can explain the thing.

  18. “So the real question is whether a sufficiently big PC could simulate the human brain.”

    This quote in "What is Computation?" really stuck with me. It has been very interesting and at times confusing to see the boundaries of technology being pushed in my lifetime. We now have cars that can drive themselves, printers that can reproduce items in a three-dimensional manner, and food created in labs that perfectly replicates “organic” foods, among many more inventions. Human brains are able to create ideas like these and essentially bring them into existence from nothing, but at what point is the “line” drawn? Taking the example of lab-grown meat, how will we be able to distinguish a piece of said meat from a piece of regular meat? Does there come a point where we no longer need to make the distinction? The same extends to this question of the simulated human brain. If we, as human brains inside the shells that are our physical bodies, are able to create a PC that simulates the brain itself, what guarantees that we are able to consider the PC-created brain and a regular brain as two separate entities? Is the PC even simulating the brain, or are we, since we are the ones creating the PC? Would replicated neurons generated by a PC in the same array as a human brain be considered a human brain? After all, a human brain would have conjured up the technology required to produce the PC doing this generation.

    Replies
    1. What is the difference between a heart and a computer simulation of a heart?

    2. A heart is the actual organ that humans possess, whereas a simulation of a heart is only a model of a heart. A simulation can imitate a heart, but it is not a replication. The computer simulation of a heart does not really beat or produce any of the functions a real heart does; it just models them. If we think of a computer simulation of a brain, it becomes evident that although we may be able to imitate its functions, it will not function exactly like a brain, because it is not a brain itself. The simulation won’t actually think like a brain does to produce outputs, although it may produce the same outputs that we would come up with given an input. Thus, I think we can consider the PC-created brain and the regular brain as separate entities.

  19. “We’ll call this the principle of behavioral equivalence: if a person or system reliably produces the right answer, they can be considered to have solved the problem regardless of what procedure or representation(s) they used.” - Horswill

    I find this notion of behavioral equivalence to be quite interesting given that it is such a central aspect of computation. We can exchange systems with entirely different procedures and as long as the output is the same, they are considered functionally equivalent. With the example of the arithmetic problem, it can be solved as an entirely mental operation, performed with the assistance of a pen and paper, or perhaps done with a calculator. Assuming the computation is performed accurately, the same answer would be produced with each method which classifies these systems as behaviorally equivalent.

    This is fascinating to me because it's as if the pen and paper become part of your consciousness. These physical items become extensions of your mental capabilities in conjunction with your physical self. Another example of this phenomenon could be a person with amnesia who records their daily thoughts in a journal to remember them the next day. In this case, the journal becomes a crucial part of their mental processes. If we count this as behaviorally equivalent to the way that memories are consolidated and retrieved in someone with normally functioning memory, then how far can we extend consciousness beyond our minds and bodies? Does it extend further into our environments in the physical world, or onto digital platforms like the internet, to create a shared form of consciousness?

    Replies
    1. See earlier replies for the difference between weak and strong equivalence.

      The Turing Test (T2) is just about weak equivalence. And T3 and T4 are not just computation.

      The “extended mind” used to be one of the modules in this course. If you want to see it, look at an earlier year of the course.

  20. "As artificial agents become more life‐like, will we start to view them as real people?" (What is Computation, p.17)
    I have always found the concept of artificial intelligence to be extremely interesting and profound. To what extent can we claim that an AI is just a machine vs. a real person? After all, our brains' neural systems are already being studied and considered (to a certain extent) as computational systems, even to the point that, with the help of technology, we can replace components of the brain that are damaged (as in the case of cochlear implants). Presently, the advancements in artificial intelligence are not great enough that we would consider AI as "real people". However, in the future, if we are able to make a machine replicate all that makes us "human", what would we think then? Is there truly a difference between our feelings as biological living beings vs. a perfect simulation (AI)? This also raises the question of what makes us human. Is it simply the biological flesh and bones? Additionally, Horswill mentions that "There are people who argue we could live forever by “downloading” ourselves into silicon" (p.17). This concept has been explored in many books and T.V. shows, where a person's consciousness is downloaded so that they can keep "living" even after death. In this case, I have the impression that if a person's close loved one, a mother for example, were downloaded into silicon, they would still consider her to be a real person even though it is a computer performing a perfect simulation. So what is it that really makes us "real"? This also leads to questions regarding AI rights (but we still have some ways to go before needing to answer that one...)

    Replies
    1. All these questions are already being discussed in the two lectures so far, and in the skywriting on the two readings, in connection with the Turing Test and T2, T3, T4. Have a look and repeat your questions in the light of those (rather than just the Horswill article).

  21. In "What is Computation" by Horswill, a question I am having trouble considering is what other limits to computation exist, aside from paradoxes and loops. I suppose this boils down to whether there is a difference between something that is not computable because it doesn’t make sense, versus something that is not computable because the computer would find itself in a loop. For example, is something like “a + b + 8” computable? If it is not, then is it a limit on computation other than a paradox? Or would it just not be considered as something to be computed at all, but rather just a meaningless string of symbols?
    Of course, meaning could be given to the variables a and b, but without that meaning, should this be considered a limit of computation, or something else entirely? If a computer were to receive this input, I believe it would give an error message, rather than halting/stalling. What about a universal machine -- what would it do?

    Replies
    1. The halting problem is not really a cogsci problem. The string you describe would simply be meaningless (or unexecutable). See the definition of computing in the lectures and ppts. The interpretation (meaning) of a computation is not in the computation, which is just syntax (formal symbols and manipulations, as the Turing Machine does), not semantics. But that also leads to the symbol grounding problem (for computationalism). What is computationalism?
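
      A quick illustration of the "unexecutable" case (my own sketch, not from the readings): with its symbols unbound, the string is simply ill-formed input, and an interpreter rejects it immediately rather than looping; bind the symbols and it becomes an ordinary, trivially halting computation.

      # Unbound symbols: evaluation fails at once; nothing loops.
      try:
          eval("a + b + 8")          # 'a' and 'b' are not defined
      except NameError as err:
          print("not executable:", err)

      # Bound symbols: the same string denotes a halting computation.
      add8 = lambda a, b: a + b + 8
      print(add8(1, 2))              # -> 11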

    2. I’m pretty sure this is a rhetorical question, but I just want to make sure that I am clear on this one. So (if I understand this correctly), computation is just systematically interpretable symbol manipulation. Computation is based on algorithms that operate on the shape of the symbols, but not the meaning of those symbols. Computationalism, on the other hand, is the thesis that all human cognition is computation.

  22. The overall message/impression I got from the first introduction lecture is that in the study of cognition, we’re trying to match puzzle pieces without knowing what the complete picture is. So, there are different pieces that could or could not work, and we don’t even know whether these pieces are compatible. At first, this vast uncertainty brought confusion but in respect of kid-sib (for myself), I’ve decided to treat this question like a matching game for now.

    The first “piece” we have is computation, and we need to comprehend what it is before the match. Prof Harnad explains computation as symbol manipulation based on rules. I needed time to really take in the possibilities of such a phrase. Computers these days can essentially pass all the Turing tests, T2, T3, and maybe even T4, if we believe that the artificial neural networks of deep learning resemble the neural networks of the brain. What computers can do is undeniable: they can do many, many things, and essentially provide a model for what it is that cognition is doing.

    The reading “What is a Physical Symbol System” explains models as “representation of the specifics of what is true in the world”. “Models are judged not by whether they are correct, but by whether they are useful.”

    I think this statement applies when using Turing robots as models for human cognition. These models cannot be judged on correctness because they only represent a part of what is true. However, they CAN be judged on usefulness, and that is why I think Prof Harnad continues to call Turing a giant: using computational models is a revolutionary tool for understanding cognition. Hence, a significant piece of the puzzle, NOT the whole reference.

    Replies
    1. 1. Computing is like following the recipe for making a (vegan) cake (but automatically, without knowing what the ingredients [symbols] mean, just what they look like, and what to do with them).

      2. No, cogsci has only reverse-engineered toy problems. There is as yet no T2, T3, or T4.

      3. T3 and T4 cannot just be a computer. (Why?)

      4. What is the difference between computational modelling and reverse-engineering? Use the example of the vacuum cleaner.

      5. Cogsci is about explaining cognitive capacities by reverse-engineering the causal mechanism that produces them.

  23. Is artificial intelligence equivalent to the human mind?

    Alan Turing argued that “if a computer can fool humans into thinking it was human, then it would have to be intelligent even though it wasn’t actually human.” On the basis of the behavioral equivalence principle, if an artificial intelligence system can reliably produce the same computations and answers as a human, then it could be considered equivalent. Indeed, intelligence is a behavioral phenomenon, so as long as an artificial intelligence behaves intelligently and is amenable to computational analysis, it would be valid to assume that it is comparable to a human’s intelligence. However, I don’t think humanity is solely based on intelligence. Even though the knowledge level and symbol level are common to both biological and computational entities, other levels cannot be accounted for in the mechanistic construction of an AI’s mind. Descartes’ dictum, “I think, therefore I am” (Descartes, 1984), not only establishes the existence of the self which thinks and acts but also its freedom from the mechanistic laws to which the human body is subjected. Because it is ‘I’ who perceives the world, Descartes’ notion of ‘I think’ implies subjective experience. Because subjective activity is first-person, all of these subjective activities are non-computational. Descartes considers mental processes to be deliberate acts of the thinking subject. Therefore, this subjective attitude of mind cannot be mapped mechanically in an algorithmic system.

    ReplyDelete
  24. 1. The Turing Test is not about fooling; it is meant to explain how organisms can do what they can do -- by reverse-engineering it.

    2. Turing equivalence provides an Input/Output explanation ("weak equivalence"), but whether there is a thinking/feeling mind inside the mechanism that produces the I/O is a different question. (The other-minds problem.)

    3. Descartes' Cogito does not prove anything about existence or about a self. It just points out that if a system is in a state that it feels like something to be in (such as thinking something), then it is not possible for that system to "doubt" that it is feeling what it is feeling. (It is possible to doubt that what one is thinking is correct, but not that that's what one is thinking.)

    ReplyDelete
  25. Building off the idea of behavioural equivalence in “What is a Physical Symbol System?”, we know that regardless of the steps and methods by which a person or system solves a problem, if it has solved it reliably, it has found the answer. No matter how differently humans and computers may compute the same problem, if the problem is still solved, then it may logically follow that their computational abilities (intelligence?) are commensurable. The symbol levels of humans and computers differ greatly, but behavioural equivalence has me thinking about the nature of human intelligence and consciousness. Is it perhaps appropriate to say that humans function like computers, in that our body, flesh, and brain are all hardware on which the “human software” runs? Just as several different pieces of computer software run on similar binaries and code, propelled by hardware creating electrical signals, so too do humans run software structured on similar thought patterns and intellectual traditions, passed on through generations of humans and driven by electrochemical brain hardware. In “What is Computation” it is emphasized that intelligence, or the capacity for thought, helps us to distinguish our collective humanity from other animals. At what point does computation in computers advance enough that our symbol levels, or the methods by which we both solve human problems, will be so similar that we can no longer find the distinction between ourselves and AI? Will we always be separated by our differences in hardware?

    ReplyDelete
  26. In the paper “What is Computation”, a provisional explanation of computation is given as a sort of “question answering”. The functional model makes this definition more precise by proposing that a computational problem is a group of related questions, each of which has a corresponding desired answer. This suggests that computation, at its core, consists of deriving a desired output from given input(s).

    It intrigues me that the functional model can take several inputs but still yields just one output. This led me to wonder: if a question has several potential answers, say, “How can I get from Montreal to Ottawa without walking?”, does it still count as a computational problem? If a computational problem is defined as having just one desired correct answer, doesn’t this restrict the computable questions to close-ended ones? The paper addresses how the functional model’s narrowing of a program’s behavior to its output is a significant limitation, since it doesn’t account for programs that run on imperatives rather than toward one specific goal; but wouldn’t this “correspondence requirement” of the functional model also be another significant limitation?
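
    To make the functional model concrete for myself, here is a toy sketch of my own (in Python; the names are hypothetical, not from the paper): a computational problem as a set of questions, each paired with one desired answer, and a program that counts as a solution if its outputs match all of them.

        # A tiny computational problem: question -> the one desired answer.
        parity_problem = {0: "even", 1: "odd", 2: "even", 3: "odd"}

        def parity_program(n):
            # A candidate program; it "solves" the problem only if its
            # output matches the desired answer for every question.
            return "even" if n % 2 == 0 else "odd"

        assert all(parity_program(q) == a for q, a in parity_problem.items())

    Presumably a question with several acceptable answers could be modeled by mapping it to a SET of answers instead, but that is my own patch, not something the one-answer functional model covers directly.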

    ReplyDelete
  27. In the reading “What is Computation”, I found it very interesting that the author says intelligence is seen as a behaviour and not an aptitude exclusive to those that have a “special substance such as living tissue or soul”. This view of intelligence makes you question what really sets humans apart. If intelligence is a computational phenomenon, then it is possible for us to create intelligence of our own (AI), as mentioned in the reading. Working on such AI raises the question of how closely we will be able to simulate human intelligence. If we are able to create an AI that simulates the degree of intelligence of a human, what would end up differentiating us from such an AI, and what would let us say that we ourselves aren’t just a simulation?

    The author also raises the problem that we already know computers are not able to solve every problem. If computers are programmed to simulate the brain and they can’t solve every problem, this may show that the brain itself has some limitations. It really makes you think about how much we, as humans, know about the limits of our own brains’ capabilities.

    ReplyDelete
    Replies
    1. Hi Mariana, thank you for your insight! I appreciate that you questioned whether we are able to create an AI that simulates the degree of intelligence of humans, and if so, what the integral factor is that sets humans apart from such an AI. I think human intelligence entails more than what computation could do, like social intelligence. With the assumption that intelligence can be trained, even if an AI can learn to compute, interpret, and manipulate inputs and outputs, there is only so much it can learn from the accumulation of social interactions toward understanding the exact socialisation processes humans carry out, without looking, talking, and behaving like one of us. In this regard, such an AI would have to pass at least T4.

      I also appreciate your comment regarding the limits of human knowledge; I found this question interesting too. Assuming that computationalism is correct, if computers can’t solve the “uncomputable problems”, this implies that the human brain cannot solve certain problems either. I wonder when problems really become uncomputable. And what if it just depends on the way a problem is inputted, or the way we categorise things?

      Delete
  28. After reading through this week’s papers, I gained a better understanding of what computation is in the context of cognitive science. My previous notion of “computation” was limited to the ‘functional model’ described in the paper “What is Computation”. From my point of view, the ‘imperative model’ supplements it by showing that simple input/output equivalence is not sufficient for defining computation. Although I agree that cognition is not computation, the imperative model seems to me somewhat closer to what cognition is, for two reasons: 1) it works by “gradually accumulating scribbles until the answer is on the page”, and is therefore closer to a dynamic, continuous system that keeps updating its internal representations; 2) since real-world situations are not all looking for a single solution/output, and there are uncomputable problems, the imperative model can be better applied in those situations.
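
    To see the contrast for myself, I tried a toy sketch (my own, in Python, not from the paper) treating the same problem, summing 1 to n, both ways:

        def functional_sum(n):
            # Functional model: only the input/output mapping matters.
            return n * (n + 1) // 2

        def imperative_sum(n):
            # Imperative model: the answer accumulates step by step in a
            # mutable scratch pad, like scribbles gathering on a page.
            total = 0
            for i in range(1, n + 1):
                total += i
            return total

        assert functional_sum(100) == imperative_sum(100) == 5050

    The two are input/output-equivalent, but only the second has the step-by-step, state-updating character I am pointing to.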

    “But if the universe is ‘just’ computation, then what isn’t computation?”
    This question from Horswill’s paper also made me think about what cognition can do that computation can’t. One such thing, discussed in a previous comment, is “emergence”: how consciousness arises while we process information from the external world. I think meaning, as discussed in the symbol grounding problem, is likewise an emergent phenomenon that computation cannot have.
    Secondly, while reading “What is a Physical Symbol System”, the discussion of levels of abstraction reminded me of the concept of attention in human cognition, which is missing in computation. The problem computation faces of choosing the right level of abstraction (e.g., delivery robots having to consult only the information relevant to their task) is not encountered in cognition, because our attention can be flexibly selective while we also passively keep a record of irrelevant information (by perceiving it unconsciously). I guess this is another piece of evidence that cognition is not only computation.

    ReplyDelete
    Replies
    1. Hi Christy, thank you for your input! Like you, I found that this week’s readings changed my initial perspective on computation. My initial understanding of computation was also limited to what the “functional model” entails. I was unfamiliar with the author’s perspective, which is why I found most of what the paper discussed intriguing, for instance:
      1. The idea of representations, and how it is significant
      2. The Functional Model’s limitation of program behavior to its output, which leaves out the manipulation and interpretation of representations
      3. Imperative model and meta-programming
      4. The halting problem vs. human brains

      While I was trying to take in all this new information, I also questioned whether cognition is entirely computation. Like you, I wondered whether a continuous computation that allows for interpreting different representations and for meta-programming could perhaps be largely what cognition entails.

      Delete
    2. Hi Natasha, thank you for your comment! Given the definition of computation (symbol manipulation), I think computation could never really be ‘continuous computation’. Symbolic inputs can only be discrete; however fine-grained they are, they can only come closer and closer to being “continuous”. Also, I think the most important distinction between cognition and computation is whether the agent carrying out the procedure (whether a cognitive process or a computational one) can get meaning from it. Cognition should get something more than just an appropriate output.

      Delete
  29. Computation is the manipulation of symbols (such as 0 and 1) according to rules (algorithms, programs). Symbols are just that: symbols. They are not connected to their referents; they need an external thinking mind to give them meaning and connect them. So how do symbols get their meaning?
    This entails that computation cannot be cognition; it cannot be the only thing our minds do. Not only is the human mind able to connect symbols to their referents (the symbol grounding problem), but we also have consciousness, such that we are able to explain how and why we feel a certain way. As humans, it feels like something to know what a word means. While it is evident that computation isn’t enough, a robot is more than computation: it is dynamic, sensing and moving. And while it can ground things, I wonder whether it has meaning. What about consciousness?

    ReplyDelete
  30. These three readings made me realize that after many years of studying, doing and talking about computation, I still had a very bad definition of what it really is. I realized this especially after reading the following excerpt:
    “One hundred years ago, nearly everyone thought of computation as being a mental operation involving numbers. And being a mental operation, it could only be done by people. Today, the overwhelming majority of what we consider to be computation is done by machines. But at the same time, we still somehow associate it with thought.”
    It made me realize that I tend to completely overlook the physical components of computation, as discussed in the reading on physical symbol systems. If I could go back and explain computation to a younger version of myself, I would emphasize the fact that computation involves physical objects that are part of the real world. I would remind myself that it is impossible to come up with an example of computation that doesn’t involve some hardware (a computer, a brain, or even just a calculator); therefore this should be at the forefront of how we think about computation.

    ReplyDelete
    Replies
    1. And it was just a small step from the old idea that "computation is mental" to the new idea ("computationalism") that "the mental is computational."

      Yet even when we learned to do long division it was clear that that was not something we could do only in our heads, especially when it is very long!

      Delete
  31. The Strong Church-Turing Thesis holds that there is a universal computing device that can simulate every process. In the article, the halting problem is raised to question the plausibility of this thesis. While reading about it, I had another question that may be an obstacle for the thesis’s supporters: how do computers simulate randomness? From my previous introductory computer science class, I remember that a computer can only store numbers with limited precision, and the random numbers it produces are really pseudo-random. To generate a genuinely random number, a computer must measure some physical phenomenon outside of itself (e.g., a Geiger-Müller counter measuring the radioactive decay of an atom). According to quantum theory, there is no way to know for sure when radioactive decay will occur, so this is essentially “pure randomness” from the universe. However, the Strong Church-Turing Thesis states that a universal Turing machine could simulate everything in the world; in that case, the “randomness” of radioactive decay becomes a predictable description of a physical entity. Following this idea, there would be no randomness at all in the universe. If we could simulate tossing a coin at the level of description of the movement of each of its particles, which side lands up would clearly no longer be a matter of probability. Given this train of thought, my question becomes: is the Strong Church-Turing Thesis a form of determinism?
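
    To illustrate what I mean by pseudo-randomness, here is a toy sketch of my own (in Python; the constants are the classic ones from C’s rand(), used purely for illustration): a linear congruential generator is fully determined by its seed, so the same seed always replays the exact same “random” sequence.

        def lcg(seed, a=1103515245, c=12345, m=2**31):
            # Linear congruential generator: state -> (a*state + c) mod m.
            state = seed
            while True:
                state = (a * state + c) % m
                yield state

        g1, g2 = lcg(42), lcg(42)
        print([next(g1) for _ in range(3)])  # same seed, so this line...
        print([next(g2) for _ in range(3)])  # ...prints the very same numbers

    Any genuine unpredictability has to be imported from outside the computation, which is exactly why the radioactive-decay case seems like a problem for the thesis.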

    ReplyDelete
    Replies
    1. I think we need to be careful to differentiate between a simulation and the real thing. It is true that a computer can't really generate a random number, but this speaks to the limits of computation, not the limits of the universe. Simulations are just syntactic representations of real phenomena. While they can be useful for predicting, or for learning something about the real world, a simulation is not equal to the real thing, just an approximation (sometimes a very good one, but still just an approximation). I am not saying that determinism is wrong or that true randomness exists; I am just saying that proving that randomness cannot be computed is not a proof that it doesn't exist in the real world. -Elyass

      Delete
  32. I find it very interesting that while there have been many contributions to the notion of computation, a single definition stands. In the text, Pylyshyn says “this notion of mechanism arose in conjunction with attempts to develop a completely formal, content-free foundation for mathematics.” Reading external resources such as the Stanford Encyclopedia of Philosophy entry on the Church-Turing Thesis, it is interesting to see exactly how each contribution built upon the previous one or brought new information to light. Noted contributors include Kleene, Kripke, Hilbert, Gödel, and of course, Church and Turing. This makes sense, as computation is defined as formal manipulation of symbols based on their arbitrary shapes and not on the meanings behind the shapes. Personally, I think this is a really high-level and abstract definition. For example, implementation-independence, whereby the computation is the rules and not the machine executing them, covers nearly every sentient being and more. The sheer number of things that fall under this definition makes attempts to go beyond it and produce deeper definitions or meanings seem almost futile, as if they were “doing too much”. I suppose it is most difficult to distill the complexities of the mind into a rudimentary definition that is widely accepted and universally true.

    ReplyDelete
  33. Hello Professor,
    I wanted to review "weak equivalence" in light of your question-comment in my midterm.
    weak equivalence = same input, same output.
    strong equivalence = same input, same output, AND the internal process/algorithm is also the same.
    In your question "What is the difference between implementation-independence of cognition, and weak equivalence of algorithms?", I assume that weak equivalence 'of' algorithms is still restricted to the same inputs and outputs, without requiring the same rules (= algorithms)?
    Cognition is implemented in the brain, and a certain output to an input can be realized by many brains. This is what I take to be the implementation-independence of cognition.
    What is the key distinction between that and weak equivalence of algorithms?
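
    To check my own understanding, here is a toy sketch (my own, in Python) of two programs that are weakly equivalent (same input/output mapping) but not strongly equivalent (different internal algorithms):

        def factorial_iterative(n):
            # Builds the product step by step in a loop.
            result = 1
            for i in range(2, n + 1):
                result *= i
            return result

        def factorial_recursive(n):
            # Same I/O as above, but a different internal process.
            return 1 if n <= 1 else n * factorial_recursive(n - 1)

        assert all(factorial_iterative(k) == factorial_recursive(k) for k in range(10))

    If I have it right, implementation-independence is the further point that running EITHER of these on any physical hardware would still be the same computation.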

    ReplyDelete
    Replies
    1. Weak and strong equivalence, as well as implementation-independence, apply only to computation. Computationalism is the theory that cognition is just computation. Turing was not [“Stevan says”] a computationalist, but his T2 called only for weak equivalence.

      If cognition is not just computation (i.e., if computationalism is wrong) then, strictly speaking, neither weak/strong equivalence nor implementation-independence applies to cognition (or at least not to the dynamic [i.e., physical] parts of cognition).

      There is a vaguer (philosophers’) notion of "implementation-independence" and "weak/strong equivalence" called "functionalism" that does apply to dynamical mechanisms too: for example, there is more than one way to make a vacuum cleaner (something that can suck up dust) or a toaster (something that can bronze bread). All the ways that can do it would be "functionally equivalent."

      We could then speak of dynamical systems (e.g., T3) with "weak functional equivalence" (same I/O, different internal mechanism) or "strong functional equivalence" (same I/O and same internal mechanism) (T4 or T5). But this is a bit strained when the "input" is just dust or bread and the "output" is just toasted bread or clean rugs.

      Even the input/output distinction is strained with dynamical systems: What is the "input" and the "output" of a volcano, or an ocean, or a galaxy?

      Organisms can be considered as "autonomous dynamical systems," so they can be considered as getting inputs and producing outputs to and from the "outside world." But to a physicist that is all just one dynamical process. The notion of an “autonomous” system is an engineering one. To physicists there is no causally isolated “system.” Even the thermodynamic notion of an “adiabatic” system refers to independence of properties, not of “entities.” Biology’s entities – living organisms – are defined functionally. But for physics an organism’s birth, life, and death is just a dynamical, causal process in time, with changes in dynamical properties along the way.

      No wonder that sentience (feeling) poses a “hard problem” -- for cogsci…

      Delete
