Monday, August 30, 2021

4a. Cook, R. et al. (2014). Mirror neurons: from origin to function

Cook, R., Bird, G., Catmur, C., Press, C., & Heyes, C. (2014). Mirror neurons: From origin to function. Behavioral and Brain Sciences, 37(2), 177-192.

This article argues that mirror neurons originate in sensorimotor associative learning and therefore a new approach is needed to investigate their functions. Mirror neurons were discovered about 20 years ago in the monkey brain, and there is now evidence that they are also present in the human brain. The intriguing feature of many mirror neurons is that they fire not only when the animal is performing an action, such as grasping an object using a power grip, but also when the animal passively observes a similar action performed by another agent. It is widely believed that mirror neurons are a genetic adaptation for action understanding; that they were designed by evolution to fulfill a specific socio-cognitive function. In contrast, we argue that mirror neurons are forged by domain-general processes of associative learning in the course of individual development, and, although they may have psychological functions, they do not necessarily have a specific evolutionary purpose or adaptive function. The evidence supporting this view shows that (1) mirror neurons do not consistently encode action “goals”; (2) the contingency- and context-sensitive nature of associative learning explains the full range of mirror neuron properties; (3) human infants receive enough sensorimotor experience to support associative learning of mirror neurons (“wealth of the stimulus”); and (4) mirror neurons can be changed in radical ways by sensorimotor training. The associative account implies that reliable information about the function of mirror neurons can be obtained only by research based on developmental history, system-level theory, and careful experimentation.


See also:

Rizzolatti, G., & Destro, M. F. (2008). Mirror neurons. Scholarpedia, 3(1), 2055.

Bandera, J. P., Marfil, R., Molina-Tanco, L., Rodriguez, J. A., Bandera, A., & Sandoval, F. (2007). Robot learning by active imitation. INTECH Open Access Publisher.




80 comments:

  1. BELOW is a transcript of conversations from the chat during the Sept 24th class that are worth saving:

    Iris Kim to everyone (12:08 PM)
    Do you guys think you can NOT be a computationalist but still believe in T3 at the same time?

    Melody Zhou to Everyone (12:10 PM)
    Isn’t the TT based on the idea of computationalism? So in that case, I think you can believe a T3 can exist but that it doesn’t represent what it’s meant to do, which is simulate human cognition

    Caroline Bruce-Robertson to Everyone (12:11 PM)
    Or, you could think that even if T3 perfectly mimics human cognition, it’s doing it in a different way

    Iris Kim to everyone (12:11 PM)
    Melody, yes that’s what my intuition tells me but I’m still not convinced that Turing himself is a computationalist lol
    and this is also in line with my own thoughts
    I don’t really think I’m a computationalist (maybe I agree with weak AI but not strong AI) but I still believe that T3 CAN explain cognition
    is that weird? lol

    Caroline Bruce-Robertson to Everyone (12:13 PM)
    Haha I get that. It feels like it should be possible, though not with our current knowledge, to perfectly mimic human cognition, but it also seems like there’s something more than just computation going on in our minds

    Melody Zhou to Everyone (12:14 PM)
    Iris, yeah I also feel like Turing isn’t fully convinced of computationalism, but the TT was based on the idea that computationalism is true. I think that he created it with that assumption, so in that case you can’t really not be a computationalist and believe the TT does what it’s supposed to, if that makes sense

    Laurier Levesque to Everyone (12:15 PM)
    He could have made it with the understanding that the TT isn't a perfect way to answer his questions, but it's the best tool we have

    Caroline Bruce-Robertson to Everyone (12:16 PM)
    To me it seems like Turing wanted to avoid the question that computationalism is the answer to (that cognition is computation). Turing seemed interested only in proving weak equivalence, not strong. So he rephrased “can machines think” as “can a machine act in a way indistinguishable from human thought”

    ReplyDelete
    Replies
    1. Iris Kim to everyone (12:19 PM)
      Caroline, YES weak equivalence, Turing believed in weak equivalence, so that means a Turing Robot doesn’t need to execute the algorithm in the same way, so TECHNICALLY a Turing robot DOESN’T have to understand to pass the test? Am I understanding this right?

      Caroline Bruce-Robertson to Everyone (12:22 PM)
      Yes I think so!

      Caroline Bruce-Robertson to Everyone (12:23 PM)
      And Turing pointed out that we can’t tell if other humans are understanding either, so the fact that we can’t tell if a machine is understanding shouldn’t bother us

      Iris Kim to everyone (12:23 PM)
      so technically, Turing isn’t a computationalist, he only requires weak equivalence
      a computationalist believes in strong equivalence and strong AI?

      Caroline Bruce-Robertson to Everyone (12:24 PM)
      As far as I understand?

      Iris Kim to everyone (12:25 PM)
      thus, COMPUTATIONALISTS believe that a “machine” that can pass the Turing Test (paradoxically, even though Turing himself only required weak equivalence) is understanding?

      Caroline Bruce-Robertson to Everyone (12:26 PM)
      Yeah, or perhaps they just haven’t thought about the understanding issue

      Caroline Bruce-Robertson to Everyone (12:26 PM)
      Also, I think someone could “believe” in computationalism only in the sense that they think it’s currently our best approximation of truth, without really thinking it’s the total picture of what’s going on

      Delete
    2. Iris Kim to everyone (12:27 PM)
      yea, I think I am at that part of the spectrum as of now

      Caroline Bruce-Robertson to Everyone (12:27 PM)
      Haha yeah same
      Like it seems like a useful image

      Caroline Bruce-Robertson to Everyone (12:27 PM)
      But eventually we’ll come up with a more complete understanding

      Genavieve Maloney to Everyone (12:28 PM)
      If you get to the point of memorization of these algorithms to respond to squiggles to make squoggles, don't you in turn gain some sort of understanding of the rules? You may not understand the semantics or what each word signifies, but you can start to gain understanding of structures, grammar? Doesn't AI also gain this understanding?

      Jie Yu Li to Everyone (12:34 PM)
      If you don’t explain the game to an individual and they end up knowing how to play and to win, doesn’t that mean that on some level, they are able to understand the patterns?

      Laurier Levesque to Everyone (12:35 PM)
      Yes, there's actually deep truth in that statement because kids often don't know the full explicit rules of a game but know a successful pattern of playing
      but that's with people, idk if it extends to computers

      Delete
    3. Shandra Gunnoo to Everyone (12:50 PM)
      @ Jie Yu But aren't we kind of looking at the human brain as a computer here? Sure, artificial machines don't have the same physical aspect and don't work the same way as human brains, but we're already seeing advances in AI where just a limited amount of data needs to be shown to a piece of software for it to remarkably 'understand' and reproduce what is being shown

      Jie Yu Li to Everyone (12:54 PM)
      @shandra yes, but to pass T4, the computer's neurological characteristics need to be indistinguishable from ours

      Shandra Gunnoo to Everyone (12:58 PM)
      Ok this might be a stupid question, but aren't we just trying to find out whether the computer is human in that case? Because Turing wants to know, can machines think, right? So what does it matter what the neurological characteristics are?

      Jie Yu Li to Everyone (12:58 PM)
      But feelings are subjective: if a person wearing a VR headset is shown themselves dangling off a beam, they might “feel” like they are falling even when they introspect, yet they understand and know that they are not falling.

      Ainsley Dillon to Everyone (12:59 PM)
      Doesn’t not understanding feel like something as well?

      Fernanda Pérez Gay Juarez, Dr to Everyone (1:00 PM)
      That is correct. Not understanding feels like something as well.

      Emma Nephtali to Everyone (1:00 PM)
      @Ainsley — yes, I think so. At that point, you understand that you do not understand, so once again it is a subjective feeling

      ZIlong Wang to Everyone (1:01 PM)
      So if Searle is the only one who can tell that he doesn't understand Chinese and he is doing computation, then we can either choose to believe him or not, so we can't know whether computation is implementation-independent?

      Delete
    4. Caroline Bruce-Robertson to Everyone (1:06 PM)
      Well, we can imagine ourselves in Searle’s place in his thought experiment, and we can know whether WE would understand


      ZIlong Wang to Everyone (1:08 PM)
      but that again applies only to ourselves and not others, so we still can't know whether what others say is true or not, no?


      Caroline Bruce-Robertson to Everyone (1:11 PM)
      Yeah you’re right!

      Delete
    5. Caroline Bruce-Robertson to Everyone (1:13 PM)
      But if we all act upon a “maybe no one else is thinking/understanding/feeling” level of skepticism, there’s not really anywhere we can go, so it makes sense to have a certain suspension of disbelief with respect to the other minds problem

      Laurier Levesque to Everyone (1:14 PM)
      It's more adaptive to assume other minds are as real as yours

      ZIlong Wang to Everyone (1:15 PM)
      I guess I'm just still not understanding how Searle's periscope works lol

      Iris Kim to Everyone (1:16 PM)
      @Zilong, don’t worry, you’re not alone

      Delete
    6. (This thread should have been posted in 3b rather than 4a)

      No, you cannot believe both that cognition is just computation (“computationalism”) and that cognition is not just computation. (T3 cannot be passed by just a computer.)

      TT’s are not “simulations” or “mimicry”: They can really do what they can do (whether or not there are other ways to do it: “weak equivalence”). It is the capacity that they are trying to reverse-engineer.

      If T2 or T3 do not understand then they have reverse-engineered (1) doing but not (2) feeling. With T3 we can’t know, but with T2, if it can be passed by computation alone, we know it has not reverse-engineered feeling.

      Some computationalists seek only weak equivalence; others (e.g., Pylyshyn) seek strong.

      (Computationalism is certainly not the only way to reverse-engineer cognition.)

      Delete
    7. Whether studying the algorithm that can pass Chinese T2 (if there is one) would help a person understand Chinese is irrelevant. The computer just executes the algorithm, and if computationalism were true, to do that would be to understand Chinese. Searle points out that this would not be true for Chinese T2. (Just as executing a hexadecimal tic-tac-toe algorithm would not make you understand you were playing tic-tac-toe.)

      But remember that a purely computational T2 algorithm is just hypothetical. There are also reasons (the symbol grounding problem) to doubt that T2 could be passed by just computation.

      Playing hexadecimal tic-tac-toe without understanding that you are playing tic-tac-toe is not the same thing as playing tic-tac-toe and understanding that you are playing tic-tac-toe, but not understanding how your brain is doing it. (Cogsci is trying to explain that.)

      If Searle says he does not understand Chinese, you better believe him, just as you should believe him if he says he does not hear a dog-whistle. Yes, not-understanding Chinese and not-hearing a sound feels like something too.

      Searle is just imagining a T2-passing algorithm (because there isn’t one, and probably cannot be one, because of the symbol-grounding problem). But his point is just as valid as with not-understanding tic-tac-toe if you just memorize and execute a tic-tac-toe algorithm coded in some unfamiliar linear (nonspatial) code (and that experiment can actually be done).
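
      (A minimal sketch, in Python, of what such a "linear code" recipe could look like; the hexadecimal codes and rules below are made-up placeholders, not the actual experimental materials, just an illustration of rule-following without interpretation:)

      # Board states and moves are opaque hexadecimal strings. The rule-follower
      # just looks up the incoming string and returns the outgoing string; nothing
      # here encodes rows, columns, X's or O's, so nothing "knows" it is a game.
      RULES = {
          "0x1A3F": "0x4",   # "if you receive 0x1A3F, reply 0x4"
          "0x1A7F": "0x2",
          "0x9B00": "0x8",
          # ... a full recipe would list a response for every reachable input string
      }

      def respond(symbol_string: str) -> str:
          # Pure symbol manipulation: match the squiggle, emit the squoggle.
          return RULES.get(symbol_string, "0x0")   # arbitrary default symbol

      print(respond("0x1A3F"))   # "0x4" -- a legal move, but no tic-tac-toe in sight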

      Delete
    8. A child who is factoring quadratic equations by executing the algorithm without understanding what a quadratic equation or a root is would not understand what they were doing (other than following a symbol-manipulation recipe; a worked sketch of such a recipe follows at the end of this comment).

      Searle’s Periscope penetrates the other-minds barrier by “becoming” the T2-passing computer – because computation is implementation-independent. So if something is a purely computational property, then any hardware that executes the computation (even a person) has to have that property. In the Chinese Room, that property is understanding Chinese. Searle executes the computation, but he doesn’t understand Chinese. So understanding Chinese is not a purely computational property. Searle knows (no Cartesian doubt) that he is not understanding Chinese, and hence he knows the computer passing T2 (i.e., the “other mind”) is not understanding Chinese either, because it feels like something to understand – or not understand – Chinese. And Searle is not understanding. So the “other mind” (the computer) is not a mind. That Periscope on other minds only works if the candidate for being another mind is a computer, and if the hypothesis (computationalism) is that executing the right algorithm (T2) will produce the feeling of understanding.
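
      (The quadratic-recipe point above, sketched in Python; purely illustrative and assuming real roots: the procedure grinds out the answers step by step without any notion of what an equation or a root is.)

      import math

      def factor_quadratic(a: float, b: float, c: float):
          # Mechanically apply the recipe for a*x^2 + b*x + c = 0:
          # discriminant, square root, plug into the formula. No "understanding" anywhere.
          d = b * b - 4 * a * c            # step 1: discriminant (assumed non-negative here)
          r = math.sqrt(d)                 # step 2: its square root
          return ((-b + r) / (2 * a),      # step 3: the two outputs the recipe prescribes
                  (-b - r) / (2 * a))

      # x^2 - 5x + 6 = 0  ->  (3.0, 2.0), i.e., (x - 3)(x - 2)
      print(factor_quadratic(1, -5, 6))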

      Delete
    9. So are we saying that for computationalism, Searle's periscope penetrates the other-minds barrier, but Searle shows that this does not hold up, because Searle doesn't understand Chinese, and if understanding Chinese were purely computational, then Searle should be able to understand just as the computer supposedly does? Ultimately, Searle's periscope only penetrates the other-minds barrier if computationalism is assumed to be true?

      Delete
    10. Grace, that's correct, but please don't post it in this thread, which is 4a. Post it in the Searle thread, either 3a or 3b.

      Delete

    11. MIRROR NEURON DISCUSSION BEGINS BELOW
      vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
      vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv

      Delete
  2. “MN activity relates to the ‘meaning’ of an action and yields a ‘richer understanding,’ ‘real understanding’ or ‘understanding from within.’”

    This seems to be the argument from the genetic take, where MNs are present at birth and from this, understanding is innate. It would also seem that there is some physical representation of understanding, which we see in the presence and activation of MNs. To me, this would mean that MNs possess the connection between symbols and meaning. Additionally, if proven correct, I think this would show potential that understanding could be artificially created in a machine. However, the associative learning theory challenges the genetic theory and argues that understanding is not the function of MNs and that the activations are a result of experience, not an innate connection. In this take, it does not seem that understanding arises from the MNs themselves. If this take was confirmed, we would no longer have any physical indication for how understanding is produced or how to replicate it.

    ReplyDelete
    Replies
    1. We already knew our brains could recognize and imitate what others were doing (whether innately, or through learning) before we discovered mirror neurons.

      Here are some questions to ask ourselves:

      Did the discovery of mirror neurons help us explain how our brains can do it?

      Does it help us reverse-engineer how toy robots could do it (let alone at the T3 level!)?

      In other words: Does the discovery of the brain activity correlated with imitation capacity (or understanding) help us reverse-engineer how the brain causes imitation capacity (or understanding)?

      Delete
    2. After what we discussed in class yesterday, I would answer no--that the discovery of MNs didn't help us explain how our brains can recognize and imitate what others were doing. Like you said, we already knew our brains could do this. Knowing that there are specific neurons for this function doesn't explain HOW we do this function (i.e. the mechanism behind it). Therefore, knowing there are MNs won't get us to achieving T3, but I think it's still a vital step. The physical circuitry can help us understand the mechanisms that allow imitation and recognition of others' actions.

      Delete
    3. Melody, the physical circuitry will help explain how to produce imitation once it helps roboticists design robots that can imitate. (Has it?)

      Delete
  3. I found the reading really interesting because it made me realize what I imagine could be a common problem with evolutionary accounts in humans: they don't really offer testable hypotheses. If you assume that mirror neurons offer a selective advantage, you don't need to build a model of how those neurons came to develop, since you can just leave it to "evolution", which is very hard to prove or disprove (with the exception of poverty of stimulus arguments). So in this case, to be able to contest the evolutionary account, researchers had to build a whole other model that fit the data better, as they had no model of mirror neuron development to criticize. This sort of makes me wonder what other flawed evolutionary arguments are taken as true because they instinctively make sense and are just very hard to counter.

    ReplyDelete
    Replies
    1. I found myself thinking the same thing! It made me realize that it can be useful to frame a problem from the bottom-up using data, rather than top-down from a theory, especially if the hypothesis is not testable. This made me question further how helpful simulation is in answering questions about human cognition. I'm not sure if that is an oversimplification of the aims of computationalism or not, but if anyone has any thoughts I'd love to hear them.

      Delete
    2. For a social species the adaptive advantages of being able to imitate movement and vocalization, as well as the "mind-reading" capacity to recognize what others intend and even what they feel, as well as to understand and produce language -- these advantages are all as evident as the advantages of being able to swim or fly.

      Whether these "mirror" (perception/production) capacities are innate or acquired through a general capacity to learn them is another question.

      (Evolution is "lazy" and "prefers," where possible, to rely on learning to do things rather than pre-coding the capacity to do them in the genes. But does "associative learning" really explain our capacity to imitate or "mind-read"? Will associative learning capacity enable a robot to learn to imitate?)

      Delete
    3. This is a very interesting point, and to build off the idea of flawed evolutionary arguments, I wonder if the debate between a pre-coded mirroring ability and a learned capacity for mirroring is something that will be "solved" in the near future. It is very hard to prove or disprove, as Louise put it, and resolving it could give us a clue into how abilities assumed to be innate could be confirmed or negated, as well as clues about the computational basis of the human brain - whether or not the human brain is simply a supercomputer well beyond the abilities of what we have now, even if we are limited mathematically.

      Delete
    4. @ Louise After week 7 I wanted to come back to your comment. I had not been able to remember the title when I was reading the comments during week 4, but the Spandrels of San Marco paper mentioned in the lecture is a relevant critique to your point. It does not discount evolutionary accounts as a valid explanation, but problematizes the pan-adaptationist programme, which attempts to explain 'everything' with stories of adaptation. https://royalsocietypublishing.org/doi/10.1098/rspb.1979.0086

      Delete
  4. I think that mirror neurons are really interesting to consider when we think about a potential mechanism that carries out a similar process computationally. One thing that stuck out to me was the idea that mirror neurons develop from motor neurons through learning mechanisms via correlated excitation. So, these (originally motor) neurons come to map sensory information onto motor actions, encoding action understanding (to be honest, I don’t really get what “action understanding” is, if anyone has a good grasp on this, please let me know!). Basically, I think it is really interesting that mirror neurons are linked to the identification and replication of actions (imitation!). What I think might be relevant to cognitive science here is how a T3 robot might integrate this kind of complicated process. Moreover, what is the relationship between imitating actions and actual meaning/actual understanding? I think if we get that, the process could be reverse-engineered?

    ReplyDelete
    Replies
    1. From what I understand, 'action understanding' is the matching of the observed action to our own action repertoire. When this happens the object is not really considered, so it is only the physical movements being taken into account. From this matching, we are able to *interpret* what other people are doing by just their actions. I also agree with what you said about the T3 robot. I think that action understanding would be a very interesting capacity to test for passing T3 -- Is the robot able to interpret what we are doing from just our actions? However, before this is possible, wouldn't the T3 robot need adequate sight as well as a repertoire of human actions? If so, wouldn't there be a poverty-of-stimulus problem, as we can't possibly program every single human action into the T3's store?

      Delete
    2. I also found the similarities between the term ‘action understanding’ and Searle’s conception of understanding to be particularly interesting. The article states that: “Attempts to clarify have emphasized that, in comparison with purely visual processing of action, MN activity relates to the “meaning” of an action and yields a “richer understanding”’. However, the article states that there is no clear definition of the term ‘action understanding’ and how it would differ from action perception, recognition or selection. Searle encounters something similar when discussing understanding. In both cases, it is unclear how to measure if one understands. In the case of this article, it is concerned with how or if the mirror neurons understand what action they are seeing. In Searle’s case, he is concerned with how or if machines understand the symbols they are computing. Like you mentioned, it is interesting to see how these conceptions of understanding would apply to a T3 robot, and if it ‘understands’.

      Delete
    3. The test of "action understanding" is whether the same mirror-neuron activity occurs when you see someone stretch out their arm (1) to stretch or (2) to reach an object, when the shape of the movement is the same. If the neuronal activity for both is the same, it's just movement mirroring; if it differs depending on the object of the movement, it's movement+intention mirroring.

      There's no doubt that we have perception/production mirroring capacities: We can imitate movements and sounds, both perceiving and producing them, regardless of whether that capacity is inborn or learned. A similar kind of perception/production mirroring may also underlie not just movement-imitation but also "mind-reading" intentions, recognizing facial expressions, recognizing the movements that accompany feelings ("empathy") and even both expressing and understanding meaning through language. These might all be examples of this "mirroring" capacity.

      The question to reflect on concerning the discovery of "mirror-neurons" -- whose activity is correlated with this perception/production mirroring capacity (which we already knew our brains must have, even before the discovery of mirror neurons, because we have it) -- is whether the discovery of mirror neurons helps us explain how our brains do perception/production mirroring.

      Turing-testing one another every day of course draws on this "mind-reading" capacity. The trick is to explain it...

      Delete
  5. This paper is written to attempt to show that the associative hypothesis is more suitable than the genetic hypothesis for understanding mirror neurons. The associative hypothesis is arguing that mirror neurons originate from sensorimotor associative learning. I was largely convinced by the arguments made by the authors in support of this hypothesis and thought back to our discussion in class about how we require T3 in order to pass T2. Given that T2 involves imitation to the point of indistinguishability but will also require T3 capacities (sensorimotor) to provide a level of meaning, and mirror neurons are purported to be involved with these capacities, I can see how understanding mirror neurons could be a valid research pursuit. However, I do wonder if this could be leading us down the wrong path - assuming that there is something beyond the physical brain involved in cognition, I am not sure that focusing on the physical will provide us with the answers we are looking for.

    ReplyDelete
    Replies
    1. What do you mean by "there is something beyond the physical brain involved in cognition." Of course our brains and bodies interact with the outside world (T3), but isn't that physical too?

      Delete
  6. “The fact that these MNs respond maximally to unnatural stimuli – that is, stimuli to which the evolutionary ancestors of contemporary monkeys could not possibly have been exposed – is hard to reconcile with the genetic hypothesis”
    This quotation stood out to me because I don’t understand the argument the authors are making here. From my understanding, the genetic hypothesis says that genetic evolution is the basis for the development of MNs, and that learning and experience play a more facilitative role. I understand that the focus of the paper is not “what” MNs do, but it is crucial to discuss in order to provide evidence against the genetic hypothesis. Why couldn’t the fact that the monkeys’ MNs were learning and adapting to new stimuli be the result of an adaptive advantage? Couldn’t this be more of an argument for both genetic evolution and associative learning being equally weighted and working together? The associative hypothesis argues that genetic evolution is more of a background component in the development of MNs, and this evidence seems weak in proving that. Unless I am misinterpreting the point, couldn’t this example show that the adaptive advantage for MNs is the precursor for associative learning and that they work together to a large extent?

    ReplyDelete
    Replies
    1. I think you are right. In fact, that "the focus of the paper is not 'what' MNs do" is a problem, is it not?

      Delete
    2. To follow your train of thought, one feature of the associative account that I liked was that it separates the explanations for the origin and the function of MNs (I found this to be a better indicator of how to reverse-engineer MN capacities in robots). The associative account proposes working on domain-general associative learning processes. Indeed, Cook et al. were primarily interested in whether MN abilities were innate or learned through associative learning. Similarly, Turing had previously stated that the ability to acquire new knowledge was an important component of the Turing Test. Perhaps mirror neurons (MNs) just illustrate the requirement for dynamic processes in reverse-engineering a T3 robot. MN capacity, from what I understood, could be imitation, so could part of the Turing Test (or a way to pass it) be: can robots learn to imitate?

      Delete
    3. Yes, the capacity to imitate movement, speech and action, and the capacity to understand speech and emotion are all part of T3. If in doubt, ask yourselves whether Eric can do it.

      Delete
  7. One thing that stuck out to me about mirror neurons was the study with robotic pincers mentioned on page 187. In this study, participants had just 50 minutes of sensorimotor training in which they opened and closed their hand when the robotic pincer opened and closed. The results showed that 24 hours after the training, the automatic imitation effect was as strong for the robot as for a human hand. I think this is an interesting finding to consider in the context of T3 and T4 robots. A T3 robot hand would have to have the same functions as a human hand, but would look different, while a T4 robot hand would look exactly like a human hand. This study would suggest that in our brains, there would not be much of a difference between registering the movement of a T3 or a T4 robot hand. I think this raises an interesting question about the difference between T3 and T4, as in the context of mirror neurons, they are quite similar.

    ReplyDelete
    Replies
    1. Both T3 and T4 include the capacity to imitate, to learn, and to generalize.

      The challenge, given the discovery of mirror neurons -- regardless of whether their activity is correlated with an innate perception/production mirroring capacity or a learned perception/production mirroring capacity -- is to explain how the brain does it.

      Delete
    2. Since a T3 robot exhibits weak equivalence (same inputs and outputs), and a T4 robot exhibits strong equivalence (same inputs, outputs, AND the same algorithm), they will perform the same as you said. Drawing this to MNs, a T3 robot would also be able to imitate and recognize others' movements, but the mechanism would be different than that of a human's. A T4 robot would be able to do the same, but in the same way as a human. That's what would differentiate the two.

      Delete
    3. Melody, you are right, but what follows from that?

      Delete
    4. Given that the output of T3 and T4 is the same, the discovery of mirror neurons just adds more information about their correlation with our imitation capacity, but it does not help explain how our brains come up with the output.

      Delete
    5. Mirror neurons relate to 1) our ability to imitate (the easy problem), and 2) our capacity for empathy and feeling (the hard problem). Yet, since correlation does not equal causation, the existence of mirror neurons does not explain the causal mechanism of our imitation capacity or empathy. However, I believe there are still benefits in understanding neuroscience, and we should not totally ignore it. It makes me wonder whether we need T4/T5 power to pass T3. In my opinion, we do, as evolution explains the necessity of mirror neurons (T4 level) that allow us to imitate others or act empathetically. Adding to my previous skywriting, even though we do not know "how" mirror neurons create these capacities, I believe T4 is still necessary to provide us a blueprint to model a T3 robot.

      Delete
  8. Given the multitude of theories about and the lack of research on mirror neurons, I wonder if their name is still relevant. What was initially believed to arise uniquely from mirrored actions (with objects, usually!) has evolved into a neurological phenomenon that may have other functions and may arise from different stimuli. One of the theories, which the paper argues is the most applicable currently, is the associative hypothesis. It stipulates (if I understood correctly) that motor neurons have the potential to become mirror neurons and that mirror neurons are mostly shaped by associative learning. The associative hypothesis does not entirely dismiss the role of genetics in the creation of mirror neurons, but it does emphasize the role of training and learning, which are more environmental factors, in the creation of these cells. Because research has shown, for example, that mirror cell areas can be activated by inanimate stimuli and that mirror responses can change as a result of sensorimotor learning, their role, which hasn’t been determined in full yet, appears to not be limited to mere imitation. Is there an alternative way we could refer to them that would reflect their origins and abilities more broadly?

    ReplyDelete
    Replies
    1. Mirror neurons (MNs) would be neurons whose activity accompanies an equivalence or invertibility between perception and production (input and output), regardless of whether the equivalence is innate or learned.

      But if that's what MNs do, we still have no idea how.

      Delete
    2. I am replying to your post, Camille, because I also had a reflection about the term.
      I realized that, although I had heard about mirror neurons and found them interesting to read about years ago, I could have asked more questions about what they were. I had thought of mirror neurons as a kind of their own, structurally and functionally distinct from others. I guess, in my mind then, there could be the sets 'mirror neurons' and 'non-mirror neurons'. Reading the definitions, even the strictest meaning seemed strange, however... when I am observing an action and doing the action, a mirror neuron is involved in both, by definition. That did not seem so amazing. There is bound to be some overlap in firing. Are we just labeling them post-phenomenon (common activation)? (Or was I taking too much for granted, that observation and enactment were linked physiologically?)
      Anyway, let's say that such a cell exists. But 2.3.3 mentions a 'mirror area', and 2.3.2 seems to suggest something more like a mirror pattern. Also, on p. 5: "...what was originally a motor neuron has become a lip protrusion..." so it becomes?
      The reading didn't change my definition of a mirror neuron from A to B, but brought about those questions in my attempt to understand how they were characterized.

      Delete
    3. Are you any closer now to understanding how mirror neurons do what they do?

      Delete
    4. I also used to think the same thing! In my previous courses, we were only taught about what mirror neurons do, but not how they came to be and what they actually consist of. I therefore also concluded that they were their very own distinct and specialized entities. As Professor Harnad puts it in his first reply, mirror neurons seem to be those that are activated both by observing motions and by executing such motions. I wonder, however, to what extent these neurons may be firing due to other stimuli and how these neurons may occupy more general functions.

      On the topic of mirror areas (as mentioned in 2.3.3 of the text), I think that it would make more sense to state that there aren’t specific areas for mirror neurons, but that there are rather clusters spread around the brain in sensorimotor areas. I may be wrong to think so, but it would make sense considering that mirror neurons are not as distinct an entity as we originally believed.

      Delete
  9. OK, so I think I’m seeing a trend here.

    Strong Church/Turing Thesis states a universal Turing machine can simulate just about anything in the world.
    - PRO: We can simulate the T3 robot in a computer before actually trying to build it, to serve as a blueprint.
    - BLOCKAGE: A simulated version of the T3 robot is NOT the real robot and hence we cannot draw conclusions based on simulation alone. Simulation is just squiggles and squoggles. There is NO interpretation.

    Searle, with his Chinese Room Argument, states that if he became the T2 machine, he would be able to pass the Turing Test without understanding Chinese.
    - PRO: Searle’s CRA shows that if T2 can be passed by computation alone, the T2 machine is NOT a sufficient reverse-engineered model of cognitive capacity.
    - BLOCKAGE: Searle’s periscope can only penetrate the other minds problem with a T2 machine, not a T3 robot.

    The associative account of mirror neurons states that sensorimotor experience leads to the formation of MNs. The experiments listed in the article show that mirror neurons fire when humans are [doing, mirroring, imitating] actions and even during vocal communication (language), “specifically imitative or responsive vocalization”. The associative account is essentially stating that sensorimotor association learning “plays a major, instructive role” in many of the cognitive capacities we have.
    - PRO: Knowing this suggests that maybe a sensorimotor function is required when reverse-engineering a Turing robot. If sensorimotor experience plays a role for even language, then as Prof Harnad has mentioned repeatedly, a T3 robot may be required to even pass T2.
    - BLOCKAGE: Mirror neurons are merely a neural correlate of cognitive functions. Knowing that mirror neurons exist and are indicative of cognitive capacities doesn’t explain how and why we can imitate, recognize, mirror, etc. Thus, they provide no insight into the causal mechanism of cognition. Even if we build a Turing robot with implemented mirror neurons, unless we find out HOW mirror neurons work, we still have no CAUSAL explanation of cognition, only a replicated correlate.

    The three propositions above all contribute a piece to this process of solving the “easy problem” by reverse-engineering a Turing robot, but we are continuously met with major impediments.

    ReplyDelete
    Replies
    1. I just want to share one more question I had regarding the “blockage” met with mirror neurons.
      “Does having a neural correlate (MNs) contradict the implementation independent property of computationalism?”

      My intuition first told me, yes, maybe. BUT, this was my train of thought afterwards:
      The problem with “correlates” is that you don’t know if A is causing B or B is causing A (or another factor C is causing both A & B).
      Does cognizing fire MNs? Or do MNs need to be fired for cognizing to happen?
      Currently, I would argue that cognizing fires MNs. (Neurons typically respond to stimuli.)
      In that case, having a neural correlate does NOT contradict the implementation independent property.

      When a program is being executed by a computer, the computer could also have lights on its hardware that blink. These lights indicate that the computer is running the program. However, that doesn’t mean the computer cannot execute the program without blinking its lights. The same program “could” be executed on a computer with or without these lights blinking and provide the same output.

      In the case of humans, MNs could just be lights that blink when we are running algorithms. Hence, a constant correlate and indicator, but not something REQUIRED to run the program.

      I don’t think of myself as a computationalist but still, just a thought to my own question.

      Delete
    2. Excellent summary and grasp of the issues so far.

      A few tiny details:

      (1) There is an interpretation of a computer simulation of a physical system (or of any symbol system that we choose because we find it useful). It is just that the interpretation is in our heads, not in the computation. It is an interpretation of the computation. If it's the right computation, it will "bear the weight" of the interpretation, just as a sentence will bear the weight of an interpretation: "The cat is on the mat" can be interpreted as being about a cat being on a mat, but the cat and the mat are not "in" the sentence, which just consists of squiggles and squoggles that are interpretable (by us) as referring to cats and mats.

      (2) Sensorimotor association learning can alter the activity of MNs, but this does not explain what MNs do, nor how our brains do what the MN activity is correlated with when we are doing it.

      Delete
    3. I thought you drew great parallels between the material, and this helped me better connect some of the ideas of the course. I think your secondary comment is quite interesting in trying to look at MNs in the context of implementation. However, I think your analogy of the computer with the blinking lights doesn't really capture what mirror neurons do. Mirror neurons can influence actions, and they play a role in internal mechanisms. Based on how mirror neurons perform in humans, I don't think their computer analogue would be a superficial addition to the outside of a computer. The mirror neuron facilitates new kinds of inputs, which cause new kinds of outputs within the system. Also, I don't know if I agree that mirror neurons are not required to run the program. As we are fundamentally social and adaptive (i.e., we learn with our environments), how would cognitive functions in a human brain be facilitated without mirror neurons? I think that just because our limited knowledge of MNs doesn't explain cognition doesn't mean they don't play causal/consequential roles within the system.

      Delete
    4. Iris, these are really great points! The end of your first post, where you talk about how we still have no causal explanation of cognition, really reminds me of the 3rd grade teacher problem — we can remember the name of our teacher, but we don’t know how we remember.

      In previous psychology classes I’ve taken, it almost seems as if people want to separate the “easy problem” from the “hard problem” as much as possible. We have hours of class time dedicated to the easy problem and the hard problem just gets brushed under the rug — after all, it IS the “hard” problem, so it must be too hard to talk about! However, this class (and especially the last few readings) has made me realize and accept the fact that both problems are very intricately linked and related to one another. Even if we are able to create a robot that has functioning mirror neurons (therefore somewhat solving the easy problem through reverse-engineering, as you’ve put it), we still have no explanation for HOW they function and how cognition actually occurs. We’ve only recreated a model, but the inner workings are not yet known to us.

      Delete
    5. Leah, MNs themselves don't explain anything. Mirror capacities (i.e., perception/production equivalences), which we know organisms (and hence their brains) have, are certainly important, especially for interaction with other organisms. And they are essential to T3. But MNs certainly don't explain how the brain produces mirror capacities.

      Emma, if we could make a toy robot with mirror capacities, this may or may not prove useful in making a T3 robot. But either way, being able to produce mirror capacities in a robot, toy or T3, would mean, by definition, that we knew how those mirror capacities can be produced in at least one way (which is better than not knowing how they could be produced at all).

      It is very unlikely, by the way, that mirror capacities (imitation, vocal mimicry, action-objective mirroring, emotion- and intention-reading, empathy and language-understanding) are just add-on peripherals. Along with learning, they are the core of cognition, hence of T3.

      Solving the Hard problem (how and why do organisms feel?) is certainly as much a part of cogsci's mission as solving the Easy problem (how and why can organisms do all the things they can do?). But how much progress cogsci has made on the Hard problem is another question...

      Delete
    6. Iris, first, I want to thank you for this summary. It was very helpful in connecting the paper to what we've already been talking about in class, and it helped me understand the main points better. As Leah mentioned, however, I don't think the computer analogy does justice to the function of MNs. Although I'm not sure of the directionality of the relationship between MNs and their respective cognitive functions, one could not exist without the other. The computer program, however, could certainly exist/run without the blinking light (the blinking light could also exist without the program running, but this would be useless).

      Also, I find it interesting how the associative learning hypothesis relates to what we said in class about how a T3 robot is needed to pass a T2 test. If imitative learning requires sensorimotor functions, then the robots would need to feel and move in order to form the same cognitive functions as a human.

      Delete
    7. Melody, T3 would certainly need to move and have sensory input, but why would it need to feel? (Hard problem...)

      Delete
  10. I'm not sure if this is applicable or not, but something that came to mind during this reading is the differences in intention/meaning behind human smiling and smiling in primates. If I remember correctly, many primates take what we call a smile as an act of aggression. I find this interesting because if we evolved from primates, then where did this alteration in social cues occur? I would imagine that this consideration could act as a con for the genetic account of mirror neurons and a pro for the association account. The association hypothesis could argue that sensorimotor experience of viewing reciprocal smiling learned from a young age allows us to understand this social cue despite ancestral differences.

    ReplyDelete
    Replies
    1. Lucie, this brings up a general point that evolutionarily encoded abilities are usually on a broader scale (an ability like associative learning) rather than being super fine-grained (like having an association between smiling and aggression). If this weren't the case, we likely wouldn't be able to function in our current world, as much of what we interact with has not been around long enough for evolution to mold us to react to its specifics in an adaptive manner. For example, evolution cannot be the immediate explanation for why I don’t stick a metal knife in my toaster, but it might explain the more basic learning capacities that have led to me knowing not to do this.

      Something this paper brought up for me that is related to this is that it seems like the less we take evolution to have fine-grained control over our actions, the less a computational model of cognition makes sense. One might imagine evolution as a “programmer” that “codes” us to act in certain exact ways, but the less we take evolution to specify exactly what we do in our daily lives (the less we take our “output”, or behaviour, to be determined by evolutionarily encoded rules), then the more complicated explaining our cognition becomes. The more importance we take our experience in the world to have, the more complicated our explanation of cognitive function will likely be, because it will have to include how our interactions with the world support and structure that function.

      Delete
    2. Lucie, both innate mirroring and learned mirroring exist.

      (It's not the human smile but the human open-mouthed surprise-gape that signals aggression in nonhuman primates. Meanwhile the nonhuman primate grin indicates fear and appeasement, not amusement. And some cultures signal "yes" with the same head-movement that other cultures use to signal "no.")

      Caroline, evolution is "lazy": it is both more flexible and more economical to equip organisms to learn than to hard-code all input/output patterns in advance.

      (This would be equally true for parts of cognition that are computational and parts that are not. In fact the biggest progress on the mechanisms of learning itself -- unsupervised and supervised -- has been computational, so far.)

      Delete
    3. I'd like to dig deeper into prof Harnad's connection between lazy evolution and mirror neurons, now that we have learned more about Baldwinian evolution. I think that the appearance of mirror neurons could be a good example of Baldwinian evolution because they facilitate general learning of species-specific behaviours, without having to hard-code what those behaviours are and under what conditions they are useful. In other words, being able to easily imitate successful members of our group allows organisms to learn and benefit from their behaviour, without having to hard-code the extensive variety of specific behaviours that are adaptive to their species in a given environment. Furthermore, mirror neurons likely allow a sort of "natural selection of behaviours" within the span of a single generation. If every behaviour is imitated thanks to mirror neurons, individuals with non-adaptive behaviours will die off, and thus be impossible to imitate, whereas successful behaviours will be imitated and spread, making mirror neurons a very adaptive trait that is arguably more efficient than time-consuming Darwinian evolution.

      Delete
  11. As we have discussed in one of our first lectures, we know we have certain abilities but cannot consciously explain how we have done them. For instance, we know that we are capable of imitating movements and of recognizing and producing vocalizations, but we cannot explain how and why we can do it. This paper claims that MNs originate from sensorimotor associative learning, arguing that it plays an important role in mirroring and imitation, maybe extending to communication. Along those lines, it would mean that motor abilities are necessary to pass the written Turing test; T3 is required for T2. The genetic and evolutionary explanations discussed in the readings depict their adaptive and survival values; nonetheless, knowing the existence of mirror neurons (MNs) does not explain what they are really doing and how this enables our mimicking abilities. Furthermore, as a main goal of cognitive science is to reverse-engineer the human brain and its capabilities, this knowledge wouldn’t allow one to build a system with these mirror capacities. While the article’s genetic vs associative arguments, and explanations of 'what' MNs do, are interesting, they do little to improve our understanding of consciousness and cognition, and of 'how' MNs work.

    ReplyDelete
    Replies
    1. Good summary, but the "why" of mirroring capacity has plenty of potential explanations, both genetic and cultural. It's the "how" that's less obvious.

      Delete
  12. Very interesting take on mirror neurons. As I have learned more, it seems that pinning down the exact "purpose" of a brain area or sense becomes harder and harder the more you learn about it. One of the main points of the paper is that learning may play an unexpectedly large role in the development and function of mirror neurons: "human infants receive enough sensorimotor experience to support associative learning of mirror neurons (“wealth of the stimulus”)"

    This made me think of a similar case: at first glance, it seems logical that the fusiform face area would have the specific function of reading and decoding faces and that it might be driven by biology. Strangely enough, for people who are car enthusiasts and have a wealth of knowledge and images of cars in their memory, part of their fusiform face area lights up when they look at a car the same way it does when a typical person sees a face. What this may suggest is that instead of assuming the fusiform face area is a face-purposed brain structure, it may also just be an area for the visual decoding of whatever you feed it!

    ReplyDelete
    Replies
    1. No doubt learning can tune or transform brain function. But whether (and how) sensorimotor associative learning can produce mirror capacities like imitation remains to be demonstrated (whether in a robot or a computational model).

      Delete
  13. Before reading about the link between associative learning and mirror neurons, I thought about these two as active vs passive: associative learning would be active because it requires an association between a visual stimulus and an action representation to imitate, while mirror neurons would be passive because they would biologically elicit that same imitation in a reflexive manner. After reading the paper from Cook et al., I can see a third option where associative learning induces mirror neurons through repetitive Hebbian learning between visual and sensorimotor experience. In that case, mirror neurons would be the result of the associative learning system and would therefore only be an adaptation from our biological system to optimize associative learning. This idea changes my perspective on the evolutionary account. Previously, I thought of evolution in cognition as mostly going from biological mechanism to intended output, but now I see it’s possible to go from a system with an intended output to a matching biological mechanism.

    ReplyDelete
  14. I had more trouble with this reading than I have had with previous weeks' readings -- perhaps it is the heavy technical language or all the in-text citations that made it very difficult to follow. But in attempting to piece the text together, I found its argument for the associative hypothesis of mirror neurons to have quite interesting implications for what we understand to be the directionality of evolution. The genetic hypothesis assumes that the hardware for imitating/learning is already there, and that we, as learning humans, make use of our hardware to do all the human things (walking, talking, being sweet with one another). This stance would view much of our activity, our capacities, as being predetermined, by evolution on a species scale and by our brain structure on an individual level. It also posits that our environments are not rich enough (poverty of stimulus argument) to form us and our capacities for movement, language, being sweet, etc. It is much more compelling, in my opinion, that we are less formulaic beings, with little on-off switches as Chomsky would have it, and that our minds (including those mirror neurons) are largely determined by the environment around us, with all its enormous variety (wealth of stimulus argument). It would be nice to think that our brains are not strictly determined utilitarian devices but rather funny little slates within which skills like walking, talking, and being sweet can develop.

    ReplyDelete
    Replies
    1. Poverty of the stimulus (Weeks 8 and 9) is about grammar, not mirror neurons. (What is the poverty of the stimulus argument?)

      And as for mirror neurons: They don't explain mirror capacities, regardless of whether the capacities are inborn or learned. (They're probably both.) And sensorimotor association is not an explanation; it's a capacity that itself needs to be explained and then tested to see whether having that capacity explains being able to learn mirror capacities.

      Delete
    2. For the sake of argument, I would like to say that a general genetic hypothesis (in any evolutionary case, not just with mirror neurons) does not assume that the hardware is already there, and it certainly does not assume that the hardware is not influenced by the environment. I found this to be under-addressed in this reading. The contemporary behavioural ecologist's perspective posits that each trait, ability, behaviour, or physiology (in this case mirror neurons) develops by random chance, and this random development benefits the individual so that they can better survive in their specialized environment. Traits are entirely shaped (selected for) by the environment, including the community of the individual. For example, if a random new behaviour helps an individual socially, they may have higher reproductive fitness, better ability to fight competitors to increase longevity, or a plethora of other benefits that pass their genetic benefits onto their offspring. I felt that there could have been better coalescence of the genetic and associative hypotheses around mirror neurons in this paper. I do not think it is strong enough to say that the role of genetics and evolution is simply background to the development of mirror neurons (as is stated in the associative account). I believe there must be some understanding of how mirror neurons increase fitness to understand the reason for this mirroring ability and likewise its role in cognition. Looking at the genetics and how the mirror neurons are influenced physiologically by genes may bring us closer to the “how” of mirror capacities, and associative Hebbian learning more generally, but as many have discussed, there is much more to be explored and explained to totally understand this “how”.

      Delete
    3. Distinguish between morphological "traits" (wings, fins) and behavioral "traits" (flying, swimming, mirroring, learning). We'll discuss evolution more in Week 7.

      Morphological traits are more likely to be the results of genetic variation and "selection" on the basis of their adaptive effects on survival and reproduction ("fitness").

      Behavioral traits (including cognitive capacities such as certain kinds of learning, as well as language) are more likely to be shaped by learning and experience rather than by genetic variation and its effects on fitness -- although learning itself is a capacity that has evolved genetically.

      Evolution is "lazy": it is faster, safer and more flexible to shape a behavioral capacity by learning than by genetics.

      But neither the genetic explanation of how a behavioral capacity was shaped by Darwinian selection, nor an explanation of how it can be shaped by learning explains the causal mechanism that can produce it.

      To reverse-engineer the capacity to do something requires reverse-engineering the mechanism that can do it. That requires hypothesizing (e.g., by creative hunch or by trial-and-error computer-modelling) a causal mechanism that might be able to do it, and then testing whether the model or hunch can really produce the capacity.

      Which brings us back to Turing-testing.

      The perception/production "mirroring" that underlies "mirror-capacities" (imitation, speech perception/production, emotional expression, perhaps movement-purpose, perhaps empathy, perhaps language-meaning) is not explained by showing that mirror-capacities are inborn or learned (they are obviously both).

      Nor are they explained by the evidence that they are correlated with mirror-neuron activity.

      They begin to be explained once we have discovered a causal mechanism (even a toy robot) that can actually produce some of those capacities.

      Till then it's all just correlations, whether with genes, or with learning, or with mirror-neuron activity. Not a causal ("how") explanation.

      Delete
  15. I agree with the idea that mirror neurons sprout from sensorimotor associative learning. As we discussed in class, since T2 requires indistinguishable performance over a lifetime, we know that sensorimotor capabilities are essential to enable a machine to truly mimic the complexities of the human mind/cognition, and sensorimotor capacities would be helpful (if not necessary; I'm unsure here) for providing a level of meaning. This supports the idea I had in week 3 that fully “understanding” would involve the machine being able to fully interact with our environment, with emphasis on how this has an impact at a biological and chemical level. This further leads me to believe that T3 is indeed needed to pass T2, as the machine would also need to be able to imitate all the actions a human can do in order to interact sufficiently with people and the environment and so give some level of meaning to what it is doing.

    ReplyDelete
    Replies
    1. T3 is not mimicking; it is actually producing T3 capacity (of which various mirror capacities -- invertible perception/production analogues or copies -- are part).

      Eric imitates; he doesn't mimic imitating.

      The rest of what you say seems right.

      Delete
  16. We learn in our psychology classes that the more we train our neural connections, the stronger they get: “neurons that fire together wire together”. From a young age, we learn by observing others and through associative learning -- although we don’t replicate all motor actions, our brain networks can still be activated as if we did, thus strengthening connections between neurons and influencing our later actions and movements. As we observe others' actions and our own, correlated/paired activity across different neurons makes MNs specialized (see the toy sketch below). And, as the authors of the reading state, the associative hypothesis says that MN development requires social construction; “MNs are to a very large extent built through social interaction”.

    To me, this demonstrates an obvious reason why MNs can’t help reverse-engineer how the brain recognizes and imitates others’ actions. We learn a lot through the associations we make by living in and experiencing the world. A robot, on the other hand, doesn’t experience life the way we do, and lacks this vastness of opportunity -- for sensing, observing and interacting with others, exploring different spaces, being faced with fear and danger, making serendipitous discoveries, etc. This is something most animals, including ourselves, have access to every day in nature, and it plays a part in how we make associations and learn. How, in the absence of these opportunities, can robots’ neurons “wire together” in as diverse a manner as ours do? The collection of our experiences largely defines us. But in order for T3 or T4 to be successful, wouldn’t the robot need to have its own set of experiences with the outside world, something which can’t really be forcibly learned or reverse-engineered?
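    A minimal toy sketch of the “fire together, wire together” idea from the first paragraph above (my own illustration in Python, not anything from the paper; the learning rate and co-occurrence probabilities are arbitrary assumptions): a single visual-to-motor connection is strengthened whenever seeing and doing an action coincide, so that after enough correlated experience the “seeing” unit alone can drive the “motor” unit.

```python
# Toy Hebbian sketch ("fire together, wire together") -- an illustration,
# not code from the paper. A single visual->motor connection strengthens
# whenever seeing and doing an action coincide.
import numpy as np

rng = np.random.default_rng(0)
w = 0.0          # strength of the visual->motor association
eta = 0.05       # learning rate (arbitrary)

for trial in range(200):
    doing = rng.random() < 0.5               # the infant performs the action
    seeing = doing or (rng.random() < 0.1)   # it usually also sees that action
    # Hebbian update: strengthen the link only when both units are active
    w += eta * float(seeing and doing)

print(f"visual->motor association after training: w = {w:.2f}")
# Once w is large, activity in the "seeing" unit alone can drive the "motor"
# unit -- a toy analogue of a neuron acquiring mirror properties.
```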

    ReplyDelete
    Replies
    1. "A robot, on the other hand, doesn’t..."

      A T3 robot? Why not?

      What do you mean by "experience"?

      Turing means input/output interactions with the world. (He gives up on feeling, because of the other-minds problem.)

      Doesn't Eric have those I/O interactions with the world, every day?

      Or are you still thinking of toy robots?

      Delete
    2. Juliette, I do agree with most of what you said. But couldn't the connections created by associative learning between sensory and motor neurons be recreated in the wiring of a T3 (or even T4) robot? For example, let's say that by the time technology reaches a level that humans can build a T3 robot, we're also technologically advanced enough that a full brain scan (with neurons and synapses and everything) of a live subject is possible. At this point, AI would probably also be sufficiently advanced that it could process the massive amount of information in a full brain scan. In such a scenario, I don't see why a robot couldn't be built that has years of experiences already programmed into it, and therefore has mirror neurons without yet having interacted with the outside world.

      Delete
    3. Milo, too sci-fi. Computation may be able to show how to duplicate the brain, cell by cell, molecule by molecule, but that's more like cloning than explaining.

      Delete
  17. In their paper, Cook et al. state that “‘audiovisual’ MNs respond to unnatural sounds associated with actions […]. The fact that these MNs respond maximally to unnatural stimuli – that is, stimuli to which the evolutionary ancestors of contemporary monkeys could not possibly have been exposed – is hard to reconcile with the genetic hypothesis”. If mirror neurons fire in response to unnatural and unfamiliar stimuli, and if that property were translated into “speech”, would that not solve T2? If a T2 candidate were presented with “unfamiliar stimuli” in the form of a never-heard-before phrase, the phrase would elicit some sort of response that could be categorized as an answer.
    Could this also be extended to a T3 robot? Mirror neurons in a T3 robot could be used for associative learning. Would this pass the Turing Test? If the robot had the capacity to associate responses in the external environment both with itself and with other agents, it would be in a continuous state of “updating”/learning. There wouldn’t need to be a pre-setting that covers every potential eventuality. From this, would the robot also be able to learn from observing interactions between others as well as from engaging in interactions with others itself? This feels like a big ask, but incorporating a function that allows associative learning does seem to hold promise.

    ReplyDelete
    Replies
    1. I don't understand your points.

      How could the fact that “mirror neurons fire in response to unnatural and unfamiliar stimuli… be translated into ‘speech’ [so as] to solve T2”?

      What is T2? what is required to pass T2? and what is required to reverse-engineer a mechanism that can pass T2?

      What is T3? what is required to pass T3? and what is required to reverse-engineer a mechanism that can pass T3?

      Think of Eric in both cases.

      And what do we know about sensorimotor associative learning or mirror neurons (whether innate or learned) or mirror-capacity that helps in any way to reverse-engineer T2 or T3?

      Delete
    2. Hi Professor,

      When I wrote this, I believe the point I was trying to articulate was that an “unfamiliar” signal from the mirror neurons could elicit a verbal response expressing that the system did not understand. However, I believe this error in judgment arose from a lack of depth in my understanding of what a T2 robot is capable of doing. It could have the capacity to verbalize that it does not understand what the person it is interacting with is presenting to it; however, what would make it distinguishable from a human is that it would never understand, due to the missing sensorimotor capacities.

      Delete
  18. From what I understand, the associative hypothesis suggests that mirror neurons gain this property of "firing" both when observing and when doing the same action through experience and interaction with the real world. "However, it is important to note that the associative account is predicated on a “wealth of the stimulus” argument" (Mirror neurons: From origin to function).

    Would that entail that most of what makes us human (or what makes any other animal what it is) is not "coded" in our genome, but is in fact an interactive process dependent on our environment? For example, we don't have genes that push us to cooperate with others; rather, we learn that cooperation is mutually beneficial, which reinforces cooperative mechanisms and techniques (i.e., sharing resources, communication, etc.). In the case of MNs, we are not born with an innate ability to match observation and action at the cellular level; MNs acquire this ability through interaction with the environment. Genetics only provided the underlying neural properties that made MNs able to perform such a task (maybe the capacity for adaptation, quick response to stimuli, etc.). -Elyass

    ReplyDelete
  19. After reading this paper on mirror neurons, I was left thinking about how it would apply to our robots, especially whether it could be implemented in T3 robots. I think we could say that MNs are potentially very important to cognition, seeing as they add complexity to the reverse-engineering of the brain.

    This paper shows that the brain itself is not an isolated system but rather one that needs constant interaction with our environment/surroundings. In that case, it is possible to say that even if we were able to understand the brain on its own, we would still be missing the information gained from constant interaction with the environment, and so would not fully understand cognition.

    Now that we know MNs are consequences of learning mechanisms, the question becomes how, or whether, we can implement such MN mechanisms computationally in our T3 robots.
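    As a purely illustrative sketch of what "implementing an MN-like mechanism computationally" might even mean (my own toy assumption, not a design from the paper or the course): a Hebbian outer-product associator is trained on paired observation/execution codes, after which observing an action alone evokes the matching motor pattern. The code vectors, their dimension, and the number of actions are arbitrary.

```python
# Toy sensorimotor associator -- an illustrative assumption, not a claim about
# how a real T3 robot would be built. Paired "observe while doing" episodes
# train a Hebbian (outer-product) mapping from visual codes to motor codes.
import numpy as np

rng = np.random.default_rng(1)
n_actions, dim = 3, 8

def orthonormal_rows(n, d):
    # random orthonormal codes, so the toy recall below is unambiguous
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return q[:n]

obs_codes = orthonormal_rows(n_actions, dim)   # code for "seeing action i"
mot_codes = orthonormal_rows(n_actions, dim)   # code for "doing action i"

# Hebbian learning over correlated observation/execution pairs
W = np.zeros((dim, dim))
for o, m in zip(obs_codes, mot_codes):
    W += np.outer(m, o)

# After training, observation alone evokes the matching motor pattern
for i, o in enumerate(obs_codes):
    evoked = W @ o
    best = int(np.argmax(mot_codes @ evoked))
    print(f"observed action {i} -> strongest motor pattern {best}")
```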

    ReplyDelete
  20. Mirror neurons have been regarded as one of the mechanisms supporting human empathy. If we are still asking why cognition is not all computation, I guess lacking “empathy” would be one aspect. Mirror neurons work in such a way that they fire not only when a person makes a movement themselves, but also when they observe other people making the same movement. This phenomenon gives a much bigger scope to cognition. Considering the hard problem in cognitive science (how and why we feel the way we feel), mirror neurons might give some information about how we interpret other people’s feelings by mimicking their experience in our brain. Yet we don’t know how or why mirror neurons do this. Hypotheses include an innate mirroring capacity, though the paper mostly argues for a mirroring capacity acquired through sensorimotor associative learning.

    ReplyDelete
  21. After reading this paper, I understand more deeply how difficult it is to construct a T3 robot. I believe that T3 cannot be built indistinguishable at its “birth”, but must rather learn to be indistinguishable (from an adult human) over time. Just as babies cannot yet talk and act, T3 should also go through this learning process. I find it almost impossible to build a “finished” human cognition, since the human mind keeps changing. One important thing is “recognition of social information in social interactions” (de la Rosa, S. & Bülthoff, H.). For a T2 robot, if you say “I am happy” with a sad face, it may respond: “That is good”. But a T3 robot, being indistinguishable from a human, should say: “No, you are not” or “What is going on?” T3 should be able to perceive all the information and interpret it correctly, and we humans are not born with the ability to interpret social information correctly; it is acquired over time. Understanding how MNs originate highlights the importance of the learning process, yet it does not help us with how to implement this learning process in robots.

    ReplyDelete
  22. So, if I understand correctly, ‘action understanding’ is when mirror activity changes depending on the intentionality of the movement being observed?

    If mirror neurons are involved in associative learning, would that mean that T3 robots need to have them in order to pass the Turing test? If we were to reverse-engineer a T3 robot, I think that makes sense because T3 robots need to interact with the environment. Moreover, since T2 cannot be passed without T3 (according to what ‘Stevan says’), would a T2 robot also need to have mirror neurons to pass T2?

    ReplyDelete
    Replies
    1. Hi Shandra, I think you are making an interesting point relating Turing-test levels to mirror neurons. But I think it is important to consider the following: 1) The fact that a mirror neuron fires when the organism observes another's action does not yield a causal conclusion about what its function is or why it functions like this. As you mentioned, it is only involved in/correlated with associative learning. 2) The only Turing-test levels that take inner brain structure into account are T4 and T5. When we say that we need sensorimotor interaction with the physical world in order to ground meanings, we are talking about observable behaviour.

      Delete
  23. I don’t fully agree with the points made in this article, specifically the part where mirror neurons are said to be “forged by domain-general processes of associative learning in the course of individual development, and, although they may have psychological functions, they do not necessarily have a specific evolutionary purpose or adaptive function”. There is a critical period in child development during which children are able to learn a variety of basic functions, including the ones that mirror neurons are cited as supporting in this article, such as grasping. However, toward the end of this period there is also a phenomenon called synaptic pruning. This is a natural process that occurs in the brain between early childhood and adulthood, during which the brain eliminates extra synapses or connections that are no longer needed. If mirror neurons were really just used for learning processes in individual development, as Cook et al. argue, then why would they not be pruned away along with other connections? Mirror neurons are clearly useful for more than just development; otherwise they’d be vestigial.

    ReplyDelete
