Monday, August 30, 2021

6a. Harnad, S. (2005) To Cognize is to Categorize: Cognition is Categorization



Harnad, S. (2017) To Cognize is to Categorize: Cognition is Categorization, in Lefebvre, C. and Cohen, H., Eds. Handbook of Categorization (2nd ed.). Elsevier.  

We organisms are sensorimotor systems. The things in the world come in contact with our sensory surfaces, and we interact with them based on what that sensorimotor contact “affords”. All of our categories consist in ways we behave differently toward different kinds of things -- things we do or don’t eat, mate-with, or flee-from, or the things that we describe, through our language, as prime numbers, affordances, absolute discriminables, or truths. That is all that cognition is for, and about.


See also: Jorge Luis Borges, “Funes the Memorious”

82 comments:

  1. In section 3 (Categorization), the paper discusses inputs and outputs. Specifically, categorization is not about exactly the same output from exactly the same input, because categorization is differential. What I understand this to mean is that if categorization required exactly the same input, then it wouldn’t really be categorization at all, because no variety of inputs would produce the same output (so each category would only ever have one member). Upon reading further, I was compelled to think about the relation between these qualities of categorization and Funes’s inability to selectively ignore irrelevant information. It seems that Funes is an example of the same-input-for-same-output notion: because he couldn’t selectively ignore and abstract in order to categorize, everything was unique to him.

    1. Yes, categorization requires abstracting the distinguishing features and ignoring the rest. The input is whatever you need to do the right thing with, and the output is the thing you need to do (e.g., eat it, or call it "edible"). Categorizing is like passing the input from the world through a filter whose output is what you do with the input. Many different inputs could lead to the same output once they pass through the feature-filter.

      Funes could not just abstract some features and ignore the rest. He remembered every detail, for every successive instant, forever. Funes was a fiction, but the point -- about feature detection and selection -- is right and important, in cogsci.
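      The feature-filter idea above can be sketched in a few lines of code. This is only an illustrative toy: the features ("toxic", "colour", "size") and the category names are my own inventions, not from the paper.

```python
# Minimal sketch of a "feature filter": many different inputs yield the
# same output once the irrelevant features are ignored. All feature
# names and values here are illustrative assumptions.

def feature_filter(thing):
    """Do the right thing with the right kind of thing: only the
    distinguishing feature ("toxic") passes through the filter;
    colour and size variation is ignored."""
    return "avoid" if thing["toxic"] else "eat"

mushroom_a = {"colour": "brown", "size": 3.0, "toxic": False}
mushroom_b = {"colour": "white", "size": 1.2, "toxic": False}
toadstool  = {"colour": "red",   "size": 2.1, "toxic": True}

# Different inputs, same output: the two non-toxic mushrooms differ in
# colour and size, but the filter treats them as the same kind of thing.
print(feature_filter(mushroom_a))  # eat
print(feature_filter(mushroom_b))  # eat
print(feature_filter(toadstool))   # avoid
```

      The point of the sketch is that the mapping is many-to-one: an unbounded variety of inputs collapses onto a small set of outputs (things to do), which is what makes it categorization rather than rote memory of each instance (Funes's problem).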

  2. In sections 24 and 25, Professor Harnad discusses how categorization depends on selectively abstracting some features and ignoring others. He explains that if we accept that, we can understand that all categories are abstract and that we might consider degrees of abstractness. As such, we can also sort things as instances of the features themselves. The example he gives is sorting things as instances of a round-thing versus a non-round-thing, where the things you are sorting are features; in this case, roundness is the feature. So, if we consider this example, roundness has a sensorimotor basis. This brings me to a thought that I had about categorizing features in a sensorimotor sense; it’s a little confusing, so bear with me:

    If to cognize is to categorize and cognition is categorization, would all that cognitive scientists need in order to reverse-engineer cognition be category learning? If so, is a T3 robot the “right” level for cognitive science? In the case of roundness and its sensorimotor basis, I believe a T3 robot would certainly be sufficient for categorizing. A T4 robot would then be unnecessary, because the process of categorization would not change regardless of whether the robot has a brain or a computer in its head. A T2 robot would be insufficient due to its lack of sensorimotor abilities. That leaves us with T3.

    Let me know what your thoughts are on this! I think it might be a little bit too simple an explanation for professor Harnad’s questions asking what the right level (T2, T3, T4) for cognitive science is, but it’s what came to mind during the reading.

    1. First, a lot of cognition is categorization, but not all of it. Categorizing is doing the right thing with the right kind of thing. Right and wrong are categorical, but some of the things we can do are continuous rather than categorical. Most motor skills are continuous (swimming, playing tennis, singing, drawing, copying, imitating), as is everything we can do that is relative or a matter of degree rather than all-or-none (e.g., judging how similar things look).

      But, yes, these are all T3 capacities – though some of them (like smelling and tasting) may require biochemical properties of T4, not just movement.

    2. In response to your last point, Professor, I wonder whether T4 becomes the "right" level. Surely we rely on capacities like smell and taste to interact with the world, so a T3 robot would still be missing these interactions and thus would not be able to interact indistinguishably from us without these experiences. But at the same time, there are humans who lack these properties, so I question how necessary these processes are.

    3. Grace, I think you've answered your own question. The TT tests cognitive capacities rather than vegetative ones. I don't think we would doubt that Eric had a mind if it turned out he could not taste or smell. (Covid sometimes makes you lose taste and smell, but not your mind!) And if any T4 vegetative capacities are really necessary for cognition, a robot lacking them will (necessarily) fail T3.

    4. Good point! Some humans don't have certain sensorimotor skills and therefore don't have access to many sensorimotor features through direct interactions, but that doesn't make them any less human. People who are visually impaired can still know what kind of thing a "dog" or a "cloud" is despite not having access to certain affordances that other people might have access to. Maybe they have to learn more categories through hearsay (such as "cloud") or maybe they build categories (like "dog") based on different invariants. This could mean that T3 robots might be as performant as T4 robots if they compensate in this manner (although they would have to pretend that they lost their sense of taste and smell for example, to justify their preference for verbal learning and their use of alternative features to categorize odourant and flavourful things).

    5. Isabelle, yes, but remember that neither reverse-engineering nor the TT is a game or a trick (despite the title of Turing's paper). A T3 does not have to pretend anything. It just has to really be able to do what it is really able to do.

      Cogsci, building its way up to T3 or T4, would have to do it bit by bit, but T3 itself is only a TT if it can do it all. Only T4, however, needs to be Turing-indistinguishable in its "neuroplasticity" (capacity to compensate, reorganize and recover after sensory loss or brain damage). (It's too easy to build a robot that is indistinguishable from a human in a chronic vegetative state, but that's neither T3 nor T4...)

    6. Isabelle's point about a visually impaired person made me wonder whether a person like this could ever obtain a grounded understanding of the category "cloud". It is not a thing you can hear, smell, taste, or really touch -- it could only be learned through hearsay, as April noted.

    7. To the point about T3 lacking biochemical properties -- does T3 know that it can’t smell? Is it able to say “I am not able to smell this” in response to questions such as “doesn’t this smell nice?” or “this smell reminds me of my childhood — which smell reminds you of yours?”? Does it have that awareness about its own capacities? If not, wouldn’t this make T3 quite distinguishable from a human, despite its having sensorimotor capacities?

    8. Lucie, you underestimate the power of grounded, recombinatory propositions (hearsay): We don’t know what it feels like to be a bat, but we know what “echo-location” means. A blind person does not know what seeing feels like, but they do know what it means to see. (All categories are approximate, even sensorimotor ones. The only way you can communicate to someone who has never felt a particular feeling what that feeling feels like is by analogy to feelings that they have felt. The analogy and the approximation may be weak, but it is not empty, and enough to ground discourse.)

      Juliette, congenitally blind people can and do talk about “looking” and “seeing,” by analogy with other senses that they do have. And they know what it means to lack a sense. They know about shape from touch (the primal sense). Look up Helen Keller.

      And blind people still have a visual cortex, even if they lack eyes; that might produce something like visual imagery. Humans are echo-blind to bats, smell-blind to dogs, touch-blind to octopuses, and thermally blind to boas.

  3. I'm a bit confused about the difference between innate learning and unsupervised learning. From my understanding, Fodor's innate learning theory is that we were born with the capacity to categorize, and unsupervised learning is placing things into categories without an external source telling you whether each choice is right or wrong, and you are learning it by yourself. Harnad uses the example of the Jerry Fodor shadow vs the boomerang shadow as a form of unsupervised learning. We would be able to sort shadows into these two categories based on intrinsic shape alone. Why would this count as learning? Learning is defined as a decrease in errors over time, and I don't see how that would be the case for the Fodor vs boomerang shadows.

    1. As I understood this section, unsupervised learning as defined here is specific to AI and refers to the ability to learn categories without external feedback. Specifically, with the shadow examples, the AI model would identify patterns in the intrinsic features of the shadows and group together similar-looking shadows, thus separating the Fodors from the boomerangs, without relying on external feedback. Unsupervised learning is still learning because there is a point where there may not be enough data to differentiate the shadows’ categories (i.e., the AI has only been given one shadow example), and then, as the AI gets more data, it is better able to identify patterns and thus differentiate shadows. I see how later down the line it seems like the AI is not learning, because it’s just distinguishing between the two shadows, but there is a point where the AI has not yet been able to differentiate between the two, and this adaptation is where learning occurs.
      I’m not sure whether unsupervised learning applies only to AI. I think there could be an argument that humans do perform unsupervised learning at times, but unlike AI we may also receive external feedback because of how we interact with the world around us. I am not sure though!

    2. Melody, there is no categorization in unsupervised learning. It is just passive input in which the learning mechanism learns correlations based on statistics: how often features co-occur together. If some categories are obvious, because they are already obvious from these correlations – say, because most things that are red are also round, and most things that are green are also square – then, if, later, the right thing to do with red round things is different from the right thing to do with green square things, then the category will be very easily and quickly learned (by supervised learning) after just a few trials. But now that there is something that needs to be done, with feedback from the consequences of doing it right or wrong, that’s no longer unsupervised learning. (Can you now apply this to the example of passive viewing of shadows and boomerangs?)

      Categories and their distinguishing features (and what to do with members) are either innate or learned. If they are innate, the organism already knows what to do with what: both the distinguishing features and what to do have been “learned” in advance, and coded genetically, through evolution: nothing left to learn. (“Innate learning” means the capacity to learn is innate, not the behaviors that are eventually learned (which are infinite).)

      Leah, you are right on all points. And, yes, humans can do both unsupervised (e.g., Pavlovian) and supervised (e.g., Skinnerian) learning. The mechanism underlying those capacities in humans may or may not turn out to be the mechanism of unsupervised and supervised learning as it is understood in AI today. (Unsup/sup learning has two meanings: the performance task itself, on the one hand, and the AI model for performing it, on the other. The task is what it is, independent of the model, which might or might not be the right model for the task. Do you understand this distinction?)
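      The unsupervised/supervised contrast in this reply can be sketched as a toy in code. This is not a real AI model -- the feature values, the threshold, and the "eat"/"avoid" labels are all illustrative assumptions, just echoing the red-round vs green-square example above.

```python
# Toy contrast between unsupervised and supervised learning.
# Inputs are (redness, roundness) feature pairs; no labels to start.
inputs = [(0.9, 0.8), (0.95, 0.9), (0.1, 0.2), (0.05, 0.15)]

# Unsupervised: group inputs purely by feature correlation (redness
# co-varies with roundness here), with no notion of "right" or "wrong"
# and nothing to do with the members. The 0.5 threshold is illustrative.
clusters = [0 if redness > 0.5 else 1 for redness, roundness in inputs]

# Supervised: once there is something to DO, feedback from the
# consequences of doing it right or wrong attaches the right action to
# each (already obvious) cluster after just a few corrected trials.
right_thing = {0: "eat", 1: "avoid"}
actions = [right_thing[c] for c in clusters]

print(actions)  # ['eat', 'eat', 'avoid', 'avoid']
```

      The point of the sketch: the clustering step alone involves no categorization (no consequences, no right or wrong); categorization only begins when the feedback dictionary enters, which is why the professor's reply says passive viewing of the shadows is not yet category learning.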

    3. Is the distinction that humans and AI do not necessarily perform supervised/unsupervised learning in the same way for the same task?

    4. Sort of. But the distinction is between a task and hypotheses about the mechanism that produces our capacity to perform that task. "Trial and error learning with error-corrective feedback" is the task and supervised learning algorithms are a potential mechanism.

  4. In section 2.24 there is the phrase, "if we accept that all categorization, great and small, depends on selectively abstracting some features and ignoring others, then all categories are abstract”. This got me thinking about what “perceiving” our world really is. Often I find we assume that when we look at the world in front of us we are somehow receiving direct access to reality - that understanding is magically imprinted in our brains in a direct and unmediated manner. But if you think about it, this doesn’t really make sense. When I, say, look at a coffee cup in front of me, there is not some direct path by which the reality of the cup of coffee is suddenly now in my head - my understanding of this area of space as a unified object, and furthermore a specific kind of object, develops through a process of sorting and manipulating the visual stimuli I receive (ie, my understanding would be impossible without categorization). In this sense, following the statement that all categorization is abstract, we would say that all understanding is abstract (which makes sense - understanding is not just brute perception, it is making meaning out of that perception).

    I’m wondering, then, how the idea that our understanding of the world depends on categorization differs from the Whorf hypothesis that "how objects look to us depends on how we sort and name them” (section 2.22). Would it be that the Whorf hypothesis takes language to be the only relevant form of categorization? Or maybe that the Whorf hypothesis takes all categorization to be developed in an organism’s lifetime (the opposite of an extreme nativist approach)? In other words, is the Whorf hypothesis saying that language is our only categorization tool, or just that language is one factor that influences how we categorize stimuli?

    1. Hi, Caroline. Yes, for the first part, it is important to understand that perception is not passive, but rather a constructive process through which our brains reduce the complexity of the information that reaches the senses and, among other things, identify objects or situations that we can sort into distinct groups -- or categories -- toward which we can act in a certain way. The result of this process, in which we identify features that co-vary with category membership, is a "feature filter" that makes the relevant features “pop out,” making us perceive stimuli in different categories as more different (separation) and stimuli in the same category as more similar (compression), an effect referred to as learned categorical perception (CP).

      For the second part, the main difference between learned categorical perception and the Sapir-Whorf hypothesis is that Whorf and Sapir attributed the perceptual change to language itself (i.e., NAMING things: if two things have different names, we will perceive them as more different than if they have the same name). However, category learning causes categorical perception not because we name things (as in merely associating a verbal label to an object or situation) but because, in order to "ground" the name of a referent in the world, we need to learn to detect the features relevant for category membership, either by sensorimotor interaction or by verbal instruction. It is categorization, not mere naming, that generates a “feature filter” changing the way things look to us.
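      The compression/separation effect described above can be sketched as a perceptual warping along one stimulus dimension. This is only an illustration of the idea: the logistic function, the boundary location, and the gain are my own assumptions, not a model from the paper.

```python
import math

# Sketch of learned categorical perception: after learning, perceived
# similarity is warped around the category boundary, so equal physical
# differences shrink within a category (compression) and stretch
# across the boundary (separation). Parameters are illustrative.

def perceived(x, boundary=0.5, gain=30.0):
    """Warp a one-dimensional stimulus value through a learned
    'feature filter' (logistic squashing around the boundary)."""
    return 1.0 / (1.0 + math.exp(-gain * (x - boundary)))

a, b = 0.30, 0.40   # same side of the boundary (within-category pair)
c, d = 0.45, 0.55   # straddling the boundary (between-category pair)

within  = abs(perceived(a) - perceived(b))
between = abs(perceived(c) - perceived(d))

# Both pairs differ physically by 0.1, but the cross-boundary pair is
# perceived as far more different after category learning.
print(between > within)  # True
```

      The warping is what makes members of the same category look more alike and members of different categories look more different, even when the raw physical differences are identical.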

  5. For this skywriting, I wanted to comment on the last paragraph of the text (30 – Cognition is Categorization). It is proposed here that categories are used to group things that elicit the same behaviours and that this is all that cognition is for and about. I find this to be too simple of an explanation for cognition, given how its very definition is broad and there has yet to be a perfect understanding of how higher-order brain functions came to be. Would we say that an individual that does not discriminate in their interactions (i.e. behaves randomly) with people and objects isn’t capable of cognition? From an evolutionary standpoint, cognitive abilities like categorization and language predict reproductive success, but can cognition also do the opposite or have no effect on survival? You could argue that neurodivergent individuals have their own set of categories and that this exemplifies how cognition is universally meant to support categorization, but I wonder if cognition can serve a more arbitrary function, especially in a world where certain cognitive skills are no longer as necessary to survive.

    1. Hi Camille,
      I think this is an interesting comment, particularly the question of whether cognition is now more arbitrary since cognitive skills are not as necessary for survival anymore. From what I understood from the reading and my conception of categorization as cognition, categorizing something as ‘edible’ or ‘not edible,’ for example, is still quite relevant today, even if survival might be easier. For instance, categorizing a battery as something not edible is not as arbitrary as one might think and will still have an effect on survival. Thus, even though we are more “evolved” now, categorization and language can still be important for survival and therefore reproductive success.
      But overall, I agree with your statement that this is a simplified definition of cognition, and it is taking me a while to wrap my head around. However, I think this statement is more of a summary of the general idea of cognition and categorization than a definitive statement of all that cognition is.

    2. Thank you for your reply! The example you use is definitely an interesting one. Although we may not be conscious of it, we are constantly categorizing items around us as being safe or a threat to survival. I would be curious to know, however, to what extent modern living has reduced the cognitive load of categorization. Language can help people figure out the use of things without having to experience them beforehand. In many cases, context and symbols (like colour codes and written instructions) do the dirty work for us and we are quickly able to infer the use of things. If anything, they are already categorized for us.

    3. Categorization is doing the right thing with the right kind of thing. (“Right” refers to consequences: does it help or hinder me in surviving, succeeding, reproducing?) Cognition is the capacity to do all that. But not all the things we can do are categorical (either/or, 0/1, member/nonmember). Some are continuous (swimming, dancing, singing -- and similarity judgment, relative discrimination).

  6. After reading this paper, I have a better understanding of how ‘embodiment’ works in cognitive science in terms of categorization. Categorization is defined as ‘systematic differential interaction between an autonomous, adaptive sensorimotor system and its world’ (section 3). That is, our interaction with things in the world is based on which category they belong to, via our adaptive sensorimotor system (the body).

    While the same things can often be categorized in many different ways, items in one category can share no obvious intersection and be based on abstract properties (section 14, section 25). This reminds me of the classical concept of ‘family resemblance’: items in a ‘family’ are grouped together by common features (essential to the category) or by similarity relative to one another, while there might be no single definitive feature shared by all items in the ‘family.’ I was wondering whether this is similar to the categorization of goodness, truth and beauty -- is the way we define “what is a game” similar to how we define “what is beauty/art”? (And can a family-resemblance category be acquired by hearsay?)

    I’m also confused about Miller’s magic number seven, mentioned in section 20. I get the idea of how chunking works in memory: the capacity to memorize elements by grouping them into at most seven chunks; otherwise it exceeds capacity and mistakes arise. But how is it related to the discrimination vs categorization comparison, and to the size of JNDs? One of my thoughts relates to the concept of weighting features: if there is more than one dimension in question, there will be an extra weighting process in which some features are weighted more than others, and our judgement of categorization/discrimination will be based more on those particular features (potentially up to 7)?

    1. I was also confused by the magic number seven in relation to categorization. I'm not sure I've got it right, but what I understood is that when you evaluate one characteristic/dimension of something (e.g., the loudness of a sound), people can usually classify it on a scale of at most 7 points (seven labels that would go from "not loud at all" to "extremely loud"). Trying to have more than seven distinctions on the scale would lead to categorization errors. From what I understand, this is related to categorization and not discrimination, so it isn't related to JNDs.

    2. Hi there! I'm also not positive of my understanding, but I took it to mean that without using memorization strategies like chunking, we cannot normally memorize more than 7 distinct units in a row. I interpreted this being included as a way of suggesting that there are ways to increase our capacity for categorization. Prof. Harnad makes the link between Miller's finding (the magic number 7) and what our brains do naturally: observe and categorize a finite number of features that can result in infinite categories (underdetermination). I think the suggestion here is that we chunk features by level of importance. It is related to the credit-assignment problem in machine learning, because when learning is successful, the result is a string of features that are chunked based on relevance -- the weight we have learned through reinforcement learning to place on them. And because we can only reliably categorize objects in isolation in about 7 ways, we need a method of weighting features.

      As for its relation to JNDs, I found this quote helpful: "Miller pointed out that if the differences are all along only one sensory dimension, such as size, then the number of JNDs we can discriminate is very large, and the size of the JND is very small, and depends on the dimension in question (discrimination). In contrast, the number of regions along the dimension for which we can categorize the object in isolation is approximately seven (categorization)."

      If I'm off base, please let me know! This is just how I interpreted it.

    3. Christy, a category is a category if it’s all-or-none, i.e., categorical. “This is the right thing to do with X’s, and this is the wrong thing to do with X’s.” If a human can do that, then they have learned the category X; if they can’t they can’t.

      If your lunch depends on it, can you say (for any candidate) whether they are in family X? If not, it’s not a category but just a similarity judgment.

      Wittgenstein’s “family resemblance” category is just a disjunctive composite category. A simple case is the “family resemblance” of apples: they are not all red; they are not all green; they are all “red OR green (OR yellow…).” (The composite rule could be more complex: any Boolean combination (or, and, not, if, then), as in a Google search.)

      Games are human artifacts rather than “natural kinds.” Whether a mushroom is edible or inedible to a human is up to nature. (If some are neither, then there are 3 categories.) Whether a game is a game is up to humans. But even there, if we can categorize (and agree) about what is a game and what is not a game, then there must be a Boolean feature-combination on which that successful categorization is based. Things we can’t categorize at all are irrelevant to this question.

      Louise, yes, Miller’s magic number is about categorization. We can subdivide a single dimension (like loudness) into about 7 categories (with lots of errors at the boundaries), but if we add more dimensions such as pitch and timbre then we can subdivide it into many more than 7 categories. That’s trivial, because we have more than one dimension of variation.

      Less trivial is the fact that if you learn to “recode” the dimensions of variation you can get even more categories. This is what happens when you have lots of dimensions of variation (features) and you learn to abstract them with selective feature-detectors of their own, ignoring all the rest of the dimension. That’s how you get all the geometric shapes you can identify and name using just straight/curved, parallel/divergent, continuous/discrete etc.

      Something similar happens with “chunking,” in which you learn features and their names, and then recombine them in a Boolean rule that defines further categories. That’s basically the power of propositional language. Verbal categorization opens up a universe of potential combinatory categories that nonverbal (sensorimotor) categorization alone cannot.

      Madelaine, you are right, but the power of recombining grounded categories verbally is based more on Boolean (and/or/not) definitions and descriptions using (named) features rather than on their “weights.” Feature weighting occurs at a lower level, in direct sensorimotor grounding rather than indirect higher-order verbal grounding.
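      The disjunctive "family resemblance" rule from the apples example in this reply can be written out as a tiny Boolean predicate. The colour set and the fruit records are illustrative assumptions, only meant to show the shape of a disjunctive composite category.

```python
# Sketch of Wittgenstein's "family resemblance" as a disjunctive
# Boolean rule: no single colour is shared by all apples, but every
# apple satisfies "red OR green OR yellow". Values are illustrative.

APPLE_COLOURS = {"red", "green", "yellow"}

def in_apple_colour_category(fruit):
    """Disjunctive composite rule: membership requires satisfying at
    least one disjunct, not all of them."""
    return fruit["colour"] in APPLE_COLOURS

print(in_apple_colour_category({"colour": "red"}))     # True
print(in_apple_colour_category({"colour": "green"}))   # True
print(in_apple_colour_category({"colour": "blue"}))    # False
```

      Any richer Boolean combination (and, or, not, if/then) composes the same way out of named feature predicates, which is the sense in which a successful "game"/"non-game" categorization must rest on some Boolean feature-combination, even if we cannot state it explicitly.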

  7. Here is my understanding of the paper:
    Firstly, categorical perception involves both categorization and discrimination. Categorization is absolute whereas discrimination is relative. However, it is important to add the nuance that discrimination becomes categorical if there is something we can do right or wrong. It has to be clearly right or wrong to be categorized; it can’t just be "not good enough." There are also other things we can do that are not categorizing. For example, continuous motor skills (like dancing, walking, sprinting) are not categorical. And the ability to imitate, copy, compare and make similarity judgments on things is not categorical either. To categorize is to do the right thing with the right kind of thing. In fact, I think to pass T3 is also to be able to do the right thing with the right kind of thing… but I might be wrong about that.
    Secondly, all categorization is abstraction, because it involves doing the same thing with diverse things that differ in both relevant and irrelevant ways. The relevant ways are the features that allow us to categorize them correctly, once we have learned to abstract them and ignore the rest. In the paper, Funes was not able to “ignore the rest” because of his prodigious memory for detail. Indeed, living in the world requires the capacity to detect recurrences, and that in turn requires the capacity to ignore what makes every instant infinitely unique, and hence incapable of exactly recurring. Gibson’s concept of an “affordance” captures the requisite capacity nicely: objects afford certain sensorimotor interactions, these affordances are invariant features of the sensory input, and the organism has to be capable of abstracting them, ignoring the rest of the variation.
    Finally, a little thought: I think that although animals are surely able to categorize, language makes categorizing much easier and faster because information is conveyed more quickly. I believe that this evolutionary advantage is one of many that have permitted humans to adapt so well to so many situations.

  8. Hi! Your last thought reminded me of this quote: "In practice we usually cannot make our implicit knowledge explicit, just as the master chicken-sexers could not. Yet what explicit knowledge we do have, we can convey to one another much more efficiently by hearsay than if we had to learn it all the hard way, through trial-and-error experience. This is what gave language the powerful adaptive advantage that it had for our species."

    I thought this point was very interesting too, and interpreted it to mean that although sensorimotor grounding is crucial for the basis of Categorical Learning and Categorical Perception (we must have sensorimotor experience with flowers to understand what it means when we say that a primrose is a flower), language is what allows us to quickly understand and categorize things based on heresy, and has had an adaptive advantage. So it is uniquely human to be able to do both (understand sensorimotor interactions and verbal information when it is sufficiently grounded), and this is what constitutes our ability to categorize things.

    1. Yes, but it's hearsay rather than "heresy" (though language can be heretical!)

    2. Hi Madelaine,
      I had some thoughts about the same quote. In class, we spoke of how a picture is worth more than a thousand words. It is near impossible to describe all the features of a picture, but perception of the picture is effective and immediate. I wondered what marks implicit knowledge that cannot be made explicit. At first, I thought that, to the master chicken-sexers, the features that distinguish chickens' sex must be as meaningful as whatever distinguishes different kinds of snow in Icelandic, yet the former are not categorized in words. But then I realized I don't even know to what degree perceptual features themselves are categorized in the snow example... that would be an interesting read... Maybe features that are too fine-grained cannot be transferred to language.

    3. Hi April! I think that features being too fine-grained to transfer to language is not the only reason implicit knowledge cannot be made explicit. Maybe another reason is that much of our sensorimotor interaction is continuous and dynamic, rather than categorical.
      Another interesting observation I have in mind is that, though we cannot make implicit knowledge fully explicit, we can still say something about it and convey it meaningfully to others. Usually, we master skills much faster if we have teachers who can give us instructions, compared to learning alone solely through trial and error. But we can never learn skills merely through verbal instruction: we must "ground" or "refine" what we learn through instruction by sensorimotor interaction in order to really master it. Anyway, it is interesting to reflect on the respective roles verbal description and trial-and-error experience play in learning implicit knowledge such as sensorimotor skills.

      Delete
  9. I wonder if categorization (as it is used in this paper) can also be expanded to explain the cognitive form of individuation - that is, the identification of an individual from within a set of things in the same category. One idea of visual recognition is that it contains two levels: the first being coarse categorization, i.e. "a car is a car", while the second level identifies the individual, i.e. "my Dad's 1997 GMC Yukon". Now, I wonder if individuation is also a form of categorization, where the individual is placed in its own "category" in the mind, with specific features now being associated with this individual in the mind.

    ReplyDelete
    Replies
    1. Alex! I really like your analysis here, because I was also thinking of identification, and more specifically self-categorization as I read. We come to know ourselves rather slowly in the course of our lifetime, and I think it could be that self-categorization is rather difficult. I would think that individual identity is built around the set of objects (both human and non-human) that surround you. Forming an identity might be thought of as a gradual recognition of the objects that do belong and do not belong in the “Alex Nissenbaum” category, an amalgamation of the objects you have identified with, wish to be, would like to reject, all thrown together with the things you are innately. Maybe it takes a long time because we are constantly changing, as are our interests and the things we believe define us. At one time in my life, I may have been a Pennsylvanian, but at what point of living in Montreal do I become “montrealaise”? Three years? Ten years? Twenty? Maybe our ability to discriminate this sort of distinction is an implicit feeling that requires trial and error until we sense that our actions actually align with our image of ourselves, or perhaps it is learned externally through a grounded verbal instruction (“you need to be a permanent resident here to be ‘montrealaise’!”) There might also be a lot of hearsay in identity, like when you overhear someone describing you as a negative person, and you identify with their projection, becoming more negative in response to their categorization of you. Another classic example of hearsay might be the kid whose parents tell them again and again that they should be a doctor, shaping the child’s own expectations of themselves, either to their own success or detriment. It may take many years to figure out, but ultimately, I’d like to think that identification is “doing the right thing with the right kind of thing” or doing what is best for oneself at every moment as it suits one’s self-perception.

      Delete
    2. Alex, distinguish the question of “individuals” vs. categories (kinds). Even an individual is a bundle of instances that you have to distinguish from instances of other individuals. So it’s still a category with distinguishing features. Think of Funes’s puzzlement about all the different views of the dog, which we all want to call “Fido.” And distinguish subcategories (like “crimson” or “scarlet”) from higher-order categories, such as “red” or “green.”

      Genavieve, I think you might be getting a bit too introspective in all these reflections about selfhood. But, yes, “self” and what “I” feel (which is in fact the only kind of thing I ever feel) is a special category (like the “Laylek”) I spoke about in class, because it has no non-members, so no way to pick out its distinguishing features. We’ll discuss this in Week 10.

      Delete
    3. This reminded me of something brought up in another class, where the left hemisphere was more likely to identify with a morphed picture of themselves than the right hemisphere. However, each hemisphere was still able to identify the self, just at different levels of morphing. So I think this shows that distinguishing between individuals is a form of categorization and this information in particular demonstrates that our hemispheres categorize them differently. This implies a certain level of fluidity in our categorization overall, meaning our perception of our self or others is variable. It seems, therefore, that external items that we own could also become part of our “self” category if we interact with it in such a way that we associate ourselves with the item. This categorization would use unique features, such as ownership, to help distinguish the self from others.

      Delete
  10. When reading this article, specifically the part on colours, I couldn’t help but think back to a notion that I’ve learned in a few previous psychology and neuroscience courses: colour categorization is not innate, it is learned. If I recall correctly, certain tribes’ colour categories were totally different from ours: while a certain band of the blue-green area of the spectrum was considered to be one colour category, another shade of what we’d consider to be blue was seen as a totally different colour. Their colour categories are totally different from ours, with boundaries at different wavelengths; thus colour categories are likely not innate. It is possible that this article was written before much research had been done on different kinds of colour categorization, but this finding turns the idea that “between blue and green there is [an innate category boundary]” on its head. Another example—though much less conclusive and thoroughly researched—that I have heard of the potential relativity of colour categories is the ancient author Homer’s description of the sea as “wine-dark” in the Iliad and the Odyssey. This one has some mystery surrounding it, as no one is sure what exactly it means, if anything.

    ReplyDelete
    Replies
    1. Hi Milo,

      Thank you for bringing up this interesting example! I was also thinking of this when I was reading the article. However, from what I remember there was a further experiment showing that accessing color category information does not rely on language: when the experimenter gave the tribe members tasks to match color patches (vs. the first experiment, where the experimenter asked them to name color patches), the tribe members performed equally well compared to the English participants (whereas in the first experiment the English participants had outperformed the tribe members, because we have "blue" and "green" color terms in our language and they don't). Thus, it shows that there are still innate color categories. I think this experiment shows that language can *aid* our perceptions instead of *determining* our perceptions.

      Delete
    2. Great point, Xingti! You're right that language is one of a few factors that influences color perception, the physical properties of visual light (such as wavelength) being another one. 500 nanometers is still closer to 400 nanometers than to 700 nm; therefore, 500 nm is more likely to be in the same category as 400 nm than as 700 nm. But although we may sometimes categorize more according to our senses than by language when it is difficult to qualify a boundary linguistically, a sensory category remains a category.

      Delete
    3. Basic color categories are perceived the same across languages and cultures, and this is because of innate feature-detectors. Some sub-categories can be learned, but their boundaries are much fuzzier. Read 6b for more about learned categories.

      Delete
  11. The story of Funes and S made me reflect on the idea of memory in cognition and categorization. Particularly, I was struck by this quote: “…living in the world requires the capacity to forget or at least ignore what makes every instant infinitely unique and hence incapable of exactly recurring.” This made me think of the T3 and T4 that we have been discussing. According to this, T3 and T4 would have to have this capacity built in and would need to be able to abstract and ignore information. If they did not have this ability, then they would not be able to cognize indistinguishably from a human and would be facing issues such as the (fictional) Funes and S.
    By my conception of a machine, I would think of it as being infinite, and therefore not forgetting details. However, I wonder if this is a moot point and the ability to abstract and ignore would be already built in to T3 and T4 if they were created by reverse-engineering?

    ReplyDelete
    Replies
    1. Evelyn, you bring up an interesting point that this whole "forgetting" information thing seems very human: something we’d often consider to be a benefit of a non-human machine is that it has a greater capacity to NOT forget!

      Certainly, I think a T3 would need to be able to abstract in order to categorize. It would need to deal with the world not as an infinitely complex and continual haze of sensory information, but as discrete and understandable objects. And this might involve ignoring, or "forgetting about" non-relevant characteristics. It is possible in our T3 though, that this would appear more in the form of relevant characteristics being given greater weight within whatever calculation results in our T3 doing what it’s supposed to with things in the world, rather than the robot “forgetting” non-relevant characteristics. For example, if you think about a situation with supervised learning, when our robot correctly categorizes cats as cats and is reinforced for doing this (I’m not sure exactly how this reinforcement would occur, I don’t know enough about robots or machine learning :p), then in future interactions with warm fuzzy things, more weight would be given to the characteristics that caused this particular warm fuzzy thing to be put into the group “cats”. So the ability of a non-human machine to categorize isn’t necessarily representative of a deficiency like being forgetful, but could be more about the ability to learn what information to prioritize. Let me know if that just makes you more confused and I can try to explain differently :)

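Caroline's weighting idea can be sketched in a few lines of Python. This is a toy illustration only — the feature names, the threshold, and the reinforcement rule are all invented for this sketch, not anything from the paper:

```python
# Toy sketch of supervised category learning as feature re-weighting.
# Feature names ("warm", "fuzzy", "meows"), the learning rate, and the
# threshold are all hypothetical illustration values.

def update_weights(weights, features, correct, lr=0.1):
    """Nudge up the weights of features present on a correct trial;
    nudge them down on an incorrect one."""
    delta = lr if correct else -lr
    return {f: weights.get(f, 0.0) + (delta if present else 0.0)
            for f, present in features.items()}

def categorize(weights, features, threshold=0.5):
    """Sum the weights of the present features; call it a 'cat' past threshold."""
    score = sum(weights.get(f, 0.0) for f, present in features.items() if present)
    return score > threshold

weights = {}
cat = {"warm": True, "fuzzy": True, "meows": True}
# Each correct, reinforced trial shifts weight toward the cat-features:
for _ in range(5):
    weights = update_weights(weights, cat, correct=True)
```

After a few reinforced trials the cat-features carry enough weight that the filter passes new warm fuzzy meowing things, without the robot having to "forget" anything — irrelevant features simply never gain weight.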
      Delete
    2. Hi Caroline,
      Thank you, this explanation makes a lot of sense to me! The idea of a robot 'forgetting' seems confusing to me, but the idea of prioritizing information in order to categorize and cognize like a human makes sense.

      Delete
    3. Evelyn, you brought up something I was also thinking about while reading the paper — selective forgetting is necessarily a requirement for “living in the world,” as Harnad puts it. We tend to see “machines” (in quotes, since I’m not yet committing to a T-something for the definition of “machine”) as infinite sources of information that would not be able to forget and therefore have issues with categorization. Since a T2 would just be an “emailing robot” with no sensorimotor capabilities, it would not be able to categorize (in section 7, categorization is described as a sensorimotor skill with emphasis on the sensory aspect). Would a T3 have the inherent ability to categorize, though? Or should that question be asked in a different way: is categorization necessary in order to be considered T3+?
      Caroline, you make a great point! One thing I’m struggling with, though, is the idea of feature weighting in robots. Like you said, some features are weighted differently, which helps us categorize. This is explained very thoroughly in section 19 with Watanabe’s Ugly Duckling Theorem. If we were to weigh all features equally, we could not say that the ducklings are less similar to the swanlet than to each other, due to the fact that the list of possible features is infinite. As written in section 13, categorization is the “problem of sorting [sensory inputs] correctly.” So, if we combine the fact that features are infinite and some features are weighted more than others, it does make sense to conclude that a robot would eventually, through supervised learning, figure out how to differentiate and categorize a cat. The example I bring up now seems silly, but it’s something I’ve been reflecting on: what about hairless cats? They don’t exactly fit into the “warm fuzzy thing” group, but they’re cats nonetheless. Though supervised learning would help a robot obtain a general category of “cat,” would it not run into some issues when trying to decide what a hairless cat is? Since cats are so varied, a defining feature cannot simply be “pointy ears” (e.g., Scottish Fold breed) or “four legs” (three-legged cats exist, they’re still cats!) or “fuzzy” (again, hairless cats). Is there a specific thing that we’re missing in order to categorize? Too specific misses the mark, but too general also does. After all, a “domesticated mammal often kept as a pet” could be referring to dogs.
      I’ve been rambling quite a bit trying to find the point I’m attempting to make, but I think it all boils down to this: would a T3+ be weighting features similarly to humans in order to categorize them, or are they utilizing and prioritizing features we do not in order to make their calls?

      Delete
    4. Hey Evelyn, I had the exact same question! (I wrote it in my skywriting down below as well.)

      I feel like machines either know or don't know something (binary), and characteristics like selective forgetting or weighting are something unique to humans.

      I can't seem to wrap my head around how we can get a robot to do that. Caroline’s suggestion does hold insight: we can build the robot and give it the ability to learn what information to prioritize (sort of programming the “weights”).

      However, I feel like that would be impossible with the infinite number of possible priorities. Additionally, our categorization is highly context-dependent, and we don’t even know HOW and, most importantly, HOW MUCH we are prioritizing certain features. Just like Emma mentions above, we are oblivious to how much weight (quantitatively) is put on which features (hair? ears? pets?) when categorizing a hairless cat. Hence, I think as of now, we wouldn’t be able to relay that sort of ability to learn what information to prioritize to a machine.

      Delete
    5. Evelyn, yes, a T3 (Eric) has to be able to discriminate, categorize and learn just as we do. (Re-read Turing on infancy, development and learning). And what is a “machine”? (Week 1-3)

      Caroline, what is a “machine”? And don’t nonhuman organisms categorize and learn too? But, yes, categorization and learning are more a matter of weighting and filtering features than of “forgetting.” Borges’s story is metaphorical, hence exaggerated, but it highlights the essential points about categorization and memory. And don’t forget that we categorize the same inputs in different ways depending on the context. This means feature-weighting can be context-dependent too.

      Emma, on hairless cats, etc., see the replies about “family resemblances” and Boolean feature combinations.

      Iris, what is (and isn’t) a “machine”? Not what you “feel” a machine is: what a machine is. (Isn’t Eric a machine? And what about T4s and T5s: aren’t they machines? Cogsci is just trying to find out what kind of machine we are.)

      Delete
    6. A machine is something that can interpret inputs according to a set of rules to produce an output. By this definition, which is quite vague, humans, nonhuman living beings, and nonliving beings can all be considered machines. Maybe part of the problem with seeing how machines might forget is because memory seems to be stored in distinct hardware parts whereas the information we know is not as clearly defined physically. Thus, it might be easy to think that in order to forget, an action must be taken on the physical storage device to erase information for a mechanical machine, but for humans, forgetting is more of a passive occurrence. However, I think this is just more a reflection of how we do not understand the mechanics of our own brains, rather than the limitations of machines.

      Delete
  12. Section 2.11. concerning learning algorithms and specifically supervised learning made me think about subjective categories. For example, "bad people", "pretty things", "smart political platforms" — ones that are greatly variable across people. On the flip side, I would assume that categories like "made of metal" or "lives on Earth" are objective and mostly invariant across people. I wonder if we have to categorize categories into objective and subjective categories, and accept that computers are only capable of learning the first. One could argue that a machine learning algorithm could learn subjective categories through supervised learning, but would it just be inheriting the biases of its developer? Furthermore, would this algorithm pass T2 even though we know there is no intentionality behind its category decisions?

    ReplyDelete
    Replies
    1. Lucie, Categories are things that there is a right and a wrong thing to do with. Right and wrong are determined by consequences: heights are not good things to jump from, nor toadstools to eat. Social categories have consequences too: If you live in a dictatorship in which the dictator has decreed that anyone wearing green on Tuesdays will be shot, that too would not be a good thing to do, on Tuesdays. But what can you mean by “subjective” categories? You can declare that you will stop eating sweets. And you can punish yourself for a lapse by making yourself do 40 pushups. But that’s only a self-imposed category rather than a “subjective” one.

      “Laylek” is the real subjective category. We’ll get back to that in Week 10. For now, just reflect on the fact that if there is no external (hence “objective”) feedback on whether you should do this or that, then there is no way to distinguish members from non-members, or right from wrong.

      Imagine yourself on a desert island in which you notice that you occasionally have two kinds of headaches. One feels like a somewhat throbbing sensation and the other makes you feel slightly light-headed. You decide to call the headaches T and L. Well, you might be right when you say “this one was a T” – or you might be wrong. No corrective feedback available. Then a neurologist takes a vacation on the island and brings his EEG equipment. You ask him to test the difference between the Ts and the Ls.

      Here are two (of many) possibilities: (1) it turns out that you do have two different patterns of EEG in the headache area, one pattern that occurs somewhat more often when you say you have a T headache, the other when you have an L. But the correlation is not perfect, so sometimes when you say it’s a T, he tells you the EEG shows an L. And when he says that then you say, yes, it could have been an L, I wasn’t sure. Without the neurologist, you couldn’t tell the difference. Is that really a category?

      It could be worse. (2) The correlation between the EEG and T or L could be perfect, except the area that distinguished them had nothing to do with headache activity: it was correlated only with a verbal area, and the likelihood of saying “T” or “L”. Is that really a category of headache?

      Pain is a “subjective” category, and it’s real (as is every feeling, when it is being felt). But the “hard problem” is explaining the causal role of a signal from yourself to yourself. (And this is not about the usefulness of talking to yourself…)

      (Patience. We’ll get to this…)

      Delete
    2. Thank you professor, I think I understand your points. Would what I was thinking of as "subjective categorization" be more like doing the right thing with the right kind of thing, where the "right thing" is affected by a person's personal beliefs, circumstances, past experiences, etc. rather than facts about the world? In this logic, I think that I would be valid in categorizing my T and L headaches because it was the right thing to do with the current information available (my personal experiences with the headaches).

      Delete
    3. With "subjective categories" the problem is with the error-correcting feedback on what is "right" or "wrong": there isn't any feedback.

      With explicit (conscious) hypothesis-testing, the hypothesis is the subjective part -- a hunch as to what is the distinguishing feature ("maybe the striped mushrooms are the safe ones to eat") -- and the feedback is objective (you get nauseous or you get nourished after you eat a striped one).

      But what is the feedback for "right" or "wrong" when you call it a T headache or an L headache? The philosophers call this the "problem of error," which is that there is no error feedback, no consequences of being right or wrong. Hence there can be no error. It is more like passive, unsupervised learning. There is no "supervised" part. It's T or L simply because you believe it is so (which really just means you feel it is so, for “subjective” simply means “felt”).

      Contrast that with the supervised case, where there are two medicines, one of which works for a T headache but not an L headache, and the other one works for L but not T. That feedback would correct you if you mis-categorized a T for an L or vice versa. But without it you're just as free to believe it’s a T or an L, with no consequences, as you are to believe in fairies.

      Sometimes these untested subjective beliefs can even be having bad objective consequences that you are simply not paying attention to, or denying, such as anti-VAX...

      (Some (but not all) of the points I’ve just made here are implicit in Wittgenstein’s Argument that there cannot be a "private language" – a language that a person invents to refer to private feelings that only that person feels. Wittgenstein thinks that words get meaning from the fact that they are public, social, shared and jointly used by speakers. So an “error” is a wrong use of a word, and the corrective feedback comes from the speech community. W is right about the fact that words are for communication and that it must be possible to make and detect an error, but wrong that this is based only on the shared rules of use of a speech community. Can you see W’s error? Hint: categories and mushrooms.)
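      The two-medicines contrast above can be made concrete with a toy sketch (the episode data, labels, and update rule are all invented for illustration): with corrective feedback, a miscategorized "T" gets flipped; delete the feedback line and there would be no error signal, hence no real category.

```python
# Toy sketch of the two-medicines case: corrective feedback is what turns
# "T vs L" into a real (supervised) category. All names/data are invented.

def learn_with_feedback(episodes, works_for):
    """Guess a label for each felt quality, take the medicine for the guess,
    and let the outcome (works / doesn't work) correct the guess."""
    beliefs = {}  # felt quality -> label, as corrected by feedback
    for quality, true_label in episodes:
        guess = beliefs.get(quality, "T")          # initial hunch
        worked = works_for[guess] == true_label    # objective consequence
        beliefs[quality] = guess if worked else ("L" if guess == "T" else "T")
    return beliefs

works_for = {"T": "T", "L": "L"}  # medicine T relieves only T-headaches, etc.
episodes = [("throbbing", "T"), ("light-headed", "L"), ("light-headed", "L")]
beliefs = learn_with_feedback(episodes, works_for)
```

The "light-headed" hunch starts out wrong ("T"), the medicine fails, and the failure — not any private conviction — is what corrects the belief.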

      Delete
  13. “If the world of shapes consisted of nothing but boomerangs and Jerry Fodor shapes, an unsupervised learning mechanism could easily sort out their retinal shadows on the basis of their intrinsic structure alone”. Does this mean that there could be both kinds of learning models applied to the same input affordances? For example, if we take the scenario given in the reading, the unsupervised categorization mechanism could easily distinguish the boomerang shape from the Jerry shape. However, if the Jerry shape and the boomerang shape are both coloured varying shades of blue, would this lead to error-correcting feedback? Or does the total entity of blue Jerrys and blue boomerangs mean that a supervised model is necessary because there are now two methods of categorization?

    ReplyDelete
    Replies
    1. After attending class, the answer to what I initially proposed here has become clear. The updated example that I included with the Jerry and boomerang shapes becoming varying shades of blue would still be categorized as unsupervised learning. What would allow the switch from unsupervised to supervised is if there were consequences attached to the colour or shape perception. Hopefully I am understanding the full breadth of the two types of learning now.
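      A toy version of the boomerang/Jerry-Fodor case (the numbers stand in for some invented shape descriptor): plain k-means with k=2 sorts two well-separated clusters by their intrinsic structure alone, with no corrective feedback anywhere in the loop.

```python
# Minimal 1-D k-means (k=2) as a stand-in for unsupervised sorting by
# intrinsic structure. The "shadow" values are invented illustration data.

def kmeans2(points, iters=10):
    cent = [points[0], points[-1]]  # crude initialization from the extremes
    groups = [[], []]
    for _ in range(iters):
        groups = [[], []]
        # Assign each point to its nearest centroid...
        for p in points:
            groups[0 if abs(p - cent[0]) <= abs(p - cent[1]) else 1].append(p)
        # ...then move each centroid to its group's mean.
        cent = [sum(g) / len(g) if g else c for g, c in zip(groups, cent)]
    return cent, groups

# Two tight bundles of "retinal shadow" descriptors, say:
shadows = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
centroids, clusters = kmeans2(shadows)
```

Because the two bundles are far apart relative to their internal spread, the clustering is trivial — which is exactly why the paper says this easy case doesn't need supervision, whereas consequences (as noted above) are what would force a switch to supervised learning.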

      Delete
  14. To me, this part of the paper was most interesting: "people who are perfectly capable of sorting and naming things correctly usually cannot tell you how they do it (Rosch & Lloyd, 1978). They may try to tell you what features and rules they are using, but as often as not their explanation is incomplete, or even just plain wrong"

    I did not know that categorization mostly happens automatically or unconsciously. When putting this into context with the rest of the paper, it is useful to consider how much of our cognition actually rests on our unconscious sorting and knowledge. It is as if consciously, we gather the information with our interactions in the world, for it to unconsciously get sorted out and categorized into a mental library we can draw from.

    ReplyDelete
    Replies
    1. Your last point about consciously gathering information to then be sorted and categorized unconsciously is interesting and brought to mind part of a reading for another psychology class that I did today - specifically, it noted that we do not actually spend a lot of our waking life actively attending to stimuli. Thus I would be interested to know more about how much of categorization really takes place unconsciously vs consciously. If what I've read applies to categorization, what are the mechanisms that would allow us to categorize while not actively attending to stimuli in our environment, or do we require active attention to categorize?

      Delete
    2. Remember the problem of remembering the name of your 3rd grade schoolteacher? If much or most of cognition were observable through introspection, cogsci’s “easy problem” would be solved. Most of what’s going on in our brains to produce our capacity to do what we can do is not done consciously by us (although we are awake and conscious while it’s being done).

      Category features are either detected through innate feature detectors, as with colors, or they are learned by supervised learning. Usually the learning is “explicit,” so that those who succeed in learning the category can report which features they are using, usually because they have been paying active attention in their trial and error, testing features one by one.

      (But there are also interesting cases where they don’t know. Either they say they don’t know how they are managing to categorize correctly or they say they are doing it with features that are actually wrong. But this kind of dissociation between success and explicit report is rare.)

      In any case, even knowing which features you are using does not explain how your brain learned to detect them, any more than you know how you remembered your 3rd grade schoolteacher’s name.

      And with category learners who do trial and error across days or weeks, there is sometimes an improvement between the level of success at the end of one session and the beginning of the next, suggesting some unconscious “consolidation” going on in between.

      Delete
    3. I also find this part interesting given Deep Neural Networks' resemblance to biological neural architecture and their uninterpretability. This mutual relation allows for the possibility that once we are able to interpret DNNs, we may have a blueprint to explicitly explain how categorization is learned in the human neural system.

      Delete
  15. So is categorization a function?
    For example,
    We want to recognize a bird.
    If it has wings, eyes, feathers, a beak = bird (in this case we ignore the color of the bird). So the input is our perception of the bird and the output is the concept of a bird.

    In that case categorization is a computation that exists within a symbol system. Or is the computational aspect only a part of categorization?

    But, other animals can also use categorization. A dog can decide what is food (with smells, look, etc.). In the case of categorization without language, what is the underlying process? From what I understand it is sensorimotor learning. In that case, it would involve more than just a computational function. It requires interaction with the environment.

    So, then is language the "computer language" that lets us symbolize those categorizations? Language in that case would be a tool that humans have developed to share more complex abstractions (e.g. how to perform a medical operation) or to share information that is not accessible to the agent (e.g. knowing what a zebra is when you are Canadian). The big advantages are that you don't need to have lived the sensorimotor "experience" to abstract it and that you can keep building on abstractions from generation to generation in a very effective way.

    -Elyass
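    Elyass's bird example can be written directly as a function — a feature filter under which many different inputs yield the same output. The feature set here is purely illustrative, not a claim about what really distinguishes birds:

```python
# Toy categorization-as-function: many inputs, one output, once the
# irrelevant properties (colour, etc.) are filtered out.
# The distinguishing feature set is a hypothetical example.

BIRD_FEATURES = {"wings", "eyes", "feathers", "beak"}

def categorize(percept):
    """Return 'bird' iff the distinguishing features are all present,
    ignoring everything else the percept carries (colour, size, ...)."""
    return "bird" if BIRD_FEATURES <= percept.keys() else "not a bird"

robin = {"wings": 1, "eyes": 2, "feathers": 1, "beak": 1, "colour": "red"}
crow  = {"wings": 1, "eyes": 2, "feathers": 1, "beak": 1, "colour": "black"}
bat   = {"wings": 1, "eyes": 2, "fur": 1, "colour": "brown"}
```

The red robin and the black crow pass through the filter to the same output, which is the "many inputs, same output" point from earlier in the thread — though, as the reply below notes, the computational filter is only part of the mechanism, since sensing and moving are not computation.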

    ReplyDelete
    Replies
    1. From my understanding, categorization is based on a function as you mentioned, where you are taking general features from a grounded symbol (ex: birds) and making it a group of its own. But as seen in the Funes story, this can only be done by ignoring all the details of the symbol (ex: ignoring the colour or size of the bird). If unable to ignore those types of details, we are unable to fit the symbols into groups/sub-groups, which will result in a high cognitive demand to remember all the different types of birds (for example).

      I believe that you are correct, and it isn’t just language but rather sensorimotor learning which allows animals as well as humans to have such categorizations. However, as mentioned in the paper, it is possible that there are some “innate” categories that humans may have due to evolutionary reasons that occurred through natural selection. I believe this would also apply to other species, since having such categories would make it easier to distinguish what could be considered food vs poison/dangerous.

      Delete
    2. Sensorimotor categorization cannot be purely computational because sensing and moving are not computation. And it can’t be purely verbal because words are meaningless until they are grounded in their referents, and that requires sensorimotor category learning. Nonverbal animals too can have some innate feature-detectors, and they too can of course also learn categories. In all cases the learning algorithm can be computational, but that’s only part of the mechanism.

      Delete
  16. “What the stories of Funes and S show is that living in the world
    requires the capacity to detect recurrences, which in turn requires the
    capacity to forget or at least ignore what makes every instant infinitely
    unique and hence incapable of exactly recurring.”
    From what I understand, characteristics of our environments are weighted, and invariant features are focused on, while excluding variations, to distinguish between categories of external objects. Furthermore, this dimensionality reduction (blurry to clear) is guided by innate feature detectors (colors, facial expressions) or by supervised or unsupervised learning, depending on the situation. When we reach abstract concepts, language can be used to bypass sensorimotor trial-and-error and we can thus learn through hearsay. Language can even let us understand concepts such as a peekaboo unicorn by instruction. Moreover, there is the concept of categorical perception, which compresses differences within a category and expands differences between categories, making a category’s perception distinct and more salient.
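    The within-category compression / between-category expansion idea can be illustrated with a toy function (the boundary and the warp factors are invented numbers, not real psychophysical values): equal physical distances come out perceptually smaller inside a category and larger across the boundary.

```python
# Toy illustration of categorical perception. The 500 "nm" boundary and
# the 0.5/2.0 warp factors are hypothetical illustration values.

def perceived_distance(x, y, boundary=500, within=0.5, between=2.0):
    """Warp a physical distance depending on whether the pair of stimuli
    falls on the same side of the category boundary or straddles it."""
    same_side = (x < boundary) == (y < boundary)
    return abs(x - y) * (within if same_side else between)

# Two physically equal 20-unit steps:
d_within  = perceived_distance(470, 490)   # both on the same side
d_between = perceived_distance(490, 510)   # straddles the boundary
```

Though the physical steps are identical, the boundary-straddling pair comes out perceptually farther apart — the salience effect described above.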

    ReplyDelete
    Replies
    1. The way dimensionality reduction (feature reduction) brings things into "focus" is by filtering out the category-invariant dimensions.

      Delete
  17. In order to categorize, doing the right thing with the right kind of thing, we are able to abstract, which is to select specific features and ignore other properties to distinguish between members and non-members of a category. Despite a few exceptions, such as facial expressions and colour detection, most categories are learned. Yet we have an innate ability to learn to categorize, which allows us to form categories and thus language. Is this ability particular to humans?
    For categories with physical entities, such as “apple” or “zebra”, there are clear distinctions between members and non-members, but the distinction is not always quite so clear. Take for instance colours: something that I would consider green can be considered yellow by others. Such subjective categories are dependent on personal interpretation, yet we are still able to communicate. While this example is benign, there could be instances where differences in categorization could be important, and lead to misunderstanding or miscommunication. Do these differences in categorization depend on the features we use to discriminate categories?

    ReplyDelete
    Replies
    1. You jumped too quickly from sensorimotor categories (whether innate or learned) to language. (Yes, "Stevan Says" only humans have language.) What is language, and how do we get from categories to language?

      About "subjective" categories, see other replies in this thread.

      Yes, differences in features used can lead to verbal misunderstanding -- but they can also be resolved verbally.

      What does it mean to say that categories are "underdetermined" by their features (which are approximate)?

      Delete
  18. To continue our journey of trying to build a T-robot:
    - If T2 can be passed/built with computation alone, the T2 passer would NOT understand the symbols, as Searle’s CRA shows. Why does T2 NOT understand? Because the symbols are not grounded.
    - To understand, we need grounding, and to ground we need sensorimotor experience. Why? (1) Mirror neurons that fire during vocal communication suggest that sensorimotor experience plays a role in language; (2) the symbol grounding problem illustrates that sensorimotor experience is required initially to detect features and ground the names of referents. So, as of last week, we knew that we need at least a T3 robot with sensorimotor capabilities.
    - So how exactly do we give this T3 robot with sensorimotor detectors the ability to ground? This week provides an answer: we build T3 so that it can categorize. In other words, we build an “autonomous, adaptive, sensorimotor system” that can “systematically and differentially interact” with its world. Our robot, much like cognizing humans, needs to have the “innate” ability to learn, which underlies the almost infinite range of behaviors that humans can perform through categorization.
    - This ties back to my skywriting for 1B where I asked the question, “Are TT-passing robots readily made? Resembling adult forms of humans?”. The answer is NO. We cannot build a T3 robot pre-wired with all the categories that adults already have (because it is likely an infinite combination) but we CAN give the robot ability to learn to categorize.

    - And to do that, we need first to understand how we humans “learn” to categorize. We learn through (1) unsupervised learning, (2) supervised learning and (3) verbal hearsay, the most powerful of all (and what differentiates us from other living organisms).
    -(1) Unsupervised learning is learning through exposure, when the “input affordances are already salient”. We cluster structural similarities and dissimilarities and enhance the contrasts.
    -(2) Supervised learning is learning with corrective feedback. We need this type of learning because categorization can also be context dependent.
    -(3) However, we have a far more efficient way of learning to categorize which is language. “Language allows us to acquire new categories (indirect grounding) without having to go through the time-consuming process of direct trial-and-error learning.”
    - Language allows us to name categories, ground categories, and recombine grounded categories. Consequently, this allows us to have an infinite number of categories through the power of language. (Effability thesis for language: anything can be defined/described in words.)
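    As a toy illustration of point (2), supervised learning with corrective feedback, here is a minimal perceptron-style sketch in Python (the “mushroom” features and data are invented for illustration, not a model from the paper): feature weights are nudged only when the learner does the wrong thing.

    ```python
    # Toy sketch of supervised (corrective-feedback) category learning.
    # The learner guesses member (1) or non-member (0), and its feature
    # weights are corrected only after an error.

    def train(samples, epochs=20, lr=0.5):
        w = [0.0, 0.0]   # weights on two invented features
        b = 0.0          # bias
        for _ in range(epochs):
            for x, label in samples:      # label: 1 = member, 0 = non-member
                guess = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
                error = label - guess     # corrective feedback (0 if right)
                w[0] += lr * error * x[0]
                w[1] += lr * error * x[1]
                b    += lr * error
        return w, b

    # Invented "edible mushroom" data: features = (has_ring, white_gills)
    data = [((1, 0), 1), ((1, 1), 0), ((0, 0), 1), ((0, 1), 0)]
    w, b = train(data)
    classify = lambda x: 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
    ```

    After training, the learner has implicitly weighted white_gills as the distinguishing feature and has learned to ignore has_ring.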

    So up until this point, we know the importance of grounding for understanding, the importance of sensorimotor capabilities for grounding, and the importance of language for the (plenary & efficient) grounding of categories.

    ReplyDelete
    Replies
    1. Yes, there are other things to consider when building T3, like the innate categories that are coded genetically through evolution (like color perception), but the primary concern when building our T3 robot would be for it to have the ability to learn to categorize.

      Which leads me to ask: if learning to categorize includes selective forgetting, abstracting, feature selection and biased weighting, can a robot have these somewhat “finite” characteristics in its ability to learn? Finite in the sense that we humans cannot detect or remember every single little feature (even with chunking & recoding).

      Intuitively, thinking of the technology we have now, machines seem pretty binary: they divide information into either something they know or something they don’t. Once known, it isn’t forgotten, so I can’t really imagine a robot having a learning ability with “finite” characteristics like selective forgetting and context-dependent weighting. But that’s just my own thought. We’ll just have to see how cogsci advances.

      Delete
    2. Iris, your summary in your first posting is correct; but I don’t understand what you mean by “finite” (and maybe also what you mean by “robot” or “machine”) in your second posting. I’m also not sure what you mean by “binary.”

      Supervised learning can certainly be done by computers, and hence also robots, today. You seem to be imagining a hypothetical problem that does not exist in practice.

      In binary codes (there can be other codes too), symbols are either 0 or 1. That is enough to code anything discrete. You can code all of language in letters and letters in Morse Code, so just 0’s and 1’s. A Turing Machine just needs 0’s and 1’s.

      But categories are “binary” too, in the sense that there is a right thing (1) and a wrong (i.e., not right) thing to do (0) with a member of a category. In the case of language, the right/wrong thing would be to say the category’s name.
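      A small Python sketch of both points (an invented toy example, not from the reading): any discrete message can be coded in 0’s and 1’s, and a category itself is just a 0/1 (right-thing/wrong-thing) function.

      ```python
      # "Binary is enough to code anything discrete": each letter of a
      # message becomes a string of 0's and 1's (its 8-bit character code).
      msg = "DOG"
      bits = " ".join(format(ord(c), "08b") for c in msg)
      print(bits)  # 01000100 01001111 01000111

      # And a category is "binary" in a different sense: a function that
      # returns 1 for the right thing and 0 for the wrong thing.
      def is_even(n: int) -> int:
          return 1 if n % 2 == 0 else 0
      ```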

      Delete
  19. I believe I have made myself clear in the past about my conviction that there is something missing when we use language to describe the world. This, again, at the risk of being enormously boring, is Lacan’s ‘Real’, Derrida’s ‘Traceless Trace’, G-d, etc. We have at some point decided that this is unimportant to cognitive science, and thus that ‘ontology’ is not interesting to us in this course. I can see how this could be argued for things such as mushrooms. From what I’ve understood, categorisation is what we do when we do the right things with the right kinds of things, or don’t do the wrong things with the wrong kinds of things (i.e., we eat the right kind of mushroom and don’t eat the wrong kind of mushroom), and we can learn these rules both through sensorimotor experience and by consulting the categories others have formed, referring these to our previous sensorimotor experience. So, if much of cognition is categorisation, then cognition deals in those verifiable, or supervised, choices or actions that are taken based on what is knowable. What else is cognition? If my conviction ends up being right, and there is indeed a fundamentally ‘unknowable’ thing (or, more accurately, non-thing) that lies at the base of our experience of everything, I have trouble imagining that categorisation could account for the ways in which this shapes us as cognising beings. Isn’t that ‘unknowable’ fundamentally uncategorisable?

    ReplyDelete
  20. I made several connections through the sections of this paper. The discussion regarding the compression of colour categories and how “we see them all as just varying shades of the same qualitative colour” is akin to category membership. For example, the category of birds is compressed to include all types of birds, regardless of their intra-category variation (i.e., ability to fly). So it is through the compression of categories, and the ignoring of certain differences, that we are able to identify the overarching category to which its members belong. Additionally, through our ability to discriminate, we are able to dissect overarching categories into smaller, more specific ones that highlight (rather than ignore) the variation within the category (i.e., birds that cannot fly vs. birds that can fly). In this sense, I agree with the statement “So if we accept that all categorization, great and small, depends on selectively abstracting some features and ignoring others, then all categories are abstract”, because we are the categorizer, and through our experiences (trial and error, hearsay, supervised learning, etc.) we are able to create ad hoc categories for whatever purpose we may need such a category for.

    ReplyDelete
  21. The discussion on feature selection reminds me of a problem in computational simulation: how to choose the right level of abstraction when we use a simulation to mimic or describe real objects or a dynamic system, so that it can serve as a tool to test our theories. For humans, as pointed out in the discussion, since ‘dimensionality reduction’ occurs either through innate invariance-detectors or through acquired invariance-detectors, there is no infinite feature list for every subject making everything equally like everything else.
    As mentioned earlier in the article, Funes’s infinite rote memory gives him a handicap: because he is unable to ignore selectively, he cannot grasp any concept in his everyday perception, in people’s ordinary life. In the article, Professor Harnad points out that, strictly speaking, Funes could not even exist, and describes Funes as a passive sensorimotor system.
    His inability to abstract features and ignore the rest is due to his defect of not being able to forget, which makes him remember everything. I wonder if we will encounter the same problem when building T3 robots. With the ‘same’ sensorimotor abilities, T3 robots could at least experience some of the qualitative properties of ‘conscious’ experience, our experience. However, since T3 robots differ from humans at the cellular level, the subjective, ‘what-it-is-like’ properties of their experience may still differ from ours. From my understanding, these qualitative properties of experience are features that could be categorized with supervised learning to reach behavioral equivalence to humans, passing T3 by doing the right thing on the basis of analyzing sensory inputs. I wonder whether such experience could be grounded to some degree in T3 robots; if so, building a T4 robot would retain no further practical value, since human beings themselves abstract features at different levels based on their different sensorimotor abilities.

    ReplyDelete
  22. In section 4, we see that in order to categorize, meaning to respond differentially to different kinds of input or stimulus, we need to be able to learn, and that depends heavily on our ability to consistently use our sensorimotor capacities to interact with the world in different ways as we grow. This made me think about when we spoke about unsupervised learning in class. From that discussion I learned that unsupervised learning occurs when we figure out what is what by ourselves, without any supervision or guidance to serve as cues for how to do it. I know that this is a huge aspect of learning in general, and of our ability to categorize, because oftentimes we need to make inferences from our environment and from observations of others’ experiences to learn what labels to attach to things, and this makes me think about the impact of our subjective feelings when it comes to categorization. Is feeling a crucial component of giving ourselves corrective feedback and learning from our environment, and therefore needed for categorization? Or is observation enough?

    ReplyDelete
    Replies
    1. Hi, Adebimpe. I believe feeling would be important for corrective feedback, because you need to feel, at some level, what something is like to be able to categorize it. (I am assuming that by feeling you mean, to use some weasel words, consciously experiencing the world; my apologies if I’m misunderstanding.)

      I’m not sure if observation and feeling can be mutually exclusive here. A T3 robot has the capacity to observe the world through sensorimotor interactions, but in so doing, it is also grounding whatever it interacts with and therefore also feels what it is like to know what the things it interacts with are.

      I have a feeling that you don’t quite mean that kind of feeling, though (get the pun? hehe). I would say everyone experiences the world differently, and so it makes sense that everyone would categorize according to their experience. If, for example, I have had a bad experience with dogs in the past, I would categorize dogs in the category ‘danger’ (knowing that ‘dogs’ itself is a category). If on the other hand, I absolutely adore dogs and know a lot of things about dogs, I will probably categorize dogs differently, maybe in terms of breed, aggressiveness, etc.

      Delete
    2. I think this is a really good question. But I would ask how you intend to differentiate feeling and observation, for isn’t observation, or seeing, simply a form of feeling? Observation is just another weasel word for feeling.
      I believe the distinction you are actually intending to make is between learning for oneself and learning through others, which are both informed by feeling. In the first sense, one learns from first-person feeling with one’s own sensorimotor capability. For example, you may categorize lemons as inedible because you find them too sour for your subjective taste. This would be unsupervised learning, as the input you receive is not labeled, and no other person is intervening to help you classify this lemon as undesirable. However, in the second case, you may observe (see) another person eat a lemon and learn to categorize lemons as inedible based on their strong adverse reaction. I believe that observation is enough to learn, as we learn through feeling (seeing) both by inference from the experience of others and through internal firsthand experience. A question I have is whether observation without any intervention (think watching someone eat a lemon without them being aware that you are there) is considered supervised or unsupervised. When you watch someone, you are not being directly and verbally told that the lemon belongs to the sour, or inedible, category, but the information is nevertheless conveyed through another person. How does one make the distinction between supervised and unsupervised in this case?

      Delete
  23. During most, if not all of this reading, I was thinking about how robots do or do not fit the descriptors. Robots fit in the categories: (1) sensorimotor systems (lidar, sonar, radar etc), (3) the ability to categorize areas, (4) learn — the mapping and routes they’ve been to, (5) innate categories — preprogrammed/ hard coded information, etc.

    So with that, my question is: do we need to completely possess/pass/meet the requirements of the categories presented in order to be a cognizing thing? I’m assuming no, because some sections are really specific and may leave out populations which we know (or assume — re: other-minds problem) cognize. For example, section 2, Invariant Sensorimotor Features (“Affordances”), covers how light stimulation affords color vision for those with the right sensory apparatus, but not for those who are color-blind. So other sections which have specificity and rule out populations would refer to this point and re-include them in the cognizing category. I am supposing that, broadly speaking, all of the subsections in this paper fall under sensorimotor systems, which is what T3-passing requires?

    ReplyDelete
  24. Professor, could you please clarify vanishing intersections? Were Fodor and others saying that for any pairs of words/categories, when looking at their invariances shared by sensory shadows, there is nothing in common? Also, could you please re-explain sensory shadows? I am especially having trouble understanding this term.

    ReplyDelete
    Replies
    1. “Shadows” just means the shape of sensory stimulation projected onto our sensory surfaces – a geometric projection of shape on our retina, the frequency of light on our retinal cones or of acoustic vibrations on our eardrum, a pattern of contact on the skin, the chemical effects of a fragrance on the mucous membrane of our nose, etc. These iconic (analog) patterns may be preserved upstream in the brain too (tonotopic projection).

      If you look at all google images of an apple you will not find any simple sensory feature that they all have in common that distinguishes them from images of tomatoes or pears.

      [The intersection between two sets is what they both have in common. It’s empty when they have nothing in common. Vanishing intersections of simple features are why Funes the Memorious could not see why we call all dogs “dogs.”]

      But there has to be some way we do manage to sort things as to whether they are apples, tomatoes or pears. That way must be based on some kind of composite “feature” – not a simple monadic feature such as all being red, or all being green. That composite “feature” would be more like being round-or-green.

      The composite feature can be a lot more complex than that, but it has to exist, if we are able to categorize correctly. If it is learned directly by supervised sensorimotor learning, it would have an implicit complex sensorimotor feature-detector. If it is learned by verbal instruction, it would have an explicit verbal description.

      The set of all things that are both square and round is empty (vanishes), but the set of all things that are either square or round is not empty.

      With AND, OR, and NOT you can define all “Boolean” rules for composite feature combinations.

      So the common feature of a category that has many non-intersecting simple sensory features is whatever obeys the Boolean category rule. (That would need something like a direct “either X or green” detector or a verbal definition of it.)
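      As a toy sketch of such a Boolean composite feature-detector in Python (the features and rule are invented for illustration; a real category rule would be far more complex):

      ```python
      # Invented composite Boolean rule for a toy "apple" category:
      # (red OR green) AND round AND NOT pear-shaped.
      # With AND, OR and NOT, any Boolean rule over simple features
      # can be expressed.

      def apple_detector(f: dict) -> bool:
          return (f["red"] or f["green"]) and f["round"] and not f["pear_shaped"]

      # No single simple feature is shared by all members (some are red,
      # some are green), yet the composite rule sorts them correctly:
      print(apple_detector({"red": True,  "green": False, "round": True, "pear_shaped": False}))  # True
      print(apple_detector({"red": False, "green": True,  "round": True, "pear_shaped": False}))  # True
      print(apple_detector({"red": False, "green": False, "round": True, "pear_shaped": False}))  # False
      ```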

      Delete
    2. So can we say that the common feature of a category is a composition of those simple sensory features?
      Because of this composite-feature property, can we say that two categories will have nothing in common when their composite features are compared as wholes? For example, "red-OR-green-AND-round" and "green-OR-yellow-AND-narrow-on-top-wide-and-round-on-bottom" both have green as one element of the composition, but since the structures are {red or green}&{round} and {green or yellow}&{narrow on top, wide and round on bottom}, the intersection at green does not count, as the other elements are not fulfilled.

      Delete
    3. Boolean bracketing -- ((a AND (b OR c)) or ((a AND b) OR c)) -- would work the same way as the Boolean connectives AND and OR do, whether in an implicit, direct sensorimotor feature-detector or an indirect, explicit (grounded) verbal feature-descriptor.

      All categories are underdetermined (except (1) formal mathematical categories like "even numbers" and (2) "what it feels like to feel when something is being felt" (i.e., the cogito/sentio)), and their feature detectors, whether implicit or explicit, are approximate, not exact or exhaustive or even unique.

      (It's good to ask about such interesting, deeper points, but you can be sure I will never ask about them in an exam!)

      Delete
  25. In Section 28, Professor Harnad asks whether there is any sense in which “primroses or their features are ‘realer’ than prime numbers and their features”? He answers that both are absolute discriminables, in the sense that “both have sensorimotor affordances that I can detect, either implicitly, through concrete trial-and-error experience…or explicitly, through verbal descriptions”
    It is an interesting answer to me. However, I do not fully grasp the notion of affordance here. In Section 2, an affordance of a sensorimotor system is what can be extracted from its motor interactions with its sensory input. It can be color vision, depth perception, or detecting an invariant sensorimotor feature. This discussion of affordance then leads to the topic of categorization.
    In the context of Section 28, affordances seem to refer to detected invariant sensorimotor features. But then Professor Harnad goes on to make the point: “The affordances are not imposed by me; they are ‘external’ constraints, properties of the outside world, if you like, governing its sensorimotor interactions with me”. “That 2+2 is 4 rather than 5 is hence as much of a sensorimotor constraint as that projections of nearer objects move faster along my retina than those of farther ones”. It is because these are both affordances.
    My questions are: How is affordance related to categorization? Why is the proposition that 2+2 is 4 rather than 5 also affordance? Why is it a sensorimotor constraint?

    ReplyDelete
