Monday, August 30, 2021

5. Harnad, S. (2003) The Symbol Grounding Problem

Harnad, S. (2003) The Symbol Grounding Problem. Encyclopedia of Cognitive Science. Nature Publishing Group/Macmillan.

or:

Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1), 335-346.

or:

https://en.wikipedia.org/wiki/Symbol_grounding

The Symbol Grounding Problem is related to the problem of how words get their meanings, and of what meanings are. The problem of meaning is in turn related to the problem of consciousness, or how it is that mental states are meaningful.

 


 If you can't think of anything to skywrite, this might give you some ideas: 
Taddeo, M., & Floridi, L. (2005). Solving the symbol grounding problem: a critical review of fifteen years of research. Journal of Experimental & Theoretical Artificial Intelligence, 17(4), 419-445. 
Steels, L. (2008) The Symbol Grounding Problem Has Been Solved. So What's Next?
In M. de Vega (Ed.), Symbols and Embodiment: Debates on Meaning and Cognition. Oxford University Press.
Barsalou, L. W. (2010). Grounded cognition: past, present, and future. Topics in Cognitive Science, 2(4), 716-724.
Bringsjord, S. (2014) The Symbol Grounding Problem... Remains Unsolved. Journal of Experimental & Theoretical Artificial Intelligence (in press)

87 comments:

  1. I’m not sure I fully understand the difference between symbol grounding and meaning that is laid out in this article, so any input would be appreciated :)

    By my understanding, symbol grounding refers to the fact that symbols in our heads are “grounded” in things in the real world. Our mind somehow connects symbols, let’s say the word “carrot”, to the real-world object of a carrot, so the symbol gains meaning that is grounded. The connection to a thing in the real world provides grounding because it avoids the infinite regress seen in the example of trying to find the meaning of a word by looking it up in a unilingual dictionary of a language I don’t know (see section 3). If someone were to ask “what is a carrot” and I said “an orange root vegetable”, they would then need to know the definitions of “orange”, “root”, “vegetable”, and so on, leading to an endless path of jumping from definition to definition (trying to explain one squiggle without inherent meaning by pointing to other squiggles without inherent meaning never leads to a final explanation). But I can give a real answer to “what is a carrot” simply by picking up a physical carrot and saying “this, here”. My explanation is grounded in a thing with inherent meaning (a physical carrot does not need to refer to something else for meaning, it just is what it is). And since there is a connection between the word “carrot” and a real-world carrot in my head, the word in my head has meaning [as a side note, we don’t yet understand how the word in my head is connected to the real-world object - maybe this connection arises through some form of associative learning, but this also wouldn’t be a final explanation, it just re-focuses on something new - associative learning - that needs to be explained]. Anyway, let’s assume there is some explainable mechanism by which I can pick out the real-world thing that the word “carrot” corresponds to; this gives the word “carrot” in my head a grounded meaning.

    And this is where I confuse myself, because it seems fitting to use the word meaning here, but if meaning is not simply associating “carrot” with a real-world carrot, then using “meaning” would be incorrect here. A T3 robot with “no one home” (section 9) could reliably associate the word carrot with a real-world carrot, but we wouldn’t necessarily say that this robot contains meaning, or that it understands what the word “carrot” means.

    Maybe meaning has multiple layers: (1) meaning as the association between squiggles and actual objects that ties those squiggles to reality, and (2) meaning as a subjective experience of understanding? And including this second layer, if we don’t know that our T3 robot is conscious of its understanding, that it *knows* what “carrot” means rather than just being able to display all the behaviour we would expect something that knows what “carrot” means to display, then we wouldn’t say it really contains the meaning of the word carrot within its functions? Of course, maybe this robot is feeling what we do when we understand, but we can never know. Once we move past an implementation-independent model of computationalism, Searle’s periscope disappears and we are back to not knowing what anything else is thinking or feeling. But I’m still confused about where the explanation for this additional, subjective experience of meaning meanings would come from, if it turned out not to just be a side effect of the mechanism of symbol grounding (and so something not present in our T3).

    Replies
    1. Additionally, I wonder how this whole *feeling* like I understand “carrot” thing would have come about. Consider "a robot that could pass the Turing Test, "living" amongst the rest of us indistinguishably for a lifetime” (section 9). This robot with its symbol grounding abilities would have all the adaptive advantages I have as a thing with symbol grounding + a subjective experience of meaning meanings, because it interacts with the world in the same way I do, so what would be gained by me feeling feelings and meaning meanings? Without an adaptive benefit for this quality, it isn’t clear that there’s a good evolutionary reason it would have come about (the possible explanation would be something about this quality being the side effect of another, adaptively useful quality).

    2. Lots of good points Caroline, and they’ll be coming up again in the skywritings and in later weeks. I’ll just touch on a few points now:

      (1) Meaning is not the same as reference. The referent of a word is the thing it refers to. And usually that thing is not an individual (unless the word is a proper name), it’s a kind of thing: a category. “Carrot” does not refer to “Carrot-Charlie,” the one in my hand right now. It refers, as you say, to any “orange root vegetable.” Being “orange,” a “root,” and a “vegetable” are features of carrots. They are also categories, and they have names. So if you already know what those names refer to, then the definition gives you a new name, “carrot,” and you now know what a carrot refers to. But if you don’t know what “orange,” etc. refers to, and, if you look it up, you don’t know what the words defining its features refer to, then you’re back to the symbol grounding problem. And to make things worse, not only is meaning not the same thing as reference, it’s not the same thing as grounding either!

      (2) But we’re not here to give riddles or keep secrets: Grounding is the sensorimotor feature detectors (usually learned, not inborn) that connect T3’s words to the members of the categories that the words refer to. But meaning is more than grounding and reference. It is also the capacity to string words together into a subject/predicate proposition like “A carrot is an orange root vegetable” or “the cat is on the mat.” We’ll talk about that more as we get to language in weeks 8 and 9. Just notice now that a proposition, i.e., verbal definition or description or assertion, is not just a name of a category.

      (3) But there’s more than reference, grounding, categories, and propositions: there’s also the fact that it feels like something to know what a word refers to, and to be able to identify it if you encounter it, and to define or describe it verbally, and to be able to understand definitions or descriptions or assertions about it. If you know what a word means, you can do all that, and it feels like something to be able to do all that.

      (4) So, putting it together: to know the meaning of a word = (a) the T3 capacity to (learn to) recognize, identify and manipulate the members of the category it refers to (i.e., grounding) + (b) the T2 capacity to produce and reply (T2) and respond in the world (T3) to verbal propositions that describe the members of its referent category + (c) the capacity to feel what it feels like to be able to do (a) and (b).

      But you’re right that no one has a causal explanation of how or why we need (c) (feeling) – and that’s why it’s called the “hard problem.” And because of the other-minds problem (which is not the same thing as the hard problem), T3 (or T4) is as close as you can ever hope to get to evidence that anyone else, living or synthetic, has not only (a) and (b) but also (c). A purely computational T2 lacks grounding; but meaning is not guaranteed by T3 (or T4) either.

    3. I want to touch on some things professor Harnad talked about in his response. Symbol grounding, as I understand it, is the act of associating an element (or elements) of the environment with an element (or elements) of the symbol system (i.e., language). Symbol grounding helps us to understand why classical computationalism is wrong. How can symbols be conscious on their own, when consciousness is an interactive process with the physical world (T3)?
      Now where this argument loses me a little is with the addition of the “meaning” component. We keep arguing about the importance of the “meaning” aspect of being conscious, but we haven’t really defined what we mean (pun not intended) by that. Here (c) refers to the capacity to feel what it is like to feel; isn’t that simply a subjective description that only exists in the symbol system? Wouldn’t that mean that this is completely irrelevant to the question we are asking?
      What I am trying to say is that “the capacity to feel what it feels like to be able to do” is not a capacity at all, it is just a description of what it feels like to be able to do. This descriptive idea can be widely different from one person to another, one organism to another, one conscious entity to another. I think we have a very hard time grounding that problem because it isn’t a problem at all. Our subjective experience of “feeling” is not an important part of the consciousness problem, but just a subjective lived experience of our interaction with the physical world that we try to describe inside our symbol systems. In other words, a mycelium network underground is just as conscious as me or my cat Za’atar; our particular subjective conscious experience is neither magical nor universal, it is just an evolutionary result that is somehow homogeneous (and I use “somehow” very generously here) across humans.
      -elyass

    4. When you refer to (c) and say that feeling what it is like to feel is a subjective description and is thus irrelevant, I’m a little confused as to what part is subjective. Firstly, due to the other-minds problem we can never know for certain whether another organism is capable of feeling what it is like to feel. That aside, there is still an argument to be made for subjectivism. The fact that feeling what it is like to feel may vary among organisms, if we are to assume that others DO feel, is not subjective. The feelings associated with feeling what it is like to feel may be subjective, but the underlying principle that they feel at all is supposedly objective. The subjective experience of “feeling” is an important part of the consciousness problem due to the fact that the capacity to do so exists at all.

  2. This article discusses the necessity of grounding for meaning, and from what I understand, the meaning is what allows us to determine the referent of a word. Grounding is necessary for this because we need a way to connect the word inside the head with the external referent, and grounding is how this happens (grounding is when we understand the words, like if they are in the language that we speak). If my understanding is correct, I can see how grounding is necessary for meaning because without understanding the word (ex. if it were in a language I don’t speak), obviously I wouldn't be able to determine the referent. What I am confused by is the ending of the article, where it says grounding is not necessarily sufficient for meaning. If we need a T3 to pass T2, and if a robot passed the Turing test, doesn’t that mean it can interact with the world in a way that allows for meaning? I'm having a hard time grasping how the TT could be passed without grounding. Is the argument that there is some unknown mechanism by which consciousness allows for it?

  3. I'm confused about one small thing. Can we really say that the symbol grounding problem (and other instances of embodied cognition, like mirror neurons) means that a T3 machine is needed to pass T2? I understand how the symbol grounding problem means that you need a T3 (specifically, sensorimotor capacities) to be able to use language like humans do and reach this feeling of meaning or understanding. However, wasn't what Searle showed that a machine can pass T2 without really understanding? In the sense that the TT can only evaluate behavior, and so a machine that can speak as if it understands the meaning of words, even though it only manipulates the words based on their shape, would still pass T2.

    Replies
    1. Hey Louise :)

      A preliminary note on your sentence "you need a T3 (specifically, sensorimotor capacities) to be able to use language like humans do and reach this feeling of meaning or understanding". Symbol grounding is necessary for meaning, but doesn't guarantee the *feeling* of meaning, or understanding. Understanding is (potentially) at a level above symbol grounding, so a machine that cannot understand could still have symbol grounding capacities.

      To take a stab at your question, Searle *is* imagining a T2 that can pass T2, but this is in a theoretical world where Searle's rulebook contains all information necessary to do this (the rulebook tells Searle how to manipulate all possible symbols he could receive and what to send in response). He assumes for the sake of his larger argument that this kind of all-encompassing rulebook is possible, because he's trying to show that even if everything an ideal T2 is meant to do were possible, it would still not understand, and so is not cognizing.

      However, his thought experiment doesn't do anything to prove that this kind of rulebook he's imagining is actually possible. Part of the reason we might say a T3 robot would be necessary to pass T2 is that it would likely be impossible to actually write a rulebook (or, program) that would be able to anticipate everything that might occur over a lifetime of interaction. To completely mimic human verbal behaviour, a machine would have to gain and ground its rules for manipulating symbols the same way we do - through interacting with the real world.

      Also, and I think this is the more important part, if we’re trying to reverse engineer human cognition, we’re trying to make a system that can do everything a human can do *by itself*. Even with a perfect program that tells it what to do, a T2 isn’t really passing the T2 test itself, whatever human programmed it is passing the test. To *independently* gain the ability to pass a T2 test, a machine would have to be able to interact with the world, as it couldn't develop its own system of grounded symbols that it could manipulate to verbally interact like a human without being able to interact with the physical world.

    2. Spot-on, Caroline, except that T2 is interacting with the world, though only verbally -- with the words and the speakers of the words, and not with the referents of the words.

      Also, best to forget Searle's walls and rulebook and think only of a (hypothetical -- and ("Stevan says") impossible) T2-passing algorithm; impossible because of the symbol grounding problem. If a Chinese T2-passing algorithm had been possible, it could be executed by anyone without understanding Chinese.

  4. Just a little bit confused about numbers and the symbol-grounding problem…

    Professor Harnad explains that for a symbol to be grounded, it must be connected to its referent. A formal symbol in this example is a word, like “bunny.” The referent is the thing that the word/symbol picks out in the world. So, the referent of the symbol “bunny” is a bunny in the world.

    This is the part where I get a bit confused: If a symbol is manipulated according to rules that are based on the symbols' shapes, not their meanings, what about symbols that are not as straightforward as “bunny”?

    It is my understanding that whatever system contains a symbol must interact by sensorimotor means with the symbol’s referent for the symbol in question to be grounded. Otherwise, the symbol may be grounded by a definition containing some words that are grounded in this same sensorimotor manner. But, what about something like a number? What is the referent that a number interacts with?

    If anyone has a good example of sensorimotor grounding of numbers that would be super helpful!

    Replies
    1. I think you are talking about higher order abstractions.

      To a first approximation, a referent is something you can point to: "That's an 'X'", in response to "What's an 'X'?"

      If X refers to a physical thing, like a bunny, you can point to examples. "That's a bunny, that's not." ("Bunny" refers to a kind of thing (i.e., a category), not an individual, unless you are referring to a particular bunny whose proper name happens to be "Bunny.")

      But what about the referent of "number"? Or "truth," "beauty," or "justice"?

      First, even with abstract categories like that you can point to examples: "That's a number, that's not." "That's true, that's not."

      All categories (even proper-named individuals) are "abstract" to varying degrees, because categorizing is already abstraction. (We'll discuss this more in Week 6.)

      To abstract is to single out certain features -- the ones that distinguish the members of the category from members of other categories -- and ignore the rest. There are verbal definitions of "number" and "truth," and "beauty," and "justice." Look them up. They will be defined or described on the basis of features that distinguish their members from members of other categories. But in order to understand the definition, you have to already know what the names of the features refer to.

      The symbol grounding problem is that it can't be names and verbal definitions of features all the way down. The least abstract level of abstraction is the sensorimotor one, where you simply point to what is and is not a member of the category that is referred to by the category name. There are still features involved -- sensorimotor features that distinguish members of the referent category from non-members. But the distinguishing features are sensorimotor features that are not yet named. Your brain has to be able to detect and abstract those features in order to ground the name of the referent directly through sensorimotor experience.

      But once you have grounded enough referents directly, the names of higher-order categories (like "truth" or "justice") can be grounded indirectly, through words describing their features (as long as the words in the description are already grounded).

      Step through the (computation-like) power of verbal definition by considering the indirect steps leading to the grounding of the definition of "number."

      (Exercise: What is the difference between the referent of a word and the referent of a sentence?)
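
      (For anyone who prefers to step through it in code, here is a minimal toy sketch of direct vs. indirect grounding, in Python. The mini-dictionary and the "directly grounded" base set are invented for illustration; real direct grounding is learned sensorimotor feature detection, not a lookup table.)

      # Toy sketch of direct vs. indirect grounding (illustration only).
      # Words in GROUNDED_DIRECTLY stand in for categories whose distinguishing
      # features were learned through sensorimotor experience; any other word
      # can only be grounded indirectly, through a definition whose content
      # words are themselves already grounded (directly or indirectly).

      GROUNDED_DIRECTLY = {"one", "more", "thing", "count"}

      DEFINITIONS = {                            # invented mini-dictionary (content words only)
          "two":     ["one", "more", "one"],     # "one more than one"
          "number":  ["count", "thing", "two"],  # "what counting things (one, two, ...) gives"
          "justice": ["fairness"],               # its defining word is not in this toy lexicon
      }

      def is_grounded(word, seen=None):
          """Grounded directly, or indirectly via already-grounded defining words."""
          seen = set() if seen is None else seen
          if word in GROUNDED_DIRECTLY:
              return True
          if word in seen or word not in DEFINITIONS:
              return False   # circular or missing definition: the regress is not escaped
          return all(is_grounded(w, seen | {word}) for w in DEFINITIONS[word])

      print(is_grounded("number"))    # True: grounded indirectly, step by step
      print(is_grounded("justice"))   # False here: its defining word is still ungrounded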

    2. As an attempt at the exercise: the referent of a word is just an entity or category, as in Prof. Harnad's comment to Caroline, be it abstract or real in the world, whereas the referent of a sentence is the relation between the entities that the words in the sentence refer to, and it is always abstract. Sometimes words can refer to pure relationships as well, for example, marriage, friendship and so on, but they lack the other entities that appear in a sentence. "A person is married to another person" is not equal to the word 'marriage', because there are other entities involved in the sentence.

    3. Zilong (part 1): You are right that every (content) word just refers to a referent, whether a more concrete referent, such as "afraid" (an adjective), "apple" (a noun) or "attach" (a verb), or a more “abstract” referent, such as "ambiguous" (adj), "artifact" (noun), or "adumbrate" (verb).

      The referents of all these words are categories. They can all be defined or described in words. Examples of the category’s members and nonmembers can always be pointed to (or pointed out verbally). All have distinguishing features that distinguish the members from the nonmembers. All features – such as “big” or “brother,” including relational features, such as “bigger-than” or “brother-of,” are also (actually or potentially) learnable, nameable and definable categories.

      But it would be very hard, perhaps impossible, to ground the second three words above (the ones with the more abstract referents) through direct sensorimotor experience. They have to be defined verbally, hence indirectly, in words, by describing their distinguishing features (but the words referring to those features have to be grounded, whether directly or indirectly).

      A content word, no matter what its part of speech (noun, verb, adjective, adverb), always has a referent; the word can be defined (in words), and the referent can be described (in words). Almost all the words in the dictionary are content words.

      (Proper names of persons or places are content words too, but their referents are individuals, rather than categories. But individuals too can be defined or described verbally by their distinguishing features.)

    4. Zilong (part 2): A function word, like “if,” “or” or “not,” has no referent. It is defined (in words) by describing how it is used, that is, by formal rules, as in computation. There are very few function words, and they are mostly the same in all languages.

      (Some words, like prepositions, are borderline between content and function.)

      Definitions and descriptions (in fact, all complete affirmative sentences) are called propositions. Propositions have a subject and a predicate. They state something that is the case (i.e., is true). They have a “truth value” (either “true” or “false”).

      (Questions and commands are not propositions, but they can easily be transformed into propositions. “What is an apple?” -- “I am asking you what an apple is.” “Give me an apple” – “I am asking you to give me an apple.” And even an affirmative proposition like “That is an apple” is really saying “I am saying it is true that that is an apple.” “That” is a deictic pronoun, which, like prepositions, is borderline between content and function words.)

      We’ll be talking more about categories and propositions in Weeks 6 and 8, but keep in mind that the subjects and predicates of propositions are also categories (though they can be complex, composite categories) and a proposition is always stating that the subject category is contained in the predicate category. “An apple is a round, red fruit” is saying that the members of the subject category (apples) are contained in the members of the predicate category (things that are round and red and fruit). Otherwise put, being “round,” “red” and “fruit” are distinguishing features of being an “apple.”
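
      (If it helps to see that containment concretely, here is a tiny toy sketch in code. The miniature "world" and its category extensions are invented for illustration; real categories are picked out by feature detectors, not by finite lists of members.)

      # Toy sketch: "An apple is a round, red fruit" as category containment
      # (illustration only, in a made-up miniature world).

      apples       = {"apple1", "apple2"}
      round_things = {"apple1", "apple2", "ball1"}
      red_things   = {"apple1", "apple2", "ball1", "firetruck1"}
      fruit        = {"apple1", "apple2", "pear1"}

      # The predicate category: things that are round AND red AND fruit.
      predicate = round_things & red_things & fruit

      # The proposition is true if the subject category is contained in the predicate category.
      print(apples <= predicate)    # True in this toy world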

      Propositions are what makes language a nuclear weapon, just like computation. The counterpart of the Strong Church-Turing thesis for computation – that just about anything can be modelled computationally -- is the “Effability” thesis for language: that just about anything can be defined or described in words. (In fact, computation is part of language: In English, “Two added to two gives four” is a proposition, with a subject and predicate, but a purely formal proposition in mathematics; it gets semantic content when it is stated in English, or any language, although the semantics of the category “two” is very abstract!)

      (There, I’ll stop, as I’m sure this has given kid-sib a headache!)

  5. When reading about the means of picking out referents, I didn’t understand why the question ‘but what if the “entity” in which a word is located is not a head but a piece of paper, what’s its meaning then?’ was a question. My initial thought was that the answer was clear, doesn’t the entity “transfer” to the head of the reader, and from there the inner object can be connected to the outer object by whatever means the brain uses to process it? This idea was supported as I read on, and I understand there is no connection between the words on the paper and their intended referents if there were no minds mediating the intention to connect the two. However, the article then says this means that the meaning of a word on a page is “ungrounded”, and I’m not sure I agree then. Does this then mean that grounding is equal to meaning? Doesn’t that then mean that the only thing that provides meaning to a word/referent is our ability to connect them to one another? If this is the case, I’m not sure it’s a sufficient standard for ascribing “meaning”, because even a robot with sensorimotor capacities could make these associations without having the capacity to feel anything or develop personal connections to them.

    Replies
    1. Adebimpe, you bring up interesting points and I too had/have some confusion working through this part of the reading. From my understanding, grounding is not equal to meaning because grounding is the sensorimotor ability to connect word to the referent, whereas meaning goes further to enable the ability to also use the word in sentences and have the feeling associated with knowing the referent. I believe this would mean that simply connecting word to referent is not all that provides meaning!

    2. Yes, there's more to meaning than "simply connecting word to referent," but it's important, when you skywrite, to first read the skywriting that has already appeared in the same thread, and especially any replies from me. The answers to your questions appear earlier in this thread. Please read them and then let me know whether there are any further questions you still have.

  6. I actually found the concepts in the symbol grounding problem to be quite clear. I understand the meanings of words exist within contexts, distinct contexts that may signal the same referent, but still have separate meanings. It seems as well that in order to understand meaning, one must also know the referent to which a word refers in some sensory way. This is how one connects internal symbols to external objects, and this concept is implied by the need for T3 abilities in order to pass T2. I am nevertheless curious how one creates meaning around a referent that one has never expressly encountered externally. For example, I understand that we may be able to have a meaningful referent for the word “horse” even if we haven’t seen a horse in real life, as we’ve encountered them by proxy through images, videos, stories describing them etc. But what about things we have named and created words for that arise internally? Abstract concepts such as love, religious vigor, hatred, time… all things that do not seem to be consciously sensed through vision, hearing, proprioception etc. I'm curious if you have insight into how these internal agents, affects, and emotions are grounded?

    Replies
    1. Do you agree that if you don’t know what “zebra” refers to, then if I tell you “A zebra is a horse with black and white stripes” that’s a good enough first approximation?

      It describes a zebra’s distinguishing features so that (as long as you already know what horse, black, white and stripes refer to, and you can understand a subject/predicate proposition), you now know what zebra refers to, and could identify one if you ever saw one -- live, or in a picture or video. (The approximation can always be tightened with more words.)

      Now let me introduce the more abstract category "peekaboo unicorn":

      “A peekaboo unicorn is a horse with a horn, who vanishes without a trace if any organism’s or device’s sensors are aimed at it.” No sensorimotor interaction with it is possible, yet “peekaboo unicorn” is as well-grounded as “zebra” (as long as horse, horn, vanish, trace… etc. are already grounded).

      Remember that symbol grounding has nothing to do with ontology – i.e., with the branch of metaphysics concerned with what “exists.” This is not Philosophy; it’s just cogsci, and cogsci is about reverse-engineering how earthly organisms can do what they can do (the easy problem), including how they can categorize things (and, in the case of human organisms, also how they can name and describe things in words). That’s cogsci’s “easy” problem. But there’s also cogsci’s “hard problem” of how organisms can feel. (And it does feel like something to know what words or propositions mean.)

      Now go back to the skywriting Replies in this thread to understand how although grounding and reference are necessary for meaning, they are not sufficient. Subject/predicate propositions are needed to ground words indirectly through words.

      And remember that it feels like something to understand what a word or sentence means.

    2. "One property that the symbols on static paper or even in a dynamic computer lack that symbols in a brain possess is the capacity to pick out their referents.": in this week's reading, this left the strongest impression on me. We have been discussing in class a lot about robots and computers, and to me, this phrase seemed to really hit home. Using machine learning, we can devise an algorithm and teach a dynamic computer how to pick out the referent. But how would a computer make sense of a completely novel thing? (It's not easy to imagine a completely novel category for humans but sometimes new things are invented or stumbled upon and as a species humans demonstrated the capacity during their existence. So I am very interested in novel abstract categories like pikaboo unicorns)



      This paper made me stop and reflect when it elaborated on the distinction between meaning and referent, and it looks like others did too. It is explained that meaning is the rule for picking out the referent, but in everyday life - at least I think that we do this - we regard the referent and its meaning as an inseparable entity. I also find "systematic correspondence" remarkable in that we can find a system of interpretation that is coherent with the real world and consistent in itself up to a certain degree (I hope I understand systematic correspondence correctly).

    3. Hi Genavieve,

      For abstract concepts, I would say that we don't have a grounded understanding of the concept. Based on Stevan's response to Caroline above (where he puts together the parts of what meaning is), I think that we would have part of the meaning: “the T2 capacity to produce and reply (T2) and respond in the world (T3) to verbal propositions that describe the members of its referent category”. However, we would be missing the first part: “the T3 capacity to (learn to) recognize, identify and manipulate the members of the category it refers to (i.e., grounding)”. I would argue that most people don’t have a solid definition of “love” or “religion”, and even if they did, their definition would clash with other people’s definitions. This differs from “carrot”, where we all have the same definition, and we would consider the same objects carrots. Therefore, the capacity to recognize or identify these concepts is not there. I’m not sure if that’s an accurate interpretation of the symbol-grounding problem, but from my point of view, there’s no way to “ground” abstract concepts because that’s what makes them abstract.

    4. Hi Melody,

      To be able to "make sense" of the world, we see (or hear, smell, touch, taste) and manipulate objects and associate them with their names. This is the way we ground our first set of words, through perception and action (i. e., sensorimotor interaction, or manipulating a carrot). Once we have grounded enough words this way, we can start recombining them to "define" other words through verbal instruction, i.e., we ask our parents "What is a zebra?" and they respond "it is a horse with stripes" If we have previously grounded the words horse and stripes, then this definition will be enough to ground the new concept, saving us the time and energy of having to interact with every entity of the world to be able to ground it. This is referred to as symbolic theft (see: https://www.jbe-platform.com/content/journals/10.1075/eoc.4.1.07can).

      That being said, you are right that grounding abstract concepts through sensorimotor processes seems contradictory, but given that grounding is necessary for meaning, if there were no way to ground abstract concepts, they would not mean anything to us. One thing is for sure: learning abstract concepts relies more on language and propositions (like the zebra example) than on sensorimotor interaction with their referents in the world, but those words HAVE TO BE grounded one way or another, although there is still ongoing debate on how it is that we ground them. Do we ground them by verbal recombination of concrete concepts? Do we ground them based on events or situations to which they are related? Is it more about creating schemas or analogies? For a good opinion piece that reviews these perspectives, see: https://royalsocietypublishing.org/doi/10.1098/rstb.2017.0132

    5. Applying the "peekaboo unicorn":

      (I’ll now try to give an approximate definition [all definitions are approximate] of a very abstract category: “truth.” This will be a bit more than a definition or description: it’s an explanation, which is a long series of propositions – with all their content-words [in italics] grounded, either directly, by features learned from sensorimotor experience, or indirectly, by features described using words that are already grounded, directly or indirectly. Definitions tend to appear in dictionaries, whereas descriptions and explanations tend to appear in textbooks or encyclopedias or conversation.)

      “Truth is a feature that all true propositions have in common. A proposition is a series of words that refer to at least two categories (the subject and the predicate; the proposition is stating that the members of the subject category are members of the predicate category, as in ‘apples are round’). This proposition is stating that ‘It is true that apples are round.’ Propositions are either true or false. The proposition is true if apples are round. It is false if apples are not round. You can test whether it is true or false by observing apples or by consulting a dictionary. The proposition is also true if it would be self-contradictory for the proposition to be false, as in ‘this proposition is both true and not true,’ which is false.”

      Well, that definition/explanation was complicated, and long, and not very kid-sibly. But with enough time and space, it could be made more kid-sibly. And it ends up being like the definition of a peekaboo unicorn.

      Defining a more abstract category like “truth” is more complicated than defining a less abstract category like “apple,” but (1) all categorization is abstraction (because it is all based on abstracting features) and (2) all verbal definitions are approximate (except in formal mathematics) because their abstracted features are incomplete.

      If a definition is too approximate, because a feature has been left out, that feature can be added -- once it has a grounded name. That’s part of the nuclear power of words. A picture or object may be “worth more than 1000 words,” but with 10,000 words or 100,000 you can keep getting as close as you like, just as, in a computational model, a discrete approximation can be made as close to continuous as you like (the strong Church/Turing Thesis). Information is reducing uncertainty among finite choices in which it matters (to an organism) to “do the right thing with the right kind of thing.” But only in formal mathematics and logic can all uncertainty be not just reduced but eliminated. That’s why Descartes pointed out that what can be formally proved to be necessarily true in mathematics (“on pain of contradiction”) is one of only two kinds of certainty.

      What is the other kind?

  7. In "The Symbol Grounding Problem", it is said that "Note that both iconic and categorical representations are nonsymbolic". Could you clarify what being symbolic means? It seems to me that even iconic and categorical representations can also be formalized and transfered from one person to another (by education) in such a way that they can be physically seen, drawn, read and formalized in a Turing machine through symbols (perhaps language in our humans case). But maybe being symbolic means any arbitrary symbols can be assigned to a property to represent it while iconic and categorical representation are fixed and can't not be assigned any other representations. Although they can be named with different symbols, they themselves have to be invariant and thus nonsymbolic? Then to some extent, the iconic and categorical representation have to be grounded as well.

    And are being sensory and being symbolic in conflict with each other?

    Replies
    1. In formal computation, the shape of a symbol (“squiggle”) is arbitrary. It does not resemble the shape of whatever it might be interpretable as referring to. And the interpretation is not in the symbol system. Any object can be used as a symbol for formal computation, but squiggles are easier to manipulate than planets…

      In formal mathematics, symbols do not have or need grounding (though learning maths does).

      Grounding is essential for a special kind of symbol: words (in human language); but words do not resemble their referents either. The connection is grounded through sensorimotor feature-detection in learning to categorize.

      This should become clearer as we go further into the nature of language and its relation to sensorimotor categorization.
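
      (A tiny illustration in code of the point about arbitrary shapes: the rule below operates only on the squiggles' shapes, and the squiggles and the rule are invented for illustration. That the result can be read as addition is an interpretation we project onto the system; it is nowhere in the system itself.)

      # Toy formal symbol system (illustration only). The rewrite rule cares
      # only about the shapes "@", "+" and "=" -- any other squiggles would
      # do just as well.

      def rewrite(s: str) -> str:
          left, _ = s.split("=")      # shape-based: cut the string at the squiggle "="
          a, b = left.split("+")      # shape-based: cut the rest at the squiggle "+"
          return a + b                # concatenate the remaining squiggles

      print(rewrite("@@+@@@="))       # "@@@@@" -- interpretable as 2 + 3 = 5, but only by us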

  8. In my understanding, the CRA uses symbol manipulation to argue that there is no understanding taking place in a T2 computer, as it relies on ungrounded words - shapes without meaning - and can still (hypothetically) pass T2. But the brain does ascribe meaning to words regardless of shape, and is able to pick out the words' referents, hence grounding the words, so the brain must do something other than only implementation-independent symbol manipulation to generate the feeling of understanding a meaning.
    This relates to why we need a T3 to pass the T2 test because, to be able to interact with a human for a lifetime indistinguishably, the machine would need grounding (the ability to recognize, identify and manipulate words in reference to the members of the categories they refer to) to be able to generate word strings that have meaning to us. And because this is a sensorimotor capacity, a T2 machine does not suffice. However, this does not guarantee that the machine will be feeling understanding in the same way that we do, but it would be sufficient to pass T2.

    Replies
    1. That's right. The easy problem is easy. The hard problem is hard. The other-minds barrier is impenetrable (except to Searle's Periscope). Both T3 and T4 are grounded. But grounding is not meaning.

  9. From my understanding of the articles and previous skywritings, ‘meaning’ is the feeling of understanding, potentially having opinions about, and making associations between referents. It is the difference between understanding English and being able to work with Chinese by using rules. A T3 robot that can distinguish between stimuli (ex. accurately referring to a carrot due to the sensorimotor capacity it has) may demonstrate that it can associate words with referents (meaning the words are grounded for the robot), but there is nothing to prove that the robot can generate meaning from them in the ways that we can. We would not be able to recognize this in the robot because we still don’t understand the causal links in our own brain from referents to meaning and cannot recognize them in others, so we would not be able to definitively say that what is going on in the brain of the robot is the same as what is going on in ours. This is the other-minds problem.
    So my question then becomes, how do we answer the other-minds problem? Would this problem not need to be understood further for us to be satisfied with any kind of Turing robot? I understand that the goal of reverse-engineering is to demonstrate how cognizing is accomplished. But without developing theoretical frameworks to answer the other-minds problem, won’t we only have a chance to answer the easy problem (how we can do what we do) and not the hard problem (why we feel things)? I believe Prof. Harnad has said previously that Searle’s periscope allows us insight into T2 but will not answer any further questions because it only relates to computationalism. If my understanding is correct and this is true, why is the other-minds problem considered impenetrable and not the subject of more consideration? Shouldn’t this be where the field is focusing?

    Replies
    1. When you refer to what a T3 robot could or could not do, think of what Eric can do. Otherwise you are just presupposing limits on T3 capacity without evidence. There is no evidence yet that there can be a T3 indistinguishable from Eric, but there is also no evidence that there can't be. There are just toy robots today.

      The problem is not with the other-minds problem. There is no way to know for sure whether anyone else can feel -- not toy robots, not T3, not T4 and not other people. Turing's point was that if you can no longer tell them apart in what they can do, as with Eric, you have no better or worse reasons for believing that T3 does not feel than for any other person.

      (You seem to be assuming that T4 is necessary for feeling; ok, but what are your reasons? I often ask about that on exams.)

      Computationalism, and with it Searle's Periscope, was already left behind once we got to T3 (which is not and cannot be purely computational).

    2. Thank you for the reply! That helped me contextualize the other-minds problem more. I'm still looking to understand the way feeling relates to T4 in a more kid-sibly way. Is it correct to assume that T4 may be necessary for feeling, because feeling may rely on something biologically present in the mind that we don't yet understand (seeing as it cannot be computation)? So because T4 is biologically indistinguishable, we can assume that that feeling would be occurring in the robot the same way it is for us?

    3. Yes, Turing's suggestion that cogsci should not try to be "more royalist than the king" or "more catholic than the pope" means we should not ask for more from a T-test-passer than we ask from a real person with a mind, once we can no longer tell them apart.

      So T2 may not be enough, because people can do more than just talk (even though language is a nuclear weapon).

      T3 is as much as we can ask of Eric, but a neurosurgeon could ask for T4 (or T5) capacity: the causal mechanism of feeling may turn out to be physiological or biochemical. (It could even turn out that the causal mechanism of T3 is partly physiological or biochemical.)

      Just as it may require T3-power even to pass T2, it may require T4/T5 power to pass T3.

      We don’t know yet, because we’re still so far from anything that can pass T3.

      But that’s the question you’re asking, when you ask whether it might require T4/T5 to produce feeling-capacity.

      (“Stevan Says” that’s probably true.)

      It might even require T4/T5 to produce the doing-capacity of T3 (the “easy problem”).

      (Who knows?)

      But T-tests are just tests, and we’re just speculating on what conclusions we could draw from them, if ever cogsci successfully reverse-engineers a robot (or bio-robot) that can pass them.

    4. Madelaine, you’ve brought up some really good points I’ve also been thinking about. I’ve taken a couple days to reflect on and synthesize the past readings we’ve done, and something that keeps sticking out to me is that we seemingly “need” T(X+1) in order to pass T(X). Like Professor Harnad said, we are still extremely far away from anything that could pass T3, so discussing anything after T3 is only speculation. We need T3 to pass T2, but we would also hypothetically need T4 to pass T3, and T5 to pass T4. Does this mean we need T5 to pass T2?

      Something else I have been thinking about is related to the idea that, in the professor’s words, the causal mechanism of T3 may even be physiological or biochemical. In Fodor’s article from last week, we tried to understand why all of us — or all of science, really — is so obsessed with the brain and finding specific neural correlates for everything. Even after trying to approach this new reading through Fodor’s lens, I find myself asking the same questions. Is the need to find an exact neural explanation for everything a phenomenon present only in myself, or does all of science seem to have the same problem? Are we just spinning in circles trying to find answers that we may or may not be able to access?

    5. Emma, whether and how much and how and why T(X+1) is needed to pass T(X) is an empirical question: We'll have to try to find out. (“Stevan says” only a T3 could pass T2, but that’s just “Stevan says.”)

      All of science seeks causal explanation (natural laws) for what there is in the universe, from stars and galaxies to atoms and electrons. Biology (as far as we know) only occurs on earth, and through a process called natural selection, organisms and organs evolved.

      Causally explaining how biological organisms and organs work is not quite the same (except perhaps in biochemistry and biophysics) as looking for universal laws of physics or chemistry. That is why much of biology (including cogsci) is called “reverse-engineering.” – But it’s still the search for causal explanation (which calls for more than just correlations).

  10. “But in our underdetermined world, with its infinity of confusable potential categories, icons are useless for identification because there are too many of them and because they blend continuously[15] into one another, making it an independent problem to identify which of them are icons of members of the category and which are not!”

    The idea of a categorical representation struck me as interesting in this week’s reading. If “iconic representations”, or “internal analog transforms of the projections of distal objects on our sensory surfaces”, are insufficient for identification, due to the ambiguity of objects, the categories into which they are sorted, and the qualities by which they are sorted, then “categorical representation” allows for this kind of differentiation. I suppose I am confused as to what qualities categorical representations would have to have in order to allow us to identify things. They are, in a sense, simplified iconic representations, if I have understood correctly, that contain only the information that is useful for distinguishing them from other categorical representations. What kind of information is this? Does each categorical representation point to a million other categorical representations just to say “I’m not this”? How does one distinguish definitively between different types of things -- could such a process be so reducible?

    Replies
    1. I also struggled with the concept of categorical representations. By my understanding, categorical representations are selectively narrowed icons: unchanging, innate features of an object that we may discern with our senses. From what I understand, the categorical representation does not say that it isn’t something else, but rather just contains features to prove what it in fact is. In the example of a horse, this would be features that we have learned through experience as identifiers as to whether an animal is a horse or not. Correct me if I am wrong, but from my understanding this would be based on quite distinguishable features, such as its hooves, mane, shape of its head, etc. As for how categorical representations truly work, this is explained by connectionism in section 4. This theory states that seeing an icon, and then receiving feedback with the icon’s name, could cause a connection between the two, and thus create connections between the features of the object and the name.

    2. I think I maybe just disagree with the idea that representations can be distinguished from each other positively. Much of the field of Linguistics in the last forty-or-so years has been focused on undoing the idea that there is a distinct connection between our representation (or signifier) of an object and any real object (the purportedly signified). I am not sure that words give access to the world of physical objects (I'm not sure that those objects exist without words either, that's the whole problem). So I have a little twinge in my heart against the idea that representations could be distinguished positively by a set of physical characteristics.

    3. Sofia, input is whatever you hear or see, the incoming sensory and sensorimotor stimulation, and output is what your body does. When you are learning what to do, with what kind of thing (categorization), a neural mechanism in your brain learns to detect the sensorimotor features in the input that distinguish the kinds of things that you do this with or that with, in other words, the features distinguishing inputs from the members of one category from the members of other categories. Once you have learned the distinguishing features of a category, your brain can ignore the irrelevant features. The internal representation that is filtered through these feature-detectors is the categorical representation. There are many inputs but the feature-detectors filter it into much fewer categories – the ones you need in order to be able to do the right thing with the right kind of thing.

      Evelyn, neural nets learn the features that distinguish what category an input belongs to (i.e., what to do with what kind of input) by “supervised” (or “reinforcement”) learning: trial and error, with corrective feedback from the consequences of doing the right or the wrong thing. I think what you mean by “unchanging” is “invariant” (or better, covariant with membership in a category): the features that members of the category have and the nonmembers don’t have. All the rest of the features of the members are irrelevant (to that categorization, to doing the right thing with that kind of thing). But supervised learning is not just “association.” Feature detection requires a mechanism that can learn to detect and abstract the features from the sensory input.

      Sofia, words get in the way. And “representations” is a weasel-word. The symbol grounding paper is over 30 years old and I’ve since retired the word. Let’s forget it and just talk about inputs, outputs, words, and the internal processes that learn to connect the (content) word, the category name, to the sensory projections (input) from the category’s members.

      If we can learn to call apples “apples” and pears “pears” when we see them then we can distinguish them from one another. Words don’t give access: eyes and hands do. And our eyes and hands can distinguish apples from pears, so that we can do what needs to be done with them.

      And, yes, apples and pears exist without words – they would exist without people. But cogsci is not particularly about what things exist in the world. That’s left to physics, biology, and, maybe, philosophy. Cogsci is just about reverse-engineering how and why organisms are able to do what they are able to do with those things (apples, pears, people and words). (That’s the “easy problem”: T2, T3, and maybe T4.)

      And of course, also how and why it feels like something for organisms to be able to do what they are able to do with all those things… (“hard problem”).
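
      (For concreteness, here is a minimal sketch of the kind of supervised, error-corrected category learning described above: a toy perceptron with invented "sensory" features. Real learners and real sensorimotor features are far richer; the point is only the trial-and-error-with-corrective-feedback logic.)

      # Toy supervised category learner (illustration only).
      # Each input is a made-up "sensory" feature vector (roundness, elongation);
      # the label is the category: 1 = "apple", 0 = "pear".
      examples = [((0.9, 0.2), 1), ((0.8, 0.3), 1), ((0.3, 0.9), 0), ((0.4, 0.8), 0)]

      weights, bias = [0.0, 0.0], 0.0
      for _ in range(20):                                  # repeated trials
          for features, label in examples:
              guess = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
              error = label - guess                        # corrective feedback from the consequences
              if error:                                    # only errors change the feature detector
                  weights = [w + error * x for w, x in zip(weights, features)]
                  bias += error

      # The learned weights now act as a detector for the features that distinguish
      # the two categories; new inputs are filtered through it into "apple" vs "pear".
      print(weights, bias)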

  11. In order to give symbols meaning, they have to be concretized by the real world. I suppose this is why "grounding has to be sensorimotor". This all makes sense, though it's a shame we will likely never have the necessary test to determine with certainty if symbol grounding gives rise to conscious meaning.

    Replies
    1. Hi Laurier,

      I am tempted to say that symbol grounding should give rise to conscious meaning. Although we don’t do the grounding consciously, whatever word we think about consciously at any point is there in the conscious mind. When we read or say the word, we might have an iconic representation of that thing or a symbolic representation, but in the end, it’s there in the conscious mind.

      Although, upon reflection, I don’t know if I should be so sure about that. What if we also get the meaning subconsciously, like we do for symbol grounding? These two most probably happen in parallel. I remember Prof. Harnad saying at some point that we are the homunculi for the virtual machine, in the sense that we semantically interpret the symbols for the machine. But are we our own homunculi if we can actually interpret the symbols that we are grounding, assuming cognition is computation?

      I’m also interested in knowing whether the grounding system works the same way when we are not, or not fully conscious, like when we’re dreaming or in a coma. That might be a bit off-topic though.

    2. Laurier, grounding words has to be sensorimotor (whether direct or indirect) because the sensorimotor connection (whether direct or indirect) is the only connection of words with their referents. But grounding is not the same as meaning. (And for meaning there’s still the other-minds problem for everyone except the meaner/feeler.)

      Shandra, meaning is not just grounding (see earlier replies in this thread).

      “to know the meaning of a word = (a) the T3 capacity to (learn to) recognize, identify and manipulate the members of the category it refers to (i.e., grounding) + (b) the T2 capacity to produce and reply (T2) and respond in the world (T3) to verbal propositions that describe the members of its referent category + (c) the capacity to feel what it feels like to be able to do (a) and (b).”

      Unconscious (unfelt) meaning would mean unfelt (c), which makes no sense. (What does “subconscious” mean?)

      A homuncular explanation is no explanation, because it requires reverse-engineering the homunculus, which is us! (And I don’t have to interpret the meaning of the proposition “The cat is on the mat” if I know what “The cat is on the mat” means. If a computer emits the string of symbols “The cat is on the mat,” I can interpret them, because I know what “The cat is on the mat” means. But the computer doesn’t. And can’t. And that’s Searle’s point – about computationalism.)

      I have no idea what (if anything) it feels like to be in a coma.

    3. From how I interpreted Shandra’s comment, I agree the subconscious is not the right word, because having meaning is the feeling/understanding of a word. I would say that perhaps rather than subconscious, it might be more accurate to say that we arrive at meaning effortlessly or instantaneously with symbol grounding. I don’t know if this is actually true, since it probably just feels effortless or instantaneous, but I think the idea that meaning feels intangibly present, though not subconscious, is an important quality. I think the reason that Prof. Harnad is distinguishing that it is not subconscious is because it directly contradicts the importance of it being entirely conscious, and not much more, as the meaning is only internally known and processed by the meaner.

    4. Meaning is conscious (it feels like something to mean or understand a sentence), but what we are not conscious of is how the brain (or any device) produces the capacity to mean or understand a sentence. It's cogsci's job to reverse-engineer such capacities and come up with a causal explanation.

  12. I don’t agree with the amount of reliance computationalists place on their ideal algorithms. According to computationalists, “the future theory explaining how the brain picks out its referents (the theory that cognitive neuroscience will eventually arrive at) will be a purely computational one. […] It is essentially a computer program: a set of rules for manipulating symbols.” How can computationalists argue that computation will solve the symbol grounding problem and explain how the mind can arrive at a meaning derived from an external symbol? Their own definition of computationalism is that it is a set of rules for manipulating symbols. How is the manipulation of symbols meant to penetrate the symbol to derive a meaning from it? This seems like the recurring issue of regress ad infinitum. One can create a set of rules (an algorithm) for a given input to spit out a given output. Where in that series of events is the computer meant to understand the meaning of the inputs/outputs, when it is merely spitting out a new symbol in response to another one? This becomes even more unreasonable with the addition of the “implementation-independent” factor. A system that is implementation-independent essentially creates an internal universality in its functioning. This seems difficult to conceptualize, as there are a multitude of factors that would influence an individual’s interpretation of a symbol. As Harnad suggests, “one property that the symbols on static paper or even in a dynamic computer lack that symbols in a brain possess is the capacity to pick out their referents”. He also suggests that this is one property that could be contributing to the creation of meaning from symbols.
    The right computer program will have to be able to pass the Turing Test, and I do not think that the robot in question will be able to continue interacting with an external environment forever without interpreting meaning from incoming inputs and responding with meaningful outputs. I do not believe that meaning can be derived from an algorithm and thus, I do not believe that computationalism’s answer to the Symbol Grounding Problem will be able to pass the Turing Test.
    I would be interested in hearing what a computationalist’s response would be to the necessity of an implementation - dependent system.
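
    A minimal sketch of the point above (purely illustrative, not anyone’s actual model): a rule table that maps input strings to output strings by their shapes alone. The rules and the example sentences are invented; nothing in the system connects the symbols to zebras, stripes, or horses in the world.

      # Toy "T2-style" symbol manipulator: rules keyed purely on symbol shape.
      # The rule table and example strings are made up for illustration.
      RULES = {
          "WHAT IS A ZEBRA?": "A ZEBRA IS A STRIPED HORSE.",
          "IS A ZEBRA AN ANIMAL?": "YES, A ZEBRA IS AN ANIMAL.",
      }

      def respond(input_string):
          # Return the output whose shape is paired with the input's shape.
          # Nothing here picks out zebras, stripes, or horses in the world;
          # the mapping is from one meaningless squiggle-string to another.
          return RULES.get(input_string, "I DO NOT KNOW.")

      print(respond("WHAT IS A ZEBRA?"))  # the "right" squiggles out, no referent picked out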

    ReplyDelete
    Replies
    1. Katherine, I think your comment is in the wrong thread. It should be posted in Week 3, which is about what is wrong with computationalism (“cognition is just computation”). And please read the Replies to the Week 3 skywriting.

      Week 5 is about trying to solve that problem (how to give a T3 the capacity to connect its symbols (words) to their referents, i.e., “do the right thing with the right kind of thing”). That problem is computationalism’s problem. You are still criticizing computationalism, which we laid to rest in Week 3. Please read the other skywritings and the replies by me and Ferna.

      You should also make sure you understand the other-minds problem as well as the difference between the easy problem (doing) and the hard problem (feeling).

      Delete
  13. The end of Harnad (2003) (if I understand correctly) states that meaning is composed of two parts: (1) sensorimotor grounding and (2) consciousness. Since T3 has sensorimotor capacities, it is grounded. However, how can we ever evaluate if a robot such as T3 fulfills the second requirement for meaning? Professor Harnad says in a reply to a previous skywriting that the problem isn't with the other-minds problem because Turing's point is that the robot must just be indistinguishable from a human. However, even if the robot doesn't understand meaning, couldn't it still be indistinguishable? For instance, in Searle's Chinese room, Searle doesn't understand Chinese but he could be writing Chinese to someone who can't tell that he doesn't understand Chinese.

    ReplyDelete
    Replies
    1. Melody,
      I’ll try to answer some of your questions to the best of my ability. If you read the replies above, Prof Harnad states that knowing the “meaning” of a word is equal to three parts: 1) Grounding, 2) Producing and Replying to Verbal Propositions, and 3) Feeling what it is like to be able to do 1 & 2. So, you are right about the two parts: sensorimotor grounding (#1) and consciousness (#3), BUT there is also #2, which, if I understood correctly, is being able to describe the referents with language (verbal propositions).

      As for T3, yes, T3 is grounded but we will never know whether they “feel” what it is like to understand the meaning of words (#3), BECAUSE of the other-minds problem.

      I think what Professor Harnad was trying to say before was that to PASS the Turing Test, the robots must be indistinguishable from a human. Being “indistinguishable” and being “the exact same” are different things. Also, remember that Turing only requires weak equivalence; weak equivalence is enough to pass the Turing Test. (Getting the same output for the same input can give indistinguishability.)

      We will never know if a T3 or T4 robot knows what it feels like to understand meaning but as Prof mentions, “T3 (or T4) is as close as you can ever hope to get to evidence that anyone else, living or synthetic, has not only #1 and #2 but also #3”. In other words, "indistinguishability" is the closest test (and POSSIBLE criterion) we have for successful reverse-engineering of cognitive abilities. Because a T3 (or T4) robot has the ability to ground, they can do #1 and #2 of “knowing meaning” but again, because of the other-minds problem, we will never know for sure if T-robots satisfy #3.

      As for Searle’s CRA, we are talking about a T2 machine WITHOUT sensorimotor abilities, so it cannot ground (so it doesn’t even satisfy #1). Also, the T2 machine in the CRA is a hypothetical, imagined scenario that Searle has made to formulate his argument. Prof Harnad even states ("Stevan says") that it is impossible to pass T2 through computation alone because of the symbol grounding problem; thus an “indistinguishable” T2-passing machine of that kind is quite impossible as well.

      It is worth noting though that Searle’s hypothetical argument uses the implementation independence property of computation to penetrate the other-minds barrier for T2 alone, aka Searle’s periscope. (This is only true in his scenario where T2 can be passed by computation alone, which as stated above is impossible. You need sensorimotor abilities to pass T2.)

      Delete
    2. Thank you, Iris, this is the kind of skywriting I've always dreamt about, but rarely see, which is students who have already understood the readings and lectures explaining it to one another in a kid-sibly way.

      But besides explaining the content of the course, challenge it too! Challenges from those who have really understood it, as you have, are the most rewarding of all (to all).

      Delete
  14. With Harnad, S. (2003) The Symbol Grounding Problem and Harnad, S. (1990) The Symbol Grounding Problem, I’ve been introduced to the hybrid explanation of icons and categorization. Icons would be images on our retinas standing for external objects that would let us discriminate, and categorization would use invariant features to identify members of a category from non-members. Furthermore, as I understood it, connectionism would explain how invariant features are derived from sensory inputs to categorize according to neural states. Now, what I was wondering about is how this applies to abstract concepts such as morality, freedom, etc. I see that symbol grounding has a sensorimotor component to explain the link between inner words and external objects. If the inner word (abstract concept) doesn’t have an external object, but still has meaning, how do we explain it?

    ReplyDelete
    Replies
    1. Icons (i.e., analogue copies) of things don't "stand for" (i.e., refer to) anything. Only words have referents, because reference is needed for the subject/predicate, true/false propositions of language to have semantics (meaning). Otherwise they are just syntax: meaningless squiggles and squoggles. In mathematics (computation) the ungrounded symbols in propositions such as “2 + 2 = 4” are just syntactic, i.e. formal. They can be interpreted, by someone who has language, as referring to and meaning something, but the meaning is not in either the symbols or their use in mathematics, which is just formal computation (Turing).

      However, already before language evolved in our species, objects and actions (including copies, mimicry and pointing) could be used to draw attention to something (and still can be, by children before they have learned language). There is still dispute about which other species can do this, and how, but when we get to the evolution of language (in the only species that has that capacity), imitation and pantomime are hypothesized to be precursors of language according to the gestural-origins theory of the evolution of language (which "Stevan thinks" is right). Weeks 8 and 9.

      About abstract “concepts” (i.e., categories) see the Replies in this thread. (** And, everybody: please, please, always read the other skies and replies before posting yours **, otherwise you may be saying or asking the same thing as others – and there are only two of us, Ferna and me, to answer! So we don’t have time to answer the same question more than once, although it’s often a good challenge to answer it again in a different way, and some of the other replies have tried to do that too.)

      Delete
  15. I know that we have (implicitly) agreed to stop talking about behavioral correlates, but I think the symbol grounding problem can be “illustrated” by behavioral studies looking at child development of categorization. (I could be completely wrong, but it made more sense for me when I thought of it this way.)

    Prof Harnad mentions that the “least abstract level of abstraction is the sensorimotor one”, the “distinguishing features are sensorimotor features, not yet named”.

    Think of your kid-sib that is just born into the world. They can (somewhat) see, hear, touch, smell and taste. Without an initial understanding of language, babies first make sense of the world by observing things with their senses. Also, “Grounding is sensorimotor features usually learned, NOT inborn,” because you have to be able to interact with the world using sensorimotor features. Babies that are not yet born (fetuses) wouldn’t have the same categorical abilities compared to those who are born and in the world.

    It is known from behavioral research that babies form superordinate categories (ex. animal) and basic categories (ex. dog) around 3~4 months old and subordinate categories (ex. Goldendoodle) by 6~7 months old. Babies of this age do not YET know “names and verbal definitions of features.” Thus, babies form such categories through sensorimotor interaction with the world, using only PERCEPTUAL features at first.

    When a baby sees a Goldendoodle, it does not know its “name (Goldendoodle)” per se, but they input the “distinguishing features/sensorimotor features that are not yet named”. They can SEE [curly brown fur], [4 legs] and hear [WOOF WOOF].

    Babies then start learning the symbols we use in this world (language) and connect the symbol (word) “DOGGY” to what they have been perceiving, its referent, the dog. The brain “detects and abstracts features to ground the name of the referent directly through sensorimotor experience”. They have learned to recognize and now IDENTIFY members of the category (#1 in what it is to know the meaning of a word).

    Once enough direct grounding has happened, babies can now “reply and respond in the world to verbal propositions that describe the members of its referent category” (#2 in what it is to know the meaning of the word). After babies age, learn how to speak and grow their vocabulary, they are known by age 4 to use the FUNCTION of a referent to categorize it. In other words, by the age of 4, children not only rely on perceptual features but can categorize a dog as a living, breathing animal with organs and blood, not just a 4-legged animal with fur that barks. This indicates that by this age, babies have surpassed abstracting through only sensorimotor experience. Indirect grounding is now happening. An unperceivable referent and word, “unicorn,” can be grounded because the baby now has the directly grounded words “horse” and “horn”.

    ReplyDelete
    Replies
    1. I realize this is not a course on cognitive development, but when you think about how humans actually learn categories (through behavioral experimentation or even through personal observation/experiences) after they are born, the progression from [direct symbol grounding] to [indirect grounding through words] makes a lot of intuitive sense.

      This is truly fascinating, that humans have this ability to “ground” both directly and indirectly and that both are learned. This also indicates that grounding is essential for learning and memory, and, I assume, is why Prof Harnad mentions the “effability” thesis for language: anything can be defined or described in words. As Fernanda states above, the abstract concepts we learn are VERY MUCH dependent on [language and propositions] and less dependent on “sensorimotor interaction with their referents in the world” after we pass that initial infancy stage. I’m sure this is a prevalent question in academia as well, but I’m super curious about the “minimal quantity” of direct grounding that must be achieved to move on to indirect grounding. (Although I assume that the two are interdependent.)

      Delete
    2. Just one more thing: if you look at Fernanda’s link above about how initial sensorimotor grounding takes place, the article states that:

      Abstract concepts may be initially grounded in image schemas and situations (those babies without “names” yet, categorizing dogs by looking at and hearing them), but these initially grounded concepts are NOT necessarily grounded in sensory-motor simulations once we gain words and verbal propositions.

      Sensory-motor experiences are what grounds initial concepts (A baby grounds dog because they saw and heard the dog) but once we learn the names of referents and gain verbal abilities (#2), we develop more concepts by relying on structural similarities. (A baby expands her abstraction of the dog by adding more abstract concepts through indirect grounding such as “mammal” or “canine” that have structural similarities).

      So, once we have language as a tool, when we process a concept in our head (that was initially directly grounded) it “may NOT depend on reactivation of the perceptual states”. Directly grounded concepts “show traces of sensory-motor processing” but when well-established (grounded), “sensory motor simulations will not routinely be necessary or meaningful processing.”

      Simply stated, we need sensorimotor capabilities at first to ground but after a certain point, once we have verbal abilities, we tend to rely on language much more and less on sensory motor simulations when processing and developing even the SAME initially-sensory-motor-grounded concepts. We, as adults, tend to abstract dogs through words and verbal descriptions more than sensory motor simulations/perceptions we remember. This ties in with what I mentioned above on the development of categories based on “function” by the age of 4.

      Delete
    3. Me to Everyone (12:13 PM)
      hello @fernanda, i think prof harnad misunderstood when I said that language eventually takes over sensorimotor grounding, but the article doesn’t actually say that; it says directly grounded concepts “show traces of sensory-motor processing” but when well-established (grounded), “sensory motor simulations will not routinely be necessary or meaningful processing.” Would there be an evolutionary explanation for that? or is it just that we get more used to using language for learning/grounding and it’s a habitual dominance?
      hope this question makes sense
      this is kinda neuroscience related so hoping you’re the right person to ask
      it’s talking about processing abstract concepts right?

      Laurier Levesque to Everyone (12:18 PM)
      I don't know if this answers your question but the part of our brain that processes abstractions grew out of the motor cortex

      Fernanda Pérez Gay Juarez to Everyone (12:18 PM)
      Hi, @Iris! I think there is an evolutionary advantage of not having to interact with every possible instance in the world in order to be able to learn its category. Language, as you pointed out, allows us to recombine categories we already grounded to learn new ones, "borrowing" their features.

      Fernanda Pérez Gay Juarez to Everyone (12:20 PM)
      When it comes to abstract concepts, it's hard to think of an evolutionary explanation. I think the point is that, even for abstract thought (i.e., to learn, understand and retrieve abstract concepts) we make use of other concepts we already know, the core of which we learned through sensorimotor experiences

      Me to Everyone (12:22 PM)
      yes but why do the concepts we learned through sensorimotor experiences become less dependent on sensory motor simulation when re-activating that concept in the brain once language becomes available? It seems that language is somehow becoming more dominant even for a directly grounded concept

      Fernanda Pérez Gay Juarez to Everyone (12:23 PM)
      For example, when we think of or define an abstract concept we may make use of schemas, directionality, opposition or alignment, which all depend to a certain extent on sensorimotor experience in the world

      Delete
    4. Fernanda Pérez Gay Juarez to Everyone (12:26 PM)
      @Iris, that is a good question, but I don't think it becomes less dependent on sensorimotor simulation (at least for more concrete concepts). Regardless of whether we learn categories through sensorimotor interaction or through verbal instruction, the mechanism remains the same: detecting the features that make them part of the category. And there is evidence that, while retrieving concepts, the circuits that are active in the brain have some degree of overlap with the perception/action circuits that were active when we interacted with its referent (i.e., when we grounded it).

      Fernanda Pérez Gay Juarez to Everyone (12:27 PM)
      This becomes more complex for abstract concepts, because the features that define them are less sensorimotor.

      Fernanda Pérez Gay Juarez to Everyone (12:32 PM)
      @Iris I think language becomes more dominant to learn categories because it's less energy consuming to be told or to read a definition than to have to do the work of extracting the relevant features yourself every time. But that doesn't mean that what you learned by verbal propositions does not activate sensorimotor circuits in the brain when retrieving the concepts. The whole idea of the "symbolic theft" means that, through language, we can also access the sensorimotor features that we previously learned and therefore be capable of simulation of things we have never seen/touched! Think of fiction for example. Through words, it creates images in your brain of things that may not even exist or that you have never encountered. This relies on perception/action systems in the brain

      Fernanda Pérez Gay Juarez to Everyone (12:32 PM)
      @Iris, does this make sense?

      Fernanda Pérez Gay Juarez to Everyone (12:33 PM)
      @Iris, there are some papers that actually show these activations of action perception systems when retrieving concepts that have been at least partially grounded through verbal propositions. I can share them in the skywritings if you are interested!

      Me to Everyone (12:33 PM)
      ah yes, so they start working together , language doesn’t become dominant per se on already abstracted concepts

      Me to Everyone (12:33 PM)
      makes a lot more sense, yes

      Fernanda Pérez Gay Juarez to Everyone (12:35 PM)
      Exactly! They add up.

      Delete
    5. Iris, those are all good points about development, but “functional” features (based on what an object does, or can be used to do, or what can be done to it) are still sensorimotor features (i.e., things your senses can see and your body can do).

      Categories only become still more abstract than sensorimotor features (which are already abstract, because they have been abstracted from objects as features-of-objects) when they have been named, recombined and used in propositions to define further categories (whether poodles or tools) verbally.

      And think critically about the developmentalists’ notion of so-called “basic level,” “superordinate” and “subordinate” categories (whether object categories or feature categories).

      “Superordinate” and “subordinate” “levels” are relative notions, not absolute ones. Every category, from sensorimotor ones to the most abstract (i.e., verbally defined) ones, has (many) superordinate categories of which they are members (or subcategories), and (almost as many) subordinate categories that are members or subcategories of theirs. There may be a default “basic” level at which children first tend to form most of their categories (though I doubt that the developmental literature – which I don’t know very well – has really figured out what/where that “basic level” is, because it is all pre-verbal, based on what children can categorize (i.e., do the right thing with) and not on what they (or their parents) name, or describe in words).

      Think about it: What are the “basic level” categories of nonhuman species, who have no verbal categories at all? You’d have to be an ethologist laboriously constructing an “ethogram” (https://mousebehavior.org/about-ethogram) on a preverbal child to figure that out.

      Nor are all possible categories (or even all possible sensorimotor categories, or all possible verbal categories) strictly “hierarchical”: is “red” higher or lower than “round”? It’s more like a universe expandable (and expanding) in all directions…

      Delete
    6. I alas don't have the time to reply to all the reposted chat, but thanks for posting it. If there's something important I've missed, please ask it again in a (brief!) post.

      For your other points, we'll get to them when we get to the dictionary's minimal grounding set (Week 6).

      Delete
    7. PS Here's an analogy that might be helpful:

      To lexicalize is to coin a new word for something. Not just for new inventions or fashions, but also for things that would otherwise take lots of words to describe. “Mammal” is a much more efficient way of referring to mammals than having to pronounce the full definition every time as “a member of a species whose females give birth to live young and nourish them with milk they secrete...”

      Same for “truth” and “peekaboo unicorn” and “zebra.”

      Once you lexicalize a word that refers to the same thing as a long definition, you can define other referents using that word rather than having to repeat the whole definition every time.

      This is related to perceptual "chunking" in which lots of bits of information are re-coded into a single unit, a “chunk.” This happens in the learning of motor skills as well as music; and language itself is an instance of chunking. It happens in perception too, with phenomena like “categorical perception,” which we will be discussing next week.

      Well, once you have recoded a lot of bits into a chunk, you can treat the chunk as if it were itself just a bit, and then recode those recoded chunks into still bigger chunks. In language, the recoding is the verbal definition. With direct sensorimotor categories, the only verbal portion is the category name. The features are direct, sensorimotor ones. But once you use propositions to define further categories whose features are all already grounded, named categories, the features are more verbal rather than sensorimotor (even though the features of the features are sensorimotor).

      This is not really about imagery. When told that a zebra is a striped horse, you do imagine a horse with stripes, but when you go on to learn that horses and donkeys and zebras are all “equids,” and equids and rhinoceroses and tapirs are all Perissodactyls (odd-toed ungulates), the only shared feature is the number of toes…
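
      (A small, purely illustrative sketch of the recoding point above, with an invented toy lexicon: indirectly defined category names unpack, through their verbal definitions, down to directly grounded sensorimotor features. The feature and category names are made up for illustration.)

      # Hypothetical toy lexicon, invented for illustration only.
      # Some features are taken to be directly grounded (sensorimotor);
      # everything else is defined verbally in terms of already-named categories.
      SENSORIMOTOR = {"four-legged", "maned", "hoofed", "striped", "odd-toed"}

      DEFINITIONS = {
          "horse": {"four-legged", "maned", "hoofed"},
          "zebra": {"horse", "striped"},      # "a zebra is a striped horse"
          "equid": {"horse", "odd-toed"},     # features of the features are sensorimotor
      }

      def expand(word):
          # Unpack a verbal definition down to its sensorimotor feature base.
          if word in SENSORIMOTOR:
              return {word}
          features = set()
          for feature in DEFINITIONS.get(word, set()):
              features |= expand(feature)
          return features

      print(expand("zebra"))  # -> {'four-legged', 'maned', 'hoofed', 'striped'}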

      Delete
  16. When thinking of grounding the meaning of words, we do not necessarily require a formal definition; the meanings of words are approximate to us, and we each have our own understanding and internal definitions. We can also infer the meaning of new words based on context, so how many words does one need to ground directly in order to ‘understand’ a language?

    ReplyDelete
    Replies
    1. We'll discuss this in Week 6, when we get to the minimal grounding set (MinSet) of the dictionary. It looks like it's about 1000 words, but which words? The MinSet turns out not to be unique.

      And although a MinSet may be the minimum number of words that need to be grounded directly so that all other words can be grounded verbally, it's unlikely that we only ground one MinSet directly. It's probably hybrid sensorimotor/verbal throughout our lives.
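
      (Here is a rough toy sketch of the idea of grounding the rest of a vocabulary from a directly grounded subset; the mini-dictionary and the candidate grounded set are invented, and this is not the actual MinSet computation from the Week 6 reading.)

      # Toy dictionary: each defined word maps to the set of words used to define it.
      # Words that are not keys are left undefined here.
      DICTIONARY = {
          "zebra": {"horse", "stripe"},
          "horse": {"animal", "hoof"},
          "stripe": {"band", "colour"},
          "animal": {"living", "moving"},
      }

      def grounding_closure(directly_grounded):
          # Return all words groundable from the starting set, directly or via definitions.
          grounded = set(directly_grounded)
          changed = True
          while changed:
              changed = False
              for word, defining_words in DICTIONARY.items():
                  # A defined word becomes grounded once every word in its definition is grounded.
                  if word not in grounded and defining_words <= grounded:
                      grounded.add(word)
                      changed = True
          return grounded

      start = {"hoof", "band", "colour", "living", "moving"}   # a candidate directly grounded set
      print(grounding_closure(start))                          # the rest become groundable verbally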

      Delete
  17. The symbol grounding problem concerns connecting symbols to their referents. Language is but a string of words until it is placed into a context where it can be interpreted and associated with a physical representation. This idea seems to be different from meaning, since meaning involves affect, whereas grounding represents understanding how a symbol relates to its representation, whether it be an idea or a real-life object. In the human brain, being able to associate a symbol with its referent is inherent. We learn through lived experience and extract information to make these associations. I have trouble understanding how a T3 robot would be able to make these same connections through autonomous learning. How is one able to categorize, and how would we program the T3 robot to do the same? How do we choose what fits into a category to be able to eventually create meaning through our associations?

    ReplyDelete
    Replies
    1. Please read the other comments and replies.

      Delete
    2. Zoey, you make an interesting point about meaning and grounding. I think that the presence of emotion and affect—essentially the irrational—in meaning points to the inherent subjectivity of language. While people can generally agree on the meaning of a word like zebra, everyone has their own preconceptions and associations with different words. Even with the word zebra, I would have different associations than someone who actually lives somewhere where zebras live. I may just see it as an exotic animal that lives in Africa, while someone who has experienced zebras first-hand would have a more fine-grained idea of a zebra, with memories triggered of the times they have seen zebras. This is even more pronounced in more abstract words, such as happiness. Every person has their own conception of happiness; some people’s happiness conflicts with others’. Words have different connotations for different people. In all the infinite variability of humanity, this makes for an ocean of potential meaning for each word. With associations, if a memory is triggered by a certain word, it’s likely that this memory would trigger another memory or thought or person, which triggers something else, which triggers something else, ad infinitum. The meaning contained within a word can thus be infinite, with each association like an echo of the original trigger word. And if meaning can be infinite, I have trouble seeing how even a T3 could have understanding. Unless, of course, it was so human-like that it had an irrational side so as to not short-circuit when faced with infinity.

      Delete
  22. Reading this text and learning about the symbolic model of the mind, in particular, reminded me of what I learned in logic courses (computability-focused courses!) and about Gödel. These courses largely focused on proving the incompleteness theorems and studying the limits of formal systems, which was done by symbol manipulation. My recollection of these courses is flawed, but the ways in which symbols were manipulated in relation to arithmetic were similar to what was presented in section 1.2 of the text. In both cases, symbols are manipulated according to strict explicit rules; they can be atomic or composite strings of symbols (symbols combined according to rules) and they are “semantically interpretable” (can be attributed meaning). The main difference between the logical systems in my previous courses and the symbol systems in this one, however, is that the logical systems were grounded in arithmetic and mathematics as a whole. Although symbol systems are also semantically interpretable, they cannot be grounded in relation to mental processes, which is what the symbol grounding problem depicts. Even if this problem were to be solved, parallels with logical systems lead me to think that describing cognition via symbols has greater limitations that we are not yet familiar with. I believe that this parallel with logical systems and mathematics as a whole adds additional support to connectionism and more hybrid models.

    ReplyDelete
    Replies
    1. In mathematics symbols don't need to be grounded. They are manipulated purely on the basis of rules (algorithms) that operate on their (arbitrary) shapes, not their meanings. The interpretation is in the head of the user/interpreter.

      Cogsci is about reverse-engineering what is going on in the head of the user/interpreter, and symbol grounding is about the grounding of words: how does the head of the speaker/hearer connect words to the things in the world that they refer to?

      Connectionism (neural nets: unsupervised and supervised learning) might be part of the hybrid mechanism in the head of a T3 or a T4 that can learn to detect the features that distinguish the members from the nonmembers of the categories to which (content) words refer.

      (But meaning -- see Replies -- is more than grounding.)
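
      (For kid-sib, a deliberately tiny sketch of the supervised-learning part of that idea: a perceptron that learns which feature distinguishes members from nonmembers of a made-up category. The feature vectors and the category are invented; this is an illustration, not a claim about the actual mechanism in a brain or a T3.)

      # Features per example: [striped, four_legged, barks]; label 1 = member of the toy category.
      EXAMPLES = [
          ([1, 1, 0], 1),   # zebra-like: striped, four-legged
          ([0, 1, 0], 0),   # horse-like: four-legged but not striped
          ([0, 1, 1], 0),   # dog-like
          ([1, 1, 0], 1),
      ]

      weights = [0.0, 0.0, 0.0]
      bias = 0.0

      def predict(features):
          s = bias + sum(w * x for w, x in zip(weights, features))
          return 1 if s > 0 else 0

      for _ in range(10):                      # a few passes over the toy data
          for features, label in EXAMPLES:
              error = label - predict(features)
              if error:                        # perceptron rule: update only on mistakes
                  for i, x in enumerate(features):
                      weights[i] += error * x
                  bias += error

      print(weights)              # with this toy data the positive weight lands on "striped"
      print(predict([1, 1, 0]))   # a new striped, four-legged thing -> 1 (member)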

      Delete
  19. "The description or definition of a new category . . . can only convey the category and ground its name if the words in the definition are themselves already grounded category names."
    This issue surrounding the recursive nature of definitions made me think of morphology. The grounding of a word depends on the grounding of the symbols it is made of, but I think there is another level in between that we could analyze. The root, prefix(es), and suffix(es) of a word must be grounded before the word can be attributed a meaning. For example, the word "predetermined" can be broken down into "pre-", "determine" and "-ed". Just like a definition relies on the meaning assigned to its components, "predetermined" relies on an understanding of the meaning of these three components, as well as the meaning of the symbols that make up these components.

    ReplyDelete
  20. When we are first born, we do not have an understanding of language at all. We are born into that same situation of having to learn Chinese as a first language, but only having a Chinese/Chinese dictionary. Thus, we must begin learning language, and accumulating grounded terms, via observation... our parents teach us the words for simple objects and concepts, and we store them in our head (grounded, as we now have a referent for a symbol). Thus, I wonder if a child born blind and deaf would find it impossible to ground symbols, as they would not be able to attach meaning to referents, and would simply have trouble observing a referent in any capacity. Maybe this is obvious to others, but I wonder how a blind and deaf person from birth would be able to attach meaning to referents...

    ReplyDelete
    Replies
    1. I would think that a person who was blind and deaf would be able to attach meaning to referents because they would still have other sensorimotor capacities, such as feeling, tasting, and smelling things, that could allow them to observe the world and learn what certain objects are. Although their method of observation may be in a different way than we are familiar with, I think it is satisfactory for them to recognize unique objects to a certain extent and categorize similar objects. A blind and deaf person could then be taught braille and learn to associate braille symbols with objects that they have observed. With braille, a person could respond to and make reference themselves to objects or categories of things similarly as one does with spoken language. I think this language use means they would be able to attach meaning to referents and ground symbols.

      Delete
    2. I think this is an interesting thought, but I agree with Marisa that even a blind-deaf person would still have other sensorimotor experience/input to allow them to ground symbols. Not only could they have input from braille, the example mentioned above, but I have personally seen parents physically bring the child to certain things/objects and place the child’s hand on the object while making (what I assume to be) sign-language symbols on their skin for them to feel and try to interpret.

      I would guess that by having such input, where they feel an object and then have it associated with a hand gesture that they can feel directly on them (and then potentially replicate), they can attach meaning to the gesture. I would then assume this would allow them to have some version of symbol grounding.

      Delete
    3. I agree with Marisa and Mariana that a blind and deaf person can still ground symbols, but I doubt that the symbol grounding would be as concrete as for sighted and hearing people. I find that even though some words can be explained without really seeing or hearing what they refer to, there are more words that can’t. For example, colors. How is it possible to explain blue, red, orange…? For me, my instinct is “oh, blue is the color of the sky”. However, since they are not able to see, they don’t know the color of the sky. I tried to come up with other explanations, like “it is sad but calm” or “waves of the sea”, but in the end I realized that, since color is a feature that can only be perceived through vision, it is almost impossible to explain without the involvement of vision. Even more, how do you explain the word “color”?

      Delete
    4. I think the discussion of how blind-deaf people ground meaning is a very good question, which is also a goal of special education. Color vision, on the other hand, is a very subjective experience. Even though the sensation of light wavelength itself is objective, the concept of each color is subjective and runs into the other-minds problem (i.e., if you are unaware of being color blind, then when you refer to a certain color, your subjective experience will be different from others’, but you will not know it). Consider a person with dichromatic vision (like red-green color blindness); it is actually much more common than I thought (approximately 1 in 12 men are red-green color blind, for example). They might ground the meaning of color very differently than people with typical color vision and yet can still do the symbol grounding (as mentioned, maybe not as concretely). This follows my comment below asking how an abstract concept can be grounded through sensorimotor, experience-based learning.

      Delete
  21. “Meaning is grounded in the robotic capacity to detect, categorize, identify, and act upon the things that words and sentences refer to”.
    T3, unlike T2, has sensorimotor abilities — it is grounded, and can categorize and link objects and “things” in the real world to the words that refer to them.
    Something that came to mind was the experiment with a skunk-racoon hybrid, where children and adults were both asked to categorize the animal as either a skunk or a racoon, revealing how different age groups value essential features. This categorization task depends not only on the visible features of the animal, but on past knowledge we already have about different species and biology — there is no available definition for a skunk-racoon and yet we can use what we know to place the hybrid in one category or the other, something T2 could not do.
    An adult would say “the animal has racoon parents, so even though the offspring looks like a skunk, it is a racoon”, whereas a child would say “it’s a skunk because it looks exactly like a skunk”. The child does not yet have an understanding of genetics and reproduction, causing it to rely only on what it has experienced, i.e. the features of a racoon versus those of a skunk. While T3 is grounded and would have knowledge of skunks and racoons, I wonder whether it would, as a human adult does, intuitively refer to the biological background of the skunk-racoon to categorize it, or look at its surface features to decide. Would its status as a non-biological being change the way it (instinctually) categorizes?

    ReplyDelete
  22. Professor Harnad’s article on symbol grounding cleared up much of the confusion that I had regarding this topic. However, I don’t get how this comes close to dealing with the problem of understanding, aka the hard problem, brought up in the Chinese Room Argument. Not that I expect it to be possible to solve the hard problem, but the connectionist portion of the hybrid model leaves much to the imagination. Of course, it would be quite complicated to attempt to give much in the way of details on connectionism, given that it deals with non-symbolic representations in the brain, which are meaningless to the untrained eye.

    What the article got me wondering about is: could understanding really be as simple as connecting symbols to a sensory representation by way of a non-symbolic neural representation? Is the point of this paper that understanding is irrelevant to the symbol grounding problem? Because my understanding to this point was that in attempting to scientifically describe meaning, the hard problem is necessarily a major obstacle.

    ReplyDelete
    Replies
    1. I really enjoyed reading these thought-provoking articles. Milo, to understand the relationship between symbol grounding and the Chinese Room Argument, you can think about what you can do with symbols that have form and shape, but not meaning. If Searle executes an algorithm, his only capacity is to manipulate symbols on the basis of their shapes, not their meaning. When Searle shows that he can memorize and execute the T2-passing program, computationalists would say that that’s what produces understanding; therefore, any device that executes that program would understand. Searle disagrees with this and implies that you need something other than just symbol manipulation to pass T2. The solution here would be to connect the meaningless symbols to the things in the world that are the referents of the symbols. For this, sensorimotor experience is needed, and therefore sensorimotor symbol grounding can only be done by a T3 robot. It can attach verbal input (symbols/squiggles and squoggles) to sensorimotor features that are perceived or verbally described. In other words, by interacting with the world, one can form a vocabulary that is grounded.
      Finally, a little something to think about would be how would you verbally describe or perceive abstract concepts such as trust or betrayal? How can a T3 robot perceive these through sensorimotor experiences?

      Delete
    2. Hi Milo,

      According to Prof Harnad, symbol grounding does not simply equal understanding but is one of the factors for understanding. In fact, there are three components required for understanding meanings: 1) the symbol grounding capacity, 2) the capacity to reply and respond to verbal propositions, and 3) the capacity to feel. Indeed, the third factor, feeling, touches on the hard problem and cannot be proved because of the other-minds problem (we can never know if others have feelings), so we cannot really prove that a T3 robot "understands". According to Turing, as long as we cannot tell a robot apart from a person, we don't have to prove that it feels and understands, just as we cannot prove that other people feel and understand.

      Melissa, to describe abstract concepts like "truth", humans use language as a weapon -- using already grounded referents and propositions to describe them (grounding them indirectly). With language, we can reduce the abstract definitions all the way down to concepts that can be grounded in the real world.

      Delete
  23. I am quite convinced by the argument in the article that the ability to pick out the referents of words is a necessary condition for understanding meaning; a formal symbol system alone is not enough to achieve that. However, I think that maybe we do not need to go to the hard problem to see that it is not a sufficient condition. Thinking beyond just names and categories, many words are not directly grounded in a referent, or do not represent a category. For example, how do we understand the meaning of "is"? It obviously does not point to a referent or represent a category. Nor is its meaning simply its syntactical role in the symbol system.
    In general, while I agree with the article that sensorimotor capacity is quite necessary, I feel that maybe we need to be more careful in thinking about how the "grounding" works. I think we should not treat sensorimotor capacity simply as a grounding of the symbol system, in the sense of giving the symbol system the ability to pick out referents. I guess we tend to think that way because we confine ourselves to thinking of meaning just as the meaning of language. But maybe our sensorimotor engagement with the world has its own meaning too. Maybe we should think of the meaning of the symbol system in terms of a more general notion of meaning, instead of trying to use sensorimotor input simply to connect a word to a referent. I do not have a ready argument for this; this last paragraph is just a speculation I happened to think of.

    ReplyDelete
  24. A question posed in class was to find categories that you could only get through words, and therefore could only be used or found by humans with language. A category that I thought of for this was the concept of right and wrong. The concept of right and wrong is highly dependent on context and culture, and is not something tangible that exists in the physical world. Therefore, you would not be able to categorize this concept based solely on sensorimotor capabilities. Instead, you would need language in order to understand the definition, and ground these terms and concepts. Without language, you would not be able to differentiate between the categories of right and wrong. Thus, by my understanding, this would be a category accessible only to those with language.

    ReplyDelete
  25. To me, the symbol grounding problem is making much more sense of the whole course - it is trying to solve the problem of how we get meaning from symbols (of any kind) and connect them to the physical world we live in. It asks further questions of computationalism - regarding the criticism of how machines can ever understand meanings. It also explains why T3 (instead of T2) is the right level to test, since a T3 robot could only pass the TT by knowing how to deal with objects in the world in a way that is indistinguishable from a human. This requires the system to have a sensorimotor subsystem to operate and interact with the world.

    The keyword for the symbol grounding problem is ‘intrinsic’. The nature of an object is often attached to the object when we engage with it. We not only know what this object is for, but also know the feeling of knowing what the object is for. This may or may not be related, but it makes me think about a controversial debate about whether a piece of music has narrative power – for a general audience, music is most likely to be all acoustic symbols rather than to have any narrative. The audience might need to read the back story about the composer and the context in which the music was made to get a sense of the ‘meaning’ of the piece. After this ‘grounding’ process, the audience can gradually feel the narrative in the music, which then becomes meaningful.

    The only confusion for me is how an abstract concept can be grounded in lower-level, sensorimotor, experience-based meaning. Making conjunctions in a description partially makes sense, but I’m curious about the specific point/boundary at which an abstract concept can be derived from concrete sensorimotor experience.

    ReplyDelete
  26. I'm trying to put the reasoning for the 3 questions brought in the class together:

    1. What is the Symbol Grounding Problem?
    2. How is it related to Searle's Chinese Room argument?
    3. How does the symbol grounding problem explain why you need a T3 robot to pass T2?

    The symbol grounding problem is that definitions cannot be reduced all the way down -- always referring to new categories to explain the old ones would lead to an infinite regress. Searle's Chinese Room argument shows that someone implementing an implementation-independent T2-passing program does not understand the symbols it manipulates, as he just manipulates the squiggles and squoggles of the symbols in the program. Thus, we need to connect the meaningless squiggles and squoggles to the real world with sensorimotor detectors to avoid the infinite regress, i.e., to attach the definitions to the real world. Thus, to pass T2 (where you need to use language to reply and respond to a human over a lifetime), you need to be able to pass T3 (being able to interact with the world through sensorimotor function), because of the constantly evolving language that needs to be grounded. However, even if a robot passed T3, that does not ensure understanding, or knowing the meaning of the words, because we would also need to address the hard problem -- feeling what it feels like to reply (T2) and interact with the world (T3).

    ReplyDelete
    Replies
    1. I think your answers to the first two questions (What is the Symbol Grounding Problem and How is it related to Searle's Chinese Room Argument) were really well worded (although I think you would have to define infinite regress if you were putting this on the final). However, I think that your answer to the third question (How does the symbol grounding problem explain why you need a T3 robot to pass T2) might be a little too simplistic. You mentioned that the robot would need T3 to ground constantly evolving language (new words, I'm assuming), but I think that even if language wasn't evolving, we would still need to have a T3 robot to pass T2. Remember the marble example, where if we sent our T2 robot a marble in the mail, they wouldn't be able to tell you what colour it is. Their inability to use sensorimotor categories to succeed in this very simple interaction would be a dead giveaway that they are likely not human. This example shows that categories aren't the only things that require sensorimotor grounding. Any event where an individual needs sensorimotor abilities to acquire the information necessary for normal T2 interactions would require a T3 robot. If there's a power failure and you decide to communicate with your T2 pen pal by writing them a letter by candlelight, they would fail to answer not because of an inability to generate a response to the text, but rather because they lack the ability, through motor actions, to open the letter in the first place. Although this isn't a complete answer to your question, I was hoping it would clarify why there is more to consider than just "grounding new words" in order to explain why passing T2 needs a T3 robot.

      Delete
  27. In Searle's Chinese Room argument, he points out that the man he hypothesizes, himself, who does not understand Chinese, could behave just like a Chinese speaker by following the instructions in the rule book, i.e., implementing the algorithm. He argues that when Searle is executing inputs in Chinese and performing the task, there is no meaning, and Searle still doesn't understand Chinese. Still, I doubted whether this proved that cognition is not only computation when I first read the argument, since Searle seems to be able to learn Chinese at least to some degree. It seemed reasonable to me to think we could learn a new language by being given such a complete rule book; therefore, symbol grounding would not be a problem for computationalism. However, something I experienced while working in a Korean restaurant's kitchen answered my doubt about why symbols should be grounded. At first, I didn't understand Korean at all when I walked into that kitchen. After a few shifts, I could perform all the tasks I needed to do after the chef gave commands about which dishes were being made. In this case, I could roughly be considered a machine receiving inputs and generating outputs, and I performed so well that the new waitress could not tell that I was also just a beginner (behavioral equivalence). The chef usually gave separate verbal commands, and I assumed that they related to the main ingredient in each dish. Until one day, when I was trying to get the same ingredient from the grocery store, the employee told me that what I had just tried to pronounce to him means a special kind of sauce, instead of the kind of noodle that I was asking for. In this case, like Searle in the hypothesized Chinese room, I in that Korean kitchen did not get the meaning from my chef's commands and demonstrations of the correct 'algorithm' for executing the rule book.

    ReplyDelete
  28. In an earlier reply, Prof Harnad said "In formal mathematics, symbols do not have or need grounding (though learning maths does)". I sometimes still get a bit fuzzy about the relationship between numbers and words in symbol grounding. Within this area of consideration, do the symbol "2" and the word "two" behave like separate entities? Mathematical equations could be performed using "2" without a grounded understanding. However, in language, "two" is a grounded concept - for example, I know what it feels like to hold two oranges in my hand. As an additional example, are the division symbol and the word "division" (i.e., an understanding of what it means to divide) separate as well?

    ReplyDelete
  29. I think I just need clarification on whether my understanding is correct. If consciousness is a weasel-word for sentience, then is it correct that the symbol grounding problem is the problem of how words (symbols) get their meanings for entities that feel? Based on the notion that a conscious state is a state it feels like something to be in, the symbol grounding problem’s notion that a word’s meaning is its grounding plus the feeling associated with it would depend on the qualitative experience of the sentient being who is feeling. So grounding is independent, like a constant in math, and the feeling is a parameter or function whose value is determined by the sentient being’s marker of feeling associated with it.
    A mathematical expression of what I just said would be:

    Meaning = m
    Grounding = g (constant)
    S(f)= Sentient(feeling)
    m = g + S(f)

    In that case, I also wonder about the Sapir-Whorf Hypothesis and the role it plays in this equation. If we were to say the sentient beings are humans, there is still variability in the value of the feeling, depending on the qualitative sentiment associated with certain words that give rise to the meaning. Is this a right way to think about the symbol grounding problem and its broader applications?

    ReplyDelete
  30. Genie knew a few categories such as “blue” “mother” and “go” but for most of her developmental period her circumstances did not allow for sensorimotor interactions and thus grounding. However, for the categories she knew she was still able to refer to them correctly. Following her release, she was able to learn how to play and dress herself. Essentially demonstrating that symbol grounding requires sensorimotor interactions. I thought this was pretty interesting as it’s a strong demonstration of how gaining meaning and categorization capacities is reliant on sensorimotor interactions.

    ReplyDelete
  31. Trigger warning: child abuse




    As my last skywriting on this topic didn’t demonstrate a solid conceptualization of the topic I’m here for a second attempt.
    In some of my other classes we’ve discussed cases of developmental disorders. When reading this, one in particular comes to mind. The story of Genie the Wild Child is one that has garnered much interest in the fields of linguistics and psychology. Genie was a case of severe child abuse that left her in a state of infancy. She was thirteen when she was rescued but could barely speak or do anything independently. Briefly, the circumstances of her confinement involved her being kept in a dark basement, in a cage, and in a straitjacket.
    Genie knew a few categories such as “blue,” “mother” and “go,” but for most of her developmental period her circumstances did not allow for sensorimotor interactions and thus grounding. However, for the categories she knew, she was still able to refer to them correctly. Following her release, she was able to learn how to play and dress herself, essentially demonstrating that symbol grounding requires sensorimotor interactions. Looking past the absolute horror that is this case, I thought this was a strong indication of how gaining meaning and categorization capacities is reliant on sensorimotor interactions.

    ReplyDelete
