Monday, August 30, 2021

9a. Pinker, S. Language Acquisition

Pinker, S. Language Acquisition. In L. R. Gleitman, M. Liberman, and D. N. Osherson (Eds.),
An Invitation to Cognitive Science, 2nd Ed. Volume 1: Language. Cambridge, MA: MIT Press.
Alternative sites: 1, 2.



The topic of language acquisition implicates the most profound questions about our understanding of the human mind, and its subject matter, the speech of children, is endlessly fascinating. But the attempt to understand it scientifically is guaranteed to bring on a certain degree of frustration. Languages are complex combinations of elegant principles and historical accidents. We cannot design new ones with independent properties; we are stuck with the confounded ones entrenched in communities. Children, too, were not designed for the benefit of psychologists: their cognitive, social, perceptual, and motor skills are all developing at the same time as their linguistic systems are maturing and their knowledge of a particular language is increasing, and none of their behavior reflects one of these components acting in isolation.
        Given these problems, it may be surprising that we have learned anything about language acquisition at all, but we have. When we have, I believe, it is only because a diverse set of conceptual and methodological tools has been used to trap the elusive answers to our questions: neurobiology, ethology, linguistic theory, naturalistic and experimental child psychology, cognitive psychology, philosophy of induction, theoretical and applied computer science. Language acquisition, then, is one of the best examples of the indispensability of the multidisciplinary approach called cognitive science.

Harnad, S. (2008) Why and How the Problem of the Evolution of Universal Grammar (UG) is Hard. Behavioral and Brain Sciences 31: 524-525.

Harnad, S. (2014) Chomsky's Universe. -- L'Univers de Chomsky. À bâbord: Revue sociale et politique 52.

58 comments:

  1. While reading this paper, I found it really interesting to try to understand which circumstances would lead a child to "pick up" a language and which would not.
    From what I understood, kids are born with a set of innate rules of grammar (UG), but they are not born with any semantics or vocabulary. So they need to hear words, verbs, etc., to build their vocabularies. They also need to hear sentences: from the structure of the sentences they hear, they will extrapolate general patterns/structures that are constrained by their innate set of rules (UG). Without UG, and since kids do not receive significant negative feedback, kids would just generate false patterns and produce ungrammatical sentences.

    We know that kids who do not hear any language do not learn to speak. Kids who hear language without any context (e.g., on the radio) also do not learn from it. So I was wondering: let's say that a child only hears ungrammatical sentences during the critical period; would they start to speak ungrammatically, or would they not acquire language at all? It is of course a highly improbable situation, but we could imagine a kid whose parents have a type of aphasia that makes them say sentences with proper meaning but deficient syntax. In this case, would the kid be able to make extrapolations (guided by UG) from the incorrect input? Or would the ungrammatical sentences somehow conflict with the kid's UG?

    Replies
    1. Notice that the examples Pinker gives of learning without correction are OG, not UG.

      No one speaks UG non-compliantly.

      Many people have speculated about what if a child heard only UG-noncompliant language. But even if a computer generated it, how would it be brought into verbal interactions? Otherwise it's just a radio.

      All of OG is UG-compliant.

    2. Hi Louise, this is an interesting proposition. Since UG is a set of innate grammatical rules, I don't think it can be messed with. Therefore, the ungrammatical sentences that the child hears could not conflict with his/her UG because the UG is already there. I think if this scenario happened (a child grew up hearing only ungrammatical sentences), they would simply be unable to develop language. They may not be mute (they might be able to make certain sounds and say certain words), but I don't think they would be able to communicate via a language.

    3. Hi Melody and Louise, I think that some good points are brought into question in both of your skies. I would agree that UG itself is an innate mechanism that by definition is hardwired into each one of us and therefore can't be affected by the input that is received (from my understanding). In this case, I believe that since UG does have core rules that are essential to language, I would like to think that even if a child is only exposed to ungrammatical input, they would still be able to acquire some grammar and vocabulary, even if erroneous.

  2. Regarding the Whorf hypothesis, linguists have shown "that human languages are too ambiguous and schematic to use as a medium of internal computation." The main conclusion here is that thoughts cannot be words. Could this also be a significant challenge to computationalism? In coding, the symbols or functions should not be ambiguous; that's one of the first lessons learned in an introductory coding class. On the other hand, "spring" can mean the season, or an object that goes "boing". Are there ways to add other constraints, analogous to those involved in language acquisition, that enable computation to deal with ambiguity? We discussed computation extensively during the first few weeks of the course, so I am curious what other people make of this problem.

    Replies
    1. Computation is just formal syntactic rules for manipulating meaningless symbols. The rules are unambiguous, but there cannot be ambiguity about symbol meaning, since there is no meaning in computation, just syntax (even though the right algorithm can be interpreted externally, by the user).

      Only linguistic symbols (content words) have referents and meaning. But computation is a subset of language -- a syntactic subset, with no semantics. Function words ("and", "if", "not") are more like computation: there are rules for using them, but they have no referent.
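      [A toy sketch, not from the reading, of the point above: computation is unambiguous rule-following over tokens that have no intrinsic meaning. The rule table and function names below are invented for illustration.]

      ```python
      # Pure symbol manipulation: the rules are unambiguous syntax,
      # but the tokens "0" and "1" carry no meaning inside the system.
      RULES = {
          ("0", "0"): "0",
          ("0", "1"): "1",
          ("1", "0"): "1",
          ("1", "1"): "1",
      }

      def combine(a, b):
          """Apply the rewrite rule to two tokens, purely formally."""
          return RULES[(a, b)]

      # An external user can interpret this table as Boolean OR
      # ("1" = True), or as something else entirely; the interpretation
      # belongs to the user, not to the computation.
      print(combine("1", "0"))  # "1"
      ```

      The point of the sketch is that the ambiguity of natural-language words like "spring" has no analogue here: there is nothing for the tokens to be ambiguous about, because they mean nothing to the system.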

    2. Hi Prof. Harnad,
      I am confused as to how only content words have referents and meaning. Is it an issue of tangibility and grounding? When I think of content words I think of nouns and verbs, and I understand how those can become grounded based on previous lectures. However, I am not clear on how function words are more like computation. If you think of the phrase “apples or pears”, “or” is less tangible than something like “apple” but “or” still indicates the action of selecting between two entities. Is it considered a non-referent because it needs an action or the two entities it refers to? To me, “or” still holds meaning because the phrase “apples or oranges” refers to the process of choosing without the explicit use of “choose apples or oranges” and this meaning is lost if you say “apples, oranges”.

    3. To add to that, I am even more confused as to why language differs from computation. Isn't language also just formal syntactic rules for manipulating meaningless symbols? Just as much as you need an external user to interpret computations, you need an external user to interpret language. I don't understand where the distinction between the two exists. For example, in binary code red is 00000000; on its own, "00000000" is meaningless, but someone familiar with it could interpret it to mean red.

      The word "red" is also just syntax on it's own, just like " 00000000". You always need an external user to ground the word to the actual experience of seeing red.

      Is it that "00000000" not consider a computation, but part of a computer language that has a referent? Then computation is comprised of "+" "-" "if, then", etc in computer language and "and", "or", "with", etc. in language? -elyass

    4. A content word has a referent, something you can "point to," like "rose," or "red" or "run" (that's a rose, that's not a rose). It designates a category, which has members and non-members. The members need not be concrete objects: for “fairness” you can point to “that’s fair,” “that’s not fair.” But what do you point to with a function word, like “or” or “if”? They are words too, but their “definition” is not the distinguishing features of a referent that you can point to. They are the rules for how you can use the word in a sentence: syntactic rules, rules for how to manipulate the function words in a string of other (content and function) words (symbols). As in computation.

      Mathematics is just syntax. It can be given a semantic interpretation, but the interpretation is not the mathematics. The syntax (symbol-manipulation rules, algorithms) is the mathematics. The semantics is cognition, which is done by heads that have the grounded symbols of language and thinking. And that grounding pertains only to the content-words. And we only understand the content words if they are grounded by our T3 capacity to “point to” (i.e., recognize, categorize, interact with) their referents. That’s what Searle can’t do. He can only manipulate the content words as if they too were just function words; just syntax.

      (The meaning and understanding of a word is (G) the T3 capacity to recognize and interact with its referent – i.e., T3 grounding – PLUS (F) the capacity to feel what it feels like to understand the word.)

    5. This is true, I was confused by this at first. Language definitely needs to be interpreted by an outside user when words are combined together in sentences, but most of the content words we use are grounded. I think this might be why computation and language are different in this way — computation is not grounded and always necessitates external interpretation because the computation does not contain meaning, whereas for language, the basis of our sentences are grounded words and contain their meaning through our sensorimotor interactions with them (granted we speak that language). This is what I think was meant, please correct me if I’m wrong!

    6. You're right. But don't believe mathematicians when they say they do math purely formally (ungrounded squiggles and squiggle-rules only). That's maths. But they do maths with their heads, and their heads interpret the squiggles. (That's why maths is a [special] subset of language and cognition.)

    7. I know that words have meaning and symbols in math or computer programming do not. That said, arithmetic, for instance, can produce content that is semantically interpretable. In math, semantics are independent of the syntax. As such, I am wondering if the same goes for language. Could syntax be considered a component that produces output in the form of the meaning of a sentence?

  3. It is my understanding that while children are born with UG, they still require context in order to develop their language abilities. They need to take sounds and situations as input, and somehow the output becomes the arrangement of the language's grammatical structures. Pinker argues in section 8 that this transition cannot occur by innate grammar alone: children need to be able to pick out the grammatical structures that exist before their innate UG can be of use to them. This is what he refers to as the bootstrapping problem. But isn't this stating the obvious? Maybe I misunderstood, but didn't we discuss in class that these things that need to be "bootstrapped" are parts of OG, so we know that they are learnable?

    Replies
    1. Yes, this sounds like another symptom of Pinker's failure to distinguish UG and OG.

    2. Pinker talked about context, which reminds me of the symbol grounding problem. I think context is not only useful for grounding content words, but also for grounding non-content words such as 'if' and 'or', whose meaning can only be inferred from the relations of content words. But since UG gives us the ability to form propositions, we already have the ability to recognize such non-content words that are crucial in formulating propositions; we just need some words to ground them.

  4. Hi Prof. Harnad, I was wondering if you could give some examples of what a UG constraint would be. I feel like I get the general concept of UG - that there must be some inborn constraints that children work from as they begin the language learning process, to explain how children make any sense of the wall of sound they encounter, and to explain universal grammatical similarities seen across all languages - but I’m finding it hard to concretely understand how these inborn constraints interact with and relate to learned constraints (what is UG and what is learned grammar), because I can't come up with examples of what a UG constraint would be. When reading the Pinker article, I found it hard to evaluate the different proposed learning mechanisms for how children might develop an understanding of language + grammar, because I’m not totally clear on what we need to explain through learning (what needs to be developed over and above UG), and what baseline a child is starting with. And I think all this confusion on my part is because I can’t put my finger on what exactly a UG constraint would look like.

    Replies
    1. OG:
      John is eager to please Mary (OG+)
      John are eager to please Mary (OG-)
      violation of verb subject agreement constraint
      (Learned and learnable by (1) unsupervised learning or (2) supervised learning or (3) instruction [any English grammar textbook])

      UG:
      John is eager to please Mary (UG+)
      *John is easy to please Mary (UG-)
      violation of wh-movement constraint
      Unlearned and unlearnable by (1) unsupervised learning or (2) supervised learning. [Learnable only by taking a course on UG].

    2. Language began between 300,000 and 100,000 years ago, and with it OG, which was co-invented by people, differs from language to language, is learnable, and is continuously changing.

      UG was first discovered by Noam Chomsky about 60 years ago. Its rules have been elaborated by linguists ever since. UG is unchanging, unlearnable by children (hence inborn) and the same across all languages (except for parametric variations like Subject-Verb-Object vs. Subject-Object-Verb word-order and pronoun-dropping, which vary from language to language; parameters are learned and learnable by children, and are hence part of OG, but constrained by UG).

  5. I find it interesting that “parents do not understand their children's well-formed questions better than their badly-formed ones.” Seeing as adults are usually more accustomed to interacting with other adults, it seems like parents would also have difficulty understanding badly-formed sentences. For myself, badly-formed sentences uttered by children are almost like a code to decipher when attempting to respond. However, I wonder if a process similar to children’s language learning is occurring in parents when they interact with their children. Through these interactions, do parents become accustomed to the strings of words their children use, and thus become “fluent” in what their child is attempting to say? Furthermore, does this finding only include parents being able to understand their own children’s badly-formed sentences? Or do parents in general have an easier time understanding all children’s badly-formed sentences?

    Replies
    1. Adults have learned (at least one) language. The language-learning child, age 2-4, at first has not. But they do understand some simple words and strings of words (so do dogs and primates) and can also produce them.

      But is it language, or just a series of category names and requests? The adult will try to interpret them by projecting a propositional meaning on them, but that does not mean they are learning a “child language” in so doing.

    2. I don’t think that a child’s use of words necessarily indicates their ability to use language. I think there must be some transition in the child from just learning words to actually understanding them as they connect in a larger language. A young child using words could be compared to Bunny the dog using words. Both are able to “speak” however, their word choice only reflects things they have experience with. I think these words are the foundation for building a language (they will be the grounded words), but they are not the language itself. So we might still be able to interpret a meaning because our words are grounded in a similar way, but we impose greater meaning due to our greater language knowledge.

  6. As someone who has not taken a functions course since grade 12, it was surprising to me that the part of this paper I understood most clearly was the use of the mathematical analogy in the parameter-setting principle. Pinker states, “for example, all of the equations of the form y = 3x + b, when graphed, correspond to a family of parallel lines with a slope of 3; the parameter b takes on a different value for each line, and corresponds to how high or low it is on the graph.” Using this analogy, is this saying that as soon as the child can interpret y = 3x + b, they will know what y is for every value of b? Further, when discussing how children could set parameters, Pinker writes, “Thus for every case in which parameter setting A generates a subset of the sentences generated by setting B, the child must first hypothesize A, then abandon it for B only if a sentence generated by B but not by A was encountered in the input”. Since setting A alone would never fully encompass all possible propositions, the child would always eventually encounter sentences requiring setting B. However, I am confused: how would the child know to move on to B, rather than concluding that the proposition did not belong to A while A was still a valid parameter? Is it not also possible that the child would re-evaluate parameter A?

    Replies
    1. Hi Katherine, I think these are very interesting points relating to parameters! To respond to your last few questions, I am also unsure of the specifics of how these parameters work in children. Based on my understanding of what Pinker was arguing, I believe the child would assume to move onto B based on the examples they had seen from parental input, which would depend on what language they spoke. Presumably, this would teach them that if the sentence was not parameter A, to move on to B. I don’t think that the child would need to re-evaluate parameter A since they had already assessed it initially and decided to move on to B.

    2. Katherine, once you learn a maths algorithm you don’t know the outcome of all possible applications of the algorithm; you just know how to do all possible applications of the algorithm. Computation is an ungrounded formal procedure: symbol manipulation. Knowledge (and understanding and meaning and perceiving) is not just a formal procedure: it includes grounding in referents, as well as what it feels like to know, understand, mean and perceive.

      Evelyn, see ’s explanation of parameter-setting.

  7. “Children with Japanese genes do not find Japanese any easier than English, or vice-versa; they learn whichever language they are exposed to.” (Page 13) I was a bit puzzled by this sentence. What exactly does Pinker mean by ‘children with Japanese genes’? If it does not make any difference whether they have these genes or not, is it necessary to categorize genes as ‘Japanese’ or anything else? Or does he just mean children who were born in Japan, or are of Japanese descent?
    Also, is there a limit to how many languages a child can acquire? If the child gets positive evidence for numerous target languages, they should in principle be able to acquire all the languages without much difficulty, given that they can pronounce all the phonemes of all languages before the age of 1, right? Or will they ‘glitch’ at some point because of the overwhelming amount of energy and neural connections this will require?

    Replies
    1. This is a very interesting point!
      For example, I have Korean genes and I was raised in a bilingual home. Additionally, for much of my formative years, I was only spoken to in Korean as my grandma babysat me while my parents were at work. However, I still struggle with understanding and speaking Korean and I do not consider myself to be bilingual. From this, I’d assume that Pinker has to be referring to children who were born in Japan, because clearly one’s genetic heritage does not lend any particular advantage in learning a language.

    2. Also, I’m not entirely sure about language-specific “glitching”, but studies have demonstrated that there are significant differences between the brains of monolinguals and bilinguals. Bilinguals have better cognitive capacities relating to attention and inhibition. A more empirical example of this is the finding that bilinguals perform better at the Stroop task than monolinguals, demonstrating more competency in switching categories.

    3. I also read this as just an example meant to show that there is no difference present based on the child's genetics. So I agree that it is referring to a child born in Japan vs Canada vs elsewhere. If genetics, in this case, made it easier for a certain child to learn a certain language, then I think this would cause serious problems for UG. It would seemingly remove the universality which it rests on.

    4. Shandra, there are minor genetic differences between ethnic groups, for example skin color, but not cognitive ones, and Pinker’s point was that there are no genetic differences in language either. Any child can learn any language (there are at least 7000 today, and there were probably more before cars, trains and planes as well as mass media cut down on the variety), but there aren’t enough hours in the day to learn them all in a lifetime, let alone a critical period. (And the problem is less the different pronunciations than all the different OG grammars and vocabularies. UG, meanwhile, is the same for them all.)

      Katherine, you could easily get back your Korean capacity if you communicated more in Korean. (Genetics is irrelevant but critical period experience is not.) As for the brains of bilinguals: that’s an effect (of the experience and learning) not a cause.

      Grace, genes don’t favor any particular language, but there are clearly genes that make some people better or faster at learning than others.

    5. I think this is an interesting discussion that shows how the ‘universal’ aspect of UG works. Although some people are of Japanese/Korean descent, their inherited UG works for language as a whole, not specifically for the Japanese/Korean language. UG is all about the capacity to develop language and aid early-life language acquisition, rather than being devoted to the parents’ mother tongue. It is just that if the parents speak one language most of the time, the child has more exposure to that language.

  8. Pinker suggested that natural languages seem to be built on the same basic plan, with "parameters", or OG. I think by referring to OG as "parameters", he pointed out one property of language as formally coded. He also claimed that these parameters interact with universal principles (UG) -- UG helps children solve the problem of overgeneralization, making them learn these grammatical rules faster (e.g., "if there is a principle that says that A and B cannot coexist in a language, a child acquiring B can use it to catapult A out of the grammar"). This explains how OG and UG interact, and why all of OG is UG-compliant.

    Replies
    1. This was fully explained by Iris in a prior thread.

  9. When reading this paper, something that I was curious about was how languages evolve and morph over time. For instance, the English language was quite different in the early 1500s than it is now. Pinker alludes to this, stating: “People do not reproduce their parents’ language exactly. If they did, we would all still be speaking like Chaucer.”

    This made me curious about how these changes to language come about. It is likely that they are small, subtle, societal changes that eventually modify the words and phrases that are commonly used. I wonder if this relates to slang, which is often created by younger children and could potentially alter the common phrases and words that are used. I was curious as to how and why these changes occur, and specifically, why it seems to be younger generations that create slang and have a more casual expression of English.

    Replies
    1. I also thought this was really interesting, and I think it plays into the idea of UG versus OG. The question is: do we have an inborn propensity to alter language to our changing environment, or do these changes arise from a learned OG? I think there would be an evolutionary advantage to adapting language, where certain things become more or less relevant with new generations (types of tools, activities, etc.), but I don't know if there is really evidence to substantiate the changing of language being innate. Once you learn OG, it would be easy to come up with new kinds of words or phrases, since you have a good idea of how real words sound, like the order of consonants and vowels. Even though they are both gibberish, "pleesh" sounds more like a real English word than "jtedk".

    2. To add to this thread, there are similar forms of cultural transmission across species! One of my favorite examples that reflects this propensity to transform vocabulary and grammar would be in humpback whales. In their songs, they have themes of a particular series of sounds, clicks, and hums, all at varying frequencies. Over a typical course of ten years, these songs are gradually passed on between populations of humpbacks from Australia’s east coast, from west to east across the oceans, like an enormous game of whale telephone. By the time it reaches the west coast of Australia, the song has changed so dramatically that it is unrecognizable from its start. The clashing of west and east coast whale songs that happens once every 10-15 years ignites a novel song that starts the process all over again. I do wonder if similar mechanisms of mixing cultures and abrupt exposure to different languages would cause new language features to emerge in humans. I find it very interesting that we can culturally transmit language, transforming OG across generations, but that UG should remain untouched and inherently the same seems like an anomaly. I feel that it would be extremely hard to follow, but to what extent does UG evolve just as OG evolves? I feel like it must, as UG had to have evolved from some prehistoric form of communication into what it is today, but what mechanisms could keep it locked across thousands of generations of people?

  10. When reading this article, this quotation made me think about how language acquisition is a good example of lazy evolution. "there is no question about whether heredity or environment is involved in language, or even whether one or the other is "more important." Instead, language acquisition might be our best hope of finding out how heredity and environment interact"

    In this case, the language acquisition device is what evolution spends time on, which offloads the actual language learning onto the environment.

  11. I’m a little bit confused about the difference between OG and UG. If I understand correctly, UG is not learnable. Rather, UG is innate. OG on the other hand is learned. The nature of the parameters of UG is what confuses me a little bit. My understanding is that a parameter is a variable component of a language. For example, a parameter could be whether or not a language drops a pronoun at the end of the word by turning it into inflection. So, while all languages observe the rules of UG, the parameters themselves are learned. These parameters are what give UG nuance. So, I am confused about how learned parameters of UG fit into a construct that is otherwise innate?

    Replies
    1. Hey Bronwen!
      I had the exact confusion as yours and asked Prof Harnad about UG parameters which I’ll try to summarize to the best of my understanding.
      You are right in that OG is learnable and UG is not learnable. OG is learned by 1) unsupervised learning 2) supervised learning and 3) instruction but UG is innate.

      So, if UG is the hardwired radio we are born with, UG parameters are like the stations of the radio. We are born with a default station but tune in to the station of our first language.

      One example of parameter setting is SVO [Subject-Verb-Object] order.
      Languages like English and Mandarin use SVO order: I love you. 我(s)爱(v)你(o)。
      Korean, my first language, uses the SOV order: 나는(s) 너를(o) 사랑해(v).

      When I was born, I was born with UG and a hierarchy of ALL the possibilities for ALL the UG parameters. The possibilities are wired in my brain. Let’s assume that the default setting of the SVO-order parameter is the order S -> V -> O (I’m not sure whether this is the true default or not but let’s just assume it is), I would have heard my parents speak Korean in the SOV order and changed my settings to the SOV order accordingly. However, for someone with English as their first language, the default setting of SVO would NOT have been violated and they would have simply retained it.
      Page 27 of article 9A: “One suggestion is that parameter settings are ordered, with children assuming a particular setting as the default case, moving to other settings as the input evidence forces them to” (Chomsky, 1981).

      The fact that my SOV-order parameter was learned through Korean input from my parents does not change that I was still hardwired with the hierarchy of ALL the possibilities for this parameter. UG is in the brain, UG parameters are still in the brain, but you just learn which setting your language uses among ALL the possible settings from input. Having all the possibilities of parameters allows us to learn every possible language when we are born, which is indeed a quality of UG. I made a simple diagram to illustrate how I understood it.

      Also, because UG parameters can be learned, it is a consequence of lazy evolution. Having UG parameters gives us the flexibility to learn as much as there can be learned rather than having it fixed. But this is also possible because there is a finite number of parameters which are simple enough to be learned. If your first language violates the default combination of parameters you are born with, you just tune into the other possibilities of combinations according to the input received because you have all the other possibilities of parameter settings hardwired. However, UG itself encompasses a lot more and cannot be learned in such way as shown through the poverty of the stimulus argument.
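      [The "default setting, switch only on positive evidence" idea above could be sketched roughly as follows. This is a toy sketch: the role-string "parser", the list of word orders, the SVO default, and all function names are simplifying assumptions for illustration, not claims about real parsing.]

      ```python
      # Parameter setting as "assume a default, retreat only when
      # positive evidence forces another setting".
      ALL_ORDERS = ["SVO", "SOV", "VSO", "VOS", "OSV", "OVS"]
      DEFAULT = "SVO"  # assumed default setting (an assumption here)

      def fits(sentence_roles, order):
          """Toy 'parser': a sentence fits a setting if its
          subject/verb/object roles appear in that order.
          Sentences are abstracted to role strings like "SOV"."""
          return sentence_roles == order

      def set_parameter(heard):
          """Keep the default; switch only when an input sentence
          cannot be parsed under the current setting."""
          setting = DEFAULT
          for s in heard:
              if not fits(s, setting):
                  setting = next(o for o in ALL_ORDERS if fits(s, o))
          return setting

      # A Korean-learning child hears SOV input and switches settings;
      # an English-learning child's input never violates the default,
      # so the default is simply retained.
      print(set_parameter(["SOV", "SOV"]))  # SOV
      print(set_parameter(["SVO"]))         # SVO
      ```

      Note that no negative evidence is needed in this sketch: the learner never has to be told that a sentence is wrong, only to encounter sentences that the current setting cannot accommodate.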

      Hence, there is no evolutionary problem with UG parameters as they are a product of Baldwinian evolution, but we are still met with the evolutionary problem with UG in general because it is VERY difficult to see how UG could have gone through the gradual Baldwinian evolution.

      I hope this makes sense.

    2. Thank you to my unpaid teaching assistant, Iris Kim, for clarifying this so lucidly.

    3. Thank you so much Iris!! This is so helpful!!

  12. A remark made in this text that definitely pushed my thinking further is that certain components of language can be acquired throughout life and others can only be perfected at an early age. For example, an adult can take intensive language courses (or move to a foreign country) and, with time, acquire a strong understanding of the language and its grammar. They could eventually learn to write as well as a native speaker, but their speech will always be one step behind. As seen in class and discussed in this reading, it becomes extremely hard to learn new phonemes and pronunciations with age. However, Pinker's text does not elaborate on the reasons as to why this may be the case. Could it be a simple issue of muscle activation? By perfecting and only using specific languages at a young age, are we selecting for specific muscles and, perhaps, neurons, which facilitate the use of these languages? Could it be that we are born with neurons meant to control speech musculature and phonetic sensitivity, and that they specialize to process the language stimuli we first encounter in life? I am assuming here that we are born with the necessary hardware to produce speech in any language, but that the plasticity of our speech-related neurons is almost non-existent in adulthood.

    Replies
    1. I'm not sure what Pinker's opinion is on learning new phonemes, but I think another possible answer to your question would be that learning phonemes works in the same way as learning the governing categories for pronouns in different languages. This reading says that according to Wexler and Manzini, "children should start off assuming that their language requires the largest possible governing category [for pronouns], and then shrink the possibilities inward as they hear the telltale sentences." In the same manner, we are born able to produce all the phonemes (innate capacity), and over the first year of life we hear language sounds and trim down our babbles until they consist only of the phonemes of our mother tongue. I feel like this could be another counterexample to the Subset Principle, and it illustrates how innate language abilities are shaped by our linguistic environment during learning.
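      The "start with the largest set and shrink inward" idea can be illustrated with a toy sketch; the inventories below are invented for illustration (real phoneme inventories are of course far larger), and this models only the narrowing step, not perception or production.

```python
# Toy sketch of "largest-first" narrowing (invented inventories).
# The infant starts with the full universal set and keeps only the
# sounds attested in the input, shrinking inward as evidence arrives.
UNIVERSAL_INVENTORY = {"p", "b", "t", "d", "k", "g", "l", "r", "th"}

def narrow(current, heard_sounds):
    """Shrink the current hypothesis set to what the input attests."""
    return {sound for sound in current if sound in heard_sounds}

heard_in_input = {"p", "b", "t", "k", "l"}   # sounds of the "mother tongue"
adult_inventory = narrow(UNIVERSAL_INVENTORY, heard_in_input)
```

      The key property is that learning here only ever removes options from an innately given superset; nothing outside the universal set can ever be acquired this way.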

    2. Camille and Isabelle,

      Could it possibly be a combination of the two? Some degree of mind-body dualism: on one side, the body (muscles and neurons) physically becomes accustomed to forming the phonemes of our first language, and as adults the years of cementing these habits make them extremely difficult to break; on the other side, the mind, with children narrowing down their largest possible category and shrinking their possibilities as Isabelle describes, would be the other face of that coin. Perhaps it is this combination of physical habit and the early narrowing of possibilities in children that makes adult language acquisition so much more difficult.

  13. One section of this reading asked whether language is just a way to communicate cognitions. This made me think of our earlier discussions in this class surrounding computationalism. If this statement were true, would we be able to say that computers use language without implying that they also have cognitions? Computer programs exist these days that can harness the power of language and write articles filled with grammatically correct sentences. The quality of their prose is another question, but is there any question that they have acquired syntax? Perhaps language assists in communicating cognitions, but there is no necessary connection between the two.

    Replies
    1. Hi Lucie, it’s a very interesting thought! To me, the answer is no. Although, with algorithms, computers can produce prose and seem to manage language, they are in fact still doing computations on 0s and 1s, and computation does not involve interpretation. The prose is an extra layer of human interpretation that was added before the output.

  14. Pinker doesn’t seem to distinguish between UG and OG, putting them in the same basket. By doing so, he is simply restating the obvious about OG: that it is learned through trial and error, or from hearing adults. It remains problematic that he doesn’t distinguish them, since the poverty of the stimulus applies to UG: UG isn’t learned and doesn’t need to be corrected; if a mistake is made, it inherently sounds wrong or incomprehensible.

  15. I find the varying conditions that can create discrepancies in a child’s ability to speak a language quite interesting. I understand that we are born with a set of innate rules (UG), but in order to develop the skill of speaking a language and build our vocabularies, we need to be in an environment where we constantly hear words and sentences, so we can learn their general structure and communicate in accordance with UG rules. This makes me curious not only about what would happen to a child who, during their critical period, was not learning the proper grammatical rules, but more importantly: is it even possible for a child to learn an entire language that doesn’t comply with UG rules? Would that mean they need to be communicating only with people who also spoke this way, or are there real neurological/biological deficiencies that could lead to this?

    Replies
    1. Adebimpe, I think you might have kind of answered your first question in the first half of your skywriting! Like you said, we are all born with Universal Grammar, and I think the whole point of UG is that it applies to /all/ languages — Pinker writes that “UG specifies the allowable mental representations and operations that all languages are confined to use.” So, there is no language that a child could learn that would not comply with UG rules. The fascinating thing that all human languages have in common is that they in fact do comply with UG, and we innately have the capacity to learn them all when we are born, though we only learn a few at most. You bring up an interesting hypothetical, but I think the root of it all is that no matter what language we come up with, it will be in accordance with these “rules.” Pinker also brings up that children don’t need to hear a “full-fledged language” as long as they are surrounded by other children — they will end up inventing their own spontaneous language, but this language would still be in compliance with UG, which I think answers your last point.

  16. When reading Pinker’s description of language acquisition, I couldn’t help but wonder about “pointing out [objects’] properties and owners” being an early manifestation of language in children everywhere. Specifically, if this is also true for children who grow up in societies where such concepts do not exist in the same way as in ours, such as hunter-gatherer societies, or in places where they don’t really own anything, such as an orphanage, or even children who grow up as slaves. Of course, in our society, young children tend to be quite possessive, with “MINE” being a common saying among toddlers. But I would venture to say that in the examples listed above as well as in different types of societies in general, toddlers’ first strings of words may relate to different categories than those studied in occidental countries (which value property rights) such as Britain, the US, or Canada. The only way to partially offset such biases would be to incorporate studies on the subject from diverse cultures—which is, of course, much easier said than done.

    Replies
    1. Hey Milo, I found this point interesting as well! I found this quote to be relevant to what you are saying:
      “Children's first words are similar all over the planet. About half the words are for objects: food (juice, cookie), body parts (eye, nose), clothing (diaper, sock), vehicles (car, boat), toys (doll, block), household items (bottle, light), animals (dog, kitty), and people (dada, baby). There are words for actions, motions, and routines, like up, off, open, peekaboo, eat, and go, and modifiers, like hot, allgone, more, dirty, and cold. Finally, there are routines used in social interaction, like yes, no, want, bye-bye, and hi -- a few of which, like look at that and what is that, are words in the sense of memorized chunks, though they are not single words for the adult. Children differ in how much they name objects or engage in social interaction using memorized routines, though all children do both. (page 5, section 3: The Course of Language Acquisition)”
      Although Pinker is discussing language acquisition specifically, not the act of pointing out objects, I think this creates potential tension with what you’re hypothesizing. If children around the globe all predictably speak about similar topics, wouldn’t this suggest that they are somewhat resistant to cultural narratives at this stage? Furthermore, Pinker does not comment on cultural differences in how much children name objects or engage in social interaction via routines, so such differences may still exist, as I assume you would agree. However, given that Pinker did not specify, saying only that children differ, it may be fair to assume that children in the pointing stage are operating not from culturally dictated values but from a more inborn curiosity about the world around them. That being said, I think it’s a shame that Pinker did not elaborate on this point, as it is something I wonder about as well. Perhaps it is an area that requires further research?

  17. Sometime between when we started pointing at objects that are not currently visible, and when we started using words like ‘and, if, but, or’, there was some structure that appeared that would allow us to use those functional words, as well as how to better communicate what we used to have to do only with pointing. Because evolution is lazy, this structure is flexible and is not filled with content -- it is merely a frame that can be filled out with any of the world’s ~7000 languages.

    If uptake of rules like word order (Verb/Subject/Object, etc.) and function words (or/and/if) happens very quickly, and perhaps more quickly than other language specificities (conjugating for tense, agreeing for gender and number), this would indicate that those sets of rules differ in some fundamental way. If we posit that word order is one of the parameters of UG, then it would surely make sense for it to be picked up faster by a child.

    Can the same be argued for function words? How would this argument go? That we have evolved with the ability to absorb specific iterations of rules into our rule-structure (UG) makes sense. It would certainly be more efficient! But what of function words, and propositionality? Are these part of UG? If language began with the proposition, and language capacity requires UG, did they somehow magically appear at the same time?

  18. Pinker asks whether, in order to learn a language, children need both positive and negative evidence. It is evident that positive input exists: it is the language that they hear and/or see all the time. It's worth noting that this input doesn't have to be a complete language: as long as there is interaction and some words, language will grow. What the child learns from this input about what counts as grammatical in that language is the 'positive evidence.' Everything the child hears from others is UG-compliant, and, more importantly, everything the child produces (after a brief prelinguistic stage of only copying and vocalising) is likewise UG-compliant. As a result, there are no errors and hence no corrections (negative evidence).

    Pinker goes on to consider whether, to learn a language, children also require negative evidence: information about which utterances are ungrammatical. There is, however, no evidence to back up the idea that such feedback comes from the environment (from parents, teachers, older siblings, etc.). In fact, Pinker cites several studies suggesting that this type of feedback has little influence on children's language development. He therefore concludes that children must have some kind of internal constraint that allows them to rule out a vast number of unacceptable utterances and word combinations. The category (UG) must be inherent, given its complexity and the "poverty of the stimulus" (no negative evidence). It couldn't have been learned by induction, since without negative evidence you can't learn a (non-trivial) category via induction.
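    The induction problem in that last sentence can be made concrete with a toy example. The two "grammars" below are invented two-string languages, assumed purely for illustration: when one hypothesis generates a subset of another, no amount of positive evidence alone can rule out the overgeneral one.

```python
# Toy illustration of learning from positive evidence only.
# Two invented "grammars": the narrow one is the true target language,
# the broad one overgenerates.
narrow_grammar = {"aa", "ab"}
broad_grammar = {"aa", "ab", "ba"}   # superset: also allows the bad string "ba"

# The child only ever hears grammatical strings (positive evidence).
positive_input = ["aa", "ab", "aa", "ab"]

# Both hypotheses fit every positive example, so nothing the child
# hears can ever rule out the overgeneral grammar; only negative
# evidence ("ba" is ungrammatical) could do that.
narrow_fits = all(s in narrow_grammar for s in positive_input)
broad_fits = all(s in broad_grammar for s in positive_input)
```

    Since both hypotheses survive all the positive data, something other than the input must be doing the ruling-out, which is the role the argument assigns to the innate constraint.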

    Replies
    1. This was a very clarifying explanation. It is interesting that this research has shown that negative feedback does not really affect language development. Another way to think of it is that UG is a sort of built-in immunity against incorrect language use. One of the more confusing aspects of UG is that it is not learnable and therefore not observable in behavior. Thinking of it as immunity, UG manifests as the incorrect language use simply not happening in the first place.

  19. I found the video Professor Harnad included with this topic helpful in distinguishing negative from positive evidence. He included the example of trying to discern what counts as a mythical thing (a layleck) from positive evidence alone. This is theoretically possible because the differences between what is and is not a layleck are striking enough to be learned from positive evidence alone. Although this seems plausible, it does not apply to UG, because its rules are too complicated: the difference between what is and is not grammatically correct is not striking enough to be understood from positive evidence alone. This suggests that the negative evidence we would require to learn UG is not environmental but innate. It is also important to note that the poverty-of-the-stimulus argument refers to the absence of negative evidence, not to how much negative evidence is sufficient. The evidence cited by Pinker suggests that the amount of negative feedback a child is exposed to is not relevant to their language acquisition.

  20. In this article, Pinker argues that children solve the problem of language acquisition by coming equipped with the general design of language, wired into them in the form of universal grammar. I was confused about what UG is and expected Pinker to give a definition or example of it when I read the article. Based on the article, I wonder if I could understand UG as the rules that govern what thoughts can be thought; as demonstrated in Iris’s diagram, all kinds of OG fall within the domain of UG. I tried to summarize the arguments Pinker proposes in this article in support of the UG theory, and I wonder if they can be divided into two parts. The first is that although languages differ from one another, all human languages share the property of following universal grammar, and people exposed to different language environments converge on the same grammar. The second is based on the poverty of the stimulus, which argues that children know particular rules of language that they could not have learned from the stimuli they are exposed to or the input available to them. Children are not exposed to negative evidence but know which structures are not ‘grammatical’ and learn language effortlessly, indicating that universal grammar is learned neither by unsupervised learning nor by supervised learning.
    I had one question, possibly a bit off topic, when reading the part where Pinker explains how words are built in layers. In his demonstration, he used ‘Darwin’ as an example. It reminds me of patients who have lost interhemispheric access and present with an inability to understand compound words; instead, they interpret the whole word as the sum of the meanings of its roots. In particular, if the word ‘rainbow’ is shown to such a patient, they interpret it as ‘rain’ and ‘bow’ separately. I wonder if this case is related to our mechanisms of language acquisition and worthwhile to discuss within the framework that Steven Pinker proposes in this article.

    Replies
    1. Eric, I really like the question you have raised here regarding compound words and the interhemispheric-access patients who are unable to interpret them semantically as one word as opposed to two. I have done a little digging, and I cannot seem to find concrete proof that compound words exist in all languages, which would suggest compounding to be a feature of UG. There is certainly plenty of evidence that compounds exist in the world’s most commonly spoken languages, especially in Germanic languages, Mandarin, and Arabic. However, there is a lack of evidence for indigenous and minority languages around the world. If we could confirm compounding as part of UG, these patients could be of particular interest because they could, in theory, inform us about the physiological and anatomical mechanisms of UG, perhaps revealing some part of the “little black box” of language acquisition.

  21. I’m really impressed by the complexity found in language acquisition. From ages 1 to 4, children go from learning the prosody of words in their target language with their auditory system to attaching semantics to words and acquiring the grammar of the whole language. Complexity is even found in that primary exposure to prosody, since the emotional state and intent of the speaker change its quality, which makes it more difficult to infer rules from auditory input. What really fascinates me is the mystery of the supposed self-correcting mechanism in the child’s brain: how, even without external correction by parents, children can grasp grammatical concepts and apply them in their lives. Furthermore, the precision required of this supposed innate mechanism is astonishing: it must not overgenerate language through universal grammar rules and must follow certain steps to achieve adultlike capacities.

  22. “The scientific question is whether the chimps' abilities are homologous to human language -- that is, whether the two systems show the same basic organization owing to descent from a single system in their common ancestor. For example, biologists don't debate whether the wing-like structures of gliding rodents may be called "genuine wings" or something else (a boring question of definitions). It's clear that these structures are not homologous to the wings of bats, because they have a fundamentally different anatomical plan, reflecting a different evolutionary history. Bats' wings are modifications of the hands of the common mammalian ancestor; flying squirrels' wings are modifications of its rib cage. The two structures are merely analogous: similar in function.”

    This passage from Pinker’s Language Acquisition reminded me of our discussions on T2 vs T3 passing. Communication via text or speech is enough for passing T2; to pass T3, however, there must be no discernible differences between a T3 robot and ourselves. Tying in the other-minds problem, we extend courtesy to chimps and treat their language-acquisition abilities as homologous, descending from a single common ancestor, on evolutionary-biological grounds. The last line in particular stood out: “The two structures are merely analogous: similar in function.” If we were to think about the hardware of a T2 robot and then about a localized area in the brain for a specific function, I suppose we could still say that that area is “analogous,” but the context would be perceived differently if we were looking at chimps. All that is to say: is this relevant to, does it tie into, or does it arise from the other-minds problem?

