Monday, August 30, 2021

8a. Pinker, S. & Bloom, P. (1990). Natural language and natural selection

Pinker, S. & Bloom, P. (1990). Natural language and natural selection. Behavioral and Brain Sciences, 13(4): 707-784.

Many people have argued that the evolution of the human language faculty cannot be explained by Darwinian natural selection. Chomsky and Gould have suggested that language may have evolved as the by‐product of selection for other abilities or as a consequence of as‐yet unknown laws of growth and form. Others have argued that a biological specialization for grammar is incompatible with every tenet of Darwinian theory ‐‐ that it shows no genetic variation, could not exist in any intermediate forms, confers no selective advantage, and would require more evolutionary time and genomic space than is available. We examine these arguments and show that they depend on inaccurate assumptions about biology or language or both. Evolutionary theory offers clear criteria for when a trait should be attributed to natural selection: complex design for some function, and the absence of alternative processes capable of explaining such complexity. Human language meets this criterion: grammar is a complex mechanism tailored to the transmission of propositional structures through a serial interface. Autonomous and arbitrary grammatical phenomena have been offered as counterexamples to the position that language is an adaptation, but this reasoning is unsound: communication protocols depend on arbitrary conventions that are adaptive as long as they are shared. Consequently, language acquisition in the child should systematically differ from language evolution in the species and attempts to analogize them are misleading. Reviewing other arguments and data, we conclude that there is every reason to believe that a specialization for grammar evolved by a conventional neo‐Darwinian process.

Tomasello, M., & Call, J. (2018). Thirty years of great ape gestures. Animal Cognition, 1-9.

Graham, K. E., Hobaiter, C., Ounsley, J., Furuichi, T., & Byrne, R. W. (2018). Bonobo and chimpanzee gestures overlap extensively in meaning. PLoS Biology.





67 comments:

  1. (Sorry beforehand that this is so long)
    As I was reading through this article, the response I initially found most compelling against the authors’ argument that language is a specialized biological system produced through natural selection was the question of how natural selection could have acted on the intermediate steps that presumably preceded language as we know it. If we can’t explain how intermediate evolutionary forms were adaptive, isn’t it more likely that language is just a side effect of another adaptive structure?

    There were two points, as I read, that changed my mind about this. The first is the response the authors gave to the suggestion that language is an emergent property of a biological system evolved for a different purpose, similar to a complex program being run on a computer initially designed for a different purpose. The authors point out that, while a computational system might have the ability to perform an activity it wasn’t initially meant to perform, there needs to be someone to reprogram it (page 25). The way I understand this is that, for a flexible system with many possible uses to be co-opted for language specifically, there would need to be an outside force guiding it to do so. And in the case of language, natural selection would be this force. The authors then point out that "the kinds of generalizations that must be made to acquire a grammar are at cross-purposes with those that are useful in acquiring other systems of knowledge from examples", suggesting that, for language to develop, the underlying system itself would likely need to be altered. So, it is unlikely that language is simply an emergent property of a system evolved for another purpose: without the external force of natural selection acting on this system (i.e., if language had not been selected for), what would prompt it to act in such a way as to produce language? And this system probably needed some language-compatible specialization in order to successfully produce language. This second point, while it might seem like a minor afterthought, actually seems quite important to me. Without it, the answer to "why would a flexible system capable of underpinning language development have led to language and not something else" could just be "why not? Maybe in other organisms it led to Doppler-shift echolocation or pollen-source communication (page 26); it just happened to lead to language in humans".
    But, if language as we know it could *not* be produced by a generally useful communication/information-organizing system, then there must at some point have been an evolved specialization that made language possible. In other words, it seems that some inborn, language-specific structures have been produced through natural selection.
    (continued in replies)

    Replies
    1. In addition to arguing that selection would have been necessary to produce language-supporting biological structures, the authors provided what I found to be a very interesting explanation for how intermediate language structures, when paired with learning mechanisms, could in fact have been adaptive (page 32). As I understood it, the argument was basically that learning could fill in the gaps for intermediate structures, allowing them to be functional even in incomplete form. There would still be selection pressure for more and more of this "language structure" to be inborn, because the more that needs to be learned, the longer it takes for an organism to be able to access the adaptive benefits of the structure. As the authors put it, "there are increasing advantages to having more and more of the correct connections innately determined, because with more correct connections to begin with, it takes less time to learn the rest, and the chances of going through life without having learned them get smaller". This effectively means that, while there would still be pressure for a more complete "language structure" to develop, intermediate forms could be usable, and better than less-complex prior forms, explaining how intermediate steps in the development of a "language structure" could in fact have been acted on by natural selection.
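
      This "learning fills in the gaps, then selection makes more of it innate" dynamic is essentially the Baldwin effect, and it has a classic computational demonstration in the style of Hinton & Nowlan (1987). A minimal sketch of it (all parameters below are my own hypothetical choices, not anything from the paper):

```python
# Toy Hinton-&-Nowlan-style sketch of the Baldwin effect. Each gene is
# innately correct (1), innately wrong (0), or plastic ("?"); "learning"
# is random guessing of the plastic genes each lifetime. The more of the
# target pattern that is innate, the faster learning succeeds, so
# selection favors genomes with more of the structure built in.
# All parameters here are hypothetical, chosen only for illustration.
import random

random.seed(42)  # reproducible run
GENES, POP, TRIALS, GENERATIONS = 10, 100, 50, 100

def fitness(genome):
    """Earlier learning success -> higher fitness; unlearnable -> ~0."""
    if 0 in genome:                  # an innately wrong gene can't be guessed away
        return 1e-6
    for t in range(TRIALS):          # each trial guesses all plastic genes at once
        if all(g == 1 or random.random() < 0.5 for g in genome):
            return float(TRIALS - t + 1)
    return 1e-6

pop = [[random.choice([0, 1, "?"]) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    weights = [fitness(g) for g in pop]
    pop = [random.choices(pop, weights)[0][:] for _ in range(POP)]
    for g in pop:                    # occasional mutation
        if random.random() < 0.1:
            g[random.randrange(GENES)] = random.choice([0, 1, "?"])

innate = sum(g.count(1) for g in pop) / (POP * GENES)
print(f"fraction of innately fixed correct genes: {innate:.2f}")
```

      The selection pressure in the simulation is exactly the one in the quoted passage: genomes with more innately correct connections learn faster, so partial, learning-assisted structures are usable and are still pushed toward being fully inborn.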

      For me, evolution-based theories often feel incomplete, like they need more proof, but since we can’t go back millions of years and see what our evolutionary ancestors were like or exactly what happened through the course of our evolutionary history, I try to approach evolutionary theories with slightly more suspension of disbelief than I would other scientific theories. And I have to say, after reading this paper it seems much more plausible to me that language was in fact a system produced through natural selection.

    2. Some aspects of language capacity may be learned and others inborn and evolved.

      Learning capacity itself is evolved, but language-learning capacity is somehow special. No nonhuman species can learn language. (If you disagree, please describe the evidence that nonhuman species can learn language – but be careful to specify also what you mean by language, so as not to confuse it with other forms or codes of communication, non-linguistic ones.)

      Whatever it is that made the human species the only species that could learn language is clearly something that evolved. So it is uncontroversial that there is an evolved component of language capacity.

      The P&B target article appeared 30 years ago because it was thought that there was a special problem with explaining how some features of language capacity could have evolved. P&B survey how several features of language capacity could have evolved, and how other features could be learned.

      But they omit completely any discussion of how one very specific feature of language, Universal Grammar (UG) could have evolved, since UG cannot be learned by children. The reason for this is what is called the “Poverty of the Stimulus”: what the language-learning child hears and says, and what adults hear and say -- including any corrective feedback adults may give to the child -- is not enough for the child to learn UG from it. The reason is that neither adults nor the child make UG errors. So UG cannot be learned from supervised learning, which, as with any category, requires trial and error and corrective feedback. If there are no errors, there cannot be supervised learning. So the child must know UG already, without learning.

      Could it be learned through unsupervised learning, by just hearing adults speak? No, the rules of UG are far too complicated to be learnable through just passive exposure to adult language. (Interesting current question: is GPT (google GPT-3) learning UG through unsupervised learning, by being exposed to enormous volumes of text input? GPT does go on to produce text that is UG-compliant, but (1) does that mean GPT has learned language? And (2) how could the child’s brain do that on just what the child says and hears in a few years, rather than on the many terabytes of language input that GPT gets?)

      The only inborn feature of language the evolutionary theory would have to explain is UG. There is no partial or progressive “UG.” So it is very hard to imagine how UG could have evolved gradually, even with “Baldwinian Evolution” (see Replies in Week 6 on evolution) filling any gaps.
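
      The "no errors, no supervised learning" argument above can be made concrete with a toy error-driven learner (a perceptron; the data and weights below are made up purely for illustration): an update rule that fires only on mistakes never modifies a hypothesis that is already error-free, so error-free performance cannot be the product of that kind of learning.

```python
# Toy illustration of "no errors, no supervised learning": a perceptron
# updates only when it misclassifies. If behavior is already error-free
# (as children's UG-compliant behavior is), the error-driven update rule
# never fires, so nothing is learned -- the "knowledge" must already
# have been in place. Data and weights are invented for illustration.

def perceptron_updates(weights, data):
    """Return how many error-driven updates occur over the data."""
    updates = 0
    for x, label in data:  # label is +1 or -1
        prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else -1
        if prediction != label:  # update ONLY on error
            weights = [w + label * xi for w, xi in zip(weights, x)]
            updates += 1
    return updates

# A hypothesis that already classifies all the data correctly:
data = [((1, 0), 1), ((0, 1), -1), ((2, 1), 1), ((1, 3), -1)]
innate_weights = [1.0, -1.0]
print(perceptron_updates(innate_weights, data))  # no errors -> 0 updates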

    3. Hello Professor Harnad,

      Going over this article after Friday’s class again, along with your replies on skywritings made me deliberate over Pinker’s argument.

      So, (as you mention) the major issue with Pinker’s selectionist viewpoint on the evolution of language is treating language as a simple capacity. Such oversimplification is problematic because there is a clear distinction between OG and UG. OG is an invented feature of language which can be learned by supervised and unsupervised learning. It can also change through social convention. The capacity to learn language evolved through Baldwinian Evolution.
      Contrarily, UG is an evolved feature of language which is innate and cannot be learned because of the poverty of stimulus. Hence, evolutionary psychology must address this innate UG, its causal mechanism and adaptive advantages.

      I would just like to (attempt to) expand on your last point “it is very hard to imagine how UG could have evolved gradually, even with Baldwinian Evolution”.

      Selectionism’s gradualist explanation does not fit UG because, much like how UG cannot be learned through trial and error, it could NOT have evolved through trial and error either. The adaptive advantages of UG are unclear in terms of survival and reproduction, so it is not apparent why nature would have preferred an ancestor with UG (in its present form) over some other "faulty" form of predeveloped UG. Without this clear advantage, there would not have been any "correction" from evolution.

      You mentioned in class that Pinker did NOT deliberately leave out the discussion of UG and I believe that to be true. I just think he did not recognize that UG was an evolutionary problem BECAUSE of its vague adaptive advantages.

      I think this quote from section 3.3 shows his lack of understanding: “… one might posit that there is a learning mechanism, leading to the development of multiple languages. That is, some aspects of grammar might be easily learnable from environmental inputs by cognitive processes that may have been in existence prior to the evolution of grammar, for example, the relative order of a pair of sequenced elements within a bounded unit.”

      It’s true that we all have this precoded “mechanism” (UG) that allows us to produce correct grammar no matter what our first language might be. But (what I think) Pinker is stating here is that he believes some aspects of grammar were preferred because of the pre-existing way our ancestors processed “order”. However, again, even if this were true, the adaptive advantage of processing a pair of elements in a certain sequence seems hard to explain and we end up in a circular regress.

    4. P&B's incoherence on this point is a symptom that they are not making the OG/UG distinction. "Adaptive value" concerns only UG. Parameter-settings (like Subject-Verb-Object vs. Subject-Object-Verb) are learned (OG), just as vocabulary and phonology are learned: UG leaves them free to be learned, so the outcome of the learning does not need a universal Darwinian explanation, just a local historical one.

  2. "So we see a reason why functionalist theories of the evolution of language can be true while functionalist theories of the acquisition of language can be false. From the very start of language acquisition, children obey grammatical constraints that afford them no immediate communicative advantage."

    From what I gather, the argument here is that the evolution of language based on theories focusing on the utility of language evolution can be true while theories about the utility of acquiring a language are false. Thus, we are distinguishing evolution from acquisition. And this is why we can see children acquiring language in a way that some steps of acquisition provide no clear communicative advantage, yet it is not an argument against natural selection. Is this because acquisition need not follow the same path of development as evolution would have it? Eventually children abandon language practices for the ones used by adults, so attaining communicative advantage happens at some point. I still find myself a bit confused by this section, so I would appreciate another perspective.

    Replies
    1. Hey Grace, I believe the idea is that these “grammatical constraints” children obey (which, at the time, don’t seem to convey a communication advantage) are innate constraints, i.e. evolutionarily-shaped, inborn characteristics. These types of innate constraints are beneficial to fitness at an evolutionary level (so, have been selected for), because they ensure that everyone will use a similar enough code that they can understand each other. This is the idea of the importance of parity in communication. It doesn’t matter *what* communication protocol is used per se, just that everyone uses a similar enough one that they can develop language in such a way that they will understand each other. We can imagine there being initial variation in protocols (multiple possible systems of innate constraints), but at a certain point one protocol became slightly more prevalent, and after that it would have been beneficial to have this protocol (where fitness here is tied to the fact that you can communicate with a larger proportion of your population). There doesn’t need to be a specific reason, at the level of selection, that *our* protocol was used; it just happened to be fixed at some point (happened to become slightly more prevalent, and then became increasingly more prevalent to the point of being universal because, as the prevalent protocol, it conferred a fitness advantage). So, at a selection level, we don’t need to explain why saying "Who did you see her with?” would be inherently better than saying "Who did you see her and?”; it just, presumably, became more beneficial at a certain point because most other people were saying "Who did you see her with?”. So, at the level of selection (so innate constraints), we can explain why one protocol is followed and not another: that protocol is just the one that happened to become fixed.

      At the level of language acquisition, there wouldn’t be the same kind of immediate advantage to saying "Who did you see her with?” over "Who did you see her and?”, because the people around the child saying this second sentence would probably still understand what they meant. To make sense of adherence to a certain protocol being advantageous at the level of acquisition, one would have to take language to be taught through a strict reward+punishment structure where a parent reinforces correct use of the protocol, but that isn’t how kids learn language; it’s not all strictly supervised learning. To explain why kids acquiring language *do* conform to a certain protocol, then, it makes sense to point to innate constraints as an explanation. There is no person sitting down and explaining the rules of language to a baby, so if this baby starts to speak according to certain rules, it is likely following an “internal rulebook”. And the idea of parity of communication protocol can explain why it is following *our* rulebook and not another. Let me know if I should clarify something further!
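
      The fixation dynamic described here (any shared protocol is advantageous, whichever it happens to be) can be sketched as a toy frequency-dependent selection model. Everything below (population size, the A/B protocols, the fitness rule) is an illustrative assumption of mine, not anything from the paper:

```python
# Toy model of communicative "parity": two equally arbitrary protocols,
# A and B. An agent's fitness is simply the fraction of the population
# sharing its protocol, so whichever variant happens to drift ahead is
# amplified to fixation -- no intrinsic superiority of either protocol
# is needed. All numbers are hypothetical.
import random

random.seed(1)                   # reproducible run
pop = ["A"] * 50 + ["B"] * 50    # start exactly 50/50

generations = 0
while len(set(pop)) > 1 and generations < 2000:
    share_a = pop.count("A") / len(pop)
    # fitness of each agent = frequency of its own protocol
    weights = [share_a if p == "A" else 1.0 - share_a for p in pop]
    pop = random.choices(pop, weights=weights, k=len(pop))
    generations += 1

print(f"surviving protocol(s): {set(pop)} after {generations} generations")
```

      Because an agent's fitness is just the frequency of its own protocol, random drift away from 50/50 gets amplified until one arbitrary variant is universal; which variant wins differs from run to run, mirroring the point that *our* protocol needs no special justification.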

    2. Grace. what is a “functionalist” theory? Are Pinker & Bloom talking here about Universal Grammar (UG) or Ordinary Grammar (OG)? Either way, isn’t saying things in a way that other people can understand an “advantage”? How can it be useful to evolve a language, but not to acquire it? The confusion comes from P & B’s not distinguishing UG and OG. OG can be (and is) learned by unsupervised learning, supervised learning, or instruction. UG cannot be learned by unsupervised learning (why not?), nor by supervised learning (why not?), and it is not learned (by the child) by instruction (why not?). We’ll talk more about this next week when we get to learnability and the “poverty of the stimulus.”

      Caroline, UG is not “parity,” and not “like parity,” and trying to explain it by analogy with parity (or any other “protocol”) is hand-waving. P & B are simply fudging the UG (unlearnable) vs. OG (learnable) distinction. Why isn't all grammar just learnable OG?

    3. A functionalist theory would be one that is concerned with what the object of the theory can do (its function). Yes, I would agree that for UG or OG, speaking in a way that others can understand is an advantage. If I'm thinking of it correctly, can't acquisition be a result of evolution?

      UG can't be learned by unsupervised learning because, for example, just listening to adults around them speak, a child would not be able to pick up the rules given the complexity. It can't be learned by supervised learning because this is trial and error learning, and as we have read and discussed, children aren't making errors, so they aren't learning via supervised learning.

    4. We may understand action categories by what they can do ("running") or what we can do with them (sensorimotor affordances), but objects have a lot more kinds of features than those.

      Yes, the capacity to learn language is evolved. (But even the capacity to learn chess is evolved; it is based on our general category-learning capacities, whereas learning language requires some more language-specific learning capacities.)

      OG rules are complex too, yet they can be learned from unsupervised learning.

      If the child never makes or hears UG errors (or corrections) then it already "has" UG.

      Once trained on trillions of samples of text, GPT can produce limitless UG-compliant text of its own: Does GPT "have" UG?

    5. I think GPT-3 is learning OG for every language it encounters, which implies that it has the ability to learn the grammars of any language, and this ability is not a learned one but a built-in function of GPT-3, implemented in its programs and code. In that sense, I think GPT-3 has UG. However, notice that because OG and UG are just syntax, it certainly cannot actually understand the language, because of the symbol grounding problem; but GPT-3 indeed has the ability (UG), at least partially, to command the syntax of any language (OG).
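
      What "learning OG from raw exposure alone" means mechanically can be shown with the simplest possible next-word model, a bigram counter (GPT-3 is incomparably larger, but it too is trained only on next-token prediction, with no corrective feedback about grammar; the tiny corpus below is invented for illustration):

```python
# Minimal sketch of unsupervised next-word learning from raw text: a
# bigram count model. GPT-style systems are enormously larger, but are
# likewise trained only to predict the next token, never given explicit
# grammar corrections. The toy corpus is invented for illustration.
from collections import Counter, defaultdict

corpus = "who did you see her with . who did you talk to her about .".split()

follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1           # just count what follows what

def most_likely_next(word):
    """Most frequent continuation absorbed from exposure alone."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("did"))     # prints "you"
```

      The open question in the thread stands: scaling this kind of exposure-driven statistics up to GPT size yields UG-compliant output, but whether that counts as "having" UG, and how a child could manage it from so much less input, is exactly what is at issue.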

  3. After reading 3.3, Language Design and Language Diversity, I'm trying to infer more reasons behind the language diversity among us. After reading the argument in 8b that the development of the "proposition" was motivated by more efficient teaching and communication between kin members, I began to speculate whether the development of different languages is also a way to ensure that information about resources is kept within kin groups without letting competitors know (e.g., where the food is). Yet, for me, evolutionary theory is a kind of just-so story, as it is unfalsifiable. For instance, my speculation above is explained under the assumption that evolution is true. It seems that we can come up with different interpretations that all seem to "make sense" but cannot be proved wrong.

    Replies
    1. Hi Xingti,

      I think you make a good connection between both readings, and your point is something I was reflecting on too! Different languages may have been developed in order to keep information within species so as to keep kinfolk alive and keep others out. However, I think section 3.2 argues against your point about evolutionary theory being a just-so story. Though I understand where you’re coming from when you say that claims in evolutionary theory cannot be proven wrong, it seems more complicated than that to me. Section 3.2 talks about how human language can communicate extremely complex messages and how grammar alone is not enough to convey the full range of human emotions. Artificial languages have many parallels with human grammar, but there is still something that is missing when compared to natural languages. I think this difference lies in the undeniable, unfalsifiable claims that evolutionary theory makes. But, you are right in saying that we have to take the “assumption” of evolution truly filling in these gaps in order to say things like that.

    2. XingTi, languages are diverse because lazy evolution has made them learnable.

      The motivational theory of the origin of propositions is just a hypothesis (and probably wrong).
      Evolutionary theory is more at risk of Just-So Stories than ordinary experimental science because the evolution happened a long time ago. But some Just-So Stories turn out to be true, based on fossil, carbon-dating and geological evidence, selective breeding experiments, and computer modelling.

      Emma, the family origins of language is a possibility, but not really an explanation.

  4. I found Pinker and Bloom’s argument for the natural-selection-driven evolution of language to be quite compelling. However, I feel like two contrasting selective pressures for the evolution of language were left out, specifically informativeness versus simplicity. In section 3.1 Pinker says, “grammars for spoken languages must map propositional structures onto a serial channel, minimizing ambiguity in context, under the further constraints that the encoding and decoding be done rapidly, by creatures with limited short-term memories, according to a code that is shared by an entire community of potential communicants.” In trying to minimize ambiguity, there is a tension between expressing oneself simply, using what the interlocutor already knows, and using complex new forms which carry the most information about what is being described. The need for simplicity has halted the development of our grammar into further complexity, while the need to be informative drives grammar’s evolution. I am curious whether these two opposing forces continue to influence our language development, and what environmental factors might drive either simplicity or informativeness to be selected for.

    Replies
    1. Informativeness is a semantic matter; grammar is syntax.
      There may be "optimality" factors (efficiency, speed, economy) constraining grammars (both OG and UG), but they don't explain UG -- neither why it is what it is, nor why it is unlearnable, nor what its adaptive value is or was, hence how and why it evolved.

    2. You raise an interesting point, Genavieve. I’d say the main factor that decides whether simplicity or informativeness is more important in any given situation is temporality. That is, if a group of humans are generally living in more dire survival situations, simplicity would likely be selected for, since communication must be quick. The recipient of a communication must be able to quickly process the information given, so it must be as simple as possible. If a group of humans has more leisure time, then there would be more time to process information, so informativeness would be more important. This is mostly just logical conjecture, so I probably oversimplified here.

  5. An interesting argument by Chomsky is that language isn’t only used for communication but also for thought. If so, it can’t be, as some psychologists have claimed, that children simply searched for the most optimal way to communicate and thereby acquired language, because the thought function would confer no communicative benefit. Therefore, functionalism about the acquisition of language is refuted, since a counterexample has been found. According to this argument, the acquisition of language happens through a nature-built system that takes in information from the environment. Consequently, the environment being hazardous and complex, grammatical concepts must be hardwired to facilitate language acquisition. My question in that case would be: what does it mean for grammatical concepts/rules to be hardwired? How could they be stored in an organism?

    Replies
    1. Please ask your question again in a kid-sib way, so everyone can understand exactly what you mean (and what you are assuming). Chomsky's point about communication and thought requires that you first understand what UG is, and why it is unlearnable.

      Grammatical rules could be "hard-wired" (i.e., genetically coded) just as any other structural or functional or behavioral trait is coded.

    2. Universal Grammar is an innate rule-based mechanism that helps us acquire language. UG is unlearnable because it does not rely on formal teaching or on a surplus of examples in order to arise in children. Children begin to be able to recognize language and its rules without being explained how it works – it’s sort of paradoxical because kids can’t really be explained this because they don’t yet know how language works, they just begin to pick it up.

    3. OG rules can be learned by the language-learning child through unsupervised learning, supervised learning or instruction (explanation). UG rules can only be learned by language-speaking adults (or adolescents) by instruction (see MIT online courses on UG for undergrads).

    4. Hey Ryan, reading your sky along with my own understanding of this paper, I believe the UG Professor Harnad is referring to here is, again, something whose rules "can only be learned by language-speaking adults (or adolescents) by instruction" – Harnad. In that case I don’t believe your question was well-formed when asking how these mechanisms (grammatical concepts/rules) can be hardwired. To answer your question (from my understanding), humans have certain innate biases that facilitate their language acquisition. Although there is an innate mechanism in place to facilitate language acquisition, the child will still need input from which to extract which rules to follow, by getting feedback.

  6. Only when theories are put in a debate do I fully understand what they say. Especially when reading the works of 'giants' that discuss a theory of large scope (like evolution or language), I feel like everything makes sense, e.g., that language is a tool for 'expression' (p. 15). When expression is contrasted with communication, i.e., there ought to be a receiver, the implication of the statement "language is designed for the expression of thought" was clearer. One piece of supporting evidence that struck me was the supposed superiority of facial expressions over spoken language in 'expressing' internal states.

    Replies
    1. The expressive power of language is orders of magnitude greater than the expressive power of facial expressions, pantomime or any other nonverbal form of communication, and it is all due to the power of subject/predicate propositions, which, like the Strong C/T Thesis, make it possible to say and define and describe and explain just about anything -- as long as it is grounded.

    2. April, to continue your thought about the expressive power of language, a question that came to me after reading this paper was: How would one acquire abstract categories via gestures, and how would this vary from spoken language? Here's what I think! A gesture, like a word, is a symbol, and gestures become more efficient and abstract with time. I believe that, similar to how one learns fundamental words and concepts, which can then be recombined into more sophisticated, abstract ones, gestural communication may be done in the same way; however, it would be more difficult and time consuming, which is why spoken language became more prevalent. Anything that can be stated with words can also be communicated with gestures (just with vastly more difficulty and inefficiency). That is language's fundamental power and definition: the ability to express any proposition you can think of. Propositions began with pointing and pantomime, and as a result, anyone who made a propositional gesture captured the complete potential of language, which is why there was no protolanguage.

    3. As impressive as human abilities for language and especially language acquisition are, I think there are some aspects of our gestural abilities that are quite amazing as well. Take dance for example. In a 3-hour long ballet, an entire, complex narrative can be told without a single word being spoken. Furthermore, these productions hold the power to bring an audience to tears through nothing but gestures, albeit complex and challenging ones. Could this be considered a way to acquire abstract categories through pantomime? Can one learn about heartbreak from Giselle? Wonder from the Nutcracker?

    4. I want to touch on what we spoke about in class, in terms of the evolution from gestural language to verbal language. I wonder whether speaking in metaphors and very abstract ways was influential in this shift. As we encounter new things in the world, and have new experiences, the way we express things inevitably changes. How can expressions, sarcasm, metaphors, jokes or secrets be told with only gestures? It seems that pantomime was specialized to express the literal meaning of things, which, as society and circumstances change, may no longer be enough to describe the entirety of one’s experience.

    5. To respond to Lucie -- you make a really interesting point! I agree with you that a ballet can arouse incredibly strong emotions and express important life lessons, but I do think that this layer of meaning in gestures is not entirely inherent to the performance. I think that part of the reason we feel emotions to such an extent when watching a ballet, or a silent play, is by virtue of the existence of verbal language, and what it has taught us about art, artistic interpretation and emotional expression (among other things).
      This isn’t to say that the same ballet wouldn’t have had an emotional impact on an audience before the advent of verbal language — I just think that we did not have all the tools necessary to understand certain themes and aspects of human existence, and also that this non-verbal expression was all that was known as a way to express oneself. In fact, I would say one of the factors that makes non-verbal artistic performances so emotionally strong nowadays is that during such a performance, we get to experience incredibly powerful emotions without the presence of language, unlike all other aspects of our lives — I think it’s the ubiquity of verbal expression that makes its absence that much more significant.

  7. From what I understood of this article and the attached video review from Prof. Harnad, Pinker and Bloom discuss how language is the result of natural selection, examine why there have been arguments that language is nonselectionist, and finally refute the claims that specialization for grammar is incompatible with Darwinian evolution. As the article mentions at the beginning, these arguments are “boring” because they are relatively uncontroversial. A more controversial topic regarding language is what constitutes language (what it “really” is), how and why it started, what is different about true language (propositionality), and what distinguishes those who have the capacity to use language from those who do not. Therefore, refuting claims that innate grammar specialization cannot be the result of Darwinian evolution does nothing actionable toward explaining how it actually fits within this theory, but it could help narrow the scope of the question.

    Replies
    1. And P & B never even addressed the OG (learnable) vs. UG (unlearnable) distinction.

  8. Chomsky’s analogy between the eye and language is interesting. Nonetheless, it doesn’t work for Universal Grammar (UG), because what makes UG unlearnable for a child is also what makes it difficult to explain how it evolved (unlike the eye). The eye is a structure whose function is seeing, whereas language is the function itself, a behavioural cognitive capacity generated by the brain. While the brain does evolve, that doesn’t explain how or why UG evolved. Unlike the eye, gradualism cannot explain UG’s evolution, and UG’s adaptive advantages aren’t clear either. Therefore, Pinker is missing the hard part by not explaining UG’s evolution.
    Nonetheless, how would a partial UG be useful in genetic selection? How did primitive versions of UG arise? Can we determine when UG, in the form we have now, became fully established over the course of language’s evolution? Pinker and Bloom do not sufficiently elaborate on the evolution of UG. While there are variations in grammatical ability, isn’t UG common to all languages? Regardless of eloquence, sentences can be understood if they follow UG, which we all have. Eloquence is about the workings of ordinary grammar (OG); yet this still doesn’t explain the evolution of UG.

    Replies
    1. I think some parts of the text address a couple of your questions. How a partial UG would be useful in genetic selection, for example, is explained in different sections. One of the arguments was that instead of UG evolving completely gradually, some mutations could have led individuals to have one new rule compared to their parents. That rule could have been useful even for things other than language: as the authors say, symbol manipulation is highly useful in many domains of cognition. Another argument (if I understood correctly) involves learning: even if we say that UG is not useful unless you possess all the rules, learning could allow individuals who genetically have a fraction of the rules to learn the rest. Then, people who have a bigger fraction of those rules genetically encoded would have an easier time learning the remainder, which would give them an edge. This would lead individuals to acquire a bigger and bigger fraction of the rules, until 100% of the rules are genetically encoded.

    2. Partial OGs make sense, but they are irrelevant, because OG -- whether partial or "full" -- is learned (and changes across time).

      Partial UGs don't make sense -- especially if you can't have language at all without UG.

      P&B give no actual examples whatsoever of what a "partial UG" (and its adaptive advantage) would be. They don't even mention the UG/OG distinction.

    3. Hi Lola, I think what you're getting at with a "partial" UG is the fact that a physical attribute, such as an eye, would have to evolve gradually. It is impossible for an individual to have no eyes and for its progeny to have fully developed eyes. Through generations of selection and genetic variation, the eyes we have today would eventually develop. However, UG is different because the "steps" between no UG and "full" UG would not be inherently helpful in an evolutionary sense. We can't have ANY language without UG (as Professor Harnad says above), so an individual whose genetic variability gives them some aspects of UG would not have an evolutionary advantage.

    4. My takeaway from this reading is that language has many features, some of which evolved genetically, in a Darwin-aligned way subject to gradualism, and others of which are learned. From what I gather, it would be wrong to treat language as a singular capacity; hence UG and OG. Therefore, while OG may or may not fall under a Darwinian lens, UG does not need to. Moreover, unlike OG, is UG syntax more fluid, requiring more semantic dependency to make sense? Not entirely sure if this synthesis is correct, but would love to hear/read more opinions on this!

  9. While other species have communication, they do not have language; they do not have a “universal” code. Language enables us to acquire new categories by recombining the names of different categories into subject/predicate propositions with true/false values. Language is not just computational, because unlike mathematics (which is), its syntax cannot be independent of meaning/semantics. Humans are unique in that they have a “universal code” that allows the formation of propositions which can describe anything there is, or any fiction. Language is a special capacity that differentiates us from animals; however, other animals such as great apes and elephants are able to communicate, and seem to have the required intelligence, yet they do not have language, and we cannot teach it to them. Why so?

    Replies
    1. "Language is a special capacity that differentiates us from [other] animals, however other animals such as great apes, and elephants are able to communicate, and seem to have the required intelligence, yet they do not have language, and we cannot teach it to them, why so?"

      Yup, that's the question.

      So what do you think?

    2. We would have to possess some type of innate ability that makes language possible. Is UG simply a characteristic of said mechanism? Or is UG somehow necessary to the creation of any language *that works better at communicating*? -Elyass

    3. I think it is possible that the reason why other animals with the intellectual capacity to develop language do not have language is because they have not had the need for it. We’ve previously talked about how evolutionary processes are lazy and how they provide the means for certain skills but not the skills themselves. So while it might be perfectly possible for them to learn, these animals are doing well enough in their respective environments that they have no need for other means of communication.

    4. In response to Prof. Harnad's question of "Language is a special capacity that differentiates us from [other] animals, however other animals such as great apes, and elephants are able to communicate, and seem to have the required intelligence, yet they do not have language, and we cannot teach it to them, why so?"

      The direct answer would be that other animals don't have UG, and therefore they don't have the ability to learn language. Essentially, UG is the ability that allows us to learn any language. The distal answer would be, as Marisa suggested, that over the course of evolution other animals didn't need to communicate in the form of language, because they were already doing a good job without it, so they didn't need the ability to learn language, which is UG.

    5. For this question, I agree with the above comments, and I think the major difference to consider is the environment in which the species evolved; maybe it coincides with the environmental change when human ancestors started to walk upright. There must be some fundamental differences (like facing, or lacking, certain threat stimuli) between the species that evolved to walk upright and those that did not. In one sense, there is also a 'critical period' at the scale of evolution, and language capacity and UG might be in the same circumstance.

  10. The clearest definition of what grammar might be was given by this paragraph: "Thus grammars for spoken languages must map propositional structures onto a serial channel, minimizing ambiguity in context, under the further constraints that the encoding and decoding be done rapidly, by creatures with limited short-term memories, according to a code that is shared by an entire community of potential communicants."

    While I am unsure if this is the full story, it gives a good look at some of the properties and constraints under which grammar may have emerged.

    Replies
    1. Yup. But why can't all of grammar be learned and learnable OG?

      (The sentence you quoted is the core of P&B's question-begging.)


    2. I think the reason why all of grammar can't be learned, learnable OG is that it would be too much work to learn!

  11. A question that kept creeping up in my head while doing this reading is whether or not we could’ve survived almost as well without such complex languages. Of course, our high quality of life is the result of highly efficient communication, but I sense that our use of language and thinking abilities sometimes hinders us, and that we would’ve been capable of surviving just fine with basic words and non-verbal communication. Inspired by the MinSet that can be extracted from dictionaries, I wonder if there is a minimal set of words that would be enough for us to collectively survive with sedentary lifestyles, without the added complexities of present-day languages. Another related question I have is whether we have reached peak language complexity, and to what extent we could be economical with our vocabulary and grammar. Is language as we know it creating greater cognitive load through its complexity and its ability to facilitate social networks and inventions that are overwhelming for our brains? I am very much interested in how language leads to civilization, and how civilization (as we know it) leads to mental health issues and other problems.

    Replies
    1. I'm not sure what you mean by "complexity." It has a formal computational meaning, and also a more subjective meaning. I think what you are wondering about is more on the subjective side.

      Humans (like other animals) compete to succeed in surviving and reproducing. Both evolution and individual learning are a kind of supervised (reinforcement) learning: Whatever "complexity" means, if genetic traits help survival or reproduction, they will be "selected." Same for anything we learn to do, whether it's to plant better crops or to make more money -- even if it eventually destroys the planet, if the feedback from that is too delayed or weak to make it maladaptive in our life-times.

      But "life could be simpler" is a bit too vague...

    2. I really like the question you have raised here. It is certainly interesting to consider the limits of our language ability, and the possibility that its growing complexity hinders other aspects of cognitive growth. I feel, however, that attributing mental health issues and the decline of society to language complexity may be a bit too presumptuous. Many other factors contribute to these problems, and although cognitive load may be an issue, I believe that as language develops, it would never move past the limit of human capability. Language is already constrained by what humans are able to handle, and probably would not extend to the point of harming the individual. More simply put, we would not produce language that we could not understand. Language will also continue to develop because our survival is pressured by the need to outperform our conspecifics. Regressing to a simpler form of language could not happen, as a prisoner’s dilemma would develop: you cannot trust competitors not to use complex language, so not using it yourself would cause underperformance and reduced fitness relative to your competitors. Perhaps it is not the complexity of language that is detrimental but rather the breadth of information we are subjected to through media in our greater civilization. Nevertheless, I am also intrigued by the question of language development. I’d also like to know whether there is a limit to how far language can progress.

  12. I did not get the time to ask this during our last class, but I would like to make sure that my understanding of “a picture is worth a thousand words” is correct. Of course, everyone perceives the world around them in different ways and uses different words to characterize it, but there is another more “objective” way of understanding this statement. As seen in class, dictionaries can be reduced to a minimal set and, in theory, this could also be done with an entire natural language. When looking at a picture and characterizing it, we can “decompose” the words we use to do so into their definitions. For example, instead of using “apple” to describe an apple in a picture, we could call it a “round red fruit” and continue defining the terms that were just used until we reach words found in the MinSet. This process multiplies exponentially the words that are used to describe a picture, hence why over a thousand words can be derived from a single picture.
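    The "decompose words into their definitions until you reach the MinSet" process can be sketched in a few lines. The mini-dictionary and grounded seed set below are invented for illustration; they are not an actual MinSet extracted from a real dictionary:

```python
# Toy illustration of the MinSet idea: starting from a small grounded
# vocabulary, a word becomes definable once every word in its definition
# is already available. The four-entry dictionary here is hypothetical.
DEFS = {
    "apple": ["round", "red", "fruit"],
    "fruit": ["plant", "part"],
    "round": ["shape"],
    "red":   ["color"],
}
GROUNDED = {"plant", "part", "shape", "color"}  # hypothetical seed "MinSet"

def definable_closure(defs, grounded):
    """Return every word definable from the grounded seed set."""
    known = set(grounded)
    changed = True
    while changed:
        changed = False
        for word, body in defs.items():
            if word not in known and all(w in known for w in body):
                known.add(word)
                changed = True
    return known
```

    Expanding "apple" into "round red fruit", then expanding those in turn, is just this closure run in reverse, which is why the word count multiplies so quickly.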

    Replies
    1. You're right, but it's simpler than that.

      Every "thing" (whether picture, object, person, place, event, state, or [recursively] "feature") has an infinity of potential "features." Whether detected directly by sensorimotor learning or transmitted indirectly by (grounded) verbal instruction, the features will only be approximate. There will always be more potential features to detect or describe, in order to inform (i.e., reduce uncertainty among the [finite] options that matter [for success, survival, reproduction] at the time).

      So there is no single, definitive, once-and-for-all direct sensorimotor feature-detector or indirect verbal feature description. Tomorrow's uncertainty may always depend on features you've left out...
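      The "informing = reducing uncertainty among finite options" point can be put in toy information-theoretic terms. The function names below are my own, and the "maximally informative feature" assumption is the idealized best case:

```python
from math import log2

# Picking one category out of N equally likely options carries log2(N)
# bits of uncertainty; each binary feature you detect (or are told
# about) can at best halve the remaining options.
def bits_of_uncertainty(n_options: int) -> float:
    return log2(n_options)

def options_left(n_options: int, n_binary_features: int) -> float:
    # Idealized best case: every feature is maximally informative.
    return n_options / (2 ** n_binary_features)
```

      Real features are rarely this informative, which is one way of restating the point above: there is always residual uncertainty that more features could reduce.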

    2. PS Don't be seduced by what seems to make sense, whether it's the skyreading that says it or me (the local pygmy): Find the flaws, keep skywriting, and cognizing...

  13. "In sum, primitive humans lived in a world in which language was woven into the intrigues of politics,
    economics, technology, family, sex, and friendship and that played key roles in individual reproductive
    success. They could no more live with a Me-Tarzan-you-Jane level of grammar than we could"


    I think this quote brings up an important point about the complexity of language in our ancestors. We tend to believe that our language systems are much more complex than our ancestors', but it does not seem that simple. Professor Harnad said something similar in class when he talked about how modern English is not more complex than earlier English.

    I wonder if we can relate those ideas to the UG/OG distinction, where the language itself (OG) is merely a matter of semantics, while the real limit on what we can say is in UG.

    In other words, a real object is continuous; any description of it (in any language and of any length) will never fully contain as much information as the object, because the description is discrete in nature. From what I understand, UG is a universal innate system that mediates the linguistic description of an object. That set of rules not only dictates the language system but also describes its limits. (I don't know if what I am saying makes any sense; I am still trying to put everything together.) -elyass

  14. So to be clear, Baldwinian evolution is different from Darwinian evolution as it is the transition from a construct or behaviour being learned entirely from the beginning by every individual member of a species to gradually becoming instinctual in a species. Darwinian evolution then dictates that natural selection favours members of a species that have this genetic predisposition to quickly, accurately and reliably develop such innate behaviours. Does that sound right? I’m pretty sure that when we consider language from this framework, there are components of language that people once had to learn from scratch, that eventually became innate. Therein lies the poverty of the stimulus, when infants can hear discrete categories of speech sounds despite their continuous nature. They can do without very much environmental input which has led to the theory that this ability is inborn.

    Replies
    1. Let's begin by re-phrasing in a more kid-sibly way: Baldwinian evolution is when adaptive behaviors that once needed, as you said, "to be learned from scratch" turn (through natural selection of the best learners, i.e., Darwinian evolution) into a genetic, and therefore innate, predisposition to learn. (Learning + natural selection -- the appearance of a new predisposition to learn.)

      In the case of language these principles would apply to a) the predisposition to phonetic learning (innate phonetic perception ability) and b) Universal Grammar (UG), which suggests that we are born with a set of structural rules that allow us to learn grammar of any given language.

      Most IMPORTANTLY: Poverty of the stimulus does NOT refer to phonetic learning, but to grammar learning. It postulates that grammar cannot be merely learned, because we face only POSITIVE evidence (only sentences that make grammatical sense) and no NEGATIVE evidence. As we mentioned in previous classes, to learn any category we need examples of members (positive evidence) and non-members (negative evidence). According to this idea, environmental data are insufficient to learn grammar, therefore there must be an innate component to it (UG).
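      The "learning + natural selection → innate predisposition" dynamic can be made concrete with a toy simulation, loosely in the style of Hinton and Nowlan's (1987) classic Baldwin-effect model. All the parameters, the fitness bonus, and the guessing-based "learning" are illustrative assumptions, not anything from Pinker & Bloom:

```python
import random

def baldwin_sim(pop_size=200, n_loci=10, trials=50, generations=100, seed=0):
    """Toy Baldwin-effect simulation (after Hinton & Nowlan, 1987)."""
    rng = random.Random(seed)

    # Each locus is '1' (innately correct), '0' (innately wrong),
    # or '?' (settable by lifetime learning); half start learnable.
    def new_genome():
        return [rng.choice(['1', '0', '?', '?']) for _ in range(n_loci)]

    def fitness(genome):
        if '0' in genome:            # a wrong innate allele can't be fixed by learning
            return 1.0
        unknown = genome.count('?')
        # Chance of guessing all learnable loci right at least once in `trials` tries.
        p_learn = 1.0 - (1.0 - 0.5 ** unknown) ** trials
        return 1.0 + 19.0 * p_learn  # successful learning confers a big fitness bonus

    pop = [new_genome() for _ in range(pop_size)]
    history = []  # fraction of innately correct alleles, per generation
    for _ in range(generations):
        weights = [fitness(g) for g in pop]
        parents = rng.choices(pop, weights=weights, k=2 * pop_size)
        pop = []
        for i in range(pop_size):
            a, b = parents[2 * i], parents[2 * i + 1]
            cut = rng.randrange(n_loci)
            pop.append(a[:cut] + b[cut:])  # one-point crossover, no mutation
        history.append(sum(g.count('1') for g in pop) / (pop_size * n_loci))
    return history
```

      With these parameters the fraction of innately fixed correct alleles climbs over the generations: selecting the best learners gradually moves the learned settings into the genome, which is the Baldwinian story for OG-like, learnable rules. It does nothing to explain UG, which (per the poverty of the stimulus) was never learnable in the first place.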

  15. To summarize my understanding of this article, it is stating that the human ability for grammar evolved by a neo-Darwinian process. This means that a combination of genetics and natural selection were involved in this evolutionary process. Since this is an evolutionary process, this must mean that grammatical competence had a reproductive advantage, and was therefore selected for and passed on through the generations. Being more competent in grammar would have been caused by a genetic mutation or recombination and these mutations must have happened a very long time ago for grammar to be universal among all cultures and languages. While explanations were provided for what kind of evolutionary or reproductive advantage grammar could have, I find myself still confused by this part. Particularly, I am a bit confused as to how grammatical competence would have been advantageous at the very start, as only the person with the mutation would understand the language and others would be unable to understand them. The advantages would also have to be quite large for specific grammar competence to become so widespread that it is universal in all humans.

    Replies
    1. I understand your confusion regarding the first grammatically competent person (it seems a rather insignificant trait from an adaptive point of view), but I think that you might be seeing this problem as a sort of chicken-and-egg situation between Universal Grammar and grammatical competence. I'm pretty sure that there wasn't a mutation that allowed a first person to "understand language", because how would other people be producing this language if they weren't understanding it? I think it's more useful to think of language as a really slow, incremental process where everyone is always "communicatively competent" (where everyone always understands everyone else to the same extent). Over time, communication likely grew more complex, maybe from pantomime to verbal exchanges, as mentioned by Prof. Harnad, with neuroanatomical structures also evolving in parallel to support it. In a nutshell, no individual member was "grammatically competent" first; rather, everyone was always competent, and the structures that support Universal Grammar evolved slowly enough that individual differences across a few generations were never significant enough for anyone to be "grammatically incompetent" at any given time.

  16. Gould and Lewontin’s argument against adaptationist explanations urged me to think again about my previous conception of how humans evolved to have such distinct cognitive faculties compared with other animals. My idea that this could be explained by our uniquely slow growth process, or by an intuitive desire to resolve consistent problems in our environment, was first called into question by Chomsky’s argument that attributing the development of innate mental structures to evolution is an insubstantial assertion, as it merely suggests that there is a naturalistic explanation for the phenomenon in question. Although Gould and Lewontin’s ideas about the several nonadaptationist mechanisms that could explain evolution were compelling, I agree that the idea that nonselectionist mechanisms such as genetic drift could give rise to such complex organs is almost implausible, as the odds, statistically, are so low. However, I don’t entirely understand the notion that ‘evolution is the only physical process in which the criterion of being good at seeing can play a causal role’. Doesn’t this imply that our need to be good at seeing caused the eyes to evolve? In that case, wouldn’t our need to be good at seeing also drive certain nonselectionist mechanisms, such as the general law of growth and form, because the eye needs to develop to a point that is optimal for its complex functioning?

    Replies
    1. Adebimpe, you pose a good question when you analyze the “the notion that ‘evolution is the only physical process in which the criterion of being good at seeing can play a causal role’. Doesn’t this imply that our need to be good at seeing caused the eyes to evolve?”.
      From my understanding, “evolution is the only process in which the criterion of being good at seeing can play a causal role” relates to the fact that being good at seeing would be an extremely beneficial survival mechanism, leading to reproduction and the continuation of those genes. So basically, being good at seeing leads to an evolutionary advantage, and thus the genes for good sight would be passed on to future generations, affording them good eyesight as well.
      However, if the ancestral environment were to suddenly become pitch-black, the advantage of good eye sight would no longer be an advantage — perhaps good hearing would instead be advantageous. So, those with good eyesight would not be fit to survive in a non-seeing world and their genes would not be passed on. Instead, those carrying the good-hearing genes would now be the most likely to reproduce and cause good hearing to evolve.

  17. I found the case of unmodified spandrels to be quite interesting (and funny). A wading bird using its wings to block reflections on the water instead of for flight is a funny image. An extremely complex structure that took an incredibly long time to evolve (wings) turned into visors in what was likely a much shorter time frame. Quite a ridiculous process, but also, if I understand correctly, a prime example of lazy evolution. Evolution took the first possible adaptation to the problem of reflections blinding the birds as they fished—using its wings to block the bright light—because it was likely the first solution to randomly mutate. Instead of a complex adaptation in the birds’ eyes that would better handle such reflections, the simplest and first solution evolved, because the ocular adaptation would likely have required a series of random mutations. Even though it was likely not the most efficient solution, it worked, so it stuck.

  18. The questions which have been raised so far in this discussion have provoked an interesting kind of ultimatum. If we take it to be true that UG is some kind of Baldwinian evolved capacity, that appeared, not progressively, but rather suddenly, in the evolution of humans, there are some important stakes at hand. If the capacity to learn any OG is subsumed into the innate capacity for language (UG), and this innate capacity cannot be learned (due to poverty of stimulus), this rules out the possibility that any robot (short of it having the right genetic makeup, an idea which I have trouble wrapping my head around), could actually develop the capacity for language. Like the abovementioned GPT-3, a robot could produce an infinite number of UG-compliant sentences, but given that it does not have the built-in/evolved capacity of UG, it could not be said to have the capacity for language in the same way that humans do. Can you pack hundreds of thousands of years of evolutionary history into a computer program? Especially given the uncertainty as to the origins/starting point of propositionality, and therefore language, how would it even be possible to instill/install this kind of inherited/inborn knowledge into a robot?

  19. It is very interesting to consider the debate of whether language is a direct benefit of evolution, or a by-product of other evolutionary developments such as "an increase in overall brain size and constraints of as-yet unknown laws of
    structure and growth". The main argument from complex design does weigh heavily -- that is, the complexity of language systems would have to be a naturally selected ability -- but I wonder then how different environments could lead to different characteristics in language. For example, the Khoisan languages of some African tribes use unconventional clicks as a large component of their language and communication, and that makes me wonder what was so special about their environment that would lead natural selection to emphasize the clicking language, despite this being very unusual in most other languages. I suppose the big-picture idea of language itself must be naturally selected, but the nuances of each language are highly dependent on chance historical development by smaller populations.

    Replies
    1. Could you consider clicking a specialization due to environment? What I took from the specialization argument about language is that although languages develop based on culture, the underlying grammatical rules remain the same, and the development of language in infants is the same across cultures.

  20. From what I understand, Pinker & Bloom underestimated the mechanisms involved in language by providing an oversimplification of it. They failed to consider UG as a component of language. UG is considered innate to children, who learn language from the moment they are born. OG seems to be everything else the child learns that is not innate. They might make mistakes in OG and not necessarily correct themselves, but with UG, they do correct themselves (unsupervised learning). It seems that ‘lazy’ evolution acted on human language just enough to allow some basic communication skills, and left the rest to the language learning capacity, from which OG emerges.

  21. From my understanding, Pinker's discussion on language could be seen as a model for looking at the interaction between biology and culture elsewhere. Pinker argues that language is built onto certain fundamental inherited assumptions about the world. Namely, what biology gives us is the ability to learn certain ways to pay attention to certain aspects of the environment and analyze the world into certain categories. From my understanding, one significance of language is what combinatorics allow us to do, which is that we could take a fixed number of ideas that we have words for, then combine them in phrases and sentences. The meaning of the sentences can be computed from the meanings of words and the way they are arranged. Since the number of messages grows geometrically with the length of a sentence, enormous numbers of thoughts could be expressed even with a finite set of tools, a manipulating symbol system, as long as the tools allow for combining the nouns and verbs into strings.
    One question that Prof. Harnad raised in the comments: Is GPT (google GPT-3) learning UG through unsupervised learning by being exposed to enormous volumes of text input? Given the example "John is easy to please Mary", my kidsib told me that one way he could think of to write a program that prints sentences with these six words in six positions, without violating UG or any other rules of English, is to enumerate all the orderings that could be generated from these six words and compare them with examples of how humans use these words. I think this idea aligns with 'learning UG through unsupervised learning by being exposed to enormous volumes of text input.' However, according to this article, learning a particular language is the process of analyzing enormous volumes of language stimuli into units that can be recombined to express new ideas. To simulate language acquisition in a mathematical model or a computer program, I think we must build in some assumptions about the units, which cue our 'machine' to look for the patterns worth paying attention to.
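    The kidsib's brute-force idea can be sketched in a few lines. The ATTESTED set below is a hypothetical stand-in for "examples of how humans use these words"; a real attempt would need an enormous corpus, and the two strings chosen are purely illustrative, not judgments of grammaticality:

```python
from itertools import permutations

# The six words from the example above.
WORDS = ["John", "is", "easy", "to", "please", "Mary"]

# Hypothetical stand-in for a corpus of attested usage.
ATTESTED = {
    "John is easy to please Mary",
    "Mary is easy to please John",
}

def brute_force_grammatical():
    # Enumerate all 6! = 720 orderings, keep only those seen in the "corpus".
    return [s for s in (" ".join(p) for p in permutations(WORDS))
            if s in ATTESTED]
```

    The sketch also shows why this can't be the whole story: such a matcher can only accept orderings it has already seen, and never generalizes to novel sentences, which is exactly the poverty-of-the-stimulus worry about learning grammar from positive evidence alone.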

  22. When discussing the nonselectionist mechanisms of evolution, the authors mention Gould and Lewontin’s “naïve adaptationism, the inappropriate use of adaptive theorizing to explain traits that have emerged for other reasons.”
    This notion is slightly confusing to me. How can you claim that something is wrong? The examples given in the following explanation seem to be based on a subjective decision and a lack of knowledge, the architectural dome being a prime example of how a lack of knowledge could lead to the wrong conclusion about its conception. The authors give no explanation as to how something should be identified as the wrong conclusion or the right one; they just say it is wrong.

  23. This paper by Pinker and Bloom states that the characteristics of language make it meet the criteria for being attributed to natural selection. However, I agree that it proposes an oversimplified explanation of the fact that linguistic capacity is partially inherited/inborn (UG) and partially learned (OG).

    My first impression and question here concern how to define language: can only language with systematic grammar be regarded as language, or can other methods of communication be regarded as language too? Broadly speaking, if we include how animals communicate via sounds, I’m convinced that language evolved as a by-product of natural selection. However, from a cognitive scientist’s view, language capacity is different from other behavioural capacities: it is the inherited universal grammar that provides the possibility of language development in the first place -- that’s why the critical period exists. For the learned aspect of language, I’m thinking about the diversity of languages and the different uses of language that can reflect people’s mindsets (which I discussed in 6b.), though I wasn’t sure whether mindsets for producing language could have an evolutionary benefit.

  24. What I found fascinating is that the relation of OG and UG somehow resonates with the relation of learned CP and innate CP: due to the laziness of evolution, we are born with only the necessities, viz. UG for the language capacity, innate CP for symbol grounding, etc. With this least number of inborn necessities, we are able to develop the whole capacities.

  25. I am not sure if I fully understand this paper. In my understanding, the aim of this paper is to defend the view that language has been shaped by natural selection -- that the evolution of language can be explained by Darwinian evolutionary theory. The authors argue that the criteria for attributing a certain trait to natural selection are that it shows “complex design for some function, and the absence of alternative processes capable of explaining such complexity.” One of their main pieces of evidence is grammar, which shows complex design for a function, meaning that it is best explained by evolutionary theory. Grammar is an innate function rather than a learned ability.
    One of the difficulties of their argument, which we discussed in class, is how to evolutionarily explain Chomsky's universal grammar. Universal grammar cannot be learned because of the poverty-of-the-stimulus argument -- infants do not have enough negative evidence to learn all the complexity of grammatical structure. Baldwinian evolution happens when adaptive behaviors are learned and then transformed, through natural selection of the best learners, into a genetic predisposition to learn. But if universal grammar cannot be learned, then we cannot use Baldwinian evolution to explain the evolution of universal grammar.

