Shifting sands and bendy bedrock

Chomsky offers a theory of language and of language learning. The theory claims that all human languages share an underlying grammar and that human beings are born with a knowledge of this grammar, which partly explains how they learn their first language(s) as young children. Criticism of Chomsky’s theory is mounting, as evidenced by a recent article in Scientific American which claims that “evidence rebuts Chomsky’s theory of language learning”. Here, I question that claim.

First, the Scientific American article doesn’t give any evidence to “rebut” Chomsky’s theory: it talks about counter-evidence, but it doesn’t actually present any. The real thrust of the current popular arguments against Chomsky’s theory has nothing to do with its ability to stand up to empirical challenges. Arguments against the theory are based on

  1. the weaknesses in Chomsky’s theory in terms of its reasoning and its falsifiability,
  2. the claim that no recourse to innate knowledge, specifically to a Language Acquisition Device, is necessary, because language learning can be explained by a general learning theory.

As to the first point, I refer you to Sampson and Bates, the latter making a particularly eloquent case. You might also look at my discussion of Chomsky’s theory itself. There are, I think, serious weaknesses in Chomsky’s theory. To summarise: it moves the goalposts and it uses ad hoc hypotheses to deflect criticism.

As to the second point, no theory to date has provided an answer to the poverty of the stimulus argument which informs Chomsky’s theory. No attempt to show that usage can explain what children know about language has so far succeeded – none. Theories range from what I personally see as the daft (e.g. Larsen-Freeman and Cameron), through the unlikely (e.g. Bates and MacWhinney), to the attractive (e.g. O’Grady and Rastelli).

As Gregg (1993) makes clear, a theory of language learning has to give a description of what is learned and an explanation of how it’s learned. UG theory operates in a deliberately limited domain. It’s a “property theory” of a set of constraints on possible grammars; it bears a causal relation to L1 acquisition through a “transition theory”, which connects UG with an acquisition mechanism that acts on the input in such a way as to lead to the formation of a grammar. Describing that grammar is the real goal of Chomsky’s work. In successive attempts at such a description, those working within a Chomskyan framework have made enormous progress in understanding language and in helping those working in various fields, IT, for example. Chomsky roots his work in a realist, rational epistemology and in a scientific method which relies on logic and on empirical research.

Any rival theory of language learning must state its domain, give its own property theory (its own account of what language is), and its own transition theory to explain how the language described is learned. You can take Halliday’s or Hoey’s description of language, or anybody’s you choose, and you can then look at the transition theories that go with them. When you do so, you should not, I suggest, be persuaded by silly appeals to chaos theory, or by appeals to the sort of emergentism peddled by Larsen-Freeman, or by circular appeals to “priming”. And you should look closely at the claim that children detect absolute frequencies, probabilistic patterns, and co-occurrences of items in the linguistic environment, and use the resulting information to bootstrap their way into their L1. It’s a strong claim, and there’s interesting work going on around it, but to date, there’s very little reason to think that it explains what children know about language or how they got that knowledge.
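To make concrete what that last claim involves, here is a toy sketch in Python (the mini-corpus and the helper function are invented purely for illustration; this stands in for no particular researcher’s model) of the kind of distributional statistics that usage-based accounts appeal to: absolute frequencies, co-occurrences, and transitional probabilities.

```python
from collections import Counter

# Toy sketch only: the sort of distributional statistics usage-based
# accounts appeal to, computed over an invented mini-corpus of
# child-directed speech. Not any researcher's actual model.

utterances = [
    "where is the ball",
    "the ball is red",
    "is that the dog",
    "the dog is sleeping",
]

unigrams = Counter()   # absolute frequencies of words
bigrams = Counter()    # co-occurrences of adjacent words

for utterance in utterances:
    words = utterance.split()
    unigrams.update(words)
    bigrams.update(zip(words, words[1:]))

def transitional_probability(w1, w2):
    """P(w2 | w1): how often w1 is followed by w2, relative to all uses of w1."""
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

print(unigrams.most_common(3))                   # the three highest-frequency words
print(bigrams[("the", "ball")])                  # co-occurrence count: 2
print(transitional_probability("the", "ball"))   # probabilistic pattern: 0.5
```

The point at issue, of course, is whether statistics like these can, on their own, take a child from the input to the grammar they end up with; the poverty of the stimulus argument says they can’t.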

To say that Chomsky’s theory is dead and that a new “paradigm” has emerged is what one might expect from a journalist. To accept it as fact is to believe what you read in the press.

The post on Scientific American’s article on Chomsky prompted suggestions for further reading. Here’s a summary.

Kevin Gregg recommends Evans, N. and Levinson, S.C. (2009). The myth of language universals: language diversity and its importance for cognitive science. Behavioral & Brain Sciences, 32, 429-492.

This is an excellent article. The target article makes an argument that I don’t think stacks up (more importantly, neither does Gregg), but it’s well presented and it’s followed by “Open Peer Commentary”, in which a very wide selection of scholars, including Baker, Bevett, Christiansen and Chater, Croft, Adele Goldberg, Harbour, Nevins, and Pinker & Jackendoff, respond. Very highly recommended.

Scott Thornbury recommends Christiansen and Chater (2016) Creating Language: Integrating Evolution, Acquisition and Processing. MIT Press. Scott makes a refreshing confession that he didn’t finish reading Everett’s awful book Don’t Sleep, There Are Snakes, which inspired his post “P is for Poverty of the Stimulus”. The post sparked a lively discussion, where Scott showed signs of a less than complete grasp of UG theory, so it’s good to see him recant here on his previous enthusiastic endorsement of Everett. Among other daft stuff, Everett claims that the Pirahã language refutes Chomsky’s claim that recursion is a universal characteristic of natural languages, which it doesn’t. Anyway, the book Scott recommends looks interesting, and, judging from reviews, follows what we’ve come to expect from Christiansen and his colleagues. At the risk of sounding condescending, it’s good to see Scott moving on from Everett and from the equally unscholarly nonsense found in Larsen-Freeman and Cameron’s attempts to promote emergentism, to a more sophisticated view.

Talk of Everett brings us nicely to Ruslana Westerlund, who urges those “with an open mind and more importantly, critical mind” (sic) to read an article in Harper’s Magazine, “The Origins of Speech”, which reports on Tom Wolfe’s book on Chomsky. The article is what one might expect from something in Harper’s – it’s rubbish, and it only confirms one’s suspicion that Wolfe has nothing much to contribute to any critical debate about Chomsky’s UG theory. Wolfe apparently says that Chomsky is a nerd, and a nasty person to boot, while his hero Everett is a macho man, i.e., in Wolfe’s scheme of things, a good and proper man. Wolfe thinks that Everett’s ability to pose for a photo up to his neck in dangerous waters while one of the Pirahã tribe looks on from his boat is evidence to support Everett’s theory of language learning. I don’t really get Westerlund’s insistence that only those with open and critical minds will appreciate the Harper’s piece; I reckon that only those lacking both will be impressed.

Phil Chappell, a valued contributor to this blog, suggests we look at a blog post by a mother with a Ph.D. in linguistics who says that her relationship with her baby proves Chomsky wrong. More rubbish. The Ph.D.-enriched mum confuses Chomsky’s treatment of linguistic competence, a carefully defined construct in a deliberately restricted domain, with a baby’s need to interact lovingly with its mother.

Phil also suggests that we read Lee, N., Mikesell, L., Joaquin, A. D. L., Mates, A. W., & Schumann, J. H. (2009). The interactional instinct: The evolution and acquisition of language. Oxford University Press. I’ve read this, well, sort of, and I think it’s terrible. To quote the promotional blurb: “Language acquisition is seen as an emotionally driven process relying on innately specified ‘interactional instinct.’ This genetically-based tendency provides neural structures that entrain children acquiring their native language to the faces, voices, and body movements of conspecific caregivers”. I don’t know if Phil goes along with this mumbo jumbo, and I hope he’ll comment.

Robert Taylor says “Here’s some interesting research about sounds for common ideas being the same across languages (roughly 2/3rds)”. I’m not sure what to make of it, but maybe it’s grist for the mill.

Finally, I recommend an article from the Stanford Encyclopedia of Philosophy, a website that I love and that I visit almost as often as I visit VinoOnLine. The article is called Innateness and Language, and I think it gives a good review of the stuff we’re talking about. I particularly like its discussion of the Popperian view versus the “inference to the best explanation” view (best articulated, I think, by the ever so wonderful Ian Hacking).

Gregg, K. R. (1993). Taking explanation seriously; or, let a couple of flowers bloom. Applied Linguistics, 14(3), 276-294.

25 thoughts on “Shifting sands and bendy bedrock”

  1. Hi, Geoff,
    About an hour after I read your comment on that Scientific American article, William O’Grady sent me (unasked, I assure you) a copy of it. His subject heading was “This will infuriate you”. I replied that it wouldn’t infuriate me if I didn’t bother to read it, and seeing the title and the name ‘Tomasello’ as the co-author was enough to tell me I needn’t bother. But I did skim through it, enough to see how appallingly bad it was. One doesn’t have the highest expectations of Scientific American, but this!
    If you’re interested in an argument against Universal Grammar–indeed, against linguistic universals–that has evidence, at least, you might want to cast an eye over Evans & Levinson’s BBS article, with commentaries. I think they’re wrong, mind you, but they do go far beyond mere say-so. (One of these days I have to look up your stuff on Sampson and Bates to see what you could possibly have to say in their favor, but next week classes begin.)
    N. Evans & S.C. Levinson. 2009. The myth of language universals: language diversity and its importance for cognitive science. Behavioral & Brain Sciences 32:429-492.

    • Thanks for the reference, Kevin. I don’t suppose you’ll bother to say anything about Sampson, who does a perfectly good job of burying himself, but maybe Bates is worth a few words. And Rastelli?

      • Well, I just looked at your thing on Bates, and I can’t see what you’re enthusiastic about. What you quote of her is pretty much empty of content. Of COURSE (sorry; can’t figure out how to do italics here) it might be the case that some characteristic is universal but not innate; to say that is to say nothing, until you get down to concrete examples, which linguists have done. To say that parameter theory is unfalsifiable is to ignore the research (Janet Fodor comes to mind, or Charles Yang) on what sort of natural input could lead to parameter setting. Got to catch a train; sorry.

      • This is a PS to my comment on Bates. You quote her as follows:
        Bates goes on to say that when the nativists point to the “long list of detailed and idiosyncratic properties” described by UG, and ask how these could possibly have been learned, this begs the question of whether UG is a correct description of the human language faculty. Bates paraphrases their argument as follows:

        English has property P.
        UG describes this property of English with Construct P’.
        Children who are exposed to English eventually display the ability to comprehend and produce English sentences containing property P.
        Therefore English children can be said to know Construct P’.

        Bates comments:

        There is, of course, another possibility: Children derive Property P from the input, and Construct P’ has nothing to do with it. (Bates, 2000: 6)

        Can you think of an example of a property of English claimed to be an effect of UG that Bates has shown to be derivable from input?

      • Hi, is the work by Reali and Christiansen (2005) on bigram and trigram statistical models of auxiliary inversion in yes/no questions an example?
        Reali, F., & Christiansen, M. (2005). Uncovering the richness of the stimulus: Structure dependence and indirect statistical evidence. Cognitive Science, 29, 1007-1028.

  2. Thanks as usual for your provocative thoughts, Geoff. A colleague recently wrote about the schism that you talk about, and the lack of argument against the poverty of stimulus myth that still prevails. I hope you might be a little more open than the previous respondent who rejected an article based purely on the author’s name, and have a read of this. I look forward to looking up the references you mention to further my understandings of the schism (I hesitate to call it a debate as there doesn’t seem to be much willingness amongst folks to make the effort to see the other side’s point of view). https://theconversation.com/how-a-phd-in-linguistics-prepared-me-for-motherhood-39499

    Regards

    Phil Chappell

    • Hi Phil,

      Thanks for the reference. I’ll read it today!

      It’s interesting that you call the poverty of the stimulus argument a “myth”. Obviously, you mean that the argument is not true, rather than that it doesn’t exist! I notice that in a Tweet you ask “Is Chomsky’s poverty of stimulus idea able to be refuted?” At least this is better than Thornbury’s absurd demand, made in his discussion of the poverty of the stimulus argument (https://scottthornbury.wordpress.com/2015/06/07/p-is-for-poverty-of-the-stimulus/), and which I think he got from Everett’s terrible book, that Chomsky must prove that aspects of grammar that children know about could not have been acquired from language input! You, reasonably, demand that the hypothesis “We are born with knowledge of aspects of grammar” (because we didn’t get it from the input) must, in principle, be falsifiable. But it is – in principle at least – as thousands of studies of grammaticality judgements and of other aspects of linguistic competence show. The problem is moving the goalposts and saying, when a counter-example is given, that it’s not part of the grammar in question.

      • The poverty of the stimulus argument threatens to turn into the poverty of the argument stimulus! For a very recent take on this, I recommend not Everett (which I never finished) but Christiansen and Chater, 2016. Creating Language: Integrating Evolution, Acquisition and Processing. MIT. Spoiler alert (for those who can’t stomach Tomasello): there’s a whole chapter that explains how recursion could be a usage-based skill. Occam, pass the razor.

      • Hi Scott,

        Good point! 🙂

        Thanks for the recommended reading – and the warning. Always good to keep that razor handy.

  3. “Would those with closed minds please sit in the back and turn off their Smart phones. Questions only at the end from open and critical minds, please.”

    T

  4. In the spirit of offering up additional readings, I recommend Lee et al., The Interactional Instinct: The Evolution and Acquisition of Language. Chapter 1, “Grammar as a complex adaptive system”, sets the scene for the rest of the book.

  5. I just read Lukin’s piece, cited by philchappell above. Since she provides no references, I can’t trace the Pinker quote, but in any case his putative advice to parents of infants is beside the point (nor is Pinker Chomsky’s “disciple”). On the one hand, no one in his right mind denies the need for linguistic input for language acquisition. On the other, parent-child interaction, eye contact, and so on are presumably important for the child’s emotional and cognitive development. The question is–assuming that a person comes to have a grammar of his native language (and one could deny that assumption)–how does this happen? Children raised by loving, voluble parents wind up with essentially the same grammar as children raised by self-absorbed Calvin Coolidges. Differences in vocabulary size, perhaps; in grammar, no. There are societies (Samoa supposedly is one, the working-class communities investigated by Shirley Brice Heath others) where infants are seen as in Pinker’s quote: as not possible conversational partners. The infant grows up a fluent native speaker. Lukin puts “degenerate” (a word I haven’t seen in the linguistic literature for years) and “impoverished” in scare quotes, but impoverished is precisely what the input is, including Lukin’s to her baby; it is clearly–clearly–not enough to bring about a grammar in the mind of a child. That’s what ‘poverty of the stimulus’ means. The ‘usage-based’ and other empiricist approaches touted by Tomasello and others have yet to address the countless examples of POS that have been offered by generative and other linguists.
    One basic flaw in the attempts to argue for a language-learning capacity without something like a Universal Grammar is their failure to deal with the problem of non-occurrence, the so-called No Negative Evidence problem. How does the child learn that X is NOT possible? (An example from outside language: you can watch a lot of chess games, and perhaps learn what moves are possible. How do you learn that castling can only be done once?) How do we know that we can’t say e.g. “She donated the library her money”? Or ‘the afraid boy’? For the latter, you might want to look at a recent paper by Charles Yang, “Negative knowledge from positive evidence” (Language 91: 938-953, 2015).
    Phil, could you give us the complete citation for the Lee thing?

    • Apologies for not supplying the full reference to the Lee et al. book.

      Lee, N., Mikesell, L., Joaquin, A. D. L., Mates, A. W., & Schumann, J. H. (2009). The interactional instinct: The evolution and acquisition of language. Oxford University Press

      • Thanks very much, Phil. This is a very different perspective, one that sees language as “a cultural artefact that emerges as a complex adaptive system from the verbal interaction among humans.” Comments on this post are generating an interesting reading list.

    • This is fascinating. Even if the claim is true, though, the relationship between the meaning of a word and its sound is still ‘arbitrary’. That there may be some universal (psycho)biological constraints on the form that a word for some particular thing may take does not, it seems to me, change that.

      • Well, the relation between sound and meaning, interlanguage, may be arbitrary, but the research, or rather the results thereof, do suggest a single, common point of origin for all languages. And it hints at language being an internal mechanism; language, perhaps, is something that finds its roots in the mind (craply worded, I know), rather than from any external source.
