* Chomsky


Before Chomsky, researchers saw their job almost exclusively as the collection of data. All languages were seen to be composed of a set of meaningful sentences, each composed of a set of words, each of which was in turn composed of phonemes and morphemes. Each language also had a grammar which determined the ways in which words could be correctly combined to form sentences, and how the sentences were to be understood and pronounced. It was held that the best way to understand the over 2,500 languages said to exist was to collect and sort data about them so that eventually the patterns characterising the grammar of each language would emerge, and that then, interesting differences among different languages, and even groups of languages, might also emerge.

Chomsky’s revolutionary argument, begun in Syntactic Structures (1957) and subsequently developed in Aspects of the Theory of Syntax (1965) and Knowledge of Language (1986), was that all human beings are born with an innate grammar – a fixed set of mental rules that enables children to create and utter sentences they have never heard before. Chomsky asserted that language learning was a uniquely human capacity, a result of Homo sapiens’s possession of what Chomsky referred to as a Language Acquisition Device. Chomsky eventually claimed that language consists of a set of abstract principles that characterise the core grammars of all natural languages, and that the task of learning one’s L1 is thus simplified, since one has an innate mechanism that constrains possible grammar formation. Children do not have to learn those features of the particular language to which they are exposed that are universal, because they know them already. The job of the linguist was to describe this generative, or universal, grammar as rigorously as possible.

It is important to emphasise that Chomsky’s theory has gone through various quite radical changes since 1957 (see Cook and Newson, 1996: 41). Perhaps the most stable part of Chomsky’s theory is expressed in his “Principles and Parameters” model, outlined in the early 1980s, and this will be discussed below.

The arguments for Universal Grammar (UG) start with the poverty of the stimulus argument, often referred to as the “logical problem of language learning”: children learning their first language cannot induce the rules of grammar from the input they receive; the knowledge of language which they manifest cannot be explained by appealing to the language they are exposed to. On the basis of degenerate input, children produce language which is far more complex and rule-based than could be expected, and which is very similar to that of other adult native speakers of the same language variety, at an age when they have difficulty grasping abstract concepts. That their production is rule-based, and not mere imitation as the behaviourist view held, is shown by the fact that they frequently invent well-formed utterances of their own. That they have an innate capacity to discern well-formed utterances is shown by a number of different studies – for example, the often-cited study (White, 1989) of L1 English learners’ use of “wanna”, where input does not explain how children know when the use of “wanna” is correct.

Chomsky’s model of language distinguished between competence and performance – between the description of underlying knowledge and the use of language, influenced as the latter is by limits in the availability of computational resources, stress, tiredness, alcohol, etc. Chomsky is concerned with “the rules that specify the well-formed strings of minimal syntactically functioning units” and with “an ideal speaker-listener, in a completely homogeneous speech-community, who knows his language perfectly and is unaffected by such grammatically irrelevant conditions as memory limitations, distractions, shifts of attention and interest, and errors (random or characteristic) in applying his knowledge of the language in actual performance” (Chomsky, 1965: 3).

As to the innateness of the language faculty, Chomsky points to the fact that language acquisition has nothing to do with intelligence, and that, despite the enormous complexities of this abstract knowledge, the vast majority of children successfully reach full linguistic competence by the age of five. Language, it is claimed, is separate from other aspects of cognition, and, according to Chomsky, is looked after by a special module of the mind. Thanks to this faculty of mind, language develops in more or less the same natural way as teeth or internal organs or limbs do.

Chomsky’s radical new approach to linguistics marked the beginning of an important shift not only in linguistics but also in SLA research. The shift was away from behaviourist assumptions and structural linguistics, away from the emphasis on the pedagogical implications of research, and towards an explanation of the phenomena of SLA as a research project in itself. Under the influence of Chomsky, a number of academics decided to deliberately ignore the implications for teaching in favour of developing a more rigorous theory that could deal more adequately with the phenomena of SLA.

Chomsky’s Theory of Universal Grammar: Principles and Parameters.

In a series of steps (Chomsky 1980, 1981a, 1981b, 1986, 1987) Chomsky developed his Principles and Parameters Model, which, until fairly recently, was seen as the mature expression of his theory of Universal Grammar. The theory attempts to explain what linguistic knowledge consists of, and how it is acquired.

When a child experiences linguistic input, the parameter values of the universal grammar are set and this allows the child to understand and produce the specific language corresponding to the particular parameter settings. “Principles” are the universal, invariant design features of all human languages, while “parameters” constrain the limited possibilities for variation allowed. A parameter can have two or more values, and particular languages make different choices among the values allowed.

Chomsky’s “principles and parameters” model can be seen as an answer to the limitations of phrase structure grammars, which assume that sentences consist of phrases that have certain structures. Cook (1989) gives this lucid summary: “To take an English example sentence “Max played the drums with Charlie Parker”, principles of phrase structure require every phrase in it to have a head of a syntactic category and permit it to have complements of various types: a Verb Phrase such as “played the drums” must have a head that is a verb, “play”, and may have a complement “the drums”; a Prepositional Phrase such as “with Charlie Parker” must have a head that is a preposition, “with”, and a complement “Charlie Parker”; Noun Phrases such as “Max”, “the drums”, and “Charlie Parker” must have noun heads and may, but in this case do not, have complements. This is not true only of English; the phrases of all languages consist of heads and possible complements – Japanese, Catalan, Gboudi and so on. The difference between the phrase structures of different languages lies in the order in which head and complement occur within the phrase; in English the head verb comes before the complement, and the head preposition comes before its complement, while Japanese is the opposite. This variation in languages is captured by the head parameter, which has two settings, “head first” and “head last”, according to whether the head comes before or after the complement in the phrases of the language.”
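Cook’s description of the head parameter can be sketched in a few lines of code. The sketch below is purely illustrative (the function name and the Japanese glosses are invented for this page, not taken from Cook): a single binary setting determines head–complement order across all phrase types.

```python
# Illustrative sketch of the head parameter: one binary setting
# ("head-first" vs "head-last") fixes the linear order of head and
# complement across every phrase type in a language.

def build_phrase(head, complement, head_parameter):
    """Linearise a head and its (optional) complement according to the parameter."""
    if complement is None:               # phrases may lack a complement, e.g. "Max fainted"
        return [head]
    if head_parameter == "head-first":   # the English setting
        return [head, complement]
    return [complement, head]            # the "head-last" setting, e.g. Japanese

# English: verb before its object, preposition before its Noun Phrase
print(build_phrase("played", "the drums", "head-first"))   # ['played', 'the drums']
print(build_phrase("with", "Charlie Parker", "head-first"))

# Japanese (head-last): the same principle, the opposite setting
print(build_phrase("tataita", "doramu-o", "head-last"))    # ['doramu-o', 'tataita']
```

The point of the sketch is that the child need not learn the order phrase by phrase: fixing one parameter value from input settles it for Verb Phrases, Prepositional Phrases and the rest at once.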

Complementary to these phrase structure principles is the Projection Principle which claims that syntax and the lexicon are closely tied together. As well as knowledge of where the complement goes in the phrase, we need to know whether a complement is actually allowed, and this depends upon the lexical item that is used; hence the Projection Principle states that the English verb “play” must be specified as taking a complement (i.e. it is normally transitive); the lexical entry for the verb “faint” must specify it has no complement (i.e. it is intransitive), while that for the verb “give” must specify that it has two complements (i.e. direct and indirect objects). The question of whether the phrase structure of a sentence is grammatical is a matter not just of whether it conforms to the overall possible structures in the language but also whether it conforms to the particular structures associated with the lexical items in it; “Max played the drums” is grammatical because the verb occurs in the correct head-first position, compared to “Max the drums played” and because the verb “play” has an Object Noun Phrase following it, compared to “Max played”. (Cook, 1989: 169-170)
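The Projection Principle, too, can be rendered as a toy check. In this hypothetical sketch (the lexicon format and function are invented for illustration), each lexical entry records how many complements its verb takes, and a sentence counts as well formed only if the syntax matches the entry – mirroring Cook’s contrast between “Max played the drums”, “Max played” and “Max fainted”.

```python
# Toy sketch of the Projection Principle: lexical entries specify the
# complements a verb takes, and these properties are "projected" onto
# the syntax. The entries follow the examples in the text above.

LEXICON = {
    "play":  {"category": "V", "complements": 1},  # transitive: takes an object
    "faint": {"category": "V", "complements": 0},  # intransitive: no complement
    "give":  {"category": "V", "complements": 2},  # direct and indirect objects
}

def licensed(verb, complements):
    """Does this verb allow exactly this number of complements?"""
    entry = LEXICON.get(verb)
    return entry is not None and entry["complements"] == len(complements)

print(licensed("play", ["the drums"]))         # True:  "Max played the drums"
print(licensed("play", []))                    # False: *"Max played"
print(licensed("faint", []))                   # True:  "Max fainted"
print(licensed("give", ["Max", "the drums"]))  # True:  "gave Max the drums"
```

A real lexicon would of course record the categories of complements, their optionality and much else; the sketch only shows how grammaticality depends jointly on phrase structure and on the stored properties of individual lexical items.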

Other elements of UG include:
• Subjacency, which constrains the movement of categories. See Cook and Newson, 1996: 258-261.
• Case Theory, which constrains S structures. See Cook and Newson, 1996: 222-227.
• C-command and Government Theory, which constrain a number of the subsystems, such as Case Theory. See Cook and Newson, 1996: 234-239.
• Binding Theory, which constrains the formation of NPs. See Cook and Newson, 1996: 250-256.

To sum up: “UG consists of a highly structured and restrictive system of principles with certain open parameters to be fixed by experience. As these parameters are fixed, a grammar is determined, what we may call a ‘core grammar’” (Chomsky, 1980: 67).

The principles are universal properties of syntax which constrain learners’ grammars, while parameters account for cross-linguistic syntactic variation, and parameter setting leads to the construction of a core grammar where all relevant UG principles are instantiated.

How is knowledge of the core grammar of language acquired? We have already noted that Chomsky takes a “nativist” approach – he sees language as an innate faculty of mind, a natural human endowment. This native endowment has been called by Chomsky the “Language Acquisition Device” (LAD), which can be seen as a “black box”: children receive a certain amount of language input from their environment, which is processed in some way by the LAD so that they end up with their linguistic competence. As Cook (1993) points out, the UG theory “fleshes out” the LAD “by establishing the crucial features of the input, the contents of the black box, and the properties of the resultant grammar” (Cook, 1993: 200). The consequence, as already indicated, is that the “what” and “how” questions merge, and the process of language acquisition is simply one of selecting, rather than learning. We know the principles of grammar innately, and parameter settings are triggered by input. We should remember that the final steady state does not comprise only the knowledge of UG. Pragmatic competence, knowledge of peripheral grammar and of lexis are also involved, but lie outside the domain of Chomsky’s theories.

Given that the principles are already in place in the mind, learning focuses on setting parameters and acquiring vocabulary. Input is important because it acts as a trigger, but it does not in itself account for acquisition. The acquisition of grammatical competence is organised through principles and parameters. The learning of vocabulary is organised in the lexicon and guided by the Projection Principle. The properties of a language’s lexical items (stored in the lexicon) are projected onto the syntax. Language grows.

Researchers in the field of UG seek to determine whether the various logical possibilities are found across languages. Greenberg, for example, discovered the universal that all languages with verb-subject-object word order have prepositions (McLaughlin 1987: 83). This was an absolute universal because there were no exceptions. It led to the generalisation that VSO languages have prepositions, while non-VSO languages (SVO or SOV) can occur with or without prepositions. (Some non-VSO languages may have postpositions.) Another absolute universal is that all languages without exception have vowels. That all languages have nasal consonants is only a tendency, because there are a few that have no nasal consonants at all. Most languages do not have clicks, but there are a few that do.
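Greenberg’s universal is implicational – “if VSO, then prepositions” – and checking it over a sample of languages is mechanical. The sketch below uses a tiny, illustrative sample (simplified typological commonplaces, not a research dataset) to show what testing such a universal amounts to:

```python
# Checking Greenberg's implicational universal "VSO implies prepositions"
# over a small illustrative sample. An absolute universal survives only
# if no language in the sample falsifies the implication.

SAMPLE = {
    "Welsh":    {"order": "VSO", "adpositions": "prepositions"},
    "Irish":    {"order": "VSO", "adpositions": "prepositions"},
    "English":  {"order": "SVO", "adpositions": "prepositions"},
    "Japanese": {"order": "SOV", "adpositions": "postpositions"},
    "Turkish":  {"order": "SOV", "adpositions": "postpositions"},
}

def universal_holds(sample):
    """True if every VSO language in the sample has prepositions."""
    return all(lang["adpositions"] == "prepositions"
               for lang in sample.values()
               if lang["order"] == "VSO")

print(universal_holds(SAMPLE))  # True: no VSO counterexample in this sample
```

Note that non-VSO languages are simply irrelevant to the implication, which is why SVO and SOV languages may occur with either prepositions or postpositions without threatening the universal.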

The Chomskyan view of language differentiated core and peripheral grammar. The core grammar is that which grows in the mind, dependent upon the application of principles and setting of parameters. UG determines the core, but outside of that lies a vast range of language, for example, idiomatic expressions. Other features may be derived from other languages and other processes of language development (fashion, historical development, trends, accidents, inventions etc). The core grammar accounts for the relatively prototypical, unmarked elements of the language. The periphery contains the marked elements, including for example idioms.

This leads to a view of language acquisition that has the child acquiring the core grammar, the basic tool-kit of the language through the principles and by setting parameter choices. As a result, unmarked forms are acquired through UG while peripheral marked forms may also be learned through experience. ‘Hence we may expect to find a continuum of markedness from core to periphery’ (Cook 1985: 6). Thus emerges an explanation for the acquisition of marked/unmarked forms. UG prescribes some obligatory structures, parameters allow limited choices and the theory of markedness accounts for the acquisition of peripheral elements. This means that lexis becomes the key learning burden: ‘A large part of language learning is a matter of determining, from presented data, the elements of the lexicon and their properties’ (Chomsky 1982: 8, cited in Cook 1985: 7).

In the last twenty years, Chomsky has made a number of serious modifications to his theory of UG. The Minimalist Program (Chomsky, 1995) and a view of semantics, part epistemological, part ontological, which he calls “Internalism” (Chomsky, 2000), are the most obvious examples. Those interested in following these developments are directed towards Cook and Newson (1996), the works of Vivian Cook already mentioned here, and Botha (1991). Despite Chomsky’s tendency to “move the goalposts”, the “Principles and Parameters” model of Universal Grammar quickly sketched above still best represents Chomsky’s contribution to attempts to explain language acquisition. It remains true to say that UG involves the following claims:

• A theory of linguistics is concerned with describing and explaining an individual’s knowledge of certain core principles of language, not with his or her, or a community’s, use of language.
• The main way to test such a theory is through the intuitions of native speakers about whether or not sentences in their language are well formed.
• The language faculty responsible for our linguistic knowledge is innate and autonomous, i.e. it is an independent cognitive module that interacts with, but does not derive from, other cognitive faculties. UG theory rests on a modular view of cognition which sees the mind not as a uniform system, but rather as containing a central processing system and a set of autonomous systems or modules that function largely independently of one another.

Chomsky’s Critics

See Botha (1991) for an engaging account of Chomsky’s main critics. Three of Chomsky’s best-known critics, whose work covers the main charges made against him, will be briefly outlined here.


Jean Piaget

Piaget suggested in the 1920s (see Piaget, 1960) that a child goes through four qualitatively different stages in the process of his cognitive development. Until the age of two, the child is sorting out space, objects and causality. From around two to five years old his thought processes begin to use mental images arising from imitation or words. Language skills and reasoning from memory also develop in this second stage. From the ages of five to ten, again approximately speaking, the child can classify hierarchical structures, understand ordinal relations, and grasp the conservation of continuous properties like weight, quantity and volume. From around ten to fourteen years old the real world is seen as one of possible worlds, logical thinking improves, and the child realises that appearances can be deceiving.

How does the child manage all this? Piaget says that the child’s knowledge develops by his interaction with his environment and by the use of two strategies: assimilation, where the child fits his new experiences into established patterns of thought, and accommodation, where existing patterns of thought are changed to account for novel aspects of reality. The tension between these two strategies for dealing with new information is resolved by what Piaget calls equilibrium, which balances out the competing forces.

In Piaget’s opinion, there is no modularity of mind, no innate language faculty or any other specialised mechanism at work: the child creates his own concepts through interaction with the environment. In Piaget’s view, language is just one part of the knowledge the child acquires as he goes through his stages of development, constructing his understanding of the world for himself on the basis of dynamic interplay with the world around him.

Chomsky’s reply to Piaget was made publicly at the famous 1975 conference at Royaumont, where Piaget, Chomsky, Fodor, and others gathered to discuss the limitations of the genetic contribution to culture. Criticising Piaget’s four stages of development, Chomsky suggested that if children must pass through Piaget’s first stage of development before their language development takes place, then we would expect paraplegics to have a distorted path of language development, which, in fact, is not the case. (It is worth noting here that supporters of the UG theory often cite cases of abnormal children, those with serious cognitive and/or psychological problems, and “language savants” such as Williams syndrome children, as evidence for the independent, innate nature of the language faculty.) Chomsky went on to use the favourite UG argument against Piaget, the “logical” problem: how could Piaget (or anyone else for that matter) explain the poverty of the stimulus? No generalised learning strategies can, said Chomsky, ever meet this objection.


Geoffrey Sampson

Sampson says that Chomsky sees a new-born child as being “just like a very learned man who is asleep; the knowledge is in there, it just needs stirring up a bit before it is available for use” (Sampson, 1997: 8). Against such a view, Sampson argues, with Locke, that experience gives us knowledge, and we do not have any particular ideas or knowledge “built in.”

In addressing the question of what explains language acquisition, if not Chomsky’s innate language learning device, Sampson argues that the essential feature of languages is their hierarchical structure. Children, like our ancestors, start with relatively crude systems of verbal communication, and gradually extend syntactic structures in a pragmatic way so as to allow them to express more ideas in a more sophisticated way. The way they build up the syntax is piecemeal; they concentrate on assembling a particular part of the system from individual components, and then put together the subassemblies. This gives them low-level structures which are then combined, with modifications on the basis of input, into higher-level structures, and so on.

Sampson’s argument has two main strands: gradual evolutionary processes have a strong tendency to produce tree structures, and (following Popper) knowledge develops in a conjectures-and-refutations evolutionary way. Sampson claims that these two strands are enough to explain language acquisition.

Elizabeth Bates

Another well-known critic of Chomsky, Elizabeth Bates, challenges the modular theory of mind and, more specifically, criticises the nativists’ use of the accounts of “language savants” and those suffering from cognitive or language impairment disabilities to support their theory.

As for the poverty of the stimulus argument, Bates says “Linguists of a nativist orientation tend to recite this argument like a mantra, but we must remember that it is a conjecture not a proof.” (Bates, 2000: 6) Bates, who sees language as consisting of a network, or set of networks, says that neural network simulations of learning are still in their infancy, and that it is still not clear how much of human language learning they are able to capture, but she cites some research that challenges the poverty of the stimulus argument, and says that the neural network systems already constructed are able to generalise beyond the data and recover from error. “The point is, simply,” says Bates, “that the case for the unlearnability of language has not been settled one way or the other.” (Bates, 2000: 6)

An important criticism raised by many, and taken up by Bates, against Chomsky’s theory is that it is difficult to test. Bates argues that the introduction of parameters and parameter settings “serve to insulate UG from a rigorous empirical test.”


How does UG relate to SLA?

There are four main hypotheses:
• 1. There is no such thing as UG.
• 2. UG exists, but second language learners only have indirect access to it via the L1.
• 3. UG exists, but L2 learners only have partial access to it.
• 4. Second language learners have full access to UG.

As for hypothesis 1, those who deny the existence of UG (like Piaget, Sampson, and Bates) see no need to postulate a language module, and no need to look for linguistic universals either. O’Grady (1996) takes this approach, but a better example of such an approach is the Competition Model, first proposed by Bates and MacWhinney in 1983.

The best-known hypothesis regarding the second position – that UG exists, but that second language learners only have indirect access to it – is Bley-Vroman’s Fundamental Difference Hypothesis (Bley-Vroman, 1989a, 1989b). Bley-Vroman argues that the mind is modular, and that there exists a language faculty (UG) which is essential for the development of L1, but that UG is not directly at work in SLA. According to Bley-Vroman, adult second language learners do not have direct access to UG; what they know of universals is constructed through their L1, and they then have to use general problem-solving abilities, such as those that operate in non-modular learning tasks: hypothesis testing, inductive and deductive reasoning, analogy, etc. The Bley-Vroman approach provides an explanation for the “poverty of the stimulus”, or “logical”, problem of SLA – the complex L2 knowledge or interlanguage grammar which second language learners develop is (partly) a result of UG’s influence on the L1.

The third hypothesis, partial access, claims that L2 learners have access to principles but not to the full range of parameters. Schachter (1988) and Clahsen and Muysken (1989) have argued this case. It differs from the “indirect access” position in that it predicts that no evidence of “wild grammars” will be found, and that L2 learners will not reset the values of parameters of the L2 when these differ from the L1 settings.

Finally, the full access hypothesis claims that UG is an important causal factor in SLA, although not, of course, the only one. Those adopting the full access view (e.g., Flynn, 1987) claim more than that the L1 UG affects the second language learning process. They claim that principles not applicable to the second language learner’s L1, but needed for the L2, will constrain the L2 learner’s interlanguage. For example, the principle of Subjacency, which constrains the kind of wh-movement permitted, is irrelevant to languages that lack wh-movement. While those adopting the partial access approach would claim that a Korean native speaker learning English would not be affected by the Subjacency Principle, since it is irrelevant to Korean, those taking a full access stance would expect the Subjacency principle to constrain the Korean learner’s interlanguage grammar. In regard to parameter re-setting, the full access position, contrary to the partial access position, suggests that while the learner may pass through a stage where the L1 setting is applied to the L2, he will eventually attain the L2 setting, assuming a sufficient amount of relevant input.

Attempts to use theories of UG to explain SLA have met with various criticisms.

First, the empirical evidence for the various positions that argue for some role for UG in the SLA process is mixed. Here are a few examples. A study by Ritchie (1978, cited in Ellis, 2008) of Japanese students of English gave “preliminary support to the assumption that linguistic universals are intact in the adult.” White (1989) reports on a study of Japanese learners of English who, despite having no knowledge of question formation involving complex subjects, successfully acquired this knowledge in English. White argues that the learners must have had access to the principle of structural dependence. Flynn (1996, cited in Mitchell and Myles, 2004) reviewed research on Japanese learners of English, and claimed that it supported the view that UG constrains L2 acquisition. Mitchell and Myles (2004) also cite work by Thomas (1991), and by White, Travis and Maclachlan (1992) in support of the full access to UG hypothesis.

On the other hand, a study by Bley-Vroman, Felix, and Ioup (1988) of Korean learners of English concluded that the results made it “extremely difficult to maintain the hypothesis that Universal Grammar is accessible to adult learners” (Bley-Vroman, Felix, and Ioup, 1988, cited in Ellis, 2008). A study by Meisel in 1997 of the acquisition of negation in French and German by L1 and L2 learners (cited in Mitchell and Myles, 1998) concludes that the UG principle of structure-dependency is not available to L2 learners. Schachter’s (1989) test on Subjacency gave much more doubtful results than White’s, which she says constitute a “serious challenge” to the claim that UG is available to adult learners.

In general, then, it seems that there is conflicting evidence for all positions, although Cook and Newson claim that there is “a great deal of evidence” that knowledge of some aspect of language has been acquired in an L2 “that is not learnable from input, that was not part of the learners’ L1 and that is unlikely to have been taught by language teachers” (Cook and Newson, 1996: 293).

Let us now deal with the doubts about empirical evidence. The problem here is that L2 learners do not begin at the same stage as do very young children in L1, and nor is there any general homogeneity in their “end state” as there is in L1 acquisition. Ellis (2008) discusses various problems with grammaticality judgements that stem from the different beginning and end states in L2 learning. The first problem is how to ensure that the subjects have the requisite level of L2 proficiency to demonstrate whether or not a particular principle is operating in their interlanguage grammar – learners might violate a principle not because of non-availability of UG, but because the structure in question is beyond their present capacity. The second problem dealt with by Ellis is how to rule out the effects of the L1. If subjects act in accordance with UG this might be because they have access to it, or because they are drawing on their L1. Thus it is necessary to use subjects whose L1 does not manifest the principle under investigation. White (1989) also accepts this problem and points out that since not all UG principles operate in all languages, the problem can be solved. A third problem is that of literacy. Birdsong (1989, cited in Ellis, 2008: 441) says that grammaticality judgement tests are not appropriate for learners with poor L2 literacy, and that differences in the metalinguistic skills of literate learners will affect responses.

Second, methodology. Cook (1993) states the problem of the methodology of UG-based SLA research thus: “What can count as data for knowledge of a second language? L1 acquisition starts from the single sentences accepted by native speakers on the deliberate assumption that there is a native speaker standard. Whatever the merits of an idealisation to a normalised native speaker, this is less convincing in an L2, since there is no clear norm of what a successful L2 learner should look like, other than the monolingual; L2 learners vary extremely in the level of language they attain while L1 children do not. L2 research can therefore be based with difficulty on the same kind of single sentence evidence used with L1 research…. Most researchers therefore resort to grammaticality judgements as their main source of data – a source of evidence that has to be treated with extreme caution as it is unclear how directly it taps the individual’s knowledge of language” (Cook, 1993: 482).

Thirdly, we have the problem of the empirical adequacy of UG when applied to SLA: to what degree is the UG theory falsifiable? Gass and Selinker (1994) raise a number of objections to the work of those taking a UG approach to SLA, all concerning falsification. They argue that, although UG theory is well defined and thus able to make precise predictions, UG-based research is difficult to falsify because of the changing nature of the linguistic constructs on which it is based. Upon being confronted with data apparently contradicting the predictions of UG access, it is always possible to argue that the underlying linguistic formulation was not the correct one. Furthermore, they argue, apart from “moving the goalposts”, UG researchers have not always heeded falsifying evidence. They suggest that when predictions are not borne out, there are three options: assume a no-access to UG position, say methodological problems are to blame, or assume the theory is false. Gass and Selinker state that in their opinion the third position seems the most likely, but, surprisingly, give no reasons for this opinion.

Larsen-Freeman and Long (1991), in addition to misgivings about falsifiability, question three assumptions of Chomsky’s explanation of language acquisition. The first is that learning occurs quickly and is mostly complete by age five. “In fact, a good deal of complex syntax is not mastered until much later. English dative movement, for example, is not fully learned until about age sixteen” (Larsen-Freeman and Long, 1991:236). Other examples of “late” acquisition include some WH questions and yes/no questions.

The second questionable assumption is that certain syntactic principles are unlearnable, and therefore innate. This, say the authors, is increasingly being challenged. “General cognitive notions and strategies, such as conservative hypothesis-formation, developmental sequences based on cumulative complexity, and avoidance of discontinuity are being used to re-examine such UG icons as structure-dependence, PD phenomena, Subjacency and binding principles” (Larsen-Freeman and Long, 1991: 236).

The third assumption made by Chomsky and challenged by Larsen-Freeman and Long is that the input available to learners is inadequate and thus implies innate linguistic knowledge. As with the second assumption, a different explanation can be offered by a general learning theory.

The principal problem seems to be that UG is essentially an attempt to describe a core grammar: it is not really a theory of learning at all. According to Chomsky we do not “learn” our I-Language in the usual sense of the word: we are born with linguistic competence, and all we need is some positive evidence to trigger particular parameters so that the particular version of UG corresponding to our L1 becomes instantiated in the mind. Thus the process of acquisition itself is of little interest; the main task is to describe the components of the core grammar. In SLA, on the other hand, we are interested in explaining the language learning process. We are also interested in a variety of phenomena such as variability, fossilisation, and individual differences, all of which are deliberately ruled out of a theory of UG because they have nothing to do with the acquisition of L1 linguistic competence.

In summary then, the limited domain of Chomsky’s theory means that there are many aspects of L1 acquisition that fall outside it; Chomsky has nothing to say about pragmatics and discourse, linguistically he concentrates heavily on syntax, and even there only on core grammar; the acquisition of language-specific tense and case morphology, for example, is not included. Wolfe-Quintero comments: “UG may account for the successful acquisition of core grammar, but there is much more to language learning than that” (Wolfe-Quintero, 1996: 343). In the case of SLA, the limitations of a UG approach are even greater. Even assuming that UG exists, that UG theories of L1 acquisition are true, and that L2 learners have at least some access to UG, most of the questions that concern SLA researchers remain unanswered by Chomskian theory; indeed, they are not even addressed.

References can be found in the Suggested Reading and References section, at the end, under SLA stuff.


24 thoughts on “* Chomsky”

  1. I am preparing for a comp-exam in applied linguistics this coming May, and found your essays to be highly informative, accessible, and helpful. Thank you – Carla


  2. I agree with Chomsky with a proviso. There does exist an innate language acquisition device, but it does not ONLY create/learn language. That’s one of the tasks it can do. The COMParison process very likely meets these criteria for the LAD. It exists in the cortex of the human brain; it processes information by comparison, NOT formal verbal logic, though logic (mathematical, empirical verbal, boolean, etc.) can be generated by it. It’s found at the top of the cortical cell columns in human cortex. It learns language by comparison, imitation, by demonstration and by trial-and-error practice. It’s also capable of learning a good deal more than just language. It stores what it has learned in the cortex, using Long Term Memory, and recognizes it by a simple LTM call.
    One hears a word cluster. It’s compared to the LTM, which triggers recognition, or not. If not, it asks what the word means in terms of other words: Comparison. It’s constantly trying to make sense of sensory experiences. This simple means explains language and its acquisition and, by extension, all learning.
    The system really works. Surprisingly grammar is not that important. It’s the COMParison processor which exists in all humans (in various states of abilities) which creates the language and finds those word clusters which make sense, and those which do not, that is, meaning, not necessarily grammar. Verbs/nouns, parts of speech don’t help that much in understanding the COMP process of language. It’s NOT necessarily logical, but usually consistent with COMP, unless it makes an error, which is also possible. It explains context and a good many other things. And it works. The COMP can even recognize errors (testing/checking by comparison) and correct them.
    When the cortex is damaged, language is damaged related to the severity of the damage. It’s in the cortex where the LAD/COMP resides. It even explains why Koko could learn ASL and be creative using it. She, as a great ape, has cortex much like ours, as do chimps.
    http://jochesh00.wordpress.com/2014/02/14/le-chanson-sans-fin-the-comparison-process-introduction/

    Thanks for your time.
    Herb Wiggins, MD. ret., Diplomat Am. Board Psych./Neuro


  3. Hi Herb,

    I visited your website and skimmed your posts on COMP. This is a quick reply which can’t claim to have given your work proper consideration.

    Your account of language learning via the COMP differs markedly from Chomsky’s theory of language and the operation of the LAD. I’m not sure quite how you think that the COMP “explains” more; quite how it operates on LTM, and how you see the roles of syntax and lexis, but anyway, thanks for sharing as they say.


  4. Fallacies in the poverty of stimulus argument (since you asked)

    Children can’t possibly learn language by just imitating their parents.

    “Bloom goes on to tell Vuolo about, what she called the ascendancy or “domination of the MIT theory of language,” which says that much of what language is about is innately determined and that kids, in fact, didn’t have to hear a whole lot of language to acquire it. She says “You had this theory that said it doesn’t really matter very much what parents do. Kids don’t really need all that much input. And, so what Hart and Risley pointed out was that at least for word learning, that’s just simply not true. The numbers of words really do make a difference for children. They really do have to hear all those words. Now, I think, that eventually began to be acknowledged by the world according to MIT. But, they just didn’t take it seriously. They just ignored it.” She explained it was MIT, Noam Chomsky and all of his descendants who ignored Hart and Risley’s research”
    http://operationreadyby3.wordpress.com/2013/06/27/how-to-raise-a-verbal-child-lexicon-valley-on-betty-hart-and-todd-risleys-research/

    you might find this interesting:

    “Viewed from a contemporary perspective, Chomsky’s concerns about the unlearnability of language seem at best rather dated and at worst misguided. There are two key features in current developmental psycholinguistics that were lacking from Chomsky’s account, both concerning the question of what is learned. First, there is the question of the units of acquisition: for Chomsky, grammar is based on abstract linguistic units such as nouns and verbs, and it was assumed that children operated with these categories. Over the past 15 years, direct evidence has emerged to indicate that children don’t start out with awareness of underlying grammatical structure; early learning is word-based, and patterning in the input at the level of abstract elements is something children become aware of as their knowledge increases (Tomasello, 2000).

    Second, Chomsky viewed grammar as a rule-based system that determined allowable sequences of elements. But people’s linguistic knowledge is probabilistic, not deterministic. And there is now a large body of research showing how such probabilistic knowledge can be learned from sequential inputs, by a process of statistical learning. To take a very simple example, if repeatedly presented with a sequence such as ABCABADDCABDAB, a learner will start to be aware of dependencies in the input, i.e. B usually follows A, even if there are some counter-examples. Other types of sequence such as AcB can be learned, where c is an element that can vary (see Hsu & Bishop, 2010, for a brief account). Regularly encountered sequences will then form higher-level units. At the time Chomsky was first writing, learning theories were more concerned with forming of simple associations, either between paired stimuli, or between instrumental acts and outcomes. These theories were not able to account for learning of the complex structure of natural language. However, once language researchers started to think in terms of statistical learning, this led to a reconceptualisation of what was learned, and many of the conceptual challenges noted by Chomsky simply fell away. ”

    http://deevybee.blogspot.co.uk/2012/09/what-chomsky-didnt-get-about-child.html
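    The dependency learning described in the quoted passage – “B usually follows A, even if there are some counter-examples” – amounts to estimating transition probabilities from adjacent pairs. A minimal sketch of that idea (my illustration, not anything from the cited post):

```python
from collections import Counter, defaultdict

def transition_probs(sequence):
    """Estimate P(next | current) from the adjacent pairs in a sequence."""
    counts = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        counts[current][nxt] += 1
    return {cur: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
            for cur, nexts in counts.items()}

# The example sequence from the quote: B follows A in 4 of 5 cases,
# so P(B|A) = 0.8 despite the counter-example "AD".
probs = transition_probs("ABCABADDCABDAB")
print(probs["A"])  # {'B': 0.8, 'D': 0.2}
```

    Regularly encountered subsequences with high transition probabilities are exactly the candidates for the “higher-level units” the quote mentions.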


  5. Hi Russ,

    Thanks for these quotes, but I’m afraid you’re going to have to use a few of your own grey cells if you want to get a better handle on this.

    The first quote does nothing to refute Chomsky’s claim that children’s knowledge of language can’t be explained by appeal to evidence of input received from their parents – or from anyone else. I hope you get this point, because, if you do, then you’ll appreciate that you’ve failed to show any fallacy in the theory of UG.

    The second quote cites work done by Tomasello, which, again, does nothing to refute UG and, further, is based on a misinterpretation of Chomsky’s theory. Tomasello doesn’t address the huge amount of research done by those in the UG camp into children’s awareness of “underlying grammatical structure”, and his claim that “early learning is word-based, and patterning in the input at the level of abstract elements is something children become aware of as their knowledge increases” is no more than an assertion which fails to deal with the evidence for the poverty of the stimulus argument.

    As for the claim that “people’s linguistic knowledge is probabilistic, not deterministic” and that there is “now a large body of research showing how such probabilistic knowledge can be learned from sequential inputs, by a process of statistical learning”, nobody, least of all Chomsky, says that linguistic knowledge is “deterministic”. Furthermore, it’s the easiest thing in the world to provide evidence, unlimited tons of evidence, to “show” that ANY model of statistical learning works, but it’s another thing to make a strong case for the argument that such models explain first language acquisition. What theory of language underlies these claims? And what general theory of learning is being appealed to here?

    Do a bit more homework Russ; have a look at connectionist theories which I suspect are being referred to in your second quote, have a look at what Chomsky actually says, not what others say he says, make a bit of an effort to get both sides of the argument, and then assess the arguments and the evidence for yourself. If you take the trouble to do so, I’ll be very interested to hear your conclusions.


    • Thanks for the reply Geoff. You say “have a look at what Chomsky actually says”. I’ll certainly try (and have been trying for years), but Chomsky is, in my opinion, the worst kind of academic writer. His prose is almost unreadable to me, and I think he makes little effort to make the abstract concrete. I also think, from interviews I’ve seen of him, that he attempts to shrug off most criticism with an air of “of course I’m right” arrogance. Of course, none of this is evidence of anything other than my personal opinions about the man.

      I have to disagree with you about the first point. The research cited does indeed dispute POS. Children’s vocabularies are dependent upon the language they hear. Now you may argue that Noam wasn’t talking about vocab but about underlying grammar, but you wrote “knowledge of language”, so I can’t be sure on this point.

      You also talk about the huge amount of research done by the UG camp into underlying structures. I’m curious: did Chomsky or the UG camp ever attempt to ascertain whether the POS argument rested on anything other than conjecture? That is, did they attempt to prove that the stimulus was indeed impoverished, or is that just taken as ‘common sense’?


      • Hi Russ,

        I’m sure most people will agree with you that reading Chomsky is hard work. The subject matter is difficult enough, and Chomsky’s style doesn’t help one bit! I myself relied heavily on secondary sources when I first studied UG theory, and I find his more recent essays on Internalism impossible to follow. I was just surprised to see you chime in following Lexical Leo’s pronouncement “The evidence is in: there’s no language instinct!”. Surely before accepting such a sweeping judgement one should do more than read the summary of one silly book! One should, I suggest, read stuff by authors such as Lydia White, Vivian Cook, Kevin Gregg, and Cook and Newson, for example, who are “pro-Chomsky”, as well as work by Geoffrey Sampson, Elizabeth Bates and V. Jenkins, for example, who are “anti-Chomsky”.

        Moving on, while I don’t question that the research cited in your first quote disputes POS, I don’t think it does so successfully, because it fails to explain the knowledge of underlying rules of grammar which many studies have shown children to have. Of course some of the L1 acquisition process can be explained by input, but nobody, IMHO, has provided a satisfactory alternative explanation for the logical problem of language learning: children learning their first language can’t induce rules of grammar from the input they receive; the knowledge of language which they manifest can’t be explained by appealing to the language they are exposed to.

        This is the same point I made in reference to the research done by the UG camp. You ask if researchers ever attempted to ascertain whether the stimulus was indeed impoverished. Yes, they did. Data collected by observing the input to children over tens of thousands of hours was used to test the hypothesis that many of the rule-based utterances and grammaticality judgements made in tests by young children can’t be explained by the input they’ve been exposed to.

        In sharp contrast to Jenkins’s case, which is built on a ridiculous “interpretation” of Chomsky’s work, the attempts being made by emergentists (especially MacWhinney and Bates) to come up with a general learning model which will explain the data I just mentioned are extremely interesting. I personally don’t think they’ve got very far yet. While I’m at it, I think that attempts by leading lights in ELT to champion emergentism have thrown very little light on the discussion – D. Larsen-Freeman’s enthusiastic attempts to proselytize strike me as particularly awful.


  6. There are many points to be made about language. First of all, formal, English-class grammar has not that much to do with the way the cortex hears, organizes, makes sense of, understands and outputs languages. Grammar is a red herring in this situation, the crowbar in the gears which has prevented a fuller understanding of language. Rather, words act as vehicles for meaning, not as nouns or verbs. In fact, many languages use noun-verb words, as the Romance languages do, typified by “amo, amas, amat”, etc. Nouns and verbs are NOT seen as discrete parts of speech in these innumerable cases. It’s the two ideas together which create meaning. No word is an island. No word stands alone outside of context – social, other words, sensory inputs, and so forth. Therefore, meaning and language are comparison processes, which is likely to be the single, multiplicit, rather simple process which creates all the rest of language. That is the “universal grammar” of our cortex.

    We hear a word phrase and process the words all together, which gives context by those comparisons. The words are then compared to Long Term Memories and recognized within their contexts. It’s the input, the comparison to LTM meanings, and so forth. Upon those simple comparisons, the higher level abstractions are built up in a hierarchical way, from the simple to the complex. The taxonomies of the living species are one of the highest levels of this process. The reductionist methods of the sciences show the same pattern: from particles (proton/electron) to atoms, to the elements and isotopes, to the periodic chart of the elements, each with their hierarchies of relationships (comparisons) – electron levels and nuclear organization – to the ferrous metals, PGM, the rare earths, noble gases, etc. Hierarchies within hierarchies. Organized, recognized patterns. From the elements we ascend to the molecules, from those to carbon chains, to biochemistry, leaping from there to single cells, then to multicellular forms, and so forth until we reach the immanent border-zone effects, the emergent qualities of the mind arising directly from the cortical cell columns. These are the hierarchies which our brains create.

    When you state “regularly encountered sequences (note your compound word yielding meaning phrase, which is very prescient) will then form higher units”, this seems likely to be the case. Those constantly encountered stabilities are recorded in LTM, becoming the stable platform upon which most all further recognitions can be created. Language starts out in babies as simple and then becomes complex, as the child learns. Language, from a carefully examined standpoint, is likely to be a “complex system”, as opposed to the linear logics, maths and approaches formerly used, and has thus far, due to this conceptual and epistemological limit, been very hard to understand.

    The statistical methods stated simply follow the Markovian and Bayesian computer programming methods to create recognition. Frankly, one doubts very much that the cortical cell columns which create language use any math or statistics at all to do their linguistic tasks, either in humans or other animals. One very much doubts that the bird flying to catch the insect in mid-flight does any math at all!!

    There’s much more to be said. I have written about meaning and words and their origins as part of a much deeper model and conceptualization using complex system methods and the comparison process. Many might find it interesting. I have also found an analogous system for speech in the brain which does a great deal of what Chomsky’s LAD does, too – from the baby’s babbling center to speech initiation in the adult, and much else. It’s built into the brain, too, just like our vocal cords are in our throats. And it all fits.

    https://jochesh00.wordpress.com/2014/02/14/le-chanson-sans-fin-the-comparison-process-introduction/
    Start reading with paragraph 4, et seq. This shows the limits of grammar in understanding language, and the universality of the cortical cell columns which output, largely, the comparison process, building up the languages from there.


  7. As a teacher, not a researcher, whose interests lie in understanding what is going on in my class and how I could best help my learners progress in their language acquisition/learning, I pretty much appreciate all the discussions going on around the blogs and Twitter. Although Twitter is not the best place to discuss this, as it limits how we can express ourselves, it can trigger interesting discussions like this one, which I think probably happen around M.A. programs.
    This is really great for showing teachers who have not done an M.A. yet that there are things we should be reflecting on and digging deeper into.

    What I like most about Geoff’s writings is that he has the ability to show us different perspectives and critically analyse them. When someone subscribes to one theory it is easy to try to defend it, especially if you have made a career out of it.

    Geoff knows that I have never delved into behaviorism or nativism, because in my major in Education they were both discarded as valid theories for language acquisition in L1. Because of that, when I moved away from the PPP model, I easily embraced Dogme principles.

    I use Twitter’s RT feature as a way to keep track of tweets I want to check out later or access easily. I’m here for the learning journey. Thanks again, Russ and Geoff, for all the contributions here and on Twitter. It really contributes to my current interests.

    Glad the discussion went on and on. 😉


  8. Hi Geoff,
    Do I get this right? Chomsky claims that the phenomenon we call language comes into existence as a result of biological growth processes. Compared to growing in height and strength, children are growing language. This is not a mere analogy—it is an example. This growth, like all physiological growth, must be anchored in cellular duplication and diversification. It is this place of origin that would allow labeling Chomsky’s theory as innate. Language thus must be subject to biological principles; I can think of cell mutation, hereditary laws, aging, etc. (I am not a biologist; I am just trying to get a sense of the argument). Now I would like to quote from the article:
    “Given that the principles are already in place in the mind, learning focuses on setting parameters and acquiring vocabulary. Input is important because it acts as a trigger, but it does not in itself account for acquisition. The acquisition of grammatical competence is organised through principles and parameters. The learning of vocabulary is organised in the lexicon and guided by the Projection Principle. The properties of a language’s lexical items (stored in the lexicon) are projected onto the syntax. Language grows.”
    What gives parameters and principles their existence – “given that the principles are already in place in the mind”? Would we have to assume that these are found on a cellular, or sub-cellular, or molecular, or intercellular level? If they are in place, how did they get there? As they were not learned, they must be explained by evolutionary reasoning, or no answer can be given. The term mind is a crucial one because it suggests the existence of a variable that on a verbal level assumes a somewhat active role (as do the terms organized / guided / learned), but, since we are trying to discuss an innate model of language origin, it might be necessary to identify what this term corresponds to in observable biological phenomena. Does mind equal brain functions? (You mentioned in your reply to Thornbury’s talk on embodied mind that Chomsky knew what he meant by the term mind… I am very interested in understanding this.)

    Being exposed to outside impressions, parameters are set. To describe a known characteristic of growth in biology: regardless of the fact that we can find all genetic information in any cell, cells get to express only some desired quality (there is a regulating factor/process that determines whether a cell turns into an eye or a nose). In similar fashion, parameter information must (could) be expected to be present in DNA, which, when triggered by outside impressions, leads the brain to express one among several options. Has this mechanism been identified?

    If the principle/parameter claim is true, one could observe that they provide general guidance to growth and have relatively little restrictive power on language use. The often-quoted ability of children to come up with novel language (“they could not have heard it as input”) while still being grammatical seems interesting due to novelty in the use of vocabulary, as opposed to staying within the guidelines corresponding to a parameter setting. In other words, parameter settings, being very general, easily allow novel use of language. The novelty gets more interesting as one notices the innovative use of vocabulary. As with the regulatory mechanism referred to above in cell specification, the principle/parameter entity (for lack of a better term) seems to have limited regulatory force, hence the reduction to the role of “core grammar”.

    Leaving this thought for the moment, and it is a thought I am trying to get clearer into focus, I wonder how close Chomsky gets to Kant’s ideas of a priori knowledge. Kant’s departure from the blank slate, suggesting organizing principles that give meaning to sense impression, seems to echo in epistemology what Chomsky’s principle/parameters attempt for linguistics.

    Regards,
    Thomas


  9. Hi Thom,

    Below (starting with “QUESTION”) is a bit from an interview with Chomsky: http://chomsky.info/198311__/

    In brief, Chomsky rejects empiricist claims that we can’t talk rationally about the construct of mind. BTW, all those (including the unlikely bedfellows Scott Thornbury and Hugh Dellar) who embrace an emergentist view of language learning are actually accepting, whether they realize it or not, an empiricist epistemology. In contrast, rationalists use the construct of mind in explanations of feelings, sentiments, learning and experiencing the world around us. It’s a hypothetical, theoretical construct which is posited in order to explain these things. Chomsky says that innate mental structures of the mind explain the acquisition of knowledge of grammar.

    Kant tried to bridge the gap between rationalists and empiricists or “idealists and realists” as I think he called them. How can we know things that are necessary and universal but not self-evident or definitional? Through “synthetic a priori knowledge”, whereby our minds organize experience according to certain categories so that these categories become necessary and universal features of our experience. It’s a feature of the way our minds make sense of reality that we perceive causes and effects to be at work everywhere. So there’s definitely an analogy between Chomsky’s principles and parameters and Kant’s “a priori knowledge”.

    Here’s the extract from the interview:

    QUESTION: Why do you believe that language behavior critically depends on the existence of a genetically preprogrammed language organ in the brain?

    CHOMSKY: There’s a lot of linguistic evidence to support this contention. But even in advance of detailed linguistic research, we should expect heredity to play a major role in language because there is really no other way to account for the fact that children learn to speak in the first place.

    QUESTION: What do you mean?

    CHOMSKY: Consider something that everybody agrees is due to heredity — the fact that humans develop arms rather than wings. Why do we believe this? Well, since nothing in the fetal environments of the human or bird embryo can account for the differences between birds and men, we assume that heredity must be responsible. In fact, if someone came along and said that a bird embryo is somehow “trained” to grow wings, people would just laugh, even though embryologists lack anything like a detailed understanding of how genes regulate embryological development.

    QUESTION: Is the role of heredity as important for language as it is for embryology?

    CHOMSKY: I think so. You have to laugh at claims that heredity plays no significant role in language learning because exactly the same kind of genetic arguments hold for language learning as hold for embryological development. I’m very much interested in embryology but I’ve got just a layman’s knowledge of it. I think that recent work, primarily in molecular biology, however, is seeking to discover the ways that genes regulate embryological development. The gene-control problem is conceptually similar to the problem of accounting for language growth. In fact, language development really ought to be called language growth because the language organ grows like any other body organ.

    QUESTION: Is there a special place in the brain and a particular kind of neurological structure that comprises the language organ?

    CHOMSKY: Little enough is known about cognitive systems and their neurological basis; so caution is necessary in making any direct claims. But it does seem that the representation and use of language involve specific neural structures, though their nature is not well understood.

    QUESTION: But, clearly, environment plays some role in language development. What’s the relationship between heredity and environment for human language?

    CHOMSKY: The language organ interacts with early experience and matures into the grammar of the language that the child speaks. If a human being with this fixed endowment grows up in Philadelphia, as I did, his brain will encode knowledge of the Philadelphia dialect of English. If that brain had grown up in Tokyo, it would have encoded the Tokyo dialect of Japanese. The brain’s different linguistic experience — English versus Japanese — would modify the language organ’s structure.

    Roughly the same thing goes on in animal experiments, showing that different kinds of early visual experience can modify the part of the brain that processes visual information. As you may know, cats, monkeys, and humans have hierarchically organized brain-cell networks connected to the retina in such a way that certain cells fire only when there is a horizontal line in the visual field; other hierarchies respond only to vertical lines. But early experience can apparently change the relative numbers of horizontal- and vertical-line detectors. MIT psychologists Richard Held and Alan Hein showed some time ago, for example, that a kitten raised in a cage with walls covered by bold, black vertical lines will display good sensitivity to vertical lines as an adult but poor horizontal-line sensitivity. Lack of stimulation apparently causes the horizontal-line detectors to atrophy.

    An even closer analogy exists between language growth and the growth that appears in human beings after birth — for example, the onset of puberty. If someone came along and said, “Kids are trained to undergo puberty because they see other people,” once again everybody would laugh. Would we laugh because we know in great detail the gene mechanisms that determine puberty? As far as I can tell, no one knows much of anything about that. Yet we all assume that puberty is genetically determined.


    • Dear Geoff,
      Thank you for replying.
      What do you make of the way Chomsky compares fetal growth with language “growth”? This sounds wrong to me. Sure, the interview is dated 1983 and a lot has happened since in molecular biology. Growth processes are much better understood today. To me it seems that the difference between language “growth” and physiological growth is significant. The body grows necessarily. Cell division happens on a given path with complex regulating mechanisms in place. The environment has an influence on how genes express their potential, but apart from some pathologies – and I am not really aware of any – express they must. With language this does not seem to be the case. Language is not necessary. The body could survive without it. We do not grow language the way we grow arms.

      I’ll give the interview a closer reading.
      Best,
      Tk


  10. Hi Thom,

    Chomsky has changed his mind about a lot of things, but not about this. He’d say, I think, that learning a language is facilitated by hereditary genes; that language growth is similar to other kinds of biological growth; that language is necessary; that the body could survive without an arm.

    Best,

    Geoff


  11. Hi again,
    Hmmm. Linking language to genes and to hereditary processes is a bold move. What is clearly/unquestionably inherited through genes – hair color, complexion, height, etc. – follows some “simple” rules. It gets more complex when we draw conclusions about behavior, attitudes, values etc. I think the term “facilitate” serves as a safety valve here. I think nobody would follow a reasoning that suggests that we can breed for language. In fact, UG seems to suggest precisely the opposite. It is fairly robust in the face of whatever upheaval might afflict our gene-pool. You can take an Australian aboriginal or an Austrian bodybuilder and school them in LA – they will sound just the same.

    We can admit cases of apparent giftedness, as the Bach family was in music, but these outliers are flushed back into the general flow of the species.

    Language growth is different in kind from biological growth. One might be able to use biological growth metaphorically, but I do not see how language, a phenomenon that to me seems to reside in psychological experience, can be similar in any other way. How the physical platform allows the experience of language – well, that is the nut we have been trying to crack. How thought and awareness can come about as a “product” of electro/chemical processes is still unknown.

    The body cannot survive if organs fail to thrive. I do not see how language is necessary for survival. This is of course an absurd argument, I am not suggesting that life without language is a condition worth studying, but it serves to make the point of distinguishing physical, biological growth, that runs on autopilot, from language development.

    Genie was locked in her basement. When the atrocity came to an end, she emerged with her body grown up.

    Pegging language to genetics, to me, is a more problematic move than, as you mentioned in passing, an empirical approach. Rather than assuming that language is “in here”, i.e. the genes, empiricists would have to admit that language is “out there”. It makes me just slightly more comfortable.

    Regards,

    Thomas k.


  12. Hi Thom,

    How can language be “out there” in an empiricist sense? It beggars belief. The poverty of the stimulus argument is just too strong. I suggest you look at Chomsky’s work – the Managua Lectures, maybe – and get back to me.

    Once again, thanks for your interest.


    • Dear Geoff,
      I started looking at the material you pointed out. Are you sure this is the place where I should start? (The Managua Lectures offer a starting point for discussing neo-liberalism and modern forms of imperialism, topics very close to home, but will it help me as a puzzled applied linguist?)

      I just finished listening to a Krashen lecture (dated 2015) where he talks briefly about his understanding of Chomsky. Krashen mentions that he had to backtrack and start at the very beginning, with Syntactic Structures, to make sense of Chomsky when he was a grad student back in 1967. I somehow proceeded the other way around: I started with V. Cook’s Chomsky’s Universal Grammar. In a sense I wanted to spare myself having to work through a long intellectual trail, with its dead ends and false leads, and deal with the latest, and, I would assume, state-of-the-art Chomskyan thinking. I have read some of the better-known introductory books on SLA (I liked Mitchell and Myles) and got myself the handbooks on SLA, even the “crap” one. But I must admit that when it comes to Chomsky, I have relied on secondary sources.

      Regards,

      tk


      • Hi Thomas,

        Sorry – there’s a bit of confusion between Language and Problems of Knowledge: The Managua Lectures http://www.amazon.es/Language-Problems-Knowledge-Lectures-Linguistics/dp/0262530708 and On Power and Ideology: The Managua Lectures. I was referring to the first.

        I think “Language and Problems ..” is a good place to start reading Chomsky himself. Of course Krashen has a good point, but Chomsky’s original works are very difficult for the layman (I include myself); and I think the lectures I refer to are the most accessible statement of the Principles and Parameters version of UG, although they don’t make for easy reading.

        Vivian Cook is, IMO, an excellent secondary source, and the Mitchell and Myles book is a really excellent intro to SLA. The Handbooks of SLA are all fantastic – I hope I never said that any of them was crap.

        Best,
        Geoff


      • Hi again,
        Ah, I googled and amazoned and got to the wrong set of lectures. Thank you for the correction.

        I went back to my bookshelf, and you are right: I made reference to a book whose title reads something like The Study of SLA. That one made your crap-list; I mistakenly recalled it as a handbook. My fault. The Blackwell Handbooks look great. I have only the SLA one and would love to see the one on emergence.

        regards,

        tk


  13. Hi again,
    Thank you for the new expression. I had never come across “beggars belief”. That is exactly the term that applies to earlier parts of our dialogue: the part where we talk about growing arms. Language growth works the same way – really?

    I find the conclusion drawn from Chomsky’s data, facts, etc. (poverty of stimulus,…) somewhat fantastic. To me the LAD idea is like using a hypothetical point in topography to calculate the route of a highway: as a mental construct it is useful, but it remains hypothetical.

    Innatism and cellular biology have to meet, if Chomsky’s ideas are true. It is like digging a tunnel from both ends: molecular biology, the neurosciences, and genetics dig on one side, and Chomsky on the other.

    Thank you for the suggested reading. I will take some time to do so.

    You have never commented on Nick Ellis’ view of SLA. Does he hold a place in the debate?

    And, finally, I read your latest post on the highlights of the ELT year. I think you deserve an honorary degree for keeping up the (British) art of the insult. I have never been good at it; I get too emotional. I think we generally lack this ability in our time. Maybe we take ourselves too seriously.

    Regards,
    Thomas

    (*out there*: If sense perception is the only reliable source of knowledge, then what we think we know about language depends on what is experienced. And what is experienced has its genesis in phenomena residing outside the individual; in this case, language is “happening” to the individual. In terms of language competence (as opposed to knowing about language), the acquisition process builds on the same set-up. Language, as a cultural artifact, is passed on from the competent language community to subsequent generations, and this passing on depends on sense perception.)

