Interlanguage Development: Some Evidence

As a follow-up to my two previous posts, here’s some information about interlanguage development.

Doughty and Long (2003) say

There is strong evidence for various kinds of developmental sequences and stages in interlanguage development, such as the well known four-stage sequence for ESL negation (Pica, 1983; Schumann, 1979), the six-stage sequence for English relative clauses (Doughty, 1991; Eckman, Bell, & Nelson, 1988; Gass, 1982), and sequences in many other grammatical domains in a variety of L2s (Johnston, 1985, 1997). The sequences are impervious to instruction, in the sense that it is impossible to alter stage order or to make learners skip stages altogether (e.g., R. Ellis, 1989; Lightbown, 1983). Acquisition sequences do not reflect instructional sequences, and teachability is constrained by learnability (Pienemann, 1984).

Let’s take a look at the “strong evidence” referred to, beginning with Pit Corder and error analysis.

Pit Corder: Error Analysis

Corder (1967) argued that errors were neither random nor simply the result of L1 transfer; rather, they were evidence of learners’ attempts to work out an underlying rule-governed system. Corder distinguished between errors and mistakes: mistakes are slips of the tongue, whereas errors indicate an as yet non-native-like, but nevertheless systematic, rule-based grammar. Interesting and provocative as this was, error analysis failed to capture the full picture of a learner’s linguistic behaviour.

Schachter (1974) compared the compositions of Persian, Arabic, Chinese and Japanese learners of English, focusing on their use of relative clauses. She found that the Persian and Arabic speakers made far more errors, but when she went on to look at total production she found that the Chinese and Japanese students produced only half as many relative clauses as the Persian and Arabic students did. Schachter then looked at the students’ L1s and found that Persian and Arabic relative clauses are similar to English ones in that the relative clause follows the noun it modifies, whereas in Chinese and Japanese the relative clause precedes the noun. She concluded that Chinese and Japanese speakers of English use relative clauses cautiously but accurately because of the difference between the way relative clauses are formed in their L1 and in English. So, it seems, things are not so straightforward: one needs to look at what learners get right as well as what they get wrong.

The Morpheme Studies

Next came the morpheme order studies. Dulay and Burt (1974a, 1974b) claimed that fewer than 5% of errors were due to native language interference, that errors were, as Corder suggested, in some sense systematic, and that something akin to a Language Acquisition Device was at work not just in first language acquisition, but also in SLA.

Brown’s (1973) morpheme study led him to claim that the morphemes below were acquired by L1 learners in the following order:

1 Present progressive (-ing)
2/3 in, on
4 Plural (-s)
5 Past irregular
6 Possessive (-’s)
7 Uncontractible copula (is, am, are)
8 Articles (a, the)
9 Past regular (-ed)
10 Third person singular (-s)
11 Third person irregular
12 Uncontractible auxiliary (is, am, are)
13 Contractible copula
14 Contractible auxiliary

This led to studies in L2 by Dulay & Burt (1973, 1974a, 1974b, 1975) and Bailey, Madden & Krashen (1974), all of which suggested that there was a natural order in the acquisition of English morphemes, regardless of L1. This became known as the L1 = L2 Hypothesis, and further studies by Ravem (1974), Cazden, Cancino, Rosansky & Schumann (1975), Hakuta (1976), and Wode (1978) all pointed to systematic staged development in SLA.

Some of these studies, particularly those of Dulay and Burt, and of Bailey, Madden and Krashen, were soon challenged, but over fifty L2 morpheme studies have since been carried out using more sophisticated data collection and analysis procedures, and the results of these studies have gone some way to restoring confidence in the earlier findings.
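
The cross-group comparisons in the morpheme studies were typically made with rank-order correlations: if learners from different L1 backgrounds rank a set of morphemes for accuracy in the same way, the correlation between the two rankings approaches 1. Here is a minimal sketch of that calculation; the morpheme set and the accuracy figures are invented for illustration, and no tie correction is applied:

```python
# Spearman rank correlation between two groups' morpheme accuracy orders.
# The accuracy scores below are hypothetical, for illustration only.

def spearman_rho(xs, ys):
    """Spearman's rho for two equal-length score lists (no tie correction)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i], reverse=True)
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - (6 * d2) / (n * (n ** 2 - 1))

morphemes = ["-ing", "plural -s", "copula", "articles", "past -ed", "3rd sg -s"]
spanish_l1 = [92, 88, 81, 74, 60, 45]   # hypothetical % correct suppliance
chinese_l1 = [90, 85, 78, 70, 63, 41]   # same rank order, different scores

rho = spearman_rho(spanish_l1, chinese_l1)
print(round(rho, 2))  # identical rankings, so rho = 1.0
```

A high rho across L1 groups is what the "natural order" claim amounts to statistically: the raw accuracy scores differ, but the ordering does not.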

Selinker’s Interlanguage

The third big step was Selinker’s (1972) paper, which argued that L2 learners have their own autonomous mental grammar (which came to be known as interlanguage grammar), a grammatical system with its own internal organising principles. One of the first stages of this interlanguage to be identified was that for ESL questions. In a study of six Spanish-speaking students over a 10-month period, Cazden, Cancino, Rosansky and Schumann (1975) found that the subjects produced interrogative forms in a predictable sequence:

  1. Rising intonation (e.g., He works today?),
  2. Uninverted WH (e.g., What he (is) saying?),
  3. “Overinversion” (e.g., Do you know where is it?),
  4. Differentiation (e.g., Does she like where she lives?).

Then there was Pica’s (1983) study, which suggested that learners from a variety of L1 backgrounds go through the same four stages in acquiring English negation:

  1. External (e.g., No this one./No you playing here),
  2. Internal, pre-verbal (e.g., Juana no/don’t have job),
  3. Auxiliary + negative (e.g., I can’t play the guitar),
  4. Analysed don’t (e.g., She doesn’t drink alcohol.)

Apart from these two examples, we may cite the six-stage sequence for English relative clauses (see Doughty, 1991 for a summary) and sequences in many other grammatical domains in a variety of L2s (see Johnston, 1997).

Pienemann’s 5-stage Sequence

Perhaps the most extensive and best-known work in this area has been done by Pienemann, whose Processability Theory grew out of the Multidimensional Model, formulated by the ZISA group, based mainly at the University of Hamburg, in the late seventies. One of the group’s first findings was that all the child and adult learners of German as a second language in the study adhered to the five-stage developmental sequence shown below:

Stage X – Canonical order (SVO)

die kinder spielen mim ball //// the children play with the ball

(Romance learners’ initial SVO hypothesis for German word order is correct in most German sentences with simple verbs.)

Stage X + 1 – Adverb preposing (ADV)

da kinder spielen //// there children play

(Since German has a verb-second rule, requiring subject–verb inversion following a preposed adverb (there play children), all sentences of this form are deviant. The verb-second (or ‘inversion’) rule is only acquired at stage X + 3, however. The adverb-preposing rule itself is optional.)

Stage X + 2 – Verb separation (SEP)

alle kinder muss die pause machen //// all children must the break have

(Verb separation is obligatory in standard German.)

Stage X + 3 – Inversion (INV)

dann hat sie wieder die knoch gebringt //// then has she again the bone brought

(Subject and inflected verb forms must be inverted after preposing of elements.)

Stage X + 4 – Verb-end (V-END)

er sagte, dass er nach hause kommt //// he said that he home comes

(In subordinate clauses, the finite verb moves to final position.)

Learners did not abandon one interlanguage rule for the next as they progressed; they added new ones while retaining the old, and thus the presence of one rule implies the presence of earlier rules.
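
The retention of earlier rules makes the sequence implicational: a learner’s repertoire should always be a "prefix" of the ordered sequence of stages, a property researchers check with Guttman-style implicational scaling. A minimal sketch of that check, using the ZISA stage labels; the learner data are invented for illustration:

```python
# Check whether learners' rule repertoires form an implicational (Guttman) scale:
# if a learner has acquired a rule, they must also have all earlier rules.
# The learner data below are hypothetical, for illustration only.

RULES = ["SVO", "ADV", "SEP", "INV", "V-END"]  # stages X .. X + 4

def is_implicational(repertoire):
    """True if the acquired rules form a prefix of the ordered RULES list."""
    acquired = [rule in repertoire for rule in RULES]
    # A valid scale is a run of Trues followed by a run of Falses:
    # no rule may be present while an earlier rule is absent.
    return all(a or not b for a, b in zip(acquired, acquired[1:]))

learners = {
    "learner A": {"SVO"},
    "learner B": {"SVO", "ADV", "SEP"},
    "learner C": {"SVO", "SEP"},  # skips ADV: violates the implicational scale
}

for name, repertoire in learners.items():
    print(name, is_implicational(repertoire))
```

Learner C’s pattern is the kind of counter-evidence that would falsify the claimed sequence, which is what makes the implicational claim empirically strong.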

A few words about the evidence. There is the issue of what it means to say that a structure has been acquired, and I’ll just mention three objections that have been raised. First, in the L1 morpheme studies, a structure was assumed to be acquired when it was supplied at a rate of 90% in obligatory contexts in three consecutive samples. The problem with such a measure is how one defines an “obligatory” context, and, further, that by dealing only with obligatory contexts it ignores occurrences of the morpheme in incorrect contexts. Second, Pienemann takes acquisition of a structure to be the point at which it emerges in the interlanguage, its first “non-imitative use”, which many say is hard to operationalise. Third, in work reported by Johnson, statistical measures have been used with an experimental group of L2 learners and a control group of native speakers: the performance of both groups is measured, and if the L2 group’s performance is not significantly different from the control group’s, the L2 group can be said to have acquired the structure under examination. Again, one might well question this measure.
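
The 90% criterion can be stated operationally: a morpheme counts as acquired from the first of three consecutive samples in which it is supplied in at least 90% of obligatory contexts. A minimal sketch of that criterion; the sample figures are invented for illustration, and the definition of an "obligatory context" is, as noted above, exactly the contested part:

```python
# Brown-style acquisition criterion: 90% suppliance in obligatory contexts
# across three consecutive samples. The sample data are hypothetical.

def suppliance_rate(supplied, obligatory):
    """Proportion of obligatory contexts in which the form was supplied."""
    return supplied / obligatory if obligatory else 0.0

def acquisition_point(samples, threshold=0.9, run=3):
    """Index of the first sample that starts a run of `run` samples at or
    above `threshold`, or None if the criterion is never met."""
    rates = [suppliance_rate(s, o) for s, o in samples]
    for i in range(len(rates) - run + 1):
        if all(r >= threshold for r in rates[i:i + run]):
            return i
    return None

# (supplied, obligatory contexts) per successive sample -- invented figures
plural_s = [(3, 10), (6, 10), (9, 10), (10, 11), (9, 10), (10, 10)]
print(acquisition_point(plural_s))  # 2: the run of >=90% samples starts here
```

Note what the criterion cannot see: uses of the morpheme in non-obligatory contexts never enter the calculation, which is the first objection above.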

To return to developmental sequences, by the end of the 1990s, there was evidence of stages of development of an interlanguage system from studies in the following areas:

  • morphemes,
  • negation,
  • questions,
  • word order,
  • embedded clauses,
  • pronouns,
  • references to the past.

Discussion

Together these studies lend very persuasive support to the view that L2 learners follow a fairly rigid developmental route. Moreover, this developmental route sometimes bore little resemblance either to the L1 of the learner or to the L2 being learnt. For example, Hernández-Chávez (1972) showed that although the plural is realised in almost exactly the same way in Spanish and in English, Spanish children learning English still went through a phase of omitting plural marking. Prior to this it had been assumed that second language learners’ productions were a mixture of L1 and L2, with the L1 either helping or hindering the process depending on whether structures are similar or different in the two languages. This was clearly shown not to be the case. All of which was taken to suggest that SLA involves the development of interlanguages in learners, and that these interlanguages are linguistic systems in their own right, with their own sets of rules.

There are lots of interesting questions and issues that I haven’t even mentioned here about interlanguage development in general and about orders of acquisition in SLA in particular. It’s worth pointing out that Corder’s and Selinker’s initial proposal of interlanguage as a construct was an attempt to explain the phenomenon of fossilisation. As Tarone (2006) says:

Second language learners who begin their study of the second language after puberty do not succeed in developing a linguistic system that approaches that developed by children acquiring that language natively. This observation led Selinker to hypothesize that adults use a latent psychological structure (instead of a LAD) to acquire second languages.  

The five psycholinguistic processes of this latent psychological structure that shape interlanguage were hypothesized (Selinker, 1972) to be (a) native language transfer, (b) overgeneralization of target language rules, (c) transfer of training, (d) strategies of communication, and (e) strategies of learning.

It wasn’t long before Krashen’s Monitor Model claimed that there was no evidence of L1 transfer in the morpheme studies, denied the central role of L1 transfer which the original Interlanguage Hypothesis gave it, and also denied that there were sensitive (critical) periods in SLA. Generativist studies of SLA also minimised the role of L1 transfer. And there have been some important updates on the interlanguage hypothesis since the 1980s, too (see Tarone (2006) and Hong and Tarone (2016) for example).

My main concern in discussing interlanguage development, as you must be all too well aware by now, is to draw attention to the false assumptions on which coursebook-based ELT is based. Coursebooks assume that structures can be learned on demand. If this were the case, then acquisition sequences would reflect the sequences in which coursebooks present structures, but they do not. On the contrary, the acquisition order is remarkably resilient to coursebook presentation sequences. Long (2015, p. 21) gives some examples to demonstrate this:

…. Pica (1983) for English morphology by Spanish-speaking adults, by Lightbown (1983) for the present continuous -ing form by French-speaking children in Quebec being taught English as a second language (ESL) using the Lado English series, by Pavesi (1986) for relative clauses by children learning English as a foreign language (EFL) in Italy and Italian adults learning English naturalistically in Scotland, and by R. Ellis (1989) for English college students learning word order in German as a foreign language.

Long goes on to point out that accuracy orders and developmental sequences found in instructed settings match those obtained for the same features in studies of naturalistic acquisition, and that the striking commonalities observed suggest powerful universal learning processes are at work. He concludes (Long, 2015, p.23):

… instruction cannot make learners skip a stage or stages and move straight to the full native version of a construction, even if it is exclusively the full native version that is modelled and practiced. Yet that is what should happen all the time if adult SLA were a process of explicit learning of declarative knowledge of full native models, their comprehension and production first proceduralized and then made fluent, i.e., automatized, through intensive practice. One might predict utterances with occasional missing grammatical features during such a process, but not the same sequences of what are often completely new, never-modelled interlingual constructions, and from all learners.

While practice has a role in automatizing what has been learned, i.e., in improving control of an acquired form or structure, the data show that L2 acquisition is not simply a process of forming new habits to override the effects of L1 transfer; powerful creative processes are at work. In fact, despite the presentation and practice of full native norms in focus-on-forms instruction, interlanguages often stabilize far short of the target variety, with learners persistently communicating with non-target-like forms and structures they were never taught, and target-like forms and structures with non-target-like functions (Sato 1990).

Conclusion

That’s a taste of the evidence. We can’t conclude from it, as a few insist, that there’s no point in any kind of explicit teaching, but it does mean that, in Doughty and Long’s words (2003):

The idea that what you teach is what they learn, and when you teach it is when they learn it, is not just simplistic, but wrong.

The dynamic nature of SLA means that distinguishing between stages of interlanguage development is difficult – the stages overlap, and there is variation within stages – and so the simplistic view of a “Natural Order”, where a learner starts from Structure 1 and works through to, let’s say, Structure 549, is absurd. Imagine trying to organise stages such as those identified by Pienemann into ordered sets! As Gregg (1984) points out:

If the structures of English are divided into varying numbers of ordered sets, the number of sets varying according to the individual, then it makes little sense to talk about a ‘natural order’. If the number of sets varies from individual to individual, then the membership of any given set will also vary, which makes it very difficult to compare individuals, especially since the content of these sets is virtually completely unknown.

So the evidence of interlanguage development doesn’t mean that we can design a syllabus which coincides with any “natural order”, but it does suggest that we should respect the learners’ internal syllabuses and their developmental sequences, which most coursebooks fail to do. Doughty and Long (2003) argue that the only way to respect the learner’s internal syllabus is

by employing an analytic, not synthetic, syllabus, thereby avoiding futile attempts to impose an external linguistic syllabus on learners (e.g., the third conditional because it is the third Wednesday in November), and instead, providing input that is at least roughly tuned to learners’ current processing capacity by virtue of having been negotiated by them during collaborative work on pedagogic tasks.

Long has since (Long, 2015) given a full account of his own version of task-based language teaching, and whether or not we are in a position to implement a similar methodology in our own teaching situations, at least we can agree that we’d be well-advised to concentrate more on facilitating implicit learning than on explicit teaching, to give more carefully-tuned input, and to abandon the type of synthetic syllabus used in coursebooks in favour of an analytic one.

 

Bibliography

Sorry, I can’t give all the references. Here are a few “key” texts. Tarone (2006), free to download (see below), is a good place to start.

Adjemian , C. (1976) On the nature of interlanguage systems. Language Learning 26, 297–320.

Bailey, N., Madden, C. and Krashen, S. (1974) Is there a “natural sequence” in adult second language learning? Language Learning 24, 235-243.

Corder, S. P. (1967) The significance of learners’ errors. International Review of Applied Linguistics (IRAL) 5, 161-169.

Corder, S. P. (1981) Error analysis and interlanguage. Oxford: Oxford University Press.

Dulay, H. and Burt, M. (1974a) Errors and strategies in child second language acquisition. TESOL Quarterly 8, 12-36.

Dulay, H. and Burt, M. (1974b) Maturational sequences in child second language acquisition. Language Learning 24, 37-53.

Doughty, C. and Long, M.H. (2003) Optimal Psycholinguistic Environments for Distance Foreign Language Learning. Downloadable here: http://llt.msu.edu/vol7num3/doughty/default.html

Gregg, K. R. (1984) Krashen’s monitor and Occam’s razor. Applied Linguistics 5, 79-100.

Hong, Z. and Tarone, E. (Eds.) (2016) Interlanguage: Forty years later. Amsterdam: Benjamins.

Krashen, S. (1981) Second language acquisition and second language learning. Oxford: Pergamon Press.

Long, M. H. (2015) SLA and TBLT. Oxford, Wiley Blackwell.

Nemser, W. (1971) Approximative systems of foreign language learners. IRAL 9, 115-123.

Selinker, L. (1972) Interlanguage. IRAL 10, 209-231.

Selinker, L. (1992) Rediscovering interlanguage. London: Longman.

Schachter, J. (1974) An error in error analysis. Language Learning 24, 3-17.

Tarone, E. (1988) Variation in interlanguage. London: Edward Arnold.

Tarone, E. (2006) Interlanguage. Downloadable here: http://socling.genlingnw.ru/files/ya/interlanguage%20Tarone.PDF

 

What good is relativism?

Scott Thornbury (2008) asks “What good is SLA Theory?”. This is a question beloved of populists, all of whom agree that it’s of no use to anyone except the rarefied crackpots who dream it up. Thornbury sets the tone of his own populist piece by saying that most teachers display a general ignorance of, and indifference to, SLA theory, due to “the visceral distrust that most practitioners feel towards ivory-tower theorising”. If he’d said that most English language teachers have an ingrained distrust of academic research into language learning, we might have asked him for some evidence to support the assertion, but who can question that ivory tower theorists are not to be trusted? Note how Thornbury, who teaches a postgraduate course on theories of SLA at a New York university, and who has published many articles in serious, peer-reviewed journals, smears academics with the “ivory tower” brush while himself sidling up to the hard-working, down-to-earth sceptics who read English Teaching Professional magazine.

Thornbury gives a brief sketch of four types of SLA theory and then gives four reasons why “knowledge of theory” is a good thing for teachers. But you can tell that his heart’s not in it. He knows perfectly well that “knowledge” of the theories of SLA he mentions is of absolutely no use to anybody unless those theories are properly scrutinised and evaluated, but, rather than attempt any such evaluation, Thornbury prefers to devote the article to reassuring everybody that there’s no need to take SLA theories too seriously.

To help him drive home this anti-intellectual message, Thornbury turns to “SLA heavyweight” John H Schumann. Most SLA scholars regard the extreme relativist position Schumann adopts in his 1983 article as almost comically preposterous, while his acculturation theory is about as “heavyweight” as Dan Brown’s theory of the Holy Grail.  But anyway, judge for yourself.  Schumann (1983) suggests that theory construction in SLA should be regarded not as a scientific task, but as a creative endeavour, like painting. Rather than submitting rival theories of SLA to careful scrutiny, looking for coherence, logical consistency and empirical adequacy, for example, Schumann suggests that competing theories of SLA should be evaluated in the same way that one might evaluate different paintings.

“When SLA is regarded as art not science, Krashen’s and McLaughlin’s views can coexist as two different paintings of the language learning experience… Viewers can choose between the two on an aesthetic basis favouring the painting which they find phenomenologically  true to their experience.”

Thornbury seems to admire this suggestion. He comments:

“This is why metaphors have such power. We tend to be well disposed to a theory if its dominant imagery chimes with our own values and beliefs. If we are inclined to think of learning as the meeting of minds, for example, an image such as the Zone of Proximal Development is more likely to attract us than the image of a black box.”

Schumann’s paper was an early salvo in what, ten years later, turned into a spirited war between academics who adopted a relativist epistemology and those who held to a rationalist epistemology. The war is still raging, and, typically enough, Thornbury stays well clear of the front line while maintaining friendly relations with both camps. But let’s be clear: relativism, even if not often taken to the extreme Schumann takes it to, is taken seriously by many academics, including Larsen-Freeman and sometimes (depending on how the wind’s blowing) Thornbury himself. Rational criteria for the evaluation of rival theories of SLA, including logical consistency and the weighing of empirical evidence, are abandoned in favour of the “thick description” of different “stories” or “narratives”, all of them deemed to have as much merit as each other. Relativists suggest that trying to explain SLA in the way that rationalists (or “positivists”, as they like to call them) do is no more than “science envy”, and basically a waste of time. Which is actually the gist of Thornbury’s argument in the 2008 article discussed here.

In response to this relativist position, let me quote Larry Laudan, who says

“The displacement of the idea that facts and evidence matter by the idea that everything boils down to subjective interests and perspectives is—second only to American political campaigns—the most prominent and pernicious manifestation of anti-intellectualism in our time.”

Thornbury asks “What good is SLA theory?” without making any attempt to critically evaluate the rival theories he outlines. But then, why should he? After all, if you adopt a relativist stance, then no theory is right, none is of much importance, so why bother to sort them out? Instead of going to all that unnecessary trouble, all you have to do is take a quick look at Thornbury’s little summary in Table 1 and choose the theory that grabs you, or rather, choose the “dominant metaphor” which best chimes with your own values and beliefs. And if you can’t be bothered to check out which theory goes best with your values and beliefs, then why not use some other, equally arbitrary subjective criterion? You could toss a coin, or stare intently at a piece of toast, or ask Jeremy Harmer.

“What good is SLA theory?” is actually a very stupid question. It’s as if “SLA theory” were some sort of uncountable noun, like toothpaste. What good is toothpaste? It doesn’t actually make much difference to brushing your teeth. But “SLA theory” is not uncountable; some SLA theories are very bad, and some are very good, and consequently we need to agree on criteria for evaluating them so as to concentrate on what we can learn from the best theories. Instead of pandering to the misinformed view that SLA theories are equally unscientific, equally based on metaphors, equally relative in their appeal, Thornbury could have used the space he had in the journal to examine – however “lightly” – the relative merits of the theories he discusses, and the usefulness to teachers of the best theories. He could have mentioned some of the findings of psycholinguistic research into the influence of the L1; age differences and sensitive periods; error correction; incomplete trajectories; explicit and implicit learning, and much besides. He could have mentioned one or two of the most influential current hypotheses about SLA, for example that instruction can influence the rate but not the route of interlanguage development.

He could have also pointed out that those adopting a relativist epistemology have achieved very little; that Larsen-Freeman’s exploration of complexity theory has achieved precisely nothing; that his own attempts to use emergentism to conjure up “grammar for free” have been equally woeful; and that the relativists he supports are more responsible than anyone else for the popular view that academics sit in an ivory tower writing unintelligible articles packed with obscurantist jargon for publication in journals that only they bother to read.

References

Laudan, L. (1990) Science and Relativism: Dialogues on the Philosophy of Science. Chicago, Chicago University Press.

Schumann, J. H. (1983) Art and science in SLA research. Language Learning 33, 409-75.

Larsen-Freeman’s IATEFL 2016 Plenary

In her plenary talk, Larsen-Freeman argued that it’s time to replace “input-output metaphors” with “affordances”. The metaphors of input and output belong to a positivist, reductionist approach to SLA which needs to be replaced by “a new way of understanding” language learning based on Complexity Theory.

Before we look at Larsen-Freeman’s new way of understanding, let’s take a quick look at what she objects to by reviewing one current approach to understanding the process of SLA.

Interlanguage and related constructs 

There’s no single, complete and generally agreed-upon theory of SLA, but there’s a widespread view that second language learning is a process whereby learners gradually develop their own autonomous grammatical system with its own internal organising principles. This system is referred to as “interlanguage”. Note that “interlanguage” is a theoretical construct (not a fact and not a metaphor) which has proved useful in developing a theory of some of the phenomena associated with SLA; the construct itself needs further study, and the theory it’s part of is incomplete, and possibly false.

Support for the hypothesis of interlanguages comes from observations of U-shaped behaviour in SLA, which indicate that learners’ interlanguage development is not linear. An example of U-shaped behaviour is this:

[Slide illustrating U-shaped behaviour with the irregular plural: feet is first produced correctly, then regularised to foots, then produced correctly again.]

The example here is from a study in the 70s. Another example comes from morphological development, specifically, the development of English irregular past forms, such as came, went, broke, which are supplanted by rule-governed, but deviant past forms: comed, goed, breaked. In time, these new forms are themselves replaced by the irregular forms that appeared in the initial stage.

This U-shaped learning curve is observed in learning the lexicon, too, as Long (2011) explains. Learners have to master the idiosyncratic nature of words, not just their canonical meaning. When learners encounter a word in a correct context, the word is not simply added to a static cognitive pile of vocabulary items. Instead, they experiment with the word, sometimes using it incorrectly, thus establishing where it works and where it doesn’t. The suggestion is that only by passing through a period of incorrectness, in which the word is used in a variety of ways, can learners climb back up the U-shaped curve. To add to the example of feet above, there’s the example of the noun shop. Learners may first encounter the word in a sentence such as “I bought a pastry at the coffee shop yesterday.” Then they experiment with deviant utterances such as “I am going to the supermarket shop,” correctly associating the word ‘shop’ with a place where they can purchase goods, but getting it wrong. By making these incorrect utterances, the learner works out what is appropriate and what is not, because “at each stage of the learning process, the learner outputs a corresponding hypothesis based on the evidence available so far” (Carlucci and Case, 2011).

Automaticity

The re-organisation of new information as learners move along the U-shaped curve is a characteristic of interlanguage development. Associated with this restructuring is the construct of automaticity. Language acquisition can be seen as a complex cognitive skill in which, as your skill level in a domain increases, the amount of attention you need to perform generally decreases. The basis of processing approaches to SLA is that we have limited resources for processing information, and so the more of the process we can make automatic, the more processing capacity we free up for other work. Active attention requires more mental work, and thus developing the skill of fluent language use involves making more and more of it automatic, so that no active attention is required. McLaughlin (1987) compares learning a language to learning to drive a car: through practice, language skills go from a ‘controlled process’, in which great attention and conscious effort are needed, to an ‘automatic process’.

Automaticity can be said to occur when associative connections between a certain kind of input and a certain kind of output become established. For instance, in this exchange:

  • Speaker 1: Morning.
  • Speaker 2: Morning. How are you?
  • Speaker 1: Fine, and you?
  • Speaker 2: Fine.

the speakers, in most situations, don’t actively think about what they’re saying. In the same way, second language learners learn new language through the use of controlled processes, which become automatic and in turn free up controlled processing capacity, which can then be directed at new forms.

Sequences

There is a further hypothesis that is generally accepted among those working on processing models of SLA, namely that L2 learners pass through developmental sequences on their way to some degree of communicative competence, exhibiting common patterns and features across differences in learners’ age and L1, acquisition context, and instructional approach. Examples of such sequences are found in the well known series of morpheme studies; the four-stage sequence for ESL negation; the six-stage sequence for English relative clauses; and the sequence of question formation in German (see Long, 2015 for a full discussion).

Development of the L2 exhibits plateaus, occasional movement away from, not toward, the L2, and U-shaped or zigzag trajectories rather than smooth, linear contours. No matter what the learners’ L1 might be, no matter what the order or manner in which target-language structures are presented to them by teachers, learners analyze the input and come up with their own interim grammars, and they master the structures in roughly the same manner and order whether learning in classrooms, on the street, or both. This led Pienemann to formulate his learnability hypothesis and teachability hypothesis: what is processable by students at any time determines what is learnable, and, thereby, what is teachable (Pienemann, 1984, 1989).

All these bits and pieces of an incomplete theory of L2 learning suggest that learners themselves, not their teachers, have most control over their language development. As Long (2011) says:

Students do not – in fact, cannot – learn (as opposed to learn about) target forms and structures on demand, when and how a teacher or a coursebook decree that they should, but only when they are developmentally ready to do so. Instruction can facilitate development, but needs to be provided with respect for, and in harmony with, the learner’s powerful cognitive contribution to the acquisition process.

Let me emphasise that the aim of this psycholinguistic research is to understand how learners deal psychologically with linguistic data from the environment (input) in order to understand and transform the data into competence of the L2. Constructs such as input, intake, noticing, short and long term memory, implicit and explicit learning, interlanguage, output, and so on are used to facilitate the explanation, which takes the form of a number of hypotheses. No “black box” is used as an ad hoc device to rescue the hypotheses. Those who make use of Chomsky’s theoretical construct of an innate Language Acquisition Device in their theories of SLA do so in such a way that their hypotheses can be tested. In any case, it’s how learners interact psychologically with their linguistic environment that interests those involved in interlanguage studies. Other researchers look at how learners interact socially with their linguistic environment, and many theories contain both sociolinguistic and psycholinguistic components.

So there you are. There’s a quick summary of how some scholars try to explain the process of SLA from a psychological perspective. But before we go on, we have to look at the difference between metaphors and theoretical constructs.

Metaphors and Constructs

A metaphor is a figure of speech in which a word or phrase denoting one kind of object or idea is used in place of another to suggest a likeness or analogy between them. She’s a tiger. He died in a sea of grief. To say that “input” is a metaphor is to say that it represents something else, and so it does. To say that we should be careful not to mistake “input” for the real thing is well advised. But to say that “input” as used in the way I used it above is a metaphor is quite simply wrong. No scientific theory of anything uses metaphors because, as Gregg (2010) points out:

There is no point in conducting the discussion at the level of metaphor; metaphors simply are not the sort of thing one argues over. Indeed, as Fodor and Pylyshyn (1988: 62, footnote 35) say, ‘metaphors … tend to be a license to take one’s claims as something less than serious hypotheses.’ Larsen-Freeman (2006: 590) reflects the same confusion of metaphor and hypothesis: ‘[M]ost researchers in [SLA] have operated with a “developmental ladder” metaphor (Fischer et al., 2003) and under certain assumptions and postulates that follow from it …’ But of course assumptions and postulates do not follow from metaphors; nothing does.

In contrast, theoretical constructs such as input, intake, noticing, automaticity, and so on, define what they stand for, and each of them is used in the service of exploring a hypothesis or a more general theory. All of the theoretical constructs named above, including “input”, are theory-laden: they’re terms used in a special way in the service of the hypothesis or theory they are part of, and their validity or truth value can be tested by appeals to logic and empirical evidence. Some constructs, for example those used in Krashen’s theory, are found wanting because they’re so poorly defined as to be circular. Other constructs, for example noticing, are the subject of both logical and empirical scrutiny. None of these constructs is correctly described as a metaphor, and Larsen Freeman’s inability to distinguish between a theoretical construct and a metaphor plagues her incoherent argument. In short: metaphors are no grounds on which to build any theory, and dealing in metaphors assures that no good theory will result.

Get it? If you do, you’re a step ahead of Larsen Freeman, who seems to have taken several steps backwards since 1991, when she co-authored, with Mike Long, the splendid An introduction to second language acquisition research.

Let’s now look at what Larsen Freeman said in her plenary address.

The Plenary

Larsen Freeman read this out:

[slide image]

Then, with this slide showing:

[slide image]

she said this:

Do we want to see our students as black boxes, as passive recipients of customised input, where they just sit passively and receive? Is that what we want?

Or is it better to see our learners as actively engaged in their own process of learning and discovering the world finding excitement in learning and working in a collaborative fashion with their classmates and teachers?

It’s time to shift metaphors. Let’s sanitise the language. Join with me; make a pledge never to use “input” and “output”.

You’d be hard put to come up with a more absurd straw man argument, or a more trivial treatment of a serious issue. Nevertheless, that’s all Larsen Freeman had to say about it.

Complex

With input and output safely consigned to the dustbin of history, Larsen Freeman moved on to her own new way of understanding. She has a “theoretical commitment” to complexity theory, but, she said:

If you don’t want to take my word for it that ecology is a metaphor for now, .. or complexity theory is a theory in keeping with ecology, I refer you to your own Stephen Hawkins, who calls this century “the century of complexity.”

Well, if the great Stephen Hawkins calls this century “the century of complexity”, then complexity theory must be right, right?

With Hawkins’ impressive endorsement in the bag, and with a video clip of a flock of birds avoiding a predator displayed on her presentation slide, Larsen Freeman began her account of the theory that she’s now so committed to.

[slide image]

She said:

Instead of thinking about reifying and classifying and reducing, let’s turn to the concept of emergence – a central theme in complexity theory. Emergence is the idea that in a complex system different components interact and give rise to another pattern at another level of complexity.

A flock of birds part when approached by a predator and then they re-group. A new level of complexity arises, emerges, out of the interaction of the parts.

All birds take off and land together. They stay together as a kind of superorganism. They take off, they separate, they land, as if one.

You see how that pattern emerges from the interaction of the parts?

Notice there’s no central authority: no bird says “Follow me I’ll lead you to safety”; they self organise into a new level of complexity.

What are the levels of complexity here? What is the new level of complexity that emerges out of the interaction of the parts? Where does the parting and reformation of the flock fit in to these levels of complexity? How is “all birds take off and land together” evidence of a new level of complexity?

What on earth is she talking about? Larsen Freeman constantly gives the impression that she thinks what she’s saying is really, really important, but what is she saying? It’s not that it’s too complicated, or too complex; it’s that it just doesn’t make much sense. “Beyond our ken”, perhaps.

Open

The next part of Larsen Freeman’s talk that addressed complexity theory was introduced by her reading aloud this text:

[slide image]

After which she said:

Natural themes help to ground these concepts. […]

I invite you to think with me and make some connections. Think about the connection between an open system and language. Language is changing all the time; it’s flowing, but it’s also changing. […]

Notice in this eddy, in this stream, that pattern exists in the flux, but all the particles that are passing through it are constantly changing. It’s not the same water, but it’s the same pattern. […]

So this world (the stream in the picture) exists because last winter there was snow in the mountains. And the snow pattern accumulated such that now when the snow melts, the water feeds into many streams, this one being one of them. And unless the stream is dammed, or the water ceases, the source ceases, the snow melts, this world will continue. English goes on, even though it’s not… the English of Shakespeare, and yet it still has the identity we know and call English. So these systems are interconnected both spatially and temporally, in time.

Again, what is she talking about? What systems is she talking about? What does it all mean? The key seems to be “patterns in the flux”, but then, what’s so new about that?

At some point Larsen Freeman returned to this “patterns in the flux” issue. She showed a graph of the average performance of a group of students which indicated that the group, when seen as a whole, had made progress. Then she showed the graphs of the individuals who made up the group, and it became clear that one or two individuals hadn’t made any progress. What do we learn from this? I thought she was going to say something about a reverse level of complexity, or granularity, or patterns disappearing from the flux from a lack of interaction of the parts, or something. But no. The point was:

When you look at group average and individual performance, they’re different.

Just in case that’s too much for you to take in, Larsen Freeman explained:

Variability is ignored by statistical averages. You can make generalisations about the group but don’t assume they apply to individuals. Individual variability is the essence of adaptive behaviour. We have to look at patterns in the flux. That’s what we know from a complexity theory ecological perspective.

Adaptation

Returning to the exposition of complexity theory, there’s one more bit to add: adaptiveness. Larsen Freeman read aloud the text from this slide

[slide image]

The example is the adaptive immune system, not the innate immune system, the adaptive one. Larsen Freeman invited the audience to watch the video and see how the good microbe got the bad one, but I don’t know why. Anyway, the adaptive immune system is an example of a system that is nimble, dynamic, and has no centralised control, which is a key part of complexity theory.

And that’s all, folks! That’s all Larsen Freeman had to say about complexity theory: it’s complex, open and adaptive. I’ve rarely witnessed such a poor attempt to explain anything.

Affordances

Then Larsen Freeman talked about affordances. This, just to remind you, is her alternative to input.

There are two types of affordances:

  1. Property affordances. These are in the environment. You can design an affordance. New affordances for classroom learning include providing opportunities for engagement; instruction and materials that make sure everybody learns; using technology.
  2. Second order affordances. These refer to the learner’s perception of, and relation with, property affordances. Students are not passive receivers of input. Second order affordances include the agent, the perceiver, in the system. They are dynamic and adaptive; they emerge when aspects of the environment are in interaction with the agent. The agent’s relational stance to the property affordances is key. A learner’s perception of, and interaction with, the environment is what creates a second order affordance.

To help clarify things, Larsen Freeman read this to the audience:

[slide image]

(Note here that their students “operate between languages”, unlike mine and yours (unless you’ve already taken the pledge and signed up) who learn a second or foreign language. Note also that Thoms calls “affordance” a construct.)

If I’ve got it right, “affordances” refer first to anything in the environment that might help learners learn, and second to the learner’s relational stance to them. The important bit of affordances is the relational stance bit: the learner’s perception of, and interaction with, the environment. Crucially, the learner’s perception of the affordance opportunities has to be taken into account. “Really?” you might say, “That’s what we do in the old world of input too – we try to take into account the learner’s perception of the input!”

Implications for teaching

Finally, Larsen Freeman addressed the implications of her radical new way of understanding for teaching.

Here’s an example. In the old world which Larsen Freeman is so eager to leave behind, where people still understand SLA in terms of input and output, teachers use recasts. In the shiny new world of complexity theory and emergentism, recasts become access-creating affordances.

[slide image]

Larsen Freeman explains that rather than just recast, you can “build on the mistake” and thus “manage the affordance created by it.”

And then there’s adaptation.

[slide image]

Larsen Freeman refers to the “Inert Knowledge Problem”: students can’t use knowledge learned in class when they try to operate in the real world. How, Larsen Freeman asks, can they adapt their language resources to this new environment?  Here’s what she says:

So there’s a sense in which a system like that is not externally controlled through inputs and outputs but creates itself. It holds together in a self-organising manner – like the bird flock – that makes it have its individuality and directiveness in relation to the environment. Learning is not the taking in of existing forms but a continuing dynamic adaptation to context, which is always changing. In order to use language patterns beyond a given occasion, students need experience in adapting to multiple and variable contexts.

“A system like that”?? What system is she talking about? Well, it doesn’t really matter, does it, because the whole thing is, once again, beyond our ken; well beyond mine, anyway.

Larsen Freeman gives a few practical suggestions to enhance our students’ ability to adapt, “to take their present system and mold (sic) it to a new context for a present purpose.”

• You can do the same task in less time.

• Don’t just repeat it; change the task a little bit.

• Or make it easier.

• Or give them a text to read.

• Or slow down the recording.

Or use a Think Aloud technique in order to freeze the action, “so that you explain the choices that exist”. For example:

If I say “Can I help you?”, the student says:

“I want a book.”

and that might be an opportunity to pause and say:

“You can say that. That’s OK; I understand your meaning.”

But another way to say it is to say

“I would like a book.”

Right? To give information. Importantly, adaptation does not mean sameness, but we are trying to give information so that students can make informed choices about how they wish to be, um,… seemed.

And that was about it. I don’t think I’ve left any major content out.

Conclusion

This is the brave new world that two of the other plenary speakers – Richardson and Thornbury – want to be part of. Both of them join in Larsen Freeman’s rejection of the explanation of the process of SLA that I sketched at the start of this post, and both of them are enthusiastic supporters of Larsen Freeman’s version of complexity theory and emergentism.

Judge for yourself.      

 

References

Carlucci, L. and Case, J. (2013) On the necessity of U-shaped learning. Topics in Cognitive Science, 5(1), 56-88.

Gregg, K. R. (2010) Shallow draughts: Larsen-Freeman and Cameron on complexity. Second Language Research, 26(4) 549–56.

McLaughlin, B. (1987) Theories of Second Language Learning.  London: Edward Arnold.

Pienemann, M. (1987) Determining the influence of instruction on L2 speech processing. Australian Review of Applied Linguistics 10, 83-113.

Pienemann, M. (1989) Is language teachable? Psycholinguistic experiments and hypotheses. Applied Linguistics 10, 52-79.

A New Term Starts!


Here we go again – a new term is starting at universities offering Masters in TESOL or AL, so once again I’ve moved this post to the front.

Again, let’s run through the biggest problems students face: too much information; choosing appropriate topics; getting the hang of academic writing.

1. Too much Information.

An MA TESOL curriculum looks daunting, the reading lists look daunting, and the books themselves often look daunting. Many students spend far too long reading and taking notes in a non-focused way: they waste time by not thinking right from the start about the topics that they will eventually choose to base their assignments on.  So, here’s the first tip:

The first thing you should do when you start each module is think about what assignments you’ll do.

Having got a quick overview of the content of the module, make a tentative decision about which parts of it to concentrate on and about your assignment topics. This will help you to choose reading material, and will give focus to your studies.

Similarly, you have to learn what to read, and how to read. When you start each module, read the course material and don’t go out and buy a load of books. And here’s the second tip:

Don’t buy any books until you’ve decided on your topic, and don’t read in any depth until then either.

Keep in mind that you can download at least 50% of the material you need from library and other web sites, and that more and more books can now be bought in digital format. To do well in this MA, you have to learn to read selectively. Don’t just read. Read for a purpose: read with a particular topic (better still, with a well-formulated question) in mind. Don’t buy any books before you’re absolutely sure you’ll make good use of them.

2. Choosing an appropriate topic.

The trick here is to narrow down the topic so that it becomes possible to discuss it in detail, while still remaining central to the general area of study. So, for example, if you are asked to do a paper on language learning, “How do people learn a second language?” is not a good topic: it’s far too general. “What role does instrumental motivation play in SLA?” is a much better topic. Which leads me to Tip No. 3:

The best way to find a topic is to frame it as a question.

Well-formulated questions are the key to all good research, and they are one of the keys to success in doing an MA. A few examples of well-formulated questions for an MA TESL are these:

• What’s the difference between the present perfect and the simple past tense?

• Why is “stress” so important to English pronunciation?

• How can I motivate my students to do extensive reading?

• When’s the best time to offer correction in class?

• What are the roles of “input” and “output” in SLA?

• How does the feeling of “belonging” influence motivation?

• What are the limitations of a Task-Based Syllabus?

• What is the wash-back effect of the Cambridge FCE exam?

• What is politeness?

• How are blogs being used in EFL teaching?

To sum up: Choose a manageable topic for each written assignment. Narrow down the topic so that it becomes possible to discuss it in detail. Frame your topic as a well-defined question that your paper will address.

3. Academic Writing.

Writing a paper at Masters level demands a good understanding of all the various elements of academic writing. First, there’s the question of genre. In academic writing, you must express yourself as clearly and succinctly as possible, and here comes Tip No. 4:

In academic writing “Less is more”.

Examiners mark down “waffle”, “padding”, and generally loose expression of ideas. I can’t remember who, but somebody famous once said at the end of a letter: “I’m sorry this letter is so long, but I didn’t have time to write a short one”. There is, of course, scope for you to express yourself in your own way (indeed, examiners look for signs of enthusiasm and real engagement with the topic under discussion) and one of the things you have to do, like any writer, is to find your own, distinctive voice. But you have to stay faithful to the academic style.

While the content of your paper is, of course, the most important thing, the way you write and present the paper has a big impact on your final grade. Just for example, many examiners, when marking an MA paper, go straight to the Reference section and check that it’s properly formatted and contains all and only the references mentioned in the text. The way you present your paper (double-spaced, proper indentations, and all that stuff); the way you write it (so as to make it cohesive); the way you organise it (so as to make it coherent); the way you give in-text citations; the way you give references; the way you organise appendices – all these are crucial.

Making the Course Manageable

1. Essential steps in working through a module.

Focus: that’s the key. Here are the key steps:

Step 1: Ask yourself: What is this module about? Just as important: What is it NOT about? The point is to quickly identify the core content of the module. Read the Course Notes and the Course Handbook, and DON’T READ ANYTHING ELSE, YET.

Step 2: Identify the components of the module. If, for example, the module is concerned with grammar, then clearly identify the various parts that you’re expected to study. Again, don’t get lost in detail: you’re still just trying to get the overall picture. See the chapters on each module below for more help with this.

Step 3: Do the small assignments that are required. Study the requirements of the MA TESL programme closely to identify which of your written assignments count towards your formal assessment and which do not. Some small assignments are required (you MUST submit them) but do not influence your mark or grade; don’t spend too much time on these, unless they help you prepare for the main assignments that do count.

Step 4: Identify the topic that you will choose for the written assignment that will determine your grade. THIS IS THE CRUCIAL STEP! Reach this point as fast as you can in each module: the sooner you decide what you’re going to focus on, the better your reading, studying, writing and results will be. Once you have identified your topic, then you can start reading for a purpose, and start marshalling your ideas. Again, we will look at each module below, to help you find good, well-defined, manageable topics for your main written assignments.

Step 5: Write an outline of your paper. The outline is for your tutor, and should give a brief overview of your paper. Make sure that your tutor reviews your outline and approves it.

Step 6: Write the First Draft of the paper. Write this draft as if it were the final version: don’t say “I’ll deal with the details (references, appendices, formatting) later”. Make it as good as you can.

Step 7: If you are allowed to do so, submit the first draft to your tutor. Some universities don’t approve of this, so check with your tutor. If your tutor allows such a step, try to get detailed feedback on it. Don’t be content with any general “Well, that looks OK” stuff. Ask “How can I improve it?” and get the fullest feedback possible. Take note of ALL suggestions, and make sure you incorporate ALL of them in the final version.

Step 8: Write the final version of the paper.

Step 9: Carefully proofread the final version. Use a spell-checker. Check all the details of formatting, citations, the Reference section, and appendices. Ask a friend or colleague to check it. If allowed, ask your tutor to check it.

Step 10: Submit the paper: you’re done!

2. Using Resources

Your first resource is your tutor. You’ve paid lots of money for this MA, so make sure you get all the support you need from him or her! Most importantly: don’t be afraid to ask for help whenever you need it. Ask any question you like (while it’s obviously not quite true that “There’s no such thing as a stupid question”, don’t feel intimidated or afraid to ask very basic questions), and as many as you like. Ask your tutor for suggestions on reading, on suitable topics for the written assignments, on where to find materials, on anything at all that you have doubts about. Never submit any written work for assessment until your tutor has said it’s the best you can do. If you think your tutor is not doing a good job, say so, and if necessary, ask for a change.

Your second resource is your fellow students. When I did my MA, I learned a lot in the students’ bar! Whatever means you have of talking to your fellow-students, use them to the full. Ask them what they’re reading, what they’re having trouble with, and share not only your thoughts but your feelings about the course with them.

Your third resource is the library. It is ABSOLUTELY ESSENTIAL to teach yourself, if you don’t already know, how to use a university library. Again, don’t be afraid to ask for help: most library staff are wonderful – the unsung heroes of the academic world. At Leicester University, where I work as an associate tutor on the Distance Learning MA in Applied Linguistics and TESOL course, the library staff exemplify good library practice. They can be contacted by phone and by email, and they have always, without fail, solved the problems I’ve asked them for help with. Whatever university you are studying at, the library staff are probably your most important resource, so be nice to them, and use them to the max. If you’re doing an on-campus course, the most important thing is to learn how the journals and books that the library holds are organised. Since most of you have already studied at university, I suppose you’ve got a good handle on this, but if you haven’t, well, do something! Just as important as the physical library at your university are the internet resources it offers. This is so important that I have dedicated Chapter 10 to it.

Your fourth resource is the internet. Apart from the resources offered by the university library, there is an enormous amount of valuable material available on the internet. See the “Doing an MA” and “Resources” section of this website for more stuff.

I can’t resist mentioning David Crystal’s Encyclopedia of the English Language as a constant resource. A friend of mine claimed that she got through her MA TESL by using this book most of the time, and, while I only bought it recently, I wish I’d had it to refer to when I was doing my MA.

Please use this website to ask questions and to discuss any issues related to your course.

Theoretical Constructs in SLA


Here again is my short contribution to Robinson, P. (ed.) (2013) The Encyclopedia of SLA. London: Routledge.

1. Introduction
Theoretical constructs in SLA include such terms as interlanguage, variable competence, motivation, and noticing. These constructs are used in the service of theories which attempt to explain phenomena, and thus, in order to understand how the term “theoretical construct” is used in SLA, we must first understand the terms “theory” and “phenomena”.

A theory is an attempt to provide an explanation in answer to a question, usually a “Why” or a “How” question. The “Critical Period” theory (see Birdsong, 1999) attempts to answer the question “Why do most L2 learners not achieve native-like competence?” Processability Theory (Pienemann, 1998) attempts to answer the question “How do L2 learners pass through stages of development?” In posing the question that a theory seeks to answer, we refer to “phenomena”: the things that we isolate, define, and then attempt to explain in our theory. In the case of theories of SLA, key phenomena are transfer, staged development, systematicity, variability and incompleteness. (See Towell and Hawkins, 1994: 15.)

A clear distinction must be made between phenomena and observational data. Theories attempt to explain phenomena, and observational data are used to support and test those theories. The important difference between data and phenomena is that the phenomena are what we want to explain, and thus, they are seen as the result of the interaction between some manageably small number of causal factors, instances of which can be found in different situations. By contrast, any type of causal factor can play a part in the production of data, and the characteristics of these data depend on the peculiarities of the experimental design, or data-gathering procedures, employed. As Bogen and Woodward put it: “Data are idiosyncratic to particular experimental contexts, and typically cannot occur outside those contexts, whereas phenomena have stable, repeatable characteristics which will be detectable by means of different procedures, which may yield quite different kinds of data” (Bogen and Woodward, 1988: 317). A failure to appreciate this distinction often leads to poorly-defined theoretical constructs, as we shall see below.

While researchers in some fields deal with such observable phenomena as bones, tides, and sun spots, others deal with non-observable phenomena such as love, genes, hallucinations, gravity and language competence. Non-observable phenomena have to be studied indirectly, which is where theoretical constructs come in. First we name the non-observable phenomena, giving them labels, and then we make constructs. With regard to the non-observable phenomena listed above (love, genes, hallucinations, gravity and language competence), examples of constructs are romantic love, hereditary genes, schizophrenia, the bends, and the Language Acquisition Device. Thus, theoretical constructs are one remove from the original labelling, and they are, as their name implies, packed full of theory; they are, that is, proto-typical theories in themselves, a further invention of ours, an invention made in our attempt to pin down the non-observable phenomena that we want to examine, so that the theories which they embody can be scrutinised. It should also be noted that there is a certain ambiguity in the terms “theoretical construct” and “phenomenon”. The “two-step” process of naming a phenomenon and then a construct outlined above is not always so clear: while for Chomsky (Chomsky, 1986) “linguistic competence” is the phenomenon he wants to explain, to many it has all the hallmarks of a theoretical construct.

Constructs are not the same as definitions; while a definition attempts to clearly distinguish the thing defined from everything else, a construct attempts to lay the ground for an explanation. Thus, for example, while a dictionary defines motivation in such a way that motivation is distinguishable from desire or compulsion, Gardner (1985) attempts to explain why some learners do better than others, and he uses the construct of motivation to do so, in such a way that his construct takes on its own meaning and allows others in the field to test the claims he makes. A construct defines something in a special way: it is a term used in an attempt to solve a problem; indeed, it is often a term that in itself suggests the answer to the problem. Constructs can be everyday terms (like “noticing” and “competence”) and they can also be new words (like “interlanguage”), but, in all cases, constructs are “theory-laden” to the maximum: their job is to support a hypothesis, or, better still, a full-blown theory. In short, then, the job of a construct is to help define and then solve a problem.


2. Criteria for assessing theoretical constructs used in theories of SLA

There is a lively debate among scholars about the best way to study and understand the various phenomena associated with SLA. Those in the rationalist camp insist that an external world exists independently of our perceptions of it, and that it is possible to study different phenomena in this world, to make meaningful statements about them, and to improve our knowledge of them by appeal to logic and empirical observation. Those in the relativist camp claim that there is a multiplicity of realities, all of which are social constructs. Science, for the relativists, is just one type of social construction, a particular kind of language game which has no more claim to objective truth than any other. This article rejects the relativist view and, based largely on Popper’s “Critical Rationalist” approach (Popper, 1972), takes the view that the various current theories of SLA, and the theoretical constructs embedded in them, are not all equally valid, but rather, that they can be critically assessed by using the following criteria (adapted from Jordan, 2004):

1. Theories should be coherent, cohesive, expressed in the clearest possible terms, and consistent. There should be no internal contradictions in theories, and no circularity due to badly-defined terms.
2. Theories should have empirical content. Having empirical content means that the propositions and hypotheses proposed in a theory should be expressed in such a way that they are capable of being subjected to tests, based on evidence observable by the senses, which support or refute them. These tests should be capable of replication, as a way of ensuring the empirical nature of the evidence and the validity of the research methods employed. For example, the claim “Students hate maths because maths is difficult” has empirical content only when the terms “students”, “maths”, “hate” and “difficult” are defined in such a way that the claim can be tested by appeal to observable facts. The operational definition of terms, and crucially, of theoretical constructs, is the best way of ensuring that hypotheses and theories have empirical content.
3. Theories should be fruitful. “Fruitful” in Kuhn’s sense (see Kuhn, 1962:148): they should make daring and surprising predictions, and solve persistent problems in their domain.

Note that the theory-laden nature of constructs is no argument for a relativist approach: we invent constructs, as we invent theories, but we invent them, precisely, in a way that allows them to be subjected to empirical tests. The constructs can be anything we like: in order to explain a given problem, we are free to make any claim we like, in any terms we choose, but the litmus test is the clarity and testability of these claims and the terms we use to make them. Given its pivotal status, a theoretical construct should be stated in such a way that we all know unequivocally what is being talked about, and it should be defined in such a way that it lays itself open to principled investigation, empirical and otherwise. In the rest of this article, a number of theoretical constructs will be examined and evaluated in terms of the criteria outlined above.

3. Krashen’s Monitor Model

The Monitor Model (see Krashen, 1985) is described elsewhere, so let us here concentrate on the deficiencies of the theoretical constructs employed. In brief, Krashen's constructs fail to meet the requirements of the first two criteria listed above: his use of key theoretical constructs such as "acquisition and learning" and "subconscious and conscious" is vague, confusing and not always consistent. More fundamentally, we never find out what exactly "comprehensible input", the key theoretical construct in the model, means. Furthermore, in conflict with the second criterion listed above, there is no way of subjecting the set of hypotheses that Krashen proposes to empirical tests. The Acquisition-Learning hypothesis offers no evidence to support the claim that two distinct systems exist, nor any means of determining whether they are, or are not, separate. Similarly, there is no way of testing the Monitor hypothesis: since the Monitor is nowhere properly defined as an operational construct, there is no way to determine whether the Monitor is in operation or not, and it is thus impossible to determine the validity of the extremely strong claims made for it. The Input Hypothesis is equally mysterious and incapable of being tested: the levels of knowledge are nowhere defined, so it is impossible to know whether i + 1 is present in input, and, if it is, whether or not the learner moves on to the next level as a result. Thus, the first three hypotheses (Acquisition-Learning, the Monitor, and Natural Order) make up a circular and vacuous argument: the Monitor accounts for discrepancies in the natural order, the learning-acquisition distinction justifies the use of the Monitor, and so on.

In summary, Krashen's key theoretical constructs are ill-defined and circular, so that the set is incoherent. This incoherence means that Krashen's theory has such serious faults that it is not really a theory at all. While Krashen's work may be seen as satisfying the third criterion on our list, and while it is extremely popular among EFL/ESL teachers (even among those who, in their daily practice, ignore Krashen's clear implication that grammar teaching is largely a waste of time), the fact remains that his series of hypotheses is built on sand. A much better example of a theoretical construct put to good use is Schmidt's Noticing, which we will now examine.

4. Schmidt’s Noticing Hypothesis

Schmidt's Noticing hypothesis (see Schmidt, 1990) is described elsewhere. Essentially, Schmidt attempts to do away with the "terminological vagueness" of the term "consciousness" by examining three senses of the term: consciousness as awareness, consciousness as intention, and consciousness as knowledge. Consciousness and awareness are often equated, but Schmidt distinguishes between three levels: Perception, Noticing and Understanding. The second level, Noticing, is the key to Schmidt's eventual hypothesis. The importance of Schmidt's work is that it clarifies the confusion surrounding many terms used in psycholinguistics (not least Krashen's "acquisition/learning" dichotomy) and, furthermore, it develops one crucial part of a general processing theory of the development of interlanguage grammar.

Our second evaluation criterion requires that theoretical constructs are defined in such a way as to ensure that hypotheses have empirical content, and thus we must ask: what exactly does Schmidt's concept of noticing refer to, and how can we be sure when it is, and is not, being used by L2 learners? In his 1990 paper, Schmidt claims that noticing can be operationally defined as "the availability for verbal report", "subject to various conditions". He adds that these conditions are discussed at length in the verbal report literature, but he does not discuss the issue of operationalisation any further. Schmidt's 2001 paper gives various sources of evidence of noticing, and points out their limitations. These sources include learner production (but how do we identify what has been noticed?), learner reports in diaries (but diaries span months, while cognitive processing of L2 input takes place in seconds, and keeping a diary requires not just noticing but also reflexive self-awareness), and think-aloud protocols (but we cannot assume that the protocols identify all the examples of target features that were noticed).

Schmidt argues that the best test of noticing is that proposed by Cheesman and Merikle (1986), who distinguish between the objective and subjective thresholds of perception. The clearest evidence that something has exceeded the subjective threshold and been noticed is a concurrent verbal report, since nothing can be verbally reported other than the current contents of awareness. Schmidt adds that “after the fact recall” is also good evidence that something was noticed, providing that prior knowledge and guessing can be controlled. For example, if beginner level students of Spanish are presented with a series of Spanish utterances containing unfamiliar verb forms, and are then asked to recall immediately afterwards the forms that occurred in each utterance, and can do so, that is good evidence that they noticed them. On the other hand, it is not safe to assume that failure to do so means that they did not notice. It seems that it is easier to confirm that a particular form has not been noticed than that it has: failure to achieve above-chance performance in a forced-choice recognition test is a much better indication that the subjective threshold has not been exceeded and that noticing did not take place.

Schmidt goes on to claim that the noticing hypothesis could be falsified by demonstrating the existence of subliminal learning, either by showing positive priming of unattended and unnoticed novel stimuli, or by showing learning in dual task studies in which central processing capacity is exhausted by the primary task. The problem in this case is that, in positive priming studies, one can never really be sure that subjects did not allocate any attention to what they could not later report, and similarly, in dual task experiments, one cannot be sure that no attention is devoted to the secondary task. In conclusion, it seems that Schmidt's noticing hypothesis rests on a construct that still has difficulty measuring up to the second criterion on our list; it is by no means easy to properly identify when noticing has and has not occurred. Despite this limitation, however, Schmidt's hypothesis is still a good example of the type of approach recommended by the list. Its strongest virtues are its rigour and its fruitfulness. Schmidt argues that attention as a psychological construct refers to a variety of mechanisms or subsystems (including alertness, orientation, detection within selective attention, facilitation, and inhibition) which control information processing and behaviour when existing skills and routines are inadequate. Hence, learning in the sense of establishing new or modified knowledge, memory, skills and routines is "largely, perhaps exclusively a side effect of attended processing" (Schmidt, 2001: 25). This is a daring and surprising claim with strong predictive power, and it contradicts Krashen's claim that conscious learning is of extremely limited use.


5. Variationist approaches

An account of these approaches is given elsewhere. In brief, variable competence, or variationist, approaches use the key theoretical construct of "variable competence" or, as Tarone calls it, "capability". Tarone (1988) argues that "capability" underlies performance, and that this capability consists of heterogeneous "knowledge" which varies according to various factors. Thus, there is no homogeneous competence underlying performance but a variable "capability" which underlies specific instances of language performance. Ellis (1987) uses the construct of "variable rules" to explain the observed variability of L2 learners' performance: learners, by successively noticing forms in the input which conflict with the original representation of a grammatical rule, acquire more and more versions of the original rule. This leads either to "free variation" (where forms alternate in all environments at random) or to "systematic variation" (where one variant appears regularly in one linguistic context, and another variant in another context).

The root of the problem of the variable competence model is the weakness of its theoretical constructs. The underlying "variable competence" construct used by Tarone and Ellis is nowhere clearly defined, and is, in fact, simply asserted to "explain" a certain amount of learner behaviour. As Gregg (1990: 368) argues, Tarone and Ellis offer a description of language use and behaviour, which they confuse with an explanation of the acquisition of grammatical knowledge. By abandoning the idea of a homogeneous underlying competence, Gregg says, we are stuck at the surface level of the performance data, and, consequently, any research project can only deal with the data in terms of the particular situation it encounters, describing the conditions under which the experiment took place. The positing of any variable rule at work would need to be followed up by an endless number of further research projects looking at different situations in which the rule is said to operate, each of which is condemned to uniqueness, no generalisation about some underlying cause being possible.

At the centre of the variable competence model are variable rules. Gregg argues cogently that such variability cannot become a theoretical construct used in attempts to explain how people acquire linguistic knowledge. In order to turn the idea of variable rules from an analytical tool into a theoretical construct, Tarone and Ellis would have to grant psychological reality to the variable rules (which in principle they seem to do, although no example of a variable rule is given) and then explain how these rules are internalised, so as to become part of the L2 learner’s grammatical knowledge of the target language (which they fail to do). The variable competence model, according to Gregg, confuses descriptions of the varying use of forms with an explanation of the acquisition of linguistic knowledge. The forms (and their variations) which L2 learners produce are not, indeed cannot be, direct evidence of any underlying competence – or capacity. By erasing the distinction between competence and performance “the variabilist is committed to the unprincipled collection of an uncontrolled mass of data” (Gregg 1990: 378).

As we have seen, a theory must explain phenomena, not describe data. In contradiction to this, and to criteria 1 and 2 in our list, the arguments of Ellis and Tarone are confused and circular; in the end what Ellis and Tarone are actually doing is gathering data without having properly formulated the problem they are trying to solve, i.e. without having defined the phenomenon they wish to explain. Ellis claims that his theory constitutes an "ethnographic, descriptive" approach to SLA theory construction, but he does not answer the question: How does one go from studying the everyday rituals and practices of a particular group of second language learners, through descriptions of their behaviour, to a theory that offers a general explanation for some identified phenomenon concerning the behaviour of L2 learners?

Variable Competence theories exemplify what happens when the distinction between phenomena, data and theoretical constructs is confused. In contrast, Chomsky's UG theory, despite its shifting ground and its contentious connection to SLA, is probably the best example of a theory where these distinctions are crystal clear. For Chomsky, "competence" refers to underlying linguistic (grammatical) knowledge, and "performance" refers to the actual day-to-day use of language, which is influenced by an enormous variety of factors, including limitations of memory, stress, tiredness, etc. Chomsky argues that while performance data is important, it is not the object of study (it is, precisely, the data): linguistic competence is the phenomenon that he wants to examine. Chomsky's distinction between performance and competence exactly fits his theory of language and first language acquisition: competence is a well-defined phenomenon which is explained by appeal to the theoretical construct of the Language Acquisition Device. Chomsky describes the rules that make up linguistic competence and then invites other researchers to subject the theory that all languages obey these rules to further empirical tests.


6. Aptitude

Why is anybody good at anything? Well, they have an aptitude for it: they're "natural" piano players, or carpenters, or whatever. This is obviously no explanation at all, although, of course, it contains a beguiling element of truth. To say that SLA is (partly) explained by an aptitude for learning a second language is to beg the question: What is aptitude for SLA? Attempts to explain the role of aptitude in SLA illustrate the difficulty of "pinning down" the phenomenon that we seek to explain. If aptitude is to be claimed as a causal factor that helps to explain SLA, then aptitude must be defined in such a way that it can be identified in L2 learners and then related to their performance.

Robinson (2007) uses aptitude as a construct composed of different cognitive abilities. His "Aptitude Complex Hypothesis" claims that different classroom settings draw on certain combinations of cognitive abilities, and that, depending on the classroom activities, students with certain cognitive abilities will do better than others. Robinson adds the "Ability Differentiation Hypothesis", which claims that some L2 learners have different abilities than others, and that it is important to match these learners to instructional conditions which favour their strengths in aptitude complexes. In terms of classroom practice, these hypotheses might well be fruitful, but they do not address the question of how aptitude explains SLA.

One example of identifying aptitude in L2 learners is the CANAL-F theory of foreign language aptitude, which grounds aptitude in "the triarchic theory of human intelligence" and argues that "one of the central abilities required in FL acquisition is the ability to cope with novelty and ambiguity" (Grigorenko, Sternberg and Ehrman, 2000: 392). However successfully the test might predict learners' ability, the theory fails to explain aptitude in any causal way. The theory of human intelligence that the CANAL-F theory is grounded in fails to illuminate the description given of FL ability; we do not get beyond a limiting of the domain in which the general ability to cope with novelty and ambiguity operates. Individual differences in foreign language learners' ability are explained by suggesting that some are better at coping with novelty and ambiguity than others. Thus, whatever construct validity might be claimed for CANAL-F, and however well the test might predict ability, it leaves the question of what precisely aptitude at foreign language learning is, and how it contributes to SLA, unanswered.

How, then, can aptitude explain differential success in a causal way? Even if aptitude can be properly defined and measured without falling into the familiar trap of being circular (those who do well at language aptitude tests have an aptitude for language learning), how can we step outside the reference of aptitude and establish more than a simple correlation? What is needed is a theoretical construct.

7. Conclusion

The history of science throws up many examples of theories that began without any adequate description of what was being explained. Darwin’s theory of evolution by natural selection (the young born to any species compete for survival, and those young that survive to reproduce tend to embody favourable natural variations which are passed on by heredity) lacked any formal description of the theoretical construct “variation”, or any explanation of the origin of variations, or how they passed between generations. It was not until Mendel’s theories and the birth of modern genetics in the early 20th century that this deficiency was dealt with. But, and here is the point, dealt with it was: we now have constructs that pin down what “variation” refers to in the Darwinian theory, and the theory is stronger for them (i.e. more testable). Theories progress by defining their terms more clearly and by making their predictions more open to empirical testing.

Theoretical constructs lie at the heart of attempts to explain the phenomena of SLA. Observation must be in the service of theory: we do not start with data, we start with clearly-defined phenomena and theoretical constructs that help us articulate the solution to a problem, and we then use empirical data to test that tentative solution. Those working in the field of psycholinguistics are making progress thanks to their reliance on a rationalist methodology which gives priority to the need for clarity and empirical content. If sociolinguistics is to offer better explanations, the terms used to describe social factors must be defined in such a way that it becomes possible to do empirically-based studies that confirm or challenge those explanations. All those who attempt to explain SLA must make their theoretical constructs clear, and improve their definitions and research methodology in order to better pin down the slippery concepts that they work with.

References

Birdsong, D. (ed.) (1999) Second Language Acquisition and the Critical Period Hypothesis. Mahwah, NJ: Lawrence Erlbaum Associates.
Bogen, J. and Woodward, J. (1988) "Saving the phenomena." Philosophical Review 97, 303-52.
Cheesman, J. and Merikle, P. M. (1986) "Distinguishing conscious from unconscious perceptual processes." Canadian Journal of Psychology 40, 343-367.
Chomsky, N. (1986) Knowledge of Language: Its Nature, Origin and Use. New York: Praeger.
Ellis, R. (1987) "Interlanguage variability in narrative discourse: style-shifting in the use of the past tense." Studies in Second Language Acquisition 9, 1-20.
Gardner, R. C. (1985) Social Psychology and Second Language Learning: The Role of Attitudes and Motivation. London: Edward Arnold.
Gregg, K. R. (1990) "The Variable Competence Model of second language acquisition and why it isn't." Applied Linguistics 11, 1, 364-83.
Grigorenko, E., Sternberg, R. and Ehrman, M. (2000) "A theory-based approach to the measurement of foreign language learning ability: the CANAL-F theory and test." The Modern Language Journal 84, iii, 390-405.
Jordan, G. (2004) Theory Construction in SLA. Amsterdam: Benjamins.
Krashen, S. (1985) The Input Hypothesis: Issues and Implications. New York: Longman.
Kuhn, T. (1962) The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Pienemann, M. (1998) Language Processing and Second Language Development: Processability Theory. Amsterdam: John Benjamins.
Popper, K. R. (1972) Objective Knowledge. Oxford: Oxford University Press.
Schmidt, R. (1990) "The role of consciousness in second language learning." Applied Linguistics 11, 129-58.
Schmidt, R. (2001) "Attention." In Robinson, P. (ed.) Cognition and Second Language Instruction. Cambridge: Cambridge University Press, 3-32.
Tarone, E. (1988) Variation in Interlanguage. London: Edward Arnold.
Towell, R. and Hawkins, R. (1994) Approaches to Second Language Acquisition. Clevedon: Multilingual Matters.

Can we get a pineapple?


Lost and Unfounded

Leo Selivan’s and Hugh Dellar’s recent contributions to EFL Magazine give further evidence that their strident, confidently expressed ideas lack any proper theoretical foundations.

We can compare the cumulative attempts of Selivan and Dellar to articulate their versions of the lexical approach with the more successful attempts made by Richards and Long to articulate their approaches to ELT.  Richards (2006) describes what he calls “the current phase” of communicative language teaching as

a set of principles about the goals of language teaching, how learners learn a language, the kinds of classroom activities that best facilitate learning, and the roles of teachers and learners in the classroom (Richards, 2006: 2)

Note that Richards says this on page 2 of his book: he rightly starts out with the assumption that “a set of principles” is required.

Long (2015) offers his own version of task based language teaching and he goes to great lengths to explain the underpinnings of his approach. His book is, in my opinion, the best example in the literature of a well-founded, well-explained approach to ELT. It’s based on a splendidly lucid account of a cognitive-interactionist theory of instructed SLA, on careful definitions of task and needs analysis, and on 10 crystal clear methodological principles. Long’s book is to be recommended for its scholarship, its thoroughness, and, not least, for its commitment to a progressive approach to ELT.

So what do Selivan and Dellar offer?

In his “Beginners’ Guide To Teaching Lexically” (http://eflmagazine.com/beginners-guide-to-the-lexical-approach/), Selivan makes a number of exaggerated generalisations about English and then outlines “the main principles of the lexical approach”. These turn out to be

  1. Ban Single Words
  2. English word ≠ L1 word
  3. Explain less – explore more
  4. Pay attention to what students (think they) know.

To explain how such “principles” adequately capture the essence of the lexical approach, Selivan offers “A bit of theory” for each one. For example, Selivan says “A new theory of language, known as Lexical Priming, lends further support to the Lexical Approach. … By drawing students’ attention to collocations and common word patterns we can accelerate their priming”. Says he. But what reasons does he have for such confident assertions? Selivan fails to give his reasons, and fails to give any proper rationale for the claims he makes about language and teaching.

In his podcast (http://eflmagazine.com/hugh-dellar-discusses-the-lexical-approach/), Dellar agrees that collocation is the driving force of English. He claims that the best way to conduct ELT is to concentrate on presenting and practising the lexical chunks needed for different communicative events. Teachers should get students to do things with these chunks such as “fill in gaps, discuss them, order them, say them, write them out themselves, etc.”, with the goal of getting students to memorise them. Again, Dellar doesn’t explain why we should concentrate on these chunks, or why teachers should get students to memorise them. Maybe he thinks “It stands to reason, yeah?”

At one point in his podcast Dellar says that, while those just starting to learn English will go into a shop and say “I want, um, coffee, um sandwich”,

…. as your language becomes more sophisticated, more developed, you learn to kind of grammar the basic content words that you’re adding there. So you learn “Hi. Can I get a cup of coffee and a sandwich, please.” So you add the grammar to the words that drive the communication, yeah? Or you just learn that as a whole chunk. You just learn “Hi. Can I get a cup of coffee? Can I get a sandwich, please?” Or you learn “Can I get…” and you drop in a variety of different things.

This is classic “Dellarspeak”: a badly-expressed misrepresentation of someone else’s erroneous theory. Dellar doesn’t tell us how we teach learners “to grammar” content words, or when it’s better to teach “the whole chunk” – or what informs his use of nouns as verbs, for that matter. As for the “Can I get…?” example, what’s wrong with just politely naming what we want: “Good morning. A coffee and a sandwich, please.”? What is gained by teaching learners to use the redundant “Can I get…” phrase?

But enough of Dellar’s hapless attempts to express other people’s ideas; let’s cut to the chase, if you get my drift. The question I want to briefly discuss is this:

Are Selivan’s and Dellar’s claims based on coherent theories of language and language learning, or are they mere opinions?


Models of English 

Crystal (2003) says: “an essential step in the study of a language is to model it”. Here are two models:

  1. A classic grammar model of the English language attempts to capture its structure, described in terms of grammar, the lexicon and phonology (see Quirk et al., 1985, and Swan, 2001, for examples of descriptive and pedagogical grammars). This grammar model, widely used in ELT today, is rejected by Hoey.
  2. Hoey (2005) says that the best model of language structure is the word, along with its collocational and colligational properties. Collocation and “nesting” (words join with other primed words to form sequences) are linked to contexts and co-texts. So grammar is replaced by a network of chunks of words. There are no rules of grammar; there’s no English outside a description of the patterns we observe among those who use it. There is no right or wrong in language. It makes little sense to talk of something being ungrammatical (Hoey, 2005).

Selivan and Dellar uncritically accept Hoey’s radical new theory of language, but is it really better than the model suggested by grammarians?

Surely we need to describe language not just in terms of the performed but also in terms of the possible. Hoey’s argument that we should look only at attested behaviour and abandon descriptions of syntax strikes most of us as a step too far. And I think Selivan and Dellar agree, since they both routinely refer to the grammatical aspects of language. The problem is that Selivan and Dellar fail to give their own model of language; they fail to clearly indicate the limits of their adherence to Hoey’s model; they fail to say what place syntax has in their view of language. In brief, they have no coherent theory of language.

Hoey’s Lexical Priming Theory

Hoey (2005) claims that we learn languages by subconsciously noticing everything (sic) that we have ever heard or read about words, and storing it all in a massively repetitious way.

The process of subconsciously noticing is referred to as lexical priming. … Without realizing what we are doing, we all reproduce in our own speech and writing the language we have heard or read before. We use the words and phrases in the contexts in which we have heard them used, with the meanings we have subconsciously identified as belonging to them and employing the same grammar. The things we say are subconsciously influenced by what everyone has previously said to us.

This theory hinges on the construct of “subconscious noticing”, but instead of explaining it, Hoey simply asserts that language learning is the result of repeated exposure to patterns of text (the more the repetition, the better the knowledge), thus adopting a crude version of behaviourism. Actually, several on-going quasi-behaviourist theories of SLA try to explain the SLA process (see, for example, MacWhinney, 2002; O’Grady, 2005; Ellis, 2006; Larsen-Freeman and Cameron, 2008), but Hoey pays them little heed, and neither do Selivan and Dellar, who swallow Hoey’s fishy tale hook, line and sinker, take the problematic construct of priming at face value, and happily use “L1 primings” to explain L1 transfer as if L1 primings were as real as the nose on Hoey’s face.

Hoey rejects cognitive theories of SLA which see second language learning as a process of interlanguage development, involving the successive restructuring of learners’ mental representation of the L2, because syntax plays an important role in them. He also rejects them because, contrary to his own theory, they assume that there are limitations in our ability to store and process information. In cognitive theories of SLA, a lot of research is dedicated to understanding how relatively scarce resources are used. Basically, linguistic skills are posited to slowly become automatic through participation in meaningful communication. While initial learning involves controlled processes requiring a lot of attention and time, with practice the linguistic skill requires less attention and less time, thus freeing up the controlled processes for application to new linguistic skills. To explain this process, the theory uses constructs such as comprehensible input, working and long term memory, implicit and explicit learning, noticing, intake and output.

In contrast, Hoey’s theory concentrates almost exclusively on input, passing quickly over the rest of the issues, and simply asserts that we remember the stuff that we’ve most frequently encountered. So we must ask Selivan and Dellar: What theory of SLA informs your claims? As an example, we may note that Long (2015) explains how his particular task-based approach to ELT is based on a cognitive theory of SLA and on the results of more than 100 studies.

Hoey’s theory doesn’t explain how L2 learners process and retrieve their knowledge of L2 words, or how paying attention to lexical chunks or “L1 primings” affects the SLA process. So what makes Selivan and Dellar think that getting students to consciously notice both lexical chunks and “L1 primings” will speed up primings in the L2? Priming, after all, is a subconscious affair. And what makes Dellar think that memorising lexical chunks is a good way to learn a second language? Common sense? A surface reading of cherry-picked bits of contradictory theories of SLA? Personal experience? Anecdotal evidence? What? There’s no proper theoretical base for any of Dellar’s claims; there’s scarce evidence to support them; and there’s a powerful theory supported by lots of evidence which suggests that they’re mistaken.


All Chunks and no Pineapple

Skehan (1998) says:

Phrasebook-type learning without the acquisition of syntax is ultimately impoverished: all chunks but no pineapple. It makes sense, then, for learners to keep their options open and to move between the two systems and not to develop one at the expense of the other. The need is to create a balance between rule-based performance and memory-based performance, in such a way that the latter does not predominate over the former and cause fossilization.

If Selivan and Dellar agree that there’s a need for a balance between rule-based performance and memory-based performance, then they have to accept that Hoey is wrong, and confront the contradictions that plague their present position on the lexical approach, especially their reliance on Hoey’s description of language and on the construct of priming. Until Selivan and Dellar sort themselves out, until they tackle basic questions about a model of English and a theory of second language learning so as to offer some principled foundation for their lexical approach, it amounts to little more than an opinion; more precisely, the unappetising opinion that ELT should give priority to helping learners memorise pre-selected lists of lexical chunks.

References

Crystal, D. (2003) The English Language. Cambridge: Cambridge University Press.

Ellis, N. C. (2006) Language acquisition and rational contingency learning. Applied Linguistics, 27 (1), 1-24.

Hoey, M. (2005) Lexical Priming: A New Theory of Words and Language. Psychology Press.

Krashen, S. (1985) The Input Hypothesis: Issues and Implications. Longman.

Larsen-Freeman, D. and Cameron, L. (2008) Complex Systems and Applied Linguistics. Oxford: Oxford University Press.

Lewis, M. (1993) The Lexical Approach. Language Teaching Publications.

Lewis, M. (1996) Implications of a lexical view of language. In Willis, J. & Willis, D. (eds.) Challenge and Change in Language Teaching, pp. 4-9. Heinemann.

Lewis, M. (1997) Implementing the Lexical Approach. Language Teaching Publications.

Long, M. (2015) Second Language Acquisition and Task-Based Language Teaching. Wiley.

MacWhinney, B. (2002) The Competition Model: the Input, the Context, and the Brain. Carnegie Mellon University.

O’Grady, W. (2005) How Children Learn Language. Cambridge: Cambridge University Press.

Quirk, R., Greenbaum, S., Leech, G. and Svartvik, J. (1985) A Comprehensive Grammar of the English Language. London: Longman.

Richards, J. (2006) Communicative Language Teaching Today. Cambridge University Press.

Skehan, P. (1998) A Cognitive Approach to Language Learning. Oxford: Oxford University Press.

Swan, M. (2001) Practical English usage. Oxford: Oxford University Press.

Do It Like Dellar, Or Use Your Brain?


In his talk Teaching Grammar Lexically, Dellar tells us about the life-changing effects that reading Michael Lewis’ The Lexical Approach had on him. What it did was to jolt him out of his comfortable life of grammar-based PPP teaching, and make him realise that language was not lexicalised grammar, but rather grammaticalised lexis. This “profound shift in perspective” took its toll; Dellar confesses that struggling with the challenging implications of Lewis’ text threw his teaching into a state of chaos for two years, and that he and his co-author Andrew Walkley have spent more than twenty years “unpicking” its “dense” content. About ten years later, Dellar had sufficiently recovered from his intellectual odyssey to read another book, Hoey’s Lexical Priming, and this led to an even deeper understanding of, and commitment to, the lexical approach. The extent of Hoey’s influence on Dellar can be appreciated by noting that, after 2005, Dellar’s stock of constructs doubled – from one to two, so that it now consists of “lexical chunks” and “priming”. Undaunted by widespread criticism of Lewis’ and Hoey’s arguments (neither offers a developed theory of SLA or a principled methodology for ELT), and unencumbered by complicated theorising or thinking critically, Dellar sees ELT with uncluttered clarity. Alas, he also sees the need to share his vision with others, and so he travels around the world exhorting teachers everywhere to profoundly shift their perspective, throw off the “tyranny of PPP” and embrace the promise of properly primed lexical chunks.

In contrast to this simplistic, evangelical proselytising, real educators take the view that the primary goal of education is to encourage people to question everything, to think critically for themselves. This view emphasises the dance of ideas, the delight in thinking about things in such a way that one’s intellect is engaged and one’s appreciation of the complexity of things is improved, while the accumulation of information is down-played. In primary and secondary schools, the good teachers are those who encourage students to question conventional wisdom, and at university the same ethos of critical thinking informs the best academic staff; they’re less concerned with facts than with what their students make of them. My concern is that in the world of ELT training, this type of approach is not much in evidence.

What is critical thinking?

The ability to think critically involves three things:

  1. An attitude: don’t believe what you’re told.
  2. Knowledge of the methods of logical inquiry and reasoning.
  3. Skill in applying those methods.

Critical thinking demands a persistent effort to examine anything you’re told in the light of the evidence that supports it and the logic of its conclusions. It demands that you’re not impressed by who said it, that you remain open-minded, and, above all, that you think rationally for yourself. It refers to your ability to interpret data, to appraise evidence and evaluate arguments. It refers, that is, to your ability to critically examine the so-called facts, to assess the existence (or non-existence) of logical relationships between propositions and to assess whether conclusions are warranted.

Critical thinking is needed when you do your own work and when you assess the work of others. When you do your own work in an MA paper, critical thinking demands that you

  • articulate the problems addressed
  • look for means for solving those problems
  • gather and marshal pertinent information
  • evaluate the information gathered.

When you evaluate your own work and the work of others you must identify defects such as

  • poor articulation of the problem
  • appeals to authority
  • unstated assumptions and values
  • partial data
  • unwarranted conclusions

Thinking critically is, in my opinion, most of all an attitude. It’s the attitude of a sceptic, of one who can sniff a rat. In many areas of life you’ll be well-advised to deliberately ignore the scent, but when it comes to matters academic, sniffing a rat, sensing that there’s something wrong in the argumentation and evidence given in a text, is a skill that needs nurturing and honing. Once you adopt the right attitude, then you have to improve your ability not just to feel that there’s something wrong, but to identify exactly what that something is.


Back to the Baloney: Dellar’s Lexical Approach  

From the evidence available on his websites, blogs, and recorded interviews, webinars and presentations, Dellar’s approach to ELT training is severely at odds with the critical approach I’ve sketched above. Despite his confusion about both theories of language (UG = Structuralism??) and theories of SLA, Dellar presumes to tell teachers what’s wrong and what’s right. PPP grammar teaching is wrong, teaching lexical chunks is right. Rather than attempt to evaluate different approaches to ELT and tentatively recommend this or that alternative, Dellar banishes doubt and gives the strong impression that he’s cracked it: he’s worked out the definitive blueprint of ELT, and all he has to do is to overcome the entrenched resistance of those still chained to Headway, or English File, or whatever coursebook it is that keeps them languishing under the oppression of PPP grammar teaching.

Any initial excitement you might feel about someone proposing a move away from coursebook-led teaching soon evaporates when you realise that Dellar is against grammar-based coursebooks, but not coursebooks per se; indeed, the definitive blueprint of ELT turns out to be nothing other than his own coursebook series Outcomes! Delivery from the tyranny of grammar teaching and emergence into the brave new world of the lexical approach is a simple affair: you just throw away Headway, pick up Outcomes, and then lead your students, unit by unit, through its mind-numbingly boring pages, just as with any other coursebook. The only difference between the tyrannical past and the liberated future is that grammar boxes are replaced with long lists of leaden lexical chunks, repeated exposure to which is somehow supposed to lead to communicative competence.

If we examine Dellar’s published work – his blogs, his video presentations, his webinars, his conference presentations, and his coursebooks – we find very little evidence of critical thinking about language, language learning or language teaching.

  • Does his work invite teachers to critically consider different views of language? Does it consider the arguments for a generative grammar as argued by Chomsky, versus the arguments for a structuralist approach as argued by Bloomfield, or a functional approach, as argued by Halliday, or a functional-notional approach as argued by Wilkins, or a lexical approach as argued by Pawley, or Nattinger, or Biber? Or does it tell them that English is best seen as lexically-driven, and that’s that?
  • Does his work invite teachers to consider the pros and cons of different accounts of SLA, of different weights given to input and output, of explicit and implicit learning, of different accounts of interlanguage development? Or does it tell them that priming is the key to SLA?  Does Dellar’s oeuvre encourage teachers to critically assess the construct of priming? Or is priming taken as a given, and is it simply asserted that lexical chunks are the secret of language learning?

Conclusion

ELT training is too often characterised by an “I know best” assumption, and by its general rejection of a critical approach to education. Instead of approaching teacher training sessions with prepared answers already in hand, teacher trainers should adopt a critical thinking approach to their job. They should ask teachers open questions, toss some provisional answers out for discussion, invite teachers to critically evaluate them, and work with teachers to help them come up with their own tentative solutions. I need hardly add that these tentative solutions should then be critically discussed.

Criticising Harmer

image-from-textbook

My criticisms of Jeremy Harmer’s latest published work have caused some dismay, which was only to be expected. Equally predictable was that so few of those who objected to what I said, or how I said it, voiced their concerns; silence, as usual, was the preferred response. To those who did speak up, either in emails to me, or in other forums, here’s my reply.

First, a summary of my criticisms:

  • Harmer’s latest edition of The Practice of English Language Teaching is badly written, badly informed, and displays a lack of critical acumen.
  • Harmer’s pronouncements on testing in 2015 were appalling.
  • Harmer is an obstacle to progress in ELT.

Well, that’s my view, and I’ve given some evidence to support it in various posts. Further evidence can be gathered simply by reading his book and watching his presentations. I’ll be glad to talk to Harmer face to face in any forum that he or anyone else wants to organise. Anytime, anywhere.

I take criticism here to be the act of analysing and evaluating the quality of a given text. This involves deconstructing it. I use “deconstruct” as Gill in the quote that heads this blog uses it (not in the special sense that Derrida uses it), to refer to a process that’s been used down through the ages: to deconstruct a text is to critically take it apart. What we examine is the coherence and cohesion of the text, its expression and its content.

At the most superficial (I mean “surface”, not unimportant) level, the quality of a text can be judged by its coherence and cohesion. Coherence refers to clarity, while cohesion refers to organisation and flow. Harmer’s text lacks both. Pick up Harmer’s magnum opus, the truly appalling Practice of English Language Teaching, start reading, and ask yourself: Is this clear? Is this well-expressed? Does the text flow?

  • How many sentences are ungrammatical?
  • How often could things have been more succinctly expressed?
  • How often do you struggle to get to the end of a sentence?
  • How often does the text meander?
  • How often do you feel that the writing is tedious?
  • How often are you referred elsewhere?

The coherence of the text is severely weakened by its author’s inability to stick to the point and to express himself clearly: so often a simple point is dragged out for pages. As for cohesion, the text looks well-organised, but it fails to properly sequence its arguments. It’s full of references to other places in the text where what’s being dealt with is dealt with differently, so you never quite get a handle on anything. And, crucially for cohesion, there’s no over-arching argument running through the text: it’s a motley collection of bits and pieces.

At a deeper level of criticism, we should ask questions about content.

  • Does the text show a good command of things discussed?
  • Does it present an up-to-date summary of ELT?
  • Does it give a fair and accurate description of current views of the English language, of L2 language learning, of teaching, and of assessment?
  • Is there the slightest hint of originality?
  • Does it give a good critical evaluation of matters discussed?
  • Is it enjoyable to read?

A critical view of the text demands that we don’t take anything for granted. No assertions should be taken at face value; we should carefully scrutinise any opinions, and we should give some attention to the kind of critical discourse analysis (CDA) proposed by Fairclough and others, where political issues are weighed. If we critically examine Harmer’s Practice of English Language Teaching in this way, I suggest that we’ll conclude that the answer to the six questions above is a resounding “No!”, and that a CDA of the book reveals a deeply conservative commitment to the status quo.

Thoughts of The Master


Those who are about to embark on the discourse analysis bit of their MA course might like to examine the quotes below. They’re all quotes from the published work of Jeremy Harmer. As you know, the purpose of discourse analysis is to examine texts “beyond the sentence boundary”. Various frameworks can be used, but I recommend a literary approach here, where you concentrate on the verbosity, bathos, and general pumped-up, faux academic prose of the writer, blissfully unaware of his limitless limitations. Note the cascade of clichés, the resort to tired truisms, the bumbling use of brackets, and the general tedium of the text, not alleviated by random bits of bullshit. The final example in the list below refers you to a video recording on Harmer’s blog where you’ll find the master examining the finer points of testing in his own unique manner.

So take a look below. As they say in McDonald’s when they bring you your tasteless, lack-lustre, nutrition-free meal: Enjoy!

The constant interplay of applied linguistic theory and observed classroom practice attempts to draw us ever closer to a real understanding of exactly how languages are learnt and acquired, so that the work of writers such as Ellis (1994) and Thornbury (1999)—to mix levels of theory and practice—are written to influence the methodology we bring to language learning. We ignore their challenges and suggestions at our peril, even if due consideration leads us to reject some of what they tell us.

Teaching may be a visceral art, but unless it is informed by ideas it is considerably less than it might be.

Without beliefs and enthusiasms, teachers become client-satisfiers only—and that is a model which comes out of a different tradition from that of education, and one that we follow at our peril.

A problem with the idea that methodology should be put back into second place (at the very most) is that it threatens to damage an essential element of a teacher’s make-up—namely what they believe in, and what they think they are doing as teachers.

A belief in the essentially humanistic and communicative nature of  language  may well  pre-dispose certain teachers towards a belief in group participation and learner input rather than relying only on the straightforward transmission of knowledge from instructor to passive instructee.

One school of thought which is widely accepted by many language teachers is that the development of our conceptual understanding and cognitave skills is a main objective of all education. Indeed, this is more important than the acquisition of factual information (Williams and Burden 1997:165).

Any teacher with experience knows that it is one thing to put educational temptation in a child’s way (or an adult’s); quite another for that student to actually be tempted.

There is nothing wrong (and everything right) with discovery-based experiential learning. It just doesn’t work some of the time.

What precisely is the role of a cloud granny and how can she (or perhaps he) make the whole experience more productive.

Yet without our accumulated knowledge and memories what are we? Our knowledge is, on the contrary, the seat of our intuition and our creativity. Furthermore, the gathering of that knowledge from our peers and, crucially, our elders and more experienced mentors is part of the process of socialization. Humanity has thought this to be self-evident for at least 2000 years.

On testing https://jeremyharmer.wordpress.com/2013/12/16/testophile-or-testophobe/

As you watch the master deliver his polished address:

  • Note the setting: the well-appointed sitting room, the unused, high quality microphone, the classical music in the background.
  • Note the speaker: the pose, the homely mug of tea, the air of quiet confidence, the carefully-practised delivery.
  • Note, too, the complete lack of content in what he says, the utter disregard for any serious engagement with an important issue, the assumption that this indulgent, look-at-me-farting-around-saying-absolutely-nothing display will be well met.
  • Such, you might think, is the arrogance of power.

 

Harmer: The Practice of English Language Teaching 5th Edition


The new edition of Harmer’s Practice of English Language Teaching is over 500 pages and includes chapters on:

  • English as a world language
  • Theories of language and language learning
  • Learner characteristics which influence teacher decisions
  • Guidance on managing learning
  • Teaching language systems (grammar, vocabulary and pronunciation)
  • Teaching language skills (speaking, writing, listening and reading)
  • Practical teaching ideas
  • The role of technology (old and new) in the classroom
  • Assessment for language learning in the digital age

If you’re doing a course in ELT, then reading the new edition of Harmer’s massive tome might well have the salutary effect of making you re-consider your career choice. Nobody could blame you if, having read this mind-numbingly tedious book, you decided to quit ELT and apply for a job in the Damascus Tourist Agency. In the unlikely event that you reach the end of its 550 pages, you’ll probably have lost the will to live, let alone teach. Each page is weighed down by badly crafted, appallingly dull writing; each chapter says nothing new or succinct about its subject; each section says nothing that you can’t find much better treatments of in other, well-focused books.

The section on English as a world language is absurdly long, badly considered, and leans heavily on Crystal, who does a much better job of it, far more concisely and completely, in his book The English Language. The section on theories of language learning is disgraceful; not one of the theories mentioned is properly stated or discussed. I really can’t bring myself to go through the rest of the book; it’s consistently badly informed, badly considered, wordy and unhelpful.

It’s the style that offends me most in this horrendously long doorstop of a book; despite the efforts of all his editors, the suffocating effect of Harmer’s faux academic, charmlessly chummy, verbose and ineffectual prose is to turn everything to sludge. The reader wades endlessly through the sludge, unaided even by decent signposts, towards another badly defined horizon, there to meet more of the same: another, different hill to climb.

Even if you can get over the soporific effects of Harmer’s writing, the content is not likely to satisfy you, whatever TESOL qualification you’re aiming at. The audacious sweep of the book is almost ironic: here’s a book where everything is mentioned and nothing is adequately dealt with. Magpies skillfully take what they need from other nests; Harmer haplessly crashes into the work of scholars, conveying almost nothing of their contribution. Anything, but anything, mentioned here needs further reading. Needless to say, the bibliography is hopeless.

Just to round it off, the seemingly endless trudge through Harmer’s wasteland gets the reader precisely nowhere.  No final vision awaits; all you get at the end of this pathetic pilgrimage are poorly considered, unoriginal platitudes.

This dreadful book serves as a mirror for everybody involved in ELT. How can we in the ELT world be taken seriously by other areas of education when such a book is recommended reading in so many teacher training courses, and even in post-graduate courses?

Harmer, J. (2015) The Practice of English Language Teaching. 5th Edition. London: Pearson.

A Final Tilt at the Windmill of Thornbury’s A to Z


They keep coming, like burps after a poorly-digested Christmas lunch: comments on Thornbury’s A to Z blog. I’ve read three in the last few days, so let me add my own final swipe at the edifice before 2015 concludes.

Thornbury’s Sunday posts on his A to Z blog only lasted a few months, but during that short season they became part of my Sunday morning routine: late breakfast, read Thornbury, join in the discussions that always followed. The final Sunday post was The Poverty of the Stimulus, and as usual it had enough good stuff in it to spark off an interesting discussion. On this particular Sunday I made a few contributions and the exchange went something like this:

Initial statement from Scott (I use his first name to emphasise the cosy Sunday morning feel of the discussion, and also as a way of reminding myself to be nice.)

  1. The quantity and quality of language input that children get is so great as to question Chomsky’s poverty of the stimulus argument.
  2. An alternative to Chomsky’s view of language and language learning, is that “language is acquired, stored and used as meaningful constructions (or ‘syntax-semantics mappings’).”
  3. Everett (he of “There is no such thing as UG”: http://www.theguardian.com/technology/2012/mar/25/daniel-everett-human-language-piraha) is right to point out that, since no one has proved that the poverty of the stimulus argument is correct, “talk of a universal grammar or language instinct is no more than speculation”.

Development

My first reply is short:

“Everett’s claim is nonsense since it’s logically impossible to prove that a theory is true.”

Scott ignores this comment and prefers to pay attention to a certain Svetlana (I imagine her sitting in a wifi-equipped tent, huddled over an Apple app projecting a 3-D crystal ball) who tells him that he’s right to question the POS claim because tiny babies, only recently emerged (sic) from the womb, form huge numbers, like, well, millions, of neural connections per second and, what’s more, they rapidly develop dendritic spines containing “lifelong memories”. A few unsupported pseudo-scientific, quasi-philosophical assertions which sound as if they’ve been picked up from a hazy weekend seminar at the Sorbonne are thrown in for good measure.

Imagine my surprise when Scott thanks the mystic Svetlana for bringing “new evidence to bear”, and says that this evidence serves to confirm his “initial hunch.”

“WHAT??” I typed furiously. “Are you really going to be hoodwinked by such postmodernist, obscurantist mumbo jumbo?” (There’s not much known for sure about the role dendritic spines play in learning and memory; I suspect she thinks that mentioning them here is evidence of deep knowledge of the scientific study of the nervous system; and suggesting that they disprove the POS argument is fanciful nonsense.)

“Give us an example of a lifelong memory stored in a dendritic spine that ’s relevant to this discussion then!” I shout uselessly at the monitor.

Well, Scott’s not just hoodwinked, he actually becomes emboldened. Spurred on by the compelling “new evidence”, he’s now ready to dismiss the POS argument completely.

“Actually”, he says, the stimulus is quite enough to explain everything children know about language. Corpus studies “suggest that everything a child needs is in place”.

Asked how these corpus studies explain what children know about language, Scott (apparently still intoxicated by Svetlana’s absurd revelations) says “the child’s brain is mightily disposed to mine the input”, adding, as if this were the clincher, “a little stimulus goes a long way, especially when the child is so feverishly in need of both communicating and becoming socialized.”

“Cripes! His brain’s gone soft!” I thought. “He’s barking mad!”

“Platitudes and unsupported assertions have now completely replaced any attempt at reasoned argument”, I wrote.

“Anyone who claims that children’s knowledge about an aspect of syntax could not have been acquired from language input has to prove that it couldn’t. Otherwise it remains another empirically-empty assertion,” says Scott.

Dear oh dear, here we are back at the start. As with the Everett quote, for purely formal reasons, it’s not possible to prove such a thing, and to demand such “proof” demonstrates an ignorance of logic and of how rational argument, science, and theory construction work. Failing to meet the impossible demand of proof doesn’t make the POS argument an empirically-empty assertion.

Then Russ Mayne joins in to have his typically badly-informed little say. Chomsky, he tells us, is “utterly scornful of data.”

“No he’s not,” says I. “Chomsky’s theory of UG has a long and thorough history of empirical research.”

And blow me down if Thornbury doesn’t chime in:

“‘Chomsky’s theory of UG has a long and thorough history of empirical research’. What!!? Where? When? Who?”

So now he’s not just showing a predilection for explanations involving the lifelong memories stored in dendritic spines, he’s showing even worse signs of ignorance.

Discussion

That the discussion of the POS argument didn’t get satisfactorily resolved is hardly surprising, but I was more than a bit surprised to hear Scott telling us that language learning can be satisfactorily explained by the general learning processes going on inside feverish young brains that are “mightily disposed to mine the input”. (Just in passing, all these references to the child’s brain seem to contradict the part of the current Thornbury canon which deals with “the language body”.) Asked to say a bit more about how language learning can be done through general learning processes and input alone, Thornbury says

“If we generalize the findings beyond the single word level to constructions…” and then “… generalize from constructions to grammar…”,  “hey presto, the grammar emerges on the back of the frequent constructions.”

Hey presto? What grammar? What “findings beyond the single word level”? How do you generalise these findings to “constructions”? And how do you generalise from constructions to “grammar”?

This unwarranted dismissal of the POS argument, coupled with an incoherent account of language learning, is, you might think, excusable in a Sunday morning chat, but we find more evidence of both the ignorance and the incoherence displayed here in more carefully-prepared public pronouncements on the same subjects. Thornbury’s very poor attempts to challenge Chomsky and psychological approaches to SLA by offering a particularly lame and simplistic version of emergentism, mostly based on Larsen-Freeman’s recent work, have already been commented on in this blog (see, for example, Thornbury and the Learning Body and Emergentism 2), but let me say just a bit more.

Thornbury and Emergentism

Thornbury keeps telling people about Larsen-Freeman’s latest project. The best criticism I’ve read of it is the 2010 article by Kevin Gregg in Second Language Research entitled “Shallow draughts: Larsen-Freeman and Cameron on complexity”. There’s no way I can do justice to the article by quickly summarising it, and I urge readers of this post to read Gregg’s article for themselves. As always with Gregg, the argument is not just devastating, but delightfully written. Gregg dismantles the pretences of the Larsen-Freeman and Cameron book and shows that all their appeals to complexity theory are so much hogwash; nothing of substance sustains the fanciful opinions of the authors. And likewise, Thornbury.

Thornbury has said nothing to persuade any intelligent reader that his version of emergentism provides a good explanation of SLA. Just a few points:

  • Emergentism rests on empiricism, and empiricism pure and simple is a bankrupt epistemology.
  • Emergentism doesn’t get the support Thornbury claims it gets from the study of corpora – how could it? Thornbury’s claims show an ignorance of both theory construction and scientific method.
  • As Gregg (2010) points out, the claim that language is a complex dynamical system makes no sense. “Simply put, there is no such entity as language such that it could be a system, dynamical or otherwise. … Terms like ‘language’ and ‘English’ are abstractions; abstract terms, like metaphors, are essential for normal communication and expression of ideas, but that does not mean they refer to actual entities. English speakers exist, and (I think) English grammars come to exist in the minds/brains of those speakers, so it remains within the realm of possibility that a set of speakers is a dynamical system, or that the acquisition process is; but not language, and not a language.”
  • Thornbury’s assertion that language learning can be explained as the detection and memorisation of “frequently-occurring sequences in the sensory data we are exposed to” is an opinion masquerading as an explanatory theory. How can general conceptual representations acting on stimuli from the environment explain the representational system of language that children demonstrate?  Thornbury’s  suggestion that we have an innate capacity to “unpack the regularities within lexical chunks, and to use these patterns as templates for the later development of a more systematic grammar” begs more questions than it answers and, anyway, contradicts the empiricist epistemology adopted by most emergentists who say that there aren’t, indeed can’t be, any such things as innate capacities.

NOTE: I’ve added 2 appendices to deal with the 2 questions asked by Patrick Amon.

Appendix 1: Why can’t you prove that a general causal theory is true?

The problem of induction

Hume (1748) started from the premise that only “experience” (by which Hume meant that which we perceive through our senses) can help us to judge the truth or falsity of factual sentences. Thus, if we want to understand something, we must observe the relevant quantitative, measurable data in a dispassionate way. But if knowledge rests entirely on observation, then there is no basis for our belief in natural laws, because such belief rests on an unwarranted inductive inference. We cannot logically go from the particular to the general: no amount of cumulative instances can justify a generalisation; ergo, no general law or generalised causal explanation can ever be shown to be true. No matter how many times the sun rises in the East, or thunder follows lightning, or swans appear white, we will never know that the sun rises in the East, or that thunder follows lightning, or that all swans are white. This is the famous “logical problem of induction”. Why, nevertheless, do all reasonable people expect and believe that instances of which they have no experience will conform to those of which they have experience? Hume’s answer is: “Because of custom or habit” (Popper, 1979: 4).

More devastating still was Hume’s answer to Descartes’ original question “How can I know whether my perceptions of the world accurately reflect reality?” Hume’s answer was “You can’t.”

It is a question of fact whether the perceptions of the senses be produced by external objects resembling them: how shall this question be determined? By experience surely; as all questions of a like nature.  But here experience is, and must be, entirely silent.  The mind has never anything present to it but the perceptions, and cannot possibly reach any experience of their connection with objects.  The supposition of such a connection is, therefore, without any foundation in reasoning. (Hume, 1988 [1748]: 253)

Thus, said Hume, Descartes was right to doubt his experiences, but, alas, experiences are all we have.

The asymmetry between truth and falsehood

Popper (1972) offers a way out of Hume’s dilemma. He concedes that Hume is right: there is no logical way of going from the particular to the general, and that is that. However probable a theory might seem, it can never be shown to be true.

Popper (1959, 1963, 1972) argued that the root of the problem of induction was the concern with certainty. In Popper’s opinion, Descartes’ quest was misguided and had led to three hundred years of skewed debate. Popper claimed that the debate between the rationalists and the empiricists, with the idealists pitching in on either side, had led everybody on a wild goose chase, the elusive wild goose being “Truth”. Concerned with the status of human knowledge, philosophers and philosophers of science had asked which, if any, of our beliefs could be justified. The quest was for certainty: to vanquish doubt and to impose reason. Popper suggested that rather than look for certainty, we should look for answers to problems, answers that stand up to rational scrutiny and empirical tests.

Popper insists that in scientific investigation we start with problems, not with empirical observations, and that we then leap to a solution of the problem we have identified in any way we like. This second, anarchic stage is crucial to an understanding of Popper’s epistemology: when we are coming up with explanations, theories or hypotheses, then, in a very real sense, anything goes. Inspiration can come from lowering yourself into a bath of water, from being hit on the head by an apple, or from imbibing narcotics. It is at the next stage of the theory-building process that empirical observation comes in, and, according to Popper, its role is not to provide data that confirm the theory, but rather to find data that test it.

Empirical observations should be carried out in attempts to falsify the theory: we should search high and low for a non-white swan, for an example of the sun rising in the West, etc. The implication is that, at this crucial stage in theory construction, the theory has to be formulated in such a way as to allow for empirical tests to be carried out: there must be, at least in principle, some empirical observation that could clash with the explanations and predictions that the theory offers.  If the theory survives repeated attempts to falsify it, then we can hold on to it tentatively, but we will never know for certain that it is true.  The bolder the theory (i.e. the more it exposes itself to testing, the more wide-ranging its consequences, the riskier it is) the better.  If the theory does not stand up to the tests, if it is falsified, then we need to re-define the problem, come up with an improved solution, a better theory, and then test it again to see if it stands up to empirical tests more successfully.  These successive cycles are an indication of the growth of knowledge.

Popper (1974: 105-106) gives the following diagram to explain his view:

P1 -> TT -> EE -> P2

P = problem; TT = tentative theory; EE = error elimination (empirical experiments to test the theory)

We begin with a problem (P1), which we should articulate as well as possible. We then propose a tentative theory (TT) that attempts to solve the problem. We can arrive at this theory in any way we choose, but we must formulate it in such a way that it leaves itself open to empirical tests. The empirical tests and experiments (EE) that we devise for the theory have the aim of trying to falsify it. These experiments usually generate further problems (P2), because they contradict other experimental findings, clash with the theory’s predictions, or cause us to widen our questions. The new problems give rise to a new tentative theory and the need for more empirical testing.

Popper thus gives empirical experiments and observation a completely different role: their job now is to test a theory, not to prove it, and since this is a deductive approach it escapes the problem of induction. Popper takes advantage of the asymmetry between verification and falsification: while no number of empirical observations can ever prove a theory is true, just one such observation can prove that it is false.  All you need is to find one black swan and the theory “All swans are white” is disproved.

Appendix 2: Empiricism and epistemology

Moving to Patrick’s second question, I meant to say that “pure” or “extreme” forms of empiricism are now generally rejected. Those who adopt a relativist epistemology (e.g. most post-modernists) and those who are ignorant of the philosophy of science (e.g. Thornbury) wrongly label their opponents (rationalists who base their arguments on logic and empirical observation) as “positivists”. In fact, nobody in the scientific community is a positivist these days. The last wave of positivists belonged to the famous Vienna Circle, whose members aimed to continue the work of their predecessors (most importantly Comte and Mach) by giving empiricism a more rigorous formulation through the use of recent developments in mathematics and logic. The Vienna Circle, which comprised Schlick, Carnap, Gödel and others, and had Russell, Whitehead and Wittgenstein as interested parties (see Hacking, 1983: 42-44), developed a programme labelled Logical Positivism, which consisted first of cleaning up language so as to get rid of paradoxes, and then of limiting science to strictly empirical statements: in the grand tradition of positivism, they pledged to get rid of all speculation on “pseudo-problems” and concentrate exclusively on empirical data. Ideas were to be seen as “designations” (terms or concepts), formulated in words that needed to be carefully defined in order to be meaningful. The logical positivists are particularly well known for their attempt to answer Hume’s criticism of induction through probability theory, which, crudely, proposed that while a finite number of confirming instances could not prove a theory, the more numerous the confirming instances, the more probable it was that the theory was true. This, like just about all of their work, ended in failure.

Empiricism in Linguistics: Behaviourism  

But empiricism lived on, and in linguistics the division between “empiricist” and “rationalist” camps is noteworthy. The empiricists, who held sway, at least in the USA, until the 1950s, and whose most influential member was Bloomfield, saw their job as fieldwork: armed with tape recorders and notebooks, researchers recorded thousands of hours of actual speech in a variety of situations and collected samples of written text. The data were then analysed in order to identify the linguistic patterns of a particular speech community. The emphasis was very much on description and classification, and on highlighting the differences between languages. We might call this the botanical approach, and its essentially descriptive, static, “naming of parts” methodology depended for its theoretical underpinnings on the “explanation” of how we acquire language provided by the behaviourists.

Behaviourism was first developed in the early twentieth century by the American psychologist John B. Watson, who, influenced by the work of Pavlov and Bekhterev on the conditioning of animals, attempted to make psychological research “scientific” by using only objective procedures, such as laboratory experiments designed to establish statistically significant results. Watson formulated a stimulus-response theory of psychology according to which all complex forms of behaviour are explained in terms of simple muscular and glandular elements that can be observed and measured. No mental “reasoning”, no speculation about the workings of any “mind”, was allowed. Thousands of researchers adopted this methodology, and from the end of the First World War until the 1950s an enormous amount of research on learning in animals and in humans was conducted under this strict empiricist regime. In 1950 behaviourism could justly claim to have achieved paradigm status, and at that moment B. F. Skinner became its new champion. Skinner’s contribution to behaviourism was to challenge the stimulus-response idea at the heart of Watson’s work and replace it with a type of psychological conditioning known as reinforcement. Important as this modification was, what matters here is Skinner’s insistence on a strict empiricist epistemology, and his claim that language is learned in just the same way as any other complex skill: through social interaction.

In sharp contrast to the behaviourists and their rejection of “mentalistic” formulations is the rationalist approach to linguistics championed by Chomsky. Chomsky (in 1959 and subsequently) argued that it is the similarities among languages, what they have in common, that is important, not their differences. In order to study these similarities we must allow the existence of unobservable mental structures and propose a theory of the acquisition of a certain type of knowledge.

Well, you know the story: Chomsky’s theory was widely adopted and became the new paradigm. Currently, badly-informed people like Larsen-Freeman and Thornbury (as opposed to serious scholars like O’Grady, MacWhinney and others) are claiming that no appeals to innate, unobservable mental processes or to modules of mind are necessary to explain language learning. What they don’t appreciate is that, unless, like William O’Grady or Brian MacWhinney, they deal properly with epistemological questions about the status of psychological processes, mental states, mind versus brain, and so on, they are either trying to have their cake and eat it or adopting an untenable empiricist epistemology.

 

References

Gregg, K.R. (2010) Shallow draughts: Larsen-Freeman and Cameron on complexity. Second Language Research, 26(4), 549 – 560.

Hacking, I. (1983) Representing and Intervening. Cambridge: Cambridge University Press.

Hume, D. (1988) [1748] An Enquiry Concerning Human Understanding. Amherst, NY: Prometheus.

Popper, K. R. (1959) The Logic of Scientific Discovery. London: Hutchinson.

Popper, K. R. (1963) Conjectures and Refutations. London: Hutchinson.

Popper, K. R. (1972) Objective Knowledge. Oxford: Oxford University Press.

Popper, K. R. (1974) Replies to Critics. In P. A. Schilpp (ed.), The Philosophy of Karl Popper. La Salle, Ill.: Open Court.

Thornbury, S. (2013) ‘The learning body’. In Arnold, J. and Murphey, T. (eds.), Meaningful Action: Earl Stevick’s influence on language teaching. Cambridge: Cambridge University Press.

Thornbury, S. (2012?) Language as an emergent system. British Council, Portugal: In English. Available here: http://www.scottthornbury.com/articles.html

 

 

Call that evidence? Russ Mayne on Chomsky


Mayne’s talk at the 2014 IATEFL conference exposed Neuro-Linguistic Programming and other claims about “learner styles” for the hocus-pocus they so obviously are. Given that NLP had actually been endorsed by leading ELT figures like Harmer and Rinvolucri, it was a job worth doing, and a job done well. The problem was that, buoyed by all the attention his talk received, Mayne was emboldened to seek bigger prey, and the result raised very serious doubts about his credibility. Could anybody really take Mayne seriously after his post about Chomsky? The post showed not only a complete failure to confront the evidence, but also a hopeless ignorance of the matters discussed, including UG, scientific method, empiricist epistemology, and even the role of evidence in explanations. For someone claiming to be the champion of evidence-based ELT, someone, that is, urging us to base our views on a careful and critical interrogation of the facts, the Chomsky post amounts not just to shooting yourself in the foot; it’s more like hurling yourself off a very high cliff. To mix metaphors, it’s one thing to splash around in a paddling pool where silly ideas about ELT methodology are easily dispatched; it’s quite another to dive into the deep end of a discussion about generative linguistics without the slightest ability to swim.

The post about Skinner and Chomsky, called The Myth of Neat Histories, starts with what I take to be an attempt at humour: an exaggerated sketch of Skinner as the bad guy and Noam as the brave young hero who showed that language was innate and that consequently “no one needed to teach grammar anymore.” That done, Mayne reveals the purpose of the piece, which is to explode “the top 5 myths and misconceptions about the infamous Chomsky/Skinner debate.” Under the guise of helping his less well-informed readers to adopt a more critical view of the world, Mayne presents a number of short, fatuous sections in which cherry-picked “evidence” from a sparse selection of carefully chosen sources is offered, with about the same disregard for scholarship or critical acumen as Harmer shows when talking about testing. Here’s a sample:

Chomsky ideas are accepted by few. The idea of Universal Grammar has been shown to be a myth, the Poverty of Stimulus argument has been rejected, and could only apply to syntax anyway. Vocabulary development in children has clearly been shown to be entirely affected by ‘stimulus’. the generative grammar paradigm he created has been rewritten several times by the (sic) Chomsky himself in a failed attempt to salvage it.

The bits in blue indicate references to books or articles. The evidence that Chomsky’s ideas are accepted by few is that Evans says so in a book published in 2014. UG is a myth because an article by Evans and Levinson in 2009 has shown it to be so. The poverty of the stimulus argument can safely be rejected because two authors rejected it in a 1996 article. (We’ll let the ridiculous comment about syntax go without comment.) Vocabulary development in children was clearly shown to be entirely affected by ‘stimulus’ by Betty Hart in 1995. The claim that “the generative grammar paradigm” has been rewritten several times by Chomsky “in a failed attempt to salvage it” relies entirely on Mayne’s say-so, but he obviously didn’t make up the sentence himself, so it must be true. Mayne doesn’t explain how generative grammar can be a “paradigm” and, at the same time, “accepted by few”; nor does he explain how a paradigm can be “re-written several times”, but these are minor matters. The major matter is that the post is pure bullshit.

Mayne gives further evidence of his ignorance, confusion and poor judgement in his replies to comments, and also in his contributions to Scott Thornbury’s post on Poverty of The Stimulus where he says the following

What I always found staggering (or brilliant?) about Chomsky was how he not only developed this new theory but developed a new set of rules for linguistic inquiry which insulated his theories from criticism. I’m sure Geoff will point out if/where I’m wrong, but Chomsky suggested that the stimulus is impoverished but based this on nothing but logic. A reasonable academic might ask “have you tested this? Where’s your evidence that it is impoverished?’ which may have undone C but he simultaneously introduced the concept that linguistics didn’t need, and in fact should spurn empirical research (I think he called it linguistic stamp collecting). He claimed that being a NS he could just sit in his office and come up with endless examples, -and that would suffice for evidence. How nice.

Where, we might ask, did Mayne get all the absurd misinformation which so continuously staggers him?

  • Who told him that Chomsky developed a new set of rules for linguistic inquiry which insulated his theories from criticism?
  • Where did he read that Chomsky introduced the concept that linguistics should spurn empirical research?
  • Which of Chomsky’s works contains the claim that being a native speaker meant that his own invented examples were evidence enough to support his theory?

Mayne’s comments make it clear that he has no idea what he’s talking about. There are surely good grounds for the view that Mayne hasn’t read Chomsky, and that he hasn’t even read enough of his carefully chosen secondary sources to get a minimal understanding of the issues involved. The most elementary grasp of Chomsky’s work would make it clear that UG theory is carefully stated so as to make it open to empirical investigation, and the most basic study of UG would reveal that the theory has been subjected to a great deal of empirical research. Mayne fails to appreciate the implications of the distinction Chomsky makes between performance and competence, completely misses the point of Chomsky’s decision to concentrate on evidence for linguistic competence other than that provided by spoken corpora, and seems astonished to learn that thousands of empirically-based studies (using grammaticality judgement tests and other instruments) have been carried out in the last 50 years to test Chomsky’s theory.

Mayne jumps on the bandwagon of emergentism to support his claim that no appeal to innate knowledge is required to explain language learning. Once again, nothing in what he says gives the impression that he knows what he’s talking about; perhaps emergentism recommends itself to Mayne because of his “personal dislike of everything Chomskyan.” Whatever the reasons, there’s no use looking to him for a reasoned critique of connectionist theories of SLA. As I said in reply to Scott Thornbury (whose enthusiastic writings on emergentism are a bit better informed than Mayne’s, i.e. not completely uninformed), data from corpora and appeals to general learning theories might be a starting point for an explanation that doesn’t rely on innate knowledge, but simply repeating that they are enough to explain L1 acquisition isn’t good enough. Before Mayne makes another disastrous excursion out of the shallow end, he should at least get a good Dummies’ Guide to the next batch of work he decides to criticise. When you look at the hard graft that MacWhinney and his colleagues, or Nick Ellis and his, are putting in, and when you consider how meagre the results are to date (see Gregg, 2003, for a review which, while not up to date, gives a good indication of the size of the task), you can’t help feeling that Mayne would be well advised to steer clear of theories of language learning.

Gregg, K.R. (2003) The state of emergentism in second language acquisition. Second Language Research, vol. 19, 2, 95-128.

Scheffler on The Lexical Approach

In January 2015, ELTJ published a commentary with the title Lexical priming and explicit grammar in foreign language instruction, which provoked a reply and counter-reply in subsequent issues. Scheffler (2015) argues that, pace Hoey’s theory of lexical priming, “lexis should be subordinated to grammar in FL teaching.”  Scheffler reminds us that Hoey sees lexical priming as “the mechanism that drives language acquisition”; that the successful language learner recognises, understands and produces lexical phrases as ready-made chunks; and that, consequently, teachers should concentrate on vocabulary in context and particularly on fixed expressions in speech. Scheffler’s reply is that mastery of lexical associations takes too long to be a viable objective for classroom-based foreign language learning and that grammar-based teaching is more efficacious.

According to Scheffler, in order to reach proficiency through learning lexical chunks, EFL learners have two options: either they use the same subconscious mechanism that operates in L1 acquisition, or they consciously apply themselves to the study of appropriate language material. As to the first option, studies of first language acquisition cited by Scheffler show that by the time they’re five years old children have encountered more than twelve million meaningful utterances in communicative context.  Scheffler comments: “No classroom input can come close to this amount of linguistic data”. Regarding the second option, mastering lexical associations through the conscious study of chunks,  collocations, etc., Scheffler cites Pawley and Syder’s (1983) claim that native speakers know ‘hundreds of thousands’ of memorized sequences, and argues that “in the classroom setting, committing to memory even a small subset of these would be a daunting task.”

So, says Scheffler, given that neither the subconscious nor the conscious learning of lexical chunks is a viable option, classroom time should be spent focusing more on grammatical systems than on lists of lexical phrases. He cites Spada and Tomita’s (2010) meta-analysis as convincing evidence that explicit grammar instruction is more effective than implicit instruction for both simple and complex English grammar structures. Scheffler concludes by suggesting that classroom EFL teaching should provide “a combination of grammatically oriented presentation and lexically oriented practice” involving explicit grammar explanations followed by practice to encourage lexical priming. Scheffler offers this example: the teacher presents and explains the present perfect as a grammatical category and then provides practice in the form of drills. The drills allow links between the most frequent lexical instantiations of the present perfect to be established, and these links are further strengthened in communicative activities and in exposure outside the classroom. Such procedures offer learners both explicit and implicit instruction. Explicit instruction aims at linguistic awareness, proceduralization of explicit knowledge, and lexical priming, while implicit instruction reinforces primings established in class and gradually creates new ones.

I’d say that Scheffler’s suggestion is a bit of a dog’s dinner, mutton dressed as lamb, old wine in new bottles, a botched attempt to have your cake and eat it. Rather than effortlessly trot out more examples of my store of food-and-drink-related-put-downs, let me be more specific:

First, no attempt is made to evaluate Hoey’s theory of language or of SLA. What are the strengths and weaknesses of a theory which makes collocational priming the key construct in both a description of language and an explanation of how it is learned?

Next, Spada and Tomita’s (2010) meta-analysis in no way supports the view that the presentation and practice of grammatical categories is the best, or even a good, way to organise classroom-based ELT. The authors limit themselves to the claim that adult learners sometimes benefit from explicit attention to form and from opportunities to practise explicit knowledge. Nothing in Spada and Tomita’s review suggests that organising a syllabus around the presentation and practice of pre-selected bits of grammar like the present perfect is advisable; nothing in the review challenges the findings of studies which support the view that most explicit grammar teaching falls on deaf ears most of the time, and that teaching can affect the rate but not the route of interlanguage development.

Finally, Scheffler begins by saying that basing ELT on the implicit or explicit learning of lexical chunks is unrealistic because there’s too much to learn. That’s a good point, but why then does he end up trying to cram all this lexical chunk learning into the last part of the lesson? The claim is that in a grammar-based PPP approach which includes communicative activities, the explicit grammar teaching will help lexical priming, while the communicative activities will reinforce primings and create new ones. I can see no reason for thinking that this would work. And, of course, it contradicts Hoey’s theory. A grammar-based approach views lexical items as isolated elements organised by syntax, and this is exactly the view that Hoey wants to challenge. What sense does it make to expect the explicit teaching of the forms of the present perfect to facilitate fabulous amounts of lexical priming?

Despite its shortcomings, Scheffler’s article does draw attention to the fact that proponents of the lexical approach have never given a proper account of how exposure to massive amounts of what Lewis (1993) calls “suitable input” should be organised into a syllabus. Walkley and Dellar have published coursebooks which they claim exemplify the lexical approach, but I’ve never managed to get review copies and I can’t bring myself to buy them. As far as I can gather from Dellar’s public pronouncements, he thinks teaching should concentrate on giving learners repeated exposure to the most frequent words in English in context, but important questions remain unanswered.

  1. How should the repeated exposure to massive numbers of lexical chunks be organised? Is frequency of occurrence in the biggest corpora the only criterion for the selection and presentation of lexical items, or are there others?
  2. How do teachers make classroom sessions dedicated to “repeated exposure to the most frequent words in English” interesting and motivating?
  3. How much input do learners need? How do Walkley and Dellar respond to the research findings which suggest that it’s unreasonable to expect FL classroom learners to remember even a small subset of what native speakers know? Sinclair (2004: 282) warns of “the risk of a combinatorial explosion, leading to an unmanageable number of lexical items” and Harwood (2002: 142) warns against “learner overload”, insisting that “implementing a lexical approach requires a delicate balancing act” between exploiting the richness of fine-grained corpus-derived descriptions and keeping the learning load at a manageable level.
  4. How do teachers help learners notice and store the thousands of lexical chunks which are required for a minimum level of proficiency? Put another way, how do teachers help learners turn massive loads of input into an ability to use the language for effective communication?

The fact is that, pace Hoey and Lewis, L1 acquisition is not the same as the acquisition which takes place in FL classrooms. As Granger (2011) points out, Lewis claims that “phrases acquired as wholes are the primary resource by which the syntactic system is mastered” (Lewis, 1993: 95). This assertion, frequently found in the Lexical Approach literature, is based on L1 acquisition studies which demonstrate that children first acquire chunks and then progressively analyse the underlying patterns and generalise them into regular syntactic rules (Wray, 2002). But, as Wray points out in her overview of findings on formulaicity in SLA, in classroom-based L2 acquisition learners don’t get enough exposure for the ‘unpacking’ process to take place, and as a result formulaic sequences don’t contribute, as they do in L1 acquisition, to the mastery of grammatical forms. Granger concludes that “while lexical phrases are likely to have some generative role in L2 learning, it would be a foolhardy gamble to rely primarily on the generative power of lexical phrases.” She goes on to cite Pulverness (2007: 182-183), who points to the risk of the ‘phrasebook effect’, whereby lexical items accumulate in an arbitrary way as learners are presented with an ever-expanding lexicon without being given a structural framework within which to make use of all the lexis.

Scheffler’s main objective is to defend the grammar-based syllabus against Hoey’s suggestion that grammar should be subordinated to lexis in FL teaching. I think he does a poor job of it, but at least he gives voice once again to some unanswered questions. We’ve been waiting for answers for a while now, and I doubt that we’ll get them any time soon. Meanwhile, perhaps we should turn our attention to the newer but somehow more interesting question of what we are to make of Hoey’s collocational priming: can this construct lead to a satisfactory explanation both of language and of SLA?

References

Granger, S. (2011) From phraseology to pedagogy: challenges and prospects. In Uhrig, P., Chunks in the Description of Language: A Tribute to John Sinclair. Berlin and New York: Mouton de Gruyter.

Harwood, N. (2002) Taking a lexical approach to teaching: principles and problems. International Journal of Applied Linguistics 12/2: 139-155.

Lewis, M. (1993). The Lexical Approach. The State of ELT and a Way Forward. Hove: Language Teaching Publications.

Pawley, A. and Syder, F. (1983) Two puzzles for linguistic theory: nativelike selection and nativelike fluency. In J. C. Richards and R. Schmidt (eds.), Language and Communication. London: Longman.

Pulverness, A. (2007) Review of McCarthy, M. & F. O’Dell, English Collocations in Use, ELT Journal 61: 182-185.

Scheffler, P. (2015) Lexical priming and explicit grammar in foreign language instruction. ELT Journal 69/1: 93-96.

Sinclair, J. (ed.) (2004) How to Use Corpora in Language Teaching. Amsterdam: John Benjamins.

Spada, N. and Tomita, Y. (2010) Interactions between type of instruction and type of language feature: a meta-analysis. Language Learning 60/2: 263–308.

Wray, A. (2002) Formulaic Language and the Lexicon. Cambridge: Cambridge University Press.

A Sketch of a Process Syllabus


Introduction

G. K. Chesterton made the good point that “if a thing is worth doing, it is worth doing badly.” Rather than work patiently and diligently on a carefully-crafted Process syllabus which perfectly captures the progressive educational principles that underlie it, I offer this very elementary sketch.

Rationale

In Freire’s (2000) view of adult education, personal freedom and the development of individuals can only occur mutually with others: Every human being, no matter how ‘ignorant’ he or she may be, is capable of looking critically at the world in a ‘dialogical encounter’ with others. In this process, the old, paternalistic teacher-student relationship is overcome.

To paraphrase Breen (1987), the Process syllabus prioritises classroom decision-making on the assumption that participation by learners in decision-making will be conducive to learning. Decision-making can be seen as an authentic communicative activity in itself. The objective of the Process syllabus is to serve the development of a learner’s communicative competence in a new language by calling upon the communicative potential which exists in any classroom group. It is based on the principle that authentic communication between learners will involve the genuine need to share meaning and to negotiate about things that actually matter and require action on a learner’s part. The Process syllabus proposes that metacommunication and shared decision-making are necessary conditions of language learning in any classroom.

What Does the Process Syllabus Provide?

Two things: a plan relating to the major decisions which teacher and learners need to make during classroom language learning, and a bank of classroom activities which consist of sets of tasks.

The plan consists of answers to questions which the teacher and learners discuss and agree on together. The questions refer to the purposes of language learning; the content or subject matter which learners will work on; the ways of working in the classroom; and the means of evaluating the efficiency and quality of the work and its outcomes. Clearly, decisions made about these areas will relate to one another, and they will generate the particular process syllabus of the classroom group. They will also lead to agreed working procedures within the class: a ‘working contract’ to be followed for an agreed time, evaluated in terms of its helpfulness and appropriateness, and subsequently refined or adapted for a further agreed period. This joint decision-making will lead to a particular selection of activities and tasks.

As for the materials bank, this needs to be built by each local centre, taking into consideration the known general and specific needs of its students. While today a great deal of material can be found on the internet, there’s no denying that implementing a process syllabus requires an initial investment in both materials and teacher time in order to assemble and organise a good materials bank. In 1987, Breen had this to say:

A classroom group adopting a Process syllabus would deduce and implement its own content syllabus; a syllabus of subject-matter in the conventional sense would be designed, implemented, and evaluated within the Process syllabus. In circumstances where an external pre-planned syllabus already existed and had to be undertaken by the teacher with his or her learners, the decisions for classroom language learning would be related directly to such a pre-planned syllabus. As a result, the external syllabus may be incorporated within the group’s process — with or without modifications as decided upon by the group — and used as a continual reference point — or source of helpful criteria — during decision-making and evaluation. It is more than likely that any external syllabus will be modified as the group works with it. In sum, the Process syllabus is a context within which any syllabus of subject-matter is made workable.

I think it’s very important to do without any external syllabus, so my proposal is based on the assumption that the teacher can call on a rich diversity of well-organised materials, where the diversity and the organisation are both crucial factors. Currently, few ELT centres provide such a materials bank, and this fact has led Rose Bard and me to the conclusion that we need to build a materials bank comprised entirely of materials which can be downloaded from a dedicated web site. The materials bank will grow and teachers will, of course, supplement the central hub with their own locally produced materials. The “central” materials will be organised in a database using the following provisional fields:

  1. Access number
  2. Source
  3. Medium
  4. Activity type
  5. Level
  6. Topic
  7. Sub-topic
  8. Grammar area
  9. Function
  10. Comments

The Access Number is a simple code which defines the level of difficulty (1-6), the ‘medium’ (‘A’ for audio; ‘V’ for video; ‘T’ for text; ‘I’ for Internet; etc.), and a sequential number of no other significance than to allow us to keep materials in order. Thus, for example, “4.V.19” is the nineteenth video segment for the fourth level of English. The creation of a database allows those creating the materials bank to produce indexes for as many fields or combinations as they want, and this allows teachers to quickly see what video material is available at their level, or, for example, to find a reading text on tourism at that level. But teachers can also decide that they will concentrate on a certain topic, tourism, for example, and then confect a multi-activity task on that topic. The indexes tell them exactly what is available on this subject in each medium at each level. They can define the task for themselves or choose a ready-made task from the ‘Activities’ index. If they design their own task, they could start with a reading text, go on to an information-gap activity, then use video, then do an Internet exercise, then move to discussions, presentations, and reports, staying all the time on the topic of tourism and at their chosen level.
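To make the coding and indexing scheme concrete, here is a minimal sketch in Python. It is my own illustration, not part of the original proposal: the record fields follow the provisional list above, but all names, the parsing logic, and the sample entries are assumptions for demonstration only.

```python
# A hypothetical sketch of the materials-bank records and the Access Number
# code described above. Field names follow the provisional list in the post;
# everything else (names, sample data) is illustrative, not prescribed.

from dataclasses import dataclass

# Medium letters as described in the post ('A' audio, 'V' video, 'T' text, 'I' Internet)
MEDIA = {"A": "audio", "V": "video", "T": "text", "I": "internet"}

@dataclass
class Material:
    access_number: str   # e.g. "4.V.19": level 4, video, item 19
    source: str
    medium: str
    activity_type: str
    level: int
    topic: str
    sub_topic: str = ""
    grammar_area: str = ""
    function: str = ""
    comments: str = ""

def parse_access_number(code: str):
    """Split a code like '4.V.19' into (level, medium name, sequence number)."""
    level, medium, seq = code.split(".")
    return int(level), MEDIA[medium], int(seq)

def index(bank, *, level=None, medium=None, topic=None):
    """Return all materials matching the given field values (a simple 'index')."""
    return [m for m in bank
            if (level is None or m.level == level)
            and (medium is None or m.medium == medium)
            and (topic is None or m.topic == topic)]

# A tiny illustrative bank
bank = [
    Material("4.V.19", "BBC", "video", "discussion", 4, "tourism"),
    Material("4.T.02", "local", "text", "reading", 4, "tourism"),
    Material("2.A.07", "local", "audio", "gap-fill", 2, "sport"),
]

print(parse_access_number("4.V.19"))               # (4, 'video', 19)
print(len(index(bank, level=4, topic="tourism")))  # 2
```

A teacher planning a multi-activity task on tourism at level 4 would, in this sketch, simply call `index(bank, level=4, topic="tourism")` to see everything available across media.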

So let’s look now at an example.

An Example Process Syllabus

In the following, I’ll refer to the teacher as “he”, supposing him to be a younger version of me.

Type of Student: Adult

Number of Students: 12

Level: Mid-Intermediate (CEFR: B2). The students should have done a proficiency test, ideally including an interview, prior to enrolling in the course.

Course Duration: 100 hours; 6 hours a week.

Objectives: The main objective of the course is to improve the students’ ability to use English for professional purposes. Priority is given to oral communication.

Step 1    

  1. The teacher greets everybody, introduces himself, and then asks students to get into four groups of 3 (A, B, and C) and to use questions displayed on the whiteboard in order to find out personal and professional information about each other. He then asks A to introduce B, B to introduce C, and C to introduce A.
  2. After that he explains that the course will use a process syllabus and gives a quick description of what that involves.
  3. (This bit of the class aims to reaffirm the feeling that the main thrust of the course is communicative oral practice.) “What’s in the news?”. The teacher brainstorms things in the news – local, national, international, sport, scandal, business, etc., – in order to generate a list of 6 or 7 items on the whiteboard. Students are put into groups of 3 and each group has to choose 1 of the items on the list. They then discuss the item and prepare a report. When everybody’s ready, the class gets back together and then each group gives its report (A gives the background, B gives the news, C gives the group’s opinion). The topic’s then open for others in the class to have their say.
  4. After a break, the teacher gives out a needs analysis worksheet: see here for a possible format. (To get back to this page, hit the arrow at the top left of the screen.) When everybody has filled in the form, the class divides into 3 groups and discusses their answers. The teacher asks one member of each group to report on the answers to each of the 7 questions. This is a long activity, and will take the rest of the class time. The teacher collects the worksheets before everybody leaves.

Step 2

Before the next class, the teacher looks through the needs analysis forms and designs the next 10 hours of the course, confecting tasks from activities and materials in the materials bank. In fact he’s already done most of this work in previous courses, but just needs to fine-tune the tasks in line with the data collected.

Step 3

In the next classes the teacher leads students through a series of tasks involving activities such as problem-solving, information-gap, data gathering, case studies, role plays, presentations, debates, discussions, etc. involving group work, pair work, and whole class work, and using a variety of media. Various types of focus on form, vocabulary building, feedback and correction are used, and homework includes written work, and participation in an on-line discussion forum set up for this course.  Apart from their overt usefulness, the idea of these classes is to give students a taste of a wide variety of activities and formats.

During the classes the teacher tries to put into practice the Methodological Principles (MPs) re-stated in Long (2015), which I’ve summarised in a previous post. In fact, there are very important differences between Long’s TBLT syllabus proposal and this one, which will need discussing at some point, but I think the important points of agreement are:

  • MP2: Promote Learning by Doing. Practical hands-on experience with real-world tasks brings abstract concepts and theories to life and makes them more understandable;
  • MP4: Provide Rich Input. Adult foreign language learners require not just linguistically complex input, but rich input (i.e., realistic samples of discourse use surrounding accomplishment of tasks).
  • MP5: Encourage Inductive (“Chunk”) Learning. Learners need exposure to realistic samples of target language use, and then need help to incorporate, store and retrieve chunks of that input as whole chunks.
  • MP6: Focus on Form. A focus on meaning can be improved upon by periodic attention to formal aspects of the language. This is best achieved during an otherwise meaning-focused lesson, and using a variety of pedagogic procedures where learners’ attention is briefly shifted to linguistic code features, in context, to induce “noticing”, when students experience problems as they work on communicative tasks. The most difficult practical aspect of focus on form is that, to be psycholinguistically relevant, it should be employed only when a learner need arises, thus presenting a difficulty for the novice teacher, who may not have relevant materials to provide. Where face-to-face interaction is the norm, as in L2 classrooms, recasting is the obvious pedagogical procedure. Once an L2 problem has been diagnosed for a learner, then pedagogical procedures may be decided upon and materials developed for use when the need next arises.
  • MP7: Provide Negative Feedback. Recasts are proposed as an ideal (but not the only) form of negative feedback.
  • MP8: Respect Developmental Processes and “Learner Syllabuses”. I’ve already said enough about this in previous posts dealing with interlanguage development.

Step 4

After approx. 12 hours of classroom time, the teacher holds “Feedback and Planning Session 1”, where everybody reflects on what happened and plans the next part of the course. See here for a possible format of the worksheet. (To get back to this page, hit the arrow at the top left of the screen.) This session is obviously a pivotal part of the process syllabus and requires careful handling by the teacher. I think training in how to conduct these sessions is required for new teachers, and I’ll devote a separate post to discussing such sessions. Let me just say here that the teacher must avoid defending himself against any criticisms, and must also discourage students from simply saying “Everything’s fine: carry on!” With a bit of practice, these sessions become very dynamic and rewarding encounters, and succeed in giving the teacher a good idea of how to proceed. I think it’s a good idea for the teacher to video-record the session so as to be freed of the task of taking notes.

Step 5   

Before the next class, the teacher assimilates the data from the planning session and designs the next 20 hours of the course, again confecting tasks from activities and materials in the materials bank, but this time based on what the students have indicated they want to do.

Step 6

The teacher presents his plan at the next class, and proceeds with its implementation. At around Hour 30 there’s “Planning Session 2”, where feedback is again sought and the next part of the course is planned. The whole course thus comprises about 5 cycles.

As for assessment, I refer you to the final section of my post on Test Validity, where I briefly summarise Fulcher’s important distinction between large-scale testing and classroom assessment. To the extent that students need an external assessment of their current language proficiency when they finish the course, they have various alternatives, such as those offered by TOEFL or the Cambridge Examination Board.

Conclusion

This is a rough sketch of a possible process syllabus and I’m aware that it raises lots of important questions. Most importantly, in my opinion, it dispenses with any proper needs analysis and relies on the use of what Long (2015) would call a “hit and miss” approach to materials and task design. But it’s a start: it flies a very fragile kite.

To the objection that the syllabus relies on the existence of a materials bank, I can only reply that there is an abundance of cheap or free material available for ELT these days, but I agree that it needs organising for a particular school’s or institute’s needs. Similarly, to the objection that the teacher is expected to do much work preparing the tasks, my reply is that it doesn’t actually involve that much work, although an initial effort is certainly required. As the saying goes: Where there’s a will there’s a way.

The most important element in the proposal is the negotiation between teacher and students, and it’s this element which needs to be tested by being put into practice by teachers in their local environments. I’ve done it myself and I know lots of other teachers who’ve done it, but more serious study is required to properly test the assertions made here. In my experience, students soon get used to the new roles, and the inevitable initial scepticism is soon overcome. The students’ contribution to decision-making, and everybody’s appreciation of the new approach, grow as the course develops; it is, indeed, a virtuous circle. I’m aware of the need to take into account local cultural issues, but, on the other hand, we should not bow to stereotypes. There are schools in England where students are punished for speaking out of turn or for challenging the authority of the teacher, and there are schools in South Korea where students are encouraged to contribute to decisions about what and how they learn. The principles underlying a process syllabus reflect a libertarian educational philosophy which in recent times has perhaps been best articulated by Freire, but which has echoes in all the major cultures of the world.

References

Breen, M.P. (1987) Contemporary Paradigms in Syllabus Design Part II. Language Teaching, 20, pp 157-174.

Freire, P. (2000) Pedagogy of the Oppressed: 30th Anniversary Edition. Bloomsbury Academic.

Long, M. H. (2015) Second Language Acquisition and Task-Based Language Teaching. Wiley-Blackwell.

Dellar and Lexical Priming


In a recent webinar (which I read about in a post by Leo Selivan) Hugh Dellar talked about colligation. I missed the webinar and I found Selivan’s report of it confusing, so I took a look at the slides Dellar used. Early on in his presentation, Dellar quotes Hoey (2005, p. 43):

The basic idea of colligation is that just as a lexical item may be primed to co-occur with another lexical item, so also it may be primed to occur in or with a particular grammatical function. Alternatively, it may be primed to avoid appearance in or co-occurrence with a particular grammatical function. 

I don’t know how Dellar explained Hoey’s use of the term “primed” in his webinar, but I understand priming to be based on the idea that each word we learn becomes associated with the contexts in which we repeatedly encounter it, so much so that we subconsciously expect and replicate these contexts when we hear and speak the words. The different types of information that the word is associated with are called its primings.

What does Hoey himself say? Hoey says that we get all our knowledge about words (their collocations, colligations, and so on) by subconsciously noticing everything that we have ever heard or read, and storing it in memory.

The process of subconsciously noticing is referred to as lexical priming. … Without realizing what we are doing, we all reproduce in our own speech and writing the language we have heard or read before. We use the words and phrases in the contexts in which we have heard them used, with the meanings we have subconsciously identified as belonging to them and employing the same grammar. The things we say are subconsciously influenced by what everyone has previously said to us (Hoey, 2009 – Lexical Priming)  

Hoey rejects Chomsky’s view of L1 acquisition and claims that children learn language starting from a blank slate and then building knowledge from subconsciously noticed connections between lexical items. All language learning (child L1 and adult SLA alike) is the result of repeated exposure to patterns of text, where the more the repetition, the more chance for subconscious noticing, and the better our knowledge of the language.

The weaknesses of this theory include the following:

  • Hoey does not explain the key construct of subconscious noticing;
  • he does not explain how the hundreds of thousands of patterns of words acquired through repeatedly encountering and using them are stored and retrieved;
  • he does not acknowledge any limitations in our ability to remember, process or retrieve this massive amount of linguistic information;
  • he does not reply to the argument that we can and do say things that we haven’t been trained to say and that we have never heard anybody else say, which contradicts the claim that what we say is determined by our history of priming.
  • while Hoey endorses Krashen’s explanation of SLA (it’s an unconscious process dependent on comprehensible input), Krashen’s Natural Order Hypothesis contradicts Hoey’s lexical priming theory, since, while the first claims that SLA involves the acquisition of grammatical structures in a predictable sequence, the second claims that grammatical structures are lexical patterns and that there is no order of acquisition.

These limitations in Hoey’s theory get no mention from Dellar, who, having previously modelled his lexical approach on Michael Lewis, now seems to have fully embraced Hoey’s lexical priming theory. Let’s look at how this theory compares to a rival explanation. (I’m here making use of material I’ve used in previous posts about Dellar & Hoey.)

Interlanguage Grammar versus Lexical Priming

In the last 40 years, great progress has been made in developing a theory of SLA based on a cognitive view of learning. It started in 1972 with the publication of Selinker’s paper, in which he argued that L2 learners have their own autonomous mental grammar, which came to be known as interlanguage grammar: a grammatical system with its own internal organising principles, which may or may not be related to the L1 and the L2.

One of the first stages of this interlanguage to be identified was that for ESL questions. In a study of six Spanish students over a 10-month period, Cazden, Cancino, Rosansky and Schumann (1975) found that the subjects produced interrogative forms in a predictable sequence:

  1. Rising intonation (e.g., He works today?),
  2. Uninverted WH (e.g., What he (is) saying?),
  3. “Overinversion” (e.g., Do you know where is it?),
  4. Differentiation (e.g., Does she like where she lives?).

A later example is in Larsen-Freeman and Long (1991: 94). They pointed to research which suggested that learners from a variety of different L1 backgrounds go through the same four stages in acquiring English negation:

  1. External (e.g., No this one./No you playing here),
  2. Internal, pre-verbal (e.g., Juana no/don’t have job),
  3. Auxiliary + negative (e.g., I can’t play the guitar),
  4. Analysed don’t (e.g., She doesn’t drink alcohol.)

In developing a cognitive theory of SLA, the construct of interlanguage became central to the view of L2 learning as a process by which linguistic skills become automatic. Initial learning requires controlled processes, which require attention and time; with practice the linguistic skill requires less attention and becomes routinized, thus freeing up the controlled processes for application to new linguistic skills. SLA is thus seen as a process by which attention-demanding controlled processes become more automatic through practice, a process that results in the restructuring of the existing mental representation, the interlanguage.

So there are two rival theories of SLA on offer here: Hoey’s theory of lexical priming (supported by Dellar, Selivan and others) and Selinker’s theory of interlanguage (developed by Long, Robinson, Schmidt, Skehan, Pienemann and others). Dellar should resist giving the impression that Hoey’s theory is the definitive and unchallenged explanation of how we learn languages.

Errors and L1 priming

In his presentation Dellar says “All our students bring L1 primings” and gives these examples from Polish.

On chce zebym studiowal prawo.

Zimno mi.

Jak ona wyglada?

These L1 primings “colour L2”:

He wants that I study Law.

It is cold to me.

How does she look?

Dellar says that these are not grammar errors, but rather “micro-grammatical problems” caused by a lack of awareness of how the words attach themselves to grammar. The solution Dellar offers to these problems is to provide learners with lots of examples of “correct colligation and co-text”.

He wants me to study Law.

My dad’s quite pushy. He wants me to study Business, but I’m not really sure that I want to.

It’s really cold today.  It’s freezing!  I’m freezing!

What does she look like?  Oh, she’s quite tall . . . long hair . . . quite good-looking, actually. Well, I think so anyway.

This kind of correction is, says Dellar, “hard work, but necessary work”. It ensures that “students are made aware of how the way they think the language works differs from how it really works.” Dellar concludes that

 Hoey has shown the real route to proficiency is sufficient exposure. Teachers can shortcut the priming process by providing high-reward input that condenses experience and saves time.

We may note how Hoey, not Krashen, gets the credit for showing that the real route to proficiency is sufficient exposure; how priming now explains learning; and how teaching must now concentrate on providing shortcuts to the priming process.

To return to Dellar’s “micro-grammatical problems”, we are surely entitled to ask if what SLA researchers for 50 years have referred to as the phenomenon of L1 transfer is better understood as the phenomenon of L1 primings. Recall that Pit Corder argued in 1967 that learner errors were neither random nor best explained in terms of the learner’s L1; errors were indications of learners’ attempts to figure out an underlying rule-governed system.  Corder distinguished between errors and mistakes: mistakes are slips of the tongue and not systematic, whereas errors are indications of an as yet non-native-like, but nevertheless, systematic, rule-based grammar.  Dulay and Burt (1975) then claimed that fewer than 5% of errors were due to native language interference, and that errors were, as Corder suggested, in some sense systematic.  The morpheme studies of Brown in L1 (1973) led to studies in L2 which suggested that there was a natural order in the acquisition of English morphemes, regardless of L1.  This became known as the L1 = L2 Hypothesis, and further studies all pointed to systematic staged development in SLA.  The emerging cognitive paradigm of language learning perhaps received its full expression in Selinker’s (1972) paper which argues that the L2 learners have their own autonomous mental grammar (which came to be known, pace Selinker, as interlanguage (IL) grammar), a grammatical system with its own internal organising principles, which may or may not be related to the L1 and the L2.

All of this is contradicted by Dellar, who insists that L1 priming explains learner errors. 

Language development through L2 priming versus processing models of SLA

Explaining L2 development as a matter of strengthening L2 primings between words contradicts the work of those using a processing model of SLA, and I’ll give just one example. McLaughlin (1990) uses the twin concepts of “Automaticity” and “Restructuring” to describe the cognitive processes involved in SLA. Automaticity occurs when an associative connection develops between a certain kind of input and some output pattern. Many typical greetings exchanges illustrate this:

Speaker 1: Morning.

Speaker 2: Morning. How are you?

Speaker 1: Fine, and you?

Speaker 2: Fine.

Since humans have a limited capacity for processing information, automatic routines free up more time for such processing. To process information one has to attend to, deal with, and organise new information.  The more information that can be handled routinely, automatically, the more attentional resources are freed up for new information.  Learning takes place by the transfer of information to long-term memory and is regulated by controlled processes which lay down the stepping stones for automatic processing.

The second concept, restructuring, refers to qualitative changes in the learner’s interlanguage as they move from stage to stage, not to the simple addition of new structural elements. These restructuring changes are, according to McLaughlin, often reflected in “U-shaped behaviour”, which refers to three stages of linguistic use:

  • Stage 1: correct utterance,
  • Stage 2: deviant utterance,
  • Stage 3: correct target-like usage.

In a study of French L1 speakers learning English, Lightbown (1983) found that, when acquiring the English “ing” form, her subjects passed through the three stages of U-shaped behaviour.  Lightbown argued that as the learners, who initially were only presented with the present progressive, took on new information – the present simple – they had to adjust their ideas about the “ing” form.  For a while they were confused and the use of “ing” became less frequent and less correct.

According to Dellar (following Hoey) this “restructuring” explanation is wrong: what’s actually happening is that the L2 primings are not getting enough support from “high-reward input”.

Conclusion

There are serious weaknesses in the lexical priming theory as a theory of SLA, and few reasons to think that it offers a better explanation of the phenomena studied by SLA scholars, including the phenomenon of L1 transfer, than processing theories which use the construct of interlanguage grammar. Even if there were, Dellar seems not to have grasped that his newly-adopted explanation of language learning and his long-established teaching methods contradict each other. If lexical priming is a subconscious process which explains language learning, then the sufficient condition for learning is exposure to language and opportunities to strengthen and extend lexical primings. All the corrective work that Dellar recommends, all that “hard but necessary work” to ensure that “students are made aware of how the way they think the language works differs from how it really works”, is useless interference in a natural process involving the unconscious acquisition of lexical knowledge.

 

References

Cazden, C., Cancino, E., Rosansky, E. and Schumann, J. (1975) Second language acquisition sequences in children, adolescents and adults. Final report submitted to the National Institute of Education, Washington, D.C.

Corder, S. P. (1967) The significance of learners’ errors. International Review of Applied Linguistics 5, 161-9.

Dulay, H. and Burt, M. (1975) Creative construction in second language learning and teaching. In Burt, M and Dulay, H. (eds.), New directions in second language learning, teaching, and bilingual education. Washington, DC: TESOL, 21-32.

Hoey, M. (2005) Lexical Priming: A New Theory of Words and Language. London: Routledge.

Krashen, S. (1981) Second language acquisition and second language learning. Oxford: Pergamon.

Larsen-Freeman, D. and Long, M. H. (1991) An introduction to second language acquisition research. Harlow: Longman.

McLaughlin, B. (1990) “Conscious” versus “unconscious” learning. TESOL Quarterly 24, 617-634.

Selinker, L. (1972) Interlanguage.  International Review of Applied Linguistics 10, 209-231.

British Council Cultural Claptrap


In an article for the British Council website, Ian Clifford asks two questions:

Do learner-centred approaches work in every culture?

Is it time to challenge Western assumptions about education, especially when it comes to promoting ‘good teaching approaches’ in the developing world?

I bet you won’t fall off your chair when I tell you that Clifford thinks the answers to these questions are “No” and “Yes”.

Clifford starts by saying that most Western educators think learner-centred education represents “everything that’s good and wholesome in education.” Just in case that sounds a bit blasé, Clifford gets more scholarly and says that learner-centred educational practice can be traced back to ‘child-centred’ education which

draws on the work of 18th century philosophers such as Rousseau and Locke, who suggested that teachers should intervene as little as possible in the natural development of children.

Unfortunately, this attempt at scholarship fails, since in fact Rousseau and Locke suggested the opposite. They were both pioneers in promoting child education where the teacher held absolute authority, and, in Locke’s case, children were expected to do exactly as they were told by teachers under threat of dire corporal punishment.

Clifford then asks “What exactly do these (learner-centred) approaches amount to in the classroom?” The answer to this important question is that some educators associate learner-centred approaches with group work, some think it means teachers let learners find out for themselves; and some can’t identify any method at all. You might think that this is not a very “exact” answer, but never mind, because the main point is to establish the different perceptions of learner-centred and teacher-centred approaches. While a learner-centred approach represents everything good, a teacher-centred approach is generally regarded as

“authoritarian and hierarchical, encouraging rote learning and memorisation, without any real understanding.”

Proceeding with his absurd parody of the two approaches (in the West we blithely abandon learners to their own devices, while in the East learners bang away at drills and memorise things without achieving real understanding), Clifford cites Kirschner’s work, which shows that

“leaving learners to solve problems for themselves leads to brain overload.”

Pretty persuasive evidence, don’t you think? And as if this scary brain overload weren’t reason enough to bury learner-centred approaches once and for all, Clifford goes on to give a skewed summary of two more studies. First, Clifford claims that a 2014 meta-study

“favoured ‘direct teaching’ over approaches that involved little teacher instruction such as ‘discovery learning’.”

Quite apart from the fact that “discovery learning” is not a method associated with ELT, if you click on the link and read the summary at the top for yourself, you’ll see that that’s not an accurate summary of the findings. Then Clifford cites Schweisfurth (2011), who reviewed 72 articles about projects promoting “student-centred” approaches and concluded that they record ‘a history of failures great and small’. Clifford says that the “most important” reason for failure is “cultural mismatch.” He explains:

“Approaches to teaching based on a Western idea of the individual don’t fit well in cultures which emphasise group goals over individual needs. In such cultures, teachers are expected to be authoritative and learners obedient.”

This is a lazy, inaccurate, and misleading report of the findings. Nowhere in the entire article does Schweisfurth use the term “cultural mismatch”, nor does she say that cultural divergence is the most important reason for failure in any of the 72 cases studied. And of course she says absolutely nothing to warrant Clifford’s crude claim about the assumed roles of teachers and learners. On the contrary, she calls for analyses which “help to take us beyond the crude binary codes of Teacher-Centred Education versus Learner-Centred Education, or implementation success versus failure.”

Finally, Clifford gives “The case of Burma”, where, he says, various attempts to implement a ‘child-centred approach’ have failed. Who do you think have been called in to sort out the accumulated mess caused by the well-intentioned but misguided advocates of a learner-centred approach? Yes! The British Council – that hallowed institution famed for subordinating promotion of its own national culture to the greater mission of fostering global cultural diversity! Clifford proudly tells us that the British Council’s “English for Education College Trainers” project in Burma is going to

“support local teachers to do whole-class teaching more effectively and interactively and in the second half of the year teach techniques to get learners learning from each other.”

Isn’t that just peachy, as we say in Henley on Thames.

Coming through the poor scholarship, the cherry-picking use of evidence, and the reliance on absurd straw-man versions of learner-centred and teacher-centred approaches, loud and clear, is a message. The West has been duped by lefty-liberals into accepting a dangerous, learner-centred approach to education as its paradigm, and it’s now trying to foist this approach onto countries whose cultures make its implementation doomed to failure. We need to reject learner-centred approaches and go back to traditional “whole-class teaching”. The message is what you’d expect from a spokesman of the British Council (conservative, cautious, resistant to change) and it typically gets everything wrong. Pace Clifford, the West is not in the grip of a learner-centred paradigm in education, and there must be very few professionals working outside the cosy confines of the British Council who nurture such a paranoid illusion. More specifically, a learner-centred approach to ELT is not widespread in the West; rather, as I’ve argued elsewhere, ELT practice is mostly teacher-led and coursebook-driven. And while there is undeniably cultural resistance in many countries to the full implementation of a communicative approach to classroom language learning, surely we should be looking for ways to overcome this resistance rather than using opportunistic interpretations of multiculturalism to perpetuate the problem.

Three Cheers


A quick post to give my support to three good ventures currently doing their best to improve the ELT world.

1. SLB Cooperative

The SLB Cooperative, founded by Neil McMillan, provides language services to companies, organisations and individuals in Barcelona and beyond. Their objective is to provide a whole range of resources and professional development opportunities to members and associates, which in turn enables them to deliver the best possible service to clients. They do English classes, translation, editing, proofreading, and other language services, and they also offer low-cost or free language classes and translation services to those individuals in the community who are otherwise unable to afford them.

As they say on their website, the SLB is “a forward-thinking organisation, not a traditional school or language academy.” Being a cooperative, they cut out the middle-man and offer clients a personalised service at great value. What makes them different is that they work for themselves and for each other, and not under a manager who does not understand their jobs. At SLB, they value quality over quantity and service over sales. To become a member, you contact them, and they invite you to an interview. If everybody is happy, you join. By paying your membership fee, you become a full member of the cooperative. Each associate has the same responsibilities and the same right to vote. Each associate has the same share in any potential profits, if the cooperative votes to award dividends at the end of the financial year. They have very nice premises in Gracia, where members meet, pool resources, hold training sessions, and can use a well-equipped classroom. There’s a great atmosphere in their centre, numbers are growing, and I’m going to join next week!

2. TEFL Equity Advocates

TEFL Equity Advocates opposes discrimination against non-native English speaker teachers (NNESTs) and has quickly established itself as a powerful voice for change. Its blog has over 2,500 followers, and there’s no doubt that in the last two years they’ve made a huge contribution to the fight. Their aims are to:

  1. Acknowledge and expose discrimination against NNESTs in TEFL.
  2. Sensitise the public to the problem.
  3. Debunk the most common and damaging myths and stereotypes about NNESTs.
  4. Reduce the number of job ads only for NESTs.
  5. Give self-confidence to NNESTs.

3. Decentralising Teaching and Learning  

Decentralising Teaching and Learning is Paul Walsh’s blog. He created the concept of “Decentralised ELT”, believing that the teaching of languages is over-centralised. When asked to describe Decentralised Teaching in one sentence, he says: “The central tenet of Decentralised Teaching would be: devolving power, resources and responsibility down to the learner in order to optimise learning.” He has a very good ‘dummies guide’ to his teaching methodology on the About page, and there are some great blog posts, free lesson plans and other resources on offer.

Thornbury and The Learning Body


Scott Thornbury has been talking about “The Learning Body” for a while now. You can see one version on YouTube and another at the ELTABB website. You can also read a fuller treatment in Thornbury’s chapter in the tribute to Earl Stevick, Meaningful Action (just BTW, it’s not a great collection). I base this critique on the YouTube talk.

Summary of the Talk

Thornbury starts by asserting that “Descartes got it wrong”. There is no mind/body dualism; rather, “Brains are in bodies, bodies are in the world and meaningful action in these worlds is socially constructed and conducted” (Churchill et al., 2010). This devastating rebuttal of Descartes, which Thornbury (ignoring works by Locke, Hume, Derrida and others) reckons was “finally revealed” in 1994, has been ignored by those responsible for the prevailing orthodoxy in SLA, who insist that “language and language learning are a purely cognitive phenomenon.” Thornbury tells us that this orthodoxy claims that we need look no further than cognition for an explanation of SLA – other factors are not important.

Thornbury then goes on to explain that the modern view sees the brain as part of a larger set, involving the body and the world, leading to a new concept of “embodied cognition”. Without bothering with considerations of how “the mind” as a construct relates to the brain as a physical part of the body, Thornbury proceeds to look at the mind as embodied, embedded and extended.

Embodied Mind

The construct of the “embodied mind” is defined as “rooted in physical experience”. Our mind (see how hard it is, even for Thornbury, to stay away from Cartesian dualism) deals with ideas that are all related to our “physicality” as Thornbury puts it, and this applies to language and language learning. Key points here are:

  • “Language is rooted in human experience of the physical world”(Lee, 2010)
  • We adapt our language to different circumstances and different people.
  • Learning is enhanced by physical involvement.
  • Larsen-Freeman’s latest work argues that language is a dynamic emergent system.
  • Language is noted, applied and adapted in context.
  • Mirror neurons and body language are evidence for the embodied mind construct.

Embedded Mind

No definition of this construct is offered. Thornbury only says that language is embedded in context, which should come as a surprise to nobody. Thornbury refers to “ecolinguistics”, likens the learning of language to the learning of soccer by children, and reminds us that we adapt our language to different circumstances and different people.

Extended Mind

The “extended mind” construct is nowhere even casually defined, but Scott uses the film Memento (a great film which I recommend, though it has little to do with the use Scott makes of it) to make the point that our bodies help us to remember. This is followed by a discussion of gestures, which play a big role in communication.

Alignment

Not much here. Thornbury refers to the importance of our physical relationship to our students and says “Learning is discovering alignment”. This means group work, gesture, eye contact, “acting out”.

Summary

Thornbury gives this summary:

  • I think therefore I am: Wrong. Better:
  • I move therefore I am.
  • I speak therefore I move.
  • I move, therefore I learn.

Discussion

Thornbury’s talk is interesting, and very well-delivered: he’s the best stand-up act in the business (sic) and his use of video clips is particularly good. But when you tot it all up, there’s almost nothing of substance, and the argument is hollow. Thornbury makes a straw man argument against research in SLA, and says nothing of much interest as to how all this “embodied” stuff might further our understanding of SLA. As to teaching, there’s absolutely no need to even mention “embodied cognition” in order to agree with all the good things he says about gestures and the rest of it. Earl Stevick was indeed concerned with holistic learning and a teaching methodology which reflected it, but I doubt he’d be impressed by this attempt to use fashionable speculations about cognitive science to back it up.

Specific Points

  1. The use of Descartes to promote an argument against current SLA research is simplistic and boringly trite. In the Discourse on Method, Descartes was concerned with epistemology, with reliable knowledge. His famous conclusion “Cogito ergo sum” has never been falsified – how could it be! – and it’s plain silly to say that “he got it wrong”.
  2. Thornbury says that SLA orthodoxy sees language learning as a purely cognitive phenomenon taking place in the mind. He’s wrong. The most productive research in SLA concentrates on cognitive aspects of SLA, but those involved in such research are quite aware that they’re focusing on just one aspect of the problem. They do so for the very good reason that scientific research gets the best results. The job of those who look at other aspects, such as those covered by sociolinguistics, is to show that their work has academic respectability, and misrepresenting the work of those who adopt a scientific methodology does nothing to further that aim.
  3. The question of the distinction between the brain and the mind is a fundamental one. Thornbury doesn’t even mention it.

Conclusions

Thornbury, following the muddled and generally incoherent arguments of Larsen-Freeman, wants to say that SLA is best seen as an emerging process where, well, things emerge. And given that it all kind of emerges, ELT should help all these things, well, emerge. This is absolutely hopeless, isn’t it? Any theory of SLA must be sharper than this; any teaching methodology needs a firmer basis. There is, of course, a very interesting strand of SLA research that takes an emergentist approach, but it has little in common with Thornbury’s musings. And there are, of course, teaching methodologies based on helping learners “emerge”, although they don’t put it quite like that. Thornbury has done very little to critique SLA research, or to explain how all his “emerging” bits and pieces might help future research move in a better direction. Furthermore, nothing in his suggestions for teaching practice is new, and none of it depends on his “theoretical basis”.

Finally, let’s just have another look at this:

  • I think therefore I am: Wrong. Better:
  • I move therefore I am.
  • I speak therefore I move.
  • I move, therefore I learn.

Not exactly a syllogism, now is it? I speak therefore I move? Really?

And quite apart from being incoherent, how will it affect your understanding of SLA?  Still, at least the last sorry line might inspire you to get off your butt and revisit Asher – he of Total Physical Response.

A New Term Starts!

Here we go again – a new term is starting at universities offering Masters in TESOL or AL, so once again I’ve moved this post to the front.

Again, let’s run through the biggest problems students face: too much information; choosing appropriate topics; getting the hang of academic writing.

1. Too much Information.

An MA TESOL curriculum looks daunting, the reading lists look daunting, and the books themselves often look daunting. Many students spend far too long reading and taking notes in a non-focused way: they waste time by not thinking right from the start about the topics that they will eventually choose to base their assignments on.  So, here’s the first tip:

The first thing you should do when you start each module is think about what assignments you’ll do.

Having got a quick overview of the content of the module, make a tentative decision about what parts of it to concentrate on and about your assignment topics. This will help you to choose reading material, and will give focus to your studies.

Similarly, you have to learn what to read, and how to read. When you start each module, read the course material and don’t go out and buy a load of books. And here’s the second tip:

Don’t buy any books until you’ve decided on your topic, and don’t read in any depth until then either.

Keep in mind that you can download at least 50% of the material you need from library and other websites, and that more and more books can now be bought in digital format. To do well in this MA, you have to learn to read selectively. Don’t just read. Read for a purpose: read with a particular topic (better still, with a well-formulated question) in mind. Don’t buy any books before you’re absolutely sure you’ll make good use of them.

2. Choosing an appropriate topic.

The trick here is to narrow down the topic so that it becomes possible to discuss it in detail, while still remaining central to the general area of study. So, for example, if you are asked to do a paper on language learning, “How do people learn a second language?” is not a good topic: it’s far too general. “What role does instrumental motivation play in SLA?” is a much better topic. Which leads me to Tip No. 3:

The best way to find a topic is to frame your topic as a question.

Well-formulated questions are the key to all good research, and they are one of the keys to success in doing an MA. A few examples of well-formulated questions for an MA TESL are these:

• What’s the difference between the present perfect and the simple past tense?

• Why is “stress” so important to English pronunciation?

• How can I motivate my students to do extensive reading?

• When’s the best time to offer correction in class?

• What are the roles of “input” and “output” in SLA?

• How does the feeling of “belonging” influence motivation?

• What are the limitations of a Task-Based Syllabus?

• What is the wash-back effect of the Cambridge FCE exam?

• What is politeness?

• How are blogs being used in EFL teaching?

To sum up: Choose a manageable topic for each written assignment. Narrow down the topic so that it becomes possible to discuss it in detail. Frame your topic as a well-defined question that your paper will address.

3. Academic Writing.

Writing a paper at Masters level demands a good understanding of all the various elements of academic writing. First, there’s the question of genre. In academic writing, you must express yourself as clearly and succinctly as possible, and here comes Tip No. 4:

In academic writing “Less is more”.

Examiners mark down “waffle”, “padding”, and generally loose expression of ideas. I can’t remember who, but somebody famous once said at the end of a letter: “I’m sorry this letter is so long, but I didn’t have time to write a short one”. There is, of course, scope for you to express yourself in your own way (indeed, examiners look for signs of enthusiasm and real engagement with the topic under discussion) and one of the things you have to do, like any writer, is to find your own, distinctive voice. But you have to stay faithful to the academic style.

While the content of your paper is, of course, the most important thing, the way you write, and the way you present the paper have a big impact on your final grade. Just for example, many examiners, when marking an MA paper, go straight to the Reference section and check if it’s properly formatted and contains all and only the references mentioned in the text. The way you present your paper (double-spaced, proper indentations, and all that stuff); the way you write it (so as to make it coherent); the way you organise it (so as to make it cohesive); the way you give in-text citations; the way you give references; the way you organise appendices; are all crucial.

Making the Course Manageable

1. Essential steps in working through a module.

Focus: that’s the key. Here are the key steps:

Step 1: Ask yourself: What is this module about? Just as important: What is it NOT about? The point is to quickly identify the core content of the module. Read the Course Notes and the Course Handbook, and DON’T READ ANYTHING ELSE, YET.

Step 2: Identify the components of the module. If, for example, the module is concerned with grammar, then clearly identify the various parts that you’re expected to study. Again, don’t get lost in detail: you’re still just trying to get the overall picture. See the chapters on each module below for more help with this.

Step 3: Do the small assignments that are required. Study the requirements of the MA TESL programme closely to identify which parts of your written assignments count towards your formal assessment and which do not. Some small assignments are compulsory (you MUST submit them) but do not influence your mark or grade: do them in order to prepare yourself for the assignments that do count, but don’t spend too much time on them.

Step 4: Identify the topic that you will choose for the written assignment that will determine your grade. THIS IS THE CRUCIAL STEP! Reach this point as fast as you can in each module: the sooner you decide what you’re going to focus on, the better your reading, studying, writing and results will be. Once you have identified your topic, then you can start reading for a purpose, and start marshalling your ideas. Again, we will look at each module below, to help you find good, well-defined, manageable topics for your main written assignments.

Step 5: Write an outline of your paper. The outline is for your tutor, and should give a concise overview of the paper’s argument and structure. Make sure that your tutor reviews your outline and approves it.

Step 6: Write the First Draft of the paper. Write this draft as if it were the final version: don’t say “I’ll deal with the details (references, appendices, formatting) later”. Make it as good as you can.

Step 7: If you are allowed to do so, submit the first draft to your tutor. Some universities don’t approve of this, so check with your tutor. If your tutor allows such a step, try to get detailed feedback on it. Don’t be content with any general “Well, that looks OK” stuff. Ask “How can I improve it?” and get the fullest feedback possible. Take note of ALL suggestions, and make sure you incorporate ALL of them in the final version.

Step 8: Write the final version of the paper.

Step 9: Carefully proofread the final version. Use a spell-checker. Check all the details of formatting, citations, the Reference section, and Appendices. Ask a friend or colleague to check it. If allowed, ask your tutor to check it.

Step 10: Submit the paper: you’re done!

2. Using Resources

Your first resource is your tutor. You’ve paid lots of money for this MA, so make sure you get all the support you need from him or her! Most importantly: don’t be afraid to ask for help whenever you need it. Ask any question you like (while it’s obviously not quite true that “there’s no such thing as a stupid question”, don’t feel intimidated or afraid to ask very basic questions), and as many as you like. Ask your tutor for suggestions on reading, on suitable topics for the written assignments, on where to find materials, on anything at all that you have doubts about. Never submit any written work for assessment until your tutor has said it’s the best you can do. If you think your tutor is not doing a good job, say so, and if necessary, ask for a change.

Your second resource is your fellow students. When I did my MA, I learned a lot in the students’ bar! Whatever means you have of talking to your fellow-students, use them to the full. Ask them what they’re reading, what they’re having trouble with, and share not only your thoughts but your feelings about the course with them.

Your third resource is the library. It is ABSOLUTELY ESSENTIAL to teach yourself, if you don’t already know, how to use a university library. Again, don’t be afraid to ask for help: most library staff are wonderful: the unsung heroes of the academic world. At Leicester University, where I work as an associate tutor on the Distance Learning MA in Applied Linguistics and TESOL course, the library staff exemplify good library practice. They can be contacted by phone and by email, and they have always, without fail, solved the problems I’ve asked them for help with. Whatever university you are studying at, the library staff are probably your most important resource, so be nice to them, and use them to the max. If you’re doing an on-campus course, the most important thing is to learn how the journals and books that the library holds are organised. Since most of you have already studied at university, I suppose you’ve got a good handle on this, but if you haven’t, well, do something! Just as important as the physical library at your university are the internet resources it offers. This is so important that I have dedicated Chapter 10 to it.

Your fourth resource is the internet. Apart from the resources offered by the university library, there is an enormous amount of valuable material available on the internet. See the “RESOURCES” section of this website for a collection of videos and other stuff.

I can’t resist mentioning David Crystal’s Encyclopedia of the English Language as a constant resource. A friend of mine claimed that she got through her MA TESL by using this book most of the time, and, while I only bought it recently, I wish I’d had it to refer to when I was doing my MA. Lexis, grammar, pronunciation, discourse, learning English – it’s all there.

Please use this website to ask questions and to discuss any issues related to your course. You might like to subscribe to it: see the box on the right.

The Negotiated Syllabus


I suggested in my last post that a real paradigm shift in ELT would involve throwing out the coursebook and standardised tests and replacing them with a process-driven approach which concentrates on the “how” more than the “what” of teaching. So far, so good. But I went further, and in fact, I went rather too far, and I now have to make amends. I suggested that the alternative paradigm was fundamentally defined by a process of negotiation between teacher and learners, and that’s not so. I didn’t actually spell out what this negotiation amounted to, and neither did I make it clear that there are some very good alternatives to the present coursebook-driven paradigm which don’t involve any such “fundamental” negotiation.

For example, Mike Long’s detailed proposal for task-based language teaching, while it’s certainly learner-centred, and while it rejects the “product” or “synthetic” or “Type A” syllabus and hence the use of coursebooks and standardised tests, doesn’t include negotiation with learners about what tasks will form the content of the course, since these are determined by an external needs analysis and then converted by teaching experts into pedagogical tasks. Various forms of task-based syllabuses, including many designed for business, academic, nursing, or other special purposes, while they are neither synthetic nor coursebook-driven (relying on their own materials), do not actually fit the “negotiated syllabus” brief. Even Dogme expects the teacher to be responsible for most of the important decisions about course content and methodology. So I need to explain the negotiated syllabus here, before finally presenting my own suggestion for an alternative to the present coursebook-driven paradigm in ELT.

The negotiated syllabus is the most extreme alternative approach to ELT, the one which most radically challenges assumptions held by most teachers today, the one which really turns everything on its head. Not only does the negotiated syllabus throw out the coursebook, it throws out the traditional roles of teacher and learner too. What follows is a brief summary, which relies heavily on an article by Sofia Valavani from the Second Chance School of Alexandroupolis. 

Second Chance Schools

Valavani explains:

Facilitating the fight against illiteracy of adults, the Adult Education General Secretariat implements programmes through which adults who have dropped out of schools have the opportunity to improve their academic and professional qualifications, so that they can get more easily integrated in the labour market or have a second chance for the continuation of their studies. This action addresses adults who were not able to complete their initial compulsory education and aims at offering them a second chance for the acquisition of a study certificate of the compulsory education.

Second Chance Schools is, therefore, a flexible and innovative programme, based on learners’ needs and interests, which aims at combatting the social exclusion of the individuals who lack the qualifications and skills necessary for them to meet the contemporary needs in social life and labour market.

Theoretical considerations

Valavani cites Freire’s view of adult education (Freire, 2000: 32) that personal freedom and the development of individuals can only occur mutually with others: “Every human being, no matter how ‘ignorant’ he or she may be, is capable of looking critically at the world in a ‘dialogical encounter’ with others. In this process, the old, paternalistic teacher-student relationship is overcome.” She adopts Freire’s pedagogy and his “education for freedom”, and proposes a Second Chance Schools syllabus which provides “grounding for a learner-centred syllabus, reintroducing the SCS learners as the key participants in the learning process.”

Methodology

Following Breen and Littlejohn (2000), teachers and students negotiate together so as to reach agreement in four areas:

  1. Why? The purposes of language learning.
  2. What? The content which learners will work on.
  3. How? The ways of working in the classroom.
  4. How well? Evaluating the efficiency & quality of the work and outcomes.

These four areas of decision-making are expressed in terms of questions. The questions are listed in a questionnaire (the best format is probably multiple-choice or Likert-scale items) and the answers are negotiated by the teacher and the learners together.

The Negotiation Cycle

The negotiation cycle is illustrated below (Breen and Littlejohn, 2000: 284).

[Figure: The negotiation cycle (Breen and Littlejohn, 2000: 284)]

The syllabus identifies different reference points for the negotiation cycle in terms of levels in a curriculum pyramid. The figure below (Breen and Littlejohn, 2000: 286) illustrates the levels on which the cycle may focus at appropriate times.

 

[Figure: The curriculum pyramid (Breen and Littlejohn, 2000: 286)]

Decisions range from the immediate, moment-by-moment decisions made while learners are engaged in a task, to the more long-term planning of a language course (and in the Breen & Littlejohn model, through to the planning of the wider educational curriculum). Together, the negotiation cycle and the curriculum pyramid represent a negotiated syllabus as negotiation at specific levels of syllabus and curriculum planning. Breen’s figure (ibid: 287) illustrates this, with the negotiation cycle potentially being applied to a particular decision area at each of the different levels in the pyramid.

Tools

In order to implement this design, a number of tools are needed, among them the four below.

  1. Tools for establishing purposes: initial questionnaires to learners, learning contracts, planning templates.
  2. Tools for making decisions concerning contents: a learning plan developed jointly by teacher and learners; learner-designed activities; a materials bank including a wide variety of tasks, texts, worksheets, grammar work, etc.
  3. Tools for making decisions about ways of working.
  4. Tools to evaluate outcomes: daily/weekly/monthly retrospective accounts, reflection charts, assessment (can-do) cards, work diaries, reflective learning journals, peer interviews, portfolios, one-to-one consultations, etc.

Discussion

This very brief outline gives a good idea of the principles involved, but doesn’t give us a good picture of what actually happens in the implementation of such a negotiated syllabus. I think that what’s most important is to see it as an extension of the task-based syllabus: tasks are what drive it, but the tasks are decided on by teacher and learners together. The negotiation part looms large, like a bogey man, but in fact 90% of the course would be dedicated to carrying out the tasks. What we need to explore more is how the negotiation affects the selection, sequencing and evaluation of the tasks. So, for example, at the start of the course, the teacher, having explained what’s going to happen, works through the first questionnaire, which aims to make a plan for the first phase of the course, maybe the first six to ten hours. The questionnaire is obviously vital here, as is the teacher’s ability to help members of the new group to articulate their views and find consensus. Objectives may be quite broad (improve ability to communicate with people) or more specific (give a presentation in a business meeting), but they have to provide a good idea of priorities in terms of “can-dos”, the four skills, etc. The content at this stage is also broadly specified, but again, a general feeling for areas of interest is teased out, and, similarly, preferred ways of working are voiced and discussed. What would the questionnaire look like? How long would be spent on discussing it and arriving at a plan? What happens next?

My own version of this entails the teacher doing the first phase without any negotiation about the tasks to be done, in order to help the learners see the range of possibilities and get a feel for more “micro” levels of negotiation. In the next post, I’ll try to tackle some practical issues, flesh out the tools listed above, and suggest a syllabus for a group of lower intermediate students enrolled in a course of General English.

References

Breen, M. and Littlejohn, A. (2000) The practicalities of negotiation. http://www.andrewlittlejohn.net/website/docs/chapt18.pdf

Valavani, S. Negotiated syllabus for Second Chance Schools: Theoretical considerations and the practicalities of its implementation. http://www.enl.auth.gr/gala/14th/Papers/English%20papers/Valavani.pdf