The CriticElt 2016 Awards


Only 5 awards this year.


1. The Can I Get A Pineapple Award for the worst contribution to vocabulary teaching.

Winner: Leo Selivan, for his exhortation to teachers: “Ban Single Words”.

Some excellent work is going on these days to improve the teaching of vocabulary in ELT. It’s now widely recognised that vocabulary teaching needs to take account of collocation, formulaic language and language chunks, thanks to the work of, among others, Pawley and Syder, Nattinger and DeCarrico, Sinclair, Biber, Nation, Carter, and Schmitt (whose website is an excellent source). Great use is being made of concordance programs and ever more widely available big corpora (see Mura’s fantastic blog EFL Notes for up-to-date news), and increasing attention is being given to principles of vocabulary teaching (see, for example, Norbert Schmitt’s presentation “Research-based Principles of Vocabulary Teaching”, available on his website).

At the same time, there are those who follow in the footsteps of Michael Lewis and try to persuade us to make vocabulary teaching the main pillar of ELT. Lewis’ work leaves a great deal to be desired in terms of scholarship, and, alas, so does the work of most of his followers, who have so far failed to give any credible explanation of the principles that inform their “Lexical Approach” (see my review of Teaching Lexically).

Leo Selivan is one such follower, and it is he who carries off the award for his post Beginners’ Guide To Teaching Lexically where his First Principle of the Lexical Approach is “Ban Single Words”. I commented on this extraordinary injunction in a post in March this year, but I still can’t quite believe he said it.


2. The Doublespeak Award is for a public address using language that is “grossly deceptive, evasive, euphemistic, confusing, or self-centered”. It has been presented by the US National Council of Teachers of English since 1974. Nominees must be American.

Winner: Diane Larsen-Freeman, for saying this in her 2016 IATEFL plenary:

I invite you to think with me and make some connections. Think about the connection between an open system and language. Language is changing all the time, it’s flowing but it’s also changing. …

Notice in this eddy, in this stream, that pattern exists in the flux, but all the particles that are passing through it are constantly changing. It’s not the same water, but it’s the same pattern. …

So this world (the stream in the picture) exists because last winter there was snow in the mountains. And the snow pattern accumulated such that now when the snow melts, the water feeds into many streams, this one being one of them. And unless the stream is dammed, or the water ceases, the source ceases, the snow melts, this world will continue. English goes on, even though it’s not the English of Shakespeare and yet it still has the identity we know and call English. So these systems are interconnected both spatially and temporally, in time.

I commented on this plenary in a post soon after the event.

Larsen-Freeman followed this up last month with an equally weird, deceptive, evasive, euphemistic, confusing plenary at the TESOL France 35th Annual Colloquium called “Patterns in Language: Why are they the way that they are?”, where she revealed how fractals (geometric shapes that are self-similar at different levels of scale, but really, broccoli) are the key to discovering patterns in language. Her inability to say anything remotely coherent about how complexity theory might inform language teaching or second language learning has impressed few, but Scott Thornbury is an exception. So loyal has Scott remained to Larsen-Freeman’s garbled pronouncements on complexity theory and emergentism that I think he deserves an award of his own: The New Zealand Military Award for Obedience Under Heavy Fire.


3. The Foot in Mouth Award is presented each year by the Plain English Campaign for “a baffling comment by a public figure”. Here it’s for the most ill-informed and illogical comment made on an ELT blog.

Winner: Chris Smith on the Evidence based EFL blog for this comment:

So, when it comes to evidence based EFL, we can conclude that the evidence shows that error correction works. I would also assert that if people want to argue that it does not work, they cannot merely cherry pick one or two articles that did not find a link. They would need to show why all the clear evidence mentioned above (and more) is wrong.

The “clear evidence mentioned” is both cherry-picked and seriously flawed, while the assertion is illogical and shows a basic misunderstanding of the role of evidence in argumentation. For more on this, see my post on Making a mess of evidence.


4. The Empty Vessel Award is presented for the most content-free, loudly voiced, garbled-collection-of-platitudes-confidently-rolled-out-as-if-it-all-meant-something address of 2016.

Winner: Jeremy Harmer for his talk at the TESOL Convention in April 2016. Just 2 examples:

If students are feeling happy and warm and open, open to new words and new language, they will receive it with more enthusiasm than if they’re closed off.

In a lesson, it’s what the students have, bring and do that is the beginning, the middle and the end of everything.

It’s impossible to get any real idea of the total emptiness of his talk without going through the awful effort of actually listening to it, so why don’t you just take my word for it?


5. The Blotted Copybook Award goes to the ELT organisation that this year has done most to damage a good cause.

Winner: TEFL Equity Advocates Blog

What started out as a good blog supporting a good cause has turned into a commercial-looking, badly-edited, conservative-minded, disingenuous mouthpiece for Marek Kiczkowiak. The blog looks like it’s desperately trying to sell stuff; a “Join Now!” pop-up page blocks your view 5 seconds after you arrive at the blog; every page invites donations to the cause, and there are promotional links to all sorts of training courses that Mr. Kiczkowiak runs or supervises. All in all, it seems to have veered a long way away from its original core cause of 2014. But it gets the award for Wiktor Kostrzewski’s post After 2016 trust native speakers less, which includes this gem:

Simply put, no British or American person teaching English after 2016 can claim “native” prerogative to decide which language use is “good” or “bad”. Not a Leaver / Republican, who has yet to see the long-term fall-out from their vote. Not a Remainer / Democrat, whose efforts to stop the destructive propaganda on their doorsteps were just proven inadequate. And definitely not the abstainers.

Mr. Kostrzewski asserts that Brexit and Trump’s election mean that British and American teachers have “lost the right to claim that their version of English can serve as a reasonable model of English language use”. I don’t think I’ve ever read such a ridiculous post, and the way the author and blog owner handled the flak that inevitably followed was also very inept. I for one intend to steer well clear of the TEFL Equity Advocates blog from now on.

The lose-lose folly of coursebook consumption



One of the main aims of this blog is to draw attention to the detrimental effects that using coursebooks has on both teachers and learners. I’ve already done a few posts challenging coursebooks (see the Menu bar on the right), but here I try to draw the strands together. My argument is twofold: first, coursebooks are based on false assumptions about second language acquisition (SLA), and second, coursebooks deprive teachers and learners of ownership of the learning process.


Coursebooks embody a synthetic approach to syllabus design. Wilkins (1976) distinguished between a ‘synthetic’ approach, where items of language are presented one by one in a linear sequence to the learner, whose job is to build up, or ‘synthesize’, the knowledge incrementally, and an ‘analytic’ approach, where the learner does the ‘analysis’, i.e. ‘works out’ the system, through engagement with natural language data. Coursebook writers take the target language (the L2) as the object of instruction, and they divide the language up into bits of one kind or another – words, collocations, grammar rules, sentence patterns, notions and functions, for example – which are presented and practiced in a sequence. The criteria for sequencing can be things like valency, criticality, frequency, or saliency, but the most common criterion is ‘level of difficulty’, which is intuitively defined by the writers themselves.

Different coursebooks claim to use different types of syllabus – grammatical, lexical, or notional-functional, for example – but they’re all synthetic syllabuses with the same features described above, and they all give pride of place to explicit teaching and learning. The syllabus is delivered by the teacher, who first presents the bits of the L2 chosen by the coursebook writers (in written and spoken texts, grammar boxes, vocabulary lists, diagrams, pictures, and so on), and then leads students through a series of activities aimed at practicing the language, like drills, written exercises, discussions, games, tasks and practice of the four skills.


Among the coursebooks currently on sale from UK and US publishers and used around the world are the following:

Headway;     English File;      Network;      Cutting Edge;      Language Leader;      English in Common;      Speakout;      Touchstone; Interchange;      Mosaic;      Inside Out;      Outcomes.

Each of these titles consists of a series of five or six books aimed at different levels, from beginner to advanced, and offers a Student’s Book, a Teacher’s Book and a Workbook, plus other materials such as video and on-line resources. Each Student’s Book at each level is divided into a number of units, and each unit consists of a number of activities which teachers lead students through. The Student’s Book is designed to be used systematically from start to finish – not just dipped into wherever the teacher fancies. The different activities are designed to be done one after the other, so that Activity 1 leads into Activity 2, and so on. Two examples follow.


In New Headway, Pre-Intermediate, Unit 3, we see this progression of activities:

  1. Grammar (Past tense) leads into ( ->)
  2. Reading Text (Travel) ->
  3. Listening (based on reading text) ->
  4. Reading (Travel) ->
  5. Grammar – (Past tense) ->
  6. Pronunciation ->
  7. Listening (based on Pron. activity) ->
  8. Discussing Grammar –>
  9. Speaking (A game & News items) ->
  10. Listening & Speaking (News) ->
  11. Dictation (from listening) ->
  12. Project (News story) ->
  13. Reading and Speaking (About the news) ->
  14. Vocabulary (Adverbs) ->
  15. Listening (Adverbs) ->
  16. Grammar (Word order) ->
  17. Everyday English (Time expressions)


And if we look at Outcomes Intermediate, Unit 2, we see this:

  1. Vocab. (feelings) ->
  2. Grammar (be, feel, look, seem, sound + adj.) ->
  3. Listening (How do they feel?) ->
  4. Developing Conversations (Response expressions) ->
  5. Speaking (Talking about problems) ->
  6. Pronunciation (Rising & falling stress) ->
  7. Conversation Practice (Good / bad news) ->
  8. Speaking (Physical greetings) ->
  9. Reading (The man who hugged) ->
  10. Vocabulary (Adj. Collocations) ->
  11. Grammar (ing and ed adjs.) ->
  12. Speaking (based on reading text) ->
  13. Grammar (Present tenses) ->
  14. Listening (Shopping) ->
  15. Grammar (Present cont.) ->
  16. Developing conversations (Excuses) ->
  17. Speaking (Ideas of heaven and hell).

All the other coursebooks mentioned are similar in that they consist of a number of units, each of them containing activities involving the presentation and practice of target versions of L2 structures, vocabulary, collocations, functions, etc., using the 4 skills. All of them assume that the teacher will lead students through each unit and do the succession of activities in the order that they’re set out. And all of them wrongly assume that if learners are exposed to selected bits of the L2 in this way, one bit at a time in a pre-determined sequence, then, after enough practice, the new bits, one by one, in the same sequence, will become part of the learners’ growing L2 competence. This false assumption flows from a skill-based view of second-language acquisition, which sees language learning as the same as learning any other skill, such as driving a car or playing the piano.


Skills-based theories of SLA

The best known of these theories is John Anderson’s (1983) ‘Adaptive Control of Thought’ model, which makes a distinction between declarative knowledge – conscious knowledge of facts – and procedural knowledge – unconscious knowledge of how an activity is done. When applied to second language learning, the model suggests that learners are first presented with information about the L2 (declarative knowledge) and then, via practice, this is converted into unconscious knowledge of how to use the L2 (procedural knowledge). The learner moves from controlled to automatic processing, and through intensive linguistically focused rehearsal, achieves increasingly faster access to, and more fluent control over, the L2 (see DeKeyser, 2007, for example).

The fact that nearly everybody successfully learns at least one language as a child without starting with declarative knowledge, and that millions of people learn additional languages without studying them (migrant workers, for example), might make one doubt that learning a language is the same as learning a skill such as driving a car. Furthermore, the phenomenon of L1 transfer doesn’t fit well with a skills-based approach, and neither do putative critical periods for language learning. But the main reason for rejecting such an approach is that it contradicts SLA research findings on interlanguage development.

Firstly, it doesn’t make sense to present grammatical constructions one by one in isolation because most of them are inextricably inter-related. As Long (2015) says:

Producing English sentences with target-like negation, for example, requires control of word order, tense, and auxiliaries, in addition to knowing where the negator is placed. Learners cannot produce even simple utterances like “John didn’t buy the car” accurately without all of those. It is not surprising, therefore, that Interlanguage development of individual structures has very rarely been found to be sudden, categorical, or linear, with learners achieving native-like ability with structures one at a time, while making no progress with others. Interlanguage development just does not work like that. Accuracy in a given grammatical domain typically progresses in a zigzag fashion, with backsliding, occasional U-shaped behavior, over-suppliance and under-suppliance of target forms, flooding and bleeding of a grammatical domain (Huebner 1983), and considerable synchronic variation, volatility (Long 2003a), and diachronic variation.



Secondly, research has shown that L2 learners follow their own developmental route, a series of interlocking linguistic systems called “interlanguages”. Myles (2013) states that the route of interlanguage (IL) development is one of the best documented findings of SLA research of the past few decades. She asserts that the route is “highly systematic” and that it “remains largely independent of both the learner’s mother tongue and the context of learning (e.g. whether instructed in a classroom or acquired naturally by exposure)”. The claim that instruction can influence the rate but not the route of IL development is probably the most widely accepted claim among SLA scholars today.

Selinker (1972) introduced the construct of interlanguages to explain learners’ transitional versions of the L2. Studies show that interlanguages exhibit common patterns and features, and that learners pass through well-attested developmental sequences on their way to different end-state proficiency levels. Examples of such sequences are found in morpheme studies; the four-stage sequence for ESL negation; the six-stage sequence for English relative clauses; and the sequence of question formation in German (see Hong and Tarone, 2016, for a review). Regardless of the order or manner in which target-language structures are presented in coursebooks, learners analyse input and create their own interim grammars, slowly mastering the L2 in roughly the same manner and order. The acquisition sequences displayed in interlanguage development don’t reflect the sequences found in any of the coursebooks mentioned; on the contrary, they prove to be impervious to coursebooks, as they are to different classroom methodologies, or even to whether learners attend classroom-based courses or not.

Note that interlanguage development refers not just to grammar; pronunciation, vocabulary, formulaic chunks, collocations and sentence patterns are all part of the development process. To take just one example, U-shaped learning curves can be observed in learning the lexicon. Learners have to master the idiosyncratic nature of words, not just their canonical meaning. When learners encounter a word in a correct context, the word is not simply added to a static cognitive pile of vocabulary items. Instead, they experiment with the word, sometimes using it incorrectly, thus establishing where it works and where it doesn’t. Only by passing through a period of incorrectness, in which the word is used in a variety of ways, can they climb back up the U-shaped curve.

Interlanguage development takes place in line with what Corder (1967) referred to as the internal “learner syllabus”, not the external syllabus embodied in coursebooks. Students don’t learn different bits of the L2 when and how a coursebook says that they should, but only when they are developmentally ready to do so. As Pienemann (e.g. 1987) demonstrates, learnability (i.e., what learners can process at any one time) determines teachability (i.e., what can be taught at any one time). Coursebooks flout the learnability and teachability conditions; they don’t respect the learner’s internal syllabus.


False Assumptions Made by Coursebooks

To summarise the above, we may list the 3 false assumptions made by coursebooks.

Assumption 1: In SLA, declarative knowledge converts to procedural knowledge. Wrong! No such simple conversion occurs. Knowing that the past tense of has is had, and then doing some controlled practice, does not lead to fluent and correct use of had in real-time communication.

Assumption 2: SLA is a process of mastering, one by one, accumulating structural items. Wrong! All the items are inextricably inter-related. As Long (2015, 67) says:

The assumption that learners can move from zero knowledge to mastery of negation, the present tense, subject- verb agreement, conditionals, relative clauses, or whatever, one at a time, and move on to the next item in the list, is a fantasy.

Assumption 3: Learners learn what they’re taught when they’re taught it. Wrong – as every teacher knows! Pienemann (1987) has demonstrated that teachability is constrained by learnability.


Five Objections to Coursebooks

  1. As the section on interlanguage above indicates, presenting and practicing a pre-set series of linguistic forms (pronunciation contrasts, grammatical structures, notions, functions, lexical items, collocations, etc.) simply does not work, unless a form coincidentally happens to be learnable (by some students in a class), and so teachable, at the time it is presented.
  2. The approach is counterproductive: both teachers and students feel frustrated by the constant mismatch between teaching and learning.
  3. The cutting up of language into manageable pieces (or “McNuggets” as Thornbury (2014) calls them) often results in impoverished input and output opportunities.
  4. Both the content and methodology of the course are externally pre-determined and imposed. This point will be developed below.
  5. Coursebooks pervade the ELT industry and stunt the growth of innovation and teacher training. The publishing companies that produce coursebooks also produce exams, teacher training courses and everything else connected to ELT, and they spend tens of millions of dollars on marketing aimed at persuading stakeholders that coursebooks represent the best practical way to manage ELT; Pearson’s GSE initiative is a good example. Another example is the British ELT establishment, where key players – the British Council, the Cambridge examination boards, and the Cambridge CELTA and DELTA teacher training bodies among them – accept the coursebook as central to ELT practice. TESOL and IATEFL, bodies that are supposed to represent teachers’ interests, have also succumbed to the influence of the big publishers, as their annual conferences make clear. So the coursebook rules, at the expense of teachers, of good educational practice, and of language learners.


An Alternative: The Analytic or Process Syllabus

An analytic syllabus rejects the method of cutting up a language into manageable pieces, and instead organises the syllabus according to the needs of the learners and the kinds of language performance that are necessary to meet those needs. “Analytic” refers not to what the syllabus designer does, but to what learners are invited to do. Grammar isn’t “taught” as such; rather learners are provided with opportunities to engage in meaningful communication on the assumption that they will slowly analyse and induce language rules, by exposure to the language and by the teacher providing scaffolding, feedback, and information about the language.

Breen’s (1987) distinction between product and process syllabuses contrasts the focus on content and the pre-specification of linguistic or skill objectives with a “natural growth” approach which aims to expose the learners to real-life communication without any pre-selection or arrangement of items. Figure 1, below, summarises the differences.


A process approach focuses on how the language is to be learned. There is no pre-selection or arrangement of items; the syllabus is negotiated between learners and teacher as joint decision makers, and emphasises the process of learning rather than the subject matter. No coursebook is used. The teacher implements the evolving syllabus in consultation with the students who participate in decision-making about course objectives, content, activities and assessment.



Hugh Dellar has made a number of attempts to defend coursebooks, and here are some examples of what he’s said:

  • “Attempts to talk about coursebook use as one unified thing that we all understand and recognise are incredibly myopic. Coursebooks differ greatly in terms of the way they frame the world and in terms of the questions and positions they expect or allow students to take towards these representations. …. So hopefully it’s clear that far from being one homogenous unified mass of media, coursebooks are wildly heterogeneous in both their world views and their presentations of language.”
  • “Teachers mediate coursebooks”.
  • “The kind of broad brush smearing of coursebooks you’re engaging in does those teachers a profound disservice as it’s essentially denying the possibility of them still being excellent practitioners. I’d also suggest that grammar DOES still seem to be the primary – though not the only – thing that the vast majority of teachers around the world expect and demand from material, whether you like it or not (and I don’t, personally, but there you go. We live in an imperfect world). To pretend this isn’t the case or to denigrate all those who believe this is wipe out a huge swathe of the teaching profession and preach mainly to the converted.”
  • Teachers in very poor parts of the world would just love to have coursebooks.
  • Coursebooks are based on the presentation and practice of discrete bits of grammar because that’s what teachers want.
  • Coursebooks help teachers do their jobs.
  • Coursebooks save time on lesson preparation.
  • Coursebooks meet student and parental expectations.

These remarks are echoed by others (e.g. Harmer, Scrivener, Prodromou, Ur, Lansford, Walter), and can be summed up by the following:

  1. Coursebooks are not all the same.
  2. Teachers adapt, modify and supplement them.
  3. They’re convenient.
  4. They give continuity and direction to a language course.

I accept that some coursebooks don’t follow the synthetic syllabus I describe, and don’t make the 3 assumptions I suggest they make (see Anthony Teacher’s post, for some examples). But these are the exceptions. All the coursebooks I list at the start of this article, and I’d say those that make up 90% of the total sales of coursebooks worldwide, use a synthetic syllabus and make the 3 assumptions I suggest, including Dellar’s. All the stuff about coursebooks differing greatly “in terms of the way they frame the world and in terms of the questions and positions they expect or allow students to take towards these representations” has absolutely no relevance to the arguments made against them.

As for teachers adapting, modifying and supplementing coursebooks, the question is to what extent they do so. If they do so to a great extent, then the coursebook no longer serves as the syllabus, the main point of having a coursebook has been contradicted, and one wonders how they can justify getting their students to buy the book if it’s only used, say, 30% of the time. If they only modify and supplement to a small extent, then the coursebook drives the course, learners are led through a pre-determined series of steps, and my argument applies. The most important thing to note is that what teachers actually do is ameliorate coursebooks: they make them less terrible, more bearable, in dozens of different clever and inventive ways. But this, of course, is no argument in favour of the coursebook itself; indeed, to the extent that students learn, it will be more despite than because of the damn coursebook.

Which brings us to the claim that the coursebook is convenient, time-saving, etc. Even if that’s true (which it won’t be if you spend lots of time adapting, modifying and supplementing – i.e. ameliorating – it), the trouble is, it doesn’t work: students don’t learn what they’re taught. And the same applies to the other arguments used to defend coursebooks – that parents expect their kids to use them, that they give direction to the course, and so on: such arguments simply ignore the evidence that students do not, indeed cannot, learn in the way assumed by a coursebook.


Thus, the points above fail to address the main criticisms levelled against coursebooks, which are that they fly in the face of robust research findings and that they deprive teachers and learners of control of the learning process, leading to a lose-lose classroom environment. In order to reply to these arguments, those wishing to defend coursebooks must first confront the three false assumptions on which coursebook use is based (i.e. they must confront the evidence of how SLA actually happens) and they must then argue the case for dictating what is learned. That coursebooks are the dream of teachers working in Ethiopia; that coursebooks are cherished by millions of teachers who just really love them; that the Headway team have succeeded in keeping their products fresh and lively; that Outcomes includes recordings of people who don’t have RP accents; that coursebooks are mediated by teachers; that coursebooks are here to stay, so get real and get used to it: none of these statements does anything to answer the case against them, and none carries any weight for those who wish to base their teaching practice on critical thinking and rational argument. No matter how “different” coursebooks are, or how flexibly they can be used, coursebooks rely on false assumptions about L2 learning, and impose a syllabus on learners who are largely excluded from decisions about what and how they learn.

Managing a process syllabus is no more difficult than mastering the complexities of a modern coursebook. All you need to get started is a materials bank and a crystal-clear explanation of roles and procedures. Part 2 of Breen (1987) provides a framework; the collection of articles edited by Breen and Littlejohn (2000) has at least 5 really helpful “road maps”; Meddings and Thornbury (2009) give a detailed account of their approach in their excellent book; and I outline a process syllabus on my blog. As befits an approach based on libertarian, co-operative educational principles, a process syllabus is best seen in local rather than global settings. If the managers of local ELT centres have the will to break the grip of the coursebook, they only have to make a small initial investment in local training and materials, and then support teachers in their efforts to involve their students in the new venture. I dare say that such efforts will transform the learning experience of everybody involved.



Coursebooks oblige teachers to work within a framework where students are presented with and then practice dislocated bits of English in a sequence which is pre-determined and externally imposed on them by coursebook writers. Most teachers have little say in the syllabus design which shapes their work, and their students have even less say in what and how they’re taught. Furthermore, results of coursebook-based teaching are bad; most learners don’t reach the level they aim for, and most don’t reach the level of proficiency the coursebook promises (English Proficiency Index, 2015). At the same time, alternatives to coursebook-driven ELT which are much more attuned to what we know about psycholinguistic, cognitive, and socio-educational principles for good language teaching don’t get the exposure or the fair critical evaluation that they deserve.

Despite flying in the face of what we know about L2 learning, despite denying teachers and learners a decision-making voice, and despite poor results, the coursebook dominates current ELT practice to an alarming extent. The main pillars of the ELT establishment, from teacher organisations like TESOL and IATEFL, through bodies like the British Council, examination boards like Cambridge English Language Assessment and TEFL, to the teacher training certification bodies like Cambridge and Trinity, all support the use of coursebooks.

The increasing domination of coursebooks in a global ELT industry worth close to $200 billion (Pearson, 2016) means that they’re not just a symptom but a major cause of the current lose-lose situation we find ourselves in, where both teachers and learners are restrained and restricted by the demonstrably faulty methodological principles which coursebooks embody. I think we have a responsibility to raise awareness of the damage that coursebooks are doing, and to fight against the suffocating effects of continued coursebook consumption.


Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.

Breen, M.P. (1987) Contemporary Paradigms in Syllabus Design. Part I. Language Teaching, 20, pp 81-92.

Breen, M.P. (1987) Contemporary Paradigms in Syllabus Design. Part II. Language Teaching, 20 (3).

Breen, M.P. and Littlejohn, A. (2000) Classroom Decision Making: Negotiation and Process Syllabuses in Practice. Cambridge: CUP.

Corder, S.P. (1967) The significance of learners’ errors. International Review of Applied Linguistics, 5, 161-170.

DeKeyser, R.M. (Ed.) (2007) Practice in a Second Language: Perspectives from Applied Linguistics and Cognitive Psychology. Cambridge: CUP.

English Proficiency Index (2015) Accessed 9th November, 2015.

Hong, Z. and Tarone, E. (Eds.) (2016) Interlanguage: Forty Years Later. Amsterdam: Benjamins.

Long, M.H. (2011) Language teaching. In Doughty, C. and Long, M.H. (Eds.) Handbook of Language Teaching. Oxford: Wiley.

Long, M.H. (2015) Second Language Acquisition and Task-Based Language Teaching. Oxford: Wiley-Blackwell.

Long, M.H. and Crookes, G. (1993) Units of analysis in syllabus design: the case for the task. In Crookes, G. and Gass, S.M. (Eds.) Tasks in a Pedagogical Context. Clevedon: Multilingual Matters, 9-44.

Meddings, L. and Thornbury, S. (2009) Teaching Unplugged. Delta.

Mitchell, R. and Myles, F. (2004)  Second Language Learning Theories.  London: Arnold.

Myles, F. (2013) Theoretical approaches to second language acquisition research. In Herschensohn, J. and Young-Scholten, M. (Eds.) The Cambridge Handbook of Second Language Acquisition. Cambridge: CUP.

Ortega, L. (2009) Sequences and Processes in Language Learning. In Long, M. and Doughty, C. (Eds.) Handbook of Language Teaching. Oxford: Wiley.

Pearson (2016) GSE Global Report. Retrieved 5/12/2016.

Pienemann, M. (1987) Psychological constraints on the teachability of languages. In C. Pfaff (Ed.) First and Second Language Acquisition Processes. Rowley, MA: Newbury House. 143-168.

Rea-Dickins, P. M. (2001) Mirror, mirror on the wall: identifying processes of classroom assessment. Language Testing 18 (4), p. 429 – 462.

Selinker, L. (1972) Interlanguage. International Review of Applied Linguistics 10, 209-231.

Statista (2015) Publisher sales of ELT books in the United Kingdom from 2009 to 2013. Accessed 9th November, 2015.

Thornbury, S. (2014) Who ordered the McNuggets? Accessed 9th November, 2015.

Walkley, A. and Dellar, H. (2015) Outcomes: Intermediate. National Geographic Learning.

Wilkins, D. (1976) Notional Syllabuses: A Taxonomy and its Relevance to Foreign Language Curriculum Development. London: Oxford University Press.

A few grumps about lazy language use


I was quite a precocious grumpy old sod, in fact I think I was grumpy before I could walk, but there’s no doubt about my status now: I’m officially a grumpy old man. Once you get past seventy years old you’re allowed to be as grumpy as you like, which is one of the few good things to be said for it. I’m particularly keen these days to nurture my grumpy hyper-sensitivity to the decline in the use of the English language; every day (or on a daily basis as they say) I see examples of awful English which, if it had more strength, would make my hair stand on end. This morning, after I’d eventually found my glasses (under a pillow), been to the loo a few times, taken my pills, put my slippers on the wrong feet, huffed and puffed my way down the stairs, and finally sat down in the kitchen with the kind of satisfied grunt that old people make when their bums successfully meet the middle of a chair, I listened to the BBC Radio 4 programme “Today”. Here are some of the things I heard people say:


  1. Interviewer: What are the main findings of your report?  Interviewee: So basically we feel that …….
  2. We’re reaching out to all those who have issues with our plans.
  3. I can’t have closure until the guilty men are locked up.
  4. Interviewer: Did you actually see the accident?  Interviewee: So I was standing at the bus stop, and ……
  5. Well, it is what it is.
  6. We’re taking their comments on board.
  7. Interviewer: Good game! How do you feel?  Interviewee: So it was a really hard match and …….
  8. We don’t expect any major changes going forward.
  9. We’re worried about some of Trump’s plans for outreach.
  10. He’s going to hit the ground running.
  11. His performance was what can only be described as flawless.
  12. There was a palpable sense of relief when he resigned.
  13. Since time immemorial we have kept dogs as pets.
  14. The full extent of the damage remains to be seen.
  15. There’s been a paradigm shift in cancer care.
  16. The situation is fluid (code for “I have no idea what is going on”).
  17. At set point he really committed to a hard passing shot.
  18. The attack was a victory for terror.

If I had more time (it’s a myth, by the way, that we oldies have nothing to do; apart from the fact that whatever we do takes so much longer, there are only so many complaints you can make and medical appointments you can keep in a single day) … Where were we? That’s another sign (or “significant trait” as they say) of old age: you forget what you’re doing – which adds to the time it takes to do it, of course. If I had more time, I’d sort the 18 items above into carefully-considered categories and then spin some well-crafted yarn that would make compelling sense of it all. As it is, my younger, more sprightly wife is going hang-gliding this afternoon, and I have to help. I say “help”; actually, all I have to do is stand in the field she plans to land in and phone our neighbour, who has a tractor, if she crashes into the woods nearby. Anyway, I only have a couple of hours free, and time (which marches on, waits for no person, and is certainly not on my side) is a limited resource going forward.


We can start with that one: going forward. In the quote above, “We don’t expect any major changes going forward”, the going forward bit adds nothing, and neither do moving forwards or other variations. Often, the best remedy is to simply leave out the phrase, although it might occasionally mean from now on, when an intended change is involved, or simply in future. The culprit here is, of course, management speak, which has infiltrated most parts of public language, and which John Humphrys (of the “Today” programme) calls “a loathsome serpent crawling into our bed at night and choking the life out of our language”. Management speak tries to sound important, but actually reduces language to a “debased, depleted sludge”, as Don Watson calls it in his wonderful book Death Sentence. Sometimes it’s just annoying mumbo-jumbo (actioning deliverables by using blue sky thinking to build on best practice, for example), but sometimes its obscurantism is a deliberate attempt to conceal the truth. In the Mission Statements of so many companies, the Core Beliefs and Values listed under “What We Stand For” can often give a good indication of what they do not in fact do. Then there is the more sinister doublespeak that both dehumanises and seeks to control. Don Watson gives this example of judging an employee’s performance:

The role of the corporate centre is to worry  about talent and how people do, relative to each other. Workers build a set of intangibles around who they are. If they are not appreciated for their value-added, they will go somewhere else.

As Watson comments: “Ask yourself: would you stay if your value-added was not appreciated?”


In my collection of quotes from the Today programme, Items 2, 6, 8, 9, and 10 are examples of management speak, and one can detect its influence in a few others, too. They’re objectionable to grumpy old me for various reasons, like being pompous or untruthful, but basically because they all lack clarity and precision. What does it mean to say that you’ve “taken on board” the criticisms or recommendations of a report? That you’ve understood them? Agreed with them? Heeded them? Accepted them? All that’s clear is that you have avoided any promise to actually do something about them. And what does “reaching out” entail? Making a public appeal to? Contacting by phone? Contacting in person? Inviting to participate? Again, one gets the strong impression that those who are being reached out to are unlikely to see much improvement in their situations.

Item 17 “At set point he really committed to a hard passing shot”,  is an example of management speak spilling over to affect other areas, in this case sports commentary. To paraphrase Watson, just as a parrot might screech all day “Where’s my other sock?”, as if socks mattered to a bird, tennis players are now expected to show commitment and to be accountable, as if they were global corporations.

The other item connected to sport, Item 11, uses the journalistic favourite It was what can only be described as… to avoid the effort of actually describing something; and this in turn is connected to Item 16, where The situation is fluid is code for “I’ve no idea what’s going on”, and to Item 12, where a palpable sense of relief is unlikely to have been palpable.


A lot of the other items are clichés, dead bits of language which have been worn out by too much use. They just don’t convey much any more. I’m particularly sorry that “paradigm” has suffered this fate.

Items 1, 4, and 7 are examples of the now ubiquitous “So”, usually found at the start of an utterance and used like “Well” or “Um”. I find it very annoying, but while I’m not prepared to take my wife’s advice to “just get over it”, I recognise that this is no more than me being an old grump. The same goes for the tendency to use nouns as verbs. “Impact” is now firmly established as a verb; everybody with any business credibility is leveraging and actioning; in our field, “grammar” has succumbed (Hugh Dellar likes to talk of grammaring); “fast track” was bad enough when it was only used as a noun; and I’m sure you have your own list of peeves. The older you are, the longer it’s likely to be.


Finally, there’s Item 3: “I can’t have closure until the guilty men are locked up”. I wonder if the locked up bit owes anything to the Trump campaign against Hillary Clinton. In any case, what this utterance illustrates is the widespread sense of entitlement observed in our society today. These days, there is a growing belief, as Watson puts it, that no mistake, inadequacy or failure should be accepted as a normal part of life: someone or some human process is to blame, and without blame, there can be no closure. Getting closure is already a cliché; but given the amount of litigation flying around these days, where people try to get compensation from those they blame for what happened to them, it’s fast becoming associated with insincere greed.


OMG, is that the time? Before I go out, I have to find the car keys (putting things in their “usual place” doesn’t seem to work these days); send off this letter to the local town council (complaining about the inappropriate use of the Catalan flag to wrap the baby Jesus in the nativity crib); trim the hair in my ears (sic); make an appointment with an optician (I can’t see the ball in the TV coverage of golf tournaments, even when I sit up close to the 60 inch screen); and fix the plug on my electric blanket (well it’s either that or a hot water bottle). Now, where did I put those scissors?

Making a mess of the evidence on error correction


Earlier this year, Russ Mayne invited Chris Smith to share his thoughts on error correction. On Friday, Russ advised teachers to use this weekend to “do their homework” and read Smith’s post, as if it were an important part of their education. I’m a firm believer in taking the weekend off, so my advice is: “Don’t bother!”

Smith reviewed some of the literature on error correction and concluded:

So, when it comes to evidence based EFL, we can conclude that the evidence shows that error correction works. I would also assert that if people want to argue that it does not work, they cannot merely cherry pick one or two articles that did not find a link. They would need to show why all the clear evidence mentioned above (and more) is wrong. So going back to Krashen and Terrell, they asserted that EC is useless, and this idea has been dogmatically perpetuated. However, this is demonstrably wrong. The evidence shows that EC clearly is effective.

I’d like to make three points:

  1. Unless Chris Smith tells us what kind of error correction works with what kind of error, his claim that “error correction works” has little force.
  2. Smith states that anyone who disagrees with his claims for error correction has to show why all the evidence mentioned is wrong. This is based on a misunderstanding of the role of evidence in SLA research.
  3. Smith fails to distinguish between different views on error correction, which stem from different views of second language learning. By presenting the argument as a binary choice (for or against error correction), Smith misrepresents the scholarly discussion of error correction that’s taken place in the last twenty years, and misinterprets the evidence.


As to the first point, in order to investigate the question of whether or not error correction for speaking works, we need to define “error correction” and agree on what counts as evidence that “it works”. Smith claims that evidence from 6 different studies supports his poorly-articulated assertion, but there’s good reason to question this claim.

Study 1

The first study cited by Smith is by Lightbown and Spada (1990). This study gave no indication of what type of error correction leads to what kind of improvement in the participants’ speaking, and furthermore, as Lyster and Ranta (1997) point out, it

examined the effect of a combination of …. both form-focused instructional materials and feedback on error, and thus shed no light on the effectiveness of error correction on its own. (Emphasis added.)

Studies 2 & 3

The second study Smith mentions, by Carroll, Roberge and Swain (1992), and the third, by Carroll and Swain (1993), have methodological problems, as the authors admit, that make generalisation questionable. In the case of the 1993 study, for example, the time between initial and final testing was one week, far too short to know if learning was retained. Apart from these shortcomings, the studies only look at learning a particular kind of grammatical generalisation. Lyster and Ranta (1997) comment:

In Carroll, Roberge, and Swain (1992), adult subjects were trained and given feedback (or not) on two rules of suffixation in French, whereas Carroll and Swain (1993) investigated the effect of different types of feedback on the learning of the dative alternation rule in English, also by adults. It is difficult to know just what relevance the findings of these studies have for the treatment of learner errors during communicative interaction in school settings, particularly with younger learners.

The 1993 article itself ends

Despite these limitations, we tentatively conclude that our study lends support to Schachter’s claim that indirect and direct corrective feedback can help adult second language learners learn abstract linguistic generalisations.

This modest claim for the study falls far short of the claim Smith makes for it.

Study 4

The fourth study by Lyster and Ranta (1997) looks at four different types of error correction and concludes that

 “the feedback-uptake sequence engages students more actively when there is negotiation of form, that is, when the correct form is not provided to the students—as it is in recasts and explicit correction—and when signals are provided to the learner that assist in the reformulation of the erroneous utterance” (emphasis added).

Thus Chris Smith’s summary, which says that the data suggests that “explicit EC was more effective than implicit EC”, is inaccurate. Furthermore, the article concludes

Providing feedback as part of a negotiated sequence in this way, however, is of course feasible in L2 classrooms only where learners already possess an adequate level of proficiency. If this condition is met, then the four feedback types that serve to actively engage learners in the negotiation of form remain non-threatening and potentially useful.

Again, the claim that, provided a certain very constraining condition is met, four types of feedback are “potentially useful” falls far short of the claim Smith makes.

Studies 5 & 6

Loewen’s (2005) analysis of 17 hours of classroom interaction concludes, as Smith says, that “incidental focus on form does have some effect on L2 learning” (p. 381). Once again, this is a long way from concluding that “error correction clearly works”.

The Ellis, Loewen and Erlam (2006) study did, as Smith says, find that explicit feedback was more effective than implicit feedback and that the benefits became more evident over time. But Smith neglects to mention works which question these findings, including Li’s (2010) meta-analysis of 33 studies, which found that the longer-term effect of recasts was larger than that of explicit feedback, and Long’s (2015) summary of the research on recasts, which cites Li (2010) and concludes that recasts are a better form of oral error correction for classroom-based SLA than explicit error correction.



The important thing to note here is that there is a major disagreement among scholars of instructed SLA about the relative merits of implicit and explicit oral correction. Smith rightly notes that more studies these days prefer explicit error correction, but he doesn’t acknowledge that these studies represent work done by a group of scholars who all share the same view of instructed SLA; namely, that there is a “strong interface” between explicit and implicit knowledge. Likewise, Smith does not acknowledge that studies which give evidence for the superiority of implicit correction are carried out by those taking the “weak interface” and “no interface” positions. Instead of dealing with the fact that there are important disagreements among scholars about the quality of the evidence regarding error correction and the conclusions we can draw from it, Smith presents the argument as being between those, like Krashen, who argue that error correction “doesn’t work” and those, like him, who think it “clearly works”. This allows Smith to lump together all the evidence showing some effect for error correction, which amounts to deliberately misinterpreting the evidence, a far worse academic sin than cherry picking.


The Role of Evidence

Which brings us to Smith’s assertion that if people want to argue that error correction does not work, “they cannot merely cherry pick one or two articles that did not find a link. They would need to show why all the clear evidence mentioned above (and more) is wrong”. Well, no, they wouldn’t, because evidence doesn’t work like that. Empirical evidence is used to support or to challenge a theory or hypothesis, which in turn tries to answer a question or explain a problem that we’ve articulated.

In the case of this issue, the question is “What is the effect of error correction on instructed SLA?” How you answer this question depends on how you see the process of instructed SLA, which in turn involves how you see the interface (the connection or overlap) between explicit and implicit knowledge. As Smith indicates, the non-interface position is the one taken by Krashen (although he has modified this position a bit in the last 20 years), but this leaves important differences between the weak-interface and the strong-interface positions.

Strong Interface

The strong-interface position is the one that Smith seems to take, and is based on skill acquisition theory as applied to SLA. As Han and Finneran (2014, p.317) say:

It suggests that language learning consists of, and proceeds through, a series of stages: a declarative stage, where learners first accumulate a factual understanding (developing ‘knowledge that’); then, a procedural stage, where learners act on the declarative knowledge (developing ‘knowledge how’); and finally, a stage of automatization, where the procedural knowledge becomes fluent, spontaneous, and effortless.

Han and Finneran (2014) give the example of Schmidt’s Noticing Hypothesis (Schmidt 1990; 1995; 2001) and the claim that ‘you can’t learn a foreign language (or anything else, for that matter) through subliminal perception,’ implying that conscious attention is the only viable pathway to learning, everything else ensuing as its spinoffs.

Weak Interface

Han and Finneran (2014) describe two variants of the weak-interface position. First, R. Ellis (1994; 2005; 2006) argues that

“explicit knowledge can turn into implicit knowledge, contingent on the nature of grammatical elements – whether they are developmentally constrained or not. Explicit knowledge of developmental elements can become implicit only when learners are developmentally ready, while explicit knowledge of non-developmental or so-called variational elements can turn into implicit knowledge at any time” (Han and Finneran 2014, p. 318).

Nick Ellis (2005; 2006; 2007), on the other hand, holds a much “weaker” position and sees learning as a largely implicit, associative and rational process,

whereby learners intuitively identify and organize constructions or form-function mappings based on their probabilistic encounters with relevant exemplars in the communicative environment.

However, the L1 often interferes with the learner’s processing of L2 input, and explicit instruction can help fix the problems, if the instruction

involves the learner in a conscious tension between the conflicting forces of their current interlanguage (IL) productions and the evidence of feedback, either linguistic, pragmatic, or metalinguistic, that allows socially scaffolded development’ (N. Ellis 2007: 84, cited in Han and Finneran, 2014, p. 321).

Nick Ellis’ view is one attempt to articulate an emergentist, or ‘associative-cognitive’, or ‘connectionist’ theory of SLA, based on a usage-based view of language and language development.

We may easily note that the two weak-interface positions are qualitatively different and rest on opposing theories of SLA. One position considers the learning process largely explicit and the other largely implicit; in one, explicit knowledge is necessary, in the other it is ancillary.


As for the non-interface position, Krashen (1982) differentiates between consciously learned knowledge (explicit knowledge) and subconsciously acquired knowledge (implicit knowledge), and argues that learned knowledge and acquired knowledge are dissimilar, separate, and mutually irreplaceable. This view rests on Chomsky’s generative view of language development, where competence is enabled by Universal Grammar interacting with natural L2 input; instruction, by implication, may help performance only under monitored conditions, but it does not help competence or implicit knowledge, which drives spontaneous performance.



I’m not going to argue for any particular view here; suffice it to say that Chris Smith fails to recognise the complexity of the issue of error correction, fails to make his own position clear, and fails to appreciate that arguments about the effectiveness of different kinds of error correction stem from arguments about different theories of SLA. Thus, he is wrong to demand that people who want to argue that error correction does not work need to show “why all the evidence mentioned above (and more) is wrong”. Unless the evidence is faulty (and not all of it is), we can’t show that it’s wrong, but that doesn’t mean we are bound to accept the claims that Smith makes for it. In general, when we look at evidence, we don’t ask “Is it right or wrong?”, we ask “Does it support or challenge a particular theory or hypothesis?”

All of which suggests that Chris Smith’s claim for error correction is both simplistic and misleading. The evidence does not in fact show that “error correction works”, rather conflicting evidence from different studies gives different amounts of support to different, often contradictory claims, which flow from different, often contradictory hypotheses and theories of SLA. The important question is “What kind and degree of error correction works best with what kinds of errors?”; and there is little likelihood that those who take different positions on the interface between explicit and implicit knowledge will agree, because they are informed by conflicting explanations of SLA.


Carroll, S. and Swain, M. (1993) Explicit and Implicit Negative Feedback: An Empirical Study of the Learning of Linguistic Generalizations. Studies in Second Language Acquisition 15, 357-386.

Carroll, S., Roberge, Y. and Swain, M. (1992) The role of feedback in adult second language acquisition: Error correction and morphological generalizations. Applied Psycholinguistics 13, 2, 173-198.

Ellis, R., Loewen, S. and Erlam, R. (2006) Implicit and Explicit Corrective Feedback and the Acquisition of L2 Grammar. Studies in Second Language Acquisition, 28, 2.

Han, Z. and Finneran, R. (2014) Re-engaging the interface debate: strong, weak, none, or all? International Journal of Applied Linguistics, 24, 3.

Krashen, S. and Terrell, T. (1988) The Natural Approach: Language acquisition in the classroom. Hemel Hempstead, Prentice Hall.

Li, S. (2010) The effectiveness of corrective feedback in SLA: A meta-analysis. Language Learning, 60, 2.

Lightbown P. and Spada, N. (1990) Focus on Form and Corrective Feedback in Communicative Language Teaching. Studies in Second Language Acquisition 12, 429-448.

Loewen, S. (2005) Incidental focus on form and second language learning. Studies in Second Language Acquisition, 27, 3, 361-386.

Long, M. (2015) Second Language Acquisition and Task-Based Language Teaching. Oxford: Wiley-Blackwell.

Lyster, R. and Ranta, L. (1997) Corrective Feedback and Learner Uptake: Negotiation of form in communicative classrooms. Studies in Second Language Acquisition, 19, 37-66.



Pearson’s Grand Vision: Standardised Everything!

Pearson PLC is a British multinational publishing and education company headquartered in London. It’s the largest education company and the largest book publisher in the world, and it generates total revenues of $10 billion. It’s a key player in the ELT world, and in the last couple of months Pearson has stepped up its promotional campaign for its Global Scale of English (GSE). Here’s what they say in their Single Global Framework report:


The GSE comprises four distinct parts to create an overall English learning ecosystem:

  1. The scale itself – a granular, precise scale of proficiency aligned to the Common European Framework of Reference.
  2. GSE Learning Objectives – over 1,800 “can-do” statements that provide context for teachers and learners across reading, writing, speaking and listening.
  3. Course Materials – both digital and printed materials, aligned to the selection of learning objectives relevant for a course/level.
  4. Assessments – Placement, Progress and Pearson Test of English Academic (PTE Academic) tests, which are placement, formative/ summative assessments and high stakes tests aligned to the GSE.

The use of the Global Scale of English and GSE Learning Objectives is free along with the full database of GSE Grammar and Vocabulary. A range of Pearson English coursebooks, digital tools and assessments that are mapped to the GSE are available.

Pearson explain that the global ELT industry will be a much better place once everybody in it is using their Global Scale of English ecosystem. The GSE reinforces the Common European Framework of Reference (CEFR) as a tool for standards-based assessment, and is

 the world’s first truly global English language standard, allowing educators, employers and learners to measure progress accurately, easily, and in context.

The GSE uses “Learning Objectives” to describe

 what a learner should be able to do at every point on the Global Scale of English for reading, writing, speaking and listening.

Pearson proudly claim that

Thousands of teachers from over 50 countries have worked on the project to rate the GSE Learning Objectives – to come to a shared understanding of what it means to be at a level in English across each of the four skills on a scale from 10 to 90. 

Pearson now has every part of the ELT business covered (although maybe it needs to make further incursions into the lucrative teacher training sector, just to really sew things up) and is set to solve all our problems, no matter where we might happen to teach. According to Pearson

The GSE is becoming an indispensable tool for schools and educators as a global framework for auditing, building and modifying curriculums.


The question is whether or not we should welcome this latest attempt by Pearson to neatly package the whole of our teaching lives for us; and the answer is, of course, that we should not. The GSE is the most audacious realisation so far of an on-going attempt to standardise ELT, starting with assessment, as epitomised by the CEFR which it aims to supersede. As Glenn Fulcher has pointed out over a number of years (see, for example, Fulcher 2004, 2006, 2008, 2010), we are moving towards “a common educational policy in language learning, teaching and assessment, both at the EU level and beyond” (Bonnet 2007: 672, cited in Fulcher, 2010). Fulcher notes that the CEFR has been indiscriminately exported for use in standards-based education and assessment in non-European contexts, such as Hong Kong and Taiwan; that it is manipulated by centralising institutions which use it to define required levels of achievement for school pupils and adult language learners; and that it is likely to lead to “reducing diversity and experimentation” in research and language pedagogy (Davies, 2008).



The basic problem of the GSE, in common with the CEFR, is that it reifies the language learning process, converting the abstract concepts of its granular descriptors into real entities and inviting us to accept the fallacy that those entities represent language learning and communicative competence. All the difficult-to-define and difficult-to-measure processes involved in language learning, and all the different kinds of knowledge and skills which make up communicative competence, are flattened out, granularised and turned into measurable entities. The learning objectives of the GSE, which describe “what a learner should be able to do at every point on the Global Scale of English”, are mistakenly taken as statements which reflect what is learned and how language acquisition actually happens. What makes this conceit truly preposterous is that the learning objectives of the GSE are not the result of a principled analysis of language use, or of the application of a theory of second language acquisition. Rather, they’re the result of asking teachers to make judgements on sets of descriptors, which they classify according to “levels of difficulty”, “usefulness”, “relevance”, and so on. The data from these teacher judgements are then used to construct scales of unidimensional items using Rasch analysis.
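
For readers unfamiliar with Rasch analysis, here is a toy sketch of what that last step does. This is emphatically not Pearson's actual procedure (whose details are not public); the data are simulated and the fitting method (a plain joint-maximum-likelihood fit by gradient ascent) is chosen for brevity. The point is simply that Rasch analysis places every item on a single difficulty line, whatever judgements are fed into it:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fit_rasch(responses, iters=300, lr=0.5):
    """Toy joint-maximum-likelihood fit of a dichotomous Rasch (1PL) model.

    responses[p][i] = 1 if rater p judged item i "achievable", else 0.
    Under the model P(1) = sigmoid(theta_p - b_i), so every item ends up
    with a single difficulty b_i on one shared scale."""
    n_persons, n_items = len(responses), len(responses[0])
    theta = [0.0] * n_persons          # rater ability/leniency
    b = [0.0] * n_items                # item difficulty
    for _ in range(iters):
        for p in range(n_persons):     # gradient ascent on the log-likelihood
            g = sum(responses[p][i] - sigmoid(theta[p] - b[i])
                    for i in range(n_items)) / n_items
            theta[p] += lr * g
        for i in range(n_items):
            g = sum(sigmoid(theta[p] - b[i]) - responses[p][i]
                    for p in range(n_persons)) / n_persons
            b[i] += lr * g
        shift = sum(b) / n_items       # anchor the scale: mean difficulty = 0
        b = [x - shift for x in b]
        theta = [t - shift for t in theta]
    return theta, b

# Simulate 200 raters judging 5 descriptors of known, increasing difficulty.
random.seed(1)
true_b = [-2.0, -1.0, 0.0, 1.0, 2.0]
data = []
for _ in range(200):
    ability = random.gauss(0.0, 1.0)
    data.append([1 if random.random() < sigmoid(ability - d) else 0
                 for d in true_b])

_, est_b = fit_rasch(data)
print([round(x, 2) for x in est_b])   # recovered difficulties, easiest first
```

Whatever its statistical tidiness, nothing in such a fit tells us that learners acquire the items in that order: the unidimensionality is a property of the model, not of the learning process.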

We may summarise the weaknesses of the GSE as follows:

  • The GSE has absolutely no basis in theory or in SLA research.
  • The GSE is an example of what Fulcher calls “Frankenstein scales”, which don’t relate to any specific communicative context, or give a good description of any particular communicative language ability.
  • The GSE assumes that the abilities it describes develop in the way implied by the hierarchical structure of the scales, but we know that learners don’t actually acquire language or communicative abilities in this way. Statistical and psychological unidimensionality are not equivalent, and the pedagogic notion of learners moving unidimensionally along the line from 10 to 90 is ridiculous. Learning an L2 is gradual, incremental and slow, exhibiting plateaus, occasional movement away from, not toward, the L2, and U-shaped or zigzag trajectories rather than smooth, linear contours.
  • Post-hoc attempts by Pearson to produce benchmark samples showing typical performance along the GSE do no more than state things that are true by definition only. These definitions are both circular and reductive (Fulcher 2008: 170-171).


Pearson’s “overall English learning ecosystem” wants to turn the messy, unruly thing that is communicative competence into thousands of carefully identified, described and graded granules, serve them up in coursebooks which respect the “correct order” described in the GSE, and then assess learners’ ability to regurgitate them to specification. The motivation for this appalling mission is profit, and its rationale is creating order out of chaos. Alas, Pearson’s joyless vision of the future of ELT is a real threat, representing the culmination of a process started in the nineties, when coursebooks first started to snuff out the wonderful profusion of methods which abounded during the previous decade. This standardised, granular approach to ELT means that learners don’t learn what is taught and that teachers don’t heed what is known about the language learning process. It also means that most learners fall short of the level of proficiency they aim for, and that most teachers fall short of the level of job satisfaction they expected.


It’s a bit like climate change. There are those who deny that the commercialisation of education as manifested in Pearson’s GSE, and the fall in standards that it represents, is happening at all. There are those who say that it’s not as bad as the doom mongers make out. There are those who grudgingly admit that it’s a problem, but don’t want to give up the comfortable positions they enjoy, and those who just can’t imagine ELT being any other way. But there are also those who speak out, and who organise against the status quo. Join us! Teachers of the world unite! You have nothing to lose but your coursebook!



Fulcher, G. (2004) Deluded by artifices? The Common European Framework and harmonization. Language Assessment Quarterly 1/4: 253-266.

Fulcher, G. (2006) Test architecture. Foreign Language Education Research 12: 1-22.

Fulcher, G. (2008) Criteria for evaluating language quality. In E. Shohamy (ed.), Language Testing and Assessment. Encyclopedia of Language and Education, Vol. 7. Amsterdam: Springer, 157-176.

Fulcher, G. (2010) The reification of the Common European Framework of Reference (CEFR) and effect-driven testing. In Advances in Research on Language Acquisition and Teaching: Selected Papers.



What good is relativism?


Scott Thornbury (2008) asks “What good is SLA Theory?”. This is a question beloved of populists, all of whom agree that it’s of no use to anyone, except the rarefied crackpots who dream it up. Thornbury sets the tone of his own populist piece by saying that most teachers display a general ignorance of, and indifference to, SLA theory, due to “the visceral distrust that most practitioners feel towards ivory-tower theorising”. If he’d said that most English language teachers have an ingrained distrust of academic research into language learning, we might have asked him for some evidence to support the assertion, but who can question that ivory tower theorists are not to be trusted? Note how Thornbury, who teaches a post-graduate course on theories of SLA at a New York university, and who has published many articles in serious, peer-reviewed journals, smears academics with the “ivory tower” brush, while himself sidling up to the hard-working, down-to-earth sceptics who read the English Teaching Professional magazine.

Thornbury gives a brief sketch of 4 types of SLA theory and then gives 4 reasons why “knowledge of theory” is a good thing for teachers. But you can tell that his heart’s not in it.  He knows perfectly well that “knowledge” of the theories of SLA he mentions is of absolutely no use to anybody unless those theories are properly scrutinised and evaluated, but, rather than attempt any such evaluation, Thornbury prefers to devote the article to reassuring everybody that there’s no need to take SLA theories too seriously.

To help him drive home this anti-intellectual message, Thornbury turns to “SLA heavyweight” John H Schumann. Most SLA scholars regard the extreme relativist position Schumann adopts in his 1983 article as almost comically preposterous, while his acculturation theory is about as “heavyweight” as Dan Brown’s theory of the Holy Grail.  But anyway, judge for yourself.  Schumann (1983) suggests that theory construction in SLA should be regarded not as a scientific task, but as a creative endeavour, like painting. Rather than submitting rival theories of SLA to careful scrutiny, looking for coherence, logical consistency and empirical adequacy, for example, Schumann suggests that competing theories of SLA should be evaluated in the same way that one might evaluate different paintings.

“When SLA is regarded as art not science, Krashen’s and McLaughlin’s views can coexist as two different paintings of the language learning experience… Viewers can choose between the two on an aesthetic basis favouring the painting which they find phenomenologically  true to their experience.”

Thornbury seems to admire this suggestion. He comments:

“This is why metaphors have such power. We tend to be well disposed to a theory if its dominant imagery chimes with our own values and beliefs. If we are inclined to think of learning as the meeting of minds, for example, an image such as the Zone of Proximal Development is more likely to attract us than the image of a black box.”

Schumann’s paper was an early salvo in what, 10 years later, turned into a spirited war between academics who adopted a relativist epistemology and those who held to a rationalist epistemology. The war is still raging, and, typically enough, Thornbury stays well clear of the front line, while maintaining friendly relations with both camps. But let’s be clear: relativism, even though not often taken to the extreme that Schumann takes it, is actually taken seriously by many academics, including Larsen-Freeman and sometimes (depending on how the wind’s blowing) by Thornbury himself. Rational criteria for the evaluation of rival theories of SLA, including logical consistency and the weighing of empirical evidence, are abandoned in favour of the “thick description” of different “stories” or “narratives”, all of them deemed to have as much merit as each other. Relativists suggest that trying to explain SLA in the way that rationalists (or “positivists”, as they like to call them) do is no more than “science envy”, and basically a waste of time. Which is actually the gist of Thornbury’s argument in the 2008 article discussed here.

In response to this relativist position, let me quote Larry Laudan, who says

“The displacement of the idea that facts and evidence matter by the idea that everything boils down to subjective interests and perspectives is—second only to American political campaigns—the most prominent and pernicious manifestation of anti-intellectualism in our time.”


Thornbury asks “What good is SLA theory?” without making any attempt to critically evaluate the rival theories he outlines. But then, why should he? After all, if you adopt a relativist stance, then no theory is right, none is of much importance, so why bother to sort them out? Instead of going to all that unnecessary trouble, all you have to do is take a quick look at Thornbury’s little summary in Table 1 and choose the theory that grabs you, or rather, choose the “dominant metaphor” which best chimes with your own values and beliefs. And if you can’t be bothered to check out which theory goes best with your values and beliefs, then why not use some other, equally arbitrary subjective criterion? You could toss a coin, or stare intently at a piece of toast, or ask Jeremy Harmer.

“What good is SLA theory?” is actually a very stupid question. It’s as if “SLA theory” were some sort of uncountable noun, like toothpaste. What good is toothpaste? It doesn’t actually make much difference to brushing your teeth. But “SLA theory” is not uncountable; some SLA theories are very bad, and some are very good, and consequently, we need to agree on criteria for evaluating them so as to concentrate on what we can learn from the best theories. Instead of pandering to the misinformed view that SLA theories are equally unscientific, equally based on metaphors, equally relative in their appeal, Thornbury could have used the space he had in the journal to examine – however “lightly”- the relative merits of the theories he discusses, and the usefulness to teachers of the best theories.  He could have mentioned some of the findings of psycholinguistic research into the influence of the L1; age differences and sensitive periods; error correction; incomplete trajectories; explicit and implicit learning, and much besides. He could have mentioned one or two of the most influential current hypotheses about SLA, for example that instruction can influence the rate but not the route of interlanguage development.

He could have also pointed out that those adopting a relativist epistemology have achieved very little; that Larsen-Freeman’s exploration of complexity theory has achieved precisely nothing; that his own attempts to use emergentism to conjure up “grammar for free” have been equally woeful; and that the relativists he supports are more responsible than anyone else for the popular view that academics sit in an ivory tower writing unintelligible articles packed with obscurantist jargon for publication in journals that only they bother to read.


Laudan, L. (1990) Science and Relativism: Dialogues on the Philosophy of Science. Chicago, Chicago University Press.

Schumann, J. H. (1983) Art and science in SLA research. Language Learning, 33, 409-75.

Let’s slay the Coursebook


He took his vorpal sword in hand;

      Long time the manxome foe he sought—

So rested he by the Tumtum tree

      And stood awhile in thought.

(Lewis Carroll: Jabberwocky)

There is a manxome foe stalking the ELT hinterland: an ELT Jabberwock; and it needs a vorpal sword to off its head and leave it dead. It is, of course, the dreaded coursebook.

The Jabberwock of ELT, the coursebook, is a lumbering, huge, oppressive, mind-numbing, life-sucking monster.

The ELT Jabberwock certainly burbles, but it doesn’t have eyes of flame; rather, its eyes are horribly hooded; they’re myopic, void of sparkle, mean, narrow, blighted and unblinking.

The ELT Jabberwock is a huge beast. Its hulking bulk strides through the world of ELT, carelessly flattening dissent as it plods its steady, purposeful way on towards its food supply – the bank.

The ELT Jabberwock’s brain is the size of a pea. Lacking any trace of critical acumen, it stomps mindlessly on, carelessly ignoring reasoned criticism.

The ELT Jabberwock’s habitat is big cities: London, Oxford, Cambridge, New York, Boston, Chicago, Amsterdam, Sydney, Hong Kong, Beijing. These are the centres where decisions about the different facets of the multi-billion enterprise of ELT (publishing, testing, accrediting, ELT training, even representation of teachers’ interests) are made.

The ELT Jabberwock sits slimy and slothful at a table meant to be shared by all, slobbering over duck’s livers, lark’s tongues and other people’s dreams. It sleeps under duvets stuffed with the duck’s feathers, serenaded by the lark’s song, oblivious to the dreams it daily destroys. It lives in luxury. It luxuriates in an atmosphere of smug self-satisfaction. It wallows, stuffed to bursting, smothered in excess, in its protected lairs.

The ELT Jabberwock stinks. It gives off an offensive smell of decay, complacency and corruption. It does dirty deals in China, Brazil, Chile, Indonesia, Australia, Vietnam, Hungary, South Korea, Mexico, Canada, and Kazakhstan, for example, ensuring the use of a particular coursebook through wining and dining, favours and bribes.

The ELT Jabberwock takes possession of its owner’s home, like some huge, now unmoveable, untrained, out of control domestic pet, naively brought in by a gullible, well-intentioned family, to brighten things up.

The ELT Jabberwock is reluctant to move. It sits there, defiant, unlistening, too big to be challenged, suffocating development towards a more liberal, a more humanistic, a more shared way of doing things.

The ELT Jabberwock snarls at any attempt to challenge it. It insists, through silence and bad tempered grunts, that things be done its way.

The ELT Jabberwock insists that each unit of its coursebooks should contain a test of what’s been learned. The content of each dead, pre-dissected corpse of language contained in each awful unit should be tested, as if its rarefied content could be learned without respect for the learners’ non-linear development of their interlanguage. And each external test of proficiency, run by the Cambridge Examination Board, or by the truly awful Pearson wing, or by anybody else, should fall in with the fatally flawed Jabberwocky agenda.

The ELT Jabberwock presides, like a flatulent, overweight, dying old beast, over teacher education. It breathes its noxious fumes into CELTA and DELTA courses, encouraging students to use coursebooks and to believe the crap peddled by coursebook writers. It bankrolls the IATEFL and TESOL conventions; it promotes the superstar agenda that typifies these conventions, and it does everything it can to stifle objections to its rotten view of ELT.

I’m tempted to re-name this blog “The Tumtum tree”, a place to rest by, there to indulge in uffish thought, but it’s a tad too contemplative. We need to slay the Jabberwock.

And hast thou slain the Jabberwock?

Come to my arms, my beamish boy!

O frabjous day! Callooh! Callay!

He chortled in his joy.

Why PPP makes no sense at all. A reply to Anderson




I made a comment on Jason Anderson’s blog in reply to his post The PPP Saga Ends. It hasn’t appeared, so here’s an amended version.

Hi Jason,

An interesting journey, and it makes good reading. You make an impressive attempt to defend the indefensible, and there are lots of good references, even if you play fast and loose with what your sources actually say.

To the issues, then.

First, let’s establish what we know about the SLA process after 50 years of SLA research. Students do not learn target forms and structures when and how a teacher decrees that they should, but only when they are developmentally ready to do so. Studies in interlanguage development have shown conclusively that L2 learners exhibit common patterns and features across differences in learners’ age and L1, acquisition context, and instructional approach. Independent of those and other factors, learners pass through well-attested developmental sequences on their way to mastery of target-language structures, or, as is often the case, to an end-state short of mastery.

Acquisition of grammatical structures (and also of pronunciation features and some lexical features such as collocation), is typically gradual, incremental and slow, sometimes taking years to accomplish. Development of the L2 exhibits plateaus, occasional movement away from, not toward, the L2, and U-shaped or zigzag trajectories rather than smooth, linear contours. No matter what the order or manner in which target-language structures are presented to them by teachers, learners analyze the input and come up with their own interim grammars, the product broadly conforming to developmental sequences observed in naturalistic settings. They master the structures in roughly the same manner and order whether learning in classrooms, on the street, or both.

That’s what we know. As a result this statement is plain wrong:

while research studies conducted between the 1970s and the 1990s cast significant doubt on the validity of more explicit, Focus on Forms-type instruction such as PPP, more recent evidence paints a significantly different picture.

It does not. No study conducted in the last 20 years has come up with evidence to challenge the established claim that explicit focus on forms such as PPP can do nothing to alter the route of interlanguage development. As Ortega (2009) states in her summary of SLA findings:

Instruction cannot affect the route of interlanguage development in any significant way.

Teaching is constrained by the learners’ own powerful cognitive contribution, and to assume that learners will learn what they’re taught when they’re taught it using a PPP paradigm is false.

These statements are also false:

  • we have no evidence that PPP is less effective than other approaches
  • writers in academia have neither evidence nor theoretical justification for criticising coursebook writers 
  • The research on which writers such as Michael Long have based their promotion of focus on form is scant

But let’s get to the heart of the matter, which is really quite simple. You base your arguments on a non-sequitur  that appears throughout your paper. It’s this:

There is evidence to support explicit (grammar) instruction, therefore there is evidence to support the “PPP paradigm”.

It’s generally accepted, a non-controversial opinion, that explicit instruction has an important role to play in classroom-based SLA, but it doesn’t follow that PPP is a good approach to classroom based ELT.  PPP runs counter to a mass of SLA research findings, and that’s that. There is nothing, I repeat nothing, in “recent evidence from research studies” that supports PPP as an approach to classroom teaching.  You appeal to evidence for the effectiveness of explicit grammar teaching to support the argument that students will learn what they’re taught in class by a teacher implementing a synthetic syllabus, based on the presentation, practice and production of a sequence of chopped up bits of the language, thus making a schoolboy error in logic.

The rest of your paper says absolutely nothing to rescue a PPP approach from the fundamental criticism that students don’t learn an L2 in the way it assumes they do. The paper consists of a series of non-sequiturs and unsupported assertions which attempt to argue that the way the majority of institutions go about ELT is necessarily the best way.

To say that the PPP approach is popular with students and that coursebooks are consumer-driven, and that PPP is attractive to low-income countries, and that this is evidence to support a “PPP paradigm” is patently ridiculous. The remarks about low-income countries are also patronising and arrogant. You make a naive appeal to an “apples and pears” group of factors that need to be carefully examined and distinguished. I won’t go into any proper analysis now, but, just for example, the multi-billion-dollar ELT coursebook industry is not so much driven by the opinions of the end users as by the language teaching institutions, both public and private, that deliver foreign language courses to them. For these institutions, the coursebook is convenient – it packages the otherwise “messy” thing that is language learning. Which is not to say that it wouldn’t be cheaper, better, more efficient, and more rewarding for everybody if the coursebook were abandoned in favour of locally-produced materials used in a more learner-centred approach.

Likewise, to say in reply to Neill that

the notion of ‘linear progress’ is a reflection of a much wider tendency in curricula and syllabi design. Given that the vast majority of English language teaching in the world today is happening in state-sponsored primary and secondary education, where national curricula perform precisely this role, we can predict to a large extent that top down approaches to language instruction are going to dominate for the foreseeable future

is to give absolutely no justification for such top down approaches to language instruction. Yes, as a matter of fact, they dominate ELT today, but that’s no argument in their favour, now is it?

You fail to address the arguments for a learner-centred approach, or any version of the process syllabus suggested by Breen. Those of us who oppose PPP do so not only because it contradicts what we know about SLA, but also because it adopts a pedagogy where students are given no say in the decisions that affect their learning, where the commodification of education goes unchallenged, and where Freire’s “banking” view of education rules. To oppose the way ELT is currently organised is not unrealistic, any more than opposing the privatisation of education in the UK is; but it is difficult. Whatever one’s views, the kind of faux academic baloney present in your paper really doesn’t help.

Finally, your long quote from Ur in reply to Neill is just one more example of argument by assertion. She’s good at this kind of stuff, and I’m not surprised that you like it, but it’s pure rhetoric. She says “such features as students’ socio-cultural background, relationships, personalities; motivation;” etc., etc., “often actually have more influence on how grammar is taught, and whether it is successfully learnt, than any of those dealt with in research”. This ignores all the research that has been done into those features, and provides no evidence or arguments to challenge SLA research findings with regard to the development of interlanguages.


Ortega, L. (2009) ‘Sequences and processes in language learning’, in Long, M. H. and Doughty, C. J. (eds.) The Handbook of Language Teaching. Oxford: Wiley-Blackwell.

MA students question Coursebooks


September is “marking month” for those of us involved in MA courses in TEFL and Applied Linguistics, and this batch of marking has been interesting because, for the first time, I’ve been asked to mark papers that address the subject of coursebooks. I’d love to think that I had something to do with this welcome new development, but, alas, I can’t take any credit for it. No, the credit has to go mostly to Scott Thornbury, who, both in his own steady stream of well-argued publications and in his joint work with Luke Meddings on Dogme, has managed to publicise this “elephant in the room”, this supremely important matter in the ELT world – the domination of the coursebook – to such an extent that it’s now being widely discussed not just in teachers’ rooms and at conferences, but also in academic departments such as the one I work in at Leicester University. Thornbury is the most cited name in the references of the papers I’ve read, even ahead of Tomlinson.


The Tide Turns

I think we may actually be witnessing a change of the tide here. Scott has often said that he’s pessimistic about the chances of throwing off the shackles of the coursebook (my words, of course, not his), but I reckon that the widespread revolt against the hegemony of the coursebook is gathering splendid momentum. It might be fanciful to link this growing disenchantment with the coursebook to what we’re seeing in the political sphere, but I’d like to think that the way citizens in so many so-called democracies are currently revolting against the choices they’re offered shares something in common with the revolt among teachers against the choices they’re offered.  It’s a revolt against the ELT establishment, and its symbolic figurehead, the coursebook.



When I read the papers on coursebooks submitted by our MA students, what was so invigorating was that, in place of the hopeless homilies of Harmer, the plodding platitudes of Prodromou, the superior sneerings of Scrivener, the dour drudgery of Dellar, the bilious boredom of Billocks (Is this right? Ed.), the papers argued their case coherently. They cited sources for their opinions, they debated the issues carefully, and they unanimously concluded, as Scott does, that coursebooks are a stifling influence on teaching.

The MA papers I’ve read recently have reminded me of how far back the challenge to coursebooks goes – back further than when, in the late 1980s, modern coursebooks took over as the core syllabus for most ELT around the world.

  • Breen and Candlin (1980) were early protesters;
  • Breen (1987) gave his criticism of product syllabuses;
  • Allwright (1981) claimed that the course of second language learning is too complex to be packaged neatly into the pages of a coursebook.
  • Littlejohn (1992) argued that language learning through coursebooks is achieved mainly through “‘reproductive’ tasks which require learners to reproduce, often identically, the content they are presented with”. He suggested that this degree of scripting in the materials places teacher and learner in a subordinate position to the materials writer.
  • Fox (1998) made similar objections. (Just BTW, his study found that children in South Korea were taught “like/don’t like” using food, and that the “extension”, where they were asked if they liked their classmates left them horrified. Not a good argument against modern coursebooks, you might rightly say.)

More Recent Stuff

On to the 21st century, then.

  • Thornbury and Meddings (2001) argue that so-called communicative tasks in coursebooks merely simulate genuine communication rather than stimulating conversation and enabling learners to communicate their own meanings and intentions.
  • Crawford (2002) describes coursebooks as a ‘debilitating crutch’ that de-skills teachers and stifles their ability to be creative and innovative, which in turn reduces their capacity to respond to learner needs effectively.
  • McGrath (2002) highlights the ideological bias of coursebooks, noting that “ideology, like culture, can be built into materials by design”.

And on and on it goes, through the excellent criticisms found in Tomlinson (1998; 2001; 2003; 2010; 2013); Gray (2002; 2010; 2013); Thornbury (too many to mention, but see 2013 in Gray); till we reach Long’s (2015) careful dissection of the coursebook, which I’ve discussed elsewhere on this blog.



We might take a moment to look at McGrath (2013), who looks at how coursebooks can be adapted. He states that adaptation is a ‘necessary and natural’ part of using a coursebook, achieved through “evaluative-creative decisions that lead to three processes: omission, addition, and change”. This is surely the crux of the argument used by those who defend the coursebook: “It’s not the coursebook, it’s how you use it!” is the mantra. But what does this actually mean? It means, in my opinion, and pace McGrath, that teachers actually teach despite the coursebook, not with it. One study that I read last week in an MA dissertation showed that the participants used less than 30% of the coursebook’s contents (omission); spent most of their time finding or creating other material (addition); and changed the order of the units of the book (change). Isn’t this patently absurd? Isn’t it obviously better to throw the damn coursebook out the window and devote your energy elsewhere?



My arguments against the coursebook chime with those of Scott and many others. We in the anti-coursebook camp are against the use of coursebooks because we have a radically different view of what ELT should be. We see coursebooks as an obstacle to be overcome. They represent the commodification of education, they suffocate good teaching practice, and they stand in the way of progress towards a more local, a more humanistic, a more efficient way of helping our students towards their goals.



Breen, M., & Candlin, C. N. (1980). The essentials of a communicative curriculum in language teaching. Applied Linguistics, 1(2), 89-112.

Crawford, J. (2002) ‘The role of materials in the language classroom: finding the balance’, in Richards, J.C. and Renandya, W.A. (eds.) Methodology in language teaching. Cambridge: CUP, pp. 80-91.

Fox, G. (1998) ‘Using corpus data in the classroom’, in Tomlinson, B. (ed.) Materials development in language teaching. Cambridge: Cambridge University Press, pp. 25-43.

Gray, J. (2002) ‘The global coursebook in English language teaching’, in Block, D. and Cameron, D. (eds.) Globalization and language teaching. London: Routledge, pp. 151-167.

Gray, J. (ed.) (2013) Critical perspectives on language teaching materials. London: Palgrave Macmillan.

Gray, J. (2010) The Construction of English: Culture, Consumerism and Promotion in the ELT Global Coursebook. London: Palgrave Macmillan.

Littlejohn, A. (1992) Why are ELT materials the way they are? Phd. Lancaster University.

McGrath, I. (2002) Materials evaluation and design for language teaching. Edinburgh: Edinburgh University Press.

McGrath, I. (2013) Teaching Materials and the Roles of EFL/ESL Teachers. Bloomsbury Academic.

Rixon, S. (1999) ‘Where do the words in EYL textbooks come from?’, in Rixon, S. (ed.) Young learners of English : some research perspectives. Harlow: Longman, pp. 55-71.

Thornbury, S. (2013) ‘Resisting coursebooks’, in Gray, J. (ed.) Critical perspectives on language teaching materials. London: Palgrave Macmillan, pp. 204-223.

Thornbury, S. and Meddings, L. (2001) ‘The roaring in the chimney (Or: what coursebooks are good for)’, Humanising Language Teaching, 3(5). Accessed 22/08/2016.

Tomlinson, B. (ed.) (2013) Applied linguistics and materials development. New York; London: Bloomsbury.

Tomlinson, B. (2010) ‘What do teachers think about EFL coursebooks?’, Modern English Teacher, 19(4), pp. 5-9.

Tomlinson, B. (ed.) (2003) Developing Materials for Language Teaching. New York; London: Bloomsbury.

Tomlinson, B. (2001) ‘Materials development’, in Carter, R. and Nunan, D. (eds.) The Cambridge guide to teaching English to speakers of other languages. Cambridge: Cambridge University Press, pp. 66-71.

Tomlinson, B. (ed.) (1998) Materials development in language teaching. Cambridge: Cambridge University Press.

Tomlinson, B. and Masuhara, H. (eds.) (2010) Research for materials development in language learning: evidence for best practice. New York; London: Continuum.

Tomlinson, B. (2012) ‘Materials development for language learning and teaching’, Language Teaching, 45(2), pp. 143-179.

Bendy Bedrock, Part 2


The post on Scientific American’s article on Chomsky has prompted some suggestions for further reading. Here’s a summary.


Kevin Gregg recommends Evans, N. & Levinson, S.C. (2009) The myth of language universals: language diversity and its importance for cognitive science. Behavioral & Brain Sciences 32:429-492.

The main article makes an argument that I don’t think stacks up (more importantly, neither does Gregg), but it’s well presented, and it’s followed by “Open Peer Commentary”, where a very wide selection of scholars, including Baker, Bevett, Christiansen and Chater, Croft, Adele Goldberg, Harbour, Nevins, and Pinker & Jackendoff, respond. Very highly recommended.


Scott Thornbury confesses that his enthusiasm for Everett’s awful book has dwindled. (See here for the post on his A to Z blog where he cites Everett. A lively discussion followed.) He now recommends Christiansen and Chater (2016) Creating Language: Integrating Evolution, Acquisition and Processing. MIT Press. A review on the website says:

Because children learn language quickly and easily, many theorists have believed this means there are specialized brain mechanisms specific to language acquisition. This has led them to ask how the brain has changed to accommodate language. Christiansen and Chater flip the question around, asking, “Why is language so well suited to being learned by the brain?” Taking a cultural evolution approach, they conclude language is easy for us to learn and use because language, like a living organism, has evolved in a symbiotic relationship with humans. Language has adapted to what our brains can do, rather than the other way around.

“We view language as piggy-backing on older pre-linguistic abilities,” Christiansen says. “Results from my lab indicate that there’s likely to be some biological differences in how people are able to process sequences of information and ‘chunk’ that information together into larger units. These differences interact with variation in linguistic experience and give rise to individual differences in language processing. The importance of experience is further underscored by the many studies showing that there’s a strong correlation between the number and variety of words that children hear and their language abilities. It can make a huge difference.”


Phil Chappell points us to a website called The Conversation where a mum with a Ph.D. begs to differ with Chomsky’s UG theory, based partly on her experiences with her baby. She cites Vyvyan Evans’ book The Language Myth (which is almost as bad as Everett’s), goes on to cite “the growing body of research on infant and mother communication” and then claims that “babies need joyful, responsive human company”. The article illustrates the danger of not appreciating the strictly-defined domain of Chomsky’s theory, namely language competence. That babies need human company is a fine example of a motherhood statement, but it has nothing to do with the POS argument or UG theory.

More seriously, Phil Chappell recommends Lee, N., Mikesell, L., Joaquin, A. D. L., Mates, A. W., & Schumann, J. H. (2009). The interactional instinct: The evolution and acquisition of language. Oxford University Press. You can download a pdf file of Lee & Schumann’s presentation of the book at this website. They say:

In this presentation, we outline a perspective on language acquisition based on evolutionary biology and neurobiology. We argue that language is a cultural artifact that emerges as a complex adaptive system from the verbal interaction among humans. We see the ubiquity of language acquisition among children generation after generation as the product of an interactional instinct that, as Tomasello indicates, is based on an innate drive to communicate with and become like conspecifics.

The “cultural artifact” line is common to those working on emergentist models. Note the reference to Tomasello, who figures in so much of the lit. these days, including the book Scott mentions.


Ms. Westerlund invites “those with an open mind and more importantly, critical mind” (sic) to read an article by Tom Wolfe about Darwin and Chomsky (partly aimed at promoting Wolfe’s new book on Chomsky) in Harper’s magazine.

I don’t know whether Ms. Westerlund is implying that only those who possess an open, critical mind will appreciate Wolfe’s argument, or that only they will recognise it for the blustering tosh that it is. I personally think that just about anybody will quickly conclude that Wolfe has no idea what he’s talking about. What’s interesting is that Wolfe relies on the aforementioned Daniel Everett, he of the Pirahã study, to argue the case against Chomsky for him. Wolfe likes Everett because he’s a real macho man, a man who poses for the front jacket of his book up to his neck in water in a dangerous river while a Pirahã fisherman sits safely in his boat. Unlike the nasty, arrogant, desk-bound cissy Noam Chomsky, Daniel is a proper linguist who thinks fieldwork is essential, and “winds up in primitive places, emerging from the tall grass zipping his pants up”.


But, according to Wolfe, Everett is not just a proper macho man, he’s a brilliant academic. Everett’s fieldwork, Wolfe assures us, revealed that the Pirahã language lacked recursion, thus refuting Chomsky’s claims for the universality of his UG. Of course, Everett’s work did no such thing. Lots of linguists have since replied to Everett’s claim, pointing out that the Pirahã language does indeed have recursion (e.g., “I want the same hammock you just showed me”) and that recursion is not that central to Chomsky’s theory anyway.


Robert Taylor offers “some interesting research about sounds for common ideas being the same across languages (roughly 2/3rds)”. The link takes you to an article about the study, not the study itself, which is published in the journal Proceedings of the National Academy of Sciences, and is written by Morten H. Christiansen – Scott’s man turns up yet again! Christiansen and colleagues argue that basic concepts, like body parts and familial relationships, and the sounds that people around the world turn to in describing them, have a robust statistical relationship.


Finally, let me recommend an article from one of my favorite web sites, the Stanford Encyclopedia of Philosophy. The article, Innateness and Language, gives an interesting discussion of some of the arguments for and against Chomsky’s theory.

Two Plenaries at the Chile IATEFL Conference, July 2016


I’ve just been watching YouTube videos of the IATEFL Chile Conference that took place in July. I recommend that you watch them, because they demonstrate just how much we need new organisations to represent teachers. The conference plenaries show the same old faces trotting out the same old stuff, and there’s absolutely nothing here to make your heart sing, or, more mundanely, to make you think that the raft of real teachers’ concerns is being addressed. Did anybody mention the miserable pay that millions of teachers get, or zero-hours contracts without pension rights, or appalling conditions of work? Did anybody question how teacher qualifications are decided on, or how professional development is organised? Was there any mention of teachers’ feelings of worth? Did anybody question the IATEFL statutes? Well of course not, because that’s not what the carefully chosen plenary speakers are there to do.

What we see in these videos is a show, a promotion of the stars of ELT who are supposed to enrich the lives of teachers in much the same way as going to see any other celebrity “live” is supposed to do. It’s a travesty of what a conference of working teachers should be. It’s proof, as if proof were needed, of the commercialisation of ELT.


Scott Thornbury gave two talks that he’d done before.  His review of the history of ELT was a repeat of the plenary he gave just a few months previously at the IATEFL international conference, and his talk on his attempts to improve his Spanish was a version of what he’d already said years before. Like so many of the army of professional speakers who tour the world, Scott is almost expected to trot out the same old stuff time and time again. Like Elton John singing Candle in the wind, or Tony Blair chanting I’d do it all again, the audience doesn’t even expect to hear anything new; they just want to be in the audience where the celebrity entertains them. How long before Scott has to autograph the IATEFL programme pushed towards him by admiring fans as he leaves the stage?


Now guess who else gave a plenary in Chile. Guess who the organisers thought was worth flying 9,000 kilometres to address their teachers. Why, who else but that rightly revered, roundly respected, super scholar Jeremy Harmer! And once again Harmer demonstrated his uncanny ability to insult his audience’s intelligence without being booed off the stage. This time, Harmer chose to defend the coursebook, in a plenary titled Back between the covers: should coursebooks exist in a modern age? Please, before you do anything else, watch it by clicking on this link.

What did you make of that hour-long talk? Maybe you can use it in some teacher training programme. Get everybody comfortably seated, play the video, and use this worksheet.


  • How many times does he lose the thread?
  • As a sub-set, how many times does he confess that he can’t remember what he’s talking about?
  • How many times does he contradict himself?
  • How many times (to the nearest 100) does he not bother (sic) to finish a sentence?
  •  How many times does he not answer his own questions?
  • Is he bothered?


  • Give 5 examples of where he resorts to what he really, really sincerely believes rather than to what might pass for a reasoned argument.
  • Give 5 examples of how he misrepresents the arguments he doesn’t like.
  • Give 5 examples of where he shows an ignorance of emergentism and interlanguage research.


  • How does Harmer come across?
  • How does he treat his audience?


  • Give 1 example of something he said that you didn’t already know.
  • Summarise his argument for why coursebooks are useful.
  • Suggest what a plenary talk about the place of ELT coursebooks should discuss.

Now let me give my own view of the plenary. Harmer doesn’t inform or debate the important issues involved; he blusters. From a discourse point of view, he looks to me like a confused, ill-prepared clown hired to appear at a 2-year-old kid’s birthday party. Talk about impoverished input! Nevertheless, observe his general stage manner. It’s a display of authority: he knows he’s a powerful figure in ELT and he acts like it.

As to content, what did he say? Take away the endless pile of platitudes, ignore the sporadic Oh and by the way remarks, leave out the cascade of careless clichés and the endless homilies; in short, do away with the “noise” that always surrounds Harmer’s discourse as he stumbles around the stage like someone who can’t quite remember what he’s so urgently looking for, and what have you got? What do we get from all this pumped up but ultimately lifeless torrent of confident, disorganised clatter and chatter? What does it all mean? What does Harmer’s defence of coursebooks amount to?  Predictably, it amounts to almost nothing. He gave an absurd summary of the arguments against them and then took the audience through some exercises to show that talking about music can be fun. From this he concluded with a trite re-hash of the old chestnut that it’s not the coursebook, it’s what you do with it.

“Two plenaries do not a conference make”, you may say. Quite right, and for all I know, great things might have gone on at the conference. But the plenaries do, I suggest, say a lot about IATEFL conferences.

As an alternative to the way IATEFL organises its conferences, I recommend that you look at the way ELTjam and Oxford TEFL organised their two Innovate ELT annual conferences in Barcelona. No plenaries; no good rooms, bad rooms; no grace and favour crap; nobody gets paid for presenting. There’s a focus on issues that affect teachers’ lives; a genuine attempt to involve every single person who attends the conference, with no special attention to well-known names; an innovative mix of presentation formats; a marvellous range of social activities. I can honestly say that I’ve never attended any conferences with better content, and nothing, but nothing, compares to the wonderful cooperative, friendly, uplifting atmosphere that they managed to create. Of course there are ways that this great initiative can be improved, but the Innovate ELT conference shows the way forward, and it shows that there’s hope for those of us who want change.

Shifting sands and bendy bedrock


Chomsky offers a theory of language and of language learning. The theory claims that all human languages share an underlying grammar and that human beings are born with a knowledge of this grammar, which partly explains how they learn their first language(s) as young children. Criticism of Chomsky’s theory is mounting, as evidenced by a recent article in Scientific American which claims that “evidence rebuts Chomsky’s theory of language learning”. Here, I question that claim.

First, the Scientific American article doesn’t give any evidence to “rebut” Chomsky’s theory. The article talks about counter-evidence, but it doesn’t actually give any. The real thrust of the current popular arguments against Chomsky’s theory has nothing to do with its ability to stand up to empirical challenges. Arguments against Chomsky’s theory are based on

  1. the weaknesses in Chomsky’s theory in terms of its reasoning and its falsifiability,
  2. the claim that no recourse to innate knowledge, specifically to a Language Acquisition Device, is necessary, because language learning can be explained by a general learning theory.

As to the first point, I refer you to Sampson and Bates, the latter particularly eloquently voicing a strong case. You might also look at my discussion of Chomsky’s theory itself. There are, I think, serious weaknesses in Chomsky’s theory. To summarise: it moves the goalposts and it uses ad hoc hypotheses to deflect criticism.

As to the second point, no theory to date has provided an answer to the poverty of the stimulus argument which informs Chomsky’s theory. No attempt to show that usage can explain what children know about language has so far succeeded – none. Theories range from what I personally see as the daft (e.g. Larsen-Freeman and Cameron) through the unlikely (e.g. Bates and MacWhinney) to the attractive (e.g. O’Grady and Rastelli).

As Gregg (1993) makes clear, a theory of language learning has to give a description of what is learned and an explanation of how it’s learned. UG theory acts in a deliberately limited domain. It’s a “property theory” about a set of constraints on possible grammars, which has a causal relation to L1 acquisition through a “transition theory”, which connects UG with an acquisition mechanism that acts on the input in such a way as to lead to the formation of a grammar. Describing that grammar is the real goal of Chomsky’s work. In successive attempts at such a description, those working within a Chomskian framework have made enormous progress in understanding language and in helping those in various fields, IT, for example. Chomsky roots his work in a realist, rational epistemology and in a scientific method which relies on logic and on empirical research.

Any rival theory of language learning must state its domain, give its own property theory (its own account of what language is), and its own transition theory to explain how the language described is learned. You can take Halliday’s or Hoey’s description of language, or anybody’s you choose, and you can then look at the transition theories that go with them. When you do so, you should not, I suggest, be persuaded by silly appeals to chaos theory, or by appeals to the sort of emergentism peddled by Larsen-Freeman, or by circular appeals to “priming”. And you should look closely at the claim that children detect absolute frequencies, probabilistic patterns, and co-occurrences of items in the linguistic environment, and use the resulting information to bootstrap their way into their L1. It’s a strong claim, and there’s interesting work going on around it, but to date, there’s very little reason to think that it explains what children know about language or how they got that knowledge.

To say that Chomsky’s theory is dead and that a new “paradigm” has emerged is what one might expect from a journalist. To accept it as fact is to believe what you read in the press.


The post on Scientific American’s article on Chomsky prompted suggestions for further reading. Here’s a summary.


Kevin Gregg recommends Evans, N. and Levinson, S.C. (2009) The myth of language universals: language diversity and its importance for cognitive science. Behavioral and Brain Sciences 32, 429-492.

This is an excellent article. The main article makes an argument that I don’t think stacks up (more importantly, neither does Gregg), but it’s well presented and it’s followed by “Open Peer Commentary”, where a very wide selection of scholars, including Baker, Bevett, Christiansen and Chater, Croft, Adele Goldberg, Harbour, Nevins, and Pinker & Jackendoff, respond. Very highly recommended.


Scott Thornbury recommends Christiansen and Chater (2016) Creating Language: Integrating Evolution, Acquisition and Processing. MIT. Scott makes a refreshing confession that he didn’t finish reading Everett’s awful book Don’t Sleep, There Are Snakes, which inspired his post “P is for Poverty Of the Stimulus”. The post sparked a lively discussion, where Scott showed signs of a less than complete grasp of UG theory, so it’s good to see him recant his previous enthusiastic endorsement of Everett here. Among other daft stuff, Everett claims that the Pirahã language refutes Chomsky’s claim that recursion is a universal characteristic of natural languages, which it doesn’t. Anyway, the book Scott recommends looks interesting and, judging from reviews, follows what we’ve come to expect from Christiansen and his colleagues. At the risk of sounding condescending, it’s good to see Scott moving on from Everett and from the equally unscholarly nonsense found in Larsen-Freeman and Cameron’s attempts to promote emergentism, to a more sophisticated view.


Talk of Everett brings us nicely to Ruslana Westerlund, who urges those “with an open mind and more importantly, critical mind” (sic) to read The Origins of Speech, an article in Harper’s magazine drawn from Tom Wolfe’s book on Chomsky. The article is what one might expect from something in Harper’s – it’s rubbish, and it only confirms one’s suspicion that Wolfe has nothing much to contribute to any critical debate about Chomsky’s UG theory. Wolfe apparently says that Chomsky is a nerd, and a nasty person to boot, while his hero Everett is a macho man, i.e., in Wolfe’s scheme of things, a good and proper man. Wolfe thinks that Everett’s ability to pose for a photo up to his neck in dangerous waters while one of the Pirahã tribe looks on from his boat is evidence to support Everett’s theory of language learning. I don’t really get Westerlund’s insistence that only those with open and critical minds will appreciate the Harper’s piece; I reckon that only those lacking both will be impressed.


Phil Chappell, a valued contributor to this blog, suggests we look at a blog post by a mother with a Ph.D. in linguistics who says that her relationship with her baby proves Chomsky wrong. More rubbish. The Ph.D.-enriched mum confuses Chomsky’s treatment of linguistic competence, a carefully defined construct in a deliberately restricted domain, with a baby’s need to interact lovingly with its mother.

Phil also suggests that we read Lee, N., Mikesell, L., Joaquin, A. D. L., Mates, A. W., & Schumann, J. H. (2009). The interactional instinct: The evolution and acquisition of language. Oxford University Press. I’ve read this, well, sort of, and I think it’s terrible. To quote the promotional blurb: “Language acquisition is seen as an emotionally driven process relying on innately specified “interactional instinct.” This genetically-based tendency provides neural structures that entrain children acquiring their native language to the faces, voices, and body movements of conspecific caregivers”.  I don’t know if Phil goes along with this mumbo jumbo, and I hope he’ll comment.

Robert Taylor says “Here’s some interesting research about sounds for common ideas being the same across languages (roughly 2/3rds)”. I’m not sure what to make of it, but maybe it’s grist for the mill.

Finally, I recommend an article from the Stanford Encyclopedia of Philosophy, a website that I love and that I visit almost as often as I visit VinoOnLine. The article is called Innateness and Language, and I think it gives a good review of the stuff we’re talking about. I particularly like its discussion of the Popperian view versus the “inference to the best explanation” view (best articulated, I think, by the ever so wonderful Ian Hacking).

Gregg, K. R. (1993) Taking explanation seriously; or, let a couple of flowers bloom. Applied Linguistics 14, 3, 276-294.

Teaching Lexically by Hugh Dellar and Andrew Walkley: A Review


Teaching Lexically is divided into three sections.

Part A begins with “Principles of how people learn”. The authors suggest that the “many thousands of pages written about how people learn languages” can all be “neatly summarised” in 6 principles:

The 6 principles of how people learn languages

I quote:

Essentially, to learn any given item of language, people need to carry out the following stages:

  • Understand the meaning of the item.
  • Hear/see an example of the item in context.
  • Approximate the sounds of the item.
  • Pay attention to the item and notice its features.
  • Do something with the item – use it in some way.
  • Repeat these steps over time, when encountering the item again in other contexts.

These “principles” are repeated in slightly amplified form at the end of Part A, and they inform the “sets of principles” for each of the chapters in Part B.

Principles of why people learn languages

The “Principles of why people learn” are taken en bloc from the Common European Framework of Reference for Languages. The authors argue that teachers should recognise that

for what is probably the majority of learners, class time is basically all they may have spare for language study. [This] … emphasises how vital it is that what happens in class meets the main linguistic wants and needs of learners, chiefly:

  • To be able to do things with their language.
  • To be able to chat to others.
  • To learn to understand other cultures better.


Two Views of Language

The first view is

Grammar + words + skills

This is the authors’ way of characterising what they see as the predominant view of language in ELT, a view they disagree with. According to Dellar and Walkley, most people in ELT hold the view that

language can be reduced to a list of grammar structures that you can drop single words into.

The implications of this view are:

  1. Grammar is the most important area of language. …The examples used to illustrate grammar are relatively unimportant. …It doesn’t matter if an example used to illustrate a rule could not easily (or ever) be used in daily life.
  2. If words are to fit in the slots provided by grammar, it follows that learning lists of single words is all that is required, and that any word can effectively be used if it fits a particular slot.
  3. Naturalness, or the probable usage of vocabulary, is regarded as an irrelevance; students just need to grasp core meanings.
  4. Synonyms are seen as being more or less interchangeable, with only subtle shades of meaning distinguishing them.
  5. Grammar is acquired in a particular order – the so-called “building blocks” approach where students are introduced to “basic structures” before moving on to “more advanced ones”.
  6. Where there is a deficit in fluency or writing or reading, this may be connected to a lack of appropriate skills. These skills are seen as existing independently of language.

The second view of language is the one Dellar and Walkley agree with, and they call it “from words with words to grammar”.


From words with words to grammar

This view is based on the principle that “communication almost always depends more on vocabulary than on grammar”. The authors illustrate this view by taking the sentence

I’ve been wanting to see that film for ages.

They argue that “Saying want see film is more likely to achieve the intended communicative message than only using what can be regarded as the grammar and function words I’ve been –ing to that for.”

The authors go on to say that in daily life the language we use is far more restricted than the infinite variety of word combinations allowed by rules of grammar. In fact, we habitually use the same chunks of language, rather than constructing novel phrases from an underlying knowledge of “grammar + single words”. This leads the authors to argue the case for a lexical approach to teaching and to state their agreement with Lewis’ (1993) view that

 teaching should be centred around collocation and chunks, alongside large amount of input from texts.  

They go on:

From this input a grasp of grammar ‘rules’ and correct usage would emerge. 

Hoey’s Lexical Priming (2005) is said to give theoretical support for this view of language. Hoey shows how words that are apparently synonymous – such as result and consequence – typically function in quite different ways. The differences in the usage of these two synonyms are easily seen in statistics from corpora which show when and how they are used. Dellar and Walkley continue:

Hoey argues that these statistical differences must come about because, when we first encounter these words (he calls such encounters ‘primings’) our brains somehow subconsciously record some or all of this kind of information about the way the words are used. Our next encounter may reaffirm – or possibly contradict – this initial priming, as will the next encounter, and the one after that – and so on. ….

The authors then take Hoey’s citing of “evidence from psycholinguistic studies” as evidence to support the claim that we remember words in pairings and in groups and that doing so allows for quicker and more accurate processing of the language we hear and want to produce. They say that this implies that

 spoken fluency, the speed at which we read and the ease and accuracy with which we listen may all develop as a result of language users being familiar with groupings of words.

Therefore, teach lexical chunks, not the 4 skills as done in most grammar-based coursebooks.


A lexical view of teaching

Approaches to ELT that are influenced by research into interlanguages are briefly discussed. We’re told that interlanguage research concerns grammar and that it has nothing to say about the teachability of vocabulary, whereas other research suggests that vocabulary teaching is effective. Thus, the criticisms made of grammar-based coursebooks don’t apply to coursebooks like theirs, which concentrate on vocabulary teaching. Teachers are urged to

think of whole phrases, sentences or even ‘texts’ that students might want to say when attempting a particular task or conversation because

at least some of those lexical items are learnable, and some of that learning could be done with the assistance of materials before students try to have particular kinds of communication.

The authors then look at some problems of teaching lexically, which are basically that it’s difficult for teachers to come up, in real time, with the right kind of lexical input and the right kind of questions to help students notice lexical chunks, collocations, etc.

They then discuss the practicalities of teaching lexically, under the heading “Pragmatism in a grammar-dominated world” and suggest that teachers should work with the coursebooks they’ve got and approach coursebook materials in a different way, focusing on the vocabulary and finding better ways of exploiting it.

The rest of Part 1 is devoted to a lexical view of vocabulary (units of meaning, collocation, co-text, genre and register, lexical sets, antonyms, word forms, pragmatic meanings and synonyms are discussed), a lexical view of grammar (including “words define grammar” and “grammar is all around”), and a lexical view of skills.

Part 1 ends with “A practical pedagogy for teaching and learning”, which stresses the need to consider “Naturalness, priming and non-native speakers”, and ends with “The Process”, which repeats the 6 processes introduced at the start, noting that noticing and repetition are the two stages that the lexical teacher should place the most emphasis on.

Part B offers 100 worksheets for teachers to work through. Each page shares the same format: Principle; Practising the Principle; Applying the Principle. In many of the worksheets it’s hard to find the “principle”, and in most worksheets “applying the principle” involves looking for chances to teach vocabulary, particularly lexical chunks.

Here’s an example:

 Worksheet 2: Choosing words to teach.

Principle: prioritise the teaching of more frequent words.

Practising the principle involves deciding which words in a box (government / apple, for example) are more frequent, and looking at the online Macmillan Dictionary or the British National Corpus to check.

Applying the Principle involves choosing 10 words from “a word list of a unit or a vocabulary exercise that you are going to teach”, putting the words in order of frequency, checking your ideas, challenging an interested colleague with queries about frequency and “keeping a record of who wins!”

The worksheets cover teaching vocabulary lexically, teaching grammar lexically, teaching the 4 skills lexically, and recycling and revising. Many of them involve looking at the coursebook which readers are presumed to be using in their teaching, and finding ways to adapt the content to a more lexical approach to teaching. In the words of the authors,

the book is less about recipes and activities for lessons, and more about training for preparing lexical lessons with whatever materials you are using.        

Part C (10 pages long) looks at materials, teaching courses other than general English, and teacher training.



Language Learning

Let’s start with Dellar and Walkley’s account of “how people learn”. More than 50 years of research into second language learning is “neatly summarised” by listing the 6 steps putatively involved in learning “any given item of language”. You (1) understand the meaning, (2) hear/see an example in context, (3) approximate the sound, (4) pay attention to the item and notice its features, (5) do something with it – use it in some way, and (6) then repeat these steps over time. We’re not told what “an item of language” refers to, but whatever the items are, we can safely presume that there are tens, if not hundreds, of thousands of them, all learned by going through the same process.

Let’s look at an alternative account. Bachman (1990) suggests that rather than learning countless thousands of “items” of language in a 6-step process, people learn languages by developing a complex set of competencies. His framework includes three components: language competence (“a set of specific knowledge components that are utilised in communication via language”); strategic competence (“the mental capacity for implementing language competence in contextualized communicative language use”); and psychophysiological mechanisms (“the neurological and psychological processes involved in the actual execution of language as a physical phenomenon”). The main parts of language competence are organisational competence (comprising grammatical and textual competence) and pragmatic competence (comprising illocutionary and sociolinguistic competence).


There remains the question of how these competencies are developed. We can compare Dellar and Walkley’s view with that offered by theories of interlanguage development (see Tarone, 2001, for a review). Language learning is, in this view, gradual, incremental and slow, sometimes taking years to accomplish. Development of the L2 involves all sorts of learning going on at the same time as learners use a variety of strategies to confront problems of comprehension, pronunciation, grammar, lexis, idioms, fluency, appropriacy, and so on. The concurrent development of the many competencies Bachman refers to exhibits plateaus, occasional movement away from, not toward, the L2, and U-shaped or zigzag trajectories rather than smooth, linear contours.  This applies not only to learning grammar, but also to lexis, and to that in-between area of malleable lexical chunks as described by Pawley and Syder.

Learners have to master the idiosyncratic nature of words, their collocates etc., not just their canonical meaning. When learners encounter a word in a correct context, the word is not simply added to a static cognitive pile of vocabulary items. Instead, they experiment with the word, sometimes using it incorrectly, thus establishing where it works and where it doesn’t. By passing through a period of incorrectness, in which the word is used in a variety of ways, they climb back up the U-shaped curve. Take the example of the noun ‘shop’. Learners may first encounter the word in a sentence such as “I had breakfast at the coffee shop yesterday.” Then they experiment with deviant utterances such as “I am going to the supermarket shop,” correctly associating the word ‘shop’ with a place where they can purchase goods, but getting it wrong. By making these incorrect utterances, the learner works out what is appropriate and what is not, because “at each stage of the learning process, the learner outputs a corresponding hypothesis based on the evidence available so far” (Carlucci and Case, 2011).

Dellar and Walkley’s account of language learning is surely neither well explained nor anything like complete. These are not, I suggest, very robust principles on which to build. The principles of why people learn are similarly flimsy. To say that people learn languages “to be able to do things with their language; to be able to chat to others; and to learn to understand other cultures better” is to say very little indeed.


Two Views of Language

Next, we read about two views of language. The first is the “grammar + words” view, neatly summarised thus:

language can be reduced to a list of grammar structures that you can drop single words into.

Grammar models of the English language, such as those found in Quirk et al. (1985) or Swan (2001), and used in coursebooks such as Headway or English File, describe the structure of English in terms of grammar, the lexicon and phonology. These descriptions have almost nothing in common with the description given on page 9 of Teaching Lexically, which is subsequently referred to dozens of times throughout the book as if it were an accurate summary, rather than a biased straw man used to promote the authors’ own view of language. The one-sentence description, and the 6 simplistic assumptions that are said to flow from it, completely fail to fairly represent grammar models of the English language.

The second view of language: From words + words to grammar

Here, Dellar and Walkley could be expected to take extra care, since this is really the most important, the most distinguishing, feature of their whole approach. But in fact their preferred view of language is poorly articulated and mixed up with arguments for teaching lexically. They attempt to describe Hoey’s (2005) view that the best model of language structure is the word, along with its collocational and colligational properties. Collocation and “nesting” (where words join with other primed words to form sequences) are linked to contexts and co-texts, and grammar is replaced by a network of chunks of words. On this view, there are no rules of grammar; there’s no English outside a description of the patterns we observe among those who use it; there is no right or wrong in language; and it makes little sense to talk of something being ungrammatical.

This is surely a step too far; surely we need to describe language not just in terms of the performed but also in terms of the possible. Hoey argues that we should look only at attested behaviour and abandon descriptions of syntax, but, while nobody these days denies the importance of lexical chunks, very few would want to ignore the rules which guide the construction of novel, well-formed sentences. After all, pace Hoey, people speaking English (including learners of English as an L2) invent millions of novel utterances every day. They do so by making use of, among other things, grammatical knowledge.

The fact that the book devotes some attention to teaching grammar indicates that the authors recognise the existence and importance of grammar, which in turn indicates that there are limits to their adherence to Hoey’s model. But nothing is said in the book to clarify these limits. Given that Dellar and Walkley repeatedly stress that their different view of language is what drives their approach to teaching, their failure to offer any coherent account of their own view of language is telling. We’re left with the impression that the authors are enthusiastic purveyors of a view which they don’t fully understand and are unable to adequately describe or explain.


Teaching Lexically

Teaching Lexically concentrates very largely on what Breen, in his characterisation of the product syllabus, called “doing things to learners”: it’s probably the most teacher-centred, the least learner-centred, book on ELT I’ve ever read. There’s not one mention in the book of including students in decisions affecting what and how things are to be learned. In Teaching Lexically, teachers make all the decisions. They work with a pre-confected product or synthetic syllabus, usually defined by a coursebook, and they plan and execute lessons on the basis of adapting the syllabus or coursebook to a lexical approach. Students are expected to learn what is taught in the order that it’s taught, the teacher deciding the “items”, the sequence of presentation of these “items”, the recycling, the revision, and the assessment.

Secondly, there’s a narrow-minded, almost obsessive concentration on teaching as many lexical chunks as possible. The need to teach as much vocabulary as possible pervades the book. The chapters in Part B on teaching speaking, reading, listening and writing are driven by the same over-arching aim: look for new ways to teach more lexis, or to re-introduce lexis that has already been presented.

Thirdly, the book promotes the view that education is primarily concerned with the transmission of information. In doing so, it runs counter to the principles of learner-centred teaching, as argued by educators such as John Dewey, Sébastien Faure, Paulo Freire, Ivan Illich, and Paul Goodman, and supported in the ELT field by educators such as Chris Candlin, Catherine Doughty, Carol Chapelle, Graham Crookes, Rebecca Brent, Earl Stevick, John Fanselow, Vivian Cook, Sue Sheerin, Alan Maley and Mike Long. All these educators reject the view of education as the transmission of information, and, instead, see the student as a learner whose needs and opinions have to be continuously taken into account. For just one opinion, see Weimer (2002), who argues for the need to bring about changes in the balance of power; changes in the function of course content; changes in the role of the teacher; changes in who is responsible for learning; and changes in the purpose and process of evaluation.

Fourthly, the book takes an extreme interventionist position on teaching English as an L2: it’s about as far from Krashen’s Natural Approach as it’s possible to go. Teaching Lexically involves dividing the language into items, presenting them to learners via various types of carefully-selected texts, and practising them intensively, using pattern drills, exercises and all the other means outlined in the book, including comprehension checks, error corrections and so on, before moving on to the next set of items. As such, it mostly replicates the grammar-based PPP method it so stridently criticises. Furthermore, it sees translation into the L1 as the best way of dealing with meaning, because it wants to move quickly on to the most important part of the process, namely memorising bits of lexis with their collocates and even co-text. Compare this to an approach that sees the negotiation of meaning as a key aspect of language teaching, where the lesson is conducted almost entirely in English and the L1 is used sparingly, where students have chosen for themselves some of the topics they deal with, where they contribute some of their own texts, and where most classroom time is given over to activities in which the language is used communicatively and spontaneously, the teacher reacting to linguistic problems as they arise and thus respecting the learners’ ‘internal syllabus’.

Teaching Lexically sees explicit learning and explicit teaching as paramount, and it assumes that explicit knowledge, otherwise called declarative knowledge, can be converted into implicit (or procedural) knowledge through practice. These assumptions, like the assumptions that students will learn what they’re taught in the order they’re taught it, clash with SLA research findings. As Long says: “implicit and explicit learning, memory and knowledge are separate processes and systems, their end products stored in different areas of the brain” (Long, 2015, p. 44).  To assume, as Dellar and Walkley do, that the best way to teach English as an L2 is to devote the majority of classroom time to the explicit teaching and practice of pre-selected bits of the language is to fly in the face of SLA research.

Children learn languages in an implicit way – they are not consciously aware of most of what they learn about language. As for adults, all the research in SLA indicates that implicit learning is still the default learning mechanism. This suggests that teachers should devote most of the time in class to giving students comprehensible input and opportunities to communicate among themselves and with the teacher.

Nevertheless, adult L2 learners are what Long calls partially “disabled” language learners, for whom some classes of linguistic features are “fragile”. The implication is that, unless helped by some explicit instruction, they are unlikely to notice these fragile (non-salient) features, and thus will not progress beyond a certain, limited, stage of proficiency. The question is: what kind of explicit teaching helps learners progress in their trajectory towards communicative competence? And here we arrive at lexical chunks.


Teaching Lexical Chunks

One of the most difficult parts of English for non-native speakers to learn is collocation. As Long (2015, pages 307 to 316) points out in his section on lexical chunks, while children learn collocations implicitly, “collocation errors persist, even among near-native L2 speakers resident in the target language environment for decades.” Long cites Boers’ work, which suggests a number of reasons why L2 collocations constitute such a major learning problem, including L1 interference, the semantic vagueness of many collocations, the fact that collocates for some words vary, and the fact that some collocations look deceptively similar.

The size and scope of the collocations problem can be appreciated by considering findings on the lesser task of word learning. Long cites work by Nation (2006) and Nation and Chung (2009), who have calculated that learners require knowledge of between 6,000 and 7,000 word families for adequate comprehension of speech and 9,000 for reading. Intentional vocabulary learning has been shown to be more effective than incidental learning in the short term, but, the authors conclude, “there is nowhere near enough time to handle so many items in class that way”. The conclusion is that massive amounts of extensive reading outside class, scaffolded by teachers, are the best solution.

As for lexical chunks, there are very large numbers of such items, probably hundreds of thousands of them. As Swan (2006) points out, a learner memorising 10 lexical chunks a day (roughly 3,650 a year) would take nearly 30 years to achieve a good command of 100,000 of them. So how does one select which chunks to teach explicitly, and how does one teach them? The most sensible course of action would seem to be to base selection on frequency, but there are problems with such a simple criterion, not the least being the needs of the particular students in the classroom. Although Dellar and Walkley acknowledge the criterion of frequency, Teaching Lexically gives it very little discussion, and very little clear or helpful advice is offered about which lexical chunks to select for explicit teaching – see the worksheet cited at the start of this review. The general line seems to be: work with the material you have, and look for the lexical chunks that occur in the texts, or that are related to the words in the texts. This is clearly not a satisfactory criterion for selection.

The other important question to which Teaching Lexically gives no well-considered answer is: how best to facilitate the learning of lexical chunks? Dellar and Walkley could start by addressing the problem of how their endorsement of Hoey’s theory of language learning (a theory which itself endorses Krashen’s Natural Approach outright) fits with their own view that explicit instruction in lexical chunks should be the most important part of classroom-based instruction. The claim that they are just speeding up the natural, unconscious process doesn’t bear examination, because two completely different systems of learning are being conflated. Dellar and Walkley take what’s called a “strong interface” position, whereas Krashen and Hoey take the opposite view. Dellar and Walkley make conscious noticing the main plank of their teaching approach, which contradicts Hoey’s claim that lexical priming is a subconscious process.

Next, Dellar and Walkley make no mention of the fact that learning lexical chunks is one of the most challenging aspects of learning English as an L2 for adult learners. Neither do they discuss the questions about the teachability of lexical chunks raised by scholars like Boers, who confesses that he doesn’t know the answers to the problems he and his colleagues have identified about how to teach them. The authors of Teaching Lexically blithely assume that drawing attention to features of language (by underlining them, mentioning them and so on), and making students aware of collocations, co-text, colligations, antonyms, etc. (by giving students (repeated) exposure to carefully-chosen written and spoken texts, using drills, concept questions, input flood, bottom-up comprehension questions, and so on) will allow the explicit knowledge taught to become fully proceduralised. Quite apart from the question of how many chunks a teacher can be expected to treat so exhaustively, there are good reasons to question the assumption that such instruction will have the desired result.

In a section of his book on TBLT, Long (2015) discusses his fifth methodological principle: “Encourage inductive ‘chunk’ learning”. Note that Long discusses ten methodological principles, and sees teaching lexical chunks as an important but minor part of the teacher’s job. The most important conclusion Long comes to is that there is, as yet, no satisfactory answer to “the $64,000 question: how best to facilitate chunk learning”. Long’s discussion of explicit approaches to teaching collocations includes the following points:

  • Trying to teach thousands of chunks is out of the question.
  • Drawing learners’ attention to formulaic strings does not necessarily lead to memory traces usable in subsequent receptive L2 use, and in any case there are far too many to deal with in that way.
  • Getting learners to look at corpora and identify chunks has failed to produce measurable advantages.
  • Activities to get learners to concentrate on collocations on their own have had poor results.
  • Grouping collocations thematically increases the learning load (decreasing transfer to long term memory) and so does presentation of groups which share synonymous collocates, such as make and do.
  • Exposure to input floods where collocations are frequently repeated has poor results.
  • Commercially published ELT materials designed to teach collocations have varying results. For example, when lists of verbs in one column are to be matched with nouns in another, some erroneous groupings are inevitably produced, and these, even when corrective feedback is available, can be expected to leave unhelpful memory traces.
  • Encouraging inductive chunk learning is clearly well motivated, but it is equally unclear how best to realise it in practice, i.e., which pedagogical procedures to call upon.


Teaching Lexically is based on a poorly articulated view of the English language and on a flimsy account of second language learning. It claims that language is best seen as lexically driven; that a grasp of grammar ‘rules’ and correct usage will emerge from studying lexical chunks; that spoken fluency, the speed at which we read, and the ease and accuracy with which we listen will all develop as a result of language users being familiar with groupings of words; and that, therefore, the teaching of lexical chunks should be the most important part of a classroom teacher’s job. These claims often rely on mere assertion, straw man fallacies, cherry-picked research findings, and the disregard of counter-evidence. The case made for this view of teaching is, in my opinion, entirely unconvincing. The concentration on just one small part of what’s involved in language teaching, and the lack of any well-considered discussion of the problems associated with teaching lexical chunks, are serious flaws in the book’s treatment of an interesting topic.


Bachman, L. (1990) Fundamental Considerations in Language Testing. Oxford University Press.

Breen, M. (1987) Contemporary Paradigms in Syllabus Design, Parts 1 and 2. Language Teaching 20 (02) and 20 (03).

Carlucci, L. and Case, J. (2013) On the Necessity of U-Shaped Learning. Topics in Cognitive Science.

Hoey, M. (2005) Lexical Priming. Routledge.

Long, M. (2015) Second Language Acquisition and Task Based Language Teaching. Wiley.

Swan, M. (2006) Chunks in the classroom: let’s not go overboard. The Teacher Trainer, 20/3.

Tarone, E. (2001), Interlanguage. In R. Mesthrie (Ed.). Concise Encyclopedia of Sociolinguistics. (pp. 475–481) Oxford: Elsevier Science.

Weimer, M. (2002) Learner-Centered Teaching. Retrieved 3/09/2016.

Materials Evaluation


Here’s a vocabulary exercise I found while browsing through material that Gerry Sweeny, a one-time colleague at ESADE Idiomas, gave me.

Vocabulary in Context

The following sentences contain nonsense words. Can you make sense of them?

  1. The sentence was written on a piece of drurb.
  2. Most drurb, like snow, is osgrave.
  3. Cats are domestic ningles.
  4. Polar bears, which are osgrave ningles, live where there is cridlington.
  5. If you set fire to drurb, it firtles.
  6. If you pour narg on firtling drurb, the flames go out.
  7. If you put cridlington into hot narg, it frumes.
  8. Cridlington frumes at a bazoota over 0º C.
  9. Narg boobles at a bazoota of 100º C .
  10. We frize bazootas with a nast.

What do you think the nonsense words mean in the above sentences?

  1. drurb
  2. osgrave
  3. ningles
  4. cridlington
  5. firtles
  6. narg
  7. frumes
  8. bazoota
  9. boobles
  10. frize
  11. nast



I’m currently looking through material available to members of the Cooperativa de Serveis Linguistics de Barcelona, with the idea of getting a materials bank together which would help members to avoid using coursebooks. While there’s an abundance of ELT materials available online, it’s difficult to quickly find material that satisfies a few basic criteria, such as relevance, quality, usability and legality. Neill McMillan and I met recently, and we reckon that we need to assemble a lot of material which satisfies these criteria, or rather, well-considered criteria that we can all agree on, and then classify it according to fields such as, off the top of my head, level, topic, media, grammar point, and skill. The idea is to give members access to a database of materials where they can find written and spoken texts, with accompanying worksheets, at a certain level, on a certain topic, etc., so that they can easily confect everything from an ESP course with appropriate tasks, to lesson plans, to fillers. Maybe you’re only looking for a text; maybe you’re looking for a text plus worksheet; maybe you’re looking for a fresh approach to practising a function; maybe you need a good clear explanation of some grammar point; maybe you’re trying to get together a proposal for a 50-hour course aimed at auditors, and so on. I should add that I have a particular interest in developing a process syllabus, which I’ve discussed in a previous post and which relies on a materials bank.


So we see the challenges of this project as being to decide on the criteria for any bit of material, to decide on how the collection of individual bits of material is organised in the database, and to indicate links among them.

Looking at the worksheet above, what to do? Supposing that it were well presented, and that there were no copyright issues, does it warrant inclusion? Is its openness a good thing (allowing teachers to exploit it in their own way), or does it need some lead-in and some further work? Is it useful, anyway? More generally, how do we judge its worth? If you look at most of the literature on materials evaluation, you’ll be hard put to apply the frameworks to this, because most frameworks are, either explicitly or implicitly, geared to coursebooks. Rather than indulge in a rant, I invite you to give your opinion. If you were getting a materials bank together, would you include this?

Dumb bells in the Language Gym


The Language Gym follows the classic self-help format: I’ll tell you the answers to all your worries and fears (about language teaching) but you need to park your critical faculties at the front door. The posts are stridently prescriptive, shamelessly self-promotional, and dumbly dogmatic, with titles like these:

  • 10 commonly made mistakes in vocabulary instruction
  • Eight motivational theories and their implications for the classroom
  • Six ‘useless’ things foreign language teachers do


The author of this blog is Gianfranco Conti, who never tires of selling himself and his terrible book. A few examples from recent posts:

  • But I do have a teacher-training background, a PhD in Applied Linguistics and an MA in TEFL on top of 25 years language teaching experience.
  • As professor Macaro, former Head of the Oxford University Education Department, wrote in his excellent review of our book ‘The Language Toolkit’ (click here) …
  • I have had to adopt feedback-to-writing strategies that are not aligned with my espoused theory of L2 learning and current research wisdom – despite having a PhD in error correction in second language writing.
  • Since posting my three articles on listening … I have been flooded with e-mail, Twitter and Facebook messages from teachers worldwide
  • My students conjugate verbs every day on the http://www.language-gym conjugator… often scoring 90 -100%

Every post has references to his book, and ends with a plug for it.

Well, “no harm done” you might reasonably say, and maybe none is. Still, in his two most recent posts, Dr. Conti says a few things that I think need commenting on.


1. Principled Teaching

In his latest post Conti argues that ELT must be grounded in a deep understanding (like his) of SLA. He says that teachers need to ask themselves these three questions:

  1. How are foreign languages learnt?
  2. What are the implications of the answer to question (1) for language teaching and learning?
  3. Is the answer to (2) truly reflected in your own teaching practice?

We’ll skip all the preamble, where Conti explains how his abundant qualifications and experience make him better placed than most teachers to be a “reflective practitioner”, and look at his answer to Question 1. He says this:

Cognitive models of language acquisition (especially Skill-based theories and Connectionism) provided the basis for my espoused theory of learning and shaped much of what you read in my blogs and of what I have been doing in the classroom for the last 20 years.

I couldn’t find anything about Connectionism in the gym, but there are certainly quite a few posts where we’re told how learners’ brains work, and how getting things from their working memory into their long term memory is the secret of all teaching and learning. So let’s have a look at the theory which provides the basis for Conti’s principled teaching.


Skill Acquisition Theory

As a general learning theory, skill acquisition theory argues that when you start learning something, you do so through largely explicit processes; then, through subsequent practice and exposure, you move into implicit processes. So you go from declarative knowledge to procedural knowledge and the automatization this brings. Declarative knowledge involves explicit learning or processes; learners obtain rules explicitly and have some type of conscious awareness of those rules. The automatization of procedural knowledge entails implicit learning or processes; learners proceduralise their explicit knowledge, and through suitable practice and use, the behaviour becomes automatic.

Quite a few objections have been raised to this theory. First, the lack of an operational definition undermines the various versions of skill acquisition theory that Conti refers to: there is no agreed operational definition of the constructs “skill”, “practice”, or “automatization”. Partly as a result, but also because of methodological issues (see, for example, DeKeyser, 2007), the theory is under-researched; there is almost no empirical support for it.

Second, skill acquisition theory is in the “strong-interface” camp with regard to the vexed issue of the roles of explicit and implicit learning in SLA. It holds that explicit knowledge is transformed into implicit knowledge through the process of automatization as a result of practice. Many, including perhaps most famously Krashen, dispute this claim, and many more point to the fact that the theory does not take into account the role played by affective factors in the process of learning. Practice, after all, does not always make perfect.

Third, the practice emphasized in this theory is effective only for learning similar tasks: it doesn’t transfer to dissimilar tasks. Therefore, many claim that the theory disregards the role that creative thinking and behaviour plays in SLA.

Fourth, to suggest that the acquisition of all L2 features starts with declarative knowledge is to ignore the fact that a great deal of vocabulary and grammar acquisition in an L2 involves incidental learning where no declarative stage is involved.

In my opinion, the most important weakness of skill acquisition theory is that it fails to deal with the sequences of acquisition which have been the subject of hundreds of studies in the last 50 years, all of them supporting the construct of interlanguages.

We may conclude that while there are some interesting aspects of skill acquisition theory, it is both poorly constructed and incomplete. Given the current state of SLA theory, and given the essentially unscientific nature of the craft of language teaching, the strident claims made by Conti are unwarranted. Insofar as he gives the impression that he knows how people learn a foreign language, and that he knows how to use this knowledge to build the best methodology for ELT, Conti is as deluded as those who use their websites to peddle homeopathic pills.


Planting Seeds

The limitations of Conti’s understanding of SLA are evident in his previous post “The seed-planting technique …..”,   where he says:

effective teaching and learning cannot happen without effective curriculum design…… A well-designed language curriculum plans out effectively when, where and how each seed should be sown and the frequency and manner of its recycling with one objective in mind : that by the end of the academic year the course’s core language items are comprehended/produced effectively across all four language skills under real life conditions.

This amounts to what Breen (1987) calls a “Product” syllabus, what White calls a “Type A” syllabus and what Long (2011 and 2015) calls a “Synthetic” syllabus. The key characteristic of Conti’s “effective curriculum” is that it concentrates on WHAT is to be learned. The designer decides on the content, which is divided up into bits of lexis and grammar that are presented and practiced in a pre-determined order (planting “seeds” which precede the scheduled main presentation and subsequent recycling). The syllabus is external to the learner, determined by authority. The teacher is the decision maker, and assessment of success and failure is done in terms of achievement or mastery.

The problem with Conti’s curriculum is that he relies on skill acquisition theory, which makes two false assumptions. First, it assumes that declarative knowledge is a necessary precursor to procedural knowledge, and second, it assumes that learners learn what teachers teach them, an assumption undermined by all the evidence from interlanguage studies. We know that learners, not teachers, have most control over their language development. As Long (2011) says:

Students do not – in fact, cannot – learn (as opposed to learn about) target forms and structures on demand, when and how a teacher or a coursebook decree that they should, but only when they are developmentally ready to do so. Instruction can facilitate development, but needs to be provided with respect for, and in harmony with, the learner’s powerful cognitive contribution to the acquisition process.

Even when presented with, and drilled in, target-language forms and structures, even when errors are routinely corrected, and even when the bits and pieces are “seeded” and recycled in various ways, learners’ acquisition of newly-presented forms and structures is rarely either categorical or complete, and it is thus futile to plan the curriculum of an academic year on the assumption that the course’s “core language items” will be “comprehended/produced effectively” by the end of the year. Acquisition of grammatical structures and sub-systems like negation or relative clause formation is typically gradual, incremental and slow, sometimes taking years to accomplish. Development of the L2 exhibits plateaus, occasional movement away from, not toward, the L2, and U-shaped or zigzag trajectories rather than smooth, linear contours. No matter what the order or manner in which target-language structures and vocabulary are presented to them by teachers, learners analyze the input and come up with their own interim grammars, the product broadly conforming to developmental sequences observed in naturalistic settings. They master the structures in roughly the same manner and order whether learning in classrooms, on the street, or both. This led Pienemann to formulate his learnability hypothesis and teachability hypothesis: what is processable by students at any time determines what is learnable, and, thereby, what is teachable (Pienemann, 1984, 1989).

Once again, the hyped-up sales pitch turns out to be unwarranted. The carefully-planned, “principled” curriculum Conti showcases is nothing more than an old-fashioned product syllabus, with a few bells and whistles, or rather dumbbells and seeds, thrown in.



Breen, M. (1987) Learner contributions to task design. In C. Candlin and D. Murphy (eds.), Language Learning Tasks. Englewood Cliffs, N.J.: Prentice Hall. 23-46.

DeKeyser, R. (2007) Skill acquisition theory. In B. VanPatten & J. Williams (Eds.), Theories in second language acquisition: An introduction (pp. 97-113). New Jersey: Lawrence Erlbaum Associates, Inc.

Long, M. (2011) “Language Teaching”. In Doughty, C. and Long, M. Handbook of Language Teaching. NY Routledge.

Long, M. (2015) Second Language Acquisition and Task Based Language Teaching. Wiley.

White, R.V. (1988) The ELT Curriculum, Design, Innovation and Management.  Oxford: Basil Blackwell.

Selivan explains Trump speech similarities. No pigs seen flying


Leo Selivan has a rather off-hand way of treating research findings in corpus linguistics: he often uses undefined terms and blurred summaries to support his own particular view of ELT, which, let’s not forget, includes the breath-taking injunction “Never teach single words”. In his most recent post, Selivan repeatedly uses the term “chunks” without defining it, misrepresents Pawley and Syder’s 1983 paper, and then examines an excerpt from Melania Trump’s recent speech to the Republican Party in order to demonstrate that “chunks”, not blatant plagiarism, explain the similarities with Michelle Obama’s 2008 speech.


1.  “Chunks”

Selivan says:

corpus research …. has shown that language is highly formulaic, i.e. consisting of recurring strings of words, otherwise known as “chunks”. What makes them chunks is the fact that they are stored in and retrieved from memory as ‘wholes’ rather than generated on a word-by-word basis at the moment of language production. 

Two comments are in order.

a)  What makes recurring strings of words “chunks” is not how they’re memorised, but rather their form.

b) It is not “a fact” that chunks are stored in and retrieved from memory as ‘wholes’. The hypothesis suggested by Pawley and Syder is that certain types of strings of words are memorised and recalled in a certain carefully described way. By definition, a hypothesis is not a fact – it is a tentative theory which attempts to explain a problem.


2. Pawley and Syder

Selivan says:

The formulaic nature of language was first brought to the fore in a seminal paper by Australian linguist Andrew Pawley and his colleague Frances Syder, who pointed out that competent language users have at their disposal hundreds of thousands of ready-made phrases (Pawley and Syder 1983).

Pawley and Syder’s paper was a lot more nuanced than Selivan suggests. They argued that control of a language entails knowledge of more than just a generative grammar, and that ‘memorized sentences’ and ‘lexicalized sentence stems’ (not “ready-made phrases”) were important additional parts of linguistic competence, useful in explaining the two puzzles of “nativelike selection” and fluency. As they say:

The terms refer to two distinct but interrelated classes of units, and it will be suggested that a store of these two unit types is among the additional ingredients required for native control (Pawley and Syder, 1983, p. 204).

When discussing ‘lexicalized sentence stems’, Pawley and Syder make it clear that these stems often include parts which can be transformed in various ways. They also admit that there are many problems in the treatment of lexicalized sentence stems.

How is a lexicalized sentence stem defined? How do you tell it apart from non-lexicalized sequences? There is no simple operation for doing this. The problem is essentially the same as in distinguishing any morphologically complex lexical item from other sequences; the question is what is ‘lexicalization’? What makes something a lexeme? ….  An expression may be more or less a standard designation for a concept, more or less clearly analysable into morphemes, more or less fixed in form, more or less capable of being transformed without change of meaning or status as a standard usage, and the concept denoted by the expression may be familiar and culturally recognized to varying degrees. Nor is there a sharp boundary between the units termed here ‘sentence stems’ and other phraseological units of a lower order (Pawley and Syder, 1983, p. 207).


3. The Speech

With regard to Melania Trump’s speech, Selivan looks at one of the copied parts and comments on the common uses of “impress upon”, and the ubiquity of the phrases “work hard” and “keep promise” (sic). As a clincher, Selivan says:

Looking at “treat people with respect” which is supposedly copied from Michelle Obama’s “treat people with dignity and respect”, you will see that “dignity” and “respect” are two of the very highly likely collocates here.

From this carefully assembled evidence, Selivan concludes:

If Melania’s faux pas indeed constitutes plagiarism, the text of her speech was no more plagiarized than an academic paper containing “Recent research has shown that” or “The results are consistent with data obtained in…”

Quite apart from the fact that the sentence is badly constructed and the claim a ridiculous non sequitur, can you imagine anybody seriously saying that the use of “Recent research has shown that” or “The results are consistent with data obtained in…” by an academic in a published paper constitutes plagiarism? Likewise, who but Selivan, with his Humpty-Dumpty use of “chunks”, could seriously offer this analogy in order to defend Melania Trump against the accusation of plagiarism?

Here’s an extract from the recent speech:

M Trump: Because we want our children in this nation to know that the only limit to your achievements is the strength of your dreams and your willingness to work for them.

And here’s an extract from the 2008 speech:

M. Obama: Because we want our children — and all children in this nation — to know that the only limit to the height of your achievements is the reach of your dreams and your willingness to work for them.

To attempt to explain the “similarities” between the two texts by appealing to “recurring sequences” is an indication of how far a little knowledge can lead one astray.


Pawley, A., & Syder, F.H. (1983) ‘Two puzzles for linguistic theory: nativelike selection and nativelike fluency’ in Richards, J.C. & Schmidt, R.W. (eds) Language and Communication, London; New York: Longman, pp. 191-225. *

*As Selivan usefully points out, this article is available online at


Mura Nava’s “Quick Cups of COCA”


Mura’s blog EFL Notes is an excellent source of up-to-date, well-considered information on using corpora in ELT. Mura uses his elegant blog to talk to teachers about how to use concordancers in their jobs, and he’s recently published “Quick Cups of COCA”, which I thoroughly recommend. You can download this gem from his website, and you should do it today.

Using a concordancer to search corpora for information about the English language is a rewarding activity for anybody  involved in ELT. It’s fascinating, absorbing, revealing, and it helps us to see the limitations of the explanations of grammatical forms, lexis, and lexical chunks that are offered by current coursebook writers, including those who claim to be implementing a lexical approach.

A concordancer helps you to examine these questions:

  • What words occur in the corpus (a body of texts)?
  • How often does each word occur? (Frequency counts)
  • In how many different types of text (different subject areas, different modes, different mediums) does the word appear?
  • Are there any significant subsets? (For example, in English, the 700 most frequent words account for 70% of all text.)
  • What are the collocations of the target item?
  • What are the contexts in which the word appears?

Taking a word as the search item, a concordancer will list all the occurrences of the word in a text, count how often the word occurs, indicate what types of text the word appears in, and display the instances of the word in context in a variety of formats, the most usual being the Key Word In Context (KWIC) format, which lists every occurrence of the word in a one-line context.
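To make the KWIC idea concrete, here’s a toy concordancer in Python – a sketch of my own for illustration (the sample sentence is invented), not the output of any of the programs mentioned here:

```python
import re
from collections import Counter

def concordance(text, keyword, width=30):
    """List each occurrence of keyword in a one-line KWIC context."""
    tokens = re.findall(r"\w+", text.lower())
    lines = []
    for i, tok in enumerate(tokens):
        if tok == keyword.lower():
            left = " ".join(tokens[max(0, i - 5):i])   # up to 5 words of left context
            right = " ".join(tokens[i + 1:i + 6])      # up to 5 words of right context
            lines.append(f"{left[-width:]:>{width}}  {tok.upper()}  {right[:width]}")
    return lines

def frequency(text):
    """Raw frequency count over word tokens."""
    return Counter(re.findall(r"\w+", text.lower()))

sample = "I waited for two hours. During the war we waited for news."
for line in concordance(sample, "for"):
    print(line)
print(frequency(sample)["for"])  # 2
```

A real concordancer adds sorting of the context words, text-type labels and collocation statistics, but the KWIC display itself is no more than this.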


Tim Johns was among the first to suggest that a concordancer could be used in the classroom, either as a “silent resource” (just waiting until somebody asked a question it could help with), or as a means of making materials. Mura continues Tim’s work, and he does it splendidly. He uses one of the very best corpora available for free consultation (which is accompanied by a very user-friendly concordancer), namely COCA,  a corpus containing more than 520 million words of American English text: 20 million words each year 1990-2015, equally divided among spoken, fiction, popular magazines, newspapers, and academic texts.

Mura’s Quick Cups of COCA, which you can download from his site, is clear as a bell, uncluttered, interesting and thought-provoking. These are the tasks which he outlines:

  1. Using the wildcard asterisk to explore the difference between unmotivated and demotivated.
  2. How to look for synonyms.
  3. Variations of “bring to the boil”.
  4. Relative clauses.
  5. Lemmas (in this case benefit) and parts of speech.
  6. Compound words.
  7. Comparing words (in this case rotate and revolve).
  8. Clauses (in this case the verb claim: claim to have, claim to be, claim to know, etc.).
  9. Miscellaneous: possessives; past regular & irregular; progressive auxiliaries; passives.

Notice the breadth of the tasks. Mura has, I’m sure deliberately, chosen tasks that illustrate just how wide a sweep of questions you can ask.
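Task 1, for instance, hinges on the wildcard asterisk. For readers who want to see what such a search does, it can be simulated with a regular expression over any word list – a toy sketch of my own, with an invented word list standing in for a corpus index; in COCA itself you simply type the pattern into the search box:

```python
import re

def wildcard(pattern, words):
    """Match whole words against a concordancer-style pattern, where * stands
    for any (possibly empty) sequence of word characters."""
    regex = re.compile("^" + pattern.replace("*", r"\w*") + "$", re.IGNORECASE)
    return [w for w in words if regex.match(w)]

# Invented word list standing in for a corpus word index.
corpus = "students felt demotivated while others remained unmotivated or highly motivated".split()

print(wildcard("*motivated", corpus))  # ['demotivated', 'unmotivated', 'motivated']
```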

So many questions come to mind about the use of concordancers in ELT. There’s so much to discuss here, and, somewhat typically, Mura leaves us to muse for ourselves. I did my MA dissertation on concordancers (contact me if you’d like a copy) and I worked with Tim Johns and others to produce Microconcord, a concordancer published by OUP and still available (Google it). Here’s an example of a worksheet that I wrote 20 long years ago to accompany the Microconcord software:


Activity:  Examine the different ways that for and during are used.

Warm Up

How do you think the two words above are used? Here are two examples:

  I haven’t seen Jim for two months.

  I lived in Holland during the war.

A common mistake is:

  x I haven’t seen Jim during two months. x

As a preliminary description, we can say that for is used to say how long something lasts, and during is used to say when something happened, but only in reference to a given stretch of time, like the Second World War or the summer holidays, for example. For is much more common than during, and it is used in a wider variety of ways.

Write down a sentence of your own for each word.

Now we will see what the concordancer can find.


Note: Have the BASIC INSTRUCTIONS sheet with you, so that you can follow the steps.

  1. Load the program.
  2.  Type in: during\for as the search words.
  3. Hit RETURN.
  4. You see the texts that the program is sorting through, and a running total of the number of examples it has found. When the total is 100, hit the Esc key.
  5.  You see at the bottom of the screen a report on how many examples it found, and their frequency. Hit RETURN.
  6.  You see the examples of the 2 words in the middle of lines of text. They are sorted with 1st Right as first priority, and Search Word as second priority.
  7. Use the arrow keys to look through the examples.
  8. Use the arrow keys to go to the examples of during.

QUESTION: What words occur after during? Are there any examples that surprise you?   Write down some examples of words that come after during.

9. Now look at the examples of for. There are a lot, and the word is used in different ways.

QUESTION: How many of the examples refer to how long something lasts? Write down 5 examples.

QUESTION: Can you identify other ways that for is used? Try to find different categories. Write down 5 sentences that interest you.

QUESTION: What would you add to the explanation at the beginning of the exercise?

Well, there it is. I won’t bother to comment on the worksheet, which has many weaknesses, save to note that it attempts to engage learners in an exploration, rather than simply telling them “the answer”.

For the moment I urge you to get a copy of Quick Cups of COCA, after which I hope you’ll talk about Mura’s work here, at his blog, and to all those who care about ELT.

Summer Reading


Dan Brown doesn’t do it for you?  Jeremy Harmer’s greatest hits leave you unquenched? Try these:

Best Fiction of 2016 so far


Julian Barnes: The Noise of Time.

A real tour de force by Barnes, who does a fine job of transmitting the true horror of Stalinist Russia’s denial of free expression, the awful results of the absolute and fickle control of philistines over culture, the constant fear under which everybody lived their lives. This is a very powerful book, “a condensed masterpiece that traces the lifelong battle of one man’s conscience, one man’s art, with the insupportable exigencies of totalitarianism” as Guardian critic Alex Preston says in his review. Not the most relaxing poolside read, not easy, not light, but it’s a compelling story and it left me with a rekindled fear of totalitarian regimes and a grudging gratitude for living in the West.

Best Fiction I’ve read in 2016 so far


Edward St Aubyn: The Melrose Novels. I don’t know why it took me so long to find these 5 novels, but I’m so glad that I finally had the chance to enjoy them. From the opening lines of the first book – Never Mind – to the last lines of book 5 – At Last – St Aubyn dazzles with his quite extraordinary writing. He tells a harrowing tale, but he tells it with verve, sparkle, wit and honesty; he doesn’t flinch, he doesn’t hold back and there’s not a trace of bathos or self-pity. I don’t think I’ve ever been so immediately impressed by a novelist’s style. The 5 books rip along – you can read the lot in a week. The story is awful, starting with how he was repeatedly raped by his father. It’s frightening, it’s magnificent, it’s funny, it’s appalling, it’s heroic, it’s witty, it’s tragic, it’s inspiring. St Aubyn says that writing these books saved his life and it’s obvious that they’re cathartic. You have to read them all, but if you only have time for one, then I recommend Mother’s Milk. If you think Sylvia Plath was scathing about her dad, read what St Aubyn has to say about his mum – the mum who did nothing to protect him from his dad’s abuse.

Best Non-Fiction books of 2016 so far


Yanis Varoufakis: And The Weak Suffer What They Must?

The former Greek Finance Minister takes us on a compelling ride through the eurozone, from post-Second World War attempts at recovery to the inevitable collapse in 2008 and beyond. This is a fresh, persuasive narrative which argues that “the weakest citizens of the weakest nations have paid the price for bankers’ mistakes” and that “the principle of the greatest austerity for those suffering the greatest recessions has led to a resurgence of racist extremism.” Well-written and well-informed, with perhaps just a tad too much reliance on fiscal and monetary shenanigans to explain the fundamental flaws in the EU, this is a real poolside page-turner; no really: it is.


Michael Greger: How Not To Die.

The best guide to healthy eating ever written. All the top causes of premature death – heart disease, various cancers, diabetes and many more – can be beaten by “nutritional and lifestyle interventions”. Well, I’ll grant you that that isn’t the best phrase ever written, but the book is wonderfully clear and very practical. We really must stop eating red meat and processed food. Unprocessed plant foods – beans, berries, other fruits, cruciferous vegetables, greens, other veg., flaxseeds, nuts, spices, whole grains – plus lots of teas and water, are what you need. Greger argues his case very forcefully, but he’s not a zealot. Here’s a sample:

Whenever I’m asked whether a certain food is healthy or not, I reply “Compared to what?” For example, are eggs healthy? Compared to oatmeal, definitely not. But compared to the sausage links next to them on the breakfast platter? Yes.  

Best New Book on SLA so far in 2016


Stefano Rastelli: Discontinuity in SLA.

I’ve already given Mike Long’s review of this book, so suffice it to say that it’s a must read. Tired of bullshit from the likes of Larsen-Freeman? Read this. Stefano will deliver a paper on Intra language at the upcoming SLRF conference in September. Stand by! If you’re poolside, get in the shade, put down that drink and read Rastelli’s book. It’s invigorating. Mike Long has already questioned bits of it, and I await the verdicts of Kevin Gregg, Nick Ellis, Peter Robinson, William O’Grady and others. I wonder what Scott Thornbury will make of it.

Best Book on SLA I’ve read in 2016 so far


William O’Grady: How Children Learn Language. Kevin Gregg chastised me for not having already read this book. It’s superb. The clarity of O’Grady’s writing is supreme, and the force of his argument is daunting. All those who fumble and stumble in their criticisms of Chomsky’s UG should read O’Grady’s splendid work. It’s one of the best books on language learning I’ve ever read. It’s accessible, it’s persuasive, it’s a model of coherence and cohesion. It should, in my opinion, be required reading on any ELT course.

Best Book on ELT so far in 2016


Brian Tomlinson (ed): SLA Research and Materials Development for Language Learning. I’m a bit wary about recommending this book because I haven’t finished reading it, but it looks good. It has Tomlinson’s hand all over it, and it’s uneven, but still, it has some good chapters in it, including some that slam the use of coursebooks and give a much more considered view of how lexical chunks should be dealt with than that provided by the usual suspects, who give so little evidence of scholarship.

And if you don’t like the sound of any of the above books, may I recommend Thomas Pynchon’s V – the best novel I’ve ever read.


Have a great summer.

Rastelli’s Discontinuity Hypothesis: a new challenge for SLA researchers


Mike Long’s review of Stefano Rastelli’s Discontinuity in Second Language Acquisition: The Switch between Statistical and Grammatical Learning (Multilingual Matters, 2014) appeared recently in the Applied Linguistics journal’s online Advance Access. Here’s a brief summary of the review. I’ve taken gross liberties cutting Long’s text, but that’s about all I’ve done: some of it appears below verbatim and the rest is as Long wrote it, but with big bits lopped off. I share Long’s view that this is an important book which deserves our attention, but additionally I personally think that it highlights the weaknesses of attempts made by Larsen-Freeman, Thornbury, Hoey and others to use a garbled version of emergentism to support their views. Rastelli’s hypothesis represents the beginnings of a research programme that could pose a real challenge to Processability Theory, which is the theory most often adopted currently in attempts to explain SLA.

Rastelli’s book is part of the growing research interest in the potential of statistical learning and usage-based accounts of SLA by adults. The general idea is that learners can detect absolute frequencies, probabilistic patterns, and co-occurrences of items in the linguistic environment, and use the resulting information to bootstrap their way into the L2. Statistical learning (SL) is a general learning theory which relies on the construct of a domain-general capacity that operates incidentally, results in implicit knowledge, and functions for all linguistic sub-systems, from phonology, through word learning, morphology and syntax, to pragmatics.

Long says that Stefano Rastelli’s book (henceforth, Discontinuity) is remarkable for 3 things:

  1. its coherence.
  2. The breadth and depth of Rastelli’s knowledge of current theory and research in linguistics, cognitive psychology, neurolinguistics and SLA, and his ability to synthesize and integrate work from all four.
  3. The originality of his perspective.

Rastelli claims that SL is the initial way learners handle combinatorial grammar, i.e., regular co-occurrence relationships between audible or visible forms that are overt in the input and the meanings and functions of those forms. Because they are audible or visible and regular, the patterns are frequency driven and countable, which is what SL requires to operate. Combinatorial grammar comprises recurrent combinations of adjacent and non-adjacent whole words and morphemes. The form-function pairs can be stored and retrieved first as wholes, and then broken down into their component parts in order to be computed by abstract rules.

Combinatorial grammar is learned twice, Rastelli claims, first by SL, and then by grammatical learning (GL). This is the meaning of ‘discontinuity’ in his hypothesis. SL prepares the ground for GL: “Statistics provides the L2 grammar the ‘environment’ to grow and develop” (2014: 220). SL involves first a computation over transition probabilities and subsequently bottom-up category formation; GL is achieved through computation over symbolic abstract rules and top-down category formation. GL happens when learners recognize (implicitly) not just regularities in the ways certain words co-occur, but why they co-occur. At that point, they can move beyond statistically based patterns and induce productive combinatorial rules. They can abstract away from particular exemplars that contain regular markings for number, tense, case, etc., now understanding (implicitly, again) that these properties can be applied to new exemplars.
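The ‘computation over transition probabilities’ that SL is claimed to perform can be illustrated with a toy bigram model – my sketch, not Rastelli’s, with an invented mini-corpus:

```python
from collections import Counter, defaultdict

def transition_probabilities(tokens):
    """Estimate P(next word | current word) from bigram counts – the kind of
    distributional statistic statistical learning is hypothesised to track."""
    bigrams = Counter(zip(tokens, tokens[1:]))
    totals = Counter(tokens[:-1])
    probs = defaultdict(dict)
    for (a, b), n in bigrams.items():
        probs[a][b] = n / totals[a]
    return probs

toy = "elena è arrivata ma elena è partita".split()
p = transition_probabilities(toy)
print(p["elena"]["è"])     # 1.0 -- 'elena' is always followed by 'è'
print(p["è"]["arrivata"])  # 0.5
```

On this view, a learner who has tracked enough such statistics can come to expect an auxiliary after ‘Elena’, the kind of prediction over overt co-occurring items which, Rastelli argues, SL supports for combinatorial grammar but cannot supply for absent items.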

The shift to GL is an abrupt, qualitative change — a rupture, not simply the next stage in a single continuous developmental process. This is one of several places where Rastelli departs from received wisdom in the field. He likens the SLA process to learning to swim, ride a bicycle or ski: progress is initially slow, tentative and uneven, with many failures, not a gradual succession of gradient states, until suddenly, the child (or adult) can swim, ride or ski unaided. This, he claims, is because SLA is quantized. Learners need to encounter a statistically critical number of instances of a form or structure. Once that threshold is crossed, they are able to perceive regularities in the features they share and to conceptualize the motivation behind those regularities, in order to apply a rule over novel instances. The formation of grammatical categories is what triggers discontinuity — sudden quantum leaps from SL to GL.

Crucially, the new grammatical representations do not displace previously acquired statistical rule(s). Rather, the sudden shift to GL is marked by gemination: dual statistical and grammatical representation of an item or structure at two cognitive levels in underlying competence. The two learning processes, SL and GL, and the two mental representations for the same L2 phenomena, statistical rules and grammatical categories, continue to exist side by side.

The continued co-existence of SL and GL has at least two possible neurophysiological explanations. First, implicit and explicit knowledge of the same item coexist, remain independent, and can be accessed independently by speakers (Paradis 2009: 15). Second, although independent, declarative and procedural memory compete and cooperate with one another across a learner’s lifespan. Some parts of the temporal lobe serve as a repository for already proceduralised knowledge, while some areas of the prefrontal cortex are activated when knowledge stored in declarative memory is selected and retrieved. There is also evidence of a direct anatomical connection between the medial temporal lobe and the striatum, that is, the caudate nucleus and putamen in the basal ganglia (Poldrack and Packard 2003: 4), which, says Rastelli, is why Ullman and colleagues believe L2 acquirers can learn the same items by exploiting the resources of both declarative and procedural memory.

The use of ‘quantum’ and ‘quantized’ is deliberate. Rastelli notes that the idea of abrupt discontinuity in SLA parallels the trajectory identified for many phenomena in the natural sciences, and above all in quantum physics and quantum probability theory. A classic example is the finding in quantum physics that electrons do not change their orbit around a nucleus gradually along a continuous gradient-like energy scale with change in proportion to increased energy, but instead ‘jump’ from one energy level to another at the precise moment that the energy supplied is sufficient to reach the threshold required to trigger the change. In just the same way, SLA is quantized; there is no straightforward relationship between increased L2 exposure and L2 development.

So much for combinatorial grammar. Non-combinatorial grammar, in contrast, pertains to invisible features, such as null subjects, filler-gap dependencies, and island constraints on wh- extraction, and phenomena at the discourse-syntax and syntax-semantics interfaces. This means there is nothing overt in the input to combine, and frequency is therefore irrelevant. SL is no use here because learners cannot categorize over absences (empty categories or displaced items). Such items are computed and represented only mentally. Thus, non-combinatorial grammar cannot be acquired via SL. Rastelli predicts, for example, that adult learners of Italian will have more trouble with null subjects than with auxiliaries in compound tenses, not due to differences in their frequency, but because SL can support the procedure for concatenation of co-occurring items (auxiliaries and main verbs), but not for computation of absent items (missing subject pronouns). In the sentence Elena è arrivata (Elena is arrived), è + arrivata is a chunk that may consolidate in a learner’s memory over time and eventually constitute the basis for a productive rule for auxiliary selection. Conversely, the absent pronoun in Elena è arrivata ma _ non ha parlato (Elena arrived but [she] did not talk) provides nothing the learner can remember and re-use in similar situations. SL allows the need for some form of the auxiliary verb ‘to be’ eventually to become predictable every time ‘arrived’ appears (and later, other verbs of movement), whereas the presence or absence of a subject pronoun cannot be predicted and must be computed each time. Gemination will occur in the former case, but not in the latter, when GL alone will be pressed into service. If the non-/combinatorial distinction turns out to be valid, Rastelli suggests, it is presumably one of the reasons missing features are problematic and often never acquired by some adult L2ers. Instead of SL, non-combinatorial grammar must be handled by GL, and the capacity for GL differs at the individual level and is more subject than SL to age-effects.

After a discussion of Rastelli’s position on age effects, Long moves to the differences between Rastelli’s hypothesis and other theories of SLA. Rastelli notes how ‘discontinuity’ differentiates his position from that of ‘continuity’ theories, such as Processability Theory, the norm in most SLA theorizing. As should be clear by now, he rejects the notion that L2 development is continuous, a series of incremental shifts (developmental stages) as a result of increased exposure to L2 input, without fractures or leaps:

The core idea of discontinuity is that the process of adult acquisition of L2 grammar is not uniform and incremental but differentiated and redundant. To learn a second language, adults apply two different procedures to the same linguistic materials: redundancy means that the same language items may happen to be learned twice.  (2014: 5)

The SL/GL distinction is qualitative (neurophysiological) in nature. It is not a matter of converting explicit to implicit knowledge (for Rastelli, implicit learning takes precedence, after all), so not a question of automatization of what started life as declarative knowledge, as in Skill Acquisition Theory, and not amenable, therefore, to the use of such measures as processing speed or reaction times. Discontinuity differs from restructuring in that the qualitative shift is not from non-productive to productive use of chunks via practice, but between two neurophysiologically distinct ways of learning that target two different parts of grammar. It shares ground with Ullman’s Declarative/Procedural Model (DPM) but, as Rastelli shows through a detailed comparison, differs in important ways: the discontinuity hypothesis again focuses on two kinds of learning processes rather than two kinds of learning products – the lexicon and the grammar (differentiating between which is in any case far from straightforward) – and holds that some L2 grammatical items are learned statistically before they are learned grammatically. Rastelli also discusses the relevance of work in theoretical linguistics by Berwick, Yang, Roeper, O’Grady, Chomsky, Pinker, Grodzinsky, Hawkins, Tsimpli and others, the discontinuity hypothesis being shown to constitute a ‘semi-modular’ position in which categorical grammar relies on innate principles, while probabilistic grammars can be learned from positive evidence alone. Work of SLA scholars considered includes that of Bley-Vroman, N. Ellis, Wray, Sorace, Pienemann, Sharwood-Smith, Paradis, Slabakova, White, Ullman, Montrul, Robinson, Newport, and Williams.

Despite its broad scope and the obvious interest in similarities and differences between his own position and that of other theorists, Rastelli denies that Discontinuity offers a new theory of SLA:

Crucially, the word ‘theory’ is avoided purposely in this book . . . Basically, there cannot be a theory of discontinuity yet because the evidence provided so far can be interpreted in different ways . . . An expression such as ‘discontinuity hypothesis’ better conveys the image of the embryonic stage of a prospective theory of discontinuity. (2014: 6)

Nevertheless, the hypothesis he proposes is unquestionably innovative, and likely to motivate several new lines of empirical work. It will probably be regarded as (healthily) controversial in some quarters, but is without doubt an exceptionally interesting and intellectually refreshing contribution to the current SLA literature.



Abrahamsson, N. and K. Hyltenstam. 2009. ‘Age of onset and nativelikeness in a second language: Listener perception versus linguistic scrutiny,’ Language Learning 59: 249-306.

Aslin, R. N. and E. L. Newport. 2012. ‘Statistical learning: From acquiring specific items to forming general rules,’ Psychological Science 21/3: 170-76.

Aslin, R. N. and E. L. Newport. 2014. ‘Distributional language learning: Mechanisms and models of category formation,’ Language Learning 64/1: 86-105.

Berwick, R. C. 1997. ‘Syntax facit saltum: Computation and the genotype and phenotype of language,’ Journal of Neurolinguistics 10/2-3: 231-49.

DeKeyser, R. M. 2000. ‘The robustness of critical period effects in second language acquisition,’ Studies in Second Language Acquisition 22/4: 499-533.

Ellis, N. C. 2002. ‘Frequency effects in language acquisition: A review with implications for theories of implicit and explicit language acquisition,’ Studies in Second Language Acquisition 24/1: 143-88.

Ellis, N. C. 2006. ‘Language acquisition as rational contingency learning,’ Applied Linguistics 27/1: 1-24.

Ellis, N. C. 2009. ‘Optimizing the input: Frequency and sampling in usage-based and form-focused learning’ in M. H. Long and C. J. Doughty (eds): The Handbook of Language Teaching. Blackwell, pp. 139-58.

Ellis, N. C. and S. Wulff. 2015. ‘Usage-based approaches to SLA’ in B. VanPatten and J. Williams (eds): Theories in Second Language Acquisition: An Introduction. 2nd edition. Lawrence Erlbaum, pp. 75-93.

Granena, G. and M. H. Long. 2013. ‘Age of onset, length of residence, language aptitude, and ultimate L2 attainment in three linguistic domains,’ Second Language Research 29/3: 311-43.

Hamrick, P. 2014. ‘A role for chunk formation in statistical learning of second language syntax,’ Language Learning 64/2: 247-78.

Hilles, S. 1986. ‘Interlanguage and the pro-drop parameter,’ Second Language Research 2/1: 33-51.

Janacsek, K., J. Fiser, and D. Nemeth. 2012. ‘The best time to acquire new skills: age-related differences in implicit sequence learning across the human lifespan,’ Developmental Science 15/4: 496-505.

Munnich, E. and B. Landau. 2010. ‘Developmental decline in the acquisition of spatial language,’ Language Learning and Development 6/1: 32-59.

Nemeth, D., K. Janacsek, and J. Fiser. 2013. ‘Age-dependent and coordinated shift in performance between implicit and explicit skill learning,’ Frontiers in Computational Neuroscience 7/147: 1-13.

Osterhout, L., A. Poliakov, K. Inoue, J. McLaughlin, G. Valentine, L. Pitkanen, C. Frenck-Mestre, and J. Hirschensohn. 2008. ‘Second-language learning and changes in the brain,’ Journal of Neurolinguistics 21: 509-21.

Paradis, M. 2009. Declarative and Procedural Determinants of Second Languages. John Benjamins.

Poldrack, R. A. and M. G. Packard. 2003. ‘Competition among multiple memory systems: Converging evidence from animal and human brain studies,’ Neuropsychologia 1497: 1-7.

Rebuschat, P. (ed). 2015. Implicit and Explicit Learning of Languages. John Benjamins.

Rebuschat, P. and J. N. Williams (eds). 2012. Statistical Learning and Language Acquisition. Walter de Gruyter.

Robinson, P. and N. C. Ellis (eds). 2008. Handbook of Cognitive Linguistics and Second Language Acquisition. Routledge.

Saffran, J. R. 2003. ‘Statistical language learning: Mechanisms and constraints,’ Current Directions in Psychological Science 12: 110-14.

Saffran, J. R., E. L. Newport, and R. N. Aslin. 1996. ‘Word segmentation: The role of distributional cues,’ Journal of Memory and Language 35: 606-21.

Spadaro, K. 2013. ‘Maturational constraints on lexical acquisition in a second language’ in G. Granena and M. H. Long (eds): Sensitive Periods, Language Aptitudes, and Ultimate L2 Attainment. John Benjamins, pp. 43-68.

Tanner, D., K. Inoue, and L. Osterhout. 2014. ‘Brain-based individual differences in on-line L2 grammatical comprehension,’ Bilingualism: Language and Cognition 17: 277-93.

Williams, J. N. 2009. ‘Implicit learning’ in W. C. Ritchie and T. K. Bhatia (eds): The New Handbook of Second Language Acquisition. Emerald Group Publishing, pp. 319-53.


Harmer on Brexit: Version 2


Harmer’s response on Facebook to the UK referendum result blames “the sclerotic elderly” and “an angry working class” for what he considers to be a catastrophic decision. He predicts it will soon result in “prime minister Farage, President Le Pen and a right-wing surge across the continent with a rise in racist violence and the gradual growth of intolerance and misunderstanding.” Harmer expresses “a profound loathing for the people who have led this despicable isolationist and backward-looking movement.” While these loathsome people hail “a new dawn”, Harmer sees “nothing but a great darkness settle over the land.”

Nearly 5,000 people “Liked” Harmer’s text, and I find it curious that while nobody raises any objection to Harmer’s declaration of “a profound loathing” for the leaders of the “Leave” campaign, many strongly object to my repeated criticisms of the style and content of Harmer’s writing. That aside, I suggest that Harmer’s “Apology to Europe” is over-emotional and badly argued. It’s highly unlikely that Farage will become UK prime minister, or that Le Pen will become President of France. If there is “a right-wing surge across the continent”, etc., etc., it won’t be the fault of those in the UK who voted “Leave”, but rather of the racists themselves and of the economic conditions which provide fertile ground for the spread of such beliefs. There has been a spike in race-hate complaints since 23rd June, no doubt because racists feel emboldened, so there is some justification for fears of the right-wing surge which Harmer predicts. I can understand why so many ordinary people are very upset by the result, and I think there are certainly reasons to be worried about what happens next. But we don’t know what will happen, and in my opinion Harmer’s reaction is simplistic, unreasonable and unhelpful.

It’s also worth pointing out that not everybody who voted “Leave” was racist, or old, or working class – many wanted to leave the EU because there are a great many things wrong with its institutions, and because the policies carried out by the unelected EU Commission and the Council of Ministers, with little control from the European Parliament, have caused a great deal of hardship. The Common Agricultural Policy, at one point responsible for 60% of the total EU budget, was for decades a wasteful disaster which did much to damage good farming practices. The budget deficit limits imposed by the 1992 Maastricht Treaty triggered a wave of unemployment and welfare cuts across the continent. After that, the financial sector was increasingly de-regulated, and, under increasing pressure from Germany and France, the euro was introduced as a common currency, making it impossible for weaker members to use their own currency as a tool to manage their economic affairs.

During the first decade of monetary union, weaker European economies were subjected to a wave of cheap credit from banks in the most powerful states. When the global crisis erupted, banking bailouts, rising social spending and sharp declines in tax revenue sparked a debt crisis in countries such as Greece, Portugal and Spain. The EU Commission responded by imposing severe austerity on Greece and doing everything possible to bring down the Syriza government, a demonstration of the Troika’s determination to maintain a system of austerity across the region. The recently passed European Fiscal Compact further limits state spending across the euro zone.

So the EU is a deeply undemocratic organisation that promotes and protects the interests of its members’ ruling classes. On the other hand, there’s no doubt that the UK “Leave” campaign was fuelled by ugly racism and absurd “Little Englander” propaganda, and there’s no obvious reason to think that things will be better in the UK or anywhere else as a result of the decision of the UK to leave.

Rather than react as Harmer has done, we should surely concentrate on promoting grass-roots democratic organisations that fight for people’s rights wherever they are. The gap between rich and poor is widening, and there’s little reason to believe that if the UK had remained in the EU things would have improved for most of its inhabitants. In the UK today, 63 per cent of poor children grow up in families where one member is working. More than 600,000 residents of Manchester are “experiencing the effects of extreme poverty” and 1.6 million are slipping into penury. The situation in other EU countries is even worse, and the inability of EU members to use their own separate currencies as a way of dealing with economic problems, coupled with the policy of austerity imposed by the Troika, makes it likely that things will get worse before they get better. What’s done is done; however regrettable you might think it is, I suggest that it can be seen not just as a worrying threat, but also as an opportunity.


I apologise to those who wrote the works below for not citing them properly in my text.

Observations on Brexit

The left wing case for quitting  the EU

John Pilger: Why the British said no to Europe