Hi,

This blog is dedicated to criticism. It offers

  1. Critical suggestions and resources for those doing postgraduate courses in teaching English as a foreign language.
  2. A critical appraisal of what’s happening in the world of English Language Teaching.

The commercialisation of the ELT industry (estimated to be worth more than $20 billion) and the corresponding weakening of genuinely educational concerns mean that today most teachers are forced to teach in a way that shows scant regard for their worth, their training, their opinions, their job satisfaction, or the use of appropriate methods and materials. The biggest single reason for this sorry state of affairs, and the biggest single obstacle to good ELT, is the coursebook.

Using a coursebook entails teachers leading students through successive units of a book. Each unit concentrates on a certain topic where isolated bits of grammar and vocabulary are dealt with, on the assumption that students will learn them in the order in which they’re presented. Such an approach to ELT flies in the face of research which suggests that SLA is a process whereby the learner’s interlanguage (a dynamic, idiosyncratic, evolving linguistic system approximating to the target language) develops as a result of communicating in the target language, and is impervious to attempts to impose the sequences found in coursebooks.

The publishing companies that produce coursebooks spend enormous sums of money on marketing, aimed at persuading stakeholders that coursebooks represent the best practical way to manage ELT. As an example, key players in the British ELT establishment – the British Council, the Cambridge Examination Boards, and the Cambridge CELTA and DELTA teacher training bodies among them – accept the coursebook as central to ELT practice. Worse still, TESOL and IATEFL, bodies that are supposed to represent teachers’ interests, have also succumbed to the influence of the big publishers, as their annual conferences make clear. So the coursebook rules, at the expense of teachers, of good educational practice, and of language learners.

By critically assessing the published views of those in the ELT establishment who promote coursebook-driven ELT, this blog hopes to lend support to those who fight for a less commercial, less centralised, more egalitarian, more learner-centred approach to ELT.

Just Say No

It’s started: the Silly Season, the suffocatingly anodyne, unchallenging, leave-your-brain-at-the-door ELT Conference Season.

These conferences ignore the increasing commodification of the education industry.

  • Teachers are seen as the deliverers of courses which reduce education to the learning of a pre-determined list of discrete units of testable ‘knowledge’.
  • This ‘knowledge’ is delivered by coursebooks with minimal mediation from de-skilled teachers, turning learners into consumers.
  • The ‘knowledge’ is tested by a raft of high-stakes exams which reify that ‘knowledge’.
  • Most learners of English as an L2 fail the exams they take.
  • Most teachers of English as an L2 have no job security, no proper contracts, no professional development provisions, no pension plans, bad pay, and bad working conditions.

Against this backdrop, once again we’re about to witness the sorry spectacle of the usual suspects strutting their stuff, spinning their tired recipes, while ignoring the failure of ELT as an industry to address the problems alluded to above.

The conferences will be sponsored by those who make huge profits from the ongoing provision of shoddy goods. These providers – the big publishing companies, the British Council, the Cambridge Clan who control proficiency exams and teaching certification – crucially influence the conference agenda: they make sure that the plenary speakers talk safely within carefully defined limits, and they sponsor speakers to promote their products. It’s very unlikely that anything challenging will be said, and if it is, it will be said in a back room. Just about the only chance of any controversial stuff being widely aired is that one of these back-room talks will be picked up by social media, without any support from the organisers.

Judging from questionnaire data, teachers feel that the best part of conference attendance is the chance to share time with their colleagues: get together, swap ideas, tell stories, support each other, wine, dine, dance and feel good together. But why depend on conferences like IATEFL and TESOL to do these things? Why should we continue to go to these events, where we’re lectured by highly paid clowns pushing dodgy goods, and where no serious attempt is made to address the issues that need talking about? I’m labouring the point now, but why should thousands of teachers pay to listen to Jeremy Harmer promoting the Pearson Academic English exam by describing how he learned to play the tuba, rather than pay a tenth of the price to share in a conversation with someone local who can talk knowledgeably about language testing?

Why support these commercial events with their shows and shoddy spectacles that reflect the commodification of education, instead of organising our own events?

Hundreds of anecdotes about “What I got from a conference” should not deflect attention from this simple argument: the aim of the big ELT conferences is to promote coursebook-driven ELT and to encourage the mistaken view that there’s nothing fundamentally wrong with the way things are going. Locally organised events are a credible, better alternative.

Encounters with Noticing Part 2

If I’d actually drunk a bottle of tequila while trying to understand Schmidt’s Noticing Hypothesis last Tuesday, I would have woken up with a hangover, and these days the hangovers are so bad that I just can’t face them. So when I woke up the morning after, all was well; my surroundings were familiar, my wife was with me, there was nothing to make amends for. Reassuring, of course, but I confess to feeling nostalgia for my younger days. There’s nothing quite like the fun you have drinking; the Devil has all the best songs, they say, and I bet Hades had all the best cocktails. Easy to imagine getting the ferry across the Acheron, sitting around the lounge bar waiting to see where you were going (probably not to the Elysian Fields!), banging back dry martinis with funny people like W.C. Fields (“I cook with wine. Sometimes I even add it to the food”) and Tommy Cooper (“I’m on a whisky diet. I’ve lost three days already”), grateful that you’d never been a mere sober mortal.

Downstairs, I made a nice big mug of tea and took it to the study. There on the desk and on the monitor was all this stuff about the Noticing Hypothesis. Not just Schmidt versus Truscott, and Gregg versus Krashen, and all the other SLA feuds, but also the famous Locke versus Leibniz debate and the equally famous Aristotle versus Plato debate about more or less the same thing. Aristotle wasn’t quite an empiricist, but certainly got the better of Plato on epistemology, while Leibniz is generally regarded as coming out on top against Locke. The Leibniz-Locke debate especially still seems relevant today in the light of the latest challenge to nativist views on language learning, and I think Leibniz might have had some harsh words to say about the blurred lines between awareness, attention and consciousness in Schmidt’s attempts to develop the Noticing Hypothesis.

Just to reassure those who might be unduly swayed by the likes of Penny Ur (and Scott Thornbury on a bad day) into thinking that they shouldn’t worry their heads with all this theoretical stuff (just trust your instincts and polish your presentation skills), my motivation for sniffing around this particular theoretical stuff is to check on the foundations of our teaching. It’s a terrible job, the pay’s lousy, but somebody’s got to do it, right? Somebody’s got to check, that is, to see whether ‘noticing’ justifies all the explicit teaching done in its name. I suspect that the influential teacher trainers who rely on ‘noticing’ to justify their encouragement of everything from teaching a grammar-based syllabus to teaching as many lexical chunks as you can cram into a 90-minute class are talking baloney, and it should be made clear that their advice gets no support from any good research. On the face of it, ‘noticing’ encourages bad teaching practice, and so needs to be carefully examined.

So here we go with Part 2. I left Part 1 face down on the carpet, exhausted by unsuccessful efforts to understand the Noticing Hypothesis. In the comments that followed, one particular problem was highlighted by Kevin Gregg, who said:

You can’t notice what is not in the input; and rules, for instance, or functions, are not in the input.

This prompted Thom to ask:

In what other way can anybody learn grammar if it is not by way of input?

Kevin’s ongoing tussle with time (trains to catch, letters to write, shopping to do) prevented him from replying, so I’ll try.

Well, it depends where you’re coming from, as they say. Empiricists, or rather “‘empiricist’ emergentists” as Gregg calls them, would say that input is the sufficient condition for learning an L2, and they’d probably caution against listening to any talk of mental grammars. Empiricists like Nick Ellis see all knowledge as coming from the information we get through our senses during our interaction with the environment, and with reference to language learning, the emergentists argue that we aren’t born with linguistic knowledge of any sort because we don’t need it. General learning devices (capable of making generalisations based on exemplars found in the input, for example) are all we need. In Nick Ellis’ words:

massively parallel systems of artificial neurons use simple learning processes to statistically abstract information from masses of input data. What evidence is there in the input stream from which simple learning mechanisms might abstract generalizations? The Saussurean linguistic sign as a set of mappings between phonological forms and conceptual meanings or communicative intentions gives a starting point. Learning to understand a language involves parsing the speech stream into chunks which reliably mark meaning. 

…  in the first instance, important aspects of language learning must concern the learning of phonological forms and the analysis of phonological sequences: the categorical units of speech perception, their particular sequences in particular words and their general sequential probabilities in the language…. 

In this view, phonology, lexis and syntax develop hierarchically by repeated cycles of differentiation and integration of chunks of sequences.
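To make the emergentist picture a bit more concrete, here’s a minimal sketch of the kind of statistical abstraction Ellis is appealing to. To be clear, this is my own toy illustration, not Ellis’ model: the miniature ‘speech stream’ of three made-up words and the 0.9 threshold are invented for the purpose. It segments an unbroken stream of syllables into chunks using nothing but the transitional probabilities between adjacent syllables, in the spirit of the infant studies of Saffran et al.

```python
# Toy illustration (mine, not Ellis') of chunking by statistical learning:
# segment a continuous syllable stream by positing a boundary wherever the
# transitional probability between adjacent syllables dips. The stream and
# the 0.9 threshold are invented for the example.
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) for every adjacent pair in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(syllables, tps, threshold=0.9):
    """Posit a chunk boundary wherever transitional probability is low."""
    chunks, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:     # low TP -> likely a 'word' boundary
            chunks.append("".join(current))
            current = []
        current.append(b)
    chunks.append("".join(current))
    return chunks

# Three made-up 'words' (bidaku, padoti, golabu) repeated in varying order
# with no pauses: within-word TPs are 1.0, between-word TPs are at most 2/3.
stream = ("bi da ku pa do ti go la bu pa do ti bi da ku "
          "go la bu go la bu bi da ku pa do ti").split()

print(segment(stream, transitional_probabilities(stream)))
# -> ['bidaku', 'padoti', 'golabu', 'padoti', 'bidaku', ...]
```

The point of the toy is just that the ‘words’ are recoverable from distributional information alone – within-chunk transitions are perfectly predictable, between-chunk transitions aren’t – which is, of course, a long way short of showing that rules, categories or meanings can be got at the same way.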

On the other hand, nativists like Kevin Gregg, especially those who accept Chomsky’s principles and parameters UG theory, point to the knowledge young children have of language to argue that SLA is the result of an innate representational system specific to the language faculty acting on input in such a way that an L2 grammar is created. We are born with knowledge of various linguistic rules, constraints and principles. In interaction with the environment, which exposes us to ‘primary linguistic data’, we acquire a new, expanded body of linguistic knowledge, namely, knowledge of a specific language like English. This final state of the language faculty constitutes our ‘linguistic competence’, essential, but not sufficient, for our ability to speak and understand a language. Additional knowledge about actual language use is acquired through other general learning mechanisms.

Whatever view we take of the SLA process, the question of how it starts (input) is obviously critical, but revisiting Schmidt’s Noticing Hypothesis has led me to appreciate that the question of how it ends up is equally important. What finally gets acquired? To answer this question we need what Gregg calls a “property” theory of SLA – a theory of language, or, more precisely, of linguistic knowledge of the L2. What is the knowledge that is acquired when someone learns a second language? O’Grady (2005) notes that while the UG camp talk about problems sorting out categories and structures, the emergentists talk about sorting out words and their meanings, and this leads him to suggest that the disagreement about how we learn an L2 stems from a deeper disagreement about “the nature of language itself”. O’Grady (2005, p. 164) explains:

On the one hand, there are linguists who see language as a highly complex formal system that is best described by abstract rules that have no counterparts in other areas of cognition. …. Not surprisingly, there is a strong tendency for these researchers to favor the view that the acquisition device is designed specifically for language. On the other hand, there are many linguists who think that language has to be understood in terms of its communicative function. According to these researchers, strategies that facilitate communication – not abstract formal rules – determine how language works. Because communication involves many different types of considerations … this perspective tends to be associated with a bias toward a multipurpose acquisition device.

This excellent comment is echoed by Susanne Carroll (2001, p. 47), who distinguishes between

  • Classical structural theories of information processing which claim that mental processes are sensitive to structural distinctions encoded in mental representations. Input is a mental representation which has structure.
  • Classical connectionist approaches to linguistic cognition which deny the relevance of structural representations to linguistic cognition. For them, linguistic knowledge is encoded as activated neural nets and is only linked to acoustic events by association.

Carroll comments:

Anyone who is convinced that the last 100 years of linguistic research demonstrate that linguistic cognition is structure dependent – and not merely patterned – cannot adopt a classical connectionist approach to SLA.

O’Grady’s and Carroll’s remarks remind me that the majority of scholars who are currently looking closely at how input ends up as knowledge don’t articulate a coherent answer to the crucial question: “What is the linguistic knowledge that is acquired?”. Many years ago, I myself made some effort to kick this question into the long grass. Gregg’s repeated insistence on the need for a property theory of SLA which describes what is acquired prompted me to say, in a book and in an article for Applied Linguistics, that researchers could perfectly well get on with developing a theory of SLA without worrying about the damn property theory. In a short reply (I think he had a bus to catch that time), Gregg effortlessly dealt with my bleatings (the bus and, I like to think, our friendship saved me from the full Gregg treatment) and I’m now fully persuaded that he’s right to demand a property theory.

I think it’s the absence of a well-articulated property theory that makes it so difficult for Schmidt and others to explain how information from the environment ends up as linguistic knowledge of the L2. They accept that the knowledge acquired includes linguistic knowledge of, for example, the structure of an English verb phrase, and they insist that learning this knowledge depends on ‘noticing’ things in the input. But how, we must ask again, does ‘noticing’ audio stimuli from the environment lead to the acquisition of the linguistic knowledge demonstrated by proficient L2 users? Let’s take a quick look at the history of SLA research.

The shift from a behaviouristic to a mentalist view of language learning (sparked by Chomsky’s 1959 rebuttal of Skinner) prompted scholars in the field of psycholinguistics to see language learning as a process which goes on inside the brain and involves the workings of some kind of acquisition device. This as yet unobservable “black box” is almost certainly not located in one particular part of the brain, might or might not be dedicated exclusively to language learning, might or might not make use of innate linguistic knowledge, but certainly does (somehow) enable us to receive, organise, store, retrieve and manipulate input so as to facilitate learning the L2.

And there it is: ‘input’. The Merriam-Webster dictionary says that the term was first used in 1953, in the context of computer design, to refer to data sent to a computer for processing. In the study of SLA, Corder (1967) was the first to suggest that we acquire the rules of language in a predictable order, and that the order is independent of the order in which rules are taught in language classes. This led Corder to propose a distinction between input and intake.

The simple fact of presenting a certain linguistic form to a learner in the classroom does not necessarily qualify it for the status of input, for the reason that input is ‘what goes in’ not what is available for going in, and we may reasonably suppose that it is the learner who controls this input, or more properly his intake. This may well be determined by the characteristics of his language acquisition mechanism. (p. 165).

Here, input is what’s available, and intake is what the learner decides to take in. It’s not clear to me what either ‘input’ or ‘intake’ refers to, and anyway, as Schmidt (1990) points out, Corder contradicts himself by saying in the first sentence that the learner controls intake, and then saying in the second sentence that his language acquisition mechanism does. More importantly for our hunt, Schmidt goes on to say that it’s not clear whether intake is the subset of input that makes it into short-term memory, or whether it’s that part of input that has been sufficiently processed to now form part of the learner’s interlanguage system. The way Schmidt expresses this second point is instructive. Schmidt says that Corder’s treatment of intake does not make any clear distinction between that part of input used to comprehend messages and that part used “for the learning of form” (Schmidt, 1990, p. 139). Schmidt also endorses Slobin’s (1985) distinction between processes involved in converting input into stored data for the construction of language, and processes used to organise stored data into linguistic systems. Schmidt is obviously aware (sorry) of the problem of clearly identifying not just the level of conscious attention/awareness involved in noticing, but also the problems of clearly defining what is noticed and what (if any) processing goes on when learners notice whatever it is they notice.

Moving on to Krashen: his input hypothesis draws on the “natural order” of L2 acquisition that Corder drew attention to, and supposes that learners progress along a pre-determined learning trajectory which is impervious to instruction and controlled by a language acquisition device. Acquisition, Krashen says, is triggered by receiving L2 input that is one step beyond the learner’s current stage of linguistic competence. If a learner is at stage ‘i’, then acquisition takes place when he/she is exposed to ‘Comprehensible Input’ which belongs to level ‘i + 1’. In Krashen’s model, learners only need comprehensible input and a low affective filter to acquire the L2, because once the i + 1 input is received, Chomsky’s LAD does the rest. Almost needless to say, the trouble with Krashen’s input hypothesis is that he nowhere explains what comprehensible input consists of, or tells us how to recognise it.

Unsurprisingly, Schmidt’s not very impressed with Krashen’s badly-defined hypothesis, but it’s not just the lack of definition that Schmidt objects to; crucially, Schmidt insists that SLA is triggered by conscious attention. Krashen’s comprehensible input is, says Schmidt, much better seen as intake, itself defined as that part of the input which is ‘noticed’. What learners actually do is consciously attend to, that is, notice, certain parts of the input, and the noticed parts become intake. Furthermore, since the parts of the input which aren’t ‘noticed’ are lost, it follows that noticing is the necessary condition for learning an L2. In his 1990 paper, at least, the claim is not, as so many now want to interpret the Noticing Hypothesis, “More noticing leads to more learning”, but rather the much stronger claim, “Learning can’t take place without noticing”.

In the next post, I intend to look at processing models and try to pin down Schmidt’s “technical” definition of ‘noticing’, which he says is “equivalent” to Gass’ ‘apperception’. Hmmm. I’ll also look at Susanne Carroll’s very different view of input. She says:

The view that input is comprehended speech is mistaken and has arisen from an uncritical examination of the implications of Krashen’s (1985) claims to this effect. … Comprehending speech is something which happens as a consequence of a successful parse of the speech signal. Before one can successfully parse the L2, one must learn its grammatical properties. Krashen got it backwards!

To be continued.

References

Carroll, S. (2001) Input and Evidence. Amsterdam: Benjamins.

Corder, S. P. (1967) The significance of learners’ errors. International Review of Applied Linguistics, 5, 161-169.

Ellis, N. (1998) Emergentism, Connectionism and Language Learning. Language Learning, 48, 4, 631-664.

O’Grady, W. (2005) How Children Learn Language. CUP.

Schmidt, R. W. (1990) The role of consciousness in second language learning. Applied Linguistics, 11, 129-158.

 

Encounters with Noticing

Things started pretty normally yesterday. I looked at Twitter, and, as usual, there were numerous tweets by Dr. Conti urging readers to revisit his web site. Unable to resist, I thought I’d have a quick look at the post on parallel texts, which, I soon learned, are good because they encourage ‘noticing’. Dr. Conti explains:

According to Schmidt’s (1990) ‘Noticing hypothesis’ the learning of a foreign language grammar structure cannot occur unless the learner ‘notices’ the gap between the way that structure is used in the target language and his/her own L1. In my classroom experience I have witnessed many a time that Eureka moment when a student said, almost thinking aloud, “Oh, I get it! ‘I went’ in French is actually ‘I am gone’”. That would be an occurrence of ‘noticing’.

“Well, but would it?”, I wondered. Surely Schmidt’s noticing hypothesis makes no such claim; surely noticing the gap is more a trigger for noticing than noticing itself, and anyway, surely it’s not a question of noticing the gap between the L1 and the L2 but between features in input and output? Isn’t it?

As I sipped my third green tea of the morning, I hunted for Schmidt’s 1990 paper, but came across Truscott’s critique first. On the first page of his paper, Truscott (1998) says that he’s going to ignore the “noticing the gap” claim altogether:

Proponents of noticing also give much attention to noticing the gap – learners’ awareness of a  mismatch between the input and their current interlanguage (see especially Schmidt and Frota, 1986). It is important to avoid confusion between this idea, which necessarily involves awareness, and the more general notion of a comparison between input and interlanguage. Theories of unconscious acquisition naturally hypothesize an unconscious comparison process. Thus, arguments that learners must compare input to their interlanguage grammar (e.g., Ellis, 1994b) are not arguments for noticing.    

This is where I think things started to get a bit odd. Was the green tea starting to kick in? Surely Schmidt says that the conscious comparison of input and interlanguage triggers noticing, so surely Truscott is wrong to just kick Schmidt’s rejection of Krashen’s “more general notion” under the carpet? Why did ‘noticing the gap’ have nothing to do with ‘noticing’? On the other hand, wasn’t Truscott right to challenge the claim that the only way L2 learners make progress in interlanguage development is through consciously attending to new features of the L2 that are present in the input? I got a panicky feeling that I needed to remind myself of what I thought about all this before some student asked me. More tea.

Right, then. What had I said publicly? I found a post on the blog where I lamented the way Schmidt’s Noticing Hypothesis was being used to support all manner of explicit teaching practices. Whether it’s presenting the present perfect on Tuesday at 8pm, making the explicit teaching of lexical chunks the number one priority in a course, or using a red pen to indicate all the missing third person ‘s’s in a composition, it’s all OK because the Noticing Hypothesis says that bringing things to learners’ attention is a good thing. “Schmidt’s construct has been watered down so much that it now means no more than noticing in the everyday meaning of the word”, I’d said. Hmmm. Things were now getting Alice-in-Wonderland weird: suddenly I found myself unable to say what the special, unwatered-down meaning of ‘noticing’ was, or to remember why it was so highly regarded. I searched for Schmidt, 1990, again, found it and found this:

subliminal language learning is impossible … noticing is the necessary and sufficient condition for converting input into intake (Schmidt, 1990: 130).

OK, clear enough, but why would anyone believe such a thing? If input can’t get processed without being noticed, then ALL second language learning is conscious. Surely this is either trivially true by adopting some very weak definition of ‘conscious’ or ‘learning’, or obviously, monstrously false?

Dear oh dear, I was falling down the hole fast. And I’d been so absorbed reading Schmidt that I’d made a triple espresso with freshly ground Robusta coffee beans and drunk it without, OMG, noticing!

Trying to get a grip, I carried on reading. Schmidt says that the term ‘unconscious’ is used in three distinct senses:

  1. to describe learning without ‘intention’,
  2. to describe learning without metalinguistic ‘understanding’,
  3. to describe learning without attention and ‘awareness’.

He goes on to assert that although L2 learning without intention or metalinguistic understanding is clearly possible, there can be no learning without attention, accompanied by the subjective experience of being aware of – that is, of ‘noticing’ – aspects of the surface structure of the input. Intake is

that part of the input which the learner notices … whether the learner notices a form in linguistic input because he or she was deliberately attending to form, or purely inadvertently. If noticed, it becomes intake (Schmidt, 1990: 139).

“What?”, I thought, “You can notice things purely inadvertently? Without paying attention? But with focal awareness??” I tried some deep breathing but it didn’t help. Anyway, I had to focus. Apart from confusing me about what ‘noticing’ meant, where was Schmidt’s argument? How could he just DEFINE intake as noticed input in the way he did? What had I missed? I went back and tried to read it again, but all I could think was that I’d read it a dozen times already. I drew a little diagram:

There it was: Consciousness as awareness, level 2, noticing. But what did it mean? And what was all the rest about? Schmidt claimed that it was all supposed to sort out the “confusion” he saw in the use of the terms conscious and unconscious, but in fact, all I could see was a terrible muddle.

  • we notice
  • we pay attention
  • we are aware
  • we are focally aware
  • we deliberately attend to form
  • we notice purely inadvertently
  • our focus of attention is on surface structures in the input
  • we perceive competing stimuli and may pay attention to them (notice them) if we choose
  • storage without conscious awareness is impossible
  • the primary evidence for the claim that noticing is a necessary condition for storage comes from studies in which the focus of attention is experimentally controlled
  • the basic finding, that memory requires attention and awareness, was established at the very beginning of research within the information processing model.

I needed a drink. Time, I thought, for a good shot of tequila.

I decided to concentrate on the question of what exactly ‘noticing’ refers to, and how we can be sure when it is, and is not, being used by L2 learners. I had three sources: Schmidt and Frota (1986), Schmidt (1990) and Schmidt (2001). Schmidt claims that ‘noticing’ can be operationally defined as “the availability for verbal report”, “subject to various conditions”. He adds that these conditions are discussed at length in the verbal report literature, and cites Ericsson and Simon (1980, 1984) and Faerch and Kasper (1987), but he does not discuss the issue of operationalisation further until 2001, and even there he fails to provide any reliable way of knowing if and when ‘noticing’ is being used.

But in the 2001 article Schmidt says that ‘noticing’ is related to attention, and argues that attention as a psychological construct refers to a variety of mechanisms or subsystems (including alertness, orientation, detection within selective attention, facilitation, and inhibition) which control information processing and behaviour when existing skills and routines are inadequate. Hence, learning is “largely, perhaps exclusively a side effect of attended processing” (Schmidt, 2001: 25). Oh no! We’re back! What’s “attended processing”? Is it ‘noticing’? Is attention the same as awareness? Another shot needed.

I was so dizzy by now that Truscott’s words came floating towards me like a lifeline:

current research and theory on attention, awareness and learning are not clear enough to support any strong claims about relations among the three. … they do not offer any basis for strong claims of the sort embodied in the Noticing Hypothesis (Truscott, 1998, p. 106).

Well, at least that sounded right. Schmidt was tossing all these theoretical terms of attention, awareness and learning around like Humpty Dumpty, wasn’t he? Truscott was surely right to question the assertion that attention can be equated with awareness, and obviously right to say that there is no evidence to support the sweeping claim that “learners must consciously notice the particular details to be learnt”. But why does Truscott say “consciously notice”? ‘Noticing’ can’t be unconscious, can it? Agghhh!

Start again. ‘Noticing’ is part of the first stage of the process of converting input into implicit knowledge. Learners notice language features in the input, absorb them into their short-term memories, and compare them to features produced as output. The claim is that noticing takes place inside short-term memory, and Schmidt explains that it is triggered by different influences, namely instruction, perceptual salience, frequency, skill level, task demands, and comparing.

I decided to take ‘noticing’ to mean ‘noticing’, defined by the OED as “becoming aware of something”. It seemed to me preposterous to suggest that second language acquisition could be explained as a process that starts with input going through a necessary stage in short-term memory where “language features” had to be noticed in order to get any further along the way towards knowledge of, or competence in, the target language. What, ALL language features? Seriously? All language features in the L2 shuffle through short-term memory and, if unnoticed, have to re-present themselves? Was that a serious suggestion for the acquisition of grammatical competence, for example? I recalled what Gregg had said to me:

Noticing is a perceptual act; you can’t perceive what is not in the senses, so far as I know. Connections, relations, categories, meanings, essences, rules, principles, laws, etc. are not in the senses.

So how on earth can one build up grammatical competence by simply noticing things in the input?

And how had the Noticing Hypothesis come to be accepted as an explanation of how input becomes intake, prior to processing and availability for integration into a learner’s developing interlanguage system? I found R. Ellis’ diagram, which is reproduced all over the place:

It appears to suggest that the three constructs of ‘noticing’, comparing and integrating are what turn input into output and explain IL development. Can it really be making such a claim? Where’s the noticing supposed to take place according to the figure? And what is short/medium-term memory? Anyway, as Cross (2002) points out, Ellis (1994, 1997), Lewis (1993), Skehan (1998), Gass (1988), Batstone (1994), Lynch (2001), Sharwood-Smith (1981), Rutherford (1987) and McLaughlin (1987) all agree that noticing a feature in the input is an essential first step in language processing. How depressing.

By now I was most of my way through the bottle of tequila, and I became reckless: I decided to confront the main man, M. H. Long. I opened the obra maestra, Long (2015) SLA and TBLT, to page 55:

With Nick Ellis and others, what I claim is that explicit learning (not necessarily as a result of explicit instruction) involves a new form or form–meaning connection being held in short-term memory long enough for it to be processed, rehearsed, and an initial representation stored in long-term memory, thereafter altering the operation of the way additional exemplars of the item in the input are handled by the default implicit learning process. It is analogous to setting a radio dial to a new frequency. The listener has to pay close attention to the initial crackling reception. Once the radio is tuned to the new frequency, he or she can sit back, relax, and listen to the broadcast with minimal effort. Ellis identifies what he calls the general principle of explicit learning in SLA: “Changing the cues that learners focus on in their language processing changes what their implicit learning processes tune” (Ellis 2005, p. 327). The prognosis improves for both simple and complex grammatical features, including fragile features, and for acquisition in general, if adult learners’ attention is drawn to problems, so that they are noticed (Schmidt 1990 and elsewhere). This is the first of four or five main stages in the acquisition process (Chaudron 1985; Gass 1997), in which what is noticed is held and processed in short-term, or working, memory long enough for it to be compared with what is in storage in long-term memory, and, as a result, a sub-set of input becomes intake.

A couple of pages on:

Noticing in Schmidt’s sense, where the targets are the subject of focal attention, facilitates the acquisition of new items, especially non-salient ones, and as Schmidt maintains, and as demonstrated by 20 years of studies, from Schmidt and Frota (1986) to Mackey (2006), “more noticing leads to more learning” (Schmidt 1994, p. 18).

I’d read this before, lots of times, nodding sagely at the wisdom of the maestro, but suddenly I doubted it all. Was Long really signing up to the explanation that Schmidt offered of how input gets processed? Well, it seemed that he was, and as the tequila worked my rational doubts into sentimental despair, I flapped at the pages, turning them back and forth, trying to find the bits that I’d liked so much before. He CAN’T be saying that ALL new ‘items’ MUST be ‘noticed’, can he?

Ah! What was this?  

Crucially, however, as claimed by Gass (1997), and as embodied in the tallying hypothesis (N.C. Ellis 2002a,b), once a new form or structure has been noticed and a first representation of it established in long-term memory, Gass’ lower-level automatic apperception, and Tomlin and Villa’s detection, can take over, with incidental and implicit learning as the default process.

Black clouds threatened. I’d forgotten about the lower-level automatic apperception stuff. What, for pity’s sake, was THAT all about? I found some notes I’d made years earlier. “Gass claims that apperception is “the process of understanding by which newly observed qualities of an object are related to past experiences”. It “serves as selective cueing for the very first step of converting input into intake”. It “relates to the potentiality of comprehension of input, but does not guarantee that it will result in intake”. Beats me! It “relates to the potentiality of comprehension of input” indeed! Must ask Kevin.” In fact, much later I did ask Kevin, who told me that he’d actually been there at the plenary of whatever conference it was when Susan Gass introduced the lucky listeners to apperception. Now what exactly had he said about it?

Thankfully, the tequila rescued me from musing on apperception’s mysterious properties, and allowed me to grasp what I was sure was the main message. Hallelujah! The Empire strikes back: incidental and implicit learning as the default process return! Phew! And there was more good news:

whether detection without prior noticing is sufficient for adult learning of new L2 items is still unclear – perhaps one of the single most critically important issues, for both SLA theory and LT, awaiting resolution in the field.

Yes! Now I remembered! I’d read the bit about noticing maybe not being necessary at all somewhere else. I found this in a document I’d done myself, but most of it was directly from Long, 2015:

In this view, once a new form or structure has been noticed and a first representation of it established in long-term memory, lower-level “apperception” (Gass) or Tomlin and Villa’s “detection” take over, with incidental and implicit learning as the default process. So the first representation in long-term memory primes the learner to unconsciously perceive subsequent instances in the input. The big question is of course whether noticing is necessary for any representation to be established in long-term memory: is consciously attending to and detecting a form or form-meaning connection in the input the necessary first stage in the process of acquiring some features and form–meaning connections?  Long calls this “perhaps one of the single most critically important issues, for both SLA theory, and language teaching, awaiting resolution in the field”. 

In other words, Rather than see ‘noticing’ as the necessary and sufficient condition of SLA, I could now interpret Long as saying that incidental and implicit learning are still the main ways adults learn an L2. Furthermore, while noticing might facilitate the acquisition of “new items”, it’s still an open question as to whether it’s a necessary condition for acquisition.
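Just to fix the two readings in my own head, here’s a toy sketch of the ‘notice once, then tally’ picture. It’s mine, and purely illustrative, not Long’s or Schmidt’s or anyone’s actual model: on the strong reading, a novel form that is never noticed leaves no trace, however often it occurs, but once a first representation exists, further exemplars strengthen it implicitly.

```python
# Toy sketch (mine, purely illustrative) of the 'notice once, then tally'
# reading: a first representation requires noticing; once it exists,
# further exemplars are tallied implicitly, with no noticing required.
long_term_memory = {}  # form -> exemplar count

def process(form, noticed=False):
    if form in long_term_memory:
        # Default path: incidental/implicit tallying of further exemplars.
        long_term_memory[form] += 1
    elif noticed:
        # A first representation is established only via noticing -- the
        # strong claim; whether mere detection would do is Long's open question.
        long_term_memory[form] = 1
    # On this reading, an unnoticed novel form leaves no trace at all.

exposures = [("has gone", True), ("has gone", False),
             ("had gone", False), ("has gone", False)]
for form, noticed in exposures:
    process(form, noticed)

print(long_term_memory)  # {'has gone': 3}; 'had gone' was never noticed
```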

“This is surely a gap worth noticing” was the last thing I remember saying to myself.

To be continued.

References

Cross, J. (2002) ‘Noticing’ in SLA: Is it a valid concept? Downloaded from http://tesl-ej.org/ej23/a2.html

Ellis, R. (1997) SLA Research and Language Teaching. OUP.

Krashen, S. (1994) The input hypothesis and its rivals. In N. Ellis (Ed.), Implicit and Explicit Learning of Languages (pp. 45-77). London: Academic Press.

Long, M. H. (2015) Second Language Acquisition and Task-Based Language Teaching. Wiley.

Schmidt, R. W. (1990) The role of consciousness in second language learning. Applied Linguistics, 11, 129-158.

Schmidt, R. (2001) Attention. In P. Robinson (Ed.), Cognition and Second Language Instruction (pp. 3-32). Cambridge University Press.

Schmidt, R. and Frota, S. N. (1986) Developing basic conversational ability in a second language: a case study of an adult learner of Portuguese. In Day, R. R. (Ed.), Talking to Learn: Conversation in Second Language Acquisition. Rowley, MA: Newbury House.

Truscott, J. (1998) Noticing in second language acquisition: A critical review. Second Language Research, 14, 103-135.

 

Change must come

This is my final blast in 2017 against those who use their power to defend the lamentable state of English Language Teaching against those who want change.

Who are they?

Those in charge of the British Council, TESOL, IATEFL, Cambridge Assessment English, IELTS, Pearson, Kaplan, International House, OUP, CUP, McGraw Hill, National Geographic, New Oriental, and many others. Between them, they control the coursebook-dominated ELT industry, which has a turnover of $200,000,000,000 ($200 billion).

Their visible face comprises the stars of the ELT world. Many of them are multi-millionaires (Nunan, Richards, Rogers, Murphy, Mr. and Mrs. Soars, Mr. and Mrs. Haycraft, for example) and all of them earn more than $200,000 a year.

What do they do?

They promote the use of coursebooks as the driver of ELT in classrooms, in in-company courses and in private classes all over the world. They also control the assessment of English proficiency and of teacher training.

They produce the coursebooks and the tests that determine a person’s level of proficiency in English, and they supervise the admin and marking of the tests. They also produce the teacher training courses and the tests that determine a teacher’s abilities, and they supervise the admin and marking of the tests.

Their “visible face” is seen at the ELT conferences and in the social media. They are the ELT stars, the shockingly uninformed “experts” who write coursebooks and “How to teach” manuals, who give plenaries and workshops around the world, promoting their own products, peddling the same orthodox line.

In brief, they decide what is taught and by whom.

What’s wrong with what they do?

The fusion of their roles as designers, producers and assessors points to something rotten in the field of education.

They commercialise ELT. They turn education into a product. And as in all commercial endeavours, they get rich, the teachers stay poor, and those who “receive” the product don’t learn as well as they could. By controlling the assessment of English proficiency, of ELT materials and of teacher training, they ensure that ELT is a profit-driven industry where good practice and results are measured by profit margins. Most people who take courses in English as an L2 don’t learn as much as they want, or as much as they could.

What should we do?

First, draw on the findings of instructed SLA research and on the insights of people like John Fanselow and other gifted craftspeople. Somehow, we have to bring together our understanding of the learner’s interlanguage development and our collective wisdom of ELT as a craft. However this pans out, it will be a local solution. The biggest fight we have is to make our teaching relevant to our particular context and to end the control of the global coursebook.

Second, break the hold of those who currently control ELT. Speak out. Criticise the British Council, IH, the Cambridge Examiners, IATEFL, etc. Organise locally by forming cooperatives, in-house workshops, small-scale conferences, etc.

Happy 2018 to all.

Reflections on 2017, Part 3

In 2017 the British Council’s charity status and its branding as a UK government agency helped to maintain tax-free income at around the £1 billion mark, which is what it made in 2016 according to its own report. Particularly lucrative among its commercial operations were those involving the International English Language Testing System (IELTS), its English teaching centres, and its educational marketing and education-related contracts. These activities have led to accusations that its one-third share in IELTS biases its testing and certification policies; that it competes with an unfair advantage to train teachers for overseas governments; and that its not-for-profit status means that the income it gets from English teaching is exempt from corporation tax in many countries, unlike its competitors. Concerns about the way the British Council conducts itself as a charity have been compounded by the activities of the British Council Education UK website, which offers advice to international students looking to study in the UK. While British Council services are paraded all over the web site, other schools and colleges offering similar services have to pay to be on the lists provided, and the ones with the biggest marketing budgets get the best positioning. There’s also the ongoing struggle of ordinary teachers working for the British Council to be included in the generous pension plan which is presently only provided to the upper echelons of British Council staff.

The British Council is one of the most important pillars of the entire global ELT establishment, and, naturally, it gives active support and encouragement to all those who work to keep things running nice and smoothly just as they are, while frowning on any attempt to rock the boat. A visit to the huge British Council Teaching English web site is a truly depressing experience, like going for tea at some minor country pile in Dorset where the half-crazed, inbred family members, slumped in their lumpy threadbare sofas, talk about how hard it is to get a decent pot of tea these days. There’s no spark, no wit, no edge, no real sign of intelligent or joyful life anywhere to be found. The BC web site is clogged with unchallenging, boring, middle-of-the-road pap about the importance of motivation, ordered use of the whiteboard, conflict avoidance, dynamic classroom seating arrangements and positive feedback. There is quite simply no trace of serious critical evaluation of any of the issues facing us today. You don’t believe me? Take a visit there right now. This is what will greet you:

The BC proudly trumpets that

Armenian English language teachers are given the opportunity to attend a series of talks delivered by a distinguished ELT expert and develop professionally through exposure to the latest thinking and understanding in the field.

What the unlucky Armenian teachers actually got was five hours (FIVE HOURS!) exposure to Harmer, whose performance suggests that he can’t articulate his thoughts about anything for more than twenty seconds. I find it hard to believe that whoever is responsible for this web site actually watched the videos of Harmer stumbling around the room like someone who’s eaten the wrong mushrooms for lunch, before making them available to the public, and I find it even harder to believe that when Harmer’s talks mercifully finished, anyone in the audience felt that they were now up to date with “the latest thinking and understanding in the field”.

Still, that’s just the front page. Surely if you delve into the rich store of teacher materials you’ll find some stimulating, challenging, innovative stuff to make you think? Well no you won’t. You won’t find any reports and critical evaluations of research findings of the sort offered in the ELT Research Bytes blog, or anything that isn’t written by tried and trusted, controversy-averse, well-heeled teachers and teacher trainers dedicated to defending the status quo, keeping the good ship ELT on a steady course, and not rocking the boat.

To the extent that 2017 was a good year for the British Council, it was a bad year for innovative, democratic, progressive ELT.

Reflections on 2017, Part 2

Bad Mediators 

In Part 1 I suggested that those who write books and give teacher training courses in ELT have a duty to act as mediators between researchers and teachers, and that most of them make a mess of the job. This opinion was supported by the mini study Thornbury carried out and then reported on at the 2017 IATEFL conference. The study looked at four top-selling “How to Teach English” books which are recommended reading for hundreds of thousands of people studying to get a qualification in ELT, and it found that all four books are based more on the authors’ biases, intuitions, feelings, and what somebody else told them, than on any serious attempt to critically assess what research findings tell us about how people learn languages. In a post on these mediators I suggested that Thornbury took a disappointingly uncritical look at the data that his study had produced.

Staying on the Fence 

Unlike the four writers he reviewed, Thornbury himself has discussed research findings that challenge ELT orthodoxy more than once, so if he thinks it’s important for him to keep in touch with research and to use research findings to inform his views on methodology, why doesn’t he expect the same of others? And since he’s been so outspoken in his criticism of coursebooks, why didn’t he mention this when discussing his findings? The answer seems to be that Thornbury has developed the unique knack of not just sitting on the fence, but actually living perfectly perched on it. He’s become so adroit at deftly ducking controversy, so practised at never getting drawn on the political issues raised by the matters he discusses, that he makes the UK Liberal Democrats look radical. He knows perfectly well that the bosses of the British Council, the publishing houses, the exam bodies, the training outfits and so on will simply not allow any serious attacks on current ELT practice to be made – witness his own publishers making it clear to him that they’re “not interested” in his McNuggets views or in what he really thinks of the CELTA course. He knows that the ELT educational system is set up in such a way that teachers are unlikely to hear about “inconvenient” research findings which challenge coursebook-driven ELT, or which show that the Pearson Test of English is built on sand, or which describe the Common European Framework of Reference as “a prime example of the way political and social agendas can impact on language testing, and how language testing can be made to serve those agendas” (Fulcher, 2005). I suppose Thornbury thinks, like many reformers, that he can be more of a force for change by staying inside the tent than by pissing on it from outside. I think that this argument is demonstrably wrong, but never mind; even if that is Thornbury’s view, it doesn’t explain why he doesn’t adopt a more critical stance. In the end, maybe it’s just that he’s a really nice guy and he doesn’t like upsetting people. Well, I can certainly relate to that. 🙂

Misrepresenting Chomsky 

Still, there’s another bone I have to pick with the loveable Thornbury, and that is his continued misrepresentation of Chomsky’s work. If you look at the “daft things the experts said” at the end of my last post, it was Thornbury who said “The NS-NNS distinction is absolutely central to the Chomskyan project”. It isn’t, of course, and, pace Thornbury, the onus isn’t on Chomsky to perform the logically impossible task of proving that some aspects of the knowledge of language that children demonstrate couldn’t have been acquired from input, and it isn’t the case that there’s no empirical evidence to support Chomsky’s theory of UG. In my post Treatise on Thornbury’s view of SLA I pointed to some mistakes in Thornbury’s account of what Chomsky says about language and language learning, and also to the faults in his arguments about UG in general and the poverty of the stimulus argument in particular. It’s important to stress that none of the emergentists who Thornbury now seems to think offer the best explanation of SLA, least of all Larsen-Freeman, has offered an explanation of what young children know about language. As Eubank and Gregg (2002) argue, to suggest that language learning is explained by a general theory of associative learning is to leave unexplained:

  1. the fact that children know which form-function pairings are possible in human language grammars and which are not, regardless of exposure;
  2. the countless cases of instantaneous learning;
  3. the knowledge children have in the absence of exposure (i.e., a frequency of zero), including knowledge of what is not possible.

Furthermore, to quote Eubank and Gregg (2002, p. 237):

Ellis aptly points to infants’ ability to do statistical analyses of syllable frequency (Saffran et al., 1996); but of course those infants haven’t learned that ability. What needs to be shown is how infants uniformly manage this task: why they focus on syllable frequency (instead of some other information available in exposure), and how they know what a syllable is in the first place, given crosslinguistic variation. Much the same is true for other areas of linguistic import, e.g. the demonstration by Marcus et al. (1999) that infants can infer rules. And of course work by Crain, Gordon, and others (Crain, 1991; Gordon, 1985) shows early grammatical knowledge, in cases where input frequency could not possibly be appealed to. Landau & Gleitman (1985) even document lexical acquisition in spite of frequent input, where a blind child acquired (her own interpretation of) verbs like “look” despite frequent training under a different interpretation.

In a comment on the post about Thornbury’s view of SLA, Gregg wrote this:

Hi Geoff,

I think I’d revise one bit of your discussion: Where you say

Thornbury’s unqualified assertion that language learning can be explained as the detection and memorisation of frequently occurring sequences in the sensory data we are exposed to is probably wrong and certainly not the whole story.

I’d change ‘probably’ to ‘definitely’. It’s striking, and depressing, to see how purveyors of ’emergentism’ continue to ignore the mountain of research showing the complexity of language, and the other mountain of research showing the kinds of linguistic (and other) knowledge young children show, knowledge that no one has been able to account for on an empiricist learning theory, and how they continue to blithely assert that it’s all done by generalization across input samples, without showing how. I’m again reminded of the story … of how Rockefeller became rich. One day as a young lad he found himself with a penny in his pocket. He walked down to the farmer’s market and bought an apple, walked to Wall Street and sold it for 2 cents. Then back to the market to buy 2 apples, back to Wall Street, … At the end of a week he’d bought an old wheelbarrow, and after a month he’d earned enough to put down the first month’s rent on a small fruit shop. But then his uncle died and he inherited everything.    

  

Sprawling in the Primeval Slime 

While Thornbury’s remarks about emergentism are slightly less preposterous this year than they were in 2016 (he’s moved on from Larsen-Freeman and Cameron’s (2008) nonsense about complex systems to slightly better-argued stuff by the likes of Nick Ellis and Tomasello), he continues to incite a younger generation, who, after a quick perusal of Sampson, Everett, Wolfe and other reliable sources, share their ignorance with others in the comments sections of the A to Z of ELT blog. Thornbury being Thornbury, he doesn’t tell the young ’uns that they’re talking baloney; he actually encourages them. In one of his posts this November, Thornbury cheerfully quips that, given the choice between Chomsky’s self-proclaimed triumph of “human reason” on the one hand, and “beastly grovelling in the primeval slime” on the other, he’ll choose the slime every time. The trouble is, he invites the younger generation to join him in the beastly bog; he encourages them to think that their ignorance of Chomsky’s work should be worn like a badge of cool, and he confidently assures them that SLA is best explained as the complex result of a simple process of “reinforcing contingencies set up by the verbal community”. You couldn’t make it up, so it must be true. Well, for the time being we’ll have to leave them to it, happily frolicking in the slime, unconsciously strengthening the associations between who knows what cues, and trust that before they get too much older, the brighter ones will get tired of it, climb out, and leave their genial hero alone with his dirty bucket and spade, there to finally appreciate the power and utility of non-communicative uses of language, or ‘thinking’, as Chomsky refers to them.

Reference 

Eubank, L. and Gregg, K. R. (2002) News Flash: Hume Still Dead. Studies in Second Language Acquisition, 24, 2, 237-247.

Reflections on 2017, Part 1

Looking back on the posts during 2017, I notice that I started the year (ELT: Art and Rationality) by happily conceding that ELT is

“a creative, imaginative endeavour where a teacher’s ability to bring language to life; to contextualise it; to create situations where students engage with it; to get students to learn some key parts of it by rote or at least through frequent re-cycling; to create group dynamics and nurture group cohesion; to empathise with the doubts and fears of students, to manage conflicting needs, and also to design, organise and carry out a coherent plan of learning, are all far more important than a critical appreciation of theories of SLA and the research they’re based on”.

Chomsky (1995, cited in Gregg, 2006, p. 403) made a similar point when he remarked that we might well learn more about how people think and feel and act by studying history or reading novels than from empirical research, which, “outside of narrow domains has proven shallow or hopeless”.

Teaching English is still, despite all attempts to commodify education, an “arts and crafts” activity, a job where experience counts a great deal, and where teachers who combine all manner of skills and knowledge and character traits, and who find themselves in the right place at the right time, can work wonders, making the difference between FonF and FonFs pale into insignificance. And yet, as I said in that post, things have changed from the time when Earl Stevick, John Fanselow, Alan Maley and other master craftsmen (I’m afraid they were mostly men) shared their insights with teachers seeking awareness and inspiration. Since the widespread adoption of coursebooks, our freedom as teachers to express our individuality, inventiveness and creativity has shrunk alarmingly, while at the same time, research into the English language and into how people learn languages has greatly expanded. Despite these two decisive changes, we perversely persist in using syllabuses, methodological principles and pedagogic procedures that rob us of the freedom to pursue our craft, and that, at the same time, fly in the face of robust research findings.

My main argument throughout the year has concerned that enormous elephant in the room: ELT coursebooks. Pace the arguments of those who try to defend their use, coursebooks are not just “a symptom”; it's not just a question of the way you use them, or that they put too much emphasis on grammar teaching, or that they're tools of imperialism, or even that they're stultifyingly boring. No, it's that they have a huge, generally detrimental effect on the practice of ELT, including syllabus design, methodology, and testing. All the discussion of doing things better – of the role of extensive reading, of what work to do in and outside classrooms, of how to use this or that bit of kit, of whether to teach vocabulary this way or that, of the best way to recycle work, of the efficacy of pronunciation teaching, of when to use the L1, of how to respond to written and spoken errors, and on and on – takes place against the backdrop of using a coursebook which imposes a restrictive and deforming framework on everything we do. We know that synthetic syllabuses, a PPP methodology and an incremental, step-by-step view of progress are based on false assumptions about how people learn an L2, and yet, using the excuse of convenience and bowing to commercial pressure, we plod on regardless. To make matters worse, like politicians refusing to take climate change seriously, our stubborn refusal to face facts blights the future. The coursebook imposes its mistaken methodological principles and pedagogic procedures on teacher training, particularly the CELTA and Trinity College training courses, where learning to be a teacher of English to speakers of other languages is intricately bound up with learning how to use a coursebook.

In a number of posts this year, I've replied to those who have defended coursebook-driven ELT (see the Coursebook section of the menu on the right) and, in my opinion, no serious answers to the case against coursebooks have been offered. Penny Ur's airy dismissal of any criticism of them; her recent review of SLA research affecting teaching practice (see the Gagged post), where she made no mention of interlanguage research and ignored questions about the implications of interlanguage research for coursebook-driven ELT; and her continued reliance on the argument that the convenience of coursebooks outweighs all other considerations strike me as typical of too many of today's so-called ELT experts. Ur's replies to Thornbury's questions about the importance of research (“it's certainly possible to write helpful and valid professional guidance for teachers with no research references whatsoever”), her misrepresentation of the research on TBLT (“there's no evidence that it works”), and her extensive use of the well-known fallacy that “inconclusive” evidence in support of a hypothesis is reason to believe it's false are hallmarks of the unreliable expert.

I suggest that we have a right to expect that those whose job it is to oversee the training and on-going professional development of teachers should take robust research findings about how people learn an L2, particularly those regarding interlanguage development, more seriously, and make discussion of them part of their books and training courses. Why does Ur's book A Course in Language Teaching so confidently promote the coursebook and so completely ignore 40 years of interlanguage research? Why does Harmer's magnum opus The Practice of ELT (see here for a review) devote more pages to a discussion of classroom seating arrangements than to a discussion of SLA research? Why does nobody in the ELT establishment (except Scott Thornbury) speak out against all the harm being done by the domination of coursebooks today?

The most obvious answer is “Because ELT is a business” and coursebooks are the perfect way to package what could otherwise be a rather messy “product”. But I can't help feeling that a certain insidious complacency is also to blame, especially when I see Ur, Harmer, Dellar and the rest of them jetting around the planet giving teachers everywhere expert advice on how to teach, without ever initiating a serious discussion of the mounting evidence from SLA research which indicates that current ELT methodology is fundamentally mistaken. Dellar's*** tweet in September from some exotic corner of the globe illustrates the ease with which doubts about current practice can be shrugged off by those who feel themselves to be really in the know: “You quickly realise how little the heated debates of the euro-centric #EFL blogosphere have to do with most contexts…”, he wrote. Others were quick to “Like”.

*** My sincere apologies to Jim Scrivener, to whom I wrongly attributed the tweet when I published this post.

An examination of conference talks given by the leading lights in ELT in 2017 reveals a general lack of awareness and critical acumen that many of us find shocking; and almost as shocking is that these conference talks go almost entirely unchallenged. The bombast and chutzpah of so many in the ELT establishment gels with the gullibility and docility of their audiences to produce a complacent culture lacking any healthy critical edge. This year, every time the twenty or thirty plenary speakers who presently dominate the global ELT conference circuit finished their presentations, they were met with polite applause. Until this is replaced with a cacophony of affronted catcalls, change won’t come; or at least it won’t come from rank and file action, though it might well come soon enough from technological change which makes both coursebooks and most teachers redundant.

In Part 2 I’ll look at some of the daft things our experts said in 2017, including these:

  •  If you encounter the pattern They man-doubled across the place, you know that man-doubled is some kind of way of moving.
  • In academia the established use of ‘native speaker’ as a sociolinguistic category comes from particular paradigmatic discourses of science and is not fixed beyond critical scrutiny.
  • English sometimes seems as if it is everywhere, but in reality, of course, it is not.
  • The NS-NNS distinction is absolutely central to the Chomskyan project.  
  • English migrated to other countries … such as the USA, Canada, New Zealand, … and many other corners of the globe. And it didn’t stop there. It has morphed and spread to other countries too.
  • The way I see it Scott is that ‘interlanguage’ is one of the uglier of many unnecessary neologisms invented by academics, presumably to give them a sense that they are forging a profession: there are plenty of plain English alternatives.
  • Have you read Evans’ The language myth. Why language is not an instinct ? Very good book. Quite an eye-opener.

After the rain came falling, 

And the truth was washed away, 

I called my brother on the telephone, 

Just to see what he would say

Those last lines are the first stanza of a song, a lament one could say, in response to Brexit. The song inspired the best comment of 2017, from John Clave:

“I experienced such vergüenza ajena I curled up in a ball and rolled under my bed”.

Reference 

Gregg, K.R. (2006) Taking a social turn for the worse: The language socialization paradigm for second language acquisition. Second Language Research, 22, 4, pp. 413–442.

What science is not

A few weeks ago, someone in the ELT world tweeted that Salma Patel’s blog, which deals with management of the UK National Health Service, had a post that gave a good, brief summary of research paradigms. I went to the blog and found the post:

The research paradigm – methodology, epistemology & ontology – explained in simple language 

Published in 2015, it’s had 168,622 views so far, and there are dozens of comments at the end thanking Patel for his “clear”, “brilliant”, “superb”, “excellent”, “amazing”, “extremely useful” explanations.

The explanation starts with a summary of the main components of a research paradigm, followed by a video which explains the text. Patel begins by saying that there are two main approaches to research:

  1. Filling knowledge gap: positivist
  2. Problem-solving: interpretive.

He explains:

In the first you read a lot of books … and you find a gap in the research. … It is objective. What is the meaning of objective? Reality is external to us – I don't know the reality. So, I propose a hypothesis. What is the meaning of a hypothesis? There is a relationship between X and Y, or not. That's it.

In the second, you identify a problem, you ask “Why?”. There is no single reality so we have to look at reality from different perspectives, understand different characters, different people, .. So there’s no reality here. That’s why we have to go ourselves into the organisation and talk to people.

So there you have it: scientific, quantitative research is most suitable for research projects which seek to fill a knowledge gap, while qualitative research (which assumes that there’s no such thing as objective reality) is the best way to go about problem solving.

Scientific research is, of course, nothing like Patel’s description of it. Nor is positivism what Patel says it is, and nor does his chart present a reliable or useful guide to research projects.

The aim of scientific research is, precisely, to solve problems, or, to put it another way, to explain phenomena. The collection of empirical data, the organisation of taxonomies, etc. are carried out not for their own sakes but in the service of an explanatory theory. Hypotheses are the beginning of attempts to solve problems and should lead to theories that explain a certain group of phenomena. The aim is to unify descriptions and low-level theories into a general causal theory.

SLA research carried out under the umbrella of cognitive science adopts these aims and methods, and although far from achieving any general theory, it still has some claim to be part of what Kuhn calls a mature science tradition. In contrast, the sort of work Patel encourages falls, at best, into Kuhn’s “immature science” bag, in the ‘pre-paradigm’ period. It’s clear from the literature that some sociologists and sociolinguists want no part of the scientific enterprise, but Patel’s biased and distorted description of different approaches to research fails to properly explain either the realist or the relativist case. In order to provide newcomers with a clear, balanced, well informed introduction to research methodology, I think Patel needs a better grasp than he shows of the philosophy of science, the history of western philosophy, and how evidence-based research is conceived and conducted.

In response to information given to me by Steve Brown, Carol Goodey and others earlier this year, I wrote a post on Research Paradigms where I commented on the way that various influential sociology departments have developed their own particular post-Kuhnian narrative concerning how research is carried out. I said at the time that I was really surprised to learn how widely these daft notions of 'positivism' and 'research paradigms' had spread, but I find the fact that Patel's post has reached over 160,000 grateful post graduate students quite shocking. Did nobody catch so much as a whiff of baloney? Did nobody take the trouble to, ahem, deconstruct the text?

A more respectable version of Patel’s presentation can be found in Scotland (2012), which is cited, but it’s hardly any better. In the end, we can trace most of this “revised”, post-Kuhnian treatment of paradigms back to Lincoln and Guba (1985) who proposed a “Constructivist paradigm” as a replacement for “the conventional, scientific, or positivist paradigm of enquiry”.  This view is idealist (“what is real is a construction in the minds of individuals”), pluralist and relativist:

There are multiple, often conflicting, constructions and all (at least potentially) are meaningful.  The question of which or whether constructions are true is sociohistorically relative. (Lincoln and Guba, 1985: 85).

Lincoln and Guba assume that the observer can’t and shouldn’t be neatly disentangled from the observed in the activity of inquiring into constructions.  Constructions in turn are resident in the minds of individuals:

They do not exist outside of the persons who created and hold them; they are not part of some “objective” world that exists apart from their constructors (Lincoln and Guba, 1985: 143).

Thus constructivism is based on the principle of interaction.

The results of an enquiry are always shaped by the interaction of inquirer and inquired into which renders the distinction between ontology and epistemology obsolete: what can be known and the individual who comes to know it are fused into a coherent whole (Guba: 1990: 19).

Note that Patel has either overlooked or ignored the fact that, according to the leading lights in his “constructivist paradigm”, the distinction between ontology and epistemology is obsolete. In any case, if you want to find the roots of the full-blown idealist, relativist, pluralist, your-experience-of-me-experiencing-you-experiencing-the-teapot, topsy-turvy, now-you-see-it-now-you-don't world of post-modern sociology, you need look no further than Lincoln and Guba, 1985. And if you want a demonstration of why it's so much baloney, see Gross & Levitt, 1994, and Sokal & Bricmont, 1998.

Not far behind in terms of culpability for all this mess comes Crotty (1998), whose “seminal work” on research in the social sciences is required reading in thousands of undergraduate and post graduate courses all over the world. Crotty's work quite wrongly states that positivism started with the work of Francis Bacon, completely misrepresents the work of the positivists themselves, and misrepresents the work of Popper, Kuhn and Feyerabend too. At one point, Crotty says that the real target of Feyerabend's criticism was “the positivists”, despite the fact that, before Feyerabend's Against Method was published, positivists – scientists and philosophers alike – had thankfully disappeared. I challenge Crotty to find a scientific department in any university anywhere on the planet run by self-proclaimed positivists.

C.P. Snow, in his 1959 lecture, first described the ‘two cultures’ of science and the humanities (see Snow, 1993), since when the gap has widened considerably. Eleven years ago, Gregg (2006) noted that in the field of SLA, a look at the ‘applied linguistics’ literature

turns up doubts about the value of controlling for variables (Block, 1996), reduction of empirical claims to metaphors (Schumann, 1983; Lantolf, 1996), mockery of empirical claims in SLA as ‘physics envy’ and denials of the possibility of achieving objective knowledge (Lantolf, 1996), even wholesale rejection of the values and methods of empirical research (Johnson, 2004). Although the standpoints are various, one common thread unites these critiques: a fundamental misunderstanding of what science, and in particular cognitive science, is about (see, e.g. Gregg et al., 1997; Gregg, 2000; 2002).

Today, blogs and twitter exchanges abound with references to white coats, laboratory conditions and the other trappings of so-called positivists (including Chomsky of course) who, it's claimed, fail to make any connections with the real world, even though, ironically enough, they're the only ones who believe in such a thing. In my own case, in exchanges with Marek Kiczkowiak of TEFL Equity Advocates about the existence (or not) of native speakers, I refer to the “sociolinguistic twaddle that obfuscates a simple psychological reality”, while he refers to “the fantastic beast the NS has become in theoretical linguistics and SLA labs”. I'd say that in this case it's Kiczkowiak who shows a typically deprecating and ignorant attitude towards SLA cognitive research, while I limit myself to the claim that regardless of how difficult it might be for sociolinguists to decide who belongs to what social group, there are such things as native speakers, and it is the case (a case worth researching) that most people who learn an L2 fall short of native competence. But then, I would say that, wouldn't I.

Patel’s post is more evidence of the need to remain critical in our reading and thinking about our profession. There are so many examples of low standards of scholarship, rational criticism and intellectual honesty in the work of those who do research and teacher training that we need to be constantly on our guard. Down with baloney!

 

References

Crotty, M. (1998) The Foundations of Social Research: Meaning and Perspective in the Research Process. London, Routledge.

Gregg, K. R. (2006) Taking a social turn for the worse: the language socialization paradigm for second language acquisition. Second Language Research 22, 4; pp. 413–442.

Gross, P.R. and Levitt, N. (1994) Higher superstition: the academic left and its quarrels with science. Johns Hopkins University Press.

Lincoln, Y. & Guba, E. (1985) Naturalistic Inquiry. Newbury Park, Sage.

Scotland, J. (2012) Exploring the philosophical underpinnings of research: Relating ontology and epistemology to the methodology and methods of the scientific, interpretive, and critical research paradigms. English Language Teaching, 5(9), pp.9–16.

Snow, C.P. (1993) The two cultures. Cambridge, Cambridge University Press.

Sokal, A.D. and Bricmont, J. (1998) Intellectual impostures. London, Verso.

A Reply to A. Holliday’s “Why we should stop using native-non-native speaker labels”

Preamble

1  In the domain of English language teaching, there is just about universal agreement that discrimination against non-native speaker teachers must stop. Those who fight to end such discrimination have my full support.

2  In the domain of SLA research, native speakers of language X are people for whom language X is the language they learnt through primary socialization in early childhood, as a first language.

3  To paraphrase Long (2007, 2015), the psychological reality of native speakerness is easily demonstrated by the fact that we know who is one, and who isn't, when we meet them, often on the basis of just a few utterances. When monolingual speakers are presented with recorded stretches of speech by a large pool of NSs and NNSs and asked to say which are which, the judges are always very good at distinguishing them, with inter-rater reliability typically above .9. How do they do this, and why is there so much agreement, if there is no such thing as a NS? (A sketch of how this kind of agreement statistic is computed follows this preamble.)

4  For the last 60 years, the term “native speaker” has been used in the literature concerning studies of language learning, and one of the most studied phenomena of all is the failure of the vast majority of post adolescent L2 learners to achieve what Birdsong (2009) refers to as “native like attainment”.

On the prevailing view of ultimate attainment in second language acquisition, native competence cannot be achieved by post pubertal learners. There are few exceptions to this generalization (Birdsong 1992).

5  Claims concerning the relative abilities of native speakers and learners of the target language are not disconfirmed by individual cases. The claims all accept the psychological reality of native speakerness.

6  The specific claim that very few post adolescent L2 learners attain native like proficiency is supported by a great deal of empirical evidence (see, e.g., reviews by Long, 2007; Harley and Wang, 1997; Hyltenstam and Abrahamsson, 2003).

7  When trying to explain why most L2 learners don't attain native competence, scholars have investigated various “sensitive periods”. It's widely accepted that there are multiple sensitive periods for different domains of second language learning – pronunciation, morphology and syntax, lexis and collocation (see Long, 2007, Problems in SLA, Chapter 3 for a review of sensitive periods).
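As promised in point 3, a note on the agreement figure. Inter-rater reliability of this kind is usually reported either as simple percent agreement or as a chance-corrected coefficient such as Cohen's kappa. Here's a minimal sketch of the kappa calculation in Python; the judgments are invented for illustration and are not data from any of the studies Long reviews:

```python
# A minimal sketch: Cohen's kappa for two judges classifying recorded
# speakers as NS or NNS. All judgments below are invented for illustration.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                   for label in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

judge_1 = ["NS", "NS", "NNS", "NNS", "NS", "NNS", "NS", "NNS", "NS", "NNS"]
judge_2 = ["NS", "NS", "NNS", "NNS", "NS", "NNS", "NS", "NS",  "NS", "NNS"]
print(cohens_kappa(judge_1, judge_2))  # 0.8 here; the studies cited report >.9
```

A kappa of .8 or above is conventionally read as very strong agreement, which is why values above .9 across large pools of judges are taken as evidence that the NS/NNS distinction is psychologically real.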

To the issue then

Adrian Holliday, Professor of Applied Linguistics & Intercultural Education at Canterbury Christ Church University, has just published a post on his blog: Why we should stop using native-non-native speaker labels  in response to queries about his claim that the terms native speaker and non-native speaker are neo-racist. He addresses the questions: “What does ‘neo-racist’ mean?” and “Are there no occasion (sic) when these labels can be used?”.

He starts with his own subjective impressions of what ‘native speaker’ means to him and then says

In academia the established use of ‘native speaker’ as a sociolinguistic category comes from particular paradigmatic discourses of science and is not fixed beyond critical scrutiny.

I’ve no idea what the phrase “particular paradigmatic discourses of science” refers to, but I’m sure we can all agree that the use of ‘native speaker’ as a sociolinguistic category is not fixed beyond critical scrutiny. Holliday seems to be saying that quantitative research based on testing hypotheses with empirical evidence, as carried out by many scholars trying to understand the psychological process of SLA, is part of a “mistaken paradigm”. Since in SLA research there isn’t, and never has been, any general theory of SLA with paradigm status, and since I’m sure that in the field of sociolinguistics and cultural education they’re even further away from any such theory, talk of paradigms, like talk of “imagined objective ‘science’”, of problems that reside in differences being evoked “regardless of the words that are being used”, and of labels referring to things that “do not actually exist at all”, belongs to the giddy world of post modern sociology, where words mean what their authors choose them to mean, “neither more nor less”, as Humpty Dumpty triumphantly concludes.

Whatever the term ‘native speaker’ might be used for in sociolinguistics, in psycholinguistics ‘native speaker’ refers to real people, as I’ve explained above, and nothing that Holliday says challenges this fact. So we’re left with the charge that when we refer to people as ‘non-native speakers’, we imply that they are “culturally deficient”, which amounts to “deep and unrecognised racism”.  We “define, confine and reduce” this group of people and refer to their culture in a way that evokes “images of deficiency or superiority – divisive associations with competence, knowledge and race – who can, who can’t, and what sort of people they are”.

In my opinion this is so badly written as to be almost incoherent, but perhaps it expresses exactly what Holliday means to say. Whatever it means, it’s difficult to counter something like neo-racism if it’s “unrecognised”, and if any attempt we make to use other terms just pushes the labelling “even further into a normalised, reified discourse, where we are even less likely to reflect on their meaning, and where a technicalisation of the labels somehow makes them more legitimate”. Still, since Holliday confidently asserts that “the native-non-native speaker labels” refer to something “that does not actually exist”, it should be easy enough for sociolinguists (and those involved in intercultural education too, I suppose) to stop using them. Meanwhile, back in the real world, it’s a different story.

Long (2007) argues that the issue of age differences is fundamental for SLA theory construction. If the evidence from sensitive periods shows that adults are inferior learners because they are qualitatively different from children, then this could provide an explanation for the failure of the vast majority of post adolescent L2 learners to achieve Birdsong’s “native like attainment”. If we want to propose the same theory for child and adult language acquisition, then we’ll have to account for the differences in outcome some other way; for example, by claiming that the same knowledge and abilities produce inferior results due to different initial states in L1 acquisition and L2 acquisition. Either way, the importance of the existence (or not) of sensitive periods for those scholars trying to explain the psychological process of SLA indicates that native speakerness will continue to be used as a measure of the proficiency of adult L2 learners.

References 

Harley, B. & Wang, W. (1997). “The critical period hypothesis: Where are we now?”. In A. M. B. de Groot & J. F. Kroll (Eds.), Tutorials in Bilingualism: Psycholinguistic perspectives (pp. 19–51). Mahwah, NJ: Lawrence Erlbaum Associates.

Hyltenstam, K. & Abrahamsson, N. (2003). “Maturational constraints in SLA”. In C.J. Doughty & M.H. Long (Eds.), Handbook of Second Language Acquisition. Oxford: Blackwell.

Long, M. (2007) Problems in SLA. London, Erlbaum.

Long, M. (2015) SLA and Task-based Language Teaching. Oxford, Wiley.

Gagged!

I’m in danger of crying wolf here, because the last time I said I’d been censored, it turned out that it was my own clumsy use of the “reply” function that was to blame. But I’ve checked, and I think this time I’m right. In any case, the important thing is to air the matter of an influential ELT author and teacher trainer not being as rigorous as I think she should be in her role as mediator between researchers and teachers.

Penny Ur recently wrote an article for the IATEFL materials writing special interest group called “And what about the research?”. Ur points out that in the last twenty years, research has produced some “convincing evidence” for ideas which challenge popular, widely-held views among teachers. Ur sympathises with busy teachers, but urges them to read the research and to pay attention to it. In her role as mediator, Ur goes on to give three examples of this kind of research:

  • Use of the L1  Though often proscribed by teachers (and/or their bosses), use of the L1 is, research shows, very helpful in some situations;
  • Lexical sets  Teaching lexical sets is popular, and the basis for a lot of ELT material. But research shows that it’s counter-productive: learners actually learn new items much better if they are disconnected, or connected thematically;
  • Guessing from context  Another popular activity in many classrooms and in workbooks, research shows that it’s “a thoroughly unreliable way of accessing meaning”.

I wrote a comment about the article and Ur replied. Next, I replied to Ur’s reply, but Ur didn’t respond. Finally, Catherine Richards commented; I replied to Richards, but my reply wasn’t published.

There are two issues. The first is that Ur claims to act as an honest mediator between those doing research and busy teachers, and yet she ignores important research findings that don’t fit her own view of ELT.

The second is that whoever is responsible for looking after the MaWSIG blog chose to publish a quite personal, ad hominem attack on someone who criticises Ur, and yet refused to publish the reply.

Here is the exchange of comments, beginning with mine:

My first comment 

You have repeatedly given your own views on TBLT (“there’s no evidence that it works”) and the usefulness of teaching grammar proactively through traditional focus on formS (“it’s effective”), without adequately discussing the evidence from research findings that challenge such opinions (see, for example, Long 2015).

In this article, you mention 2 areas where research can inform ELT while ignoring the elephant in the room, i.e., the 60 years of research findings on interlanguage development. This research (see Han and Tarone, 2017 for a review) poses a serious challenge to the use of materials such as coursebooks, which chop the target language into bits, and then present and practise the bits in a pre-determined sequence on the assumption that learners learn what they’re taught in this way.

Pienemann’s (e.g., 1989) work showed that all the children and adult learners of German as a second language in a very big study adhered to a five-stage developmental sequence. Later work by his group and others in the 1990s established an acquisition order for morphemes, negation, questions, word order, embedded clauses and pronouns (see Han and Tarone, 2017, for a review). The conclusion from the research findings is that there are various kinds of developmental sequences and stages in interlanguage development which are impervious to instruction, in the sense that stage order can’t be altered, or stages skipped: acquisition sequences do not reflect instructional sequences, and thus teachability is constrained by learnability.

The implication is that a lot of the materials you recommend, including coursebooks that implement a grammar-based syllabus through a PPP methodology, fly in the face of robust findings in SLA research.

References

Han, Z. and Tarone, E. (eds.) (2017) Interlanguage: Forty Years Later. Amsterdam, Benjamins.

Long, M. (2015) SLA and Task-based Language Teaching. Oxford, Wiley.

Pienemann, M. (1989). Is language teachable? Psycholinguistic experiments and hypotheses. Applied Linguistics, 10, 52-79.

Penny Ur’s reply

Thanks for your challenging response, Geoff! I’ll try to respond!

I don’t think I did, actually, in my piece, advocate coursebooks based on a grammatical syllabus? All I said was that the research on grammar teaching or about TBLT is inconclusive. You produced references against explicit grammar teaching and for TBLT: these could easily be countered with evidence such as that produced by Norris and Ortega (2002) in the first case or arguments put forward by Michael Swan (2006) in the second. And a lot of doubt has been cast on the practical implications for teaching of the Pienemann’s teachability hypothesis: see for example Spada and Lightbown, 1999. But my point in this case was not that materials should or should not be grammar based or that TBLT is or is not a good idea: but simply that we have no conclusive proof either way, and a lot of conflicting evidence. On the other hand where we DO have substantial and reliable evidence to support a conclusion that affects materials writing, and we have access to it, I think we have a moral obligation to take it into account in our own composition.

Norris, J. M. & Ortega, L.. (2001). Does type of instruction make a difference? Substantive findings from a meta-analytic review. Language Learning, 51, Supplement 1, 157-213.
Spada, N. & P. M. Lightbown. (1999). Instruction, first language influence, and developmental readiness in second language acquisition. Modern Language Journal, 83 (1), 1-22.
Swan, M. (2005). Legislation by hypothesis: the case of task-based instruction. Applied Linguistics, 26(3), 376-401.

My second comment

Dear Penny,

Thanks for your reply. I wasn’t referring only to your piece here, but rather to what you’ve said in recent conference talks and in your book “A Course in Language Teaching”. If we take all these into account, I think it’s fair to say that you have criticised, and indeed dismissed, TBLT without properly discussing different versions of it, and commended coursebooks which implement a grammar-based syllabus through PPP, without properly discussing the evidence from research findings. My general point is that while you accept the role of mediator between academics who carry out empirical research into (instructed) SLA and teachers, you use this role to argue for a very partisan view of ELT, which is often at odds with research findings.

The works I cited were in support of findings in interlanguage development, and all four of the academics you cite – Spada, Lightbown, Norris and Ortega – support the consensus view among scholars of SLA that instruction can’t affect the route of interlanguage development. They also support the commonly held view that basing ELT on the presentation and practice of pre-selected formal elements of the grammar in a pre-determined order, a methodology which you recommend, flies in the face of robust research findings. It’s surely your duty to discuss these matters with the teachers you counsel and to explain why you disagree with these views.

You cite the work of Norris and Ortega (2002) as evidence of the value of explicit grammar teaching. Nowhere do these scholars endorse the kind of presentation and practice of successive bits of grammar that you recommend in your book “A Course in Language Teaching”.

You cite the work of Swan against TBLT. Nowhere does Swan deal with Long’s particular form of TBLT as described in his 2015 book.

You say “a lot of doubt has been cast on the practical implications for teaching of the Pienemann’s teachability hypothesis: see for example Spada and Lightbown, 1999”. One practical implication of Pienemann’s teachability hypothesis has already been mentioned: teaching should respect the learners’ own internal syllabus, and this is an implication that Spada and Lightbown accept. Pienemann’s hypothesis doesn’t have clear implications for how to teach, but it does have very clear implications for how not to. You choose to ignore these implications when you encourage teachers to carry on using coursebooks.

Of course we don’t have conclusive proof about the efficacy of grammar-based materials or TBLT. But we do have a great deal of evidence to suggest that you misguide teachers when you tell them that using coursebooks and other materials to support a grammar-based PPP methodology is a perfectly fine way to go about ELT. On the one hand you insist on the need for ELT teachers to be more critical and to pay more attention to research findings, while on the other hand, you don’t deal critically with research findings that flag up the false assumptions on which your own approach to ELT is based.

Catherine Richards’ comment 
I am a little bemused by your bad tempered, disrespectful approach to the exchange of ideas, Geoff Jordan. While some of your points may indeed be valid and worthy of debate, I don’t think you’re much interested in commenting on Penny Ur’s piece on the importance of materials writers being research-aware – the topic here.

You seem much more interested in attacking her for her views on Task Based Learning and for her views on the use of coursebooks that appear to follow a grammar-based syllabus. My own experience, Geoff, is that the vast majority of English teachers in the world don’t work in private language schools with small groups of motivated students and enthusiastic colleagues (with CELTAs and DELTA’s.) They are state school teachers, language or philology graduates, speak English as an L2, put up with poor working conditions – big classrooms, full timetables, hours of admin and stress to the eyeballs. For this reason they love coursebooks, love bite-sized grammar chunks – they are under obligation to test 3 times a semester – and they loathe Task Based Learning almost as much as they loathe pompous academics telling them that they should embrace it and that much of what they do is wrong (because it is based on false assumptions?)

We need to understand teachers first, before we beat them around the head with the latest theory, don’t you think?

My unpublished reply to Richards

Hi Catherine,

I don’t tell teachers what to do, and I certainly don’t beat them around the head with the latest – or any – theory. I dedicate just a bit of my time to taking leading members of the ELT establishment to task for writing books on how to teach English and giving PD teacher training courses which ignore research findings and misguide teachers by telling them that using coursebooks and other materials to support a grammar-based PPP methodology is a perfectly fine way to go about ELT.

Your only defence of coursebook-driven ELT is that it’s convenient. It’s based on false assumptions? Pah! It flies in the face of robust research findings? Never mind! The critic is a bad tempered, disrespectful, loathsome, pompous academic, so we can safely ignore his arguments.

The gentrification of inner cities

There’s increasing interest in what’s happening to neighbourhoods in big modern cities which suffered a drastic decline in the 1970s and 1980s and have now been “gentrified”. A 4-stage evolution has been detected: decline -> regeneration -> displacement -> gentrification, and the real problem is how to arrest the last two stages. Bennie Gray has written various investigative journalism bits about this, and I’m working with him now on something related to it all. What follows is based on Bennie’s work so far.

The story of Covent Garden is an example of the decline to gentrification process.

In 1973 the whole of Covent Garden, which, for more than a century, had been devoted to selling fruit, vegetables and flowers, moved en masse to a new site in Battersea. As a result, ten acres of wonderful old buildings fell empty. There were plenty of developers hungry to knock the whole lot down and put up some lucrative new office blocks, but the government decided to intervene, and slapped protective orders on most of the buildings, with the inevitable result that they became neglected and began to deteriorate.

The next stage of the cycle started when various dodgy people (“deviants” they’d probably be called by town planners) began squatting in the buildings, passing virtually unnoticed by those who preferred to look the other way. They included artists and other arty-crafty, alternative life style desperados looking for free space; drug dealers looking for a place to hide; winos looking for somewhere to crash, dossers looking for somewhere to, well, doss; and so on.

Gradually, a kind of demi-monde community grew up, which was perceived as wicked, which is to say, somehow glamorous and authentic. This in turn began to attract small-time hippy entrepreneurs who opened cafes, craft shops, tattoo parlours, art galleries and alternative therapy places. Quite soon, Covent Garden had become very cool indeed, and thus, more respectable activities began to take place. Art galleries equipped with proper lighting appeared, and restaurants with proper kitchens and tablecloths soon followed. Even the squatters moved up a notch, taking an interest in plumbing, for example.

By around 1980, Covent Garden had become a popular tourist destination, which was when the big corporations began to move in, taking advantage of all that commercially fertile coolness.  Rents quickly shot through the new atrium-clad roofs; before long the spirit which had characterised the area in the 1970s evaporated; and by the noughties, Covent Garden had become just another cute, crowded, over-priced shopping centre.

So this is the cycle. An original set of buildings with an original purpose loses its purpose and the buildings fall derelict. They get colonised by people who, although generally regarded as disreputable, create a thriving community. The place gets talked about and becomes a tourist destination. This generates investment, and although the area prospers, it loses its original appeal. Spiralling rents mean that the area loses the very people who created that appeal in the first place – they simply can’t afford to stay. In the case of Covent Garden, many emigrated to run-down, cheap parts of Shoreditch and Hackney, and the same old cycle started again.

Another example is Trellick Tower in North Kensington, a huge block of council flats which was once the tallest residential building in Europe. Designed by Erno Goldfinger, it’s a fine example of the “new brutalism”. In its early life, Trellick Tower was a great success – fantastic views, airy and spacious apartments, lots of balconies, etc. – but thanks to appallingly negligent management, it fell into decline to the point that it became a very dangerous place to live, teeming with vandals and drug dealers.

In the 1980s, Trellick Tower started to be re-assessed, as part of the general re-assessment of North Kensington as a place to live. Nearby Notting Hill, which in the early 60s had been the scene of riots associated with the “No Irish, No blacks, No dogs” policies adopted by slum landlords, had become absurdly expensive, and North Kensington, despite the crime and the drugs, was dripping with the “authenticity” and “street credibility” that marks the start of stage 2.

As the early 90s went by, more and more Japanese tourists came to photograph this icon of brutalist architecture. The clicking cameras, together with builders’ skips, mineral water bars and artisan bakers, were reliable markers of incipient gentrification. Conned by Margaret Thatcher’s catastrophic “right to buy” policy, the poor and needy original tenants sold up and moved out, and the hipsters moved in. Trellick Tower became a desirable place to live: the land at the base of the building was turned into a park, the common parts were carefully restored, and the flats themselves were spruced up. Thus, Trellick Tower went from being good social housing, to a near deathtrap, to a spectacular example of gentrification.

That same cycle is happening all over Europe and, perhaps most spectacularly, in the US. One indication of the extent of gentrification is the fact that in recent years, American speculators have taken to buying tracts of rundown inner-city property and land, and then paying groups of the aforementioned “artists and other arty-crafty, alternative life style desperados looking for free space” to live and work in the area for free, in the sure knowledge that their mere presence will boost the desirability of the land and the property they occupy.

This process was described by Jane Jacobs in her groundbreaking books on 20th-century urban planning, notably “The Death and Life of Great American Cities”. She had no panacea; she simply pointed out that people are as important as property and that people, in the end, determine what happens with property.

As I’ve said, Bennie Gray has written about this cycle and informed what I’ve written here. Bennie masterminded the Custard Factory project in Birmingham, and he actually managed to prolong the second stage of regeneration for more than 20 years, just because he owned it. In Bennie’s opinion, there are only two ways to blunt the damage caused by gentrification, neither of them, he confesses, being of much use. The first is a form of land value taxation, as proposed by Henry George over a century ago. Liberals of one stripe or another have been tinkering with this mechanism ever since, and there’s general agreement that it won’t fly. The second way involves either hugely rich benefactors or the State interfering with the so-called free market. The Duke of Westminster could, if he had a change of heart (don’t hold your breath), freeze development in all the parts of London that he owns in their cozy second stage, and Xi Jinping or Raul Castro could do the same (ditto). Bennie’s pessimistic, of course, and he refuses to even consider the third option – see below.

A third possibility is land trusts, described in some detail on the Context Institute website. These trusts divide land rights between immediate users and their community, and examples of them are springing up all over the world, including in India, Israel, Tanzania, Canada, and the US. We may distinguish between conservation trusts, community land trusts, and stewardship trusts.

A conservation trust preserves some part of the natural environment, either through full ownership of a piece of land that it then holds as wilderness, or by owning the “development rights” to an undeveloped piece of land. Once conceded, these rights allow the trust to veto any attempt to develop the land.

A community land trust (CLT) attempts to remove land from the speculative market and to make it available to those who will use it for the long term benefit of the community. A CLT generally owns full title to its lands and grants long term renewable leases to those who use the land. Appropriate uses for the land are determined by the CLT in a process comparable to public planning or zoning. The leaseholders own the buildings on the land and can take full benefit from improvements they make to the land. They cannot, however, sell the land, nor can they usually rent or lease it without the consent of the trust. The Institute for Community Economics in the USA is one of the major support groups for the creation of community land trusts in both urban and rural settings.

The stewardship trust combines features of both the conservation trust and the CLT, and is being used now primarily by intentional communities and non-profit groups such as schools. The groups using the land (the stewards) generally pay less than in a normal CLT, but there are more definite expectations about the care and use they give to the land.

 

In each one of these types, the immediate users have clear rights which satisfy all of their legitimate use needs, while the needs of the local and larger community are met through representation on the trust’s board of directors. Thus, by dividing what we normally think of as ownership into “stewardship” (the users) and “trusteeship” (the trust organization), land trusts are pioneering an approach that better meets all the legitimate interests.

The system is, of course, still limited by the integrity and the attitudes of the people involved. Many anarchists will suspect that the idea is manipulative and are right to be sceptical, particularly when considering the possibility of any kind of land trust arrangement in big cities. So what is it then I wonder: nutritious food for thought, or sickly thin soup?

November Calendar

Learning to Teach (Better) with Penny Ur (OBE)

She’s back! This month, four fun-packed, informative webinars from the foremost purveyor of up-to-date ELT obsolescence. Lots of useful tips on how to carry on teaching in the tried and trusted PPP, grammar-based way that Penny herself remains so fully committed to. You’ll be confidently reassured that all the research is rubbish and that there’s absolutely nothing wrong with a methodology that demands the impossible of students. Carry on slogging through the coursebook!; keep those Concept Questions coming!; and above all, never forget: the teacher knows best!

Learn how to

  • use the latest, digitalised drill-and-kill exercises
  • create your own baffling phrasal verb tests for no particular reason
  • project pages from Murphy’s Grammar in Use on the night sky
  • time “free conversation practice” for just before the bell goes
  • write challenging dialogues about everyday British life and have them recorded by unemployed actors in regional accents
  • unobtrusively wake up sleeping students

and much, much more.

On completion of the course, for an extra $59 you’ll get a worthless Certificate if you answer 2 easy questions about the present perfect.

Teach abroad as an English language assistant for the British Council  

Applications for the 2018/19 academic year open on 6 November 2017. If you’re one of the lucky successful applicants you’ll pay for your flight, accommodation, travel insurance, visas, and so on and then receive a miserable monthly salary in return for the privilege of being a member of one of the most snobbish, ethically questionable UK organisations of the lot.

You’ll work in one of BC’s lucrative commercial ELT operations which in 2016 earned them a tax-free income of approx. £1 billion. These activities have led to accusations that the BC keeps valuable commercial information to itself; that its one-third share in the IELTS biases its testing and certification policies; that it competes with an unfair advantage to train teachers for overseas governments; and that its not-for-profit status means that the income it gets from English teaching is exempt from corporation tax in many countries, unlike its competitors.

But never mind; it’ll look great on your CV, you’ll get a free British Council RP phonetic wall chart to decorate the hovel where you’ll live, and if posted to Caracas, you’ll have free access to the BC’s Rudyard Kipling Memorial Library, though getting there can be a bit tricky after dark.

TESOL Kuwait  

Another chance to see these twin pillars of the ELT establishment strut their stuff! Just in case you missed their wonderful presentations on Being the Best Teacher You Can Possibly Be in 1977, this is a special “Forty Years On Anniversary Ruby Re-run”, where not a single word has been added or taken away from the original scripts.

And what better venue than Kuwait for two of the richest men in ELT, both multi-millionaires with a string of best-selling coursebooks to their names, to celebrate! When their session draws to a close, dozens of falcons owned by the country’s leading families will swoop over the auditorium, scattering $1,000 bills over the audience, while keys to the limited edition Bugatti Veyron are presented to the speakers.

Meanwhile, outside, life goes on for ordinary teachers, migrant workers who form two-thirds of Kuwait’s population. They work long hours for low salaries, have precarious working conditions, and almost no say in what or how they teach.

On 5th August 2017, a class-action civil lawsuit and a criminal investigation against the State of Kuwait were opened over claims of decades of unpaid wages owed to thousands of foreign teachers, allegedly driven by a policy of discrimination. The court also issued a protective order against public threats by the Kuwaiti Ministry of Education intended to intimidate workers into not seeking access to justice through the international judiciary.

TEFL Equity Advocates: a conflict of interests

Every day on Twitter there are inspirational advertisements for the TEFL Equity Advocates.

They invite everybody to join in the fight against discrimination against NNESTs.

When you go to the TEFL Equity Advocates website and click on the WEBINARS and TEFL EQUITY ACADEMY options on the home page, you see promotional material for training courses that Kiczkowiak runs or supervises.

The just cause of stamping out discrimination against NNESTs should, in my opinion, be rigidly separated from Kiczkowiak’s attempts to sell his own stuff.

A reply to Andrew Walkley’s post on teaching a unit from the “Outcomes” Coursebook

My attempts to comment on Andrew’s post – Complicating the coursebook debate: Part 4 – were unsuccessful, and I wrongly assumed that I’d been the victim of censorship. Andrew has explained (see the Comments section below) that nobody tried to stop my comment being published on the website, and I conclude that I, not he, did something wrong; so I apologise to him for the false accusation. Here’s what I wanted to put as a comment on Andrew’s post.

Hi Andrew,

Thanks for this interesting account of how you’d teach the sample unit from your coursebook. You give every indication of being an experienced, thoughtful teacher and I’m sure your students appreciate you. When we get down to this level of detailed teaching procedures, all the particularities of context play a part in deciding between the options and the learning outcomes, as you repeatedly recognise.

Our disagreement centres on the key issue of synthetic versus analytical syllabuses. You use a synthetic syllabus, where the teacher or coursebook writer decides what bits of language are to be taught, and where most of the time is spent teaching students explicit knowledge about the language: grammar, lexis (lexico-grammar if you like) and pronunciation. I use an analytical syllabus where the learners’ needs determine what is to be taught, and where most of the time is spent on scaffolding students’ engagement in pedagogic tasks designed to help them to develop the implicit knowledge required to carry out real life tasks in the L2.

Your description of how you’d use your coursebook makes it clear how heavily you rely on explicit teaching.  It fits well with what you say in Teaching Lexically about the “6 principles of how people learn languages”.  I quote:

Essentially, to learn any given item of language, people need to carry out the following stages:

  • Understand the meaning of the item.
  • Hear/see an example of the item in context.
  • Approximate the sounds of the item.
  • Pay attention to the item and notice its features.
  • Do something with the item – use it in some way.
  • Repeat these steps over time, when encountering the item again in other contexts.

Leaving aside any inadequacies of this mechanistic “explanation”, what stands out is the scant importance given to stage 1: Understand the meaning of the item. You seem impatient to get on to the next stages ASAP, recommending translation as the easiest, most efficient way of getting “meaning” out of the way, so as to get to the real heart of the matter, namely teaching words.  You’re thus at odds with those who believe that giving students opportunities for implicit learning by concentrating on meaningful communication should be a guiding principle of ELT. Meaningful communication about things students have indicated that they need to talk about, the negotiation of meaning, finding their voice, expressing themselves, working out the illocutionary force of messages, catching nuances, compensating for inadequate resources, and all the sorts of things involved in implicit language learning should, for us, be what goes on most of the time in class, not something that’s allotted a ten minute slot here and there. Your plan for how to work through the sample unit involves spending a great deal of the time talking about English; there seems to me to be far too little time devoted to letting students talk in the language. Right at the end you say: “Finally, there is a conversation practice”. Finally! But even here, you add “This is an opportunity for students to re-use language that has been ‘taught’ over the previous sequence of tasks. In fact, we ask them to write the conversation, which allows them to do this more consciously”.

Language learning is not, I suggest, what you assume it to be, and ELT is not best carried out by trying to teach thousands of “items”, especially when you can’t explain the criteria for their selection, and especially when Dellar insists on also teaching the curious, bottom-up grammar which attaches itself to so many of them.

Grammar and vocabulary teaching: What a difference a brain makes

In my last post, I argued that Hugh Dellar’s negligent misrepresentation of grammar models of the English language, such as Huddleston’s (2009) or Swan’s (2005), and his inability to provide any clear description of an alternative “bottom-up approach to grammar” combined to make his advice to teachers useless. In this post, I take a quick look at two texts that discuss aspects of grammar and vocabulary teaching, just to give some indication of how useful an articulate, well-informed discussion of such matters can be. The first is an article that appeared in ELTJ and the second a recent book which you can read a bit more about on Mura’s blog, where he interviews the authors.

Spoken Grammar: What is it and how can we teach it?

McCarthy and Carter (1995) argue that learners need to be given exposure to both spoken and written grammars, and that the inter-personal implications of spoken grammars are important. They use a relatively small corpus of spoken English, constructed specially for the study of spoken grammar, where particular genres of talk are collected.  In the article, they use 2 samples of data.

[Sample data reproduced from McCarthy & Carter, 1995, p. 208.]

In Sample 1, a couple are making food (a curry) for a party. The authors note that ellipsis is the most salient grammatical feature of the sample.  For example:

  • D: Didn’t know you had to do that.
  • B: Don’t have to…
  • B: Foreign body in there

[The authors’ comment on these examples is reproduced from McCarthy & Carter, 1995, p. 209.]

The article goes on to summarise the grammar features which stand out from an examination of the data:

1. Tails: slots at the end of clauses for more information.

  • they tend to go cold…, pasta
  • He’s quite a comic, that fellow
  • It’s very nice, that road up to Shipton.

2. Reporting Verbs: Use of past continuous rather than past simple.

  • He was telling me…..
  • They were saying …..

3. Frequent use of tend to

  • I tend to put the salt in last
  • It tends to go cold
  • I tend not to use names

4. Question tags: Used most often when meaning is being negotiated

5. Will / going to: More to do with interactive turn-taking than the semantics of time.

Finally, McCarthy and Carter propose a “Three Is” methodology (Illustration, Interaction, Induction) in place of the traditional PPP methodology.

[The “Three Is” framework is reproduced from McCarthy & Carter, 1995, p. 216.]

I have given the most skeletal outline of this article, which includes a lot more information about the data and a series of classroom activities designed to draw upper intermediate students’ attention to some of the most salient aspects of the spoken grammar.

Successful Spoken English: Findings from Learner Corpora 

The book defines a successful English speaker in terms of his/her communicative competence at the various levels outlined in the CEFR, from B1 to C1. This in itself is an interesting and welcome innovation, which moves us away both from the Native Speaker norm and from the vague and incremental CEFR scales. The authors explain how they measure successful spoken language and then discuss the data which emerge from their searches of the UCLan Speaking Test Corpus. As the authors explain to Mura:

This contained data from only students from a range of nationalities who had been successful (based on holistic test scoring) at each level, B1-C1. As points of comparison, we also recorded native speakers undertaking each test. We also made some comparisons to the LINDSEI (Louvain International Database of Spoken English Interlanguage) corpus and, to a lesser extent, the spoken section of the BYU-BNC corpus.

They begin with an examination of the data pertinent to linguistic competence, describing frequency profiles, frequency lists, keyword lists and lexical chunks at each level, from B1 to C1. It makes fascinating reading, and particularly interesting (again, I take this from Mura’s interview with them) is the finding that higher levels of linguistic competence are not characterised by the use of a much greater range of vocabulary, but rather by a greater flexibility in the use of the words learners already knew – most of which remained in the top 2,000 most frequent words in the corpora. As they made progress, students were able to use words with a wider range of collocates for a wider range of functions.
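For readers who want to see what such a frequency profile looks like in practice, here’s a minimal sketch in Python. It uses NLTK’s Brown corpus purely as a stand-in, since the UCLan Speaking Test Corpus isn’t publicly distributed; the corpus choice and all names here are my illustrative assumptions, not the authors’ method.

```python
# A minimal sketch of a frequency profile: how much of the running text
# is covered by the top 2,000 most frequent word forms?
from collections import Counter

import nltk
nltk.download("brown", quiet=True)  # fetch the corpus on first run
from nltk.corpus import brown

tokens = [w.lower() for w in brown.words() if w.isalpha()]
freq = Counter(tokens)

# The 2,000 most frequent word forms in this corpus.
top_2000 = {w for w, _ in freq.most_common(2000)}

# Proportion of all running words falling inside that band.
coverage = sum(c for w, c in freq.items() if w in top_2000) / len(tokens)
print(f"The top-2,000 band covers {coverage:.0%} of all tokens")
```

Run over learner data at successive levels, a profile like this would show whether, as the authors found, progress shows up not as a bigger vocabulary but as more flexible use of the same high-frequency band.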

In the subsequent chapters on strategic, discourse and pragmatic competences, each of which ends with a lively, well-considered “Discussion” section, more fascinating insights are shared, and the teaching implications are discussed. For example, in tune with McCarthy and Carter (1995), discussed above, the authors stress the need to distinguish between different spoken genres and to recognise the cooperative nature of much spoken discourse: the ability to co-construct conversations, and to develop ideas from and contribute to the turns of others, is one important mark of increasingly successful speakers.

The authors suggest that one practical way for teachers to use the book is by taking advantage of the lists of frequent words, keywords and chunks for each level, using, for example, the language of successful B2 level speakers to inform what they teach to B1 level speakers. This is a principled and powerful way of choosing the vocabulary and lexical chunks to concentrate on in any particular course, provided that the lists are taken from relevant corpora (that is, corpora built from learners performing relevant tasks).
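A toy version of that comparison might look like the following sketch: a simple keyword analysis asking which words are over-represented in a ‘B2’ sample relative to a ‘B1’ sample. The two token lists and the smoothed log-ratio measure are invented for illustration; the book’s own lists come from properly scored test data.

```python
# A toy keyword comparison between two tiny invented 'corpora'.
from collections import Counter
import math

b1_tokens = "i like the food i like my town it is nice it is nice".split()
b2_tokens = "i would say the food tends to be quite nice actually".split()

b1, b2 = Counter(b1_tokens), Counter(b2_tokens)
n1, n2 = len(b1_tokens), len(b2_tokens)

def log_ratio(word, smoothing=0.5):
    """Log2 ratio of smoothed relative frequencies (B2 over B1)."""
    return math.log2(((b2[word] + smoothing) / n2) /
                     ((b1[word] + smoothing) / n1))

# Words most characteristic of the B2 data: candidate items to teach
# to B1 learners, on the strategy suggested above.
for word in sorted(set(b2_tokens), key=log_ratio, reverse=True)[:5]:
    print(word, round(log_ratio(word), 2))
```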

Another clear message from the book is that successful speakers need to develop all aspects of communicative competence (linguistic, strategic, discourse and pragmatic competence) and that, therefore, teaching should focus on all of these areas rather than spending too much time on learning an unprincipled list of lexical chunks.

References 

Huddleston, R. (2009)  Introduction to the Grammar of English. Cambridge: Cambridge University Press.

Jones, C., Byrne, S. and Halenko, N. (2017) Successful Spoken English: Findings from Learner Corpora. London: Routledge.

McCarthy, M. and Carter, R. (1995) Spoken Grammar: What is it and how can we teach it? ELTJ, 49/3, 207-218.

Swan, M. (2005) Practical English Usage. Oxford: Oxford University Press.

Dellar on Grammar Teaching

Over the last few years, Hugh Dellar has given various versions of the same talk on how to teach grammar. The latest version was at the IATEFL Hungary conference last week, called:

Following the patterns: colligation and the necessity of a bottom-up approach to grammar

At the IATEFL conference in Glasgow earlier this year, the title was:

Following the patterns: colligation and the need for a bottom-up approach to grammar.

So he’s gone from seeing “a need” to seeing “the necessity”, a hardening of line perhaps explained by his ever more detailed analysis of how lexical items “grammar” in their own uniquely primed ways.

The talk starts and finishes in various different ways. One way it starts (e.g. Brno, April 2017) is with Dellar insulting Toby Young (that’s him in the photo above), a British journalist who’s a stickler for grammar. Dellar shows Mr. Young’s photo and asks if anyone in the audience recognises him. “Well thank your lucky stars if you don’t!” says Dellar, who assures everybody that Mr. Young is a privileged, middle-class snob “in possession of a rather punchable face”. Another way it starts (e.g. “Teaching Grammar Lexically”) is with Dellar talking learnedly about “Chomsky and the whole idea of structuralist grammar” and then going on to explain how ELT suffers from “the tyranny of grammar teaching”.

A popular way Dellar finishes is by reading out his “poem”, a re-working of Larkin’s classic “This be the verse”,  which starts out like this:

They fuck you up, your language teachers,

They don’t mean to but they do.

They plague you with their rules of grammar

With extra homework (from Raymond Murphy’s English grammar probably photocopied) just for you.

 

But they were fucked up in their turn

By fools in old-style hats and coats

 Who half the time had games and fun

 And half Murphied you round the throat.         

Only one line in the second stanza has been changed from Larkin’s original; can you guess which one is Dellar’s? The whole thing (I’ve spared you the rest) must be seen as Dellar’s way of expressing his feelings about structuralist grammar, that “outmoded and outdated way of thinking about how language works”. But what exactly is the difference between this outmoded grammar and the sort of grammar that Dellar himself is so keen to promote? Well, that’s the middle bit of the talk, the bit that comes after insulting a middle-class journalist and attributing structuralism to Chomsky, and before reading his poem.

Grammar, Dellar tells us, is not one thing but a range of different kinds of things. Having dealt with the kinds of things that “big, top-down grammar” is, Dellar explains the seven kinds of things that characterise his “bottom-up” grammar. Here they are, and you’ll probably spot when I’m using Dellar’s own words.

  1. Grammar as lexis / phrases

You can teach a grammatical structure as a phrase. For example,

  • What’s it like?
  • I’ve never seen it but it’s supposed to be great.
  • I’ll do it later.
  • You should have told me.

You can teach students these kinds of very common examples of how grammar is realised without studying them as grammar.

  2. Phrases providing slots

There are lots of little patterns that are sort of flexible and sort of malleable that we can use in lots of varying ways, but not an infinite number of varying ways. For example

  • What are you doing…. tonight?
  • What are you doing ……. after this?

is a sort of fixed phrase that can be adapted a bit, but not much – there are only 6 ways of finishing the sentence in fact. And, while some phrases look flexible, in fact they aren’t. For example,

  • There’s no pleasing some people.

isn’t flexible. You can’t say

  • There’s no angering some people.

Why? Because nobody says it; it’s not a probable sentence in English; it’s a fixed expression. So sometimes you can alter the slots and sometimes you can’t.

  3. Collocations

If you think about collocations, and then collocations of collocations, you start thinking about grammar. For example, take the word responsible. Used as an adjective, we get

  • I’m responsible for hiring and firing.

But the noun form, responsibility, grammars differently:

  • It’s the responsibility of the boss to make decisions.

And the negative adjective forms different patterns again – it has its own, different internal grammar:

  • It’s irresponsible of you to leave a gun in the house.   

So the adjective form and the noun form and the negative form grammar, or pattern grammatically, in different ways. Thinking about the grammar of individual words gives you a different way of thinking about what grammar is and how it works. Instead of the big top-down grammar, which we just drop words into as Chomsky suggested, it’s thinking about the individual words that drive our communication and the grammatical patterns which often attach themselves to those particular words.

  4. Colligation

Colligation refers to the grammatical patterns which frequently attach themselves to words. For example, the verb to be born only colligates with the past simple passive. Likewise, the most frequent colligation of dub is the past simple passive (a claim of a kind that can at least be checked against a corpus; see the sketch after this list):

  • Bandem was once dubbed the Paris of the East.

Phrases colligate in weird ways. You can say

  • I can’t be bothered
but not
  • I can be bothered.

You can say

  • It was really surprising
  • It wasn’t that surprising.

– both are OK. But

  • It wasn’t that astonishing

sounds weird because ungraded or extreme adjectives don’t usually colligate with not. So again it’s about thinking about the patterns of individual words and making those patterns available to your students.

  5. Patterns

A very flexible pattern is

  • Just because … it doesn’t mean ……

There’s also this idea Nick Ellis has that we can learn the meaning of new words because we’ve learned prototypical examples of the patterns those words are encapsulated within. For example,

  • verb across a place

Nearly always, the verb that goes into that pattern is go – you go across a place. So nearly every other example you encounter of the verb across a place pattern will involve some variation on go, like move or travel. If you then encounter:

  • They man-doubled across the place.

you know that man-doubled is some kind of way of moving.

  6. Discourse Patterns

These are very useful. For example:

  • While some people think …. it nevertheless seems true that …..
  • According to ……, however in reality, …….
  7. Genre Dependencies

All genres have their own grammatical and lexico-grammatical conventions.
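Before moving on: Dellar’s colligation claims are at least empirically checkable. Here’s a rough sketch of the kind of corpus query that would test the claim about born made under point 4, again using NLTK’s Brown corpus as a convenient stand-in for a large general corpus; the was/were bigram heuristic is my simplification, not Dellar’s procedure.

```python
# How often does 'born' occur immediately after 'was' or 'were'?
import nltk
nltk.download("brown", quiet=True)  # fetch the corpus on first run
from nltk.corpus import brown

words = [w.lower() for w in brown.words()]
total = words.count("born")
passive = sum(1 for prev, w in zip(words, words[1:])
              if w == "born" and prev in ("was", "were"))
print(f"'born': {total} tokens, {passive} directly after was/were")
```

If the claim holds, nearly all the born tokens should turn up in the passive count; but note how little the result generalises – it tells us about born, and nothing about the next word on the list.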

Classroom Implications

Students need to see new vocabulary with the grammar that it’s often used with, and they need to see grammar with the lexis it’s used with. They need to think “This is a language lesson where we’re learning this grammar in this context, with this vocabulary, and we’re learning this vocabulary in this context with this grammar”. Apart from bottom-up grammar, teachers should do some general grammar explanation and use general rules of grammar carefully, avoiding bad rules. They should use PPP, but only with chunks of language and to build conversation; and they should encourage noticing by constantly drawing their students’ attention to how words grammar. Finally, two-way translation of whole sentences, cloze exercises, gap-fill exercises and drills are all good ways to teach students about language patterns.

Discussion        

How persuasive is Dellar’s argument that a bottom-up approach to grammar is “a necessity”? Teachers who use traditional, big, outmoded grammar to explain formal elements of English to their students might wonder just how they’re supposed to follow the patterns that Dellar is so captivated by. What are the patterns? Looking at his repeated presentations of bottom-up grammar, we find patterns so particular and idiosyncratic that they’re of very little help in making the kind of generalisations that generate grammatically correct utterances. It boils down to doing what Dellar does: going through a seemingly endless list of exemplars.

Looking back at the seven kinds of things that characterise “bottom-up” grammar, we see phrases that just have to be learned; phrases with slots, where there’s no way of knowing when you can alter the slots and when you can’t; collocations, where each form of a word has its own grammatical patterns attached; colligations, where phrases work in weird ways and “so again it’s about thinking about the patterns of individual words and making those patterns available to your students”. Even the patterns themselves that Dellar presents don’t allow for much generalisation.

Surely “the big top down grammar, which we just drop words into” (Dellar would be pleased to know, if only he’d listen, that Chomsky has absolutely nothing to do with this grammar) at least has the virtue of usefulness: lots of sentences can be generated by knowing, for example, that English syntax usually follows subject – verb – object order, that subject pronouns can’t be omitted, and that adjectives of three syllables form the comparative and superlative with more and most.

Dellar cites Michael Swan in support of his arguments, but Swan is, of course, a prominent critic of the lexical-chunk approach. While Swan sees a place for teaching ‘high-priority chunks’, he has forcefully argued against giving formulaic expressions so much attention that other aspects of language – ordinary vocabulary, grammar, pronunciation and skills – get sidelined.

Dellar advises teachers to “follow the patterns” without giving any clear description of what the patterns are or any explanation of how to “follow” them. In fact, as he’s demonstrated so many times in so many different talks, and in his magnum opus Teaching Lexically, Dellar’s methodology has at its heart the task of presenting and practising lexical chunks, a task which makes the labour of Sisyphus look like a walk in the park. Given that native speakers know tens or hundreds of thousands of such chunks (estimates vary, but 30,000 is conservative), a student could, as Swan has pointed out, learn 10 chunks a day, every day, for 7 long years (10 × 365 × 7 is roughly 25,500 chunks, still short of even the conservative estimate) and still not be a proficient user of English. So, as Dellar himself is fond of saying, “Good luck with that”.

Modern ELT is based on the idea that the best way to help students learn an L2 is by involving them in activities where they use the language as a vehicle for genuine communication. Grammar teaching is still regarded as important, and how best to go about it is the subject of on-going debate, but most agree that it should take a back seat, and that teachers should spend most classroom time involving their students in activities where they communicate with each other in the target language, not listen to the teacher talk about it. The sovereign principle of Dellar’s pedagogy is: “Teach Them About Words”; and that involves spending a great deal of the scarce, precious resource that is classroom time on the explicit teaching of words. Apart from not answering critics who doubt the efficacy of spending so much time on explicit teaching, Dellar has never given any satisfactory criteria for choosing which words to teach, or any persuasive arguments for the way he goes about teaching them.

Dellar’s approach to teaching grammar misrepresents the traditional pedagogical grammar of Swan, Parrott and others and poorly represents the work of Pawley and Syder, Nattinger and DeCarrico, Sinclair and others. It’s myopic, obsessive and incredibly boring. In any talk that Dellar gives, he can’t go for five minutes without offering up some of his precious treasure:

It’s the small words that are such fun, yeah? I mean, I think it’s really important that we see that. Like even for example. The only way to explain what even means is through lots of examples.

  • I’ve had a really busy day. I haven’t even had time for a coffee. Yeah?
  • I’ve been on my feet all day I haven’t even had time for a break.
  • She doesn’t drink, doesn’t smoke, doesn’t even swear.
  • He’s got a semi-detached house outside London, a new car with four wheels and a steering wheel, he’s even got some dosh in the bank. Yeah?
  • It’s past midnight, they’re all falling asleep, but I haven’t even got to the best bit yet.
  • I don’t know what Chomsky said, I don’t understand Nick Ellis, I can’t even spell my own name.
  • Zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
  • Zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
  • zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz

 

Open letter to Anthony Teacher on slaying PPP

Dear Anthony,

First, let me say how much I like your blog and all the work you do on Research Bites.

I write in reply to your post Is it time to SLAy PPP?, which argues that my criticisms of the PPP approach to ELT methodology can be extended to any kind of explicit instruction. I’d like to argue that you fail to identify the root of the problem, viz. the synthetic syllabus.

Here’s a brief summary of your argument, which I hope you think is fair:

  1. Learners are constrained in what they learn by the route of interlanguage development.
  2. Instruction cannot affect the route of interlanguage development in any significant way.
  3. If instruction has no role in language learning, then what’s the point of teaching? Instructed SLA seems like an oxymoron and discussions of both PPP and TBLT (Jordan’s preferred methodology) are rendered moot.
  4. Needs analysis can’t take account of learners’ developmental readiness. Our knowledge about which aspects of language develop in a fixed order, and why they do so, is still too limited to make reliable pedagogical decisions.
  5. Therefore, the instructed SLA approach seems defeatist.
  6. But the effectiveness of instruction need not be as limited as the claims above suggest. While SLA research shows a firm order of development, we have learned that instruction can certainly impact it. Through understanding students’ wants and task needs, offering meaningful opportunities for language learning, explicit instruction, corrective feedback, working with cognitive load in mind, and providing opportunities for recycling (all evidence-based principles), we can affect the route, speed, and level of language learning. With these principles in mind, both TBLT and PPP have their place.

Points 1 and 2 are fine, but let me include here Mura’s comment to your post. He gives a summary quoting VanPatten from his radio show:

Instruction makes a difference in the short-term (a week, 2 weeks, at most a month) but:

  • tests tend to measure explicit knowledge
  • long term (8-9 months, 1 year) gains disappear
  • sometimes instruction impedes acquisition, slows it down (3 studies only though)

Moving to the rest of the argument, I think it wrongly assumes that “providing the right instruction” depends on “pinpointing learners’ developmental readiness” for it. Interlanguage development moves through various stages, but it isn’t a linear process, and therefore it isn’t best helped by presenting and practising bits of language IN ANY ORDER. You’re right to say that needs analysis of the usual type (What is the learner’s present level of proficiency, as measured by grammar and vocabulary tests, competence in the four skills, “can do” statements, and so on? Where does he/she want to end up?) can’t pinpoint learners’ developmental readiness for this or that type of instruction, but you’re wrong to suppose that this is the only kind of needs analysis available (tasks as the unit of analysis is the alternative I’ll discuss later), and equally wrong to suppose that we have to identify the point learners have reached in their interlanguage development in order to provide efficient teaching.

The faulty argument stems, I suggest, from regarding English as an object of study, and supposing that the only practical way English as an L2 can be taught is by using some kind of synthetic syllabus, where the English language is divided up into hundreds of artificially separated pieces, and then taught through a process of presenting and practising the pieces in a pre-determined order. When their learning is framed by a synthetic syllabus, students are faced with the impossible job of putting Humpty Dumpty back together again, without even the benefit of having seen him before he got smashed to bits. The whole project is doomed to failure: regardless of how the language is cut up into bits, how those bits are categorised, organised and sequenced, and how they are presented and practised, it won’t work, because students will only learn what they’re ready for. And trying to make sense of it by pinpointing each learner’s developmental readiness is a fool’s errand, because there is no such point. The view of language learning which underpins any synthetic syllabus is the same as the view adopted by the CEFR and Pearson’s Global Scale of English: it’s a linear, incremental, bit-by-bit “climbing the ladder” process involving the proceduralisation of declarative knowledge. SLA research tells us that language learning is nothing like this representation of it.

I find Point 6, which attempts to rescue your argument from its depressing conclusion in Point 5, too optimistic, I’m afraid! First, we can’t affect the route of language learning, and second, all the other good things you suggest – offering meaningful opportunities for language learning, explicit instruction, corrective feedback, opportunities for recycling, etc. – need a coherent framework which doesn’t clash with research findings. If they happen within the framework of a synthetic syllabus, provided by a General English coursebook for example, then they won’t work. As long as language teachers try to teach students the pre-prepared, pre-packaged stuff they or their bosses think makes up an attractive “course of English”, they’ll be forced to deal, one way or another, with their students’ inability to learn it, and they’ll never teach as well as they could if they took a different approach. My (unoriginal) argument is that all ELT based on implementing a synthetic syllabus is fundamentally flawed and hugely inefficient.

An alternative to the kind of needs analysis you refer to is the kind Long (2015) refers to, where real-world tasks are used as the unit for needs analysis. The question informing the needs analysis is: What tasks do the students need to perform in the target language?  The tasks need to be clearly described, using either the ready-made job descriptions that exist in many sectors (including education, business, the professions, public administration, the military) or descriptions from what Long calls “linguistically naïve but work experienced informants”. From a task analysis, an analytical syllabus is designed, where target tasks are the source of pedagogic tasks, which themselves are defined as simpler versions of target tasks, not in language learning terms. This allows students’ real world needs, rather than teachers’ or coursebook writers’ ideas about English as an object of study, to guide the course, and it allows teachers to work with students in a way that respects the learners’ interlanguage development, while at the same time offering explicit instruction to help students with formal aspects of the language, vocabulary learning and so on. Note that TBLT as outlined here has nothing in common with the use of tasks in grammatical or functional or lexical syllabuses.

As you probably know, I’ve explained my objections to coursebook-driven ELT elsewhere, and I’ve tried to answer those who defend it. If we accept SLA research findings, then we should reject the use of synthetic syllabuses as a way of organising ELT and explore alternatives, such as the TBLT outlined here, Dogme, content-based language teaching, and immersion courses.

I look forward to hearing your thoughts; thanks for raising these important issues, Anthony, and keep up the good work.

Best,

Geoff

Appendices

Appendix 1: Steps in Long’s TBLT Syllabus Design. (Long, M. (2015) SLA and TBLT p. 224)

Appendix 2: Characteristics of tasks (Long, M. (2015) SLA and TBLT p. 233)

“Patrick”, the Common European Framework of Reference and Interlanguages

I’m afraid I don’t know who “Patrick” is, because his avatar gives no information about him, but whoever he is, he regularly leaves comments on Scott Thornbury’s blog. Most recently, Patrick has given his opinion on the CEFR and on research into interlanguages. On the former, in comments on P is for Predictions, he says he likes the CEFR and that its growing influence throughout the world is “a good thing”. Dismissing criticism of the CEFR, Patrick says:

Fulcher’s recently cited criticism doesn’t stand up to analysis – it does however demonstrate that either he hasn’t read or understood or used the CEFR, or that he’s more interested in distorting its message in the interests of marketing his new theory: he doesn’t even get the quotations right. Sadly this kind of shoddy scholarship seems to be widespread in TESOL academia.

He adds

Re the way it was devised, yes there have been critics .. , but if the choice is between the judgement of experienced teachers or the pseudo-science of some particular rating method then I’ll go with the former any day (there are so many rating methods to choose from anyway).

The following week, commenting on Scott’s post I is for Idiolects, Patrick says:

The way I see it Scott is that ‘interlanguage’ is one of the uglier of many unnecessary neologisms invented by academics, presumably to give them a sense that they are forging a profession: there are plenty of plain English alternatives.

Yes you can invoke it as a reason for accounting for the mismatch between teaching and learning/acquiring. But there are a thousand and one other potential reasons for the mismatch that have nothing to do with ‘interlanguage’.

But the main reason that I dislike it so much is that it is so tightly bound up with the notion that languages are learnt via mechanistic pre-programmed stages. The much-touted initial research on this was a travesty, both in terms of its design and its interpretation. And although these initial errors were highlighted and debated over an extended period of time, and refuted by subsequent research, the fact that they still persist in some quarters indicates that they are ideologically driven.

From memory, Rod Ellis gives a pretty even-handed account of this fiasco, and probably Norbert Schmitt too, but many of the academics writing in that particular field seem happy just to perpetuate unanalysed myth

Far be it from me to criticise anybody for expressing their views in a forthright way, but I’m afraid Patrick is talking so much baloney that his/her comments need a reply. It wouldn’t be polite for me to put my reply on Scott’s blog, hence this post.

Patrick has managed to get things completely back to front: it’s the CEFR which treats language learning as a mechanistic process where learners move along a series of stages in linear progression, and it’s the various hypotheses concerned with interlanguage development which insist that L2 learning, far from being linear, is in fact a process where lots of things are going on at the same time, one which exhibits plateaus, movement away from, not toward, the L2, and all sorts of U-shaped and zigzag trajectories. Let’s take a look.

The CEFR Framework 

There are six levels in the CEFR, from A1 to C2, each associated with a set of descriptors. At each level, a list of can-do statements describes what the learners can do, one example being what they can do when it comes to transactions to obtain goods and services:

[Scale descriptors for transactions to obtain goods and services, reproduced from the CEFR]

So here we have the progression from ‘can’t-do-much’ to ‘can-do-it-all’, as described by a scale that is statistically determined, hierarchically structured, and linear. The assumed linearity of such scales is contradicted by SLA research findings, including those on interlanguage development, which show that learners do not actually acquire language in this way. As Fulcher says:

The pedagogic notion of “climbing the CEFR ladder” is therefore naïve in the extreme (Westhoff 2007: 678), and so attempts to produce benchmark samples showing typical performance at different levels inevitably fall prey to the critique that the system merely states analytic truths (Lantolf and Frawley 1985: 339), which are both circular and reductive (Fulcher 2008: 170-171). 

Note here that I’m leaning on the work of Glenn Fulcher, who I’m sure would be upset to learn that he’s been accused of “shoddy scholarship” and of deliberately distorting the CEFR “message” in order to promote his own work.

Fulcher points out that the CEFR scales were made without any principled analysis of language use, and without reference to any explanation of how people learn an L2. The selection of descriptors by North was, as he admits (North, 1995), based solely on a theory of measurement, and relied entirely on intuitive teacher judgements rather than on samples of learner performance. The CEFR scales are therefore “essentially a-theoretical” (Fulcher 2003: 112), a critique which North and Schneider (1998: 242-243) accept.

The CEFR scales don’t relate to any specific communicative context, or provide any comprehensive description of communicative language ability. If we return to the description of transactions to obtain goods and services, we note that ‘goods’ and ‘services’ are grouped together, and thus no distinction is drawn between, for example, buying fish and chips and buying a new car, which are, of course, qualitatively different communicative transactions. McCarthy & Carter (1994: 63) demonstrate the problem with these two examples:

Customer: I’m interested in looking at a piece of cod, please.

Server: Yes madam, would you like to come and sit down.

Customer: A Ford Escort 1.6L please, blue.

Server: Right, £10,760, please.

The CEFR can thus be seen as an unstructured, incomprehensive list of things that language users might want to get done in a range of contexts. Any, or none, of these might be relevant to a particular testing situation, and any, or none, of them might be linked to any particular task types.

I’ll touch on one other bit of the framework: “transactions”, where the descriptors used at each scale level again rely on ‘can-do’ statements to define the levels. Davidson & Fulcher (2007) discuss a number of problems that arise from trying to use these descriptors, and I’ve chosen just two as illustrations:

  1. The descriptors mix participant roles within a single level. At A2, for example, the learner can ‘ask for and provide goods and services’, implying that they would be able to function as a shopkeeper or travel agent, as well as a procurer of goods and services.
  2. The distinction between levels is not at all clear, often resting on a vague notion of the ‘complexity’ of the transaction. For example, at level B1 learners can deal with ‘most situations’, as well as ‘less routine’ situations. But there is no indication as to what kinds of ‘less routine’ situations a learner might not be able to deal with, and no definition of ‘less’, ‘more’ and ‘most’. A2 is characterised by ‘common’, ‘everyday’, ‘simple’, and ‘straightforward’ transactions, but the reader is left to infer what these presumably ‘more routine’ transactions might be.

I won’t attempt any proper critique of the CEFR here, but I hope that this evidence at least indicates that its weaknesses can’t be brushed aside as breezily as Patrick suggests. We should also note that most leading scholars of language assessment view the reification of the CEFR as both theoretically unjustified and damaging. To quote Fulcher (2008: 170) again:

It is a short step for policy makers, from “the standard required for level X” to “level X is the standard required for…”, a step, which has already been taken by immigration departments in a number of European countries.

Few, except Patrick perhaps, would dispute that the original CEFR framework has undergone an unfortunate process of reification, so that the CEFR scales have now assumed the role of constants which are used in the exercise of power. Of course, this isn’t what Trim or North wanted, but we can’t, or at least we shouldn’t, deny that it’s happened.

As a final observation on Patrick’s remarks: Fulcher, far from misinterpreting “the CEFR message”, has more than once suggested that when the CEFR is seen as a heuristic model used at the practitioner’s discretion (as Trim and North intended), it can become a useful tool in test construction. But the context of language use is critical, since it’s the context that limits the inferences drawn from test scores, and restricts the range of decisions to which the score might be relevant.

Fulcher makes the basic distinction between ‘measurement-driven’ and ‘performance-driven’ approaches to assessment. Measurement-driven approaches, like the CEFR, derive meaning from a scaling methodology which orders descriptors onto a single scale and relies on the opinion of judges as to the place of any descriptor on the scale. In contrast, performance data-driven approaches are based on observations of language performance, and on generating descriptors that bear a direct relationship to the original observations of language use. Meaning is derived from the link between performance and description. As Fulcher says:

“We argue that measurement-driven approaches generate impoverished descriptions of communication, while performance data-driven approaches have the potential to provide richer descriptions that offer sounder inferences from score meaning to performance in specified domains”.

Interlanguages 

Patrick’s comments on interlanguage research and its relevance to language teaching need less comment.

  1. He’s welcome to his harmless opinion that ‘interlanguage’ is an ugly, unnecessary neologism invented by academics to bolster their confidence, although I wonder what “plenty of plain English alternatives” he has in mind.
  2. Likewise, to suggest that there are a thousand and one other potential reasons for the mismatch between teaching and learning that have nothing to do with ‘interlanguage’ is to say nothing of interest, since nobody would suggest otherwise.

As to disliking the term so much because it implies that “languages are learnt via mechanistic pre-programmed stages”, the research on interlanguages implies no such thing. In a post on the subject, I quote Doughty and Long (2003):

There is strong evidence for various kinds of developmental sequences and stages in interlanguage development, such as the well known four-stage sequence for ESL negation (Pica, 1983; Schumann, 1979), the six-stage sequence for English relative clauses (Doughty, 1991; Eckman, Bell, & Nelson, 1988; Gass, 1982), and sequences in many other grammatical domains in a variety of L2s (Johnston, 1985, 1997). The sequences are impervious to instruction, in the sense that it is impossible to alter stage order or to make learners skip stages altogether (e.g., R. Ellis, 1989; Lightbown, 1983). Acquisition sequences do not reflect instructional sequences, and teachability is constrained by learnability (Pienemann, 1984).

Interlanguage research is on-going and under constant critical review (see Han & Tarone, 2016). As the above quote indicates, it is not accurate to say that Rod Ellis regards all the research as a “fiasco”, and the suggestion that “many of the academics writing in that particular field seem happy just to perpetuate unanalysed myth” is unworthy of any response.

References

Doughty, C. and Long, M.H. (2003) Optimal Psycholinguistic Environments for Distance Foreign Language Learning. Downloadable here: http://llt.msu.edu/vol7num3/doughty/default.html

Fulcher, G. See here for all the works cited: http://languagetesting.info/zoom/search.php?zoom_sort=0&zoom_query=Fulcher&zoom_per_page=10&zoom_and=0

Han, Z.-H. and Tarone, E. (eds) (2016) Interlanguage: Forty Years Later. Amsterdam: Benjamins.

Tarone, E. (2006) Interlanguage. Downloadable here:   http://socling.genlingnw.ru/files/ya/interlanguage%20Tarone.PDF

Cracks in the Wall?

Recent tweets by the usual suspects – coursebook writers, teacher trainers, publishers, examiners and paid conference speakers – suggest that members of the influential UK-based ELT establishment are getting a bit defensive in the face of growing criticism. Publishers and trainers talk more about “flexibility”; examiners insist on the “validity” of their tests; everybody goes on about “listening to the learner”. Dellar tweets from IATEFL Peru:

You quickly realise how little the heated debates of the euro-centric #EFL blogosphere have do with most contexts here.

Jim Scrivener adds:

….or in most teaching contexts in most countries around the world.

You don’t have to be an expert in critical discourse analysis to glean that the heated debates of the euro-centric EFL blogosphere are starting to rattle the composure of at least some of the globe-trotting doyens of ELT. Still, these are small cracks in a very thick wall, and we need to widen the debate so that more people become aware of the mess ELT is in. Pace Dellar, the matters we raise are not euro-centric at all: the commodification of ELT affects teachers and learners everywhere, including Peru, of course.

As Kerr and Wickham (citing Robertson, 2006) point out in an essay that deserves careful reading, it was the 1999 General Agreement on Trade in Services (GATS) that heralded the transformation of education into a commodity which can be traded like any other in the marketplace. Kerr and Wickham also point out that the most visible manifestation of the commodification of ELT is “the current ubiquity of learning outcomes”. These “learning outcomes” manifest themselves most clearly in current ELT performance scales, of which the Common European Framework of Reference for Languages (CEFR) is the best known. It’s been followed by the Cambridge English Scale and, most audaciously of all, the Pearson Global Scale of English, the GSE.

The GSE is “a granular, precise scale of proficiency” consisting of over 1,800 can-do statements that “provide context for teachers and learners across reading, writing, speaking and listening”. Coursebooks aligned to these granular learning objectives, serving up tasteless, inoffensive (We Serve No PARSNIPS Here!) warmed-through McNuggets of sanitised language, plus placement, formative and high-stakes tests aligned to the GSE, provide teachers with absolutely everything they need for modern-day teaching. As I’ve pointed out elsewhere, the GSE wrongly assumes that its ‘can-do’ abilities

  1. are meaningful (what does “Can describe events, real or imagined” mean?), and
  2. develop in the way implied by the hierarchical structure of the scales.

Statistical and psychological unidimensionality are not equivalent, and the pedagogic notion of learners moving unidimensionally along the line from 10 to 90 is ridiculous. Learning an L2 is gradual, incremental and slow, exhibiting plateaus, occasional movement away from, not toward, the L2, and U-shaped or zigzag trajectories rather than smooth, linear contours.

Ridiculous as the GSE is, and despite the fact that it has been roundly condemned by experts in SLA, language teaching and language assessment everywhere, the mad Pearson project marches confidently on, buoyed by the knowledge that it has nothing to fear from the UK-based ELT establishment, who gratefully accept Pearson patronage and abstain from criticism.

Actually, it’s not that they abstain from criticism; it’s more that they abstain from reading or taking any notice of it. Scott Thornbury’s IATEFL talk this year reported on a small study he did of six authors of best-selling “How to teach English as a second/foreign language” books. He asked these influential members of the ELT establishment about the role they played as mediators between researchers and practitioners, and all of them, without exception, were happy to admit that they didn’t read or know much about academic, evidence-based research into language learning, teaching, or assessment. Penny Ur (who only last month, with no sense of irony at all, exhorted everybody to engage in more critical thinking) gave the most dismissive answers to Thornbury’s questions, including this gem:

it’s certainly possible to write helpful and valid professional guidance for teachers with no research references whatsoever.      

She doesn’t sound rattled, now does she! How much more heated must our debates get, how much louder must we shout, before the wall of smug, anti-academic complacency defending current ELT practice is brought down and members of the UK-based establishment publicly recognise that they are serving the interests of big business to the detriment of good teaching? The evidence is there: teacher training, teaching methodology, teaching materials, and language assessment are all fatally flawed by their subservience to the industrialisation and commodification of education, as exemplified by Pearson’s GSE.

And what about the workers? To return to Kerr and Wickham’s article, they note that “the move towards privatization is accompanied by an overt attack on teachers’ unions, rights, pay and conditions”, and that “the drive to bring down costs has a negative impact on teachers worldwide”. They cite Gwynt (2015), who catalogues “cuts in funding, large-scale redundancies, a narrowing of the curriculum, intensified workloads (including the need to comply with ‘quality control measures’), the de-skilling of teachers, dilapidated buildings, minimal resources and low morale”. They also list the conditions of French teachers in the private sector, conditions shared by tens of thousands of ELT workers worldwide:

  • multiple employers,
  • limited or no job security,
  • limited or no sick pay and holiday pay,
  • little or no on-going training,
  • low and deteriorating hourly rates of pay.

Kerr and Wickham conclude:

Given the current climate, teachers will benefit from closer networking with fellow professionals in order, not least, to be aware of the rapidly changing landscape. …. More generally, it is important to recognise that current trends have yet to run their full course. Conditions for teachers are likely to deteriorate further before they improve. More than ever before, teachers who want to have any kind of influence on the way that marketization and industrialization are shaping their working lives will need to do so collectively.

References

Gwynt, W. (2015) The effects of policy changes on ESOL. Language Issues 26 / 2: 58 – 60.

Kerr, P. and Wickham, A. (2017) ELT as an industry. https://adaptivelearninginelt.wordpress.com/2017/02/15/elt-as-an-industry/

Robertson, S. L. (2006) Globalisation, GATS and trading in education services. Bristol: Centre for Globalisation, Education and Societies, University of Bristol. Available at http://www.bris.ac.uk/education/people/academicStaff/edslr/publications/04slr

Gerontologically speaking, Geragogy’s time has come

What’s the definition of an ‘older person’? Different countries and different contexts afford different answers. According to the Eurobarometer survey (2011), in Slovakia a person is considered ‘old’ at the age of 57, whereas in the Netherlands ‘old’ applies only to people aged 70 and over. Meanwhile, if you’re trying to get a job, you’ll be considered ‘older’ (i.e. past it) by most employers once you’re 55. And researchers are even crueller: Withnall (2010) suggests that “50 appears to have become the preferred age for the designation of ‘older adult’”.

I got this information while reading a dissertation that I marked recently, so I can’t acknowledge the author yet, but I will. The dissertation was about older adult foreign language learners, and in the literature review, the author mentioned that the perceived differences in learning processes between children and adults led in the 19th century to an alternative teaching approach for adults named “andragogy”. The term was taken up again by Malcolm Knowles, who described andragogy as “the art and science of helping adults to learn”, in contrast to pedagogy, defined as “the art and science of teaching children” (Knowles, 1980: 43). Essentially, andragogy adapts teaching to the considerations that adults are more goal-oriented, more self-directed, more heterogeneous in their learning aims, and more intrinsically motivated than children. Knowles (1980), and later Knowles et al. (2005), make a number of practical suggestions about precisely how teachers should adapt (assume the role of facilitator more than knower, use problem-solving activities, etc.), but few teachers seem to have heard the message.

More interesting is Formosa’s (2011) article on ‘critical educational geragogy’, which develops the principles for critical educational gerontology (CEG) first established by Glendenning and Battersby in the 1980s. Referring to “the gritty realities which embed older persons in structured positions of social inequality”, and to the ageist and patronising attitude towards older learners which pervades education, at least in the West, Formosa insists that his approach is “an actual example of ‘transformative education’ rather than yet another euphemism for glorified occupation therapy”.

Most interesting of all is the work of Ramírez-Gómez (2016a, 2016b), who has proposed the extension of critical geragogy to foreign language learning by formulating ‘critical foreign language geragogy’ (CFLG). Ramírez-Gómez argues that current beliefs and prejudices about older learners must be overturned in order to “redefine expectations and goals and improve proficiency”. I haven’t been able to get hold of a copy of her book yet, but from what I’ve gathered so far, Ramírez-Gómez (2016b) describes a study of Japanese older learners of Spanish which focuses on the influence of learning experiences on the use of vocabulary learning strategies, vocabulary often being cited as the biggest problem of L2 learning for older people. She examines the influence of experience on the learning process, looks at common misconceptions about older learners that are imposed on learners and teachers alike, and proposes a set of practical recommendations for developing and adjusting foreign language activities for older learners. CFLG puts special emphasis on drawing on older learners’ experience, their high intrinsic motivation, and their capacity for autonomous development in L2 learning.

As far as I know, research on older L2 learners is very limited; there has been little discussion of the issues raised by Ramírez-Gómez, and hers is the first and only evidence-based methodology specifically aimed at this growing age group. Given the demographics of so many parts of the world, particularly in Europe, maybe it’s time we paid attention. Prejudice and misconceptions affecting older learners abound, fuelled by a general “loss-deficit” perspective which focuses on cognitive ‘decline’ and on what older people can’t do, rather than looking at the things they can actually do better, and the mechanisms they use to compensate for any deficit. I have a personal stake in all this, of course, but well, none of us is getting any younger. Just to show that I’m still learning, I’ll put that cliché into the modern English vernacular: So, none of us are youngering, yeah?

Now, if you’ll excuse me, it’s time for The Archers.

References

Formosa, M. (2011) ‘Critical educational gerontology: A third statement of first principles.’ International Journal of Education and Ageing, 2(1), 317–332.

Knowles, M. (1980) The Modern Practice of Adult Education. From Pedagogy to Andragogy. New Jersey: Cambridge Adult Education.

Knowles, M., E.F. Holton and R.A. Swanson, (2005) The Adult Learner. 6th Edition. London: Elsevier.

Ramírez Gómez, D. (2014) ‘Older adult FL learning: instructors’ beliefs and some recommendations.’ In Sonda, N. and Krause, A. (eds), JALT2013 Conference Proceedings. Tokyo: JALT.

Ramírez Gómez, D. (2016a) ‘Critical geragogy and foreign language learning: An exploratory application.’ Educational Gerontology, 42:2, 136-143.

Ramírez Gómez, D. (2016b) Language Teaching and the Older Adult: The Significance of Experience. Clevedon: Multilingual Matters (Kindle edition).

Withnall, A. (2010) Improving Learning in Later Life. London: Routledge.