* Processing Approaches to SLA

In exploring SLA, many researchers have taken a processing approach which sees L2 learning as the process by which linguistic skills become automatic. Initial learning requires controlled processes, which require attention and time; with practice the linguistic skill requires less attention and becomes routinized, thus freeing up the controlled processes for application to new linguistic skills.

The adoption of a common processing approach to SLA does not amount to the adoption of any paradigm (Point 6 of the Guidelines makes clear that none is needed), and indeed there are various statements of the problem of processing, and various proposals for its solution. While McLaughlin (1990) likens the SLA process to driving a car with a clutch (where the proficient driver no longer needs to think about how to use the clutch), Bialystok (1994, 2001) likens the L2 learner to a library user. Bialystok sees the L2 learner's knowledge as a mental library – the contents of the learner's linguistic knowledge. The knowledge can be structured to different degrees, and this represents different degrees of control over the knowledge. The learner's ability to retrieve a book represents his access to the linguistic knowledge he has. The learner's knowledge can be more or less analysed; according to Bialystok the information is the same, but the more it is analysed, the more the learner is aware of the structure of the information. Bialystok therefore seems to be arguing the opposite of McLaughlin: more conscious control is necessary to speed access and processing. To some extent the differences are due to differences in how the terms are defined and used, and it is precisely the attempt to work through these ambiguities and contradictions that allows progress to be made.

Another key element of the processing approach is the argument that the L2 knowledge acquired is restructured during the process, a widely accepted example being the U-shaped course of learning (see below).

Briefly, then, SLA is seen as a process by which attention-demanding controlled processes become more automatic through practice, a process that results in the restructuring of the existing mental representation. The adoption of such a framework gives focus and strength to the research: well-defined problems can be articulated, and more powerful and daring solutions can be offered in place of those that have been tentatively established.

McLaughlin: Automaticity and Restructuring

In an attempt to overcome the problems of finding operational definitions for concepts used to describe and explain the SLA process, McLaughlin argues (1990) that the distinction between conscious and unconscious should be abandoned in favour of clearly-defined empirical concepts. “Lacking an adequate theory of mind that allows us to decide that particular mental states or operations are “conscious” or “unconscious,” one cannot falsify claims regarding consciousness in second language learning” (McLaughlin, 1990: 617).

McLaughlin substitutes the use of the conscious/unconscious dichotomy with the distinction between controlled and automatic processing. Controlled processing requires attention, and humans’ capacity for it is limited; automatic processing does not require attention, and takes up little or no processing capacity. The L2 learner begins the process of acquisition of a particular aspect of the L2 by relying heavily on controlled processing; through practice the learner’s use of that aspect of the L2 becomes automatic. McLaughlin uses the twin concepts of Automaticity and Restructuring to describe the cognitive processes involved in SLA.

Automaticity develops when an associative connection forms between a certain kind of input and some output pattern. Many typical greetings exchanges illustrate this:

Speaker 1: Morning.
Speaker 2: Morning. How are you?
Speaker 1: Fine, and you?
Speaker 2: Fine.

Since humans have a limited capacity for processing information, automatic routines free up more time for such processing. To process information one has to attend to, deal with, and organise new information. The more information that can be handled routinely, automatically, the more attentional resources are freed up for new information. Learning takes place by the transfer of information to long-term memory and is regulated by controlled processes which lay down the stepping stones for automatic processing.

The second concept, restructuring, refers to qualitative changes in the learner's interlanguage as the learner moves from stage to stage, not to the simple addition of new structural elements. These changes are, according to McLaughlin, often reflected in "U-shaped behaviour", which refers to three stages of linguistic use:

• Stage 1: correct utterance,
• Stage 2: deviant utterance,
• Stage 3: correct target-like usage.

In a study of French L1 speakers learning English, Lightbown (1983) found that, when acquiring the English “ing” form, her subjects passed through the three stages of U-shaped behaviour. Lightbown argued that as the learners, who initially were only presented with the present progressive, took on new information – the present simple – they had to adjust their ideas about the “ing” form. For a while they were confused and the use of “ing” became less frequent and less correct.

Discussion

The question of implicit and explicit knowledge, conscious and unconscious knowledge, acquisition and learning, is one that, in different ways, vexes many of those working on a theory of SLA. The problem is how to conceptualise this difference in such a way that the explanation it offers of the SLA process is non-circular. McLaughlin suggests that we need to get rid of those concepts which cannot be clearly defined in an empirically testable way. Whether or not McLaughlin is right to claim that the conscious-unconscious distinction is untestable depends, obviously, on how these two terms are defined, but in any case we may note yet again that a necessary condition for a theory is that it is testable. The inevitable question then arises: to what extent are the terms "controlled processing" and "automatic processing" themselves empirically testable? Are they simply to be measured by the length of time needed to perform a given task? This is a weak type of measure, and one that does little to solve the problem it raises.

Finally, we may note that McLaughlin's account of the process of SLA adopts the computer metaphor, which has become the most popular and widely used framework today. McLaughlin and Bialystok were among the first scholars to apply the computer-based information-processing models of general cognitive psychology to SLA research. Chomsky's Minimalist Program confirms his commitment to the view that cognition consists in carrying out computations over mental representations. Those adopting a connectionist view, though taking a different view of the mind and how it works, also use the same metaphor. Indeed the basic notion of "input – processing – output" has become an almost unchallenged account of how we think about and react to the world around us. While in my opinion the metaphor can be extremely useful, it is worth making the obvious point that we are not computers. One may well sympathise with Foucault and others who warn us of the blinding power of such metaphors.

Schmidt: Noticing

Schmidt’s influential paper on the role of consciousness in second language learning argues that “subliminal language learning is impossible”, and that “noticing is the necessary and sufficient condition for converting input into intake.” (Schmidt, 1990: 130)

Schmidt, rather than accept McLaughlin’s advice to abandon the search for a definition of “consciousness” (see Section 9.2.6.), attempts to do away with its “terminological vagueness” by examining three senses of the term: consciousness as awareness, consciousness as intention, and consciousness as knowledge. Consciousness and awareness are often equated, but Schmidt distinguishes between three levels: Perception, Noticing and Understanding. The second level, Noticing, is the key to Schmidt’s eventual hypothesis.

1. Noticing as focal awareness

When reading, for example, we are normally aware of (notice) the content of what we are reading, rather than the syntactic peculiarities of the writer’s style, the style of type in which the text is set, music playing on a radio in the next room, or background noise outside a window. However, we still perceive these competing stimuli and may pay attention to them if we choose. (Schmidt, 1990: 132)

2. Noticing refers to a private experience, but it can be operationally defined as availability for verbal report, and "When problems of memory and metalanguage can be avoided, verbal reports can be used to both verify and falsify claims concerning the role of noticing in cognition" (Schmidt, 1990: 132). Consciousness as intention is used to distinguish between awareness and intentional behaviour. "He did it consciously", in this second sense, means "He did it intentionally."

The third sense of the term – consciousness as knowledge – is the one that, as we have seen to some extent above, often causes problems in attempts to explain the SLA process. Schmidt cites White (1982) who argued that “experiential consciousness and knowledge are not at all the same thing”, and warned that “the contrast between conscious and unconscious knowledge is conceptually unclear when different authors are compared, since the ambiguities are combined with those of knowledge, equally difficult in psychological terms” (Schmidt, 1990: 133). Schmidt comments: “It is unfortunate that most discussion of the role of consciousness in language has focused on distinctions between conscious and unconscious knowledge, because the confusion warned against by White is apparent” (Schmidt, 1990:133).

Schmidt suggests that the ambiguities of "conscious" and "unconscious" can be tackled by recognising that the distinction refers not to a single question but to six different contrasts:

1. Unconscious learning refers to unawareness of having learned something.
2. Conscious learning refers to awareness at the level of noticing and unconscious learning to picking up stretches of speech without noticing them. Schmidt calls this the “subliminal” learning question: is it possible to learn aspects of a second language that are not consciously noticed?
3. Conscious learning refers to intention and effort. This is the incidental learning question: if noticing is required, must learners consciously pay attention?
4. Conscious learning is understanding principles of the language, and unconscious learning is the induction of such principles. This is the implicit learning question: can second language learners acquire rules without any conscious understanding of them?
5. Conscious learning is a deliberate plan involving study and other intentional learning strategies, unconscious learning is an unintended by-product of communicative interaction.
6. Conscious learning allows the learner to say what they appear to “know”.

While most of the literature, according to Schmidt, has been concerned with the last two issues, he considers the issues of subliminal, incidental, and implicit learning more important.

Addressing the issue of what he calls “subliminal” learning, Schmidt notes that although the concept of intake is crucial, there is no agreement on a definition of intake. While Krashen seems to equate intake with comprehensible input, Corder distinguishes between what is available for going in and what actually goes in. Schmidt notes that neither Krashen nor Corder addresses the fact that all the input used to comprehend a message is unlikely to function as intake for the learning of form. Schmidt also notes the distinction Slobin (1985), and Chaudron (1985) make between preliminary intake (the processes used to convert input into stored data that can later be used to construct language), and final intake (the processes used to organise stored data into linguistic systems).

Schmidt proposes that intake be defined as "that part of the input which the learner notices … whether the learner notices a form in linguistic input because he or she was deliberately attending to form, or purely inadvertently. If noticed, it becomes intake" (Schmidt, 1990: 139). The only study mentioned by Schmidt in support of his hypothesis is by Schmidt and Frota (1986), which examined Schmidt's own attempts to learn Portuguese and found that his notes matched his output quite closely. Schmidt himself admits that the study does not show that noticing is sufficient for learning, or that noticing is necessary for intake. Nevertheless, Schmidt does not base himself on this study alone; there is, Schmidt claims, evidence from a wider source: "Because of memory constraints, failure to report retrospectively that something has been noticed does not demonstrate that the event was not registered in conscious awareness at the time of the event. Therefore, the primary evidence for the claim that noticing is a necessary condition for storage comes from studies in which the focus of attention is experimentally controlled. The basic finding, that memory requires attention and awareness, was established at the very beginning of research within the information processing model" (Schmidt, 1990: 141).

Addressing the second issue, of incidental learning versus paying attention, Schmidt acknowledges that the claim that conscious attention is necessary for SLA runs counter to both Chomsky’s rejection of any role for conscious attention or choice in L1 learning, and the arguments made by Krashen, Pienemann and others for the existence of a natural order or a developmental sequence in SLA. Schmidt says that Chomsky’s arguments do not necessarily apply to SLA, and that “natural orders and acquisition sequences do not pose a serious challenge to my claim of the importance of noticing in language learning, …they constrain but do not eliminate the possibility of a role for selective, voluntary attention” (Schmidt, 1990: 142).

Schmidt accepts that “language learners are not free to notice whatever they want” (Schmidt, 1990: 144), but, having discussed a number of factors that might influence noticing, such as expectations, frequency, perceptual salience, skill level, and task demands, and citing various studies, including the Schmidt and Frota study of his own attempts to learn Portuguese, concludes that “those who notice most, learn most, and it may be that those who notice most are those who pay attention most” (Schmidt, 1990: 144). Nevertheless, Schmidt accepts that incidental learning is possible, and suggests that more studies be carried out to determine how, in task-based language teaching, task characteristics affect message comprehension.

The third issue Schmidt examines is that of implicit learning versus learning based on understanding. How do second language learners generalise from instances and go on to form hypotheses about the L2? Does such learning depend on unconscious processes of induction and abstraction or does it depend on insight and understanding? While those such as White (1987, cited in Schmidt, 1990: 145) who take a Chomskian approach to SLA argue that the process is unconscious, a number of cognitive psychologists cited by Schmidt argue that there is no learning without awareness. Schmidt judges the question of implicit second language learning to be the most difficult “because it cannot be separated from questions concerning the plausibility of linguistic theories.” (Schmidt, 1990: 149) What Schmidt sees no reason to accept is the null hypothesis which claims that, as he puts it, “understanding is epiphenomenal to learning, or that most second language learning is implicit.” (Schmidt, 1990: 149)

Discussion

Schmidt's work indicates that further progress has been made in the development of a coherent theory: we are a long way now from Corder's original hypothesis, and indeed from Krashen's "acquisition/learning" dichotomy. Schmidt's hypothesis clears up much of the confusion surrounding the terms used in psycholinguistics and, furthermore, improves one crucial part of a general processing theory of the development of interlanguage grammar. Not surprisingly, perhaps, Schmidt's hypothesis caused an immediate stir within the academic community and quickly became widely accepted.

We need to take a closer look at Schmidt’s concept of noticing – what exactly does it refer to, and how can we be sure when it is, and is not being used by L2 learners? In his 1990 paper, Schmidt claims that noticing can be operationally defined as “the availability for verbal report”, “subject to various conditions”. He adds that these conditions are discussed at length in the verbal report literature, and cites Ericsson and Simon (1980, 1984), and Faerch and Kasper (1987), but he does not discuss the issue of operationalisation any further.

Schmidt’s 2001 paper gives various sources of evidence of noticing:

• Learner production. The problem here is how to identify what has been noticed.
• Learner reports in diaries. Schmidt cites Schmidt & Frota (1986), and Warden, Lapkin, Swain and Hart (1995). The problem here, as Schmidt himself points out, is that diaries span months, while the cognitive processing of L2 input takes place in seconds. Furthermore, as Schmidt admits, keeping a diary requires not just noticing but reflexive self-awareness.
• Think-aloud protocols. Schmidt agrees with the objection that studies based on such protocols cannot assume that the protocols include everything that is noticed. Schmidt cites Leow (1997) and Jourdenais, Ota, Stauffer, Boyson, and Doughty (1995), who used think-aloud protocols in focus-on-form instruction, and concludes that such experiments cannot identify all the examples of target features that were noticed.
• Learner reports in a CALL context (Chapelle, 1998) and programs that track the interface between user and program – recording mouse clicks and eye movements (Crosby, 1998). Again, Schmidt concedes that it is still not possible to identify with any certainty what has been noticed.
• Merikle and Cheesman distinguish between the objective and subjective thresholds of perception. The clearest evidence that something has exceeded the subjective threshold and been consciously perceived or noticed is a concurrent verbal report, since nothing can be verbally reported other than the current contents of awareness. Schmidt argues that this is the best test of noticing, and that after-the-fact recall is also good evidence that something was noticed, providing that prior knowledge and guessing can be controlled. For example, if beginner-level students of Spanish are presented with a series of Spanish utterances containing unfamiliar verb forms, are forced to recall immediately afterwards the forms that occurred in each utterance, and can do so, that is good evidence that they did notice them. On the other hand, it is not safe to assume that failure to do so means that they did not notice. It seems that it is easier to confirm that a particular form has not been noticed than that it has: failure to achieve above-chance performance in a forced-choice recognition test is a much better indication that the subjective threshold has not been exceeded and that noticing did not take place.

Schmidt goes on to claim that the noticing hypothesis could be falsified by demonstrating the existence of subliminal learning either by showing positive priming of unattended and unnoticed novel stimuli or by showing learning in dual task studies in which central processing capacity is exhausted by the primary task. The problem in this case is that in positive priming studies one can never really be sure that subjects did not allocate any attention to what they could not later report, and similarly, in dual task experiments one cannot be sure that no attention is devoted to the secondary task. Jacoby, Lindsay, & Toth (1996, cited in Schmidt, 2001: 28) argue that the way to demonstrate true non-attentional learning is to use the logic of opposition, to arrange experiments where unconscious processes oppose the aims of conscious processes.

In conclusion, Schmidt argues that attention as a psychological construct refers to a variety of mechanisms or subsystems (including alertness, orientation, detection within selective attention, facilitation, and inhibition) which control information processing and behaviour when existing skills and routines are inadequate. Hence, learning in the sense of establishing new or modified knowledge, memory, skills and routines is "largely, perhaps exclusively a side effect of attended processing" (Schmidt, 2001: 25). This is a daring and surprising claim with real predictive power, and it directly contradicts Krashen's claim that conscious learning is of extremely limited use.

It is also noteworthy that the hypothesis is supported by a lone study, co-written by Schmidt himself, about Schmidt's own learning of Portuguese. What kind of evidence is that? one might ask. In fact, as Schmidt admits, it is slender evidence, and provides no support whatsoever for part of the claim. But more studies can be carried out, Schmidt's construct of noticing is operational, at least to some extent, and, as I have said, the hypothesis makes strong, indeed daring, predictions which can be tested. The strength and force of the hypothesis come from insight, and from the effort that Schmidt has made to come to terms with the knotty problems of consciousness and awareness. Such a paper is surely worth a thousand field studies where data is gathered for no very obvious purpose.

Long’s Interaction Hypothesis

Long's hypothesis has matured, and its current form has benefited from Schmidt's definition of, and claims for, noticing, discussed in the previous section. Long (1983), critical of Krashen's Input Hypothesis, dissatisfied with much of the descriptive research into the kind of input that L2 learners were exposed to, and looking for a more rigorous formulation of the idea of comprehensible input, suggested that linguistic and conversational adjustments between native speakers (NS) and non-native speakers (NNS) promoted comprehension of input, and that comprehensible input promoted acquisition. Long's research (1980, 1981, 1983a, 1983b) involved studying sixteen NS-NS and sixteen NS-NNS pairs, and showed that the NS-NNS pairs were much more likely to use a variety of strategies or functions in order to try to solve ongoing problems of communication during the conversations. Specifically, Long says that the NS-NNS pairs used repetition, confirmation checks, comprehension checks, and clarification requests. The prime trigger for these tactics is the perception that the interlocutor is having difficulty with comprehension, and Long suggests that in the ongoing negotiation of meaning the NS-NNS partnership is making sure that the NNS is getting comprehensible input. "Modification of the interactional structure of conversation … is a better candidate for a necessary (not sufficient) condition for acquisition. The role it plays in negotiation for meaning helps to make input comprehensible while still containing linguistic elements, and, hence, potential intake for acquisition" (Larsen-Freeman and Long, 1991: 134).

This hypothesis has very clear implications for classroom-based SLA and, not surprisingly, the classroom environment is where it has been most tested.

In a more general review of research into linguistic and conversational adjustments to non-native speakers, Larsen-Freeman and Long (1991: 128-139) suggested that five questions emerge:

1. What is the effect of deviant input on SLA? Do second language learners who are exposed only or predominantly to ungrammatical foreigner talk acquire a marked, substandard variety of the target language?
2. What is the role of conversation in developing syntax? Hatch suggests that to suppose that second language learners acquire syntactic structures which they then put to use in conversation is putting the cart before the horse: “language learning evolves out of learning how to carry on conversations.” (Hatch 1978, cited in Larsen-Freeman and Long, 1991: 130) Larsen-Freeman and Long comment that, as in the first question, little research has been done to answer this question, but related studies (e.g., Fourcin 1975, cited in Larsen-Freeman and Long, 1991: 130) show that conversation is not necessary for success.
3. Does input frequency affect acquisition sequence? Studies by Long (1980, 1981) support an input frequency/accuracy order relationship, but the late acquisition of articles (the most frequent items in ESL input), for example, makes it clear that frequency is not the only factor involved, and no study has demonstrated any causal relationship.
4. Does input modification enhance second language comprehension? The answer seems to be “yes”, but with the usual caveat that more research is needed to identify which types of modification are most beneficial.
5. What is the relationship between comprehension and SLA? Is Krashen’s Input Hypothesis correct?

This final question is obviously answered negatively by the Interaction Hypothesis, just as we would expect the answer to Question 4 to be positive. Long's Interaction Hypothesis argues that the negotiation of meaning causes L2 learners, who are essentially concerned with meaning rather than form, to pay attention to form in order to understand the message.

Long re-defined the hypothesis in 1996, in order to give more importance to the individual cognitive processing aspect of SLA, and in particular to noticing and the role of negative feedback. The newly-defined Interaction Hypothesis states that: "Environmental contributions to acquisition are mediated by selective attention and the learner's developing L2 processing capacity, and that these resources are brought together most usefully, although not exclusively, during negotiation for meaning. Negative feedback obtained during negotiation work or elsewhere may be facilitative of L2 development, at least for vocabulary, morphology, and language-specific syntax, and essential for learning certain specifiable L1-L2 contrasts" (Long, 1996: 417).

Discussion

There is more evidence here of theory progression, of how an originally well-formulated hypothesis is upgraded in the light of criticism and developments in the field. The commitment to classroom-based research is evident, and there are obvious implications here for second language teaching. Two of the important implications of Long's hypothesis are that a task-based approach to classroom teaching is the most efficient, and that tasks can be selected and manipulated so as to maximise the opportunities for learners to turn input into intake. These implications have been tested in a number of classroom-based studies (see Doughty and Williams, 2000) and indeed the Interaction Hypothesis has led to growing support for a task-based approach to classroom-based teaching where opportunities for the "negotiation of meaning" in Long's sense, and for "noticing" in Schmidt's sense, are created.

Pienemann’s Processability Theory

This model started out as the Multidimensional Model, which came from work done by the ZISA group mainly at the University of Hamburg in the late seventies. A full account can be found in Larsen-Freeman and Long, 1991: 270-287. I will describe the original model as briefly as possible.

One of the first findings of the group was that all the children and adult learners of German as a second language in the study adhered to the five-stage developmental sequence shown in Figure 10.

Stage X – Canonical order (SVO)
die kinder spielen mim ball
the children play with the ball

(Romance learners’ initial SVO hypothesis for GSL WO is correct in most German sentences with simple verbs.)

Stage X + 1 – Adverb preposing (ADV)
da kinder spielen
there children play

(Since German has a verb-second rule, requiring subject-verb inversion following a preposed adverb ('there play children'), all sentences of this form are deviant. The verb-second (or 'inversion') rule is only acquired at stage X + 3, however. The adverb-preposing rule itself is optional.)

Stage X + 2 – Verb separation (SEP)
alle kinder muss die pause machen
all children must the break have

(Verb separation is obligatory in standard German.)

Stage X + 3 – Inversion (INV)
dann hat sie wieder die knoche gebringt
then has she again the bone brought

(Subject and inflected verb forms must be inverted after preposing of elements.)

Stage X + 4 – Verb-end (V-END)
er sagte, dass er nach hause kommt
he said that he home comes

(In subordinate clauses, the finite verb moves to final position.)

Figure 10: Developmental Sequence for GSL Word Order Rules (based on Pienemann 1987). From Larsen-Freeman and Long, 1991: 271.

Learners did not abandon one interlanguage rule for the next as they progressed; they added new ones while retaining the old, and thus the presence of one rule implies the presence of earlier rules.

The explanation offered for this developmental sequence was that each stage reflects the learner’s use of three “speech-processing strategies” (Clahsen 1987). Clahsen and Pienemann argue that processing is “constrained” by the strategies available to the learner at any one time, and development consists of the gradual removal of these constraints, or the “shedding of the strategies”, which allows the processing of progressively more complex structures.

The strategies are:

(i) The Canonical Order Strategy. The construction of sentences at Stage X obeys a simple canonical order that is generally assumed to be "actor – action – acted upon." This is a pre-linguistic phase of acquisition where learners build sentences according to meaning, not on the basis of any grammatical knowledge.

(ii) The Initialisation-Finalisation Strategy. Stage X+1 occurs when learners notice discrepancies between their rule and the input. But the areas of input where discrepancies are noticed are constrained by perceptual saliency – it is easier to notice differences at the beginnings or the ends of sentences, since these are more salient, according to the model, than the middles of sentences. As a result, elements in the initial and final positions may be moved around, while leaving the canonical order undisturbed. Stage X+2 also involves this strategy, but verb separation is considered more difficult than adverb fronting because the former requires not just movement to the end position but also disruption of a continuous constituent, the verb + particle, infinitive, or participle. Thus the strategy of preserving continuity of elements within the same constituent must be shed before verb separation can be acquired. Stage X+3 is even more complex, since it involves both disruption and movement of an internal element to a non-salient position, and so requires the learner to abandon salience and recognise different grammatical categories.

(iii) The Subordinate Clause Strategy. This is used in Stage X+4 and is held to require the most advanced processing skills because the learner has to produce a hierarchical structure, which involves identifying sub-strings within a string and moving elements out of those sub-strings into other positions. The prediction is that L2 learners will assume that German subordinate clauses have the same word order properties as main clauses until advanced stages of acquisition.

These constraints on interlanguage development are argued to be universal; they include all developmental stages, not just word order, and they apply to all second languages, not just German.

Apart from the developmental process, the ZISA model also proposed a variational dimension to SLA, and hence the name "Multidimensional". While the developmental sequence of SLA is fixed by universal processing constraints, individual learners follow different routes in SLA, depending primarily on whether they adopt a predominantly "standard" orientation, favouring accuracy, or a predominantly "simplifying" one, favouring communicative effectiveness.

Pienemann (1998) expands the Multidimensional Model into a Processability Theory which predicts which grammatical structures an L2 learner can process at a given level of development. “This capacity to predict which formal hypotheses are processable at which point in development provides the basis for a uniform explanatory framework which can account for a diverse range of phenomena related to language development” (Pienemann, 1998: xv).

The theory sees SLA as “the acquisition of the skills needed for the processing of language” (Pienemann, 1998: 39), and makes the same case as most cognitive perspectives: what is easy to process is easy to acquire. Pienemann is concerned to account for the route described by the Multidimensional Model in the development of interlanguage grammar, that is, to determine the sequence in which procedural skills develop. His theory proposes that “for linguistic hypotheses to transform into executable procedural knowledge the processor needs to have the capacity of processing those hypotheses” (Pienemann, 1998: 4).

Pienemann, in other words, argues that there will be certain linguistic hypotheses that, at a particular stage of development, the L2 learner cannot access because he does not have the necessary processing resources available. Pienemann claims that his concern is to explain the production of, and access to, linguistic knowledge; he insists that he is not attempting to describe that knowledge or to explain its origins – like McLaughlin, Pienemann adopts “a modular approach to the theory of SLA in which a linguistic theory and processing theory take on complementary roles” (Pienemann, 1998: 42).

The processing resources that have to be acquired by the L2 learner will, according to Processability Theory, be acquired in the following sequence:

1. lemma access,
2. the category procedure,
3. the phrasal procedure,
4. the S-procedure,
5. the subordinate clause procedure – if applicable. (Pienemann, 1998: 7)

The theory states that each procedure is a necessary prerequisite for the following procedure, and that “the hierarchy will be cut off in the learner grammar at the point of the missing processing procedures and the rest of the hierarchy will be replaced by a direct mapping of conceptual structures onto surface form” (Pienemann, 1998: 7). The SLA process can therefore be seen as one in which the L2 learner entertains hypotheses about the L2 grammar and that this “hypothesis space” is determined by the processability hierarchy. As Braidi puts it: “Each developmental stage represents a hypothesis space in which certain structural hypotheses are possible because they are processable. As a result, the hypothesis space defines which IL grammars are options but does not determine which ones will be chosen. Pienemann has thus incorporated the developmental focus of the Multidimensional Model and has extended the application of the earlier model to grammatical information exchange beyond word order phenomena. He has formulated the Processability Theory as a component in L2 acquisition that is complementary to a linguistic theory (such as Lexical Functional Grammar), which would in turn address the issues of the nature and origins of the learner’s IL grammatical rules” (Braidi, 1999: 126).
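The implicational logic of this hierarchy can be sketched very simply: each procedure presupposes all the earlier ones, and the learner’s grammar is cut off at the first missing procedure. The following is a minimal illustrative sketch, not anything proposed by Pienemann himself; the procedure names follow his list above, but the function and variable names are my own invention.

```python
# An illustrative sketch of the implicational processability hierarchy.
# Each procedure presupposes all earlier ones; the learner's grammar is
# "cut off" at the first procedure not yet acquired (Pienemann, 1998: 7).
# Names below are hypothetical, chosen only for this illustration.

HIERARCHY = [
    "lemma access",
    "category procedure",
    "phrasal procedure",
    "S-procedure",
    "subordinate clause procedure",
]

def processable(acquired):
    """Return the procedures actually available to a learner, given the
    set of procedures he has acquired. The hierarchy is cut off at the
    first missing procedure, whatever else the learner may control."""
    available = []
    for procedure in HIERARCHY:
        if procedure not in acquired:
            break  # beyond this point, conceptual structures are mapped
                   # directly onto surface form instead
        available.append(procedure)
    return available

# A learner who controls the S-procedure but lacks the phrasal procedure
# still cannot use the S-procedure:
print(processable({"lemma access", "category procedure", "S-procedure"}))
```

The point of the sketch is the `break`: acquiring a “higher” procedure in isolation does nothing, because the hierarchy is implicational, not a menu.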

For a discussion of Pienemann’s theory, see the peer commentaries in the first issue of the journal Bilingualism: Language and Cognition (vol. 1, number 1, 1998), which is entirely devoted to Processability Theory.

Discussion

The Processability Theory is a good example of a cognitive approach to SLA, where the focus is on the learning process; the cognitivists are interested in the construction of L2 grammars and in performance: how do learners access linguistic knowledge in real time, and how do they cope with their deficiencies? In this account the mechanism is an information processing device, which is constrained by limitations in its ability to process input. The device adds new rules while retaining the old ones, and as the limiting “speech-processing strategies” that constrain processing are removed, this allows the processing of progressively more complex structures.

The Processability Theory also addresses several problems encountered with the Multidimensional Model. In the earlier version, some aspects of the model were difficult to falsify. Certain morphemes seemed to contradict the predicted stages of development, and the theory was saved by labelling them “chunked morphemes” each time they appeared – an ad hoc measure which damaged the theory. In the same way, if a grammatical item was learned by a student whose current stage of development predicted that it was not learnable, the theory was saved by calling this item a variational feature (i.e. the distinction between grammatical items which are bound by processing constraints and those that can be acquired at any stage was not clearly defined). By extending the scope of the model to include grammatical forms, Pienemann has to some extent answered these criticisms.

Ellis (1994) discusses a problem already alluded to in Section 9.2.4: the operational definition of “acquisition”. Whereas the original ZISA research operationalised acquisition of all the features examined as 85% production in obligatory contexts, Pienemann re-defined acquisition in terms of “onset”, i.e. the first appearance of a grammatical feature. Many (e.g. Larsen-Freeman and Long, 1991) consider this “onset” definition essential for explaining the process of SLA, but there remains the problem of defining “onset” and of deciding what counts as evidence for the operation of a predicted processing strategy.

The problem of operational definitions, and in general of defining the concepts used in a theory, is at the heart of research methodology. While, as the discussion in Section 4.3.3. argues, there is no easy litmus test that can decide the issue, it behoves those working in the field to do their utmost to define terms in a non-ambiguous way, and in a way that allows empirical tests to be carried out.

As for empirical adequacy, the Processability Theory suggests that transfer is not important, and while, as we have seen, some studies support this suggestion, in other cases differences between speakers of different L1s learning the same L2 seem to challenge the theory. For example, Towell and Hawkins (1994: 51) cite Hulk’s 1990 study of L1 Dutch speakers acquiring French, where in the earliest stages the learners adopted Germanic word order, not the canonical word order suggested by the Multidimensional Model. Selinker, Swain and Dumas (1975) showed that L1 French speakers learning English have persistent problems with the post-verbal placement of adverbs in sentences such as “Mary eats often oysters”, which goes against the canonical order and violates “continuity”. White (1989) found similar results, observing differences between L1 English speakers acquiring French and L1 French speakers acquiring English in their ability to acquire frequency adverbs.

A more fundamental problem for the theory is that it assumes as self-evident that our cognition works in the way the model suggests. We are told that people see things in a canonical order of “actor – action – acted upon”, that people prefer continuous to discontinuous entities, that the beginnings and ends of sentences are more salient than the middles, and so on, without being offered any justification for this view beyond general assumptions about what is easy and difficult to process. As Towell and Hawkins say of the Multidimensional Model: “They require us to take on faith assumptions about the nature of perception. The perceptual constructs are essentially mysterious, and what is more, any number of new ones may be invented in an unconstrained way” (Towell and Hawkins, 1994: 50). This criticism still applies to Pienemann’s 1998 account of Processability Theory. In fact, it is not as damning a criticism as it might appear – whatever new assumptions “may be invented” can be dealt with if and when they appear. As Pienemann makes clear, the assumptions he makes are common to most cognitive models, and, most importantly, they result in predictions that are highly falsifiable.

The two main strengths of this theory can be immediately appreciated: first, it provides not just a description, but an explanation of interlanguage development, and second, it is testable. The explanation is taken from experimental psycholinguistics, not from the data, and is thus able to make wide, strong predictions, and to apply to all future data. The predictions the theory makes are widely-applicable and, to some extent, testable: if we can find an L2 learner who has skipped a stage in the developmental sequence, then we will have found empirical evidence that challenges the theory. Since the theory also claims that the constraints on processability are not affected by context, even classroom instruction should not be able to change or reduce these stages. Pienemann’s Teachability Hypothesis, first proposed in 1984, predicts that items can only be successfully taught when learners are at the right stage of interlanguage development to learn them.

In summary, then, the explanation offered by the Processability Theory is not complete: it makes some innocuous but unfounded assumptions, has little to say about transfer, clashes with some empirical evidence, and does not entirely resolve the question of exactly what constitutes acquisition at each level. Finally, its domain is limited: the theory restricts itself to an account of processing in speech production, and while it suggests that a certain type of linguistic theory should complement it, it does not go into the details. These are significant issues, but they do not detract from the theory’s considerable strengths: it is well argued, it has high empirical content, it makes daring predictions, it has clear and wide-ranging teaching implications, it is broad in scope, it encourages and facilitates further research, and it can be seen as “progressive” in Lakatos’ terminology – extending its domain, refining its concepts, making its variables more operational, and attracting more research.

References to all works cited can be found in the “* Xtra Suggested Reading and References” page under SLA.
