Requirements for a Theory of SLA


This follows on from the previous post about science and SLA. It’s an abridged version of the end of Chapter 5 of my book “Theory Construction in SLA”.

McLaughlin (1987), discussing theory construction in SLA, says that a theory of SLA should give a causal explanation of the phenomena. He agrees that first stage proto-theories (what Long (1985b) has called “storehouse” theories, and what Spolsky (1989) calls “set-of-laws” theories) are collections of often unrelated generalisations about phenomena. He gives these examples:

1. Adult SL learners learn faster than children but attain lower levels of ultimate proficiency.

2. Learners pass through a certain developmental sequence of structures.

3. Errors made by learners in acquiring certain structures in a particular L2 are similar for all L1s.

If these generalisations are not unified under a general theory, then, in McLaughlin’s opinion, they lead nowhere – they do not provide any coherent explanation of the phenomena we want to explain, nor can they lead to new hypotheses. McLaughlin sees one important task for theory builders in SLA as being to try to fit the different “bits” together.

McLaughlin suggests that an SLA theory should meet various types of requirements. First, there are requirements to do with meeting the correspondence norm.

• A theory should correspond to external reality. In effect this means that a theory must have empirical elements.

• The concepts employed in a theory must be described so that anyone will interpret them in the same way.

• Terms used in the theory may be drawn from everyday language, or the theorist may invent his own terms. If a term is drawn from everyday language, then all ambiguity must be removed. If the term is a neologism, it can be precisely defined but risks being misunderstood; an example is intake. Operational definitions are very helpful.

• A theory must have explanatory power – good theories go beyond the facts and can be generalised.

A good theory meets the norms of correspondence when the explanation it provides applies to a specified range of phenomena and when the conditions suitable to its application are met.

Second, there are coherence norms. The simpler a theory is, the better. Do not multiply variables, do not use ad hoc explanations. A theory should be consistent with other theories in the field. McLaughlin gives the example of telepathy being suspect because it is the only form of transmitted information that is not affected by the distance travelled.

Third, there is the pragmatic norm: a theory should be practical – it should make predictions.

And finally, a theory must be falsifiable. An adequate hypothesis is one that repeatedly survives “probing”.

With the one exception of the requirement that the theory should be consistent with other theories in the field, I endorse these requirements, which will be incorporated into my summary below. The “consistency” requirement seems similar to Laudan’s arguments that were discussed in the earlier post, and to the demands made by Long (2007) and Gregg (2006), but I can see no good reason to make such a requirement. If there are those who want to suggest that telepathy is part of an explanation of SLA, then they must make their case, and the community can then decide what to make of it.

Preamble

Although there are certainly problems in the conduct of research in the field of SLA, none gives any reason to abandon the project of conducting research and building theories of SLA in a rational way that includes the use of empirical data, just as the arguments of radical scepticism do not force one to abandon a rationalist approach to problem-solving in general. Conceptual problems do exist, but they are not insurmountable, and neither are the problems of observer bias or experimental conditions. Biologists, chemists, even physicists, face problems with the proper conceptualisation of phenomena, viable constructs, taxonomies, classification, experimental procedures, measurement, statistical analysis, etc., and it would be wrong to think that the problems facing a theory of SLA were so great as to disqualify it from scientific status. There are, of course, differences between a theory of gravity and a theory of SLA, but there are also differences between theories of genetics, evolution, geometry, information, etc. Physics has always been considered the most exact science, and it is quite unnecessary for a theory of SLA to adopt its language, its instruments or its peculiar experimental procedures.

The decision of the Nobel committee in 2000 to award the prize in economic sciences to two economists working in econometrics, for their work on new statistical procedures used to interpret data, is significant. The economists tackled such apparently unscientific questions as: Why do people buy a Ford rather than a Nissan? How valuable is a university degree? At what age do senior managers work best? What factors cause people to develop cancer? While decision-makers in industry and commerce are accustomed to using gut reactions, focus groups or simple correlations to answer these questions, the prize winners found more rigorous solutions to some of them.

The methods adopted by any body of researchers will depend on the phenomena to be explained. All that is needed is that researchers in the field agree on criteria for the construction and testing of a theory of SLA which are rational, consensible in Ziman’s term, and likely to lead to consensus. There is, as in any field, good research and bad research going on in SLA, and I suggest that it behoves those working in the field to clarify the guidelines for their work. It would surely be “a good thing” if those working on SLA research could reach some general agreement on the research methodology to be used. If there were general agreement, they would then need to agree on the selection of phenomena, and on the testing and validation of hypotheses by the collective activity of those researchers working in the field. The question is whether those working in the field of SLA can produce an unambiguous framework of concepts and relations which will allow them to understand the phenomena involved in acquiring a second language and to make successful predictions about when and how the process occurs.



The Guidelines

I propose the following guidelines for those interested in constructing a rational theory of SLA:

A. Assumptions

1. An external world exists independently of our perceptions of it. It is possible to study different phenomena in this world, to make meaningful statements about them, and to improve our knowledge of them. This amounts to a minimally realist epistemology, and therefore excludes those who claim that there is no objective way to judge among competing theories.

2. Research is inseparable from theory. We cannot just observe the world: all observation involves theorising. As Popper (1959) argued, there is no way we can talk about something sensed and not interpreted. This is a rejection of the behaviourist and logical positivist position, but it does not exclude all empiricists. In discussions with those in the field of SLA who wish to challenge the critical rationalist research methodology, it is important to emphasise that we are not traditional “empiricists” or any kind of “positivists”.

3. Theories attempt to explain phenomena. Observational data are used to support and test those theories.

4. Research is fundamentally concerned with problem-solving. Research in SLA should be seen as attempted explanations. Data collection, taxonomies, “rich descriptions” of events, etc., must be in the service of an explanatory theory. Hypotheses are the beginning of attempts to solve problems. Hypotheses should lead to theories that organise and explain a certain group of phenomena and the hypotheses about them. Theories are explanations and are the final goal of research. The aim should be to unify descriptions and low-level theories into a general causal theory.

5. We cannot formalise “the scientific method”. Science is not only experimentation in a laboratory, it is not only physics, and, in any case, it is not necessary for a theory of SLA to be “scientific” in any narrow sense. There is no strict demarcation line between “science” and “non-science”: there is no small set of rules, adherence to which defines the scientific method, and no need for SLA researchers and theory builders to emulate the methods of physics, for example. There is no one road to theory (we do not have to start with the careful accumulation of data, or with universal principles). SLA research needs a multi-method approach.

6. There is no need for paradigmatic theories. As many theories as possible should be encouraged; there is no need for Lakatos’ protective belt, no need for Laudan’s research programmes. (See Long, 2007, for an opposing view). We should clearly distinguish between the context of discovery and the context of justification. It is interesting and informative to trace the history of a theory, to see theories in terms of paradigms, etc., but such considerations have little to do with the question of whether the theory is reasonable, supported by evidence, confirmed by experiment, etc. Perhaps the main function of considerations to do with the context of discovery is to encourage tolerance of young theories. As far as the context of justification is concerned, we should base ourselves on the principle of criticism: all theories should be open to as much criticism as possible.

B. Criteria for the evaluation of SLA theories

7. Research, hypotheses, and theories should be coherent, cohesive, expressed in the clearest possible terms, and consistent. There should be no internal contradictions in theories, and no circularity due to badly-defined terms. Badly-defined terms and unwarranted conclusions must be uncovered, and the clearest, simplest expression of the theory must be sought.

8. Theories should have empirical content. Propositions should be capable of being subjected to an empirical test. This implies that hypotheses should be capable of being supported or refuted, that hypotheses should not fly in the face of well-established empirical findings, and that research should be done in such a way that it can be observed, evaluated and replicated by others. The operational definition of variables is an extremely important way of ensuring that hypotheses and theories have empirical content. A final part of this criterion is that theories should avoid ad hoc hypotheses.

9. Theories should be fruitful. “Fruitful” in Kuhn’s sense (see Kuhn, 1963:148): they should make daring and surprising predictions, and solve persistent problems in their domain.

10. Theories should be broad in scope. Ceteris paribus, the wider the scope of a theory, the better it is.

11. Theories should be simple. Following the principle of Occam’s Razor, ceteris paribus, the theory with the simplest formulation, and the fewest basic types of entity postulated, is to be preferred for reasons of economy.

Finally, Casti (1989) provides some “hallmarks of pseudoscience” which may help to define good research practice by indicating practices to be avoided.

• A casual approach to evidence: pseudoscientists think that sheer quantity of evidence makes up for any deficiency in quality; they do not scrutinise the evidence carefully and do not drop questionable evidence.

• Irrefutable hypotheses: “If nothing conceivable could speak against the hypotheses then it has no claim to be labelled scientific.” (Casti, 1989: 58)

• Explanation by scenario: Casti cites Velikovsky, who states that Venus’ near collision with the earth caused the earth to flip over and reverse its magnetic poles. Velikovsky offers no mechanism by which this cosmic event could have taken place, and the basic principle of deducing consequences from general principles is totally ignored in his “explanation” of such phenomena (Casti, 1989: 59).

• Research by literary interpretation: focusing on the words, not on the underlying facts and reasons for the statements that appear in scientific literature, and then suggesting what the writer “really meant”, or could be interpreted as meaning.

• Refusal to revise: pseudoscientists either couch their work in such vague terms that it cannot be criticised or they refuse to acknowledge criticism. “They see scientific debate not as a mechanism for scientific progress but as an exercise in rhetorical combat.” (Casti, 1989: 59)

I would add a predilection for obscure prose to the list. Obscure prose was anathema to Popper, who also believed that scientists should publicly announce changes in their positions and explain what led them to make such changes. As Koertge says of the postmodernists, whom she calls the “pomo-cons authors”: “They cover up or rationalize subtle shifts in the doctrines they espouse or present radically different emphases according to the audience they are addressing. As anyone who has tried to give a précis of many of the pomo-cons authors has discovered, it is often extremely hard to pin them down” (Koertge, 1996: 270).

These minimum conditions allow for a wide range of research methods and programmes. Both qualitative and quantitative methods may be used. Quantitative research programmes conform more easily to these conditions, since they use a deductive methodology that starts with a theory or hypothesis, gathers data to test it, and then returns to the theory at the end of the study to see how it has stood up to the tests. Qualitative research tends to begin by gathering information without doing more than articulating some questions (sometimes even the questions come later), and then looking for patterns out of which hypotheses or a theory can develop. Of course there are many, particularly in the field of SLA, who use a quantitative approach and yet do not develop any very challenging hypotheses or powerful theories, and there is nothing in principle to prevent those using a qualitative approach from developing a fully-fledged theory. In any case, if no hypothesis or theory emerges, or if there is no attempt to compare a pattern or theory with other theories, then the research has done no more than add to the data that a more adventurous researcher may put to some use. When evaluating theories that have survived rigorous examination and a certain amount of testing, the bolder the theory is in terms of its predictive ability, and the wider its explanatory power, the better.
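To make the deductive cycle described above concrete, here is a minimal sketch in Python of how one of McLaughlin’s generalisations (that adult starters reach lower ultimate proficiency than child starters) might be operationalised and probed. Everything in it is an illustrative assumption: the scores are invented placeholder values, “ultimate proficiency” is operationally defined here as a standardised test score, and the two-sample t-test from scipy.stats stands in for whatever analysis a real study would need to justify.

```python
# Illustrative sketch only: invented scores, hypothetical study design.
from scipy import stats

# Operational definition (assumed): "ultimate proficiency" = score on a
# standardised proficiency test after long-term residence in the L2 country.
child_starters = [92, 88, 95, 90, 87, 93, 89, 91]   # began the L2 in childhood (invented data)
adult_starters = [78, 85, 72, 80, 76, 83, 74, 79]   # began the L2 as adults (invented data)

# Deductive step: the hypothesis predicts a higher mean for child starters.
t_stat, p_value = stats.ttest_ind(child_starters, adult_starters)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Return to the theory: a low p-value in the predicted direction means the
# hypothesis has survived this particular probe; a null or reversed result
# weakens it and sends the researcher back to revise the hypothesis or the
# operational definitions.
```

The point is not the statistics but the shape of the cycle: hypothesis, operational definition, data, test, and back to the theory.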

See the Suggested Reading (SLA) page for all references.
