
E is for Error

9 05 2010

“It’s self-evident,” wrote an MA student of mine recently, in an online forum, “that most learner errors are caused by mother tongue interference”.  Is it really self-evident? It was certainly self-evident in the mid-twentieth century, when the notion of interference reigned supreme.  But the advent of interlanguage studies put paid to that.  The new science of error analysis (as distinct from contrastive analysis)  suggested that many – some would say most – errors are the effect of developmental  processes and performance demands, and have nothing to do with the learner’s L1. This is evidenced by the fact that many errors are shared by learners from different language groups, and occur in a similar developmental sequence and under comparable processing conditions.

Büyük Han, Nicosia

Nevertheless, the idea that errors are caused by negative transfer is a persistent one and is still invoked in order to justify proscribing translation activities in the classroom (see T is for Translation). I was intrigued, therefore, to find that the case against L1 interference in fact predates the work of Pit Corder and Jack Richards in the 1960s and 70s, judging by a book I found in a second-hand bookshop in Nicosia this week. (The photo shows Büyük Han, the restored Ottoman inn in one corner of which the bookshop was nestled). The book is called Common Errors in English: Their Cause, Prevention and Cure (!), by F.G. French (published by OUP in 1949).  The author states his case thus:

The argument here presented is that if errors are due … to cross-association, then the Japanese form of error should be one thing and the Bantu form quite another. … But that is not the case. … The collection of ‘common errors’ … proves that the errors which exasperate teachers of English are indeed ‘common’.

French adds: “In seeking the source of error in the vernacular, the teacher is searching in the wrong field. The fact that the errors are common indicates that they have a common cause”.

This ‘common cause’, according to French (although he doesn’t use the term), is false hypothesizing, including over- and under-generalising.  (The antidote that the author suggests, by the way, is much more typical of its time: he recommends the ‘drilling-in’ of correct forms, and the ‘drilling-out’ of errors, all of which involves “considerable trouble and constant vigilance”).

In discussing this topic on the bus from Nicosia to Kyrenia en route to the conference dinner, Nick Jaworski asked why, if transfer were the explanation, his Turkish students willfully produce errors like *I went Antalya, when the analogous verb + prepositional phrase exists in Turkish (even if the preposition is attached as a suffix). The same might be asked of the commonly attested *I working, *the boys playing, etc., produced by speakers of languages, like Spanish, that have a matching auxiliary construction: estoy trabajando, los niños están jugando…

But is the case for interference  really dead and buried?  Isn’t it a fact that many (if not most) learner errors are – as my student suggested – directly traceable to L1 influence?  





T is for Time

2 05 2010

Rossini is supposed to have said of Wagner’s music: “He has some wonderful moments, but some terrible quarters of an hour”. I’ve observed (and taught!) lessons like that: some great moments but a lot of unnecessary time-wasting: over-prolonged warmers, games with little or no language output, instructions that take more time than the activity they’re designed to support, and so on. Time, I’ve come to the conclusion, is the single most wasted resource that teachers have available to them. And time is of the essence. The task of learning a second language is enormous. For many learners it is also expensive. To fritter the time away seems irresponsible.

Hence I’ve always liked the term “time-on-task”, since it captures for me an essential characteristic of good teaching: the capacity of the teacher to ensure that classroom time is optimized and that the learners are engaged in productive language activity to the fullest possible extent. This means, of course, that the learners know what is required of them – and there is a tension between, on the one hand, giving detailed instructions and, on the other, getting down to the task as quickly as possible. I knew one teacher who was dismissive about the need for clear task-setting. Her attitude was “Give them the material and let them get on with it – you can sort it out ‘in flight’”. I’m not sure I agree entirely, but I can see her point.

Likewise, I am suspicious of technology that isn’t already installed in the classroom and operational at the flick of a switch – or click of a mouse. Lesson time that is wasted in faffing about with cables and recalcitrant software is lost learning time. The same goes for games that require more explanation than their likely language affordances can possibly justify.

Time 'consumption' in different countries

Faffing about, as it happens, accounts for big chunks of lesson time in mainstream classes, according to figures that were published recently in a Spanish newspaper. The chart on the right shows how much time is lost in routine administrative activities (‘tareas administrativas’) and in controlling the class (‘mantenimiento de orden’), as compared to actual teaching (‘impartir clase’), in classrooms in a number of countries worldwide. Fortunate are the students in Bulgaria, where only around 10% of time is lost, compared to, say, Brazil, where up to a third of the lesson is frittered away.

Some recommendations, then, for exploiting time effectively:

1. Develop a set of reliable classroom routines that students will immediately recognise and which therefore require minimal explanation.

2. Resist the temptation to front-load the lesson with lots of warmers and ice-breakers. Get to the point as quickly as possible!

3. Evaluate any activity in terms of the likely language production it will generate against the time it will take to set up. If the pay-off is small, ditch the activity, or think of a quicker way of setting it up.

4. Use only those technological aids that you are already comfortable with and which are already installed and easily accessible in the classroom, and – even then – measure their worth against the language learning affordances that they are likely to provide.

5. Set for homework those activities (such as reading, listening, and doing grammar practice exercises) that might otherwise cut into classroom time that could more usefully be spent speaking.

6. Use the students’ L1 to cut corners, e.g. in explaining an activity, in providing glosses for unfamiliar vocabulary, in checking understanding of  a text, and even in presenting grammar.

7. Be punctual yourself – set a good example and impress on students the importance of starting (and finishing) on time. Likewise, don’t wait until the last student has arrived before you start the lesson.

8. With younger learners, reduce the time that needs to be spent maintaining order by keeping the pace of the lesson fairly upbeat, thereby avoiding the kinds of longueurs during which anti-social activity is likely to occur.

Any other suggestions out there?





N is for Native-speakerism

27 04 2010

There are periodic bouts of hand-wringing in the blogosphere on the subject of ‘native-speakerism’ – the term that Adrian Holliday coined to capture “the chauvinistic belief that ‘native speakers’ represent a ‘Western culture’ from which spring the ideals both of the language and of language teaching methodology” (2008, p. 49). It manifests itself not only in the adoption of native-speaker models as the most desirable standards of accuracy, but also in the dominance of native speaker “experts” at international, national and regional conferences. (Ironically, it is typically native speakers who are doing the hand-wringing: there is a dominant discourse trope in a lot of current ‘critical’ theory that consists of native-speaker academics condemning the pervasiveness of native-speakerism, while urging those who are oppressed by it to fling it off and assert their own legitimate identities as users, and hence owners, of global English. It’s as if the poison and the antidote are being administered by the same hand.)

I have just come back from a conference in Occupied Palestine/the West Bank/the Palestinian Territories/Judea and Samaria (choose the name according to your political persuasion – my preference is for the first). It was co-sponsored by three UK-based organizations (the British Council, Macmillan Education, and IATEFL) and it featured several native speaker experts, myself included. (There would have been more, but the Icelandic ash cloud put paid to that.) It had all the hallmarks, therefore, of the kind of disenfranchising native-speaker-fest that Holliday, Phillipson, Pennycook et al. decry. This was doubly ironic, perhaps, since the conference took place in a context where oppression is experienced on a daily basis – an oppression whose origins are directly traceable to the machinations of British imperialist strategists at the turn of the last century.

Yet, the conference was judged – by those who attended – a huge success. I can’t count the number of participants who thanked us for being there and who hoped we would be back. Invitations flowed. Two young teachers from Jenin, for example, urged me to come and visit their university: “We badly need native speakers”. A subsequent day’s training I did in a private school in East Jerusalem was similarly enthusiastically received.

Which leaves me in two minds. Clearly, the presence of foreign “experts” in a country where travel is so constrained, and where visitors are so few, acts as a kind of validation of the teachers’ collective commitment to their profession and to their national identity, as well as providing a rare break from the daily grind of checkpoints and restrictions. At the same time, their readiness to embrace imported methodologies, however capably presented by the (well-intentioned and highly-experienced) visitors, may divert attention away from the real task, which is to develop a homegrown methodology suitable for local conditions. As Holliday points out, “We should not model ‘best practice’, which is ideologically embedded, but encourage spaces for reflection on and scrutiny of existing practice” (op. cit., p. 59). But would a conference with these objectives have been half as attractive to the participants?

I suspect not. Even Holliday is realist enough to concede that “we must recognize people’s aesthetic preferences for types of English and types of speakers, and the possibility that they may prefer flavours from the English-speaking West over indigenous flavours for a multiplicity of reasons” (op. cit., p. 60). It’s this multiplicity that I’m presently trying to untangle, as I face the prospect of more trips to even less familiar contexts.

Reference:

Holliday, A. 2008. ‘What happens between people: who are we and what we do’. In Gieve, S., and Miller, I. (eds.) Understanding the Language Classroom. Palgrave Macmillan.





T is for Translation

21 04 2010

During a talk on grammar teaching techniques, last week in Turkey, one participant queried my suggestion that translation could be a useful technique for raising awareness of similarities and differences between the students’ L1 and the target language. I went so far as to suggest that – with some structures (such as the future perfect) – it could be the most economical way of presenting them. However, the participant felt (strongly) that encouraging learners to translate L1 forms into the L2 would cause negative transfer.

This led to an interesting discussion with other trainers and teachers, after the session, as to the current status of translation – specifically as a means of presenting grammar – on methodology courses, and prompted me to re-visit the entry in An A-Z of ELT. There I don’t exactly come out in favour of translation, but, in weighing up the pros and cons, I definitely give translation the last word. To quote:

Apart from being a skill in its own right, translation is also an aid to teaching and learning a second language. In this sense, translation has been central to some teaching methods, such as grammar translation, and frowned upon by others, such as the direct method. The reasons for not using translation in teaching include the following:

  • translation encourages a dependence on the L1, at the expense of the learner constructing an independent L2 system
  • translation encourages the notion of equivalence between languages, yet no two languages are exactly alike (although languages from the same language family may be similar in lots of respects)
  • the L1 system interferes with the development of the L2 system
  • translation is the “easy” approach to conveying meaning, and is therefore less memorable than approaches that require more mental effort, such as working out meaning from context
  • the “natural” way of acquiring a language is through direct experience and exposure, not through translation
  • translation is simply not feasible in classes of mixed nationalities, or where the teacher does not speak the learners’ L1.

On the other hand, the arguments for using translation in the classroom include:

  • new knowledge (e.g. of the L2) is constructed on the basis of existing knowledge (e.g. of the L1), and to ignore that is to deny learners a valuable resource
  • languages have more similarities than differences, and translation encourages the positive transfer of the similarities, as well as alerting learners to significant differences
  • translation is a time-efficient means of conveying meaning, compared, say, to demonstration, explanation, or working out meaning from context
  • learners will use translation, even if covertly, as a strategy for making sense of the L2, so it may as well be used as an overt tool
  • the skill of translation is an integral part of being a proficient L2 user, and contributes to overall plurilingualism
  • translation is a natural way of exploiting the inherent bilingualism of language classes, especially where the teacher is herself bilingual

The question is, do the pros outweigh the cons – or should I have emphasised the negative factors more strongly?





L is for Lockstep

12 04 2010

I’m preparing a talk for a conference next week, to be held in Palestine, and the subject I’ve chosen is “lockstep activities” – specifically “Six Lockstep Activities and How to Improve them.” (Shades of Lindsay Clandfield’s “6 Things” rubric, I know, but then, I’ve never been very original!) I figured that the topic would be an appropriate one in contexts where classroom management is a challenge, where classes are large, furniture fixed, and resources minimal. From experience of observing classes there (see the attached photo, taken on my last trip), that description fits the Palestine context fairly accurately.

Classroom in Palestine (photo by Gavin McLean of Macmillan ELT)

In my abstract I’ve defined “lockstep activities” as “whole-class activities, in which the class ‘marches in time’, as opposed to group, pair or individual work”. In their Dictionary of Language Teaching & Applied Linguistics (3rd edition, Longman, 2002) Richards and Schmidt hyphenate the term, and define it as “a situation in which all students in a class are engaged in the same activity at the same time, all progressing through tasks at the same rate” (p. 315). (Regrettably, there is no mention of lockstep in An A-Z of ELT – an oversight I hope to redress in a future edition).  Lockstep is the default setting for much traditional teaching and in all subject areas: the teacher at the front of the room, the students seated in rows and dutifully attentive.

Now I’m wondering where the term “lockstep” originated. According to the Free Online Dictionary, it is “A way of marching in which the marchers follow each other as closely as possible”. It also has a figurative meaning, and one with negative connotations, as in “A standardized procedure that is closely, often mindlessly followed.” It is this negative aspect that permeates the literature on lockstep activities in methodology books. Jeremy Harmer, for example, in the 1991 edition of The Practice of English Language Teaching (Longman) wrote: “Students working in lockstep get little chance to practise or to talk at all”.  And added: “Lockstep always goes at the wrong speed!” (p. 243).  However, Jeremy concedes that “we should not abandon the whole-class grouping completely”. It is, for example, the best grouping for giving instructions and for checking the answers to a reading or listening task.

Given the ubiquity of this grouping, especially in contexts such as Palestine, I’m looking for more ways of ‘accentuating the positive’, and specifically for ideas for making the most of such standard lockstep activities as drilling, reading aloud, dictation, and checking understanding of reading and listening tasks. The dynamic of reading aloud activities, for example, can be improved if – rather than the teacher nominating turns – the readers themselves nominate who the next reader will be (so long as it isn’t anyone who has read before).

So, if you have any ideas of ways of jazzing up lockstep activities, please let me know! All suggestions will be gratefully attributed.





S is for Scaffolding

4 04 2010

In An A-Z I include an entry for scaffolding, but don’t mention the fact that it has become such a buzz term that it’s starting to lose all significance. Teachers and trainers regularly talk about their role in ‘scaffolding’ learning, but if you unpick their examples, it’s difficult to distinguish these from simple question-and-answer sequences that have always characterised effective teaching. Here, for example, is an extract that Rod Ellis uses to exemplify scaffolding:

1 Teacher: I want you to tell me what you can see in the picture or what’s wrong with the picture.

2 Learner: A /paik/ (= bike)

3 Teacher: A cycle, yes. But what’s wrong?

4 Learner: /ret/ (= red)

5 Teacher: It’s red yes. What’s wrong with it?

6 Learner: Black

7 Teacher: Black. Good. Black what?

8 Learner: Black /taes/ (= tyres)

 (Ellis, 2003, p. 181)

 

Ellis explains that “the teacher is able to draw on his experience of communicating with low-level proficiency learners to adjust the demands of the task and to scaffold the interaction so that a successful outcome is reached” (p. 182). But I’m not convinced. It seems that – far from being an instance of co-constructed learning – the teacher and the learner are talking at cross-purposes, and that all this is mapped on to the traditional IRF (initiate–respond–follow-up) model of classroom discourse. This does not seem to embody Bruner’s (1978) definition of scaffolding as “the steps taken to reduce the degree of freedom in carrying out some tasks so that the child can concentrate on the difficult skill she is in the process of acquiring” (quoted in Gibbons, 2002).

What, then, are these ‘steps’? In the literature on scaffolding, a number of key features have been identified. Wood, Bruner, and Ross (1976), in one of the first attempts to define the term, itemise six:

1 recruiting interest in the task

2 simplifying the task

3 maintaining pursuit of the goal

4 marking critical features and discrepancies between what has been produced and the ideal solution

5 controlling frustration during problem solving

6 demonstrating an idealized version of the act to be performed.

(quoted in Ellis, op. cit.)

 

What they seem to leave out – and what is so attractive (to me) about the metaphor of scaffolding – is the relinquishing of the teacher’s role as the learner appropriates the targeted skill – what Applebee (1986) calls ‘transfer of control’: “As students internalize new procedures and routines, they should take a greater responsibility for controlling the progress of the task such that the amount of interaction may actually increase as the student becomes more competent” (quoted in Foley 1994). Also missing is what van Lier (1996) calls the “principle of continuity”, i.e. that “there are repeated occurrences, often over a protracted period of time, of a complex of actions, characterized by a mixture of ritual repetition and variations” (p. 195). That is to say, scaffolded learning is not a one-off event, but is embedded in repeated, semi-ritualised, co-authored language-mediated activities, typical of many classroom routines such as games and the opening class chat. Finally, any definition of scaffolding needs to highlight the fact that this kind of interaction is a site for learning opportunities, and is not simply a way of modelling, supporting, or practising interaction.

Does this tighter definition of scaffolding improve matters? Or is it now so tight that it deprives teachers of a useful metaphor for a whole range of classroom interactions?

References:

Ellis, R. 2003. Task-based Language Learning and Teaching. OUP.

Foley, J. 1994. ‘Key concepts: Scaffolding’. ELT Journal, 48/1.

Gibbons, P. 2002. Scaffolding Language, Scaffolding Learning. Heinemann (USA).

van Lier, L. 1996. Interaction in the Language Curriculum. Longman.





G is for Grice (and his Maxims)

28 03 2010

H.P. Grice

What would the language philosopher H.P. Grice have made of Twitter, I wonder? If you recall (and if you don’t, you have only to check the A-Z!) Grice formulated what is perhaps the most influential theory in the development of pragmatics, now best known as the Cooperative Principle:

The cooperative principle is the principle that speakers try to cooperate with one another. When people take part in conversations they do so on the assumption that the other speakers will observe certain unstated “rules”… (An A-Z of ELT)

 These rules (popularly known as Grice’s Maxims) are:

1. Maxim of quantity: Make your contribution just as informative as required.

2. Maxim of quality: Make your contribution one that is true.

3. Maxim of relation: Make your contribution relevant.

4. Maxim of manner: Avoid obscurity and ambiguity. Be brief and orderly.

Of course, speakers frequently violate these maxims, but they do so in the full knowledge that they are breaking the rules – and they will often signal that they are doing so, by, for example, prefacing a statement with “This is totally beside the point, but…” or “I’m sorry to bang on about it, but….” As I point out, in An A-Z, “Without the shared belief in a cooperative principle, we would be compelled to ask, after any utterance, Is that all? Is that true? What has that got to do with it? and Can you be any clearer? The fact that this only normally happens in a court of law suggests that, for day to day purposes, Grice’s maxims apply.”

Twitter seems both to affirm and to challenge Grice’s cooperative principle. In encouraging concision, the 140-character limit works brilliantly to enforce Maxim 1 (quantity) and, to a lesser extent, Maxim 4 (manner). But how do you explain the relevance (Maxim 3) of tweets like the following:

Went out and bought a plastic lining for the compost frame and put that in.

Chicken burger with avocado and blue cheese, accompanied by butternut squash wedges.

Sitting with my brother discussing the weather. 

By what possible standards could the above texts be considered relevant? And yet a significant proportion of tweets that are sent are of this nature. Perhaps the assumption is that, if you’ve chosen to follow me, everything I tweet is relevant. And that, in the absence of a shared world (which would confer a degree of relevance), trivia helps to create one.

Be that as it may, Grice’s maxims have helped in the formulation of some ground-rules for Discussion Board postings on the on-line MA TESOL that I teach on. For example:

1. be brief – 250 words max.

2. be relevant: stick to the topic; if you need to digress, signal the fact in your subject line.

3. be explicit: change the subject line to make it clear whether your posting is a new response to the main DB task, a digression (see above), or simply a social intervention.

To which I’ve added:

4. be original (i.e. no plagiarism);

5. be appropriate (i.e. this is an academic context, even if the medium tolerates a degree of informality); and

6. be courteous (i.e. no flaming).

So far, these rules seem to have worked fine, on the Discussion Boards, to encourage both cooperative interaction and critical thinking. What chance of imposing them on Twitter!?





L is for Learning Styles

21 03 2010

When I wrote An A-Z of ELT, I was not entirely persuaded by the argument that learners can be categorized in terms of their preferred learning style, whether visual, aural, kinesthetic etc.:

So far… there is no convincing evidence that any of these dispositions correlates with specific learning behaviours. Nor has it been shown that a preference in one area predicts success in language learning.  In fact, it is very difficult to separate learning style from other potentially influential factors, such as personality, intelligence, and previous learning experience.  Nor is it clear to what extent learning style can be manipulated, e.g. through learner training (p. 116).

However, I was prepared to accept the case for “meshing” learning style and teaching style: “If the learner’s preferred learning style is out of synch with the type of instruction on offer, then success is much less likely than if the two are well matched.”

It seems I was wrong. Alerted by a somewhat sensationalist headline in a recent Guardian Weekly (5th March) to the effect that Learning styles ‘are hogwash’, I hunted out the research study on which this shock-horror claim was based: ‘Learning Styles: Concepts and Evidence’, by Pashler, H., McDaniel, M., Rohrer, D., and Bjork, R., in Psychological Science in the Public Interest, 9/3, December 2008, pp. 105-119.

While the authors do not dismiss the notion of learning style outright, they cannot find any evidence for the ‘meshing hypothesis’, i.e. the idea that learning is optimised when instruction is matched to the individual learner’s learning style. Sifting through a host of studies on learning style, they found no study that proved conclusively that a teaching approach that was effective for one style of learner was NOT effective for a different style of learner. They concluded, therefore, that “there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice” (p. 105).

Given this lack of evidence, why has the case for matching teaching and learning style persisted? The authors of the paper suspect that this belief “may reflect the fact that people are concerned that they, and their children, be seen and treated by educators as unique individuals” (p. 107). Moreover, learning styles offer unsuccessful learners (and their parents) a stick to beat their teachers with: “If a person or a person’s child is not succeeding or excelling in school, it may be more comfortable for the person to think that the educational system … is responsible [and] that the fault lies with instruction being inadequately tailored to one’s learning style” (pp. 107-8). Learning styles, in other words, are a convenient untruth.

Rather than pigeon-holing learners into aural, visual, verbal, etc. types, the authors of this study “think the primary focus should be on identifying and introducing the experiences, activities, and challenges that enhance everybody’s learning” (p. 117, emphasis added). “Given the capacity of humans to learn, it seems especially important to keep all avenues, options, and aspirations open” (ibid.). Besides, an approach that focuses on what learners have in common, rather than on what differentiates them, is ultimately more practicable. The alternative – small groups of like-minded learners getting individualised instruction – is a luxury few educational institutions or systems can afford.

So, is this how I should re-write the last sentence of the entry on Learning Styles in An A-Z?

…Nor is it clear to what extent learning style can be manipulated, e.g. through learner training. Nor are there grounds (apart from wishful thinking) to believe that adapting teaching style to learning style produces any increments in learning.





C is for Conditional (the Third)

14 03 2010

A recent report on the BBC News website had this to say:

“The district committees approve plans weekly without informing me,” Interior Minister Eli Yishai, the chairman of the ultra-Orthodox Shas party, told Israel Radio on Wednesday morning.

“If I’d have known, I would have postponed the authorisation by a week or two since we had no intention of provoking anyone.”  (March 10th, 2010)

If I’d have known – not If I’d known … Whose wording was this, I wonder? Eli Yishai’s? Israel Radio’s? Or the BBC’s? Interesting, anyway, that the BBC didn’t feel the need to correct it. Maybe they didn’t even notice it, so frequent has it become.

Out of interest, I ran a check using – just for fun – this database of cinema screenplays (thanks to Nik Peachey for this link) to see how often – and how far back – the “if I’d have known…” conditional occurs. Here are some examples:

Sorry about that. If I’d have known, I’d take you to New Orleans (Apocalypse Now 1979)

If I’d have known this was going to be the last time me and Bubba…  (Forrest Gump 1994)

Yeah? I’d have brought my gloves if I’d have known  (Lock Stock and Two Smoking Barrels 1998)

If I’d’ve known this was gonna happen, I’d have brought my motherfuckin’ gun! Help!   (The Rock, 1996)

These examples suggest that the phraseology is most common with the verb know – forming what amounts to a fixed expression. But what about these?

I couldn’t live with myself if I’d have hit her. (Cinderella Man 2005)

If I’d have fallen asleep then I would have ended up in a ditch with a headache .(Signs 2002)

If I’d have went to jail, I’d be getting out today (Jarhead 2005)

And just to show that the usage is not new, here’s an example from over 50 years ago!

If I’d have been careful piloting that reconnaissance plane you wouldn’t have had the chance to take the pictures

(Rear Window 1954)
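(By way of illustration only: if you had a folder of screenplays saved as plain-text files, a short script along the following lines could run the same kind of check. This is a minimal sketch, not the search tool behind the database linked above; the folder name and the variant spellings it looks for are my own assumptions.)

import re
from pathlib import Path

# Match "if I'd have ..." or "if I'd've ...", with straight or curly apostrophes.
PATTERN = re.compile(r"\bif\s+I['\u2019]d(?:['\u2019]ve|\s+have)\s+\w+", re.IGNORECASE)

def find_examples(folder: str = "screenplays") -> None:
    """Print every line in the folder's .txt files that contains the construction."""
    for path in sorted(Path(folder).glob("*.txt")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for number, line in enumerate(text.splitlines(), start=1):
            if PATTERN.search(line):
                print(f"{path.name}:{number}: {line.strip()}")

if __name__ == "__main__":
    find_examples()

Run over such a folder, it simply prints each matching line with its file name, much as the examples are listed above.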

All of which raises the question – if this usage is so well-established –  should we accept it when our students produce it?





S is for “Strategies”

7 03 2010

At the ELTONs awards ceremony in London this week, I had the good fortune to be seated next to Ingrid Freebairn. If you weren’t already teaching in the 70s and 80s, you might not know that Ingrid was one of the team that wrote  Strategies (what became known as ‘white Strategies’), published by Longman in 1975. Strategies was the first major course to espouse a functional (or notional-functional) syllabus. Up until then the structural syllabus – that legacy of audiolingualism – was still the reigning paradigm. A structural syllabus is a form-based syllabus, organized primarily according to criteria of structural complexity. So, you start with the verb to be, then the present continuous, then the present simple, and so on.  Strategies, by contrast, adopted a semantic organization, with unit headings such as Invitations, Ability, Polite requests, Recent Activities and Speculating about the past. In the words of the blurb on the back cover: “In these materials the criteria are primarily functional and secondarily structural.”

In adopting a semantic organisation, Strategies was instrumental in introducing the ‘communicative approach’ to a generation of teachers (including myself) who had been formed during the late-audiolingual era. Although still labelled “functional-notional”,  the approach that Strategies embodied  was communicative. It had to be, because, if you base your curricular goals around functions (such as Polite requests) or notions (such as Ability), you need a methodology that allows these meaning-driven goals to be realized in terms of classroom activity. You need role plays and dialogues. Moreover, you need activities that distract attention away from a focus on (grammatical) form and, instead, encourage a concern for meaningful interaction. So you need communicative games, information-gap tasks, and jigsaw activities. For someone like myself who had been trained mainly to elicit, drill and correct structural patterns, this radical shift in learning objectives and teaching procedures was truly revolutionary.

For that reason I have always had a soft spot for ‘white’ Strategies, and its subsequent re-packaging as the (more systematic and more colourful) Strategies series (Starting… Opening… Building… Developing…). So, as we tucked into the ELTONs dinner, I happened to ask Ingrid what had inspired the concept behind the Strategies series. While she did not quite echo my sentiment of “bliss was it in that dawn to be alive!”, she did confirm that the mid-seventies was an exhilarating time for methodologists and materials writers, when the sense of a sea-change was palpable, and when the publishers, too, were prepared to throw caution to the wind.

The original site of the University of Reading

What I hadn’t realized, until talking with Ingrid, was that the thinking behind Strategies was directly influenced by the work of David Wilkins, then at the University of Reading, where Ingrid had just completed a Masters degree. Wilkins was one of the chief architects of what would come to be known as the communicative approach: his seminal Notional Syllabuses, building on his work with the Council of Europe, would be published in 1976. In fact, when I got home and pulled down my copy of Strategies (long ago rescued from a recycling bin at IH Barcelona) I found that this connection is explicitly acknowledged:

From the work of David Wilkins we took as our starting point this quotation:

What people want to do through language is more important than the mastery of language as an unapplied system.

How come I had never noticed that acknowledgement before? More worryingly, what happened, subsequently, to reverse this sea-change – to make the structural syllabus the primary one again, and the semantic one only secondary? Why is it that “the mastery of language as an unapplied system” again takes precedence over its communicative purposes? What happened to the communicative approach?