An Introduction to Language, 9th Edition
Victoria Fromkin • Robert Rodman • Nina Hyams

Classification of American English Vowels

Tongue Height    FRONT              CENTRAL            BACK (rounded)
HIGH             i  beet                               u  boot
                 ɪ  bit                                ʊ  put
MID              e  bait            ə  Rosa            o  boat
                 ɛ  bet             ʌ  butt            ɔ  bore
LOW              æ  bat             a  bomb

A Phonetic Alphabet for English Pronunciation

Consonants
p  pill      t  till      k  kill
b  bill      d  dill      g  gill
m  mill      n  nil       ŋ  ring
f  feel      s  seal      h  heal
v  veal      z  zeal      l  leaf
θ  thigh     tʃ chill     r  reef
ð  thy       dʒ gin       j  you
ʃ  shill     ʍ  which     w  witch
ʒ  measure

Vowels
i  beet      ɪ  bit
e  bait      ɛ  bet
u  boot      ʊ  foot
o  boat      ɔ  bore
æ  bat       a  pot/bar
ʌ  butt      ə  sofa
aɪ bite      aʊ bout
ɔɪ boy

An Introduction to Language, Ninth Edition

VICTORIA FROMKIN, Late, University of California, Los Angeles
ROBERT RODMAN, North Carolina State University, Raleigh
NINA HYAMS, University of California, Los Angeles

Australia • Brazil • Japan • Korea • Mexico • Singapore • Spain • United Kingdom • United States

An Introduction to Language, Ninth Edition
Victoria Fromkin, Robert Rodman, Nina Hyams

Senior Publisher: Lyn Uhl
Publisher: Michael Rosenberg
Development Editor: Joan M. Flaherty
Assistant Editor: Jillian D'Urso
Editorial Assistant: Erin Pass
Media Editor: Amy Gibbons
Marketing Manager: Christina Shea
Marketing Coordinator: Ryan Ahern
Marketing Communications Manager: Laura Localio
Senior Content Project Manager: Michael Lepera
Senior Art Director: Cate Rickard Barr
Senior Print Buyer: Betsy Donaghey
Permissions Editor: Bob Kauser
Production Service/Compositor: Lachina Publishing Services
Text Designer: Brian Salisbury
Photo Manager: John Hill
Cover photograph: © Ed Scott/maXximages.com

© 2011, 2007, 2003 Wadsworth, Cengage Learning

ALL RIGHTS RESERVED. No part of this work covered by the copyright herein may be reproduced, transmitted, stored or used in any form or by any means graphic, electronic, or mechanical, including but not limited to photocopying, recording, scanning, digitizing, taping, Web distribution, information networks, or information storage and retrieval systems, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act or applicable copyright law of another jurisdiction, without the prior written permission of the publisher.

For permission to use material from this text or product, submit all requests online at www.cengage.com/permissions. Further permissions questions can be emailed to permissionrequest@cengage.com.

International Student Edition:
ISBN-13: 978-1-4390-8241-6
ISBN-10: 1-4390-8241-3

Cengage Learning International Offices: Asia (cengageasia.com), Australia/New Zealand (cengage.com.au), Brazil (cengage.com.br), India (cengage.co.in), Latin America (cengage.com.mx), UK/Europe/Middle East/Africa (cengage.co.uk). Represented in Canada by Nelson Education, Ltd. (nelson.com). Cengage Learning is a leading provider of customized learning solutions with office locations around the globe, including Singapore, the United Kingdom, Australia, Mexico, Brazil, and Japan. Locate your local office at www.cengage.com/global.
Printed in Canada

For product information: www.cengage.com/international
Visit your local office: www.cengage.com/global
Visit our corporate website: www.cengage.com

In memory of Irene Moss Hyams

Contents

Preface
About the Authors

PART 1 The Nature of Human Language

INTRODUCTION Brain and Language
The Human Brain • The Localization of Language in the Brain • Aphasia • Brain Imaging Technology • Brain Plasticity and Lateralization in Early Life • Split Brains • Other Experimental Evidence of Brain Organization • The Autonomy of Language • Other Dissociations of Language and Cognition • Laura • Christopher • Genetic Basis of Language • Language and Brain Development • The Critical Period • A Critical Period for Bird Song • The Development of Language in the Species • Summary • References for Further Reading • Exercises

PART 2 Grammatical Aspects of Language

CHAPTER 1 Morphology: The Words of Language
Dictionaries • Content Words and Function Words • Morphemes: The Minimal Units of Meaning • Bound and Free Morphemes • Prefixes and Suffixes • Infixes • Circumfixes • Roots and Stems • Bound Roots • Rules of Word Formation • Derivational Morphology • Inflectional Morphology • The Hierarchical Structure of Words • Rule Productivity • Exceptions and Suppletions • Lexical Gaps • Other Morphological Processes • Back-Formations • Compounds • "Pullet Surprises" • Sign Language Morphology • Morphological Analysis: Identifying Morphemes • Summary • References for Further Reading • Exercises

CHAPTER 2 Syntax: The Sentence Patterns of Language
What the Syntax Rules Do • What Grammaticality Is Not Based On • Sentence Structure • Constituents and Constituency Tests • Syntactic Categories • Phrase Structure Trees and Rules • Heads and Complements • Selection • What Heads the Sentence • Structural Ambiguities • More Structures • Sentence Relatedness • Transformational Rules • The Structural Dependency of Rules • Further Syntactic Dependencies • UG Principles and Parameters • Sign Language Syntax • Summary • References for Further Reading • Exercises

CHAPTER 3 The Meaning of Language
What Speakers Know about Sentence Meaning • Truth • Entailment and Related Notions • Ambiguity • Compositional Semantics • Semantic Rules • Semantic Rule I • Semantic Rule II • When Compositionality Goes Awry • Anomaly • Metaphor • Idioms • Lexical Semantics (Word Meanings) • Theories of Word Meaning • Reference • Sense • Lexical Relations • Semantic Features • Evidence for Semantic Features • Semantic Features and Grammar • Argument Structure • Thematic Roles • Pragmatics • Pronouns • Pronouns and Syntax • Pronouns and Discourse • Pronouns and Situational Context • Deixis • More on Situational Context • Maxims of Conversation • Implicatures • Speech Acts • Summary • References for Further Reading • Exercises

CHAPTER 4 Phonetics: The Sounds of Language
Sound Segments • Identity of Speech Sounds • The Phonetic Alphabet • Articulatory Phonetics • Consonants • Place of Articulation • Manner of Articulation • Phonetic Symbols for American English Consonants • Vowels • Tongue Position • Lip Rounding • Diphthongs • Nasalization of Vowels • Tense and Lax Vowels • Different (Tongue) Strokes for Different Folks • Major Phonetic Classes • Noncontinuants and Continuants • Obstruents and Sonorants • Consonantal • Syllabic Sounds • Prosodic Features • Tone and Intonation • Phonetic Symbols and Spelling Correspondences • The "Phonetics" of Signed Languages • Summary • References for Further Reading • Exercises

CHAPTER 5 Phonology: The Sound Patterns of Language
The Pronunciation of Morphemes • The Pronunciation of Plurals • Additional Examples of Allomorphs • Phonemes: The Phonological Units of Language • Vowel Nasalization in English as an Illustration of Allophones • Allophones of /t/ • Complementary Distribution • Distinctive Features of Phonemes • Feature Values • Nondistinctive Features • Phonemic Patterns May Vary across Languages • ASL Phonology • Natural Classes of Speech Sounds • Feature Specifications for American English Consonants and Vowels • The Rules of Phonology • Assimilation Rules • Dissimilation Rules • Feature-Changing Rules • Segment Insertion and Deletion Rules • Movement (Metathesis) Rules • From One to Many and from Many to One • The Function of Phonological Rules • Slips of the Tongue: Evidence for Phonological Rules • Prosodic Phonology • Syllable Structure • Word Stress • Sentence and Phrase Stress • Intonation • Sequential Constraints of Phonemes • Lexical Gaps • Why Do Phonological Rules Exist? • Phonological Analysis • Summary • References for Further Reading • Exercises

PART 3 The Biology and Psychology of Language

CHAPTER 6 What Is Language?
Linguistic Knowledge • Knowledge of the Sound System • Knowledge of Words • Arbitrary Relation of Form and Meaning • The Creativity of Linguistic Knowledge • Knowledge of Sentences and Nonsentences • Linguistic Knowledge and Performance • What Is Grammar? • Descriptive Grammars • Prescriptive Grammars • Teaching Grammars • Language Universals • The Development of Grammar • Sign Languages: Evidence for the Innateness of Language • American Sign Language • Animal "Languages" • "Talking" Parrots • The Birds and the Bees • Can Chimps Learn Human Language? • In the Beginning: The Origin of Language • Divine Gift • The First Language • Human Invention or the Cries of Nature? • Language and Thought • What We Know about Human Language • Summary • References for Further Reading • Exercises

CHAPTER 7 Language Acquisition
Mechanisms of Language Acquisition • Do Children Learn through Imitation? • Do Children Learn through Correction and Reinforcement? • Do Children Learn Language through Analogy? • Do Children Learn through Structured Input? • Children Construct Grammars • The Innateness Hypothesis • Stages in Language Acquisition • The Perception and Production of Speech Sounds • Babbling • First Words • Segmenting the Speech Stream • The Development of Grammar • Setting Parameters • The Acquisition of Signed Languages • Knowing More Than One Language • Childhood Bilingualism • Theories of Bilingual Development • Two Monolinguals in One Head • The Role of Input • Cognitive Effects of Bilingualism • Second Language Acquisition • Is L2 Acquisition the Same as L1 Acquisition? • Native Language Influence in L2 Acquisition • The Creative Component of L2 Acquisition • Is There a Critical Period for L2 Acquisition? • Summary • References for Further Reading • Exercises

CHAPTER 8 Language Processing: Humans and Computers
The Human Mind at Work: Human Language Processing • Comprehension • The Speech Signal • Speech Perception and Comprehension • Bottom-up and Top-down Models • Lexical Access and Word Recognition • Syntactic Processing • Speech Production • Planning Units • Lexical Selection • Application and Misapplication of Rules • Nonlinguistic Influences • Computer Processing of Human Language • Computers That Talk and Listen • Computational Phonetics and Phonology • Computational Morphology • Computational Syntax • Computational Semantics • Computational Pragmatics • Computational Sign Language • Applications of Computational Linguistics • Computer Models of Grammar • Frequency Analysis, Concordances, and Collocations • Computational Lexicography • Information Retrieval and Summarization • Spell Checkers • Machine Translation • Computational Forensic Linguistics • Summary • References for Further Reading • Exercises

PART 4 Language and Society

CHAPTER 9 Language in Society
Dialects • Regional Dialects • Phonological Differences • Lexical Differences • Dialect Atlases • Syntactic Differences • Social Dialects • The "Standard" • African American English • Latino (Hispanic) English • Genderlects • Sociolinguistic Analysis • Languages in Contact • Lingua Francas • Contact Languages: Pidgins and Creoles • Creoles and Creolization • Bilingualism • Codeswitching • Language and Education • Second-Language Teaching Methods • Teaching Reading • Bilingual Education • "Ebonics" • Language in Use • Styles • Slang • Jargon and Argot • Taboo or Not Taboo? • Euphemisms • Racial and National Epithets • Language and Sexism • Marked and Unmarked Forms • Secret Languages and Language Games • Summary • References for Further Reading • Exercises

CHAPTER 10 Language Change: The Syllables of Time
The Regularity of Sound Change • Sound Correspondences • Ancestral Protolanguages • Phonological Change • Phonological Rules • The Great Vowel Shift • Morphological Change • Syntactic Change • Lexical Change • Change in Category • Addition of New Words • Word Coinage • Words from Names • Blends • Reduced Words • Borrowings or Loan Words • Loss of Words • Semantic Change • Broadening • Narrowing • Meaning Shifts • Reconstructing "Dead" Languages • The Nineteenth-Century Comparativists • Cognates • Comparative Reconstruction • Historical Evidence • Extinct and Endangered Languages • The Genetic Classification of Languages • Languages of the World • Types of Languages • Why Do Languages Change? • Summary • References for Further Reading • Exercises

CHAPTER 11 Writing: The ABCs of Language
The History of Writing • Pictograms and Ideograms • Cuneiform Writing • The Rebus Principle • From Hieroglyphics to the Alphabet • Modern Writing Systems • Word Writing • Syllabic Writing • Consonantal Alphabet Writing • Alphabetic Writing • Writing and Speech • Spelling • Spelling Pronunciations • Summary • References for Further Reading • Exercises

Glossary
Index
Preface

Well, this bit which I am writing, called Introduction, is really the er-h'r'm of the book, and I have put it in, partly so as not to take you by surprise, and partly because I can't do without it now. There are some very clever writers who say that it is quite easy not to have an er-h'r'm, but I don't agree with them. I think it is much easier not to have all the rest of the book.
A. A. MILNE, Now We Are Six, 1927

The last thing we find in making a book is to know what we must put first.
BLAISE PASCAL (1623–1662)

The ninth edition of An Introduction to Language continues in the spirit of our friend, colleague, mentor, and coauthor, Victoria Fromkin. Vicki loved language, and she loved to tell people about it. She found linguistics fun and fascinating, and she wanted every student and every teacher to think so, too. Though this edition has been completely rewritten for improved clarity and currency, we have nevertheless preserved Vicki's lighthearted, personal approach to a complex topic, including witty quotations from noted authors (A. A. Milne was one of Vicki's favorites). We hope we have kept the spirit of Vicki's love for teaching about language alive in the pages of this book.

The first eight editions of An Introduction to Language succeeded, with the help of dedicated teachers, in introducing the nature of human language to tens of thousands of students. This is a book that students enjoy and understand and that professors find effective and thorough. Not only have majors in linguistics benefited from the book's easy-to-read yet comprehensive presentation, but majors in fields as diverse as teaching English as a second language, foreign language studies, general education, psychology, sociology, and anthropology have also enjoyed learning about language from this book.

Highlights of This Edition

This edition includes new developments in linguistics and related fields that will strengthen its appeal to a wider audience. Much of this information will enable students to gain insight and understanding about linguistic issues and debates appearing in the national media and will help professors and students stay current with important linguistic research. We hope that it may also dispel certain common misconceptions that people have about language and language use.

Many more exercises (240) are available in this edition than ever before, allowing students to test their comprehension of the material in the text. Many of the exercises are multipart, amounting to more than 300 opportunities for "homework" so that instructors can gauge their students' progress. Some exercises are marked as "challenge" questions if they go beyond the scope of what is ordinarily expected in a first course in language study. An answer key is available to instructors to assist them in areas outside of their expertise.

The Introduction, "Brain and Language," retains its forward placement in the book because we believe that one can learn about the brain through language, and about the nature of the human being through the brain. This chapter may be read and appreciated without technical knowledge of linguistics. When the centrality of language to human nature is appreciated, students will be motivated to learn more about human language, and about linguistics, because they will be learning more about themselves.
As in the previous edition, highly detailed illustrations of MRI and PET scans of the brain are included, and this chapter highlights some of the new results and tremendous progress in the study of neurolinguistics over the past few years. The arguments for the autonomy of language in the human brain are carefully crafted so that the student sees how experimental evidence is applied to support scientific theories.

Chapters 1 and 2, on morphology and syntax, have been heavily rewritten for increased clarity, while weaving in new results that reflect current thinking on how words and sentences are structured and understood. In particular, the chapter on syntax continues to reflect the current views on binary branching, heads and complements, selection, and X-bar phrase structure. Non-English examples abound in these two chapters and throughout the entire book. The intention is to enhance the student's understanding of the differences among languages as well as the universal aspects of grammar. Nevertheless, the introductory spirit of these chapters is not sacrificed, and students gain a deep understanding of word and phrase structure with a minimum of formalisms and a maximum of insightful examples and explanations, supplemented as always by quotes, poetry, and humor.

Chapter 3, on semantics or meaning, has been more highly structured so that the challenging topics of this complex subject can be digested in smaller pieces. Still based on the theme of "What do you know about meaning when you know a language?", the chapter first introduces students to truth-conditional semantics and the principle of compositionality. Following that are discussions of what happens when compositionality fails, as with idioms, metaphors, and anomalous sentences. Lexical semantics takes up various approaches to word meaning, including the concepts of reference and sense, semantic features, argument structure, and thematic roles. Finally, the chapter concludes with pragmatic considerations, including the distinction between linguistic and situational context in discourse, deixis, maxims of conversation, implicatures, and speech acts, all newly rewritten for currency and clarity.

Chapter 4, on phonetics, retains its former organization with one significant change: We have totally embraced IPA (International Phonetic Association) notation for English in keeping with current tendencies, with the sole exception of using /r/ in place of the technically correct /ɹ/. We continue to mention alternative notations that students may encounter in other publications.

Chapter 5, on phonology, has been streamlined by relegating several complex examples (e.g., metathesis in Hebrew) to the exercises, where instructors can opt to include them if it is thought that students can handle such advanced material. The chapter continues to be presented with a greater emphasis on insights through linguistic data accompanied by small amounts of well-explicated formalisms, so that the student can appreciate the need for formal theories without experiencing the burdensome details.

Chapter 6 is a concise introduction to the general study of language. It now contains many topics of special interest to students, including "Language and Thought," which takes up the Sapir-Whorf hypothesis; discussions of signed languages; a consideration of animal "languages"; and a treatment of language origins.

The chapters comprising Part 3, "The Psychology of Language," have been both rewritten and restructured for clarity.
Chapter 7, "Language Acquisition," is still rich in data from both English and other languages, and has been updated with newer examples from the ever-expanding research in this vital topic. The arguments for innateness and Universal Grammar that language acquisition provides are exploited to show the student how scientific theories of great import are discovered and supported through observation, experiment, and reason. As in most chapters, American Sign Language (ASL) is discussed, and its important role in understanding the biological foundations of language is emphasized.

In chapter 8, the section on psycholinguistics has been updated to conform to recent discoveries. The section on computational linguistics has been substantially reorganized into two subsections: technicalities and applications. In the applications section is an entirely new presentation of forensic computational linguistics—the use of computers in solving crimes that involve language and, similarly, in resolving judicial matters such as trademark disputes.

Part 4 is concerned with language in society, including sociolinguistics (chapter 9) and historical linguistics (chapter 10). Readers of previous editions will scarcely recognize the much revised and rewritten chapter 9. The section "Languages in Contact" has been thoroughly researched and brought up to date, including insightful material on pidgins and creoles, their origins, interrelationship, and subtypes. An entirely new section, "Language and Education," discusses some of the sociolinguistic issues facing the classroom teacher in our multicultural school systems. No sections have been omitted, but many have been streamlined and rewritten for clarity, such as the section on "Language in Use."

Chapter 10, on language change, has undergone a few changes. The section "Extinct and Endangered Languages" has been completely rewritten and brought up to date to reflect the intense interest in this critical subject. The same is true of the section "Types of Languages," which now reflects the latest research.

Chapter 11, on writing systems, is unchanged from the previous edition with the exception of a mild rewriting to further improve clarity, and the movement of the section on reading to chapter 9.

Terms that appear in bold in the text are defined in the revised glossary at the end of the book. The glossary has been expanded and improved so that the ninth edition provides students with a linguistic lexicon of nearly 700 terms, making the book a worthy reference volume.

The order of presentation of chapters 1 through 5 was once thought to be nontraditional. Our experience, backed by previous editions of the book and the recommendations of colleagues throughout the world, has convinced us that it is easier for the novice to approach the structural aspects of language by first looking at morphology (the structure of the most familiar linguistic unit, the word). This is followed by syntax (the structure of sentences), which is also familiar to many students, as are numerous semantic concepts. We then proceed to the more novel (to students) phonetics and phonology, which students often find daunting. However, the book is written so that individual instructors can present material in the traditional order of phonetics, phonology, morphology, syntax, and semantics (chapters 4, 5, 1, 2, and 3) without confusion, if they wish.

As in previous editions, the primary concern has been with basic ideas rather than detailed expositions.
This book assumes no previous knowledge on the part of the reader. An updated list of references at the end of each chapter is included to accommodate any reader who wishes to pursue a subject in more depth. Each chapter concludes with a summary and exercises to enhance the student's interest in and comprehension of the textual material.

Acknowledgments

Our endeavor to maintain the currency of linguistic concepts in times of rapid progress has been invaluably enhanced by the following colleagues, to whom we owe an enormous debt of gratitude:

Susan Curtiss, University of California, Los Angeles (brain and language)
Jeff MacSwan, Arizona State University (bilingual education, bilingual communities)
John Olsson, Forensic Linguistic Institute, Wales, U.K. (forensic linguistics)
Fernanda Pratas, Universidade Nova de Lisboa (pidgin/creoles)
Otto Santa Ana, University of California, Los Angeles (Chicano English)
Andrew Simpson, University of Southern California (language and society)

We would also like to extend our appreciation to the following individuals for their help and guidance:

Deborah Grant, Independent consultant (general feedback)
Edward Keenan, University of California, Los Angeles (historical linguistics)
Giuseppe Longobardi, Università di Venezia (historical linguistics)
Pamela Munro, University of California, Los Angeles (endangered languages)
Reiko Okabe, Nihon University, Tokyo (Japanese and gender)
Megha Sundara, University of California, Los Angeles (early speech perception)
Maria Luisa Zubizarreta, University of Southern California (language contact)

Brook Danielle Lillehaugen undertook the daunting task of writing the Answer Key to the ninth edition. Her thoroughness, accuracy, and insightfulness in construing solutions to problems and discussions of issues will be deeply appreciated by all who avail themselves of this useful document.

We also express deep appreciation for the incisive comments of eight reviewers of the eighth edition, known to us as R1–R8, whose frank assessment of the work, both critical and laudatory, heavily influenced this new edition:

Lynn A. Burley, University of Central Arkansas
Fred Field, California State University, Northridge
Jackson Gandour, Purdue University, West Lafayette
Virginia Lewis, Northern State University
Tom Nash, Southern Oregon University
Nancy Stenson, University of Minnesota, Twin Cities
Mel Storm, Emporia State University
Robert Trammell, Florida Atlantic University, Boca Raton

We continue to be deeply grateful to the individuals who have sent us suggestions, corrections, criticisms, cartoons, language data, and exercises over the course of many editions. Their influence is still strongly felt in this ninth edition. The list is long and reflects the global, communal collaboration that a book about language—the most global of topics—merits. To each of you, our heartfelt thanks and appreciation. Know that in this ninth edition lives your contribution (some affiliations may have changed or are unknown to us at this time):

Adam Albright, Massachusetts Institute of Technology; Rebecca Barghorn, University of Oldenburg; Seyed Reza Basiroo, Islamic Azad University; Karol Boguszewski, Poland; Melanie Borchers, Universität Duisburg-Essen; Donna Brinton, Emeritus, University of California, Los Angeles; Daniel Bruhn, University of California, Berkeley; Ivano Caponigro, University of California, San Diego;
Ralph S. Carlson, Azusa Pacific University; Robert Channon, Purdue University; Judy Cheatham, Greensboro College; Leonie Cornips, Meertens Institute; Antonio Damásio, University of Southern California; Hanna Damásio, University of Southern California; Julie Damron, Brigham Young University; Rosalia Dutra, University of North Texas; Christina Esposito, Macalester College; Susan Fiksdal, Evergreen State College; Beverly Olson Flanigan and her teaching assistants, Ohio University; Jule Gomez de Garcia, California State University, San Marcos; Loretta Gray, Central Washington University; Xiangdong Gu, Chongqing University; Helena Halmari, Sam Houston State University; Sharon Hargus, University of Washington; Benjamin H. Hary, Emory University; Tometro Hopkins, Florida International University; Eric Hyman, University of North Carolina, Fayetteville; Dawn Ellen Jacobs, California Baptist University; Seyed Yasser Jebraily, University of Tehran; Kyle Johnson, University of Massachusetts, Amherst; Paul Justice, San Diego State University; Simin Karimi, University of Arizona; Robert D. King, University of Texas; Sharon M. Klein, California State University, Northridge; Nathan Klinedinst, Institut Jean Nicod/CNRS, Paris; Otto Krauss, Jr., late, unaffiliated; Elisabeth Kuhn, Virginia Commonwealth University; Peter Ladefoged, Late, University of California, Los Angeles; Mary Ann Larsen-Pusey, Fresno Pacific University; Rabbi Robert Layman, Philadelphia; Byungmin Lee, Korea; Virginia "Ginny" Lewis, Northern State University; David Lightfoot, Georgetown University; Ingvar Lofstedt, University of California, Los Angeles; Harriet Luria, Hunter College, City University of New York; Tracey McHenry, Eastern Washington University; Carol Neidle, Boston University; Don Nilsen, Arizona State University; Anjali Pandey, Salisbury University; Barbara Hall Partee, University of Massachusetts, Amherst; Vincent D. Puma, Flagler College; Ian Roberts, Cambridge University; Tugba Rona, Istanbul International Community School; Natalie Schilling-Estes, Georgetown University; Philippe Schlenker, Institut Jean-Nicod, Paris and New York University; Carson Schütze, University of California, Los Angeles; Bruce Sherwood, North Carolina State University; Koh Shimizu, Beijing; Dwan L. Shipley, Washington University; Muffy Siegel, Temple University; Neil Smith, University College London; Donca Steriade, Massachusetts Institute of Technology; Nawaf Sulami, University of Northern Iowa; Dalys Vargas, College of Notre Dame; Willis Warren, Saint Edwards University; Donald K. Watkins, University of Kansas; Walt Wolfram, North Carolina State University.

Please forgive us if we have inadvertently omitted any names, and if we have spelled every name correctly, then we shall believe in miracles.

Finally, we wish to thank the editorial and production team at Cengage Learning. They have been superb and supportive in every way: Michael Rosenberg, publisher; Joan M. Flaherty, development editor; Michael Lepera, content project manager; Jennifer Bonnar, project manager, Lachina Publishing Services; Christy Goldfinch, copy editor; Diane Miller, proofreader; Bob Kauser, permissions editor; Joan Shapiro, indexer; and Brian Salisbury, text designer.
Last but certainly not least, we acknowledge our debt to those we love and who love us and who inspire our work when nothing else will: Nina's son, Michael; Robert's wife, Helen; our parents; and our dearly beloved and still deeply missed colleagues, Vicki Fromkin and Peter Ladefoged.

The responsibility for errors in fact or judgment is, of course, ours alone. We continue to be indebted to the instructors who have used the earlier editions and to their students, without whom there would be no ninth edition.

Robert Rodman
Nina Hyams

About the Authors

VICTORIA FROMKIN received her bachelor's degree in economics from the University of California, Berkeley, in 1944 and her M.A. and Ph.D. in linguistics from the University of California, Los Angeles, in 1963 and 1965, respectively. She was a member of the faculty of the UCLA Department of Linguistics from 1966 until her death in 2000, and served as its chair from 1972 to 1976. From 1979 to 1989 she served as the UCLA Graduate Dean and Vice Chancellor of Graduate Programs. She was a visiting professor at the Universities of Stockholm, Cambridge, and Oxford. Professor Fromkin served as president of the Linguistic Society of America in 1985, president of the Association of Graduate Schools in 1988, and chair of the Board of Governors of the Academy of Aphasia. She received the UCLA Distinguished Teaching Award and the Professional Achievement Award, and served as the U.S. Delegate and a member of the Executive Committee of the International Permanent Committee of Linguistics (CIPL). She was an elected Fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, the New York Academy of Science, the American Psychological Society, and the Acoustical Society of America, and in 1996 was elected to membership in the National Academy of Sciences. She published more than one hundred books, monographs, and papers on topics concerned with phonetics, phonology, tone languages, African languages, speech errors, processing models, aphasia, and the brain/mind/language interface—all research areas in which she worked. Professor Fromkin passed away on January 19, 2000, at the age of 76.

ROBERT RODMAN received his bachelor's degree in mathematics from the University of California, Los Angeles, in 1961, a master's degree in mathematics in 1965, a master's degree in linguistics in 1971, and his Ph.D. in linguistics in 1973. He has been on the faculties of the University of California at Santa Cruz, the University of North Carolina at Chapel Hill, Kyoto Industrial College in Japan, and North Carolina State University, where he is currently a professor of computer science. His research areas are forensic linguistics and computer speech processing. Robert resides in Raleigh, North Carolina, with his wife, Helen, Blue the Labrador, and Gracie, a rescued greyhound.

NINA HYAMS received her bachelor's degree in journalism from Boston University in 1973 and her M.A. and Ph.D. degrees in linguistics from the Graduate Center of the City University of New York in 1981 and 1983, respectively. She joined the faculty of the University of California, Los Angeles, in 1983, where she is currently a professor of linguistics. Her main areas of research are childhood language development and syntax. She is author of the book Language Acquisition and the Theory of Parameters (D. Reidel Publishers, 1986), a milestone in language acquisition research.
She has also published numerous articles on the development of syntax, morphology, and semantics in children. She has been a visiting scholar at the University of Utrecht and the University of Leiden in the Netherlands and has given numerous lectures throughout Europe and Japan. Nina lives in Los Angeles with her pal Spot, a rescued border collie mutt.

PART 1 The Nature of Human Language

Reflecting on Noam Chomsky's ideas on the innateness of the fundamentals of grammar in the human mind, I saw that any innate features of the language capacity must be a set of biological structures, selected in the course of the evolution of the human brain.
S. E. LURIA, A Slot Machine, a Broken Test Tube, an Autobiography, 1984

The nervous systems of all animals have a number of basic functions in common, most notably the control of movement and the analysis of sensation. What distinguishes the human brain is the variety of more specialized activities it is capable of learning. The preeminent example is language.
NORMAN GESCHWIND, 1979

Linguistics shares with other sciences a concern to be objective, systematic, consistent, and explicit in its account of language. Like other sciences, it aims to collect data, test hypotheses, devise models, and construct theories. Its subject matter, however, is unique: at one extreme it overlaps with such "hard" sciences as physics and anatomy; at the other, it involves such traditional "arts" subjects as philosophy and literary criticism. The field of linguistics includes both science and the humanities, and offers a breadth of coverage that, for many aspiring students of the subject, is the primary source of its appeal.
DAVID CRYSTAL, The Cambridge Encyclopedia of Language, 1987

Introduction
Brain and Language

The functional asymmetry of the human brain is unequivocal, and so is its anatomical asymmetry. The structural differences between the left and the right hemispheres are visible not only under the microscope but to the naked eye. The most striking asymmetries occur in language-related cortices. It is tempting to assume that such anatomical differences are an index of the neurobiological underpinnings of language.
ANTONIO AND HANNA DAMÁSIO, University of Southern California, Brain and Creativity Institute and Department of Neuroscience

Attempts to understand the complexities of human cognitive abilities and especially the acquisition and use of language are as old and as continuous as history itself. What is the nature of the brain? What is the nature of human language? And what is the relationship between the two? Philosophers and scientists have grappled with these questions and others over the centuries.

The idea that the brain is the source of human language and cognition goes back more than two thousand years. The philosophers of ancient Greece speculated about the brain/mind relationship, but neither Plato nor Aristotle recognized the brain's crucial function in cognition or language. However, others of the same period showed great insight, as illustrated in the following quote from the Hippocratic Treatises on the Sacred Disease, written c. 377 b.c.e.:

[The brain is] the messenger of the understanding [and the organ whereby] in an especial manner we acquire wisdom and knowledge.

The study of language has been crucial to understanding the brain/mind relationship.
Conversely, research on the brain in humans and other primates is helping to answer questions concerning the neurological basis for language. The study of the biological and neural foundations of language is called neurolinguistics. Neurolinguistic research is often based on data from atypical or impaired language and uses such data to understand properties of human language in general.

The Human Brain

"Rabbit's clever," said Pooh thoughtfully.
"Yes," said Piglet, "Rabbit's clever."
"And he has Brain."
"Yes," said Piglet, "Rabbit has Brain."
There was a long silence.
"I suppose," said Pooh, "that that's why he never understands anything."
A. A. MILNE, The House at Pooh Corner, 1928

The brain is the most complex organ of the body. It lies under the skull and consists of approximately 100 billion nerve cells (neurons) and billions of fibers that interconnect them. The surface of the brain is the cortex, often called "gray matter," consisting of billions of neurons. The cortex is the decision-making organ of the body. It receives messages from all of the sensory organs, initiates all voluntary and involuntary actions, and is the storehouse of our memories. Somewhere in this gray matter resides the grammar that represents our knowledge of language.

The brain is composed of cerebral hemispheres, one on the right and one on the left, joined by the corpus callosum, a network of more than 200 million fibers (see Figure I.1). The corpus callosum allows the two hemispheres of the brain to communicate with each other. Without this system of connections, the two hemispheres would operate independently.

FIGURE I.1 | Three-dimensional reconstruction of the normal living human brain, with the left and right hemispheres, corpus callosum, cortex, and white matter labeled. The images were obtained from magnetic resonance data using the Brainvox technique. Left panel = view from top. Right panel = view from the front following virtual coronal section at the level of the dashed line. Courtesy of Hanna Damásio.

In general, the left hemisphere controls the right side of the body, and the right hemisphere controls the left side. If you point with your right hand, the left hemisphere is responsible for your action. Similarly, sensory information from the right side of the body (e.g., right ear, right hand, right visual field) is received by the left hemisphere of the brain, and sensory input to the left side of the body is received by the right hemisphere. This is referred to as contralateral brain function.

The Localization of Language in the Brain

An issue of central concern has been to determine which parts of the brain are responsible for human linguistic abilities. In the early nineteenth century, Franz Joseph Gall proposed the theory of localization, which is the idea that different human cognitive abilities and behaviors are localized in specific parts of the brain. In light of our current knowledge about the brain, some of Gall's particular views are amusing. For example, he proposed that language is located in the frontal lobes of the brain because as a young man he had noticed that the most articulate and intelligent of his fellow students had protruding eyes, which he believed reflected overdeveloped brain material.
He also put forth a pseudoscientific theory called "organology" that later came to be known as phrenology, which is the practice of determining personality traits, intellectual capacities, and other matters by examining the "bumps" on the skull. A disciple of Gall's, Johann Spurzheim, introduced phrenology to America, constructing elaborate maps and skull models such as the one shown in Figure I.2, in which language is located directly under the eye.

FIGURE I.2 | Phrenology skull model.

Gall was a pioneer and a courageous scientist in arguing against the prevailing view that the brain was an unstructured organ. Although phrenology has long been discarded as a scientific theory, Gall's view that the brain is not a uniform mass, and that linguistic and other cognitive capacities are functions of localized brain areas, has been upheld by scientific investigation of brain disorders, and, over the past two decades, by numerous studies using sophisticated technologies.

Aphasia

The study of aphasia has been an important area of research in understanding the relationship between brain and language. Aphasia is the neurological term for any language disorder that results from brain damage caused by disease or trauma. In the second half of the nineteenth century, significant scientific advances were made in localizing language in the brain based on the study of people with aphasia.

FIGURE I.3 | Lateral (external) view of the left hemisphere of the human brain, showing the position of Broca's and Wernicke's areas—two key areas of the cortex related to language processing.

In the 1860s the French surgeon Paul Broca proposed that language is localized to the left hemisphere of the brain, and more specifically to the front part of the left hemisphere (now called Broca's area). At a scientific meeting in Paris, he claimed that we speak with the left hemisphere. Broca's finding was based on a study of his patients who suffered language deficits after brain injury to the left frontal lobe. A decade later Carl Wernicke, a German neurologist, described another variety of aphasia that occurred in patients with lesions in areas of the left hemisphere temporal lobe, now known as Wernicke's area.

Language, then, is lateralized to the left hemisphere, and the left hemisphere appears to be the language hemisphere from infancy on. Lateralization is the term used to refer to the localization of function to one hemisphere of the brain. Figure I.3 is a view of the left side of the brain that shows Broca's and Wernicke's areas.

The Linguistic Characterization of Aphasic Syndromes

Most aphasics do not show total language loss. Rather, different aspects of language are selectively impaired, and the kind of impairment is generally related to the location of the brain damage. Because of this damage-deficit correlation, research on patients with aphasia has provided a great deal of information about how language is organized in the brain.

Patients with injuries to Broca's area may have Broca's aphasia, as it is often called today. Broca's aphasia is characterized by labored speech and certain kinds of word-finding difficulties, but it is primarily a disorder that affects a person's ability to form sentences with the rules of syntax.
One of the most notable characteristics of Broca's aphasia is that the language produced is often agrammatic, meaning that it frequently lacks articles, prepositions, pronouns, auxiliary verbs, and other grammatical elements that we will call "function words" for now. Broca's aphasics also typically omit inflections such as the past tense suffix -ed or the third person singular verb ending -s. Here is an excerpt of a conversation between a patient with Broca's aphasia and a doctor:

doctor: Could you tell me what you have been doing in the hospital?
patient: Yes, sure. Me go, er, uh, P.T. [physical therapy] none o'cot, speech . . . two times . . . read . . . r . . . ripe . . . rike . . . uh write . . . practice . . . get . . . ting . . . better.
doctor: And have you been going home on weekends?
patient: Why, yes . . . Thursday uh . . . uh . . . uh . . . no . . . Friday . . . Bar . . . ba . . . ra . . . wife . . . and oh car . . . drive . . . purpike . . . you know . . . rest . . . and TV.

Broca's aphasics (also often called agrammatic aphasics) may also have difficulty understanding complex sentences in which comprehension depends exclusively on syntactic structure and where they cannot rely on their real-world knowledge. For example, an agrammatic aphasic may have difficulty knowing who kissed whom in questions like:

Which girl did the boy kiss?

where it is equally plausible for the boy or the girl to have done the kissing; or might be confused as to who is chasing whom in passive sentences such as:

The cat was chased by the dog.

where it is plausible for either animal to chase the other. But they have less difficulty with:

Which book did the boy read?

or

The car was chased by the dog.

where the meaning can be determined by nonlinguistic knowledge. It is implausible for books to read boys or for cars to chase dogs, and aphasic people can use that knowledge to interpret the sentence.

Unlike Broca's patients, people with Wernicke's aphasia produce fluent speech with good intonation, and they may largely adhere to the rules of syntax. However, their language is often semantically incoherent. For example, one patient replied to a question about his health with:

I felt worse because I can no longer keep in mind from the mind of the minds to keep me from mind and up to the ear which can be to find among ourselves.

Another patient described a fork as "a need for a schedule" and another, when asked about his poor vision, replied, "My wires don't hire right."

People with damage to Wernicke's area have difficulty naming objects presented to them and also in choosing words in spontaneous speech. They may make numerous lexical errors (word substitutions), often producing jargon and nonsense words, as in the following example:

The only thing that I can say again is madder or modder fish sudden fishing sewed into the accident to miss in the purdles.

Another example is from a patient who was a physician before his aphasia. When asked if he was a doctor, he replied:

Me? Yes sir. I'm a male demaploze on my own. I still know my tubaboys what for I have that's gone hell and some of them go.

Severe Wernicke's aphasia is often referred to as jargon aphasia.

The linguistic deficits exhibited by people with Broca's and Wernicke's aphasia point to a modular organization of language in the brain. We find that damage to different parts of the brain results in different kinds of linguistic impairment (e.g., syntactic versus semantic).
This supports the hypothesis that the mental grammar, like the brain itself, is not an undifferentiated system, but rather consists of distinct components or modules with different functions.

The kinds of word substitutions that aphasic patients produce also tell us about how words are organized in the mental lexicon. Sometimes the substituted words are similar to the intended words in their sounds. For example, pool might be substituted for tool, sable for table, or crucial for crucible. Sometimes they are similar in meaning (e.g., table for chair or boy for girl). These errors resemble the speech errors that anyone might make, but they occur far more frequently in people with aphasia. The substitution of semantically or phonetically related words tells us that neural connections exist among semantically related words and among words that sound alike. Words are not mentally represented in a simple list but rather in an organized network of connections.

Similar observations pertain to reading. The term dyslexia refers to reading disorders. Many word substitutions are made by people who become dyslexic after brain damage. They are called acquired dyslexics because before their brain lesions they were normal readers (unlike developmental dyslexics, who have difficulty learning to read). One group of these patients, when reading words printed on cards aloud, produced the kinds of substitutions shown in the following examples.

Stimulus     Response 1     Response 2
act          play           play
applaud      laugh          cheers
example      answer         sum
heal         pain           medicine
south        west           east

The omission of function words in the speech of agrammatic aphasics shows that this class of words is mentally distinct from content words like nouns. A similar phenomenon has been observed in acquired dyslexia. The patient who produced the semantic substitutions cited previously was also agrammatic and was not able to read function words at all. When presented with words like which or would, he just said, "No" or "I hate those little words." However, he could read homophonous nouns and verbs, though with many semantic mistakes, as shown in the following:

Stimulus     Response       Stimulus     Response
witch        witch          which        no!
hour         time           our          no!
eye          eyes           I            no!
hymn         bible          him          no!
wood         wood           would        no!

All these errors provide evidence that the mental dictionary has content words and function words in different compartments, and that these two classes of words are processed in different brain areas or by different neural mechanisms, further supporting the view that both the brain and language are structured in a complex, modular fashion.

Additional evidence regarding hemispheric specialization is drawn from Japanese readers. The Japanese language has two main writing systems. One system, kana, is based on the sound system of the language; each symbol corresponds to a syllable. The other system, kanji, is ideographic; each symbol corresponds to a word. (More about this in chapter 11 on writing systems.) Kanji is not based on the sounds of the language. Japanese people with left-hemisphere damage are impaired in their ability to read kana, whereas people with right-hemisphere damage are impaired in their ability to read kanji. Also, experiments with unimpaired Japanese readers show that the right hemisphere is better and faster than the left hemisphere at reading kanji, and vice versa.

Most of us have experienced word-finding difficulties in speaking if not in reading, as Alice did in "Wonderland" when she said: "And now, who am I?
I will remember, if I can. I'm determined to do it!" But being determined didn't help her much, and all she could say, after a great deal of puzzling, was "L, I know it begins with L."

This tip-of-the-tongue phenomenon (often referred to as TOT) is not uncommon. But if you could rarely find the word you wanted, imagine how frustrated you would be. This is the fate of many aphasics whose impairment involves severe anomia—the inability to find the word you wish to speak.

It is important to note that the language difficulties suffered by aphasics are not caused by any general cognitive or intellectual impairment or loss of motor or sensory controls of the nerves and muscles of the speech organs or hearing apparatus. Aphasics can produce and hear sounds. Whatever loss they suffer has to do only with the language faculty (or specific parts of it).

Deaf signers with damage to the left hemisphere show aphasia for sign language similar to the language breakdown in hearing aphasics, even though sign language is a visual-spatial language. Deaf patients with lesions in Broca's area show language deficits like those found in hearing patients, namely severely dysfluent, agrammatic sign production. Likewise, those with damage to Wernicke's area have fluent but often semantically incoherent sign language, filled with made-up signs. Although deaf aphasic patients show marked sign language deficits, they have no difficulty producing nonlinguistic gestures or sequences of nonlinguistic gestures, even though both nonlinguistic gestures and linguistic signs are produced by the same "articulators"—the hands and arms. Deaf aphasics also have no difficulty in processing nonlinguistic visual-spatial relationships, just as hearing aphasics have no problem with processing nonlinguistic auditory stimuli. These findings are important because they show that the left hemisphere is lateralized for language—an abstract system of symbols and rules—and not simply for hearing or speech. Language can be realized in different modalities, spoken or signed, but will be lateralized to the left hemisphere regardless of modality.

The kind of selective impairments that we find in people with aphasia has provided important information about the organization of different language and cognitive abilities, especially grammar and the lexicon. It tells us that language is a separate cognitive module—so aphasics can be otherwise cognitively normal—and also that within language, separate components can be differentially affected by damage to different regions of the brain.

Historical Descriptions of Aphasia

Interest in aphasia has a long history. Greek Hippocratic physicians reported that loss of speech often occurred simultaneously with paralysis of the right side of the body. Psalm 137 states: "If I forget thee, Oh Jerusalem, may my right hand lose its cunning and my tongue cleave to the roof of my mouth." This passage also shows that a link between loss of speech and paralysis of the right side was recognized. Pliny the Elder (c.e. 23–79) refers to an Athenian who "with the stroke of a stone fell presently to forget his letters only, and could read no more; otherwise, his memory served him well enough."

Numerous clinical descriptions of patients like the Athenian with language deficits, but intact nonlinguistic cognitive systems, were published between the fifteenth and eighteenth centuries.
The language difficulties were not attributed to either general intellectual deficits or loss of memory, but to a specific impairment of language. Carl Linnaeus in 1745 published a case study of a man suffering from jargon aphasia, who spoke "as if it were a foreign language, having his own names for all words." Another physician of that century reported on a patient's word substitution errors:

After an illness, she was suddenly afflicted with a forgetting, or, rather, an incapacity or confusion of speech. . . . If she desired a chair, she would ask for a table. . . . Sometimes she herself perceived that she misnamed objects; at other times, she was annoyed when a fan, which she had asked for, was brought to her, instead of the bonnet, which she thought she had requested.

Physicians of the day described other kinds of linguistic breakdown in detail, such as a priest who, following brain damage, retained his ability to read Latin but lost the ability to read German.

The historical descriptions of language loss following brain damage foreshadow the later controlled scientific studies of aphasia that have provided substantial evidence that language is predominantly and most frequently a left-hemisphere function. In most cases lesions to the left hemisphere result in aphasia, but injuries to the right do not (although such lesions result in deficits in facial recognition, pattern recognition, and other cognitive abilities). Still, caution must be taken. The ability to understand intonation connected with various emotional states and also to understand metaphors (e.g., The walls have ears), jokes, puns, double entendres, and the like can be affected in patients with right hemisphere damage. If such understanding has a linguistic component, then we may have to attribute some language cognition to the right hemisphere.

Studies of aphasia have provided not only important information regarding where and how language is localized in the brain, but also data bearing on the properties and principles of grammar that have been hypothesized for non-brain-damaged adults. For example, the study of aphasia has provided empirical evidence concerning theories of word structure (chapter 1), sentence formation (chapter 2), meaning (chapter 3), and sound systems (chapters 4 and 5).

Brain Imaging Technology

The historical descriptions of aphasia illustrate that people have long been fascinated by the brain-language connection. Today we no longer need to rely on surgery or autopsy to locate brain lesions or to identify the language regions of the brain. Noninvasive brain recording technologies such as computed tomography (CT) scans and magnetic resonance imaging (MRI) can reveal lesions in the living brain shortly after the damage occurs. In addition, positron emission tomography (PET) scans, functional MRI (fMRI) scans, and single photon emission CT (SPECT) scans provide images of the brain in action. It is now possible to detect changes in brain activity and to relate these changes to localized brain damage and specific linguistic and nonlinguistic cognitive tasks. Figures I.4 and I.5 show MRI scans of the brains of a Broca's aphasic patient and a Wernicke's aphasic patient. The black areas show the sites of the lesions. Each diagram represents a slice of the left side of the brain.

A variety of scanning techniques permit us to measure metabolic activity in particular areas of the brain.
Areas of greater activity are those most involved in the mental processes at the moment of the scan. Supplemented by magnetoencephalography (MEG), which measures magnetic fields in the living brain, these techniques can show us how the healthy brain reacts to particular linguistic stimuli. For example, the brains of normal adults are observed when they are asked to listen to two or more sounds and determine if they are the same. Or they may be asked to listen to strings of sounds or read a string of letters and determine if they are real or possible words, or listen to or read sequences of words and say whether they form grammatical or ungrammatical sentences. The results of these studies reaffirm the earlier findings that language resides in specific areas of the left hemisphere.

Dramatic evidence for a differentiated and structured brain is also provided by studies of both normal individuals and patients with lesions in regions of the brain other than Broca's and Wernicke's areas. Some patients have difficulty speaking a person's name; others have problems naming animals; and still others cannot name tools. fMRI studies have revealed the shape and location of the brain lesions in each of these types of patients. The patients in each group had brain lesions in distinct, nonoverlapping regions of the left temporal lobe. In a follow-up PET scan study, normal subjects were asked to name persons, animals, or tools. Experimenters found that there was differential activation in the normal brains in just those sites that were damaged in the aphasics who were unable to name persons, animals, or tools.

FIGURE I.4 | Three-dimensional reconstruction of the brain of a living patient with Broca's aphasia. Note area of damage in left frontal region (dark gray), which was caused by a stroke. Courtesy of Hanna Damásio.

FIGURE I.5 | Three-dimensional reconstruction of the brain of a living patient with Wernicke's aphasia. Note area of damage in left posterior temporal and lower parietal region (dark gray), which was caused by a stroke. Courtesy of Hanna Damásio.

Further evidence for the separation of cognitive systems is provided by the neurological and behavioral findings that follow brain damage. Some patients lose the ability to recognize sounds or colors or familiar faces while retaining all other functions. A patient may not be able to recognize his wife when she walks into the room until she starts to talk. This suggests the differentiation of many aspects of visual and auditory processing.

Brain Plasticity and Lateralization in Early Life

It takes only one hemisphere to have a mind.
A. L. WIGAN, The Duality of the Mind, 1844

Lateralization of language to the left hemisphere is a process that begins very early in life. Wernicke's area is visibly distinctive in the left hemisphere of the fetus by the twenty-sixth gestational week. Infants as young as one week old show a greater electrical response in the left hemisphere to language and in the right hemisphere to music. A recent study videotaped the mouths of babies between the ages of five and twelve months when they were smiling and when they were babbling in syllables (producing sequences like mamama or gugugu). The study found that during smiling, the babies had a greater opening of the left side of the mouth (the side controlled by the right hemisphere), whereas during babbling, they had a greater opening of the right side (controlled by the left hemisphere).
This indicates more left hemisphere involvement even at this very early stage of productive language development (see chapter 7). While the left hemisphere is innately predisposed to specialize for language, there is also evidence of considerable plasticity (i.e., flexibility) in the system during the early stages of language development. This means that under certain circumstances, the right hemisphere can take over many of the language functions that would normally reside in the left hemisphere. An impressive illustration of plasticity is provided by children who have undergone a procedure known as hemispherectomy, in which one hemisphere of the brain is surgically removed. This procedure is used to treat otherwise intractable cases of epilepsy. In cases of left hemispherectomy after language acquisition has begun, children experience an initial period of aphasia and then reacquire a linguistic system that is virtually indistinguishable from that of normal children. They also show many of the developmental patterns of normal language acquisition. UCLA professor Susan Curtiss and colleagues have studied many of these children. They hypothesize that the latent linguistic ability of the right hemisphere is “freed” by the removal of the diseased left hemisphere, which may have had a strong inhibitory effect before the surgery. In adults, however, surgical removal of the left hemisphere inevitably results in severe loss of language function (and so is done only in life-threatening circumstances), whereas adults (and children who have already acquired language) who have had their right hemispheres removed retain their language abilities. Other cognitive losses may result, such as those typically lateralized to the right hemisphere. The plasticity of the brain decreases with age and with the increasing specialization of the different hemispheres and regions of the brain. Despite strong evidence that the left hemisphere is predetermined to be the language hemisphere in most humans, some evidence suggests that the right The Human Brain hemisphere also plays a role in the earliest stages of language acquisition. Children with prenatal, perinatal, or childhood brain lesions in the right hemisphere can show delays and impairments in babbling and vocabulary learning, whereas children with early left hemisphere lesions demonstrate impairments in their ability to form phrases and sentences. Also, many children who undergo right hemispherectomy before two years of age do not develop language, even though they still have a left hemisphere. Various findings converge to show that the human brain is essentially designed to specialize for language in the left hemisphere but that the right hemisphere is involved in early language development. They also show that, under the right circumstances, the brain is remarkably resilient and that if brain damage or surgery occurs early in life, normal left hemisphere functions can be taken over by the right hemisphere. Split Brains © Scott Adams/Dist. by United Feature Syndicate, Inc. People suffering from intractable epilepsy may be treated by severing communication between their two hemispheres. Surgeons cut through the corpus callosum (see Figure I.1), the fibrous network that connects the two halves. When this pathway is severed, there is no communication between the “two brains.” Such split-brain patients also provide evidence for language lateralization and for understanding contralateral brain functions. 
The psychologist Michael Gazzaniga states: With [the corpus callosum] intact, the two halves of the body have no secrets from one another. With it sectioned, the two halves become two different conscious mental spheres, each with its own experience base and 15 16 INTRODUCTION Brain and Language control system for behavioral operations. . . . Unbelievable as this may seem, this is the flavor of a long series of experimental studies first carried out in the cat and monkey.1 When the brain is surgically split, certain information from the left side of the body is received only by the right side of the brain, and vice versa. To illustrate, suppose that a monkey is trained to respond with both its hands to a certain visual stimulus, such as a flashing light. After the training is complete, the brain is surgically split. The stimulus is then shown only to the left visual field (the right hemisphere). Because the right hemisphere controls the left side of the body, the monkey will perform only with the left hand. In humans who have undergone split-brain operations, the two hemispheres appear to be independent, and messages sent to the brain result in different responses, depending on which side receives the message. For example if a pencil is placed in the left hand of a split-brain person whose eyes are closed, the person can use the pencil appropriately but cannot name it because only the left hemisphere can speak. The right brain senses the pencil but the information cannot be relayed to the left brain for linguistic naming because the connections between the two halves have been severed. By contrast, if the pencil is placed in the right hand, the subject is immediately able to name it as well as to describe it because the sensory information from the right hand goes directly to the left hemisphere, where the language areas are located. Various experiments of this sort have provided information on the different capabilities of the two hemispheres. The right brain does better than the left in pattern-matching tasks, in recognizing faces, and in spatial tasks. The left hemisphere is superior for language, rhythmic perception, temporal-order judgments, and arithmetic calculations. According to Gazzaniga, “the right hemisphere as well as the left hemisphere can emote and while the left can tell you why, the right cannot.” Studies of human split-brain patients have also shown that when the interhemispheric visual connections are severed, visual information from the right and left visual fields becomes confined to the left and right hemispheres, respectively. Because of the crucial endowment of the left hemisphere for language, written material delivered to the right hemisphere cannot be read aloud if the brain is split, because the information cannot be transferred to the left hemisphere. An image or picture that is flashed to the right visual field of a split-brain patient (and therefore processed by the left hemisphere) can be named. However, when the picture is flashed in the left visual field and therefore “lands” in the right hemisphere, it cannot be named. Other Experimental Evidence of Brain Organization Dichotic listening is an experimental technique that uses auditory signals to observe the behavior of the individual hemispheres of the human brain. Subjects hear two different sound signals simultaneously through earphones. They may hear curl in one ear and girl in the other, or a cough in one ear and a laugh in the other. 
When asked to state what they heard in each ear, subjects are more frequently correct in reporting linguistic stimuli (words, nonsense syllables, and so on) delivered directly to the right ear, but are more frequently correct in reporting nonverbal stimuli (musical chords, environmental sounds, and so on) delivered to the left ear. Such experiments provide strong evidence of lateralization. Both hemispheres receive signals from both ears, but the contralateral stimuli prevail over the ipsilateral (same-side) stimuli because they are processed more robustly. The contralateral pathways are anatomically thicker (think of a four-lane highway versus a two-lane road) and are not delayed by the need to cross the corpus callosum. The accuracy with which subjects report what they hear is evidence that the left hemisphere is superior for linguistic processing, and the right hemisphere is superior for nonverbal information. These experiments are important because they show not only that language is lateralized, but also that the left hemisphere is not superior for processing all sounds; it is only better for those sounds that are linguistic. The left side of the brain is specialized for language, not sound, as we also noted in connection with sign language research discussed earlier.

¹Gazzaniga, M. S. 1970. The bisected brain. New York: Appleton-Century-Crofts.

Other experimental techniques are also being used to map the brain and to investigate the independence of different aspects of language and the extent of the independence of language from other cognitive systems. Even before the advances in imaging technology of the 1980s and more recently, researchers were taping electrodes to different areas of the skull and investigating the electrical activity of the brain related to perceptual and cognitive information. In such experiments scientists measure event-related brain potentials (ERPs), which are the electrical signals emitted from the brain in response to different stimuli. For example, ERP differences result when the subject hears speech sounds versus nonspeech sounds, with a greater response from the left hemisphere to speech. ERP experiments also show variations in timing, pattern, amplitude, and hemisphere of response when subjects hear sentences that are meaningless, such as The man admired Don's headache of the landscape, as opposed to meaningful sentences such as The man admired Don's sketch of the landscape. Such experiments show that neuronal activity varies in location within the brain according to whether the stimulus is language or nonlanguage, with a left hemisphere preference for language. Even jabberwocky sentences—sentences that are grammatical but contain nonsense words, such as Lewis Carroll's 'Twas brillig, and the slithy toves—elicit an asymmetrical left hemisphere ERP response, demonstrating that the left hemisphere is sensitive to grammatical structure even in the absence of meaning. Moreover, because ERPs also show the timing of neuronal activity as the brain processes language, they can provide insight into the mechanisms that allow the brain to process language quickly and efficiently, on the scale of milliseconds.

ERP and imaging studies of newborns and very young infants show that from birth onward, the left hemisphere differentiates between nonlinguistic acoustic processing and linguistic processing of sounds, and does so via the same neural pathways that adults use.
These results indicate that at birth the left hemisphere is primed to process language, and to do so in terms of the specific localization of language functions we find in the adult brain. What is more, these studies have shown that early stages of phonological and syntactic processing do not require attentional resources but are automatic, very much like reflexes. For example, even sleeping infants show the asymmetrical and distinct processing of phonological versus equally different but nonlinguistic acoustic signals; and adults are able to perform a completely unrelated task, one that takes up considerable attentional resources, at the same time they are listening to sentences, without affecting the nature or degree of the brain activity that is the neural reflex of automatic, mandatory early syntactic processing.

Experimental evidence from these various neurolinguistic techniques has provided empirical confirmation for theories of language structure. For example, ERP, fMRI, PET, and MEG studies provide measurable confirmation of discrete speech sounds and their phonetic properties. These studies also substantiate linguistic evidence that words have an internal structure consisting of morphemes (chapter 1) and belong to categories such as nouns and verbs. Neurolinguistic experiments also support the mental reality of many of the syntactic structures proposed by linguists. Thus neurolinguistic experimentation provides data for both aspects of neurolinguistics: for helping to determine where and how language is represented and processed in the brain, and for providing empirical support for concepts and hypotheses in linguistic theory.

The results of neurolinguistic studies, which use different techniques and different subject populations, both normal and brain damaged, are converging to provide the information we seek on the relationship between the brain and various language and nonlanguage cognitive systems. However, as pointed out by Professors Colin Phillips and Kuniyoshi Sakai,

. . . knowing where language is supported in the human brain is just one step on the path to finding what are the special properties of those brain regions that make language possible. . . . An important challenge for coming years will be to find whether the brain areas implicated in language studies turn out to have distinctive properties at the neuronal level that allow them to explain the special properties of human language.²

²Phillips, C., and K. L. Sakai. 2005. Language and the brain. Yearbook of science and technology 2005. Boston: McGraw-Hill Publishers.

The Autonomy of Language

In addition to brain-damaged individuals who have lost their language ability, there are children without brain lesions who nevertheless have difficulties in acquiring language or are much slower than the average child. They show no other cognitive deficits, they are not autistic or retarded, and they have no perceptual problems. Such children are suffering from specific language impairment (SLI). Only their linguistic ability is affected, and often only specific aspects of grammar are impaired.

Children with SLI have problems with the use of function words such as articles, prepositions, and auxiliary verbs. They also have difficulties with inflectional suffixes on nouns and verbs such as markers of tense and agreement. Several examples from a four-year-old boy with SLI illustrate this:

Meowmeow chase mice.
Show me knife.
It not long one.
An experimental study of several SLI children showed that they produced the past tense marker on the verb (as in danced) about 27 percent of the time, compared with 95 percent by the normal control group. Similarly, the SLI children produced the plural marker -s (as in boys) only 9 percent of the time, compared with 95 percent by the normal children. Other studies of children with SLI reveal broader grammatical impairments, involving difficulties with many grammatical structures and operations. However, most investigations of SLI children show that they have particular problems with verbal inflection, especially with producing tensed verbs (walks, walked), and also with syntactic structures involving certain kinds of word reorderings such as Mother is hard to please, a rearrangement of It is hard to please Mother. In many respects these difficulties resemble the impairments demonstrated by aphasics. Recent work on SLI children also shows that the different components of language (phonology, syntax, lexicon) can be selectively impaired or spared. As is the case with aphasia, these studies of SLI provide important information about the nature of language and help linguists develop theories about the underlying properties of language and its development in children. SLI children show that language may be impaired while general intelligence stays intact, supporting the view of a grammatical faculty that is separate from other cognitive systems. But is it possible for language to develop normally when general intelligence is impaired? If such individuals can be found, it argues strongly for the view that language does not derive from some general cognitive ability. Other Dissociations of Language and Cognition [T]he human mind is not an unstructured entity but consists of components which can be distinguished by their functional properties. NEIL SMITH AND IANTHI-MARIA TSIMPLI, The Mind of a Savant: Language, Learning, and Modularity, 1995 There are numerous cases of intellectually handicapped individuals who, despite their disabilities in certain spheres, show remarkable talents in others. There are superb musicians and artists who lack the simple abilities required to take care of themselves. Such people are referred to as savants. Some of the most famous savants are human calculators who can perform arithmetic computations at phenomenal speed, or calendrical calculators who can tell you without pause on which day of the week any date in the last or next century falls. 19 20 INTRODUCTION Brain and Language Until recently, most such savants have been reported to be linguistically handicapped. They may be good mimics who can repeat speech like parrots, but they show meager creative language ability. Nevertheless, the literature reports cases of language savants who have acquired the highly complex grammar of their language (as well as other languages in some cases) but who lack nonlinguistic abilities of equal complexity. Laura and Christopher are two such cases. Laura Laura was a retarded young woman with a nonverbal IQ of 41 to 44. She lacked almost all number concepts, including basic counting principles, and could draw only at a preschool level. She had an auditory memory span limited to three units. Yet, when at the age of sixteen she was asked to name some fruits, she responded with pears, apples, and pomegranates. 
In this same period she produced syntactically complex sentences like He was saying that I lost my battery-powered watch that I loved, and She does paintings, this really good friend of the kids who I went to school with and really loved, and I was like 15 or 19 when I started moving out of home . . . Laura could not add 2 + 2. She didn’t know how old she was or how old she was when she moved away from home, nor whether 15 is before or after 19. Nevertheless, Laura produced complex sentences with multiple phrases and sentences with other sentences inside them. She used and understood passive sentences, and she was able to inflect verbs for number and person to agree with the subject of the sentence. She formed past tenses in accord with adverbs that referred to past time. She could do all this and more, but she could neither read nor write nor tell time. She did not know who the president of the United States was or what country she lived in. Her drawings of humans resembled potatoes with stick arms and legs. Yet, in a sentence imitation task, she both detected and corrected grammatical errors. Laura is but one of many examples of children who display well-developed grammatical abilities, less-developed abilities to associate linguistic expressions with the objects they refer to, and severe deficits in nonlinguistic cognition. In addition, any notion that linguistic competence results simply from communicative abilities, or develops to serve communicative functions, is belied by studies of children with good linguistic skills, but nearly no or severely limited communicative skills. The acquisition and use of language seem to depend on cognitive skills different from the ability to communicate in a social setting. Christopher Christopher has a nonverbal IQ between 60 and 70 and must live in an institution because he is unable to take care of himself. The tasks of buttoning a shirt, cutting his fingernails, or vacuuming the carpet are too difficult for him. However, his linguistic competence is as rich and as sophisticated as that of any native speaker. Furthermore, when given written texts in some fifteen to twenty languages, he translates them quickly, with few errors, into English. The languages include Germanic languages such as Danish, Dutch, and German; Romance languages such as French, Italian, Portuguese, and Spanish; as well as Polish, Finnish, Greek, The Autonomy of Language Hindi, Turkish, and Welsh. He learned these languages from speakers who used them in his presence, or from grammar books. Christopher loves to study and learn languages. Little else is of interest to him. His situation strongly suggests that his linguistic ability is independent of his general intellectual ability. The question as to whether the language faculty is a separate cognitive system or whether it is derivative of more general cognitive mechanisms is controversial and has received much attention and debate among linguists, psychologists, neuropsychologists, and cognitive scientists. Cases such as Laura and Christopher argue against the view that linguistic ability derives from general intelligence because these two individuals (and others like them) developed language despite other pervasive intellectual deficits. A growing body of evidence supports the view that the human animal is biologically equipped from birth with an autonomous language faculty that is highly specific and that does not derive from general human intellectual ability. 
Genetic Basis of Language Studies of genetic disorders also reveal that one cognitive domain can develop normally along with abnormal development in other domains, and they also underscore the strong biological basis of language. Children with Turner syndrome (a chromosomal anomaly) have normal language and advanced reading skills along with serious nonlinguistic (visual and spatial) cognitive deficits. Similarly, studies of the language of children and adolescents with Williams syndrome reveal a unique behavioral profile in which certain linguistic functions seem to be relatively preserved in the face of visual and spatial cognitive deficits and moderate retardation. In addition, developmental dyslexia and SLI also appear to have a genetic basis. And recent studies of Klinefelter syndrome (another chromosomal anomaly) show quite selective syntactic and semantic deficits alongside intact intelligence. Epidemiological and familial aggregation studies show that SLI runs in families. One such study is of a large multigenerational family, half of whom are language impaired. The impaired members of this family have a very specific grammatical problem: They do not reliably use word-endings or “irregular” verbs correctly. In particular, they often fail to indicate the tense of the verb. They routinely produce sentences such as the following: She remembered when she hurts herself the other day. He did it then he fall. The boy climb up the tree and frightened the bird away. These and similar results show that a large proportion of SLI children have language-impaired family members, pointing to SLI as a heritable disorder. Studies also show that monozygotic (identical) twins are more likely to both suffer from SLI than dizygotic (fraternal) twins. Thus evidence from SLI and other genetic disorders, along with the asymmetry of abilities in linguistic savants, strongly supports the view that the language faculty is an autonomous, genetically determined module of the brain. 21 22 INTRODUCTION Brain and Language Language and Brain Development “Jump Start” copyright . United Feature Syndicate. Reprinted with permission. Language and the brain are intimately connected. Specific areas of the brain are devoted to language, and injury to these areas disrupts language. In the young child, injury to or removal of the left hemisphere has severe consequences for language development. Conversely, increasing evidence shows that normal brain development depends on early and regular exposure to language. (See chapter 7.) The Critical Period Under normal circumstances, a child is introduced to language virtually at the moment of birth. Adults talk to him and to each other in his presence. Children do not require explicit language instruction, but they do need exposure to language in order to develop normally. Children who do not receive linguistic input during their formative years do not achieve nativelike grammatical competence. Moreover, behavioral tests and brain imaging studies show that late exposure to language alters the fundamental organization of the brain for language. The critical-age hypothesis assumes that language is biologically based and that the ability to learn a native language develops within a fixed period, from birth to middle childhood. During this critical period, language acquisition proceeds easily, swiftly, and without external intervention. After this period, the acquisition of grammar is difficult and, for most individuals, never fully achieved. 
Children deprived of language during this critical period show atypical patterns of brain lateralization. The notion of a critical period is true of many species and seems to pertain to species-specific, biologically triggered behaviors. Ducklings, for example, during the period from nine to twenty-one hours after hatching, will follow the first moving object they see, whether or not it looks or waddles like a duck. Such behavior is not the result of conscious decision, external teaching, or intensive practice. It unfolds according to what appears to be a maturationally determined schedule that is universal across the species. Similarly, as discussed in a later section, certain species of birds develop their bird song during a biologically determined window of time. Instances of children reared in environments of extreme social isolation constitute “experiments in nature” for testing the critical-age hypothesis. The most Language and Brain Development dramatic cases are those described as “wild” or “feral” children. A celebrated case, documented in François Truffaut’s film The Wild Child, is that of Victor, “the wild boy of Aveyron,” who was found in 1798. It was ascertained that he had been left in the woods when very young and had somehow survived. In 1920 two children, Amala and Kamala, were found in India, supposedly having been reared by wolves. Other children have been isolated because of deliberate efforts to keep them from normal social intercourse. In 1970, a child called Genie in the scientific reports was discovered. She had been confined to a small room under conditions of physical restraint and had received only minimal human contact from the age of eighteen months until nearly fourteen years. None of these children, regardless of the cause of isolation, was able to speak or knew any language at the time they were reintroduced into society. This linguistic inability could simply be caused by the fact that these children received no linguistic input, showing that language acquisition, though an innate, neurologically based ability, must be triggered by input from the environment. In the documented cases of Victor and Genie, however, these children were unable to acquire grammar even after years of exposure, and despite the ability to learn many words. Genie was able to learn a large vocabulary, including colors, shapes, objects, natural categories, and abstract as well as concrete terms, but her grammatical skills never fully developed. The UCLA linguist Susan Curtiss, who worked with Genie for several years, reported that Genie’s utterances were, for the most part, “the stringing together of content words, often with rich and clear meaning, but with little grammatical structure.” Many utterances produced by Genie at the age of fifteen and older, several years after her emergence from isolation, are like those of two-year-old children, and not unlike utterances of Broca’s aphasia patients and people with SLI, such as the following: Man motorcycle have. Genie full stomach. Genie bad cold live father house. Want Curtiss play piano. Open door key. Genie’s utterances lacked articles, auxiliary verbs like will or can, the thirdperson singular agreement marker -s, the past-tense marker -ed, question words like who, what, and where, and pronouns. She had no ability to form more complex types of sentences such as questions (e.g., Are you feeling hungry?). 
Genie started learning language after the critical period and was therefore never able to fully acquire the grammatical rules of English. Tests of lateralization (dichotic listening and ERP experiments) showed that Genie’s language was lateralized to the right hemisphere. Her test performance was similar to that found in split-brain and left hemispherectomy patients, yet Genie was not brain damaged. Curtiss speculates that after the critical period, the usual language areas functionally atrophy because of inadequate linguistic stimulation. Genie’s case also demonstrates that language is not the same as communication, because Genie was a powerful nonverbal communicator, despite her limited ability to acquire language. 23 24 INTRODUCTION Brain and Language Chelsea, another case of linguistic isolation, is a woman whose situation also supports the critical-age hypothesis. She was born deaf but was wrongly diagnosed as retarded. When she was thirty-one, her deafness was finally diagnosed, and she was fitted with hearing aids. For years she has received extensive language training and therapy and has acquired a large vocabulary. However, like Genie, Chelsea has not been able to develop a grammar. ERP studies of the localization of language in Chelsea’s brain have revealed an equal response to language in both hemispheres. In other words, Chelsea also does not show the normal asymmetric organization for language. More than 90 percent of children who are born deaf or become deaf before they have acquired language are born to hearing parents. These children have also provided information about the critical age for language acquisition. Because most of their parents do not know sign language at the time these children are born, most receive delayed language exposure. Several studies have investigated the acquisition of American Sign Language (ASL) among deaf signers exposed to the language at different ages. Early learners who received ASL input from birth and up to six years of age did much better in the production and comprehension of complex signs and sign sentences than late learners who were not exposed to ASL until after the age of twelve, even though all of the subjects in these studies had used sign for more than twenty years. There was little difference, however, in vocabulary or knowledge of word order. Another study compared patterns of lateralization in the brains of adult native speakers of English, adult native signers, and deaf adults who had not been exposed to sign language. The nonsigning deaf adults did not show the same cerebral asymmetries as either the hearing adults or the deaf signers. In recent years there have been numerous studies of late learners of sign language, all with similar results. The cases of Genie and other isolated children, as well as deaf late learners of ASL, show that children cannot fully acquire language unless they are exposed to it within the critical period—a biologically determined window of opportunity during which time the brain is prepared to develop language. Moreover, the critical period is linked to brain lateralization. The human brain is primed to develop language in specific areas of the left hemisphere, but the normal process of brain specialization depends on early and systematic experience with language. Language acquisition plays a critical role in, and may even be the trigger for, the realization of normal cerebral lateralization for higher cognitive functions in general, not just for language. 
Beyond the critical period, the human brain seems unable to acquire the grammatical aspects of language, even with substantial linguistic training or many years of exposure. However, it is possible to acquire words and various conversational skills after this point. This evidence suggests that the critical period holds for the acquisition of grammatical abilities, but not necessarily for all aspects of language. The selective acquisition of certain components of language that occurs beyond the critical period is reminiscent of the selective impairment that occurs in various language disorders, where specific linguistic abilities are disrupted. This selectivity in both acquisition and impairment points to a strongly modularized language faculty. Language is separate from other cognitive systems and Language and Brain Development autonomous, and is itself a complex system with various components. In the chapters that follow, we will explore these different language components. A Critical Period for Bird Song That’s the wise thrush; he sings each song twice over Lest you should think he never could recapture The first fine careless rapture! ROBERT BROWNING, “Home-thoughts, from Abroad,” 1845 Mutts © Patrick McDonnell, King Features Syndicate Bird song lacks certain fundamental characteristics of human language, such as discrete sounds and creativity. However, certain species of birds show a critical period for acquiring their “language” similar to the critical period for human language acquisition. Calls and songs of the chaffinch vary depending on the geographic area that the bird inhabits. The message is the same, but the form or “pronunciation” is different. Usually, a young bird sings a simplified version of the song shortly after hatching. Later, it undergoes further learning in acquiring the fully complex version. Because birds from the same brood acquire different chaffinch songs depending on the area in which they finally settle, part of the song must be learned. On the other hand, because the fledging chaffinch sings the song of its species in a simple degraded form, even if it has never heard it sung, some aspect of it is biologically determined, that is, innate. The chaffinch acquires its fully developed song in several stages, just as human children acquire language. There is also a critical period in the song learning of chaffinches as well as white-crowned sparrows, zebra finches, and many other species. If these birds are not exposed to the songs of their species during certain fixed periods after their birth—the period differs from species to species—song acquisition does not occur. The chaffinch is unable to learn new song elements after ten months of age. If it is isolated from other birds before attaining the full complexity of its song and is then exposed again after ten months, its song will not develop further. If white-crowned sparrows lose their hearing during a critical period after they have learned to sing, they produce a song that differs from other white crowns. They need to hear themselves sing in order to produce 25 26 INTRODUCTION Brain and Language particular whistles and other song features. If, however, the deafness occurs after the critical period, their songs are normal. Similarly, baby nightingales in captivity may be trained to sing melodiously by another nightingale, a “teaching bird,” but only before their tail feathers are grown. 
After that period, they know only the less melodious calls of their parents, and nothing more can be done to further their musical development. On the other hand, some bird species show no critical period. The cuckoo sings a fully developed song even if it never hears another cuckoo sing. These communicative messages are entirely innate. For other species, songs appear to be at least partially learned, and the learning may occur throughout the bird’s lifetime. The bullfinch, for example, will learn elements of songs it is exposed to, even those of another species, and incorporate those elements into its own quiet warble. In a more recent example of unconstrained song learning, Danish ornithologists report that birds have begun to copy the ring tones of cellular phones. From the point of view of human language research, the relationship between the innate and learned aspects of bird song is significant. Apparently, the basic nature of the songs of some species is present from birth, which means that it is biologically and genetically determined. The same holds true for human language: Its basic nature is innate. The details of bird song and of human language are both acquired through experience that must occur within a critical period. The Development of Language in the Species As the voice was used more and more, the vocal organs would have been strengthened and perfected through the principle of the inherited effects of use; and this would have reacted on the power of speech. But the relation between the continued use of language and the development of the brain has no doubt been far more important. The mental powers in some early progenitor of man must have been more highly developed than in any existing ape, before even the most imperfect form of speech could have come into use. CHARLES DARWIN, The Descent of Man, 1871 There is much interest today among biologists as well as linguists in the relationship between the development of language and the evolutionary development of the human species. Some view language as species specific; some do not. Some view language ability as a difference in degree between humans and other primates—a continuity view; others see the onset of language ability as a qualitative leap—the discontinuity view. In trying to understand the development of language, scholars past and present have debated the role played by the vocal tract and the ear. For example, it has been suggested that speech could not have developed in nonhuman primates because their vocal tracts were anatomically incapable of producing a large enough inventory of speech sounds. According to this hypothesis, the development of language is linked to the evolutionary development of the speech production and perception apparatus. This, of course, would be accompanied by changes in the brain and the nervous system toward greater complexity. Such a view implies that the languages of our human ancestors of millions of years ago may have been syntactically and phonologically simpler than any language Language and Brain Development known to us today. The notion “simpler” is left undefined, although it has been suggested that this primeval language had a smaller inventory of sounds. One evolutionary step must have resulted in the development of a vocal tract capable of producing the wide variety of sounds of human language, as well as the mechanism for perceiving and distinguishing them. 
However, the existence of mynah birds and parrots is evidence that this step is insufficient to explain the origin of language, because these creatures have the ability to imitate human speech, but not the ability to acquire language. More important, we know from the study of humans who are born deaf and learn sign languages that are used around them that the ability to hear speech sounds is not a necessary condition for the acquisition and use of language. In addition, the lateralization evidence from ERP and imaging studies of people using sign language, as well as evidence from sign language aphasia, shows that sign language is organized in the brain like spoken language. Certain auditory locations within the cortex are activated during signing even though no sound is involved, supporting the contention that the brain is neurologically equipped for language rather than speech. The ability to produce and hear a wide variety of sounds therefore appears to be neither necessary nor sufficient for the development of language in the human species.

A major step in the development of language most probably relates to evolutionary changes in the brain. The linguist Noam Chomsky expresses this view:

It could be that when the brain reached a certain level of complexity it simply automatically had certain properties because that's what happens when you pack 10¹⁰ neurons into something the size of a basketball.³

The biologist Stephen Jay Gould expresses a similar view:

The Darwinist model would say that language, like other complex organic systems, evolved step by step, each step being an adaptive solution. Yet language is such an integrated "all or none" system, it is hard to imagine it evolving that way. Perhaps the brain grew in size and became capable of all kinds of things which were not part of the original properties.⁴

Other linguists, however, support a more Darwinian natural selection development of what is sometimes called "the language instinct":

All the evidence suggests that it is the precise wiring of the brain's microcircuitry that makes language happen, not gross size, shape, or neuron packing.⁵

³Chomsky, N., in Searchinger, G. 1994. The human language series, program 3. Video. New York: Equinox Film/Ways of Knowing, Inc.
⁴Gould, S. J., in Searchinger, G. 1994. The human language series, program 3. Video. New York: Equinox Film/Ways of Knowing, Inc.
⁵Pinker, S. 1995. The language instinct. New York: William Morrow.

The attempt to resolve this controversy clearly requires more research. Another point that is not yet clear is what role, if any, hemispheric lateralization played in language evolution. Lateralization certainly makes greater specialization possible. Research conducted with birds and monkeys, however, shows that lateralization is not unique to the human brain. Thus, while it may constitute a necessary step in the evolution of language, it is not a sufficient one. We do not yet have definitive answers to the origin of language in the human brain. The search for these answers goes on and provides new insights into the nature of language and the nature of the human brain.

Summary

The attempt to understand what makes the acquisition and use of language possible has led to research on the brain-mind-language relationship. Neurolinguistics is the study of the brain mechanisms and anatomical structures that underlie linguistic competence and performance.
Much neurolinguistic research is centered on experimental and behavioral data from people with impaired or atypical language. These results greatly enhance our understanding of language structure and acquisition.

The brain is the most complex organ of the body, controlling motor and sensory activities and thought processes. Research conducted for more than a century has shown that different parts of the brain control different body functions. The nerve cells that form the surface of the brain are called the cortex, which serves as the intellectual decision maker, receiving messages from the sensory organs and initiating all voluntary actions. The brain of all higher animals is divided into two parts called the cerebral hemispheres, which are connected by the corpus callosum, a network that permits the left and right hemispheres to communicate. Each hemisphere exhibits contralateral control of functions. The left hemisphere controls the right side of the body, and the right hemisphere controls the left side. Despite the general symmetry of the human body, much evidence suggests that the brain is asymmetric, with the left and right hemispheres lateralized for different functions.

Neurolinguists have many tools for studying the brain, among them dichotic listening experiments and many types of scans and electrical measurements. These techniques permit the study of the living brain as it processes language. By studying split-brain patients and aphasics, localized areas of the brain can be associated with particular language functions. For example, patients with lesions in the part of the brain called Broca's area may suffer from Broca's aphasia, which results in impaired syntax and agrammatism. Damage to Wernicke's area may result in Wernicke's aphasia, in which fluent speakers produce semantically anomalous utterances, or even worse, jargon aphasia, in which speakers produce nonsense forms that make their utterance uninterpretable. Damage to yet different areas can produce anomia, a form of aphasia in which the patient has word-finding difficulties. Deaf signers with damage to the left hemisphere show aphasia for sign language similar to the language breakdown in hearing aphasics, even though sign language is a visual-spatial language.

Other evidence supports the lateralization of language. Children who undergo a left hemispherectomy show specific linguistic deficits, whereas other cognitive abilities remain intact. If the right brain is damaged or removed after the first two or three years, however, language is unimpaired, but other cognitive disorders may result.

The language faculty is modular. It is independent of other cognitive systems with which it interacts. Evidence for modularity is found in studies of aphasia, of children with specific language impairment (SLI), of linguistic savants, and of children who learn language past the critical period. The genetic basis for an independent language module is supported by studies of SLI in families and twins and by studies of genetic anomalies associated with language disorders.

The critical-age hypothesis states that there is a window of opportunity between birth and middle childhood for learning a first language. The imperfect language learning of persons exposed to language after this period supports the hypothesis. Some songbirds also appear to have a critical period for the acquisition of their calls and songs.

References for Further Reading
Caplan, D. 2001. Neurolinguistics. The handbook of linguistics, M. Aronoff and J. Rees-Miller (eds.). London: Blackwell Publishers.
______. 1992. Language: Structure, processing, and disorders. Cambridge, MA: MIT Press.
______. 1987. Neurolinguistics and linguistic aphasiology. Cambridge, UK: Cambridge University Press.
Coltheart, M., K. Patterson, and J. C. Marshall (eds.). 1980. Deep dyslexia. London: Routledge & Kegan Paul.
Curtiss, S. 1977. Genie: A linguistic study of a modern-day "wild child." New York: Academic Press.
Curtiss, S., and J. Schaeffer. 2005. Syntactic development in children with hemispherectomy: The I-, D-, and C-systems. Brain and Language 94: 147–166.
Damásio, H. 1981. Cerebral localization of the aphasias. Acquired aphasia, M. Taylor Sarno (ed.). New York: Academic Press, 27–65.
Gazzaniga, M. S. 1970. The bisected brain. New York: Appleton-Century-Crofts.
Geschwind, N. 1979. Specializations of the human brain. Scientific American 206 (September): 180–199.
Lenneberg, E. H. 1967. Biological foundations of language. New York: Wiley.
Obler, L. K., and K. Gjerlow. 1999. Language and brain. Cambridge, UK: Cambridge University Press.
Patterson, K. E., J. C. Marshall, and M. Coltheart (eds.). 1986. Surface dyslexia. Hillsdale, NJ: Lawrence Erlbaum.
Pinker, S. 1994. The language instinct. New York: William Morrow.
Poizner, H., E. S. Klima, and U. Bellugi. 1987. What the hands reveal about the brain. Cambridge, MA: MIT Press.
Searchinger, G. 1994. The human language series: 1, 2, 3. Videos. New York: Equinox Film/Ways of Knowing, Inc.
Smith, N. V., and I-M. Tsimpli. 1995. The mind of a savant: Language learning and modularity. Oxford, UK: Blackwell.
Springer, S. P., and G. Deutsch. 1997. Left brain, right brain, 5th edn. New York: W. H. Freeman and Company.
Stromswold, K. 2001. The heritability of language. Language 77(4): 647–721.
Yamada, J. 1990. Laura: A case for the modularity of language. Cambridge, MA: MIT Press.

Exercises

1. The Nobel Prize laureate Roger Sperry has argued that split-brain patients have two minds:

   Everything we have seen so far indicates that the surgery has left these people with two separate minds, that is, two separate spheres of consciousness. What is experienced in the right hemisphere seems to lie entirely outside the realm of experience of the left hemisphere.

   Another Nobel Prize winner in physiology, Sir John Eccles, disagrees. He does not think the right hemisphere can think; he distinguishes between "mere consciousness," which animals possess as well as humans, and language, thought, and other purely human cognitive abilities. In fact, according to him, human nature is all in the left hemisphere. Write a short essay discussing these two opposing points of view, stating your opinion on how to define "the mind."

2. A. Some aphasic patients, when asked to read a list of words, substitute other words for those printed. In many cases, the printed words and the substituted words are similar. The following data are from actual aphasic patients. In each case, state what the two words have in common and how they differ:

       Printed Word    Word Spoken by Aphasic
   i.  liberty         freedom
       canary          parrot
       abroad          overseas
       large           long
       short           small
       tall            long
   ii. decide          decision
       conceal         concealment
       portray         portrait
       bathe           bath
       speak           discussion
       remember        memory

   B. What do the words in groups (i) and (ii) reveal about how words are likely to be stored in the brain?

3. The following sentences spoken by aphasic patients were collected and analyzed by Dr. Harry Whitaker.
In each case, state how the sentence deviates from normal nonaphasic language.

   a. There is under a horse a new sidesaddle.
   b. In girls we see many happy days.
   c. I'll challenge a new bike.
   d. I surprise no new glamour.
   e. Is there three chairs in this room?
   f. Mike and Peter is happy.
   g. Bill and John likes hot dogs.
   h. Proliferate is a complete time about a word that is correct.
   i. Went came in better than it did before.

4. The investigation of individuals with brain damage has been a major source of information regarding the neural basis of language and other cognitive systems. One might suggest that this is like trying to understand how an automobile engine works by looking at a damaged engine. Is this a good analogy? If so, why? If not, why not? In your answer, discuss how a damaged system can or cannot provide information about the normal system.

5. What are the arguments and evidence that have been put forth to support the notion that there are two separate parts of the brain?

6. Discuss the statement: It only takes one hemisphere to have a mind.

7. In this chapter, dichotic listening tests in which subjects hear different kinds of stimuli in each ear were discussed. These tests showed that there were fewer errors made in reporting linguistic stimuli such as the syllables pa, ta, and ka when heard through an earphone on the right ear; other nonlinguistic sounds such as a police car siren were processed with fewer mistakes if heard by the left ear. This is a result of the contralateral control of the brain. There is also a technique that permits visual stimuli to be received either by the right visual field, that is, the right half of the visual scene (going directly to the left hemisphere), or by the left visual field (going directly to the right hemisphere). What are some visual stimuli that could be used in an experiment to further test the lateralization of language?

8. The following utterances were made either by Broca's aphasics or Wernicke's aphasics. Indicate which is which by writing a "B" or "W" next to the utterance.

   a. Goodnight and in the pansy I can't say but into a flipdoor you can see it.
   b. Well . . . sunset . . . uh . . . horses nine, no, uh, two, tails want swish.
   c. Oh, . . . if I could I would, and a sick old man disflined a sinter, minter.
   d. Words . . . words . . . words . . . two, four, six, eight, . . . blaze am he.

9. Shakespeare's Hamlet surely had problems. Some say he was obsessed with being overweight, because the first lines he speaks in the play when alone on the stage in Act I, Scene 2, are:

   O! that this too too solid flesh would melt,
   Thaw, and resolve itself into a dew;

   Others argue that he may have had Wernicke's aphasia, as evidenced by the following passage from Act II, Scene 2:

   Slanders, sir: for the satirical rogue says here that old men have grey beards, that their faces are wrinkled, their eyes purging thick amber and plum-tree gum and that they have a plentiful lack of wit, together with most weak hams: all which, sir, though I most powerfully and potently believe, yet I hold it not honesty to have it thus set down, for you yourself, sir, should be old as I am, if like a crab you could go backward.

   Take up the argument. Is Hamlet aphasic? Argue either case.

10. Research projects:

   a. Recently, it's been said that persons born with "perfect pitch" nonetheless need to exercise that ability at a young age or it goes away by adulthood.
Find out what you can about this topic and write a one-page (or longer) paper describing your investigation. Begin with defining "perfect pitch." Relate your discoveries to the critical-age hypothesis discussed in this chapter.

   b. Consider some of the high-tech methodologies used to investigate the brain discussed in this chapter, such as PET scans and MRIs. What are the upsides and downsides of the use of these technologies on healthy patients? Consider the cost, the intrusiveness, and the ethics of exploring a person's brain weighed against the knowledge obtained from such studies.

   c. Investigate claims that PET scans show that reading silently and reading aloud involve different parts of the left hemisphere.

11. Article review project: Read, summarize, and critically review the article that appeared in Science, Volume 298, November 22, 2002, by Marc D. Hauser, Noam Chomsky, and W. Tecumseh Fitch, entitled "The Faculty of Language: What Is It, Who Has It, and How Did It Evolve?"

12. As discussed in the chapter, agrammatic aphasics may have difficulty reading function words, which are words that have little descriptive content, but they can read more contentful words such as nouns, verbs, and adjectives.

   a. Which of the following words would you predict to be difficult for such a person?

      ore, bee, can (be able to), but, not, knot, may be, may, can (metal container), butt, or, will (future), might (possibility), will (willingness), might (strength)

   b. Discuss three sources of evidence that function words and content words are stored or processed differently in the brain.

13. The traditional writing system of the Chinese languages (e.g., Mandarin, Cantonese) is ideographic (each concept or word is represented by a distinct character). More recently, the Chinese government has adopted a spelling system called pinyin, which is based on the Roman alphabet, and in which each symbol represents a sound. Following are several Chinese words in their character and pinyin forms. (The digit following the Roman letters in pinyin is a tone indicator and may be ignored.)

      木 mu4 'tree'    花 hua1 'flower'    人 ren2 'man'    家 jia1 'home'    狗 gou3 'dog'

   Based on the information provided in this chapter, would the location of neural activity be the same or different when Chinese speakers read in these two systems? Explain.

14. Research project: Dame Margaret Thatcher, a former prime minister of the United Kingdom, has been (famously) quoted as saying: "If you want something said, ask a man . . . if you want something done, ask a woman." This suggests, perhaps, that men and women process information differently. This exercise asks you to take up the controversial question: Are there gender differences in the brain having to do with how men and women process and use language? You might begin your research by seeking answers (try the Internet) to questions about the incidence of SLI, dyslexia, and language development differences in boys versus girls.

15. Research project: Discuss the concept of emergence and its relevance to the quoted material of footnotes 3 and 4, as opposed to footnote 5, on page 27.

PART 2
Grammatical Aspects of Language

The theory of grammar is concerned with the question: What is the nature of a person's knowledge of his language, the knowledge that enables him to make use of language in the normal, creative fashion? A person who knows a language has mastered a system of rules that assigns sound and meaning in a definite way for an infinite class of possible sentences.
N O A M C H O M S K Y, Language and Mind, 1968 1 Morphology: The Words of Language A word is dead When it is said, Some say. I say it just Begins to live That day. EMILY DICKINSON, “A Word Is Dead,” Complete Poems, 1924 Reprinted by permission of the publishers and the Trustees of Amherst College from THE POEMS OF EMILY DICKINSON, Thomas H. Johnson, ed., Cambridge, Mass.: The Belknap Press of Harvard University Press, Copyright © 1951, 1955, 1979, 1983 by the President and Fellows of Harvard College. 36 Every speaker of every language knows tens of thousands of words. Unabridged dictionaries of English contain nearly 500,000 entries, but most speakers don’t know all of these words. It has been estimated that a child of six knows as many as 13,000 words and the average high school graduate about 60,000. A college graduate presumably knows many more than that, but whatever our level of education, we learn new words throughout our lives, such as the many words in this book that you will learn for the first time. Words are an important part of linguistic knowledge and constitute a component of our mental grammars, but one can learn thousands of words in a language and still not know the language. Anyone who has tried to communicate in a foreign country by merely using a dictionary knows this is true. On the other hand, without words we would be unable to convey our thoughts through language or understand the thoughts of others. Someone who doesn’t know English would not know where one word begins or ends in an utterance like Thecatsatonthemat. We separate written words by spaces, but in the spoken language there are no pauses between most words. Morphology: The Words of Language Without knowledge of the language, one can’t tell how many words are in an utterance. Knowing a word means knowing that a particular sequence of sounds is associated with a particular meaning. A speaker of English has no difficulty in segmenting the stream of sounds into six individual words—the, cat, sat, on, the, and mat—because each of these words is listed in his or her mental dictionary, or lexicon (the Greek word for dictionary), that is part of a speaker’s linguistic knowledge. Similarly, a speaker knows that uncharacteristically, which has more letters than Thecatsatonthemat, is nevertheless a single word. The lack of pauses between words in speech has provided humorists with much material. The comical hosts of the show Car Talk, aired on National Public Radio, close the show by reading a list of credits that includes the following cast of characters: Copyeditor: Accounts payable: Pollution control: Purchasing: Statistician: Russian chauffeur: Legal firm: Adeline Moore (add a line more) Ineeda Czech (I need a check) Maury Missions (more emissions) Lois Bidder (lowest bidder) Marge Innovera (margin of error) Picov Andropov (pick up and drop off) Dewey, Cheetham, and Howe1 (Do we cheat ’em? And how!) In all these instances, you would have to have knowledge of English words to make sense of and find humor in such plays on words. The fact that the same sound sequences (Lois Bidder—lowest bidder) can be interpreted differently shows that the relation between sound and meaning is an arbitrary pairing, as discussed in chapter 6. For example, Un petit d’un petit in French means “a little one of a little one,” but in English the sounds resemble the name Humpty Dumpty. When you know a word, you know its sound (pronunciation) and its meaning. 
Because the sound-meaning relation is arbitrary, it is possible to have words with the same sound and different meanings (bear and bare) and words with the same meaning and different sounds (sofa and couch). Because each word is a sound-meaning unit, each word stored in our mental lexicon must be listed with its unique phonological representation, which determines its pronunciation, and with a meaning. For literate speakers, the spelling, or orthography, of most of the words we know is included. Each word in your mental lexicon includes other information as well, such as whether it is a noun, a pronoun, a verb, an adjective, an adverb, a preposition, or a conjunction. That is, the mental lexicon also specifies the grammatical category or syntactic class of the word. You may not consciously know that a form like love is listed as both a verb and a noun, but as a speaker you have such knowledge, as shown by the phrases I love you and You are the love of my life. If such information were not in the mental lexicon, we would not know how to form grammatical sentences, nor would we be able to distinguish grammatical from ungrammatical sentences. 1“Car Talk”/ from National Public Radio. Dewey, Cheetham & Howe, 2006, all rights reserved. 37 38 CHAPTER 1 Morphology: The Words of Language Dictionaries Dictionary, n. A malevolent literary device for cramping the growth of a language and making it hard and inelastic. AMBROSE BIERCE, The Devil’s Dictionary, 1911 The dictionaries that one buys in a bookstore contain some of the information found in our mental dictionaries. However, the aim of most early lexicographers, or dictionary makers, was to prescribe rather than describe the words of a language. They strove to be, as stated in Webster’s dictionaries, the “supreme authority” of the “correct” pronunciation and meaning of a word. To Samuel Johnson, whose seminal Dictionary of the English Language was published in 1755, the aim of a dictionary was to “register” (describe) the language, not to “construct” (prescribe) it. All dictionaries, from the gargantuan twenty-volume Oxford English Dictionary (OED) to the more commonly used “collegiate” dictionaries, provide the following information about each word: (1) spelling, (2) the “standard” pronunciation, (3) definitions to represent the word’s one or more meanings, and (4) parts of speech (e.g., noun, verb, preposition). Other information may include the etymology or history of the word, whether the word is nonstandard (such as ain’t) or slang, vulgar, or obsolete. Many dictionaries provide quotations from published literature to illustrate the given definitions, as was first done by Dr. Johnson. Owing to the increasing specialization in science and the arts, specialty and subspecialty dictionaries are proliferating. Dictionaries of slang and jargon (see chapter 9) have existed for many years; so have multilingual dictionaries. In addition to these, the shelves of bookstores and libraries are now filled with dictionaries written specifically for biologists, engineers, agriculturists, economists, artists, architects, printers, gays and lesbians, transsexuals, runners, tennis players, and almost any group that has its own set of words to describe what they think and what they do. Our own mental dictionaries include only a small set of the entries in all of these dictionaries, but each word is in someone’s lexicon. Content Words and Function Words “. . . and even . . . the patriotic archbishop of Canterbury found it advisable—” “Found what?” said the Duck. 
“Found it,” the Mouse replied rather crossly; “of course you know what ‘it’ means.” “I know what ‘it’ means well enough, when I find a thing,” said the Duck; “it’s generally a frog or a worm. The question is, what did the archbishop find?” LEWIS CARROLL, Alice’s Adventures in Wonderland, 1865 Languages make an important distinction between two kinds of words—content words and function words. Nouns, verbs, adjectives, and adverbs are the Content Words and Function Words content words. These words denote concepts such as objects, actions, attributes, and ideas that we can think about like children, anarchism, soar, and purple. Content words are sometimes called the open class words because we can and regularly do add new words to these classes, such as Bollywood, blog, dis, and 24/7, pronounced “twenty-four seven.” Other classes of words do not have clear lexical meanings or obvious concepts associated with them, including conjunctions such as and, or, and but; prepositions such as in and of; the articles the and a/an, and pronouns such as it. These kinds of words are called function words because they specify grammatical relations and have little or no semantic content. For example, the articles indicate whether a noun is definite or indefinite—the boy or a boy. The preposition of indicates possession, as in “the book of yours,” but this word indicates many other kinds of relations too. The it in it’s raining and the archbishop found it advisable are further examples of words whose function is purely grammatical—they are required by the rules of syntax, and as the cartoon suggests, we can hardly do without them. “FoxTrot” copyright . 2000 Bill Amend. Reprinted with permission of Universal Press Syndicate. All rights reserved. Function words are sometimes called closed class words. It is difficult to think of any conjunctions, prepositions, or pronouns that have recently entered the language. The small set of personal pronouns such as I, me, mine, he, she, and so on are part of this class. With the growth of the feminist movement, some proposals have been made for adding a genderless singular pronoun. If such a pronoun existed, it might have prevented the department head in a large university from making the incongruous statement: “We will hire the best person for the job regardless of his sex.” Various proposals such as “e” have been put forward, but none are likely to gain acceptance because the closed classes are unreceptive to new membership. Rather, speakers prefer to recruit existing pronouns such as they and their for this job, as in “We will hire the best person for the job regardless of their sex.” The difference between content and function words is illustrated by the following test that has circulated over the Internet: 39 40 CHAPTER 1 Morphology: The Words of Language Count the number of F’s in the following text without reading further: FINISHED FILES ARE THE RESULT OF YEARS OF SCIENTIFIC STUDY COMBINED WITH THE EXPERIENCE OF YEARS. Most people come up with three, which is wrong. If you came up with fewer than six, count again, and this time, pay attention to the function word of. This little test illustrates that the brain treats content and function words (like of ) differently. A great deal of psychological and neurological evidence supports this claim. As discussed in the introduction, some brain-damaged patients and people with specific language impairments have greater difficulty in using, understanding, or reading function words than they do with content words. 
Some aphasics are unable to read function words like in or which, but can read the lexical content words inn and witch. The two classes of words also seem to function differently in slips of the tongue produced by normal individuals. For example, a speaker may inadvertently switch words producing “the journal of the editor” instead of “the editor of the journal,” but the switching or exchanging of function words has not been observed. There is also evidence for this distinction from language acquisition (discussed in chapter 7). In the early stages of development, children often omit function words from their speech, as in for example, “doggie barking.” The linguistic evidence suggests that content words and function words play different roles in language. Content words bear the brunt of the meaning, whereas function words connect the content words to the larger grammatical context. Morphemes: The Minimal Units of Meaning “They gave it me,” Humpty Dumpty continued, “for an un-birthday present.” “I beg your pardon?” Alice said with a puzzled air. “I’m not offended,” said Humpty Dumpty. “I mean, what is an un-birthday present?” “A present given when it isn’t your birthday, of course.” LEWIS CARROLL, Through the Looking-Glass, 1871 In the foregoing dialogue, Humpty Dumpty is well aware that the prefix unmeans “not,” as further shown in the following pairs of words: A B desirable likely inspired happy developed sophisticated undesirable unlikely uninspired unhappy undeveloped unsophisticated Morphemes: The Minimal Units of Meaning Thousands of English adjectives begin with un-. If we assume that the most basic unit of meaning is the word, what do we say about parts of words like un-, which has a fixed meaning? In all the words in the B column, un- means the same thing—“not.” Undesirable means “not desirable,” unlikely means “not likely,” and so on. All the words in column B consist of at least two meaningful units: un + desirable, un + likely, un + inspired, and so on. Just as un- occurs with the same meaning in the previous list of words, so does phon- in the following words. (You may not know the meaning of some of them, but you will when you finish this book.) phone phonetic phonetics phonetician phonic phonology phonologist phonological telephone telephonic phoneme phonemic allophone euphonious symphony Phon- is a minimal form in that it can’t be decomposed. Ph doesn’t mean anything; pho, though it may be pronounced like foe, has no relation in meaning to it; and on is not the preposition spelled o-n. In all the words on the list, phon has the identical meaning of “pertaining to sound.” Words have internal structure, which is rule-governed. Uneaten, unadmired, and ungrammatical are words in English, but *eatenun, *admiredun, and *grammaticalun (to mean “not eaten,” “not admired,” “not grammatical”) are not, because we form a negative meaning of a word not by suffixing un- but by prefixing it. When Samuel Goldwyn, the pioneer moviemaker, announced, “In two words: im-possible,” he was reflecting the common view that words are the basic meaningful elements of a language. We have seen that this cannot be so, because some words contain several distinct units of meaning. The linguistic term for the most elemental unit of grammatical form is morpheme. 
The word is derived from the Greek word morphe, meaning “form.” If Goldwyn had taken a linguistics course, he would have said, more correctly, “In two morphemes: im-possible.” The study of the internal structure of words, and of the rules by which words are formed, is morphology. This word itself consists of two morphemes, morph + ology. The suffix -ology means “science of” or “branch of knowledge concerning.” Thus, the meaning of morphology is “the science of (word) forms.” Morphology is part of our grammatical knowledge of a language. Like most linguistic knowledge, this is generally unconscious knowledge. A single word may be composed of one or more morphemes: one morpheme two morphemes three morphemes boy desire morph (“to change form”) boy + ish desire + able morph + ology boy + ish + ness desire + able + ity 41 42 CHAPTER 1 Morphology: The Words of Language four morphemes more than four gentle + man + li + ness un + desire + able + ity un + gentle + man + li + ness anti + dis + establish + ment + ari + an + ism A morpheme may be represented by a single sound, such as the morpheme a meaning “without” as in amoral and asexual, or by a single syllable, such as child and ish in child + ish. A morpheme may also consist of more than one syllable: by two syllables, as in camel, lady, and water; by three syllables, as in Hackensack and crocodile; or by four or more syllables, as in hallucinate, apothecary, and onomatopoeia. A morpheme—the minimal linguistic unit—is thus an arbitrary union of a sound and a meaning (or grammatical function) that cannot be further analyzed. It is often called a linguistic sign, not to be confused with the sign of sign languages. This may be too simple a definition, but it will serve our purposes for now. Every word in every language is composed of one or more morphemes. Internet bloggers love to point out “inconsistencies” in the English language. They observe that while singers sing and flingers fling, it is not the case that fingers “fing.” However, English speakers know that finger is a single morpheme, or a monomorphemic word. The final -er syllable in finger is not a separate morpheme because a finger is not “something that fings.” The meaning of a morpheme must be constant. The agentive morpheme -er means “one who does” in words like singer, painter, lover, and worker, but the same sounds represent the comparative morpheme, meaning “more,” in nicer, prettier, and taller. Thus, two different morphemes may be pronounced identically. The identical form represents two morphemes because of the different meanings. The same sounds may occur in another word and not represent a separate morpheme at all, as in finger. Conversely, the two morphemes -er and -ster have the same meaning, but different forms. Both singer and songster mean “one who sings.” And like -er, -ster is not a morpheme in monster because a monster is not something that “mons” or someone that “is mon” the way youngster is someone who is young. All of this follows from the concept of the morpheme as a sound plus a meaning unit. The decomposition of words into morphemes illustrates one of the fundamental properties of human language—discreteness. In all languages, sound units combine to form morphemes, morphemes combine to form words, and words combine to form larger units—phrases and sentences. Discreteness is an important part of linguistic creativity. We can combine morphemes in novel ways to create new words whose meaning will be apparent to other speakers of the language. 
If you know that “to write” to a disk or a DVD means to put information on it, you automatically understand that a writable DVD is one that can take information; a rewritable DVD is one where the original information can be written over; and an unrewritable DVD is one that does not allow the user to write over the original information. You know the meanings of all these words by virtue of your knowledge of the discrete morphemes write, re-, -able, and un-, and the rules for their combination. Morphemes: The Minimal Units of Meaning Bound and Free Morphemes Prefixes and Suffixes “Dennis the Menace” . Hank Ketcham. Reprinted with permission of North America Syndicate. Our morphological knowledge has two components: knowledge of the individual morphemes and knowledge of the rules that combine them. One of the things we know about particular morphemes is whether they can stand alone or whether they must be attached to a base morpheme. Some morphemes like boy, desire, gentle, and man may constitute words by themselves. These are free morphemes. Other morphemes like -ish, -ness, -ly, pre-, trans-, and un- are never words by themselves but are always parts of words. These affixes are bound morphemes. We know whether each affix precedes or follows other morphemes. Thus, un-, pre- (premeditate, prejudge), and bi- (bipolar, bisexual) are prefixes. They occur before other morphemes. Some morphemes occur only as suffixes, following other morphemes. English examples of suffix morphemes are -ing (sleeping, eating, running, climbing), 43 44 CHAPTER 1 Morphology: The Words of Language -er (singer, performer, reader), -ist (typist, pianist, novelist, linguist), and -ly (manly, sickly, friendly), to mention only a few. Many languages have prefixes and suffixes, but languages may differ in how they deploy these morphemes. A morpheme that is a prefix in one language may be a suffix in another and vice versa. In English the plural morphemes -s and -es are suffixes (boys, lasses). In Isthmus Zapotec, spoken in Mexico, the plural morpheme ka- is a prefix: zigi zike diaga “chin” “shoulder” “ear” kazigi kazike kadiaga “chins” “shoulders” “ears” Languages may also differ in what meanings they express through affixation. In English we do not add an affix to derive a noun from a verb. We have the verb dance as in “I like to dance,” and we have the noun dance as in “There’s a dance or two in the old dame yet.” The form is the same in both cases. In Turkish, you derive a noun from a verb with the suffix -ak, as in the following examples: dur bat “to stop” “to sink” durak batak “stopping place” “sinking place” or “marsh/swamp” To express reciprocal action in English we use the phrase each other, as in understand each other, love each other. In Turkish a morpheme is added to the verb: anla sev “understand” “love” anlash sevish “understand each other” “love each other” The reciprocal suffix in these examples is pronounced sh after a vowel and ish after a consonant. This is similar to the process in English, in which we use a as the indefinite article morpheme before a noun beginning with a consonant, as in a dog, and an before a noun beginning with a vowel, as in an apple. The same morpheme may have more than one slightly different form (see exercise 6, for example). We will discuss the various pronunciations of morphemes in more detail in chapter 5. 
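For readers who find it helpful to see such form-selection stated procedurally, the choice between allomorphs like Turkish -ish/-sh or English a/an can be pictured as a small conditional rule. The Python sketch below is only an illustration; the function names and the simple spelling-based vowel test are ours, not part of the analysis in this chapter.

VOWELS = set("aeiou")

def indefinite_article(noun):
    # English: "an" before a vowel-initial word, "a" otherwise
    # (a spelling-based stand-in for what is really a rule about sounds).
    return ("an " if noun[0].lower() in VOWELS else "a ") + noun

def turkish_reciprocal(stem):
    # Turkish: the reciprocal suffix is "sh" after a vowel, "ish" after a consonant.
    return stem + ("sh" if stem[-1].lower() in VOWELS else "ish")

print(indefinite_article("apple"))   # an apple
print(indefinite_article("dog"))     # a dog
print(turkish_reciprocal("anla"))    # anlash, "understand each other"
print(turkish_reciprocal("sev"))     # sevish, "love each other"

The point is simply that a single morpheme can be paired with a small rule that chooses among its forms; the linguistic details are taken up in chapter 5.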
In Piro, an Arawakan language spoken in Peru, a single morpheme, -kaka, can be added to a verb to express the meaning “cause to”: cokoruha salwa “to harpoon” “to visit” cokoruhakaka salwakaka “cause to harpoon” “cause to visit” In Karuk, a Native American language spoken in the Pacific Northwest, adding -ak to a noun forms the locative adverbial meaning “in.” ikrivaam “house” ikrivaamak “in a house” It is accidental that both Turkish and Karuk have a suffix -ak. Despite the similarity in form, the two meanings are different. Similarly, the reciprocal suffix -ish in Turkish is similar in form to the English suffix -ish as in greenish. Morphemes: The Minimal Units of Meaning Similarity in meaning may give rise to different forms. In Karuk the suffix -ara has the same meaning as the English -y, that is, “characterized by” (hairy means “characterized by hair”). aptiik “branch” aptikara “branchy” These examples illustrate again the arbitrary nature of the linguistic sign, that is, of the sound-meaning relationship, as well as the distinction between bound and free morphemes. Infixes Some languages also have infixes, morphemes that are inserted into other morphemes. Bontoc, spoken in the Philippines, is such a language, as illustrated by the following: Nouns/Adjectives Verbs fikas kilad fusul fumikas kumilad fumusul “strong” “red” “enemy” “to be strong” “to be red” “to be an enemy” In this language, the infix -um- is inserted after the first consonant of the noun or adjective. Thus, a speaker of Bontoc who knows that pusi means “poor” would understand the meaning of pumusi, “to be poor,” on hearing the word for the first time, just as an English speaker who learns the verb sneet would know that sneeter is “one who sneets.” A Bontoc speaker who knows that ngumitad means “to be dark” would know that the adjective “dark” must be ngitad. Oddly enough, the only infixes in English are full-word obscenities, usually inserted into adjectives or adverbs. The most common infix in America is the word fuckin’ and all the euphemisms for it, such as friggin, freakin, flippin, and fuggin, as in in-fuggin-credible, un-fuckin-believable, or Kalama-flippin-zoo, based on the city in Michigan. In Britain, a common infix is bloody, an obscene term in British English, and its euphemisms, such as bloomin’. In the movie and stage musical My Fair Lady, the word abso + bloomin + lutely occurs in one of the songs sung by Eliza Doolittle. Circumfixes Some languages have circumfixes, morphemes that are attached to a base morpheme both initially and finally. These are sometimes called discontinuous morphemes. In Chickasaw, a Muskogean language spoken in Oklahoma, the negative is formed with both a prefix ik- and the suffix -o. The final vowel of the affirmative is dropped before the negative suffix is added. Examples of this circumfixing are: Affirmative Negative chokma lakna palli tiwwi ik + chokm + o ik + lakn + o ik + pall + o ik + tiww + o “he is good” “it is yellow” “it is hot” “he opens (it)” “he isn’t good” “it isn’t yellow” “it isn’t hot” “he doesn’t open (it)” 45 46 CHAPTER 1 Morphology: The Words of Language An example of a more familiar circumfixing language is German. The past participle of regular verbs is formed by adding the prefix ge- and the suffix -t to the verb root. This circumfix added to the verb root lieb “love” produces geliebt, “loved” (or “beloved,” when used as an adjective). Roots and Stems Morphologically complex words consist of a morpheme root and one or more affixes. 
Some examples of English roots are paint in painter, read in reread, ceive in conceive, and ling in linguist. A root may or may not stand alone as a word (paint and read do; ceive and ling don’t). In languages that have circumfixes, the root is the form around which the circumfix attaches, for example, the Chickasaw root chokm in ikchokmo (“he isn’t good”). In infixing languages the root is the form into which the infix is inserted; for example, fikas in the Bontoc word fumikas (“to be strong”). Semitic languages like Hebrew and Arabic have a unique morphological system. Nouns and verbs are built on a foundation of three consonants, and one derives related words by varying the pattern of vowels and syllables. For example, the root for “write” in Egyptian Arabic is ktb, from which the following words (among others) are formed by infixing vowels: katab kaatib kitáab kútub “he wrote” “writer” “book” “books” When a root morpheme is combined with an affix, it forms a stem. Other affixes can be added to a stem to form a more complex stem, as shown in the following: root stem word root stem word root stem stem stem word Chomsky Chomsky + ite Chomsky + ite + s believe believe + able un + believe + able system system + atic un + system + atic un + system + atic + al un + system + atic + al + ly (proper) noun noun + suffix noun + suffix + suffix verb verb + suffix prefix + verb + suffix noun noun + suffix prefix + noun + suffix prefix + noun + suffix + suffix prefix + noun + suffix + suffix + suffix With the addition of each new affix, a new stem and a new word are formed. Linguists sometimes use the word base to mean any root or stem to which an affix is attached. In the preceding example, system, systematic, unsystematic, and unsystematical are bases. Rules of Word Formation Bound Roots It had been a rough day, so when I walked into the party I was very chalant, despite my efforts to appear gruntled and consolate. I was furling my wieldy umbrella . . . when I saw her. . . . She was a descript person. . . . Her hair was kempt, her clothing shevelled, and she moved in a gainly way. JACK WINTER, “How I Met My Wife,” New Yorker, July 25, 1994 “How I Met My Wife” by Jack Winter from The New Yorker, July 25, 1994. Reprinted by permission of the Estate of Jack Winter. Bound roots do not occur in isolation and they acquire meaning only in combination with other morphemes. For example, words of Latin origin such as receive, conceive, perceive, and deceive share a common root, ceive; and the words remit, permit, commit, submit, transmit, and admit share the root mit. For the original Latin speakers, the morphemes corresponding to ceive and mit had clear meanings, but for modern English speakers, Latinate morphemes such as ceive and mit have no independent meaning. Their meaning depends on the entire word in which they occur. A similar class of words is composed of a prefix affixed to a bound root morpheme. Examples are ungainly, but no *gainly; discern, but no *cern; nonplussed, but no *plussed; downhearted but no *hearted, and others to be seen in this section’s epigraph. The morpheme huckle, when joined with berry, has the meaning of a berry that is small, round, and purplish blue; luke when combined with warm has the meaning “somewhat.” Both these morphemes and others like them (cran, boysen) are bound morphemes that convey meaning only in combination. Rules of Word Formation “I never heard of ‘Uglification,’” Alice ventured to say. “What is it?” The Gryphon lifted up both its paws in surprise. 
“Never heard of uglifying!” it exclaimed. “You know what to beautify is, I suppose?” “Yes,” said Alice doubtfully: “it means—to make—prettier.” “Well, then,” the Gryphon went on, “if you don’t know what to uglify is, you are a simpleton.” LEWIS CARROLL, Alice’s Adventures in Wonderland, 1865 When the Mock Turtle listed the branches of Arithmetic for Alice as “Ambition, Distraction, Uglification, and Derision,” Alice was very confused. She wasn’t really a simpleton, since uglification was not a common word in English until Lewis Carroll used it. Still, most English speakers would immediately know the meaning of uglification even if they had never heard or used the word before because they would know the meaning of its individual parts—the root ugly and the affixes -ify and -cation. We said earlier that knowledge of morphology includes knowledge of individual morphemes, their pronunciation, and their meaning, and knowledge of the rules for combining morphemes into complex words. The Mock Turtle added 47 48 CHAPTER 1 Morphology: The Words of Language -ify to the adjective ugly and formed a verb. Many verbs in English have been formed in this way: purify, amplify, simplify, falsify. The suffix -ify conjoined with nouns also forms verbs: objectify, glorify, personify. Notice that the Mock Turtle went even further; he added the suffix -cation to uglify and formed a noun, uglification, as in glorification, simplification, falsification, and purification. By using the morphological rules of English, he created a new word. The rules that he used are as follows: Adjective + ify Verb + cation S S Verb Noun “to make Adjective” “the process of making Adjective” Derivational Morphology SHOE © 1987 MACNELLY. KING FEATURES SYNDICATE. Reprinted with permission. Bound morphemes like -ify and -cation are called derivational morphemes. When they are added to a base, a new word with a new meaning is derived. The addition of -ify to pure—purify—means “to make pure,” and the addition of -cation—purification—means “the process of making pure.” If we invent an adjective, pouzy, to describe the effect of static electricity on hair, you will immediately understand the sentences “Walking on that carpet really pouzified my hair” and “The best method of pouzification is to rub a balloon on your head.” This means that we must have a list of the derivational morphemes in our mental dictionaries as well as the rules that determine how they are added to a root or stem. The form that results from the addition of a derivational morpheme is called a derived word. Derivational morphemes have clear semantic content. In this sense they are like content words, except that they are not words. As we have seen, when a derivational morpheme is added to a base, it adds meaning. The derived word may also be of a different grammatical class than the original word, as shown by suffixes such as -able and -ly. When a verb is suffixed with -able, the result is an adjective, as in desire + able. When the suffix -en is added to an adjective, a verb is derived, as in dark + en. One may form a noun from an adjective, as in sweet + ie. 
Other examples are: Rules of Word Formation Noun to Adjective Verb to Noun Adjective to Adverb boy + -ish virtu + -ous Elizabeth + -an pictur + -esque affection + -ate health + -ful alcohol + -ic acquitt + -al clear + -ance accus + -ation sing + -er conform + -ist predict + -ion exact + -ly Noun to Verb Adjective to Noun Verb to Adjective moral + -ize vaccin + -ate hast + -en tall + -ness specific + -ity feudal + -ism free + -dom read + -able creat + -ive migrat + -ory run(n) + -y Some derivational suffixes do not cause a change in grammatical class. Prefixes never do. Noun to Noun Verb to Verb Adjective to Adjective friend + -ship human + -ity king + -dom New Jersey + -ite vicar + -age Paul + -ine America + -n humanit + -arian mono- + theism dis- + advantage ex- + wife auto- + biography un- + do re- + cover dis- + believe auto- + destruct pink + -ish red + -like a- + moral il- + legal in- + accurate un- + happy semi- + annual dis- + agreeable sub- + minimal When a new word enters the lexicon by the application of morphological rules, other complex derivations may be blocked. For example, when Commun + ist entered the language, words such as Commun + ite (as in Trotsky + ite) or Commun + ian (as in grammar + ian) were not needed; their formation was blocked. Sometimes, however, alternative forms do coexist: for example, Chomskyan and Chomskyist and perhaps even Chomskyite (all meaning “follower of Chomsky’s views of linguistics”). Semanticist and semantician are both used, but the possible word semantite is not. Finally, derivational affixes appear to come in two classes. In one class, the addition of a suffix triggers subtle changes in pronunciation. For example, when we affix -ity to specific (pronounced “specifik” with a k sound), we get specificity (pronounced “specifisity” with an s sound). When deriving Elizabeth + an from Elizabeth, the fourth vowel sound changes from the vowel in Beth to the vowel in Pete. Other suffixes such as -y, -ive, and -ize may induce similar changes: sane/sanity, deduce/deductive, critic/criticize. 49 50 CHAPTER 1 Morphology: The Words of Language On the other hand, suffixes such as -er, -ful, -ish, -less, -ly, and -ness may be tacked onto a base word without affecting the pronunciation, as in baker, wishful, boyish, needless, sanely, and fullness. Moreover, affixes from the first class cannot be attached to a base containing an affix from the second class: *need + less + ity, *moral + ize + ive; but affixes from the second class may attach to bases with either kind of affix: moral + iz(e) + er, need + less + ness. Inflectional Morphology “Zits” . Zits Partnership. Reprinted with permission of King Features Syndicate. Function words like to, it, and be are free morphemes. Many languages, including English, also have bound morphemes that have a strictly grammatical function. They mark properties such as tense, number, person and so forth. Such bound morphemes are called inflectional morphemes. Unlike derivational morphemes, they never change the grammatical category of the stems to which they are attached. Consider the forms of the verb in the following sentences: 1. 2. 3. 4. 5. I sail the ocean blue. He sails the ocean blue. John sailed the ocean blue. John has sailed the ocean blue. John is sailing the ocean blue. In sentence (2) the -s at the end of the verb is an agreement marker; it signifies that the subject of the verb is third person and is singular, and that the verb is in the present tense. It doesn’t add lexical meaning. 
The suffix -ed indicates past tense, and is also required by the syntactic rules of the language when verbs are used with have, just as -ing is required when verbs are used with forms of be. Inflectional morphemes represent relationships between different parts of a sentence. For example, -s expresses the relationship between the verb and the third person singular subject; -ing expresses the relationship between the time the utterance is spoken (e.g., now) and the time of the event. If you say “John is dancing,” it means John is engaged in this activity while you speak. If you say “John danced,” the -ed affix places the activity before you spoke. As we will Rules of Word Formation discuss in chapter 2, inflectional morphology is closely connected to the syntax of the sentence. English also has other inflectional endings such as the plural suffix, which is attached to certain singular nouns, as in boy/boys and cat/cats. In contrast to Old and Middle English, which were more richly inflected languages, as we discuss in chapter 10, modern English has only eight bound inflectional affixes: English Inflectional Morphemes Examples -s -ed -ing -en -s -’s -er -est She wait-s at home. She wait-ed at home. She is eat-ing the donut. Mary has eat-en the donuts. She ate the donut-s. Disa’s hair is short. Disa has short-er hair than Karin. Disa has the short-est hair. third-person singular present past tense progressive past participle plural possessive comparative superlative Inflectional morphemes in English follow the derivational morphemes in a word. Thus, to the derivationally complex word commit + ment one can add a plural ending to form commit + ment + s, but the order of affixes may not be reversed to derive the impossible commit + s + ment = *commitsment. Yet another distinction between inflectional and derivational morphemes is that inflectional morphemes are productive: they apply freely to nearly every appropriate base (excepting “irregular” forms such as feet, not *foots). Most nouns takes an -s inflectional suffix to form a plural, but only some nouns take the derivational suffix -ize to form a verb: idolize, but not *picturize. Compared to many languages of the world, English has relatively little inflectional morphology. Some languages are highly inflected. In Swahili, which is widely spoken in eastern Africa, verbs can be inflected with multiple morphemes, as in nimepiga (ni + me + pig + a), meaning “he has hit something.” Here the verb root pig meaning “hit” has two inflectional prefixes: ni meaning “I,” and me meaning “completed action,” and an inflectional suffix a, which is an object agreement morpheme. Even the more familiar European languages have many more inflectional endings than English. In the Romance languages (languages descended from Latin), the verb has different inflectional endings depending on the subject of the sentence. The verb is inflected to agree in person and number with the subject, as illustrated by the Italian verb parlare meaning “to speak”: Io parlo Tu parli Lui/Lei parla “I speak” “You (singular) speak” “He/she speaks” Noi parliamo Voi parlate Loro parlano “We speak” “You (plural) speak” “They speak” Russian has a system of inflectional suffixes for nouns that indicates the noun’s grammatical relation—whether a subject, object, possessor, and so on— something English does with word order. For example, in English, the sentence Maxim defends Victor means something different from Victor defends Maxim. The order of the words is critical. 
But in Russian, all of the following sentences mean "Maxim defends Victor" (the č is pronounced like the ch in cheese; the š like the sh in shoe; the j like the y in yet):

Maksim zašiščajet Viktora.
Maksim Viktora zašiščajet.
Viktora Maksim zašiščajet.
Viktora zašiščajet Maksim.2

The inflectional suffix -a added to the name Viktor to derive Viktora shows that Victor, not Maxim, is defended. The suffix designates the object of the verb, irrespective of word order. The grammatical relation of a noun in a sentence is called the case of the noun. When case is marked by inflectional morphemes, the process is referred to as case morphology. Russian has a rich case morphology, whereas English case morphology is limited to the one possessive -s and to its system of pronouns. Many of the grammatical relations that Russian expresses with its case morphology are expressed in English with prepositions. Among the world's languages is a richness and variety of inflectional processes. Earlier we saw how German uses circumfixes to inflect a verb stem to produce a past participle: lieb to geliebt, similar to the -ed ending of English. Arabic infixes vowels for inflectional purposes: kitáab "book" but kútub "books." Samoan (see exercise 10) uses a process of reduplication—inflecting a word through the repetition of part or all of the word: savali "he travels," but savavali "they travel." Malay does the same with whole words: orang "person," but orang orang "people." Languages such as Finnish have an extraordinarily complex case morphology, whereas Mandarin Chinese lacks case morphology entirely. Inflection achieves a variety of purposes. In English verbs are inflected with -s to show third person singular agreement. Languages like Finnish and Japanese have a dazzling array of inflectional processes for conveying everything from "temporary state of being" (Finnish nouns) to "strong negative intention" (Japanese verbs). English spoken 1,000 years ago had considerably more inflectional morphology than modern English, as we shall discuss in chapter 10. In distinguishing inflectional from derivational morphemes we may summarize as follows:

Inflectional                                 Derivational
Grammatical function                         Lexical function
No word class change                         May cause word class change
Small or no meaning change                   Some meaning change
Often required by rules of grammar           Never required by rules of grammar
Follow derivational morphemes in a word      Precede inflectional morphemes in a word
Productive                                   Some productive, many nonproductive

Figure 1.1 sums up our knowledge of how morphemes in English are classified.

2 These Russian examples were provided by Stella de Bode.

FIGURE 1.1 | Classification of English morphemes. Morphemes are either bound or free. Bound morphemes comprise affixes and bound roots (-ceive, -mit, -fer); affixes are either derivational (prefixes such as pre-, un-, con-; suffixes such as -ly, -ist, -ment) or inflectional (the suffixes -ing, -er, -s, -s, -est, -'s, -en, -ed). Free morphemes are either open class (content or lexical) words: nouns (girl), adjectives (pretty), verbs (love), adverbs (away); or closed class (function or grammatical) words: conjunctions (and), prepositions (in), articles (the), pronouns (she), auxiliary verbs (is).

The Hierarchical Structure of Words

We saw earlier that morphemes are added in a fixed order. This order reflects the hierarchical structure of the word. A word is not a simple sequence of morphemes. It has an internal structure. For example, the word unsystematic is composed of three morphemes: un-, system, and -atic.
The root is system, a noun, to which we add the suffix -atic, resulting in an adjective, systematic. To this adjective, we add the prefix un- forming a new adjective, unsystematic. In order to represent the hierarchical organization of words (and sentences), linguists use tree diagrams. The tree diagram for unsystematic is as follows (shown here as a labeled bracketing):

[Adjective un [Adjective [Noun system] atic]]

This tree represents the application of two morphological rules:

1. Noun + atic → Adjective
2. un + Adjective → Adjective

Rule 1 attaches the derivational suffix -atic to the root noun, forming an adjective. Rule 2 takes the adjective formed by rule 1 and attaches the derivational prefix un-. The diagram shows that the entire word—unsystematic—is an adjective that is composed of an adjective—systematic—plus un. The adjective is itself composed of a noun—system—plus the suffix -atic. Hierarchical structure is an essential property of human language. Words (and sentences) have component parts, which relate to each other in specific, rule-governed ways. Although at first glance it may seem that, aside from order, the morphemes un- and -atic each relate to the root system in the same way, this is not the case. The root system is "closer" to -atic than it is to un-, and un- is actually connected to the adjective systematic, and not directly to system. Indeed, *unsystem is not a word. Further morphological rules can be applied to the given structure. For example, English has a derivational suffix -al, as in egotistical, fantastical, and astronomical. In these cases, -al is added to an adjective—egotistic, fantastic, astronomic—to form a new adjective. The rule for -al is as follows:

3. Adjective + al → Adjective

Another affix is -ly, which is added to adjectives—happy, lazy, hopeful—to form adverbs happily, lazily, hopefully. Following is the rule for -ly:

4. Adjective + ly → Adverb

Applying these two rules to the derived form unsystematic, we get the following tree for unsystematically:

[Adverb [Adjective [Adjective un [Adjective [Noun system] atic]] al] ly]

This is a rather complex word. Despite its complexity, it is well-formed because it follows the morphological rules of the language. On the other hand, a very simple word can be ungrammatical. Suppose in the above example we first added un- to the root system. That would have resulted in the nonword *unsystem.

[Noun un [Noun system]]

*Unsystem is not a possible word because there is no rule of English that allows un- to be added to nouns. The large soft-drink company whose ad campaign promoted the Uncola successfully flouted this linguistic rule to capture people's attention. Part of our linguistic competence includes the ability to recognize possible versus impossible words, like *unsystem and *Uncola. Possible words are those that conform to the rules; impossible words are those that do not. Tree diagrams make explicit the way speakers represent the internal structure of the morphologically complex words in their language. In speaking and writing, we appear to string morphemes together sequentially as in un + system + atic. However, our mental representation of words is hierarchical as well as linear, and this is shown by tree diagrams. Inflectional morphemes are equally well represented.
The following tree shows that the inflectional agreement morpheme -s follows the derivational morphemes -ize and re- in refinalizes:

[Verb [Verb re [Verb [Adjective final] ize]] s]

The tree also shows that re applies to finalize, which is correct as *refinal is not a word, and that the inflectional morpheme follows the derivational morpheme. The hierarchical organization of words is even more clearly shown by structurally ambiguous words, words that have more than one meaning by virtue of having more than one structure. Consider the word unlockable. Imagine you are inside a room and you want some privacy. You would be unhappy to find the door is unlockable—"not able to be locked." Now imagine you are inside a locked room trying to get out. You would be very relieved to find that the door is unlockable—"able to be unlocked." These two meanings correspond to two different structures, as follows:

[Adjective un [Adjective [Verb lock] able]]
[Adjective [Verb un [Verb lock]] able]

In the first structure the verb lock combines with the suffix -able to form an adjective lockable ("able to be locked"). Then the prefix un-, meaning "not," combines with the derived adjective to form a new adjective unlockable ("not able to be locked"). In the second case, the prefix un- combines with the verb lock to form a derived verb unlock. Then the derived verb combines with the suffix -able to form unlockable, "able to be unlocked." An entire class of words in English follows this pattern: unbuttonable, unzippable, and unlatchable, among others. The ambiguity arises because the prefix un- can combine with an adjective, as illustrated in rule 2, or it can combine with a verb, as in undo, unstaple, unearth, and unloosen. If words were only strings of morphemes without any internal organization, we could not explain the ambiguity of words like unlockable. These words also illustrate another important point, which is that structure is important to determining meaning. The same three morphemes occur in both versions of unlockable, yet there are two distinct meanings. The different meanings arise because of the different structures.

Rule Productivity

"Peanuts" copyright © United Feature Syndicate. Reprinted by permission.

We have noted that some morphological processes, inflection in particular, are productive, meaning that they can be used freely to form new words from the list of free and bound morphemes. Among derivational morphemes, the suffix -able can be conjoined with any verb to derive an adjective with the meaning of the verb and the meaning of -able, which is something like "able to be" as in accept + able, laugh + able, pass + able, change + able, breathe + able, adapt + able, and so on. The productivity of this rule is illustrated by the fact that we find -able affixed to new verbs such as downloadable and faxable. The prefix un- derives same-class words with an opposite meaning: unafraid, unfit, un-American, and so on. Additionally, un- can be added to derived adjectives that have been formed by morphological rules, resulting in perfectly acceptable words such as un + believe + able or un + pick + up + able. Yet un- is not fully productive. We find happy and unhappy, cowardly and uncowardly, but not sad and *unsad, brave and *unbrave, or obvious and *unobvious.
It appears that the “un-Rule” is most productive for adjectives that are derived from verbs, such as unenlightened, unsimplified, uncharacterized, unauthorized, undistinguished, and so on. It also appears that most acceptable un- words have polysyllabic bases, and while we have unfit, uncool, unread, and unclean, many of the unacceptable -un forms have monosyllabic stems such as *unbig, *ungreat, *unred, *unsad, *unsmall, *untall. The rule that adds an -er to verbs in English to produce a noun meaning “one who does” is a nearly productive morphological rule, giving us examiner, examtaker, analyzer, lover, hunter, and so forth, but fails full productivity owing to “unwords” like *chairer, which is not “one who chairs.” Other derivational morphemes fall farther short of productivity. Consider: sincerity warmth moisten from from from sincere warm moist The suffix -ity is found in many other words in English, like chastity, scarcity, and curiosity; and -th occurs in health, wealth, depth, width, and growth. We find -en in sadden, ripen, redden, weaken, and deepen. Still, the phrase “*The tragicity of Hamlet” sounds somewhat strange, as does “*I’m going to heaten the sauce.” Someone may say coolth, but when “words” like tragicity, heaten, and coolth are used, it is usually either a slip of the tongue or an attempt at humor. Most adjectives will not accept any of these derivational suffixes. Even less productive to the point of rareness are such derivational morphemes as the diminutive suffixes in the words pig + let and sap + ling. In the morphologically complex words that we have seen so far, we can generally predict the meaning based on the meaning of the morphemes that make up the word. Unhappy means “not happy” and acceptable means “fit to be accepted.” However, one cannot always know the meaning of the words derived from free and derivational morphemes by knowing the morphemes themselves. The following un- forms have unpredictable meanings: unloosen unrip undo untread unearth unfrock unnerve “loosen, let loose” “rip, undo by ripping” “reverse doing” “go back through in the same steps” “dig up” “deprive (a cleric) of ecclesiastic rank” “fluster” Morphologically complex words whose meanings are not predictable must be listed individually in our mental lexicons. However, the morphological rules must also be in the grammar, revealing the relation between words and providing the means for forming new words. 57 58 CHAPTER 1 Morphology: The Words of Language Exceptions and Suppletions “Peanuts” copyright . United Feature Syndicate. Reprinted by permission. The morphological process that forms plural from singular nouns does not apply to words like child, man, foot, and mouse. These words are exceptions to the English inflectional rule of plural formation. Similarly, verbs like go, sing, bring, run, and know are exceptions to the inflectional rule for producing past tense verbs in English. When children are learning English, they first learn the regular rules, which they apply to all forms. Thus, we often hear them say mans and goed. Later in the acquisition process, they specifically learn irregular plurals like men and mice, and irregular past tense forms like came and went. These children’s errors are actually evidence that the regular rules exist. This is discussed more fully in chapter 7. Irregular, or suppletive, forms are treated separately in the grammar. 
That is, one cannot use the regular rules of inflectional morphology to add affixes to words that are exceptions like child/children, but must replace the uninflected form with another word. It is possible that for regular words, only the singular form need be specifically stored in the lexicon because we can use the inflectional rules to form plurals. But this can’t be so with suppletive exceptions, and children, mice, and feet must be learned separately. The same is true for suppletive past tense forms and comparative forms. There are regular rules—suffixes -ed and -er—to handle most cases such as walked and taller, but words like went and worse need to be learned individually as meaning “goed” and “badder.” When a new word enters the language, the regular inflectional rules generally apply. The plural of geek, when it was a new word in English, was geeks, not *geeken, although we are advised that some geeks wanted the plural of fax to be *faxen, like oxen, when fax entered the language as a shortened form of facsimile. Never fear: its plural is faxes. The exception to this may be a word “borrowed” from a foreign language. For example, the plural of Latin datum has always been data, never datums, though nowadays data, the one-time plural, is treated by many as a singular word like information. The past tense of the verb hit, as in the sentence “Yesterday you hit the ball,” and the plural of the noun sheep, as in “The sheep are in the meadow,” show that some morphemes seem to have no phonological shape at all. We know that hit in the above sentence is hit + past because of the time adverb yesterday, and we know that sheep is the phonetic form of sheep + plural because of the plural verb form are. Rules of Word Formation When a verb is derived from a noun, even if it is pronounced the same as an irregular verb, the regular rules apply to it. Thus ring, when used in the sense of encircle, is derived from the noun ring, and as a verb it is regular. We say the police ringed the bank with armed men, not *rang the bank with armed men. In the jargon of baseball one says that the hitter flied out (hit a lofty ball that was caught), rather than *flew out, because the verb came from the compound noun fly ball. Indeed, when a noun is used in a compound in which its meaning is lost, such as flatfoot, meaning “cop,” its plural follows the regular rule, so one says two flatfoots to refer to a pair of cops slangily, not *two flatfeet. It’s as if the noun is saying: “If you don’t get your meaning from me, you don’t get my special plural form.” Making compounds plural, however, is not always simply adding -s as in girlfriends. Thus for many speakers the plural of mother-in-law is mothers-in-law, whereas the possessive form is mother-in-law’s; the plural of court-martial is courts-martial and the plural of attorney general is attorneys general in a legal setting, but for most of the rest of us it is attorney generals. If the rightmost word of a compound takes an irregular form, however, the entire compound generally follows suit, so the plural of footman is footmen, not *footmans or *feetman or *feetmen. Lexical Gaps “Curiouser and curiouser!” cried Alice (she was so much surprised, that for the moment she quite forgot how to speak good English). LEWIS CARROLL, Alice’s Adventures in Wonderland, 1865 The redundancy of alternative forms such as Chomskyan/Chomskyite, all of which conform to the regular rules of word formation, may explain some of the accidental gaps (also called lexical gaps) in the lexicon. 
Accidental gaps are well-formed but nonexisting words. The actual words in a language constitute only a subset of the possible words. Speakers of a language may know tens of thousands of words. Dictionaries, as we noted, include hundreds of thousands of words, all of which are known by some speakers of the language. But no dictionary can list all possible words, because it is possible to add to the vocabulary of a language in many ways. (Some of these will be discussed here and some in chapter 10 on language change.) There are always gaps in the lexicon—words not present but that could be added. Some of the gaps are due to the fact that a permissible sound sequence has no meaning attached to it (like blick, or slarm, or krobe). Note that the sequence of sounds must be in keeping with the constraints of the language. *bnick is not a “gap” because no word in English can begin with a bn. We will discuss such constraints in chapter 5. Other gaps result when possible combinations of morphemes never come into use. Speakers can distinguish between impossible words such as *unsystem and *needlessity, and possible but nonexisting words such as curiouser, linguisticism, and antiquify. The ability to make this distinction is further evidence that the morphological component of our mental grammar consists of not just a lexicon—a list of existing words—but also of rules that enable us to create and understand new words, and to recognize possible and impossible words. 59 60 CHAPTER 1 Morphology: The Words of Language Other Morphological Processes The various kinds of affixation that we have discussed are by far the most common morphological processes among the world’s languages. But, as we continue to emphasize in this book, the human language capacity is enormously creative, and that creativity extends to ways other than affixation that words may be altered and created. Back-Formations [A girl] was delighted by her discovery that eats and cats were really eat + -s and cat + -s. She used her new suffix snipper to derive mik (mix), upstair, downstair, clo (clothes), len (lens), brefek (from brefeks, her word for breakfast), trappy (trapeze), even Santa Claw. STEVEN PINKER, Words and Rules: The Ingredients of Language, 1999 Misconception can sometimes be creative, and nothing in this world both misconceives and creates like a child, as we shall see in chapter 7. A new word may enter the language because of an incorrect morphological analysis. For example, peddle was derived from peddler on the mistaken assumption that the -er was the agentive suffix. Such words are called back-formations. The verbs hawk, stoke, swindle, and edit all came into the language as back-formations—of hawker, stoker, swindler, and editor. Pea was derived from a singular word, pease, by speakers who thought pease was a plural. Some word creation comes from deliberately miscast back-formations. The word bikini comes from the Bikini atoll of the Marshall Islands. Because the first syllable bi- is a morpheme meaning “two” in words like bicycle, some clever person called a topless bathing suit a monokini. Historically, a number of new words have entered the English lexicon in this way. Based on analogy with such pairs as act/action, exempt/exemption, and revise/revision, new words resurrect, preempt, and televise were formed from the existing words resurrection, preemption, and television. 
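The analogical step behind these coinages can be pictured as stripping off what looks like a familiar suffix. The Python sketch below is a toy illustration only; the function name and the ad hoc spelling adjustment are ours, chosen just to cover the examples above, and no claim is made that speakers coin back-formations this way.

def back_form(noun):
    # Treat a final "-ion" as the suffix seen in pairs like revise/revision
    # and peel it off to coin a verb that did not previously exist.
    if noun.endswith("ion"):
        stem = noun[:-3]
        # crude spelling repair: revision -> revis -> revise, television -> televise
        return stem + "e" if stem.endswith("s") else stem
    return noun

for word in ["resurrection", "preemption", "television"]:
    print(word, "->", back_form(word))   # resurrect, preempt, televise

Real back-formation, of course, is driven by speakers' (sometimes mistaken) morphological analysis rather than by spelling.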
Language purists sometimes rail against back-formations and cite enthuse and liaise (from enthusiasm and liaison) as examples of language corruption. However, language is not corrupt; it is adaptable and changeable. Don't be surprised to discover in your lifetime that shevelled and chalant have infiltrated the English language to mean "tidy" and "concerned," and if it happens do not cry "havoc"; all will be well.

Compounds

[T]he Houynhnms have no Word in their Language to express any thing that is evil, except what they borrow from the Deformities or ill Qualities of the Yahoos. Thus they denote the Folly of a Servant, an Omission of a Child, a Stone that cuts their feet, a Continuance of foul or unseasonable Weather, and the like, by adding to each the Epithet of Yahoo. For instance, Hnhm Yahoo, Whnaholm Yahoo, Ynlhmnawihlma Yahoo, and an ill contrived House, Ynholmhnmrohlnw Yahoo.

JONATHAN SWIFT, Gulliver's Travels, 1726

Two or more words may be joined to form new, compound words. English is very flexible in the kinds of combinations permitted, as the following table of compounds shows. Each entry in the table represents dozens of similar combinations.

First word       Second word: Adjective    Noun          Verb
Adjective                     bittersweet   poorhouse     whitewash
Noun                          headstrong    homework      spoonfeed
Verb                          —             pickpocket    sleepwalk

Some compounds that have been introduced fairly recently into English are Facebook, YouTube, power nap, and carjack.

When the two words are in the same grammatical category, the compound will also be in this category: noun + noun = noun, as in girlfriend, fighter-bomber, paper clip, elevator-operator, landlord, mailman; adjective + adjective = adjective, as in icy-cold, red-hot, worldly wise. In English, the rightmost word in a compound is the head of the compound. The head is the part of a word or phrase that determines its broad meaning and grammatical category. Thus, when the two words fall into different categories, the class of the second or final word determines the grammatical category of the compound: noun + adjective = adjective, as in headstrong; verb + noun = noun, as in pickpocket. On the other hand, compounds formed with a preposition are in the category of the nonprepositional part of the compound, such as the verb (to) overtake or the noun (the) sundown; even though a preposition is present, it is not the head and does not determine the category.

Although two-word compounds are the most common in English, it would be difficult to state an upper limit: Consider three-time loser, four-dimensional space-time, sergeant-at-arms, mother-of-pearl, man about town, master of ceremonies, and daughter-in-law. Dr. Seuss uses the rules of compounding when he explains "when tweetle beetles battle with paddles in a puddle, they call it a tweetle beetle puddle paddle battle."3

Spelling does not tell us what sequence of words constitutes a compound; whether a compound is spelled with a space between the two words, with a hyphen, or with no separation at all depends on the idiosyncrasies of the particular compound, as shown, for example, in blackbird, gold-tail, and smoke screen.

Like derived words, compounds have internal structure. This is clear from the ambiguity of a compound like top + hat + rack, which can mean "a rack for top hats," corresponding to the labeled bracketing in (1), or "the highest hat rack," corresponding to the bracketing in (2):

(1) [Noun [Noun [Adjective top] [Noun hat]] [Noun rack]]   "a rack for top hats"
(2) [Noun [Adjective top] [Noun [Noun hat] [Noun rack]]]   "the highest hat rack"

3 From FOX IN SOCKS by Dr.
Seuss, Trademark/ & copyright . by Dr. Seuss Enterprises, L.P., 1965, renewed 1993. Used by permission of Random House Children’s Books, a division of Random House, Inc., and International Creative Management. 61 62 CHAPTER 1 Morphology: The Words of Language Meaning of Compounds The meaning of a compound is not always the sum of the meanings of its parts; a blackboard may be green or white. Everyone who wears a red coat is not a Redcoat (slang for British soldier during the American Revolutionary War). The difference between the sentences “She has a red coat in her closet” and “She has a Redcoat in her closet” would have been highly significant in America in 1776. Other compounds reveal other meaning relations between the parts, which are not entirely consistent because many compounds are idiomatic (idioms are discussed in chapter 3). A boathouse is a house for boats, but a cathouse is not a house for cats. (It is slang for a house of prostitution or whorehouse.) A jumping bean is a bean that jumps, a falling star is a star that falls, and a magnifying glass is a glass that magnifies; but a looking glass is not a glass that looks, nor is an eating apple an apple that eats, and laughing gas does not laugh. Peanut oil and olive oil are oils made from something, but what about baby oil? And is this a contradiction: “horse meat is dog meat”? Not at all, since the first is meat from horses and the other is meat for dogs. In the examples so far, the meaning of each compound includes at least to some extent the meanings of the individual parts. However, many compounds nowadays do not seem to relate to the meanings of the individual parts at all. A jack-in-a-box is a tropical tree, and a turncoat is a traitor. A highbrow does not necessarily have a high brow, nor does a bigwig have a big wig, nor does an egghead have an egg-shaped head. Like certain words with the prefix un-, the meaning of many compounds must be learned as if they were individual whole words. Some of the meanings may be figured out, but not all. If you had never heard the word hunchback, it might be possible to infer the meaning; but if you had never heard the word flatfoot, it is doubtful you would know it means “detective” or “policeman,” even though the origin of the word, once you know the meaning, can be figured out. The pronunciation of English compounds differs from the way we pronounce the sequence of two words that are not compounded. In an actual compound, the first word is usually stressed (pronounced somewhat louder and higher in pitch), and in a noncompound phrase the second word is stressed. Thus we stress Red in Redcoat but coat in red coat. (Stress, pitch, and other similar features are discussed in chapters 4 and 5.) Universality of Compounding Other languages have rules for conjoining words to form compounds, as seen by French cure-dent, “toothpick”; German Panzerkraftwagen, “armored car”; Russian cetyrexetaznyi, “four-storied”; and Spanish tocadiscos, “record player.” In the Native American language Tohono O’odham, the word meaning “thing” is haɁ ichu, and it combines with doakam, “living creatures,” to form the compound haɁ ichu doakam, “animal life.” In Twi, by combining the word meaning “son” or “child,” ɔ ba, with the word meaning “chief,” ɔ hene, one derives the compound ɔ heneba, meaning “prince.” By adding the word “house,” ofi, to ɔ hene, the word meaning “palace,” ahemfi, is derived. The other changes that occur in the Twi compounds are due to phonological and morphological rules in the language. 
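Returning to English compounds for a moment, the head-based generalizations discussed earlier (the rightmost word fixes the compound's category, and its irregular plural carries over unless the compound's meaning has drifted away from its parts, as with flatfoot) can be sketched as a toy program. The small lexicon and the meaning_lost flag are hypothetical illustrations made under those assumptions, not part of the text.

# A toy sketch (hypothetical) of two head-based generalizations about English compounds:
# 1) the rightmost word (the head) determines the compound's grammatical category;
# 2) the head's irregular plural carries over (footman -> footmen) unless the
#    compound's meaning is no longer built from its parts (flatfoot -> flatfoots).

CATEGORY = {"pick": "V", "pocket": "N", "head": "N", "strong": "Adj",
            "foot": "N", "man": "N", "flat": "Adj"}
IRREGULAR_PLURAL = {"man": "men", "foot": "feet"}

def compound_category(words):
    """The category of the whole compound is the category of its rightmost word."""
    return CATEGORY[words[-1]]

def compound_plural(words, meaning_lost=False):
    """Pluralize a noun compound; meaning_lost marks compounds like flatfoot 'cop'."""
    head = words[-1]
    if head in IRREGULAR_PLURAL and not meaning_lost:
        return "".join(words[:-1]) + IRREGULAR_PLURAL[head]
    return "".join(words) + "s"

print(compound_category(["pick", "pocket"]))                   # N
print(compound_category(["head", "strong"]))                   # Adj
print(compound_plural(["foot", "man"]))                        # footmen
print(compound_plural(["flat", "foot"], meaning_lost=True))    # flatfoots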
In Thai, the word for "cat" is mɛɛw, the word for "watch" (in the sense of "to watch over") is fâw, and the word for "house" is bâan. The word for "watch cat" (like a watchdog) is the compound mɛɛwfâwbâan—literally, "catwatchhouse." Compounding is a common and frequent process for enlarging the vocabulary of all languages.

"Pullet Surprises"

Our knowledge of the morphemes and morphological rules of our language is often revealed by the "errors" we make. We may guess the meaning of a word we do not know. Sometimes we guess wrong, but our wrong guesses are nevertheless "intelligent." Amsel Greene collected errors made by her students in vocabulary-building classes and published them in a book called Pullet Surprises.4 The title is taken from a sentence written by one of her high school students: "In 1957 Eugene O'Neill won a Pullet Surprise." What is most interesting about these errors is how much they reveal about the students' knowledge of English morphology. The creativity of these students is illustrated in the following examples:

Word             Student's Definition
deciduous        "able to make up one's mind"
longevity        "being very tall"
fortuitous       "well protected"
gubernatorial    "to do with peanuts"
bibliography     "holy geography"
adamant          "pertaining to original sin"
diatribe         "food for the whole clan"
polyglot         "more than one glot"
gullible         "to do with sea birds"
homogeneous      "devoted to home life"

The student who used the word indefatigable in the sentence

She tried many reducing diets, but remained indefatigable.

clearly shows morphological knowledge: in meaning "not" as in ineffective; de meaning "off" as in decapitate; fat as in "fat"; able as in able; and combined meaning, "not able to take the fat off." Our contribution to Greene's collection is metronome: "a city-dwelling diminutive troll."

4 Greene, A. 1969. Pullet surprises. Glenview, IL: Scott, Foresman.

Sign Language Morphology

Sign languages are rich in morphology. Like spoken languages, signs belong to grammatical categories. They have root and affix morphemes, free and bound morphemes, lexical content and grammatical morphemes, derivational and inflectional morphemes, and morphological rules for their combination to form morphologically complex signs. The affixation is accomplished by preceding or following a particular gesture with another "affixing" gesture. The suffix meaning "negation," roughly analogous to un-, non-, or dis-, is accomplished as a rapid turning over of the hand(s) following the end of the root sign that is being negated. For example, "want" is signed with open palms facing upward; "don't want" follows that gesture with a turning of the palms to face downward. This "reversal of orientation" suffix may be applied, with necessary adjustments, to many root signs. In sign language many morphological processes are not linear. Rather, the sign stem occurs nested within various movements and locations in signing space so that the gestures are simultaneous, an impossibility with spoken languages, as in the examples in Figure 1.2.

FIGURE 1.2 | Derivationally related signs in ASL. Copyright © 1987 Massachusetts Institute of Technology, by permission of The MIT Press.5

Figure 1.2 illustrates the derivational process in ASL that is equivalent to the formation of the nouns comparison and measuring from the verbs compare and measure in English. Everything about the root morpheme remains the same except for the movement of the hands.
Inflection of sign roots also occurs in ASL and all other sign languages, which characteristically modify the movement of the hands and the spatial contours of the area near the body in which the signs are articulated. For example, movement away from the signer’s body toward the “listener” might inflect a verb as in “I see you,” whereas movement away from the listener and toward the body would inflect the verb as in “you see me.” Morphological Analysis: Identifying Morphemes Speakers of a language have knowledge of the internal structure of a word because their mental grammars include a mental lexicon of morphemes and the 5 Poizner, Howard, Edward Klima, and Ursula Bellugi. “What the Hands Reveal about the Brain” figure: “Derivationally related signs in ASL.” © 1987 Massachusetts Institute of Technology, by permission of The MIT Press. Morphological Analysis: Identifying Morphemes morphological rules for their combination. Of course, mistakes are made while learning, but these are quickly remedied. (See chapter 7 for details of how children acquire language.) Suppose you didn’t know English and were a linguist from the planet Zorx wishing to analyze the language. How would you discover the morphemes of English? How would you determine whether a word in that language had one, two, or more morphemes? The first thing to do would be to ask native speakers how they say various words. (It would help to have a Zorxese-English interpreter along; otherwise, copious gesturing is in order.) Assume you are talented in miming and manage to collect the following forms: Adjective Meaning ugly uglier ugliest pretty prettier prettiest tall taller tallest “very unattractive” “more ugly” “most ugly” “nice looking” “more nice looking” “most nice looking” “large in height” “more tall” “most tall” To determine what the morphemes are in such a list, the first thing a field linguist would do is to see if some forms mean the same thing in different words, that is, to look for recurring forms. We find them: ugly occurs in ugly, uglier, and ugliest, all of which include the meaning “very unattractive.” We also find that -er occurs in prettier and taller, adding the meaning “more” to the adjectives to which it is attached. Similarly, -est adds the meaning “most.” Furthermore, by asking additional questions of our English speaker, we find that -er and -est do not occur in isolation with the meanings of “more” and “most.” We can therefore conclude that the following morphemes occur in English: ugly pretty tall -er -est root morpheme root morpheme root morpheme bound morpheme “comparative” bound morpheme “superlative” As we proceed we find other words that end with -er (e.g., singer, lover, bomber, writer, teacher) in which the -er ending does not mean “comparative” but, when attached to a verb, changes it to a noun who “verbs,” (e.g., sings, loves, bombs, writes, teaches). So we conclude that this is a different morpheme, even though it is pronounced the same as the comparative. We go on and find words like number, somber, butter, member, and many others in which the -er has no separate meaning at all—a somber is not “one who sombs” and a member does not memb—and therefore these words must be monomorphemic. 65 66 CHAPTER 1 Morphology: The Words of Language Once you have practiced on the morphology of English, you might want to go on to describe another language. Paku was invented by the linguist Victoria Fromkin for a 1970s TV series called Land of the Lost, recently made into a major motion picture of the same name. 
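Before turning to Paku, the recurring-forms procedure just applied to the English comparative and superlative data can be sketched as a small program. The pairing of forms and the way spelling alternations (uglier, prettier) are smoothed over by keeping only the ending shared by every pair are assumptions made for illustration; the sketch is not a description of what speakers or field linguists literally do.

# A minimal sketch (hypothetical) of the "look for recurring forms" procedure:
# given base/inflected pairs that share a meaning component, strip the shared
# stem material and keep whatever ending recurs across all of the pairs.

def residue(base: str, inflected: str) -> str:
    """The part of the inflected form left over after the shared stem."""
    i = 0
    while i < min(len(base), len(inflected)) and base[i] == inflected[i]:
        i += 1
    return inflected[i:]

def recurring_suffix(pairs) -> str:
    """Longest ending shared by every residue; spelling noise (y ~ i) drops out."""
    residues = [residue(b, f) for b, f in pairs]
    suffix = residues[0]
    while not all(r.endswith(suffix) for r in residues):
        suffix = suffix[1:]
    return suffix

comparatives = [("ugly", "uglier"), ("pretty", "prettier"), ("tall", "taller")]
superlatives = [("ugly", "ugliest"), ("pretty", "prettiest"), ("tall", "tallest")]

print(recurring_suffix(comparatives))   # er   ("comparative")
print(recurring_suffix(superlatives))   # est  ("superlative")

Applied in the same way to singular/plural pairs such as Paku me/meni in the data that follow, the same function would return ni.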
This was the language used by the monkey people called Pakuni. Suppose you found yourself in this strange land and attempted to find out what the morphemes of Paku were. Again, you would collect your data from a native Paku speaker and proceed as the Zorxian did with English. Consider the following data from Paku: me ye we wa abuma adusa abu Paku “I” “you (singular)” “he” “she” “girl” “boy” “child” “one Paku” meni yeni weni wani abumani adusani abuni Pakuni “we” “you (plural)” “they (masculine)” “they (feminine)” “girls” “boys” “children” “more than one Paku” By examining these words you find that the plural forms end in -ni and the singular forms do not. You therefore conclude that -ni is a separate morpheme meaning “plural” that is attached as a suffix to a noun. Here is a more challenging example, but the principles are the same. Look for repetitions and near repetitions of the same word parts, taking your cues from the meanings given. These are words from Michoacan Aztec, an indigenous language of Mexico: nokali nokalimes mokali ikali nopelo “my house” “my houses” “your house” “his house” “my dog” mopelo mopelomes ikwahmili nokwahmili mokwahmili “your dog” “your dogs” “his cornfield” “my cornfield” “your cornfield” We see there are three base meanings: house, dog, and cornfield. Starting with house we look for commonalities in all the forms that refer to “house.” They all contain kali so that makes a good first guess. (We might, and you might, have reasonably guessed kal, but eventually we wouldn’t know what to do with the i at the end of nokali and mokali.) With kali as “house” we may infer that no is a prefix meaning “my,” and that is supported by nopelo, meaning “my dog.” This being the case, we guess that pelo is “dog,” and see where that leads us. If pelo is “dog” and mopelo is “your dog,” then mo is probably the prefix for “your.” Now that we think that the possessive pronouns are prefixes, we can look at ikali and deduce that i means “his.” If we’re right about the prefixes then we can separate out the word for “cornfield” as kwahmili, and at this point we’re a-rockin’ and a-rollin’. The only morpheme unaccounted for is “plural.” We have two instances of plurality, nokalimes and mopelomes, but since we know no, kali, mo, and pelo, it is straightforward to identify the plural morpheme as the suffix mes. Summary In summary of our analysis, then: kali pelo kwahmili nomoi-mes “house” “dog” “cornfield” “my” “your” “his” “plural” By following the analytical principles just discussed, you should be able to solve some of the more complex morphological puzzles that appear in the exercises. Summary Knowing a language means knowing the morphemes of that language, which are the elemental units that constitute words. Moralizers is an English word composed of four morphemes: moral + ize + er + s. When you know a word or morpheme, you know both its form (sound or gesture) and its meaning; these are inseparable parts of the linguistic sign. The relationship between form and meaning is arbitrary. There is no inherent connection between them (i.e., the words and morphemes of any language must be learned). Morphemes may be free or bound. Free morphemes stand alone like girl or the, and they come in two types: open class, containing the content words of the language, and closed class, containing function words such as the or of. Bound morphemes may be affixes or bound roots such as -ceive. Affixes may be prefixes, suffixes, circumfixes, and infixes. Affixes may be derivational or inflectional. 
Derivational affixes derive new words; inflectional affixes, such as the plural affix -s, make grammatical changes to words. Complex words contain a root around which stems are built by affixation. Rules of morphology determine what kind of affixation produces actual words such as un + system + atic, and what kind produces nonwords such as *un + system. Words have hierarchical structure evidenced by ambiguous words such as unlockable, which may be un + lockable “unable to be locked” or unlock + able “able to be unlocked.” Some morphological rules are productive, meaning they apply freely to the appropriate stem; for example, re- applies freely to verbal stems to give words like redo, rewash, and repaint. Other rules are more constrained, forming words like young + ster but not *smart + ster. Inflectional morphology is extremely productive: the plural -s applies freely even to nonsense words. Suppletive forms escape inflectional morphology, so instead of *mans we have men; instead of *bringed we have brought. There are many ways for new words to be created other than affixation. Compounds are formed by uniting two or more root words in a single word, such as homework. The head of the compound (the rightmost word) bears the basic meaning, so homework means a kind of work done at home, but often the 67 68 CHAPTER 1 Morphology: The Words of Language meaning of compounds is not easily predictable and must be learned as individual lexical items, such as laughing gas. Back-formations are words created by misinterpreting an affix look-alike such as er as an actual affix, so the verb burgle was formed under the mistaken assumption that burglar was burgle + er. The grammars of sign languages also include a morphological component consisting of a root, derivational and inflectional sign morphemes, and the rules for their combination. Morphological analysis is the process of identifying form-meaning units in a language, taking into account small differences in pronunciation, so that in- and im- are seen to be the “same” prefix in English. References for Further Reading Anderson, S. R. 1992. A-morphous morphology. Cambridge, UK: Cambridge University Press. Aronoff, M. 1976. Word formation in generative grammar. Cambridge, MA: MIT Press. Bauer, L. 2003. Introducing linguistic morphology, 2nd edn. Washington, DC: Georgetown University Press. Jensen, J. T. 1990. Morphology: Word structure in generative grammar. Amsterdam/ Philadelphia: John Benjamins Publishing. Katamba, F. 1993. Morphology. New York: Bedford/St. Martins. Matthews, P. H. 1991. Morphology: An introduction to the theory of word structure, 2nd edn. Cambridge, UK: Cambridge University Press. Stockwell, R., and D. Minkova. 2001. English words: History and structure. New York: Cambridge University Press. Winchester, S. 2003. The meaning of everything (The story of the Oxford English dictionary). Oxford, UK: Oxford University Press. ______. 1999. The professor and the madman. New York: HarperCollins. Exercises 1. Here is how to estimate the number of words in your mental lexicon. Consult any standard dictionary. a. Count the number of entries on a typical page. They are usually boldfaced. b. Multiply the number of words per page by the number of pages in the dictionary. c. Pick four pages in the dictionary at random, say, pages 50, 75, 125, and 303. Count the number of words on these pages. d. How many of these words do you know? e. What percentage of the words on the four pages do you know? f. 
Multiply the words in the dictionary by the percentage you arrived at in (e). You know approximately that many English words. 2. Divide the following words by placing a + between their morphemes. (Some of the words may be monomorphemic and therefore indivisible.) Exercises Example: replaces = re + place + s a. retroactive b. befriended c. televise d. margin e. endearment f. psychology g. unpalatable h. holiday i. grandmother j. morphemic k. mistreatment l. deactivation m. saltpeter n. airsickness 3. Match each expression under A with the one statement under B that characterizes it. A a. b. c. d. e. B noisy crow scarecrow the crow crowlike crows (1) (2) (3) (4) (5) (6) compound noun root morpheme plus derivational prefix phrase consisting of adjective plus noun root morpheme plus inflectional affix root morpheme plus derivational suffix grammatical morpheme followed by lexical morpheme 4. Write the one proper description from the list under B for the italicized part of each word in A. A a. b. c. d. e. B terrorized uncivilized terrorize lukewarm impossible (1) (2) (3) (4) (5) (6) (7) (8) free root bound root inflectional suffix derivational suffix inflectional prefix derivational prefix inflectional infix derivational infix 5. A. Consider the following nouns in Zulu and proceed to look for the recurring forms. umfazi umfani umzali umfundisi umbazi “married woman” “boy” “parent” “teacher” “carver” abafazi abafani abazali abafundisi ababazi “married women” “boys” “parents” “teachers” “carvers” 69 70 CHAPTER 1 Morphology: The Words of Language umlimi “farmer” abalimi “farmers” umdlali “player” abadlali “players” umfundi “reader” abafundi “readers” a. What is the morpheme meaning “singular” in Zulu? b. What is the morpheme meaning “plural” in Zulu? c. List the Zulu stems to which the singular and plural morphemes are attached, and give their meanings. B. The following Zulu verbs are derived from noun stems by adding a verbal suffix. fundisa lima “to teach” “to cultivate” funda baza “to read” “to carve” d. Compare these words to the words in section A that are related in meaning, for example, umfundisi “teacher,” abafundisi “teachers,” fundisa “to teach.” What is the derivational suffix that specifies the category verb? e. What is the nominal suffix (i.e., the suffix that forms nouns)? f. State the morphological noun formation rule in Zulu. g. What is the stem morpheme meaning “read”? h. What is the stem morpheme meaning “carve”? 6. Sweden has given the world the rock group ABBA, the automobile Volvo, and the great film director Ingmar Bergman. The Swedish language offers us a noun morphology that you can analyze with the knowledge gained reading this chapter. Consider these Swedish noun forms: en lampa en stol en tidning lampor stolar tidningar lampan stolen tidningaren lamporna stolarna tidningarna “a lamp” “a chair” “a newspaper” “lamps” “chairs” “newspapers” “the lamp” “the chair” “the newspaper” “the lamps” “the chairs” “the newspapers” en bil en soffa en katt bilar soffor kattar bilen soffan katten bilarna sofforna kattarna “a car” “a sofa” “a cat” “cars” “sofas” “cats” “the car” “the sofa” “the cat” “the cars” “the sofas” “the cats” a. What is the Swedish word for the indefinite article a (or an)? b. What are the two forms of the plural morpheme in these data? How can you tell which plural form applies? c. What are the two forms of the morpheme that make a singular word definite, that is, correspond to the English article the? How can you tell which form applies? d. 
What is the morpheme that makes a plural word definite? e. In what order do the various suffixes occur when there is more than one? Exercises f. If en flicka is “a girl,” what are the forms for “girls,” “the girl,” and “the girls”? g. If bussarna is “the buses,” what are the forms for “buses” and “the bus”? 7. Here are some nouns from the Philippine language Cebuano. sibwano ilokano tagalog inglis bisaja “a Cebuano” “an Ilocano” “a Tagalog person” “an Englishman” “a Visayan” binisaja ininglis tinagalog inilokano sinibwano “the Visayan language” “the English language” “the Tagalog language” “the Ilocano language” “the Cebuano language” a. What is the exact rule for deriving language names from ethnic group names? b. What type of affixation is represented here? c. If suwid meant “a Swede” and italo meant “an Italian,” what would be the words for the Swedish language and the Italian language? d. If finuranso meant “the French language” and inunagari meant “the Hungarian language,” what would be the words for a Frenchman and a Hungarian? 8. The following infinitive and past participle verb forms are found in Dutch. Root Infinitive Past Participle wandel duw stofzuig wandelen duwen stofzuigen gewandeld geduwd gestofzuigd “walk” “push” “vacuum-clean” With reference to the morphological processes of prefixing, suffixing, infixing, and circumfixing discussed in this chapter and the specific morphemes involved: a. State the morphological rule for forming an infinitive in Dutch. b. State the morphological rule for forming the Dutch past participle form. 9. Below are some sentences in Swahili: mtoto mtoto mtoto watoto watoto watoto mtu mtu mtu watu watu watu kisu amefika anafika atafika wamefika wanafika watafika amelala analala atalala wamelala wanalala watalala kimeanguka “The child has arrived.” “The child is arriving.” “The child will arrive.” “The children have arrived.” “The children are arriving.” “The children will arrive.” “The person has slept.” “The person is sleeping.” “The person will sleep.” “The persons have slept.” “The persons are sleeping.” “The persons will sleep.” “The knife has fallen.” 71 72 CHAPTER 1 Morphology: The Words of Language kisu kisu visu visu visu kikapu kikapu kikapu vikapu vikapu vikapu kinaanguka kitaanguka vimeanguka vinaanguka vitaanguka kimeanguka kinaanguka kitaanguka vimeanguka vinaanguka vitaanguka “The knife is falling.” “The knife will fall.” “The knives have fallen.” “The knives are falling.” “The knives will fall.” “The basket has fallen.” “The basket is falling.” “The basket will fall.” “The baskets have fallen.” “The baskets are falling.” “The baskets will fall.” One of the characteristic features of Swahili (and Bantu languages in general) is the existence of noun classes. Specific singular and plural prefixes occur with the nouns in each class. These prefixes are also used for purposes of agreement between the subject noun and the verb. In the sentences given, two of these classes are included (there are many more in the language). a. Identify all the morphemes you can detect, and give their meanings. Example: -toto “child” m- noun prefix attached to singular nouns of Class I a- prefix attached to verbs when the subject is a singular noun of Class I Be sure to look for the other noun and verb markers, including tense markers. b. How is the verb constructed? That is, what kinds of morphemes are strung together and in what order? c. How would you say in Swahili: (1) “The child is falling.” (2) “The baskets have arrived.” (3) “The person will fall.” 10. 
We mentioned the morphological process of reduplication—the formation of new words through the repetition of part or all of a word—which occurs in many languages. The following examples from Samoan illustrate this kind of morphological rule. manao matua malosi punou atamaki savali laga “he wishes” “he is old” “he is strong” “he bends” “he is wise” “he travels” “he weaves” a. What is the Samoan for: (1) “they weave” (2) “they travel” (3) “he sings” mananao matutua malolosi punonou atamamaki pepese “they wish” “they are old” “they are strong” “they bend” “they are wise” “they sing” Exercises b. Formulate a general statement (a morphological rule) that states how to form the plural verb form from the singular verb form. 11. Following are listed some words followed by incorrect (humorous?) definitions: Word Definition stalemate effusive tenet dermatology ingenious finesse amphibious deceptionist mathemagician sexcedrin “husband or wife no longer interested” “able to be merged” “a group of ten singers” “a study of derms” “not very smart” “a female fish” “able to lie on both sea and land” “secretary who covers up for his boss” “Bernie Madoff’s accountant” “medicine for mate who says, ‘sorry, I have a headache.’” “hormonal supplement administered as pasta” “medicine to make you look beautiful” “say goodbye to those allergies” “singing in the shower” “dog that guards the cantaloupe patch” testostoroni aesthetominophen histalavista aquapella melancholy Give some possible reasons for the source of these silly “definitions.” Illustrate your answers by reference to other words or morphemes. For example, stalemate comes from stale meaning “having lost freshness” and mate meaning “marriage partner.” When mates appear to have lost their freshness, they are no longer as desirable as they once were. 12. a. Draw tree diagrams for the following words: construal, disappearances, irreplaceability, misconceive, indecipherable, redarken. b. Draw two tree diagrams for undarkenable to reveal its two meanings: “able to be less dark” and “unable to be made dark.” 13. There are many asymmetries in English in which a root morpheme combined with a prefix constitutes a word, but without the prefix is a nonword. A number of these are given in this chapter. a. Following is a list of such nonword roots. Add a prefix to each root to form an existing English word. Words Nonwords ___________ ___________ ___________ ___________ ___________ ___________ *descript *cognito *beknownst *peccable *promptu *plussed 73 74 CHAPTER 1 Morphology: The Words of Language Words Nonwords ___________ ___________ *domitable *nomer b. There are many more such multimorphemic words for which the root morphemes do not constitute words by themselves. Can you list five more? 14. We have seen that the meaning of compounds is often not revealed by the meaning of their composite words. Crossword puzzles and riddles often make use of this by providing the meaning of two parts of a compound and asking for the resulting word. For example, infielder = diminutive/cease. Read this as asking for a word that means “infielder” by combining a word that means “diminutive” with a word which means “cease.” The answer is shortstop. See if you can figure out the following: a. sci-fi TV series = headliner/journey b. campaign = farm building/tempest c. at-home wear = tub of water/court attire d. kind of pen = formal dance/sharp end e. conservative = correct/part of an airplane 15. 
Consider the following dialogue between parent and schoolchild: parent: When will you be done with your eight-page book report, dear? child: I haven’t started it yet. parent: But it’s due tomorrow, you should have begun weeks ago. Why do you always wait until the last minute? child: I have more confidence in myself than you do. parent: Say what? child: I mean, how long could it possibly take to read an eight-page book? The humor is based on the ambiguity of the compound eight-page book report. Draw two trees similar to those in the text for top hat rack to reveal the ambiguity. 16. One of the characteristics of Italian is that articles and adjectives have inflectional endings that mark agreement in gender (and number) with the noun they modify. Based on this information, answer the questions that follow the list of Italian phrases. un uomo un uomo robusto un uomo robustissimo una donna robusta un vino rosso una faccia un vento secco “a man” “a robust man” “a very robust man” “a robust woman” “a red wine” “a face” “a dry wind” a. What is the root morpheme meaning “robust”? b. What is the morpheme meaning “very”? Exercises c. What is the Italian for: (1) “a robust wine” (2) “a very red face” (3) “a very dry wine” 17. Following is a list of words from Turkish. In Turkish, articles and morphemes indicating location are affixed to the noun. deniz denize denizin eve “an ocean” “to an ocean” “of an ocean” “to a house” evden evimden denizimde elde “from a house” “from my house” “in my ocean” “in a hand” a. What is the Turkish morpheme meaning “to”? b. What kind of affixes in Turkish corresponds to English prepositions (e.g., prefixes, suffixes, infixes, free morphemes)? c. What would the Turkish word for “from an ocean” be? d. How many morphemes are there in the Turkish word denizimde? 18. The following are some verb forms in Chickasaw, a member of the Muskogean family of languages spoken in south-central Oklahoma.6 Chickasaw is an endangered language. Currently, there are only about 100 speakers of Chickasaw, most of whom are over 70 years old. sachaaha chaaha chichaaha hoochaaha satikahbi chitikahbitok chichchokwa hopobatok hoohopobatok sahopoba “I am tall” “he/she is tall” “you are tall” “they are tall” “I am tired” “you were tired” “you are cold” “he was hungry” “they were hungry” “I am hungry” a. What is the root morpheme for the following verbs? (1) “to be tall” (2) “to be hungry” b. What is the morpheme meaning: (1) past tense (2) “I” (3) “you” (4) “he/she” c. If the Chickasaw root for “to be old” is sipokni, how would you say: (1) “You are old” (2) “He was old” (3) “They are old” 6 The Chickasaw examples are provided by Pamela Munro. 75 76 CHAPTER 1 Morphology: The Words of Language 19. The language Little-End Egglish, whose source is revealed in exercise 14, chapter 10, exhibits the following data: a. b. c. d. e. kul vet rok ver gup i. ii. iii. iv. v. “omelet” “yolk (of egg)” “egg” “egg shell” “soufflé” zkulego zvetego zrokego zverego zgupego “my omelet” “my yolk” “my egg” “my egg shell” “my soufflé” zkulivo zvetivo zrokivo zverivo zgupivo “your omelet” “your yolk” “your egg” “your egg shell” “your soufflé” Isolate the morphemes that indicate possession, first person singular, and second person (we don’t know whether singular, plural, or both). Indicate whether the affixes are prefixes or suffixes. Given that vel means egg white, how would a Little-End Egglisher say “my egg white”? Given that zpeivo means “your hard-boiled egg,” what is the word meaning “hard-boiled egg”? 
If you knew that zvetgogo meant “our egg yolk,” what would be likely to be the morpheme meaning “our”? If you knew that borokego meant “for my egg,” what would be likely to be the morpheme bearing the benefactive meaning “for”? 20. Research project: Consider what are called “interfixes” such as -o- in English jack-o-lantern. They are said to be meaningless morphemes attached to two morphemes at once. What can you learn about that notion? Where do you think the -o- comes from? Are there languages other than English that have interfixes? 2 Syntax: The Sentence Patterns of Language To grammar even kings bow. J. B. MOLIÈRE, Les Femmes Savantes, II, 1672 It is an astonishing fact that any speaker of any human language can produce and understand an infinite number of sentences. We can show this quite easily through examples such as the following: The kindhearted boy had many girlfriends. The kindhearted, intelligent boy had many girlfriends. The kindhearted, intelligent, handsome boy had many girlfriends. . . . John found a book in the library. John found a book in the library in the stacks. John found a book in the library in the stacks on the fourth floor. . . . The cat chased the mouse. The cat chased the mouse that ate the cheese. The cat chased the mouse that ate the cheese that came from the cow. The cat chased the mouse that ate the cheese that came from the cow that grazed in the field. 77 78 CHAPTER 2 Syntax: The Sentence Patterns of Language In each case the speaker could continue creating sentences by adding another adjective, prepositional phrase, or relative clause. In principle, this could go on forever. All languages have mechanisms of this sort that make the number of sentences limitless. Given this fact, the sentences of a language cannot be stored in a dictionary format in our heads. Rather, sentences are composed of discrete units that are combined by rules. This system of rules explains how speakers can store infinite knowledge in a finite space—our brains. The part of grammar that represents a speaker’s knowledge of sentences and their structures is called syntax. The aim of this chapter is to show you what syntactic structures look like and to familiarize you with some of the rules that determine them. Most of the examples will be from the syntax of English, but the principles that account for syntactic structures are universal. What the Syntax Rules Do “Then you should say what you mean,” the March Hare went on. “I do,” Alice hastily replied, “at least—I mean what I say—that’s the same thing, you know.” “Not the same thing a bit!” said the Hatter. “You might just as well say that ‘I see what I eat’ is the same thing as ‘I eat what I see’!” “You might just as well say,” added the March Hare, “that ‘I like what I get’ is the same thing as ‘I get what I like’!” “You might just as well say,” added the Dormouse . . . “that ‘I breathe when I sleep’ is the same thing as ‘I sleep when I breathe’!” “It is the same thing with you,” said the Hatter. LEWIS CARROLL, Alice’s Adventures in Wonderland, 1865 The rules of syntax combine words into phrases and phrases into sentences. Among other things, the rules specify the correct word order for a language. For example, English is a Subject–Verb–Object (SVO) language. The English sentence in (1) is grammatical because the words occur in the right order; the sentence in (2) is ungrammatical because the word order is incorrect for English. 
(Recall that the asterisk or star preceding a sentence is the linguistic convention for indicating that the sentence is ungrammatical or ill-formed according to the rules of the grammar.) 1. 2. The President nominated a new Supreme Court justice. *President the new Supreme justice Court a nominated. A second important role of the syntax is to describe the relationship between the meaning of a particular group of words and the arrangement of those words. For example, Alice’s companions show us that the word order of a sentence contributes crucially to its meaning. The sentences in (3) and (4) contain the same words, but the meanings are quite different, as the Mad Hatter points out. 3. 4. I mean what I say. I say what I mean. What the Syntax Rules Do The rules of the syntax also specify the grammatical relations of a sentence, such as subject and direct object. In other words, they provide the information about who is doing what to whom. This information is crucial to understanding the meaning of a sentence. For example, the grammatical relations in (5) and (6) are reversed, so the otherwise identical sentences have very different meanings. 5. 6. Your dog chased my cat. My cat chased your dog. Syntactic rules also specify other constraints that sentences must adhere to. Consider, for example, the sentences in (7). As an exercise you can first read through them and place a star before those sentences that you consider to be ungrammatical. 7. (a) The boy found. (b) The boy found quickly. (c) The boy found in the house. (d) The boy found the ball. We predict that you will find the sentence in (7d) grammatical and the ones in (7a–c) ungrammatical. This is because the syntax rules specify that a verb like found must be followed by something, and that something cannot be an expression like quickly or in the house but must be like the ball. Similarly, we expect you will find the sentence in (8b) grammatical while the sentence in (8a) is not. 8. (a) Disa slept the baby. (b) Disa slept soundly. The verb sleep patterns differently than find in that it may be followed solely by a word like soundly but not by other kinds of phrases such as the baby. We also predict that you’ll find that the sentences in (9a, d, e, f) are grammatical and that (9b, c) are not. The examples in (9) show that specific verbs, such as believe, try, and want, behave differently with respect to the patterns of words that may follow them. 9. (a) Zack believes Robert to be a gentleman. (b) Zack believes to be a gentleman. (c) Zack tries Robert to be a gentleman. (d) Zack tries to be a gentleman. (e) Zack wants to be a gentleman. (f) Zack wants Robert to be a gentleman. The fact that all native speakers have the same judgments about the sentences in (7) to (9) tells us that grammatical judgments are neither idiosyncratic nor capricious, but are determined by rules that are shared by all speakers of a language. 79 80 CHAPTER 2 Syntax: The Sentence Patterns of Language In (10) we see that the phrase ran up the hill behaves differently from the phrase ran up the bill, even though the two phrases are superficially quite similar. For the expression ran up the hill, the rules of the syntax allow the word orders in (10a) and (10c), but not (10b). In ran up the bill, in contrast, the rules allow the order in (10d) and (10e), but not (10f). 10. (a) Jack and Jill ran up the hill. (b) Jack and Jill ran the hill up. (c) Up the hill ran Jack and Jill. (d) Jack and Jill ran up the bill. (e) Jack and Jill ran the bill up. 
(f) Up the bill ran Jack and Jill.

The pattern shown in (10) illustrates that sentences are not simply strings of words with no further organization. If they were, there would be no reason to expect ran up the hill to pattern differently from ran up the bill. These phrases act differently because they have different syntactic structures associated with them. In ran up the hill, the words up the hill form a unit, as follows:

He ran [up the hill]

The whole unit can be moved to the beginning of the sentence, as in (10c), but we cannot rearrange its subparts, as shown in (10b). On the other hand, in ran up the bill, the words up the bill do not form a natural unit, so they cannot be moved, and (10f) is ungrammatical.

Our syntactic knowledge crucially includes rules that tell us how words form groups in a sentence, or how they are hierarchically arranged with respect to one another. Consider the following sentence:

The captain ordered all old men and women off the sinking ship.

This phrase "old men and women" is ambiguous, referring either to old men and to women of any age or to old men and old women. The ambiguity arises because the words old men and women can be grouped in two ways. If the words are grouped as follows, old modifies only men and so the women can be any age.

[old men] and [women]

When we group them like this, the adjective old modifies both men and women.

[old [men and women]]

The rules of syntax allow both of these groupings, which is why the expression is ambiguous. Hierarchical diagrams of the two groupings, [[old men] and [women]] and [old [men and women]], illustrate the same point. In the first structure old and men are under the same node and hence old modifies men. In the second structure old shares a node with the entire conjunction men and women, and so modifies both. This is similar to what we find in morphology for ambiguous words such as unlockable, which have two structures, corresponding to two meanings, as discussed in chapter 1.

Many sentences exhibit such ambiguities, often leading to humorous results. Consider the following two sentences, which appeared in classified ads:

For sale: an antique desk suitable for lady with thick legs and large drawers.

We will oil your sewing machine and adjust tension in your home for $10.00.

In the first ad, the humorous reading comes from the grouping [a desk] [for lady with thick legs and large drawers] as opposed to the intended [a desk for lady] [with thick legs and large drawers], where the legs and drawers belong to the desk. The second case is similar. Because these ambiguities are a result of different structures, they are instances of structural ambiguity. Contrast these sentences with:

This will make you smart.

The two interpretations of this sentence are due to the two meanings of smart—"clever" or "burning sensation." Such lexical or word-meaning ambiguities, as opposed to structural ambiguities, will be discussed in chapter 3. Often a combination of differing structure and double word-meaning creates ambiguity (and humor) as in the cartoon:

[Rhymes With Orange cartoon, © Hilary B. Price, King Features Syndicate: "Waitress's nose ring."]

Syntactic rules reveal the grammatical relations among the words of a sentence as well as their order and hierarchical organization. They also explain how the grouping of words relates to its meaning, such as when a sentence or phrase is ambiguous.
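The role that grouping plays in these structural ambiguities can be made concrete with a small sketch. The nested-tuple encoding and the helper functions below are hypothetical illustrations (the chapter does not use any such notation); they simply let us read off which nouns fall inside the adjective's grouping in each parse of old men and women.

# A minimal sketch (hypothetical encoding): the two groupings of "old men and women"
# as nested structures, plus a traversal that lists the nouns the adjective covers.

# ("MOD", adjective, phrase) = the adjective modifies the phrase
# ("AND", left, right)       = coordination of two phrases
parse1 = ("AND", ("MOD", "old", "men"), "women")    # [[old men] and [women]]
parse2 = ("MOD", "old", ("AND", "men", "women"))    # [old [men and women]]

def nouns(phrase):
    """All nouns contained in a phrase."""
    if isinstance(phrase, str):
        return [phrase]
    if phrase[0] == "MOD":
        return nouns(phrase[2])
    return nouns(phrase[1]) + nouns(phrase[2])      # the AND case

def modified_by_adjective(phrase):
    """Nouns that sit under a MOD node, i.e., inside the adjective's scope."""
    if isinstance(phrase, str):
        return []
    if phrase[0] == "MOD":
        return nouns(phrase[2])
    return modified_by_adjective(phrase[1]) + modified_by_adjective(phrase[2])

print(modified_by_adjective(parse1))   # ['men']           -> the women can be any age
print(modified_by_adjective(parse2))   # ['men', 'women']  -> both are old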
In addition, the rules of the syntax permit speakers to produce and understand a limitless number of sentences never produced or heard before—the creative aspect of linguistic knowledge. A major goal of linguistics is to show clearly and explicitly how syntactic rules account for this knowledge. A theory of grammar must provide a complete characterization of what speakers implicitly know about their language. What Grammaticality Is Not Based On Colorless green ideas sleep furiously. This is a very interesting sentence, because it shows that syntax can be separated from semantics—that form can be separated from meaning. The sentence doesn’t seem to mean anything coherent, but it sounds like an English sentence. HOWARD LASNIK, The Human Language: Part One, 1995 Importantly, a person’s ability to make grammaticality judgments does not depend on having heard the sentence before. You may never have heard or read the sentence Enormous crickets in pink socks danced at the prom. but your syntactic knowledge tells you that it is grammatical. As we showed at the beginning of this chapter, people are able to understand, produce, and make judgments about an infinite range of sentences, most of which they have never heard before. This ability illustrates that our knowledge of language is creative—not creative in the sense that we are all poets, which we are not, but creative in that none of us is limited to a fixed repertoire of expressions. Rather, we can exploit the resources of our language and grammar to produce and understand a limitless number of sentences embodying a limitless range of ideas and emotions. We showed that the structure of a sentence contributes to its meaning. However, grammaticality and meaningfulness are not the same thing, as shown by the following sentences: Colorless green ideas sleep furiously. A verb crumpled the milk. Although these sentences do not make much sense, they are syntactically well formed. They sound funny, but their funniness is different from what we find in the following strings of words: *Furiously sleep ideas green colorless. *Milk the crumpled verb a. There are also sentences that we understand even though they are not well formed according to the rules of the syntax. For example, most English speakers could interpret *The boy quickly in the house the ball found. although they know that the word order is incorrect. Similarly, we could probably assign a meaning to sentence (8a) (Disa slept the baby) in the previous sec- Sentence Structure tion. If asked to fix it up, we would probably come up with something like “Disa put the baby to sleep,” but we also know that as it stands, (8a) is not a possible sentence of English. To be a sentence, words must conform to specific patterns determined by the syntactic rules of the language. Some sentences are grammatical even though they are difficult to interpret because they include nonsense words, that is, words with no agreed-on meaning. This is illustrated by the following lines from the poem “Jabberwocky” by Lewis Carroll: ’Twas brillig, and the slithy toves Did gyre and gimble in the wabe These lines are grammatical in the linguistic sense that they obey the word order and other constraints of English. Such nonsense poetry is amusing precisely because the sentences comply with syntactic rules and sound like good English. Ungrammatical strings of nonsense words are not entertaining: *Toves slithy the and brillig ’twas wabe the in gimble and gyre did Grammaticality also does not depend on the truth of sentences. 
If it did, lying would be impossible. Nor does it depend on whether real objects are being discussed or whether something is possible in the real world. Untrue sentences can be grammatical, sentences discussing unicorns can be grammatical, and sentences referring to pregnant fathers can be grammatical. The syntactic rules that permit us to produce, understand, and make grammaticality judgments are unconscious rules. The grammar is a mental grammar, different from the prescriptive grammar rules that we are taught in school. We develop the mental rules of grammar long before we attend school, as we shall see in chapter 7. Sentence Structure I really do not know that anything has ever been more exciting than diagramming sentences. GERTRUDE STEIN, “Poetry and Grammar,” 1935 Suppose we wanted to write a template that described the structure of an English sentence, and more specifically, a template that gave the correct word order for English. We might come up with something like the following: Det—N—V—Det—N This template says that a determiner (an article) is followed by a noun, which is followed by a verb, and so on. It would describe English sentences such as the following: The child found a puppy. The professor wrote a book. That runner won the race. 83 84 CHAPTER 2 Syntax: The Sentence Patterns of Language The implication of such a template would be that sentences are strings of words belonging to particular grammatical categories (“parts of speech”) with no internal organization. We know, however, that such “flat” structures are incorrect. As noted earlier, sentences have a hierarchical organization; that is, the words are grouped into natural units. The words in the sentence The child found a puppy. may be grouped into [the child] and [found a puppy], corresponding to the subject and predicate of the sentence. A further division gives [the child] and then [[found] [a puppy]], and finally the individual words: [[the] [child]] [[found] [[a] [puppy]]]. It’s sometimes easier to see the parts and subparts of the sentence in a tree diagram: root the child found a puppy The “tree” is upside down with its “root” encompassing the entire sentence, “The child found a puppy,” and its “leaves” being the individual words, the, child, found, a, puppy. The tree conveys the same information as the nested square brackets. The hierarchical organization of the tree reflects the groupings and subgroupings of the words of the sentence. The tree diagram shows, among other things, that the phrase found a puppy divides naturally into two branches, one for the verb found and the other for the direct object a puppy. A different division, say, found a and puppy, is unnatural. Constituents and Constituency Tests Parts is parts. WENDY’S COMMERCIAL, 2006 The natural groupings or parts of a sentence are called constituents. Various linguistic tests reveal the constituents of a sentence. The first test is the “stand alone” test. If a group of words can stand alone, they form a constituent. For example, the set of words that can be used to answer a question is a constituent. So in answer to the question “What did you find?” a speaker might answer a puppy, but not found a. A puppy can stand alone while found a cannot. The second test is “replacement by a pronoun.” Pronouns can substitute for natural groups. 
In answer to the question “Where did you find a puppy?” a speaker can say, “I found him in the park.” Words such as do can also take the place of the entire predicate found a puppy, as in “John found a puppy and Bill Sentence Structure did too.” If a group of words can be replaced by a pronoun or a word like do, it forms a constituent. A third test of constituency is the “move as a unit” test. If a group of words can be moved, they form a constituent. For example, if we compare the following sentences to the sentence “The child found a puppy,” we see that certain elements have moved: It was a puppy that the child found. A puppy was found by the child. In the first example, the constituent a puppy has moved from its position following found; in the second example, the positions of a puppy and the child have been changed. In all such rearrangements the constituents a puppy and the child remain intact. Found a does not remain intact, because it is not a constituent. In the sentence “The child found a puppy,” the natural groupings or constituents are the subject the child, the predicate found a puppy, and the direct object a puppy. Some sentences have a prepositional phrase in the predicate. Consider The puppy played in the garden. We can use our tests to show that in the garden is also a constituent, as follows: Where did the puppy play? In the garden (stand alone) The puppy played there. (replacement by a pronoun-like word) In the garden is where the puppy played. (move as a unit) It was in the garden that the puppy played. As before, our knowledge of the constituent structure of a sentence may be graphically represented by a tree diagram. The tree diagram for the sentence “The puppy played in the garden” is as follows: the puppy played in the garden In addition to the syntactic tests just described, experimental evidence has shown that speakers do not represent sentences as strings of words but rather in terms of constituents. In these experiments, subjects listen to sentences that have clicking noises inserted into them at random points. In some cases the click occurs at a constituent boundary, and in other sentences the click is inserted in the middle of a constituent. The subjects are then asked to report where the click occurred. There were two important results: (1) Subjects noticed the click 85 86 CHAPTER 2 Syntax: The Sentence Patterns of Language and recalled its location best when it occurred at a major constituent boundary (e.g., between the subject and predicate); and (2) clicks that occurred inside the constituent were reported to have occurred between constituents. In other words, subjects displaced the clicks and put them at constituent boundaries. These results show that speakers perceive sentences in chunks corresponding to grammatical constituents. Every sentence in a language is associated with one or more constituent structures. If a sentence has more than one constituent structure, it is ambiguous, and each tree will correspond to one of the possible meanings. For example, the sentence “I bought an antique desk suitable for a lady with thick legs and large drawers” has two phrase structure trees associated with it. In one structure the phrase [a lady with thick legs and large drawers] forms a constituent. 
For example, it could stand alone in answer to the question “Who did you buy an antique desk for?” In its second meaning, the phrase with thick legs and large drawers modifies the phrase a desk for a lady, and thus the structure is [[a desk for a lady][with thick legs and large drawers]]. Syntactic Categories . ScienceCartoonsPlus.com. Each grouping in the tree diagrams of “The child found a puppy” is a member of a large family of similar expressions. For example, the child belongs to a Sentence Structure family that includes the police officer, your neighbor, this yellow cat, he, John, and countless others. We can substitute any member of this family for the child without affecting the grammaticality of the sentence, although the meaning of course would change. A police officer found a puppy. Your neighbor found a puppy. This yellow cat found a puppy. A family of expressions that can substitute for one another without loss of grammaticality is called a syntactic category. The child, a police officer, John, and so on belong to the syntactic category noun phrase (NP), one of several syntactic categories in English and every other language in the world. NPs may function as the subject or as an object in a sentence. NPs often contain a determiner (like a or the) and a noun, but they may also consist of a proper name, a pronoun, a noun without a determiner, or even a clause or a sentence. Even though a proper noun like John and pronouns such as he and him are single words, they are technically NPs, because they pattern like NPs in being able to fill a subject or object or other NP slots. John found the puppy. He found the puppy. Boys love puppies. The puppy loved him. The puppy loved John. NPs can be more complex as illustrated by the sentence: The girl that Professor Snape loved married the man of her dreams. The NP subject of this sentence is the girl that Professor Snape loved, and the NP object is the man of her dreams. Syntactic categories are part of a speaker’s knowledge of syntax. That is, speakers of English know that only items (a), (b), (e), (f), and (g) in the following list are NPs even if they have never heard the term noun phrase before. 1. (a) a bird (b) the red banjo (c) have a nice day (d) with a balloon (e) the woman who was laughing (f) it (g) John (h) went You can test this claim by inserting each expression into three contexts: Who found _________, _________ was seen by everyone, and What/who I heard was _________. For example, *Who found with a balloon is ungrammatical, as is *Have a nice day was seen by everyone, as opposed to Who found it? or John was seen by everyone. Only NPs fit into these contexts because only NPs can function as subjects and objects. 87 88 CHAPTER 2 Syntax: The Sentence Patterns of Language There are other syntactic categories. The expression found a puppy is a verb phrase (VP). A verb phrase always contains a verb (V), and it may contain other categories, such as a noun phrase or prepositional phrase (PP), which is a preposition followed by an NP, such as in the park, on the roof, with a balloon. In (2) the VPs are those phrases that can complete the sentence “The child __________ .” 2. (a) saw a clown (b) a bird (c) slept (d) smart (e) ate the cake (f) found the cake in the cupboard (g) realized that the earth was round Inserting (a), (c), (e), (f), and (g) will produce grammatical sentences, whereas the insertion of (b) or (d) would result in an ungrammatical sentence. Thus, (a), (c), (e), (f), and (g) are verb phrases. 
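The NP substitution test just described lends itself to a tiny elicitation script: plug each candidate expression into the three frames and hand the resulting sentences to a native speaker for judgment. The frames and candidates are taken from the text; the script itself is a hypothetical illustration and makes no grammaticality judgments of its own.

# A small sketch (hypothetical): generate the NP-test sentences from the text so
# that a native speaker can judge them. The script produces the strings only;
# the grammaticality judgments come from speakers, not from the program.

frames = [
    "Who found {}?",
    "{} was seen by everyone.",
    "What/who I heard was {}.",
]

candidates = [
    "a bird", "the red banjo", "have a nice day", "with a balloon",
    "the woman who was laughing", "it", "John", "went",
]

for candidate in candidates:
    for frame in frames:
        print(frame.format(candidate))
    print()  # blank line between candidates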
Lexical and Functional Categories

There are ten parts of speech, and they are all troublesome.
MARK TWAIN, "The Awful German Language," in A Tramp Abroad, 1880

Syntactic categories include both phrasal categories such as NP, VP, AdjP (adjective phrase), PP (prepositional phrase), and AdvP (adverbial phrase), as well as lexical categories such as noun (N), verb (V), preposition (P), adjective (Adj), and adverb (Adv). Each lexical category has a corresponding phrasal category. Following is a list of lexical categories with some examples of each type:

Noun (N): puppy, boy, soup, happiness, fork, kiss, pillow, cake, cupboard
Verb (V): find, run, sleep, throw, realize, see, try, want, believe
Preposition (P): up, down, across, into, from, by, with
Adjective (Adj): red, big, candid, hopeless, fair, idiotic, lucky
Adverb (Adv): again, carefully, luckily, never, very, fairly

Many of these categories may already be familiar to you. As mentioned earlier, some of them are traditionally referred to as parts of speech. Other categories may be less familiar, for example, the category determiner (Det), which includes the articles a and the, as well as demonstratives such as this, that, these, and those, and "counting words" such as each and every. Another less familiar category is auxiliary (Aux), which includes the verbs have, had, be, was, and were, and the modals may, might, can, could, must, shall, should, will, and would.

Aux and Det are functional categories, so called because their members have a grammatical function rather than a descriptive meaning. For example, determiners specify whether a noun is indefinite or definite (a boy versus the boy), or the proximity of the person or object to the context (this boy versus that boy). Auxiliaries provide the verb with a time frame, whether ongoing (John is dancing), completed in the past (John has danced), or occurring in the future (John will dance). Auxiliaries may also express notions such as possibility (John may dance), necessity (John must dance), ability (John can dance), and so on.

Lexical categories typically have particular kinds of meanings associated with them. For example, verbs usually refer to actions, events, and states (kick, marry, love); adjectives to qualities or properties (lucky, old); common nouns to general entities (dog, elephant, house); and proper nouns to particular individuals (Noam Chomsky) or places (Dodger Stadium) or other things that people give names to, such as commercial products (Coca-Cola, Viagra). But the relationship between grammatical categories and meaning is more complex than these few examples suggest. For example, some nouns refer to events (marriage and destruction) and others to states (happiness, loneliness). We can use abstract nouns such as honor and beauty, rather than adjectives, to refer to properties and qualities. In the sentence "Seeing is believing," seeing and believing are nouns but are not entities. Prepositions are usually used to express relationships between two entities involving a location (e.g., the boy is in the room, the cat is under the bed), but this is not always the case; the prepositions of, by, about, and with are not locational. Because of the difficulties involved in specifying the precise meaning of lexical categories, we do not usually define categories in terms of their meanings, but rather on the basis of their syntactic distribution (where they occur in a sentence) and morphological characteristics.
For example, we define a noun as a word that can occur with a determiner (the boy) and that can take a plural marker (boys), among other properties. All languages have syntactic categories such as N, V, and NP. Speakers know the syntactic categories of their language, even if they do not know the technical terms. Our knowledge of the syntactic classes is revealed when we substitute equivalent phrases, as we just did in examples (1) and (2), and when we use the various syntactic tests that we have discussed.

Phrase Structure Trees and Rules

Who climbs the Grammar-Tree distinctly knows
Where Noun and Verb and Participle grows.
JOHN DRYDEN, "The Sixth Satyr of Juvenal," 1693

Now that you know something about constituent structure and grammatical categories, you are ready to learn how the sentences of a language are constructed. We will begin by building trees for simple sentences and then proceed to more complex structures. The trees that we will build here are more detailed than those we saw in the previous sections, because the branches of the tree will have category labels identifying each constituent. In this section we will also introduce the syntactic rules that generate (a technical term for describe or specify) the different kinds of structures.

The following tree diagram provides labels for each of the constituents of the sentence "The child found a puppy." These labels show that the entire sentence belongs to the syntactic category of S (because the S-node encompasses all the words). It also reveals that the child and a puppy belong to the category NP, that is, they are noun phrases, and that found a puppy belongs to the category VP, or is a verb phrase, consisting of a verb and an NP. It also reveals the syntactic category of each of the words in the sentence. In labeled bracket form, the tree is:

[S [NP [Det The] [N child]] [VP [V found] [NP [Det a] [N puppy]]]]

A tree diagram with syntactic category information is called a phrase structure tree or a constituent structure tree. This tree shows that a sentence is both a linear string of words and a hierarchical structure with phrases nested in phrases. Phrase structure trees (PS trees, for short) are explicit graphic representations of a speaker's knowledge of the structure of the sentences of his language. PS trees represent three aspects of a speaker's syntactic knowledge:

1. The linear order of the words in the sentence
2. The identification of the syntactic categories of words and groups of words
3. The hierarchical structure of the syntactic categories (e.g., an S is composed of an NP followed by a VP, a VP is composed of a V that may be followed by an NP, and so on)

In chapter 1 we discussed the fact that the syntactic category of each word is listed in our mental dictionaries. We now see how this information is used by the syntax of the language. Words appear in trees under labels that correspond to their syntactic category. Nouns are under N, determiners under Det, verbs under V, and so on. The larger syntactic categories, such as VP, consist of all the syntactic categories and words below that point, or node, in the tree. The VP in the PS tree above consists of the syntactic category nodes V and NP and the words found, a, and puppy. Because a puppy can be traced up the tree to the node NP, this constituent is a noun phrase. Because found and a puppy can be traced up to the node VP, this constituent is a verb phrase.
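For readers who find it helpful to see the same information in another notation, the following Python sketch (ours, not the authors') encodes the PS tree for "The child found a puppy" as nested tuples and reads off the linear order of the words and the labeled constituents, two of the kinds of knowledge a PS tree represents.

```python
# A minimal sketch: a phrase structure tree as nested tuples.
# Each node is (category, children...); a leaf is (category, word).
tree = ("S",
        ("NP", ("Det", "the"), ("N", "child")),
        ("VP", ("V", "found"),
               ("NP", ("Det", "a"), ("N", "puppy"))))

def words(node):
    """Linear order: read the words off the leaves, left to right."""
    cat, *rest = node
    if len(rest) == 1 and isinstance(rest[0], str):
        return [rest[0]]                      # a leaf: one word
    return [w for child in rest for w in words(child)]

def constituents(node):
    """Hierarchy: every subtree is a constituent labeled with its category."""
    cat, *rest = node
    if len(rest) == 1 and isinstance(rest[0], str):
        yield cat, rest[0]
    else:
        yield cat, " ".join(words(node))
        for child in rest:
            yield from constituents(child)

print(words(tree))              # ['the', 'child', 'found', 'a', 'puppy']
print(list(constituents(tree))) # ('S', 'the child found a puppy'), ('NP', 'the child'), ...
```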
The PS tree reflects the speaker's intuitions about the natural groupings of words in a sentence. In discussing trees, every higher node is said to dominate all the categories beneath it. S dominates every node. A node is said to immediately dominate the categories one level below it. VP immediately dominates V and NP, the categories of which it is composed. Categories that are immediately dominated by the same node are sisters. V and NP are sisters in the phrase structure tree of "the child found a puppy."

A PS tree is a formal device for representing the speaker's knowledge of the structure of sentences in his language, as revealed by our linguistic intuitions. When we speak, we are not aware that we are producing sentences with such structures, but controlled experiments, such as the click experiments described earlier, show that we use them in speech production and comprehension. We will discuss these experiments further in chapter 8.

The information represented in a PS tree can also be represented by another formal device: phrase structure (PS) rules. PS rules capture the knowledge that speakers have about the possible structures of a language. Just as a speaker cannot have an infinite list of sentences in her head, so she cannot have an infinite set of PS trees in her head. Rather, a speaker's knowledge of the permissible and impermissible structures must exist as a finite set of rules that generate a tree for any sentence in the language. To express the structure given above, we need the following PS rules:

1. S → NP VP
2. NP → Det N
3. VP → V NP

Phrase structure rules specify the well-formed structures of a language precisely and concisely. They express the regularities of the language and make explicit a speaker's knowledge of the order of words and the grouping of words into syntactic categories. For example, in English an NP may contain a determiner followed by a noun. This is represented by rule 2. This rule conveys two facts:

A noun phrase can contain a determiner followed by a noun in that order.
A determiner followed by a noun is a noun phrase.

You can think of PS rules as templates that a tree must match to be grammatical. To the left of the arrow is the dominating category, in this case NP, and the categories that it immediately dominates (the categories that comprise it) appear on the right side, in this case Det and N. The right side of the arrow also shows the linear order of these components. Thus, one subtree for the English NP is [NP Det N].

Rule 1 says that a sentence (S) contains (immediately dominates) an NP and a VP in that order. Rule 3 says that a verb phrase consists of a verb (V) followed by an NP. These rules are general statements and do not refer to any specific VP, V, or NP. The subtrees represented by rules 1 and 3 are [S NP VP] and [VP V NP].

A VP need not contain an NP object, however. It may include a verb alone, as in the following sentences:

The woman laughed.
The man danced.
The horse galloped.

These sentences have the structure [S NP [VP V]]. Thus the grammar must allow a VP that immediately dominates only a V, as specified by rule 4, which is therefore added to the grammar:

4. VP → V

The following sentences contain prepositional phrases following the verb:

The puppy played in the garden.
The boat sailed up the river.
A girl laughed at the monkey.
The sheepdog rolled in the mud.
The PS tree for such sentences is, in labeled bracket form:

[S [NP [Det The] [N puppy]] [VP [V played] [PP [P in] [NP [Det the] [N garden]]]]]

To permit structures of this type, we need two additional PS rules, as in 5 and 6:

5. VP → V PP
6. PP → P NP

Another option open to the VP is to contain or embed a sentence. For example, the sentence "The professor said that the student passed the exam" contains the sentence "the student passed the exam." Preceding the embedded sentence is the word that, which is a complementizer (C). C is a functional category, like Aux and Det. Here is the structure of such sentence types:

[S [NP [Det The] [N professor]] [VP [V said] [CP [C that] [S [NP [Det the] [N student]] [VP [V passed] [NP [Det the] [N exam]]]]]]]

To allow such embedded sentences, we need to add these two new rules to our set of phrase structure rules:

7. VP → V CP
8. CP → C S

CP stands for complementizer phrase. Rule 8 says that CP contains a complementizer such as that followed by the embedded sentence. Other complementizers are if and whether in sentences like

I don't know whether I should talk about this.
The teacher asked if the students understood the syntax lesson.

that have structures similar to the one above. Here are the PS rules we have discussed so far. A few other rules will be considered later.

1. S → NP VP
2. NP → Det N
3. VP → V NP
4. VP → V
5. VP → V PP
6. PP → P NP
7. VP → V CP
8. CP → C S

Some Conventions for Building Phrase Structure Trees

Everyone who is master of the language he speaks . . . may form new . . . phrases, provided they coincide with the genius of the language.
JOHANN DAVID MICHAELIS, Dissertation, 1769

One can use the phrase structure rules as a guide for building trees that follow the structural constraints of the language. In so doing, certain conventions are followed. The S occurs at the top or "root" of the tree (it's upside down). Another convention specifies how the rules are applied: First, find the rule with S on the left side of the arrow, and put the categories on the right side below the S, giving the partial tree [S NP VP].

Continue by matching any syntactic category at the bottom of the partially constructed tree to a category on the left side of a rule, then expand the tree with the categories on the right side. For example, we may expand the tree by applying the NP rule to produce [S [NP Det N] VP].

The categories at the bottom are Det, N, and VP, but only VP occurs to the left of an arrow in the set of rules and so needs to be expanded using one of the VP rules. Any one of the VP rules will work. The order in which the rules appear in the list of rules is irrelevant. (We could have begun by expanding the VP rather than the NP.) Suppose we use rule 5 next. Then the tree has grown to look like this:

[S [NP Det N] [VP V PP]]

Convention dictates that we continue in this way until none of the categories at the bottom of the tree appears on the left side of any rule (i.e., no phrasal categories may remain unexpanded). The PP must expand into a P and an NP (rule 6), and the NP into a Det and an N. We can use a rule as many times as it can apply. In this tree, we used the NP rule twice. After we have applied all the rules that can apply, the tree looks like this:

[S [NP Det N] [VP V [PP P [NP Det N]]]]

By following these conventions, we generate only trees specified by the PS rules, and hence only trees that conform to the syntax of the language.
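The tree-building convention just described ("expand any category at the bottom of the tree that appears on the left side of a rule, until no phrasal category remains unexpanded") is essentially a small algorithm, and it can be sketched in a few lines of Python. The grammar below encodes rules 1 through 8; the expansion procedure and the random choice among competing VP rules are our own illustrative additions, not part of the text.

```python
import random

# Rules 1-8 as rewrite rules: each category maps to its possible right-hand sides.
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"]],
    "VP": [["V", "NP"], ["V"], ["V", "PP"], ["V", "CP"]],
    "PP": [["P", "NP"]],
    "CP": [["C", "S"]],
}

def expand(category):
    """Apply the convention: if the category appears on the left side of a rule,
    replace it by one of its right-hand sides; otherwise leave it as a leaf."""
    if category not in RULES:
        return category                    # Det, N, V, P, C: lexical categories, stop
    rhs = random.choice(RULES[category])   # any applicable rule may be chosen
    return [category] + [expand(c) for c in rhs]

print(expand("S"))
# e.g. ['S', ['NP', 'Det', 'N'], ['VP', 'V', ['PP', 'P', ['NP', 'Det', 'N']]]]
```

Notice that rules 7 and 8 together can reintroduce an S inside a VP, so the procedure can in principle keep growing a tree; the chapter returns to this point shortly under the heading of recursion.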
By implication, any tree not so specified will be ungrammatical, that is, not permitted by the syntax. At any point during the construction of a tree, any rule may be used as long as its left-side category occurs somewhere at the bottom of the tree. By choosing different VP rules, we could specify different structures corresponding to sentences such as: The boys left. (VP → V) The wind blew the kite. (VP → V NP) The senator hopes that the bill passes. (VP → V CP) Because the number of possible sentences in every language is infinite, there are also an infinite number of trees. However, all trees are built out of the finite set of substructures allowed by the grammar of the language, and these substructures are specified by the finite set of phrase structure rules. The Infinity of Language: Recursive Rules So, naturalists observe, a flea Hath smaller fleas that on him prey; And these have smaller still to bite ’em, And so proceed ad infinitum. JONATHAN SWIFT, “On Poetry, a Rhapsody,” 1733 We noted at the beginning of the chapter that the number of sentences in a language is infinite and that languages have various means of creating longer and longer sentences, such as adding an adjective or a prepositional phrase. Even children know how to produce and understand very long sentences and know how to make them even longer, as illustrated by the children’s rhyme about the house that Jack built. This is the farmer sowing the corn, that kept the cock that crowed in the morn, that waked the priest all shaven and shorn, that married the man all tattered and torn, that kissed the maiden all forlorn, that milked the cow with the crumpled horn, that tossed the dog, 95 96 CHAPTER 2 Syntax: The Sentence Patterns of Language that worried the cat, that killed the rat, that ate the malt, that lay in the house that Jack built. The child begins the rhyme with This is the house that Jack built, continues by lengthening it to This is the malt that lay in the house that Jack built, and so on. You can add any of the following to the beginning of the rhyme and still have a grammatical sentence: I think that . . . What is the name of the unicorn that noticed that . . . Ask someone if . . . Do you know whether . . . Once we acknowledge the unboundedness of sentences, we need a formal device to capture that crucial aspect of speakers’ syntactic knowledge. It is no longer possible to specify each legal structure; there are infinitely many. To see how this works, let us first look at the case of multiple prepositional phrases such as [The girl walked [down the street] [over the hill] [through the woods] . . .]. VP substructures currently allow only one PP per sentence (VP → V PP—rule 5). We can rectify this problem by revising rule 5: 5. VP → VP PP Rule 5 is different from the previous rules because it repeats its own category (VP) inside itself. This is an instance of a recursive rule. Recursive rules are of critical importance because they allow the grammar to generate an infinite set of sentences. Reapplying rule 5 shows how the syntax permits structures with multiple PPs, such as in the sentence “The girl walked down the street with a gun toward the bank.” S NP VP 2 Det g the N g girl VP g VP VP 5 3 2 PP 2 PP 2 P g NP P g toward 2 V P NP with Det g g 2 g walked down Det N a g g the street PP 2 N g gun NP 2 Det g the N g bank Sentence Structure In this structure the VP rule 5 has applied three times and so there are three PPs: [down the street] [with a gun] [toward the bank]. 
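The effect of the revised, recursive rule VP → VP PP can also be seen directly in a short sketch. The Python fragment below is an illustration of ours (the PP strings are simply borrowed from the example sentence): each application of the rule wraps the existing VP in a new VP node and adds one more PP, so a single finite rule yields a VP with one, two, three, or any number of prepositional phrases.

```python
# A sketch of the recursive rule VP -> VP PP. Each pass through the loop
# applies the rule once more; n controls how many PPs the VP ends up with.
PPS = ["down the street", "with a gun", "toward the bank", "for no good purpose"]

def vp_with_pps(n):
    vp = ["VP", "walked"]                  # the innermost VP: just the verb
    for i in range(n):
        pp = ["PP", PPS[i % len(PPS)]]
        vp = ["VP", vp, pp]                # VP -> VP PP, applied once more
    return vp

for n in range(3):
    print(n, vp_with_pps(n))
# 0 ['VP', 'walked']
# 1 ['VP', ['VP', 'walked'], ['PP', 'down the street']]
# 2 ['VP', ['VP', ['VP', 'walked'], ['PP', 'down the street']], ['PP', 'with a gun']]
```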
It is easy to see that the rule could have applied four or more times, for example by adding a PP like for no good purpose. NPs can also contain PPs recursively. An example of this is shown by the phrase the man with the telescope in a box. NP NP 2 2 Det g the N g man PP 2 P g with NP NP 2 2 Det N g g the telescope PP 2 P g in NP 2 Det g a N g box To show that speakers permit recursive NP structures of this sort, we need to include the following PS rule, which is like the recursive VP rule 5. 9. NP → NP PP The PS rules define the allowable structures of the language, and in so doing make predictions about structures that we may not have considered when formulating each rule individually. These predictions can be tested, and if they are not validated, the rules must be reformulated because they must generate all and only the allowable structures. For example, rule 7 (VP → V CP) in combination with rules 8 (CP → C S) and 1 (S → NP VP) form a recursive set. (The recursiveness comes from the fact that S and VP occur on both the left and right side of the rules.) Those rules allow S to contain VP, which in turn contains CP, which in turn contains S, which in turn again contains VP, and so on, potentially without end. These rules, formulated for different purposes, correctly predict the limitlessness of language in which sentences are embedded inside larger sentences, such as The children hope that the teacher knows that the principal said that the school closes for the day as illustrated on the following page. 97 98 CHAPTER 2 Syntax: The Sentence Patterns of Language S 3 NP 2 Det g the VP 2 N V g children hope CP m 2 C S g 3 that NP VP 2 Det g the N g teacher V 2 CP knows 1 1 a m 1 2 S C 3 g that NP VP 2 Det N g g the principal V 2 CP said 2 S 1 1 a m 1 C g that 3 NP 2 Det g the N g school VP 2 V PP g 2 closes P NP g 2 for Det N g the g day Sentence Structure Recursive Adjectives and Possessives © The New Yorker Collection 2003 William Haefeli from cartoonbank.com. All rights reserved. Now we consider the case of multiple adjectives, illustrated at the beginning of the chapter with sentences such as “The kindhearted, intelligent, handsome boy had many girlfriends.” In English, adjectives occur before the noun. As a first approximation we might follow the system we have adopted thus far and introduce a recursive NP rule with a prenominal adjective: NP → Adj NP Repeated application of this rule would generate trees with multiple adjective positions, as desired. NP NP Adj NP Adj NP Adj But there is something wrong in this tree, which is made apparent when we expand the lowest NP. The adjective can appear before the determiner, and this is not a possible word order in English NPs. NP Adj g handsome NP Det g the NOT POSSIBLE! N g boy 99 100 CHAPTER 2 Syntax: The Sentence Patterns of Language The problem is that although determiners and adjectives are both modifiers of the noun, they have a different status. First, an NP will never have more than one determiner in it, while it may contain many adjectives. Also, an adjective directly modifies the noun, while a determiner modifies the whole adjective(s) + noun complex. The expression “the big dog” refers to some specific dog that is big, and not just some dog of any size. In general, modification occurs between sisters. If the adjective modifies the noun, then it is sister to the noun. If the determiner modifies the adjective + noun complex, then the determiner is sister to this complex. 
We can represent these two sisterhood relations by introducing an additional level of structure between NP and N. We refer to this level as N-bar (written as N'). NP Det g the 2 N' 2 Adj g handsome N g boy This structure provides the desired sisterhood relations. The adjective handsome is sister to the noun boy, which it therefore modifies, and the determiner is sister to the N' handsome boy. We must revise our NP rules to reflect this new structure, and add two rules for N'. Not all NPs have adjectives, of course. This is reflected in the second N' rule in which N' dominates only N. NP → N' → N' → Det N' (revised version of NP → Det N) Adj N' N Let us now see how these revised rules generate NPs with multiple (potentially infinitely many) adjectives. Thus far all the NPs we have looked at are common nouns with a simple definite or indefinite determiner (e.g., the cat, a boy), but NPs can consist of a simple pronoun (e.g., he, she, we, they) or a proper name (e.g., Robert, California, Prozac). To reflect determiner-less NP structures, we will need the rule NP → N' But that’s not all. We have possessive noun phrases such as Melissa’s garden, the girl’s shoes, and the man with the telescope’s hat. In these structures the possessor NP (e.g., Melissa’s, the girl’s, etc.) functions as a determiner in that it further specifies its sister noun. The ’s is the phonological realization of the abstract element poss. The structures are illustrated in each of the following trees. Sentence Structure NP NP Det N' Det NP poss N Melissa ’s garden N' poss NP Det N' the N girl N shoes ’s NP 5 N' Det 5 NP 5 N PP NP Det poss N' P N NP Det N' N the man with the telescope ’s hat To accommodate the possessive structure we need an additional rule: Det → NP poss This rule forms a recursive set with the NP → Det N' rule. Together these rules allow an English speaker to have multiple possessives such as The student’s friend’s cousin’s book. The embedding of categories within categories is common to all languages. Our brain capacity is finite, able to store only a finite number of categories and rules for their combination. Yet this finite system places an infinite set of sentences at our disposal. This linguistic property also illustrates the difference between competence and performance, discussed in chapter 6. All speakers of English (and other languages) have as part of their linguistic competence—their mental grammars— 101 102 CHAPTER 2 Syntax: The Sentence Patterns of Language the ability to embed phrases and sentences within each other ad infinitum. However, as the structures grow longer, they become increasingly more difficult to produce and understand. This could be due to short-term memory limitations, muscular fatigue, breathlessness, boredom, or any number of performance factors. (We will discuss performance factors more fully in chapter 8.) Nevertheless, these very long sentences would be well-formed according to the rules of the grammar. Heads and Complements “Mother Goose & Grimm” . Grimmy, Inc. Reprinted with permission of King Features Syndicate. Phrase structure trees also show relationships among elements in a sentence. For example, the subject and direct object of the sentence can be structurally defined. The subject is the NP that is closest to, or immediately dominated by, the root S. The direct object is the NP that is closest to, or immediately dominated by, VP. Another kind of relationship is that between the head of a phrase and its sisters. 
The head of a phrase is the word whose lexical category defines the type of phrase: the noun in a noun phrase, the verb in a verb phrase, and so on. Reviewing the PS rules in the previous section, we see that every VP contains a verb, which is its head. The VP may also contain other categories, such as an NP or CP. Those sister categories are complements; they complete the meaning of the phrase. Loosely speaking, the entire phrase refers to whatever the head verb refers to. For example, the VP find a puppy refers to an event of “finding.” The NP object in the VP that completes its meaning is a complement. The underscored CP (complementizer phrase) in the sentence “I thought that the child found the puppy” is also a complement. (Please do not confuse the terms complementizer and complement.) Every phrasal category, then, has a head of its same syntactic type. NPs are headed by nouns, PPs are headed by prepositions, CPs by complementizers, and so on; and every phrasal head can have a complement, which provides further information about the head. In the sentence “The death of Lincoln shocked the nation,” the PP of Lincoln is the complement to the head noun death. Other examples of complements are illustrated in the following examples, with the head in italics and the complement underlined: Sentence Structure an argument over jelly beans (PP complement to noun) his belief that justice will prevail (CP complement to noun) happy to be here (infinitive complement to adjective) about the war in Iraq (NP complement to preposition) wrote a long letter to his only sister (NP—PP complement to verb) tell John that his mother is coming to dinner (NP CP complements to verb) Each of these examples is a phrase (NP, AdjP, PP, VP) that contains a head (N, Adj, P, V), followed by a complement of varying composition such as CP in the case of belief, or NP PP in the case of wrote, and so on. The head-complement relation is universal. All languages have phrases that are headed and that contain complements. However, the order of the head and complement may differ in different languages. In English, for example, we see that the head comes first, followed by the complement. In Japanese, complements precede the head, as shown in the following examples: Taro-ga Taro-subject marker Inu-ga dog-subject marker inu-o dog-object marker niwa-de garden-in asonde playing mitsuketa found iru is (Taro found a dog) (The dog is playing in the garden) In the first sentence, the direct object complement inu-o “dog” precedes the head verb mitsuketa “found.” In the second, the PP complement niwa-de “in the garden” also precedes the head verb phrase. English is a VO language, meaning that the verb ordinarily precedes its object. Japanese is an OV language, and this difference is also reflected in the head/complement word order. Selection Whether a verb takes a complement or not depends on the properties of the verb. For example, the verb find is a transitive verb. A transitive verb requires an NP complement (direct object), as in The boy found the ball, but not *The boy found, or *The boy found in the house. Some verbs like eat are optionally transitive. John ate and John ate a sandwich are both grammatical. Verbs select different kinds of complements. For example, verbs like put and give take both an NP and a PP complement, but cannot occur with either alone: Sam put the milk in the refrigerator. *Sam put the milk. Robert gave the film to his client. *Robert gave to his client. 
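Selection facts of this kind are naturally thought of as part of each verb's lexical entry. The sketch below is ours, with a deliberately simplified frame notation: each verb lists the category sequences it allows as complements, and a small function checks a proposed complement sequence against that list.

```python
# A sketch of complement selection: each verb lists the sequences of
# complement categories it permits. The notation is simplified for illustration.
SUBCAT = {
    "find": [["NP"]],            # transitive: requires an NP object
    "eat":  [["NP"], []],        # optionally transitive
    "put":  [["NP", "PP"]],      # requires both an NP and a PP
    "give": [["NP", "PP"]],
}

def selects(verb, complements):
    """True if the verb allows this sequence of complement categories."""
    return complements in SUBCAT[verb]

print(selects("put", ["NP", "PP"]))   # True:  Sam put the milk in the refrigerator
print(selects("put", ["NP"]))         # False: *Sam put the milk
print(selects("eat", []))             # True:  John ate
print(selects("find", []))            # False: *The boy found
```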
Sleep is an intransitive verb; it cannot take an NP complement. Michael slept. *Michael slept a fish. Some verbs, such as think, select a sentence complement, as in “I think that Sam won the race.” Other verbs, like tell, select an NP and a sentence, as in “I 103 104 CHAPTER 2 Syntax: The Sentence Patterns of Language told Sam that Michael was on his bicycle”; yet other verbs like feel select either an AdjP or a sentence complement. (Complements are italicized.) Paul felt strong as an ox. He feels that he can win. As we will discuss later, sentences that are complements must often be preceded by a complementizer that. Other categories besides verbs also select their complements. For example, the noun belief selects either a PP or a CP, while the noun sympathy selects a PP, but not a CP, as shown by the following examples: the belief in freedom of speech the belief that freedom of speech is a basic right their sympathy for the victims *their sympathy that the victims are so poor Adjectives can also have complements. For example, the adjectives tired and proud select PPs: tired of stale sandwiches proud of her children With noun selection, the complement is often optional. Thus sentences like “He respected their belief,” “We appreciated their sympathy,” “Elimelech was tired,” and “All the mothers were proud” are syntactically well-formed with a meaning that might be conveyed by an explicit complement understood from context. Verb selection is often not optional, however, so that *He put the milk is ungrammatical even if it is clear from context where the milk was put. The information about the complement types selected by particular verbs and other lexical items is called C-selection or subcategorization, and is included in the lexical entry of the item in our mental lexicon. (Here C stands for “categorial” and is not to be confused with the C that stands for “complementizer”—we apologize for the “clash” of symbols, but that’s what it’s like in the linguistic literature.) Verbs also include in their lexical entry a specification of certain intrinsic semantic properties of their subjects and complements, just as they select for syntactic categories. This kind of selection is called S-selection (S for semantic). For example, the verb murder requires its subject and object to be human, while the verb drink requires its subject to be animate and its object liquid. Verbs such as like, hate, and so on select animate subjects. The following sentences violate S-selection and can only be used in a metaphorical sense. (We will use the symbol “!” to indicate a semantic anomaly.) !The rock murdered the man. !The beer drank the student. !The tree liked the boy. The famous sentence Colorless green ideas sleep furiously, discussed earlier in this chapter, is anomalous because (among other things) S-selection is violated Sentence Structure (e.g., the verb sleep requires an animate subject). In chapter 3 we will discuss the semantic relationships between a verb and its subject and objects in far more detail. The well-formedness of a phrase depends then on at least two factors: whether the phrase conforms to the structural constraints of the language as expressed in the PS rules, and whether it obeys the selectional requirements of the head, both syntactic (C-selection) and semantic (S-selection). What Heads the Sentence Might, could, would—they are contemptible auxiliaries. GEORGE ELIOT (MARY ANN EVANS), Middlemarch, 1872 We said earlier that all phrases have heads. 
One category that we have not yet discussed in this regard is sentence (S). For uniformity’s sake, we want all the categories to be headed, but what would the head of S be? To answer this question, let us consider sentences such as the following: Sam will kick the soccer ball. Sam has kicked the soccer ball. Sam is kicking the soccer ball. Sam may kick the soccer ball. As noted earlier, words like will, has, is, and may are auxiliary verbs, belonging to the category Aux, which also includes modals such as might, could, would, can, and several others. They occur in structures such as the following one. S NP 2 @ The boy VP 2 Aux g is may has VP @ eating eat eaten (From now on we will adopt the convention of using a triangle under a node when the content of a category is not crucial to the point under discussion.) Auxiliary verbs specify a time frame for the event (or state) described by the verb, whether it will take place in the future, already took place in the past, or is taking place now. A modal such as may contains “possibility” as part of its meaning, and says it is possible that the event will occur at some future time. The category Aux is a natural category to head S. Just as the VP is about the situation described by the verb—eat ice cream is about “eating”—so a sentence is about a situation or state of affairs that occurs at some point in time. 105 106 CHAPTER 2 Syntax: The Sentence Patterns of Language The parallel with other categories extends further. In the previous PS tree, VP is the complement to Aux. The selectional relationship between Aux and VP is demonstrated by the fact that particular auxiliaries go with particular kinds of VPs. For example, the auxiliary be takes a progressive (-ing) form of the verb, The boy is dancing. while the auxiliary have selects a past participle (-en) form of the verb, The girl has eaten. and the modals select the infinitival form of the verb (no affixes), The child must sleep The boy may eat. To have a uniform notation, many linguists use the symbols T (= tense) and TP (= tense phrase) instead of Aux and S. Furthermore, just as the NP required the intermediate N-bar (N') category, the TP also has the intermediate T-bar (T') category, as in the phrase structure tree below. TP NP 2 @ T' 2 T g VP be have Modal Indeed, many linguists assume that all XPs, where XP stands for any of NP, PP, VP, TP, AdjP, or CP, have three levels of structure. This is referred to as X-bar theory. The basic three-level X-bar schema is as follows: XP 2 specifier X' 2 X (head) complement The first level is the XP itself. The second level consists of a specifier, which functions as a modifier (and which is generally an optional constituent), and an X' (i.e., “X-bar”). For example, an NP specifier is a determiner; a VP specifier is an adverb such as never or often; an AdjP specifier is a degree word such as very or quite. The third level is an expansion of X' and consists of a head X and a complement, which may itself be a phrasal category, thus giving rise to recursion. X-bar structure is thought to be universal, occurring in all the world’s Sentence Structure languages, though the order of the elements inside XP and X' may be reversed, as we saw in Japanese. We will not use X-bar conventions in our description of syntax except on the few occasions where the notation provides an insight into the syntax of the language. 
For sentences we will generally use the more intuitive symbols S and Aux instead of TP and T, but you should think of Aux and S as having the same relationship to each other as V and VP, N and NP, and so on. To achieve this more straightforward approach, we will also ignore the T' category until it is needed later on in the description of the syntax of the main verb be. Without the use of TP, T', and T, we need an additional PS rule to characterize structures containing Aux: VP → Aux VP Like the other recursive VP rules, this rule will allow multiple Aux positions. VP Aux 2 VP 2 Aux VP 2 Aux VP @ This is a desired consequence because English allows sentences with multiple auxiliaries such as: The child may be sleeping. The dog has been barking all night. The bird must have been flying home. (modal, be) (have, be) (modal, have, be) The introduction of Aux into the system raises a question. Not all sentences seem to have auxiliaries. For example, the sentence “Sam kicked the soccer ball” has no modal, have or be. There is, however, a time reference for this sentence, namely, the past tense on the verb kicked. In sentences without auxiliaries, the tense of the sentence is its head. Instead of having a word under the category Aux (or T), there is a tense specification, present or past, as in the following tree: S NP 2 @ Sam VP 2 Aux VP @ g past kicked the ball 107 108 CHAPTER 2 Syntax: The Sentence Patterns of Language The inflection on the verb must match the tense in Aux. For example, if the tense of the sentence is past, then the verb must have an -ed affix (or must be an irregular past tense verb such as ate). Thus, in English, and many other languages, the head of S may contain only an abstract tense specification and no actual word, as just illustrated. The actual morpheme, in this case -ed or an irregular past tense form such as went, is inserted into the tree after all the syntactic rules have applied. Most inflectional morphemes, which depend on elements of syntax, are represented in this way. Another example is the tense-bearing word do that is inserted into negative sentences such as John did not go and questions such as Where did John go? In these sentences did means “past tense.” Later in this chapter we will see how do-insertion works. In addition to specifying the time reference of the sentence, Aux specifies the agreement features of the subject. For example, if the subject is we, Aux contains the features first-person and plural; if the subject is he or she, Aux contains the features third-person and singular. So, another function of the syntactic rules is to use Aux as a “matchmaker” between the subject and the verb. When the subject and the verb bear the same features, Aux makes a match; when they have incompatible features, Aux cannot make a match and the sentence is ungrammatical. This matchmaker function of syntactic rules is more obvious in languages such as Italian, which have many different agreement morphemes, as discussed in chapter 1. Consider the Italian sentence for “I buy books.” S VP NP @ Io *Io Present first person Present second person Aux VP @ compro i libri compri i libri The verb compro, “buy,” in the first sentence bears the first-person singular morpheme, -o, which matches the agreement feature in Aux, which in turn matches the subject Io, “I.” The sentence is therefore grammatical. In the second sentence, there is a mismatch between the first-person subject and the secondperson features in Aux (and on the verb), and so the sentence is ungrammatical. 
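The "matchmaker" role of Aux can also be sketched programmatically. In the fragment below (our illustration, with deliberately simplified feature lists), Aux copies person and number features from the subject, and the sentence is accepted only when the verb's inflection carries the same features, mirroring the Italian compro/compri contrast above.

```python
# A sketch of agreement mediated by Aux: the subject contributes person/number
# features, the inflected verb contributes its own, and Aux "makes the match."
SUBJECT_FEATURES = {"io": ("1", "sg"), "tu": ("2", "sg"), "lui": ("3", "sg")}
VERB_FEATURES    = {"compro": ("1", "sg"), "compri": ("2", "sg"), "compra": ("3", "sg")}

def agrees(subject, verb):
    """Aux copies the subject's features and checks them against the verb's."""
    aux_features = SUBJECT_FEATURES[subject]   # the features placed in Aux
    return aux_features == VERB_FEATURES[verb]

print(agrees("io", "compro"))   # True:  Io compro i libri
print(agrees("io", "compri"))   # False: *Io compri i libri
```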
Sentence Structure Structural Ambiguities The structure of every sentence is a lesson in logic. JOHN STUART MILL, Inaugural address at St. Andrews, 1867 As mentioned earlier, certain kinds of ambiguous sentences have more than one phrase structure tree, each corresponding to a different meaning. The sentence The boy saw the man with the telescope is structurally ambiguous. Its two meanings correspond to the following two phrase structure trees. (For simplicity we omit Aux in these structures and we return to the non-X-bar notation.) S 1. NP 4 2 Det g The N g boy 2. VP 4 VP PP 2 2 V NP P NP 2 2 g g saw Det N with Det N g g g g the man the telescope S NP 2 2 Det g The N g boy VP 2 V g saw NP 2 2 Det g the NP PP 2 N P NP 2 g g man with Det N g g the telescope 109 110 CHAPTER 2 Syntax: The Sentence Patterns of Language One meaning of this sentence is “the boy used a telescope to see the man.” The first phrase structure tree represents this meaning. The key element is the position of the PP directly under the VP. Notice that although the PP is under VP, it is not a complement because phrasal categories don’t take complements (only heads do), and because it is not selected by the verb. The verb see selects an NP. In this sentence, the PP has an adverbial function and modifies the verb. In its other meaning, “the boy saw a man who had a telescope,” the PP with the telescope occurs under the direct object NP, where it modifies the noun man. In this second meaning, the complement of the verb see is the entire NP—the man with the telescope. The PP in the first structure is generated by the rule VP → VP PP In the second structure the PP is generated by the rule NP → NP PP Two interpretations are possible because the rules of syntax permit different structures for the same linear order of words. Following is the set of PS rules that we have presented so far in the chapter. The rules have been renumbered. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. S NP Det NP NP N' N' VP VP VP VP VP PP CP → → → → → → → → → → → → → → NP VP Det N' NP poss N' NP PP Adj N' N V V NP V CP Aux VP VP PP P NP CS This is not the complete set of PS rules for the language. Various structures in English cannot be generated with these rules, some of which we will talk about later. But even this mini phrase structure grammar generates an infinite set of possible sentences because the rules are recursive. These PS rules specify the word order for English (and other SVO languages, but not for Japanese, say, Sentence Structure in which the object comes before the verb). Linear order aside, the hierarchical organization illustrated by these rules is largely true for all languages, as expressed by X-bar schema. More Structures “Shoe” © MacNelly. King Features Syndicate Many English sentence types are not accounted for by the phrase structure rules given so far, including: 1. 2. 3. The dog completely destroyed the house. The cat and the dog were friends. The cat is coy. 111 112 CHAPTER 2 Syntax: The Sentence Patterns of Language The sentence in (1) contains the adverb (Adv) completely. Adverbs are modifiers that can specify how an event happens (quickly, slowly, completely) or when it happens (yesterday, tomorrow, often). As modifiers, adverbs are sisters to phrasal (XP) categories. 
In sentence (1) the adverb is a sister to VP, as illustrated in the following structure (we ignore Aux in this structure): S VP NP @ The dog Adv g completely VP @ destroyed the house Temporal adverbs such as yesterday, today, last week, and manner adverbs such as quietly, violently, suddenly, carefully, also occur to the right of VP as follows: S VP NP @ Adv @ g destroyed the house yesterday The dog VP Adverbs also occur as sisters to S (which, recall, is also a phrasal category, TP). S Adv g probably S NP 2 @ the dog VP @ has fleas Sentence Structure At this point you should be able to write the three PS rules that will account for the position of these adverbs.1 The “Shoe” cartoon’s joke is based on the fact that curse may take an NP complement (“cursed at the day”) and/or be modified by a temporal adverbial phrase (AdvP) (“cursed on the day”), leading to the structural ambiguity: VP VP V g cursed NP g the day I was born V g cursed AdvP g the day I was born Interestingly, I cursed the day I was born the day I was born, with both the NP and AdvP modifying the verb, is grammatical and meaningful. (See exercise 23b.) Sentence 2 contains a coordinate structure The cat and the dog. A coordinate structure results when two constituents of the same category (in this case, two NPs) are joined with a conjunction such as and or or. The coordinate NP has the following structure: NP NP1 CoordP Coord g and NP2 Though this may seem counterintuitive, in a coordinate structure the second member of the coordination (NP2) forms a constituent with the conjunction and. We can show this by means of the “move as a unit” constituency test. In sentence (5) the words and a CD move together to the end of the sentence, whereas in (6) the constituent is broken, resulting in ungrammaticality. 4. 5. 6. Caley bought a book and a CD yesterday. Caley bought a book yesterday and a CD. *Caley bought a book and yesterday a CD. Once again, we encourage you to write the two PS rules that generate this structure. 2 VP → Adv VP VP → VP Adv Answer: NP → NP CoordP, CoordP → Coord NP 2 Answer: S → Adv S 1 113 CHAPTER 2 Syntax: The Sentence Patterns of Language You can also construct trees for other kinds of coordinate structures, such as VP or PP coordination, which follow the same pattern. Michael writes poetry and surfs. (VP and VP) Sam rode his bicycle to school and to the pool. (PP and PP) Sentence (3) contains the main verb be followed by an adjective. The structure of main verb be sentences is best illustrated using T' notation. The main verb be acts like the modals and the auxiliaries be and have. For example, it is moved to the beginning of the sentence in questions (Is the cat coy?). For this reason we assume that the main verb be occurs under T and takes an XP complement. The XP may be AdjP, as shown in the tree structure for (3): TP NP 2 @ the cat T' 2 T g is AdjP g Adj g coy or an NP or PP as would occur in The cat is a feline or The cat is in the tree. As before we will leave it as an exercise for you to construct the PS rules for these sentence types and the tree structures they generate.3 (You might try drawing the tree structures; they should look very much like the one above.) There are also embedded sentence types other than those that we have discussed, for example: Hilary is waiting for you to sing. (Cf. You sing.) The host wants the president to leave early. (Cf. The president leaves early.) The host believes the president to be punctual. (Cf. The president is punctual.) 
Although the detailed structure of these different embedded sentences is beyond the scope of this introduction, you should note that an embedded sentence may be an infinitive. An infinitive sentence does not have a tense. The embedded sentences for you to sing, the president to leave early, and the president to be punctual are infinitives. Such verbs as want and believe, among many others, can take infinitival complements. This information, like other selectional properties, belongs to the lexical entry of the selecting verb (the higher verb in the tree). 3 Answer: TP → NP T', T' → T XP (where XP = AdjP, PP, NP) 114 Sentence Relatedness Sentence Relatedness I put the words down and push them a bit. EVELYN WAUGH, quoted in The New York Times, April 11, 1966 Another aspect of our syntactic competence is the knowledge that certain sentences are related to one another, such as the following pair: The boy is sleeping. Is the boy sleeping? These sentences describe the same situation. The sentence in the first column asserts that a particular situation exists, a boy-sleeping situation. Such sentences are called declarative sentences. The sentence in the second column asks whether such a boy-sleeping situation holds. Sentences of the second sort are called yesno questions. The only actual difference in meaning between these sentences is that one asserts a situation and the other asks for confirmation of a situation. This element of meaning is indicated by the different word orders, which illustrates that two sentences may have a structural difference that corresponds in a systematic way to a meaning difference. The grammar of the language must account for this fact. Transformational Rules Method consists entirely in properly ordering and arranging the things to which we should pay attention. RENÉ DESCARTES, Oeuvres, vol. X, c. 1637 Phrase structure rules account for much of our syntactic knowledge, but they do not account for the fact that certain sentence types in the language relate systematically to other sentence types. The standard way of describing these relationships is to say that the related sentences come from a common underlying structure. Yes-no questions are a case in point, and they bring us back to a discussion of auxiliaries. Auxiliaries are central to the formation of yes-no questions as well as certain other types of sentences in English. In yes-no questions, the auxiliary appears in the position preceding the subject. Here are a few more examples: The boy is sleeping. The boy has slept. The boy can sleep. The boy will sleep. Is the boy sleeping? Has the boy slept? Can the boy sleep? Will the boy sleep? A way to capture the relationship between a declarative and a yes-no question is to allow the PS rules to generate a structure corresponding to the declarative 115 CHAPTER 2 Syntax: The Sentence Patterns of Language sentence. Another formal device, called a transformational rule, then moves the auxiliary before the subject. The rule “Move Aux” is formulated as follows: Move the highest Aux to adjoin to (the root) S. That is, Move Aux applies to structures like: S NP VP @ Aux etc. to give structures like: S (newly created) S Aux NP VP etc. g The “__” shows the position from which the Aux is moved. For example: The boy is sleeping. → Is the boy __ sleeping? The rule takes the basic (NP-Aux) structure generated by the phrase structure rules and derives a second tree (the dash represents the position from which a constituent has been moved). 
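Because Move Aux is stated over structures rather than over strings of words, it can be sketched as an operation on the kind of nested-list trees used earlier. The function below is an illustration of ours, not the authors' formalism: it takes a d-structure of the shape [S NP Aux VP], removes the highest Aux, and attaches it to a newly created S node, leaving a "__" in the vacated position.

```python
# A sketch of "Move the highest Aux to adjoin to the root S."
# Trees are nested lists: [label, child, child, ...].

def move_aux(d_structure):
    """From [S, NP, [Aux, ...], VP] build [S, Aux, [S, NP, __, VP]]."""
    label, np, aux, vp = d_structure
    gap = ["Aux", "__"]                 # mark the position the Aux vacated
    inner_s = [label, np, gap, vp]      # the original S, minus its Aux
    return ["S", aux, inner_s]          # adjunction: a new S over Aux and the old S

d_structure = ["S", ["NP", "the boy"], ["Aux", "is"], ["VP", "sleeping"]]
print(move_aux(d_structure))
# ['S', ['Aux', 'is'], ['S', ['NP', 'the boy'], ['Aux', '__'], ['VP', 'sleeping']]]
```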
The Aux is attached to the tree by adjunction. Adjunction is an operation that copies an existing node (in this case S) and creates a new level to which the moved category (in this case Aux) is appended. S NP 2 @ the boy S VP 2 Aux g is VP g V g sleeping → Aux g is 2 NP S 2 @ the boy VP 2 g 116 VP g V g sleeping Sentence Relatedness Yes-no questions are thus generated in two steps. 1. 2. The phrase structure rules generate a basic structure. Aux movement applies to produce the derived structure. The basic structures of sentences, also called deep structures or d-structures, conform to the phrase structure rules. Variants on the basic sentence structures are derived via transformations. By generating questions in two steps, we are claiming that for speakers a relationship exists between a question and its corresponding statement. Intuitively, we know that such sentences are related. The transformational rule is a formal way of representing this knowledge. The derived structures—the ones that follow the application of transformational rules—are called surface structures or s-structures. The phonological rules of the language—the ones that determine pronunciation—apply to s-structures. If no transformations apply, then d-structure and s-structure are the same. If transformations apply, then s-structure is the result after all transformations have had their effect. Many sentence types are accounted for by transformations, which can alter phrase structure trees by moving, adding, or deleting elements. Other sentence pairs that are transformationally related are: active-passive The cat chased the mouse. → The mouse was chased by the cat. there sentences There was a man on the roof. → A man was on the roof. PP preposing The astronomer saw the quasar with the telescope. telescope, the astronomer saw the quasar. → With the The Structural Dependency of Rules “Peanuts” © United Feature Syndicate, Inc. Transformations act on phrase structures without paying attention to the particular words that the structures contain. These rules are said to be structure dependent. The transformational rule of PP preposing moves any PP as long as 117 118 CHAPTER 2 Syntax: The Sentence Patterns of Language it is immediately under the VP, as in In the house, the puppy found the ball; or With the telescope, the boy saw the man; and so on. Evidence that transformations are structure dependent is provided by the fact that the sentence With a telescope, the boy saw the man is not ambiguous. It has only the meaning “the boy used a telescope to see the man,” the meaning corresponding to the first phrase structure on page 109 in which the PP is immediately dominated by the VP. In the structure corresponding to the other meaning, “boy saw a man who had a telescope,” the PP is in the NP as in the second tree on page 109. The PP preposing transformation applies to the VP–PP structure and not to the NP–PP structure. Another rule of English allows the complementizer that to be omitted when it precedes an embedded sentence but not a sentence that appears in subject position, as illustrated by these pairs: I know that you know. That you know bothers me. I know you know. *You know bothers me. This is a further demonstration that rules are structure dependent. Agreement rules are also structure dependent. In many languages, including English, the verb must agree with the subject. The verb is marked with an -s when the subject is third-person singular. This guy seems kind of cute. These guys seem kind of cute. 
Now consider these sentences: The guy we met at the party next door seems kind of cute. The guys we met at the party next door seem kind of cute. The verb seem must agree with the subject, guy or guys. Even though there are various words between the head noun and the verb, the verb always agrees with the head noun. Moreover, there is no limit to how many words may intervene, or whether they are singular or plural, as the following sentence illustrates: The guys (guy) we met at the party next door that lasted until 3 a.m. and was finally broken up by the cops who were called by the neighbors seem (seems) kind of cute. The phrase structure tree of such a sentence explains why this is so. S NP VP Aux present 3rd person singular The guy = = = = = = VP seems kind of cute Sentence Relatedness In the tree, “= = = = = =” represents the intervening structure, which may, in principle, be indefinitely long and complex. Speakers of English (and all other languages) know that agreement depends on sentence structure, not the linear order of words. Agreement is between the subject and the main verb, where the subject is structurally defined as the NP immediately dominated by S. The agreement relation is mediated by Aux, which contains the tense and agreement features that match up the subject and verb. As far as the rule of agreement is concerned, all other material can be ignored, although in actual performance, if the distance is too great, the speaker may forget what the head noun was. The “Peanuts” cartoon also illustrates that agreement takes place between the head noun—the first occurrence of “refusal”—and the structurally highest verb in the sentence, which is the final occurrence of “do,” despite the 14 intervening words. A final illustration of structure dependency is found in the declarativequestion pairs discussed previously. Consider the following sets of sentences: The boy who is sleeping was dreaming. Was the boy who is sleeping dreaming? *Is the boy who sleeping was dreaming? The boy who can sleep will dream. Will the boy who can sleep dream? *Can the boy who sleep will dream? The ungrammatical sentences show that to form a question, the rule that moves Aux singles out the auxiliary dominated by the root S, and not simply the first auxiliary in the sentence. We can see this in the following simplified phrase structure trees. There are two auxiliaries, one in the subject relative clause and the other in the root clause. The rule affects the auxiliary in the higher main clause. S VP NP @ the boy who is sleeping 2 Aux g was → VP @ dreaming S S NP @ the boy who is sleeping VP ro g Aux g was VP @ dreaming 119 120 CHAPTER 2 Syntax: The Sentence Patterns of Language If the rule picked out the first Aux, we would have the ungrammatical sentence Is the boy who__ sleeping was dreaming. To derive the correct s-structures, transformations such as Move Aux must refer to phrase structure and not to the linear order of elements. Structure dependency is a principle of Universal Grammar, and is found in all languages. For example, in languages that have subject-verb agreement, the dependency is between the verb and the head noun, and never some other noun such as the closest one, as shown in the following examples from Italian, German, Swahili, and English, respectively (the third-person singular agreement affix in the verb is in boldface and is governed by the boldfaced head noun, not the underlined noun, even though the latter is nearest the main verb): La madre con tanti figli lavora molto. 
Die Mutter mit den vielen Kindern arbeitet viel. Mama anao watoto wengi anajitahidi. The mother with many children works a lot. Further Syntactic Dependencies Sentences are organized according to two basic principles: constituent structure and syntactic dependencies. As we have discussed, constituent structure refers to the hierarchical organization of the subparts of a sentence, and transformational rules are sensitive to it. The second important property is the dependencies among elements in the sentence. In other words, the presence of a particular word or morpheme can be contingent on the presence of some other word or morpheme in a sentence. We have already seen at least two examples of syntactic dependencies. Selection is one kind of dependency. Whether there is a direct object in a sentence depends on whether the verb is transitive or intransitive. More generally, complements depend on the properties of the head of their phrase. Agreement is another kind of dependency. The features in Aux (and on the verb) must match the features of the subject. Wh Questions Whom are you? said he, for he had been to night school. GEORGE ADE, “The Steel Box,” in Bang! Bang!, 1928 The following wh questions illustrate another kind of dependency: 1. (a) What will Max chase? (b) Where has Pete put his bone? (c) Which dog do you think loves balls? There are several points of interest in these sentences. First, the verb chase in sentence (a) is transitive, yet there is no direct object following it. There is a gap Sentence Relatedness where the direct object should be. The verb put in sentence (b) selects a direct object and a prepositional phrase, yet there is no PP following his bone. Finally, the embedded verb loves in sentence (c) bears the third-person -s morpheme, yet there is no obvious subject to trigger this agreement. If we remove the wh phrases, the remaining sentences would be ungrammatical. 2. (a) *will Max chase ___? (b) *has Pete put his bone ___? (c) *do you think ___ loves balls? The grammaticality of a sentence with a gap depends on there being a wh phrase at the beginning of the sentence. The sentences in (1) are grammatical because the wh phrase is acting like the object in (a), the prepositional phrase object in (b), and the embedded subject in (c). We can explain the dependency between the wh phrase and the missing constituent if we assume that in each case the wh phrase originated in the position of the gap in a sentence with the corresponding declarative structure: 3. (a) Max will chase what? (b) Pete has put his bone where? (c) You think (that) which dog loves balls? The wh phrase is then moved to the beginning of the sentence by a transformational rule: Move wh. Because embedded wh phrases (I wonder who Mary likes) are known to be complementizer phrases (CPs), we may deduce that main clause questions (Who does Mary like?) are also CPs, with the following structure (recall that C abbreviates “complementizer”): CP C S The wh phrase moves to the empty C position at the left periphery of the sentence. Thus, wh questions are generated in three steps: 1. 2. 3. The phrase structure rules generate the CP d-structure with the wh phrase occupying an NP position within the S: direct object in (3a); prepositional object in (3b); and subject in (3c). Move Aux adjoins the auxiliary to S. Move wh moves the wh phrase to C. The following tree shows the d-structure of the sentence What will Max chase? 
In labeled-bracket form:

[CP C [S [NP Max] [Aux will] [VP [V chase] [NP what]]]]

The s-structure representation of this sentence is:

[CP [C What] [S [Aux will] [S [NP Max] [VP [V chase] ___ ]]]]

In question (1c), there is an auxiliary "do." Unlike the other auxiliaries (e.g., can, have, be), do is not part of the d-structure of the question. The d-structure of the question Which dog did Michael feed? is "Michael fed which dog?" Because Move Aux is structure dependent (like all rules), it ignores the content of the category. It will therefore move Aux even when Aux contains only a tense feature such as past. In this case, another rule, called "do support," inserts do into the structure to carry the tense:

[CP C [S [NP Michael] [Aux past] [VP [V feed] [NP which dog]]]]
→ [CP [C which dog] [S do [Aux past] [S [NP Michael] [VP [V feed] ___ ]]]]

The first tree represents the d-structure to which the Aux and wh movement rules apply. The second tree shows the output of those transformations and the insertion of "do." "Do" combines with past to yield "did." Rules that convert inflectional features such as past tense, third-person present tense, and the possessive poss into their proper phonological forms are called spell-out rules.

Unlike the other rules we have seen, which operate inside a phrase or clause, Move wh can move the wh phrase outside of its own clause. There is no limit to the distance that a wh phrase can move, as illustrated by the following sentences. The dashes indicate the position from which the wh phrase has been moved.

Who did Helen say the senator wanted to hire ___?
Who did Helen say the senator wanted the congressional representative to try to hire ___?
Who did Helen say the senator wanted the congressional representative to try to convince the Speaker of the House to get the Vice President to hire ___?

"Long-distance" dependencies created by wh movement are a fundamental part of human language. They provide still further evidence that sentences are not simply strings of words but are supported by a rich scaffolding of phrase structure trees. These trees express the underlying structure of a sentence as well as its relation to other sentences in the language, and as always are reflective of a person's knowledge of syntax.

UG Principles and Parameters

Whenever the literary German dives into a sentence, that is the last you are going to see of him till he emerges on the other side of the Atlantic with his Verb in his mouth.
MARK TWAIN, A Connecticut Yankee in King Arthur's Court, 1889

In this chapter we have largely focused on English syntax, but many of the grammatical structures we have described for English also hold in other languages. This is because Universal Grammar (UG) provides the basic design for all human languages, and individual languages are simply variations on this basic design. Imagine a new housing development. All of the houses have the same floor plan, but the occupants have some choices to make. They can have carpet or hardwood floors, curtains or blinds; they can choose their kitchen cabinets and the countertops, the bathroom tiles, and so on. This is more or less how the syntax operates. Languages conform to a basic design, and then there are choice points or points of variation. All languages have phrase structure rules that specify the allowable d-structures.
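The three-step derivation of wh questions described above (the phrase structure rules build the CP d-structure, Move Aux adjoins the auxiliary to S, and Move wh moves the wh phrase to C) can be made concrete with a small computational sketch before we look at how languages vary within this basic design. The Python fragment below is only an illustration under simplified assumptions: the nested-list encoding of trees and the names move_aux and move_wh are ours, not the book's notation, and do support is not needed here because will is already present in the d-structure.

```python
# Minimal, illustrative sketch of the wh-question derivation.
# Trees are nested lists: [label, child1, child2, ...]; leaves are strings.

def move_aux(cp):
    """Adjoin the Aux of the root S to S (a simplified Move Aux)."""
    c, s = cp[1], cp[2]                     # [CP [C ...] [S ...]]
    np, aux, vp = s[1], s[2], s[3]          # [S [NP ...] [Aux ...] [VP ...]]
    return ["CP", c, ["S", aux, ["S", np, vp]]]

def move_wh(cp):
    """Move the wh phrase (here, the object NP) to the empty C position."""
    c, s_outer = cp[1], cp[2]
    aux, s_inner = s_outer[1], s_outer[2]
    np_subj, vp = s_inner[1], s_inner[2]
    v, wh = vp[1], vp[2]                    # [VP [V ...] [NP wh-phrase]]
    gap_vp = ["VP", v, ["NP", "___"]]       # leave a gap behind
    return ["CP", ["C", wh[1]], ["S", aux, ["S", np_subj, gap_vp]]]

def words(tree):
    """Read the terminal string off a tree, skipping empty C and the gap."""
    if isinstance(tree, str):
        return [] if tree in ("", "___") else [tree]
    return [w for child in tree[1:] for w in words(child)]

# d-structure of "What will Max chase?" (cf. the first tree above)
d_structure = ["CP", ["C", ""],
               ["S", ["NP", "Max"], ["Aux", "will"],
                     ["VP", ["V", "chase"], ["NP", "what"]]]]

s_structure = move_wh(move_aux(d_structure))
print(" ".join(words(s_structure)))        # -> what will Max chase
```

Because both rules locate constituents by their position in the tree rather than by counting words, a move_aux of this kind would still single out the auxiliary of the root S even if the subject NP contained a relative clause with its own auxiliary, which is exactly the structure-dependence point made earlier.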
In all languages, phrases consist of heads and complements, and sentences are headed by Aux (or T), which is specified for information such as tense, agreement, and modality. However, languages may have different word orders within the phrases and sentences. The word order differences between English and Japanese, discussed earlier, illustrate the interaction of general and language-specific properties. UG specifies the structure of a phrase. It must have a head and may take one or more complement types (the X-bar schema discussed earlier). However, each language defines for itself the relative order of these constituents: English is head initial, Japanese is head final. We call the points of variation parameters.

All languages seem to have movement rules. Move Aux is a version of a more general rule that exists in languages such as Dutch, in which the auxiliary moves, if there is one, as in (1), and otherwise the main verb moves, as in (2):

1. Zal Femke fietsen?
   will Femke bicycle ride
   (Will Femke ride her bicycle?)
2. Leest Meindert veel boeken?
   reads Meindert many books
   (Does Meindert read many books?)

In English, main verbs other than be do not move. Instead, English "do" spells out the stranded tense and agreement features. All languages have expressions for requesting information about who, when, where, what, and how. Even if the question words in other languages do not necessarily begin with "wh," we will refer to such questions as wh questions. In some languages, such as Japanese and Swahili, the wh phrase does not move. It remains in its original d-structure position. In Japanese the sentence is marked with a question morpheme, no:

Taro-ga nani-o mitsuketa-no?
Taro what found

Recall that Japanese word order is SOV, so the wh phrase nani ("what") is an object and occurs before the verb. In Swahili the wh phrase—nani by pure coincidence—also stays in its base position:

Ulipatia nani kitabu?
you gave who a book

However, in all languages with wh movement (i.e., movement of the question phrase), the question element moves to C (complementizer). The "landing site" of the moved phrase is determined by UG. Among the wh movement languages, there is some variation. In the Romance languages, such as Italian, the wh phrase moves as in English, but when the wh phrase questions the object of a preposition, the preposition must move together with the wh phrase. In English, by contrast, the preposition can be "stranded" (i.e., left behind in its original position):

A chi hai dato il libro?
To whom (did) you give the book?
*Chi hai dato il libro a?
Who(m) did you give the book to?

In some dialects of German, long-distance wh movement leaves a trail of wh phrases in the C position of the embedded sentence:

Mit wem glaubst Du mit wem Hans spricht?
with whom think you with whom Hans talks
(Whom do you think Hans talks to?)

Wen willst Du wen Hans anruft?
whom want you whom Hans call
(Whom do you want Hans to call?)

In Czech the question phrase "how much" can be moved, leaving behind the NP it modifies:

Jak velké Václav koupil auto?
how big Václav bought car
(How big a car did Václav buy?)

Despite these variations, wh movement adheres to certain constraints. Although wh phrases such as what, who, and which boy can be inserted into any NP position, and are then free in principle to move to C, there are specific instances in which wh movement is blocked.
For example, a wh phrase cannot move out of a relative clause like the senator that wanted to hire who, as in (1b). It also cannot move out of a clause beginning with whether or if, as in (2c) and (d). (Remember that the position from which the wh phrases have moved is indicated with ___.) 1. (a) Emily paid a visit to the senator that wants to hire who? (b) *Who did Emily pay a visit to the senator that wants to hire ___? 2. (a) Miss Marple asked Sherlock whether Poirot had solved the crime. (b) Who did Miss Marple ask ___ whether Poirot had solved the crime? (c) *Who did Miss Marple ask Sherlock whether ___ had solved the crime? (d) *What did Miss Marple ask Sherlock whether Poirot had solved ___? The only difference between the grammatical (2b) and the ungrammatical (2c) and (d) is that in (2b) the wh phrase originates in the higher clause, whereas in (2c, d) the wh phrase comes from inside the whether clause. This illustrates that the constraint against movement depends on structure and not on the length of the sentence. Some sentences can be very short and still not allow wh movement: 3. (a) Sam Spade insulted the fat man’s henchman. (b) Who did Sam Spade insult? (c) Whose henchman did Sam Spade insult? (d) *Whose did Sam Spade insult henchman? 4. (a) John ate bologna and cheese. (b) John ate bologna with cheese. (c) *What did John eat bologna and? (d) What did John eat bologna with? The sentences in (3) show that a wh phrase cannot be extracted from inside a possessive NP. In (3b) it is okay to question the whole direct object. In (3c) it is even okay to question a piece of the possessive NP, providing the entire wh phrase is moved, but (3d) shows that moving the wh word alone out of the possessive NP is illicit. Sentence (4a) is a coordinate structure and has approximately the same meaning as (4b), which is not a coordinate structure. In (4c) moving a wh phrase out Sign Language Syntax of the coordinate structure results in ungrammaticality, whereas in 4(d), moving the wh phrase out of the PP is fine. The ungrammaticality of 4(c), then, is related to its structure and not to its meaning. The constraints on wh movement are not specific to English. Such constraints operate in all languages that have wh movement. Like the principle of structure dependency and the principles governing the organization of phrases, the constraints on wh movement are part of UG. These aspects of grammar need not be learned. They are part of the innate blueprint for language that the child brings to the task of acquiring a language. What children must learn are the languagespecific aspects of grammar. Where there are parameters of variation, children must determine the correct choice for their language. The Japanese child must determine that the verb comes after the object in the VP, and the English-speaking child that the verb comes first. The Dutch-speaking child acquires a rule that moves the verb, while the English-speaking child must restrict his rule to auxiliaries. Italian, English, and Czech children learn that to form a question, the wh phrase moves, whereas Japanese and Swahili children determine that there is no movement. As far as we can tell, children fix these parameters very quickly. We will have more to say about how children set UG parameters in chapter 7. Sign Language Syntax All languages have rules of syntax similar in kind, if not in detail, to those of English, and sign languages are no exception. 
Signed languages have phrase structure rules that provide hierarchical structure and order constituents. A signer distinguishes The dog chased the cat from The cat chased the dog through the order of signing. The basic order of ASL is SVO. Unlike English, however, adjectives follow the head noun in ASL. ASL has a category Aux, which expresses notions such as tense, agreement, modality, and so on. In Thai, to show that an action is continuous, the auxiliary verb kamlang is inserted before the verb. Thus kin means “eat” and kamlang kin means “is eating.” In English a form of be is inserted and the main verb is changed to an -ing form. In ASL the sign for a verb such as eat may be articulated with a sweeping, repetitive movement to achieve the same effect. The sweeping, repetitive motion is a kind of auxiliary. Many languages, including English, have a transformation that moves a direct object to the beginning of the sentence to draw particular attention to it, as in: Many greyhounds, my wife has rescued. The transformation is called topicalization because an object to which attention is drawn is generally the topic of the sentence or conversation. (The d-structure underlying this sentence is My wife has rescued many greyhounds.) In ASL a similar reordering of signs accompanied by raising the eyebrows and tilting the head upward accomplishes the same effect. The head motion and facial expressions of a signer function as markers of the special word order, much as intonation does in English, or the attachment of prefixes or suffixes might in other languages. 127 128 CHAPTER 2 Syntax: The Sentence Patterns of Language There are constraints on topicalization similar to those on wh movement illustrated in a previous section. In English the following strings are ungrammatical: *Henchman, Sam Spade insulted the fat man’s. *This film, John asked Mary whether she liked. *Cheese, John ate bologna and for lunch. Compare this with the grammatical: The fat man’s henchman, Sam Spade insulted. This film, John asked Mary to see with her. Bologna and cheese, John ate for lunch. Sign languages exhibit similar constraints. The signed sequence *Henchman, Sam Spade insulted the fat man’s or the other starred examples are ungrammatical in ASL as in spoken languages. ASL has wh phrases. The wh phrase in ASL may move or it may remain in its d-structure position as in Japanese and Swahili. The ASL equivalents of Who did Bill see yesterday? and Bill saw who yesterday? are both grammatical. As in topicalization, wh questions are accompanied by a nonmanual marker. For questions, this marker is a facial expression with furrowed brows and the head tilted back. ASL and other sign languages show an interaction of universal and languagespecific properties, just as spoken languages do. The rules of sign languages are structure dependent, and movement rules are constrained in various ways, as illustrated earlier. Other aspects are particular to sign languages, such as the facial gestures, which are an integral part of the grammar of sign languages but not of spoken languages. The fact that the principles and parameters of UG hold in both the spoken and manual modalities shows that the human brain is designed to acquire and use language, not simply speech. Summary Speakers of a language recognize the grammatical sentences of their language and know how the words in a sentence must be ordered and grouped to convey a certain meaning. 
All speakers are capable of producing and understanding an unlimited number of new sentences that have never before been spoken or heard. They also recognize ambiguities, know when different sentences mean the same thing, and correctly interpret the grammatical relations in a sentence, such as subject and direct object. This kind of knowledge comes from their knowledge of the rules of syntax. Sentences have structure that can be represented by phrase structure trees containing syntactic categories. Phrase structure trees reflect the speaker’s mental representation of sentences. Ambiguous sentences may have more than one phrase structure tree. Phrase structure trees reveal the linear order of words and the constituency of each syntactic category. There are different kinds of syntactic categories: Phrasal categories, such as NP and VP, are composed of other syntactic categories; lexical categories, such as Noun and Verb, and functional categories, such as Det, References for Further Reading Aux, and C, are not decomposable and often correspond to individual words. The internal structure of the phrasal categories is universal. It consists of a head and its complements. The particular order of elements within the phrase is accounted for by the phrase structure rules of each language. NPs, VPs, and so on are headed by nouns, verbs, and the like. The sentence (S or TP) is headed by Aux (or T), which carries such information as tense, agreement, and modality. A grammar is a formally stated, explicit description of the mental grammar or speaker’s linguistic competence. Phrase structure rules characterize the basic phrase structure trees of the language, the d-structures. Some PS rules allow the same syntactic category to appear repeatedly in a phrase structure tree, such as a sentence embedded in another sentence. These rules are recursive and reflect a speaker’s ability to produce countless sentences. The lexicon represents the knowledge that speakers have about the vocabulary of their language. This knowledge includes the syntactic category of words and what elements may occur together, expressed as c-selection or subcategorization. The lexicon also contains semantic information including the kinds of NPs that can function as semantically coherent subjects and objects, s-selection. Transformational rules account for relationships between sentences such as declarative and interrogative pairs, including wh questions. Transformations can move constituents. Much of the meaning of a sentence is interpreted from its d-structure. The output of the transformational rules is the s-structure of a sentence, the structure to which the phonological rules of the language apply. Inflectional information such as tense, agreement, and possessive, among others, is represented as features in the phrase structure tree. After the rules of the syntax have applied, these features are sometimes spelled out as affixes such as -ed and -’s or as function words such as do. The basic design of language is universal. Universal Grammar specifies that syntactic rules are structure dependent and that movement rules may not move phrases out of certain structures such as coordinate structures. These constraints exist in all languages—spoken and signed—and need not be learned. UG also contains parameters of variation, such as the order of heads and complements, and the variations on movement rules. A child acquiring a language must fix the parameters of UG for that language. References for Further Reading Baker, M. C. 2001. 
The atoms of language: The mind’s hidden rules of grammar. New York: Basic Books. Chomsky, N. 1995. The minimalist program. Cambridge, MA: MIT Press. ______. 1972. Language and mind, rev. edn. New York: Harcourt Brace Jovanovich. ______. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press. Jackendoff, R. S. 1994. Patterns in the mind: Language and human nature. New York: Basic Books. Pinker, S. 1999. Words and rules: The ingredients of language. New York: HarperCollins. Radford, A. 2009. Analysing English sentences: A minimalist approach. Cambridge, UK: Cambridge University Press. ______. 2004. English syntax: An introduction. Cambridge, UK: Cambridge University Press. 129 130 CHAPTER 2 Syntax: The Sentence Patterns of Language Exercises 1. Besides distinguishing grammatical from ungrammatical sentences, the rules of syntax account for other kinds of linguistic knowledge, such as a. when a sentence is structurally ambiguous. (Cf. The boy saw the man with a telescope.) b. when two sentences with different structures mean the same thing. (Cf. The father wept silently and The father silently wept.) c. systematic relationships of form and meaning between two sentences, like declarative sentences and their corresponding interrogative form. (Cf. The boy can sleep and Can the boy sleep?) Draw on your linguistic knowledge of English to come up with an example illustrating each of these cases. (Use examples that are different from the ones in the chapter.) Explain why your example illustrates the point. If you know a language other than English, provide examples in that language, if possible. 2. Consider the following sentences: a. I hate war. b. You know that I hate war. c. He knows that you know that I hate war. A. Write another sentence that includes sentence (c). B. What does this set of sentences reveal about the nature of language? C. How is this characteristic of human language related to the difference between linguistic competence and performance? (Hint: Review these concepts in chapter 6.) 3. Paraphrase each of the following sentences in two ways to show that you understand the ambiguity involved: Example: Smoking grass can be nauseating. i. Putting grass in a pipe and smoking it can make you sick. ii. Fumes from smoldering grass can make you sick. a. b. c. d. e. f. g. h. i. j. Dick finally decided on the boat. The professor’s appointment was shocking. The design has big squares and circles. That sheepdog is too hairy to eat. Could this be the invisible man’s hair tonic? The governor is a dirty street fighter. I cannot recommend him too highly. Terry loves his wife and so do I. They said she would go yesterday. No smoking section available. 4. A. Consider the following baseball joke (knowledge of baseball required): Catcher to pitcher: “Watch out for this guy, he’s a great fastball hitter.” Pitcher to catcher: “No problem. There’s no way I’ve got a great fastball.” Exercises Explain the humor either by paraphrasing, or even better, with a tree structure like the one we used early in the chapter for old men and women without the syntactic categories. B. Do the same for the advertising executive’s (honest?) claim that the new magazine “has between one and two billion readers.” 5. Draw two phrase structure trees representing the two meanings of the sentence “The magician touched the child with the wand.” Be sure you indicate which meaning goes with which tree. 6. Draw the subtrees for the italicized NPs in the following sentences: a. Every child’s mother hopes he will be happy. 
b. The big dog’s bone is buried in the garden. c. Angry men in dark glasses roamed the streets. d. My aunt and uncle’s trip to Alaska was wonderful. e. Challenge exercises: Whose dirty underwear is this? f. The boy’s dog’s bone is in the pantry. (Hint: Use the rules NP → Det N', Det → NP poss, NP → N'.) 7. In all languages, sentences can occur within sentences. For example, in exercise 2, sentence (b) contains sentence (a), and sentence (c) contains sentence (b). Put another way, sentence (a) is embedded in sentence (b), and sentence (b) is embedded in sentence (c). Sometimes embedded sentences appear slightly changed from their normal form, but you should be able to recognize and underline the embedded sentences in the following examples. Underline in the non-English sentences, when given, not in the translations (the first one is done as an example): a. Yesterday I noticed my accountant repairing the toilet. b. Becky said that Jake would play the piano. c. I deplore the fact that bats have wings. d. That Guinevere loves Lorian is known to all my friends. e. Who promised the teacher that Maxine wouldn’t be absent? f. It’s ridiculous that he washes his own Rolls-Royce. g. The woman likes for the waiter to bring water when she sits down. h. The person who answers this question will win $100. i. The idea of Romeo marrying a 13-year-old is upsetting. j. I gave my hat to the nurse who helped me cut my hair. k. For your children to spend all your royalty payments on recreational drugs is a shame. l. Give this fork to the person I’m getting the pie for. m. khǎw chyâ waǎ khruu maa. (Thai) He believe that teacher come He believes that the teacher is coming. n. Je me demande quand il partira. (French) I me ask when he will leave I wonder when he’ll leave. o. Jan zei dat Piet dit boek niet heeft gelezen. (Dutch) Jan said that Piet this book not has read Jan said that Piet has not read this book. 131 132 CHAPTER 2 Syntax: The Sentence Patterns of Language 8. Following the patterns of the various tree examples in the text, draw phrase structure trees for the following sentences. (Hint: You may omit the N' level whenever N' dominates a single N, so that, for example, the puppy has the structure NP 2 Det a. b. c. d. e. f. g. h. i. j. k. l. N The puppy found the child. A frightened passenger landed the crippled airliner. The house on the hill collapsed in the wind. The ice melted. The hot sun melted the ice. A fast car with twin cams sped by the children on the grassy lane. The old tree swayed in the wind. Challenge exercise: The children put the toy in the box. The reporter realized that the senator lied. Broken ice melts in the sun. My guitar gently weeps. A stranger cleverly observed that a dangerous spy from the CIA lurks in the alley by the old tenement. (Hint: See footnote 1, page 113.) 9. Use the rules on page 110 to create five phrase structure trees of 6, 7, 8, 9, and 10 words. Use your mental lexicon to fill in the bottom of the tree. 10. We stated that the rules of syntax specify all and only the grammatical sentences of the language. Why is it important to say “only”? What would be wrong with a grammar that specified as grammatical sentences all of the truly grammatical ones plus a few that were not grammatical? 11. In this chapter we introduced X-bar theory, according to which each phrase has three levels of structure. a. Draw the subtree corresponding to each phrasal category, NP, AdjP, VP, PP, as it would look according to X-bar notation. b. 
Challenge exercise: What would the structure of CP be according to X-bar notation? c. Further challenge: Give a sample phrase structure for each tree that fully exploits its entire structure—e.g., the father of the bride for the NP. 12. Using one or more of the constituency tests (i.e., stand alone, move as a unit, replacement by a pronoun) discussed in the chapter, determine which of the boldfaced portions in the sentences are constituents. Provide the grammatical category of the constituents. a. Martha found a lovely pillow for the couch. b. The light in this room is terrible. c. I wonder if Bonnie has finished packing her books. Exercises d. e. f. g. Melissa slept in her class. Pete and Max are fighting over the bone. I gave a bone to Pete and to Max yesterday. I gave a bone to Pete and to Max yesterday. 13. The two sentences below contain a verbal particle: i. He ran up the bill. ii. He ran the bill up. The verbal particle up and the verb run depend on each other for the unique idiosyncratic meaning of the phrasal verb run up. (Running up a bill involves neither running nor the location up.) We showed earlier that in such cases the particle and object do not form a constituent, hence they cannot move as a unit: iii. *Up the bill, John ran (compare this to Up the hill John ran). a. Using adverbs such as completely, show that the particle forms a constituent with the verb in [run up] the bill, while in run [up the hill], the preposition and NP object form a constituent. b. Now consider the following data: i. Michael ran up the hill and over the bridge. ii. *Michael ran up the bill and off his mouth. iii. Michael ran up the bill and ran off his mouth. Use the data to argue that expressions like up the bill and off his mouth are not constituents. 14. In terms of c-selection restrictions, explain why the following are ungrammatical: a. *The man located. b. *Jesus wept the apostles. c. *Robert is hopeful of his children. d. *Robert is fond that his children love animals. e. *The children laughed the man. 15. In the chapter, we looked at transitive verbs that select a single NP direct object like chase. English also has ditransitive verbs, ones that may be followed by two NPs, such as give: The emperor gave the vassal a castle. Think of three other ditransitive verbs in English and give example sentences. 16. For each verb, list the different types of complements it selects and provide an example of each type: a. want b. force c. try d. believe e. say 133 134 CHAPTER 2 Syntax: The Sentence Patterns of Language 17. Tamil is a language spoken in India by upward of 70 million people. Others, but not you, may find that they talk “funny,” as illustrated by wordfor-word translations of PPs from Tamil to English: a. Tamil to English Meaning the bed on the village from “on the bed” “from the village” i. Based on these data, is Tamil a head initial or a head final language? ii. What would the phrase structure rule for PP look like in Tamil? b. Here are two more word-for-word glosses: she is a poet that think the cobra is deadly that know “think that she is a poet” “know that the cobra is deadly” i. Do these further data support or detract from your analysis in part (a)? ii. What would the pertinent VP and CP rules look like in Tamil, based on these data? c. Give a word-for-word translation from Tamil of airplane on the runway and suppose that cobras spit. d. Challenge exercise: Same as (c) for: believe that she sits by the well. 18. All wh phrases can move to the left periphery of the sentence. a. 
Invent three sentences beginning with what, which, and where, in which the wh word is not in its d-structure position in the sentence. Give both the s-structure and d-structure versions of your sentence. For example, using when: When could Marcy catch a flight out of here? from Marcy could catch a flight out of here when? b. Draw the phrase structure tree for one of these sentences using the phrase structure and movement rules provided in the chapter. c. Challenge exercise: How could you reformulate the movement rules used to derive a wh question such as What has Mary done with her life? using an X-bar CP structure (see question 11)? 19. There are many systematic, structure-dependent relationships among sentences similar to the one discussed in the chapter between declarative and interrogative sentences. Here is another example based on ditransitive verbs (see exercise 15): The boy wrote the senator a letter. The boy wrote a letter to the senator. A philanthropist gave the animal rights movement $1 million. A philanthropist gave $1 million to the animal rights movement. a. Describe the relationship between the first and second members of the pairs of sentences. b. State why a transformation deriving one of these structures from the other is plausible. Exercises 20. State at least three differences between English and the following languages, using just the sentence(s) given. Ignore lexical differences (i.e., the different vocabulary). Here is an example: Thai: dèg khon níi kamlang kin. boy classifier this progressive eat “This boy is eating.” mǎa tua nán kin khâaw. dog classifier that eat rice “That dog ate rice.” Three differences are (1) Thai has “classifiers.” They have no English equivalent. (2) The words (determiners, actually) “this” and “that” follow the noun in Thai, but precede the noun in English. (3) The “progressive” is expressed by a separate word in Thai. The verb does not change form. In English, the progressive is indicated by the presence of the verb to be and the adding of -ing to the verb. a. French cet homme intelligent comprendra la question. this man intelligent will understand the question “This intelligent man will understand the question.” ces hommes intelligents comprendront les questions. these men intelligent will understand the questions “These intelligent men will understand the questions.” b. Japanese watashi ga sakana o tabete iru. I subject fish object eat (ing) am marker marker “I am eating fish.” c. Swahili mtoto alivunja kikombe. mtoto a- livunja kikombe class child he past break class cup marker marker “The child broke the cup.” watoto wanavunja vikombe. watoto wanavunja vikombe class child they present break class cup marker marker “The children break the cups.” d. Korean kɨ sonyɔn-iee wɨyu-lɨl masi-ass-ta. kɨ sonyɔniee wɨyu- lɨl masi- assta the boy subject milk object drink past assertion marker marker “The boy drank milk.” 135 136 CHAPTER 2 Syntax: The Sentence Patterns of Language kɨ-nɨn muɔs-ɨl kɨ nɨn muɔsɨl he subject what object marker marker “What did he eat?” e. Tagalog nakita ni Pedro-ng nakita ni Pedro -ng saw article Pedro that mɔk-ass-nɨnya. mɔk- ass- nɨnya eat past question puno puno full na na already ang ang topic marker bus. bus. bus “Pedro saw that the bus was already full.” 21. Transformations may delete elements. For example, the s-structure of the ambiguous sentence “George wants the presidency more than Martha” may be derived from two possible d-structures: a. George wants the presidency more than he wants Martha. b. 
George wants the presidency more than Martha wants the presidency. A deletion transformation either deletes he wants from the structure of example (a), or wants the presidency from the structure of example (b). This is a case of transformationally induced ambiguity: two different d-structures with different semantic interpretations are transformed into a single s-structure. Explain the role of a deletion transformation similar to the ones just discussed in the following humorous dialogue between “two old married folks.” he: Do you still love me as much as you used to? she: As much as I used to what? 22. Challenge exercise: Compare the following French and English sentences: French English Jean boit toujours du vin. John always drinks some wine. Jean drinks always some wine *John drinks always some wine (*Jean toujours boit du vin) Marie lit jamais le journal. Mary never reads the newspaper. Marie reads never the newspaper *Mary reads never the newspaper. (*Marie jamais lit le journal) Pierre lave souvent ses chiens. Peter often washes his dogs. Pierre washes often his dogs *Peter washes often his dogs. (*Pierre souvent lave ses chiens.) a. Based on the above data, what would you hypothesize concerning the position of adverbs in French and English? b. Now suppose that UG specifies that in all languages adverbs of frequency (e.g., always, never, often, sometimes) immediately precede the VP, as in the following tree. What rule would you need to hypothesize to derive the correct surface word order for French? (Hint: Adverbs are not allowed to move.) Exercises S NP 2 @ John Jean VP 2 Aux g pres. VP 2 Adv g always toujours VP 2 V g drinks boit NP @ wine du vin c. Do any verbs in English follow the same pattern as the French verbs? 23. a. Give the tree corresponding to the underlined portion of the sentence The hole should have been being filled by the workcrew. b. Give the tree corresponding to the VP cursed the day I was born the day I was born. Which must come first, the AdvP or the NP? (You needn’t worry about the internal structure of the AdvP or NP.) 24. Show that an embedded CP is a constituent by applying the constituency tests (stand alone, move as a unit, and replace with a pronoun). Consider the following sentences in formulating your answer, and provide further examples if you can. (The boldfaced words are the CP.) Sam asked if he could play soccer. I wonder whether Michael walked the dog. Cher believes that the students know the answer. It is a problem that Sam broke his arm. 25. Challenge exercise: a. Give the d-structure tree for Which dog does Michael think loves bones? (Hint: The complementizer that must be present.) b. Give the d-structure tree for What does Michael think that his dog loves? c. Consider these data: i. *Which dog does Michael think that loves bones? ii. What does Michael think his dog loves? In (ii) a complementizer deletion rule has deleted that. The rule is optional because the sentence is grammatical with or without that. In (i), however, the complementizer must be deleted to prevent the ungrammatical sentence from being generated. What factor governs the optionality of the rule? 137 138 CHAPTER 2 Syntax: The Sentence Patterns of Language 26. Dutch and German are Germanic languages related to English, and as in English wh questions are formed by moving a wh phrase to sentence initial position. a. In what way are the rules of question formation in Dutch and German different from English? Base your answer on the following data: German i. 
Dutch Was hat Karl gekauft? Wat heeft Wim gekocht? what has Karl bought what has Wim bought “What has Karl bought?” “What has Wim bought?” ii. Was kauft Karl? Wat koopt Wim? What buys Karl what buys Wim “What does Karl buy?” “What does Wim buy?” iii. Kauft Karl das Buch? Koopt Wim het boek? buys Karl the book buys Wim the book “Does Karl buy the book?” “Does Wim buy the book?” b. Challenge exercise: Consider the following declarative sentences in Dutch and German: iv. Karl kaufte das Buch. Wim kocht het boek. Karl bought the book Wim bought the book “Karl bought the book.” “Wim bought the book.” v. Das Buch kaufte Karl. Het boek kocht Wim. The book bought Karl the book bought Wim “Karl bought the book.” “Wim bought the book.” vi. Das Buch kaufte Karl gestern. the book bought Karl yesterday “Karl bought the book yesterday.” Het boek kocht Wim gisteren. the book bought Wim yesterday “Wim bought the book yesterday.” vii. Gestern kaufte Karl das Buch Yesterday bought Karl the book “Yesterday Karl bought the book.” Gisteren kocht Wim het boek. yesterday bought Wim the book “Yesterday Wim bought the book.” What rules derive the different word order in declarative sentences? (Hint: There are two rules, one involving movement of the verb, and the other movement of an XP.) c. Are either of the rules in (b) familiar from the German/Dutch questions in (i)–(iii)? 3 The Meaning of Language Surely all this is not without meaning. HERMAN MELVILLE, Moby-Dick, 1851 For thousands of years philosophers have pondered the meaning of meaning, yet speakers of a language can easily understand what is said to them and can produce strings of words that are meaningful to other speakers. We use language to convey information to others (My new bike is pink), ask questions (Who left the party early?), give commands (Stop lying!), and express wishes (May there be peace on earth). What do you know about meaning when you know a language? To begin with, you know when a “word” is meaningful (flick) or meaningless (blick), and you know when a “sentence” is meaningful (Jack swims) or meaningless (swims metaphorical every). You know when a word has two meanings (bear) and when a sentence has two meanings (Jack saw a man with a telescope). You know when two words have the same meaning (sofa and couch), and when two sentences have the same meaning (Jack put off the meeting, Jack put the meeting off). And you know when words or sentences have opposite meanings (alive/ dead; Jack swims/Jack doesn’t swim). You generally know the real-world object that words refer to like the chair in the corner; and even if the words do not refer to an actual object, such as the unicorn behind the bush, you still have a sense of what they mean, and if the particular object happened to exist, you would have the knowledge to identify it. You know, or have the capacity to discover, when sentences are true or false. That is, if you know the meaning of a sentence, you know its truth conditions. In some cases it’s obvious, or redundant (all kings are male [true], all bachelors are 139 140 CHAPTER 3 The Meaning of Language married [false]); in other cases you need some further, nonlinguistic knowledge (Molybdenum conducts electricity), but by knowing the meaning, you know the kind of world knowledge that is needed. Often, if you know that a sentence is true (Nina bathed her dogs), you can infer that another sentence must also be true (Nina’s dogs got wet), that is, the first sentence entails the second sentence. 
All of this knowledge about meaning extends to an unlimited set of sentences, just like our syntactic knowledge, and is part of the grammar of the language. Part of the job of the linguist is to reveal and make explicit this knowledge about meaning that every speaker has. The study of the linguistic meaning of morphemes, words, phrases, and sentences is called semantics. Subfields of semantics are lexical semantics, which is concerned with the meanings of words, and the meaning relationships among words; and phrasal or sentential semantics, which is concerned with the meaning of syntactic units larger than the word. The study of how context affects meaning—for example, how the sentence It’s cold in here comes to be interpreted as “close the windows” in certain situations—is called pragmatics. What Speakers Know about Sentence Meaning Language without meaning is meaningless. ROMAN JAKOBSON In this section we discuss the linguistic knowledge you have that permits you to determine whether a sentence is true or false, when one sentence implies the truth or falsehood of another, and whether a sentence has multiple meanings. One way to account for this knowledge is by formulating semantic rules that build the meaning of a sentence from the meaning of its words and the way the words combine syntactically. This is often called truth-conditional semantics because it takes speakers’ knowledge of truth conditions as basic. It is also called compositional semantics because it calculates the truth value of a sentence by composing, or putting together, the meaning of smaller units. We will limit our discussion to declarative sentences like Jack swims or Jack kissed Laura, because we can judge these kinds of sentences as either true or false. At least part of their meaning, then, will be their truth value. Truth . . . Having Occasion to talk of Lying and false Representation, it was with much Difficulty that he comprehended what I meant. . . . For he argued thus: That the Use of Speech was to make us understand one another and to receive Information of Facts; now if any one said the Thing which was not, these Ends were defeated; because I cannot properly be said to understand him. . . . And these were all the Notions he had concerning that Faculty of Lying, so perfectly well understood, and so universally practiced among human Creatures. JONATHAN SWIFT, Gulliver’s Travels, 1726 What Speakers Know about Sentence Meaning Let’s begin by returning to Jack, who is swimming in the pool. If you are poolside and you hear the sentence Jack swims, and you know the meaning of that sentence, then you will judge the sentence to be true. On the other hand, if you are indoors and you happen to believe that Jack never learned to swim, then when you hear the very same sentence Jack swims, you will judge the sentence to be false and you will think the speaker is misinformed or lying. More generally, if you know the meaning of a sentence, then you can determine under what conditions it is true or false. You do not need to actually know whether a sentence is true or false to know its meaning. Knowing the meaning tells you how to determine the truth value. The sentence copper conducts electricity has meaning and is perfectly understood precisely because we know how to determine whether it’s true or false. Knowing the meaning of a sentence, then, means knowing under what circumstances it would be true or false according to your knowledge of the world, namely its truth conditions. 
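The idea that knowing a meaning amounts to knowing truth conditions can be pictured with a very small sketch. In the Python fragment below (a toy illustration, not part of the chapter's formal machinery; the two situations and the fact encoding are invented for the example), a situation is modeled as a set of facts, and the meaning of Jack swims as a rule that assigns the sentence TRUE or FALSE in any situation.

```python
# A situation (state of affairs) is modeled here, very crudely, as a set of facts.
poolside = {("swims", "Jack"), ("swims", "Laura")}
indoors  = {("sings", "Jack")}

# The meaning of "Jack swims" is its truth condition: a rule that tells us,
# for any given situation, whether the sentence is true in that situation.
def jack_swims(situation):
    return ("swims", "Jack") in situation

print(jack_swims(poolside))  # True:  the sentence is true in this situation
print(jack_swims(indoors))   # False: the same sentence, judged against a different situation
```

Knowing the rule is knowing the meaning; knowing which situation actually obtains is world knowledge, which is why we can understand Molybdenum conducts electricity without knowing whether it is true.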
Reducing the question of meaning to the question of truth conditions has proved to be very fruitful in understanding the semantic properties of language. For most sentences it does not make sense to say that they are always true or always false. Rather, they are true or false in a given situation, as we previously saw with Jack swims. But a restricted number of sentences are indeed always true regardless of the circumstances. They are called tautologies. (The term analytic is also used for such sentences.) Examples of tautologies are sentences like Circles are round or A person who is single is not married. Their truth is guaranteed solely by the meaning of their parts and the way they are put together. Similarly, some sentences are always false. These are called contradictions. Examples of contradictions are sentences like Circles are square or A bachelor is married. Entailment and Related Notions You mentioned your name as if I should recognize it, but beyond the obvious facts that you are a bachelor, a solicitor, a Freemason, and an asthmatic, I know nothing whatever about you. SIR ARTHUR CONAN DOYLE, “The Norwood Builder,” in The Memoirs of Sherlock Holmes, 1894 Much of what we know is deduced from what people say alongside our observations of the world. As we can deduce from the quotation, Sherlock Holmes took deduction to the ultimate degree. Often, deductions can be made based on language alone. If you know that the sentence Jack swims beautifully is true, then you also know that the sentence Jack swims must also be true. This meaning relation is called entailment. We say that Jack swims beautifully entails Jack swims. More generally, one sentence entails another if whenever the first sentence is true the second one is also true, in all conceivable circumstances. Generally, entailment goes only in one direction. So while the sentence Jack swims beautifully entails Jack swims, the reverse is not true. Knowing merely that 141 142 CHAPTER 3 The Meaning of Language Jack swims is true does not necessitate the truth of Jack swims beautifully. Jack could be a poor swimmer. On the other hand, negating both sentences reverses the entailment. Jack doesn’t swim entails Jack doesn’t swim beautifully. The notion of entailment can be used to reveal knowledge that we have about other meaning relations. For example, omitting tautologies and contradictions, two sentences are synonymous (or paraphrases) if they are both true or both false with respect to the same situations. Sentences like Jack put off the meeting and Jack postponed the meeting are synonymous, because when one is true the other must be true; and when one is false the other must also be false. We can describe this pattern in a more concise way by using the notion of entailment: Two sentences are synonymous if they entail each other. Thus if sentence A entails sentence B and vice versa, then whenever A is true B is true, and vice versa. Although entailment says nothing specifically about false sentences, it’s clear that if sentence A entails sentence B, then whenever B is false, A must be false. (If A were true, B would have to be true.) And if B also entails A, then whenever A is false, B would have to be false. Thus mutual entailment guarantees identical truth values in all situations; the sentences are synonymous. Two sentences are contradictory if, whenever one is true, the other is false or, equivalently, there is no situation in which they are both true or both false. 
For example, the sentences Jack is alive and Jack is dead are contradictory because if the sentence Jack is alive is true, then the sentence Jack is dead is false, and vice versa. In other words, Jack is alive and Jack is dead have opposite truth values. Like synonymy, contradiction can be reduced to a special case of entailment. Two sentences are contradictory if one entails the negation of the other. For instance, Jack is alive entails the negation of Jack is dead, namely Jack is not dead. Similarly, Jack is dead entails the negation of Jack is alive, namely Jack is not alive. The notions of contradiction (always false) and contradictory (opposite in truth value) are related in that if two sentences are contradictory, their conjunction with and is a contradiction. Thus Jack is alive and Jack is dead is a contradiction; it cannot be true under any circumstances. Ambiguity Let’s pass gas. SEEN ON A SIGN IN THE LUNCHROOM OF AN ELECTRIC UTILITY COMPANY Our semantic knowledge tells us when words or phrases (including sentences) have more than one meaning, that is, when they are ambiguous. In chapter 2 we saw that the sentence The boy saw the man with a telescope was an instance of structural ambiguity. It is ambiguous because it can mean that the boy saw the man by using a telescope or that the boy saw the man who was holding a telescope. The sentence is structurally ambiguous because it is associated with two What Speakers Know about Sentence Meaning different phrase structures, each corresponding to a different meaning. Here are the two structures: (1) NP Det g The S 5 N g boy VP 5 VP V g saw NP 2 N g man Det g the P g with PP NP Det g a N g telescope (2) S NP Det g The VP N g V g NP NP 2 Det N g g PP boy saw the P g man with NP Det g a N g telescope In (1) the PP with a telescope modifies the VP, and the interpretation is that the action of seeing occurred by use of a telescope. In (2) the PP with a telescope modifies the NP the man, and the interpretation is that the man has the telescope. Lexical ambiguity arises when at least one word in a phrase has more than one meaning. For instance the sentence This will make you smart is ambiguous because of the two meanings of the word smart: “clever” or “burning sensation.” Our knowledge of lexical and structural ambiguities reveals that the meaning of a linguistic expression is built both on the words it contains and its syntactic structure. The notion that the meaning of an expression is composed of the meanings of its parts and how they are combined structurally is referred to as the principle of compositionality. In the next section we discuss the rules by which the meaning of a phrase or sentence is determined based on its composition. 143 144 CHAPTER 3 The Meaning of Language Compositional Semantics To account for speakers’ knowledge of grammaticality, constituent structure, and relations between sentences, as well as for the limitless creativity of our linguistic competence, we concluded (chapter 2) that the grammar must contain syntactic rules. To account for speaker’s knowledge of the truth, reference, entailment, and ambiguity of sentences, as well as for our ability to determine the meaning of a limitless number of expressions, we must suppose that the grammar contains semantic rules that combine the meanings of words into meaningful phrases and sentences. Semantic Rules In the sentence Jack swims, we know that the word Jack, which is a proper name, refers to a precise object in the world, which is its referent. 
For instance, in the scenario given earlier, the referential meaning of Jack is the guy who is your friend and who is swimming happily in the pool right now. Based on this, we conclude that the meaning of the name Jack is the individual it refers to. What about the meaning of the verb swim? Part of its meaning is the group or set of individuals (human beings and animals) that swim. You will see in a moment how this aspect of the meaning of swim helps us understand sentences in a way that accords with our semantic knowledge. Our semantic rules must be sensitive not only to the meaning of individual words but to the structure in which they occur. Taking as an example our simple sentence Jack swims, let us see how the semantic rules compute its meaning. The meanings of the individual words are summarized as follows:

Word      Meanings
Jack      refers to (or means) the individual Jack
swims     refers to (or means) the set of individuals that swim

The phrase structure tree for our sentence is as follows:

[S [NP Jack] [VP swims]]

The tree tells us that syntactically the NP Jack and the VP swims combine to form a sentence. We want to mirror that combination at the semantic level: in other words, we want to combine the meaning of the NP Jack (an individual) and the meaning of the VP swims (a set of individuals) to obtain the meaning of the S Jack swims. This is done by means of Semantic Rule I.

Semantic Rule I
The meaning of [S NP VP] is the following truth condition: If the meaning of NP (an individual) is a member of the meaning of VP (a set of individuals), then S is TRUE, otherwise it is FALSE.

Rule I states that a sentence composed of a subject NP and a predicate VP is true if the subject NP refers to an individual who is among the members of the set that constitute the meaning of the VP. This rule is entirely general; it does not refer to any particular sentence, individuals, or verbs. It works equally well for sentences like Ellen sings or Max barks. Thus the meaning of Max barks is the truth condition (i.e., the "if-sentence") that states that the sentence is true if the individual denoted by Max is among the set of barking individuals.

Let us now try a slightly more complex case: the sentence Jack kissed Laura. The main syntactic difference between this example and the previous one is that we now have a transitive verb that requires an extra NP in object position; otherwise our semantic rules will derive the meaning using the same mechanical procedure as in the first example. We again start with the word meaning and syntactic structure:

Word      Meanings
Jack      refers to (or means) the individual Jack
Laura     refers to (or means) the individual Laura
kissed    refers to (or means) the set of pairs of individuals X and Y such that X kissed Y.

Here is the phrase structure tree.

[S [NP Jack] [VP [V kissed] [NP Laura]]]

The meaning of the transitive verb kiss is still a set, but this time a set of pairs of individuals. The meaning of the VP, however, is still a set of individuals, namely those individuals who kissed Laura. This may be expressed formally in Semantic Rule II.

Semantic Rule II
The meaning of [VP V NP] is the set of individuals X such that X is the first member of any pair in the meaning of V whose second member is the meaning of NP.

The meaning of the sentence is derived by first applying Semantic Rule II, which establishes the meaning of the VP as a certain set of individuals, namely those who kissed Laura.
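Semantic Rules I and II can also be written out as a short computational sketch. The fragment below is a minimal illustration under the set-theoretic assumptions of this section; the particular individuals, the facts about who swims or kissed whom, and the function names rule_I and rule_II are invented for the example.

```python
# Word meanings, as in the tables above (the particular facts are made up).
JACK, LAURA, MAX = "Jack", "Laura", "Max"
swims  = {JACK, LAURA}                  # the set of individuals that swim
kissed = {(JACK, LAURA), (MAX, LAURA)}  # the set of pairs (X, Y) such that X kissed Y

def rule_II(verb_pairs, object_meaning):
    """Semantic Rule II: the meaning of [VP V NP] is the set of individuals X
    such that the pair (X, meaning of NP) is in the meaning of V."""
    return {x for (x, y) in verb_pairs if y == object_meaning}

def rule_I(subject_meaning, vp_meaning):
    """Semantic Rule I: [S NP VP] is TRUE iff the meaning of NP is a member
    of the meaning of VP."""
    return subject_meaning in vp_meaning

print(rule_I(JACK, swims))                   # Jack swims        -> True
print(rule_I(JACK, rule_II(kissed, LAURA)))  # Jack kissed Laura -> True
print(rule_I(LAURA, rule_II(kissed, JACK)))  # Laura kissed Jack -> False
```

Notice that neither function mentions any particular word: rule_I works just as well for Ellen sings or Max barks, and rule_II for any transitive verb whose meaning is given as a set of pairs.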
Now Semantic Rule I applies without further ado and gives the meaning of the sentence as the truth condition that determines S to be true whenever the meaning of Jack is a member of the set that is the meaning of the VP kissed Laura. In other words, S is true if Jack kissed Laura and false otherwise. These two semantic rules handle an essentially infinite number of intransitive and transitive sentences. One last example will illustrate how the semantic knowledge of entailment may be represented in the grammar. Consider Jack swims beautifully, and consider further the meaning of the adverb beautifully. Its meaning is clearly not an individual or a set of individuals. Rather, the meaning of beautifully is an operation that reduces the size of the sets that are the meanings of verb phrases. When applied to the meaning of swims, it reduces the set of individuals who swim to the smaller set of those who swim beautifully. We won’t express this rule formally, but it is now easy to see one source of entailment. The truth conditions that make Jack swims beautifully true are narrower than the truth conditions that make Jack swims true by virtue of the fact that among the individuals who swim, fewer of them swim beautifully. Therefore, any truth condition that causes Jack swims beautifully to be true necessarily causes Jack swims to be true, hence Jack swims beautifully entails Jack swims. These rules, and many more like them, account for our knowledge about the truth value of sentences by taking the meanings of words and combining them according to the syntactic structure of the sentence. It is easy to see from these examples how ambiguous meanings arise. Because the meaning of a sentence is computed based on its hierarchical organization, different trees will have different meanings—structural ambiguity—even when the words are the same, as in the example The boy saw the man with a telescope. The occurrence of an ambiguous word—lexical ambiguity—when it combines with the other elements of a sentence, can make the entire sentence ambiguous, as in She can’t bear children. The semantic theory of sentence meaning that we just sketched is not the only possible one, and it is also incomplete, as shown by the paradoxical sentence This sentence is false. The sentence cannot be true, else it’s false; it cannot be false, else it’s true. Therefore it has no truth value, though it certainly has meaning. This notwithstanding, compositional truth-conditional semantics has proven to be an extremely powerful and useful tool for investigating the semantic properties of natural languages. When Compositionality Goes Awry A loose sally of the mind; an irregular undigested piece; not a regular and orderly composition. SAMUEL JOHNSON (1709–1784) The meaning of an expression is not always obvious, even to a native speaker of the language. Meanings may be obscured in many ways, or at least may require some imagination or special knowledge to be apprehended. Poets, pundits, and yes, even professors can be difficult to understand. In the previous sections we saw that semantic rules compute sentence meaning compositionally based on the meanings of words and the syntactic structure that Compositional Semantics contains them. There are, however, interesting cases in which compositionality breaks down, either because there is a problem with words or with the semantic rules. If one or more words in a sentence do not have a meaning, then obviously we will not be able to compute a meaning for the entire sentence. 
Moreover, even if the individual words have meaning but cannot be combined together as required by the syntactic structure and related semantic rules, we will also not get to a meaning. We refer to these situations as semantic anomaly. Alternatively, it might require a lot of creativity and imagination to derive a meaning. This is what happens in metaphors. Finally, some expressions—called idioms—have a fixed meaning, that is, a meaning that is not compositional. Applying compositional rules to idioms gives rise to funny or inappropriate meanings. Anomaly Don’t tell me of a man’s being able to talk sense; everyone can talk sense. Can he talk nonsense? WILLIAM PITT There is no greater mistake in the world than the looking upon every sort of nonsense as want of sense. LEIGH HUNT, “On the Talking of Nonsense,” 1820 The semantic properties of words determine what other words they can be combined with. A sentence widely used by linguists that we encountered in chapter 2 illustrates this fact: Colorless green ideas sleep furiously. The sentence obeys all the syntactic rules of English. The subject is colorless green ideas and the predicate is sleep furiously. It has the same syntactic structure as the sentence Dark green leaves rustle furiously. but there is obviously something semantically wrong with the sentence. The meaning of colorless includes the semantic feature “without color,” but it is combined with the adjective green, which has the feature “green in color.” How can something be both “without color” and “green in color”? Other semantic violations occur in the sentence. Such sentences are semantically anomalous. Other English “sentences” make no sense at all because they include “words” that have no meaning; they are uninterpretable. They can be interpreted only if some meaning for each nonsense word can be dreamt up. Lewis Carroll’s “Jabberwocky” is probably the most famous poem in which most of the content words have no meaning—they do not exist in the lexicon of the grammar. Still, all the sentences sound as if they should be or could be English sentences: ’Twas brillig, and the slithy toves Did gyre and gimble in the wabe; All mimsy were the borogoves, And the mome raths outgrabe. ... 147 148 CHAPTER 3 The Meaning of Language He took his vorpal sword in hand: Long time the manxome foe he sought— So rested he by the Tumtum tree, And stood awhile in thought. Without knowing what vorpal means, you nevertheless know that He took his vorpal sword in hand means the same thing as He took his sword, which was vorpal, in hand. It was in his hand that he took his vorpal sword. Knowing the language, and assuming that vorpal means the same thing in the three sentences (because the same sounds are used), you can decide that the sense—the truth conditions—of the three sentences are identical. In other words, you are able to decide that two things mean the same thing even though you do not know what either one means. You decide by assuming that the semantic properties of vorpal are the same whenever it is used. We now see why Alice commented, when she had read “Jabberwocky”: “It seems very pretty, but it’s rather hard to understand!” (You see she didn’t like to confess, even to herself, that she couldn’t make it out at all.) “Somehow it seems to fill my head with ideas—only I don’t exactly know what they are! However, somebody killed something: that’s clear, at any rate—” Semantic violations in poetry may form strange but interesting aesthetic images, as in Dylan Thomas’s phrase a grief ago. 
Ago is ordinarily used with words specified by some temporal semantic feature: a week ago an hour ago a month ago a century ago but not *a table ago *a dream ago *a mother ago When Thomas used the word grief with ago, he was adding a durational feature to grief for poetic effect, so while the noun phrase is anomalous, it evokes certain feelings. In the poetry of E. E. Cummings, there are phrases like the six subjunctive crumbs twitch. a man . . . wearing a round jeer for a hat. children building this rainman out of snow. Though all of these phrases violate some semantic rules, we can understand them; breaking the rules creates the imagery desired. The fact that we are able to understand, or at least interpret, anomalous expressions, and at the same time recognize their anomalous nature, demonstrates our knowledge of the semantic system and semantic properties of the language. Compositional Semantics Metaphor Our doubts are traitors. WILLIAM SHAKESPEARE, Measure for Measure, c. 1603 Walls have ears. MIGUEL DE CERVANTES, Don Quixote, 1605 The night has a thousand eyes and the day but one. FRANCES WILLIAM BOURDILLON, “Light,” 1873 When what appears to be an anomaly is nevertheless understood in terms of a meaningful concept, the expression becomes a metaphor. There is no strict line between anomalous and metaphorical expressions. Technically, metaphors are anomalous, but the nature of the anomaly creates the salient meanings that metaphors usually have. The anomalous A grief ago might come to be interpreted by speakers of English as “the unhappy time following a sad event” and therefore become a metaphor. Metaphors may have a literal meaning as well as their metaphorical meaning, so in some sense they are ambiguous. However, when the semantic rules are applied to Walls have ears, for example, the literal meaning is so unlikely that listeners use their imagination for another interpretation. The principle of compositionality is very “elastic” and when it fails to produce an acceptable literal meaning, listeners try to accommodate and stretch the meaning. This accommodation is based on semantic properties that are inferred or that provide some kind of resemblance or comparison that can end up as a meaningful concept. This works only up to a certain point, however. It’s not clear what the literal meaning of Our doubts are traitors might be, though the conceptual meaning that the act of doubting a precious belief is self-betrayal seems plausible. To interpret a metaphor we need to understand the individual words, the literal meaning of the whole expression, and facts about the world. To understand the metaphor Time is money it is necessary to know that in our society we are often paid according to the number of hours or days worked. In fact, “time,” which is an abstract concept, is the subject of multiple metaphors. We “save time,” “waste time,” “manage time,” push things “back in time,” live on “borrowed time,” and suffer the “ravages of time” as the “sands of time” drift away. In effect, the metaphors take the abstract concept of time and treat it as a concrete object of value. Metaphor has a strong cultural component. Shakespeare uses metaphors that are lost on many of today’s playgoers. “I am a man whom Fortune hath cruelly scratched,” is most effective as a metaphor in a society like Shakespeare’s that commonly depicts “Fortune” as a woman. 
On the other hand, There's a bug in my program would make little sense in a culture without computers, even if the idea of having bugs in something indicates a problem. Many expressions now taken literally may have originated as metaphors, such as "the fall of the dollar," meaning its decline in value on the world market. Many people wouldn't bat an eyelash (another metaphor) at the literal interpretation of saving or wasting time. Metaphor is one of the factors in language change (see chapter 10). Metaphorical use of language is language creativity at its highest. Nevertheless, the basis of metaphorical use is very much the ordinary linguistic knowledge that all speakers possess about words, their semantic properties, and their combinatorial possibilities.

Idioms

[Hagar the Horrible cartoon © King Features Syndicate]

Because the words (or morphemes) of a language are arbitrary (not predictable by rule), they must be listed in a mental lexicon. The lexicon is a repository of the words (or morphemes) of a language and their meanings. On the other hand, the meanings of morphologically complex words, phrases, and sentences are compositional and are derived by rules. We noted in chapter 1 that the meaning of some words (for example, compounds) is not predictable, so these must also be given in the lexicon. It turns out that languages also contain many phrases whose meanings are not predictable on the basis of the meanings of the individual words. These phrases typically start out as metaphors that "catch on" and are repeated so often that they become fixtures in the language. Such expressions are called idioms, or idiomatic phrases, as in these English examples:

sell down the river
rake over the coals
drop the ball
let their hair down
put his foot in his mouth
throw her weight around
snap out of it
cut it out
hit it off
get it off
bite your tongue
give a piece of your mind

Here is where the usual semantic rules for combining meanings do not apply. The principle of compositionality is superseded by expressions that act very much like individual morphemes in that they are not decomposable, but have a fixed meaning that must be learned. Idioms are similar in structure to ordinary phrases except that they tend to be frozen in form and do not readily undergo rules that change word order or substitution of their parts. Thus, the sentence in (1) has the same structure as the sentence in (2).

1. She put her foot in her mouth.
2. She put her bracelet in her drawer.

But while the sentences in (3) and (4) are clearly related to (2),

3. The drawer in which she put her bracelet was hers.
4. Her bracelet was put in her drawer.

the sentences in (5) and (6) do not have the idiomatic sense of sentence (1), except, perhaps, humorously.

5. The mouth in which she put her foot was hers.
6. Her foot was put in her mouth.

Also, if we know the meaning of (2) and the meaning of the word "necklace" we will immediately understand (7).

7. She put her necklace in the drawer.

But if we try substituting "hand" for "foot" in sentence (1), we do not maintain the idiomatic meaning, but rather have the literal compositional meaning. There are, however, some idioms whose parts can be moved without affecting the idiomatic sense:

The FBI kept tabs on radicals.
Tabs were kept on radicals by the FBI.
Radicals were kept tabs on by the FBI.
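One way to picture the special status of idioms is as whole entries in the lexicon, looked up as units rather than built by rule. The sketch below is only an illustration; the entries, glosses, and the passivizable flag are stand-ins invented for the example, not a claim about how the mental lexicon is organized.

```python
# Illustrative only: idioms stored as whole units with non-compositional meanings.
# The "passivizable" flag records the kind of frozenness discussed above.

idiom_lexicon = {
    "kick the bucket":               {"meaning": "die", "passivizable": False},
    "keep tabs on":                  {"meaning": "monitor", "passivizable": True},
    "put one's foot in one's mouth": {"meaning": "say something embarrassing",
                                      "passivizable": False},
}

def idiomatic_meaning(phrase):
    """Look the phrase up as a single unit; no composition of its parts."""
    entry = idiom_lexicon.get(phrase)
    return entry["meaning"] if entry else None

print(idiomatic_meaning("kick the bucket"))  # die
print(idiomatic_meaning("kick the pail"))    # None: substitution destroys the idiom
```

The flag simply records the kind of frozenness noted above: keep tabs on tolerates passivization, while put one's foot in one's mouth does not.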
Like metaphors, idioms can break the rules on combining semantic properties. The object of eat must usually be something with the semantic feature “edible,” but in He ate his hat. Eat your heart out. this restriction is violated. Idioms often lead to humor: What did the doctor tell the vegetarian about his surgically implanted heart valve from a pig? That it was okay as long as he didn’t “eat his heart out.” They may also be used to create what appear to be paradoxes. In many places such as Times Square in New York, a ball is dropped at midnight on New Year’s Eve. Now, if the person in charge doesn’t drop the ball, then he has “dropped the ball.” And if that person does indeed drop the ball, then he has not “dropped the ball.” Right? Idioms, grammatically as well as semantically, have special characteristics. They must be entered into the lexicon or mental dictionary as single items with their meanings specified, and speakers must learn the special restrictions on their use in sentences. All languages have idioms, but idioms rarely if ever translate word for word from one language to another. Most speakers of American English understand the idiom to kick the bucket as meaning “to die.” The same combination of words in Spanish (patear el cubo) has only the literal meaning of striking a specific bucket with a foot. On the other hand, estirar la pata, literally “to stretch the (animal) leg,” has the idiomatic sense of “to die” in Spanish. Most idioms originate as metaphorical expressions that establish themselves in the language and become frozen in their form and meaning. Lexical Semantics (Word Meanings) “There’s glory for you!” “I don’t know what you mean by ‘glory,’ ” Alice said. Humpty Dumpty smiled contemptuously. Lexical Semantics (Word Meanings) “Of course you don’t—till I tell you. I meant ‘there’s a nice knock-down argument for you!’ ” “But ‘glory’ doesn’t mean ‘a nice knock-down argument,’ ” Alice objected. “When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.” “The question is,” said Alice, “whether you can make words mean so many different things.” LEWIS CARROLL, Through the Looking-Glass, 1871 As just discussed, the meaning of a phrase or sentence is partially a function of the meanings of the words it contains. Similarly, the meaning of morphologically complex words is a function of their component morphemes, as we saw in chapter 1. However, there is a fundamental difference between word meaning—or lexical semantics—and sentence meaning. The meaning of entries in the mental lexicon—be they morphemes, words, compound words, idioms, and so on—is conventional; that is, speakers of a language implicitly agree on their meaning, and children acquiring the language must simply learn those meanings outright. On the other hand, the meaning of most sentences must be constructed by the application of semantic rules. Earlier we discussed the rules of semantic composition. In this section we will talk about word meaning and the semantic relationships that exist between words and morphemes. Although the agreed-upon meaning of a word may shift over time within a language community, we are not free as individuals to change the meanings of words at will; if we did, we would be unable to communicate with each other. Humpty Dumpty seems unwilling to accept this convention, though fortunately for us there are few Humpty Dumptys. 
All the speakers of a language share a basic vocabulary—the sounds and meanings of morphemes and words. Each of us knows the meanings of thousands of words. This knowledge permits us to use words to express our thoughts and to understand the thoughts of others. The meaning of words is part of linguistic knowledge. Your mental storehouse of information about words and morphemes is what we have been calling the lexicon. Dictionaries such as the Oxford English Dictionary (OED) or Webster’s Collegiate Dictionary are filled with words and their meanings. Dictionaries give the meaning of words using other words rather than in terms of some more basic units of meaning, whatever they might be. In this sense a dictionary really provides paraphrases rather than meanings. It relies on our knowledge of the language to understand the definitions. The meanings associated with words in our mental lexicon are probably not like what we find in the OED or Webster’s, although it is admittedly very difficult to specify precisely how word meanings are represented in the mind. Theories of Word Meaning It is natural . . . to think of there being connected with a sign . . . besides . . . the reference of the sign, also what I should like to call the sense of the sign. . . . GOTTLOB FREGE, “On Sense and Reference,” 1892 153 154 CHAPTER 3 The Meaning of Language If the meaning of a word is not like a dictionary entry, what is it? This question has been debated by philosophers and linguists for centuries. One proposal is that the meaning of a word or expression is its reference, its association with the object it refers to. This real world object is called the referent. Reference © The New Yorker Collection 1992 Michael Maslin from cartoonbank.com. All Rights Reserved. We have already determined that the meaning of proper names like Jack is its reference, that link between the word Jack and the person named Jack, which is its referent. Proper names are noun phrases (NPs); you can substitute a proper name in any NP position in a sentence and preserve grammaticality. There are other NPs that refer to individuals as well. For instance, NPs like the happy swimmer, my friend, and that guy can all be used to refer to Jack in the situation where you’ve observed Jack swimming. The same is true for pronouns such as I, you, and him, which also function as NPs. In all these cases, the reference of the NP—which singles out the individual referred to under the circumstances—is part of the meaning of the NP. On the other hand, not every NP refers to an individual. For instance, the sentence No baby swims contains the NP no baby, but your linguistic knowledge tells you that this NP does not refer to any specific individual. If no baby has no reference, but is not meaningless, then something about meaning beyond reference must be present. Lexical Semantics (Word Meanings) Also in support of that “extra something” is our knowledge that, while under certain circumstances the happy swimmer and Jack may have the same reference in that both expressions are associated with the same referent, the former has some further meaning. To see this, we observe that the happy swimmer is happy is a tautology—true in every conceivable situation, but Jack is happy is not a tautology, for there are circumstances under which that sentence might be false. Sense If meaning were reference alone, then the meaning of words and expressions would be entirely dependent on the objects pointed out in the real world. 
For example, the meaning of dog would be tied to the set of canine objects. This theory of word meaning is attractive because it underscores the idea that meaning is a connection between language on the one hand, and objects and events in the world on the other. An obvious problem for such a theory, however, is that speakers know many words that have no real-world referents (e.g., hobbits, unicorns, and Harry Potter). Yet speakers do know the meanings of these expressions. Similarly, what real-world entities would function words like of and by, or modal verbs such as will or may refer to? A further problem is that two expressions may refer to the same individual but not have the same meaning, as we saw with Jack and the happy swimmer. For another example, Barack Obama and the President currently refer to the same individual, but the meaning of the NP the President is, in addition, something like “the head of state,” which is an element of meaning separate from reference and more enduring. This element of meaning is often termed sense. It is the extra something referred to earlier. Unicorns, hobbits, and Harry Potter have sense but no reference (with regard to objects in the real world). Conversely, proper names typically have only reference. A name like Chris Jones may point out a certain person, its referent, but has little linguistic meaning beyond that. Sometimes two different proper names have the same referent, such as Mark Twain and Samuel Langhorne Clemens, or Unabomber and Theodore Kaczynski. Such pairs of noun phrases are coreferential. It is a hotly debated question in the philosophy of language as to whether coreferential expressions have the same or different senses. Another proposal is that the meaning of a word is the mental image it conjures up in the mind of speakers. This solves the problem of unicorns, hobbits, and Harry Potter; we may have a clear image of these entities from books, movies, and so on, and that connection might serve as reference for those expressions. However, many meaningful expressions are not associated with any clear, unique image agreed on by most speakers of the language. For example, what image is evoked by the expressions very, if, and every? It’s difficult to say, yet these expressions are certainly meaningful. What is the image of oxygen as distinct from nitrogen—both are clear gases, yet they mean very different things. What mental image would we have of dog that is general enough to include Yorkshire Terriers and Great Danes and yet excludes foxes and wolves? Astronauts will likely have a very different mental image of the expression space capsule than the average person, yet non-astronauts and astronauts do communicate with one another if they speak the same language. 155 156 CHAPTER 3 The Meaning of Language Although the idea that the meaning of a word corresponds to a mental image is intuitive (because many words do provoke imagery), it is clearly inadequate as a general explanation of what people know about word meanings. Perhaps the best we can do is to note that the reference part of a word’s meaning, if it has reference at all, is the association with its referent; and the sense part of a word’s meaning contains the information needed to complete the association, and to suggest properties that the referent may have, whether it exists in the real world or in the world of imagination. Lexical Relations Does he wear a turban, a fez or a hat? Does he sleep on a mattress, a bed or a mat, or a Cot, The Akond of Swat? 
Can he write a letter concisely clear,
Without a speck or a smudge or smear or Blot,
The Akond of Swat?

EDWARD LEAR, "The Akond of Swat," in Laughable Lyrics, 1877

Although no theory of word meaning is complete, we know that speakers have considerable knowledge about the meaning relationships among different words in their mental lexicons, and any theory must take that knowledge into account. Words are semantically related to one another in a variety of ways. The words that describe these relations often end in the bound morpheme -nym. The best-known lexical relations are synonyms, illustrated in the poem by Edward Lear, and antonyms or opposites. Synonyms are words or expressions that have the same meaning in some or all contexts. There are dictionaries of synonyms that contain many hundreds of entries, such as:

apathetic/phlegmatic/passive/sluggish/indifferent
pedigree/ancestry/genealogy/descent/lineage

A sign in the San Diego Zoo Wild Animal Park states:

Please do not annoy, torment, pester, plague, molest, worry, badger, harry, harass, heckle, persecute, irk, bullyrag, vex, disquiet, grate, beset, bother, tease, nettle, tantalize, or ruffle the animals.

It has been said that there are no perfect synonyms—that is, no two words ever have exactly the same meaning. Still, the following two sentences have very similar meanings:

He's sitting on the sofa. / He's sitting on the couch.

During the French Norman occupation of England that began in 1066 c.e., many French words of Latin origin were imported into English. As a result, English contains many synonymous pairs consisting of a word with an English (or Germanic) root, and another with a Latin root, such as:

English     Latin
manly       virile
heal        recuperate
send        transmit
go down     descend

Words that are opposite in meaning are antonyms. There are several kinds of antonymy. There are complementary pairs:

alive/dead    present/absent    awake/asleep

They are complementary in that alive = not dead and dead = not alive, and so on. There are gradable pairs of antonyms:

big/small    hot/cold    fast/slow    happy/sad

The meaning of adjectives in gradable pairs is related to the object they modify. The words do not provide an absolute scale. For example, we know that "a small elephant" is much bigger than "a large mouse." Fast is faster when applied to an airplane than to a car. Another characteristic of certain pairs of gradable antonyms is that one is marked and the other unmarked. The unmarked member is the one used in questions of degree. We ask, ordinarily, "How high is the mountain?" (not "How low is it?"). We answer "Ten thousand feet high" but never "Ten thousand feet low," except humorously or ironically. Thus high is the unmarked member of high/low. Similarly, tall is the unmarked member of tall/short, fast the unmarked member of fast/slow, and so on. Another kind of opposite involves pairs like

give/receive    buy/sell    teacher/pupil

They are called relational opposites, and they display symmetry in their meaning. If X gives Y to Z, then Z receives Y from X. If X is Y's teacher, then Y is X's pupil. Pairs of words ending in -er and -ee are usually relational opposites. If Mary is Bill's employer, then Bill is Mary's employee. Some words are their own antonyms. These "autoantonyms" or "contranyms" are words such as cleave "to split apart" or "to cling together" and dust "to remove something" or "to spread something," as in dusting furniture or dusting crops.
Antonymic pairs that are pronounced the same but spelled differently are similar to autoantonyms: raise and raze are one such pair. In English there are several ways to form antonyms. You can add the prefix un-:

likely/unlikely    able/unable    fortunate/unfortunate

or you can add non-:

entity/nonentity    conformist/nonconformist

or you can add in-:

tolerant/intolerant    discreet/indiscreet    decent/indecent

These strategies occasionally backfire, however. Pairs such as loosen and unloosen; flammable and inflammable; valuable and invaluable, and a few other "antiautonyms" actually have the same or nearly the same meaning, despite looking like antonyms. Other lexical relations include homonyms, polysemy, and hyponyms.

[Rhymes with Orange cartoon © Hilary B. Price, King Features Syndicate]

Words like bear and bare are homonyms (also called homophones). Homonyms are words that have different meanings but are pronounced the same, and may or may not be spelled the same. (They're homographs when spelled the same, but when homographs are pronounced differently, like pussy meaning "infected" or pussy meaning "kitten," they are called heteronyms rather than homonyms.) Near-nonsense sentences like Entre nous, the new gnu knew nu is a Greek letter tease us with homonyms. The humor in the cartoon above is based on the homonyms walk and wok. Homonyms can create ambiguity. The sentence:

I'll meet you by the bank.

may mean "I'll meet you by the financial institution" or "I'll meet you by the riverside." Homonyms are good candidates for confusion as well as humor, as illustrated in the following passage from Alice's Adventures in Wonderland:

"How is bread made?"
"I know that!" Alice cried eagerly. "You take some flour—"
"Where do you pick the flower?" the White Queen asked. "In a garden, or in the hedges?"
"Well, it isn't picked at all," Alice explained; "it's ground—"
"How many acres of ground?" said the White Queen.

The confusion and humor are based on the different sets of homonyms: flower and flour, and the two meanings of ground. Alice means ground as the past tense of grind, whereas the White Queen is interpreting ground to mean "earth." When a word has multiple meanings that are related conceptually or historically, it is said to be polysemous (polly-seamus). For example, the word diamond referring to a geometric shape and also to a baseball field that has that shape is polysemous. Open a dictionary of English to any page and you will find words with more than one definition (e.g., guard, finger, overture). Each of these words is polysemous because each has several related meanings. Speakers of English know that the words red, white, and blue are color words. Similarly, lion, tiger, leopard, and lynx are all felines. Such sets of words are called hyponyms. The relationship of hyponymy is between the more general term such as color and the more specific instances of it, such as red. Thus red is a hyponym of color, and lion is a hyponym of feline; or equivalently, color has the hyponym red and feline has the hyponym lion.
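Relations like these can be written down explicitly. The following sketch is illustrative only; the handful of entries and the way each relation is encoded are assumptions made for the example.

```python
# Illustrative only: a few lexical relations recorded explicitly.

synonyms = {"sofa": {"couch"}, "manly": {"virile"}}

antonyms = {
    ("alive", "dead"): "complementary",  # alive = not dead
    ("big", "small"):  "gradable",       # relative to what the adjective modifies
    ("buy", "sell"):   "relational",     # X buys from Y <-> Y sells to X
}

hypernym_of = {"red": "color", "lion": "feline", "lynx": "feline"}

def hyponyms(general_term):
    """All recorded hyponyms (more specific instances) of a general term."""
    return {word for word, general in hypernym_of.items() if general == general_term}

print(synonyms["sofa"])           # {'couch'}
print(antonyms[("buy", "sell")])  # relational
print(hyponyms("feline"))         # {'lion', 'lynx'}
```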
Semantic Features

In the previous sections we discussed word meaning in relation to objects in the world, and this permitted us to develop a truth-based semantics. We also explored the meaning of words in relation to other words. But it is also possible to look for a more basic set of semantic features or properties that are part of word meanings and that reflect our knowledge about what words mean. Decomposing the meanings of words into semantic features can clarify how certain words relate to other words. For example, the basic property of antonyms is that they share all but one semantic feature. We know that big and red are not antonyms because they have too few semantic features in common. They are both adjectives, but big has a semantic feature "about size," whereas red has a semantic feature "about color." On the other hand, buy/sell are relational opposites because both contain a semantic feature like "change in possession," differing only in the direction of the change. Semantic features are among the conceptual elements that are part of the meanings of words and sentences. Consider, for example, the sentence:

The assassin killed Thwacklehurst.

If the word assassin is in your mental dictionary, you know that it was some person who murdered some important person named Thwacklehurst. Your knowledge of the meaning of assassin tells you that an animal did not do the killing, and that Thwacklehurst was not an average citizen. Knowledge of assassin includes knowing that the individual to whom that word refers is human, is a murderer, and is a killer of important people. These bits of information are some of the semantic features of the word on which speakers of the language agree. The meaning of all nouns, verbs, adjectives, and adverbs—the content words—and even some of the function words such as with and over can at least partially be specified by such properties.

Evidence for Semantic Features

Semantic properties are not directly observable. Their existence must be inferred from linguistic evidence. One source of such evidence is the speech errors, or "slips of the tongue," that we all produce. Consider the following unintentional word substitutions that some speakers have actually spoken.

Intended Utterance                     Actual Utterance (Error)
bridge of the nose                     bridge of the neck
when my gums bled                      when my tongues bled
he came too late                       he came too early
Mary was young                         Mary was early
the lady with the Dachshund            the lady with the Volkswagen
that's a horse of another color        that's a horse of another race
his ancestors were farmers             his descendants were farmers
he has to pay her alimony              he has to pay her rent

These errors, and thousands of others that have been collected and catalogued, reveal that the incorrectly substituted words are not random but share some semantic feature with the intended words. Nose, neck, gums, and tongues are all "body parts" or "parts of the head." Young, early, and late are related to "time." Dachshund and Volkswagen are both "German" and "small." The common semantic features of color and race, ancestor and descendant, and alimony and rent are apparent. The semantic properties that describe the linguistic meaning of a word should not be confused with other nonlinguistic properties, such as physical properties. Scientists know that water is composed of hydrogen and oxygen, but such knowledge is not part of a word's meaning. We know that water is an essential ingredient of lemonade and baths. However, we don't need to know any of these things to know what the word water means, and to be able to use and understand it in a sentence.
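A toy version of such feature decomposition makes the idea easier to see. The sketch below is illustrative only; the particular feature labels are informal stand-ins for whatever the real features turn out to be.

```python
# Illustrative only: word meanings decomposed into informal semantic features.
features = {
    "assassin":   {"human", "murderer", "kills important people"},
    "big":        {"adjective", "about size"},
    "small":      {"adjective", "about size"},
    "red":        {"adjective", "about color"},
    "Dachshund":  {"German", "small", "animate"},
    "Volkswagen": {"German", "small", "artifact"},
}

def shared(word1, word2):
    """Features that two words have in common."""
    return features[word1] & features[word2]

# In this toy listing, big and small share all their recorded features; the
# differing value along the size dimension is what makes them antonyms.
print(shared("big", "small"))            # {'adjective', 'about size'}
print(shared("big", "red"))              # {'adjective'}: too little overlap

# Slip-of-the-tongue substitutions tend to share features with the target:
print(shared("Dachshund", "Volkswagen")) # {'German', 'small'}
```

On this way of looking at it, big and small overlap in nearly all their features, big and red share too little to count as antonyms, and Dachshund and Volkswagen share just the sort of features that show up in slips of the tongue.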
Semantic Features and Grammar

[Rhymes with Orange cartoon © Hilary B. Price, King Features Syndicate]

Further evidence that words are composed of smaller bits of meaning is that semantic features interact with different aspects of the grammar such as morphology or syntax. These effects show up in both nouns and verbs.

Semantic Features of Nouns

The same semantic feature may be shared by many words. "Female" is a semantic feature, sometimes indicated by the suffix -ess, that makes up part of the meaning of nouns, such as:

tigress    hen      aunt         maiden
doe        mare     debutante    widow
ewe        vixen    girl         woman

The words in the last two columns are also distinguished by the semantic feature "human," which is also found in:

doctor      professor
bachelor    baby
dean        teenager
parent      child

Another part of the meaning of the words baby and child is that they are "young." (We will continue to indicate words by using italics and semantic features by double quotes.) The word father has the properties "male" and "adult" as do uncle and bachelor. In some languages, though not English, nouns occur with classifiers, grammatical morphemes that indicate the semantic class of the noun. In Swahili a noun that has the semantic feature "human" is prefixed with m- if singular and wa- if plural, as in mtoto (child) and watoto (children). A noun that has the feature "human artifact," such as bed, chair, or knife, is prefixed with the classifiers ki if singular and vi if plural, for example, kiti (chair) and viti (chairs). Semantic properties may have syntactic and semantic effects, too. For example, the kinds of determiners that a noun may occur with are controlled by whether it is a "count" noun or a "mass" noun. Consider these data:

I have two dogs.      *I have two rice(s).
I have a dog.         *I have a rice.
*I have dog.          I have rice.
He has many dogs.     *He has many rice(s).
*He has much dogs.    He has much rice.

Count nouns can be enumerated and pluralized—one potato, two potatoes. They may be preceded by the indefinite determiner a, and by the quantifier many as in many potatoes, but not by much, *much potato. They must also occur with a determiner of some kind. Nouns such as rice, water, and milk, which cannot be enumerated or pluralized, are mass nouns. They cannot be preceded by a or many, and they can occur with the quantifier much or without any determiner at all. The humor of the cartoon is based both on the ambiguity of toast and the fact that as a food French toast is a mass noun, but as an oration it is a count noun. The count/mass distinction captures the fact that speakers know the properties that govern which determiner types go with different nouns. Without it we could not describe these differences. Generally, the count/mass distinction corresponds to the difference between discrete objects and homogeneous substances. But it would be incorrect to say that this distinction is grounded in human perception, because different languages may treat the same object differently. For example, in English the words hair, furniture, and spaghetti are mass nouns. We say Some hair is curly, Much furniture is poorly made, John loves spaghetti. In Italian, however, these words are count nouns, as illustrated in the following sentences:

Ivano ha mangiato molti spaghetti ieri sera.
Ivano ate many spaghettis last evening.

Piero ha comprato un mobile.
Piero bought a furniture.

Luisella ha pettinato i suoi capelli.
Luisella combed her hairs.
We would have to assume a radical form of linguistic determinism (remember the Sapir-Whorf hypothesis from chapter 6) to say that Italian and English speakers have different perceptions of hair, furniture, and spaghetti. It is more reasonable to assume that languages can differ to some extent in the semantic features they assign to words with the same referent, somewhat independently of the way they conceptualize that referent. Even within a particular language we can have different words—count and mass—to describe the same object or substance. For example, in English we have shoes (count) and footwear (mass), coins (count) and change (mass).

Semantic Features of Verbs

Verbs also have semantic features as part of their meaning. For example, "cause" is a feature of verbs such as darken, kill, uglify, and so on.

darken    cause to become dark
kill      cause to die
uglify    cause to become ugly

"Go" is a feature of verbs that mean a change in location or possession, such as swim, crawl, throw, fly, give, or buy:

Jack swims.
The baby crawled under the table.
The boy threw the ball over the fence.
John gave Mary a beautiful engagement ring.

Words like swim have an additional feature like "in liquid," while crawl is "close to a surface." "Become" is a feature expressing the end state of the action of certain verbs. For example, the verb break can be broken down into the following components of meaning: "cause" to "become" broken. Verbal features, like features on nouns, may have syntactic consequences. For example, verbs can either describe events, such as John kissed Mary/John ate oysters, or states, such as John knows Mary/John likes oysters. The eventive/stative difference is mirrored in the syntax. Eventive sentences still sound natural when passivized, when expressed progressively, when used imperatively, and with certain adverbs:

Eventives
Mary was kissed by John.          Oysters were eaten by John.
John is kissing Mary.             John is eating oysters.
Kiss Mary!                        Eat oysters!
John deliberately kissed Mary.    John deliberately ate oysters.

The stative sentences seem peculiar, if not ungrammatical or anomalous, when cast in the same form. (The "?" preceding each sentence indicates the strangeness.)

Statives
?Mary is known by John.           ?Oysters are liked by John.
?John is knowing Mary.            ?John is liking oysters.
?Know Mary!                       ?Like oysters!
?John deliberately knows Mary.    ?John deliberately likes oysters.

Negation is a particularly interesting component of the meaning of some verbs. Expressions such as ever, anymore, have a red cent, and many more are ungrammatical in certain simple affirmative sentences, but grammatical in corresponding negative ones.

*Mary will ever smile. (Cf. Mary will not ever smile.)
*I can visit you anymore. (Cf. I cannot visit you anymore.)
*It's worth a red cent. (Cf. It's not worth a red cent.)

Such expressions are called negative polarity items because a negative element such as "not" elsewhere in the sentence allows them to appear. Consider these data:

*John thinks that he'll ever fly a plane again.
*John hopes that he'll ever fly a plane again.
John doubts that he'll ever fly a plane again.
John despairs that he'll ever fly a plane again.

This suggests that verbs such as doubt and despair, but not think and hope, have "negative" as a component of their meaning. Doubt may be analyzed as "think that not," and despair as "has no hope." The negative feature in the verb allows the negative polarity item ever to occur grammatically without the overt presence of not.
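The grammatical consequences described in this section can also be imitated in miniature. The following sketch is illustrative only: the tiny sets of nouns, verbs, determiners, and features are invented for the example and are not a fragment of English grammar.

```python
# Illustrative only: grammatical consequences of semantic features.

noun_features = {"dog": "count", "rice": "mass", "furniture": "mass"}
determiner_ok = {
    "count": {"a", "many", "two"},  # a dog, many dogs, two dogs
    "mass":  {"much", ""},          # much rice, (bare) rice
}

def determiner_allowed(det, noun):
    """Check a determiner against the noun's count/mass feature."""
    return det in determiner_ok[noun_features[noun]]

print(determiner_allowed("much", "dog"))   # False: *much dogs
print(determiner_allowed("much", "rice"))  # True:  much rice

# A "negative" feature on a verb can license a polarity item such as "ever":
verb_features = {"doubt": {"negative"}, "think": set()}

def licenses_ever(verb):
    return "negative" in verb_features[verb]

print(licenses_ever("doubt"))  # True:  John doubts that he'll ever fly again.
print(licenses_ever("think"))  # False: *John thinks that he'll ever fly again.
```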
Argument Structure

Verbs differ in terms of the number and types of NPs they can take as complements. As we noted in chapter 2, transitive verbs such as find, hit, chase, and so on take, or c-select, a direct object complement, whereas intransitive verbs like arrive or sleep do not. Ditransitive verbs such as give or throw take two object complements as in John threw Mary a ball. In addition, most verbs take a subject. The various NPs that occur with a verb are its arguments. Thus intransitive verbs have one argument: the subject; transitive verbs have two arguments: the subject and direct object; ditransitive verbs have three arguments: the subject, direct object, and indirect object. The argument structure of a verb is part of its meaning and is included in its lexical entry. The verb not only determines the number of arguments in a sentence, but it also limits the semantic properties of both its subject and its complements. For example, find and sleep require (s-select) animate subjects. The well-known colorless green ideas sleep furiously is semantically anomalous because ideas (colorless or not) are not animate. Components of a verb's meaning can also be relevant to the choice of complements it can take. For example, the verbs in (1) and (3) can take two objects—they're ditransitive—while those in (2) and (4) cannot.

1. John threw/tossed/kicked/flung the boy the ball.
2. *John pushed/pulled/lifted/hauled the boy the ball.
3. Mary faxed/radioed/e-mailed/phoned Helen the news.
4. *Mary murmured/mumbled/muttered/shrieked Helen the news.

Although all the verbs in (1) and (2) are verbs of motion, they differ in how the force of the motion is applied: the verbs in (1) involve a single quick motion whereas those in (2) involve a prolonged use of force. Similarly, the verbs in (3) and (4) are all verbs of communication, but their meanings differ in the way the message is communicated; those in (3) involve an external apparatus whereas those in (4) involve the type of voice used. Finally, the ditransitive verbs have "transfer direct object to indirect object" in their meaning. In (1) the ball is transferred to the boy. In (3) the news is transferred, or leastwise transmitted, to Helen. The ditransitive verbs give, write, send, and throw all have this property. Even when the transference is not overt, it may be inferred. In John baked Mary a cake, there is an implied transfer of the cake from John to Mary. Subtle aspects of meaning are mirrored in the argument structure of the verbs, and indeed, this connection between form and meaning may help children acquire the syntactic and semantic rules of their language, as will be discussed in chapter 7.
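A verb's argument structure can be pictured as part of its lexical entry. The sketch below is a toy illustration; the entries and feature labels are assumptions made for the example, and real c-selection and s-selection are of course far richer.

```python
# Illustrative only: verb entries recording argument structure (c-selection)
# and a selectional restriction on the subject (s-selection).

verbs = {
    "sleep": {"args": 1, "subject": "animate"},  # intransitive
    "find":  {"args": 2, "subject": "animate"},  # transitive
    "give":  {"args": 3, "subject": "animate"},  # ditransitive
}

noun_features = {"ideas": {"abstract"}, "dogs": {"animate"}}

def check(verb, subject, n_args):
    """Flag too few/many arguments or a subject the verb does not s-select."""
    entry = verbs[verb]
    if n_args != entry["args"]:
        return "wrong number of arguments"
    if entry["subject"] not in noun_features[subject]:
        return "s-selection violated (anomalous)"
    return "ok"

print(check("sleep", "dogs", 1))   # ok
print(check("sleep", "ideas", 1))  # s-selection violated (anomalous)
print(check("find", "dogs", 1))    # wrong number of arguments
```

The last two calls mimic the two ways a sentence can go wrong that were just described: too few arguments for the verb, or a subject whose features the verb does not s-select.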
Thematic Roles

A feminine boy from Khartoum
Took a masculine girl to his room
They spent the whole night
In one hell of a fight
About who should do what—and to whom?

ANONYMOUS LIMERICK, quoted in More Limericks, G. Legman (ed.), 1977

The NP arguments in the VP, which include the subject and any objects, are semantically related in various ways to the verb. The relations depend on the meaning of the particular verb. For example, the NP the boy in the sentence:

1. The boy rolled a red ball.
   [the boy = agent; a red ball = theme]

is the "doer" of the rolling action, also called the agent. The NP a red ball is the theme or the "undergoer" of the rolling action. Relations such as agent and theme are called thematic roles. Thematic roles express the kind of relation that holds between the arguments of the verb and the type of situation that the verb describes. A further example is the sentence:

2. The boy threw the red ball to the girl.
   [the boy = agent; the red ball = theme; the girl = goal]

Here, the girl bears the thematic role of goal, that is, the endpoint of a change in location or possession. The verb phrase is interpreted to mean that the theme of throw ends up in the position of the goal. Other thematic roles are source, where the action originates; instrument, the means used to accomplish the action; and experiencer, one receiving sensory input:

Professor Snape awakened Harry Potter with his wand.
[Professor Snape = source; Harry Potter = experiencer; his wand = instrument]

The particular thematic roles assigned by a verb can be traced back to components of the verb's meaning. Verbs such as throw, buy, and fly contain a feature "go" expressing a change in location or possession. The feature "go" is thus linked to the presence of the thematic roles of theme, source, and goal. Verbs like awaken or frighten have a feature "affects mental state" so that one of its arguments takes on the thematic role of experiencer. Thematic role assignment, or theta assignment, is also connected to syntactic structure. In the sentence in (2) the role of theme is assigned to the direct object the ball and the role of goal to the indirect object the girl. Verb pairs such as sell and buy both involve the feature "go." They are therefore linked to a thematic role of theme, which is assigned to the direct object, as in the following sentences:

3. John sold the book to Mary.
   [John = agent; the book = theme; Mary = goal]

4. Mary bought the book from John.
   [Mary = agent; the book = theme; John = source]

In addition, sell is linked to the presence of a goal (the recipient or endpoint of the transfer), and buy to the presence of a source (the initiator of the transfer). Thus, buy/sell are relational opposites because both contain the semantic feature "go" (the transfer of goods or services) and they differ only in the direction of transfer, that is, whether the indirect object is a source or goal. Thematic roles are not assigned to arguments randomly. There is a connection between the meaning of a verb and the syntactic structure of sentences containing the verb. Our knowledge of verbs includes their syntactic category, which arguments they select, and the thematic roles they assign to their arguments. Thematic roles are the same in sentences that are paraphrases.

1. The dog bit the stick. / The stick was bitten by the dog.
2. The trainer gave the dog a treat. / The trainer gave a treat to the dog.

In (1) the dog is the agent and the stick is the theme. In (2) the treat is the theme and the dog is the goal. This is because certain thematic roles must be assigned to the same deep structure position, for example, theme is assigned to the object of bit/bitten. This uniformity of theta assignment, a principle of Universal Grammar, dictates that the various thematic roles are always in their proper structural place in deep structure. Thus the stick in the passive sentence the stick was bitten by the dog must have originated in object position and moved to subject position by transformational rule:

__ was bitten the stick by the dog   (d-structure)
→ the stick was bitten __ by the dog   (s-structure)

Thematic roles may remain the same in sentences that are not paraphrases, as in the following instances:

3. The boy opened the door with the key.
4. The key opened the door.
5. The door opened.

In all three of these sentences, the door is the theme, the object that is opened. Uniformity of theta assignment therefore entails that the door in the sentence in (5) originates as the object of open and undergoes a movement rule, much like in the passive example above.

___ opened the door → The door opened ___

Although the sentences in (3)–(5) are not strict paraphrases of one another, they are structurally and semantically related in that they have similar deep structure configurations. In the sentences in (3) and (4), the key, despite its different positions, has the thematic role of instrument, suggesting greater structural flexibility for some thematic roles. The semantics of the three sentences is determined by the meaning of the verb open and the rules that determine how thematic roles are assigned to the verb's arguments.
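One convenient way to picture theta assignment is as a small table, or theta-grid, in each verb's lexical entry. The sketch below is illustrative only; the grids and position names are simplified stand-ins for the real syntax.

```python
# Illustrative only: theta-grids pairing argument positions with thematic roles.

theta_grid = {
    "throw": {"subject": "agent", "object": "theme", "indirect": "goal"},
    "sell":  {"subject": "agent", "object": "theme", "indirect": "goal"},
    "buy":   {"subject": "agent", "object": "theme", "indirect": "source"},
}

def assign_roles(verb, subject, obj=None, indirect=None):
    """Pair each supplied argument with the role the verb's entry gives it."""
    grid = theta_grid[verb]
    args = {"subject": subject, "object": obj, "indirect": indirect}
    return {np: grid[position] for position, np in args.items() if np is not None}

print(assign_roles("sell", "John", "the book", "Mary"))
# {'John': 'agent', 'the book': 'theme', 'Mary': 'goal'}
print(assign_roles("buy", "Mary", "the book", "John"))
# {'Mary': 'agent', 'the book': 'theme', 'John': 'source'}
```

Notice that buy and sell differ only in whether the indirect object position is paired with source or goal, which is just the relational opposition described above.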
Pragmatics

[Shoe cartoon © 1991 MacNelly, King Features Syndicate]

Pragmatics is concerned with our understanding of language in context. Two kinds of contexts are relevant. The first is linguistic context—the discourse that precedes the phrase or sentence to be interpreted; the second is situational context—virtually everything nonlinguistic in the environment of the speaker. Speakers know how to combine words and phrases to form sentences, and they also know how to combine sentences into a larger discourse to express complex thoughts and ideas. Discourse analysis is concerned with the broad speech units comprising multiple sentences. It involves questions of style, appropriateness, cohesiveness, rhetorical force, topic/subtopic structure, differences between written and spoken discourse, as well as grammatical properties. Within a discourse, preceding sentences affect the meaning of sentences that follow them in various ways. For example, the reference or meaning of pronouns often depends on prior discourse. Prior discourse can also disambiguate words like bank in that the discussion may be about rafting on a river or interest rates. Situational context, on the other hand, is the nonlinguistic environment in which a sentence or discourse happens. It is the context that allows speakers to seamlessly, even unknowingly, interpret questions like Can you pass the salt? as requests to carry out a certain action and not a simple question. Situational context includes the speaker, hearer, and any third parties present, along with their beliefs and their beliefs about what the others believe. It includes the physical environment, the social milieu, the subject of conversation, the time of day, and so on, ad infinitum. Almost any imaginable extralinguistic factor may, under appropriate circumstances, influence the way language is interpreted. Pronouns provide a good way to illustrate the two kinds of contexts—linguistic and situational—that affect meaning.

Pronouns

Pronouns are lexical items that can get their meaning from other NPs in the sentence or in the larger discourse. Any NP that a pronoun depends on for its meaning is called its antecedent. Pronouns are sensitive to syntax, discourse, and situational context for their interpretation. We'll take up syntactic matters first.

Pronouns and Syntax

[Hi and Lois cartoon © King Features Syndicate]

There are different types of pronouns. Reflexive pronouns are pronouns such as himself and themselves.
In English, reflexive pronouns always depend on an NP antecedent for their meaning and the antecedent must be in the same clause, as illustrated in the following examples:

1. Jane bit herself.
2. *Jane said that the boy bit herself.
3. *Herself left.

In (1) the NP Jane and the reflexive pronoun herself are in the same S; in (2) herself is in the embedded sentence and is structurally too far from the antecedent Jane, resulting in the ungrammaticality. In (3) herself has no antecedent at all, hence nothing to get its meaning from. The flouting of the rule that requires reflexives to have antecedents gives rise to the humor in the cartoon. Languages also have pronouns that are not reflexive, such as he, she, it, us, him, her, you, and so on, which we will simply refer to as pronouns. Pronouns also depend on other elements for their meaning, but the syntactic conditions on pronouns are different from those on reflexives. Pronouns cannot refer to an antecedent in the same clause, but they are free to refer to an NP outside this clause, as illustrated in the following sentences (the underlining indicates the interpretation in which the pronoun takes the NP, in this case John, as antecedent):

4. *John knows him.
5. John knows that he is a genius.

The sentence in (4) is ungrammatical relative to the interpretation because him cannot mean John. (Compare John knows himself.) In (5), however, the pronoun he can be interpreted as John. Notice that in both sentences it is possible for the pronouns to refer to some other person not mentioned in the sentence (e.g., Pete or Harry). In this case the pronoun gets its reference from the larger discourse or nonlinguistic context.

Pronouns and Discourse

The 911 operator, trying to get a description of the gunman, asked, "What kind of clothes does he have on?" Mr. Morawski, thinking the question pertained to Mr. McClure [the victim, who lay dying of a gunshot wound], answered, "He has a bloody shirt with blue jeans, purple striped shirt." The 911 operator then gave police that description [the victim's] of a gunman.

THE NEWS AND OBSERVER, Raleigh, North Carolina, January 21, 1989

Pronouns may be used to refer to entities previously mentioned in discourse or to entities that are presumably known to the participants of a discourse. When that presumption fails, miscommunication such as the one at the head of this section may result. In a discourse, prior linguistic context plays a primary role in pronoun interpretation. In the following discourse,

It seems that the man loves the woman. Many people think he loves her.

the most natural interpretation of her is "the woman" referred to in the first sentence, whoever she happens to be. But it is also possible for her to refer to a different person, perhaps one indicated with a pointing gesture. In such a case her would be spoken with added emphasis: Many people think he loves her! Similar remarks apply to the reference of he, which most naturally refers to the man, but not necessarily so. Again, intonation and emphasis would provide clues. Referring to the previous discourse, strictly speaking, it would not be ungrammatical if the discourse went this way:

It seems that the man loves the woman. Many people think the man loves the woman.

However, most of us would find that the discourse sounds stilted. Often in discourse, the use of pronouns is a stylistic decision, which is part of pragmatics.
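The syntactic condition on reflexives and the discourse fallback for ordinary pronouns can be summarized in a toy resolver. The sketch below is illustrative only; it treats a clause as a flat list of NPs and ignores gender, number, and real constituent structure.

```python
# Illustrative only: a toy antecedent check for reflexives vs. ordinary pronouns.

def possible_antecedents(pronoun_type, same_clause_nps, discourse_nps):
    if pronoun_type == "reflexive":
        # herself/himself: the antecedent must be a clause-mate NP.
        return set(same_clause_nps)
    # she/he/him/her: clause-mates are excluded, but NPs from the
    # larger discourse (or the situation) remain available.
    return set(discourse_nps) - set(same_clause_nps)

# "Jane bit herself."
print(possible_antecedents("reflexive", ["Jane"], ["Jane"]))                # {'Jane'}
# "*Jane said that the boy bit herself." -- Jane is not a clause-mate of herself.
print(possible_antecedents("reflexive", ["the boy"], ["Jane", "the boy"]))  # {'the boy'}
# "*John knows him." -- him cannot be the clause-mate John.
print(possible_antecedents("pronoun", ["John", "him"], ["John", "Pete"]))   # {'Pete'}
```

In the second call the only clause-mate is the boy, which mismatches herself in gender, so the sentence has no grammatical reading; in the third, him must look outside its clause for a referent.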
Pronouns and Situational Context

When a pronoun gets its reference from an NP antecedent in the same sentence, we say that the pronoun is bound to that noun phrase antecedent. If her in

1. Mary thinks he loves her

refers to "Mary," it would be a bound pronoun. Pronouns can also be bound to quantifier antecedents such as "every N'" as in the sentence:

2. Every girl in the class hopes John will ask her out on a date.

In this case her refers to each one of the girls in the class and is said to be bound to every girl. Reflexive pronouns are always bound. When a pronoun refers to some entity outside the sentence or not explicitly mentioned in the discourse, it is said to be free or unbound. So, her in the sentences in (1) and (2) need not be bound to Mary or to every girl and can also refer to some arbitrary girl. The reference of a free pronoun must ultimately be determined by the situational context. First- and second-person nonreflexive (I/we, you) pronouns are bound to the speaker and hearer, respectively. They therefore depend on the situational context, namely, who is talking and who is listening. With third-person pronouns, semantic rules permit them either to be bound or free, as noted above. The ultimate interpretation in any event is context-dependent.

Deixis

[Dennis the Menace cartoon © Hank Ketcham, North America Syndicate]

In all languages, the reference of certain words and expressions relies entirely on the situational context of the utterance, and can only be understood in light of these circumstances. This aspect of pragmatics is called deixis (pronounced "dike-sis"). Pronouns are deictic. Their reference (or lack of same) is ultimately context dependent. Expressions such as

this person    that man    these women    those children

are also deictic, because they require situational information for the listener to make a referential connection and understand what is meant. These examples illustrate person deixis. They also show that the demonstrative articles like this and that are deictic. We also have time deixis and place deixis. The following examples are all deictic expressions of time:

now    this time    two weeks from now    then    that time    last week    tomorrow    seven days ago    next April

To understand what specific times such expressions refer to, we need to know when the utterance was said. Clearly, next week has a different reference when uttered today than a month from today. If you found an undated notice announcing a "BIG SALE NEXT WEEK," you would not know whether the sale had already taken place. Expressions of place deixis require contextual information about the place of the utterance, as shown by the following examples:

here    that place    this city    there    this ranch    these parks    this place    those towers over there    yonder mountains

The "Dennis the Menace" cartoon at the beginning of this section illustrates the hilarity that may ensue if deictic expressions are misinterpreted. Directional terms such as

before/behind    left/right    front/back

are deictic insofar as you need to know the orientation in space of the conversational participants to know their reference. In Japanese the verb kuru "come" can only be used for motion toward the place of utterance. A Japanese speaker cannot call up a friend and ask May I kuru to your house? as you might, in English, ask "May I come to your house?" The correct verb is iku, "go," which indicates motion away from the place of utterance.
In Japanese these verbs have a deictic aspect to their meaning. Deixis, as we've seen, is a great source of humor. A cartoon shows a chicken calling across the road to another chicken, "Hey, how do I cross to the other side of the road?" "You're ON the other side," the other chicken replies. Deixis abounds in language use and marks one of the boundaries of semantics and pragmatics. Deictic expressions such as I, an hour from now, and behind me have meaning to the extent that their referents are determined in a regular way as a function of the situation of use. (I, for example, picks out the speaker.) To complete their meaning, to determine their reference, it is necessary to know the situational context.

More on Situational Context

Depending on inflection, ah bon [in French] can express shock, disbelief, indifference, irritation, or joy.
PETER MAYLE, Toujours Provence, 1991

Much discourse is telegraphic. Verb phrases are not specifically mentioned, entire clauses are left out, direct objects vanish, pronouns roam freely. Yet people still understand one another, and part of the reason is that rules of grammar and rules of discourse combine with contextual knowledge to fill in what's missing and make the discourse cohere. Much of the contextual knowledge is knowledge of who is speaking, who is listening, what objects are being discussed, and general facts about the world we live in—what we have been calling situational context. Often what we say is not literally what we mean. When we ask at the dinner table if someone "can pass the salt" we are not querying their ability to do so, we are requesting that they do so. If I say "You're standing on my foot," I am not making idle conversation; I am asking you to stand elsewhere. We say "It's cold in here" to convey "Shut the window," or "Turn up the heat," or "Let's leave," or a dozen other things that depend on the real-world situation at the time of speaking. In the following sections, we will look at several ways that real-world context influences and interacts with meaning.

Maxims of Conversation

Polonius: Though this be madness, yet there is method in't.
WILLIAM SHAKESPEARE, Hamlet, c. 1600

Speakers recognize when a series of sentences "hangs together" or when it is disjointed. The following discourse (Hamlet, Act II, Scene II), which gave rise to Polonius's remark, does not seem quite right—it is not coherent.

Polonius: What do you read, my lord?
Hamlet: Words, words, words.
Polonius: What is the matter, my lord?
Hamlet: Between who?
Polonius: I mean, the matter that you read, my lord.
Hamlet: Slanders, sir: for the satirical rogue says here that old men have gray beards, that their faces are wrinkled, their eyes purging thick amber and plum-tree gum, and that they have a plentiful lack of wit, together with most weak hams: all which, sir, though I most powerfully and potently believe, yet I hold it not honesty to have it thus set down; for yourself, sir, should grow old as I am, if like a crab you could go backward.

Hamlet, who is feigning insanity, refuses to answer Polonius's questions "in good faith." He has violated certain conversational conventions, or maxims of conversation. These maxims were first discussed by the British philosopher H. Paul Grice and are sometimes called Gricean Maxims. One such maxim, the maxim of quantity, states that a speaker's contribution to the discourse should be as informative as is required—neither more nor less.
Hamlet has violated this maxim in both directions. In answering "Words, words, words" to the question of what he is reading, he is providing too little information. His final remark goes to the other extreme in providing too much information. Hamlet also violates the maxim of relevance when he "misinterprets" the question about the reading matter as a matter between two individuals. The run-on nature of Hamlet's final remark, a violation of the maxim of manner, is another source of incoherence. This effect is increased in the final sentence by the somewhat bizarre metaphor that compares growing younger with walking backward, a violation of the maxim of quality, which requires sincerity and truthfulness. Here is a summary of the four conversational maxims, parts of the broad cooperative principle.

Name of Maxim    Description of Maxim
Quantity         Say neither more nor less than the discourse requires.
Relevance        Be relevant.
Manner           Be brief and orderly; avoid ambiguity and obscurity.
Quality          Do not lie; do not make unsupported claims.

Unless speakers (like Hamlet) are being deliberately uncooperative, they adhere to these maxims and to other conversational principles, and assume others do too. Bereft of context, if one man says (truthfully) to another "I have never slept with your wife," that would be provocative because the very topic of conversation should be unnecessary, a violation of the maxim of quantity. Asking an able-bodied person at the dinner table "Can you pass the salt?", if answered literally, would force the responder into stating the obvious, also a violation of the maxim of quantity. To avoid this, the person asked seeks a reason for the question, and deduces that the asker would like to have the salt shaker. The maxim of relevance explains how saying "It's cold in here" to a person standing by an open window might be interpreted as a request to close it, or else why make the remark to that particular person in the first place? For sentences like I am sorry that the team lost to be relevant, it must be true that "the team lost." Else why say it? Situations that must exist for utterances to be appropriate are called presuppositions. Questions like Have you stopped hugging your border collie? presuppose that you hugged your border collie, and statements like The river Avon runs through Stratford presuppose the existence of the river and the town. The presuppositions prevent violations of the maxim of relevance. When presuppositions are ignored, we get the confusion in this passage from Lewis Carroll's Alice's Adventures in Wonderland:

"Take some more tea," the March Hare said to Alice, very earnestly.
"I've had nothing yet," Alice replied in an offended tone, "so I can't take more."
"You mean you can't take less," said the Hatter: "It's very easy to take more than nothing."

Utterances like Take some more tea or Have another beer carry the presupposition that one has already had some. The March Hare is oblivious to this aspect of language, of which the annoyed Alice is keenly aware. Presuppositions are different from entailments in that they are felicity conditions taken for granted by speakers adhering to the cooperative principle. Unlike entailments, they remain when the sentence is negated. I am not sorry that the team lost still presupposes that the team lost. On the other hand, while John killed Bill entails Bill died, no such entailment follows from John did not kill Bill.
Conversational conventions such as these allow the various sentence meanings to be sensibly combined into discourse meaning and integrated with context, much as rules of sentence grammar allow word meanings to be sensibly (and grammatically) combined into sentence meaning. Implicatures What does “yet” mean, after all? “I haven’t seen Reservoir Dogs yet.” What does that mean? It means you’re going to go, doesn’t it? NICK HORNBY, High Fidelity, 1995 In conversation we sometimes infer or conclude based not only on what was said, but also on assumptions about what the speaker is trying to achieve. In the examples just discussed—It’s cold in here, Can you please pass the salt, and I have never slept with your wife—the person spoken to derives a meaning that is not the literal meaning of the sentences. In the first case he assumes that he is being asked to close the window; in the second case he knows he’s not being questioned but rather asked to pass the salt; and in the third case he will understand exactly the opposite of what is said, namely that the speaker has slept with his wife. Such inferences are known as implicatures. Implicatures are deductions that are not made strictly on the basis of the content expressed in the discourse. Rather, they are made in accordance with the conversational maxims, taking into account both the linguistic meaning of the utterance as well as the particular circumstances in which the utterance is made. Consider the following conversation: speaker a: speaker b: Smith doesn’t have any girlfriends these days. He’s been driving over to the West End a lot lately. The implicature is that Smith has a girlfriend in the West End. The reasoning is that B’s answer would be irrelevant unless it contributed information related to A’s question. We assume speakers try to be cooperative. So it is fair to conclude that B uttered the second sentence because the reason that Smith drives to the West End is that he has a girlfriend there. Pragmatics Because implicatures are derived on the basis of assumptions about the speaker that might turn out to be wrong, they can be easily cancelled. For this reason A could have responded as follows: speaker a: He goes to the West End to visit his mother who is ill. Although B’s utterance implies that the reason Smith goes to the West End is to visit his girlfriend, A’s response cancels this implicature. Implicatures are different than entailments. An entailment cannot be cancelled; it is logically necessary. Implicatures are also different than presuppositions. They are the possible consequences of utterances in their context, whereas presuppositions are situations that must exist for utterances to be appropriate in context, in other words, to obey Grice’s Maxims. Further world knowledge may cancel an implicature, but the utterances that led to it remain sensible and wellformed, whereas further world knowledge that negates a presupposition—oh, the team didn’t lose after all—renders the entire utterance inappropriate and in violation of Grice’s Maxims. Speech Acts “Zits” © Zits Partnership. Reprinted with permission of King Features Syndicate. You can use language to do things. You can use language to make promises, lay bets, issue warnings, christen boats, place names in nomination, offer congratulations, or swear testimony. The theory of speech acts describes how this is done. By saying I warn you that there is a sheepdog in the closet, you not only say something, you warn someone. 
Verbs like bet, promise, warn, and so on are performative verbs. Using them in a sentence (in the first person, present tense) adds something extra over and above the statement. There are hundreds of performative verbs in every language. The following sentences illustrate their usage: I bet you five dollars the Yankees win. I challenge you to a match. I dare you to step over this line. 175 176 CHAPTER 3 The Meaning of Language I fine you $100 for possession of oregano. I move that we adjourn. I nominate Batman for mayor of Gotham City. I promise to improve. I resign! I pronounce you husband and wife. In all of these sentences, the speaker is the subject (i.e., the sentences are in first person), who by uttering the sentence is accomplishing some additional action, such as daring, nominating, or resigning. In addition, all of these sentences are affirmative, declarative, and in the present tense. They are typical performative sentences. An informal test to see whether a sentence contains a performative verb is to begin it with the words I hereby. . . . Only performative sentences sound right when begun this way. Compare I hereby apologize to you with the somewhat strange I hereby know you. The first is generally taken as an act of apologizing. In all of the examples given, insertion of hereby would be acceptable. In studying speech acts, the importance of context is evident. In some situations Band practice, my house, 6 to 8 is a reminder, but the same sentence may be a warning in a different context. We call this underlying purpose of the utterance—be it a reminder, a warning, a promise, a threat, or whatever— the illocutionary force of a speech act. Because the illocutionary force of a speech act depends on the context of the utterance, speech act theory is a part of pragmatics. Summary Knowing a language means knowing how to produce and understand the meaning of infinitely many sentences. The study of linguistic meaning is called semantics. Lexical semantics is concerned with the meanings of morphemes and words; compositional semantics with phrases and sentences. The study of how context affects meaning is called pragmatics. Speakers’ knowledge of sentence meaning includes knowing the truth conditions of declarative sentences; knowing when one sentence entails another sentence; knowing when two sentences are paraphrases or contradictory; knowing when a sentence is a tautology, contradiction, or paradox; and knowing when sentences are ambiguous, among other things. Compositional semantics is the building up of phrasal or sentence meaning from the meaning of smaller units by means of semantic rules. There are cases when the meaning of larger units does not follow from the meaning of its parts. Anomaly is when the pieces do not fit sensibly together, as in colorless green ideas sleep furiously; metaphors are sentences that appear to be anomalous, but to which a meaningful concept can be attached, such as time is money; idioms are fixed expressions whose meaning is not compositional but rather must be learned as a whole unit, such as kick the bucket meaning “to die.” Part of the meaning of words may be the association with the objects the words refer to (if any), called reference, but often there is additional meaning Summary beyond reference, which is called sense. 
The reference of the President is Barack Obama, and the sense of the expression is “highest executive office.” Some expressions have reference but little sense such as proper names, and some have sense but no reference such as the present king of France. Words are related in various ways. They may be synonyms, various kinds of antonyms such as gradable pairs and relational opposites, or homonyms, words pronounced the same but with different meanings such as bare and bear. Part of the meaning of words may be described by semantic features such as “female,” “young,” “cause,” or “go.” Nouns may have the feature “count,” wherein they may be enumerated (one potato, two potatoes), or “mass,” in which enumeration may require contextual interpretation (*one milk, *two milks, perhaps meaning “one glass or quart or portion of milk”). Some verbs have the feature of being “eventive” while others are “stative.” The semantic feature of negation is found in many words and is evidenced by the occurrence of negative polarity items (e.g., John doubts that Mary gives a hoot, but *John thinks that Mary gives a hoot). Verbs have various argument structures, which describe the NPs that may occur with particular verbs. For example, intransitive verbs take only an NP subject, whereas ditransitive verbs take an NP subject, an NP direct object, and an NP indirect object. Thematic roles describe the semantic relations between a verb and its NP arguments. Some thematic roles are agent: the doer of an action; theme: the recipient of an action; and goal, source, instrument, and experiencer. The principle of uniformity of theta assignment dictates that thematic roles must be assigned to particular structural position (e.g., theme to object position) illustrating that there is a close connection between syntax and semantics. The general study of how context affects linguistic interpretation is pragmatics. Context may be linguistic—what was previously spoken or written—or knowledge of the world, including the speech situation, what we’ve called situational context. Discourse consists of several sentences, including exchanges between speakers. Pragmatics is important when interpreting discourse, for example, in determining whether a pronoun in one sentence has the same referent as a noun phrase in another sentence. Deictic terms such as you, there, now, and the other side require knowledge of the situation (person spoken to, place, time, spatial orientation) of the utterance to be interpreted referentially. Speakers of all languages adhere to various cooperative principles for communicating sincerely called maxims of conversation. Such maxims as “be relevant” or “say neither more nor less than the discourse requires” permit a person to interpret It’s cold in here as “Shut the windows” or “Turn up the thermostat.” Implicatures are the inferences that may be drawn from an utterance in context. When Mary says It’s cold in here, one of many possible implicatures may be “Mary wants the heat turned up.” Implicatures are like entailments in that their truth follows from sentences of the discourse, but unlike entailments, which are necessarily true, implicatures may be cancelled by information added later. Mary might wave you away from the thermostat and ask you to hand her a sweater. 
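The cancellability of implicatures can also be pictured as a default inference that is withdrawn when later information contradicts it. The sketch below is our own illustration in Python, not part of the text; the example sentence and the cancelling follow-up are the ones just described.

    # Illustrative sketch: an implicature is a default inference that
    # later utterances may cancel; an entailment could not be removed
    # this way.

    implicatures = {"Mary wants the heat turned up"}

    def cancel(inference):
        # Withdrawing a defeasible inference leaves the discourse
        # perfectly sensible.
        implicatures.discard(inference)

    # Mary says "It's cold in here" -> the implicature above is drawn.
    # Mary then waves you away from the thermostat and asks for a sweater:
    cancel("Mary wants the heat turned up")

    print(implicatures)  # set() -- the implicature has been cancelled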
Presuppositions are situations that must be true for utterances to be appropriate, so that Take some more tea has the presupposition “already had some tea.” 177 178 CHAPTER 3 The Meaning of Language The theory of speech acts tells us that people use language to do things such as lay bets, issue warnings, or nominate candidates. By using the words “I nominate Bill Smith,” you may accomplish an act of nomination that allows Bill Smith to run for office. Verbs that “do things” are called performative verbs. The speaker’s intent in making an utterance is known as illocutionary force. In the case of performative verbs, the illocutionary force is mentioned overtly. In other cases it must be determined from context. References for Further Reading Austin, J. L. 1962. How to do things with words. Cambridge, MA: Harvard University Press. Chierchia, G., and S. McConnell-Ginet. 2000. Meaning and grammar, 2nd edn. Cambridge, MA: MIT Press. Davidson, D., and G. Harman, eds. 1972. Semantics of natural languages. Dordrecht, The Netherlands: Reidel. Fraser, B. 1995. An introduction to pragmatics. Oxford, UK: Blackwell Publishers. Green, G. M. 1989. Pragmatics and natural language understanding. Hillsdale, NJ: Lawrence Erlbaum Associates. Grice, H. P. 1989. Logic and conversation. Reprinted in Studies in the way of words. Cambridge, MA: Harvard University Press. Jackendoff, R. 1993. Patterns in the mind. New York: HarperCollins. ______. 1983. Semantics and cognition. Cambridge, MA: MIT Press. Lakoff, G. 1987. Women, fire, and dangerous things: What categories reveal about the mind. Chicago: University of Chicago Press. Lakoff, G., and M. Johnson. 2003. Metaphors we live by, 2nd edn. Chicago: University of Chicago Press. Lyons, J. 1995. Linguistic semantics: An introduction. Cambridge, UK: Cambridge University Press. Mey, J. L. 2001. Pragmatics: An introduction, 2nd edn. Oxford, UK: Blackwell Publishers. Saeed, J. 2003. Semantics, 2nd edn. Oxford, UK: Blackwell Publishing. Searle, J. R. 1969. Speech acts: An essay in the philosophy of language. Cambridge, UK: Cambridge University Press. Exercises 1. (This exercise requires knowledge of elementary set theory.) A. Suppose that the reference (meaning) of swims points out the set of individuals consisting of Anna, Lu, Paul, and Benjamin. For which of the following sentences are the truth conditions produced by Semantic Rule I met? i. Anna swims. ii. Jack swims. iii. Benjamin swims. B. Suppose the reference (meaning) of loves points out the set consisting of the following pairs of individuals: <Anna, Paul>, <Paul, Benjamin>, Exercises <Benjamin, Benjamin>, <Paul, Anna>. According to Semantic Rule II, what is the meaning of the verb phrase: i. loves Paul ii. loves Benjamin iii. loves Jack C. Given the information in (B), for which of the following sentences are the truth conditions produced by Semantic Rule I met? i. Paul loves Anna. ii. Benjamin loves Paul. iii. Benjamin loves himself. iv. Anna loves Jack. D. Challenge exercise: Consider the sentence Jack kissed Laura. How would the actions of Semantic Rules (I) and (II) determine that the sentence is false if it were true that: i. Nobody kissed Laura. How about if it were true that: ii. Jack did not kiss Laura, although other men did. 2. The following sentences are either tautologies (analytic), contradictions, or situationally true or false. Write T by the tautologies, C by the contradictions, and S by the other sentences. a. Queens are monarchs. b. Kings are female. c. Kings are poor. d. Queens are ugly. e. 
Queens are mothers. f. Kings are mothers. g. Dogs are four-legged. h. Cats are felines. i. Cats are stupid. j. Dogs are carnivores. k. George Washington is George Washington. l. George Washington is the first president. m. George Washington is male. n. Uncles are male. o. My aunt is a man. p. Witches are wicked. q. My brother is a witch. r. My sister is an only child. s. The evening star isn’t the evening star. t. The evening star isn’t Venus. u. Babies are adults. v. Babies can lift one ton. w. Puppies are human. x. My bachelor friends are all married. 179 180 CHAPTER 3 The Meaning of Language y. My bachelor friends are all lonely. z. Colorless ideas are green. 3. You are in a village in which every man must be shaved, and in which the lone (male) barber shaves all and only the men who do not shave themselves. Formulate a paradox based on this situation. 4. Should the semantic component of the grammar account for whatever a speaker means when uttering any meaningful expression? Defend your viewpoint. 5. A. The following sentences may be lexically or structurally ambiguous, or both. Provide paraphrases showing that you comprehend all the meanings. Example: I saw him walking by the bank. Meaning 1: I saw him and he was walking by the bank of the river. Meaning 2: I saw him and he was walking by the financial institution. Meaning 3: I was walking by the bank of the river when I saw him. Meaning 4: I was walking by the financial institution when I saw him. a. We laughed at the colorful ball. b. He was knocked over by the punch. c. The police were urged to stop drinking by the fifth. d. I said I would file it on Thursday. e. I cannot recommend visiting professors too highly. f. The license fee for pets owned by senior citizens who have not been altered is $1.50. (Actual notice) g. What looks better on a handsome man than a tux? Nothing! (Attributed to Mae West) h. Wanted: Man to take care of cow that does not smoke or drink. (Actual notice) i. For Sale: Several old dresses from grandmother in beautiful condition. (Actual notice) j. Time flies like an arrow. (Hint: There are at least four paraphrases, but some of them require imagination.) B. Do the same thing for the following newspaper headlines: k. POLICE BEGIN CAMPAIGN TO RUN DOWN JAYWALKERS l. DRUNK GETS NINE MONTHS IN VIOLIN CASE m. FARMER BILL DIES IN HOUSE n. STUD TIRES OUT o. SQUAD HELPS DOG BITE VICTIM p. LACK OF BRAINS HINDERS RESEARCH q. MINERS REFUSE TO WORK AFTER DEATH r. EYE DROPS OFF SHELF s. JUVENILE COURT TO TRY SHOOTING DEFENDANT t. QUEEN MARY HAVING BOTTOM SCRAPED 6. Explain the semantic ambiguity of the following sentences by providing two or more sentences that paraphrase the multiple meanings. Example: Exercises “She can’t bear children” can mean either “She can’t give birth to children” or “She can’t tolerate children.” a. He waited by the bank. b. Is he really that kind? c. The proprietor of the fish store was the sole owner. d. The long drill was boring. e. When he got the clear title to the land, it was a good deed. f. It takes a good ruler to make a straight line. g. He saw that gasoline can explode. h. You should see her shop. i. Every man loves a woman. j. You get half off the cost of your hotel room if you make your own bed. k. “It’s his job to lose” (said the coach about his new player). l. Challenge exercise: Bill wants to marry a Norwegian woman. 7. Go on an idiom hunt. In the course of some hours in which you converse or overhear conversations, write down all the idioms that are used. 
If you prefer, watch soap operas or something similar for an hour or two and write down the idioms. Show your parents (or whomever) this book when they find you watching TV and you claim you’re doing your homework. 8. Take a half dozen or so idioms from exercise 7, or elsewhere, and try to find their source, and if you cannot, speculate imaginatively on the source. For example, sell down the river meaning “betray” arose from American slave traders selling slaves from more northern states along the Mississippi River to the harsher southern states. For snap out of it, meaning “pay attention” or “get in a better mood,” we (truly) speculate that ill-behaving persons were once confined in a straight-jacket secured by snaps, and to snap out of it meant the person was behaving better. 9. For each group of words given as follows, state what semantic property or properties distinguish between the classes of (a) words and (b) words. If asked, also indicate a semantic property that the (a) words and the (b) words share. Example: (a) widow, mother, sister, aunt, maid (b) widower, father, brother, uncle, valet The (a) and (b) words are “human.” The (a) words are “female” and the (b) words are “male.” a. (a) bachelor, man, son, paperboy, pope, chief (b) bull, rooster, drake, ram The (a) and (b) words are: The (a) words are: The (b) words are: b. (a) table, stone, pencil, cup, house, ship, car (b) milk, alcohol, rice, soup, mud The (a) words are: The (b) words are: 181 182 CHAPTER 3 The Meaning of Language c. (a) book, temple, mountain, road, tractor (b) idea, love, charity, sincerity, bravery, fear The (a) words are: The (b) words are: d. (a) pine, elm, ash, weeping willow, sycamore (b) rose, dandelion, aster, tulip, daisy The (a) and (b) words are: The (a) words are: The (b) words are: e. (a) book, letter, encyclopedia, novel, notebook, dictionary (b) typewriter, pencil, pen, crayon, quill, charcoal, chalk The (a) words are: The (b) words are: f. (a) walk, run, skip, jump, hop, swim (b) fly, skate, ski, ride, cycle, canoe, hang-glide The (a) and (b) words are: The (a) words are: The (b) words are: g. (a) ask, tell, say, talk, converse (b) shout, whisper, mutter, drawl, holler The (a) and (b) words are: The (a) words are: The (b) words are: h. (a) absent–present, alive–dead, asleep–awake, married–single (b) big–small, cold–hot, sad–happy, slow–fast The (a) and (b) word pairs are: The (a) words are: The (b) words are: i. (a) alleged, counterfeit, false, putative, accused (b) red, large, cheerful, pretty, stupid (Hint: Is an alleged murderer always a murderer? Is a pretty girl always a girl?) The (a) words are: The (b) words are: 10. Research project: There are many -nym/-onym words that describe classes of words with particular semantic properties. We mentioned a few in this chapter such as synonyms, antonyms, homonyms, and hyponyms. What is the etymology of -onym? What common English word is it related to? How many more -nym words and their meaning can you come up with? Try for five or ten on your own. With help from the Internet, dozens are possible. (Hint: One such -nym word was the winning word in the 1997 Scripps National Spelling Bee.) 11. There are several kinds of antonymy. By writing a c, g, or r in column C, indicate whether the pairs in columns A and B are complementary, gradable, or relational opposites. 
Exercises A B C good expensive parent beautiful false lessor pass hot legal larger poor fast asleep husband rude bad cheap offspring ugly true lessee fail cold illegal smaller rich slow awake wife polite ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ ___ 12. For each definition, write in the first blank the word that has that meaning and in the second (and third if present) a differently spelled homonym that has a different meaning. The first letter of the words is provided. Example: “A pair”: t(wo) t(oo) t(o) a. b. c. d. e. f. “Naked”: “Base metal”: “Worships”: “Eight bits”: “One of five senses”: “Several couples”: b_______ l_______ p_______ b_______ s_______ p_______ b_______ l_______ p_______ b_______ s_______ p_______ p_______ b_______ c_______ p_______ g. h. i. j. “Not pretty”: “Purity of gold unit”: “A horse’s coiffure”: “Sets loose”: p_______ k_______ m_______ f_______ p_______ c_______ m_______ f_______ M_______ f_______ 13. Here are some proper names of U.S. restaurants. Can you figure out the basis for the name? (This is for fun—don’t let yourself be graded.) a. b. c. d. e. f. g. h. i. j. k. Mustard’s Last Stand Aunt Chilada’s Lion on the Beach Pizza Paul and Mary Franks for the Memories Weiner Take All Dressed to Grill Deli Beloved Gone with the Wings Aunt Chovy’s Pizza Polly Esther’s 183 184 CHAPTER 3 The Meaning of Language l. Dewey, Cheatham & Howe (Hint: This is also the name of a made-up law firm noted in chapter 6.) m. Thai Me Up Café (truly—it’s in L.A.) n. Romancing the Cone 14. The following sentences consist of a verb, its noun phrase subject, and various complements and prepositional phrases. Identify the thematic role of each NP by writing the letter a, t, i, s, g, or e above the noun, standing for agent, theme, instrument, source, goal, and experiencer. a t s i Example: The boy took the books from the cupboard with a handcart. a. Mary found a ball. b. The children ran from the playground to the wading pool. c. One of the men unlocked all the doors with a paper clip. d. John melted the ice with a blowtorch. e. Helen looked for a cockroach. f. Helen saw a cockroach. g. Helen screamed. h. The ice melted. i. With a telescope, the boy saw the man. j. The farmer loaded hay onto the truck. k. The farmer loaded the hay with a pitchfork. l. The hay was loaded on the truck by the farmer. m. Helen heard music coming out of the speaker. 15. Find a complete version of “The Jabberwocky” from Through the Looking-Glass by Lewis Carroll. There are some on the Internet. Look up all the nonsense words in a good dictionary (also to be found online) and see how many of them are lexical items in English. Note their meanings. 16. In sports and games, many expressions are “performative.” By shouting You’re out, the first base umpire performs an act. Think up half a dozen or so similar examples and explain their use. 17. A criterion of a performative utterance is whether you can begin it with “I hereby.” Notice that if you say sentence (a) aloud, it sounds like a genuine apology, but to say sentence (b) aloud sounds funny because you cannot willfully perform an act of recognition: a. I hereby apologize to you. b. ?I hereby recognize you. Determine which of the following are performative sentences by inserting “hereby” and seeing whether they sound right. c. I testify that she met the agent. d. I know that she met the agent. e. I suppose the Yankees will win. f. He bet her $2,500 that Bush would win. g. I dismiss the class. Exercises h. i. j. k. l. m. I teach the class. 
We promise to leave early. I owe the IRS $1 million. I bequeath $1 million to the IRS. I swore I didn’t do it. I swear I didn’t do it. 18. A. Explain, in terms of Grice’s Maxims, the humor or strangeness of the following exchange between mother and child. The child has just finished eating a cookie when the mother comes into the room. mother: What are these cookie crumbs doing in your bed? child: Nothing, they’re just lying there. B. Do the same for this “exchange” between an owner and her cat: owner: cat: If cats ruled the world, everyone would sleep on a pile of fresh laundry. Cats don’t rule the world?? 19. Spend an hour or two observing conversations between people, including yourself if you wish, where the intended meanings of utterances are mediated by Grice’s Maxims. For example, someone says “I didn’t quite catch that,” with the possible meaning of “Please say it again,” or “Please speaker a little louder.” Record five (or more if you’re having fun) such instances, and the maxim or maxims involved. In the above example, we would cite the maxims of relevance and quantity. 20. Consider the following “facts” and then answer the questions. Part A illustrates your ability to interpret meanings when syntactic rules have deleted parts of the sentence; Part B illustrates your knowledge of semantic features and entailment; Parts C and D illustrate implicatures. A. Roses are red and bralkions are too. Booth shot Lincoln and Czolgosz, McKinley. Casca stabbed Caesar and so did Cinna. Frodo was exhausted as was Sam. i. What color are bralkions? ii. What did Czolgosz do to McKinley? iii. What did Cinna do to Caesar? iv. What did Sam feel? B. Now consider these facts and answer the questions: Black Beauty was a stallion. Mary is a widow. John pretended to send Martha a birthday card. Jane didn’t remember to send Tom a birthday card. Tina taught her daughter to swim. My boss managed to give me a raise last year. Flipper is walking. (T = true; F = false) 185 186 CHAPTER 3 The Meaning of Language v. Black Beauty was male. T ___ F ___ vi. Mary was never married. T ___ F ___ vii. John sent Martha a card. T ___ F ___ viii. Jane sent Tom a card. T ___ F ___ ix. Tina’s daughter can swim. T ___ F ___ x. I didn’t get a raise last year. T ___ F ___ xi. Flipper has legs. T ___ F ___ C. Based on information in A and B, make possible true/false decisions on the following: i. Czolgosz is an assassin. T ___ F ___ ii. Sam was breathing hard. T ___ F ___ iii. Mary is not young. T ___ F ___ iv. John is dishonest. T ___ F ___ v. Jane is inconsiderate. T ___ F ___ vi. Tina is a lousy mother. T ___ F ___ vii. I hate my boss. T ___ F ___ viii. Flipper is a fish. T ___ F ___ D. For each case in C, provide further information that cancels the implicature. E.g., in (a) we further learn that Czolgosz killed his unprepossessing neighbor Morris McKinley who had recently retired from the railroad. Thus Czolgosz is not an assassin but merely a common murderer. 21. The following sentences have certain presuppositions that ensure their appropriateness. What are they? Example: The minors promised the police to stop drinking. Presupposition: The minors were drinking. a. We went to the ballpark again. b. Valerie regretted not receiving a new T-bird for Labor Day. c. That her pet turtle ran away made Emily very sad. d. The administration forgot that the professors support the students. e. It is an atrocity that the World Trade Center was attacked on September 11, 2001. f. 
It isn’t tolerable that the World Trade Center was attacked on September 11, 2001. g. Disa wants more popcorn. h. Mary drank one more beer before leaving. i. Jack knows who discovered Pluto in 1930. j. Mary was horrified to find a cockroach in her bed. 22. Circle any deictic expression in the following sentences. (Hint: Proper names and noun phrases that contain the definite article the are not considered deictic expressions.) a. I saw her standing there. b. Dogs are animals. c. Yesterday, all my troubles seemed so far away. d. The name of that rock band is “The Beatles.” Exercises e. f. g. h. i. j. The Declaration of Independence was signed in 1776. The Declaration of Independence was signed last year. Copper conducts electricity. The treasure chest is to your right. These are the times that try men’s souls. There is a tide in the affairs of men which taken at the flood leads on to fortune. 23. State for each pronoun in the following sentences whether it is free, bound, or either bound or free. Consider each sentence independently. Example: John finds himself in love with her. himself—bound; her—free Example: John said that he loved her. he—bound or free; her—free a. b. c. d. e. f. g. h. i. Louise said to herself in the mirror: “She’s so ugly.” The fact that he considers her pretty pleases Maria. Whenever she sees it, she thinks of herself. John discovered that a picture of himself was hanging in the post office, and that fact bugged him, but it pleased her. It seems that she and he will never stop arguing with them. Persons are prohibited from picking flowers from any but their own graves. (On a sign in a cemetery) Everybody who worked on the campaign hoped the candidate would give him a job. John thinks he is a good cook. Challenge exercise: In the following sentence there is an expressed pronoun he in the first conjunct and an implicit pronoun in the second conjunct. State for each one whether it is bound, free, or both bound and free. Provide paraphrases for each meaning. John thinks he’s a good cook and Bill does too. 24. Each of the following single statements has at least one implicature in the situation described. What is it? a. Statement: You make a better door than a window. Situation: Someone is blocking your view. b. Statement: It’s getting late. Situation: You’re at a party and it’s 4 a.m. c. Statement: The restaurants are open until midnight. Situation: It’s 10 o’clock and you haven’t eaten dinner. d. Statement: If you’d diet, this wouldn’t hurt so badly. Situation: Someone is standing on your toe. e. Statement: I thought I saw a fan in the closet. Situation: It’s sweltering in the room. f. Statement: Mr. Smith dresses neatly, is well groomed, and is always on time to class. Situation: The summary statement in a letter of recommendation to graduate school. 187 188 CHAPTER 3 The Meaning of Language g. Statement: Most of the food is gone. Situation: You arrived late at a cocktail party. h. Statement: John or Mary made a mistake. Situation: You’re looking over some work done by John and Mary. 25. In each of the following dialogues between Jack and Laura, there is a conversational implicature. What is it? a. Jack: Did you make a doctor’s appointment? Laura: Their line was busy. b. Jack: Do you have the play tickets? Laura: Didn’t I give them to you? c. Jack: Does your grandmother have a live-in boyfriend? Laura: She’s very traditional. d. Jack: How did you like the string quartet? Laura: I thought the violist was swell. e. 
Laura: What are Boston’s chances of winning the World Series? Jack: Do bowling balls float? f. Laura: Do you own a cat? Jack: I’m allergic to everything. g. Laura: Did you mow the grass and wash the car like I told you to? Jack: I mowed the grass. h. Laura: Do you want dessert? Jack: Is the Pope Catholic? 26. A. Think of ten negative polarity items such as give a hoot or have a red cent. B. Challenge exercise: Can you think of other contexts without overt negation that “license” their use? (Hint: One answer is discussed in the text, but there are others.) 27. Challenge exercise: Suppose that, contrary to what was argued in the text, the noun phrase no baby does refer to some individual just like the baby does. It needn’t be an actual baby but some abstract “empty” object that we’ll call ∅. Show that this approach to the semantics of no baby, when applying Semantic Rule I and taking the restricting nature of adverbs into account (everyone who swims beautifully also swims), predicts that No baby sleeps soundly entails No baby sleeps, and explain why this is wrong. 28. Consider: “The meaning of words lies not in the words themselves, but in our attitude toward them,” by Antoine de Saint-Exupéry (the author of The Little Prince). Do you think this is true, partially true, or false? Defend your point of view, providing examples if needed. 29. The Second Amendment of the Constitution of the United States states: A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed. It has long been argued that the citizens of the United States have an absolute right to own guns, based on this amendment. Apply Grice’s Maxims to the Second Amendment and agree or disagree. 4 Phonetics: The Sounds of Language I gradually came to see that Phonetics had an important bearing on human relations—that when people of different nations pronounce each other’s languages really well (even if vocabulary & grammar not perfect), it has an astonishing effect of bringing them together, it puts people on terms of equality, a good understanding between them immediately springs up. FROM THE JOURNAL OF DANIEL JONES When you know a language you know the sounds of that language, and you know how to combine those sounds into words. When you know English you know the sounds represented by the letters b, s, and u, and you are able to combine them to form the words bus or sub. Although languages may contain different sounds, the sounds of all the languages of the world together constitute a class of sounds that the human vocal tract is designed to make. This chapter will discuss these speech sounds, how they are produced, and how they may be classified. 189 190 CHAPTER 4 Phonetics: The Sounds of Language Sound Segments “Herman” is reprinted with permission from Laughing-Stock Licensing, Inc., Ottawa, Canada. All Rights Reserved. The study of speech sounds is called phonetics. To describe speech sounds, it is necessary to know what an individual sound is, and how each sound differs from all others. This is not as easy as it may seem, for when we speak, the sounds seem to run together and it isn’t at all obvious where one sound ends and the next begins. However, when we know the language we hear the individual sounds in our “mind’s ear” and are able to make sense of them, unlike the sign painter in the cartoon. A speaker of English knows that there are three sounds in the word bus. Yet, physically the word is just one continuous sound. 
You can segment that one sound into parts because you know English. And you recognize those parts when they occur elsewhere as b does in bet or rob, as u does in up, and as s does in sister. It is not possible to segment the sound of someone clearing her throat into a sequence of discrete units. This is not because throat-clearing is one continuous sound. It is because such sounds are not speech and are therefore not able to be segmented into the sounds of speech. Speakers of English can separate keepout into the two words keep and out because they know the language. We do not generally pause between words (except to take a breath), even though we may think we do. Children learn- Sound Segments ing a language reveal this fact. A two-year-old child going down stairs heard his mother say, “hold on.” He replied, “I’m holing don, I’m holing don,” not knowing where the break between words occurred. In fact, word boundary misperceptions have changed the form of words historically. At an earlier stage of English, the word apron was napron. However, the phrase a napron was so often misperceived as an apron that the word lost its initial n. Some phrases and sentences that are clearly distinct when printed may be ambiguous when spoken. Read the following pairs aloud and see why we might misinterpret what we hear: grade A I scream The sun’s rays meet gray day ice cream The sons raise meat The lack of breaks between spoken words and individual sounds often makes us think that speakers of foreign languages run their words together, unaware that we do too. X-ray motion pictures of someone speaking make the absence of breaks very clear. One can see the tongue, jaw, and lips in continuous motion as the individual sounds are produced. Yet, if you know a language you have no difficulty segmenting the continuous sounds of speech. It doesn’t matter if there is an alphabet for the language or whether the listener can read and write. Everyone who knows a language knows how to segment sentences into words, and words into sounds. Identity of Speech Sounds By infinitesimal movements of the tongue countless different vowels can be produced, all of them in use among speakers of English who utter the same vowels no oftener than they make the same fingerprints. GEORGE BERNARD SHAW, 1950 It is truly amazing, given the continuity of the speech signal, that we are able to understand the individual words in an utterance. This ability is more surprising because no two speakers ever say the same word identically. The speech signal produced when one speaker says cat is not the same as that of another speaker’s cat. Even two utterances of cat by the same speaker will differ to some degree. Our knowledge of a language determines when we judge physically different sounds to be the same. We know which aspects of pronunciation are linguistically important and which are not. For example, if someone coughs in the middle of saying “How (cough) are you?” a listener will ignore the cough and interpret this simply as “How are you?” People speak at different pitch levels, at different rates of speed, and even with their heads encased in a helmet, like Darth Vader. However, such personal differences are not linguistically significant. Our linguistic knowledge makes it possible to ignore nonlinguistic differences in speech. Furthermore, we are capable of making sounds that we know are not speech sounds in our language. 
Many English speakers can make a clicking 191 192 CHAPTER 4 Phonetics: The Sounds of Language sound of disapproval that writers sometimes represent as tsk. This sound never occurs as part of an English word. It is even difficult for many English speakers to combine this clicking sound with other sounds. Yet clicks are speech sounds in Xhosa, Zulu, Sosotho, and Khoikhoi—languages spoken in southern Africa— just like the k or t in English. Speakers of those languages have no difficulty producing them as parts of words. Thus, tsk is a speech sound in Xhosa but not in English. The sound represented by the letters th in the word think is a speech sound in English but not in French. In general, languages differ to a greater or lesser degree in the inventory of speech sounds that words are built from. The science of phonetics attempts to describe all of the sounds used in all languages of the world. Acoustic phonetics focuses on the physical properties of sounds; auditory phonetics is concerned with how listeners perceive these sounds; and articulatory phonetics—the primary concern of this chapter—is the study of how the vocal tract produces the sounds of language. The Phonetic Alphabet The English have no respect for their language, and will not teach their children to speak it. They cannot spell it because they have nothing to spell it with but an old foreign alphabet of which only the consonants—and not all of them—have any agreed speech value. GEORGE BERNARD SHAW, Preface to Pygmalion, 1912 Orthography, or alphabetic spelling, does not represent the sounds of a language in a consistent way. To be scientific—and phonetics is a science—we must devise a way for the same sound to be spelled with the same letter every time, and for any letter to stand for the same sound every time. To see that ordinary spelling with our Roman alphabet is woefully inadequate for the task, consider sentences such as: Did he believe that Caesar could see the people seize the seas? The silly amoeba stole the key to the machine. The same sound is represented variously by e, ie, ae, ee, eo, ei, ea, y, oe, ey, and i. On the other hand, consider: My father wanted many a village dame badly. Here the letter a represents the various sounds in father, wanted, many, and so on. Making the spelling waters yet muddier, we find that a combination of letters may represent a single sound: shoot either coat character deal glacial Thomas rough theater physics nation plain Or, conversely, the single letter x, when not pronounced as z, usually stands for the two sounds ks as in sex (you may have to speak aloud to hear that sex is pronounced seks). Sound Segments Some letters have no sound in certain words (so-called silent letters): mnemonic pterodactyl psychology bough autumn write sword lamb resign hole debt island ghost corps gnaw knot Or, conversely, there may be no letter to represent sounds that occur. In many words, the letter u represents a y sound followed by a u sound: cute fume use (sounds like kyute; compare: coot) (sounds like fyume; compare: fool) (sounds like yuse; compare: Uzbekistan) Throughout several centuries English scholars have advocated spelling reform. George Bernard Shaw complained that spelling was so inconsistent that fish could be spelled ghoti—gh as in tough, o as in women, and ti as in nation. Nonetheless, spelling reformers failed to change our spelling habits, and it took phoneticians to invent an alphabet that absolutely guaranteed a one sound–one symbol correspondence. 
There could be no other way to study the sounds of all human languages scientifically. In 1888 members of the International Phonetic Association developed a phonetic alphabet to symbolize the sounds of all languages. They utilized both ordinary letters and invented symbols. Each character of the alphabet had exactly one value across all of the world’s languages. Someone who knew this alphabet would know how to pronounce a word written in it, and upon hearing a word pronounced, would know how to write it using the alphabetic symbols.

The inventors of this International Phonetic Alphabet, or IPA, knew that a phonetic alphabet should include just enough symbols to represent the fundamental sounds of all languages. Table 4.1 is a list of the IPA symbols that we will use to represent English speech sounds. The symbols do not tell us everything about the sounds, which may vary from person to person and which may depend on their position in a word. They are not all of the phonetic symbols needed for English, but they will suffice for our purposes. When we discuss the sounds in more detail later in the chapter, we will add appropriate symbols. From now on we will enclose phonetic symbols in square brackets [ ] to distinguish them from ordinary letters.

TABLE 4.1 | A Phonetic Alphabet for English Pronunciation

Consonants
p   pill       t   till       k   kill
b   bill       d   dill       g   gill
m   mill       n   nil        ŋ   ring
f   feel       s   seal       h   heal
v   veal       z   zeal       l   leaf
θ   thigh      tʃ  chill      r   reef
ð   thy        dʒ  gin        j   you
ʃ   shill      ʍ   which      w   witch
ʒ   measure

Vowels
i   beet       ɪ   bit
e   bait       ɛ   bet
u   boot       ʊ   foot
o   boat       ɔ   bore
æ   bat        a   pot/bar
ʌ   butt       ə   sofa
aɪ  bite       aʊ  bout
ɔɪ  boy

The symbol [ə] in sofa toward the bottom right of the chart is called a schwa. We use it to represent vowels in syllables that are not emphasized in speaking and whose duration is very short, such as general, about, reader, etc. The schwa is pronounced with the mouth in a neutral position and is a brief, colorless vowel. The schwa is reserved for the vowel sound in all reduced syllables, even though its pronunciation may vary slightly according to its position in the word and who is speaking. All other vowel symbols in the chart occur in syllables that receive at least some emphasis.

Speakers from different parts of the country may pronounce some words differently. For example, some of you may pronounce the words which and witch identically. If you do, the initial sound of both words is symbolized by [w] in the chart. If you don’t, the breathy wh of which is represented by [ʍ]. Some speakers of English pronounce bought and pot with the same vowel; others pronounce them with the vowel sounds in bore and bar, respectively. We have therefore listed both words in the chart of symbols.

It is difficult to include all the phonetic symbols needed to represent all differences in English. There may be sounds in your speech that are not represented, and vice versa, but that’s okay. There are many varieties of English. The versions spoken in England, in Australia, in Ireland, and in India, among others, differ in their pronunciations. And even within American English, phonetic differences exist among the many dialects, as we discuss in chapter 9.

The symbols in Table 4.1 are IPA symbols with one small exception. The IPA uses an upside-down “r” (ɹ) for the English sound r. We, and many writers, prefer the right side up symbol r for clarity when writing for an English-reading audience.
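To see the one sound–one symbol idea at work, the key words of Table 4.1 can be treated as a lookup table. The short Python sketch below is our own illustration, not part of the text; the transcriptions [bʌs] for bus and [pɪl] for pill are built from the chart’s symbols, and because each symbol always has the same value, every part of a transcription can be read off independently.

    # Our illustration of the one sound-one symbol principle, using a few
    # key words from Table 4.1. Only one-character symbols are used here,
    # so a transcription can be read symbol by symbol.

    key_words = {
        "b": "bill", "ʌ": "butt", "s": "seal",
        "p": "pill", "ɪ": "bit",  "l": "leaf",
    }

    def gloss(transcription):
        """For each symbol, name the Table 4.1 key word that contains it."""
        return [(symbol, key_words[symbol]) for symbol in transcription]

    print(gloss("bʌs"))  # [('b', 'bill'), ('ʌ', 'butt'), ('s', 'seal')]
    print(gloss("pɪl"))  # [('p', 'pill'), ('ɪ', 'bit'), ('l', 'leaf')]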
Apart from “r,” some writers use different symbols for other sounds that once were traditional for transcribing American English. You may encounter these in other books. Here are some equivalents:

IPA    Alternative
ʃ      š
ʒ      ž
tʃ     č
dʒ     ǰ
ʊ      u

Using the IPA symbols, we can now unambiguously represent the pronunciation of words. For example, in the six words below, ou represents six distinct vowel sounds; the gh is silent in all but rough, where it is pronounced [f]; the th represents a single sound, either [θ] or [ð], and the l in would is also silent. However, the phonetic transcription gives us the actual pronunciation.

Spelling    Pronunciation
though      [ðo]
thought     [θɔt]
rough       [rʌf]
bough       [baʊ]
through     [θru]
would       [wʊd]

Articulatory Phonetics

The voice is articulated by the lips and the tongue. . . . Man speaks by means of the air which he inhales into his entire body and particularly into the body cavities. When the air is expelled through the empty space it produces a sound, because of the resonances in the skull. The tongue articulates by its strokes; it gathers the air in the throat and pushes it against the palate and the teeth, thereby giving the sound a definite shape. If the tongue would not articulate each time, by means of its strokes, man would not speak clearly and would only be able to produce a few simple sounds.
HIPPOCRATES (460–377 b.c.e.)

The production of any sound involves the movement of air. Most speech sounds are produced by pushing lung air through the vocal cords—a pair of thin membranes—up the throat, and into the mouth or nose, and finally out of the body.

A brief anatomy lesson is in order. The opening between the vocal cords is the glottis and is located in the voice box or larynx, pronounced “lair rinks.” The tubular part of the throat above the larynx is the pharynx (rhymes with larynx). What sensible people call “the mouth,” linguists call the oral cavity to distinguish it from the nasal cavity, which is the nose and the plumbing that connects it to the throat, plus your sinuses. Finally there are the tongue and the lips, both of which are capable of rapid movement and shape changing. All of these together comprise the vocal tract. Differing vocal tract shapes result in the differing sounds of language. Figure 4.1 should make these descriptions clearer. (The vocal cords and larynx are not specifically labeled in the figure.)

Consonants

The sounds of all languages fall into two classes: consonants and vowels. Consonants are produced with some restriction or closure in the vocal tract that impedes the flow of air from the lungs. In phonetics, the terms consonant and vowel refer to types of sounds, not to the letters that represent them. In speaking of the alphabet, we may call “a” a vowel and “c” a consonant, but that means only that we use the letter “a” to represent vowel sounds and the letter “c” to represent consonant sounds.

Place of Articulation

Lolita, light of my life, fire of my loins. My sin, my soul. Lo-lee-ta: the tip of the tongue taking a trip of three steps down the palate to tap, at three, on the teeth. Lo. Lee. Ta.
VLADIMIR NABOKOV, Lolita, 1955

We classify consonants according to where in the vocal tract the airflow restriction occurs, called the place of articulation.
Movement of the tongue and lips creates the constriction, reshaping the oral cavity in various ways to produce the various sounds. We are about to discuss the major places of articulation. As you read the description of each sound class, refer to Table 4.1, which provides key words containing the sounds. As you pronounce these words, try to feel which articulators are moving. (Watching yourself in a mirror helps, too.) Look at Figure 4.1 for help with the terminology.

FIGURE 4.1 | The vocal tract. Places of articulation: 1. bilabial; 2. labiodental; 3. interdental; 4. alveolar; 5. (alveo)palatal; 6. velar; 7. uvular; 8. glottal.

Bilabials [p] [b] [m]
When we produce a [p], [b], or [m] we articulate by bringing both lips together.

Labiodentals [f] [v]
We also use our lips to form [f] and [v]. We articulate these sounds by touching the bottom lip to the upper teeth.

Interdentals [θ] [ð]
These sounds, both spelled th, are pronounced by inserting the tip of the tongue between the teeth. However, for some speakers the tongue merely touches behind the teeth, making a sound more correctly called dental. Watch yourself in a mirror and say think [θɪŋk] or these [ðiz] and see where your tongue tip goes.

Alveolars [t] [d] [n] [s] [z] [l] [r]
All seven of these sounds are pronounced with the tongue raised in various ways to the alveolar ridge.
• For [t,d,n] the tongue tip is raised and touches the ridge, or slightly in front of it.
• For [s,z] the sides of the front of the tongue are raised, but the tip is lowered so that air escapes over it.
• For [l] the tongue tip is raised while the rest of the tongue remains down, permitting air to escape over its sides. Hence, [l] is called a lateral sound. You can feel this in the “l’s” of Lolita.
• For [r] [IPA ɹ] most English speakers either curl the tip of the tongue back behind the alveolar ridge, or bunch up the top of the tongue behind the ridge. As opposed to [l], air escapes through the central part of the mouth when [r] is articulated. It is a central liquid.

Palatals [ʃ] [ʒ] [tʃ] [dʒ] [j]
For these sounds, which occur in mission [mɪʃən], measure [mɛʒər], cheap [tʃip], judge [dʒʌdʒ], and yoyo [jojo], the constriction occurs by raising the front part of the tongue to the palate.

Velars [k] [g] [ŋ]
Another class of sounds is produced by raising the back of the tongue to the soft palate or velum. The initial and final sounds of the words kick [kɪk] and gig [gɪg] and the final sounds of the words back [bæk], bag [bæg], and bang [bæŋ] are all velar sounds.

Uvulars [ʀ] [q] [ɢ]
Uvular sounds are produced by raising the back of the tongue to the uvula, the fleshy protuberance that hangs down in the back of our throats. The r in French is often a uvular trill symbolized by [ʀ]. The uvular sounds [q] and [ɢ] occur in Arabic. These sounds do not ordinarily occur in English.

Glottals [h] [ʔ]
The sound of [h] is from the flow of air through the open glottis, and past the tongue and lips as they prepare to pronounce a vowel sound, which always follows [h]. If the air is stopped completely at the glottis by tightly closed vocal cords, the sound upon release of the cords is a glottal stop [ʔ]. The interjection uh-oh, that you hope never to hear your dentist utter, has two glottal stops and is spelled phonetically [ʔʌʔo].
Table 4.2 summarizes the classification of these English consonants by their place of articulation. Manner of Articulation We have described several classes of consonants according to their place of articulation, yet we are still unable to distinguish the sounds in each class from one another. What distinguishes [p] from [b] or [b] from [m]? All are bilabial sounds. What is the difference between [t], [d], and [n], which are all alveolar sounds? 197 198 CHAPTER 4 Phonetics: The Sounds of Language TABLE 4.2 | Place of Articulation of English Consonants Bilabial Labiodental Interdental Alveolar Palatal Velar Glottal p f θ t ʃ k h b v ð d ʒ g ʔ m n tʃ ŋ s dʒ z l r Speech sounds also vary in the way the airstream is affected as it flows from the lungs up and out of the mouth and nose. It may be blocked or partially blocked; the vocal cords may vibrate or not vibrate. We refer to this as the manner of articulation. Voiced and Voiceless Sounds Sounds are voiceless when the vocal cords are apart so that air flows freely through the glottis into the oral cavity. [p] and [s] in super [supər] are two of the several voiceless sounds of English. If the vocal cords are together, the airstream forces its way through and causes them to vibrate. Such sounds are voiced. [b] and [z] in buzz [bʌz] are two of the many voiced sounds of English. To get a sense of voicing, try putting a finger in each ear and say the voiced “z-z-z-z-z.” You can feel the vibrations of the vocal cords. If you now say the voiceless “s-s-s-s-s,” you will not sense these vibrations (although you might hear a hissing sound). When you whisper, you are making all the speech sounds voiceless. Try it! Whisper “Sue” and “zoo.” No difference, right? The voiced/voiceless distinction is very important in English. This phonetic property distinguishes the words in word pairs like the following: rope/robe [rop]/[rob] fate/fade [fet]/[fed] rack/rag [ræk]/[ræg] wreath/wreathe [riθ]/[rið] The first word of each pair ends with a voiceless sound and the second word with a voiced sound. All other aspects of the sounds in each word pair are identical; the position of the lips and tongue is the same. The voiced/voiceless distinction also occurs in the following pairs, where the first word begins with a voiceless sound and the second with a voiced sound: fine/vine [faɪn]/[vaɪn] peat/beat [pit]/[bit] seal/zeal [sil/zil] tote/dote [tot]/[dot] choke/joke [tʃok]/[dʒok] kale/gale [kel]/[gel] In our discussion of [p], we did not distinguish the initial sound in the word pit from the second sound in the word spit. There is, however, a phonetic differ- Articulatory Phonetics ence in these two voiceless stops. During the production of voiceless sounds, the glottis is open and the air flows freely between the vocal cords. When a voiceless sound is followed by a voiced sound such as a vowel, the vocal cords must close so they can vibrate. Voiceless sounds fall into two classes depending on the timing of the vocal cord closure. When we say pit, the vocal cords remain open for a very short time after the lips come apart to release the p. We call this p aspirated because a brief puff of air escapes before the glottis closes. When we pronounce the p in spit, however, the vocal cords start vibrating as soon as the lips open. That p is unaspirated. Hold your palm about two inches in front of your lips and say pit. You will feel a puff of air, which you will not feel when you say spit. 
The t in tick and the k in kin are also aspirated voiceless stops, while the t in stick and the k in skin are unaspirated. Finally, in the production of the voiced [b] (and [d] and [g]), the vocal cords are vibrating throughout the closure of the lips, and continue to vibrate during the vowel sound that follows after the lips part. We indicate aspirated sounds by writing the phonetic symbol with a raised h, as in the following examples: pool tale kale [pʰul] [tʰel] [kʰel] spool stale scale [spul] [stel] [skel] Figure 4.2 shows in diagrammatic form the timing of lip closure in relation to the state of the vocal cords. Nasal and Oral Sounds The voiced/voiceless distinction differentiates the bilabials [b] and [p]. The sound [m] is also a bilabial, and it is voiced. What distinguishes it from [b]? Figure 4.1 shows the roof of the mouth divided into the (hard) palate and the soft palate (or velum). The palate is a hard bony structure at the front of the mouth. You can feel it with your thumb. First, wash your hands. Now, slide your thumb along the hard palate back toward the throat; you will feel the velum, which is where the flesh becomes soft and pliable. The velum terminates in the uvula, which you can see in a mirror if you open your mouth wide and say “aaah.” The velum is movable, and when it is raised all the way to touch the back of the throat, the passage through the nose is cut off and air can escape only through the mouth. Sounds produced with the velum up, blocking the air from escaping through the nose, are oral sounds, because the air can escape only through the oral cavity. Most sounds in all languages are oral sounds. When the velum is not in its raised position, air escapes through both the nose and the mouth. Sounds produced this way are nasal sounds. The sound [m] is a nasal consonant. Thus [m] is distinguished from [b] because it is a nasal sound, whereas [b] is an oral sound. 199 200 CHAPTER 4 Phonetics: The Sounds of Language FIGURE 4.2 | Timing of lip closure and vocal-cord vibrations for voiced, voiceless unaspirated, and voiceless aspirated bilabial stops [b], [p], [ph]. The diagrams in Figure 4.3 show the position of the lips and the velum when [m], [b], and [p] are articulated. The sounds [p], [b], and [m] are produced by stopping the airflow at the lips; [m] and [b] differ from [p] by being voiced; [m] differs from [b] by being nasal. (If you ever wondered why people sound FIGURE 4.3 | Position of lips and velum for m (lips together, velum down) and b, p (lips together, velum up). Articulatory Phonetics TABLE 4.3 | Four Classes of Speech Sounds Voiced Voiceless Oral Nasal bdg ptk mnŋ * *Nasal consonants in English are usually voiced. Both voiced and voiceless nasal sounds occur in other languages. “nasally” when they have a cold, it’s because excessive mucous production prevents the velum from closing properly during speech.) The same oral/nasal difference occurs in raid [red] and rain [ren], rug [rʌg] and rung [rʌ ŋ]. The velum is raised in the production of [d] and [g], preventing the air from flowing through the nose, whereas for [n] and [ŋ] the velum is down, allowing the air out through both the nose and the mouth when the closure is released. The sounds [m], [n], and [ŋ] are therefore nasal sounds, and [b], [d], and [g] are oral sounds. The presence or absence of these phonetic features—nasal and voiced—permit the division of all speech sounds into four classes: voiced, voiceless, nasal, and oral, as shown in Table 4.3. 
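The four classes in Table 4.3 amount to sorting these consonants by two yes/no features, voiced and nasal. Here is a brief sketch of our own (not from the text) that spells out the grouping for the stops and nasals just discussed.

    # Our sketch of the four-way classification in Table 4.3, using the
    # two features "voiced" and "nasal" for the consonants discussed.

    features = {
        "b": (True, False), "d": (True, False), "g": (True, False),
        "p": (False, False), "t": (False, False), "k": (False, False),
        "m": (True, True), "n": (True, True), "ŋ": (True, True),
        # English nasal consonants are usually voiced, as the text notes.
    }

    def members(voiced, nasal):
        return [c for c, (v, n) in features.items() if v == voiced and n == nasal]

    print("voiced oral:   ", members(True, False))   # ['b', 'd', 'g']
    print("voiceless oral:", members(False, False))  # ['p', 't', 'k']
    print("nasal:         ", members(True, True))    # ['m', 'n', 'ŋ']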
We now have three ways of classifying consonants: by voicing, by place of articulation, and by nasalization. For example, [p] is a voiceless, bilabial, oral sound; [n] is a voiced, alveolar, nasal sound, and so on. Stops [p] [b] [m] [t] [d] [n] [k] [g] [ŋ] [tʃ] [dʒ] [ʔ] We are seeing finer and finer distinctions of speech sounds. However, both [t] and [s] are voiceless, alveolar, oral sounds. What distinguishes them? After all, tack and sack are different words. Stops are consonants in which the airstream is completely blocked in the oral cavity for a short period (tens of milliseconds). All other sounds are continuants. The sound [t] is a stop, but the sound [s] is not, and that is what makes them different speech sounds. • [p], [b], and [m] are bilabial stops, with the airstream stopped at the mouth by the complete closure of the lips. • [t], [d], and [n] are alveolar stops; the airstream is stopped by the tongue, making a complete closure at the alveolar ridge. • [k], [g], and [ŋ] are velar stops, with the complete closure at the velum. • [tʃ] and [dʒ] are palatal affricates with complete stop closures. They will be further classified later. • [ʔ] is a glottal stop; the air is completely stopped at the glottis. We have been discussing the sounds that occur in English. A variety of stop consonants occur in other languages but not in English. For example, in Quechua, spoken in Bolivia and Peru, uvular stops occur, where the back of the tongue is raised and moved rearward to form a complete closure with the uvula. The phonetic symbol [q] denotes the voiceless version of this stop, which is the 201 202 CHAPTER 4 Phonetics: The Sounds of Language initial sound in the name of the language “Quechua.” The voiced uvular stop [ɢ] also occurs in Quechua. Fricatives [f] [v] [θ] [ð] [s] [z] [ʃ] [ʒ] [x] [ɣ] [h] In the production of some continuants, the airflow is so severely obstructed that it causes friction, and the sounds are therefore called fricatives. The first of the following pairs of fricatives are voiceless; the second voiced. • [f] and [v] are labiodental fricatives; the friction is created at the lips and teeth, where a narrow passage permits the air to escape. • [θ] and [ð] are interdental fricatives, represented by th in thin and then. The friction occurs at the opening between the tongue and teeth. • [s] and [z] are alveolar fricatives, with the friction created at the alveolar ridge. • [ ʃ] and [ʒ] are palatal fricatives, and contrast in such pairs as mission [mɪʃən] and measure [mԑʒər]. They are produced with friction created as the air passes between the tongue and the part of the palate behind the alveolar ridge. In English, the voiced palatal fricative never begins words except for foreign words such as genre. The voiceless palatal fricative begins the words shoe [ʃu] and sure [ʃur] and ends the words rush [rʌʃ] and push [pʊʃ]. • [x] and [ɣ] denote velar fricatives. They are produced by raising the back of the tongue toward, but not quite touching, the velum. The friction is created as air passes through that narrow passage, and the sound is not unlike clearing your throat. These sounds do not commonly occur in English, though in some forms of Scottish English the final sound of loch meaning “lake” is [x]. In rapid speech the g in wagon may be pronounced [ɣ]. The final sound of the composer J. S. Bach’s name is also pronounced [x], which is a common sound in German. • [h] is a glottal fricative. 
Its relatively weak sound comes from air passing through the open glottis and pharynx. All fricatives are continuants. Although the airstream is obstructed as it passes through the oral cavity, it is not completely stopped. Affricates [tʃ] [dʒ] These sounds are produced by a stop closure followed immediately by a gradual release of the closure that produces an effect characteristic of a fricative. The palatal sounds that begin and end the words church and judge are voiceless and voiced affricates, respectively. Affricates are not continuants because of the initial stop closure. Liquids [l] [r] In the production of the sounds [l] and [r], there is some obstruction of the airstream in the mouth, but not enough to cause any real constriction or friction. These sounds are liquids. They are articulated differently, as described in the earlier alveolar section, but are grouped as a class because they are acoustically similar. Due to that similarity, foreign speakers of English may confuse the two sounds and substitute one for the other. It also accounts for Dennis’s confusion in the cartoon. Articulatory Phonetics “Dennis the Menace” © Hank Ketcham. Reprinted with permission of North America Syndicate. Glides [j] [w] The sounds [j] and [w], the initial sounds of you [ju] and we [wi], are produced with little obstruction of the airstream. They are always followed directly by a vowel and do not occur at the end of words (don’t be fooled by spelling; words ending in y or w like say and saw end in a vowel sound). After articulating [j] or [w], the tongue glides quickly into place for pronouncing the next vowel, hence the term glide. The glide [j] is a palatal sound; the blade of the tongue (the front part minus the tip) is raised toward the hard palate in a position almost identical to that in producing the vowel sound [i] in the word beat [bit]. The glide [w] is produced by both rounding the lips and simultaneously raising the back of the tongue toward the velum. It is thus a labio-velar glide. Where speakers of English have different pronunciations for the words which and witch, the labio-velar glide in the first word is voiceless, symbolized as [ʍ] (an upside-down w). The position of the tongue and the lips for [w] is similar to that for producing the vowel sound [u] in suit [sut]. 203 204 CHAPTER 4 Phonetics: The Sounds of Language Approximants In some books the sounds [w], [j], [r], and [l] are alternatively called approximants because the articulators approximate a frictional closeness, but no actual friction occurs. The first three are central approximants, whereas [l] is a lateral approximant. Although in this chapter we focus on the sounds of English, the IPA has symbols and classifications for all the sounds of the world’s languages. For example, many languages have sounds that are referred to as trills, and others have clicks. These are described in the following sections. Trills and flaps The “r”-sound of many languages may be different from the English [r]. A trilled “r” is produced by rapid vibrations of an articulator. An alveolar trill, as in the Spanish word for dog, perro, is produced by vibrating the tongue tip against the alveolar ridge. Its IPA symbol is [r], strictly speaking, though we have co-opted [r] for the English “r.” Many French speakers articulate the initial sound of rouge as a uvular trill, produced by vibrating the uvula. Its IPA symbol is [ʀ]. Another “r”-sound is called a flap and is produced by a flick of the tongue against the alveolar ridge. 
It sounds like a very fast d. It occurs in Spanish in words like pero meaning “but.” It may also occur in British English in words such as very. Its IPA symbol is [ɾ]. Most American speakers produce a flap instead of a [t] or [d] in words like writer and rider, which then sound identical and are spelled phonetically as [raɪɾər]. Clicks These “exotic” sounds are made by moving air in the mouth between various articulators. The sound of disapproval often spelled tsk is an alveolar click that occurs in several languages of southern Africa such as Zulu. A lateral click, which is like the sound one makes to encourage a horse, occurs in Xhosa. In fact, the ‘X’ in Xhosa stands for that particular speech sound. Phonetic Symbols for American English Consonants We are now capable of distinguishing all of the consonant sounds of English via the properties of voicing, nasality, and place and manner of articulation. For example, [f] is a voiceless, (oral), labiodental fricative; [n] is a (voiced), nasal, alveolar stop. The parenthesized features are usually not mentioned because they are redundant; all sounds are oral unless nasal is specifically mentioned, and all nasals are voiced in English. Table 4.4 lists the consonants by their phonetic features. The rows stand for manner of articulation and the columns for place of articulation. The entries are sufficient to distinguish all words in English from one another. For example, using [p] for both aspirated and unaspirated voiceless bilabial stops, and [b] for the voiced bilabial stop, suffices to differentiate the words pit, spit, and bit. If a narrower phonetic transcription of these words is desired, the symbol [pʰ] can be used to indicate aspiration giving us [pʰɪt], [spɪt], [bɪt]. By “narrow transcription” we mean one that indicates all the phonetic details of a sound, even those that do not affect the word. Examples of words in which these sounds occur are given in Table 4.5. Articulatory Phonetics 205 TABLE 4.4 | Some Phonetic Symbols for American English Consonants Bilabial Labiodental Interdental Alveolar Palatal Velar Glottal ʔ Stop (oral) voiceless voiced p b t d k g Nasal (voiced) m n ŋ Fricative voiceless voiced f v θ ð s z ʃ ʒ Affricate voiceless voiced h tʃ dʒ Glide voiceless voiced ʍ w ʍ w j Liquid (voiced) (central) (lateral) r l TABLE 4.5 | Examples of Consonants in English Words Bilabial Labiodental Interdental Alveolar Palatal Velar Glottal (ʔ)uh-(ʔ)oh Stop (oral) voiceless voiced pie buy tie die kite guy Nasal (voiced) my night sing Fricative voiceless voiced fine vine thigh thy sue zoo Affricate voiceless voiced Glide voiceless voiced Liquid (voiced) (central) (lateral) shoe measure high cheese jump which wipe you rye lye which wipe 206 CHAPTER 4 Phonetics: The Sounds of Language Vowels Higgins: Pickering: Higgins: Tired of listening to sounds? Yes. It’s a fearful strain. I rather fancied myself because I can pronounce twenty-four distinct vowel sounds, but your hundred and thirty beat me. I can’t hear a bit of difference between most of them. Oh, that comes with practice. You hear no difference at first, but you keep on listening and presently you find they’re all as different as A from B. GEORGE BERNARD SHAW, Pygmalion, 1912 Vowels are produced with little restriction of the airflow from the lungs out the mouth and/or the nose. The quality of a vowel depends on the shape of the vocal tract as the air passes through. 
Different parts of the tongue may be high or low in the mouth; the lips may be spread or pursed; the velum may be raised or lowered. Vowel sounds carry pitch and loudness; you can sing vowels or shout vowels. They may be longer or shorter in duration. Vowels can stand alone—they can be produced without consonants before or after them. You can say the vowels of beat [bit], bit [bɪt], or boot [but], for example, without the initial [b] or the final [t], but you cannot say a [b] or a [t] alone without at least a little bit of vowel sound. Linguists can describe vowels acoustically or electronically. We will discuss that topic in chapter 8. In this chapter we describe vowels by their articulatory features as we did with consonants. Just as we say a [d] is pronounced by raising the tongue tip to the alveolar ridge, we say an [i] is pronounced by raising the body of the tongue toward the palate. With a [b], the lips come together; for an [ӕ] (the vowel in cat) the tongue is low in the mouth with the tongue tip forward, behind the front teeth. If you watch a side view of an X-ray (that’s -ray, not -rated!) video of someone’s tongue moving during speech, you will see various parts of the tongue rise up high and fall down low; at the same time you will see it move forward and backward in the mouth. These are the dimensions over which vowels are produced. We classify vowels according to three questions: 1. 2. 3. How high or low in the mouth is the tongue? How forward or backward in the mouth is the tongue? Are the lips rounded (pursed) or spread? Tongue Position The upper two diagrams in Figure 4.4 show that the tongue is high in the mouth in the production of the vowels [i] and [u] in the words he [hi] and who [hu]. In he the front part (but not the tip) of the tongue is raised; in who it is the back of the tongue. (Prolong the vowels of these words and try to feel the raised part of your tongue.) These are both high vowels, and the [i] is a high front vowel while the [u] is a high back vowel. To produce the vowel sound [a] of hah [ha], the back of the tongue is low in the mouth, as the lower diagram in Figure 4.4 shows. (The reason a doctor Articulatory Phonetics FIGURE 4.4 | Position of the tongue in producing the vowels in he, who, and hah. examining your throat may ask you to say “ah” is that the tongue is low and easy to see over.) This vowel is therefore a low back vowel. The vowels [ɪ] and [ʊ] in the words hit [hɪt] and put [pʰʊt] are similar to those in heat [hit] and hoot [hut] with slightly lowered tongue positions. The vowel [æ] in hack [hæk] is produced with the front part of the tongue low in the mouth, similar to the low vowel [a], but with the front rather than the back part of the tongue lowered. Say “hack, hah, hack, hah, hack, hah . . .” and you should feel your tongue moving forward and back in the low part of your mouth. Thus [æ] is a low front vowel. The vowels [e] and [o] in bait [bet] and boat [bot] are mid vowels, produced by raising the tongue to a position midway between the high and low vowels just discussed. [ɛ] and [ɔ] in the words bet [bɛt] and bore [bɔr] are also mid vowels, produced with a slightly lower tongue position than [e] and [o], respectively. Here, [e] and [ɛ] are front; [o] and [ɔ] are back. To produce the vowel [ʌ] in the word butt [bʌt], the tongue is not strictly high nor low, front nor back. It is a lower midcentral vowel. 
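The same kind of feature listing works for the tongue-position dimensions just described. The sketch below is again only illustrative; it covers just the vowels discussed so far, and the height and front/back labels are informal restatements of the descriptions in the text.

```python
# Illustrative sketch: each vowel paired with a tongue-height value and a
# front/back value, following the descriptions given above.
VOWELS = {
    "i": ("high", "front"),    # he
    "ɪ": ("high", "front"),    # hit (slightly lowered)
    "u": ("high", "back"),     # who
    "ʊ": ("high", "back"),     # put (slightly lowered)
    "e": ("mid",  "front"),    # bait
    "ɛ": ("mid",  "front"),    # bet
    "o": ("mid",  "back"),     # boat
    "ɔ": ("mid",  "back"),     # bore
    "æ": ("low",  "front"),    # hack
    "a": ("low",  "back"),     # hah
    "ʌ": ("mid",  "central"),  # butt (a lower midcentral vowel)
}

def describe(vowel: str) -> str:
    height, backness = VOWELS[vowel]
    return f"[{vowel}] is a {height} {backness} vowel"

print(describe("i"))  # [i] is a high front vowel
print(describe("a"))  # [a] is a low back vowel
```

Lip rounding, taken up below, would add a third value to each entry.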
The schwa vowel [ə], which occurs as the first sound in about [əbaʊt], or the final sound of sofa [sofə], is also articulated with the tongue in a more or less neutral position between the extremes of high/low, front/back. The schwa is used mostly to represent unstressed vowels. (We will discuss stress later.)

Lip Rounding

Vowels also differ as to whether the lips are rounded or spread. The back vowels [u], [ʊ], [o], and [ɔ] in boot, put, boat, and bore are the only rounded vowels in English. They are produced with pursed or rounded lips. You can get a feel for the rounding by prolonging the word who, as if you were an owl: whoooooooooo. Now pose for the camera and say cheese, only say it with a prolonged vowel: cheeeeeeeeeeese. The high front [i] in cheese is unrounded, with the lips in the shape of a smile, and you can feel it or see it in a mirror. The low vowel [a] in the words bar, bah, and aha is the only (American) English back vowel that occurs without lip rounding.

Other languages may differ in whether or not they have rounded vowels. French and Swedish, for example, have front rounded vowels, which English lacks. English also lacks a high back unrounded vowel, but this sound occurs in Mandarin Chinese, Japanese, and the Cameroonian language FeʔFeʔ, among others. The IPA symbol for this vowel is [ɯ], and to show that roundedness is important, we note that in Mandarin Chinese the unrounded [sɯ] means "four," but the round [su] (like sue) means "speed."

Figure 4.5 shows the vowels based on tongue "geography." The position of the vowel relative to the horizontal axis is a measure of the vowel's front/back dimension. Its position relative to the vertical axis is a measure of tongue height. For example, we see that [i] is a high front vowel, [o] is a midback (rounded) vowel, and [ʌ] is a lower midcentral vowel, tending toward backness.

FIGURE 4.5 | Classification of American English vowels, arranged by tongue height (high, mid, low) and part of the tongue involved (front, central, back), with the rounded vowels indicated.

Diphthongs

A diphthong is a sequence of two vowel sounds. Diphthongs are present in the phonetic inventory of many languages, including English. The vowels we have studied so far are simple vowels, called monophthongs. The vowel sound in the word bite [baɪt], however, is the [a] vowel sound of father followed rapidly by the [ɪ] sound of fit, resulting in the diphthong [aɪ]. Similarly, the vowel in bout [baʊt] is [a] followed by the [ʊ] sound of put, resulting in [aʊ]. Another diphthong that occurs in English is the vowel sound in boy [bɔɪ], which is the vowel [ɔ] of bore followed by [ɪ], resulting in [ɔɪ]. The pronunciation of any of these diphthongs may vary from our description because of the diversity of English speakers.

To some extent the midvowels [e] and [o] may be diphthongized, especially in American English, though not in other varieties such as Irish English. Many linguists therefore denote these sounds as [eɪ] and [oʊ] as a narrower transcription. In this book we will stay with [e] and [o] for these vowel sounds.

Nasalization of Vowels

Vowels, like consonants, can be produced with a raised velum that prevents the air from escaping through the nose, or with a lowered velum that permits air to pass through the nasal passage. When the nasal passage is blocked, oral vowels result; when the nasal passage is open, nasal (or nasalized) vowels result.
In English, nasal vowels occur for the most part before nasal consonants in the same syllable, and oral vowels occur in all other places. The words bean, bone, bingo, boom, bam, and bang are examples of words that contain nasalized vowels. To show the nasalization of a vowel in a narrow phonetic transcription, an extra mark called a diacritic—the symbol ~ (tilde) in this case—is placed over the vowel, as in bean [bĩn] and bone [bõn]. In languages like French, Polish, and Portuguese, nasalized vowels occur without nasal consonants. The French word meaning “sound” is son [sõ]. The n in the spelling is not pronounced but indicates that the vowel is nasal. Tense and Lax Vowels Figure 4.5 shows that the vowel [i] has a slightly higher tongue position than [ɪ]. This is also true for [e] and [ɛ], [u] and [ʊ], and [o] and [ɔ]. The first vowel in each pair is generally produced with greater tension of the tongue muscles than its counterpart, and they are often a little longer in duration. These vowels can be distinguished by the features tense and lax, as shown in the first four rows of the following: Tense Lax i e u o a aɪ aʊ ɪ ɛ ʊ ɔ ɔɪ æ ʌ ə beat bait boot boat hah high how bit bet put bore boy hat hut about Additionally, [a] is a tense vowel as are the diphthongs [aɪ] and [aʊ], but the diphthong [ɔɪ] is lax as are [ӕ], [ʌ], and of course [ə]. Tense vowels may occur at the ends of words: [si], [se], [su], [so], [pa], [saɪ], and [haʊ] represent the English words see, say, sue, sew, pa, sigh, and how. Lax vowels mostly do not occur 209 210 CHAPTER 4 Phonetics: The Sounds of Language at the ends of words; [sɪ], [sɛ], [sʊ], [sæ], [sʌ], and [sə] are not possible words in English. (The one exception to this generalization is lax [ɔ] and its diphthong [ɔɪ], which occur in words such as [sɔ] (saw) and [sɔɪ] (soy).) Different (Tongue) Strokes for Different Folks The vowels in Figure 4.5 do not represent all the vowels of all English speakers. They may not represent your particular vowel set. If you speak British English, there’s a good chance that you have a low, back, rounded vowel in the word hot that the vowel chart lacks. Canadian English speakers pronounce the vowel in words like bite as [ʌɪ] rather than [aɪ]. Consonants, too, vary from region to region, if not from person to person. One person’s “alveolar” stops may technically be dental stops, with the tongue hard behind the upper front teeth. In Britain, the substitution of the glottal stop where an American might use a [t] or [d] is common. It’s very much the case throughout the English-speaking world that, as the old song goes, “I say ‘tomayto’ [təmeto], you say ‘tomahto’ [təmato],” and we lovers of language say “vive la différence.” Major Phonetic Classes Biologists divide life forms into larger and smaller classes. They may distinguish between animals and plants; or within animals, between vertebrates and invertebrates; and within vertebrates, between mammals and reptiles, and so on. Linguists describe speech sounds similarly. All sounds are consonant sounds or vowel sounds. Within consonants, all are voiced or unvoiced, and so on. All the classes of sounds described so far in this chapter combine to form larger, more general classes that are important in the patterning of sounds in the world’s languages. Noncontinuants and Continuants Stops and affricates belong to the class of noncontinuants. There is a total obstruction of the airstream in the oral cavity. Nasal stops are included although air does flow continuously out the nose. 
All other consonants, and all vowels, are continuants, in which the stream of air flows continuously out of the mouth. Obstruents and Sonorants The non-nasal stops, the fricatives, and the affricates form a major class of sounds called obstruents. The airstream may be fully obstructed, as in nonnasal stops and affricates, or nearly fully obstructed, as in the production of fricatives. Sounds that are not obstruents are sonorants. Vowels, nasal stops [m,n,ŋ], liquids [l,r], and glides [j,w] are all sonorants. They are produced with much less obstruction to the flow of air than the obstruents, which permits the air to resonate. Nasal stops are sonorants because, although the air is blocked in the mouth, it continues to resonate in the nasal cavity. Articulatory Phonetics Consonantal Obstruents, nasal stops, liquids, and glides are all consonants. There is some degree of restriction to the airflow in articulating these sounds. With glides ([j,w]), however, the restriction is minimal, and they are the most vowel-like, and the least consonant-like, of the consonants. Glides are even referred to as “semivowels” or “semi-consonants” in some books. In recognition of this fact linguists place the obstruents, nasal stops, and liquids in a subclass of consonants called consonantal, from which the glides are excluded. Here are some other terms used to form subclasses of consonantal sounds. These are not exhaustive, nor are they mutually exclusive (e.g., the interdentals belong to two subclasses). A full course in phonetics would note further classes that we omit. Labials [p] [b] [m] [f] [v] [w] [ʍ] Labial sounds are those articulated with the involvement of the lips. They include the class of bilabial sounds [p] [b] and [m], the labiodentals [f] and [v], and the labiovelars [w] and [ʍ]. Coronals [θ] [ð] [t] [d] [n] [s] [z] [ʃ] [ʒ] [tʃ] [dʒ] [l] [r] Coronal sounds are articulated by raising the tongue blade. Coronals include the interdentals [θ] [ð], the alveolars [t] [d] [n] [s] [z], the palatals [ʃ] [ʒ], the affricates [tʃ] [dʒ], and the liquids [l] [r]. Anteriors [p] [b] [m] [f] [v] [θ] [ð] [t] [d] [n] [s] [z] Anterior sounds are consonants produced in the front part of the mouth, that is, from the alveolar area forward. They include the labials, the interdentals, and the alveolars. Sibilants [s] [z] [ʃ] [ʒ] [tʃ] [dʒ] Another class of consonantal sounds is characterized by an acoustic rather than an articulatory property of its members. The friction created by sibilants produces a hissing sound, which is a mixture of high-frequency sounds. Syllabic Sounds Sounds that may function as the core of a syllable possess the feature syllabic. Clearly vowels are syllabic, but they are not the only sound class that anchors syllables. Liquids and nasals can also be syllabic, as shown by the words dazzle [dӕzl ̩], faker [fekr̩], rhythm [rɪðm̩], and button [bʌtn̩]. (The diacritic mark under the [l]̩ , [r̩], [m̩], and [n̩] is the notation for syllabic.) Placing a schwa [ə] before the syllabic liquid or nasal also shows that these are separate syllables. The four words could be written as [dӕzəl], [fekər], [rɪðəm], and [bʌtən]. We will use this transcription. Similarly, the vowel sound in words like bird and verb are sometimes written as a syllabic r, [br̩d] and [vr̩b]. For consistency we shall transcribe these words using the schwa—[bərd] and [vərb]—the only instances where a schwa represents a stressed vowel. 
Obstruents and glides are never syllabic sounds because they are always accompanied by a vowel, and that vowel functions as the syllabic core. 211 212 CHAPTER 4 Phonetics: The Sounds of Language Prosodic Features Length, pitch, and stress (or “accent”) are prosodic, or suprasegmental, features. They are features over and above the segmental values such as place or manner of articulation, thus the “supra” in suprasegmental. The term prosodic comes from poetry, where it refers to the metrical structure of verse. One of the essential characteristics of poetry is the placement of stress on particular syllables, which defines the versification of the poem. Speech sounds that are identical in their place or manner features may differ in length (duration). Tense vowels are slightly longer than lax vowels, but only by a few milliseconds. However, in some languages when a vowel is prolonged to around twice its normal length, it can make a difference between words. In Japanese the word biru [biru] with a regular i means “building,” but with the i doubled in length as in biiru, spelled phonetically as [biːru], the meaning is “beer.” (The colon-like ː is the IPA symbol for segment length or doubling.) In Japanese vowel length can make the difference between two words. Japanese, and many other languages such as Finnish and Italian, have long consonants that may contrast words. When a consonant is long, or doubled, either the closure or obstruction is prolonged. Pronounced with a short k, the word saki [saki] means “ahead” in Japanese; pronounced with a long k—prolonging the velar closure—the word sakki [sakːi] means “before.” In effect, the extended silence of the prolonged closure is meaningful in these languages. English is not a language in which vowel or consonant length can change a word. You might say “puleeeeeze” to emphasize your request, but the word is still please. You may also say in English “Whatttttt a dump!” to express your dismay at a hotel room, prolonging the t-closure, but the word what is not changed. When we speak, we also change the pitch of our voice. The pitch depends on how fast the vocal cords vibrate; the faster they vibrate, the higher the pitch. If the larynx is small, as in women and children, the shorter vocal cords vibrate faster and the pitch is higher, all other things being equal. That is why women and children have higher-pitched voices than men, in general. When we discuss tone languages in the next section, we will see that pitch may affect the meaning of a word. In many languages, certain syllables in a word are louder, slightly higher in pitch, and somewhat longer in duration than other syllables in the word. They are stressed syllables. For example, the first syllable of digest, the noun meaning “summation of articles,” is stressed, whereas in digest, the verb meaning “to absorb food,” the second syllable receives greater stress. Stress can be marked in several ways: for example, by putting an accent mark over the stressed vowel in the syllable, as in dígest versus digést. English is a “stress-timed” language. In general, at least one syllable is stressed in an English word. French is not a stress-timed language. The syllables have approximately the same loudness, length, and pitch. It is a “syllable-timed” language. 
When native English speakers attempt to speak French, they often stress syllables, so that native French speakers hear French with “an English Prosodic Features accent.” When French speakers speak English, they fail to put stress where a native English speaker would, and that contributes to what English speakers call a “French accent.” Tone and Intonation We have already seen how length and stress can make sounds with the same segmental properties different. In some languages, these differences make different words, such as the two digests. Pitch, too, can make a difference in certain languages. Speakers of all languages vary the pitch of their voices when they talk. The effect of pitch on a syllable differs from language to language. In English, it doesn’t matter whether you say cat with a high pitch or a low pitch. It will still mean “cat.” But if you say [ba] with a high pitch in Nupe (a language spoken in Nigeria), it will mean “to be sour,” whereas if you say [ba] with a low pitch, it will mean “to count.” Languages that use the pitch of individual vowels or syllables to contrast meanings of words are called tone languages. More than half the world’s languages are tone languages. There are more than one thousand tone languages spoken in Africa alone. Many languages of Asia, such as Mandarin Chinese, Burmese, and Thai, are tone languages. In Thai, for example, the same string of segmental sounds represented by [naː] will mean different things if one says the sounds with a low pitch, a midpitch, a high pitch, a falling pitch from high to low, or a rising pitch from low to high. Thai therefore has five linguistic tones, as illustrated as follows: (Diacritics are used to represent distinctive tones in the phonetic transcriptions.) [`] [-] [´] [ˆ] [ˇ] L M H HL LH low tone mid tone high tone falling tone rising tone [nàː] [nāː] [náː] [nâː] [nǎː] “a nickname” “rice paddy” “young maternal uncle or aunt” “face” “thick” There are two kinds of tones. If the pitch is level across the syllable, we have a register tone. If the pitch changes across the syllable, whether from high to low or vice versa, we have a contour tone. Thai has three level and two contour tones. Commonly, tone languages will have two or three register tones and possibly one or two contour tones. In a tone language it is not the absolute pitch of the syllables that is important but the relations among the pitches of different syllables. Thus men, women, and children with differently pitched voices can still communicate in a tone language. Tones generally have a lexical function, that is, they make a difference between words. But in some languages tones may also have a grammatical function, as in Edo spoken in midwestern Nigeria. The tone on monosyllabic verbs followed by a direct object indicates the tense and transitivity of the verb. Low 213 214 CHAPTER 4 Phonetics: The Sounds of Language tone means present tense, transitive; high tone means past tense, transitive, as illustrated here: òtà gbẽ̀ Ota write+PRES+TRANS Ota writes a book. òtà gbẽ́ Ota write+PAST+TRANS Ota wrote a book. èbé book èbé book In many tone languages we find a continual lowering of the absolute pitch on the tones throughout an utterance. The relative pitches remain the same, however. In the following sentence in Twi, spoken in Ghana, the relative pitch rather than the absolute pitch is important. “Kofi searches for a little food for his friend’s child.” hwe~hw”! Ko~fí LH L H a!dua~N H L ka~kra! ma~ L H L n~' a!da~mfo~ ba! 
L HL L H The actual pitches of these syllables would be rather different from each other, as shown in the following musical staff-like figure (the higher the number, the higher the pitch): 7 fê hw”! a! 6 kra! 5 Ko~ 4 3 2 1 hwe~ a! dua~N ka~ ba! ma~ n~' da~mfo~ The lowering of the pitch is called downdrift. In languages with downdrift, a high tone that occurs after a low tone, or a low tone after a high tone, is lower in pitch than the preceding similarly marked tone. Notice that the first high tone in the sentence is given the pitch value 7. The next high tone (which occurs after an intervening low tone) is 6; that is, it is lower in pitch than the first high tone. This example shows that in analyzing tones, just as in analyzing segments, all the physical properties need not be considered. Only essential features are important in language—in this case, whether the tone is high or low in relation to the other pitches. The absolute pitch is inessential. Speakers of tone languages are able to ignore the linguistically irrelevant absolute pitch differences between individual speakers and attend to the linguistically relevant relative pitch differences, much like speakers of non-tone languages ignore pitch altogether. Languages that are not tone languages, such as English, are called intonation languages. The pitch contour of the utterance varies, but in an intonation language as opposed to a tone language, pitch is not used to distinguish words Phonetic Symbols and Spelling Correspondences from each other. Intonation may affect the meaning of whole sentences, so that John is here spoken with falling pitch at the end is interpreted as a statement, but with rising pitch at the end, a question. We’ll have more to say about intonation in the next chapter. Phonetic Symbols and Spelling Correspondences “Family Circus” © Bil Keane, Inc. Reprinted with permission of King Features Syndicate. Table 4.6 shows the sound/spelling correspondences for American English consonants and vowels. (We have not given all possible spellings for every sound; however, these examples should help you relate English orthography to the English sound system.) We have included the symbols for the voiceless aspirated stops to illustrate that what speakers usually consider one sound—for example p—may occur phonetically as two sounds, [p], [pʰ]. Some of these pronunciations may differ from your own. For example, you may (or may not) pronounce the words cot and caught identically. In the form of English described here, cot and caught are pronounced differently, so cot is one 215 216 CHAPTER 4 Phonetics: The Sounds of Language of the examples of the vowel sound [a] as in car. Caught illustrates the vowel [ɔ] as in core. There will be other differences, too, because English is a worldwide language and is spoken in many forms in many countries. The English examples used in this book are a compromise among several varieties of American English, but this should not deter you. Our purpose is to teach phonetics in general, and to show you how phonetics might describe the speech sounds of any of the world’s languages with the proper symbols and diacritics. We merely use American English for illustration, and we provide the major phonetic symbols for American English to show you how such symbols may be used to describe the phonetics of any of the world’s languages. 
TABLE 4.6 | Phonetic Symbol/English Spelling Correspondences Consonants Symbol Examples p pʰ b m t tʰ d n k kʰ g ŋ f v s z θ ð ʃ ʒ tʃ tʃʰ dʒ l r j w ʍ h ʔ ɾ spit tip Lapp pit prick plaque appear bit tab brat bubble mitt tam smack Emmy camp comb stick pit kissed write tick intend pterodactyl attack Dick cad drip loved ride nick kin snow mnemonic Gnostic pneumatic know skin stick scat critique elk curl kin charisma critic mechanic close girl burg longer Pittsburgh sing think finger fat philosophy flat phlogiston coffee reef cough vat dove gravel sip skip psychology pass pats democracy scissors fasten deceive descent zip jazz razor pads kisses Xerox design lazy scissors maize thigh through wrath ether Matthew thy their weather lathe either shoe mush mission nation fish glacial sure measure vision azure casual decision rouge match rich righteous choke Tchaikovsky discharge judge midget George magistrate residual leaf feel call single reef fear Paris singer you yes feud use witch swim queen which where whale (for speakers who pronounce which differently than witch) hat who whole rehash bottle button glottal (for some speakers), (ʔ)uh-(ʔ)oh writer, rider, latter, ladder The “Phonetics” of Signed Languages TABLE 4.6 | (Continued) Vowels i ɪ e ɛ æ u ʊ ʌ o ɔ a ə aɪ aʊ ɔɪ beet beat be receive key believe amoeba people Caesar Vaseline serene bit consist injury bin women gate bait ray great eight gauge greyhound bet serenity says guest dead said pan act laugh comrade boot lute who sewer through to too two move Lou true suit put foot butcher could cut tough among oven does cover flood coat go beau grow though toe own sew caught stalk core saw ball awe auto cot father palm sergeant honor hospital melodic sofa alone symphony suppose melody bird verb the bite sight by buy die dye aisle choir liar island height sign about brown doubt coward sauerkraut boy oil The “Phonetics” of Signed Languages Earlier we noted that signed languages, like all other human languages, are governed by a grammatical system that includes syntactic and morphological rules. Signed languages are like spoken languages in another respect; signs can be broken down into smaller units analogous to the phonetic features discussed in this chapter. Just as spoken languages distinguish sounds according to place and manner of articulation, so signed languages distinguish signs according to the place and manner in which the signs are articulated by the hands. The signs of ASL, for example, are formed by three major features: 1. 2. 3. The configuration of the hand (handshape) The movement of the hand and arms toward or away from the body The location of the hands in signing space To illustrate how these features define a sign, the ASL sign meaning “arm” is a flat hand, moving to touch the upper arm. It has three features: flat hand, motion upward, upper arm. ASL has over 30 handshapes. But not all signed languages share the same handshapes, just as not all spoken languages share the same places of articulation (French lacks interdental stops; English lacks the uvular trill of French). For example, the T handshape of ASL does not occur in the European signed languages. Similarly, Chinese Sign Language has a handshape formed with an open hand with all fingers extended except the ring finger. ASL does not have this handshape. 217 218 CHAPTER 4 Phonetics: The Sounds of Language FIGURE 4.6 | Minimal contrasts illustrating major formational parameters. 
Reprinted by permission of the publisher from THE SIGNS OF LANGUAGE by Edward Klima and Ursula Bellugi, p. 42, Cambridge, Mass.: Harvard University Press, Copyright © 1979 by the President and Fellows of Harvard College. Movement can be either straight or in an arc. Secondary movements include wiggling or hooking fingers. Signs can also be unidirectional (moving in one direction) or bidirectional (moving in one direction and then back again). The location of signs is defined relative to the body or face and by whether the sign involves vertical movement, horizontal movement, or movement to or away from the body. As in spoken language, a change along one of these parameters can result in different words. Just as a difference in voicing or tone can result in different words in a spoken language, a change in location, handshape, or movement can Summary result in different signs with different meanings. For example, the sign meaning “father” differs from the sign meaning “fine” only in the place of articulation. Both signs are formed with a spread five-finger handshape, but the thumb touches the signer’s forehead in “father” and it touches his chest in “fine.” Figure 4.6 illustrates several sets of words that differ from each other along one or another of the phonetic parameters of ASL. There are two-handed and one-handed signs. One-handed signs are formed with the speaker’s dominant hand, whether left or right. Just as spoken languages have features that do not distinguish different words (e.g., consonant length in English), in ASL (and probably all signed languages), a difference in handedness does not affect the meaning of the sign. The parallels that exist in the organization of sounds and signs are not surprising when we consider that similar cognitive systems underlie both spoken and signed languages. Summary The science of speech sounds is called phonetics. It aims to provide the set of properties necessary to describe and distinguish all the sounds in human languages throughout the world. When we speak, the physical sounds we produce are continuous stretches of sound, which are the physical representations of strings of discrete linguistic segments. Knowledge of a language permits one to separate continuous speech into individual sounds and words. The discrepancy between spelling and sounds in English and other languages motivated the development of phonetic alphabets in which one letter corresponds to one sound. The major phonetic alphabet in use is the International Phonetic Alphabet (IPA), which includes modified Roman letters and diacritics, by means of which the sounds of all human languages can be represented. To distinguish between orthography (spelling) and phonetic transcriptions, we write the latter between square brackets, as in [fə̃nɛɾɪk] for phonetic. All English speech sounds come from the movement of lung air through the vocal tract. The air moves through the glottis (i.e., between the vocal cords), up the pharynx, through the oral (and possibly the nasal) cavity, and out the mouth or nose. Human speech sounds fall into classes according to their phonetic properties. All speech sounds are either consonants or vowels, and all consonants are either obstruents or sonorants. Consonants have some obstruction of the airstream in the vocal tract, and the location of the obstruction defines their place of articulation, some of which are bilabial, labiodental, alveolar, palatal, velar, uvular, and glottal. 
Consonants are further classified according to their manner of articulation. They may be voiced or voiceless, oral or nasal, long or short. They may be stops, fricatives, affricates, liquids, or glides. During the production of voiced sounds, the vocal cords are together and vibrating, whereas in voiceless sounds they are apart and not vibrating. Voiceless sounds may also be aspirated or unaspirated. In the production of aspirated sounds, the vocal cords remain apart for a brief time after the stop closure is released, resulting in a puff of air at the time of the 219 220 CHAPTER 4 Phonetics: The Sounds of Language release. Consonants may be grouped according to certain features to form larger classes such as labials, coronals, anteriors, and sibilants. Vowels form the nucleus of syllables. They differ according to the position of the tongue and lips: high, mid, or low tongue; front, central, or back of the tongue; rounded or unrounded lips. The vowels in English may be tense or lax. Tense vowels are slightly longer in duration than lax vowels. Vowels may also be stressed (longer, higher in pitch, and louder) or unstressed. Vowels, like consonants, may be nasal or oral, although most vowels in all languages are oral. Length, pitch, loudness, and stress are prosodic, or suprasegmental, features. They are imposed over and above the segmental values of the sounds in a syllable. In many languages, the pitch of the vowel in the syllable is linguistically significant. For example, two words with identical segments may contrast in meaning if one has a high pitch and another a low pitch. Such languages are tone languages. There are also intonation languages in which the rise and fall of pitch may contrast meanings of sentences. In English the statement Mary is a teacher will end with a fall in pitch, but in the question Mary is a teacher? the pitch will rise. English and other languages use stress to distinguish different words, such as cóntent and contént. In some languages, long vowels and long consonants contrast with their shorter counterparts. Thus biru [biru] and biiru [biːru], saki [saki] and sakki [sakːi] are different words in Japanese. Diacritics to specify such properties as nasalization, length, stress, and tone may be combined with the phonetic symbols for more detailed phonetic transcriptions. A phonetic transcription of men would use a tilde diacritic to indicate the nasalization of the vowel: [mɛ̃n]. In sign languages there are “phonetic” features analogous to those of spoken languages. In ASL these are handshape, movement, and location. As in spoken languages, changes along one of these parameters can result in a new word. In the following chapter, we discuss this meaning-changing property of features in much greater detail. References for Further Reading Catford, J. C. 2001. A practical introduction to phonetics, 2nd edn. New York: Oxford University Press. Crystal, D. 2003. A dictionary of linguistics and phonetics, 5th edn. Oxford, UK: Blackwell Publishers. Emmorey, K. 2002. Language, cognition and the brain: Insights from sign language research. New Jersey: Lawrence Erlbaum Associates. Fromkin, V. A. (ed.). 1978. Tone: A linguistic survey. New York: Academic Press. International Phonetic Association. 1989. Principles of the International Phonetic Association, rev. edn. London: IPA. Ladefoged, P. 2006. A course in phonetics, 5th edn. Boston, MA: Thomson Learning. _____. 2005. Vowels and consonants, 2nd edn. Oxford, UK: Blackwell Publishers. Ladefoged, P., and I. Maddieson. 1996. 
The sounds of the world’s languages. Oxford, UK: Blackwell Publishers. Pullum, G. K., and W. A. Ladusaw. 1986. Phonetic symbol guide. Chicago: University of Chicago Press. Exercises Exercises 1. Write the phonetic symbol for the first sound in each of the following words according to the way you pronounce it. Examples: ooze [u] psycho [s] a. judge [ ] f. thought [ ] b. Thomas [ ] g. contact [ ] c. though [ ] h. phone [ ] d. easy [ ] i. civic [ ] e. pneumonia [ ] j. usual [ ] 2. Write the phonetic symbol for the last sound in each of the following words. Example: boy [ɔɪ] (Diphthongs should be treated as one sound.) a. fleece [ ] f. cow [ ] b. neigh [ ] g. rough [ ] c. long [ ] h. cheese [ ] d. health [ ] i. bleached [ ] e. watch [ ] j. rags [ ] 3. Write the following words in phonetic transcription, according to your pronunciation. Examples: knot [nat]; delightful [dilaɪtfəl] or [dəlaɪtfəl]. Some of you may pronounce some of these words the same. a. physics h. Fromkin o. touch b. merry i. tease p. cough c. marry j. weather q. larynx d. Mary k. coat r. through e. yellow l. Rodman s. beautiful f. sticky m. heath t. honest g. transcription n. “your name” u. president 4. Following is a phonetic transcription of a verse in the poem “The Walrus and the Carpenter” by Lewis Carroll. The speaker who transcribed it may not have exactly the same pronunciation as you; there are many correct versions. However, there is one major error in each line that is an impossible pronunciation for any American English speaker. The error may consist of an extra symbol, a missing symbol, or a wrong symbol in the word. Note that the phonetic transcription that is given is a narrow transcription; aspiration is marked, as is the nasalization of vowels. This is to illustrate a detailed transcription. However, none of the errors involve aspiration or nasalization of vowels. Write the word in which the error occurs in the correct phonetic transcription. Corrected Word a. ðə tʰãɪm hӕz cʌ̃m [kʰʌ̃m] b. ðə wɔlrəs sed 221 222 CHAPTER 4 Phonetics: The Sounds of Language c. d. e. f. g. h. tʰu tʰɔlk əv mɛ̃ni θɪ ŋ̃ z əv ʃuz ãnd ʃɪps ӕ̃ nd silɪ ŋ̃ wӕx əv kʰӕbəgəz ӕ̃ nd kʰɪ ŋ̃ z ӕ̃ nd waɪ ðə si ɪs bɔɪlɪ ̃ ŋ hat ӕ̃ nd wɛθər pʰɪgz hæv wɪ ŋ̃ z 5. The following are all English words written in a broad phonetic transcription (thus omitting details such as nasalization and aspiration). Write the words using normal English orthography. a. b. c. d. e. f. g. h. i. j. k. l. m. n. o. p. q. r. s. t. [hit] [strok] [fez] [ton] [boni] [skrim] [frut] [pritʃər] [krak] [baks] [θæŋks] [wɛnzde] [krɔld] [kantʃiɛntʃəs] [parləmɛntæriən] [kwəbɛk] [pitsə] [bərak obamə] [dʒɔn məken] [tu θaʊzənd ænd et] 6. Write the symbol that corresponds to each of the following phonetic descriptions, then give an English word that contains this sound. Example: voiced alveolar stop a. voiceless bilabial unaspirated stop b. low front vowel c. lateral liquid d. velar nasal e. voiced interdental fricative f. voiceless affricate g. palatal glide h. mid lax front vowel i. high back tense vowel j. voiceless aspirated alveolar stop 7. [d] dough [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] In each of the following pairs of words, the bold italicized sounds differ by one or more phonetic properties (features). Give the IPA symbol for each italicized sound, state their differences and, in addition, state what properties they have in common. Exercises Example: phone—phonic The o in phone is mid, tense, round. The o in phonic is low, unround. Both are back vowels. a. bath—bathe b. 
reduce—reduction c. cool—cold d. wife—wives e. cats—dogs f. impolite—indecent 8. Write a phonetic transcription of the italicized words in the following poem entitled “Brush Up Your English” published long ago in a British newspaper. I take it you already know Of tough and bough and cough and dough? Some may stumble, but not you, On hiccough, thorough, slough and through? So now you are ready, perhaps, To learn of less familiar traps? Beware of heard, a dreadful word That looks like beard and sounds like bird. And dead, it’s said like bed, not bead; For goodness’ sake, don’t call it deed! Watch out for meat and great and threat. (They rhyme with suite and straight and debt.) A moth is not a moth in mother, Nor both in bother, broth in brother.1 9. For each group of sounds listed, state the phonetic feature(s) they all share. Example: [p] [b] [m] Features: bilabial, stop, consonant a. [g] [p] [t] [d] [k] [b] b. [u] [ʊ] [o] [ɔ] c. [i] [ɪ] [e] [ɛ] [æ] d. [t] [s] [ʃ] [p] [k] [tʃ] [f] [h] e. [v] [z] [ʒ] [dʒ] [n] [g] [d] [b] [l] [r] [w] [j] f. [t] [d] [s] [ʃ] [n] [tʃ] [dʒ] 10. Write the following broad phonetic transcriptions in regular English spelling. a. nom tʃamski ɪz e lɪngwɪst hu titʃəz æt ɛm aɪ ti b. fənɛtɪks ɪz ðə stʌdi əv spitʃ saʊndz c. ɔl spokən læŋgwɪdʒəz juz saʊndz prədust baɪ ðə ʌpər rɛspərətɔri sɪstəm d. ɪn wʌn daɪəlɛkt əv ɪnglɪʃ kat ðə naʊn ænd kɔt ðə vərb ar prənaʊnst ðə sem e. sʌm pipəl θɪŋk fənɛtɪks ɪz vɛri ɪntərɛstɪŋ f. vɪktɔrijə framkən rabərt radmən ænd ninə haɪəmz ar ðə ɔθərz əv ðɪs bʊk 1T. S. Watt, “Brush Up Your English,” Guardian, June 21, 1954. Reprinted by permission. 223 224 CHAPTER 4 Phonetics: The Sounds of Language 11. What phonetic property or feature distinguishes the sets of sounds in column A from those in column B? A a. b. c. d. e. f. B [i] [ɪ] [p] [t] [k] [s] [f] [p] [b] [m] [i] [ɪ] [u] [ʊ] [f] [v] [s] [z] [ʃ] [ʒ] [i] [ɪ] [e] [ə] [ɛ] [æ] [u] [ʊ] [b] [d] [g] [z] [v] [t] [d] [n] [k] [g] [ŋ] [e] [ɛ] [o] [ɔ] [æ] [a] [tʃ] [dʒ] [u] [ʊ] [o] [ɔ] [a] 12. Which of the following sound pairs have the same manner of articulation, and what is that manner of articulation? a. [h] [ʔ] f. [f] [ʃ] b. [r] [w] g. [k] [θ] c. [m] [ŋ] h. [s] [g] d. [ð] [v] i. [j] [w] e. [r] [t] j. [j] [dʒ] 13. A. Which of the following vowels are lax and which are tense? a. [i] b. [ɪ] c. [u] d. [ʌ] e. [ʊ] f. [e] g. [ɛ] h. [o] i. [ɔ] j. [æ] k. [a] l. [ə] m. [aɪ] n. [aʊ] o. [ɔɪ] B. Think of ordinary, nonexclamatory English words with one syllable that end in [ʃ] preceded directly by each of the vowels in A. Are any such words impossible in English? Example: fish [fɪʃ] is such a word. Words ending in [-aɪʃ] are not possible in English. C. In terms of tense/lax, which vowel type is found in most such words? 14. Write a made-up sentence in narrow phonetic transcription that contains at least six different monophthongal vowels and two different diphthongs. 15. The front vowels of English, [i, ɪ, e, ɛ, ӕ], are all unrounded. However, many languages have rounded front vowels, such as French. Here are three words in French with rounded front vowels. Transcribe them phonetically by finding out the correct IPA symbols for front rounded vowels: (Hint: Try one of the books given in the references, or Google around.) a. tu, “you,” has a high front rounded vowel and is transcribed phonetically as [ ] b. bleu, “blue,” has a midfront rounded vowel and is transcribed phonetically as [ ] c. heure, “hour,” has a low midfront rounded vowel and is transcribed phonetically as [ ] 16. Challenge exercise: A. 
Take all of the vowels from 13A except the schwa and find a monosyllabic word containing that vowel followed directly by [t], giving both the spelling and the phonetic transcription. Exercises Example: beat [bit], foot [fʊt] B. Now do the same thing for monosyllabic words ending in [r]. Indicate when such a word appears not to occur in your dialect of English. C. And do the same thing for monosyllabic words ending in [ŋ]. Indicate when such a word appears not to occur in your dialect of English. D. Is there a quantitative difference in the number of examples found as you go from A to C? E. Are most vowels that “work” in B tense or lax? How about in C? F. Write a brief summary of the difficulties you encountered in trying to do this exercise. 17. In the first column are the last names of well-known authors. In the second column is one of their best-known works. Match the work to the author and write the author’s name and work in conventional spelling. Example: a. [dɪkǝ̃nz] 1. [ɔləvər tʰwɪst] Answer: a—1 (Dickens, Oliver Twist) b. [sɛrvãntɛs] 2. [ə ferwɛl tʰu armz] c. [dãnte] 3. [æ̃ nǝ̃məl farm] d. [dɪkǝ̃nz] 4. [dõn kihote] e. [ɛliət] 5. [greps ʌv ræθ] f. [hɛ̃mɪ ŋ̃ we] 6. [gret ɛkspɛktʰeʃǝ̃nz] g. [hõmər] 7. [gʌləvərz tʰrævəlz] h. [mɛlvɪl] 8. [hæ̃ mlət] i. [orwɛl] 9. [mobi-dɪk] j. [ʃekspir] 10. [saɪləs marnər] k. [staɪ ñ bɛk] 11. [ðə dɪvaɪ ñ kʰãmədi] l. [swɪft] 12. [ðə ɪliəd] m. [tʰɔlstɔɪ] 13. [tʰãm sɔɪjər] n. [tʰwẽn] 14. [wor æ̃ nd pʰis] 225 5 Phonology: The Sound Patterns of Language Speech is human, silence is divine, yet also brutish and dead; therefore we must learn both arts. THOMAS CARLYLE (1795–1881) Phonology is the study of telephone etiquette. A HIGH SCHOOL STUDENT 226 What do you think is greater: the number of languages in the world, or the number of speech sounds in all those languages? Well, there are thousands of languages, but only hundreds of speech sounds, some of which we examined in the previous chapter. Even more remarkable, only a few dozen features, such as voicing or bilabial or stop, are needed to describe every speech sound that occurs in every human language. That being the case, why, you may ask, do languages sound so different? One reason is that the sounds form different patterns in different languages. English has nasalized vowels, but only in syllables with nasal consonants. French puts nasal vowels anywhere it pleases, with or without nasal consonants. The speech sound that ends the word song—the velar nasal [ŋ]—cannot begin a word in English, but it can in Vietnamese. The common Vietnamese name spelled Nguyen begins with this sound, and the reason few of us can pronounce this name correctly is that it doesn’t follow the English pattern. The fact that a sound such as [ŋ] is difficult for an English speaker to pronounce at the beginning of a word, but easy for a Vietnamese speaker, means that there is no general notion of “difficulty of articulation” that can explain all The Pronunciation of Morphemes of the sound patterns of particular languages. Rather, the ability to pronounce particular sounds depends on the speaker’s unconscious knowledge of the sound patterns of her own language or languages. The study of how speech sounds form patterns is phonology. These patterns may be as simple as the fact that the velar nasal cannot begin a syllable in English, or as complex as why g is silent in sign but is pronounced in the related word signature. 
To see that this is a pattern and not a one-time exception, just consider the slippery n in autumn and autumnal, or the b in bomb and bombard. The word phonology refers both to the linguistic knowledge that speakers have about the sound patterns of their language and to the description of that knowledge that linguists try to produce. Thus it is like the way we defined grammar: your mental knowledge of your language, or a linguist’s description of that knowledge. Phonology tells you what sounds are in your language and which ones are foreign; it tells you what combinations of sounds could be an actual word, whether it is (black) or isn’t (blick), and what combination of sounds could not be an actual word (*lbick). It also explains why certain phonetic features are important to identifying a word, for example voicing in English as in pat versus bat, while other features, such as aspiration in English, are not crucial to identifying a word, as we noted in the previous chapter. And it also allows us to adjust our pronunciation of a morpheme, for example the past or plural morpheme, to suit the different phonological contexts that it occurs in, as we will discuss shortly. In this chapter we’ll look at some of the phonological processes that you know, that you acquired as a child, and that yet may initially appear to you to be unreasonably complex. Keep in mind that we are only making explicit what you already know, and its complexity is in a way a wondrous feature of your own mind. The Pronunciation of Morphemes The t is silent, as in Harlow. MARGOT ASQUITH, referring to her name being mispronounced by the actress Jean Harlow Knowledge of phonology determines how we pronounce words and the parts of words we call morphemes. Often, certain morphemes are pronounced differently depending on their context, and we will introduce a way of describing this variation with phonological rules. We begin with some examples from English, and then move on to examples from other languages. The Pronunciation of Plurals Nearly all English nouns have a plural form: cat/cats, dog/dogs, fox/foxes. But have you ever paid attention to how plural forms are pronounced? Listen to a native speaker of English (or yourself if you are one) pronounce the plurals of the following nouns. 227 228 CHAPTER 5 Phonology: The Sound Patterns of Language A B C D cab cad bag love lathe cam can call bar spa boy cap cat back cuff faith bus bush buzz garage match badge child ox mouse criterion sheep The final sound of the plural nouns from Column A is a [z]—a voiced alveolar fricative. For column B the plural ending is an [s]—a voiceless alveolar fricative. And for Column C it’s [әz]. Here is our first example of a morpheme with different pronunciations. Note also that there is a regularity in columns A, B, and C that does not exist in D. The plural forms in D—children, oxen, mice, criteria, and sheep—are a hodge-podge of special cases that are memorized individually when you acquire English, whether natively or as a second language. This is because there is no way to predict the plural forms of these words. How do we know how to pronounce this plural morpheme? The spelling, which adds s or es, is misleading—not a z in sight—yet if you know English, you pronounce it as we indicated. When faced with this type of question, it’s useful to make a chart that records the phonological environments in which each variant of the morpheme is known to occur. (The more technical term for a variant is allomorph.) 
Writing the words from the first three columns in broad phonetic transcription, we have our first chart for the plural morpheme. Allomorph Environment [z] After [kæb], [kæd], [bæg], [lʌv], [leð], [kæm], [kæn], [bæŋ], [kɔl], [bar], [spa], [bɔɪ], e.g., [kæbz], [kædz] . . . [bɔɪz] [s] After [kæp], [kæt], [bæk], [kʌf], [feθ], e.g., [kæps], [kæts] . . . [feθs] [əz] After [bʌs], [bʊʃ], [bʌz], [gəraʒ], [mætʃ], [bædʒ], e.g., [bʌsəz], [bʊʃəz] . . . [bædʒəz] To discover the pattern behind the way plurals are pronounced, we look for some property of the environment associated with each group of allomorphs. For example, what is it about [kæb] or [lʌv] that determines that the plural morpheme will take the form [z] rather than [s] or [əz]? To guide our search, we look for minimal pairs in our list of words. A minimal pair is two words with different meanings that are identical except for one sound segment that occurs in the same place in each word. For example, cab [kæb] and cad [kæd] are a minimal pair that differ only in their final segments, whereas cat [kæt] and mat [mæt] are a minimal pair that differ only in their The Pronunciation of Morphemes initial segments. Other minimal pairs in our list include cap/cab, bag/back, and bag/badge. Minimal pairs whose members take different allomorphs are particularly useful for our search. For example, consider cab [kæb] and cap [kæp], which respectively take the allomorphs [z] and [s] to form the plural. Clearly, the final segment is responsible, because that is where the two words differ. Similarly for bag [bæg] and badge [bædʒ]. Their final segments determine the different plural allomorphs [z] and [əz]. Apparently, the distribution of plural allomorphs in English is conditioned by the final segment of the singular form. We can make our chart more concise by considering just the final segment. (We treat diphthongs such as [ɔɪ] as single segments.) Allomorph Environment [z] [s] [əz] After [b], [d], [g], [v], [ð], [m], [n], [ŋ], [l], [r], [a], [ɔɪ] After [p], [t], [k], [f], [θ] After [s], [ʃ], [z], [ʒ] , [tʃ], [dʒ] We now want to understand why the English plural follows this pattern. We always answer questions of this type by inspecting the phonetic properties of the conditioning segments. Such an inspection reveals that the segments that trigger the [əz] plural have in common the property of being sibilants. Of the nonsibilants, the voiceless segments take the [s] plural, and the voiced segments take the [z] plural. Now the rules can be stated in more general terms: Allomorph Environment [z] [s] [əz] After voiced nonsibilant segments After voiceless nonsibilant segments After sibilant segments An even more concise way to express these rules is to assume that the basic or underlying form of the plural morpheme is /z/, with the meaning “plural.” This is the “default” pronunciation. The rules tell us when the default does not apply: 1. 2. Insert a [ə] before the plural morpheme /z/ when a regular noun ends in a sibilant, giving [əz]. Change the plural morpheme /z/ to a voiceless [s] when preceded by a voiceless sound. These rules will derive the phonetic forms—that is, the pronunciations—of plurals for all regular nouns. Because the basic form of the plural is /z/, if no rule applies, then the plural morpheme will be realized as [z]. The following chart shows how the plurals of bus, butt, and bug are formed. At the top are the basic forms. The two rules apply or not as appropriate as one moves downward. The output of rule 1 becomes the input of rule 2. 
At the bottom are the phonetic realizations—the way the words are pronounced.

                            bus + pl.      butt + pl.     bug + pl.
Basic representation        /bʌs + z/      /bʌt + z/      /bʌg + z/
Apply rule (1)                  ə             NA*            NA
Apply rule (2)                 NA              s             NA
Phonetic representation     [bʌsəz]         [bʌts]         [bʌgz]

*NA means "not applicable."

As we have formulated these rules, (1) must apply before (2). If we applied the rules in reverse order, we would derive an incorrect phonetic form for the plural of bus, as a diagram similar to the previous one illustrates:

Basic representation        /bʌs + z/
Apply rule (2)                  s
Apply rule (1)                  ə
Phonetic representation     *[bʌsəs]

The particular phonological rules that determine the phonetic form of the plural morpheme and other morphemes of the language are morphophonemic rules. Such rules concern the pronunciation of specific morphemes. Thus the plural morphophonemic rules apply to the plural morpheme specifically, not to all morphemes in English.

Additional Examples of Allomorphs

The formation of the regular past tense of English verbs parallels the formation of regular plurals. Like plurals, some irregular past tenses conform to no particular rule and must be learned individually, such as go/went, sing/sang, and hit/hit. And also like plurals, there are three phonetic past-tense morphemes for regular verbs: [d], [t], and [əd]. Here are several examples in broad phonetic transcription. Study sets A, B, and C and try to see the regularity before reading further.

Set A: gloat [glot], gloated [glotəd]; raid [red], raided [redəd]
Set B: grab [græb], grabbed [græbd]; hug [hʌg], hugged [hʌgd]; faze [fez], fazed [fezd]; roam [rom], roamed [romd]
Set C: reap [rip], reaped [ript]; poke [pok], poked [pokt]; kiss [kɪs], kissed [kɪst]; patch [pætʃ], patched [pætʃt]

Set A suggests that if the verb ends in a [t] or a [d] (i.e., non-nasal alveolar stops), [əd] is added to form the past tense, similar to the insertion of [əz] to form the plural of nouns that end in sibilants. Set B suggests that if the verb ends in a voiced segment other than [d], you add a voiced [d]. Set C shows us that if the verb ends in a voiceless segment other than [t], you add a voiceless [t]. Just as /z/ was the basic form of the plural morpheme, /d/ is the basic form of the past-tense morpheme, and the rules for past-tense formation of regular verbs are much like the rules for the plural formation of regular nouns. These are also morphophonemic rules as they apply specifically to the past-tense morpheme /d/. As with the plural rules, the output of Rule 1, if any, provides the input to Rule 2, and the rules must be applied in order.

1. Insert a [ə] before the past-tense morpheme when a regular verb ends in a non-nasal alveolar stop, giving [əd].
2. Change the past-tense morpheme to a voiceless [t] when a voiceless sound precedes it.

Two further allomorphs in English are the possessive morpheme and the third-person singular morpheme, spelled s or es. These morphemes take on the same phonetic form as the plural morpheme according to the same rules! Add [s] to ship to get ship's; add [z] to woman to get woman's; and add [əz] to judge to get judge's. Similarly for the verbs eat, need, and rush, whose third-person singular forms are eats with a final [s], needs with a final [z], and rushes with a final [əz].
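The two ordered rules above (and the parallel pair for the past tense) can be read as a small derivation procedure: start from the basic form /z/ or /d/, apply epenthesis where its condition is met, then apply devoicing. The Python sketch below is only an illustration of that ordering, not anything from the text; the segment sets, the spelling of affricates as single symbols, and the pluralize/past_tense names are our own simplifying assumptions.

```python
# A minimal sketch of the ordered morphophonemic rules for the English
# regular plural (/z/) and past tense (/d/), assuming broad IPA strings
# in which affricates are written as single characters.

SIBILANTS = set("szʃʒʧʤ")          # trigger schwa insertion for /z/
VOICELESS = set("ptkfθsʃʧ")        # trigger devoicing of /z/ and /d/

def pluralize(stem: str) -> str:
    """Derive the phonetic form of stem + plural /z/."""
    suffix = "z"                        # basic (underlying) form
    if stem[-1] in SIBILANTS:           # Rule 1: insert [ə] after a sibilant
        return stem + "ə" + suffix
    if stem[-1] in VOICELESS:           # Rule 2: devoice /z/ to [s]
        return stem + "s"
    return stem + suffix                # default: [z]

def past_tense(stem: str) -> str:
    """Derive the phonetic form of stem + past tense /d/."""
    suffix = "d"
    if stem[-1] in "td":                # Rule 1: insert [ə] after a non-nasal alveolar stop
        return stem + "ə" + suffix
    if stem[-1] in VOICELESS:           # Rule 2: devoice /d/ to [t]
        return stem + "t"
    return stem + suffix

if __name__ == "__main__":
    for noun in ["bʌs", "bʌt", "bʌg"]:          # bus, butt, bug
        print(noun, "→", pluralize(noun))       # bʌsəz, bʌts, bʌgz
    for verb in ["glot", "græb", "rip"]:        # gloat, grab, reap
        print(verb, "→", past_tense(verb))      # glotəd, græbd, ript
```

Because each function returns as soon as the epenthesis condition is met, devoicing can never wrongly apply to a form like [bʌsəz], which is exactly the effect of ordering rule (1) before rule (2).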
That the rules of phonology are based on properties of segments rather than on individual words is one of the factors that makes it possible for young children to learn their native language in a relatively short period. The young child doesn’t need to learn each plural, each past tense, each possessive form, and each verb ending, on a noun-by-noun or verb-by-verb basis. Once the rule is learned, thousands of word forms are automatically known. And as we will see when we discuss language development in chapter 7, children give clear evidence of learning morphophonemic rules such as the plural rules by applying the rule too broadly and producing forms such as mouses, mans, and so on, which are ungrammatical in the adult language. English is not the only language that has morphemes that are pronounced differently in different phonological environments. Most languages have morpheme variation that can be described by rules similar to the ones we have written for English. For example, the negative morpheme in the West African language Akan has three nasal allomorphs: [m] before p, [n] before t, and [ŋ] before k, as the following examples show ([mɪ] means “I”): mɪ pɛ mɪ tɪ mɪ kɔ “I like” “I speak” “I go” mɪ mpɛ “I don’t like” mɪ ntɪ “I don’t speak” mɪ ŋkɔ “I don’t go” The rule that describes the distribution of allomorphs is: Change the place of articulation of the nasal negative morpheme to agree with the place of articulation of a following consonant. The rule that changes the pronunciation of nasal consonants as just illustrated is called the homorganic nasal rule—homorganic means “same place”—because 231 232 CHAPTER 5 Phonology: The Sound Patterns of Language the place of articulation of the nasal is the same as for the following consonant. The homorganic nasal rule is a common rule in the world’s languages. Phonemes: The Phonological Units of Language In the physical world the naive speaker and hearer actualize and are sensitive to sounds, but what they feel themselves to be pronouncing and hearing are “phonemes.” EDWARD SAPIR, “The Psychological Reality of Phonemes,” 1933 The phonological rules discussed in the preceding section apply only to particular morphemes. However, other phonological rules apply to sounds as they occur in any morpheme in the language. These rules express our knowledge about the sound patterns of the entire language. This section introduces the notions of phoneme and allophone. Phonemes are what we have been calling the basic form of a sound and are sensed in your mind rather than spoken or heard. Each phoneme has associated with it one or more sounds, called allophones, which represent the actual sound corresponding to the phoneme in various environments. For example, the phoneme /p/ is pronounced with the aspiration allophone [pʰ] in pit but without aspiration [p] in spit. Phonological rules operate on phonemes to make explicit which allophones are pronounced in which environments. Vowel Nasalization in English as an Illustration of Allophones English contains a general phonological rule that determines the contexts in which vowels are nasalized. In chapter 4 we noted that both oral and nasal vowels occur phonetically in English. The following examples show this: bean roam [bĩn] [rõm] bead robe [bid] [rob] Taking oral vowels as basic—that is, as the phonemes—we have a phonological rule that states: Vowels are nasalized before a nasal consonant within the same syllable. 
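The nasalization rule just stated can be pictured as a small function from a phonemic string to a phonetic one. The sketch below is a toy illustration under several assumptions of our own: broad transcriptions, "$" as a syllable-boundary mark (the notation used later in this chapter), a combining tilde for nasalization, and the nasalize name.

```python
# A toy implementation of "Vowels are nasalized before a nasal consonant
# within the same syllable."  Input is a broad phonemic transcription with
# "$" marking syllable boundaries; nasalization is written with U+0303.

VOWELS = set("iɪeɛæuʊoɔaʌə")
NASALS = set("mnŋ")
TILDE = "\u0303"    # combining tilde: ɛ + TILDE renders as a nasalized vowel

def nasalize(phonemic: str) -> str:
    segs = list(phonemic)
    out = []
    for i, seg in enumerate(segs):
        out.append(seg)
        nxt = segs[i + 1] if i + 1 < len(segs) else ""
        # A vowel picks up nasalization when the next segment in the same
        # syllable is a nasal consonant (a "$" in between blocks the rule).
        if seg in VOWELS and nxt in NASALS:
            out.append(TILDE)
    return "".join(out)

if __name__ == "__main__":
    for word in ["bin", "bid", "rom", "rob", "dɛn$təl"]:
        print(word, "→", nasalize(word))
    # bin → bĩn, bid → bid, rom → rõm, rob → rob, dɛn$təl → dɛ̃n$təl
```

Running it on bead and bean reproduces the [bid]/[bĩn] contrast shown in the examples above.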
This rule expresses your knowledge of English pronunciation: nasalized vowels occur only before nasal consonants and never elsewhere. The effect of this rule is exemplified in Table 5.1. As the examples in Table 5.1 illustrate, oral vowels in English occur in final position and before non-nasal consonants; nasalized vowels occur only before nasal consonants. The nonwords (starred) show us that nasalized vowels do not occur finally or before non-nasal consonants, nor do oral vowels occur before nasal consonants. Phonemes: The Phonological Units of Language TABLE 5.1 | Nasal and Oral Vowels: Words and Nonwords Words be [bi] bead lay [le] baa [bæ] Nonwords [bid] bean [bĩn] *[bĩ] *[bĩd] *[bin] lace [les] lame [lẽm] *[lẽ] *[lẽs] *[lem] bad [bæd] bang [bæ̃ŋ] *[bæ̃] *[bæ̃d] *[bæŋ] You may be unaware of this variation in your vowel production, but this is natural. Whether you speak or hear the vowel in bean with or without nasalization does not matter. Without nasalization, it might sound a bit strange, as if you had a foreign accent, but bean pronounced [bĩn] and bean pronounced [bin] would convey the same word. Likewise, if you pronounced bead as [bĩd], with a nasalized vowel, someone might suspect you had a cold, or that you spoke nasally, but the word would remain bead. Because nasalization is an inessential difference insofar as what the word actually is, we tend to be unaware of it. Contrast this situation with a change in vowel height. If you intend to say bead but say bad instead, that makes a difference. The [i] in bead and the [æ] in bad are sounds from different phonemes. Substitute one for another and you get a different word (or no word). The [i] in bead and the [ĩ] in the nasalized bead do not make a difference in meaning. These two sounds, then, belong to the same phoneme, an abstract high front vowel that we denote between slashes as /i/. Phonemes are not physical sounds. They are abstract mental representations of the phonological units of a language, the units used to represent words in our mental lexicon. The phonological rules of the language apply to phonemes to determine the pronunciation of words. The process of substituting one sound for another in a word to see if it makes a difference is a good way to identify the phonemes of a language. Here are twelve words differing only in their vowel: beat bit bait bet bat bite [bit] [bɪt] [bet] [bɛt] [bæt] [baɪt] [i] [ɪ] [e] [ɛ] [æ] [aɪ] boot but boat bought bout bot [but] [bʌt] [bot] [bɔt] [baʊt] [bat] [u] [ʌ] [o] [ɔ] [aʊ] [a] Any two of these words form a minimal pair: two different words that differ in one sound. The two sounds that cause the word difference belong to different phonemes. The pair [bid] and [bĩd] are not different words; they are variants of the same word. Therefore, [i] and [ĩ] do not belong to different phonemes. They are two actualizations of the same phoneme. From the minimal set of [b–t] words we can infer that English has at least twelve vowel phonemes. (We consider diphthongs to function as single vowel sounds.) To that total we can add a phoneme corresponding to [ʊ] resulting from minimal pairs such as book [bʊk] and beak [bik]; and we can add one for [ɔɪ] resulting from minimal pairs such as boy [bɔɪ] and buy [baɪ]. 233 234 CHAPTER 5 Phonology: The Sound Patterns of Language Our minimal pair analysis has revealed eleven monophthongal and three diphthongal vowel phonemes, namely, /i ɪ e ɛ æ u ʊ o ɔ a ʌ/ and /aɪ/, /aʊ/, /ɔɪ/. (This set may differ slightly in other variants of English.) 
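The substitution test behind this minimal set can be mechanized: scan a list of transcribed words for pairs that differ in exactly one segment at the same position. The helper below is only a sketch; it treats every character as one segment, so diphthongs such as [aɪ] would have to be written with single symbols, and the minimal_pairs name and word list are ours.

```python
# Find minimal pairs in a list of broad transcriptions: two words of the
# same length that differ in exactly one segment position.

from itertools import combinations

def minimal_pairs(words):
    pairs = []
    for w1, w2 in combinations(words, 2):
        if len(w1) == len(w2):
            diffs = [(a, b) for a, b in zip(w1, w2) if a != b]
            if len(diffs) == 1:                 # exactly one differing segment
                pairs.append((w1, w2, diffs[0]))
    return pairs

if __name__ == "__main__":
    words = ["bit", "bɪt", "bet", "bɛt", "bæt", "but", "bʌt", "bot", "bɔt", "bat"]
    for w1, w2, (s1, s2) in minimal_pairs(words):
        print(f"{w1} / {w2}  → contrast: {s1} vs. {s2}")
```

Each pair it reports differs only in its vowel, which is the evidence used above to posit distinct vowel phonemes.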
Importantly, each of these vowel phonemes has (at least) two allophones (i.e., two ways of being pronounced: orally as [i], [ɪ], [e], etc., and nasally as [ĩ], [ĩ], [ẽ], etc.), as determined by the phonological rule of nasalization. A particular realization (pronunciation) of a phoneme is called a phone. The collection of phones that are the realizations of the same phoneme are called the allophones of that phoneme. In English, each vowel phoneme has both an oral and a nasalized allophone. The choice of the allophone is not random or haphazard; it is rule-governed. To distinguish between a phoneme and its allophones, we use slashes / / to enclose phonemes and continue to use square brackets [ ] for allophones or phones. For example, [i] and [ĩ] are allophones of the phoneme /i/; [ɪ] and [ĩ] are allophones of the phoneme /ɪ/, and so on. Thus we will represent bead and bean phonemically as /bid/ and /bin/. We refer to these as phonemic transcriptions of the two words. The rule for the distribution of oral and nasal vowels in English shows that phonetically these words will be pronounced as [bid] and [bĩn]. The pronunciations are indicated by phonetic transcriptions, and written between square brackets. Allophones of /t/ Copyright © Don Addis. Phonemes: The Phonological Units of Language Consonants, too, have allophones whose distribution is rule-governed. For /t/ the following examples illustrate the point. tick [tʰɪk] stick [stɪk] hits [hɪts] bitter [bɪɾər] In tick we normally find an aspirated [tʰ], whereas in stick and hits we find an unaspirated [t], and in bitter we find the flap [ɾ]. As with vowel nasalization, swapping these sounds around will not change word meaning. If we pronounce bitter with a [tʰ], it will not change the word; it will simply sound unnatural (to most Americans). We account for this knowledge of how t is pronounced by positing a phoneme /t/ with three allophones [tʰ], [t], and [ɾ]. We also posit phonological rules, which roughly state that the aspirated [tʰ] occurs before a stressed vowel, the unaspirated [t] occurs directly before or after /s/, and the flap [ɾ] occurs between a stressed vowel and an unstressed vowel. Whether we pronounce tick as [tʰɪk], [tɪk], or [ɾɪk], we are speaking the same word, however strangely pronounced. The allophones of a phoneme do not contrast. If we change the voicing and say Dick, or the manner of articulation and say sick, or the nasalization and say nick, we get different words. Those sounds do contrast. Tick, Dick, sick, and nick thus form a minimal set that shows us that there are phonemes /t/, /d/, /s/, and /n/ in English. We may proceed in this manner to discover other phonemes by considering pick, kick, Mick (as in Jagger), Vic, thick, chick, lick, and Rick to infer the phonemes /p/, /k/, /m/, /v/, /θ/, /tʃ/, /l/, and /r/. By finding other minimal pairs and sets, we would discover yet more consonant phonemes such as /ð/, which, together with /θ/, contrasts the words thy and thigh, or either and ether. Each of these phonemes has its own set of allophones, even if that set consists of a single phone, which would mean there is only one pronunciation in all environments. Most phonemes have more than one allophone, and the phonological rules dictate when the different allophones occur. It should be clear at this point that pronunciation is not a random process. It is systematic and rule-governed, and while the systems and the rules may appear complex, they are no more than a compendium of the knowledge that every speaker has. 
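The three environments just described for /t/ amount to a small decision procedure. The sketch below assumes words given as lists of (segment, stressed-vowel) pairs; the realize_t name, the word list, and the neglect of further variants of /t/ (such as unreleased or glottalized ones) are our own simplifications.

```python
# A sketch of the allophone rules for /t/ described above: aspirated before
# a stressed vowel, unaspirated next to /s/, a flap between a stressed and
# an unstressed vowel, plain [t] elsewhere.

VOWELS = set("iɪeɛæuʊoɔaʌə")

def realize_t(word):
    """Return the word with each /t/ replaced by its allophone."""
    out = []
    for i, (seg, stressed) in enumerate(word):
        if seg != "t":
            out.append(seg)
            continue
        prev = word[i - 1] if i > 0 else (None, False)
        nxt = word[i + 1] if i + 1 < len(word) else (None, False)
        if prev[0] == "s" or nxt[0] == "s":
            out.append("t")                   # unaspirated next to /s/
        elif nxt[0] in VOWELS and nxt[1]:
            out.append("tʰ")                  # aspirated before a stressed vowel
        elif prev[0] in VOWELS and prev[1] and nxt[0] in VOWELS and not nxt[1]:
            out.append("ɾ")                   # flap between stressed and unstressed vowel
        else:
            out.append("t")                   # elsewhere: plain [t]
    return "".join(out)

if __name__ == "__main__":
    tick   = [("t", False), ("ɪ", True), ("k", False)]
    stick  = [("s", False), ("t", False), ("ɪ", True), ("k", False)]
    hits   = [("h", False), ("ɪ", True), ("t", False), ("s", False)]
    bitter = [("b", False), ("ɪ", True), ("t", False), ("ə", False), ("r", False)]
    for w in (tick, stick, hits, bitter):
        print(realize_t(w))    # tʰɪk, stɪk, hɪts, bɪɾər
```

Note that hits comes out with plain [t] because the /s/ test is checked first.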
Complementary Distribution Minimal pairs illustrate that some speech sounds in a language are contrastive and can be used to make different words such as big and dig. These contrastive sounds group themselves into the phonemes of that language. Some sounds are non-contrastive and cannot be used to make different words. The sounds [t] and [ɾ] were cited as examples that do not contrast in English, so [raɪtər] and [raɪɾər] are not a minimal pair, but rather alternate ways in which writer may be pronounced. Oral and nasal vowels in English are also non-contrastive sounds. What’s more, the oral and nasal allophones of each vowel phoneme never occur in the same phonological context, as Table 5.2 illustrates. Where oral vowels occur, nasal vowels do not occur, and vice versa. In this sense the phones are said to complement each other or to be in complementary distribution. By and large, the allophones of a phoneme are in complementary 235 236 CHAPTER 5 Phonology: The Sound Patterns of Language TABLE 5.2 | Distribution of Oral and Nasal Vowels in English Syllables In Final Position Oral vowels Nasal vowels Yes No Before Nasal Consonants Before Oral Consonants No Yes Yes No distribution—never occurring in identical environments. Complementary distribution is a fundamental concept of phonology, and interestingly enough, it shows up in everyday life. Here are a couple of examples that draw on the common experience of reading and writing English. The first example focuses on printed letters such as those that appear on the pages of this book. Each printed letter of English has two main variants: lowercase and uppercase (or capital). If we restrict our attention to words that are not proper names or acronyms (such as Ron or UNICEF), we can formulate a simple rule that does a fair job of determining how letters will be printed: A letter is printed in uppercase if it is the first letter of a sentence; otherwise, it is printed in lowercase. Even ignoring names and acronyms, this rule is only approximately right, but let’s go with it anyway. It helps to explain why written sentences such as the following appear so strange: phonology is the study of the sound patterns of human languageS. pHONOLOGY iS tHE sTUDY oF tHE sOUND pATTERNS oF hUMAN lANGUAGES. These “sentences” violate the rule in funny ways, despite that they are comprehensible, just as the pronunciation of bead with a nasal [ĩ] as [bĩd] would sound funny but be understood. To the extent that the rule is correct, the lowercase and uppercase variants of an English letter are in complementary distribution. The uppercase variant occurs in one particular context (namely, at the beginning of the sentence), and the lowercase variant occurs in every other context (or elsewhere). Therefore, just as every English vowel phoneme has an oral and a nasalized allophone that occurs in different spoken contexts, every letter of the English alphabet has two variants, or allographs, that occur in different written contexts. In both cases, the two variants of a single mental representation (phoneme or letter) are in complementary distribution because they never appear in the same environment. And, substituting one for the other—a nasal vowel in place of an oral one, or an uppercase letter in place of a lowercase one—may sound or look unusual, but it will not change the meaning of what is spoken or written. Superman and Clark Kent, or Dr. Jekyll and Mr. 
Hyde—for those of you familiar with these fictional characters—are in complementary distribution with respect to time. At a given moment in time, the individual is either one or another of his alter egos. Our next example turns to cursive handwriting, which you are likely to have learned in elementary school. Writing in cursive is in one sense more similar to the act of speaking than printing is, because in cursive writing each letter of a Phonemes: The Phonological Units of Language word (usually) connects to the following letter—just as adjacent sounds connect during speech. The following figure illustrates that the connections between the letters of a word in cursive writing create different variants of a letter in different environments: Compare how the letter l appears after a g (as in glue) and after a b (as in blue). In the first case, the l begins near the bottom of the line, but in the second case, the l begins near the middle of the line (which is indicated by the dashes). In other words, the same letter l has two variants. It doesn’t matter where the l begins, it’s still an l. Likewise, it doesn’t matter whether a vowel in English is nasalized or not, it’s still that vowel. Which variant occurs in a particular word is determined by the immediately preceding letter. The variant that begins near the bottom of the line appears after letters like g that end near the bottom of the line. The variant that begins near the middle of the line appears after letters like b that end near the middle of the line. The two variants of l are therefore in complementary distribution. This pattern of complementary distribution is not specific to l but occurs for other cursive letters in English. By examining the pairs sat and vat, mill and will, and rack and rock, you can see the complementary distribution of the variants of a, i, and c, respectively. In each case, the immediately preceding letter determines which variant occurs, with the consequence that the variants of a given letter are in complementary distribution. We turn now to a general discussion of phonemes and allophones. When sounds are in complementary distribution, they do not contrast with each other. The replacement of one sound for the other will not change the meaning of the word, although it might not sound like typical English pronunciation. Given these facts about the patterning of sounds in a language, a phoneme can be defined as a set of phonetically similar sounds that are in complementary distribution. A set may consist of only one member. Some phonemes are represented by only one sound; they have one allophone. When there is more than one allophone in the set, the phones must be phonetically similar; that is, share most phonetic features. In English, the velar nasal [ŋ] and the glottal fricative [h] are in complementary distribution; [ŋ] does not occur word initially and [h] does not occur word finally. But they share very few phonetic features; [ŋ] is a voiced velar nasal stop; [h] is a voiceless glottal fricative. Therefore, they are not allophones of the same phoneme; [ŋ] and [h] are allophones of different phonemes. 237 238 CHAPTER 5 Phonology: The Sound Patterns of Language Speakers of a language generally perceive the different allophones of a single phoneme as the same sound or phone. For example, most speakers of English are unaware that the vowels in bead and bean are different phones because mentally, speakers produce and hear phonemes, not phones. 
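Complementary distribution can be made concrete by collecting the environments in which two phones occur and checking whether the sets overlap. The sketch below does this over toy data; the (previous, next)-segment notion of environment, the word lists, and the function names are illustrative assumptions, and, as the last example shows, passing the test is not by itself enough to make two phones allophones of one phoneme.

```python
# A toy check for complementary distribution: two phones are in complementary
# distribution over a data set if they never occur in the same environment.
# An "environment" here is just the pair (preceding segment, following
# segment), with "#" for a word boundary, a big simplification of real contexts.

def environments(phone, words):
    """Collect every (previous, next) context in which the phone occurs."""
    envs = set()
    for word in words:                      # each word is a list of segments
        segs = ["#"] + word + ["#"]
        for i, seg in enumerate(segs):
            if seg == phone:
                envs.add((segs[i - 1], segs[i + 1]))
    return envs

def complementary(p1, p2, words):
    """True if the two phones never share an environment in the data."""
    return not (environments(p1, words) & environments(p2, words))

if __name__ == "__main__":
    # pill, spill, par, spar in broad transcription (segment lists)
    stops = [["pʰ", "ɪ", "l"], ["s", "p", "ɪ", "l"],
             ["pʰ", "a", "r"], ["s", "p", "a", "r"]]
    print(complementary("pʰ", "p", stops))      # True: candidate allophones of /p/

    # hat, heat, sing, song: [h] and [ŋ] also never share an environment,
    # but they are too dissimilar phonetically to be allophones of one phoneme.
    h_and_eng = [["h", "æ", "t"], ["h", "i", "t"],
                 ["s", "ɪ", "ŋ"], ["s", "ɔ", "ŋ"]]
    print(complementary("h", "ŋ", h_and_eng))   # True, yet /h/ and /ŋ/ are distinct phonemes
```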
Distinctive Features of Phonemes

We are generally not aware of the phonetic properties or features that distinguish the phonemes of our language. Phonetics provides the means to describe the phones (sounds) of language, showing how they are produced and how they vary. Phonology tells us how various sounds form patterns to create phonemes and their allophones.

For two phones to contrast meaning, there must be some phonetic difference between them. The minimal pairs seal [sil] and zeal [zil] show that [s] and [z] represent two contrasting phonemes in English. They cannot be allophones of one phoneme because one cannot replace the [s] with the [z] without changing the meaning of the word. Furthermore, they are not in complementary distribution; both occur word initially before the vowel [i]. They are therefore allophones of the two different phonemes /s/ and /z/.

From the discussion of phonetics in chapter 4, we know that [s] and [z] differ in voicing: [s] is voiceless and [z] is voiced. The phonetic feature of voicing therefore distinguishes the two words. Voicing also distinguishes feel and veal [f]/[v] and cap and cab [p]/[b]. When a feature distinguishes one phoneme from another, hence one word from another, it is a distinctive feature or, equivalently, a phonemic feature.

Feature Values

One can think of voicing and voicelessness as the presence or absence of a single feature, voiced. This single feature may have two values: plus (+), which signifies its presence, and minus (–), which signifies its absence. For example, [b] is [+voiced] and [p] is [–voiced]. The presence or absence of nasality can similarly be designated as [+nasal] or [–nasal], with [m] being [+nasal] and [b] and [p] being [–nasal]. A [–nasal] sound is an oral sound.

We consider the phonetic and phonemic symbols to be cover symbols for sets of distinctive features. They are a shorthand method of specifying the phonetic properties of the segment. Phones and phonemes are not indissoluble units; they are composed of phonetic features, similar to the way that molecules are composed of atoms. A more explicit description of the phonemes /p/, /b/, and /m/ may thus be given in a feature matrix of the following sort.

            p    b    m
Stop        +    +    +
Labial      +    +    +
Voiced      –    +    +
Nasal       –    –    +

Aspiration is not listed as a phonemic feature in the specification of these units, because it is not necessary to include both [p] and [pʰ] as phonemes. In a phonetic transcription, however, the aspiration feature would be specified where it occurs.

A phonetic feature is distinctive when the + value of that feature in certain words contrasts with the – value of that feature in other words. At least one feature value difference must distinguish each phoneme from all the other phonemes in a language. Because the phonemes /b/, /d/, and /g/ contrast by virtue of their place of articulation features—labial, alveolar, and velar—these place features are also distinctive in English. Because uvular sounds do not occur in English, the place feature uvular is not distinctive. The distinctive features of the voiced stops in English are shown in the following:

            b    m    d    n    g    ŋ
Stop        +    +    +    +    +    +
Voiced      +    +    +    +    +    +
Labial      +    +    –    –    –    –
Alveolar    –    –    +    +    –    –
Velar       –    –    –    –    +    +
Nasal       –    +    –    +    –    +

Each phoneme in this chart differs from all the other phonemes by at least one distinctive feature. Vowels, too, have distinctive features.
For example, the feature [±back] distinguishes the vowel in rock [rak] ([+back]) from the vowel in rack [ræk] ([–back]), among others, and is therefore distinctive. Similarly, [±tense] distinguishes [i] from [ɪ] (beat versus bit), among others, and is also a distinctive feature of the vowel system. Nondistinctive Features We have seen that nasality is a distinctive feature of English consonants, but it is a nondistinctive feature for English vowels. Given the arbitrary relationship between form and meaning, there is no way to predict that the word meat begins with a nasal bilabial stop [m] and that the word beat begins with an oral bilabial stop [b]. You learn this when you learn the words. On the other hand, the nasality feature value of the vowels in bean, mean, comb, and sing is predictable because they occur before nasal consonants. When a feature value is predictable by rule for a certain class of sounds, the feature is a nondistinctive or redundant or predictable feature for that class. (The three terms are equivalent.) Thus nasality is a redundant feature in English vowels, but a nonredundant (distinctive or phonemic) feature for English consonants. This is not the case in all languages. In French, nasality is a distinctive feature for both vowels and consonants: gars (pronounced [ga]) “lad” contrasts with gant [gã], which means “glove”; and bal [bal] “dance” contrasts with mal [mal] “bad.” Thus, French has both oral and nasal consonant phonemes and vowel phonemes; English has oral and nasal consonant phonemes, but only oral vowel phonemes. Like French, the African language Akan (spoken in Ghana) has nasal vowel phonemes. Nasalization is a distinctive feature for vowels in Akan, as the following examples illustrate: 239 240 CHAPTER 5 Phonology: The Sound Patterns of Language [ka] [fi] [tu] [nsa] [tʃi] [pam] “bite” “come from” “pull” “hand” “hate” “sew” [kã] [fĩ] [tũ] [nsã] [tʃĩ] [pãm] “speak” “dirty” “den” “liquor” “squeeze” “confederate” Nasalization is not predictable in Akan as it is in English. There is no nasalization rule in Akan, as shown by the minimal pair [pam] and [pãm]. If you substitute an oral vowel for a nasal vowel, or vice versa, you will change the word. Two languages may have the same phonetic segments (phones) but have two different phonemic systems. Phonetically, both oral and nasalized vowels exist in English and Akan. However, English does not have nasalized vowel phonemes, but Akan does. The same phonetic segments function differently in the two languages. Nasalization of vowels in English is redundant and nondistinctive; nasalization of vowels in Akan is nonredundant and distinctive. Another nondistinctive feature in English is aspiration. In chapter 4 we pointed out that in English both aspirated and unaspirated voiceless stops occur. The voiceless aspirated stops [pʰ], [tʰ], and [kʰ] and the voiceless unaspirated stops [p], [t], and [k] are in complementary distribution in English, as shown in the following: Syllable Initial before a Stressed Vowel After a Syllable Initial /s/ [pʰ] pill [pʰɪl] par [tʰ] till [tʰɪl] tar [kʰ] kill [kʰɪl] car [p] spill [spɪl] spar [t] still [stɪl] star [k] skill [skɪl] scar [pɪl]* [spʰɪl]* [par]* [tɪl]* [stʰɪl]* [tar]* [kɪl]* [skʰɪl]* [kar]* [pʰar] [tʰar] [kʰar] [spar] [star] [skar] [spʰar]* [stʰar]* [skʰar]* Nonword* Where the unaspirated stops occur, the aspirated ones do not, and vice versa. 
If you wanted to, you could say spit with an aspirated [pʰ], as [spʰɪt], and it would be understood as spit, but listeners would probably think you were spitting out your words. Given this distribution, we see that aspiration is a redundant, nondistinctive feature in English; aspiration is predictable, occurring as a feature of voiceless stops when they occur initially in a stressed syllable. This is the reason speakers of English usually perceive the [pʰ] in pill and the [p] in spill to be the same sound, just as they consider the [i] and [ĩ] that represent the phoneme /i/ in bead and bean to be the same. They do so because the difference between them is predictable, redundant, nondistinctive, and nonphonemic (all equivalent terms). This example illustrates why we refer to the phoneme as an abstract unit or as a mental unit. We do not utter phonemes; we produce phones, the allophones of the phonemes of the language. In English /p/ is a phoneme that is realized phonetically (pronounced) as both [p] and [pʰ], depending on context. The phones or sounds [p] and [pʰ] are allophones of the phoneme /p/. Distinctive Features of Phonemes Phonemic Patterns May Vary across Languages The tongue of man is a twisty thing, there are plenty of words there of every kind, the range of words is wide, and their variance. HOMER, The Iliad, c. 900 b.c.e. We have seen that the same phones may occur in two languages but pattern differently because the phonologies are different. English, French, and Akan have oral and nasal vowel phones; in English, oral and nasal vowels are allophones of one phoneme, whereas in French and Akan they represent distinct phonemes. Aspiration of voiceless stops further illustrates the asymmetry of the phonological systems of different languages. Both aspirated and unaspirated voiceless stops occur in English and Thai, but they function differently in the two languages. Aspiration in English is not a distinctive feature because its presence or absence is predictable. In Thai it is not predictable, as the following examples show: Voiceless Unaspirated Voiceless Aspirated [paa] [tam] [kat] [pʰaa] [tʰam] [kʰat] forest to pound to bite to split to do to interrupt The voiceless unaspirated and the voiceless aspirated stops in Thai occur in minimal pairs; they contrast and are therefore phonemes. In both English and Thai, the phones [p], [t], [k], [pʰ], [tʰ], and [kʰ] occur. In English they represent the phonemes /p/, /t/, and /k/; in Thai they represent the phonemes /p/, /t/, /k/, /pʰ/, /tʰ/, and /kʰ/. Aspiration is a distinctive feature in Thai; it is a nondistinctive redundant feature in English. The phonetic facts alone do not reveal what is distinctive or phonemic: The phonetic representation of utterances shows what speakers know about the pronunciation of sounds. The phonemic representation of utterances shows what speakers know about the patterning of sounds. That pot/pat and spot/spat are phonemically transcribed with an identical /p/ reveals the fact that English speakers consider the [pʰ] in pot [pʰat] and the [p] in spot [spat] to be phonetic manifestations of the same phoneme /p/. This is also reflected in spelling, which is more attuned to phonemes than to individual phones. In English, vowel length and consonant length are nonphonemic. Prolonging a sound in English will not produce a different word. In other languages, long and short vowels that are identical except for length are phonemic. In such languages, length is a nonpredictable distinctive feature. 
For example, vowel length is phonemic in Korean, as shown by the following minimal pairs (recall that the colon-like symbol ː indicates length): il seda kul “day” “to count” “oyster” iːl seːda kuːl “work” “strong” “tunnel” 241 242 CHAPTER 5 Phonology: The Sound Patterns of Language In Italian the word for “grandfather” is nonno /nonːo/, which contrasts with the word for “ninth,” which is nono /nono/, so consonant length is phonemic in Italian. In Luganda, an African language, consonant length is also phonemic: /kula/ with a short /k/ means “grow up,” whereas /kːula/ with a long /kː/ means “treasure.” Thus consonant length is unpredictable in Luganda, just as whether a word begins with a /b/ or a /p/ is unpredictable in English. ASL Phonology As discussed in chapter 4, signs can be broken down into smaller units that are in many ways analogous to the phonemes and distinctive features in spoken languages. They can be decomposed into location, movement, and handshape and there are minimal pairs that are distinguished by a change in one or another of these features. Figure 4.6 in chapter 4 provides some examples. The signs meaning “candy,” “apple,” and “jealous” are articulated at the same location on the face and involve the same movement, but contrast minimally in hand configuration. “Summer,” “ugly,” and “dry” are a minimal set contrasting only in place of articulation, and “tape,” “chair,” and “train” contrast only in movement. Thus signs can be decomposed into smaller minimal units that contrast meaning. Some features are non-distinctive. Whether a sign is articulated on the right or left hand does not affect its meaning. Natural Classes of Speech Sounds It’s as large as life, and twice as natural! LEWIS CARROLL, Through the Looking-Glass, 1871 We show what speakers know about the predictable aspects of speech through phonological rules. In English, these rules determine the environments in which vowels are nasalized or voiceless stops aspirated. These rules apply to all the words in the language, and even apply to made-up words such as sint, peeg, or sparg, which would be /sɪnt/, /pig/, and /sparg/ phonemically and [sĩnt], [pʰig], and [sparg] phonetically. The more linguists examine the phonologies of the world’s languages, the more they find that similar phonological rules involve the same classes of sounds such as nasals or voiceless stops. For example, many languages besides English have a rule that nasalizes vowels before nasal consonants: Nasalize a vowel when it precedes a nasal consonant in the same syllable. The rule will apply to all vowel phonemes when they occur in a context preceding any segment marked [+nasal] in the same syllable, and will add the feature [+nasal] to the feature matrix of the vowel. Our description of vowel nasalization in English needs only this rule. It need not include a list of the individual vowels to which the rule applies or a list of the sounds that result from its application. Many languages have rules that refer to [+voiced] and [–voiced] sounds. For example, the aspiration rule in English applies to the class of [–voiced] noncontinuant sounds in word-initial position. As in the vowel nasality rule, we do not Distinctive Features of Phonemes need to consider individual segments. The rule automatically applies to initial /p/, /t/, /k/, and /tʃ/. Phonological rules often apply to natural classes of sounds. 
A natural class is a group of sounds described by a small number of distinctive features such as [–voiced], [–continuant], which describe /p/, /t/, /k/, and /tʃ/. Any individual member of a natural class would require more features in its description than the class itself, so /p/ is not only [–voiced], [–continuant], but also [+labial]. The relationships among phonological rules and natural classes illustrate why segments are to be regarded as bundles of features. If segments were not specified as feature matrices, the similarities among /p/, /t/, /k/ or /m/, /n/, /ŋ/ would be lost. It would be just as likely for a language to have a rule such as 1. Nasalize vowels before p, i, or z. as to have a rule such as 2. Nasalize vowels before m, n, or ŋ. Rule 1 has no phonetic explanation, whereas Rule 2 does: the lowering of the velum in anticipation of a following nasal consonant causes the vowel to be nasalized. In Rule 1, the environment is a motley collection of unrelated sounds that cannot be described with a few features. Rule 2 applies to the natural class of nasal consonants, namely sounds that are [+nasal], [+consonantal]. The various classes of sounds discussed in chapter 4 also define natural classes to which the phonological rules of all languages may refer. They also can be specified by + and – feature values. Table 5.3 illustrates how these feature values combine to define some major classes of phonemes. The presence of +/– indicates that the sound may or may not possess a feature depending on its context. For example, word-initial nasals are [–syllabic] but some word-final nasals can be [+syllabic], as in button [bʌtn̩ ]. TABLE 5.3 | Feature Specification of Major Natural Classes of Sounds Features Consonantal Sonorant Syllabic Nasal Obstruents Nasals Liquids Glides Vowels + – – – + + +/– + + + +/– – – + – – – + + +/– Feature Specifications for American English Consonants and Vowels Here are feature matrices for vowels and consonants in English. By selecting all segments marked the same for one or more features, you can identify natural classes. For example, the natural class of high vowels /i, ɪ, u, ʊ/ is marked [+high] in the vowel feature chart of Table 5.4; the natural class of voiced stops /b, m, d, n, g, ŋ, dʒ/ are the ones marked [+voice] [–continuant] in the consonant chart of Table 5.5. 243 244 CHAPTER 5 Phonology: The Sound Patterns of Language TABLE 5.4 | Features of Some American English Vowels Features i i e ɛ æ u ʊ o ↄ a ʌ High Mid Low Back Central Round Tense + – – – – – + + – – – – – – – + – – – – + – + – – – – – – – + – – – – + – – + – + + + – – + – + – – + – + – + + – + – + – + – – – + + – – + – + – – + – – The Rules of Phonology But that to come Shall all be done by the rule. WILLIAM SHAKESPEARE, Antony and Cleopatra, 1623 Throughout this chapter we have emphasized that the relationship between the phonemic representation of a word and its phonetic representation, or how it is pronounced, is rule-governed. Phonological rules are part of a speaker’s knowledge of the language. The phonemic representations are minimally specified because some features or feature values are predictable. For example, in English all nasal consonants are voiced, so we don’t need to specify voicing in the phonemic feature matrix for nasals. Similarly, we don’t need to specify the feature round for non–low back vowels. 
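Underspecification of this kind can be pictured as leaving predictable cells blank and letting redundancy rules fill them in. The sketch below is a toy of our own (a four-segment table and a single redundancy rule for nasals); it does not reproduce Table 5.5.

```python
# A sketch of underspecification: phonemic feature matrices omit values that
# are predictable, and a redundancy rule fills them in.

PHONEMIC = {
    "m": {"nasal": "+", "labial": "+"},                  # voicing left unspecified
    "n": {"nasal": "+", "alveolar": "+"},
    "b": {"nasal": "-", "labial": "+", "voiced": "+"},   # voicing is distinctive here
    "p": {"nasal": "-", "labial": "+", "voiced": "-"},
}

def fill_redundant(matrix):
    """Add predictable feature values to a phonemic matrix."""
    filled = dict(matrix)
    if filled.get("nasal") == "+":
        filled.setdefault("voiced", "+")    # all English nasal consonants are voiced
    return filled

if __name__ == "__main__":
    for seg, feats in PHONEMIC.items():
        print(seg, fill_redundant(feats))
    # /m/ and /n/ come out [+voiced] even though voicing is not listed
    # phonemically; for /b/ and /p/ the value was already distinctive.
```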
If Table 5.5 was strictly phonemic, then instead of a + in the voice-row for m, n, and ŋ, the cells would be left blank, as would the cells in the round-row of Table 5.4 for u, ʊ, o, ɔ. Such underspecification reflects the redundancy in the phonology, which is also part of a speaker’s knowledge of the sound system. The phonemic representation should include only the nonpredictable, distinctive features of the phonemes in a word. The phonetic representation, derived by applying the phonological rules, includes all of the linguistically relevant phonetic aspects of the sounds. It does not include all of the physical properties of the sounds of an utterance, however, because the physical signal may vary in many ways that have little to do with the phonological system. The absolute pitch of the sound, the rate of speech, or its loudness is not linguistically significant. The phonetic transcription is therefore also an abstraction from the physical signal; it includes the nonvariant phonetic aspects of the utterances, those features that remain relatively constant from speaker to speaker and from one time to another. Although the specific rules of phonology differ from language to language, the kinds of rules, what they do, and the natural classes they refer to are universal. Assimilation Rules We have seen that nasalization of vowels in English is nonphonemic because it is predictable by rule. The vowel nasalization rule is an assimilation rule, or a rule + – – – – – + – – + – – – Consonantal Sonorant Syllabic Nasal Voiced Continuant Labial Alveolar Palatal Anterior Velar Coronal Sibilant + – – – + – + – – + – – – b + + –/+ + + – + – – + – – – m + – – – – – – + – + – + – t + – – – + – – + – + – + – d + + –/+ + + – – + – + – + – n + – – – – – – – – – + – – k + – – – + – – – – – + – – g + + –/+ + + – – – – – + – – ŋ + – – – – + + – – + – – – f + – – – + + + – – + – – – v + – – – – + – – – + – + – θ + – – – + + – – – + – + – ð + – – – – + – + – + – + + s + – – – + + – + – + – + + z + – – – – + – – + – – + + ∫ + – – – + + – – + – – + + ʒ + – – – – – – – + – – + + t∫ + – – – + – – – + – – + + dʒ Note: The phonemes /r/ and /l/ are distinguished by the feature [lateral], not shown here. /l/ is the only phoneme that would be [+lateral]. p Features TABLE 5.5 | Features of Some American English Consonants r + + + + –/+ –/+ – – + + + + – – + + – – + + – – + + – – l – + – – + + – – + – – + – j – + – – + + + – – – + – – w – + – – – + – – – – – – – h The Rules of Phonology 245 246 CHAPTER 5 Phonology: The Sound Patterns of Language that makes neighboring segments more similar by duplicating a phonetic property. For the most part, assimilation rules stem from articulatory processes. There is a tendency when we speak to increase the ease of articulation. It is easier to lower the velum while a vowel is being pronounced before a nasal stop than to wait for the completion of the vowel and then require the velum to move suddenly. We now wish to look more closely at the phonological rules we have been discussing. Previously, we stated the vowel nasalization rule: Vowels are nasalized before a nasal consonant within the same syllable. This rule specifies the class of sounds affected by the rule: Vowels It states what phonetic change will occur by applying the rule: Change phonemic oral vowels to phonetic nasal vowels. And it specifies the context or phonological environment. Before a nasal consonant within the same syllable. 
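Seen this way, a rule has three working parts, each statable as a feature specification: the class of segments affected, the change, and the environment. The sketch below applies such a triple to words represented as lists of feature bundles; the bundles, the matches/apply_rule helpers, and the restriction to an immediately following trigger are simplifying assumptions of ours, with nasalization as the worked case.

```python
# A schematic phonological rule: (target class, change, environment).
# Segments are feature bundles; this toy version only inspects the
# immediately following segment as the conditioning environment.

def matches(segment, spec):
    """True if the segment carries every feature value in the specification."""
    return all(segment.get(f) == v for f, v in spec.items())

def apply_rule(word, target, change, following):
    """Apply 'target → change / __ following' to a list of feature bundles."""
    out = []
    for i, seg in enumerate(word):
        nxt = word[i + 1] if i + 1 < len(word) else {}
        if matches(seg, target) and matches(nxt, following):
            out.append({**seg, **change})       # changed copy of the segment
        else:
            out.append(seg)
    return out

if __name__ == "__main__":
    # Vowel nasalization as "[+syllabic] → [+nasal] / __ [+nasal]"
    d = {"syllabic": "-", "nasal": "-", "sym": "d"}
    e = {"syllabic": "+", "nasal": "-", "sym": "ɛ"}
    n = {"syllabic": "-", "nasal": "+", "sym": "n"}
    k = {"syllabic": "-", "nasal": "-", "sym": "k"}
    for word in ([d, e, n], [d, e, k]):                       # /dɛn/, /dɛk/
        out = apply_rule(word, {"syllabic": "+"}, {"nasal": "+"}, {"nasal": "+"})
        print("".join(s["sym"] + ("\u0303" if s["nasal"] == "+" and s["syllabic"] == "+" else "")
                      for s in out))
    # prints dɛ̃n and dɛk
```

The formal notation introduced next is essentially a compact way of writing these same three parts.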
A shorthand notation to write rules, similar to the way scientists and mathematicians use symbols, makes the rule statements more concise. Every physicist knows that E = mc² means "Energy equals mass times the square of the velocity of light." We can use similar notations to state the nasalization rule as:

V → [+nasal] / __ [+nasal] $

Let's look at the rule piece by piece.

V         →         [+nasal]     /  __               [+nasal]            $
Vowels    become    nasalized    in the              before nasal        within a
                                 environment         segments            syllable

To the left of the arrow is the class of sounds that is affected. To the right of the arrow is the phonetic change that occurs. The phonological environment follows the slash. The underscore __ is the relative position of the sound to be changed within the environment, in this case before a nasal segment. The dollar sign denotes a syllable boundary and guarantees that the environment does not cross over to the next syllable.

This rule tells us that the vowels in such words as den /dɛn/ will become nasalized to [dɛ̃n], but deck /dɛk/ will not be affected and is pronounced [dɛk] because /k/ is not a nasal consonant. As well, a word such as den$tal /dɛn$təl/ will be pronounced [dɛ̃n$təl], where we have showed the syllable boundary explicitly. However, the first vowel in de$note, /di$not/, will not be nasalized, because the nasal segment does not precede the syllable boundary, so the "within a syllable" condition is not met.

Any rule written in formal notation can be stated in words. The use of formal notation is a shorthand way of presenting the information. Notation also reveals the function of the rule more explicitly than words. It is easy to see in the formal statement of the rule that this is an assimilation rule because the change to [+nasal] occurs before [+nasal] segments. Assimilation rules in languages reflect coarticulation—the spreading of phonetic features either in the anticipation or in the perseveration (the "hanging on") of articulatory processes. The auditory effect is that words sound smoother.

The following example illustrates how the English vowel nasalization rule applies. It also shows the assimilatory nature of the rule, that is, the change from no nasal feature to [+nasal]:

                                      "bob"             "boom"
Phonemic representation              /b  a  b/        /b  u  m/
Nasality: phonemic feature value      –  0* –          –  0  +
Apply nasal rule                          NA               ↓
Nasality: phonetic feature value      –  –  –          –  +  +
Phonetic representation              [b  a  b]        [b  ũ  m]

*The 0 means not present on the phonemic level.

There are many assimilation rules in English and other languages. Recall that the voiced /z/ of the English regular plural suffix is changed to [s] after a voiceless sound, and that similarly the voiced /d/ of the English regular past-tense suffix is changed to [t] after a voiceless sound. These are instances of voicing assimilation. In these cases the value of the voicing feature goes from [+voice] to [–voice] because of assimilation to the [–voice] feature of the final consonant of the stem, as in the derivation of cats:

/kæt + z/ → [kæts]

We saw a different kind of assimilation rule in Akan, where we observed that the nasal negative morpheme was expressed as [m] before /p/, [n] before /t/, and [ŋ] before /k/. (This is the homorganic nasal rule.) In this case the place of articulation—bilabial, alveolar, velar—of the nasal assimilates to the place of articulation of the following consonant.
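The homorganic pattern can be stated as a lookup from the place of articulation of the following consonant to the matching nasal. The sketch below uses the Akan forms cited above; the place table, the negate helper, and the treatment of the prefix as a bare nasal are our own simplifications.

```python
# A sketch of the homorganic nasal rule for the Akan negative prefix:
# the nasal takes on the place of articulation of the following consonant.

PLACE = {"p": "labial", "b": "labial",
         "t": "alveolar", "d": "alveolar",
         "k": "velar", "g": "velar"}

NASAL_FOR_PLACE = {"labial": "m", "alveolar": "n", "velar": "ŋ"}

def negate(verb: str) -> str:
    """Prefix the nasal negative morpheme, assimilated in place to the
    verb-initial consonant."""
    place = PLACE[verb[0]]
    return NASAL_FOR_PLACE[place] + verb

if __name__ == "__main__":
    for verb in ["pɛ", "tɪ", "kɔ"]:          # "like", "speak", "go"
        print(verb, "→", negate(verb))        # mpɛ, ntɪ, ŋkɔ
```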
The same process occurs in English, where the negative morpheme prefix spelled in- or im- agrees in place of articulation with the word to which it is prefixed, so we have impossible [ĩmpʰasəbəl], intolerant [ĩntʰalərə̃nt], and incongruous [ĩŋkʰãngruəs]. In effect, the rule makes two consonants that appear next to each other more similar. ASL and other signed languages also have assimilation rules. One example is handshape assimilation, which takes place in compounds such as the sign for “blood.” This ASL sign is a compound of the signs for “red” and “flow.” The handshape for “red” alone is formed at the chin by a closed hand with the index finger pointed up. In the compound “blood” this handshape is replaced by that of the following word “flow,” which is an open handshape (all fingers extended). In other words, the handshape for “red” has undergone assimilation. The location of the sign (at the chin) remains the same. Examples such as this tell us that while the features of signed languages are different from those of spoken languages, their phonologies are organized according to principles like those of spoken languages. 247 248 CHAPTER 5 Phonology: The Sound Patterns of Language Dissimilation Rules “Dennis the Menace” © Hank Ketcham. Reprinted with permission of North America Syndicate. It is understandable that so many languages have assimilation rules; they permit greater ease of articulation. It might seem strange, then, to learn that languages also have dissimilation rules, in which a segment becomes less similar to another segment. Ironically, such rules have the same explanation: it is sometimes easier to articulate dissimilar sounds. The difficulty of tongue twisters like “the sixth sheik’s sixth sheep is sick” is based on the repeated similarity of sounds. If one The Rules of Phonology were to make some sounds less similar, as in “the second sheik’s tenth sheep is sick,” it would be easier to say. The cartoon makes the same point, with toy boat being more difficult to articulate repeatedly than sail boat, because the [ɔɪ] of toy is more similar to [o] than is the [e] of sail. An example of easing pronunciation through dissimilation is found in some varieties of English, where there is a fricative dissimilation rule. This rule applies to sequences /fθ/ and /sθ/, changing them to [ft] and [st]. Here the fricative /θ/ becomes dissimilar to the preceding fricative by becoming a stop. For example, the words fifth and sixth come to be pronounced as if they were spelled fift and sikst. A classic example of the same kind of dissimilation occurred in Latin, and the results of this process show up in the derivational morpheme /-ar/ in English. In Latin a derivational suffix -alis was added to nouns to form adjectives. When the suffix was added to a noun that contained the liquid /l/, the suffix was changed to -aris; that is, the liquid /l/ was changed to the dissimilar liquid /r/. These words came into English as adjectives ending in -al or in its dissimilated form -ar, as shown in the following examples: -al -ar anecdot-al annu-al ment-al pen-al spiritu-al ven-al angul-ar annul-ar column-ar perpendicul-ar simil-ar vel-ar All of the -ar adjectives contain an /l/, and as columnar illustrates, the /l/ need not be the consonant directly preceding the dissimilated segment. Though dissimilation rules are rarer than assimilation rules, they are nevertheless found throughout the world’s languages. 
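The Latin pattern behind -al and -ar comes down to one condition: the suffix surfaces with /r/ when the stem already contains an /l/. A minimal sketch, with made-up stem spellings and an adjective helper of our own (real Latin morphology is of course messier):

```python
# Liquid dissimilation in the Latin adjective suffix -alis: the /l/ of the
# suffix becomes /r/ (-aris) when the stem already contains an /l/.
# The English reflexes of these forms are the -al / -ar adjectives above.

def adjective(stem: str) -> str:
    suffix = "aris" if "l" in stem else "alis"
    return stem + suffix

if __name__ == "__main__":
    for stem in ["pen", "spiritu", "angul", "column", "simil"]:
        print(stem, "→", adjective(stem))
    # penalis, spiritualis  vs.  angularis, columnaris, similaris
```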
Feature-Changing Rules The assimilation and dissimilation rules we have seen may all be thought of as feature-changing rules. In some cases a feature already present is changed. The /z/ plural morpheme has its voicing value changed from plus to minus when it follows a voiceless sound. Similarly, the /n/ in the phonemic negative prefix morpheme /ɪn/ undergoes a change in its place of articulation feature when preceding bilabials or velars. In the case of the Latin dissimilation rule, the feature [+lateral] is changed to [–lateral], so that /l/ is pronounced [r]. The addition of a feature is the other way in which we have seen features change. The English vowel nasalization rule is a case in point. Phonemically, vowels are not marked for nasality; however, in the environment specified by the rule, the feature [+nasal] is added. Some feature-changing rules are neither assimilation nor dissimilation rules. The rule in English that aspirates voiceless stops at the beginning of a syllable simply adds a nondistinctive feature. Generally, aspiration occurs only if the following vowel is stressed. The /p/ in pit and repeat is an aspirated [pʰ], but the /p/ in inspect or compass is an unaspirated [p]. We also note that even with an 249 250 CHAPTER 5 Phonology: The Sound Patterns of Language intervening consonant, the aspiration takes place so that words such as crib, clip, and quip ([kʰrɪb], [kʰlɪp], and [kʰwɪp]) all begin with an aspirated [kʰ]. And finally, the affricate /tʃ/ is subject to the rule, so chip is phonetically [tʃʰɪp]. We can now state the rule: A voiceless, noncontinuant has [+aspirated] added to its feature matrix at the beginning of a syllable containing a stressed vowel with an optional intervening consonant. Aspiration is not specified in any phonemic feature matrices of English. The aspiration rule adds this feature for reasons having to do with the timing of the closure release rather than in an attempt to make segments more alike or not alike, as with assimilation and dissimilation rules. Remember that /p/ and /b/ (and all such symbols) are simply cover symbols that do not reveal the phonemic distinctions. In phonemic and phonetic feature matrices, these differences are made explicit, as shown in the following phonemic matrices: Consonantal Continuant Labial Voiced p b + – + – + – + + ← distinctive difference The nondistinctive feature “aspiration” is not included in these phonemic representations because aspiration is predictable. Segment Insertion and Deletion Rules Phonological rules may add or delete entire segments. These are different from the feature-changing and feature-adding rules we have seen so far, which affect only parts of segments. The process of inserting a consonant or vowel is called epenthesis. The rules for forming regular plurals, possessive forms, and third-person singular verb agreement in English all require an epenthesis rule. Here is the first part of that rule that we gave earlier for plural formation: Insert a [ə] before the plural morpheme /z/ when a regular noun ends in a sibilant, giving [əz]. Letting the symbol ∅ stand for “null,” we can write this morphophonemic epenthesis rule more formally as “null becomes schwa between two sibilants,” or like this: ∅ → ə / [+sibilant] ___ [+sibilant] Similarly, we recall the first part of the rule for regular past-tense formation in English: Insert a [ə] before the past-tense morpheme when a regular verb ends in a non-nasal alveolar stop, giving [əd]. 
The Rules of Phonology This epenthesis rule may also be expressed in our more formal notation: ∅ → ə / [– nasal, + alveolar, – continuant] ___ [– nasal, + alveolar, – continuant] There is a plausible explanation for insertion of a [ə]. If we merely added a [z] to squeeze to form its plural, we would get [skwizː], which would be hard for English speakers to distinguish from [skwiz]. Similarly, if we added just [d] to load to form its past tense, it would be [lodː], which would also be difficult to distinguish from [lod], because in English we do not contrast long and short consonants. These and other examples suggest that the morphological patterns in a language are closely related to other generalizations about the phonology of that language. Just as vowel length can be used for emphasis without changing the meaning of a word, as in “Stooooop [staːp] hitting me,” an epenthetic schwa can have a similar effect, as in “P-uh-lease [pʰəliz] let me go.” Segment deletion rules are commonly found in many languages and are far more prevalent than segment insertion rules. One such rule occurs in casual or rapid speech. We often delete the unstressed vowels that are shown in bold type in words like the following: mystery general memory funeral vigorous Barbara These words in casual speech sound as if they were written: mystry genral memry funral vigrous Barbra The silent g that torments spellers in such words as sign and design is actually an indication of a deeper phonological process, in this case, one of segment deletion. Consider the following examples: A sign design paradigm B [sãɪn] [dəzãɪn] [pʰærədãɪm] signature designation paradigmatic [sɪgnətʃər] [dεzɪgneʃə̃n] [pʰærədɪgmæɾək] In none of the words in column A is there a phonetic [g], but in each corresponding word in column B a [g] occurs. Our knowledge of English phonology accounts for these phonetic differences. The “[g]—no [g]” alternation is regular, and we apply it to words that we never have heard. Suppose someone says: “He was a salignant [səlɪgnə̃nt] man.” Not knowing what the word means (which you couldn’t, since we made it up), you might ask: “Why, did he salign [səlãɪn] somebody?” It is highly doubtful that a speaker of English would pronounce the verb form without the -ant as [səlɪgn], because the phonological rules of English would delete the /g/ when it occurred in this context. This rule might be stated as: Delete a /g/ when it occurs before a syllable-final nasal consonant. 251 252 CHAPTER 5 Phonology: The Sound Patterns of Language The rule is even more general, as evidenced by the pair gnostic [nastɪk] and agnostic [ægnastɪk], and by the silent g’s in the cartoon: “Tumbleweeds” © Tom K. Ryan. Reprinted with permission of North America Syndicate. This more general rule may be stated as: Delete a /g/ word initially before a nasal consonant or before a syllable-final nasal consonant. Given this rule, the phonemic representation of the stems in sign/signature, design/ designation, malign/malignant, phlegm/phlegmatic, paradigm/paradigmatic, gnostic/agnostic, and so on will include a /g/ that will be deleted by the regular rule if a prefix or suffix is not added. By stating the class of sounds that follow the /g/ (nasal consonants) rather than any specific nasal consonant, the rule deletes the /g/ before both /m/ and /n/. Movement (Metathesis) Rules “Family Circus” © Bil Keane, Inc. Reprinted with permission of King Features Syndicate. 
Movement (Metathesis) Rules

Phonological rules may also reorder sequences of phonemes, in which case they are called metathesis rules. For some speakers of English, the word ask is pronounced [æks], but the word asking is pronounced [æskĩŋ]. In this case a metathesis rule reorders the /s/ and /k/ in certain contexts. In Old English the verb was aksian, with the /k/ preceding the /s/. A historical metathesis rule switched these two consonants, producing ask in most dialects of English.

Children's speech shows many cases of metathesis (which are corrected as the child approaches the adult grammar): aminal [æ̃mə̃nəl] for animal and pusketti [pʰəskɛti] for spaghetti are common children's pronunciations. Dog lovers have metathesized the Shetland sheepdog into a sheltie, and at least two presidents of the United States have applied a metathesis rule to the word nuclear, which many Americans pronounce [njukliər] but which is pronounced [nukjələr] by those leading statesmen.

From One to Many and from Many to One

As we've seen, phonological rules that relate phonemic to phonetic representations have several functions, among which are the following:

Function                      Example
1. Change feature values      Nasal consonant assimilation rules in Akan and English
2. Add new features           Aspiration in English
3. Delete segments            g-deletion before nasals in English
4. Add segments               Schwa insertion in English plural and past tense
5. Reorder segments           Metathesis rule relating [æsk] and [æks]

The relationship between the phonemes and phones of a language is complex and varied. Rarely is a single phoneme realized as one and only one phone. We often find one phoneme realized as several phones, as is the case with English voiceless stops, which may be realized as aspirated or unaspirated, among other possibilities. And we find that the same phone may be the realization of several different phonemes. Here is a dramatic example of that many-to-one relationship. Consider the vowels in the following pairs of words:

         A                             B
/i/      compete      [i]             competition    [ə]
/ɪ/      medicinal    [ɪ]             medicine       [ə]
/e/      maintain     [e]             maintenance    [ə]
/ɛ/      telegraph    [ɛ]             telegraphy     [ə]
/æ/      analysis     [æ]             analytic       [ə]
/a/      solid        [a]             solidity       [ə]
/o/      phone        [o]             phonetic       [ə]
/ʊ/      Talmudic     [ʊ]             Talmud         [ə]

In column A the relevant vowels are stressed and show a variety of vowel phones; in column B the corresponding vowels are without stress, or reduced, and are pronounced as schwa [ə]. In these cases the stress pattern of the word varies because of the different suffixes. The vowel that is stressed in one form becomes reduced in a different form and is therefore pronounced as [ə]. The phonemic representations of all of the root morphemes contain an unreduced vowel such as /i/ or /e/ that is phonetically [ə] when it is reduced. We can conclude, then, that [ə] is an allophone of all English vowel phonemes. The rule to derive the schwa is simple to state:

Change a vowel to [ə] when the vowel is reduced.

In the phonological description of a language, it is not always straightforward to determine phonemic representations from phonetic transcriptions. How would we deduce the /o/ in phonetic from its pronunciation as [fə̃nɛɾɪk] without a complete phonological analysis? However, given the phonemic representation and the phonological rules, we can always derive the correct phonetic representation. In our internal mental grammars this derivation is no problem, because the words occur in their phonemic forms in our mental lexicons and we know the rules of the language.
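Because the schwa rule maps many phonemes onto one phone, it is easy to model procedurally. The Python sketch below (not from the text) is an illustration under simplifying assumptions: words are lists of segment symbols, and the caller supplies which vowels are reduced, information that comes from the stress pattern.

```python
# Illustrative sketch only: "change a vowel to [ə] when the vowel is reduced."
VOWELS = {"i", "ɪ", "e", "ɛ", "æ", "a", "o", "ʊ", "u", "ʌ"}

def reduce_vowels(phonemes, reduced):
    """`reduced[i]` flags whether the vowel at position i is reduced (an assumed annotation)."""
    return ["ə" if seg in VOWELS and is_reduced else seg
            for seg, is_reduced in zip(phonemes, reduced)]

# phone /fon/: the vowel is stressed, so it stays [o];
# phonetic /fonɛtɪk/: the first vowel is reduced, so it surfaces as [ə]
print(reduce_vowels(["f", "o", "n"], [False, False, False]))
print(reduce_vowels(["f", "o", "n", "ɛ", "t", "ɪ", "k"],
                    [False, True, False, False, False, False, False]))
```

Other rules (nasalization, flapping) would also apply to phonetic before the full surface form [fə̃nɛɾɪk] is reached.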
Similar rules exist in other languages that show that there is no one-to-one relationship between phonemes and phones. For example, in German both voiced and voiceless obstruents occur as phonemes, as is shown by the following minimal pair:

Tier [tiːr] "animal"        dir [diːr] "to you"

However, when voiced obstruents occur at the end of a word or syllable, they become voiceless. The words meaning "bundle," Bund /bʊnd/, and "colorful," bunt /bʊnt/, are phonetically identical and pronounced [bʊnt] with a final [t]. Obstruent voicing is neutralized in syllable-final position.

The German devoicing rule changes the specifications of features. In German, the phonemic representation of the final stop in Bund is /d/, specified as [+voiced]; it is changed by rule to [–voiced] to derive the phonetic [t] in word-final position. Again, this shows there is no simple relationship between phonemes and their allophones. German presents us with this picture:

German phonemes:   /d/ → [d] elsewhere, [t] in syllable-final position
                   /t/ → [t]

The devoicing rule in German provides a further illustration that we cannot discern the phonemic representation of a word given only the phonetic form; [bʊnt] can be derived from either /bʊnd/ or /bʊnt/. The phonemic representations and the phonological rules together determine the phonetic forms.

The Function of Phonological Rules

The function of the phonological rules in a grammar is to provide the phonetic information necessary for the pronunciation of utterances. We may illustrate this point in the following way:

input:    Phonemic Representation of Words in a Sentence (Mental Lexicon)
                             ↓
          Phonological rules (P-rules)
                             ↓
output:   Phonetic Representation of Words in a Sentence

The input to the P-rules is the phonemic representation. The P-rules apply to the phonemic strings and produce as output the phonetic representation. The application of rules in this way is called a derivation. We have given examples of derivations that show how plurals are derived, how phonemically oral vowels become nasalized, and how /t/ and /d/ become flaps in certain environments. A derivation is thus an explicit way of showing both the effects and the function of phonological rules in a grammar.

All the examples of derivations we have so far considered show the application of just one phonological rule, except the plural and past-tense rules, which are actually one rule with two parts. In any event, it is common for more than one rule to apply to a word. For example, the word tempest is phonemically /tɛmpɛst/ (as shown by the pronunciation of tempestuous [tʰɛ̃mpʰɛstʃuəs]) but phonetically [tʰɛ̃mpəst]. Three rules apply to it: the aspiration rule, the vowel nasalization rule, and the schwa rule. We can derive the phonetic form from the phonemic representation as follows:

Underlying phonemic representation     /tɛmpɛst/
Aspiration rule                        tʰɛmpɛst
Nasalization rule                      tʰɛ̃mpɛst
Schwa rule                             tʰɛ̃mpəst
Surface phonetic representation        [tʰɛ̃mpəst]
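A derivation of this kind is simply the ordered application of rules to a phonemic string, which the following toy Python sketch (not from the text) makes explicit for tempest. The three rule functions are drastic simplifications, and treating aspiration as applying only word-initially is an assumption made to keep the example short.

```python
# Illustrative sketch only: a derivation as ordered rule application.
def aspiration(segs):
    """Aspirate a voiceless stop at the start of the word (a stand-in for the syllable-based rule)."""
    if segs and segs[0] in {"p", "t", "k"}:
        segs = [segs[0] + "ʰ"] + segs[1:]
    return segs

def nasalization(segs):
    """Nasalize a vowel when a nasal consonant follows."""
    nasals, vowels = {"m", "n", "ŋ"}, set("iɪeɛæaoɔʊuʌ")
    return [s + "\u0303" if s in vowels and i + 1 < len(segs) and segs[i + 1] in nasals else s
            for i, s in enumerate(segs)]

def schwa(segs, reduced):
    """Reduce the flagged vowel(s) to [ə]."""
    return ["ə" if flag else s for s, flag in zip(segs, reduced)]

underlying = ["t", "ɛ", "m", "p", "ɛ", "s", "t"]                 # /tɛmpɛst/
surface = schwa(nasalization(aspiration(underlying)),
                [False, False, False, False, True, False, False])
print("".join(surface))                                          # tʰɛ̃mpəst
```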
Slips of the Tongue: Evidence for Phonological Rules

Slips of the tongue, or speech errors, in which we deviate in some way from the intended utterance, show phonological rules in action. We all make speech errors, and they tell us interesting things about language and its use. Consider the following speech errors:

   Intended Utterance                         Actual Utterance
1. gone to seed [gãn tə sid]                  god to seen [gad tə sĩn]
2. stick in the mud [stɪk ĩn ðə mʌd]          smuck in the tid [smʌk ĩn ðə tʰɪd]
3. speech production [spitʃ pʰrədʌkʃə̃n]       preach seduction [pʰritʃ sədʌkʃə̃n]

In the first example, the final consonants of the first and third words were reversed. Notice that the reversal of the consonants also changed the nasality of the vowels. The vowel [ã] in the intended utterance is replaced by [a]. In the actual utterance, the nasalization was lost because it no longer occurred before a nasal consonant. The vowel in the third word, which was the non-nasal [i] in the intended utterance, became [ĩ] in the error, because it was followed by /n/. The nasalization rule applied.

In the other two errors, we see the application of the aspiration rule. In the intended stick, the /t/ would have been realized as an unaspirated [t] because it follows the syllable-initial /s/. When it was switched with the /m/ in mud, it was pronounced as the aspirated [tʰ], because it occurred initially. The third example also illustrates the aspiration rule in action.

More than being simply amusing, speech errors are linguistically interesting because they provide further evidence for phonological rules and for the decomposition of speech sounds into features. We will learn more about speech errors in chapter 8 on language processing.

Prosodic Phonology

Syllable Structure

Words are composed of one or more syllables. A syllable is a phonological unit composed of one or more phonemes. Every syllable has a nucleus, which is usually a vowel (but which may be a syllabic liquid or nasal). The nucleus may be preceded and/or followed by one or more phonemes called the syllable onset and coda.

From a very early age, children learn that certain words rhyme. In rhyming words, the nucleus and the coda of the final syllable of both words are identical, as in the following jingle:

Jack and Jill
Went up the hill
To fetch a pail of water.
Jack fell down
And broke his crown
And Jill came tumbling after.

For this reason, the nucleus + coda constitute the subsyllabic unit called a rime (note the spelling). A syllable thus has a hierarchical structure. Using the IPA symbol σ for the phonological syllable, the hierarchical structure of the monosyllabic word splints can be shown:

              σ
           /     \
       Onset     Rime
        /        /   \
     s p l   Nucleus  Coda
                ɪ     n t s
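The onset–rime hierarchy can also be pictured as a small data structure; the Python sketch below (not part of the text) is one hypothetical way to encode it, with rhyme checked as identity of the rime, as in the Jack and Jill jingle.

```python
# Illustrative sketch only: a syllable as onset + rime (nucleus + coda).
from dataclasses import dataclass

@dataclass
class Syllable:
    onset: tuple    # consonants before the nucleus
    nucleus: tuple  # the vowel (or syllabic liquid/nasal)
    coda: tuple     # consonants after the nucleus

    @property
    def rime(self):
        return (self.nucleus, self.coda)

splints = Syllable(onset=("s", "p", "l"), nucleus=("ɪ",), coda=("n", "t", "s"))
jill    = Syllable(onset=("dʒ",),         nucleus=("ɪ",), coda=("l",))
hill    = Syllable(onset=("h",),          nucleus=("ɪ",), coda=("l",))

print(splints.rime)               # (('ɪ',), ('n', 't', 's'))
print(jill.rime == hill.rime)     # True: "Jill" rhymes with "hill"
```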
Word Stress

In many languages, including English, one or more of the syllables in every content word (i.e., every word except for function words like to, the, a, of) are stressed. A stressed syllable, which can be marked by an acute accent (´), is perceived as more prominent than an unstressed syllable, as shown in the following examples:

pérvert   (noun)   as in   "My neighbor is a pervert."
pervért   (verb)   as in   "Don't pervert the idea."
súbject   (noun)   as in   "Let's change the subject."
subjéct   (verb)   as in   "He'll subject us to criticism."

These pairs show that stress can be contrastive in English. In these cases it distinguishes between nouns and verbs.

Some words may contain more than one stressed vowel, but exactly one of the stressed vowels is more prominent than the others. The vowel that receives primary stress is marked by an acute accent. The other stressed vowels are indicated by a grave accent (`) over the vowels (these vowels receive secondary stress):

rèsignátion   fùndaméntal   lìnguístics   ìntrodúctory   sỳstemátic   rèvolútion

Generally, speakers of a language know which syllable receives primary stress, which ones receive secondary stress, and which ones are reduced (are unstressed). It is part of their implicit knowledge of the language. It's usually easy to distinguish between stressed and reduced syllables, because the vowel in reduced syllables is pronounced as a schwa [ə], except at the ends of certain words such as confetti or laboratory. It may be harder to distinguish between primary and secondary stress. If you are unsure of where the primary stress is in a word (and you are a native or near-native speaker of English), try shouting the word as if talking to a person across a busy street. Often, the difference in stress becomes more apparent.

The stress pattern of a word may differ among English-speaking people. For example, in most varieties of American English the word láboratòry [lǽbərətʰɔ̀ri] has two stressed syllables, but in most varieties of British English it receives only one stress [ləbɔ́rətri]. Because English vowels generally reduce to schwa or delete when they are not stressed, the British and American vowels differ in this word. In fact, in the British version the fourth vowel is deleted because it is not stressed.

Stress is a property of the syllable rather than a segment; it is a prosodic or suprasegmental feature. To produce a stressed syllable, one may change the pitch (usually by raising it), make the syllable louder, or make it longer. We often use all three of these phonetic means to stress a syllable.

Sentence and Phrase Stress

When words are combined into phrases and sentences, one syllable receives greater stress than all others. That is, just as there is only one primary stress in a word spoken in isolation, only one of the vowels in a phrase (or sentence) receives primary stress or accent. All of the other stressed vowels are reduced to secondary stress.

In English we place primary stress on the adjectival part of a compound noun (which may be written as one word, two words separated by a hyphen, or two separate words), but we place the stress on the noun when the words are a noun phrase consisting of an adjective followed by a noun. The differences between the following pairs are therefore predictable:

Compound Noun                               Adjective + Noun
tíghtrope ("a rope for acrobatics")         tight rópe ("a rope drawn taut")
Rédcoat ("a British soldier")               red cóat ("a coat that is red")
hótdog ("a frankfurter")                    hot dóg ("an overheated dog")
Whíte House ("the President's house")       white hóuse ("a house painted white")

Say these examples out loud, speaking naturally, and at the same time listen to or feel the stress pattern. If English is not your native language, listen to a native speaker say them. These pairs show that stress may be predictable from the morphology and syntax. The phonology interacts with the other components of the grammar. The stress differences between the noun and verb pairs discussed in the previous section (subject as noun or verb) are also predictable from the syntactic word category.

Intonation

"What can I do, Tertius?" said Rosamond, turning her eyes on him again.
That little speech of four words, like so many others in all languages, is capable by varied vocal inflexions of expressing all states of mind from helpless dimness to exhaustive argumentative perception, from the completest self-devoting fellowship to the most neutral aloofness. GEORGE ELIOT, Middlemarch, 1872 In chapter 4, we discussed pitch as a phonetic feature in reference to tone languages and intonation languages. In this chapter we have discussed the use of phonetic features to distinguish meaning. We can now see that pitch is a phonemic feature in tone languages such as Chinese, Thai, and Akan. We refer to these relative pitches as contrasting tones. In intonation languages such as English, pitch still plays an important role, but in the form of the pitch contour or intonation of the phrase or sentence. In English, intonation may reflect syntactic or semantic differences. If we say John is going with a falling pitch at the end, it is a statement, but if the pitch rises at the end, it may be interpreted as a question. Similarly, What’s in the tea, honey? may, depending on intonation, be a query to someone called “honey” regarding the contents of the tea (falling intonation on honey), or may be a query regarding whether the tea contains honey (rising intonation on honey). A sentence that is ambiguous in writing may be unambiguous when spoken because of differences in the pitch contour, as we saw in the previous paragraph. 259 260 CHAPTER 5 Phonology: The Sound Patterns of Language Here is a somewhat more subtle example. Written, sentence 1 is unclear as to whether Tristram intended for Isolde to read and follow directions, or merely to follow him: 1. Tristram left directions for Isolde to follow. Spoken, if Tristram wanted Isolde to follow him, the sentence would be pronounced with a rise in pitch on the first syllable of follow, followed by a fall in pitch, as indicated (oversimplistically) in sentence 2. Tristram left directions for Isolde to follow. In this pronunciation of the sentence, the primary stress is on the word follow. If the meaning is to read and follow a set of directions, the highest pitch comes on the second syllable of directions, as illustrated, again oversimplistically, in sentence 3. Tristram left directions for Isolde to follow. The primary stress in this pronunciation is on the word directions. Pitch plays an important role in both tone languages and intonation languages, but in different ways, depending on the phonological system of the respective languages. Sequential Constraints of Phonemes If you were to receive the following telegram, you would have no difficulty in correcting the “obvious” mistakes: BEST WISHES FOR VERY HAPPP BIRTFDAY because sequences such as BIRTFDAY do not occur in the language. COLIN CHERRY, On Human Communication, 1957 Suppose you were given the following four phonemes and asked to arrange them to form all possible English words: /b/ /ɪ/ /k/ /l/ You would most likely produce the following: /blɪk/ /klɪb/ /bɪlk/ /kɪlb/ These are the only permissible arrangements of these phonemes in English. */lbkɪ/, */ɪlbk/, */bkɪl/, and */ɪlkb/ are not possible English words. 
Although /blɪk/ and /klɪb/ are not now existing words, if you heard someone say: “I just bought a beautiful new blick.” Sequential Constraints of Phonemes you might ask: “What’s a blick?” If, on the other hand, you heard someone say: “I just bought a beautiful new bkli.” you might reply, “You just bought a new what?” Your knowledge of English phonology includes information about what sequences of phonemes are permissible, and what sequences are not. After a consonant like /b/, /g/, /k/, or /p/, another stop consonant in the same syllable is not permitted by the phonology. If a word begins with an /l/ or an /r/, the next segment must be a vowel. That is why */lbɪk/ does not sound like an English word. It violates the restrictions on the sequencing of phonemes. People who like to work crossword puzzles are often more aware of these constraints than the ordinary speaker, whose knowledge, as we have emphasized, may not be conscious. Other such constraints exist in English. If the initial sounds of chill or Jill begin a word, the next sound must be a vowel. The words /tʃʌt/ or /tʃon/ or /tʃæk/ are possible in English (chut, chone, chack), as are /dʒæl/ or /dʒil/ or /dʒalɪk/ (jal, jeel, jolick), but */tʃlɔt/ and */dʒpurz/ are not. No more than three sequential consonants can occur at the beginning of a word, and these three are restricted to /s/ + /p,t,k/ + /l,r,w,y/. There are even restrictions if this condition is met. For example, /stl/ is not a permitted sequence, so stlick is not a possible word in English, but strick is, along with spew /spju/, sclaff /sklæf/ (to strike the ground with a golf club), and squat /skwat/. Other languages have different sequential restrictions. In Polish zl and kt are permissible syllable-initial combinations, as in /zlev/, “a sink,” and /kto/, “who.” Croatian permits words like the name Mladen. Japanese has severe constraints on what may begin a syllable; most combinations of consonants (e.g., /br/, /sp/) are impermissible. The limitations on sequences of segments are called phonotactic constraints. Phonotactic constraints have as their basis the syllable, rather than the word. That is, only the clusters that can begin a syllable can begin a word, and only a cluster that can end a syllable can end a word. In multisyllabic words, clusters that seem illegal may occur, for example the /kspl/ in explicit /ɛksplɪsɪt/. However, there is a syllable boundary between the /ks/ and /pl/, which we can make explicit using $: /ɛk $ splɪs $ ɪt/. Thus we have a permitted syllable coda /k/ that ends a syllable adjoined to a permitted onset /spl/ that begins a syllable. On the other hand, English speakers know that “condstluct” is not a possible word because the second syllable would have to start with an impermissible onset, either /stl/ or /tl/. In Twi, a word may end only in a vowel or a nasal consonant. The sequence /pik/ is not a possible Twi word because it breaks the phonotactic rules of the language, whereas /mba/ is not a possible word in English, although it is a word in Twi. All languages have constraints on the permitted sequences of phonemes, although different languages have different constraints. Just as spoken language has sequences of sounds that are not permitted in the language, so sign languages have forbidden combinations of features. For example, in the ASL compound for “blood” (red flow) discussed earlier, the total handshape must be assimilated, including the shape of the hand and the orientation of the fingers. 
Assimilation 261 262 CHAPTER 5 Phonology: The Sound Patterns of Language of just the handshape but not the finger orientation is impossible in ASL. The constraints may differ from one sign language to another, just as the constraints on sounds and sound sequences differ from one spoken language to another. A permissible sign in a Chinese sign language may not be a permissible sign in ASL, and vice versa. Children learn these constraints when they acquire the spoken or signed language, just as they learn what the phonemes are and how they are related to phonetic segments. Lexical Gaps The words bot [bat] and crake [kʰrek] are not known to all speakers of English, but they are words. On the other hand [bʊt] (rhymes with put), creck [kʰrɛk], cruke [kʰruk], cruk [kʰrʌk], and crike [kʰraɪk] are not now words in English, although they are possible words. Advertising professionals often use possible but nonoccurring words for the names of new products. Although we would hardly expect a new product or company to come on the market with the name Zhleet [ʒlit]—an impossible word in English—we do not bat an eye at Bic, Xerox /ziraks/, Kodak, Glaxo, or Spam (a meat product, not junk mail), because those once nonoccurring words obey the phonotactic constraints of English. A possible word contains phonemes in sequences that obey the phonotactic constraints of the language. An actual, occurring word is the union of a possible word with a meaning. Possible words without meaning are sometimes called nonsense words and are also referred to as accidental gaps in the lexicon, or lexical gaps. Thus “words” such as creck and cruck are nonsense words and represent accidental gaps in the lexicon of English. Why Do Phonological Rules Exist? No rule is so general, which admits not some exception. ROBERT BURTON, The Anatomy of Melancholy, 1621 A very important question that we have not addressed thus far is: Why do grammars have phonological rules at all? In other words, why don’t underlying or phonemic forms surface intact rather than undergoing various changes? In the previous section we discussed phonotactic constraints, which are part of our knowledge of phonology. As we saw, phonotactic constraints specify which sound sequences are permissible in a particular language, so that in English blick is a possible word but *lbick isn’t. Many linguists believe that phonological rules exist to ensure that the surface or phonetic forms of words do not violate phonotactic constraints. If underlying forms remained unmodified, they would often violate the phonotactics of the language. Consider, for example, the English past-tense rule and recall that it has two subrules. The first inserts a schwa when a regular verb ends in an alveolar stop (/t/ or /d/), as in mated [metəd]. The second devoices the past-tense morpheme /d/ when it occurs after a voiceless sound, as in reaped [ript] or peaked [pʰikt]. Why Do Phonological Rules Exist? Notice that the part of the rule that devoices /d/ reflects the constraint that English words may not end in a sequence consisting of a voiceless stop + d. Words such as [lɪpd] and [mɪkd] do not exist, nor could they exist. They are impossible words of English, just as [bkɪl] is. More generally, there are no words that end in a sequence of obstruents whose voicing features do not match. Thus words such as [kasb], where the final two obstruents are [–voice] [+voice] are not possible, nor are words such as [kabs] whose final two obstruents are [+voice] [–voice]. 
On the other hand, [kasp] and [kɛbz] are judged to be possible words because the final two segments agree in voicing. Thus, there appears to be a general constraint in English, stated as follows: (A) Obstruent sequences may not differ with respect to their voice feature at the end of a word. We can see then that the devoicing part of the past-tense rule changes the underlying form of the past-tense morpheme to create a surface form that conforms to this general constraint. Similarly, the schwa insertion part of the past-tense rule creates possible sound sequences from impossible ones. English does not generally permit sequences of sounds within a single syllable that are very similar to each other, such as [kk], [kg], [gk], [gg], [pp], [sz], [zs], and so on. (The words spelled egg and puppy are phonetically [ɛg] and [pʌpɪ].) Thus the schwa insertion rule separates sequences of sounds that are otherwise not permitted in the language because they are too similar to each other, for example, the sequence of /d/ and /d/ in /mɛnd + d/, which becomes [mɛ̃ndəd] mended, or /t/ and /d/ in /part + d/, which becomes [pʰartəd] parted. The relevant constraint is stated as follows: (B) Sequences of obstruents that differ at most with respect to voicing are not permitted within English words. Constraints such as (A) and (B) are far more general than particular rules like the past-tense rule. For example, constraint B might also explain why an adjective such as smooth turns into the abstract noun smoothness, rather than taking the affix -th [θ], as in wide-width, broad-breadth, and deep-depth. Suffixing smooth with -th would result in a sequence of too similar obstruents, smoo[ðθ], which differ only in their voicing feature. This suggests that languages may satisfy constraints in various grammatical situations. Thus, phonological rules exist because languages have general principles that constrain possible sequences of sounds. The rules specify minimal modifications of the underlying forms that bring them in line with the surface constraints. Therefore, we find different variants of a particular underlying form depending on the phonological context. It has also been proposed that a universal set of phonological constraints exists, and that this set is ordered, with some constraints being more highly ranked than others. The higher the constraint is ranked, the more influence it exerts on the language. This proposal, known as Optimality Theory, also holds that the particular constraint rankings can differ from language to language, 263 264 CHAPTER 5 Phonology: The Sound Patterns of Language and that the different rankings generate the different sound patterns shown across languages. For example, constraint B is highly ranked in English; and so we have the English past-tense rule, as well as many other rules, including the plural rule (with some modification), that modify sequences of sounds that are too similar. Constraint B is also highly ranked in other languages, for example, Modern Hebrew, in which suffixes that begin with /t/ are always separated from stems ending in /t/ or /d/ by inserting [e], as in /kiʃat + ti/ → [kiʃatetɪ] meaning “I decorated.” In Berber, similar consonants such as tt, dd, ss, and so on can surface at the end of words. In this language, constraint B is not highly ranked; other constraints outrank it and therefore exert a stronger effect on the language, notably constraints that require that surface forms not deviate from corresponding underlying forms. 
These constraints, known as faithfulness constraints, compete in the rankings with constraints that modify the underlying forms. Faithfulness constraints reflect the drive among languages to want a morpheme to have a single identifiable form, a drive that is in competition with constraints such as A and B. In the case of the English past-tense morpheme, the drive toward a single morpheme shows up in the spelling, which is always -ed.

In our discussion of syntactic rules in chapter 2, we noted that there are principles of Universal Grammar (UG) operating in the syntax. Two examples of this are the principle that transformational rules are structure dependent and the constraint that movement rules may not move phrases out of coordinate structures. If Optimality Theory is correct, and universal phonological constraints exist that differ among languages only in their rankings, then phonological rules, like syntactic rules, are constrained by universal principles. The differences in constraint rankings across languages are in some ways parallel to the different parameter settings that exist in the syntax of different languages, also discussed in chapter 2. We noted that in acquiring the syntax of her language, the young child must set the parameters of UG at the values that are correct for the language of the environment. Similarly, in acquiring the phonology of her language, the child must determine the correct constraint rankings as evidenced in the input language. We will have more to say about language acquisition in chapter 7.

Phonological Analysis

Out of clutter, find simplicity. From discord, find harmony.
ALBERT EINSTEIN (1879–1955)

Children recognize phonemes at an early age without being taught, as we shall see in chapter 7. Before reading this book, or learning anything about phonology, you knew a p sound was a phoneme in English because it contrasts words like pat and cat, pat and sat, pat and mat. But you probably did not know that the p in pat and the p in spit are different sounds. There is only one /p/ phoneme in English, but that phoneme has more than one allophone, including an aspirated one and an unaspirated one.

If a non-English-speaking linguist analyzed English, how could this fact about the sound p be discovered? More generally, how do linguists discover the phonological system of a language? To do a phonological analysis, the words to be analyzed must be transcribed in great phonetic detail, because we do not know in advance which phonetic features are distinctive and which are not. Consider the following Finnish words:

1. [kudot]  "failures"        5. [madon]  "of a worm"
2. [kate]   "cover"           6. [maton]  "of a rug"
3. [katot]  "roofs"           7. [ratas]  "wheel"
4. [kade]   "envious"         8. [radon]  "of a track"

Given these words, do the voiceless/voiced alveolar stops [t] and [d] represent different phonemes, or are they allophones of the same phoneme? Here are a few hints as to how a phonologist might proceed:

1. Check to see if there are any minimal pairs.
2. Items (2) and (4) are minimal pairs: [kate] "cover" and [kade] "envious." Items (5) and (6) are minimal pairs: [madon] "of a worm" and [maton] "of a rug."
3. [t] and [d] in Finnish thus represent the distinct phonemes /t/ and /d/.

That was an easy problem.
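The first of these hints, searching for minimal pairs, is mechanical enough to automate. The Python sketch below (not from the text) finds candidate minimal pairs by looking for equal-length transcriptions that differ in exactly one segment; treating each letter of the broad transcription as one segment is a simplifying assumption.

```python
# Illustrative sketch only: find candidate minimal pairs in a transcribed word list.
def minimal_pairs(words):
    """Return pairs of equal-length transcriptions that differ in exactly one segment."""
    pairs, items = [], list(words.items())
    for i, (w1, g1) in enumerate(items):
        for w2, g2 in items[i + 1:]:
            if len(w1) == len(w2) and sum(a != b for a, b in zip(w1, w2)) == 1:
                pairs.append((f"{w1} '{g1}'", f"{w2} '{g2}'"))
    return pairs

finnish = {"kudot": "failures", "kate": "cover", "katot": "roofs", "kade": "envious",
           "madon": "of a worm", "maton": "of a rug", "ratas": "wheel", "radon": "of a track"}
for pair in minimal_pairs(finnish):
    print(pair)   # output includes (kate, kade) and (madon, maton), showing the /t/–/d/ contrast
```

A real analysis would of course work over detailed phonetic transcriptions and would still require the linguist to check complementary distribution for noncontrasting phones, as the English and Greek examples that follow show.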
Now consider the following data from English, again focusing on [t] and [d] together with the alveolar flap [ɾ] and primary stress ´: [ráɪt] [déɾə] [mǽd] [bətróð] [lǽɾər] [ráɪɾər] [déɾɪŋ] [mʌ́ɾər] [mǽɾər] “write” “data” “mad” “betroth” “latter” “rider” “dating” “mutter” “madder” [ráɪɾər] [dét] [mǽt] [lǽɾər] [dɪ ś tə̃ns] [ráɪd] [bɛ́dsaɪd] [tú ɾər] [mǽdnɪs] “writer” “date” “mat” “ladder” “distance” “ride” “bedside” “tutor” “madness” A broad examination of the data reveals minimal pairs involving [t] and [d], so clearly /t/ and /d/ are phonemes. We also see some interesting homophones, such as ladder and latter, and writer and rider. And the flap [ɾ]? Is it a phoneme? Or is it predictable somehow? At this point the linguist undertakes the tedious task of identifying all of the immediate environments for [t], [d], and [ɾ], using # for a word boundary: [t]: áɪ_#, é_#, ǽ_#, ə_r, s_ə, #_ú [d]: #_é (3 times), ǽ_#, #_ɪ ,́ áɪ_#, ɛ́ _s, ǽ_n [ɾ]: áɪ_ə (2 times), é_ə, ǽ_ə (3 times), é_ɪ, ú _ə, ʌ́_ə It does not appear at this point that anything systematic is going on with vowel or consonant quality, so we abstract the data a little, using v for an unstressed vowel, v ́ for a stressed vowel, C for a consonant, and # for a word boundary: 265 266 CHAPTER 5 Phonology: The Sound Patterns of Language [t]: v́_#, #_v́,́ C_v, v_C [d]: #_v́,́ v́_́ #, v́_́ C [ɾ]: v́_́ v Now we see clearly that [ɾ] is in complementary distribution with both [t] and [d]. It occurs only when preceded by a stressed vowel and followed by an unstressed vowel, and neither [t] nor [d] ever do. We may conclude, based on these data, that [ɾ] is an allophone of both /t/ and /d/. We tentatively propose the “alveolar flap rule”ː An alveolar stop becomes a flap in the environment between a stressed and unstressed vowel. The phonemic forms lack a flap, so that writer is phonemically /raɪtər/ and rider is /raɪdər/, based on [raɪt] and [raɪd]. Similarly, we can propose /mædər/ for madder based on [mæd] and [mædnɪs], and /detɪŋ/ for dating based on [det]. But we don’t have enough information to determine phonemic forms of data, latter, ladder, tatter, and tutor. This is typically the case in actual analyses. Rarely is there sufficient evidence to provide all the answers. Finally, consider these data from Greek, focusing on the following sounds: [x] [k] [c] [ç] 1. 2. 3. 4. 5. 6. 7. 8. voiceless velar fricative voiceless velar stop voiceless palatal stop voiceless palatal fricative [kano] [xano] [çino] [cino] [kali] [xali] [çeli] [ceri] “do” “lose” “pour” “move” “charms” “plight” “eel” “candle” 9. 10. 11. 12. 13. 14. 15. 16. [çeri] [kori] [xori] [xrima] [krima] [xufta] [kufeta] [oçi] “hand” “daughter” “dances” “money” “shame” “handful” “bonbons” “no” To determine the status of [x], [k], [c], and [ç], you should answer the following questions. 1. 2. 3. 4. Are there are any minimal pairs in which these sounds contrast? Are any noncontrastive sounds in complementary distribution? If noncontrasting phones are found, what are the phonemes and their allophones? What are the phonological rules by which the allophones can be derived? 1. By analyzing the data, we find that [k] and [x] contrast in a number of minimal pairs, for example, in [kano] and [xano]. [k] and [x] are therefore distinctive. [c] and [ç] also contrast in [çino] and [cino] and are therefore distinctive. But what about the velar fricative [x] and the palatal fricative [ç]? And the velar Phonological Analysis stop [k] and the palatal stop [c]? 
We can find no minimal pairs that would conclusively show that these represent separate phonemes. 2. We now proceed to answer the second question: Are these noncontrasting phones, namely [x]/[ç] and [k]/[c], in complementary distribution? One way to see if sounds are in complementary distribution is to list each phone with the environment in which it is found, as follows: Phone Environment [k] [x] [c] [ç] before [a], [o], [u], [r] before [a], [o], [u], [r] before [i], [e] before [i], [e] We see that [k] and [x] are not in complementary distribution; they both occur before back vowels. Nor are [c] and [ç] in complementary distribution. They both occur before front vowels. But the stops [k] and [c] are in complementary distribution; [k] occurs before back vowels and [r], and never occurs before front vowels. Similarly, [c] occurs only before front vowels and never before back vowels or [r]. Finally, [x] and [ç] are in complementary distribution for the same reason. We therefore conclude that [k] and [c] are allophones of one phoneme, and the fricatives [x] and [ç] are also allophones of one phoneme. The pairs of allophones also fulfill the criterion of phonetic similarity. The first two are [–anterior] stops; the second are [–anterior] fricatives. (This similarity discourages us from pairing [k] with [ç], and [c] with [x], which are less similar to each other.) 3. Which of the phone pairs are more basic, and hence the ones whose features would define the phoneme? When two allophones can be derived from one phoneme, one selects as the underlying segment the allophone that makes the rules and the phonemic feature matrix as simple as possible, as we illustrated with the English unaspirated and aspirated voiceless stops. In the case of the velar and palatal stops and fricatives in Greek, the rules appear to be equally simple. However, in addition to the simplicity criterion, we wish to state rules that have natural phonetic explanations. Often these turn out to be the simplest solution. In many languages, velar sounds become palatal before front vowels. This is an assimilation rule; palatal sounds are produced toward the front of the mouth, as are front vowels. Thus we select /k/ as a phoneme with the allophones [k] and [c], and /x/ as a phoneme with the allophones [x] and [ç]. 4. We can now state the rule by which the palatals can be derived from the velars. Palatalize velar consonants before front vowels. Using feature notation we can state the rule as: [+velar] → [+palatal] / ____ [–back] Because only consonants are marked for the feature [velar], and only vowels for the feature [back], it is not necessary to include the features [consonantal] 267 268 CHAPTER 5 Phonology: The Sound Patterns of Language or [syllabic] in the rule. We also do not need to include any other features that are redundant in defining the segments to which the rule applies or the environment in which the rule applies. Thus [+palatal] in the change part of the rule is sufficient, and the feature [–back] also suffices to specify the front vowels. The simplicity criterion constrains us to state the rule as simply as we can. Summary Part of one’s knowledge of a language is knowledge of the phonology or sound system of that language. It includes the inventory of phones—which are the phonetic sounds that occur in the language—and the ways in which they pattern. This patterning determines the inventory of phonemes—the abstract basic units that differentiate words. 
When similar phones occur in complementary distribution, they are allophones—predictable phonetic variants—of one phoneme. Thus the aspirated [pʰ] and the unaspirated [p] are allophones of the phoneme /p/ because they occur in different phonetic environments. Some phones may be allophones of more than one phoneme. There is no one-to-one correspondence between the phonemes of a language and their allophones. In English, for example, stressed vowels become unstressed according to regular rules, and ultimately reduce to schwa [ə], which is an allophone of each English vowel. Phonological segments—phonemes and phones—are composed of phonetic features such as voiced, nasal, labial, and continuant, whose presence or absence is indicated by + or – signs. Voiced, continuant, and many others are distinctive features—they can contrast words. Other features like aspiration are nondistinctive and are predictable from phonetic context. Some features like nasal may be distinctive for one class of sounds (e.g., consonants) but nondistinctive for a different class of sounds (e.g., vowels). Phonetic features that are nondistinctive in one language may be distinctive in another. Aspiration is distinctive in Thai and nondistinctive in English. When two distinct words are distinguished by a single phone occurring in the same position, they constitute a minimal pair, e.g., fine [faɪn] and vine [vaɪn]. Minimal pairs also occur in sign languages. Signs may contrast by handshape, location, and movement. Words in some languages may also be phonemically distinguished by prosodic or suprasegmental features, such as pitch, stress, and segment length. Languages in which syllables or words are contrasted by pitch are called tone languages. Intonation languages may use pitch variations to distinguish meanings of phrases and sentences. The relationship between phonemic representation and phonetic representation (pronunciation) is determined by phonological rules. Phonological rules apply to phonemic strings and alter them in various ways to derive their phonetic pronunciation, or in the case of signed languages, their hand configuration. They may be assimilation rules, dissimilation rules, rules that add nondistinctive features, epenthetic rules that insert segments, deletion rules, and metathesis rules that reorder segments. References for Further Reading Phonological rules generally refer to entire classes of sound. These are natural classes, characterized by a small set of phonetic features shared by all the members of the class, e.g., [–continuant], [–voiced], to designate the natural class of voiceless stops. Linguists may use a mathematical-like formulation to express phonological rules in a concise way. For example, the rule that nasalizes vowels when they occur before a nasal consonant may be written V → [+nasal] / __ [+nasal]. Morphophonemic rules apply to specific morphemes, e.g., the plural morpheme /z/ is phonetically [z], [s], or [əz], depending on the final phoneme of the noun to which it is attached. The phonology of a language also includes sequential constraints (phonotactics) that determine which sounds may be adjacent within the syllable. These determine what words are possible in a language, and what phonetic strings are impermissible. Possible but nonoccurring words constitute accidental gaps and are nonsense words, e.g., blick [blɪk]. Phonological rules exist in part to enforce phonotactic constraints. Optimality Theory hypothesizes a set of ranked constraints that govern the phonological rules. 
To discover the phonemes of a language, linguists (or students of linguistics) can use a methodology such as looking for minimal pairs of words, or for sounds that are in complementary distribution. The phonological rules in a language show that the phonemic shape of words is not identical with their phonetic form. The phonemes are not the actual phonetic sounds, but are abstract mental constructs that are realized as sounds by the operation of rules such as those described in this chapter. No one is taught these rules, yet everyone knows them subconsciously. References for Further Reading Anderson, S. R. 1985. Phonology in the twentieth century: Theories of rules and theories of representations. Chicago: University of Chicago Press. Bybee, J. 2002. Phonology and language use. Cambridge, UK: Cambridge University Press. Chomsky, N., and M. Halle. 1968. The sound pattern of English. New York: Harper & Row. Clements, G. N., and S. J. Keyser. 1983. CV phonology: A generative theory of the syllable. Cambridge, MA: MIT Press. Goldsmith, J. A. (ed.). 1995. The handbook of phonological theory. Cambridge, MA: Blackwell. Gussman, E., S. R. Anderson, J. Bresnan, B. Comrie, W. Dressler, and C. J. Ewan. 2002. Phonology: Analysis and theory. Cambridge, UK: Cambridge University Press. Hogg, R., and C. B. McCully. 1987. Metrical phonology: A coursebook. Cambridge, UK: Cambridge University Press. Hyman, L. M. 1975. Phonology: Theory and analysis. New York: Holt, Rinehart & Winston. Kaye, Jonathan. 1989. Phonology: A cognitive view. Hillsdale, NJ: Erlbaum. Kenstowicz, M. J. 1994. Phonology in generative grammar. Oxford, UK: Blackwell Publications. 269 270 CHAPTER 5 Phonology: The Sound Patterns of Language Exercises Data in languages other than English are given in phonetic transcription without square brackets unless otherwise stated. The phonetic transcriptions of English words are given within square brackets. 1. The following sets of minimal pairs show that English /p/ and /b/ contrast in initial, medial, and final positions. Initial Medial Final pit/bit rapid/rabid cap/cab Find similar sets of minimal pairs for each pair of consonants given: a. /k/—/g/ d. /b/—/v/ g. /s/—/ʃ/ b. /m/—/n/ e. /b/—/m/ h. /tʃ/—/dʒ/ c. /l/—/r/ f. /p/—/f/ i. /s/—/z/ 2. A young patient at the Radcliffe Infirmary in Oxford, England, following a head injury, appears to have lost the spelling-to-pronunciation and pronunciation-to-spelling rules that most of us can use to read and write new words or nonsense strings. He also is unable to get to the phonemic representation of words in his lexicon. Consider the following examples of his reading pronunciation and his writing from dictation: Stimulus Reading Pronunciation Writing from Dictation fame café time note praise treat goes float /fæmi/ /sæfi/ /taɪmi/ /noti/ or /nɔti/ /pra-aɪ-si/ /tri-æt/ /go-ɛs/ /flɔ-æt/ FAM KAFA TIM NOT PRAZ TRET GOZ FLOT What rules or patterns relate his reading pronunciation to the written stimulus? What rules or patterns relate his spelling to the dictated stimulus? For example, in reading, a corresponds to /a/ or /æ/; in writing from dictation /e/ and /æ/ correspond to written A. 3. Read “A Case of Identity,” the third story in The Adventures of Sherlock Holmes by Sir Arthur Conan Doyle (and no fair reading summaries, synopses, or anything other than the original—it’s online). Now all you have to do is explain what complementary distribution has to do with this mystery. 4. Consider the distribution of [r] and [l] in Korean in the following words. 
(Some simplifying changes have been made in these transcriptions, and those in exercise 6, that have no bearing on the problems.) rubi “ruby” mul “water” kir-i “road (nom.)” pal “arm” Exercises saram irum-i ratio “person” “name (nom.)” “radio” səul ilgop ibalsa “Seoul” “seven” “barber” Are [r] and [l] allophones of one or two phonemes? a. Do they occur in any minimal pairs? b. Are they in complementary distribution? c. In what environments does each occur? d. If you conclude that they are allophones of one phoneme, state the rule that can derive the phonetic allophonic forms. 5. Consider these data from a common German dialect ([x] is a velar fricative, [ç] is a palatal fricative). nɪçt “not” baːx “Bach” reːçə̃n “rake” laːxə̃n “to laugh” ʃlɛçt “bad” kɔxt “cooks” riːçə̃n “to smell” fɛrsuːxə̃n “to try” hãɪmlɪç “sly” hoːx “high” rɛçts “rightward” ʃlʊxt “canyon” kriːçə̃n “to crawl” fɛrflʊxt “accursed” a. Are [x] and [ç] allophones of the same phoneme, or is each an allophone of a separate phoneme? Give your reasons. b. If you conclude that they are allophones of one phoneme, state the rule that can derive the phonetic allophones. 6. Here are some additional data from Korean: son “hand” ʃihap “game” som “cotton” ʃilsu “mistake” sosəl “novel” ʃipsam “thirteen” sɛk “color” ʃinho “signal” isa “moving” maʃita “is delicious” sal “flesh” oʃip “fifty” kasu “singer” miʃin “superstition” miso “grin” kaʃi “thorn” a. Are [s] and [ʃ] allophones of the same phoneme, or is each an allophone of a separate phoneme? Give your reasons. b. If you conclude that they are allophones of one phoneme, state the rule that can derive the phonetic allophones. 7. In Southern Kongo, a Bantu language spoken in Angola, the nonpalatal segments [t,s,z] are in complementary distribution with their palatal counterparts [tʃ,ʃ,ʒ], as shown in the following words: tobola “to bore a hole” tʃina “to cut” tanu “five” tʃiba “banana” kesoka “to be cut” ŋkoʃi “lion” kasu “emaciation” nselele “termite” kunezulu “heaven” aʒimola “alms” nzwetu “our” lolonʒi “to wash house” 271 272 CHAPTER 5 Phonology: The Sound Patterns of Language zevo ʒima “then” “to stretch” zeŋga tenisu “to cut” “tennis” a. State the distribution of each pair of segments. Example: [t]—[tʃ]: [t] occurs before [o], [a], [e], and [u]; [tʃ] occurs before [i]. [s]—[ʃ]: [z]—[ʒ]: b. Using considerations of simplicity, which phone should be used as the underlying phoneme for each pair of nonpalatal and palatal segments in Southern Kongo? c. State in your own words the one phonological rule that will derive all the phonetic segments from the phonemes. Do not state a separate rule for each phoneme; a general rule can be stated that will apply to all three phonemes you listed in (b). Try to give a formal statement of your rule. d. Which of the following are possible words in Southern Kongo, and which are not? i. tenesi ii. lotʃunuta iii. zevoʒiʒi iv. ʃiʃi v. ŋkasa vi. iʒiloʒa 8. In some dialects of English, the following words have different vowels, as is shown by the phonetic transcriptions: A bite rice ripe wife dike B [bʌɪt] [rʌɪs] [rʌɪp] [wʌɪf] [dʌɪk] bide rise bribe wives dime nine rile dire writhe C [baɪd] [raɪz] [braɪb] [waɪvz] [dãɪm] [nãɪn] [raɪl] [daɪr] [raɪð] die by sigh rye guy [daɪ] [baɪ] [saɪ] [raɪ] [gaɪ] a. How may the classes of sounds that end the words in columns A and B be characterized? That is, what feature specifies all the final segments in A and all the final segments in B? b. How do the words in column C differ from those in columns A and B? c. 
Are [ʌɪ] and [aɪ] in complementary distribution? Give your reasons. d. If [ʌɪ] and [aɪ] are allophones of one phoneme, should they be derived from /ʌɪ/ or /aɪ/? Why? e. Give the phonetic representations of the following words as they would be spoken in the dialect described here: life __________ lives ___________ lie ___________ file __________ bike ___________ lice ___________ f. Formulate a rule that will relate the phonemic representations to the phonetic representations of the words given above. Exercises 9. Pairs like top and chop, dunk and junk, so and show, and Caesar and seizure reveal that /t/ and /tʃ/, /d/ and /dʒ/, /s/ and /ʃ/, and /z/ and /ʒ/ are distinct phonemes in English. Consider these same pairs of nonpalatalized and palatalized consonants in the following data. (The palatal forms are optional forms that often occur in casual speech.) Nonpalatalized Palatalized [hɪt mi] [lid hĩm] [pʰæs ʌs] [luz ðem] [hɪtʃ ju] [lidʒ ju] [pʰæʃ ju] [luʒ ju] “hit me” “lead him” “pass us” “lose them” “hit you” “lead you” “pass you” “lose you” Formulate the rule that specifies when /t/, /d/, /s/, and /z/ become palatalized as [tʃ], [dʒ], [ʃ], and [ʒ]. Restate the rule using feature notations. Does the formal statement reveal the generalizations? 10. Here are some Japanese words in broad phonetic transcription. Note that [ts] is an alveolar affricate and should be taken as a single symbol just like the palatal fricative [tʃ]. It is pronounced as the initial sound in tsunami. Japanese words (except certain loan words) never contain the phonetic sequences *[ti] or *[tu]. tatami “mat” tomodatʃi “friend” utʃi “house” tegami “letter” totemo “very” otoko “male” tʃitʃi “father” tsukue “desk” tetsudau “help” ʃita “under” ato “later” matsu “wait” natsu “summer” tsutsumu “wrap” tʃizu “map” kata “person” tatemono “building” te “hand” a. Based on these data, are [t], [tʃ], and [ts] in complementary distribution? b. State the distribution—first in words, then using features—of these phones. c. Give a phonemic analysis of these data insofar as [t], [tʃ], and [ts] are concerned. That is, identify the phonemes and the allophones. d. Give the phonemic representation of the phonetically transcribed Japanese words shown as follows. Assume phonemic and phonetic representations are the same except for [t], [tʃ], and [ts]. tatami /__________/ tsukue /_________/ tsutsumu /_______/ tomodatʃi /_______/ tetsudau /________/ tʃizu /___________/ utʃi /____________/ ʃita /____________/ kata /___________/ tegami /_________/ ato /____________/ koto /___________/ totemo /_________/ matsu /__________/ tatemono /_______/ otoko /__________/ degutʃi /_________/ te /_____________/ tʃitʃi /___________/ natsu /__________/ tsuri /___________/ 11. The following words are Paku, a language created by V. Fromkin, spoken by the Pakuni in the cult classic Land of the Lost, originally an NBC television series and recently a major motion picture. The acute accent indicates a stressed vowel. a. ótu “evil” (N) c. etógo “cactus” (sg) b. túsa “evil” (Adj) d. etogṍni “cactus” (pl) 273 274 CHAPTER 5 Phonology: The Sound Patterns of Language e. f. g. h. i. páku “Paku” (sg) j. ãmpṍni “hairless ones” pakṹ ni “Paku” (pl) k. ã́ ́mi “mother” ́ épo “hair” l. ãmĩ ni “mothers” mpósa “hairless” m. áda “father” ́ ã́ ́mpo “hairless one” n. adãni “fathers” i. Is stress predictable? If so, what is the rule? ii. Is nasalization a distinctive feature for vowels? Give the reasons for your answer. iii. How are plurals formed in Paku? 12. 
Consider the following English verbs. Those in column A have stress on the penultimate (next-to-last) syllable, whereas the verbs in column B and C have their last syllable stressed. A B C astónish collápse amáze éxit exíst impróve imágine resént surpríse cáncel revólt combíne elícit adópt belíeve práctice insíst atóne a. Transcribe the words under columns A, B, and C phonemically. (Use a schwa for the unstressed vowels even if they can be derived from different phonemic vowels. This should make it easier for you.) e.g., astonish /əstanɪʃ/, collapse /kəlæps/, amaze /əmez/ b. Consider the phonemic structure of the stressed syllables in these verbs. What is the difference between the final syllables of the verbs in columns A and B? Formulate a rule that predicts where stress occurs in the verbs in columns A and B. c. In the verbs in column C, stress also occurs on the final syllable. What must you add to the rule to account for this fact? (Hint: For the forms in columns A and B, the final consonants had to be considered; for the forms in column C, consider the vowels.) 13. Following are listed the phonetic transcriptions of ten “words.” Some are English words, some are not words now but are possible words or nonsense words, and others are not possible because they violate English sequential constraints. Write the English words in regular spelling. Mark the other words as possible or not possible. For each word you mark as “not possible,” state your reason. Word Example: [θrot] [slig] [lsig] Possible Not Possible Reason X No English word can begin with a liquid followed by an obstruent. throat X Exercises Word a. b. c. d. e. f. g. h. i. j. Possible Not Possible Reason [pʰril] [skritʃ] [kʰno] [maɪ] [gnostɪk] [jũnəkʰɔrn] [fruit] [blaft] [ŋar] [æpəpʰlɛksi] 14. Consider these phonetic forms of Hebrew words: [v]—[b] bika mugbal ʃavar ʃavra ʔikev bara [f]—[p] “lamented” “limited” “broke” (masc.) “broke” (fem.) “delayed” “created” litef sefer sataf para mitpaxat haʔalpim “stroked” “book” “washed” “cow” “handkerchief” “the Alps” Assume that these words and their phonetic sequences are representative of what may occur in Hebrew. In your answers, consider classes of sounds rather than individual sounds. a. Are [b] and [v] allophones of one phoneme? Are they in complementary distribution? In what phonetic environments do they occur? Can you formulate a phonological rule stating their distribution? b. Does the same rule, or lack of a rule, that describes the distribution of [b] and [v] apply to [p] and [f]? If not, why not? c. Here is a word with one phone missing. A blank appears in place of the missing sound: hid___ik. Check the one correct statement. i. ii. iii. iv. [b] but not [v] could occur in the empty slot. [v] but not [b] could occur in the empty slot. Either [b] or [v] could occur in the empty slot. Neither [b] nor [v] could occur in the empty slot. d. Which of the following statements is correct about the incomplete word ___ana? i. ii. iii. iv. [f] but not [p] could occur in the empty slot. [p] but not [f] could occur in the empty slot. Either [p] or [f] could fill the blank. Neither [p] nor [f] could fill the blank. e. Now consider the following possible words (in phonetic transcription): laval surva labal palar falu razif If these words actually occurred in Hebrew, would they: 275 276 CHAPTER 5 Phonology: The Sound Patterns of Language i. Force you to revise the conclusions about the distribution of labial stops and fricatives you reached on the basis of the first group of words given above? 
ii. Support your original conclusions? iii. Neither support nor disprove your original conclusions? 15. Consider these data from the African language Maninka. bugo “hit” bugoli “hitting” dila “repair” dilali “repairing” don “come in” donni “coming in” dumu “eat” dumuni “eating” gwen “chase” gwenni “chasing” a. What are the two forms of the morpheme meaning “-ing”? (1) _____________________ (2) _____________________ b. Can you predict which phonetic form will occur? If so, state the rule. c. What are the “-ing” forms for the following verbs? da “lie down” __________ men “hear” ______________ famu “understand” ___________ d. What does the rule that you formulated predict for the “-ing” form of sunogo “sleep” ________________ e. If your rule predicts sunogoli, modify it to predict sunogoni without affecting the other occurrences of -li. Conversely, if your rule predicts sunogoni, modify it to predict sunogoli without affecting the other occurrences of -ni. 16. Consider the following phonetic data from the Bantu language Luganda. (The data have been somewhat altered to make the problem easier.) In each line except the last, the same root occurs in both columns A and B, but it has one prefix in column A, meaning “a” or “an,” and another prefix in column B, meaning “little.” A ẽnato ẽnapo ẽnobi ẽmpipi ẽŋkoːsa ẽmːãːmːo ẽŋːõːmːe ẽnːĩmiro ẽnugẽni B “a canoe” “a house” “an animal” “a kidney” “a feather” “a peg” “a horn” “a garden” “a stranger” akaːto akaːpo akaobi akapipi akakoːsa akabãːmːo akagõːmːe akadĩmiro akatabi “little canoe” “little house” “little animal” “little kidney” “little feather” “little peg” “little horn” “little garden” “little branch” Base your answers to the following questions on only these forms. Assume that all the words in the language follow the regularities shown here. (Hint: You may write long segments such as /mː/ as /mm/ to help you visualize more clearly the phonological processes taking place.) Exercises a. Are nasal vowels in Luganda phonemic? Are they predictable? b. Is the phonemic representation of the morpheme meaning “garden” /dimiro/? c. What is the phonemic representation of the morpheme meaning “canoe”? d. Are [p] and [b] allophones of one phoneme? e. If /am/ represents a bound prefix morpheme in Luganda, can you conclude that [ãmdãno] is a possible phonetic form for a word in this language starting with this prefix? f. Is there a homorganic nasal rule in Luganda? g. If the phonetic representation of the word meaning “little boy” is [akapoːbe], give the phonemic and phonetic representations for “a boy.” Phonemic____________________ Phonetic ____________________ h. Which of the following forms is the phonemic representation for the prefix meaning “a” or “an”? i. /en/ ii. /ẽn/ iii. /ẽm/ iv. /em/ v. /eː/ i. j. What is the phonetic representation of the word meaning “a branch”? What is the phonemic representation of the word meaning “little stranger”? k. State the three phonological rules revealed by the Luganda data. 17. Here are some Japanese verb forms given in broad phonetic transcription. They represent two styles (informal and formal) of present-tense verbs. Morphemes are separated by +. 
Gloss           Informal      Formal
call            yob + u       yob + imasu
write           kak + u       kak + imasu
eat             tabe + ru     tabe + masu
see             mi + ru       mi + masu
leave           de + ru       de + masu
go out          dekake + ru   dekake + masu
die             ʃin + u       ʃin + imasu
close           ʃime + ru     ʃime + masu
swindle         katar + u     katar + imasu
wear            ki + ru       ki + masu
read            yom + u       yom + imasu
lend            kas + u       kaʃ + imasu
wait            mats + u      matʃ + imasu
press           os + u        oʃ + imasu
apply           ate + ru      ate + masu
drop            otos + u      otoʃ + imasu
have            mots + u      motʃ + imasu
win             kats + u      katʃ + imasu
steal a lover   netor + u     netor + imasu

a. List each of the Japanese verb roots in their phonemic representations.
b. Formulate the rule that accounts for the different phonetic forms of these verb roots.
c. There is more than one allomorph for the suffix designating formality and more than one for the suffix designating informality. List the allomorphs of each. Formulate the rule or rules for their distribution.

18. Consider these data from the Native American language Ojibwa.1 (The data have been somewhat altered for the sake of simplicity; /c/ is a palatal stop.)

anokːiː     "she works"      nitanokːiː     "I work"
aːkːosi     "she is sick"    nitaːkːosi     "I am sick"
ayeːkːosi   "she is tired"   kiʃayeːkːosi   "you are tired"
ineːntam    "she thinks"     kiʃineːntam    "you think"
maːcaː      "she leaves"     nimaːcaː       "I leave"
takoʃːin    "she arrives"    nitakoʃːin     "I arrive"
pakiso      "she swims"      kipakiso       "you swim"
wiːsini     "she eats"       kiwiːsini      "you eat"

a. What forms do the morphemes meaning "I" and "you" take; that is, what are the allomorphs?
b. Are the allomorphs for "I" in complementary distribution? How about for "you"?
c. Assuming that we want one phonemic form to underlie each allomorph, what should it be?
d. State a rule that derives the phonetic forms of the allomorphs. Make it as general as possible; that is, refer to a broad natural class in the environment of the rule. You may state the rule formally, in words, or partially in words with some formal abbreviations.
e. Is the rule a morphophonemic rule; that is, does it (most likely) apply to specific morphemes but not in general? What evidence do you see in the data to suggest your answer?

1From Baker, C. L., and John McCarthy, "The Logical Problem of Language Acquisition," table: Example of Ojibwa allomorphy. © 1981 Massachusetts Institute of Technology, by permission of The MIT Press.

19. Consider these data from the Burmese language, spoken in Myanmar. The small ring under the nasal consonants indicates a voiceless nasal. Tones have been omitted, as they play no role in this problem.

ma     "health"       n̥eɪ    "unhurried"
na     "pain"         m̥i     "flame"
mjiʔ   "river"        m̥on    "flour"
nwe    "to flex"      m̥a     "order"
nwa    "cow"          n̥weɪ   "heat" (verb)
mi     "flame"        n̥a     "nostril"

Are [m] and [m̥] and [n] and [n̥] allophones or phonemic? Present evidence to support your conclusion.

20. Here are some short sentences in a made-up language called Wakanti. (Long consonants are written as doubled letters to make the analysis easier.)

aba       "I eat"            amma       "I don't eat"
ideɪ      "You sleep"        inneɪ      "You don't sleep"
aguʊ      "I go"             aŋŋuʊ      "I don't go"
upi       "We come"          umpi       "We don't come"
atu       "I walk"           antu       "I don't walk"
ika       "You see"          iŋka       "You don't see"
ijama     "You found out"    injama     "You didn't find out"
aweli     "I climbed up"     amweli     "I didn't climb up"
ioa       "You fell"         inoa       "You didn't fall"
aie       "I hunt"           anie       "I don't hunt"
ulamaba   "We put on top"    unlamaba   "We don't put on top"

a. What is the phonemic form of the negative morpheme based on these data?
b. What are its allomorphs?
c. State a rule that derives the phonetic form of the allomorphs from the underlying, phonemic form.
d. Another phonological rule applies to these data. State explicitly what the rule does and to what natural class of consonants it applies.
e. Give the phonemic forms for all the negative sentences.

21. Here are some data from French:

Phonetic       Gloss
pəti tablo     "small picture"
no tablo       "our pictures"
pəti livr      "small book"
no livr        "our books"
pəti navɛ      "small turnip"
no navɛ        "our turnips"
pətit ami      "small friend"
noz ami        "our friends"
pətit wazo     "small bird"
noz wazo       "our birds"

a. What are the two forms for the words "small" and "our"?
b. What are the phonetic environments that determine the occurrence of each form?
c. Can you express the environment by referring to word boundaries and using exactly one phonetic feature, which will refer to a certain natural class? (Hint: A more detailed phonetic transcription would show the word boundaries (#), e.g., [#no##livr#].)
d. What are the basic or phonemic forms?
e. State a rule in words that derives the nonbasic forms from the basic ones.
f. Challenge exercise: State the rule formally, using ∅ to represent "null" and # to represent a word boundary.

22. Consider these pairs of semantically related phonetic forms and glosses in a commonly known language (the + indicates a morpheme boundary):

Phonetic    Gloss                 Phonetic          Gloss
[bãm]       explosive device      [bãmb + ard]      to attack with explosive devices
[kʰrʌ̃m]     a morsel or bit       [kʰrʌ̃mb + əl]     to break into bits
[aɪæ̃m]      a metrical foot       [aɪæ̃mb + ɪc]      consisting of metrical feet
[θʌ̃m]       an opposable digit    [θʌ̃mb + əlĩnə]    a tiny woman of fairy tales

a. What are the two allomorphs of the root morpheme in each line of data?
b. What is the phonemic form of the underlying root morpheme? (Hint: Consider pairs such as atom/atomic and form/formal before you decide.)
c. State a rule that derives the allomorphs.
d. Spell these words using the English alphabet.

23. Consider these data from Hebrew. (Note: ts is an alveolar sibilant fricative and should be considered one sound, just as sh stands for the palatal fricative [ʃ]. The word lehit is a reflexive pronoun.)

Nonsibilant–Initial Verbs
kabel     "to accept"     lehit-kabel    "to be accepted"
pater     "to fire"       lehit-pater    "to resign"
bayesh    "to shame"      lehit-bayesh   "to be ashamed"

Sibilant–Initial Verbs
tsadek    "to justify"    lehits-tadek (not *lehit-tsadek)     "to apologize"
shamesh   "to use for"    lehish-tamesh (not *lehit-shamesh)   "to use"
sader     "to arrange"    lehis-tader (not *lehit-sader)       "to arrange oneself"

a. Describe the phonological change taking place in the second column of Hebrew data.
b. Describe in words as specifically as possible a phonological rule that accounts for the change. Make sure your rule doesn't affect the data in the first column of Hebrew.

24. Here are some Japanese data, many of them from exercise 10, in a fine enough phonetic transcription to show voiceless vowels (the ones with the little ring under them).

Word        Gloss              Word        Gloss               Word       Gloss
tatami      mat                tomodatʃi   friend              utʃi       house
tegami      letter             totemo      very                otoko      male
su̥kiyaki    sukiyaki           ki̥setsu     season              busata     silence
tʃi̥tʃi      father             tsu̥kue      desk                tetsudau   help
ʃi̥ta        under              ki̥ta        north               matsu      wait
degutʃi     exit               tsuri       fishing             ki̥setsu    mistress
natsu       summer             tsu̥tsumu    wrap                tʃizu      map
kata        person             fu̥ton       futon               fugi       discuss
matsu̥ʃi̥ta   (a proper name)    etsu̥ko      (a girl's name)     fu̥kuan     a plan

a. Which vowels may occur voiceless?
b. Are they in complementary distribution with their voiced counterparts? If so, state the distribution.
c. Are the voiced/voiceless pairs allophones of the same phoneme?
d. State in words, or write in formal notation if you can, the rule for determining the allophones of the vowels that have voiceless allophones.

25. With regard to English plural and past-tense rules, we observed that the two parts of the rules must be carried out in the proper order. If we reverse the order, we would get *[bʌsəs] instead of [bʌsəz] for the plural of bus (as illustrated in the text), and *[stetət] instead of [stetəd] for the past tense of state. Although constraints A and B (given below) are the motivation for the plural and past-tense rules, both the correct and incorrect plural and past-tense forms are consistent with those constraints. What additional constraint is needed to prevent [bʌsəs] and [stetət] from being generated?
(A) Obstruent sequences may not differ with respect to their voice feature at the end of a word.
(B) Sequences of obstruents that differ at most with respect to voicing are not permitted within English words.

26. There is a rule of word-final obstruent devoicing in German (e.g., German /bund/ is pronounced [bũnt]). This rule is actually a manifestation of the constraint: Voiced obstruents are not permitted at the end of a word. Given that this constraint is universal, explain why English band /bænd/ is nevertheless pronounced [bæ̃nd], not [bæ̃nt], in terms of Optimality Theory (OT).

27. For many English speakers, word-final /z/ is devoiced when the /z/ represents a separate morpheme. These speakers pronounce plurals such as dogs, days, and dishes as [dɔgs], [des], and [dɪʃəs] instead of [dɔgz], [dez], and [dɪʃəz]. Furthermore, they pronounce possessives such as Dan's, Jay's, and Liz's as [dæ̃ns], [dʒes], and [lɪzəs] instead of [dæ̃nz], [dʒez], and [lɪzəz]. Finally, they pronounce third-person singular verb forms such as reads, goes, and fusses as [rids], [gos], and [fʌsəs] instead of [ridz], [goz], and [fʌsəz]. (However, words such as daze and Franz are still pronounced [dez] and [frænz], because the /z/ is not a separate morpheme. Interestingly, in this dialect Franz and Fran's are not homophones, nor are daze and day's.) How might OT explain this phenomenon?

28. In German the third-person singular suffix is -t. Following are three German verb stems (underlying forms) and the third-person forms of these verbs:

Stem      Third person
/loːb/    [loːpt]    he praises
/zag/     [zakt]     he says
/raɪz/    [raɪst]    he travels

The final consonant of the verb stem undergoes devoicing in the third-person form, even though it is not at the end of the word. What constraint is operating to devoice the final stem consonant? How is this similar to or different from the constraint that operates in the English plural and past tense?

PART 3
The Biology and Psychology of Language

The field of psycholinguistics, or the psychology of language, is concerned with discovering the psychological processes that make it possible for humans to acquire and use language.
JEAN BERKO GLEASON AND NAN BERNSTEIN RATNER, Psycholinguistics, 1993

CHAPTER 6
What Is Language?

When we study human language, we are approaching what some might call the "human essence," the distinctive qualities of mind that are, so far as we know, unique to man.
NOAM CHOMSKY, Language and Mind, 1968 Whatever else people do when they come together—whether they play, fight, make love, or make automobiles—they talk. We live in a world of language. We talk to our friends, our associates, our wives and husbands, our lovers, our teachers, our parents, our rivals, and even our enemies. We talk to bus drivers and total strangers. We talk face-to-face and over the telephone, and everyone responds with more talk. Television and radio further swell this torrent of words. Hardly a moment of our waking lives is free from words, and even in our dreams we talk and are talked to. We also talk when there is no one to answer. Some of us talk aloud in our sleep. We talk to our pets and sometimes to ourselves. The possession of language, perhaps more than any other attribute, distinguishes humans from other animals. To understand our humanity, one must understand the nature of language that makes us human. According to the philosophy expressed in the myths and religions of many peoples, language is the source of human life and power. To some people of Africa, a newborn child is a kintu, a “thing,” not yet a muntu, a “person.” Only by the act of learning language does the child become a human being. According to this tradition, we all become “human” because we all know at least one language. But what does it mean to “know” a language? Linguistic Knowledge Do we know only what we see, or do we see what we somehow already know? CYNTHIA OZICK, “What Helen Keller Saw,” New Yorker, June 16 & 23, 2003 284 Linguistic Knowledge When you know a language, you can speak and be understood by others who know that language. This means you have the capacity to produce sounds that signify certain meanings and to understand or interpret the sounds produced by others. But language is much more than speech. Deaf people produce and understand sign languages just as hearing persons produce and understand spoken languages. The languages of the deaf communities throughout the world are equivalent to spoken languages, differing only in their modality of expression. Most everyone knows at least one language. Five-year-old children are nearly as proficient at speaking and understanding as their parents. Yet the ability to carry out the simplest conversation requires profound knowledge that most speakers are unaware of. This is true for speakers of all languages, from Albanian to Zulu. A speaker of English can produce a sentence having two relative clauses without knowing what a relative clause is, such as My goddaughter who was born in Sweden and who now lives in Iowa is named Disa, after a Viking queen. In a parallel fashion, a child can walk without understanding or being able to explain the principles of balance and support or the neurophysiological control mechanisms that permit one to do so. The fact that we may know something unconsciously is not unique to language. What, then, do speakers of English or Quechua or French or Mohawk or Arabic know? Knowledge of the Sound System “B.C.” © 1994 Creators Syndicate, Inc. Reprinted by permission of John L. Hart FLP and Creators Syndicate, Inc. Part of knowing a language means knowing what sounds (or signs1) are in that language and what sounds are not. One way this unconscious knowledge is revealed is by the way speakers of one language pronounce words from another 1The sign languages of the deaf will be discussed throughout the book. 
A reference to "language," then, unless speech sounds or spoken languages are specifically mentioned, includes both spoken and signed languages.

language. If you speak only English, for example, you may substitute an English sound for a non-English sound when pronouncing "foreign" words like French ménage à trois. If you pronounce it as the French do you are using sounds outside the English sound system. French people speaking English often pronounce words like this and that as if they were spelled zis and zat. The English sound represented by the initial letters th in these words is not part of the French sound system, and the French mispronunciation reveals the speaker's unconscious knowledge of this fact.

Knowing the sound system of a language includes more than knowing the inventory of sounds. It means also knowing which sounds may start a word, end a word, and follow each other. The name of a former president of Ghana was Nkrumah, pronounced with an initial sound like the sound ending the English word sink. While this is an English sound, no word in English begins with the nk sound. Speakers of English who have occasion to pronounce this name often mispronounce it (by Ghanaian standards) by inserting a short vowel sound, like Nekrumah or Enkrumah. Children who learn English recognize that nk cannot begin a word, just as Ghanaian children learn that words in their language can and do begin with the nk sound. We will learn more about sounds and sound systems in chapters 4 and 5.

Knowledge of Words

Knowing the sounds and sound patterns in our language constitutes only one part of our linguistic knowledge. Knowing a language means also knowing that certain sequences of sounds signify certain concepts or meanings. Speakers of English know what boy means, and that it means something different from toy or girl or pterodactyl. You also know that toy and boy are words, but moy is not. When you know a language, you know words in that language, that is, which sequences of sounds are related to specific meanings and which are not.

Arbitrary Relation of Form and Meaning

The minute I set eyes on an animal I know what it is. I don't have to reflect a moment; the right name comes out instantly. I seem to know just by the shape of the creature and the way it acts what animal it is. When the dodo came along he [Adam] thought it was a wildcat. But I saved him. I just spoke up in a quite natural way and said, "Well, I do declare if there isn't the dodo!"
MARK TWAIN, Eve's Diary, 1906

If you do not know a language, the words (and sentences) of that language will be mainly incomprehensible, because the relationship between speech sounds and the meanings they represent is, for the most part, an arbitrary one. When you are acquiring a language you have to learn that the sounds represented by the letters house signify the concept of a house; if you know French, this same meaning is represented by maison; if you know Russian, by dom; if you know Spanish, by casa. Similarly, the concept of a hand is represented by hand in English, main in French, nsa in Twi, and ruka in Russian.

The following are words in some different languages. How many of them can you understand?
a. kyinii
b. doakam
c. odun
d. asa
e. toowq
f. bolna
g. wartawan
h. inaminatu
i. yawwa

People who know the languages from which these words are taken understand that they have the following meanings:
a. a large parasol (in Twi, a Ghanaian language)
b. living creature (in Tohono O'odham, an American Indian language)
c. wood (in Turkish)
d. morning (in Japanese)
e. is seeing (in Luiseño, a California Indian language)
f. to speak (in Hindi-Urdu); aching (in Russian)
g. reporter (in Indonesian)
h. teacher (in Warao, a Venezuelan Indian language)
i. right on! (in Hausa, a Nigerian language)

These examples show that the words of a particular language have the meanings they do only by convention. Despite what Eve says in Mark Twain's satire Eve's Diary, a pterodactyl could have been called ron, blick, or kerplunkity. As Juliet says in Shakespeare's Romeo and Juliet:

What's in a name? That which we call a rose
By any other name would smell as sweet.

This conventional and arbitrary relationship between the form (sounds) and meaning (concept) of a word is also true in sign languages. If you see someone using a sign language you do not know, it is doubtful that you will understand the message from the signs alone. A person who knows Chinese Sign Language (CSL) would find it difficult to understand American Sign Language (ASL), and vice versa, as illustrated in Figure 6.1.

Many signs were originally like miming, where the relationship between form and meaning is not arbitrary. Bringing the hand to the mouth to mean "eating," as in miming, would be nonarbitrary as a sign. Over time these signs may change, just as the pronunciation of words changes, and the miming effect is lost. These signs become conventional, so that knowing the shape or movement of the hands does not reveal the meaning of the gestures in sign languages, as also shown in Figure 6.1.

FIGURE 6.1 | Arbitrary relation between gestures and meanings of the signs for father and suspect in ASL and CSL. Panels: FATHER (ASL), FATHER (CSL), SUSPECT (ASL), SUSPECT (CSL). From Poizner, Howard, Edward Klima, and Ursula Bellugi, "What the Hands Reveal about the Brain," figure: "Arbitrary relationship between gestures and meanings in ASL and CSL." Copyright © 1987 Massachusetts Institute of Technology, by permission of The MIT Press.

There is some sound symbolism in language—that is, words whose pronunciation suggests the meaning. Most languages contain onomatopoeic words like buzz or murmur that imitate the sounds associated with the objects or actions they refer to. But even here, the sounds differ from language to language, reflecting the particular sound system of the language. In English cock-a-doodle-doo is an onomatopoeic word whose meaning is the crow of a rooster, whereas in Finnish the rooster's crow is kukkokiekuu. Forget gobble gobble when you're in Istanbul; a turkey in Turkey goes glu-glu.

Sometimes particular sound sequences seem to relate to a particular concept. In English many words beginning with gl relate to sight, such as glare, glint, gleam, glitter, glossy, glaze, glance, glimmer, glimpse, and glisten. However, gl words and their like are a very small part of any language, and gl may have nothing to do with "sight" in another language, or even in other words in English, such as gladiator, glucose, glory, glutton, globe, and so on. English speakers know the gl words that relate to sight and those that do not; they know the onomatopoeic words and all the words in the basic vocabulary of the language.
No speaker of English knows all 472,000 entries in Webster’s Third New International Dictionary. And even if someone did know all the words in Webster’s, that person would still not know English. Imagine trying to learn a foreign language by buying a dictionary and memorizing words. No matter how many words you learned, you would not be able to form the simplest phrases or sentences in the language, or understand a native speaker. No one speaks in isolated words. Of course, you could search in your traveler’s dictionary for individual words to find out how to say something like “car— gas—where?” After many tries, a native might understand this question and then point in the direction of a gas station. If he answered you with a sentence, however, you probably would not understand what was said or be able to look it up, because you would not know where one word ended and another began. Chapter 2 will discuss how words are put together to form phrases and sentences, and chapter 3 will explore word and sentence meanings. The Creativity of Linguistic Knowledge Albert: So are you saying that you were the best friend of the woman who was married to the man who represented your husband in divorce? André: In the history of speech, that sentence has never been uttered before. NEIL SIMON, The Dinner Party, 2000 Knowledge of a language enables you to combine sounds to form words, words to form phrases, and phrases to form sentences. You cannot buy a dictionary or phrase book of any language with all the sentences of the language. No dictionary can list all the possible sentences, because the number of sentences in a language is infinite. Knowing a language means being able to produce new sentences never spoken before and to understand sentences never heard before. The linguist Noam Chomsky, one of the people most responsible for the modern revolution in language and cognitive science, refers to this ability as part of the creative aspect of language use. Not every speaker of a language can create 289 290 CHAPTER 6 What Is Language? great literature, but everybody who knows a language can and does create new sentences when speaking and understands new sentences created by others, a fact expressed more than 400 years ago by Huarte de San Juan (1530–1592): “Normal human minds are such that . . . without the help of anybody, they will produce 1,000 (sentences) they never heard spoke of . . . inventing and saying such things as they never heard from their masters, nor any mouth.” In pointing out the creative aspect of language, Chomsky made a powerful argument against the behaviorist view of language that prevailed in the first half of the twentieth century, which held that language is a set of learned responses to stimuli. While it is true that if someone steps on your toes you may automatically respond with a scream or a grunt, these sounds are not part of language. They are involuntary reactions to stimuli. After we reflexively cry out, we can then go on to say: “Thank you very much for stepping on my toe, because I was afraid I had elephantiasis and now that I can feel the pain I know I don’t,” or any one of an infinite number of sentences, because the particular sentences we produce are not controlled by any stimulus. Even some involuntary cries like “ouch” are constrained by our own language system, as are the filled pauses that are sprinkled through conversational speech, such as er, uh, and you know in English. They contain only the sounds found in the language. 
French speakers, for example, often fill their pauses with the vowel sound that starts their word for egg—oeuf—a sound that does not occur in English.

Our creative ability is reflected not only in what we say but also in our understanding of new or novel sentences. Consider the following sentence: "Daniel Boone decided to become a pioneer because he dreamed of pigeon-toed giraffes and cross-eyed elephants dancing in pink skirts and green berets on the wind-swept plains of the Midwest." You may not believe the sentence; you may question its logic; but you can understand it, although you have probably never heard or read it before now.

Knowledge of a language, then, makes it possible to understand and produce new sentences. If you counted the number of sentences in this book that you have seen or heard before, the number would be small. Next time you write an essay or a letter, see how many of your sentences are new. Few sentences are stored in your brain, to be pulled out to fit some situation or matched with some sentence that you hear. Novel sentences never spoken or heard before cannot be stored in your memory.

Simple memorization of all the possible sentences in a language is impossible in principle. If for every sentence in the language a longer sentence can be formed, then there is no limit to the number of sentences. In English you can say:

This is the house.
or
This is the house that Jack built.
or
This is the malt that lay in the house that Jack built.
or
This is the dog that worried the cat that killed the rat that ate the malt that lay in the house that Jack built.

And you need not stop there. How long, then, is the longest sentence? A speaker of English can say:

The old man came.
or
The old, old, old, old, old man came.

How many "olds" are too many? Seven? Twenty-three? It is true that the longer these sentences become, the less likely we would be to hear or to say them. A sentence with 276 occurrences of "old" would be highly unusual in either speech or writing, even to describe Methuselah. But such a sentence is theoretically possible. If you know English, you have the knowledge to add any number of adjectives as modifiers to a noun and to form sentences with an indefinite number of clauses, as in "the house that Jack built." All human languages permit their speakers to increase the length and complexity of sentences in these ways; creativity is a universal property of human language.

Knowledge of Sentences and Nonsentences

To memorize and store an infinite set of sentences would require an infinite storage capacity. However, the brain is finite, and even if it were not, we could not store novel sentences, which are, well, novel. When you learn a language you must learn something finite—your vocabulary is finite (however large it may be)—and that can be stored. If sentences were formed simply by placing one word after another in any order, then a language could be defined simply as a set of words. But you can see that knowledge of words is not enough by examining the following strings of words:

1. a. John kissed the little old lady who owned the shaggy dog.
   b. Who owned the shaggy dog John kissed the little old lady.
   c. John is difficult to love.
   d. It is difficult to love John.
   e. John is anxious to go.
   f. It is anxious to go John.
   g. John, who was a student, flunked his exams.
   h. Exams his flunked student a was who John.
If you were asked to put an asterisk or star before the examples that seemed ill formed or ungrammatical or "no good" to you, which ones would you mark? Our intuitive knowledge about what is or is not an allowable sentence in English convinces us to star b, f, and h. Which ones did you star? Would you agree with the following judgments?

2. a. What he did was climb a tree.
   b. *What he thought was want a sports car.3
   c. Drink your beer and go home!
   d. *What are drinking and go home?
   e. I expect them to arrive a week from next Thursday.
   f. *I expect a week from next Thursday to arrive them.
   g. Linus lost his security blanket.
   h. *Lost Linus security blanket his.

3The asterisk is used before examples that speakers find ungrammatical. This notation will be used throughout the book.

If you find the starred sentences unacceptable, as we do, you see that not every string of words constitutes a well-formed sentence in a language. Our knowledge of a language determines which strings of words are well-formed sentences and which are not. Therefore, in addition to knowing the words of the language, linguistic knowledge includes rules for forming sentences and making the kinds of judgments you made about the examples in (1) and (2). These rules must be finite in length and finite in number so that they can be stored in our finite brains. Yet, they must permit us to form and understand an infinite set of new sentences. They are not rules determined by a judge or a legislature, or even rules taught in a grammar class. They are unconscious rules that we acquire as young children as we develop language. A language, then, consists of all the sounds, words, and infinitely many possible sentences. When you know a language, you know the sounds, the words, and the rules for their combination.

Linguistic Knowledge and Performance

"What's one and one and one and one and one and one and one and one and one and one?"
"I don't know," said Alice. "I lost count."
"She can't do Addition," the Red Queen interrupted.
LEWIS CARROLL, Through the Looking-Glass, 1871

Our linguistic knowledge permits us to form longer and longer sentences by joining sentences and phrases together or adding modifiers to a noun. Whether we stop at three, five, or eighteen adjectives, it is impossible to limit the number we could add if desired. Very long sentences are theoretically possible, but they are highly improbable. Evidently, there is a difference between having the knowledge necessary to produce sentences of a language and applying this knowledge. It is a difference between what we know, which is our linguistic competence, and how we use this knowledge in actual speech production and comprehension, which is our linguistic performance.

Speakers of all languages have the knowledge to understand or produce sentences of any length. Here is an example from the ruling of a federal judge:

We invalidate the challenged lifetime ban because we hold as a matter of federal constitutional law that a state initiative measure cannot impose a severe limitation on the people's fundamental rights when the issue of whether to impose such a limitation on these rights is put to the voters in a measure that is ambiguous on its face and that fails to mention in its text, the proponent's ballot argument, or the state's official description, the severe limitation to be imposed.
However, there are physiological and psychological reasons that limit the number of adjectives, adverbs, clauses, and so on that we actually produce and understand. Speakers may run out of breath, lose track of what they have said, or die of old age before they are finished. Listeners may become confused, tired, bored, or disgusted. When we speak, we usually wish to convey some message. At some stage in the act of producing speech, we must organize our thoughts into strings of words. Sometimes the message is garbled. We may stammer, or pause, or produce slips of the tongue. We may even sound like Hattie in the cartoon, who illustrates the difference between linguistic knowledge and the way we use that knowledge in performance. “The Born Loser” © Newspaper Enterprise Association, Inc. 293 294 CHAPTER 6 What Is Language? For the most part, linguistic knowledge is unconscious knowledge. The linguistic system—the sounds, structures, meanings, words, and rules for putting them all together—is acquired with no conscious awareness. Just as we may not be conscious of the principles that allow us to stand or walk, we are unaware of the rules of language. Our ability to speak, to understand, and to make judgments about the grammaticality of sentences reveals our knowledge of the rules of our language. This knowledge represents a complex cognitive system. The nature of this system is what this book is all about. What Is Grammar? We use the term “grammar” with a systematic ambiguity. On the one hand, the term refers to the explicit theory constructed by the linguist and proposed as a description of the speaker’s competence. On the other hand, it refers to this competence itself. NOAM CHOMSKY AND MORRIS HALLE, The Sound Pattern of English, 1968 Descriptive Grammars There are no primitive languages. The great and abstract ideas of Christianity can be discussed even by the wretched Greenlanders. JOHANN PETER SUESSMILCH, in a paper delivered before the Prussian Academy, 1756 The way we are using the word grammar differs from most common usages. In our sense, the grammar is the knowledge speakers have about the units and rules of their language—rules for combining sounds into words (called phonology), rules of word formation (called morphology), rules for combining words into phrases and phrases into sentences (called syntax), as well as the rules for assigning meaning (called semantics). The grammar, together with a mental dictionary (called a lexicon) that lists the words of the language, represents our linguistic competence. To understand the nature of language we must understand the nature of grammar. Every human being who speaks a language knows its grammar. When linguists wish to describe a language, they make explicit the rules of the grammar of the language that exist in the minds of its speakers. There will be some differences among speakers, but there must be shared knowledge too. The shared knowledge—the common parts of the grammar—makes it possible to communicate through language. To the extent that the linguist’s description is a true model of the speakers’ linguistic capacity, it is a successful description of the grammar and of the language itself. Such a model is called a descriptive grammar. It does not tell you how you should speak; it describes your basic linguistic knowledge. It explains how it is possible for you to speak and understand and make judgments about well-formedness, and it tells what you know about the sounds, words, phrases, and sentences of your language. 
When we say in later chapters that a sentence is grammatical we mean that it conforms to the rules of the mental grammar (as described by the linguist); What Is Grammar? when we say that it is ungrammatical, we mean it deviates from the rules in some way. If, however, we posit a rule for English that does not agree with your intuitions as a speaker, then the grammar we are describing differs in some way from the mental grammar that represents your linguistic competence; that is, your language is not the one described. No language or variety of a language (called a dialect) is superior to any other in a linguistic sense. Every grammar is equally complex, logical, and capable of producing an infinite set of sentences to express any thought. If something can be expressed in one language or one dialect, it can be expressed in any other language or dialect. It might involve different means and different words, but it can be expressed. We will have more to say about dialects in chapter 9. This is true as well for languages of technologically underdeveloped cultures. The grammars of these languages are not primitive or ill formed in any way. They have all the richness and complexity of the grammars of languages spoken in technologically advanced cultures. Prescriptive Grammars It is certainly the business of a grammarian to find out, and not to make, the laws of a language. JOHN FELL, Essay towards an English Grammar, 1784 Just read the sentence aloud, Amanda, and listen to how it sounds. If the sentence sounds OK, go with it. If not, rearrange the pieces. Then throw out the rule books and go to bed. JAMES KILPATRICK, “Writer’s Art” (syndicated newspaper column), 1998 Any fool can make a rule And every fool will mind it HENRY DAVID THOREAU, journal entry, 1860 Not all grammarians, past or present, share the view that all grammars are equal. Language “purists” of all ages believe that some versions of a language are better than others, that there are certain “correct” forms that all educated people should use in speaking and writing, and that language change is corruption. The Greek Alexandrians in the first century, the Arabic scholars at Basra in the eighth century, and numerous English grammarians of the eighteenth and nineteenth centuries held this view. They wished to prescribe rather than describe the rules of grammar, which gave rise to the writing of prescriptive grammars. In the Renaissance a new middle class emerged who wanted their children to speak the dialect of the “upper” classes. This desire led to the publication of many prescriptive grammars. In 1762 Bishop Robert Lowth wrote A Short Introduction to English Grammar with Critical Notes. Lowth prescribed a number of new rules for English, many of them influenced by his personal taste. Before the publication of his grammar, practically everyone—upper-class, middle-class, and lower-class—said I don’t have none and You was wrong about that. Lowth, 295 296 CHAPTER 6 What Is Language? however, decided that “two negatives make a positive” and therefore one should say I don’t have any; and that even when you is singular it should be followed by the plural were. Many of these prescriptive rules were based on Latin grammar and made little sense for English. Because Lowth was influential and because the rising new class wanted to speak “properly,” many of these new rules were legislated into English grammar, at least for the prestige dialect—that variety of the language spoken by people in positions of power. 
The view that dialects that regularly use double negatives are inferior cannot be justified if one looks at the standard dialects of other languages in the world. Romance languages, for example, use double negatives, as the following examples from French and Italian show: French: Je I ne veux not want Italian: Non not voglio I-want parler speak avec with personne. no-one. parlare speak con with nessuno. no-one. English translation: “I don’t want to speak with anyone.” Prescriptive grammars such as Lowth’s are different from the descriptive grammars we have been discussing. Their goal is not to describe the rules people know, but to tell them what rules they should follow. The great British Prime Minister Winston Churchill is credited with this response to the “rule” against ending a sentence with a preposition: “This is the sort of nonsense up with which I will not put.” Today our bookstores are populated with books by language purists attempting to “save the English language.” They criticize those who use enormity to mean “enormous” instead of “monstrously evil.” But languages change in the course of time and words change meaning. Language change is a natural process, as we discuss in chapter 10. Over time enormity was used more and more in the media to mean “enormous,” and we predict that now that President Barack Obama has used it that way (in his victory speech of November 4, 2008), that usage will gain acceptance. Still, the “saviors” of the English language will never disappear. They will continue to blame television, the schools, and even the National Council of Teachers of English for failing to preserve the standard language, and are likely to continue to dis (oops, we mean disparage) anyone who suggests that African American English (AAE)4 and other dialects are viable, complete languages. In truth, human languages are without exception fully expressive, complete, and logical, as much as they were two hundred or two thousand years ago. Hopefully (another frowned-upon usage), this book will convince you that all languages and dialects are rule-governed, whether spoken by rich or poor, powerful or weak, learned or illiterate. Grammars and usages of particular groups 4AAE is also called African American Vernacular English (AAVE), Ebonics, and Black English (BE). It is spoken by some (but by no means all) African Americans. It is discussed in chapter 9. What Is Grammar? in society may be dominant for social and political reasons, but from a linguistic (scientific) perspective they are neither superior nor inferior to the grammars and usages of less prestigious members of society. Having said all this, it is undeniable that the standard dialect (defined in chapter 9) may indeed be a better dialect for someone wishing to obtain a particular job or achieve a position of social prestige. In a society where “linguistic profiling” is used to discriminate against speakers of a minority dialect, it may behoove those speakers to learn the prestige dialect rather than wait for social change. But linguistically, prestige and standard dialects do not have superior grammars. Finally, all of the preceding remarks apply to spoken language. Writing (see chapter 11) is not acquired naturally through simple exposure to others speaking the language (see chapter 7), but must be taught. Writing follows certain prescriptive rules of grammar, usage, and style that the spoken language does not, and is subject to little, if any, dialectal variation. Teaching Grammars I don’t want to talk grammar. 
I want to talk like a lady. G. B. SHAW, Pygmalion, 1912 The descriptive grammar of a language attempts to describe the rules internalized by a speaker of that language. It is different from a teaching grammar, which is used to learn another language or dialect. Teaching grammars can be helpful to people who do not speak the standard or prestige dialect, but find it would be advantageous socially and economically to do so. They are used in schools in foreign language classes. This kind of grammar gives the words and their pronunciations, and explicitly states the rules of the language, especially where they differ from the language of instruction. It is often difficult for adults to learn a second language without formal instruction, even when they have lived for an extended period in a country where the language is spoken. (Second language acquisition is discussed in more detail in chapter 7.) Teaching grammars assume that the student already knows one language and compares the grammar of the target language with the grammar of the native language. The meaning of a word is provided by a gloss—the parallel word in the student’s native language, such as maison, “house” in French. It is assumed that the student knows the meaning of the gloss “house,” and so also the meaning of the word maison. Sounds of the target language that do not occur in the native language are often described by reference to known sounds. Thus the student might be aided in producing the French sound u in the word tu by instructions such as “Round your lips while producing the vowel sound in tea.” The rules on how to put words together to form grammatical sentences also refer to the learner’s knowledge of their native language. For example, the teaching grammar Learn Zulu by Sibusiso Nyembezi states that “The difference between singular and plural is not at the end of the word but at the beginning of it,” and warns that “Zulu does not have the indefinite and definite articles 297 298 CHAPTER 6 What Is Language? ‘a’ and ‘the.’” Such statements assume students know the rules of their own grammar, in this case English. Although such grammars might be considered prescriptive in the sense that they attempt to teach the student what is or is not a grammatical construction in the new language, their aim is different from grammars that attempt to change the rules or usage of a language that is already known by the speaker. This book is not primarily concerned with either prescriptive or teaching grammars. However, these kinds of grammars are considered in chapter 9 in the discussion of standard and nonstandard dialects. Language Universals In a grammar there are parts that pertain to all languages; these components form what is called the general grammar. In addition to these general (universal) parts, there are those that belong only to one particular language; and these constitute the particular grammars of each language. CÉSAR CHESNEAU DU MARSAIS, c. 1750 There are rules of particular languages, such as English, Swahili, and Zulu, that form part of the individual grammars of these languages, and then there are rules that hold in all languages. Those rules representing the universal properties that all languages share constitute a universal grammar. The linguist attempts to uncover the “laws” of particular languages, and also the laws that pertain to all languages. The universal laws are of particular interest because they give us a window into the workings of the human mind in this cognitive domain. 
In about 1630, the German philosopher Johann Heinrich Alsted first used the term general grammar as distinct from special grammar. He believed that the function of a general grammar was to reveal those features “which relate to the method and etiology of grammatical concepts. They are common to all languages.” Pointing out that “general grammar is the pattern ‘norma’ of every particular grammar whatsoever,” he implored “eminent linguists to employ their insight in this matter.” Three and a half centuries before Alsted, the scholar Robert Kilwardby held that linguists should be concerned with discovering the nature of language in general. So concerned was Kilwardby with Universal Grammar that he excluded considerations of the characteristics of particular languages, which he believed to be as “irrelevant to a science of grammar as the material of the measuring rod or the physical characteristics of objects were to geometry.” Kilwardby was perhaps too much of a universalist. The particular properties of individual languages are relevant to the discovery of language universals, and they are of interest for their own sake. People attempting to study Latin, Greek, French, or Swahili as a second language are so focused on learning those aspects of the second language that are different from their native language that they may be skeptical of assertions that there are universal laws of language. Yet the more we investigate this question, the more evidence accumulates to support Chomsky’s view that there is a Universal Grammar (UG) that is part of the biologically endowed human Language Universals language faculty. We can think of UG as the basic blueprint that all languages follow. It specifies the different components of the grammar and their relations, how the different rules of these components are constructed, how they interact, and so on. It is a major aim of linguistic theory to discover the nature of UG. The linguist’s goal is to reveal the “laws of human language” as the physicist’s goal is to reveal the “laws of the physical universe.” The complexity of language, a product of the human brain, undoubtedly means this goal will never be fully achieved. All scientific theories are incomplete, and new hypotheses must be proposed to account for new data. Theories are continually changing as new discoveries are made. Just as physics was enlarged by Einstein’s theories of relativity, so grows the linguistic theory of UG as new discoveries shed new light on the nature of human language. The comparative study of many different languages is of central importance to this enterprise. The Development of Grammar How comes it that human beings, whose contacts with the world are brief and personal and limited, are nevertheless able to know as much as they do know? BERTRAND RUSSELL, Human Knowledge: Its Scope and Limits, 1948 Linguistic theory is concerned not only with describing the knowledge that an adult speaker has of his or her language, but also with explaining how that knowledge is acquired. All normal children acquire (at least one) language in a relatively short period with apparent ease. They do this despite the fact that parents and other caregivers do not provide them with any specific language instruction. Indeed, it is often remarked that children seem to “pick up” language just from hearing it spoken around them. 
Children are language learning virtuosos—whether a child is male or female, from a rich family or a disadvantaged one, grows up on a farm or in the city, attends day care or has home care—none of these factors fundamentally affects the way language develops. Children can acquire any language they are exposed to with comparable ease— English, Dutch, French, Swahili, Japanese—and even though each of these languages has its own peculiar characteristics, children learn them all in very much the same way. For example, all children go through a babbling stage; their babbles gradually give way to words, which then combine into simple sentences. When children first begin to produce sentences, certain elements may be missing. For example, the English-speaking two-year-old might say Cathy build house instead of Cathy is building the house. On the other side of the world, a Swahili-speaking child will say mbuzi kula majani, which translates as “goat eat grass,” and which also lacks many required elements. They pass through other linguistic stages on their way to adultlike competence, and by about age five children speak a language that is almost indistinguishable from the language of the adults around them. In just a few short years, without the benefit of explicit guidance and regardless of personal circumstances, the young child—who may be unable to tie her shoes or do even the simplest arithmetic computation—masters the complex grammatical structures of her language and acquires a substantial lexicon. Just 299 300 CHAPTER 6 What Is Language? how children accomplish this remarkable cognitive feat is a topic of intense interest to linguists. The child’s inexorable path to adult linguistic knowledge and the uniformity of the acquisition process point to a substantial innate component to language development. Chomsky, following the lead of the early rationalist philosophers, proposed that human beings are born with an innate “blueprint” for language, what we referred to earlier as Universal Grammar. Children acquire language as quickly and effortlessly as they do because they do not have to figure out all the grammatical rules, only those that are specific to their particular language. The universal properties—the laws of language—are part of their biological endowment. Linguistic theory aims to uncover those principles that characterize all human languages and to reveal the innate component of language that makes language acquisition possible. In chapter 7 we will discuss language acquisition in more detail. Sign Languages: Evidence for the Innateness of Language It is not the want of organs that [prevents animals from making] . . . known their thoughts . . . for it is evident that magpies and parrots are able to utter words just like ourselves, and yet they cannot speak as we do, that is, so as to give evidence that they think of what they say. On the other hand, men who, being born deaf and mute . . . are destitute of the organs which serve the others for talking, are in the habit of themselves inventing certain signs by which they make themselves understood. RENÉ DESCARTES, Discourse on Method, 1637 The sign languages of deaf communities provide some of the best evidence to support the notion that humans are born with the ability to acquire language, and that all languages are governed by the same universal properties. Because deaf children are unable to hear speech, they do not acquire spoken languages as hearing children do. 
However, deaf children who are exposed to sign languages acquire them just as hearing children acquire spoken languages. Sign languages do not use sounds to express meanings. Instead, they are visualgestural systems that use hand, body, and facial gestures as the forms used to represent words and grammatical rules. Sign languages are fully developed languages, and signers create and comprehend unlimited numbers of new sentences, just as speakers of spoken languages do. Current research on sign languages has been crucial to understanding the biological underpinnings of human language acquisition and use. About one in a thousand babies is born deaf or with a severe hearing deficiency. Deaf children have difficulty learning a spoken language because normal speech depends largely on auditory feedback. To learn to speak, a deaf child requires extensive training in special schools or programs designed especially for deaf people. Although deaf people can be taught to speak a language intelligibly, they can never understand speech as well as a hearing person. Seventy-five percent of spoken English words cannot be read on the lips accurately. The ability of many deaf individuals to comprehend spoken language is therefore remarkable; they Language Universals combine lip reading with knowledge of the structure of language, the meaning redundancies that language has, and context. If, however, human language is a biologically based ability and all human beings have the innate ability (or as Darwin suggested, instinct) to acquire a language, it is not surprising that nonspoken languages have developed among nonhearing individuals. The more we learn about the human linguistic knowledge, the clearer it becomes that language acquisition and use are not dependent on the ability to produce and hear sounds, but on a far more abstract cognitive capacity that accounts for the similarities between spoken and sign languages. American Sign Language The major language of the deaf community in the United States is American Sign Language (ASL). ASL is an outgrowth of the sign language used in France and brought to the United States in 1817 by the great educator Thomas Hopkins Gallaudet. Like all languages, ASL has its own grammar with phonological, morphological, syntactic, and semantic rules, and a mental lexicon of signs, all of which is encoded through a system of gestures, and is otherwise equivalent to spoken languages. Signers communicate ideas at a rate comparable to spoken communication. Moreover, language arts are not lost to the deaf community. Poetry is composed in sign language, and stage plays such as Richard Brinsley Sheridan’s The Critic have been translated into sign language and acted by the National Theatre of the Deaf. Deaf children acquire sign language much in the way that hearing children acquire a spoken language, going through the same linguistic stages including the babbling stage. Deaf children babble with their hands, just as hearing children babble with their vocal tract. Deaf children often sign themselves to sleep just as hearing children talk themselves to sleep. Deaf children report that they dream in sign language as French-speaking children dream in French and Hopispeaking children dream in Hopi. Deaf children sign to their dolls and stuffed animals. Slips of the hand occur similar to slips of the tongue; finger fumblers amuse signers as tongue twisters amuse speakers. 
Sign languages resemble spoken languages in all major aspects, showing that there truly are universals of language despite differences in the modality in which the language is performed. This universality is predictable because regardless of the modality in which it is expressed, language is a biologically based ability. In the United States there are several signing systems that educators have created in an attempt to represent spoken and/or written English. Unlike ASL, these languages are artificial, consisting essentially in the replacement of each spoken English word (and grammatical elements such as the -s ending for plurals and the -ed ending for past tense) by a sign. So the syntax and semantics of these manual codes for English are approximately the same as those of spoken English. The result is unnatural—similar to trying to speak French by translating every English word or ending into its French counterpart. Difficulties arise because there are not always corresponding forms in the two languages. The problem is even greater with sign languages because they use multidimensional space while spoken languages are sequential. 301 302 CHAPTER 6 What Is Language? FIGURE 6.2 | The ASL sign DECIDE: (a) and (c) show transitions from the sign; (b) illustrates the single downward movement of the sign. Reprinted by permission of the publisher from THE SIGNS OF LANGUAGE by Edward Klima and Ursula Bellugi, p. 62, Cambridge, Mass.: Harvard University Press, Copyright © 1979 by the President and Fellows of Harvard College. There are occasions when signers need to represent a word or concept for which there is no sign. New coinages, foreign words, acronyms, certain proper nouns, technical vocabulary, or obsolete words as might be found in a signed interpretation of a play by Shakespeare are among some of these. For such cases ASL provides a series of hand shapes and movements that represent the letters of the English alphabet, permitting all such words and concepts to be expressed through finger spelling. Signs, however, are produced differently from finger-spelled words. As Klima and Bellugi observe, “The sign DECIDE cannot be analyzed as a sequence of distinct, separable configurations of the hand. Like all other lexical signs in ASL, but unlike the individual finger-spelled letters in D-E-C-I-D-E taken separately, the ASL sign DECIDE does have an essential movement but the hand shape occurs simultaneously with the movement. In appearance, the sign is a continuous whole.”5 This sign is shown in Figure 6.2. Animal “Languages” A dog cannot relate his autobiography; however eloquently he may bark, he cannot tell you that his parents were honest though poor. BERTRAND RUSSELL, Human Knowledge: Its Scope and Limits, 1948 Is language the exclusive property of the human species? The idea of talking animals is as old and as widespread among human societies as language itself. All cultures have legends in which some animal plays a speaking role. All over West Africa, children listen to folktales in which a “spider-man” is the hero. “Coyote” is a favorite figure in many Native American tales, and many an animal takes 5 Klima, E. S., and U. Bellugi. 1979. The signs of language. Cambridge, MA: Harvard University Press. Animal “Languages” the stage in Aesop’s famous fables. The fictional Doctor Doolittle’s forte was communicating with all manner of animals, from giant snails to tiny sparrows. If language is viewed only as a system of communication, then many species communicate. 
Humans also use systems other than language to relate to each other and to send and receive "messages," like so-called body language. The question is whether the communication systems used by other species are at all like human linguistic knowledge, which is acquired by children with no instruction, and which is used creatively rather than in response to internal or external stimuli.

"Talking" Parrots

Words learned by rote a parrot may rehearse; but talking is not always to converse.
WILLIAM COWPER, Poems by William Cowper, of the Inner Temple, Esq., 1782

Most humans who acquire language use speech sounds to express meanings, but such sounds are not a necessary aspect of language, as evidenced by the sign languages. The use of speech sounds is therefore not a basic part of what we have been calling language. The chirping of birds, the squeaking of dolphins, and the dancing of bees may potentially represent systems similar to human languages. If animal communication systems are not like human language, it is not because of a lack of speech.

Conversely, when animals vocally imitate human utterances, it does not mean they possess language. Language is a system that relates sounds or gestures to meanings. Talking birds such as parrots and mynahs are capable of faithfully reproducing words and phrases of human language that they have heard, but their utterances carry no meaning. They are speaking neither English nor their own language when they sound like us.

Talking birds do not dissect the sounds of their imitations into discrete units. Polly and Molly do not rhyme for a parrot. They are as different as hello and good-bye. One property of all human languages (which will be discussed further in chapter 4) is the discreteness of the speech or gestural units, which are ordered and reordered, combined and split apart. Generally, a parrot says what it is taught, or what it hears, and no more. If Polly learns "Polly wants a cracker" and "Polly wants a doughnut" and also learns to imitate the single words whiskey and bagel, she will not spontaneously produce, as children do, "Polly wants whiskey" or "Polly wants a bagel" or "Polly wants whiskey and a bagel." If she learns cat and cats, and dog and dogs, and then learns the word parrot, she will not be able to form the plural parrots as children do by the age of three; nor can a parrot form an unlimited set of utterances from a finite set of units, or understand utterances never heard before.

Reports of an African gray parrot named Alex suggest that new methods of training animals may result in more learning than was previously believed possible. When the trainer uses words in context, Alex seems to relate some sounds with their meanings. This is more than simple imitation, but it is not how children acquire the complexities of the grammar of any language. It is more like a dog learning to associate certain sounds with meanings, such as heel, sit, fetch, and so on. Indeed, a recent study in Germany reports on a nine-year-old border collie named Rico who has acquired a 200-word vocabulary (containing both German and English words). Rico did not require intensive training but was able to learn many of these words quite quickly.
However impressive these feats, the ability of a parrot to produce sounds similar to those used in human language, even if meanings are related to these sounds, and Rico's ability to understand sequences of sounds that correspond to specific objects, cannot be equated with the child's ability to acquire the complex grammar of a human language.

The Birds and the Bees

The birds and animals are all friendly to each other, and there are no disputes about anything. They all talk, and they all talk to me, but it must be a foreign language for I cannot make out a word they say.
MARK TWAIN, Eve's Diary, 1906

Most animals possess some kind of "signaling" communication system. Among certain species of spiders there is a complex system for courtship. The male spider, before he approaches his ladylove, goes through an elaborate series of gestures to inform her that he is indeed a spider and a suitable mate, and not a crumb or a fly to be eaten. These gestures are invariant. One never finds a creative spider changing or adding to the courtship ritual of his species.

A similar kind of gestural language is found among the fiddler crabs. There are forty species, and each uses its own claw-waving movement to signal to another member of its "clan." The timing, movement, and posture of the body never change from one time to another or from one crab to another within the particular variety. Whatever the signal means, it is fixed. Only one meaning can be conveyed.

The imitative sounds of talking birds have little in common with human language, but the natural calls and songs of many species of birds do have a communicative function. They also resemble human languages in that there are "regional dialects" within the same species, and as with humans, these dialects are transmitted from parents to offspring. Indeed, researchers have noted that dialect differences may be better preserved in songbirds than in humans because there is no homogenization of regional accents due to radio or TV.

Birdcalls (consisting of one or more short notes) convey messages associated with the immediate environment, such as danger, feeding, nesting, flocking, and so on. Bird songs (more complex patterns of notes) are used to stake out territory and to attract mates. There is no evidence of any internal structure to these songs, nor can they be segmented into independently meaningful parts as words of human language can be. In a study of the territorial song of the European robin, it was discovered that the rival robins paid attention only to the alternation between high-pitched and low-pitched notes, and which came first did not matter. The message varies only to the extent of how strongly the robin feels about his possession and to what extent he is prepared to defend it and start a family in that territory. The different alternations therefore express intensity and nothing more. The robin is creative in his ability to sing the same thing in many ways, but not creative in his ability to use the same units of the system to express many different messages with different meanings.

As we discussed in the introduction, some species of birds can only acquire their song during a specific period of development. In this respect bird songs are similar to human language, for which there is also a critical period for acquisition. Although this is an important aspect of both bird song and human language, birdcalls and songs are fundamentally different kinds of communicative systems.
The kinds of messages that birds can convey are limited, and messages are stimulus controlled.

This distinction is also true of the system of communication used by honeybees. A forager bee is able to return to the hive and communicate to other bees where a source of food is located. It does so by performing a dance on a wall of the hive that reveals the location and quality of the food source. For one species of Italian honeybee, the dancing behavior may assume one of three possible patterns: round (which indicates locations near the hive, within 20 feet or so); sickle (which indicates locations at 20 to 60 feet from the hive); and tail-wagging (for distances that exceed 60 feet). The number of repetitions per minute of the basic pattern in the tail-wagging dance indicates the precise distance; the slower the repetition rate, the longer the distance.

The bees' dance is an effective system of communication for bees. It is capable, in principle, of infinitely many different messages, like human language; but unlike human language, the system is confined to a single subject—food source. An experimenter who forced a bee to walk to the food source demonstrated this inflexibility. When the bee returned to the hive, it indicated a distance twenty-five times farther away than the food source actually was. The bee had no way of communicating the special circumstances in its message. This absence of creativity makes the bee's dance qualitatively different from human language.

In the seventeenth century, the philosopher and mathematician René Descartes pointed out that the communication systems of animals are qualitatively different from the language used by humans:

It is a very remarkable fact that there are none so depraved and stupid, without even excepting idiots, that they cannot arrange different words together, forming of them a statement by which they make known their thoughts; while, on the other hand, there is no other animal, however perfect and fortunately circumstanced it may be, which can do the same.

Descartes goes on to state that one of the major differences between humans and animals is that human use of language is not just a response to external, or even internal, stimuli, as are the sounds and gestures of animals. He warns against confusing human use of language with "natural movements which betray passions and may be . . . manifested by animals."

To hold that animals communicate by systems qualitatively different from human language systems is not to claim human superiority. Humans are not inferior to the one-celled amoeba because they cannot reproduce by splitting in two; they are just different sexually. They are not inferior to hunting dogs, whose sense of smell is far better than that of human animals. As we will discuss in the next chapter, the human language ability is rooted in the human brain, just as the communication systems of other species are determined by their biological structure. All the studies of animal communication systems, including those of primates, provide evidence for Descartes' distinction between other animal communication systems and the linguistic creative ability possessed by the human animal.

Can Chimps Learn Human Language?

It is a great baboon, but so much like man in most things. . . . I do believe it already understands much English; and I am of the mind it might be taught to speak or make signs.
ENTRY IN SAMUEL PEPYS'S DIARY, 1661

In their natural habitat, chimpanzees, gorillas, and other nonhuman primates communicate with each other through visual, auditory, olfactory, and tactile signals. Many of these signals seem to have meanings associated with the animals' immediate environment or emotional state. They can signal danger and can communicate aggressiveness and subordination. However, the natural sounds and gestures produced by all nonhuman primates are highly stereotyped and limited in the type and number of messages they convey, consisting mainly of emotional responses to particular situations. They have no way of expressing the anger they felt yesterday or the anticipation of tomorrow.

Even though the natural communication systems of these animals are quite limited, many people have been interested in the question of whether they have the latent capacity to acquire complex linguistic systems similar to human language. Throughout the second half of the twentieth century, there were a number of studies designed to test whether nonhuman primates could learn human language.

In early experiments researchers raised chimpanzees in their own homes alongside their children, in order to recreate the natural environment in which human children acquire language. The chimps were unable to vocalize words despite the efforts of their caretakers, though they did achieve the ability to understand a number of individual words. One disadvantage suffered by primates is that their vocal tracts do not permit them to pronounce many different sounds. Because of their manual dexterity, primates might better be taught sign language as a test of their cognitive linguistic ability. Starting with a chimpanzee named Washoe, and continuing over the years with a gorilla named Koko and another chimp ironically named Nim Chimpsky (after Noam Chomsky), efforts were made to teach them American Sign Language. Though the primates achieved small successes such as the ability to string two signs together, and to occasionally show flashes of creativity, none achieved the qualitative linguistic ability of a human child.

Similar results were obtained in attempting to teach primates artificial languages designed to resemble human languages in some respects. Sarah, Lana, Sherman, Austin, and other chimpanzees were taught languages whose "words" were plastic chips, or keys on a keyboard, that could be arranged into "sentences." The researchers were particularly interested in the ability of primates to communicate using such abstract symbols. These experiments also came under scrutiny. Questions arose over what kind of knowledge Sarah and Lana were showing with their symbol manipulations. The conclusion was that the creative ability that is so much a part of human language was not evidenced by the chimps' use of the artificial languages.

More recently, psychologists Patricia Greenfield and Sue Savage-Rumbaugh studied a different species of chimp, a male bonobo (or pygmy chimpanzee) named Kanzi. They used the same plastic symbols and computer keyboard that were used with Lana. They claimed that Kanzi not only learned, but also invented, grammatical rules. One rule they described is the use of a symbol designating an object such as "dog" followed by a symbol meaning "go." After combining these symbols, Kanzi would then go to an area where dogs were located to play with them.
Greenfield and Savage-Rumbaugh claimed that this "ordering" rule was not an imitation of his caretakers, who they said used the opposite ordering, in which "go" was followed by "dogs."

Kanzi's acquisition of grammatical skills was slower than that of children, taking about three years (starting when he was five and a half years old). Most of Kanzi's "sentences" are fixed formulas with little if any internal structure. Kanzi has not yet exhibited the linguistic knowledge of a human three-year-old, whose complexity level includes knowledge of sentence structure. Moreover, unlike Kanzi, who used a word order different from that of his caretakers, children rapidly adopt the correct word order of the surrounding language.

As often happens in science, the search for the answers to one kind of question leads to answers to other questions. The linguistic experiments with primates have led to many advances in our understanding of primate cognitive ability. Researchers have gone on to investigate other capacities of the chimp mind, such as causality; Savage-Rumbaugh and Greenfield are continuing to study the ability of chimpanzees to use symbols. These studies also point out how remarkable it is that human children, by the ages of three and four, without explicit teaching or overt reinforcement, create new and complex sentences never spoken and never heard before.

In the Beginning: The Origin of Language

Nothing, no doubt, would be more interesting than to know from historical documents the exact process by which the first man began to lisp his first words, and thus to be rid forever of all the theories on the origin of speech.
MAX MÜLLER, Lectures on the Science of Language, 1874

All religions and mythologies contain stories of language origin. Philosophers through the ages have argued the question. Scholarly works have been written on the subject. Prizes have been awarded for the "best answer" to this eternally perplexing problem. Theories of divine origin, language as a human invention, and evolutionary development have all been put forward.

Linguistic history suggests that spoken languages of the kind that exist today have been around for tens of thousands of years at the very least, but the earliest deciphered written records are barely six thousand years old. (The origin of writing is discussed in chapter 11.) These records appear so late in the history of the development of language that they provide no clue to its origin. Despite the difficulty of finding scientific evidence, speculations on language origin have provided valuable insights into the nature and development of language, which prompted the great Danish linguist Otto Jespersen to state that "linguistic science cannot refrain forever from asking about the whence (and about the whither) of linguistic evolution." A brief look at some of these speculative notions will reveal this point.

Divine Gift

And out of the ground the Lord God formed every beast of the field, and every fowl of the air; and brought them unto Adam to see what he would call them: and whatsoever Adam called every living creature, that was the name thereof.
GENESIS 2:19, The Bible, King James Version

According to Judeo-Christian beliefs, the one deity gave Adam the power to name all things. Similar beliefs are found throughout the world. According to the Egyptians, the creator of speech was the god Thoth.
Babylonians believed that the language giver was the god Nabu, and the Hindus attributed our unique language ability to a female god: Brahma was the creator of the universe, but his wife Sarasvati gave language to us. Plato held that at some ancient time, a "legislator" gave the correct, natural name to everything, and that words echoed the essence of their meanings.

Belief in the divine origin of language is intertwined with the supernatural properties that have been associated with the spoken word. In many religions only special languages may be used in prayers and rituals, such as Latin in the Catholic Church for many centuries. The Hindu priests of the fifth century B.C.E. believed that the original pronunciation of Vedic Sanskrit was sacred and must be preserved. This led to important linguistic study because their language had already changed greatly since the hymns of the Vedas had been written. The first linguist known to us is Panini, who wrote a descriptive grammar of Sanskrit in the fourth century B.C.E. that revealed the earlier pronunciation, which could then be used in religious worship. Even today Panini's deep insights into the workings of language are highly revered by linguists.

Although myths, customs, and superstitions do not tell us very much about language origin, they do tell us about the importance ascribed to language. There is no way to prove or disprove the divine origin of language, just as one cannot argue scientifically for or against the existence of deities.

The First Language

Imagine the Lord talking French! Aside from a few odd words in Hebrew, I took it completely for granted that God had never spoken anything but the most dignified English.
CLARENCE DAY, Life with Father, 1935

For millennia, "scientific" experiments have reportedly been devised to verify particular theories of the first language. The Egyptian pharaoh Psammetichus (664–610 B.C.E.) sought to determine the most primitive language by isolating two infants in a mountain hut, to be cared for by a mute servant, in the belief that their first words would be in the original language. They weren't! History is replete with similar stories, but as we saw in the introduction, all such "experimentation" on children is unspeakably cruel and utterly worthless.

Nearly all "theories" of language origin, however silly and superstitious, contain the implicit belief that all languages originated from a single source—the monogenetic theory of language origin. Opposing this is the proposition that language arose in several places, or at several times, in the course of history. Which of these is true is still debated by linguists.

Human Invention or the Cries of Nature?

Language was born in the courting days of mankind; the first utterances of speech I fancy to myself like something between the nightly love lyrics of puss upon the tiles and the melodious love songs of the nightingale.
OTTO JESPERSEN, Language, Its Nature, Development, and Origin, 1922

Despite all evidence to the contrary, the idea that the earliest form of language was imitative, or echoic, was proposed up to the twentieth century. A parallel view states that language at first consisted of emotional ejaculations of pain, fear, surprise, pleasure, anger, and so on.
French philosopher Jean-Jacques Rousseau proposed that the earliest manifestations of language were "cries of nature." Other hypotheses suggested that language arose out of the rhythmical grunts of men and women working together, or, more charming, that language originated from song as an expressive rather than a communicative need.

Just as with the beliefs in a divine origin of language, these proposed origins are not verifiable by scientific means. Language most likely evolved with the human species, possibly in stages, possibly in one giant leap. Research by linguists, evolutionary biologists, and neurologists supports this view, as well as the view that from the outset the human animal was genetically equipped to learn language. Further discussion of this topic can be found in the introduction.

Language and Thought

It was intended that when Newspeak had been adopted once and for all and Oldspeak forgotten, a heretical thought—that is, a thought diverging from the principles of IngSoc—should be literally unthinkable, at least so far as thought is dependent on words.
GEORGE ORWELL, appendix to 1984, 1949

Many people are fascinated by the question of how language relates to thought. It is natural to imagine that something as powerful and fundamental to human nature as language would influence how we think about or perceive the world around us. This is clearly reflected in the appendix of George Orwell's masterpiece 1984, quoted above.

Over the years there have been many claims made regarding the relationship between language and thought. The claim that the structure of a language influences how its speakers perceive the world around them is most closely associated with the linguist Edward Sapir and his student Benjamin Whorf, and is therefore referred to as the Sapir-Whorf hypothesis. In 1929 Sapir wrote:

Human beings do not live in the objective world alone, nor in the world of social activity as ordinarily understood, but are very much at the mercy of the particular language which has become the medium of expression for their society . . . we see and hear and otherwise experience very largely as we do because the language habits of our community predispose certain choices of interpretation.6

Whorf made even stronger claims:

The background linguistic system (in other words, the grammar) of each language is not merely the reproducing instrument for voicing ideas but rather is itself the shaper of ideas, the program and guide for the individual's mental activity, for his analysis of impressions, for his synthesis of his mental stock in trade . . . We dissect nature along lines laid down by our native languages.7

The strongest form of the Sapir-Whorf hypothesis is called linguistic determinism because it holds that the language we speak determines how we perceive and think about the world. On this view language acts like a filter on reality. One of Whorf's best-known claims in support of linguistic determinism was that the Hopi Indians do not perceive time in the same way as speakers of European languages because the Hopi language does not make the grammatical distinctions of tense that, for example, English does with words and word endings such as did, will, shall, -s, -ed, and -ing.

A weaker form of the hypothesis is linguistic relativism, which says that different languages encode different categories and that speakers of different languages therefore think about the world in different ways. For example, languages break up the color spectrum at different points.
In Navaho, blue and green are one word. Russian has different words for dark blue (siniy) and light blue (goluboy), while in English we need to use the additional words dark and light to express the difference. The American Indian language Zuni does not distinguish between the colors yellow and orange.

Languages also differ in how they express locations. For example, in Italian you ride "in" a bicycle and you go "in" a country, while in English you ride "on" a bicycle and you go "to" a country. In English we say that a ring is placed "on" a finger and a finger is placed "in" the ring. Korean, on the other hand, has one word for both situations, kitta, which expresses the idea of a tight-fitting relation between the two objects. Spanish has two different words for the inside of a corner (esquina) and the outside of a corner (rincon). The Whorfian claim that is perhaps most familiar is that the Eskimo language Inuit has many more words than English for snow and that this affects the world view of the Inuit people.

6 Sapir, E. 1929. Language. New York: Harcourt, Brace & World, p. 207.
7 Whorf, B. L., and J. B. Carroll. 1956. Language, thought, and reality: Selected writings. Cambridge, MA: MIT Press.

That languages show linguistic distinctions in their lexicons and grammar is certain, and we will see many examples of this in later chapters. The question is to what extent—if at all—such distinctions determine or influence the thoughts and perceptions of speakers. The Sapir-Whorf hypothesis is controversial, but it is clear that the strong form of this hypothesis is false. People's thoughts and perceptions are not determined by the words and structures of their language. We are not prisoners of our linguistic systems.

If speakers were unable to think about something for which their language had no specific word, translations would be impossible, as would learning a second language. English may not have a special word for the inside of a corner as opposed to the outside of a corner, but we are perfectly able to express these concepts using more than one word. In fact, we just did. If we could not think about something for which we do not have words, how would infants ever learn their first word, much less a language?

Many of the specific claims of linguistic determinism have been shown to be wrong. For example, the Hopi language may not have words and word endings for specific tenses, but the language has other expressions for time, including words for the days of the week, parts of the day, yesterday and tomorrow, lunar phases, seasons, etc. The Hopi people use various kinds of calendars and various devices for time-keeping based on the sundial. Clearly, they have a sophisticated concept of time despite the lack of a tense system in the language.

The Munduruku, an indigenous people of the Brazilian Amazon, have no words in their language for triangle, square, rectangle, or other geometric concepts, except circle. The only terms to indicate direction are words for upstream, downstream, sunrise, and sunset. Yet Munduruku children understand many principles of geometry as well as American children, whose language is rich in geometric and spatial words. Similarly, though languages differ in their color words, speakers can readily perceive colors that are not named in their language.
Grand Valley Dani is a language spoken in New Guinea with only two color words, black and white (dark and light). In experimental studies, however, speakers of the language showed recognition of the color red, and they did better with fire-engine red than off-red. This would not be possible if their color perceptions were fixed by their language. Our perception of color is determined by the structure of the human eye, not by the structure of language. A source of dazzling linguistic creativity is to be found at the local paint store, where literally thousands of colors are given names like soft pumpkin, Durango dust, and lavender lipstick.

Anthropologists have shown that Inuit has no more words for snow than English does: around a dozen, including sleet, blizzard, slush, and flurry. But even if it did, this would not show that language conditions the Inuits' experience of the world, but rather that experience with a particular world creates the need for certain words. In this respect the Inuit speaker is no different from the computer programmer, who has a technical vocabulary for Internet protocols, or the linguist, who has many specialized words regarding language. In this book we will introduce you to many new words and linguistic concepts, and surely you will learn them! This would be impossible if your thoughts about language were determined by the linguistic vocabulary you now have.

These studies show that our perceptions and thoughts are not determined by the words or word endings of our language. But what about the linguistic structures we are accustomed to using? Could these be a strong determinant? In a recent study, psychologist Susan Goldin-Meadow and colleagues asked whether the word order of a particular language influences the way its speakers describe an event nonverbally, either with gestures or with pictures. Languages differ in how they encode events, such as a person twisting a knob. Speakers of languages like English, Chinese, and Spanish typically use the word order actor—action—object (person—twist—knob), whereas speakers of languages like Turkish and Japanese use the order actor—object—action (person—knob—twist). Word order is one of the earliest aspects of language structure that children acquire, and it is a fundamental aspect of our linguistic knowledge. Therefore, if language structure strongly influences how we interpret events, then these ordering patterns might show up in the way we describe events even when we are not talking.

Goldin-Meadow and colleagues asked adult speakers of English, Turkish, and Chinese (Mandarin) to describe vignettes shown on a computer screen using only their hands, and also using a set of pictures. Their results showed that all the speakers—irrespective of their language—used the same order in the nonverbal tasks. The predominant gesture order was actor—action—object, and the same results were found in the picture-ordering task. Goldin-Meadow and colleagues suggest that there is a universal, natural order in which people cognitively represent events, and that this is not affected by the language they happen to speak.

Similar results have been observed between English and Greek speakers. These languages differ in how their verbs encode motion. When describing movement, English speakers will commonly use verbs that focus on the manner of motion, such as slide, skip, and walk. Greek speakers, on the other hand, use verbs that focus on the direction of the motion, as in approach and ascend.
Measurements of eye movements of these speakers as they verbally describe an event show that they focus on the aspect of the event encoded by their language. However, when freely observing an event but not describing it verbally, they attend to the event in the same ways regardless of what language they speak. These results show that speakers' attention to events is not affected by their language except as they are preparing to speak.

In our understanding of the world we are certainly not "at the mercy" of whatever language we speak, as Sapir suggested. However, we may ask whether the language we speak influences our cognition in some way. In the domain of color categorization, for example, it has been shown that if a language lacks a word for red, say, then it's harder for speakers to reidentify red objects. In other words, having a label seems to make it easier to store or access information in memory. Similarly, experiments show that Russian speakers are better at discriminating light blue (goluboy) and dark blue (siniy) objects than English speakers, whose language does not make a lexical distinction between these categories. These results show that words can influence simple perceptual tasks in the domain of color discrimination.

Upon reflection, this may not be a surprising finding. Colors exist on a continuum, and we segment that continuum into "different" colors at arbitrary points. Because there is no physical motivation for these divisions, this may be the kind of situation where language could show an effect.

The question has also been raised regarding the possible influence of grammatical gender on how people think about objects. Many languages, such as Spanish and German, classify nouns as masculine or feminine; Spanish "key" is la llave (feminine) and "bridge" is el puente (masculine). Some psychologists have suggested that speakers of gender-marking languages think about objects as having gender, much as people or animals do. In one study, speakers of German and Spanish were asked to describe various objects using English adjectives (the speakers were proficient in English). In general, they used more masculine adjectives—independently rated as such—to describe objects that are grammatically masculine in their language. For example, Spanish speakers described bridges (el puente) as big, dangerous, long, strong, and sturdy. In German the word for bridge is feminine (die Brücke), and German speakers used more feminine adjectives such as beautiful, elegant, fragile, peaceful, pretty, and slender.

Interestingly, it has been noted that English speakers, too, make consistent judgments about the gender of objects (ships are "she") even though English has no grammatical gender on common nouns. It may be, then, that regardless of the language spoken, humans have a tendency to anthropomorphize objects, and this tendency is somehow enhanced if the language itself has grammatical gender. Though it is too early to come to any firm conclusions, the results of these and similar studies seem to support a weak version of linguistic relativism.

Politicians and marketers certainly believe that language can influence our thoughts and values.
One political party may refer to an inheritance tax as the "estate tax," while an opposing party refers to it as the "death tax." One politician may refer to "tax breaks for the wealthy" while another refers to "tax relief." In the abortion debate, some refer to the "right to choose" and others to the "right to life." The terminology reflects different ideologies, but the choice of expression is primarily intended to sway public opinion.

Politically correct (PC) language also reflects the idea that language can influence thought. Many people believe that by changing the way we talk, we can change the way we think; that if we eliminate racist and sexist terms from our language, we will become a less racist and sexist society. As we will discuss in chapter 9, language itself is not sexist or racist, but people can be, and because of this particular words take on negative meanings. In his book The Language Instinct, Steven Pinker uses the expression euphemism treadmill to describe how the euphemistic terms that are created to replace negative words often take on the negative associations of the words they were coined to replace. For example, handicapped was once a euphemism for the offensive term crippled, and when handicapped became politically incorrect it was replaced by the euphemism disabled. And as we write, disabled is falling into disrepute and is often replaced by yet another euphemism, challenged. Nonetheless, in all such cases, changing language has not resulted in a new world view of the speakers.

Prescient as Orwell was with respect to how language could be used for social control, he was more circumspect with regard to the relation between language and thought. He was careful to qualify his notions with the phrase "at least so far as thought is dependent on words." Current research shows that language does not determine how we think about and perceive the world. Future research should show the extent to which language influences other aspects of cognition such as memory and categorization.

What We Know about Human Language

Much is unknown about the nature of human languages, their grammars and use. The science of linguistics is concerned with these questions. Investigations of linguists and the analyses of spoken languages date back at least to 1600 B.C.E. in Mesopotamia. We have learned a great deal since that time. A number of facts pertaining to all languages can be stated.

1. Wherever humans exist, language exists.
2. There are no "primitive" languages—all languages are equally complex and equally capable of expressing any idea. The vocabulary of any language can be expanded to include new words for new concepts.
3. All languages change through time.
4. The relationships between the sounds and meanings of spoken languages and between the gestures and meanings of sign languages are for the most part arbitrary.
5. All human languages use a finite set of discrete sounds or gestures that are combined to form meaningful elements or words, which themselves may be combined to form an infinite set of possible sentences.
6. All grammars contain rules of a similar kind for the formation of words and sentences.
7. Every spoken language includes discrete sound segments, like p, n, or a, that can all be defined by a finite set of sound properties or features. Every spoken language has both vowel sounds and consonant sounds.
8. Similar grammatical categories (for example, noun, verb) are found in all languages.
9. There are universal semantic properties like entailment (one sentence implying the truth of another) found in every language in the world.
10. Every language has a way of negating, forming questions, issuing commands, referring to past or future time, and so on.
11. All languages permit abstractions like goodness, spherical, and skillful.
12. All languages have slang, epithets, taboo words, and euphemisms for them, such as john for "toilet."
13. All languages have hypothetical, counterfactual, conditional, unreal, and fictional utterances; e.g., "If I won the lottery, I would buy a Ferrari," or "Harry Potter battled Voldemort with his wand by Hogwarts castle."
14. All languages exhibit freedom from stimulus; a person can choose to say anything at any time under any circumstances, or can choose to say nothing at all.
15. Speakers of all languages are capable of producing and comprehending an infinite set of sentences. Syntactic universals reveal that every language has a way of forming sentences such as:
Linguistics is an interesting subject.
I know that linguistics is an interesting subject.
You know that I know that linguistics is an interesting subject.
Cecelia knows that you know that I know that linguistics is an interesting subject.
Is it a fact that Cecelia knows that you know that I know that linguistics is an interesting subject?
16. The ability of human beings to acquire, know, and use language is a biologically based ability rooted in the structure of the human brain, and expressed in different modalities (spoken or signed).
17. Any normal child, born anywhere in the world, of any racial, geographical, social, or economic heritage, is capable of learning any language to which he or she is exposed. The differences among languages are not due to biological reasons.

It seems that the universalists from all ages were not spinning idle thoughts. We all possess human language.

Summary

We are all intimately familiar with at least one language, our own. Yet few of us ever stop to consider what we know when we know a language. No book contains, or could possibly contain, the English or Russian or Zulu language. The words of a language can be listed in a dictionary, but not all the sentences can be; and a language consists of these sentences as well as words. Speakers use a finite set of rules to produce and understand an infinite set of possible sentences. These rules are part of the grammar of a language, which develops when you acquire the language and includes the sound system (the phonology), the structure and properties of words (the morphology and lexicon), how words may be combined into phrases and sentences (the syntax), and the ways in which sounds and meanings are related (the semantics). The sounds and meanings of individual words are related in an arbitrary fashion. If you had never heard the word syntax you would not know what it meant by its sounds. The gestures used by signers are also arbitrarily related to their meanings. Language, then, is a system that relates sounds (or hand and body gestures) with meanings. When you know a language, you know this system.

This knowledge (linguistic competence) is different from behavior (linguistic performance). If you woke up one morning and decided to stop talking (as the Trappist monks did after they took a vow of silence), you would still have knowledge of your language. This ability or competence underlies linguistic behavior.
If you do not know the language, you cannot speak it; but if you know the language, you may choose not to speak.

There are different kinds of "grammars." The descriptive grammar of a language represents the unconscious linguistic knowledge or capacity of its speakers. Such a grammar is a model of the mental grammar every speaker of the language knows. It does not teach the rules of the language; it describes the rules that are already known. A grammar that attempts to legislate what your grammar should be is called a prescriptive grammar. It prescribes. It does not describe, except incidentally. Teaching grammars are written to help people learn a foreign language or a dialect of their own language.

The more that linguists investigate the thousands of languages of the world and describe the ways in which they differ from each other, the more they discover that these differences are limited. There are linguistic universals that pertain to each of the parts of grammars, the ways in which these parts are related, and the forms of rules. These principles compose Universal Grammar, which provides a blueprint for the grammars of all possible human languages. Universal Grammar constitutes the innate component of the human language faculty that makes normal language development possible.

Strong evidence for Universal Grammar is found in the way children acquire language. Children learn language by exposure. They need not be deliberately taught, though parents may enjoy "teaching" their children to speak or sign. Children will learn any human language to which they are exposed, and they learn it in definable stages, beginning at a very early age. By four or five years of age, children have acquired nearly the entire adult grammar. This suggests that children are born with a genetically endowed faculty to learn and use human language, which is part of the Universal Grammar.

The fact that deaf children learn sign language shows that the ability to hear or produce sounds is not a prerequisite for language learning. All the sign languages in the world, which differ as spoken languages do, are visual-gestural systems that are as fully developed and as structurally complex as spoken languages. The major sign language used in the United States is American Sign Language (ASL).

If language is defined merely as a system of communication, or the ability to produce speech sounds, then language is not unique to humans. There are, however, certain characteristics of human language not found in the communication systems of any other species. A basic property of human language is its creativity—a speaker's ability to combine the basic linguistic units to form an infinite set of "well-formed" grammatical sentences, most of which are novel, never before produced or heard.

For many years researchers were interested in the question of whether language is unique to the human species. There have been many attempts to teach nonhuman primates communication systems that are supposed to resemble human language in certain respects. Overall, results have been disappointing: Chimpanzees like Sarah and Lana learned to manipulate symbols for rewards, and others, like Washoe and Nim Chimpsky, learned a number of ASL signs. But a careful examination of their multisign utterances reveals that unlike in children, the language of the chimps shows little spontaneity, is highly imitative (echoic), and has little syntactic structure.
It has been suggested that the pygmy chimp Kanzi shows grammatical ability greater than the other chimps studied, but he still does not have the ability of even a three-year-old child.

At present we do not know if there was a single original language—the monogenetic hypothesis—or whether language arose independently in several places, or at several times, in human history. Myths of language origin abound; divine origin and various modes of human invention are the source of these myths. Language most likely evolved with the human species, possibly in stages, possibly in one giant leap.

The Sapir-Whorf hypothesis holds that the particular language we speak determines or influences our thoughts and perceptions of the world. Much of the early evidence in support of this hypothesis has not stood the test of time. More recent experimental studies suggest that the words and grammar of a language may affect aspects of cognition, such as memory and categorization.

References for Further Reading

Anderson, S. R. 2008. The logical structure of linguistic theory. Language (December): 795–814.
Bickerton, D. 1990. Language and species. Chicago: Chicago University Press.
Chomsky, N. 1986. Knowledge of language: Its nature, origin, and use. New York and London: Praeger.
______. 1975. Reflections on language. New York: Pantheon Books.
______. 1972. Language and mind. Enlarged ed. New York: Harcourt Brace Jovanovich.
Gentner, D., and S. Goldin-Meadow. 2003. Language in mind. Cambridge, MA: MIT Press.
Hall, R. A. 1950. Leave your language alone. Ithaca, NY: Linguistica.
Jackendoff, R. 1997. The architecture of the language faculty. Cambridge, MA: MIT Press.
______. 1994. Patterns in the mind: Language and human nature. New York: Basic Books.
Klima, E. S., and U. Bellugi. 1979. The signs of language. Cambridge, MA: Harvard University Press.
Lane, H. 1989. When the mind hears: A history of the deaf. New York: Vintage Books (Random House).
Milroy, J., and L. Milroy. 1998. Authority in language: Investigating standard English, 3rd edn. New York: Routledge.
Napoli, D. J. 2003. Language matters: A guide to everyday thinking about language. New York: Oxford University Press.
Pinker, S. 1999. Words and rules: The ingredients of language. New York: HarperCollins.
______. 1994. The language instinct. New York: William Morrow.
Premack, A. J., and D. Premack. 1972. Teaching language to an ape. Scientific American (October): 92–99.
Stam, J. 1976. Inquiries into the origin of language: The fate of a question. New York: Harper & Row.
Stokoe, W. 1960. Sign language structure: An outline of the visual communication system of the American deaf. Silver Spring, MD: Linstok Press.
Terrace, H. S. 1979. Nim: A chimpanzee who learned sign language. New York: Knopf.

Exercises

1. An English speaker's knowledge includes the sound sequences of the language. When new products are put on the market, the manufacturers have to think up new names for them that conform to the allowable sound patterns. Suppose you were hired by a manufacturer of soap products to name five new products. What names might you come up with? List them. We are interested in how the names are pronounced. Therefore, describe in any way you can how to say the words you list. Suppose, for example, you named one detergent Blick. You could describe the sounds in any of the following ways:
bl as in blood, i as in pit, ck as in stick
bli as in bliss, ck as in tick
b as in boy, lick as in lick

2. Consider the following sentences.
Put a star (*) after those that do not seem to conform to the rules of your grammar, that are ungrammatical for you. State, if you can, why you think the sentence is ungrammatical.
a. Robin forced the sheriff go.
b. Napoleon forced Josephine to go.
c. The devil made Faust go.
d. He passed by a large pile of money.
e. He came by a large sum of money.
f. He came a large sum of money by.
g. Did in a corner little Jack Horner sit?
h. Elizabeth is resembled by Charles.
i. Nancy is eager to please.
j. It is easy to frighten Emily.
k. It is eager to love a kitten.
l. That birds can fly amazes.
m. The fact that you are late to class is surprising.
n. Has the nurse slept the baby yet?
o. I was surprised for you to get married.
p. I wonder who and Mary went swimming.
q. Myself bit John.
r. What did Alice eat the toadstool with?
s. What did Alice eat the toadstool and?

3. It was pointed out in this chapter that a small set of words in languages may be onomatopoeic; that is, their sounds "imitate" what they refer to. Ding-dong, tick-tock, bang, zing, swish, and plop are such words in English. Construct a list of ten new onomatopoeic words. Test them on at least five friends to see if they are truly nonarbitrary as to sound and meaning.

4. Although sounds and meanings of most words in all languages are arbitrarily related, there are some communication systems in which the "signs" unambiguously reveal their "meaning."
a. Describe (or draw) five different signs that directly show what they mean. Example: a road sign indicating an S curve.
b. Describe any other communication system that, like language, consists of arbitrary symbols. Example: traffic signals, where red means stop and green means go.

5. Consider these two statements: I learned a new word today. I learned a new sentence today. Do you think the two statements are equally probable, and if not, why not?

6. What do the barking of dogs, the meowing of cats, and the singing of birds have in common with human language? What are some of the basic differences?

7. A wolf is able to express subtle gradations of emotion by different positions of the ears, the lips, and the tail. There are eleven postures of the tail that express such emotions as self-confidence, confident threat, lack of tension, uncertain threat, depression, defensiveness, active submission, and complete submission. This system seems to be complex. Suppose that there were a thousand different emotions that the wolf could express in this way. Would you then say a wolf had a language similar to a human's? If not, why not?

8. Suppose you taught a dog to heel, sit up, roll over, play dead, stay, jump, and bark on command, using the italicized words as cues. Would you be teaching it language? Why or why not?

9. State some rule of grammar that you have learned is the correct way to say something, but that you do not generally use in speaking. For example, you may have heard that It's me is incorrect and that the correct form is It's I. Nevertheless, you always use me in such sentences; your friends do also, and in fact It's I sounds odd to you. Write a short essay presenting arguments against someone who tells you that you are wrong. Discuss how this disagreement demonstrates the difference between descriptive and prescriptive grammars.
10. Noam Chomsky has been quoted as saying:
It's about as likely that an ape will prove to have a language ability as that there is an island somewhere with a species of flightless birds waiting for human beings to teach them to fly.
In the light of evidence presented in this chapter, comment on Chomsky's remark. Do you agree or disagree, or do you think the evidence is inconclusive?

11. Think of song titles that are "bad" grammar, but that, if corrected, would lack effect. For example, the 1929 "Fats" Waller classic "Ain't Misbehavin'" is clearly superior to the bland "I am not misbehaving." Try to come up with five or ten such titles.

12. Linguists who attempt to write a descriptive grammar of linguistic competence are faced with a difficult task. They must understand a deep and complex system based on a set of sparse and often inaccurate data. (Children learning language face the same difficulty.) Albert Einstein and Leopold Infeld captured the essence of the difficulty in their book The Evolution of Physics, written in 1938:
In our endeavor to understand reality we are somewhat like a man trying to understand the mechanism of a closed watch. He sees the face and the moving hands, even hears its ticking, but he has no way of opening the case. If he is ingenious he may form some picture of a mechanism which could be responsible for all the things he observes, but he may never be quite sure his picture is the only one which could explain his observations. He will never be able to compare his picture with the real mechanism and he cannot even imagine the possibility of the meaning of such a comparison.
Write a short essay that speculates on how a linguist might go about understanding the reality of a person's grammar (the closed watch) by observing what that person says and doesn't say (the face and moving hands). For example, a person might never say the sixth sheik's sixth sheep is sick as a dog, but the grammar should specify that it is a well-formed sentence, just as it should somehow indicate that Came the messenger on time is ill-formed.

13. View the motion picture My Fair Lady (drawn from the play Pygmalion by George Bernard Shaw). Write down every attempt to teach grammar (pronunciation, word choice, and syntax) to the character of Eliza Doolittle. This is an illustration of a "teaching grammar."

14. Many people are bilingual or multilingual, speaking two or more languages with very different structures.
a. What implications does bilingualism have for the debate about language and thought?
b. Many readers of this textbook have some knowledge of a second language. Think of a linguistic structure or word in one language that does not exist in the second language and discuss how this does or does not affect your thinking when you speak the two languages. (If you know only one language, ask this question of a bilingual person you know.)
c. Can you find an example of an untranslatable word or structure in one of the languages you speak?

15. The South American indigenous language Pirahã is said to lack numbers beyond two and distinct words for colors. Research this language—Google would be a good start—with regard to whether Pirahã supports or fails to support linguistic determinism and/or linguistic relativism.

16. English (especially British English) has many words for woods and woodlands.
Here are some: woodlot, carr, fen, firth, grove, heath, holt, lea, moor, shaw, weald, wold, coppice, scrub, spinney, copse, brush, bush, bosquet, bosky, stand, forest, timberland, thicket
a. How many of these words do you recognize?
b. Look up several of these words in the dictionary and discuss the differences in meaning. Many of these words are obsolete, so if your dictionary doesn't have them, try the Internet.
c. Do you think that English speakers have a richer concept of woodlands than speakers whose language has fewer words? Why or why not?

17. English words containing dge in their spelling (trudge, edgy) are said mostly to have an unfavorable or negative connotation. Research this notion by accumulating as many dge words as you can and classifying them as unfavorable (sludge) or neutral (bridge). What do you do about budget? Unfavorable or not? Are there other questionable words?

18. With regard to the "euphemism treadmill": Identify three other situations in which a euphemism evolved to be as offensive as the word it replaced, requiring yet another euphemism. Hint: Sex, race, and bodily functions are good places to start.

19. Research project: Read the Cratylus Dialogue—it's online. In it is a discussion (or "dialogue") of whether names are "conventional" (i.e., what we have called arbitrary) or "natural." Do you find Socrates' point of view sufficiently well argued to support the thesis in this chapter that the relationship between form and meaning is indeed arbitrary? Argue your case in either direction in a short (or long, if you wish) essay.

20. Research project: (Cf. exercise 15) It is claimed that Pirahã—an indigenous language of Brazil—violates some of the universal principles hypothesized by linguists. Which principles are in question? Is the evidence persuasive? Conclusive? Speculative? (Hint: Use the journal Language, Volume 85, Number 2, June 2009.)

7 Language Acquisition

[The acquisition of language] is doubtless the greatest intellectual feat any one of us is ever required to perform.
LEONARD BLOOMFIELD, Language, 1933

The capacity to learn language is deeply ingrained in us as a species, just as the capacity to walk, to grasp objects, to recognize faces. We don't find any serious differences in children growing up in congested urban slums, in isolated mountain villages, or in privileged suburban villas.
DAN SLOBIN, The Human Language Series program 2, 1994

Language is extremely complex. Yet very young children—before the age of five—already know most of the intricate system that is the grammar of a language. Before they can add 2 + 2, children are conjoining sentences, asking questions, using appropriate pronouns, negating sentences, forming relative clauses, and inflecting verbs and nouns, and in general have the creative capacity to produce and understand a limitless number of sentences.

It is obvious that children do not learn a language simply by memorizing the sentences of the language. Rather, they acquire a system of grammatical rules of the sort we have discussed in the preceding chapters. No one teaches children the rules of the grammar. Their parents are no more aware of the phonological, morphological, syntactic, and semantic rules than are the children. Even if you remember your early years, do you remember anyone telling you to form a sentence by adding a verb phrase to a noun phrase, or to add [s] or [z] to form plurals?
No one told you "This is a grammatical utterance and that is not." Yet somehow you were able, as all children are, to quickly and effortlessly extract the intricate system of rules from the language you heard around you and thereby "reinvent" the grammar of your parents. How the child accomplishes this phenomenal task is the subject of this chapter.

Mechanisms of Language Acquisition

There have been various proposals concerning the psychological mechanisms involved in acquiring a language. Early theories of language acquisition were heavily influenced by behaviorism, a school of psychology prevalent in the 1950s. As the name implies, behaviorism focused on people's behaviors, which are directly observable, rather than on the mental systems underlying these behaviors. Language was viewed as a kind of verbal behavior, and it was proposed that children learn language through imitation, reinforcement, analogy, and similar processes. B. F. Skinner, one of the founders of behaviorist psychology, proposed a model of language acquisition in his book Verbal Behavior (1957). Two years later, in a devastating reply to Skinner entitled Review of Verbal Behavior (1959), Noam Chomsky showed that language is a complex cognitive system that could not be acquired by behaviorist principles.

Do Children Learn through Imitation?

Child: My teacher holded the baby rabbits and we patted them.
Adult: Did you say your teacher held the baby rabbits?
Child: Yes.
Adult: What did you say she did?
Child: She holded the baby rabbits and we patted them.
Adult: Did you say she held them tightly?
Child: No, she holded them loosely.
ANONYMOUS ADULT AND CHILD

At first glance the question of how children acquire language doesn't seem difficult to answer. Don't children just listen to what is said around them and imitate the speech they hear? Imitation is involved to some extent. An American child may hear milk and a Mexican child leche, and each attempts to reproduce what is heard. But the early words and sentences that children produce show that they are not simply imitating adult speech. Many times the words are barely recognizable to an adult, and the meanings are also not always like the adult's, as we will discuss below. Children do not hear words like holded or tooths or sentences such as Cat stand up table or many of the other utterances they produce between the ages of two and three, such as the following:1

a my pencil
two foot
what the boy hit?
other one pants
Mommy get it my ladder
cowboy did fighting me

1. Many of the examples of child language in this chapter are taken from CHILDES (Child Language Data Exchange System), a computerized database of the spontaneous speech of children acquiring English and many other languages. MacWhinney, B., and C. Snow. 1985. The child language data exchange system. Journal of Child Language 12:271–96.

Even when children are trying to imitate what they hear, they are unable to produce sentences outside of the rules of their developing grammar. The following are a child's attempts to imitate what the adult has said:

adult: He's going out.
child: He go out.

adult: That's an old-time train.
child: Old-time train.

adult: Adam, say what I say. Where can I put them?
child: Where I can put them?

Imitation also fails to account for the fact that children who are unable to speak for neurological or physiological reasons are able to learn the language spoken to them and understand it.
When they overcome their speech impairment, they immediately use the language for speaking.

Do Children Learn through Correction and Reinforcement?

Child: Nobody don't like me.
Mother: No, say "Nobody likes me."
Child: Nobody don't like me.
(dialogue repeated eight times)
Mother: Now, listen carefully; say "Nobody likes me."
Child: Oh, nobody don't likes me.
ANONYMOUS MOTHER AND CHILD

Another proposal, in the behaviorist tradition, is that children learn to produce correct (grammatical) sentences because they are positively reinforced when they say something grammatical and negatively reinforced (corrected) when they say something ungrammatical. Roger Brown and his colleagues at Harvard University studied parent–child interactions. They report that correction seldom occurs, and when it does, it is usually for mispronunciations or incorrect reporting of facts and not for "bad grammar." They note, for example, that the ungrammatical sentence "Her curl my hair" was not corrected because the child's mother was in fact curling her hair. However, when the child uttered the grammatical sentence "Walt Disney comes on Tuesday," she was corrected because the television program was shown on Wednesday. Brown concludes that it is "truth value rather than syntactic well-formedness that chiefly governs explicit verbal reinforcement by parents—which renders mildly paradoxical the fact that the usual product of such a training schedule is an adult whose speech is highly grammatical but not notably truthful."

Adults will sometimes recast children's utterances into an adultlike form, as in the following examples:

Child: It fall.             Mother: It fell?
Child: Where is them?       Mother: They're at home.
Child: It doing dancing.    Mother: It's dancing, yes.

In these examples, the mother provides the correct model without actually correcting the child. Although recasts are potentially helpful to the child, they are not used in a consistent way. One study of forty mothers of children two to four years old showed that only about 25 percent of children's ungrammatical sentences were recast and that, overall, grammatical sentences were recast as often as ungrammatical ones. Parents tend to focus on the correctness of content more than on grammaticality. So parents allow many ungrammatical utterances to "slip by" and recast many grammatical utterances. A child who relied on recasts to learn grammar would be mightily confused. Even if adults did correct children's syntax more often than they do, it would still not explain how or what children learn from such adult responses, or how children discover and construct the correct rules. Children do not know what they are doing wrong and are unable to make corrections even when they are pointed out, as shown by the preceding example and the following one:

child: Want other one spoon, Daddy.
father: You mean, you want the other spoon.
child: Yes, I want other one spoon, please, Daddy.
father: Can you say "the other spoon"?
child: Other . . . one . . . spoon.
father: Say . . . "other."
child: Other.
father: Spoon.
child: Spoon.
father: Other . . . spoon.
child: Other . . . spoon. Now give me other one spoon?

Such conversations between parents and children do not occur often; this conversation was between a linguist studying child language and his child. Mothers and fathers are usually delighted that their young children are talking and consider every utterance a gem. The "mistakes" children make are cute and are repeated endlessly to anyone who will listen.
Do Children Learn Language through Analogy? It has also been suggested that children put words together to form phrases and sentences by analogy, by hearing a sentence and using it as a model to form other sentences. But this is also problematic, as Lila Gleitman, an expert on developmental psycholinguistics, points out: [S]uppose the child has heard the sentence “I painted a red barn.” So now, by analogy, the child can say “I painted a blue barn.” That’s exactly the 327 328 CHAPTER 7 Language Acquisition kind of theory that we want. You hear a sample and you extend it to all of the new cases by similarity. . . . In addition to “I painted a red barn” you might also hear the sentence “I painted a barn red.” So it looks as if you take those last two words and switch their order. . . . So now you want to extend this to the case of seeing, because you want to look at barns instead of paint them. So you have heard, “I saw a red barn.” Now you try (by analogy) a . . . new sentence—“I saw a barn red.” Something’s gone wrong. This is an analogy, but the analogy didn’t work. It’s not a sentence of English.2 This kind of problem arises constantly. Consider another example. The child hears the following pair of sentences: The boy was sleeping. Was the boy sleeping? Based on pairs of sentences like this, he formulates a rule for forming questions: “Move the auxiliary to the position preceding the subject.” He then acquires the more complex relative clause construction: The boy who is sleeping is dreaming about a new car. He now wants to form a question. What does he do? If he forms a question on analogy to the simple yes-no question, he will move the first auxiliary is as follows: *Is the boy who sleeping is dreaming about a new car? Studies of spontaneous speech, as well as experiments, show that children never make mistakes of this sort. As discussed in chapter 2, syntactic rules, such as the rule that moves the auxiliary, are sensitive to the structure of the sentence and not to the linear order of words. The available evidence shows that children know about the structure dependency of rules at a very early age. In recent years, a computer model of language representation and acquisition called connectionism has been proposed that relies in part on behaviorist learning principles such as analogy and reinforcement. In the connectionist model, no grammatical rules are stored anywhere. Linguistic knowledge, such as knowledge of the past tense, is represented by a set of neuron-like connections between different phonological forms (e.g., between play and played, dance and danced, drink and drank). Repeated exposure to particular verb pairs in the input reinforces the connection between them, mimicking rule-like behavior. Based on similarities between words, the model can produce a past-tense form that it was not previously exposed to. On analogy to dance-danced, it will convert prance to pranced; on analogy to drink-drank it will convert sink to sank. As a model of language acquisition, connectionism faces some serious challenges. The model assumes that the language of the child’s environment has very specific properties. However, investigation of the input that children actually receive shows that it is not consistent with those assumptions. Another problem 2Gleitman, L. R., and E. Wanner. 1982. Language acquisition: The state of the art. Cambridge, UK: Cambridge University Press. 
Mechanisms of Language Acquisition is that rules such as formation of past tense cannot be based on phonological form alone but must also be sensitive to information in the lexicon. For example, the past tense of a verb derived from a noun is always regular even if an irregular form exists. When a fly ball is caught in a baseball game, we say the batter flied out, not flew out. Similarly, when an irregular plural is part of a larger noun, it may be regularized. When we see several images of Walt Disney’s famous rodent, we describe them as Mickey Mouses, not Mickey Mice. Do Children Learn through Structured Input? Yet another suggestion is that children are able to learn language because adults speak to them in a special “simplified” language sometimes called motherese, or child-directed speech (CDS) (or more informally, baby talk). This hypothesis places a lot of emphasis on the role of the environment in facilitating language acquisition. In our culture adults do typically talk to young children in a special way. We tend to speak more slowly and more clearly, we may speak in a higher pitch and exaggerate our intonation, and sentences are generally grammatical. However, motherese is not syntactically simpler. It contains a range of sentence types, including syntactically complex sentences such as questions (Do you want your juice now?); embedded sentences (Mommy thinks you should sleep now); imperatives (Pat the dog gently!); and negatives with tag questions (We don’t want to hurt him, do we?). And adults do not simplify their language by dropping inflections from verbs and nouns or by omitting function words such as determiners and auxiliaries, though children do this all the time. It is probably a good thing that motherese is not syntactically restricted. If it were, children might not have sufficient information to extract the rules of their language. Although infants prefer to listen to motherese over normal adult speech, studies show that using motherese does not significantly affect the child’s language development. In many cultures, adults do not use a special style of language with children, and there are even communities in which adults hardly talk to babies at all. Nevertheless, children around the world acquire language in much the same way, irrespective of these varying circumstances. Adults seem to be the followers rather than the leaders in this enterprise. The child does not develop linguistically because he is exposed to ever more adultlike language. Rather, the adult adjusts his language to the child’s increasing linguistic sophistication. The exaggerated intonation and other properties of motherese may be useful for getting a child’s attention and for reassuring the child, but it is not a driving force behind language development. Analogy, imitation, and reinforcement cannot account for language development because they are based on the (implicit or explicit) assumption that what the child acquires is a set of sentences or forms rather than a set of grammatical rules. Theories that assume that acquisition depends on a specially structured input also place too much emphasis on the environment rather than on the grammar-making abilities of the child. 
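To see concretely what the analogy-based (connectionist-style) proposal discussed above amounts to, here is a minimal sketch in Python. It is not a real connectionist network (those operate over phonological features and weighted connections rather than spellings), and the stored verb pairs are invented for illustration. A learner of this kind forms a past tense by copying the pattern of the most similar form it has already stored. That correctly yields pranced and sank, but it has no way to know that the denominal baseball verb fly takes the regular past (flied out), because that fact lives in the lexicon, not in the phonological form.

# A toy "past tense by analogy" learner: a deliberately simplified stand-in
# for the idea that past-tense forms are produced from stored associations
# between phonological forms rather than from an explicit rule.
# All data are illustrative, not drawn from any actual model.

KNOWN_PAIRS = {            # pairs the learner is assumed to have encountered
    "play": "played",
    "dance": "danced",
    "drink": "drank",
    "sing": "sang",
    "fly": "flew",
}

def shared_suffix_len(a: str, b: str) -> int:
    """Length of the longest common final substring -- a crude similarity score."""
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def past_by_analogy(verb: str) -> str:
    """Form a past tense by copying the pattern of the most similar known verb."""
    model = max(KNOWN_PAIRS, key=lambda known: shared_suffix_len(verb, known))
    overlap = shared_suffix_len(verb, model)
    if overlap == 0:                      # nothing similar: default to regular -ed
        return verb + "ed"
    # Keep the part of the new verb that differs from the model and splice in
    # the corresponding ending of the model's past tense: drink/drank : sink -> sank
    return verb[:len(verb) - overlap] + KNOWN_PAIRS[model][len(model) - overlap:]

print(past_by_analogy("prance"))   # -> "pranced"  (by analogy to dance/danced)
print(past_by_analogy("sink"))     # -> "sank"     (by analogy to drink/drank)
print(past_by_analogy("fly"))      # -> "flew", even for the baseball verb,
                                   #    where adult speakers say "flied out";
                                   #    phonological analogy cannot see that
                                   #    this verb is derived from a noun.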
These proposals do not explain the creativity that children show in acquiring language, why they go through stages, or why they make some kinds of “errors” but not others, for example, “Give me other one spoon” but not “Is the boy who sleeping is dreaming about a new car?” 329 330 CHAPTER 7 Language Acquisition Children Construct Grammars We are designed to walk. . . . That we are taught to walk is impossible. And pretty much the same is true of language. Nobody is taught language. In fact you can’t prevent the child from learning it. NOAM CHOMSKY, The Human Language Series program 2, 1994 Language acquisition is a creative process. Children are not given explicit information about the rules, by either instruction or correction. They extract the rules of the grammar from the language they hear around them, and their linguistic environment does not need to be special in any way for them to do this. Observations of children acquiring different languages under different cultural and social circumstances reveal that the developmental stages are similar, possibly universal. Even deaf children of deaf signing parents go through stages in their signing development that parallel those of children acquiring spoken languages. These factors lead many linguists to believe that children are equipped with an innate template or blueprint for language—which we have referred to as Universal Grammar (UG)—and that this blueprint aids the child in the task of constructing a grammar for her language. This is referred to as the innateness hypothesis. The Innateness Hypothesis © ScienceCartoonsPlus.com Mechanisms of Language Acquisition The innateness hypothesis receives its strongest support from the observation that the grammar a person ends up with is vastly underdetermined by his linguistic experience. In other words, we end up knowing far more about language than is exemplified in the language we hear around us. This argument for the innateness of UG is called the poverty of the stimulus. Although children hear many utterances, the language they hear is incomplete, noisy, and unstructured. We said earlier that child-directed speech is largely well formed, but children are also exposed to adult–adult interactions. These utterances include slips of the tongue, false starts, ungrammatical and incomplete sentences, and no consistent information as to which utterances are well formed and which are not. But most important is the fact that children come to know aspects of the grammar about which they receive no information. In this sense, the data they are exposed to is impoverished. It is less than what is necessary to account for the richness and complexity of the grammar they attain. For example, we noted that the rules children construct are structure dependent. Children do not produce questions by moving the first auxiliary as in (1) below. Instead, they correctly invert the auxiliary of the main clause, as in (2). (We use ___ to mark the position from which a constituent moves.) 1. 2. *Is the boy who ___ sleeping is dreaming of a new car? Is the boy who is sleeping ___ dreaming of a new car? To come up with a rule that moves the auxiliary of the main clause rather than the first auxiliary, the child must know something about the structure of the sentence. Children are not told about structure dependency. They are not told about constituent structure. 
Indeed, adults who have not studied linguistics do not explicitly know about structure dependency, constituent structure, and other abstract properties of grammar and so could not instruct their children even if they were so inclined. This knowledge is tacit or implicit. The input children get is a sequence of sounds, not a set of phrase structure trees. No amount of imitation, reinforcement, analogy, or structured input will lead the child to formulate a phrase structure tree, much less a principle of structure dependency. Yet, children do create phrase structures, and the rules they acquire are sensitive to this structure. The child must also learn many aspects of grammar from her specific linguistic environment. English-speaking children learn that the subject comes first and that the verb precedes the object inside the VP, that is, that English is an SVO language. Japanese children acquire an SOV language. They learn that the object precedes the verb. English-speaking children must learn that yes-no questions are formed by moving the auxiliary to the beginning of the sentence, as follows: You will come home. → Will you ___ come home? Japanese children learn that to form a yes-no question, the morpheme -ka is suffixed to a verb stem. Tanaka ga sushi o tabete iru Tanaka ga sushi o tabete iruka “Tanaka is eating sushi.” “Is Tanaka eating sushi?” 331 332 CHAPTER 7 Language Acquisition In Japanese questions, sentence constituents are not rearranged. According to the innateness hypothesis, the child extracts from the linguistic environment those rules of grammar that are language specific, such as word order and movement rules. But he does not need to learn universal principles like structure dependency, or general principles of sentence formation such as the fact that heads of categories can take complements. All these principles are part of the innate blueprint for language that children use to construct the grammar of their language. The innateness hypothesis provides an answer to the logical problem of language acquisition posed by Chomsky: What accounts for the ease, rapidity, and uniformity of language acquisition in the face of impoverished data? The answer is that children acquire a complex grammar quickly and easily without any particular help beyond exposure to the language because they do not start from scratch. UG provides them with a significant head start. It helps them to extract the rules of their language and to avoid many grammatical errors. Because the child constructs his grammar according to an innate blueprint, all children proceed through similar developmental stages, as we will discuss in the next section. The innateness hypothesis also predicts that all languages will conform to the principles of UG. We are still far from understanding the full nature of the principles of UG. Research on more languages provides a way to test any principles that linguists propose. If we investigate a language in which a posited UG principle is absent, we will have to correct our theory and substitute other principles, as scientists must do in any field. But there is little doubt that human languages conform to abstract universal principles and that the human brain is specially equipped for acquisition of human language grammars. Stages in Language Acquisition . . . for I was no longer a speechless infant; but a speaking boy. This I remember; and have since observed how I learned to speak. It was not that my elders taught me words . . . in any set method; but I . . . did myself . . . 
practice the sounds in my memory. . . . And thus by constantly hearing words, as they occurred in various sentences . . . I thereby gave utterance to my will. ST. AUGUSTINE, Confessions, 398 c.e. Children do not wake up one fine morning with a fully formed grammar in their heads. Relative to the complexity of the adult grammar that they eventually attain, the process of language acquisition is fast, but it is not instantaneous. From first words to virtual adult competence takes three to five years, during which time children pass through linguistic stages. They begin by babbling, they then acquire their first words, and in just a few months they begin to put words together into sentences. Observations of children acquiring different languages reveal that the stages are similar, possibly universal. The earliest studies of child language acquisition come from diaries kept by parents. More recent studies include the use of tape recordings, videotapes, and controlled experiments. Linguists record the Mechanisms of Language Acquisition spontaneous utterances of children and purposefully elicit other utterances to study the child’s production and comprehension. Researchers have also invented ingenious techniques for investigating the linguistic abilities of infants, who are not yet speaking. Children’s early utterances may not look exactly like adult sentences, but child language is not just a degenerate form of adult language. The words and sentences that the child produces at each stage of development conform to the set of grammatical rules he has developed to that point. Although child grammars and adult grammars differ in certain respects, they also share many formal properties. Like adults, children have grammatical categories such as NP and VP, rules for building phrase structures and for moving constituents, as well as phonological, morphological, and semantic rules, and they adhere to universal principles such as structure dependency. From the perspective of the adult grammar, sentences such as Nobody don’t like me and Want other one spoon, Daddy contain grammatical errors, but such “errors” often reflect the child’s current stage of grammatical competence and therefore provide researchers with a window into their grammar. The Perception and Production of Speech Sounds An infant crying in the night: An infant crying for the light: And with no language but a cry. ALFRED LORD TENNYSON, In Memoriam A.H.H., 1849 The notion that a person is born with a mind like a blank slate is belied by a wealth of evidence that newborns are reactive to some subtle distinctions in their environment and not to others. That is, the mind appears to be attuned at birth to receive certain kinds of information. Infants will respond to visual depth and distance distinctions, to differences between rigid and flexible physical properties of objects, and to human faces rather than to other visual stimuli. Infants also show a very early response to different properties of language. Experiments demonstrate that infants will increase their sucking rate—measured by ingeniously designed pacifiers—when stimuli (visual or auditory) presented to them are varied, but will decrease the sucking rate when the same stimuli are presented repeatedly. Early in acquisition when tested with a preferential listening technique, they will also turn their heads toward and listen longer to sounds, stress patterns, and words that are familiar to them. 
These instinctive responses can be used to measure a baby’s ability to discriminate and recognize different linguistic stimuli. A newborn will respond to phonetic contrasts found in human languages even when these differences are not phonemic in the language spoken in the baby’s home. A baby hearing a human voice over a loudspeaker saying [pa] [pa] [pa] will slowly decrease her rate of sucking. If the sound changes to [ba] or even [pʰa], the sucking rate increases dramatically. Controlled experiments show that adults find it difficult to differentiate between the allophones of one phoneme, but for infants it comes naturally. Japanese infants can distinguish between [r] and [l] whereas their parents cannot; babies can hear the difference between 333 334 CHAPTER 7 Language Acquisition aspirated and unaspirated stops even if students in an introductory linguistics course cannot. Babies can discriminate between sounds that are phonemic in other languages and nonexistent in the language of their parents. For example, in Hindi, there is a phonemic contrast between a retroflex “t” [ʈ] (made with the tongue curled back) and the alveolar [t]. To English-speaking adults, these may sound the same; to their infants, they do not. Infants can perceive voicing contrasts such as [pa] versus [ba], contrasts in place of articulation such as [da] versus [ga], and contrasts in manner of articulation such as [ra] versus [la], or [ra] versus [wa], among many others. Babies will not react, however, to distinctions that never correspond to phonemic contrasts in any human language, such as sounds spoken more or less loudly or sounds that lie between two phonemes. Furthermore, a vowel that we perceive as [i], for example, is a different physical sound when produced by a male, female, or child, but babies ignore the nonlinguistic aspects of the speech signal just as adults do. Infants appear to be born with the ability to perceive just those sounds that are phonemic in some language. It is therefore possible for children to learn any human language they are exposed to. During the first year of life, the infant’s job is to uncover the sounds of the ambient language. From around six months, he begins to lose the ability to discriminate between sounds that are not phonemic in his own language. His linguistic environment molds the infant’s initial perceptions. Japanese infants can no longer hear the difference between [r] and [l], which do not contrast in Japanese, whereas babies in English-speaking homes retain this perception. They have begun to learn the sounds of the language of their parents. Before that, they appear to know the sounds of human language in general. Babbling “Hi & Lois” © King Features Syndicate The shaping by the linguistic environment that we see in perception also occurs in the speech the infant is producing. At around six months, the infant begins to babble. The sounds produced in this period include many sounds that do not occur in the language of the household. However, babbling is not linguistic chaos. The twelve most frequent consonants in the world’s languages make up Mechanisms of Language Acquisition 95 percent of the consonants infants use in their babbling. There are linguistic constraints even during this very early stage. The early babbles consist mainly of repeated consonant-vowel sequences, like mama, gaga, and dada. Later babbles are more varied. 
By the end of the first year the child’s babbles come to include only those sounds and sound combinations that occur in the target language. Babbles begin to sound like words, although they may not have any specific meaning attached to them. At this point adults can distinguish the babbles of an English-babbling infant from those of an infant babbling in Cantonese or Arabic. During the first year of life, the infant’s perceptions and productions are being fine-tuned to the surrounding language(s). Deaf infants produce babbling sounds that are different from those of hearing children. Babbling is related to auditory input and is linguistic in nature. Studies of vocal babbling of hearing children and manual babbling of deaf children support the view that babbling is a linguistic ability related to the kind of language input the child receives. These studies show that four- to seven-monthold hearing infants exposed to spoken language produce a restricted set of phonetic forms. At the same age, deaf children exposed to sign language produce a restricted set of signs. In each case the forms are drawn from the set of possible sounds or possible gestures found in spoken and signed languages. Babbling illustrates the readiness of the human mind to respond to linguistic input from a very early stage. During the babbling stage, the intonation contours produced by hearing infants begin to resemble the intonation contours of sentences spoken by adults. The different intonation contours are among the first linguistic contrasts that children perceive and produce. During this same period, the vocalizations produced by deaf babies are random and nonrepetitive. Similarly, the manual gestures produced by hearing babies differ greatly from those produced by deaf infants exposed to sign language. The hearing babies move their fingers and clench their fists randomly with little or no repetition of gestures. The deaf infants, however, use more than a dozen different hand motions repetitively, all of which are elements of American Sign Language or the sign languages used in deaf communities of other countries. The generally accepted view is that humans are born with a predisposition to discover the units that serve to express linguistic meanings, and that at a genetically specified stage in neural development, the infant will begin to produce these units—sounds or gestures—depending on the language input the baby receives. This suggests that babbling is the earliest stage in language acquisition, in opposition to an earlier view that babbling was prelinguistic and merely neuromuscular in origin. The “babbling as language acquisition” hypothesis is supported by recent neurological studies that link babbling to the language centers of the left hemisphere, also providing further evidence that the brain specializes for language functions at a very early age, as discussed in the introduction. First Words From this golden egg a man, Prajapati, was born. . . . A year having passed, he wanted to speak. He said “bhur” and the earth was created. He said “bhuvar” and the space of the air was created. He said “suvar” and the sky was created. That is why a child wants to speak 335 336 CHAPTER 7 Language Acquisition after a year. . . . When Prajapati spoke for the first time, he uttered one or two syllables. That is why a child utters one or two syllables when he speaks for the first time. HINDU MYTH Some time after the age of one, the child begins to repeatedly use the same string of sounds to mean the same thing. 
At this stage children realize that sounds are related to meanings. They have produced their first true words. The age of the child when this occurs varies and has nothing to do with the child's intelligence. (It is reported that Einstein did not start to speak until he was three or four years old.) The child's first utterances differ from adult language. The following words of one child, J. P., at the age of sixteen months, illustrate the point:

[ʔaʊ]           "not," "no," "don't"
[bʌʔ]/[mʌʔ]     "up"
[da]            "dog"
[iʔo]/[siʔo]    "Cheerios"
[sa]            "sock"
[aɪ]/[ʌɪ]       "light"
[baʊ]/[daʊ]     "down"
[sː]            "aerosol spray"
[sʲuː]          "shoe"
[haɪ]           "hi"
[sr]            "shirt," "sweater"
[sæː]/[əsæː]    "what's that?"/"hey, look!"
[ma]            "mommy"
[dæ]            "daddy"

Most children go through a stage in which their utterances consist of only one word. This is called the holophrastic or "whole phrase" stage because these one-word utterances seem to convey a more complex message. For example, when J. P. says "down" he may be making a request to be put down, or he may be commenting on a toy that has fallen down from the shelf. When he says "cheerios" he may simply be naming the box of cereal in front of him, or he may be asking for some Cheerios. This suggests that children have a more complex mental representation than their language allows them to express. Comprehension experiments confirm the hypothesis that children's productive abilities do not fully reflect their underlying grammatical competence.

It has been claimed that deaf babies develop their first signs earlier than hearing children speak their first words. This has led to the development of Baby Sign, a technique in which hearing parents learn and model for their babies various "signs," such as a sign for "milk," "hurt," and "mother." The idea is that the baby can communicate his needs manually even before he is able to articulate spoken words. Promoters of Baby Sign (and many parents) say that this leads to less frustration and less crying. The claim that signs appear earlier than words is controversial. Some linguists argue that what occurs earlier in both deaf and hearing babies are pre-linguistic gestures that lack the systematic meaning of true signs. Baby Sign may perhaps be exploiting this earlier manual dexterity, and not a precocious linguistic development. More research is needed.

Segmenting the Speech Stream

I scream, you scream, we all scream for ice cream.
TRANSCRIBED FROM VOCALS BY TOM STACKS, performing with Harry Reser's Six Jumping Jacks, January 14, 1928

The acquisition of first words is an amazing feat. How do infants discover where one word begins and another leaves off? Speech is a continuous stream broken only by breath pauses. Children are in the same fix that you might be in if you tuned in a foreign-language radio station. You wouldn't have the foggiest idea of what was being said or what the words were. Intonation breaks that do exist do not necessarily correspond to word, phrase, or sentence boundaries. The adult speaker with knowledge of the lexicon and grammar of a language imposes structure on the speech he hears, but a person without such knowledge cannot. How then do babies, who have not yet learned the lexicon or rules of grammar, extract the words from the speech they hear around them? The ability to segment the continuous speech stream into discrete units—words—is one of the remarkable feats of language acquisition. Studies show that infants are remarkably good at extracting information from continuous speech.
They seem to know what kind of cues to look for in the input that will help them to isolate words. One of the cues that English-speaking children attend to that helps them figure out word boundaries is stress. As noted in chapter 5, every content word in English has a stressed syllable. (Function words such as the, a, am, can, etc. are ordinarily unstressed.) If the content word is monosyllabic, then that syllable is stressed as in dóg or hám. Bisyllabic content words can be trochaic, which means that stress is on the first syllable, as in páper or dóctor, or iambic, which means stress is on the second syllable, as in giráffe or devíce. The vast majority of English words have trochaic stress. In controlled experiments adult speakers are quicker to recognize words with trochaic stress than words with iambic stress. This can be explained if English-speaking adults follow a strategy of taking a stressed syllable to mark the onset of a new word. But what about children? Could they avail themselves of the same strategy? Stress is very salient to infants, and they are quick to acquire the rhythmic structure of their language. Using the preferential listening technique mentioned earlier, researchers have shown that at just a few months old infants are able to discriminate native and non-native stress patterns. Before the end of the first year their babbling takes on the rhythmic pattern of the ambient language. At about nine months old, English-speaking children prefer to listen to bisyllabic words with initial rather than final stress. And most notably, studies show that infants acquiring English can indeed use stress cues to segment words in fluent speech. In a series of experiments, infants who were seven and a half months old listened to passages with repeated instances of a trochaic word such as púppy, and passages with iambic words such as guitár. They were then played lists of words, some of which had occurred in the previous passage and others that had not. Experimenters measured the length of time that they listened to the familiar versus unfamiliar words. The results showed that children listened significantly longer (indicated by turning their head in the direction of the loudspeaker) to words that they had heard in the passage, but only when the words had the trochaic pattern (púppy). For words with the iambic pattern (guitár), the children responded only to the stressed syllable (tár), though the monosyllabic word tar had not appeared in the passage. These results suggest that the infants—like adults—are taking the stressed syllable to mark the onset of a new word. Following such a strategy will sometimes lead to errors (for iambic words 337 338 CHAPTER 7 Language Acquisition and unstressed function words), but it provides the child with a way of getting started. This is sometimes referred to as prosodic bootstrapping. Infants can use the stress pattern of the language as a start to word learning. Infants are also sensitive to phonotactic constraints and to the distribution of allophones in the target language. For example, we noted in chapter 5 that in English aspiration typically occurs at the beginning of a stressed syllable— [pʰɪt] versus [spɪt]—and that certain combinations of sounds are more likely to occur at the end of a word rather than at the beginning, for example [rt]. Studies show that nine-month-olds can use this information to help segment speech into words in English. Languages differ in their stress patterns as well as in their allophonic variation and phonotactics. 
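The metrical segmentation strategy just described (treat every stressed syllable as the start of a new word) can be sketched as a toy procedure. The input below is an invented list of syllables hand-marked for stress; real infants, of course, work from continuous acoustics rather than symbols. The sketch recovers the trochaic word puppy correctly and mis-segments the iambic guitár in just the way the infant studies report, splitting off the stressed syllable tar.

# Toy metrical segmentation: posit a word boundary before every stressed
# syllable. The input is an invented, already-syllabified stand-in for
# fluent speech (capitalized descriptions below mark stress).

def segment_by_stress(syllables):
    """Group syllables into 'words', starting a new word at each stressed syllable."""
    words, current = [], []
    for syll, stressed in syllables:
        if stressed and current:        # a stressed syllable signals a new word
            words.append("".join(current))
            current = []
        current.append(syll)
    if current:
        words.append("".join(current))
    return words

# "the PUPpy saw the guiTAR" -- function words unstressed, "saw" stressed
speech = [("the", False), ("pup", True), ("py", False),
          ("saw", True), ("the", False), ("gui", False), ("tar", True)]

print(segment_by_stress(speech))
# -> ['the', 'puppy', 'sawthegui', 'tar']
# "puppy" is isolated correctly; the iambic "guitar" is mis-segmented, with
# the stressed syllable "tar" treated as a word onset and the unstressed
# syllables glued onto the preceding material -- the error pattern described above.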
Wouldn’t the infant then need some way to first figure out what stress pattern he is dealing with, or what the allophones and possible sound combinations are, before he could use this information to extract the words of his language from fluent speech? This seems to be a classic chicken and egg problem—he has to know the language to learn the language. A way out of this conundrum is provided by the finding that infants may also rely on statistical properties of the input to segment words, such as the frequency with which particular sequences of sounds occur. In one study, eight-month-old infants listened to two minutes of speech formed from four nonsense words, pabiku, tutibu, golabu, babupu. The words were produced by a speech synthesizer and strung together in three different orders, analogous to three different sentences, without any pauses or other phonetic cues to the word boundaries. Here is an example of what the children heard: golabupabikututibubabupugolabubabupututibu. . . . . After listening to the strings the infants were tested to see if they could distinguish the “words” of the language, for example pabiku (which, recall, they had never heard in isolation before), from sequences of syllables that spanned word boundaries, such as bubabu (also in the input). Despite the very brief exposure and the lack of boundary cues, the infants were able to distinguish the words from the nonwords. The authors of the study conclude that the children do this by tracking the frequency with which the different sequences of syllables occur: the sequences inside the words (e.g., pa-bi-ku) remain the same whatever order the words are presented in, but the sequences of syllables that cross word boundaries will change in the different presentations and hence these sequences will occur much less frequently. Though it is still unclear how much such statistical procedures can accomplish with real language input, which is vastly larger and more varied, this experiment and others like it suggest that babies are sensitive to statistical information as well as to linguistic structure to extract words from the input. It is possible that they first rely on statistical properties to isolate some words, and then, based on these words, they are able to detect the rhythmic, allophonic, and phonotactic properties of the language, and with this further knowledge they can do further segmentation. Studies that measure infants’ reliance on statistics versus stress for segmenting words support this two stage model: younger infants (seven-and-a-half months old) respond to frequency Mechanisms of Language Acquisition while older infants (nine months old) attend to stress, allophonic, and phonotactic information. The Development of Grammar Children are biologically equipped to acquire all aspects of grammar. In this section we will look at development in each of the components of language, and we will illustrate the role that Universal Grammar and other factors play in this development. The Acquisition of Phonology “Baby Blues” © Baby Blues Partnership. Reprinted with permission of King Features Syndicate. In terms of his phonology, J. P. is like most children at the one-word stage. The first words are generally monosyllabic with a CV (consonant-vowel) form. The vowel part may be a diphthong, depending on the language being acquired. The phonemic inventory is much smaller than is found in the adult language. 
It appears that children first acquire the small set of sounds common to all languages regardless of the ambient language(s), and in later stages acquire the less common sounds of their own language. For example, most languages have the sounds [p] and [s], but [θ] is a rare sound. J. P.’s sound system followed this pattern. His phonological inventory at an early stage included the consonants [b,m,d,k], which are frequently occurring sounds in the world’s languages. In general, the order of acquisition of classes of sounds begins with vowels and then goes by manner of articulation for consonants: nasals are acquired first, then glides, stops, liquids, fricatives, and affricates. Natural classes characterized by place of articulation features also appear in children’s utterances according to a more or less ordered series: labials, velars, alveolars, and palatals. It is not surprising that mama is an early word for many children. The distribution and frequency of sounds in a language can also influence the acquisition of certain segments. Sounds that are expected to be acquired late may appear earlier in children’s language when they are frequently occurring. For example, the fricative [v] is a very late acquisition in English but it is an early phoneme in Estonian, Bulgarian, and Swedish, languages that have several [v]-initial words that are common in the vocabularies of young children. If the first year is devoted to figuring out the sounds of the target language, the second year involves learning how these sounds are used in the phonology of 339 340 CHAPTER 7 Language Acquisition the language, especially which contrasts are phonemic. When children first begin to contrast one pair of a set (e.g., when they learn that /p/ and /b/ are distinct phonemes due to a voicing difference), they also begin to distinguish between other similar pairs (e.g., /t/ and /d/, /s/ and /z/, and all the other voiceless–voiced phonemic pairs). As we would expect, the generalizations refer to natural classes of speech sounds. Controlled experiments show that children at this stage can perceive or comprehend many more phonological contrasts than they can produce. The same child who says [wӕbɪt] instead of “rabbit,” and who does not seem to distinguish [w] and [r], will not make mistakes on a picture identification task in which she must point to either a ring or a wing. In addition, children sometimes produce two different sounds in a way that makes them indiscernible to adult observers. Acoustic analyses of children’s utterances show that although a child’s pronunciation of wing and ring may seem the same to the adult ear, they are physically different sounds. As a further example, a spectrographic analysis of ephant, “elephant,” produced by a three-year-old child, clearly showed an [l] in the representation of the word, even though the adult experimenter could not hear it. Many anecdotal reports also show the disparity between the child’s production and perception at this stage. An example is the exchange between the linguist Neil Smith and his two-year-old son Amahl. At this age Amahl’s pronunciation of “mouth” is [maʊs]. NS: A: NS: A: NS: A: NS: A: What does [maʊs] mean? Like a cat. Yes, what else? Nothing else. It’s part of your head. (fascinated) (touching A’s mouth) What’s this? [ma ʊs] According to Smith, it took Amahl a few seconds to realize his word for “mouse” and his word for “mouth” were the same. It is not that Amahl and other children do not hear the correct adult pronunciation. 
They do, but they are unable in these early years to produce it themselves. Another linguist’s child (yes, linguists love to experiment on their own children) pronounced the word light as yight [jaɪt] but would become very angry if someone said to him, “Oh, you want me to turn on the yight.” “No no,” he would reply, “not yight—yight!” Therefore, even at this stage, it is not possible to determine the extent of the grammar of the child—in this case, the phonology—simply by observing speech production. It is sometimes necessary to use various experimental and instrumental techniques to tap the child’s competence. A child’s first words show many substitutions of one feature for another or one phoneme for another. In the preceding examples, mouth [maʊθ] is pronounced mouse [maʊs], with the alveolar fricative [s] replacing the less common interdental fricative [θ]; light [laɪt] is pronounced yight [jaɪt], with the glide [j] replacing the liquid [l]; and rabbit is pronounced wabbit, with the glide [w] replacing the liquid [r]. Glides are acquired earlier than liquids, and hence substitute for them. Mechanisms of Language Acquisition These substitutions are simplifications of the adult pronunciation. They make articulation easier until the child achieves greater articulatory control. Children’s early pronunciations are not haphazard, however. The phonological substitutions are rule governed. The following is an abridged lexicon for another child, Michael, between the ages of eighteen and twenty-one months: [pun] [peɪn] [tɪs] [taʊ] [tin] [polər] “spoon” “plane” “kiss” “cow” “clean” “stroller” [maɪtl] [daɪtər] [pati] [mani] [bәrt] [bərt] “Michael” “diaper” “Papi” “Mommy” “Bert” “(Big) Bird” Michael systematically substituted the alveolar stop [t] for the velar stop [k] as in his words for “cow,” “clean,” “kiss,” and his own name. He also replaced labial [p] with [t] when it occurred in the middle of a word, as in his words for “Papi” and “diaper.” He reduced consonant clusters in “spoon,” “plane,” and “stroller,” and he devoiced final stops as in “Big Bird.” In devoicing the final [d] in “bird,” he created an ambiguous form [bәrt] referring both to Bert and Big Bird. No wonder only parents understand their children’s first words! Michael’s substitutions are typical of the phonological rules that operate in the very early stages of acquisition. Other common rules are reduplication— “bottle” becomes [baba], “water” becomes [wawa]; and the dropping of a final consonants—“bed” becomes [be], “cake” becomes [ke]. These two rules show that the child prefers a simple CV syllable. Of the many phonological rules that children create, no child will necessarily use all rules. Early phonological rules generally reflect natural phonological processes that also occur in adult languages. For example, various adult languages have a rule of syllable-final consonant devoicing (German does—/bʊnd/ is pronounced [bʊnt]—English doesn’t). Children do not create bizarre or whimsical rules. Their rules conform to the possibilities made available by Universal Grammar. The Acquisition of Word Meaning Suddenly I felt a misty consciousness as of something forgotten—a thrill of returning thought; and somehow the mystery of language was revealed to me. . . . Everything had a name, and each name gave birth to a new thought. 
HELEN KELLER, The Story of My Life, 1903 In addition to what it tells us about phonological regularities, the child’s early vocabulary also provides insight into how children use words and construct word meaning. For J. P. the word up was originally used only to mean “Get me up!” when he was either on the floor or in his high chair, but later he used it to mean “Get up!” to his mother as well. J. P. used his word for sock not only for socks but also for other undergarments that are put on over the feet, such 341 342 CHAPTER 7 Language Acquisition as undershorts. This illustrates how a child may extend the meaning of a word from a particular referent to encompass a larger class. When J. P. began to use words, the object had to be physically present, but that requirement did not last very long. He first used “dog” only when pointing to a real dog, but later he used the word for pictures of dogs in various books. A new word that entered J. P.’s vocabulary at seventeen months was “uhoh,” which he would say after he had an accident like spilling juice, or when he deliberately poured his yogurt over the side of his high chair. His use of this word shows his developing use of language for social purposes. At this time he added two new words meaning “no,” [doː] and [no], which he used when anyone attempted to take something from him that he wanted, or tried to make him do something he did not want to do. He used them either with the imperative meaning of “Don’t do that!” or with the assertive meaning of “I don’t want to do that.” Even at this early stage, J. P. was using words to convey a variety of ideas and feelings, as well as his social awareness. But how do children learn the meanings of words? Most people do not see this aspect of acquisition as posing a great problem. The intuitive view is that children look at an object, the mother says a word, and the child connects the sounds with the object. However, this is not as easy as it seems: A child who observes a cat sitting on a mat also observes . . . a mat supporting a cat, a mat under a cat, a floor supporting a mat and a cat, and so on. If the adult now says “The cat is on the mat” even while pointing to the cat on the mat, how is the child to choose among these interpretations of the situation? Even if the mother simply says “cat,” and the child accidentally associates the word with the animal on the mat, the child may interpret cat as “Cat,” the name of a particular animal, or of an entire species. In other words, to learn a word for a class of objects such as “cat” or “dog,” children have to figure out exactly what the word refers to. Upon hearing the word dog in the presence of a dog, how does the child know that “dog” can refer to any four-legged, hairy, barking creature? Should it include poodles, tiny Yorkshire terriers, bulldogs, and Great Danes, all of which look rather different from one another? What about cows, lambs, and other four-legged mammals? Why are they not “dogs”? The important and very difficult question is: What relevant features define the class of objects we call dog, and how does a child acquire knowledge of them? Even if a child succeeds in associating a word with an object, nobody provides explicit information about how to extend the use of that word to all the other objects to which that word refers. It is not surprising, therefore, that children often overextend a word’s meaning, as J. P. did with the word sock. 
A child may learn a word such as papa or daddy, which she first uses only for her own father, and then extend its meaning to apply to all men, just as she may use the word dog to mean any four-legged creature. After the child has acquired her first seventy-five to one hundred words, the overextended meanings start to narrow until they correspond to those of the other speakers of the language. How this occurs is still not entirely understood. On the other hand, early language learning may involve underextension, in which a lexical item is used in an overly restrictive way. It is common for children Mechanisms of Language Acquisition to first apply a word like bird only to the family’s pet canary without making a connection to birds in the tree outside, as if the word were a proper noun. And just as overextended meanings narrow in on the adult language, underextended meanings broaden their scope until they match the target language. The mystery surrounding the acquisition of word meanings has intrigued philosophers and psychologists as well as linguists. We know that all children view the world in a similar fashion and apply the same general principles to help them determine a word’s meaning. For example, overextensions are usually based on physical attributes such as size, shape, and texture. Ball may refer to all round things, bunny to all furry things, and so on. However, children will not make overextensions based on color. In experiments, children will group objects by shape and give them a name, but they will not assign a name to a group of red objects. If an experimenter points to an object and uses a nonsense word like blick, saying that’s a blick, the child will interpret the word to refer to the whole object, not one of its parts or attributes. Given the poverty of stimulus for word learning, principles like the “form over color principle” and the “whole object principle” help the child organize his experience in ways that facilitate word learning. Without such principles, it is doubtful that children could learn words as quickly as they do. Children learn approximately fourteen words a day for the first six years of their lives. That averages to about 5,000 words per year. How many students know 10,000 words of a foreign language after two years of study? There is also experimental evidence that children can learn the meaning of one class of words—verbs—based on the syntactic environment in which they occur. If you were to hear a sentence such as John blipped Mary the gloon, you would not know exactly what John did, but you would likely understand that the sentence is describing a transfer of something from John to Mary. Similarly, if you heard John gonked that Mary. . . . , you would conclude that the verb gonk was a verb of communication like say or a mental verb like think. The complement types that a verb selects can provide clues to its meaning and thereby help the child. This learning of word meaning based on syntax is referred to as syntactic bootstrapping. The Acquisition of Morphology “Baby Blues” © Baby Blues Partnership. Reprinted with permission of King Features Syndicate. 343 344 CHAPTER 7 Language Acquisition The child’s acquisition of morphology provides the clearest evidence of rule learning. Children’s errors in morphology reveal that the child acquires the regular rules of the grammar and then overgeneralizes them. This overgeneralization occurs when children treat irregular verbs and nouns as if they were regular. 
We have probably all heard children say bringed, goed, drawed, and runned, or foots, mouses, and sheeps. These mistakes tell us much about how children learn language because such forms could not arise through imitation; children use them in families in which the parents never speak "bad English." In fact, children generally go through three phases in the acquisition of an irregular form:

Phase 1: broke, brought
Phase 2: breaked, bringed
Phase 3: broke, brought

In phase 1 the child uses the correct term such as brought or broke. At this point the child's grammar does not relate the form brought to bring, or broke to break. The words are treated as separate lexical entries. Phase 2 is crucial. This is when the child constructs a rule for forming the past tense and attaches the regular past-tense morpheme to all verbs—play, hug, help, as well as break and bring. Children look for general patterns. What they do not know at phase 2 is that there are exceptions to the rule. Now their language is more regular than the adult language. During phase 3 the child learns that there are exceptions to the rule, and then once again uses brought and broke, with the difference being that these irregular forms will be related to the root forms.

The child's morphological rules emerge quite early. In a classic study, preschool children and children in the first, second, and third grades were shown a drawing of a nonsense animal like the funny creature shown in the following picture. Each "animal" was given a nonsense name. The experimenter would then say to the child, pointing to the picture, "This is a wug." Then the experimenter would show the child a picture of two of the animals and say, "Now here is another one. There are two of them. There are two ___." The child's task was to give the plural form, "wugs" [wʌgz]. Another little make-believe animal was called a "bik," and when the child was shown two biks, he or she again was to say the plural form [bɪks]. The children applied regular plural formation to words they had never heard, showing that they had acquired the plural rule. Their ability to add [z] when the animal's name ended with a voiced sound, and [s] when there was a final voiceless consonant, showed that the children were also using rules based on an understanding of natural classes of phonological segments, and not simply imitating words they had previously heard.

More recently, studies of children acquiring languages with richer inflectional morphologies than English reveal that they learn agreement at a very early age. For example, Italian verbs must be inflected for number and person to agree with the subject. This is similar to the English agreement rule "add s to the verb" for third-person singular subjects—He giggles a lot but We giggle a lot—except that in Italian more verb forms must be acquired. Italian-speaking children between the ages of 1;10 (one year, ten months) and 2;4 correctly inflect the verb, as the following utterances of Italian children show:

Tu leggi il libro.      "You (second person singular) read the book."
Io vado fuori.          "I go (first person singular) outside."
Dorme miao dorme.       "Sleeps (third person singular) cat sleeps."
Leggiamo il libro.      "(We) read (first person plural) the book."

Children acquiring other richly inflected languages such as Spanish, German, Catalan, and Swahili quickly acquire agreement morphology.
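The wug study described above turns on a small piece of productive phonology: the regular plural is [z] after voiced segments, [s] after voiceless ones, and (although the examples in the study do not require it) [əz] after sibilants. A toy version of that rule, stated over makeshift ASCII stand-ins for phonemic symbols invented purely for illustration, looks like this:

# Toy version of the regular English plural rule over phoneme symbols.
# ASCII stand-ins: T = theta, S = esh, Z = ezh, tS = "ch", dZ = "j", @ = schwa.

SIBILANTS = set("s z S Z tS dZ".split())
VOICELESS = set("p t k f T s S tS h".split())

def plural(phonemes):
    """Append the correct plural allomorph to a list of phoneme symbols."""
    final = phonemes[-1]
    if final in SIBILANTS:
        return phonemes + ["@", "z"]    # sibilant-final nouns take [@z]
    if final in VOICELESS:
        return phonemes + ["s"]         # other voiceless-final nouns take [s]
    return phonemes + ["z"]             # voiced-final nouns take [z]

print(plural(["w", "V", "g"]))   # wug -> ends in [z]   (voiced final consonant)
print(plural(["b", "I", "k"]))   # bik -> ends in [s]   (voiceless final consonant)
print(plural(["n", "I", "s"]))   # made-up sibilant-final noun -> ends in [@ z]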
Children acquiring these richly inflected languages rarely make agreement errors, just as it is rare for an English-speaking child to say "I goes." In these languages there is also gender and number agreement between the head noun and the article and adjectives inside the noun phrase. Children as young as two years old respect these agreement requirements when producing NPs, as shown by the following Italian examples:

È mia gonna.	"(It) is my (feminine singular) skirt."
Questo mio bimbo.	"This my (masculine singular) baby."
Guarda la mela piccolina.	"Look at the little (feminine singular) apple."
Guarda il topo piccolino.	"Look at the little (masculine singular) mouse."

Experimental studies with twenty-five-month-old French-speaking children also show that they use gender information on determiners to help identify the subsequent noun, for example, le ballon (the-masc. balloon) versus la banane (the-fem. banana).

Children also show knowledge of the derivational rules of their language and use these rules to create novel words. In English, for example, we can derive verbs from nouns. From the noun microwave we now have a verb to microwave; from the noun e(lectronic) mail we derived the verb to e-mail. Children acquire this derivational rule early and use it often because there are lots of gaps in their verb vocabulary.

Child Utterance	Adult Translation
You have to scale it.	"You have to weigh it."
I broomed it up.	"I swept it up."
He's keying the door.	"He's opening the door (with a key)."

These novel forms provide further evidence that language acquisition is a creative process and that children's utterances reflect their internal grammars, which include both derivational and inflectional rules.

The Acquisition of Syntax

["Doonesbury" © 1984 G. B. Trudeau. Reprinted with permission of Universal Press Syndicate. All rights reserved.]

When children are still in the holophrastic stage, adults listening to the one-word utterances often feel that the child is trying to convey a more complex message. Experimental techniques show that at that stage (and even earlier), children have knowledge of some syntactic rules. In these experiments the infant sits on his mother's lap and hears a sentence over a speaker while seeing two video displays depicting different actions, one of which corresponds to the sentence. Infants tend to look longer at the video that matches the sentence they hear. This methodology allows researchers to tap the linguistic knowledge of children who are using only single words or who are not talking at all. Results show that children as young as seventeen months can understand the difference between sentences such as "Ernie is tickling Bert" and "Bert is tickling Ernie." Because these sentences have all the same words, the child cannot be relying on the words alone to understand the meanings. He must also understand the word-order rules and how they determine the grammatical relations of subject and object. This same preferential looking technique has shown that eighteen-month-olds can distinguish between subject and object wh questions, such as What is the apple hitting? and What hit the apple? These results and many others strongly suggest that children's syntactic competence is ahead of their productive abilities, which is also true of their phonological development. Around the time of their second birthday, children begin to put words together.
At first these utterances appear to be strings of two of the child's earlier holophrastic utterances, each word with its own single-pitch contour. Soon, they begin to form actual two-word sentences with clear syntactic and semantic relations. The intonation contour of the two words extends over the whole utterance rather than being separated by a pause between the two words. The following utterances illustrate the kinds of patterns that are found in children's utterances at this stage:

allgone sock
bye bye boat
more wet
Katherine sock
hi Mommy
allgone sticky
it ball
dirty sock

These early utterances can express a variety of semantic and syntactic relations. For example, noun + noun sentences such as Mommy sock can express a subject + object relation in the situation when the mother is putting the sock on the child, or a possessive relation when the child is pointing to Mommy's sock. Two nouns can also be used to show a subject-locative relation, as in sweater chair to mean "The sweater is on the chair," or to show attribution as in dirty sock. Children often have a variety of modifiers such as allgone, more, and bye bye.

Because children mature at different rates and the age at which children start to produce words and put words together varies, chronological age is not a good measure of a child's language development. Instead, researchers use the child's mean length of utterance (MLU) to measure progress. MLU is the average length of the utterances the child is producing at a particular point. MLU can be measured in terms of morphemes, so words like boys, danced, and crying each have a value of two (morphemes). MLU can also be measured in terms of words, which is a more revealing measure when comparing children acquiring languages with different morphological systems. Children with the same MLU are likely to have similar grammars even though they are different ages (a small illustrative calculation of MLU appears after the examples below).

In their earliest multiword utterances, children are inconsistent in their use of function words (grammatical morphemes) such as a and the, subject pronouns, auxiliary verbs such as can and is, and verbal inflection. Many (though not all) utterances consist only of open-class or content words, while some or all of the function words, auxiliaries, and verbal inflection may be missing. During this stage children often sound as if they are sending an e-message or reading an old-fashioned telegram (containing only the required words for basic understanding), which is why such utterances are sometimes called "telegraphic speech," and we call this the telegraphic stage of the child's language development.

Cat stand up table.
What that?
He play little tune.
Andrew want that.
Cathy build house.
No sit there.
Ride truck.
Show Mommy that.

J. P.'s early sentences were similar (the words in parentheses are missing from J. P.'s sentences):

Age 25 to 28 months:
[danʔ ɪʔ tsɪʔ]	"Don't eat (the) chip."
[bʷaʔ tat]	"Block (is on) top."
[mamis tu hæs]	"Mommy's two hands."
[mo bʌs go]	"Where bus go?"
[dædi go]	"(Where) Daddy go?"
[ʔaɪ gat tu dʲus]	"I got two (glasses of) juice."
[do baɪʔ mi]	"Don't bite (kiss) me."
[kʌder sʌni ber]	"Sonny color(ed a) bear."
[ʔaɪ gat pwe dɪs]	"I('m) play(ing with) this."
[mamis tak mɛns]	"Mommy talk(ed to the) men."

It can take many months before children use all the grammatical morphemes and auxiliary verbs consistently. However, the child does not deliberately leave out function words as would an adult sending a tweet.
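The MLU measure just described is simple enough to state as a short calculation. The sketch below is purely illustrative: the utterances and their hand-assigned morpheme counts are invented for the example, and a real analysis would segment each word into morphemes rather than look the counts up in a table.

```python
# Mean length of utterance (MLU): average number of morphemes per utterance.
# Morpheme counts per word are assigned by hand for this toy example,
# e.g. "crying" = cry + -ing = 2 morphemes.

sample = [
    ["Cathy", "build", "house"],        # 1 + 1 + 1 = 3 morphemes
    ["Andrew", "want", "that"],         # 3 morphemes
    ["He", "play", "little", "tune"],   # 4 morphemes
    ["boys", "crying"],                 # boy+s, cry+ing = 4 morphemes
]

morphemes_per_word = {
    "Cathy": 1, "build": 1, "house": 1, "Andrew": 1, "want": 1,
    "that": 1, "He": 1, "play": 1, "little": 1, "tune": 1,
    "boys": 2, "crying": 2,
}

def mlu(utterances, counts):
    """Average morphemes per utterance across the sample."""
    totals = [sum(counts[word] for word in utt) for utt in utterances]
    return sum(totals) / len(utterances)

print(mlu(sample, morphemes_per_word))   # (3 + 3 + 4 + 4) / 4 = 3.5
```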
These telegraphic sentences reflect the child's linguistic capacity at that particular stage of language development. There is a great deal of debate among linguists about how to characterize telegraphic speech: Do children omit function morphemes because of limitations in their ability to produce longer, more complex sentences, or do they omit these morphemes because their grammar permits such elements to be unexpressed? On the first account, telegraphic speech is due to performance limitations: Since there is an upper limit on the length of utterance a child can produce, and function morphemes are prosodically and semantically weak, they are omitted. On the second view, telegraphic speech is an early grammatical stage similar to languages like Italian or Spanish that allow subject pronouns to be dropped, as in Hablo inglés "(I) speak English," or Chinese, which lacks many types of determiners.

Although these sentences may lack certain morphemes, they nevertheless appear to have hierarchical constituent structures and syntactic rules similar to those in the adult grammar. For example, children almost never violate the word-order rules of their language. In languages with relatively fixed word order such as English and Japanese, children use the required order (SVO in English, SOV in Japanese) from the earliest stage. In languages with freer word order, like Turkish and Russian, grammatical relations such as subject and object are generally marked by inflectional morphology, such as case markers. Children acquiring these languages quickly learn the morphological case markers. For example, Russian- and German-speaking children mark subjects with nominative case and objects with accusative case with very few errors.

Telegraphic speech is also very good evidence against the hypothesis that children learn sentences by imitation. Adults—even when speaking motherese—do not drop function words when they talk to children. The correct use of word order, case marking, and agreement rules shows that even though children may often omit function morphemes, they are aware of constituent structure and syntactic rules. Their utterances are not simply words randomly strung together. From a very early stage onward, children have a grasp of the principles of phrase and sentence formation and of the kinds of structure dependencies mentioned in chapter 2, as revealed by constituent structure trees for utterances such as he play little tune, Andrew want that, and Cathy build house, each of which pairs a subject NP (a pronoun or a noun) with a VP containing the verb and its object NP.

In order to apply morphological and syntactic rules the child must know what syntactic categories the words in his language belong to. But how exactly does the child come to know that play and want are verbs and tune and house are nouns? One suggestion is that children first use the meaning of the word to figure out its category. This is called semantic bootstrapping. The child may have rules such as "if a word refers to a physical object, it's a noun" or "if a word refers to an action, it's a verb," and so on. However, the rules that link certain meanings to specific categories are not foolproof. For example, the word action denotes an action but it is not a verb, know is not an action but is a verb, and justice is a noun though it is not a physical object. But the rules that drive semantic bootstrapping might be helpful for the kind of words children learn early on, which tend to refer to objects and actions.
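The following is a minimal sketch of the kind of linking rules semantic bootstrapping assumes. It is hypothetical code for illustration only; the semantic feature labels are invented and are not a claim about the child's actual representations. It also shows, using the counterexamples just mentioned, why such rules can only be a heuristic.

```python
# A toy version of semantic bootstrapping: guess a word's syntactic category
# from a crude, hand-assigned semantic feature.

SEMANTIC_FEATURE = {
    "house": "physical object",
    "play": "action",
    "action": "action",        # counterexample: names an action but is a noun
    "know": "state",           # counterexample: a verb that is not an action
    "justice": "abstract",     # counterexample: a noun that is not an object
}

def guess_category(word):
    """Linking heuristic: physical object -> noun, action -> verb, else unknown."""
    feature = SEMANTIC_FEATURE.get(word)
    if feature == "physical object":
        return "noun"
    if feature == "action":
        return "verb"
    return "unknown"

for w in ["house", "play", "action", "know", "justice"]:
    print(w, "->", guess_category(w))
# house -> noun and play -> verb come out right, but action -> verb is wrong,
# and know and justice come out as unknown: the heuristic needs other evidence,
# such as the distributional word frames discussed next.
```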
Word frames may also help the child to determine when words belong to the same category. Studies of the language used to children show that there are certain frames that occur frequently enough to be reliable for categorization, for example, "you __ it" and "the __ one." Most typically, verbs such as see, do, did, win, fix, turned, and get occur in the first frame, while adjectives like red, big, wrong, and light occur in the second. If a child knows that see is a verb, then he could also deduce that all the other words appearing in the same frame are also verbs. Like semantic bootstrapping, the distributional evidence is not foolproof. For example, "it __ the" can frame a verb, it hit the ball, but also a preposition, I hit it across the street. And also like semantic bootstrapping, this evidence may well be reliable enough to give the child a head start into the complex task of learning the syntactic categories of words. The most frequent frames typically consist of function words, determiners such as the or a or pronouns like it or one. This suggests that children can learn from function morphemes in the input even though they omit these elements in their own speech. Indeed, comprehension studies show that children pay attention to function words. Two-year-olds respond more appropriately to grammatical commands such as Find the bird than to commands with an ungrammatically positioned function word as in Find was bird. Other studies suggest that function morphemes such as determiners help children in word segmentation and categorization.

Sometime between the ages of 2;6 and 3;6, a virtual language explosion occurs. At this point it is difficult to identify distinct stages because the child is undergoing so much development so rapidly. By the age of 3;0, most children are consistent in their use of function morphemes. Moreover, they have begun to produce and understand complex structures, including coordinated sentences and embedded sentences of various kinds, such as the following:

He was stuck and I got him out.
I want this doll because she's big.
I know what to do.
I like to play with something else.
I think she's sick.
Look at the train Ursula bought.
I gon' make it like a rocket to blast off with.
It's too early for us to eat.

Past the age of 3;6 children can generally form grammatical wh questions with the proper Aux inversion such as What can I do tomorrow? They can produce and understand relative clauses such as This is the lion that chased the giraffe, as well as other embedded clauses such as I know that Mommy is home. They can use reflexive pronouns correctly such as I saw myself in the camera. Somewhat beyond 4;0, depending on the individual, much of the adult grammar has been acquired.

The Acquisition of Pragmatics

["Baby Blues" © Baby Blues Partnership. Reprinted with permission of King Features Syndicate.]

In addition to acquiring the rules of grammar, children must learn the appropriate use of language in context, or pragmatics. The cartoon is funny because of the inappropriateness of the interaction, showing that Zoe hasn't completely acquired the pragmatic "maxims of conversation" discussed in chapter 3. Context is needed to determine the reference of pronouns. A sentence such as "Amazingly, he loves her anyway" is uninterpretable unless both speaker and hearer understand who the pronouns he and her refer to.
If the sentence were preceded by “I saw John and Mary kissing in the park,” then the referents of the pronouns would be clear. Children are not always sensitive to the needs of their interlocutors, and they may fail to establish the referents for pronouns. It is not unusual for a three- or four-year-old (or even older children) to use pronouns out of the blue, like the child who cries to her mother “He hit me” when mom has no idea who did the deed. The speaker and listener form part of the context of an utterance. The meaning of I and you depends on who is talking and who is listening, which changes from situation to situation. Younger children (around age two) have difficulty with the “shifting reference” of these pronouns. A typical error that children make at this age is to refer to themselves as “you,” for example, saying “You want to take a walk” when they mean “I want to take a walk.” Children also show a lack of pragmatic awareness in the way they sometimes use articles. Like pronouns, the interpretation of articles depends on context. The definite article the, as in “the boy,” can be used felicitously only when it is clear to speaker and hearer what boy is being discussed. In a discourse the indefinite article a/an must be used for the first mention of a new referent, but the definite article (or pronoun) may be used in subsequent mentions, as illustrated following: A boy walked into the class. He was in the wrong room. The teacher directed the boy to the right classroom. Children do not always respect the pragmatic rules for articles. In experimental studies, three-year-olds may use the definite article for introducing a new referent. In other words, the child tends to assume that his listener knows who he is talking about without having established this in a linguistically appropriate way. It may take a child several months or years to master those aspects of pragmatics that involve establishing the reference for function morphemes such as determiners and pronouns. Other aspects of pragmatics are acquired very early. Children in the holophrastic stage use their one-word utterances with different illocutionary force (see page 176). The utterance “up” spoken by J. P. at sixteen months might be a simple statement such as “The teddy is up on the shelf,” or a request: “Pick me up.” The Development of Auxiliaries: A Case Study We have seen in this chapter that language acquisition involves development in various components—the lexicon, phonology, morphology, and syntax, as well as pragmatics. These different modules interact in complex ways to chart an overall course of language development. 351 352 CHAPTER 7 Language Acquisition As an example, let us take the case of the English auxiliaries. As noted earlier, children in the telegraphic stage do not typically use auxiliaries such as can, will, or do, and they often omit be and have from their utterances. Several syntactic constructions in English depend on the presence of an auxiliary, the most central of which are questions and negative sentences. To negate a main verb requires an auxiliary verb (or do if there isn’t one) as in the following examples: I don’t like this book. I won’t read this book. An adult does not say “I not like this book.” Similarly, as discussed in chapter 2, English yes-no and wh questions are formed by moving an auxiliary to precede the subject, as in the following examples: Can I leave now? Do you love me? Where should John put the book? 
Although the two-year-old does not have productive control of auxiliaries, she is able to form negative sentences and questions. During the telegraphic stage, the child produces questions of the following sort:

Yes-No Questions
I ride train?
Mommy eggnog?
Have some?

These utterances have a rising intonation pattern typical of yes-no questions in English, but because there are no auxiliaries, there can be no auxiliary movement. In wh questions there is also no auxiliary, but there is generally a wh phrase that has moved to the beginning of the sentence. English-speaking children do not produce sentences such as "Cowboy doing what?" in which the wh phrase remains in its deep structure position.

The two-year-old has an insufficient lexicon. The lack of auxiliaries means that she cannot use a particular syntactic device associated with question formation in English—auxiliary movement. However, she has the pragmatic knowledge to make a request or ask for information, and she has the appropriate prosody, which depends on knowledge of phonology and the syntactic structure of the question. She also knows the grammatical rule that requires wh phrases to be in a fronted position. Many components of language must be in place to form an adultlike question.

In languages that do not require auxiliaries to form a question, children appear more adultlike. For example, in Dutch and Italian, the main verb moves. Because many main verbs are acquired before auxiliaries, Dutch and Italian children in the telegraphic stage produce questions that follow the adult rule:

Dutch
En wat doen ze daar?	(and what do they there)	"And what are they doing there?"
Wordt mama boos?	(becomes mama angry)	"Is mommy angry?"
Weet je n kerk?	(know you a church)	"Do you know a church?"

Italian
Cosa fanno questi bambini?	(what do these children)	"What are these babies doing?"
Chando vene a mama?	(when comes the mommy)	"When is Mommy coming?"
Vola cici?	(flies birdie)	"Is the birdie flying?"

The Dutch and Italian children show us there is nothing intrinsically difficult about syntactic movement rules. The delay that English-speaking children show in producing adultlike questions may simply be because auxiliaries are acquired later than main verbs and because English is idiosyncratic in forming questions by moving only auxiliaries.

The lack of auxiliaries during the telegraphic stage also affects the formation of negative sentences. During this stage the English-speaking child's negative sentences look like the following:

He no bite you.
Wayne not eating it.
Kathryn not go over there.
You no bring choo-choo train.
That no fish school.

Because of the absence of auxiliaries, these utterances do not look very adultlike. However, children at this stage understand the pragmatic force of negation. The child who says "No!" when asked to take a nap knows exactly what he means.

As children acquire the auxiliaries, they generally use them correctly; that is, the auxiliary usually appears before the subject in yes-no questions, but not always.

Yes-No Questions
Does the kitty stand up?
Can I have a piece of paper?
Will you help me?
We can go now?

Wh Questions
Which way they should go?
What can we ride in?
What will we eat?

The introduction of auxiliaries into the child's grammar also affects negative sentences. We now find correctly negated auxiliaries, though be is still missing in many cases.

Paul can't have one.
Donna won't let go.
I don't want cover on it.
I am not a doctor.
It’s not cold. Paul not tired. I not crying. The child always places the negation in the correct position in relation to the auxiliary or be. Main verbs follow negation and be precedes negation. Children never produce errors such as “Mommy dances not” or “I not am going.” In languages such as French and German, which are like Italian and Dutch in having a rule that moves inflected verbs, the verb shows up before the negative marker. French and German children respect this rule, as follows. (In the German examples nich is the baby form of nicht.) French Veux pas lolo. Marche pas. Ça tourne pas. want not water walks not that turns not “I don’t want water.” “She doesn’t walk.” “That doesn’t turn.” German Macht nich aua. Brauche nich lala. Schmeckt auch nich. makes not ouch need not pacifier tastes also not “It doesn’t hurt.” “I don’t need a pacifier.” “It doesn’t taste good either.” Though the stages of language development are universal, they are shaped by the grammar of the particular adult language the child is acquiring. During the telegraphic stage, German, French, Italian, and English-speaking children omit auxiliaries, but they form negative sentences and questions in different ways because the rules of question and negative formation are different in the respective adult languages. This tells us something essential about language acquisition: Children are sensitive to the rules of the adult language at the earliest stages of development. Just as their phonology is quickly fine-tuned to the ambient language(s), so is their syntactic system. The ability of children to form complex rules and construct grammars of the languages around them in a relatively short time is phenomenal. That all children go through similar stages regardless of language shows that they are equipped with special abilities to know what generalizations to look for and what to ignore, and how to discover the regularities of language. Setting Parameters Children acquire some aspects of syntax very early, even while they are still in the telegraphic stage. Most of these early developments correspond to what we referred to as the parameters of UG in chapter 2. One such parameter determines whether the head of a phrase comes before or after its complements, for Mechanisms of Language Acquisition example, whether the order of the VP is verb-object (VO) as in English or OV as in Japanese. Children produce the correct word order of their language in their earliest multiword utterances, and they understand word order even when they are in the one-word stage of production. According to the parameter model of UG, the child does not actually have to formulate a word-order rule. Rather, he must choose between two already specified values: head first or head last. He determines the correct value based on the language he hears around him. The English-speaking child can quickly figure out that the head comes before its complements; a Japanese-speaking child can equally well determine that his language is head final. Other parameters of UG involve the verb movement rules. In some languages the verb can move out of the VP to higher positions in the phrase structure tree. We saw this in the Dutch and Italian questions discussed in the last section. In other languages, such as English, verbs do not move (only auxiliaries do). The verb movement parameters provide the child with an option: my language does/does not allow verb movement. 
As we saw, Dutch- and Italian-speaking children quickly set the verb movement parameters to the “does allow” value, and so they form questions by moving the verb. English-speaking children never make the mistake of moving the verb, even when they don’t yet have auxiliaries. In both cases, the children have set the parameter at the correct value for their language. Even after English-speaking children acquire the auxiliaries and the Aux movement rule, they never overgeneralize this movement to include verbs. This supports the hypothesis that the parameter is set early in development and cannot be undone. In this case as well, the child does not have to formulate a rule of verb movement; he does not have to learn when the verb moves and where it moves to. This is all given by UG. He simply has to decide whether verb movement is possible in his language. The parameters of UG limit the grammatical options to a small well-defined set—is my language head first or head last, does my language have verb movement, and so on. Parameters greatly reduce the acquisition burden on the child and contribute to explaining the ease and rapidity of language acquisition. The Acquisition of Signed Languages Deaf children who are born to deaf signing parents are naturally exposed to sign language just as hearing children are naturally exposed to spoken language. Given the universal aspects of sign and spoken languages, it is not surprising that language development in these deaf children parallels the stages of spoken language acquisition. Deaf children babble, they then progress to single signs similar to the single words in the holophrastic stage, and finally they begin to combine signs. There is also a telegraphic stage in which the function signs may be omitted. Use of function signs becomes consistent at around the same age for deaf children as function words in spoken languages. The ages at which signing children go through each of these stages are comparable to the ages of children acquiring a spoken language. Both spoken and signed language acquisition adhere to a set of universal principles, overlaid by language-particular components. We saw earlier that Englishspeaking children easily acquire wh movement, which is governed by universal principles, but they show some delay in their use of Aux movement, which is 355 356 CHAPTER 7 Language Acquisition specific to English. In wh questions in ASL, the wh word can move or it can be left in its original position. Both of the following sentences are grammatical: ___________________________whq WHO BILL SEE YESTERDAY? ___________________________ whq BILL SAW WHO YESTERDAY? (Note: We follow the convention of writing the glosses for signs in uppercase letters.) There is no Aux movement in ASL, but a question is accompanied by a facial expression with furrowed brows and the head tilted back. This is represented by the “whq” above the ASL glosses. This non-manual marker is part of the grammar of ASL. It is like the rising intonation we use when we ask questions in English and other spoken languages. In the acquisition of wh questions in ASL, signing children easily learned the rules associated with the wh phrase. The children sometimes move the wh phrase and sometimes leave it in place, as adult signers do. But they often omit the nonmanual marker, an omission that is not grammatical in the adult language. Sometimes the parallels between the acquisition of signed and spoken languages are striking. 
For example, some of the grammatical morphemes in ASL are semantically transparent or iconic, that is, they look like what they mean; for example, the sign for the pronoun "I" involves the speaker pointing to his chest. The sign for the pronoun "you" is a point to the chest of the addressee. As noted earlier, at around age two, children acquiring spoken languages often reverse the pronouns "I" and "you." Interestingly, at this same age signing children make this same error. They will point to themselves when they mean "you" and point to the addressee when they mean "I." Children acquiring ASL make this error despite the transparency or iconicity of these particular signs, because signing children (like signing adults) treat these pronouns as linguistic symbols and not simply as pointing gestures. As part of the language, the shifting reference of these pronouns presents the same problem for signing children that it does for speaking children.

Hearing children of deaf parents acquire both sign language and spoken language when exposed to both. Studies show that Canadian bilingual children who acquire Langue des Signes Québécoise (LSQ), or Quebec Sign Language, develop the two languages exactly as bilingual children acquiring two spoken languages do. The LSQ–French bilinguals reached linguistic milestones in each of their languages in parallel with Canadian children acquiring French and English. They produced their first words, as well as their first word combinations, at the same time in each language. In reaching these milestones, neither group showed any delay compared to monolingual children.

Deaf children of hearing parents who are not exposed to sign language from birth suffer a great handicap in acquiring language. It may be many years before these children are able to use a spoken language or before they encounter a conventional sign language. Yet the instinct to acquire language is so strong in humans that these deaf children begin to develop their own manual gestures to express their thoughts and desires. A study of six such children revealed that they not only developed individual signs but joined pairs and formed sentences with definite syntactic order and systematic constraints. Although these "home signs," as they are called, are not fully developed languages like ASL or LSQ, they have a linguistic complexity and systematicity that could not have come from the input, because there was no input. Cases such as these demonstrate not only the strong drive that humans have to communicate through language, but also the innate basis of language structure.

Knowing More Than One Language

He that understands grammar in one language, understands it in another as far as the essential properties of Grammar are concerned. The fact that he can't speak, nor comprehend, another language is due to the diversity of words and their various forms, but these are the accidental properties of grammar.
ROGER BACON (1214–1294)

People can acquire a second language under many different circumstances. You may have learned a second language when you began middle school, or high school, or college. Moving to a new country often means acquiring a new language. Other people live in communities or homes in which more than one language is spoken and may acquire two (or more) languages simultaneously. The term second language acquisition, or L2 acquisition, generally refers to the acquisition of a second language by someone (adult or child) who has already acquired a first language.
This is also referred to as sequential bilingualism. Bilingual language acquisition refers to the (more or less) simultaneous acquisition of two languages beginning in infancy (or before the age of three years), also referred to as simultaneous bilingualism.

Childhood Bilingualism

[Cartoon © 2009 Tundra Comics]

Approximately half of the people in the world are native speakers of more than one language. This means that as children they had regular and continued exposure to those languages. In many parts of the world, especially in Africa and Asia, bilingualism (even multilingualism) is the norm. In contrast, many Western countries (though by no means all of them) view themselves as monolingual, even though they may be home to speakers of many languages. In the United States and many European countries, bilingualism is often viewed as a transitory phenomenon associated with immigration.

Bilingualism is an intriguing topic. People wonder how it's possible for a child to acquire two (or more) languages at the same time. There are many questions, such as: Doesn't the child confuse the two languages? Does bilingual language development take longer than monolingual development? Are bilingual children brighter, or does acquiring two languages negatively affect the child's cognitive development in some way? How much exposure to each language is necessary for a child to become bilingual?

Much of the early research into bilingualism focused on the fact that bilingual children sometimes mix the two languages in the same sentences, as the following examples from French-English bilingual children illustrate. In the first example, a French word appears in an otherwise English sentence. In the other two examples, all of the words are English but the syntax is French.

His nose is perdu.	"His nose is lost."
A house pink	"A pink house"
That's to me.	"That's mine."

In early studies of bilingualism, this kind of language mixing was viewed negatively. It was taken as an indication that the child was confused or having difficulty with the two languages. In fact, many parents, sometimes on the advice of educators or psychologists, would stop raising their children bilingually when faced with this issue. However, it now seems clear that some amount of language mixing is a normal part of the early bilingual acquisition process and not necessarily an indication of any language problem.

Theories of Bilingual Development

These mixed utterances raise an interesting question about the grammars of bilingual children. Does the bilingual child start out with only one grammar that is eventually differentiated, or does she construct a separate grammar for each language right from the start? The unitary system hypothesis says that the child initially constructs only one lexicon and one grammar. The presence of mixed utterances such as the ones just given is often taken as support for this hypothesis. In addition, at the early stages, bilingual children often have words for particular objects in only one language. For example, a Spanish-English bilingual child may know the Spanish word for milk, leche, but not the English word, or she may have the word water but not agua. This kind of complementarity has also been taken as support for the idea that the child has only one lexicon. However, careful examination of the vocabularies of bilingual children reveals that although they may not have exactly the same words in both languages, there is enough overlap to make the single lexicon idea implausible.
The reason children may not have the same set of words in both languages is that they use their two languages in different circumstances and acquire the vocabulary Knowing More Than One Language appropriate to each situation. For example, the bilingual English-Spanish child may hear only Spanish during mealtime, and so he will first learn the Spanish words for foods. Also, bilingual children have smaller vocabularies in each of their languages than the monolingual child has in her one language. This makes sense because a child can only learn so many words a day, and the bilingual child has two lexicons to build. For these reasons the bilingual child may have more lexical gaps than the monolingual child at a comparable stage of development, and those gaps may be different for each language. The separate systems hypothesis says that the bilingual child builds a distinct lexicon and grammar for each language. To test the separate systems hypothesis, it is necessary to look at how the child acquires those pieces of grammar that are different in his two languages. For example, if both languages have SVO word order, this would not be a good place to test this hypothesis. Several studies have shown that where the two languages diverge, children acquire the different rules of each language. Spanish-English and French-German bilingual children have been shown to use the word orders appropriate to each language, as well as the correct agreement morphemes for each language. Other studies have found that children set up two distinct sets of phonemes and phonological rules for their languages. The separate systems hypothesis also receives support from the study of the LSQ-French bilinguals discussed earlier. These children have semantically equivalent words in the two languages, just as bilinguals acquiring two spoken languages do. In addition, these children, like all bilingual children, were able to adjust their language choice to the language of their addressees, showing that they differentiated the two languages. Like most bilingual children, the LSQ-French bilinguals produced mixed utterances that had words from both languages. What is especially interesting is that these children showed simultaneous language mixing. They would produce an LSQ sign and a French word at the same time, something that is only possible if one language is spoken and the other signed. However, this finding has implications for bilingual language acquisition in general. It shows that the language mixing of bilingual children is not caused by confusion, but is rather the result of two grammars operating simultaneously. If bilingual children have two grammars and two lexicons, what explains the mixed utterances? Various explanations have been offered. One suggestion is that children mix because they have lexical gaps; if the French-English bilingual child does not know the English word lost, she will use the word she does know, perdu—the “any port in a storm” strategy. Another possibility is that the mixing in child language is similar to codeswitching used by many adult bilinguals (discussed in chapter 9). In specific social situations, bilingual adults may switch back and forth between their two languages in the same sentence, for example, “I put the forks en las mesas” (I put the forks on the tables). Codeswitching reflects the grammars of both languages working simultaneously; it is not “bad grammar” or “broken English.” Adult bilinguals codeswitch only when speaking to other bilingual speakers. 
It has been suggested that the mixed utterances of bilingual children are a form of codeswitching. In support of this proposal, various studies have shown that bilingual children as young as two make contextually appropriate language choices: In speaking to monolinguals the children use one language, and in speaking to bilinguals they mix the two languages. 359 360 CHAPTER 7 Language Acquisition Two Monolinguals in One Head Although we must study many bilingual children to reach any firm conclusions, the evidence accumulated so far seems to support the idea that children construct multiple grammars from the outset. Moreover, it seems that bilingual children develop their grammars along the same lines as monolingual children. They go through a babbling stage, a holophrastic stage, a telegraphic stage, and so on. During the telegraphic stage they show the same characteristics in each of their languages as the monolingual children. For example, monolingual Englishspeaking children omit verb endings in sentences such as “Eve play there” and “Andrew want that,” and German-speaking children use infinitives as in “S[ch]okolade holen” (chocolate get-infinitive). Spanish- and Italian-speaking monolinguals never omit verbal inflection or use infinitives in this way. Remarkably, two-year-old German-Italian bilinguals use infinitives when speaking German but not when they speak Italian. Young Spanish-English bilingual children drop the English verb endings but not the Spanish ones, and German-English bilinguals omit verbal inflection in English and use the infinitive in German. Results such as these have led some researchers to suggest that from a grammarmaking point of view, the bilingual child is like “two monolinguals in one head.” The Role of Input One issue that concerns researchers studying bilingualism, as well as parents of bilingual children, is the relationship between language input and proficiency. What role does input play in helping the child to separate the two languages? One input condition that is thought to promote bilingual development is une personne–une langue (one person, one language)—as in, Mom speaks only language A to the child and Dad speaks only language B. The idea is that keeping the two languages separate in the input will make it easier for the child to acquire each without influence from the other. Whether this method influences bilingual development in some important way has not been established. In practice this “ideal” input situation may be difficult to attain. It may also be unnecessary. We saw earlier that babies are attuned to various phonological properties of the input language such as prosody and phonotactics. Various studies suggest that this sensitivity provides a sufficient basis for the bilingual child to keep the two languages separate. Another question is, how much input does a child need in each language to become “native” in both? The answer is not straightforward. It seems intuitively clear that if a child hears twelve hours of English a day and only two hours of Spanish, he will probably develop English much more quickly and completely than Spanish. In fact, under these conditions he may never achieve the kind of grammatical competence in Spanish that we associate with the normal monolingual Spanish speaker. In reality, bilingual children are raised in a variety of circumstances. 
Some may have more or less equal exposure to the two languages; some may hear one language more than the other but still have sufficient input in the two languages to become "native" in both; some may ultimately have one language that is dominant to a lesser or greater degree. Researchers simply do not know how much language exposure is necessary in the two languages to produce a balanced bilingual. For practical purposes, the rule of thumb is that the child should receive roughly equal amounts of input in the two languages to achieve native proficiency in both.

Cognitive Effects of Bilingualism

Bilingual Hebrew-English-speaking child: "I speak Hebrew and English."
Monolingual English-speaking child: "What's English?"
SOURCE UNKNOWN

Another issue is the effect of bilingualism on intellectual or cognitive development. Does being bilingual make you more or less intelligent, more or less creative, and so on? Historically, research into this question has been fraught with methodological problems and has often been heavily influenced by the prevailing political and social climate. Many early studies (before the 1960s) showed that bilingual children did worse than monolingual children on IQ and other cognitive and educational tests. The results of more recent research indicate that bilingual children outperform monolinguals in certain kinds of problem solving. Also, bilingual children seem to have better metalinguistic awareness, which refers to a speaker's conscious awareness about language, as opposed to unconscious knowledge of language. This is illustrated in the epigraph to this section. Moreover, bilingual children have an earlier understanding of the arbitrary relationship between an object and its name. Finally, they have sufficient metalinguistic awareness to speak the contextually appropriate language, as noted earlier.

Whether children enjoy some cognitive or educational benefit from being bilingual seems to depend in part on extralinguistic factors such as the social and economic position of the child's group or community, the educational situation, and the relative "prestige" of the two languages. Studies that show the most positive effects (e.g., better school performance) generally involve children reared in societies where both languages are valued and whose parents were interested in and supportive of their bilingual development.

Second Language Acquisition

In contrast to the bilinguals just discussed, many people are introduced to a second language (L2) after they have achieved native competence in a first language (L1). If you have had the experience of trying to master a second language as an adult, no doubt you found it to be a challenge quite unlike your first language experience.

Is L2 Acquisition the Same as L1 Acquisition?

With some exceptions, adults do not simply pick up a second language. It usually requires conscious attention, if not intense study and memorization, to become proficient in a second language. Again, with the exception of some remarkable individuals, adult second-language learners (L2ers) do not often achieve native-like grammatical competence in the L2, especially with respect to pronunciation. They generally have an accent, and they may make syntactic or morphological errors that are unlike the errors of children acquiring their first language (L1ers). For example, L2ers often make word order errors, especially early in their development, as well as morphological errors in grammatical gender and case.
L2 errors may fossilize so that no amount of teaching or correction can undo them. Unlike L1 acquisition, which is uniformly successful across children and languages, adults vary considerably in their ability to acquire an L2 completely. Some people are very talented language learners. Others are hopeless. Most people fall somewhere in the middle. Success may depend on a range of factors, including age, talent, motivation, and whether you are in the country where the language is spoken or sitting in a classroom five mornings a week with no further contact with native speakers. For all these reasons, many people, including many linguists who study L2 acquisition, believe that second language acquisition is something different from first language acquisition. This hypothesis is referred to as the fundamental difference hypothesis of L2 acquisition. In certain important respects, however, L2 acquisition is like L1 acquisition. Like L1ers, L2ers do not acquire their second language overnight; they go through stages. Like L1ers, L2ers construct grammars. These grammars reflect their competence in the L2 at each stage, and so their language at any particular point, though not native-like, is rule-governed and not haphazard. The intermediate grammars that L2ers create on their way to the target have been called interlanguage grammars. Consider word order in the interlanguage grammars of Romance (e.g., Italian, Spanish, and Portuguese) speakers acquiring German as a second language. The word order of the Romance languages is Subject-(Auxiliary)-Verb-Object (like English). German has two basic word orders depending on the presence of an auxiliary. Sentences with auxiliaries have Subject-Auxiliary-Object-Verb, as in (1). Sentences without auxiliaries have Subject-Verb-Object, as in (2). (Note that as with the child data above, these L2 sentences may contain various “errors” in addition to the word order facts we are considering.) 1. 2. Hans hat ein Buch gekauft. Hans kauft ein Buch. “Hans has a book bought.” “Hans is buying a book.” Studies show that Romance speakers acquire German word order in pieces. During the first stage they use German words but the S-Aux-V-O word order of their native language, as follows: Stage 1: Mein Vater hat gekauft ein Buch. “My father has bought a book.” At the second stage, they acquired the VP word order Object-Verb. Stage 2: Vor Personalrat auch meine helfen. in the personnel office [a colleague] me helped “A colleague in the personnel office helped me.” At the third stage they acquired the rule that places the verb or (auxiliary) in second position. Knowing More Than One Language Stage 3: Jetzt kann sie mir eine Frage machen. now can she me a question ask “Now she can ask me a question.” I kenne nich die Welt. I know not the world. “I don’t know the world.” These stages differ from those of children acquiring German as a first language. For example, German children know early on that the language has SOV word order. Like L1ers, L2ers also attempt to uncover the grammar of the target language, but with varying success, and they often do not reach the target. Proponents of the fundamental difference hypothesis believe that L2ers construct grammars according to different principles than those used in L1 acquisition, principles that are not specifically designed for language acquisition, but for the problemsolving skills used for tasks like playing chess or learning math. 
According to this view, L2ers lack access to the specifically linguistic principles of UG that L1ers have to help them. Opposing this view, others have argued that adults are superior to children in solving all sorts of nonlinguistic problems. If they were using these problem-solving skills to learn their L2, shouldn't they be uniformly more successful than they are? Also, linguistic savants such as Christopher, discussed in the introduction, argue against the view that L2 acquisition involves only nonlinguistic cognitive abilities. Christopher's IQ and problem-solving skills are minimal at best, yet he has become proficient in several languages.

Many L2 acquisition researchers do not believe that L2 acquisition is fundamentally different from L1 acquisition. They point to various studies that show that interlanguage grammars do not generally violate principles of UG, which makes the process seem more similar to L1 acquisition. In the German L2 examples above, the interlanguage rules may be wrong for German, or wrong for Romance, but they are not impossible rules. These researchers also note that although L2ers may fall short of L1ers in terms of their final grammar, they appear to acquire rules in the same way as L1ers.

Native Language Influence in L2 Acquisition

One respect in which L1 acquisition and L2 acquisition are clearly different is that adult L2ers already have a fully developed grammar of their first language. As discussed in chapter 6, linguistic competence is unconscious knowledge. We cannot suppress our ability to use the rules of our language. We cannot decide not to understand English. Similarly, L2ers—especially at the beginning stages of acquiring their L2—seem to rely on their L1 grammar to some extent. This is shown by the kinds of errors L2ers make, which often involve the transfer of grammatical rules from their L1.

This is most obvious in phonology. L2ers generally speak with an accent because they may transfer the phonemes, phonological rules, or syllable structures of their first language to their second language. We see this in the Japanese speaker, who does not distinguish between write [raɪt] and light [laɪt] because the r/l distinction is not phonemic in Japanese; in the French speaker, who says "ze cat in ze hat" because French does not have [ð]; in the German speaker, who devoices final consonants, saying [hæf] for have; and in the Spanish speaker, who inserts a schwa before initial consonant clusters, as in [əskul] for school and [əsnab] for snob. Similarly, English speakers may have difficulty with unfamiliar sounds in other languages. For example, in Italian long (or double) consonants are phonemic. Italian has minimal pairs such as the following:

ano "anus"	anno "year"
pala "shovel"	palla "ball"
dita "fingers"	ditta "company"

English-speaking L2 learners of Italian have difficulty in hearing and producing the contrast between long and short consonants. This can lead to very embarrassing situations, for example on New Year's Eve, when instead of wishing people buon anno (good year), you wish them buon ano.

Native language influence is also found in the syntax and morphology. Sometimes this influence shows up as a wholesale transfer of a particular piece of grammar. For example, a Spanish speaker acquiring English might drop subjects in nonimperative sentences because this is possible in Spanish, as illustrated by the following examples:

Hey, is not funny.
In here have the mouth.
Live in Colombia.
Or speakers may begin with the word order of their native language, as we saw in the Romance-German interlanguage examples. Native language influence may show up in more subtle ways. For example, people whose L1 is German acquire English yes-no questions faster than Japanese speakers do. This is because German has a verb movement rule for forming yes-no questions that is very close to the English Aux movement rule, while in Japanese there is no syntactic movement in question formation.

The Creative Component of L2 Acquisition

It would be an oversimplification to think that L2 acquisition involves only the transfer of L1 properties to the L2 interlanguage. There is a strong creative component to L2 acquisition. Many language-particular parts of the L1 grammar do not transfer. Items that a speaker considers irregular, infrequent, or semantically difficult are not likely to transfer to the L2. For example, speakers will not typically transfer L1 idioms such as He hit the roof meaning "He got angry." They are more likely to transfer structures in which the semantic relations are transparent. For example, a structure such as (1) will transfer more readily than (2).

1. It is awkward to carry this suitcase.
2. This suitcase is awkward to carry.

In (1) the NP "this suitcase" is in its logical direct object position, while in (2) it has been moved to the subject position away from the verb that selects it.

Many of the "errors" that L2ers do make are not derived from their L1. For example, in one study Turkish speakers at a particular stage in their development of German used S-V-Adv (Subject-Verb-Adverb) word order in embedded clauses (the wenn clause in the following example) in their German interlanguage, even though both their native language and the target language have S-Adv-V order:

Wenn ich geh zuruck, ich arbeit elektriker in der Türkei.
(if I go back, I work (as an) electrician in Turkey)

(Cf. Wenn ich zuruck geh ich arbeit elektriker, which is grammatically correct German.) The embedded S-V-Adv order is most likely an overgeneralization of the verb-second requirement in German main clauses. As we noted earlier, overgeneralization is a clear indication that a rule has been acquired.

Why certain L1 rules transfer to the interlanguage grammar and others don't is not well understood. It is clear, however, that although construction of the L2 grammar is influenced by the L1 grammar, developmental principles—possibly universal—also operate in L2 acquisition. This is best illustrated by the fact that speakers with different L1s go through similar L2 stages. For example, Turkish, Serbo-Croatian, Italian, Greek, and Spanish speakers acquiring German as an L2 all drop articles to some extent. Because some of these L1s have articles, this cannot be caused by transfer but must involve some more general property of language acquisition.

Is There a Critical Period for L2 Acquisition?

I don't know how you manage, Sir, amongst all the foreigners; you never know what they are saying. When the poor things first come here they gabble away like geese, although the children can soon speak well enough.
MARGARET ATWOOD, Alias Grace, 1996

Age is a significant factor in L2 acquisition. The younger a person is when exposed to a second language, the more likely she is to achieve native-like competence.
In an important study of the effects of age on ultimate attainment in L2 acquisition, Jacqueline Johnson and Elissa Newport tested several groups of Chinese and Korean speakers who had acquired English as a second language. The subjects, all of whom had been in the United States for at least five years, were tested on their knowledge of specific aspects of English morphology and syntax. They were asked to judge the grammaticality of sentences such as: The little boy is speak to a policeman. The farmer bought two pig. A bat flewed into our attic last night. Johnson and Newport found that the test results depended heavily on the age at which the person had arrived in the United States. The people who arrived as children (between the age of three and eight) did as well on the test as American 365 366 CHAPTER 7 Language Acquisition native speakers. Those who arrived between the ages of eight and fifteen did not perform like native speakers. Moreover, every year seemed to make a difference for this group. The person who arrived at age nine did better than the one who arrived at age ten; those who arrived at age eleven did better than those who arrived at age twelve, and so on. The group that arrived between the ages of seventeen and thirty-one had the lowest scores. Does this mean that there is a critical period for L2 acquisition, an age beyond which it is impossible to acquire the grammar of a new language? Most researchers would hesitate to make such a strong claim. Although age is an important factor in achieving native-like L2 competence, it is certainly possible to acquire a second language as an adult. Many teenage and adult L2 learners become proficient, and a few highly talented ones even manage to pass for native speakers. Also, the Newport and Johnson studies looked at the end state of L2 acquisition, after their subjects had been in an English-speaking environment for many years. It is possible that the ultimate attainment of adult L2ers falls short of native competence, but that the process of L2 acquisition is not fundamentally different from L1 acquisition. It is more appropriate to say that L2 acquisition abilities gradually decline with age and that there are “sensitive periods” for the native-like mastery of certain aspects of the L2. The sensitive period for phonology is the shortest. To achieve native-like pronunciation of an L2 generally requires exposure during childhood. Other aspects of language, such as syntax, may have a larger window. Recent research with learners of their “heritage language” (the ancestral language not learned as a child, such as Gaelic in Ireland) provides additional support for the notion of sensitive periods in L2 acquisition. This finding is based on studies into the acquisition of Spanish by college students who had overheard the language as children (and sometimes knew a few words), but who did not otherwise speak or understand Spanish. The overhearers were compared to people who had no exposure to Spanish before the age of fourteen. All of the students were native speakers of English studying their heritage language as a second language. These results showed that the overhearers acquired a more native-like accent than the other students did. However, the overhearers did not show any advantage in acquiring the grammatical morphemes of Spanish. Early exposure may leave an imprint that facilitates the late acquisition of certain aspects of language. 
Recent research on the neurological effects of acquiring a second language shows that left hemisphere cortical density is increased in bilinguals relative to monolinguals and that this increase is more pronounced in early versus late second-language learners. The study also shows a positive relationship between brain density and second-language proficiency. The researchers conclude that the structure of the human brain is altered by the experience of acquiring a second language. Summary When children acquire a language, they acquire the grammar of that language—the phonological, morphological, syntactic, and semantic rules. They also acquire the pragmatic rules of the language as well as a lexicon. Children Summary are not taught language. Rather, they extract the rules (and much of the lexicon) from the language around them. Several learning mechanisms have been suggested to explain the acquisition process. Imitations of adult speech, reinforcement, and analogy have all been proposed. None of these possible learning mechanisms account for the fact that children creatively form new sentences according to the rules of their language, or for the fact that children make certain kinds of errors but not others. Empirical studies of the motherese hypothesis show that grammar development does not depend on structured input. Connectionist models of acquisition also depend on the child having specially structured input. The ease and rapidity of children’s language acquisition and the uniformity of the stages of development for all children and all languages, despite the poverty of the stimulus they receive, suggest that the language faculty is innate and that the infant comes to the complex task already endowed with a Universal Grammar. UG is not a grammar like the grammar of English or Arabic, but represents the principles to which all human languages conform. Language acquisition is a creative process. Children create grammars based on the linguistic input and are guided by UG. Language development proceeds in stages, which are universal. During the first year of life, children develop the sounds of their language. They begin by producing and perceiving many sounds that do not exist in their language input, the babbling stage. Gradually, their productions and perceptions are fine-tuned to the environment. Children’s late babbling has all the phonological characteristics of the input language. Deaf children who are exposed at birth to sign languages also produce manual babbling, showing that babbling is a universal first stage in language acquisition that is dependent on the linguistic input received. At the end of the first year, children utter their first words. During the second year, they learn many more words and they develop much of the phonological system of the language. Children’s first utterances are one-word “sentences” (the holophrastic stage). Many experimental studies show that children are sensitive to various linguistic properties such as stress and phonotactic constraints, and to statistical regularities of the input that enable them to segment the fluent speech that they hear into words. One method of segmenting speech is prosodic bootstrapping. Other bootstrapping methods can help the child to learn verb meaning based on syntactic context (syntactic bootstrapping), or syntactic categories based on word meaning (semantic bootstrapping) and distributional evidence such as word frames. After a few months, the child puts two or more words together. 
These early sentences are not random combinations of words—the words have definite patterns and express both syntactic and semantic relationships. During the telegraphic stage, the child produces longer sentences that often lack function or grammatical morphemes. The child’s early grammar still lacks many of the rules of the adult grammar, but is not qualitatively different from it. Children at this stage have correct word order and rules for agreement and case, which show their knowledge of structure. Children make specific kinds of errors while acquiring their language. For example, they will overgeneralize morphology by saying bringed or mans. This 367 368 CHAPTER 7 Language Acquisition shows that they are acquiring rules of their particular language. Children never make errors that violate principles of Universal Grammar. In acquiring the lexicon of the language children may overextend word meaning by using dog to mean any four-legged creature. As well, they may underextend word meaning and use dog only to denote the family pet and no other dogs, as if it were a proper noun. Despite these categorization “errors,” children’s word learning, like their grammatical development, is guided by general principles. Deaf children exposed to sign language show the same stages of language acquisition as hearing children exposed to spoken languages. That all children go through similar stages regardless of language shows that they are equipped with special abilities to know what generalizations to look for and what to ignore, and how to discover the regularities of language, irrespective of the modality in which their language is expressed. Children may acquire more than one language at a time. Bilingual children seem to go through the same stages as monolingual children except that they develop two grammars and two lexicons simultaneously. This is true for children acquiring two spoken languages as well as for children acquiring a spoken language and a sign language. Whether the child will be equally proficient in the two languages depends on the input he or she receives and the social conditions under which the languages are acquired. In second language acquisition, L2 learners construct grammars of the target language—called interlanguage grammars—that go through stages, like the grammars of first-language learners. Influence from the speaker’s first language makes L2 acquisition appear different from L1 acquisition. Adults often do not achieve native-like competence in their L2, especially in pronunciation. The difficulties encountered in attempting to learn languages after puberty may be because there are sensitive periods for L2 acquisition. Some theories of second language acquisition suggest that the same principles operate that account for first language acquisition. A second view suggests that the acquisition of a second language in adulthood involves general learning mechanisms rather than the specifically linguistic principles used by the child. The universality of the language acquisition process, the stages of development, and the relatively short period in which the child constructs a complex grammatical system without overt teaching suggest that the human species is innately endowed with special language acquisition abilities and that language is biologically and genetically part of the human neurological system. All normal children learn whatever language or languages they are exposed to, from Afrikaans to Zuni. 
This ability is not dependent on race, social class, geography, or even intelligence (within a normal range). This ability is uniquely human. References for Further Reading Brown, R. 1973. A first language: The early stages. Cambridge, MA: Harvard University Press. Clark, E. 2002. First language acquisition. New York: Cambridge University Press. Guasti, M. T. 2002. Language acquisition: The growth of grammar. Cambridge, MA: MIT Press. Exercises Hakuta, K. 1986. Mirror of language: The debate on bilingualism. New York: Basic Books. Ingram, D. 1989. First language acquisition: Method, description and explanation. New York: Cambridge University Press. Jakobson, R. 1971. Studies on child language and aphasia. The Hague: Mouton. Klima, E. S., and U. Bellugi. 1979. The signs of language. Cambridge, MA: Harvard University Press. O’Grady, W. 2005. How children learn language. Cambridge, UK: Cambridge University Press. White, L. 2003. Second language acquisition and Universal Grammar. Cambridge, UK: Cambridge University Press. Exercises 1. Baby talk is a term used to label the word forms that many adults use when speaking to children. Examples in English are choo-choo for “train” and bow-wow for “dog.” Baby talk seems to exist in every language and culture. At least two things seem to be universal about baby talk: The words that have baby-talk forms fall into certain semantic categories (e.g., food and animals), and the words are phonetically simpler than the adult forms (e.g., “tummy” /tʌmi/ for “stomach” /stʌmɪk/). List all the baby-talk words you can think of in your native language; then (1) separate them into semantic categories, and (2) try to state general rules for the kinds of phonological reductions or simplifications that occur. 2. In this chapter we discussed the way children acquire rules of question formation. The following examples of children’s early questions are from a stage that is later than those discussed in the chapter. Formulate a generalization to describe this stage. Can I go? Why do you have one tooth? What do frogs eat? Do you like chips? Can I can’t go? Why you don’t have a tongue? What do you don’t like? Do you don’t like bananas? 3. Find a child between two and four years old and play with the child for about thirty minutes. Keep a list of all words and/or “sentences” that are used inappropriately. Describe what the child’s meanings for these words probably are. Describe the syntactic or morphological errors (including omissions). If the child is producing multiword sentences, write a grammar that could account for the data you have collected. 4. Roger Brown and his coworkers at Harvard University studied the language development of three children, referred to in the literature as Adam, Eve, and Sarah. The following are samples of their utterances during the “two-word stage.” see boy see sock pretty boat push it move it mommy sleep 369 370 CHAPTER 7 Language Acquisition pretty fan bye bye melon more taxi bye bye hot more melon A. Assume that these utterances are grammatical sentences in the children’s grammars. (1) Write a minigrammar that would account for these sentences. Example: One rule might be: VP → V N (2) Draw phrase structure trees for each utterance. Example: VP V N see boy B. One observation made by Brown was that many of the sentences and phrases produced by the children were ungrammatical from the point of view of the adult grammar. 
The research group concluded, based on utterances such as those below, that a rule in the children's grammar for a noun phrase was: NP → M N (where M = any modifier)

A coat
My stool
Poor man
A celery
That knee
Little top
A Becky
More coffee
Dirty knee
A hands
More nut
That Adam
My mummy
Two tinker-toy
Big boot

(3) Mark with an asterisk any of the above NPs that are ungrammatical in the adult grammar of English.
(4) State the "violation" for each starred item. For example, if one of the utterances were Lotsa book, you might say: "The modifier lotsa must be followed by a plural noun."

5. In the holophrastic (one-word) stage of child language acquisition, the child's phonological system differs in systematic ways from that in the adult grammar. The inventory of sounds and the phonemic contrasts are smaller, and there are greater constraints on phonotactic rules. (See chapter 5 for a discussion of these aspects of phonology.)
A. For each of the following words produced by a child, state what the substitution is, and any other differences that result. Example: spook [pʰuk] Substitution: initial cluster [sp] reduced to single consonant; /p/ becomes aspirated, showing that child has acquired aspiration rule.

(1) don't [dot]
(2) skip [kʰɪp]
(3) shoe [su]
(4) that [dæt]
(5) play [pʰe]
(6) thump [dʌp]
(7) bath [bæt]
(8) chop [tʰap]
(9) kitty [kɪdi]
(10) light [waɪt]
(11) dolly [daʊi]
(12) grow [go]

B. State general rules that account for the children's deviations from the adult pronunciations.

6. Children learn demonstrative words such as this, that, these, those; temporal terms such as now, then, tomorrow; and spatial terms such as here, there, right, and behind relatively late. What do all these words have in common? (Hint: See the pragmatics section of chapter 3.) Why might that factor delay their acquisition?

7. We saw in this chapter how children overgeneralize rules such as the plural rule, producing forms such as mans or mouses. What might a child learning English use instead of the adult words given?
a. children b. went c. better d. best e. brought
f. sang g. geese h. worst i. knives j. worse

8. The following words are from the lexicons of two children ages one year six months (1;6) and two (2;0) years old. Compare the pronunciation of the words to adult pronunciation.

Child 1 (1;6): soap [doʊp], feet [bit], sock [kak], goose [gos], dish [dɪtʃ], light [waɪt], sock [sʌk], geese [gis], fish [fɪs], sheep [ʃip]
Child 2 (2;0): bib [bɛ], slide [daɪ], dog [da], cheese [tʃis], shoes [dus], bead [biː], pig [pɛk], cheese [tis], biz [bɪs], bib [bɪp]

a. What happens to final consonants in the language of these two children? Formulate the rule(s) in words. Do all final consonants behave the same way? If not, which consonants undergo the rule(s)? Is this a natural class?
b. On the basis of these data, do any pairs of words allow you to identify any of the phonemes in the grammars of these children? What are they? Explain how you were able to determine your answer.

9. Make up a "wug test" to test a child's knowledge of the following morphemes:
comparative -er (as in bigger)
superlative -est (as in biggest)
progressive -ing (as in I am dancing)
agentive -er (as in writer)

10. Children frequently produce sentences such as the following:
Don't giggle me.
I danced the clown.
Yawny Baby—you can push her mouth open to drink her.
Who deaded my kitty cat?
Are you gonna nice yourself?
a.
How would you characterize the difference between the grammar or lexicon of children who produce such sentences and adult English? b. Can you think of similar, but well-formed, examples in adult English? 11. Many Arabic speakers tend to insert a vowel in their pronunciation of English words. The first column has examples from L2ers whose L1 is Egyptian Arabic and the second column from L2ers who speak Iraqi Arabic (consider [tʃ] to be a single consonant): L1 = Egyptian Arabic L1 = Iraqi Arabic [bilastik] [θiriː] [tiransilet] [silaɪd] [firɛd] [tʃildiren] [ifloːr] [ibleːn] [tʃilidren] [iθriː] [istadi] [ifrɛd] plastic three translate slide Fred children floor plane children three study Fred a. What vowel do the Egyptian Arabic speakers insert and where? b. What vowel do the Iraqi Arabic speakers insert and where? c. Based on the position of the italicized epenthetic vowel in “I wrote to him,” can you guess which list, A or B, belongs to Egyptian Arabic and which belongs to Iraqi Arabic? Arabic A kitabta kitabla kitabitla Arabic B “I wrote him” “He wrote to him” “I wrote to him” katabtu katablu katabtilu “I wrote him” “He wrote to him” “I wrote to him” 12. Following is a list of utterances recorded from Sammy at age two-and-ahalf: Exercises a. b. c. d. e. f. g. h. i. j. k. l. m. n. o. p. q. r. s. t. u. v. w. x. y. Mikey not see him. Where ball go? Look Mommy, doggie. Big doggie. He no bite ya. He eats mud. Kitty hiding. Grampie wear glasses. He funny. He loves hamburgers. Daddy ride bike. That’s mines. That my toy. Him sleeping. Want more milk. Read moon book. Me want that. Teddy up. Daddy ’puter. ’Puter broke. Cookies and milk!!! Me Superman. Mommy’s angry. Allgone kitty. Here my batball. A. B. C. D. What stage of language development is Sammy in? Calculate the number of morphemes in each of Sammy’s utterances. What is Sammy’s MLU in morphemes? In words? Challenge question: Deciding the morpheme count for several of Sammy’s words requires some thought. For each of the following, determine whether it should count as one or two morphemes and why. allgone batball glasses cookies 13. The following sentences were uttered by children in the telegraphic stage (the second column contains a word-by-word gloss, and the last column is a translation of the sentence that includes elements that the child omitted): Child’s utterance Gloss Translation Swedish Se, blomster har look flowers have English French Tickles me Mange du pain eat some bread German S[ch]okolade holen chocolate get “Look, (I) have flowers.” “It tickles me.” “S/he eats some bread.” “I/we get chocolate.” 373 374 CHAPTER 7 Language Acquisition Dutch Earst kleine boekje lezen first little book read “First, I/we read a little book.” In each of the children’s sentences, the subject is missing, although this is not grammatical in the respective adult languages (in contrast to languages such as Spanish and Italian in which it is grammatical to omit the subject). a. Develop two hypotheses as to why the child might omit sentence subjects during this stage. For example, one hypothesis might be “children are limited in the length of sentences they can produce, so they drop subjects.” b. Evaluate the different hypotheses. 
For example, an objection to the hypothesis given in (a) might be “If length is the relevant factor, why do children consistently drop subjects but not objects?” 8 Language Processing: Humans and Computers No doubt a reasonable model of language use will incorporate, as a basic component, the generative grammar that expresses the speaker-hearer’s knowledge of the language; but this generative grammar does not, in itself, prescribe the character or functioning of a perceptual model or a model of speech production. NOAM CHOMSKY, Aspects of the Theory of Syntax, 1965 The Human Mind at Work: Human Language Processing Psycholinguistics is the area of linguistics that is concerned with linguistic performance—how we use our linguistic competence—in speech (or sign) production and comprehension. The human brain is able not only to acquire and store the mental lexicon and grammar, but also to access that linguistic storehouse to speak and understand language in real time. How we process knowledge depends largely on the nature of that knowledge. If, for example, language were not open-ended, and were merely a finite store of fixed phrases and sentences in memory, then speaking might simply consist of finding a sentence that expresses a thought we wished to convey. Comprehension could be the reverse—matching the sounds to a stored string that has been memorized with its meaning. Of course, this is ridiculous! It is not possible because of the creativity of language. In chapter 7, we saw that children do not learn language by imitating and storing sentences, but by constructing a grammar. When we speak, we access our lexicon to find the words, and we use the rules of grammar to construct novel sentences and to produce the sounds that 375 376 CHAPTER 8 Language Processing: Humans and Computers Speaker Listener Sensory nerves Ear Brain Brain Feedback link Vocal muscles Sensory nerves Motor nerves Sound waves Ear Linguistic level Physiological level Acoustic level Physiological level Linguistic level FIGURE 8.1 | The speech chain.1 A spoken utterance starts as a message in the speaker’s brain/mind. It is put into linguistic form and interpreted as articulation commands, emerging as an acoustic signal. The signal is processed by the listener’s ear and sent to the brain/mind, where it is interpreted. express the message we wish to convey. When we listen to speech and understand what is being said, we also access the lexicon and grammar to assign a structure and meaning to the sounds we hear. Speaking and comprehending speech can be viewed as a speech chain, a kind of “brain-to-brain” linking, as shown in Figure 8.1. The grammar relates sounds and meanings, and contains the units and rules of the language that make speech production and comprehension possible. However, other psychological processes are used to produce and understand utterances. Certain mechanisms enable us to break the continuous stream of speech sounds into linguistic units such as phonemes, syllables, and words in order to comprehend, and to compose sounds into words in order to produce meaningful speech. Other mechanisms determine how we pull words from the mental lexi1The figure is taken from P. B. Denes and E. N. Pinson, eds. 1963. The Speech Chain. Philadelphia, PA: Williams & Wilkins, p. 4. Reprinted with permission of Alcatel-Lucent USA Inc. The Human Mind at Work: Human Language Processing con, and still others explain how we construct a phrase structure representation of the words we retrieve. 
We usually have no difficulty understanding or producing sentences in our language. We do it without effort or conscious awareness of the processes involved. However, we have all had the experience of making a speech error, of having a word on the “tip of our tongue,” or of failing to understand a perfectly grammatical sentence, such as sentence (1): 1. The horse raced past the barn fell. Many individuals, on hearing this sentence, will judge it to be ungrammatical, yet will judge as grammatical a sentence with the same syntactic structure, such as: 2. The bus driven past the school stopped. Similarly, people will have no problem with sentence (3), which has the same meaning as (1). 3. The horse that was raced past the barn fell. Conversely, some ungrammatical sentences are easily understandable, such as sentence (4). This mismatch between grammaticality and interpretability tells us that language processing involves more than grammar. 4. *The baby seems sleeping. A theory of linguistic performance tries to detail the psychological mechanisms that work with the grammar to facilitate language production and comprehension. Comprehension “I quite agree with you,” said the Duchess; “and the moral of that is—‘Be what you would seem to be’—or, if you’d like it put more simply—‘Never imagine yourself not to be otherwise than what it might appear to others . . . to be otherwise.’ ” “I think I should understand that better,” Alice said very politely, “if I had it written down: but I can’t quite follow it as you say it.” LEWIS CARROLL, Alice’s Adventures in Wonderland, 1865 The sentence uttered by the Duchess provides another example of a grammatical sentence that is difficult to understand. The sentence is very long and contains several words that require extra resources to process, for example, multiple uses of negation and words like otherwise. Alice notes that if she had a pen and paper she could “unpack” this sentence more easily. One of the aims of psycholinguistics is to describe the processes people normally use in speaking and understanding language. The various breakdowns in performance, such as tip of the tongue phenomena, speech errors, and failure to comprehend tricky sentences, can tell us 377 378 CHAPTER 8 Language Processing: Humans and Computers a great deal about how the language processor works, just as children’s acquisition errors tell us a lot about the mechanisms involved in language development. The Speech Signal Understanding a sentence involves analysis at many levels. To begin with, we must comprehend the individual speech sounds we hear. We are not conscious of the complicated processes we use to understand speech any more than we are conscious of the complicated processes of digesting food and utilizing nutrients. We must study these processes deliberately and scientifically. One of the first questions of linguistic performance concerns segmentation of the acoustic signal. To understand this process, some knowledge of the signal can be helpful. In chapter 4 we described speech sounds according to the ways in which they are produced. These involve the position of the tongue, the lips, and the velum; the state of the vocal cords; whether the articulators obstruct the free flow of air; and so on. All of these articulatory characteristics are reflected in the physical characteristics of the sounds produced. Speech sounds can also be described in physical, or acoustic, terms. Physically, a sound is produced whenever there is a disturbance in the position of air molecules. 
The ancient philosophers asked whether a sound is produced if a tree falls in the middle of the forest with no one to hear it. This question has been answered by the science of acoustics. Objectively, a sound is produced; subjectively, there is no sound. In fact, there are sounds we cannot hear because our ears are not sensitive to the full range of frequencies. Acoustic phonetics is concerned only with speech sounds, all of which can be heard by the normal human ear. When we push air out of the lungs through the glottis, it causes the vocal cords to vibrate; this vibration in turn produces pulses of air that escape through the mouth (and sometimes the nose). These pulses are actually small variations in the air pressure caused by the wavelike motion of the air molecules. The sounds we produce can be described in terms of how fast the variations of the air pressure occur. This determines the fundamental frequency of the sounds and is perceived by the hearer as pitch. We can also describe the magnitude, or intensity, of the variations, which determines the loudness of the sound. The quality of the speech sound—whether it’s an [i] or an [a] or whatever—is determined by the shape of the vocal tract when air is flowing through it. This shape modulates the fundamental frequency into a spectrum of frequencies of greater or lesser intensity, and the particular combination of “greater or lesser” is heard as a particular sound. (Imagine smooth ocean waves with regular peaks and troughs approaching a rocky coastline. As they crash upon the rocks they are “modulated” or broken up into dozens of “sub-waves” with varying peaks and troughs. That is similar to what is happening to the glottal pulses as they “crash” through the vocal tract.) An important tool in acoustic research is a computer program that decomposes the speech signal into its frequency components. When speech is fed into a computer (from a microphone or a recording), an image of the speech signal is displayed. The patterns produced are called spectrograms or, more vividly, voiceprints. A spectrogram of the words heed, head, had, and who’d is shown in Figure 8.2. The Human Mind at Work: Human Language Processing FIGURE 8.2 | A spectrogram of the words heed, head, had, and who’d, spoken with a British accent (speaker: Peter Ladefoged, February 16, 1973). Courtesy of Peter Ladefoged. Time in milliseconds moves horizontally from left to right on the x axis; on the y axis the graph represents pitch (or, more technically, frequency). The intensity of each frequency component is indicated by the degree of darkness: the more intense, the darker. Each vowel is characterized by dark bands that differ in their placement according to their frequency. They represent the strongest harmonics (or sub-waves) produced by the shape of the vocal tract and are called the formants of the vowels. (A harmonic is a special frequency that is a multiple (2, 3, etc.) of the fundamental frequency.) Because the tongue is in a different position for each vowel, the formant frequencies differ for each vowel. The frequencies of these formants account for the different vowel qualities you hear. The spectrogram also shows, although not very conspicuously, the pitch of the entire utterance (intonation contour) on the voicing bar marked P. The striations are the thin vertical lines that indicate a single opening and closing of the vocal cords. 
When the striations are far apart, the vocal cords are vibrating slowly and the pitch is low; when the striations are close together, the vocal cords are vibrating rapidly and the pitch is high. By studying spectrograms of all speech sounds and many different utterances, acoustic phoneticians have learned a great deal about the basic acoustic components that reflect the articulatory features of speech sounds. Speech Perception and Comprehension Do what you know and perception is converted into character. RALPH WALDO EMERSON (1803–1882) Speech is a continuous signal. In natural speech, sounds overlap and influence each other, and yet listeners have the impression that they are hearing discrete units such as words, morphemes, syllables, and phonemes. A central problem of 379 380 CHAPTER 8 Language Processing: Humans and Computers speech perception is to explain how listeners carve up the continuous speech signal into meaningful units. This is referred to as the “segmentation problem.” Another question is, how does the listener manage to recognize particular speech sounds when they occur in different contexts and when they are spoken by different people? For example, how can a speaker tell that a [d] spoken by a man with a deep voice is the same unit of sound as the [d] spoken in the highpitched voice of a child? Acoustically, they are distinct. In addition, a [d] that occurs before the vowel [i] is somewhat acoustically different from a [d] that occurs before the vowel [u]. How does a listener know that two physically distinct instances of a sound are the same? This is referred to as the “lack of invariance problem.” In addressing the latter problem, experimental results show that listeners can calibrate their perceptions to control for differences in the size and shape of the vocal tract of the speaker. Similarly, listeners adjust how they interpret timing information in the speech signal as a function of how quickly the speaker is talking. These normalization procedures enable the listener to understand a [d] as a [d] regardless of the speaker or the speech rate. More complicated adjustments are required to factor out the effects of a preceding or following sound. As we might expect, the units we can perceive depend on the language we know. Speakers of English can perceive the difference between [l] and [r] because these phones represent distinct phonemes in the language. Speakers of Japanese have great difficulty in differentiating the two because they are allophones of one phoneme in their language. Recall from our discussion of language development in chapter 7 that these perceptual biases develop during the first year of life. Returning to the segmentation problem, spoken words are seldom surrounded by boundaries such as pauses. Nevertheless, words are obviously units of perception. The spaces between them in writing support this view. How do we find the words in the speech stream? Suppose you heard someone say: A sniggle blick is procking a slar. and you were able to perceive the sounds as [ə s n ɪ g ə l b l ɪ k ɪ z pʰ r a k ɪ ̃ ŋ ə s l a r] You would still be unable to assign a meaning to the sounds, because the meaning of a sentence relies mainly on the meaning of its words, and the only English lexical items in this string are the morphemes a, is, and -ing. The sentence lacks any English content words. (However, you would accept it as grammatically well-formed because it conforms to the rules of English syntax.) 
You can decide that the sentence has no meaning only if you attempt (unconsciously or consciously) to search your mental lexicon for the phonological strings you decide are possible words. This process is called lexical access, or word recognition, discussed in detail later. Finding that there are no entries for sniggle, blick, prock, and slar, you can conclude that the sentence contains nonsense strings. The segmentation and search of these “words” relies on knowing the grammatical morphemes and the syntax. The Human Mind at Work: Human Language Processing If instead you heard someone say The cat chased the rat and you perceived the sounds as [ð ə kʰ æ ʔ tʃʰ e s t ð ə r æ t] a similar lexical look-up process would lead you to conclude that an event concerning a cat, a rat, and the activity of chasing had occurred. You could know this only by segmenting the words in the continuous speech signal, analyzing them into their phonological word units, and matching these units to similar strings stored in your lexicon, which also includes the meanings attached to these phonological representations. (This still would not enable you to understand who chased whom, because that requires syntactic analysis.) Stress and intonation provide some clues to syntactic structure. We know, for example, that the different meanings of the sentences He lives in the white house and He lives in the White House can be signaled by differences in their stress patterns. Such prosodic aspects of speech also help to segment the speech signal into words and phrases. For example, syllables at the end of a phrase are longer in duration than at the beginning, and intonation contours mark boundaries of clauses. Bottom-up and Top-down Models I have experimented and experimented until now I know that [water] never does run uphill, except in the dark. I know it does in the dark, because the pool never goes dry; which it would, of course, if the water didn’t come back in the night. It is best to prove things by experiment; then you know; whereas if you depend on guessing and supposing and conjecturing, you will never get educated. MARK TWAIN, Eve’s Diary, 1906 In this laboratory the only one who is always right is the cat. MOTTO IN THE LABORATORY OF ARTURO ROSENBLUETH Language comprehension is very fast and automatic. We understand an utterance as fast as we hear it or read it. But we know this understanding must involve (at least) the following sub-operations: segmenting the continuous speech signal into phonemes, morphemes, words, and phrases; looking up the words and morphemes in the mental lexicon; finding the appropriate meanings of ambiguous words; parsing them into tree structures; choosing among different possible structures when syntactic ambiguities arise; interpreting the sentence; making a mental model of the discourse and updating it to reflect the meaning of the new sentence; and other matters beyond the scope of our introductory text. This seems like a great deal of work to be done in a very short time: we can understand spoken language at a rate of twenty phonemes per second. One might conclude that there must be some sort of a trick that makes it all possible. In a certain sense there is. Because of the sequential nature of language, a certain 381 382 CHAPTER 8 Language Processing: Humans and Computers amount of guesswork is involved in real-time comprehension. Many psycholinguists suggest that language perception and comprehension involve both topdown processing and bottom-up processing. 
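The difference between the two directions of processing can be sketched in a few lines. The miniature lexicon and the single "expectation" rule below are invented for illustration and stand in for vastly richer grammatical and lexical knowledge.

# A toy contrast between the two directions of processing. The lexicon and the
# expectation table are invented for illustration only.

LEXICON = {"boy": "N", "book": "N", "bought": "V", "the": "Det", "runs": "V"}
EXPECTS = {"Det": {"N"}}        # e.g., after a determiner, expect a noun next

def bottom_up(partial_input):
    """Driven only by the signal: every word consistent with what has been heard so far."""
    return {w for w in LEXICON if w.startswith(partial_input)}

def top_down(previous_word, partial_input):
    """Higher-level knowledge narrows the candidates before the word is complete."""
    expected = EXPECTS.get(LEXICON.get(previous_word, ""), set(LEXICON.values()))
    return {w for w in bottom_up(partial_input) if LEXICON[w] in expected}

print(bottom_up("bo"))         # {'boy', 'book', 'bought'} (in some order): the signal alone
print(top_down("the", "bo"))   # {'boy', 'book'}: having just heard "the" rules out the verb

Bottom-up, the partial signal leaves three candidates; top-down, the determiner just heard narrows them to nouns before the word is even finished. Real comprehension interleaves the two far more intricately, as the following discussion explains.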
Top-down processes proceed from semantic and syntactic information to the lexical information gained from the sensory input. Through use of such higherlevel information, we can try to predict what is to follow in the signal. For example, upon hearing the determiner the, the speaker begins constructing an NP and expects that the next word could be a noun, as in the boy. In this instance the knowledge of phrase structure would be the source of information. Bottom-up processing moves step-by-step from the incoming acoustic (or visual) signal, to phonemes, morphemes, words and phrases, and ultimately to semantic interpretation. Each step of building toward a meaning is based on the sensory data and accompanying lexical information. According to this model the speaker waits until hearing the and boy before constructing an NP, and then waits for the next word, and so on. Evidence for top-down processing is found in experiments that require subjects to identify spoken words in the presence of noise. Listeners make more errors when the words occur in isolation than when they occur in sentences. Moreover, they make more errors if the words occur in anomalous, or nonsense, sentences; and they make the most errors if the words occur in ungrammatical sentences. Also, as discussed further below, when subjects are asked to “shadow” sentences, that is, to repeat each word of a sentence immediately upon hearing it, they often produce words in anticipation of the input. Based on a computation of the meaning of the sentence to that point, they can guess what is coming next. Apparently, subjects are using their knowledge of syntactic and semantic relations to help them narrow down the set of candidate words. Top-down processing is also supported by a different kind of experiment. Subjects hear recorded sentences in which some part of the signal is removed and a cough or buzz is substituted, such as the underlined “s” in the sentence The state governors met with their respective legislatures convening in the capital city. Their experience is that they “hear” the sentence as complete, without any phonemes missing, and, in fact, have difficulty saying exactly where in the word the noise occurred. This effect is called phoneme restoration. It would not be surprising simply to find that subjects can guess that the word containing the cough was legislatures. What is remarkable is that they really believe they are hearing the [s], even when they are told it is not there. In this case, top-down information apparently overrides bottom-up information. There is also a role for context (top-down information) in segmentation. In some instances even an utterance containing all familiar words can be divided in more than one way. For example, the phonetic sequence [g r e d e] in a discussion of meat or eggs is likely to be heard as Grade A, but in a discussion of the weather as grey day. In other cases, although the sequence of phonemes might be compatible with two segmentations (e.g., [n aɪ t (ʰ) r e t]), the phonetic details of pronunciation can signal where the word boundary is. In night rate, the first t is part of the coda of the first syllable and thus unaspirated, whereas in nitrate it begins the onset of the second syllable, which is stressed and therefore the t is aspirated. The Human Mind at Work: Human Language Processing Lexical Access and Word Recognition Oh, are you from Wales? Do you know a fella named Jonah? He used to live in whales for a while. 
GROUCHO MARX (1890–1977) Psycholinguists have conducted a great deal of research on lexical access or word recognition, the process by which we obtain information about the meaning and syntactic properties of a word from our mental lexicon. Several experimental techniques have been used in studies of lexical access. One technique involves asking subjects to decide whether a string of letters (or sounds if auditory stimuli are used) is or is not a word. They must respond by pressing one button if the stimulus is an actual word and a different button if it is not, so they are making a lexical decision. During these and similar experiments, measurements of response time, or reaction time (often referred to as RTs), are taken. The assumption is that the longer it takes to respond to a particular task, the more processing is involved. RT measurements show that lexical access depends to some extent on word frequency; more commonly used words (both spoken and written) such as car are responded to more quickly than words that we rarely encounter such as fig. Many properties of lexical access can be examined using lexical decision experiments. In the following example, the relationship between the current word and the immediately preceding word is manipulated. For example, making a lexical decision on the word doctor will be faster if you just made a lexical decision on nurse than if you just made one on a semantically unrelated word such as flower. This effect is known as semantic priming: we say that the word nurse primes the word doctor. This effect might arise because semantically related words are located in the same part of the mental lexicon, so when we hear a priming word and look it up in the lexicon, semantically related, nearby words are “awakened” and more readily accessible for a few moments. Recent neurolinguistic research is showing the limits of the lexical decision technique. It is now possible to measure electrical brain activity in subjects while they perform a lexical decision experiment, and compare the patterns in brain responses to patterns in RTs. (The technique is similar to the event-related brain potentials mentioned in the introduction.) Such experiments have provided results that directly conflict with the RT data. For example, measures of brain activity show priming to pairs of verb forms such as teach/taught during the early stages of lexical access, whereas such pairs do not show priming in lexical decision RTs. This is because lexical decision involves several stages of processing, and patterns in early stages may be obscured by different patterns in later stages. Brain measures, by contrast, are taken continuously and therefore allow researchers to separately measure early and later processes. One of the most interesting facts about lexical access is that listeners retrieve all meanings of a word even when the sentence containing the word is biased toward one of the meanings. This is shown in experiments in which the ambiguous word 383 384 CHAPTER 8 Language Processing: Humans and Computers primes words related to both of its meanings. For example, suppose a subject hears the sentence: The gypsy read the young man’s palm for only a dollar. Palm primes the word hand, so in a lexical decision about hand, a shorter RT occurs than in a comparable sentence not containing the word palm. However, a shorter RT also occurs for the word tree. 
The other meaning of palm (as in palm tree) is apparently activated even though that meaning is not a part of the meaning of the priming sentence. In listening or reading, then, all of the meanings represented by a string of letters and sounds will be triggered. This argues for a limit on the effects of top-down processing because the individual word palm is heard and processed somewhat independently of its context, and so is capable of priming words related to all its lexical meanings. However, the disambiguating information in the sentence is used very quickly (within 250 milliseconds) to discard the meanings that are not appropriate to the sentence. If we check for priming after the word only instead of right after the word palm in the previous example, we find it for hand but no longer for tree. Another experimental technique, called the naming task, asks the subject to read aloud a printed word. (A variant of the naming task is also used in studies of people with aphasia, who are asked to name the object shown in a picture.) Subjects read irregularly spelled words like dough and steak just slightly more slowly than regularly spelled words like doe and stake, but still faster than invented strings like cluff. This suggests that people can do two different things in the naming task. They can look for the string in their mental lexicon, and if they find it (i.e., if it is a real word), they can pronounce the stored phonological representation for it. They can also “sound it out,” using their knowledge of how certain letters or letter sequences (e.g., “gh,” “oe”) are most commonly pronounced. The latter is obviously the only way to come up with a pronunciation for a nonexisting word. The fact that irregularly spelled words are read more slowly than regularly spelled real words suggests that the mind “notices” the irregularity. This may be because the brain is trying to do two tasks—lexical look-up and sounding out the word—in parallel in order to perform naming as fast as possible. When the two approaches yield inconsistent results, a conflict arises that takes some time to resolve. Syntactic Processing Teacher Strikes Idle Kids Enraged Cow Injures Farmer with Ax Killer Sentenced to Die for Second Time in 10 Years Stolen Painting Found by Tree AMBIGUOUS HEADLINES Psycholinguistic research has also focused on syntactic processing. In addition to recognizing words, the listener must figure out the syntactic and semantic The Human Mind at Work: Human Language Processing relations among the words and phrases in a sentence, what we earlier referred to as “parsing.” The parsing of a sentence is largely determined by the rules of the grammar, but it is also strongly influenced by the sequential nature of language. Listeners actively build a phrase structure representation of a sentence as they hear it. They must therefore decide for each “incoming” word what its grammatical category is and how it attaches to the tree that is being constructed. Many sentences present temporary ambiguities, such as a word or words that belong to more than one syntactic category. For example, the string The warehouse fires . . . could continue in one of two ways: 1. 2. . . . were set by an arsonist. . . . employees over sixty. Fires is part of a compound noun in sentence (1) and is a verb in sentence (2). As noted earlier, experimental studies of such sentences show that both meanings and categories are activated when a subject encounters the ambiguous word. 
The ambiguity is quickly resolved (hence the term temporary ambiguity) based on syntactic and semantic context, and on the frequency of the two uses of the word. The disambiguations are so quick and seamless that unintentionally ambiguous newspaper headlines such as those at the head of this section are scarcely noticeable except to linguists who collect them. Another important type of temporary ambiguity concerns sentences in which the phrase structure rules allow two possible attachments of a constituent, as illustrated by the following example: After the child visited the doctor prescribed a course of injections. Experiments that track eye movements of people when they read such sentences show that there may be attachment preferences that operate independently of the context or meaning of the sentence. When the mental syntactic processor, or parser, receives the word doctor, it attaches it as a direct object of the verb visit in the subordinate clause. For this reason, subjects experience a strange perceptual effect when they encounter the verb prescribed. They must “change their minds” and attach the doctor as subject of the main clause instead. Sentences that induce this effect are called garden path sentences. The sentence presented at the beginning of this chapter, The horse raced past the barn fell, is also a garden path sentence. People naturally interpret raced as the main verb, when in fact the main verb is fell. The initial attachment choices that lead people astray may reflect general principles used by the parser to deal with syntactic ambiguity. Two such principles that have been suggested are known as minimal attachment and late closure. Minimal attachment says, “Build the simplest structure consistent with the grammar of the language.” In the string The horse raced . . . , the simpler structure is the one in which the horse is the subject and raced the main verb; the more complex structure is similar to The horse that was raced. . . . We can think of simple versus complex here in terms of the amount of structure in the syntactic tree for the sentence so far. 385 386 CHAPTER 8 Language Processing: Humans and Computers The second principle, late closure, says “Attach incoming material to the phrase that is currently being processed.” Late closure is exemplified in the following sentence: The doctor said the patient will die yesterday. Readers often experience a garden path effect at the end of this sentence because their initial inclination is to construe yesterday as modifying will die, which is semantically incongruous. Late closure explains this: The hearer encounters yesterday as he is processing the embedded clause, of which die is the main verb. On the other hand, the verb said, which yesterday is supposed to modify, is part of the root clause, which hasn’t been worked on for the past several words. The hearer must therefore backtrack to attach yesterday to the clause containing said. The comprehension of sentences depends on syntactic processing that uses the grammar in combination with special parsing principles to construct trees. Garden path sentences like those we have been discussing suggest that the mental parser sometimes makes a strong commitment to one of the possible parses. Whether it always does so, and whether this means it completely ignores all other parses, are open questions that are still being investigated by linguists. Another striking example of processing difficulty is a rewording of a Mother Goose poem. 
In its original form we have: This is the dog that worried the cat that killed the rat that ate the malt that lay in the house that Jack built. No problem understanding that? Now try this equivalent description: Jack built the house that the malt that the rat that the cat that the dog worried killed ate lay in. No way, right? Although the confusing sentence follows the rules of relative clause formation—you have little difficulty with the cat that the dog worried—it seems that once is enough; when you apply the same process twice, getting the rat that the cat that the dog worried killed, it becomes quite difficult to process. If we apply the process three times, as in the malt that the rat that the cat that the dog worried killed ate, all hope is lost. The difficulty in parsing this kind of sentence is related to memory constraints. In processing the sentence, you have to keep the malt in mind all the way until ate, but while doing that you have to keep the rat in mind all the way until killed, and while doing that. . . . It’s a form of structure juggling that is difficult to perform; we evidently don’t have enough memory capacity to keep track of all the necessary items. Though we have the competence to create such sentences—in fact, we have the competence to make a sentence with 10,000 words in it—performance limitations prevent creation of such monstrosities. Various experimental techniques are used to study sentence comprehension. In addition to the priming and reading tasks, in a shadowing task subjects are asked to repeat what they hear as rapidly as possible. Exceptionally good shadowers can follow what is being said only about a syllable behind (300 milliseconds). Most of us, however, shadow with a delay of 500 to 800 milliseconds, which is still quite fast. More interestingly, fast shadowers often correct speech errors or mispronun- The Human Mind at Work: Human Language Processing ciations unconsciously and add inflectional endings if they are absent. Even when they are told that the speech they are to shadow includes errors and they should repeat the errors, they are rarely able to do so. Corrections are more likely to occur when the target word is predictable from what has been said previously. These shadowing experiments make at least two points: (1) they support extremely rapid use of top-down information: differences in predictability have an effect within about one-quarter of a second; and (2) they show how fast the mental parser does grammatical analysis, because some of the errors that are corrected, such as missing agreement inflections, depend on successfully parsing the immediately preceding words. The ability to comprehend what is said to us is a complex psychological process involving the internal grammar, parsing principles such as minimal attachment and late closure, frequency factors, memory, and both linguistic and nonlinguistic context. Speech Production Speech was given to the ordinary sort of men, whereby to communicate their mind; but to wise men, whereby to conceal it. ROBERT SOUTH, sermon at Westminster Abbey, April 30, 1676 As we saw, the speech chain starts with a speaker who, through some complicated set of neuromuscular processes, produces an acoustic signal that represents a thought, idea, or message to be conveyed to a listener, who must then decode the signal to arrive at a similar message. It is more difficult to devise experiments that provide information about how the speaker proceeds than to do so for the listener’s side of the process. 
Much of the best information has come from observing and analyzing spontaneous speech. Planning Units “U.S. Acres” copyright © Paws. All rights reserved. We might suppose that speakers’ thoughts are simply translated into words one after the other via a semantic mapping process. Grammatical morphemes would be added as demanded by the syntactic rules of the language. The phonetic representation of each word in turn would then be mapped onto the neuromuscular commands to the articulators to produce the acoustic signal representing it. 387 388 CHAPTER 8 Language Processing: Humans and Computers We know, however, that this is not a true picture of speech production. Although sounds within words and words within sentences are linearly ordered, speech errors, or slips of the tongue (also discussed in chapter 5), show that the prearticulation stages involve units larger than the single phonemic segment or even the word, as illustrated by the “U.S. Acres” cartoon. That error is an example of a spoonerism, named after William Archibald Spooner, a distinguished dean of an Oxford college in the early 1900s who is reported to have referred to Queen Victoria as “That queer old dean” instead of “That dear old queen,” and berated his class of students by saying, “You have hissed my mystery lecture. You have tasted the whole worm” instead of the intended “You have missed my history lecture. You have wasted the whole term.” Indeed, speech errors show that features, segments, words, and phrases may be conceptualized well before they are uttered. This point is illustrated in the following examples of speech errors (the intended utterance is to the left of the arrow; the actual utterance, including the error, is to the right of the arrow): 1. 2. 3. 4. 5. 6. The hiring of minority faculty. → The firing of minority faculty. (The intended h is replaced by the f of faculty, which occurs later in the intended utterance.) ad hoc → odd hack (The vowels /æ/ of the first word and /a/ of the second are exchanged or reversed.) big and fat → pig and vat (The values of a single feature are switched: in big [+voiced] becomes [–voiced] and in fat [–voiced] becomes [+voiced].) There are many ministers in our church. → There are many churches in our minister. (The root morphemes minister and church are exchanged; the grammatical plural morpheme remains in its intended place in the phrase structure.) salute smartly → smart salutely (heard on All Things Considered, National Public Radio (NPR), May 17, 2007.) (The root morphemes are exchanged, but the -ly affix remains in place.) Seymour sliced the salami with a knife. → Seymour sliced a knife with the salami. (The entire noun phrases—article + noun—were exchanged.) In these errors, the intonation contour (primary stressed syllables and variations in pitch) remained the same as in the intended utterances, even when the words were rearranged. In the intended utterance of (6), the highest pitch would be on knife. In the misordered sentence, the highest pitch occurred on the second syllable of salami. The pitch rise and increased loudness are thus determined by the syntactic structure of the sentence and do not depend on the individual words. Syntactic structures exist independently of the words that occupy them, and intonation contours can be mapped onto those structures without being associated with particular words. Errors like those just cited are constrained in interesting ways. 
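The stranding pattern in error (4), in which the roots exchange while the plural stays in its slot and surfaces in its proper form, can be made concrete with a small sketch. The spell-out rule below is a deliberately crude stand-in for English plural allomorphy and is not the authors' model of production.

# A toy rendering of error (4): two root morphemes exchange, while the abstract
# plural marker stays in its syntactic slot and is spelled out afterwards in the
# form appropriate to whichever root ends up there.

def plural(root):
    """A crude English plural spell-out rule, for illustration only."""
    return root + ("es" if root.endswith(("s", "sh", "ch", "x", "z")) else "s")

def exchange_roots(slot1, slot2):
    """Each slot is (root, is_plural). Roots move; plural marking stays put."""
    (root1, plural1), (root2, plural2) = slot1, slot2
    word1 = plural(root2) if plural1 else root2    # slot 1 keeps its plural marking
    word2 = plural(root1) if plural2 else root1    # slot 2 keeps its lack of marking
    return word1, word2

# Intended: "There are many ministers in our church."
print(exchange_roots(("minister", True), ("church", False)))
# -> ('churches', 'minister'), i.e., "There are many churches in our minister."

Crucially, the output is churches rather than churchs: the plural is spelled out only after the roots have moved, which is the ordering point taken up directly below.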
Phonological errors involving segments or features, as in (1), (2), and (3), primarily occur in content words, and not in grammatical morphemes, showing the distinction between these lexical classes. In addition, while words and lexical morphemes The Human Mind at Work: Human Language Processing may be interchanged, grammatical morphemes may not be. We do not find errors like The boying are sings for The boys are singing. Typically, as example (4) illustrates, the inflectional endings are left behind when lexical morphemes switch and subsequently attach, in their proper phonological form, to the moved lexical morpheme. Errors like those in (1)–(6) show that speech production operates in real time with features, segments, morphemes, words, phrases—the very units that exist in the grammar. They also show that when we speak, words are chosen and sequenced ahead of when they are articulated. We do not select one word from our mental dictionary and say it, then select another word and say it. Lexical Selection Humpty Dumpty’s theory, of two meanings packed into one word like a portmanteau, seems to me the right explanation for all. For instance, take the two words “fuming” and “furious.” Make up your mind that you will say both words but leave it unsettled which you will say first. Now open your mouth and speak. If . . . you have that rarest of gifts, a perfectly balanced mind, you will say “frumious.” LEWIS CARROLL, Preface to The Hunting of the Snark, 1876 In chapter 3, word substitution errors were used to illustrate the semantic properties of words. Such substitutions are seldom random; they show that in our attempt to express our thoughts by speaking words in the lexicon, we may make an incorrect lexical selection based on partial similarity or relatedness of meanings. Blends, in which we produce part of one word and part of another, further illustrate the lexical selection process in speech production; we may select two or more words to express our thoughts and instead of deciding between them, we produce them as “portmanteaus,” as Humpty Dumpty calls them. Such blends are illustrated in the following errors: 1. 2. 3. 4. splinters/blisters edited/annotated a swinging/hip chick frown/scowl → → → → splisters editated a swip chick frowl These blend errors are typical in that the segments stay in the same position within the syllable as they were in the target words. This is not true in the previous example made up by Lewis Carroll: a much more likely blend of fuming and furious would be fumious or furing. Application and Misapplication of Rules I thought . . . four rules would be enough, provided that I made a firm and constant resolution not to fail even once in the observance of them. RENÉ DESCARTES, Discourse on Method, 1637 Spontaneous errors show that the rules of morphology and syntax, discussed in earlier chapters as part of competence, may also be applied (or misapplied) when we speak. It is difficult to see this process in normal error-free speech, but when 389 390 CHAPTER 8 Language Processing: Humans and Computers someone says groupment instead of grouping, ambigual instead of ambiguous, or bloodent instead of bloody, it shows that regular rules are applied to combine morphemes and form possible but nonexistent words. Inflectional rules also surface. The UCLA professor who said *We swimmed in the pool knows that the past tense of swim is swam, but he mistakenly applied the regular rule to an irregular form. 
Morphophonemic rules also appear to be performance rules as well as rules of competence. Consider the a/an alternation rule in English. Errors such as an istem for the intended a system or a burly bird for the intended an early bird show that when segmental misordering changes a noun beginning with a consonant to a noun beginning with a vowel, or vice versa, the indefinite article is also changed so that it conforms to the grammatical rule. Speakers hardly ever produce errors like *an burly bird or *a istem, which tells us something about the stages in the production of an utterance. The rule that determines whether a or an should be produced (an precedes a vowel; a precedes a consonant) must apply after the stage at which early has slipped to burly; that is, the stage at which /b/ has been anticipated. If a/an were selected first, the article would be an (or else the rule must reapply after the initial error has occurred).

Similarly, an error such as bin beg for the intended Big Ben shows that phonemes are misordered before allophonic rules apply. That is, the intended Big Ben phonetically is [bɪg bɛ̃n], with an oral [ɪ] before the [g] and a nasal [ɛ̃] before the [n]. In the utterance that was produced, however, the [ɪ] is nasalized because it now occurs before the misordered [n], whereas the intended nasal [ɛ̃] surfaces as oral [ɛ] before the misordered [g]. If the misordering occurred after the phonemes had undergone allophonic rules such as nasalization, the result would have been the phonetic utterance [bɪn bɛ̃g].

Nonlinguistic Influences

Our discussion of speech comprehension suggested that nonlinguistic factors can be involved in—and sometimes interfere with—linguistic processing. They also affect speech production. The individual who said He made hairlines instead of He made headlines was referring to a barber. The fact that the two compound nouns both start with the same sound, are composed of two syllables, have the same stress pattern, and contain identical second morphemes undoubtedly played a role in producing the error, but the relationship between hairlines and barbers may also have been a contributing factor. Similar comments apply to the congressional representative who said, "It can deliver a large payroll" instead of "It can deliver a large payload," in reference to a bill to fund the building of bomber aircraft.

Other errors show that thoughts unrelated in form to the intended utterance may influence what is said. One speaker said, "I've never heard of classes on April 9" instead of the intended on Good Friday. Good Friday fell on April 9 that year. The two phrases are not similar phonologically or morphologically, yet the nonlinguistic association seems to have influenced what was said.

Both normal conversational data and experimentally elicited data provide the psycholinguist with evidence for the construction of models of both speech production and comprehension, the beginning and ending points of the speech chain of communication.

Computer Processing of Human Language

Man is still the most extraordinary computer of all.

JOHN F. KENNEDY (1917–1963)

Until a few decades ago, language was strictly "humans only—others need not apply." Today, it is common for computers to process language. Computational linguistics is a subfield of linguistics and computer science that is concerned with the interactions of human language and computers.
Computational linguistics includes the analysis of written texts and spoken discourse, the translation of text and speech from one language into another, the use of human (not computer) languages for communication between computers and people, and the modeling and testing of linguistic theories.

Computers That Talk and Listen

The first generations of computers had received their inputs through glorified typewriter keyboards, and had replied through high-speed printers and visual displays. HAL could do this when necessary, but most of his communication with his shipmates was by means of the spoken words. Poole and Bowman could talk to HAL as if he were a human being, and he would reply in the perfect idiomatic English he had learned during the fleeting weeks of his electronic childhood.

ARTHUR C. CLARKE, 2001: A Space Odyssey, 1968

The ideal computer is multilingual; it should "speak" computer languages such as FORTRAN and Java, and human languages such as French and Japanese. For many purposes it would be helpful if we could communicate with computers as we communicate with other humans, through our native language. But as of the year 2010, the computers portrayed in films and on television as capable of speaking and understanding human language do not exist.

Computational linguistics is concerned with the interaction between language and computers in all dimensions, from phonetics to pragmatics, from producing speech to comprehending speech, from spoken (or signed) utterances to written forms. Computational phonetics and phonology is concerned with processing speech. Its main goals are converting speech to text on the comprehension side, and text to speech on the production side. The areas of computational morphology, computational syntax, computational semantics, and computational pragmatics, discussed below, are concerned with higher levels of linguistic processing.

Computational Phonetics and Phonology

The two sides of computational phonetics and phonology are speech recognition and speech synthesis. Speech recognition is the process of analyzing the speech signal into its component phones and phonemes and producing, in effect, a phonetic transcription of the speech. Further processing may convert the transcription into ordinary text for output on a screen, or into words and phrases for further processing, as in a speech understanding application. (Note: Speech recognition is not the same as speech understanding, as is commonly thought. Rather, speech recognition is a necessary precursor to the far more complex process of comprehension.) Speech synthesis is the process of creating electronic signals that simulate the phones and prosodic features of speech and assembling them into words and phrases for output to an electronic speaker, or for further processing as in a speech generation application.

Speech Recognition

When Frederic was a little lad he proved so brave and daring,
His father thought he'd 'prentice him to some career seafaring.
I was, alas! his nurs'rymaid, and so it fell to my lot
To take and bind the promising boy apprentice to a pilot—
A life not bad for a hardy lad, though surely not a high lot,
Though I'm a nurse, you might do worse than make your boy a pilot.
I was a stupid nurs'rymaid, on breakers always steering,
And I did not catch the word aright, through being hard of hearing;
Mistaking my instructions, which within my brain did gyrate
I took and bound this promising boy apprentice to a pirate.
GILBERT AND SULLIVAN, The Pirates of Penzance, 1879

When you listen to someone speak a foreign language, you notice that it is continuous except for breath pauses, and that it is difficult to segment the speech into sounds and words. It's all run together. The computer faces this situation when it tries to do speech recognition.

Early speech recognizers were not designed to "hear" individual sounds. Rather, the computers were programmed to store the acoustic patterns of entire words or even phrases in their memories, and then further instructed to look for those patterns in any subsequent speech they were asked to recognize. The computers had a fixed, small vocabulary. Moreover, they best recognized the speech of the same person who provided the original word patterns. They would have trouble "understanding" a different speaker, and if a word outside the vocabulary was uttered, the computers were clueless. If the words were run together, recognition accuracy also fell, and if the words were not fully pronounced, say missipi for Mississippi, failure generally ensued. Coarticulation effects also muddied the waters. The computers might have [hɪz] as their representation of the word his, but in the sequence his soap, pronounced [hɪs sop], the his is pronounced [hɪs] with a voiceless [s]. In addition, the vocabulary best consisted of words that were not too similar phonetically, avoiding confusion between words like pilot and pirate, which might, as with the young lad in the song, have grave consequences.

Today, many interactive phone systems have a speech recognition component. They will invite you to "press 1 or say 'yes'; press 2 or say 'no,'" or perhaps offer a menu of choices triggered by one or more spoken word responses. Sophisticated mobile phones allow their owners to preprogram complete phrases such as "call my office" or "display the calendar." These systems have very small vocabularies and so can search the speech signal for anything resembling prestored acoustic patterns of a keyword and generally get it right.

The more sophisticated speech recognizers that can be purchased for use on a personal computer have much larger vocabularies, often the size of an abridged dictionary. To be highly accurate they must be trained to the voice of a specific person, and they must be able to detect individual phones in the speech signal. The training consists of the user making multiple utterances known in advance to the computer, which extracts the acoustic patterns of each phone typical of that user. Later the computer uses those patterns to aid in the recognition process.

Because no two utterances are ever identical, and because there is generally noise (nonspeech sounds) in the signal, the matching process that underlies speech recognition is statistical. On the phonetic level, the computations may classify some stretch of sound in the input as [l] with 65 percent confidence and [r] with 35 percent confidence. Other factors may be used to help the decision. For example, if the computer is confident that the preceding sound is [d] and begins the word, then [r] is the likely candidate, because no words begin with /dl/ in English. The system takes advantage of its (i.e., the programmer's) knowledge of sequential constraints (see chapter 5). If, on the other hand, the sound occurs at the beginning of the word, further information is needed to determine whether it is the phoneme /l/ or /r/.
If the following sounds are [up], then /l/ is the one, because loop is a word but *roop is not. If the computer is unable to decide, it may offer a list of choices such as late and rate and ask the person using the system to decide.

Advanced speech recognizers may utilize syntactic rules to further disambiguate an utterance. If the late/rate syntactic context is "It's too ___," the choice is late, because too may be followed by an adjective but not by a noun or verb. Statistical disambiguation may also be used. For example, in a standard corpus of English there will be far more occurrences of "It's too late . . ." than there might be of, say, "It's to rate . . ." A statistical model can be built based on such facts that would lead the machine to give weight to the choice of late rather than rate.

Even these modern systems, with all the computing power behind them, are brittle. They break when circumstances become unfavorable. If the user speaks rapidly with lots of coarticulation (whatcha for what are you), and there is a lot of background noise, recognition accuracy plummets. People do better. If someone mumbles, you can generally make out what they are saying because you have context to help you. In a noisy setting such as a party, you are able to converse with your dance partner despite the background noise because your brain has the ability to filter out irrelevant sounds and zero in on the voice of a single speaker. This effect is so striking it is given a name: the cocktail party effect. Computers are not nearly as capable as people at coping with noise, although research directed at the problem is beginning to show positive results.
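To make this decision process concrete, here is a toy sketch in Python of how a recognizer might combine the three kinds of knowledge just described: acoustic confidence, phonotactic (sequential) constraints, and word-sequence statistics. The vocabulary, confidence values, and bigram counts are invented for the illustration; real systems estimate such numbers from hours of recorded speech and millions of words of text.

```python
# Toy illustration (not a real recognizer): choosing between /l/ and /r/ by
# combining acoustic confidence, phonotactic constraints, and corpus counts.
# For simplicity the "phones" here are just letters, and all data are invented.

acoustic = {"l": 0.65, "r": 0.35}      # classifier's confidence in the unclear phone
lexicon = {"loop", "late", "rate"}     # tiny vocabulary of known words
bigram_counts = {("too", "late"): 980, ("to", "rate"): 12}   # hypothetical corpus counts

def phonotactic_ok(prev_phone, candidate):
    """English words do not begin with /dl/, so rule that sequence out."""
    return not (prev_phone == "d" and candidate == "l")

def candidate_words(prev_phone, rest_of_word):
    """Keep only candidates that survive phonotactics and form a real word."""
    scores = {}
    for phone, confidence in acoustic.items():
        if not phonotactic_ok(prev_phone, phone):
            continue                              # sequential constraint eliminates it
        word = phone + rest_of_word
        if word in lexicon:                       # lexical lookup: is it a word at all?
            scores[word] = confidence
    return scores

def choose_word(previous_word, candidates):
    """Weight acoustic confidence by how often the word follows the previous one."""
    best, best_score = None, 0.0
    for word, confidence in candidates.items():
        count = bigram_counts.get((previous_word, word), 1)   # smooth unseen pairs
        if confidence * count > best_score:
            best, best_score = word, confidence * count
    return best

# The loop/*roop case: only one candidate survives the lexical check.
print(candidate_words(prev_phone="#", rest_of_word="oop"))    # {'loop': 0.65}

# "It's too ___": both late and rate are words, so the statistics decide.
candidates = candidate_words(prev_phone="u", rest_of_word="ate")
print(choose_word("too", candidates))                         # late
```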
Speech Synthesis

Machines which, with more or less success, imitate human speech, are the most difficult to construct, so many are the agencies engaged in uttering even a single word—so many are the inflections and variations of tone and articulation, that the mechanician finds his ingenuity taxed to the utmost to imitate them.

SCIENTIFIC AMERICAN, January 14, 1871

Early efforts toward building "talking machines" were concerned with machines that could produce sounds that imitated human speech. In 1779, Christian Gottlieb Kratzenstein won a prize for building such a machine. It was "an instrument constructed like the vox humana pipes of an organ which . . . accurately express the sounds of the vowels." In building this machine he also answered a question posed by the Imperial Academy of St. Petersburg, Russia: "What is the nature and character of the sounds of the vowels a, e, i, o, u [that make them] different from one another?" Kratzenstein constructed a set of "acoustic resonators" similar to the shapes of the mouth when these vowels are articulated and set them resonating by a vibrating reed that produced pulses of air similar to those coming from the lungs through the vibrating vocal cords.

Nearly a century later, a young Alexander Graham Bell, always fascinated with speech and its production, fabricated a "talking head" from a cast of a human skull. He used various materials to form the velum, palate, teeth, lips, tongue, cheeks, and so on, and installed a metal larynx with vocal cords made by stretching a slotted piece of rubber. A keyboard control system manipulated all the parts with an intricate set of levers. This ingenious machine produced vowel sounds, some nasal sounds, and even a few short combinations of sounds.

With the advances in the acoustic theory of speech production and the technological developments in electronics, machine production of speech sounds has made great progress. We no longer have to build physical models of the speech-producing mechanism; we can now imitate the process by producing the physical signals electronically. Speech sounds can be reduced to a small number of acoustic components. One way to produce synthetic speech is to mix these components together in the proper proportions, depending on the speech sounds to be imitated. It is rather like following a recipe for making soup, which might read: "Take two quarts of water, add one onion, three carrots, a potato, a teaspoon of salt, a pinch of pepper, and stir it all together." This method of producing synthetic speech would include a recipe that might read:

1. Start with a tone at the same frequency as the vibrating vocal cords (higher if a woman's or child's voice is being synthesized, lower for a man's).
2. Emphasize the harmonics corresponding to the formants required for a particular vowel, liquid, or nasal quality.
3. Add hissing or buzzing for fricatives.
4. Add nasal resonances for nasal sounds.
5. Temporarily cut off sound to produce stops and affricates.
6. And so on . . .

All of these ingredients are blended electronically, using computers to produce highly intelligible, more or less natural-sounding speech. Because item (2) is central to the process, this method of speech synthesis is called formant synthesis. Most synthetic speech still has a machinelike quality or accent, caused by small inaccuracies in simulation and by the fact that suprasegmental factors such as changing intonation and stress patterns are not yet fully understood. If not correct, such factors may be more confusing than mispronounced phonemes. Currently, the chief area of research in speech synthesis is concerned precisely with discovering and programming the rules of rhythm and timing that native speakers apply. Still, speech synthesizers today are no harder to understand than a person speaking a dialect slightly different from one's own, and when the context is sufficiently narrow, as in a synthetic voice reading a weather report (a common application), there are no problems.

An alternative approach to formant synthesis is concatenative synthesis. The basic units of concatenative synthesis are recorded units such as phones, diphones, syllables, morphemes, words, phrases, and sentences. A diphone is a transitional unit comprising the last portion of one phone plus the first portion of another, used to smooth coarticulation effects. There may be hundreds or even thousands of these little acoustic pieces. The recordings are made by human speakers. The synthesis aspect lies in assembling the individual units to form the desired computer-spoken utterance. This would not be possible without the increased computational power now available, and today's synthesizers are generally of this type. The challenge in concatenative synthesis is achieving the fluidity of human speech. This requires electronic fine-tuning of speech prosody, that is, the duration, intonation, pitch, and loudness on which naturalness is based. At this time much concatenative speech sounds stilted, as the units do not always fit together seamlessly, and the perfection of prosodic effects remains elusive.
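As a rough illustration of steps (1) and (2) of the recipe, the following Python sketch generates half a second of a vowel-like sound by passing a buzz at the fundamental frequency through simple second-order resonators tuned to formant frequencies. The formant values are approximate textbook figures for the vowel [a], and everything else is drastically simplified; a real formant synthesizer controls many more parameters, including the noise sources and silences of steps (3) through (5).

```python
# A toy formant-synthesis sketch (illustrative only): a glottal buzz at the
# fundamental frequency is filtered through resonators tuned to the first
# three formants of a vowel. Formant values are rough approximations for [a].

import numpy as np
from scipy.signal import lfilter
from scipy.io import wavfile

fs = 16000          # sampling rate (Hz)
f0 = 120            # fundamental frequency of the simulated vocal cords (Hz)
duration = 0.5      # seconds of sound to generate

# Step 1 of the recipe: a tone at the vocal-cord frequency,
# approximated here as a train of impulses spaced 1/f0 seconds apart.
n = int(fs * duration)
source = np.zeros(n)
source[::fs // f0] = 1.0

def resonator(signal, freq, bandwidth=80.0):
    """Second-order resonator that emphasizes harmonics near `freq` (step 2)."""
    r = np.exp(-np.pi * bandwidth / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2.0 * r * np.cos(theta), r * r]   # feedback (pole) coefficients
    b = [sum(a)]                                 # scaled for unit gain at 0 Hz
    return lfilter(b, a, signal)

speech = source
for formant in (730, 1090, 2440):    # approximate F1, F2, F3 for the vowel [a]
    speech = resonator(speech, formant)

# Normalize and save; the result is a buzzy, vowel-like sound.
speech = speech / np.max(np.abs(speech))
wavfile.write("vowel_a.wav", fs, (speech * 32767).astype(np.int16))
```

Changing the formant frequencies changes the vowel; adding noise sources and switching the filters on and off over time would move the sketch toward the later steps of the recipe.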
Text-to-Speech

Speak clearly, if you speak at all; carve every word before you let it fall.

OLIVER WENDELL HOLMES, SR. (1809–1894)

To provide input to the speech synthesizer, a computer program called text-to-speech converts written text into the basic units of the synthesizer. For formant synthesizers, the text-to-speech process translates the input text into a phonetic representation. This task is like the several exercises at the end of chapter 4, in which we asked you for a phonetic transcription of written words. Naturally, the text-to-speech process precedes the electronic conversion to sound. For concatenative synthesizers, the text-to-speech process translates the input text into a representation based on whatever units are to be concatenated. For a syllable-based synthesizer, the text-to-speech program would take The number is 5557766 as input and produce [θə] [nʌ̃m] [bər] [ɪz] [faɪv] [faɪv] [faɪv] [sɛv] [ə̃n] [sɛv] [ə̃n] [sɪks] [sɪks] as output. The "synthesizer" (a computer program) would look up the various syllables in its memory and concatenate them, with further electronic processing supplied for realistic prosody and to smooth over the syllable boundaries.

The difficulties of text-to-speech are legion. We will mention two. The first is the problem of words spelled alike but pronounced differently. Read may be pronounced as [rɛd] in She has read the book, but as [riːd] in She will read the book. How does the text-to-speech system know which is which? Make no mistake about the answer; the machine must have structural knowledge of the sentence to make the correct choice, just as humans must. Unstructured, linear knowledge will not suffice. For example, we might program the text-to-speech system to pronounce read as [rɛd] when the previous word is a form of have, but this approach fails in several ways. First, the have governs the pronunciation at a distance, both from the left and from the right, as in Has the girl with the flaxen hair read the book? and Oh, read a lot of books, has he! The underlying structure needs to be known, namely that has is an auxiliary verb for the main verb to read. If we try the strategy "pronounce read as [rɛd] whenever have is 'in the vicinity,'" we would induce an error in sentences like The teacher said to have the girl read the book by tomorrow, where [riːd] is the required pronunciation. Even worse for the linear analysis are sentences like Which girl did the teacher have read from the book? where the words have read occur next to each other, but the correct version is [riːd]. Of course you know that this occurrence of read is [riːd], because you know English and therefore know English syntactic structures. Only through structural knowledge can the "spelled-the-same-pronounced-differently" problem be approached effectively. We'll learn more about this in the section on computational syntax later in the chapter.

The second difficulty is inconsistent spelling, which is well illustrated by the first two lines of a longer poem:

I take it you already know
Of tough and bough and cough and dough

Each of the ough words is phonetically different, but it is difficult to find rules that dictate when gh should be [f] and when it is silent, or how to pronounce the ou. Modern computers have sufficient storage capacity to store the recorded pronunciation of every word in the language, its alternative pronunciations, and its likely pronunciations, which may be determined by an extensive statistical analysis. This list may include acronyms, abbreviations, foreign words, proper names, numbers including fractions, and special symbols such as #, &, *, %, and so on. Such a list is helpful—it is like memorizing rather than figuring out the pronunciations—and encompasses a large percentage of items, including the ough words. This is the basis of word-level concatenative synthesis. However, the list can never be complete. New words, new word forms, proper names, abbreviations, and acronyms are constantly being added to the language and cannot be anticipated. The text-to-speech system requires conversion rules for items not in its dictionary, and these must be output by a formant synthesizer or a concatenative synthesizer based on units smaller than the word if they are to be spoken. The challenges here are similar to those faced when learning to read aloud, which are considerable and, when it comes to the pronunciation of proper names or foreign words, utterly daunting.
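The dictionary-plus-fallback idea can be sketched in a few lines of Python. The unit inventory, exception list, and transcriptions below are invented for the example; a real system would store recorded audio for each unit and would need letter-to-sound rules, plus syntactic analysis for homographs like read, to handle anything not covered by its lists.

```python
# A toy text-to-speech front end for a syllable-based concatenative
# synthesizer, in the spirit of the telephone-number example above.
# The tiny dictionaries here are invented for illustration.

exceptions = {                   # memorized, irregularly spelled words
    "tough": ["tʌf"], "bough": ["baʊ"], "cough": ["kɔf"], "dough": ["do"],
}

digit_units = {
    "5": ["faɪv"], "6": ["sɪks"], "7": ["sɛv", "ə̃n"],
}

word_units = {                   # ordinary words broken into syllable units
    "the": ["θə"], "number": ["nʌ̃m", "bər"], "is": ["ɪz"],
}

def to_units(text):
    """Translate text into the list of stored units to be concatenated."""
    units = []
    for token in text.lower().split():
        if token in exceptions:
            units.extend(exceptions[token])
        elif token in word_units:
            units.extend(word_units[token])
        elif token.isdigit():
            for digit in token:              # read a number out digit by digit
                units.extend(digit_units[digit])
        else:
            # Not in any dictionary: a real system falls back on
            # letter-to-sound conversion rules here (not shown).
            units.append("<rule-based:" + token + ">")
    return units

print(to_units("The number is 5557766"))
# ['θə', 'nʌ̃m', 'bər', 'ɪz', 'faɪv', 'faɪv', 'faɪv', 'sɛv', 'ə̃n', 'sɛv', 'ə̃n', 'sɪks', 'sɪks']
```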
Speech synthesis has important applications. It benefits visually impaired persons in the form of "reading machines," now commercially available, and vocal output of what is displayed on a computer screen. Mute patients with laryngectomies or other medical conditions that prevent normal speech can use synthesizers to express themselves. For example, researchers at North Carolina State University developed a communication system for an individual with so severe a form of multiple sclerosis that he could utter no sound and was totally paralyzed except for nodding his head. Using a head movement for "yes" and its absence for "no," this individual could select words displayed on a computer screen and assemble sentences to express his thoughts, which were then spoken by a synthesizer.

Computational Morphology

If we wish our computers to speak and understand grammatical English, we must teach them morphology (see chapter 1). We can't have machines going around saying "*The cat is sit on the mat" or "*My five horse be in the barn." Similarly, if computers are to understand English, they need to know that sitting contains two morphemes, sit + ing, whereas spring is one morpheme, and reinvent is two, but they are re + invent, not rein + vent. The processing of word structures by computers is computational morphology. The computer needs to understand the structure of words both to understand the words and to use the words in a grammatically correct way.

To process words, the computer is programmed to look for roots and affixes. In some cases this process is straightforward. Books is easily broken into book + s, walking into walk + ing, fondness into fond + ness, and unhappy into un + happy. These cases, and many like them, are the easy ones, because the spelling is well behaved and the morphological processes are general. Other words are more difficult, such as profundity = profound + ity, galactic = galaxy + ic, and democracy = democrat + y.

One approach is to place all the morphological forms of all the words in the language into the computer's dictionary. Although today's computers can handle such a high computational load—many millions of forms—there would still be problems because of the generality of the processes. As soon as a new word enters the language, as fax did some years ago, a whole family of words is possible: faxes, fax's, faxing, faxed, refax, and faxable; and many others are not possible: *faxify, *exfax, *disfax, and so on. The dictionary would be continually out of date.
Moreover, not all forms are predictable. Although heaten is not a dictionary word, if you hear it you know, and the computer should know, that it means "to make hot." Likewise, compounding is a general process, and it would be impossible to predict all possible compounds of English. When podcast was coined from pod + cast, no computer could have had it in its dictionary.

The computer needs the ability to break words correctly into their component morphemes, and to understand each morpheme, its effect on the word's meaning, and where the word can be placed in a sentence. Computational morphology, then, is a host of interwoven rules, exceptions, and word/morpheme forms, all with the purpose of comprehending the internal structure of words.

One method of morphological analysis is called stemming. Here, affixes are detected and repeatedly stripped from the beginnings and ends of words, with each step checked against the computer's dictionary. For example, if the word to be analyzed were befriended, the computer would recognize and verify the prefix be- and the suffix -ed, leaving behind the root friend, all of which would be verified in a dictionary of words and morphemes. A more complex word such as unsystematically would be repeatedly broken down into -ly (an adverb-former), -al and -atic (both adjective-formers), system (a root word), and un- (a negative morpheme). Difficult problems always remain. If the computer sees resent, how does it know whether it is the monomorphemic resent (with a /z/) or re + sent? A broader context is needed, and that is the topic of the next section.
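A bare-bones version of stemming can be sketched in Python. The affix lists and root dictionary below are toy examples, and the procedure is deliberately greedy, so on its own it cannot resolve ambiguities such as resent versus re + sent.

```python
# A minimal stemming sketch: strip known prefixes and suffixes, step by step,
# until a root listed in the dictionary remains. All lists are toy examples.

prefixes = ["un", "be", "re"]
suffixes = ["ed", "ing", "ly", "al", "atic", "ness", "s"]
roots = {"friend", "happy", "system", "walk", "fond"}

def stem(word):
    """Return (prefixes, root, suffixes) if the word reduces to a known root."""
    found_prefixes, found_suffixes = [], []
    changed = True
    while changed and word not in roots:
        changed = False
        for p in prefixes:                      # peel at most one prefix per pass
            if word.startswith(p) and len(word) > len(p):
                found_prefixes.append(p)
                word = word[len(p):]
                changed = True
                break
        for s in suffixes:                      # peel at most one suffix per pass
            if word.endswith(s) and len(word) > len(s):
                found_suffixes.insert(0, s)
                word = word[:-len(s)]
                changed = True
                break
    return (found_prefixes, word, found_suffixes) if word in roots else None

print(stem("befriended"))        # (['be'], 'friend', ['ed'])
print(stem("unsystematically"))  # (['un'], 'system', ['atic', 'al', 'ly'])
```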
Computational Syntax

Good order is the foundation of all things.

EDMUND BURKE, Reflections on the Revolution in France, 1790

In the beginning, there was ELIZA.

patient: Men are all alike.
eliza: In what way?
patient: They're always bugging me about something or other.
eliza: Can you think of a specific example?
patient: Well, my boyfriend made me come here.
eliza: Your boyfriend made you come here?
patient: He says I'm depressed much of the time.
eliza: I am sorry to hear you are depressed.

ELIZA is a computer program designed to simulate a nondirective psychiatrist talking to a patient. ELIZA has a collection of prepared phrases stored in memory, such as "In what way?" or "Tell me more," that help keep the conversation going when it doesn't "understand" the patient. When ELIZA recognizes certain syntactic patterns, however, it turns them around to make a sensible response, as when it echoes "Your boyfriend made you come here?" ELIZA was programmed with a small bit of syntactic knowledge, which allowed it to make grammatical utterances based on the patient's statements, but no semantic knowledge at all. If a patient says "My sniggle is untherible," ELIZA would respond with "How long has your sniggle been untherible?" ELIZA was designed by the MIT scientist Joseph Weizenbaum (1923–2008) in the 1960s. Though its syntactic sophistication is low by today's standards, ELIZA is one of the earliest attempts at human-machine communication that took the syntax of English into account.

Twenty-five years after ELIZA, computer scientists at Duke University and North Carolina State University programmed the "Circuit Fix-It Shop," a system designed to help a technician repair a faulty circuit. Unlike ELIZA, which accepted only typed input and produced printed output, the Circuit Fix-It Shop accepted speech input and spoke its responses. It had its own syntax and was capable of both understanding and speaking syntactically complex utterances, as the following excerpt from a repair session shows:

computer: This is the Circuit Fix-It Shop. How may I help you?
technician: I need to fix the RS111 circuit.
computer: I am familiar with that circuit. The LED is supposed to be displaying alternately blinking one and seven.
technician: Where is the LED?
computer: In the middle right of the blue region is the LED.
technician: No light.
computer: What is the switch at when the LED is off?
technician: The switch is down.
computer: You put up the switch.

Once the computer has understood the technician's utterance, it uses rules of semantics and syntax, together with a lexicon of words and morphemes, to formulate a grammatical, sensible response, which it then speaks through its synthesizer. The rules in many cases are similar to the phrase structure rules seen in chapter 2, such as "Form a noun phrase from an article followed by a noun." It also has semantic rules that tell it to use the with the word switch, since its "world knowledge" tells it that there is only one switch in that particular circuit. Computational linguists at East Carolina University continue to broaden and improve this prototype of a natural language, interactive repair manual.

To understand a sentence, you must know its syntactic structure. If you didn't know the structure of dogs that chase cats chase birds, you wouldn't know whether dogs or cats chase birds. Similarly, machines that understand language must also determine syntactic structure. A parser is a computer program that attempts to replicate what we have been calling the "mental parser." Like the mental parser, the parser in a computer uses a grammar to assign a phrase structure to a string of words. Parsers may use a phrase structure grammar and lexicon similar to those discussed in chapter 2. For example, a parser may contain the following rules: S → NP VP, NP → Det N, and so forth.

Suppose the machine is asked to parse The child found the kittens. A top-down parser proceeds by first consulting the grammar rules and then examining the input string to see whether the first word could begin an S. If the input string begins with a Det, as in the example, the search is successful, and the parser continues by looking for an N, and then a VP. If the input string happened to be child found the kittens, the parser would be unable to assign it a structure because it doesn't begin with a determiner, which this grammar requires at the start of an S. It would report that the sentence is ungrammatical.

A bottom-up parser takes the opposite tack. It looks first at the input string and finds a Det (the) followed by an N (child). The rules tell it that this phrase is an NP. It would continue to process found, the, and kittens to construct a VP, and would finally combine the NP and VP to make an S.

Parsers may run into difficulties with words that belong to several syntactic categories. In a sentence like The little orange rabbit hopped, the parser might mistakenly assume orange is a noun. Later, when the error is apparent, the parser backtracks to the decision point and retries with orange as an adjective. Such a strategy works on confusing but grammatical sentences like The old man the boats and The Russian women loved died, which cause a garden path effect for human (mental) parsers.
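The following Python sketch is a toy top-down parser of this kind: it tries the rules of a small phrase structure grammar against the input and backtracks whenever a choice, including a lexical-category choice such as orange as noun versus adjective, leads to a dead end. The grammar and lexicon are invented miniatures (with an extra Nom level so that more than one adjective can precede a noun); they are not the chapter's exact rules.

```python
# A toy top-down, backtracking parser. Grammar and lexicon are tiny
# illustrations, not a serious grammar of English.

grammar = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "Nom"]],
    "Nom": [["Adj", "Nom"], ["N"]],        # allows any number of adjectives
    "VP":  [["V", "NP"], ["V"]],
}

lexicon = {                                # some words have two categories
    "the": ["Det"], "little": ["Adj"], "orange": ["Adj", "N"],
    "rabbit": ["N"], "child": ["N"], "kittens": ["N"],
    "hopped": ["V"], "found": ["V"],
}

def parse(symbol, words, i):
    """Yield (tree, next_position) for every way `symbol` can cover words[i:]."""
    if symbol in grammar:                            # a phrase: try each rule in turn
        for rule in grammar[symbol]:
            for subtrees, j in expand(rule, words, i):
                yield (symbol, subtrees), j
    elif i < len(words) and symbol in lexicon.get(words[i], []):
        yield (symbol, words[i]), i + 1              # a word of the right category

def expand(rule, words, i):
    """Match the sequence of symbols in `rule` against the words starting at i."""
    if not rule:
        yield [], i
    else:
        for tree, j in parse(rule[0], words, i):
            for rest, k in expand(rule[1:], words, j):
                yield [tree] + rest, k

def parse_sentence(sentence):
    words = sentence.lower().split()
    # Keep only analyses that consume the entire input string.
    return [tree for tree, end in parse("S", words, 0) if end == len(words)]

print(parse_sentence("The little orange rabbit hopped"))   # one parse: orange as Adj
print(parse_sentence("The child found the kittens"))       # one parse
```

Because the functions enumerate every possible analysis, collecting all of their results also amounts to the "try every parse" strategy described next.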
Another way to handle such ambiguous situations is for the computer to try every parse that the grammar allows in parallel. Only parses that finish are accepted as valid. In such a strategy, two parses of The Russian women loved died would be explored simultaneously: Russian would be an adjective in one and a noun in the other. The adjective parse would get as far as The Russian women loved but then fail since died cannot occur in that position of a verb phrase. (The parser must not allow ungrammatical sentences such as *The y