Language Development in Deaf Children: Sign Language and Cochlear Implants

Deafness and other forms of hearing loss represent the fifth leading cause of disability in the world; the 2013 Global Burden of Disease study (Vos et al., 2015) estimated the total prevalence of hearing loss worldwide at over 1.2 billion people, with more than 400 million—over 5% of the world’s population—having moderate or greater loss (defined as an elevation in hearing thresholds of 35 dB or greater, the level at which hearing loss is considered “disabling”). In the United States, an estimated 2–3 out of every 1,000 children are born with detectable hearing loss in one or both ears, and hearing loss is one of the three most common congenital disorders (Vohr, 2003). The prevalence of hearing loss is higher in other parts of the world, with the highest incidence in South Asia, Asia Pacific, and Sub-Saharan Africa (Vos et al., 2015).

The consequences of hearing loss can be quite severe. Because language is used both in education and for self-regulation (“self-talk”), children with hearing loss are more likely to have delays in language development, reading, and social development, as well as behavioral problems—especially if they do not have the opportunity to learn and use sign language, or to have their hearing restored. These consequences can be far-reaching and lifelong: on average, deaf people have lower levels of educational attainment, lower incomes, and less likelihood of holding a managerial or professional occupation (even when adjusted for education level), and they report lower levels of job satisfaction along with higher incidences of work stress and depression (Rydberg, Gellerstedt, & Danermark, 2009; Shein, 2003). People who become deaf as adults experience difficulties in language comprehension and in normal activities of daily living, as well as participation restrictions, social isolation, and high rates of depression.

Clinically, “deafness” encompasses moderate (40–60 dB loss), severe (60–80 dB), and profound (>80 dB) hearing loss. In practice, these levels are distinguished clinically because they affect activities of daily living to different degrees and have different standards of treatment. People with moderate or severe impairment are generally still able to detect environmental noises, and are at least aware of speech. However, even with moderate loss, speech perception can be challenging, and worse in environments with background noise (such as outdoors, in a room with multiple people talking, or with background music, radio, or TV). With severe impairment, speech perception may be difficult or impossible, even under ideal listening conditions. It is important to recognize as well that hearing loss is often asymmetric (worse in one ear); severity is generally rated on the better ear, but one can expect worse functional hearing in someone with profound loss in one ear and moderate loss in the other than in someone with moderate loss in both ears.
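To make the grading scheme concrete, here is a minimal Python sketch that labels a loss from the better-ear threshold, following the bands above; the function name, the treatment of boundary values, and the catch-all category below 40 dB are illustrative assumptions rather than a clinical tool.

```python
def classify_hearing_loss(better_ear_threshold_db):
    """Label a hearing loss using the clinical bands described above.

    Severity is conventionally rated on the better ear, so the input
    is the threshold shift (dB HL) for that ear.
    """
    if better_ear_threshold_db > 80:
        return "profound"
    if better_ear_threshold_db > 60:
        return "severe"
    if better_ear_threshold_db >= 40:
        return "moderate"
    return "mild or none"  # note: the GBD counts >= 35 dB as "disabling"

print(classify_hearing_loss(72))  # -> severe
```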

Numerous options are available for persons with hearing loss, including learning and using sign language, hearing aids, and cochlear implants (CIs). Hearing aids are devices that sit on or around the ear and amplify incoming sounds. In contrast, CIs involve a device that is implanted in the base of the skull, with a wire that is threaded into the cochlea (see Figure 14.1). A microphone attaches to the outside of the head via a magnet, and transmits sound across the skull to the CI, which then stimulates the cochlea via electrodes on the inserted wire. CIs take advantage of the place-coding organization of the cochlea, whereby different auditory frequencies are received through electrical stimulation at different points along the cochlea. These rehabilitative options are not mutually exclusive; for example, many deaf people have hearing aids or CIs, and CI users may have a CI on one side and a hearing aid on the other. Profoundly deaf children are routinely referred for cochlear implantation, and in most cases this is effective in bringing hearing into the normal range. However, even with hearing restored to audiologically “normal” levels, many children with CIs show functional deficits (e.g., speech perception in noisy environments; reading) that persist into adulthood. As well, these treatment options have been the subject of significant debate in cultural, educational, and health-care spheres. Although these debates encompass arguments on both empirical and cultural grounds, a solid understanding of how deafness affects brain organization and language abilities is central to better understanding how society can best address the needs of people with severe hearing loss.
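The place-coding principle that CIs exploit can be made concrete with the Greenwood (1990) place-frequency function for the human cochlea. The sketch below, a simplified illustration rather than any manufacturer's processing strategy, inverts that function to assign an input frequency to the nearest of a set of hypothetical, equally spaced electrode contacts; real arrays are inserted only partway toward the apex, so the full-length coverage assumed here is a simplification.

```python
import math

# Greenwood (1990) place-frequency map for the human cochlea:
#   f(x) = A * (10**(a*x) - k), where x is the fractional distance
#   along the basilar membrane from apex (x = 0) to base (x = 1).
A, a, k = 165.4, 2.1, 0.88  # standard human parameter fit

def cochlear_place(freq_hz):
    """Invert the Greenwood map: the place that responds best to freq_hz."""
    return math.log10(freq_hz / A + k) / a

def nearest_electrode(freq_hz, n_electrodes=16):
    """Assign a frequency to the nearest of n equally spaced contacts."""
    return round(cochlear_place(freq_hz) * (n_electrodes - 1))

for f in (250, 1000, 4000):
    print(f, "Hz -> electrode", nearest_electrode(f))  # 3, 6, 10
```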

This chapter is focused on language development in deaf children, with a particular emphasis on children who use CIs and the factors that influence language outcomes. After briefly reviewing sign language (see also Corina & Lawyer, Chapter 16 in this volume), we first consider the effects of auditory deprivation on neurodevelopment, followed by a description of language outcomes in deaf children. This will be followed by a review of the neuroimaging literature on neuroplastic reorganization in deafness and after hearing restoration. We then discuss the major predictors of language outcomes in children with CIs. The chapter concludes with a summary and recommendations for practice and future research.

Figure 14.1.

A cochlear implant. The external microphone unit is shown at the top left; it includes a piece that fits over the outer ear, as well as a magnetic transmitter that connects to the implanted unit. The implant itself is shown above and behind the ear, partially occluded by the external magnet. Running from the implant is a wire that is threaded into the cochlea (the shell-shaped structure at the center-right of the figure) and delivers the sound input as electrical stimulation to the cochlea. Signals are then conducted via the auditory nerve (right side of figure) to the brain.

Source: Image courtesy of MED-EL.

Throughout the world, natural sign languages—which have all the linguistic complexity and characteristics of spoken languages—have evolved naturally among communities of deaf people. Deaf children exposed to sign language from birth develop normal language abilities, and indeed children who learn sign language from birth later show far greater mastery of spoken language (oral and written) than deaf children not exposed to sign or exposed only later in life (Mayberry, Lock, & Kazmi, 2002). However, only an estimated 4%–5% of congenitally deaf children are born to deaf parents, and so unfortunately the majority of deaf children do not have a fluent signing parent from whom to learn a sign language. Although parents can of course begin to learn sign language and use it with their children, this is a significant undertaking for any parent of a newborn—and in general cannot be expected to provide the child with optimal, fluent early language input.

One very important thing to recognize is that the term “sign language” actually encompasses a variety of manual communication systems, which differ greatly in their origins, linguistic status, and the situations and cultures in which they are used. In the preceding paragraph, the term “natural sign languages” was used, very intentionally, to describe visual-manual languages that have evolved independently of each other in Deaf communities around the world. Contrary to popular misunderstanding, there is no universally intelligible sign language—just as with spoken languages, independent communities of people have developed their own sign languages, which differ from each other in their phonological structure, grammar, and form-to-meaning mappings (i.e., vocabulary). Because sign languages have evolved within communities of deaf users, they are not visual-manual versions of the spoken languages used in the same geographic locations. For example, American Sign Language (ASL), which is used by the Deaf community throughout the United States and Canada, has many syntactic differences from spoken English. Attempting to literally translate an English sentence into ASL will produce a sentence that may well violate ASL’s syntactic rules; many words and morphemes that are required in English are not required in ASL (and may not even have an ASL equivalent), or are required in different sentence contexts. Further underscoring the distinction between signed languages and the spoken languages of the surrounding hearing communities is the fact that although English is the primary spoken language in the United States, Australia, Ireland, and the United Kingdom, each of these countries has its own, linguistically distinct sign language. For example, British Sign Language (BSL) is mutually unintelligible with ASL; ASL is derived from langue des signes française (LSF; French Sign Language), not BSL. In spite of having evolved largely independently of one another, all natural sign languages appear to follow some universal principles. Some of these are universal principles of human language that cross spoken and signed forms (such as having a limited phonological inventory and phonotactic rules, and having hierarchical and recursive syntactic structure), while others are unique to sign language but common across sign languages (for example, the phonetic inventories of sign languages are defined by the parameters of handshape, orientation, location, and movement; syntactic morphemes are produced simultaneously with their root, rather than sequentially).

In contrast to natural sign languages, a variety of auxiliary sign languages, or manually coded languages (MCL), have been developed by hearing educators in efforts to help provide deaf people with access to spoken languages. Generally speaking, these are ways of representing a spoken language manually—so the vocabulary, word order, and syntactic morphemes of a spoken language are given manual equivalents. MCLs are often used in deaf education settings (and parents of deaf children are encouraged to use them) specifically because they offer access to the spoken language of the milieu. The assumption underlying their use is that by having visual-manual access to the spoken language, the child will have an awareness and understanding of that language and its associated phonological and morpho-syntactic systems, which will in turn facilitate tasks such as learning to read and, if a CI is provided, learning to hear and understand the spoken language.

Language is a defining aspect of many cultures, and this is certainly true of sign languages: there is a widely recognized distinction between deafness (profound hearing loss) and Deaf culture (spelled with a capital “D”). Deaf culture revolves around a shared (signed) language, but encompasses more than just language (including shared beliefs, values, traditions, and arts) and includes both deaf and hearing people who sign. Within Deaf culture, hearing loss is not considered a medical condition, and Deaf people—more than most other groups of people who have a condition defined by the medical community—tend to resent views of deafness that center around concepts of deficits, impairment, or disability. Rather, hearing loss is viewed as a central aspect of cultural identity (although many hearing signers are part of Deaf culture as well). The notion that deafness needs to be treated, or “fixed,” is viewed as a threat to an individual’s identity and to Deaf culture more generally. Cochlear implants were thus not initially welcomed by most of the Deaf community, but rather were viewed as a significant threat, and even an attempted act of “cultural genocide” by mainstream culture.

An immense amount of linguistic development occurs in the first year of life, from the tuning of children’s brains to the phonemic inventory of their native language, through learning to segment the incoming, continuous speech stream into words, to comprehending first words. Deaf infants exposed to fluent signing from birth show similar developmental milestones, at similar ages, to their normally hearing peers (Petitto et al., 2001). For example, they begin canonical babbling (using native-language phonemes) at the same age (7 months) as hearing babies (Petitto & Marentette, 1991), and show increasing sensitivity to, and mastery of, ASL phonological structure over the course of development (Marentette & Mayberry, 2000). Deaf babies generally produce their first signs some months earlier than hearing children produce their first words, which has been attributed to the more rapid development of motor control of the hands than of the vocal articulators. In deaf adults, extensive neuropsychological and neuroimaging data demonstrate that native sign languages activate the same classical, left-lateralized brain regions as spoken languages (see Corina & Lawyer, Chapter 16 in this volume). Together with the developmental data, these findings emphasize that signed and spoken languages are linguistically and neurologically equivalent, and that native signers are at no linguistic disadvantage in their native language compared to native learners of a spoken language.

Some of the strongest evidence in favor of sensitive periods in L1 development comes from studies of deaf people who learned a natural sign language at different ages (such cases are more prevalent among deaf people due to great variability in how deaf children were historically educated). On numerous tests, including grammatical judgment, complex grammatical morphology, and sentence recall, there was a clear gradient whereby native ASL learners outperformed later learners, and those who learned as young children outperformed those who only began L1 acquisition after age 12 (Mayberry, 1993; Newport, 1990). Notably, in all of these studies the participants were tested as adults and so all had many years of ASL signing experience, confirming that even delays of a few years in L1 exposure cannot be overcome by years of subsequent practice. Mayberry and colleagues have further demonstrated that these linguistic deficits in late L1 acquisition are mirrored by changes in brain organization for language. Examining adults who first learned ASL as their L1 between the ages of 0 and 14, they found linear declines in activation of left hemisphere frontal and temporal language areas with increasing age of acquisition (AoA), along with increased activation in occipital regions, which may indicate greater effort devoted to processing the low-level (visual) features of signs at the expense of fluent linguistic processing (Mayberry, Chen, Witcher, & Klein, 2011a). Using another imaging technique, magnetoencephalography (MEG) (see Salmelin, Kujala, and Liljeström, Chapter 6 in this volume), Ferjan Ramirez and colleagues (2013) similarly found unusual patterns of brain activity, involving occipital and parietal regions, in deaf signers who learned ASL after age 14.

This research confirms that delays in L1 exposure, as experienced by deaf people who do not receive signed language input from birth, lead to lifelong delays in L1 performance. Further evidence also indicates that such delays in signed L1 exposure have permanent impacts on spoken L2 learning. Mayberry and colleagues (2002) compared deaf native ASL signers with deaf people who first learned ASL between ages 9 and 15, and a group of hearing people who learned English as a second language at a range of ages matched to the ASL late learners (thus all groups had learned English late, as their L2). On a test of English grammatical judgment, native ASL users performed comparably to hearing English L2 learners, and both groups performed significantly better than deaf late learners of ASL. In other words, poorer L2 performance was attributable to the lack of early L1 learning—but, critically, not to whether the L1 was spoken or signed, nor to whether the learners were deaf or hearing. Mayberry and Lock (2003) extended these findings, showing that on both English grammatical judgment and sentence comprehension, native ASL signers and hearing English L2 learners performed comparably to each other, and not significantly worse than native English speakers. More recent work has extended these findings to deaf CI users. Hassanzadeh (2012) compared two groups of CI users, matched in age, duration of deafness, and age of implantation; the groups differed only in that one had deaf parents and the other had hearing parents. Children of deaf parents outperformed those of hearing parents on spoken language comprehension and production at every time point tested, from 6 months to 10 years post-implantation. (Although the language use of the parents in this study was not explicitly reported, the author indicated that sign language was used, and speculated that, more generally, deaf parents adopted communication strategies from birth that were heavily visual and accommodated their children’s lack of hearing.) A similar finding was reported by Sarant and colleagues (2001), who found that having a family history of hearing loss predicted significantly better preschool language skills in young CI users. Taking a slightly different tack, Davidson and colleagues (2014) compared spoken language outcomes between deaf CI users and hearing children, all of whom had ASL as their L1 by virtue of being born into deaf, signing families. The CI users’ English perception and production skills on standardized tests were within (and in some cases even exceeded) age-appropriate norms and were indistinguishable from those of the hearing children. This was particularly notable because the deaf children had received their CIs after the age of 1 year (between 16 and 35 months of age, with 1–4 years of experience with the CI) and so, on the basis of the literature reviewed earlier, would be expected to perform on average below hearing peers on these standardized tests.

Similar findings have been obtained in a number of studies investigating deaf people’s reading abilities. Learning to read is obviously challenging for deaf people, since it normally involves mapping novel visual shapes (letters) to sounds (phonemes), with which someone who is pre-lingually deaf has no experience. On average, (non-CI-using) deaf high school graduates read at a grade 3–4 level (Marschark et al., 2009). However, this statistic masks the fact that many deaf people do become skilled readers, with many succeeding in higher education and earning professional and doctoral degrees. Recent research has focused on what cognitive skills deaf people utilize in becoming successful readers (Marschark et al., 2009). A notable finding from these studies is that although phonological abilities (e.g., phonological awareness, phonological coding ability) are among the strongest predictors of reading ability in hearing people, they account for a much smaller proportion of variance in deaf readers (Holmer, Heimann, & Rudner, 2016; Mayberry, Del Giudice, & Lieberman, 2011b). This indicates that while lack of access to spoken language phonology may create a challenge for learning to read, it is not insurmountable, and skilled deaf readers develop alternative skills for fluent reading, perhaps relying more on visual processing and efficient, whole-word recognition. Most notably for the current discussion, several studies have found that signing abilities predict better reading abilities (T. E. Allen, 2015; Chamberlain & Mayberry, 2008; Freel et al., 2011); notably, Allen found this relationship to hold in signing families of both deaf and hearing children, but not in families of deaf children who only learned sign language later in childhood, outside of their family setting—again emphasizing the important role of learning a native language from birth. However, as Goldin-Meadow and Mayberry (2001) note, sign language fluency does not guarantee good reading ability—reading is a specific skill that needs to be taught. An intriguing recent suggestion is the “functional equivalence” hypothesis (McQuarrie & Parrila, 2014): that exposure to a natural language with a phonetic structure—regardless of whether that language is spoken or signed—during the sensitive periods for language development in infancy is essential for developing normal reading abilities.

As noted in the introduction to this chapter, CIs are considered standard of care for children with severe hearing loss. Overall, the evidence overwhelmingly supports the efficacy of CIs to restore hearing. At the same time, many CI users perform below their normally hearing peers on standardized outcome measures, especially those related to language and scholastic performance—effects that can persist into high school and are thus not ameliorated simply by years of hearing experience. A significant factor contributing to heterogeneity of CI outcomes—but also providing useful information—is the fact that children may receive their implants at a wide variety of ages. There are several reasons for this. First, in the early days of cochlear implantation, surgeons (and practice guidelines) were more conservative and tended to wait until children were 3 or more years old prior to CI surgery. However, more recent evidence has suggested that infants not only tolerate CI surgery well, but that their outcomes tend to be better with earlier implantation. Even with this evidence, many parents and clinicians may choose to wait longer prior to the surgery; as well, some children are not born deaf but become so sometime during childhood. In other cases, limitations in health-care resources or insurance may preclude children from receiving CIs as early as might be desired. These variables result in “natural experiments” that have allowed researchers to study a range of variables that affect CI outcomes.

Most of the evidence concerning age of implantation comes from relatively young children, likely because of the increasing recognition that earlier implantation results in better outcomes, and because longitudinal research is more difficult, costly, and prone to dropout. On both comprehension and production measures, several studies have shown better performance by children implanted before 2 years of age compared to those implanted between ages 3 and 5 (Dettman et al., 2016; Tobey et al., 2013); infants who received CIs within the first two years of life show increased phonetic complexity in their utterances, which correlates with language skills at 4 years of age (Walker & Bass-Ringdahl, 2008). Other studies have looked at even younger ages of implantation, and found significantly better outcomes in children implanted before 12 months of age than in those implanted between 13 and 24 months (Colletti, Mandalà, Zoccante, Shannon, & Colletti, 2011; Cuda, Murri, Guerzoni, Fabrizi, & Mariani, 2014; Dettman et al., 2016; Dettman, Pinder, Briggs, & Dowell, 2007; Leigh, Dettman, Dowell, & Briggs, 2013).

It is important to recognize, however, that the fact that CIs are “effective” does not necessarily mean that CI users’ speech production and perception are indistinguishable from those of their normally hearing peers: mean levels of performance decrease, and variability increases, with increasing age of implantation. For example, children implanted before 2 years of age showed an average of 12 months’ delay in receptive language abilities at 2–4 years, relative to hearing children (Ceh, Bervinchak, & Francis, 2013). In another study of a group of children who received CIs prior to age 5 and were tested at ages 5–13 years, only approximately 50% were in the normative range for their age on assessments of spoken language (Boons et al., 2013). In a large national study of a cohort of children who were among the first in the United States to receive multichannel CIs, only about 50% of children were within one standard deviation of norms obtained from hearing children on tasks of receptive and expressive language, although this increased to 70%–80% in the normal range by high school (Geers, Strube, Tobey, Pisoni, & Moog, 2011). However, there was still considerable variability in individual outcomes, and the authors of this study noted significant gaps between verbal and nonverbal IQ scores in CI users, suggesting that the CI users had not reached the levels of spoken language competence they might have attained without a hearing impairment. These studies further noted that children who evidenced greater language difficulties in the early grades were also those with worse performance in high school. This suggests that although the proportion of children falling within the normal range increased with age, time alone is not a panacea; identifying and addressing the needs of CI users who are struggling early on might help improve long-term outcomes. Further, CI users’ performance varied widely across different outcome measures; generally they performed worse on more challenging tests (e.g., connected speech compared to isolated words), and better on tests where strategies or executive skills (such as use of context) could compensate for hearing difficulties (Geers, Pisoni, & Brenner, 2013; Geers & Sedey, 2011).

While CIs are able to open up the world of sound to children at any age (and even to adults), the data on age of implantation are consistent with the animal literature pointing to sensitive periods in auditory development (reviewed in the next section), as well as with the many linguistic milestones that normally hearing infants achieve in the first year of life (see, e.g., Kuhl, 2004, for a review). The precise timing of such sensitive periods, and the extent to which language outcomes are affected by auditory versus linguistic sensitive periods, are not clear, because there are very few cases in which children develop with normal hearing but no linguistic input—and those cases that do exist, such as neglected children and those in some orphanages, are confounded by much broader deficits in the children’s environments, such as impoverished social and emotional interactions. Furthermore, although the outcomes of children implanted prior to 1 year of age are significantly better than those of children implanted later, not all of those children necessarily achieve the same language outcomes as normally hearing peers. It will be important in the future to study outcomes as a function of age of implantation among children implanted prior to 1 year—given the rapid rate of development in this period, it may well be that implantation at 6, 8, or 12 months leads to as-yet undocumented differences in outcomes. On the other hand, the phenomenon of suspended auditory development in animal models (see next section) suggests that a “younger is always better” approach may not hold across the entire first 12 months of life. Rather, there may be a lower limit on how young a child needs to be to receive maximum benefit from a CI.

It is also important to note that—while neurobiological sensitive periods doubtless play a major role in determining CI outcomes—children who receive a CI are not provided with acoustic information as rich as normal hearing. Thus it is not simply a case of auditory deprivation followed by normal hearing. CIs encode between 8 and 32 channels (frequency bands). In other words, whereas the intact cochlea encodes frequency along an effectively continuous range, a CI stimulates only 8–32 locations along the cochlea, although the auditory system adapts over time in such a way that experienced CI users are able to resolve a much finer-grained range of frequencies than the few that are directly encoded by the CI electrodes. Nevertheless, children receiving CIs face (at least) two challenges: the effects of auditory deprivation on neurodevelopment, and learning to extract from a degraded stimulus the information necessary to understand speech. In the following sections we will first examine the neurodevelopmental consequences of auditory deprivation, followed by consideration of the factors—other than age at implantation—that seem to mediate language outcomes in deaf children.
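A common way to get an intuition for this degraded signal is a noise-band vocoder simulation, which—like a CI—discards the fine structure of the sound and keeps only the amplitude envelopes of a small number of frequency bands. The sketch below is a minimal, uncalibrated version of that idea; the band edges, filter order, and channel count are illustrative choices, not any device's actual processing strategy.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=8000.0, seed=0):
    """Crude noise-band vocoder: split x into log-spaced bands, keep
    each band's amplitude envelope, and re-impose the envelopes on
    band-limited noise, discarding fine structure much as a CI does."""
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    rng = np.random.default_rng(seed)
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        envelope = np.abs(hilbert(sosfilt(sos, x)))          # per-band envelope
        carrier = sosfilt(sos, rng.standard_normal(len(x)))  # band-limited noise
        out += envelope * carrier
    return out / np.max(np.abs(out))  # normalize to +/-1
```

Listening to speech processed this way with, say, 4 versus 16 channels gives a rough sense of why channel count—and the brain's ability to adapt to it—matters.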

In order to better understand how deafness and CIs affect language processing, it is helpful to understand the effects of acoustic deprivation on brain organization and structure. This has been extensively studied in an animal model of congenital deafness: blue-eyed white cats (Ponton & Eggermont, 2002). A large proportion of these cats are born with a mutation that causes a lack of hair cells (the acoustic receptor cells in the cochlea), which is a common cause of deafness in humans as well. Studies using these cats have allowed for a better understanding of how cells in the auditory system develop in the absence of acoustic input, and also what impact cochlear implantation at different ages has on the development and organization of the auditory system. These studies (reviewed in Kral & Sharma, 2012) have shown that in the course of normal auditory development, cells become tuned to particular acoustic features (such as frequency, or sensitivity to inter-aural timing differences) through experience. However, even in the absence of auditory input, a very rudimentary organization around these features still exists. This suggests that coarse genetic coding guides the organization of the auditory system, but that experience is critical to refine the general “outline” provided by genetics, and to properly tune cells. In addition, with experience, hearing animals (and people) develop auditory object representations, learning to associate particular sounds with particular objects in the world (including environmental noises, such as food being poured into a bowl; voices associated with individuals; and—at least in humans—speech). Thus cells’ tuning during normal auditory cortex development is driven by a combination of both bottom-up (sensory) and top-down (cognitive) influences.

Since the top-down influences rely on representations of complex combinations of acoustic features, the bottom-up development necessarily starts first. This process has been shown to be subject to sensitive periods, such that normal development can occur only with acoustic exposure early in life. In cats, the sensitive period appears to be in the first 4 months of life: cats receiving CIs prior to this age show normal, or nearly normal, patterns of electrophysiological responses to sound after CI experience (measured by electrodes placed directly in A1—primary auditory cortex), whereas cats implanted later show much weaker and less organized responses—even after the same total duration of acoustic stimulation (Kral & Sharma, 2012). These changes in sensitivity seem to be caused by a combination of factors, including changes in neurotransmitter receptor density, the duration of postsynaptic potentials, dendritic branching, synaptogenesis, overall cortical inhibition, and structural changes in auditory cortex. In cases of auditory deprivation, the development of these processes is delayed for the first two months of life, and then proceeds in the absence of stimulation, albeit with much-degraded organization and sensitivity. Thus, to a certain extent, a lack of auditory stimulation seems to extend the temporal window of the sensitive period—creating a wider window for restoring hearing with minimal consequences—but this delay only lasts so long before self-organization begins to occur, even without input. A further consequence of this development that occurs in the absence of hearing is that it seems to reduce neuroplastic sensitivity to external input (Kral & Sharma, 2012).

In humans, neuroimaging offers a powerful way to study the effects of deafness and cochlear implantation on language processing, and potentially to provide insights into what underlies suboptimal outcomes. Several imaging modalities have been used to study deaf people and CI users, including positron emission tomography (PET), structural magnetic resonance imaging (MRI), functional MRI (fMRI) (see Heim & Specht, Chapter 4 in this volume), functional near-infrared spectroscopy (fNIRS) (see Minagawa & Cristia, Chapter 7 in this volume), and electroencephalography (EEG), including event-related potentials (ERPs; EEG time-locked to stimulus events of interest) (see Leckey & Federmeier, Chapter 3 in this volume). One significant limitation is imposed by cochlear implants themselves: they are implanted electromagnetic devices containing metal, and since MRI uses strong magnetic fields and radio-frequency pulses, it is not possible to perform research MRI scans on people with CIs.
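Because ERPs and AEPs figure prominently in the findings reviewed below, a minimal sketch of how such a potential is derived may be helpful: epochs of EEG are cut around each stimulus event, baseline-corrected, and averaged, so that activity not time-locked to the stimulus tends to cancel. This single-channel version is illustrative only; the window lengths and function name are assumptions, and real pipelines add filtering and artifact rejection.

```python
import numpy as np

def erp(eeg, event_samples, fs, tmin=-0.1, tmax=0.5):
    """Average a 1-D EEG trace time-locked to stimulus onsets (in samples)."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for s in event_samples:
        if s - pre < 0 or s + post > len(eeg):
            continue  # skip events too close to the recording's edges
        epoch = eeg[s - pre : s + post].astype(float)
        epoch -= epoch[:pre].mean()  # baseline-correct with pre-stimulus mean
        epochs.append(epoch)
    return np.mean(epochs, axis=0)  # components such as P1/N1 emerge here
```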

The developmental time course in humans is (not surprisingly) more protracted than in cats. Auditory brainstem potentials take approximately 2 years to reach adult-like shape and timing, and cortically generated auditory evoked potentials (AEPs; electrical recordings from the scalp) take longer, ranging from 2 years (for the P2 component) into adolescence (for the N1; Eggermont & Ponton, 2003). The development of these potentials reflects underlying changes in the maturation of auditory cortex, most notably myelination of axons; postmortem histological studies have shown that in most cortical layers this myelination begins between 4.5 and 12 months and matures between 3 and 5 years, though the auditory system is not fully mature until around 12 years. Studies of AEPs in CI users show results that parallel this developmental time course. A large study of people who received CIs at different ages examined the P1 component, which reflects the earliest processing of sound in the auditory cortex (Sharma, Dorman, & Kral, 2005; Sharma, Dorman, & Spahr, 2002). This study demonstrated that children who received CIs before 3.5 years of age ultimately showed P1 AEP latencies within the normal range, whereas those implanted after age 7 never showed normal P1 latencies, even after many years of use. Those implanted between 3.5 and 7 years showed more variable, intermediate outcomes, leading Sharma and colleagues to conclude that implantation prior to 3.5 years was optimal from a developmental neurophysiological point of view, with a window of decreasing sensitivity up to 7 years. Eggermont and Ponton (2003) note that for early-implanted children, the delays in P1 latency quite closely track the duration of deafness; in other words, P1 latencies appear normal when adjusted for the duration that the child has received auditory stimulation. They further note that the P1 likely reflects projections from the thalamus to cortical layers III and IV, which are relatively slow to mature.

Similar results have been obtained for the N1 AEP, which occurs after the P1 and also reflects early stages of cortical auditory processing. The N1 normally begins to be detectable in the AEP at around 7 years of age, associated with maturation of cortical layer II in A1 and of thalamo-cortical projections (Eggermont & Ponton, 2003). The development of this component continues up to approximately age 9–12 years, which coincides cortically with the development of A1 layers II and III, and behaviorally with improvements in speech in noise perception (Eggermont & Ponton, 2003). Sharma and colleagues (2015) examined N1 development in 80 CI users aged 2–16 years, as well as in normally hearing children of similar ages. The N1 began to be detectable in the AEP waveforms of both normally hearing children and early-implanted (<3.5 years) CI users by 6–9 years, and was identifiable in all early implantees by 9–12 years. In contrast, among children implanted after the age of 7, the N1 was not detectable in any child under age 12, and in only a small percentage of the older children; as with the P1, children implanted between 3.5 and 7 years showed intermediate responses, with less robust N1s than earlier-implanted children. Ponton and Eggermont (2002) also examined N1 latency, in a smaller group of CI users, and found that the N1 showed permanent increases in latency compared to normally hearing children. In a large study of 79 CI children with a wide range of hearing experience, Jiwani and colleagues (2013) noted that although differences in AEPs between normally hearing and CI children decreased over time, they were still present even after 10 or more years of hearing experience. Ponton and Eggermont suggested that immature myelination of layer I, arising from a lack of early auditory stimulation, might result in poorly synchronized firing patterns that create a smeared and imprecise signal, affecting the N1 as well as auditory performance in noisy situations.

Structurally, congenitally deaf adults show no changes in gray matter volume in the superior temporal gyrus (STG) generally, nor in A1/Heschl’s gyrus specifically (Emmorey, Allen, Bruss, Schenker, & Damasio, 2003; Leporé et al., 2010; Penhune, Cismaru, Dorsaint-Pierre, Petitto, & Zatorre, 2003; Shibata, 2007). However, reductions in white matter volume have been reported (Emmorey et al., 2003; Olulade, Koo, LaSasso, & Eden, 2014). A study of deaf infants similarly showed decreased white matter in Heschl’s gyrus, as well as increased gray matter volume (Smith et al., 2011). These findings are consistent with Ponton and Eggermont’s (2002) suggestion of immature myelination. Outside of the auditory cortex, several other structural differences have been reported, including increased white matter volumes in the frontal lobes, corpus callosum, and insula (J. S. Allen, Emmorey, Bruss, & Damasio, 2008; Leporé et al., 2010); also, gray matter volume is increased in the left insula (J. S. Allen et al., 2008) and left motor cortex (Penhune et al., 2003). Conflicting results have been obtained for the primary visual cortex, where one study reported increases in gray matter (J. S. Allen, Emmorey, Bruss, & Damasio, 2013), while another reported decreases (Olulade et al., 2014). While these structural changes have not been correlated with specific functional abilities, nor with CI outcomes, it can be speculated that white matter reductions in auditory cortex reflect a lack of activity-dependent strengthening of these pathways, which could in turn negatively affect CI outcomes.

While most functional neuroimaging studies have focused on task-related activity in CI users (e.g., during speech processing), one PET study examined baseline (resting, not task-related) glucose metabolism measured prior to implantation in a group of children aged 1–11, and related this to their speech processing outcomes 3 years later (H.-J. Lee et al., 2007). Auditory speech comprehension was positively correlated with metabolic activity in the left dorsolateral prefrontal cortex (associated with motor/premotor tasks and verbal working memory) and angular gyrus (associated with phonological processing). Conversely, speech comprehension was negatively correlated with metabolism in other areas, notably the right STG including Heschl’s gyrus. The authors interpreted these findings as indicating that higher spontaneous neural activity in auditory/peri-auditory regions may reflect neuroplastic reorganization, whereby these regions assume other functions, and thus could not later assume a role in auditory processing—a topic discussed further in the next section.

A number of other studies have examined brain activation after cochlear implantation, using functional PET imaging to measure blood flow reflecting task-related activity. Studies have consistently reported auditory cortex activation by auditory stimuli (speech and non-speech) in deaf adults after cochlear implantation, and many have shown activation of classical left hemisphere language areas as well, including Broca’s and Wernicke’s areas (Giraud et al., 2000; Herzog et al., 1991; Ito, Iwasaki, Sakakibara, & Yonekura, 1993; Limb, Molloy, Jiradejvong, & Braun, 2010; Naito et al., 1995; Okazawa et al., 1996; Truy et al., 1995). A few studies have related brain activation to CI outcomes, and these generally show higher levels of activation within auditory and language regions in people with better speech comprehension (Fujiki et al., 1999; K. M. J. Green, Julyan, Hastings, & Ramsden, 2005; Mortensen, Mirz, & Gjedde, 2006). While these findings—showing correlations between CI outcomes and markers of brain activity—make intuitive sense, several caveats are important to note. First, all of these studies were based on extremely heterogeneous groups of CI users, varying in age of hearing loss (including a mix of pre- and post-lingually deafened people), duration of deafness, and duration of CI use. This reflects the difficulty of performing this type of research: adult CI recipients are inherently a highly heterogeneous group, and there are relatively few CI users in any given location. Of these, only a small subset is likely to be interested in undergoing a PET scan (which involves radioactive tracers) and also to meet the inclusion criteria for the study. However, this poses significant challenges for interpreting the imaging data, especially the comparisons based on speech comprehension. In all of these studies, good and poor comprehenders differed on important variables, such as age of onset of hearing loss, duration of loss, and duration of CI use—all of which likely contributed to an individual’s performance level. Moreover, most of these studies had relatively few participants overall, and fewer still when these were divided into subgroups based on performance. Thus, what appears as no activation in a lower-performing group may in fact be weaker activation combined with insufficient statistical power to detect it. Overall, the small sample sizes in the neuroimaging studies of CI users published to date are of real concern, especially given the growing recognition that many neuroimaging studies are underpowered (Button et al., 2013). As such, we are left with a “chicken and egg” question: Do CI users who achieve better speech perception outcomes do so because they are better able to engage particular brain areas, or do they show greater brain activation as a consequence of their better performance?
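The power concern is easy to quantify. As a back-of-the-envelope illustration (the effect size here is an assumption chosen for the example, not an estimate from the CI literature), the following sketch uses statsmodels to compute how many participants per group a conventional two-sided, alpha = .05 comparison needs in order to detect even a "large" group difference with 80% power:

```python
from statsmodels.stats.power import TTestIndPower

# Participants per group needed to detect a "large" difference
# (Cohen's d = 0.8) at alpha = .05 with 80% power, two-sided test.
n_per_group = TTestIndPower().solve_power(effect_size=0.8, alpha=0.05,
                                          power=0.80)
print(round(n_per_group))  # ~26 per group
```

Roughly 26 participants per group—more than many of the performance-based subgroups in the studies above contain—and smaller, more realistic effects would require far more.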

The fate of auditory cortices in the absence of acoustic stimulation is mixed. On the one hand—consistent with the folk adage that when one sense is lost, the others are enhanced—some parts of the auditory system show evidence of takeover by other sensory systems. Evidence for this was first provided in humans, with congenitally deaf adults outperforming normally hearing adults on a task involving motion detection in the attended, peripheral visual field (Neville & Lawson, 1987). Notably, in this study deaf adults also showed both an enhanced visual N1 ERP component to attended peripheral visual stimuli (but not centrally presented stimuli), and an altered scalp topography of the N1, whereby its peak was shifted anteriorly from occipital to temporal regions. Although the location of an electrical potential on the scalp cannot accurately inform us as to the location of its generators in the brain, Neville and Lawson speculated that this reflected takeover of auditory cortex for visual processing.

Subsequent work has replicated and extended Neville and Lawson’s (1987) original findings, and has revealed that although deaf people’s visual abilities are not enhanced across the board, they do show consistent improvements on some specific tasks. Studies have failed to find any consistent differences between deaf and hearing individuals in most visual functions, including sustained attention, visual search, orienting or alerting aspects of attention, or low-level functions such as contrast sensitivity, motion-detection thresholds, or brightness detection (Bavelier et al., 2001; Bavelier, Dye, & Hauser, 2006; Bosworth & Dobkins, 2002a; Horn, Davis, Pisoni, & Miyamoto, 2005). Rather, congenitally deaf people—both adults and children—show a very specific profile of greater sensitivity in tasks requiring selective attention to the visual periphery (Bosworth & Dobkins, 2002a; Bottari, Nava, Ley, & Pavani, 2010; Dye, Baril, & Bavelier, 2007; Dye, Hauser, & Bavelier, 2009; Neville & Lawson, 1987), and to visual motion (Bosworth & Dobkins, 1999, 2002b; Mohammed et al., 2005; Neville & Lawson, 1987; Stevens & Neville, 2006). Congenitally deaf adults and children also evidence more interference when distracting stimuli are presented in the visual periphery, but less interference from stimuli presented centrally, than normally hearing individuals (Dye et al., 2007; Dye et al., 2009). Sign language experience itself leads to a right visual field superiority for motion detection, in contrast to the left field advantage typical of hearing non-signers (Bosworth & Dobkins, 1999, 2002b; Hauthal, Sandmann, Debener, & Thorne, 2013).

Similar selective enhancements have been shown in cats, along with associated reorganization of specific auditory regions. In an elegant study by Lomber and colleagues (Lomber, Meredith, & Kral, 2010), small cooling coils were implanted in cats’ brains to allow transient deactivation of specific cortical regions. When the posterior auditory field was transiently deactivated, congenitally deaf cats’ enhanced peripheral visual localization abilities were reduced to levels consistent with normally hearing cats, indicating a direct causal link between deaf cats’ enhanced performance and this brain area. Interestingly, in normally hearing cats this area subserves the localization of sounds, so it seems that the takeover of auditory processing regions by vision was specific to regions already involved in spatial localization. As well, deaf cats’ enhanced sensitivity to visual motion was reduced to the levels of normally hearing cats by transient deactivation of a different part of auditory cortex, the dorsal zone. Although the function of this region in normally hearing cats is unknown, later work showed that in deaf cats, this area receives some direct axonal projections from visual and somatosensory cortices (Barone, Lacassagne, & Kral, 2013). At the same time, this work demonstrated no effects on visual processing when either A1 or several other auditory cortical regions were deactivated. This indicates that in the absence of auditory input, cortical auditory regions are not universally taken over for other purposes; rather, much of this cortex retains a rough approximation of its normal organization. In terms of the implications of this for CI outcomes, the animal evidence clearly suggests the existence of sensitive periods in development such that earlier implantation is likely to ultimately result in a more typical pattern of organization. Nonetheless, in the absence of early intervention, much of the auditory system appears to remain unoccupied, rather than being taken over for other sensory or cognitive functions.

The behavioral work in cats and humans, along with the neurophysiological work in cats, is also supported by some neuroimaging evidence, although the picture is less clear than in the animal model. These data come almost entirely from congenitally deaf adult sign language users who did not have CIs. In one set of studies, deaf signers (but not hearing people) showed right STG activation (including Heschl’s gyrus) when viewing moving dots in the peripheral visual field (Fine, Finney, Boynton, & Dobkins, 2005; Finney, Fine, & Dobkins, 2001). However, this finding was not replicated in other studies using very similar stimuli (Bavelier et al., 2000; Bavelier et al., 2001)—which found instead changes in lateralization and functional connectivity within the same set of regions used for visual processing in hearing people. More recently, Cardin and colleagues (2013, 2016) showed primary auditory cortex activation by sign language both in deaf signers and—most notably—in deaf non-signers, but not in hearing non-signers. Another study showed that cortical thickness in the right planum temporale correlated with visual motion-detection thresholds in congenitally deaf signers (Shiell, Champoux, & Zatorre, 2016). Further, in a recent study of deaf, non-signing adults with a range of ages of hearing loss, Muise-Hennessey and colleagues (2015, 2016) found that both earlier onset and longer duration of deafness were associated with stronger left STG (though not A1) activity when viewing communicative gestures; this activation was not present in normally hearing people. Collectively, these data suggest that deafness increases the sensitivity of auditory cortex to moving visual stimuli, regardless of sign language experience.

Some research has specifically investigated relationships between CI use and visual processing or visual interference. Doucet and colleagues (2006) reported that while proficient CI users showed enhanced visual N1 and P2 ERP components in response to moving visual stimuli, poor CI users showed smaller visual ERP components that, critically, were shifted anteriorly on the scalp compared to the ERPs of normally hearing or proficient CI users—similar to Neville and Lawson’s (1987) early findings. Other studies have investigated more directly how visual information affects speech processing in CI users. Champoux and colleagues (2009) investigated speech processing in the presence of different types of visual distractors. Proficient CI users performed comparably to normally hearing adults; however, CI users with poorer auditory-only speech comprehension showed greater interference during speech processing in the presence of moving visual stimuli (but not color-change distractors). Other studies investigated how the auditory perception of phonemes is influenced by whether the listener sees the same or a different phoneme being mouthed (Desai, Stickney, & Zeng, 2008; Rouger, Fraysse, Deguine, & Barone, 2008; Tremblay, Champoux, Lepore, & Théoret, 2010). CI users showed greater influence of visual information on their speech perception overall, and those with poorer speech perception showed relatively greater reliance on visual information (Tremblay et al., 2010). However, reliance on visual information during normal speech processing does not appear to impede auditory processing. A longitudinal study found that CI users who showed significant gains in auditory-only processing after receiving a CI maintained high levels of speechreading (sometimes referred to as lipreading) ability; the authors suggested that this was due to the fact that CI users have greater difficulty perceiving auditory speech in noisy environments, and so continue to rely on visual cues to aid speech perception, even when the CI is effective (Rouger et al., 2007).

Given the evidence suggesting cross-modal reorganization in deaf people, an important question is whether enhanced sensitivity of auditory cortex to visual information interferes with hearing after cochlear implantation. One possibility is that if areas normally specialized for auditory processing do take on other functions, then restoration of hearing through a CI might be less effective if these areas are forced to reorganize themselves, or are even unable to assume auditory processing functions once years of activity-dependent neural connections have been established. On the other hand, Campbell and MacSweeney (2014) have noted that even normally hearing people show activation within auditory regions of the STG for some types of stimuli, such as speechreading (Calvert & Campbell, 2003) and sign language (in hearing native signers; Neville et al., 1998). Further, Rouger and colleagues’ (2007) data suggest beneficial effects of at least one specific enhanced visual ability (speechreading) on speech perception, even after a CI. Thus if there is visual “takeover” in deafness of brain areas typically used for auditory processing, it appears to be more nuanced than an area being simply unimodally “auditory” in normally hearing people, and “visual” in deaf people. Although studies have shown visual interference and altered ERP topography in CI users with poor speech comprehension, the directionality of these effects is not clear. One interpretation is that visual takeover of auditory cortex mediates these effects, leading to poorer CI performance. However, there are many factors that contribute to CI outcomes, as discussed previously and in the following two sections. Therefore it is possible that visual activation of auditory cortex does not interfere with speech processing, but rather represents a compensatory mechanism whereby people who are unable to obtain optimal auditory information continue to rely to a greater extent on visual information. As a result, they show overall greater sensitivity to such information—which can manifest in some circumstances as visual interference. However, outside of experimental lab settings, visual information is usually beneficial to speech comprehension rather than detrimental, and so evidence of such interference should not necessarily be interpreted as a negative phenomenon.

As we have seen, age at implantation is a very strong predictor of CI outcomes, with implantation within the first year of life yielding the best outcomes in congenitally deaf children. Related to this, in children not born deaf, earlier onset and longer duration of deafness both predict worse outcomes (Geers, 2003; Sarant et al., 2001). However, this does not explain all the variance in CI outcomes; a number of other factors have been identified consistently across multiple studies that influence CI outcomes.

The home and family setting in which a child is raised has very strong influences on the linguistic and educational achievements of all children (Hart & Risley, 1995; 2003; Hoff-Ginsberg, 1998), and children with CIs are no different. Socioeconomic status (SES) is a significant predictor of language outcomes in children with CIs (Convertino, Marschark, Sapere, Sarchet, & Zupan, 2009; Geers, 2003; Hodges, Ash, Balkany, Schloffman, & Butts, 1999; Niparko, Tobey, Thal, & Eisenberg, 2010), as is maternal education status (which is closely related to SES; Sarant, Harris, Bennet, & Bant, 2014)—higher SES and maternal education levels predict better outcomes. Parental behavior also impacts outcomes in several ways. Mothers who speak in longer, more complex sentences to their children (mean length of utterance—MLU) have children with better language outcomes (DesJardin & Eisenberg, 2007), whereas the children of mothers who use more directives and prohibitions have poorer language outcomes (Fagan, Bergeson, & Morris, 2014). Maternal sensitivity, the level of parental involvement, and cognitive stimulation provided to children also influence outcomes (Moeller, 2000; Quittner et al., 2013; Sarant, Holt, Dowell, Rickards, & Blamey, 2008).

Parental sensitivity can be a double-edged sword, however: data suggest that many mothers adapt their speech to their expectations of deaf children’s limitations in understanding speech—producing simpler utterances on the expectation that these will be easier for the children to understand—which reduces the complexity of language input and may adversely affect language development (Fagan et al., 2014). It seems likely that the effects of SES and maternal education on outcomes are at least in part mediated by these parenting-style variables, since better-educated mothers tend to provide their children with richer input (Hart & Risley, 1995, 2003). However, studies have also shown that positive interaction behaviors can be taught, and indeed intervention programs that teach parents positive interaction styles with their deaf children significantly improve the children’s CI outcomes (Moog & Geers, 2010). Related to this, having a deaf family member improves CI users’ outcomes, likely due to a combination of increased sensitivity to the specific needs of deaf children and greater sign language knowledge and use (Hassanzadeh, 2012; Sarant et al., 2014). Beyond SES and parenting effects, children born earlier in the birth order and/or with fewer siblings show better outcomes (Geers, Brenner, & Davidson, 2003; Sarant et al., 2014), and girls often do better than boys (Geers, 2003; Sarant et al., 2014).

Cognitive factors also significantly influence CI outcomes. Among these, the ones that have been most consistently identified as strong predictors are nonverbal IQ (Geers, 2003; Geers et al., 2003; Pyman, Blamey, Lacy, Clark, & Dowell, 2000; Sarant et al., 2014; Sarant et al., 2008) and aspects of working memory—including working memory capacity, and the speed and efficiency of rehearsal and retrieval (Geers & Sedey, 2011). Motor skills are also closely related to CI outcomes: children with a history of fine motor problems have poorer outcomes (Pyman et al., 2000; Sarant et al., 2014), as do those with slower trajectories of motor development (Horn, Fagan, Dillon, Pisoni, & Miyamoto, 2007; Horn, Pisoni, Sanders, & Miyamoto, 2005).

Genetics may also influence outcomes. This has been studied primarily in the context of mutations of the connexin-26 (GJB2) gene, a leading cause of congenital deafness in many countries (Denoyelle et al., 1997; G. E. Green et al., 2002). The data are equivocal, however, with some studies reporting better outcomes for children with GJB2 mutations than for children with other causes of deafness (C.-M. Wu et al., 2015), but others reporting no differences (C.-C. Wu, Lee, Chen, & Hsu, 2008). Mutation of another gene, SLC26A4, has also been associated with more positive outcomes (C.-M. Wu et al., 2015); this study further found that GJB2 and SLC26A4 mutations positively affected outcomes only in children implanted before age 3.

Finally, whether cochlear implantation is unilateral or bilateral may influence outcomes. Bilateral implants were initially proposed because, with normal hearing, sounds are localized using timing and phase difference cues from the two ears. Localizing the origins of sounds not only helps in orienting to the environment (identifying where a speaker is, or where a danger such as oncoming traffic is coming from), but may also aid speech perception in noisy environments by helping to focus auditory attention. Indeed, children with bilateral implants outperform those with unilateral implants (and also do better with both implants activated than with only one) both on sound localization (Beijen, Snik, & Mylanus, 2007; Galvin, Mok, & Dowell, 2007; Litovsky, Johnstone, Godar, & Agrawal, 2006; Lovett, Kitterick, Hewitt, & Summerfield, 2010) and on speech-in-noise perception (Beijen et al., 2007; Lovett et al., 2010; Peters, Litovsky, Parkinson, & Lake, 2007; Wolfe, Baker, Caraway, & Kasulis, 2007; Zeitler, Kessler, & Terushkin, 2008). These advantages also translate into better use of auditory and vocal skills in social interactions (Tait et al., 2010) and better performance on standardized language tests (Sarant et al., 2001).
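To give a sense of the scale of the timing cue at issue: a sound off the midline arrives at the far ear only fractions of a millisecond after the near ear. The sketch below is a back-of-the-envelope illustration using Woodworth’s spherical-head approximation (an assumption of this sketch, not a model discussed in this chapter), with an assumed head radius of 8.75 cm and speed of sound of 343 m/s:

```python
import math

# Illustrative sketch: interaural time difference (ITD) under Woodworth's
# spherical-head approximation, ITD = (r / c) * (theta + sin(theta)).
# Assumed values: head radius r ~ 0.0875 m, speed of sound c ~ 343 m/s.

HEAD_RADIUS_M = 0.0875
SPEED_OF_SOUND_M_S = 343.0

def itd_seconds(azimuth_deg):
    """ITD for a source at the given azimuth (0 = straight ahead, 90 = side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:2d} deg -> {itd_seconds(az) * 1e6:.0f} microseconds")
# At 90 degrees this yields roughly 650 microseconds -- a cue that is
# unavailable with a single implant, which samples only one ear.
```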

Although virtually all studies comparing bilateral with unilateral implantation have reported bilateral advantages, there are some caveats. First, not all measures show bilateral advantages; exceptions include subjective assessments of speech and quality of hearing (Beijen et al., 2007) and preschool language scales (Sarant et al., 2001). Second, the timing of the implants plays an important role in outcomes. Bilateral implantation is rarely simultaneous; it typically occurs sequentially, with the second implant received months or even years after the first. The data indicate that the closer together in time the two CIs are implanted, the better the outcomes (Sharma et al., 2005). With longer separations between the first and second implants, children have more difficulty both in adjusting to the second implant and in integrating auditory input from the two sources. Relatedly, speech-in-noise perception and localization are often significantly better when the noise occurs on the side of the first implant, and hearing assessments more generally show better performance for the first-implanted ear, especially with longer durations between the first and second implants (Galvin et al., 2007; Peters et al., 2007).

Finally, in considering the influence of so many variables on CI outcomes, it is important to note that no research in this area has used the randomized, double-blind designs that are the gold standard in clinical research (Vlastarakos et al., 2010). Rather, studies of CIs, and of the effects of deafness more generally, reflect the results of “natural experiments” that occur as a result of the choices made by individual parents based on the options made available to them in a particular location at a particular point in time. For instance, a child born deaf in the past year in a major urban center of a developed country is very likely to be identified as deaf within days, if not hours, of being born, and may well have an active CI before his or her first birthday. In contrast, this author has met people from low-SES, rural communities who were identified as deaf only upon entering school, whose parents were informed by their family physician that there were no treatment options, and who thus obtained a CI only in adulthood. While this range of experiences creates a natural degree of heterogeneity that can be analyzed in an attempt to determine the influence of particular variables, the end result is that the variables are confounded and often difficult to disentangle.
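To make the confounding problem concrete, the following sketch (purely illustrative; the variable names, effect sizes, and correlations are invented) simulates an outcome that depends on both age at implantation and SES when the two predictors covary, as in the scenario just described. A model that omits one predictor misattributes its effect to the other:

```python
import numpy as np

# Illustrative sketch (invented numbers): why correlated predictors of CI
# outcomes are hard to disentangle in observational data. Age at
# implantation and family SES are simulated to covary, as they might when
# access to early implantation depends on family circumstances.

rng = np.random.default_rng(0)
n = 500

ses = rng.normal(0.0, 1.0, n)                          # standardized family SES
age_at_implant = -0.7 * ses + rng.normal(0.0, 0.7, n)  # earlier implants in higher-SES families
outcome = -1.0 * age_at_implant + 0.5 * ses + rng.normal(0.0, 1.0, n)

def ols(y, *predictors):
    """Least-squares coefficients (intercept first) via numpy."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# A naive model omitting age at implantation overstates the SES effect,
# because SES is standing in for earlier implantation:
print(ols(outcome, ses))                   # SES coefficient near 1.2, not the true 0.5
print(ols(outcome, ses, age_at_implant))   # close to the true 0.5 and -1.0
```

In real datasets the problem is compounded by many more correlated variables and far smaller samples, which is one reason the predictor effects reviewed above must be interpreted cautiously.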

The first and clearest take-home message from this chapter is that cochlear implantation is highly effective in restoring hearing, and that earlier implantation (certainly before 3.5 years, and probably optimally before 1 year) leads to better outcomes than later implantation. Nevertheless, CIs are not magical spells that undo all the effects of hearing loss: children with CIs need specific supports for their language development, including supportive parenting styles and, in many cases, additional supports in school, especially in the early grades.

Even where CIs are readily available (which is not the case in many places, for reasons such as cost and access to services), there will inevitably be a waiting period prior to the CI surgery, and an additional period is required post-surgery before the CI is activated and the child begins receiving auditory input, followed by further time adapting to the CI before speech is perceived at all clearly. Although hearing aids may be provided during the interval before CI activation, for children whose hearing loss is profound enough to warrant a CI they will provide little support for language development. The period between diagnosis and a level of restored hearing capable of supporting speech comprehension is thus likely to be months or even years. It is clear that parents should not simply wait for a CI before their children begin to receive language input: the first year of life is filled with critical/sensitive periods for language development that rely on language input. Sign language is the only natural human language that deaf children are able to perceive, and the evidence suggests that it should be provided as much as possible and as early as possible, even if the children are expected to rely primarily or exclusively on spoken language after implantation.

There is room in the middle ground for hearing parents to provide the optimal linguistic environment for their infant prior to cochlear implantation. Although parents who are new to signing cannot provide fluent, native-like input, there is strong evidence that young children pick up on linguistic structure and regularities even in imperfect, inconsistent input from nonfluent parents, and end up producing more linguistically regular output (Brentari, Coppola, Mazzoni, & Goldin-Meadow, 2011)—a testament to the readiness of infant brains to learn from language input. Parents do not need to become fluent signers in order to provide their deaf children with constructive language input—indeed, in recent years many parents of children with normal hearing have taken it upon themselves to learn “baby sign” (ASL vocabulary) to facilitate communication and bonding with their infants. This practice is supported by empirical evidence pointing to positive language development and stronger parent-child interactions, suggesting that there is benefit—and critically, no cost—to exposing children to sign language (Goodwyn, Acredolo, & Brown, 2000; Kirk, Howlett, Pine, & Fletcher, 2012; Mueller, Sepulveda, & Rodriguez, 2013). Put another way, there is no evidence that depriving children of natural language input leads to better language outcomes than providing them with imperfect input.

Providing deaf infants with sign language input prior to their receiving a CI (and even continuing afterward in some settings) should simply be viewed as a form of bilingualism. A large proportion of the world’s children are raised in multilingual households without negative consequences for their development, and possibly even with benefits (see, in this volume, Green & Kroll, Chapter 11, and Paz-Alonso, Oliver, Quiñones, & Carreiras, Chapter 24). For parents who worry that their child’s deafness is a disability and that bilingualism will be an additional burden, we can point to data showing that children exposed to sign language from birth show normal language development, regardless of whether they are deaf and learn only sign language as an L1 (Mayberry et al., 2002) or are hearing, native sign-speech bilinguals (K. Davidson et al., 2014). In this context, it is also worth noting that even children with developmental disorders such as Down syndrome and autism spectrum disorder show no costs of being raised bilingually (e.g., English-French) rather than monolingually (Bird et al., 2005; Ohashi et al., 2012). Beyond the irreplaceable benefits of natural language input during the critical first year of life, providing children with sign language prepares them for participation in Deaf culture. Although many deaf children are raised in hearing families and educated in mainstream classrooms, and may rarely or never encounter other deaf children, they may ultimately find important support and identity within Deaf culture, even if they achieve excellent hearing outcomes with a CI.

Allen, J. S., Emmorey, K., Bruss, J., & Damasio, H. (2008). Morphology of the insula in relation to hearing status and sign language experience. The Journal of Neuroscience, 28(46), 11900–11905. http://doi.org/10.1523/JNEUROSCI.3141-08.2008

Allen, J. S., Emmorey, K., Bruss, J., & Damasio, H. (2013). Neuroanatomical differences in visual, motor, and language cortices between congenitally deaf signers, hearing signers, and hearing non-signers. Frontiers in Neuroanatomy, 7, 1–10. http://doi.org/10.3389/fnana.2013.00026

Allen, T. E. (2015). ASL skills, fingerspelling ability, home communication context and early alphabetic knowledge of preschool-aged deaf children. Sign Language Studies, 15(3), 233–265. http://doi.org/10.1353/sls.2015.0006

Barone, P., Lacassagne, L., & Kral, A. (2013). Reorganization of the connectivity of cortical field DZ in congenitally deaf cats. PLoS ONE, 8(4), e60093. http://doi.org/10.1371/journal.pone.0060093

Bavelier, D., Brozinsky, C., Tomann, A., Mitchell, T., Neville, H., & Liu, G. (2001). Impact of early deafness and early exposure to sign language on the cerebral organization for motion processing. The Journal of Neuroscience, 21(22), 8931–8942.

Bavelier, D., Dye, M. W. G., & Hauser, P. C. (2006). Do deaf individuals see better? Trends in Cognitive Sciences, 10(11), 512–518. http://doi.org/10.1016/j.tics.2006.09.006

Bavelier, D., Tomann, A., Hutton, C., Mitchell, T., Corina, D., Liu, G., & Neville, H. (2000). Visual attention to the periphery is enhanced in congenitally deaf individuals. The Journal of Neuroscience, 20(17), RC93. http://www.jneurosci.org/content/20/17/RC93

Beijen, J. W., Snik, A., & Mylanus, E. (2007). Sound localization ability of young children with bilateral cochlear implants. Otology & Neurotology, 28(4), 479–485. http://doi.org/10.1097/mao.0b013e3180430179

Bird, E. K.-R., Cleave, P., Trudeau, N., Thordardottir, E., Sutton, A., & Thorpe, A. (2005). The language abilities of bilingual children with Down syndrome. American Journal of Speech-Language Pathology, 14(3), 187–199. http://doi.org/10.1044/1058-0360(2005/019)

Boons, T., De Raeve, L., Langereis, M., Peeraer, L., Wouters, J., & van Wieringen, A. (2013). Expressive vocabulary, morphology, syntax and narrative skills in profoundly deaf children after early cochlear implantation. Research in Developmental Disabilities, 34(6), 2008–2022. http://doi.org/10.1016/j.ridd.2013.03.003

Bosworth, R. G., & Dobkins, K. R. (1999). Left-hemisphere dominance for motion processing in deaf signers. Psychological Science, 10(3), 256–262. http://doi.org/10.1111/1467-9280.00146

Bosworth, R. G., & Dobkins, K. R. (2002a). The effects of spatial attention on motion processing in deaf signers, hearing signers, and hearing nonsigners. Brain and Cognition, 49(1), 152–169. http://doi.org/10.1006/brcg.2001.1497

Bosworth, R. G., & Dobkins, K. R. (2002b). Visual field asymmetries for motion processing in deaf and hearing signers. Brain and Cognition, 49(1), 170–181. http://doi.org/10.1006/brcg.2001.1498

Bottari, D., Nava, E., Ley, P., & Pavani, F. (2010). Enhanced reactivity to visual stimuli in deaf individuals. Restorative Neurology and Neuroscience, 28(2), 167–179. http://doi.org/10.3233/RNN-2010-0502

Brentari, D., Coppola, M., Mazzoni, L., & Goldin-Meadow, S. (2011). When does a system become phonological? Handshape production in gesturers, signers, and homesigners. Natural Language & Linguistic Theory, 30(1), 1–31. http://doi.org/10.1007/s11049-011-9145-1

Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376. http://doi.org/10.1038/nrn3475

Calvert, G. A., & Campbell, R. (2003). Reading speech from still and moving faces: The neural substrates of visible speech. Journal of Cognitive Neuroscience, 15(1), 57–70. http://doi.org/10.1162/089892903321107828

Campbell, R., & MacSweeney, M. (2014). Cochlear implantation (CI) for prelingual deafness: The relevance of studies of brain organization and the role of first language acquisition in considering outcome success. Frontiers in Human Neuroscience, 8, 1–11. http://doi.org/10.3389/fnhum.2014.00834

Cardin, V., Orfanidou, E., Rönnberg, J., Capek, C. M., Rudner, M., & Woll, B. (2013). Dissociating cognitive and sensory neural plasticity in human superior temporal cortex. Nature Communications, 4, 1473–1475. http://doi.org/10.1038/ncomms2463

Cardin, V., Smittenaar, R. C., Orfanidou, E., Rönnberg, J., Capek, C. M., Rudner, M., & Woll, B. (2016). Differential activity in Heschl’s gyrus between deaf and hearing individuals is due to auditory deprivation rather than language modality. NeuroImage, 124, 96–106. http://doi.org/10.1016/j.neuroimage.2015.08.073

Ceh, K. M., Bervinchak, D. M., & Francis, H. W. (2013). Early literacy gains in children with cochlear implants. Otology & Neurotology, 34, 416–421.

Chamberlain, C., & Mayberry, R. I. (2008). American Sign Language syntactic and narrative comprehension in skilled and less skilled readers: Bilingual and bimodal evidence for the linguistic basis of reading. Applied Psycholinguistics, 29(3), 367–388. http://doi.org/10.1017/S014271640808017X

Champoux, F., Lepore, F., Gagné, J.-P., & Théoret, H. (2009). Visual stimuli can impair auditory processing in cochlear implant users. Neuropsychologia, 47(1), 17–22. http://doi.org/10.1016/j.neuropsychologia.2008.08.028

Colletti, L., Mandalà, M., Zoccante, L., Shannon, R. V., & Colletti, V. (2011). Infants versus older children fitted with cochlear implants: Performance over 10 years. International Journal of Pediatric Otorhinolaryngology, 75(4), 504–509. http://doi.org/10.1016/j.ijporl.2011.01.005

Convertino, C. M., Marschark, M., Sapere, P., Sarchet, T., & Zupan, M. (2009). Predicting academic success among deaf college students. Journal of Deaf Studies and Deaf Education, 14(3), 324–343. http://doi.org/10.1093/deafed/enp005

Cuda, D., Murri, A., Guerzoni, L., Fabrizi, E., & Mariani, V. (2014). Pre-school children have better spoken language when early implanted. International Journal of Pediatric Otorhinolaryngology, 78(8), 1327–1331. http://doi.org/10.1016/j.ijporl.2014.05.021

Davidson, K., Lillo-Martin, D., & Pichler, D. C. (2014). Spoken English language development among native signing children with cochlear implants. Journal of Deaf Studies and Deaf Education, 19(2), 238–250. http://doi.org/10.1093/deafed/ent045

Denoyelle, F., Weil, D., Maw, M. A., Wilcox, S. A., Lench, N. J., Allen-Powell, D. R., et al. (1997). Prelingual deafness: High prevalence of a 30delG mutation in the connexin 26 gene. Human Molecular Genetics, 6(12), 2173–2177.

Desai, S., Stickney, G., & Zeng, F.-G. (2008). Auditory-visual speech perception in normal-hearing and cochlear-implant listeners. The Journal of the Acoustical Society of America, 123(1), 428–440. http://doi.org/10.1121/1.2816573

DesJardin, J. L., & Eisenberg, L. S. (2007). Maternal contributions: Supporting language development in young children with cochlear implants. Ear and Hearing, 28(4), 456–469. http://doi.org/10.1097/aud.0b013e31806dc1ab

Dettman, S. J., Dowell, R. C., Choo, D., Arnott, W., Abrahams, Y., Davis, A., et al. (2016). Long-term communication outcomes for children receiving cochlear implants younger than 12 months. Otology & Neurotology, 37(2), e82–e95. http://doi.org/10.1097/MAO.0000000000000915

Dettman, S. J., Pinder, D., Briggs, R., & Dowell, R. C. (2007). Communication development in children who receive the cochlear implant younger than 12 months: Risks versus benefits. Ear and Hearing, 28(Supplement), 11S–18S. http://doi.org/10.1097/aud.0b013e31803153f8

Doucet, M. E., Bergeron, F., Lassonde, M., Ferron, P., & Lepore, F. (2006). Cross-modal reorganization and speech perception in cochlear implant users. Brain, 129, 3376–3383. http://doi.org/10.1093/brain/awl264

Dye, M. W. G., Baril, D. E., & Bavelier, D. (2007). Which aspects of visual attention are changed by deafness? The case of the Attentional Network Test. Neuropsychologia, 45(8), 1801–1811. http://doi.org/10.1016/j.neuropsychologia.2006.12.019

Dye, M. W. G., Hauser, P. C., & Bavelier, D. (2009). Is visual selective attention in deaf individuals enhanced or deficient? The case of the useful field of view. PLoS ONE, 4(5), e5640. http://doi.org/10.1371/journal.pone.0005640

Eggermont, J. J., & Ponton, C. W. (2003). Auditory-evoked potential studies of cortical maturation in normal hearing and implanted children: Correlations with changes in structure and speech perception. Acta Oto-Laryngologica, 123(2), 249–252. http://doi.org/10.1080/0036554021000028098

Emmorey, K., Allen, J. S., Bruss, J., Schenker, N., & Damasio, H. (2003). A morphometric analysis of auditory brain regions in congenitally deaf adults. Proceedings of the National Academy of Sciences USA, 100(17), 10049–10054. http://doi.org/10.1073/pnas.1730169100

Fagan, M. K., Bergeson, T. R., & Morris, K. J. (2014). Synchrony, complexity and directiveness in mothers’ interactions with infants pre- and post-cochlear implantation. Infant Behavior and Development, 37(3), 249–257. http://doi.org/10.1016/j.infbeh.2014.04.001

Ferjan Ramirez, N., Leonard, M. K., Torres, C., Hatrak, M., Halgren, E., & Mayberry, R. I. (2013). Neural language processing in adolescent first-language learners. Cerebral Cortex, 24(10), 2772–2783. http://doi.org/10.1093/cercor/bht137

Fine, I., Finney, E. M., Boynton, G. M., & Dobkins, K. R. (2005). Comparing the effects of auditory deprivation and sign language within the auditory and visual cortex. Journal of Cognitive Neuroscience, 17(10), 1621–1637. http://doi.org/10.1162/089892905774597173

Finney, E. M., Fine, I., & Dobkins, K. R. (2001). Visual stimuli activate auditory cortex in the deaf. Nature Neuroscience, 4(12), 1171–1173. http://doi.org/10.1038/nn763

Freel, B. L., Clark, M. D., Anderson, M. L., Gilbert, G. L., Musyoka, M. M., & Hauser, P. C. (2011). Deaf individuals’ bilingual abilities: American Sign Language proficiency, reading skills, and family characteristics. Psychology, 2(1), 18–23. http://doi.org/10.4236/psych.2011.21003

Fujiki, N., Naito, Y., Hirano, S., Kojima, H., Shiomi, Y., Nishizawa, S., et al. (1999). Correlation between rCBF and speech perception in cochlear implant users. Auris Nasus Larynx, 26(3), 229–236.

Galvin, K. L., Mok, M., & Dowell, R. C. (2007). Perceptual benefit and functional outcomes for children using sequential bilateral cochlear implants. Ear and Hearing, 28(4), 470–482. http://doi.org/10.1097/aud.0b013e31806dc194

Geers, A. E. (2003). Predictors of reading skill development in children with early cochlear implantation. Ear and Hearing, 24(1 Suppl), 59S–68S. http://doi.org/10.1097/01.AUD.0000051690.43989.5D

Geers, A., Brenner, C., & Davidson, L. (2003). Factors associated with development of speech perception skills in children implanted by age five. Ear and Hearing, 24(1 Suppl), 24S–35S. http://doi.org/10.1097/01.AUD.0000051687.99218.0F

Geers, A. E., Pisoni, D. B., & Brenner, C. (2013). Complex working memory span in cochlear implanted and normal hearing teenagers. Otology & Neurotology, 34, 396–401.

Geers, A. E., & Sedey, A. L. (2011). Language and verbal reasoning skills in adolescents with 10 or more years of cochlear implant experience. Ear and Hearing, 32, 39S–48S. http://doi.org/10.1097/AUD.0b013e3181fa41dc

Geers, A. E., Strube, M. J., Tobey, E. A., Pisoni, D. B., & Moog, J. S. (2011). Epilogue: Factors contributing to long-term outcomes of cochlear implantation in early childhood. Ear and Hearing, 32, 84S–92S. http://doi.org/10.1097/AUD.0b013e3181ffd5b5

Giraud, A. L., Truy, E., Frackowiak, R. S., Gregoire, M. C., Pujol, J. F., & Collet, L. (2000). Differential recruitment of the speech processing system in healthy subjects and rehabilitated cochlear implant patients. Brain, 123(Pt 7), 1391–1402.

Goldin-Meadow, S., & Mayberry, R. I. (2001). How do profoundly deaf children learn to read? Learning Disabilities Research & Practice, 16(4), 222–229. http://doi.org/10.1111/0938-8982.00022

Goodwyn, S. W., Acredolo, L. P., & Brown, C. A. (2000). Impact of symbolic gesturing on early language development. Journal of Nonverbal Behavior, 24(2), 81–103. http://doi.org/10.1023/A:1006653828895

Green, G. E., Scott, D. A., McDonald, J. M., Teagle, H. F. B., Tomblin, B. J., Spencer, L. J., et al. (2002). Performance of cochlear implant recipients with GJB2-related deafness. American Journal of Medical Genetics, 109(3), 167–170. http://doi.org/10.1002/ajmg.10330

Green, K. M. J., Julyan, P. J., Hastings, D. L., & Ramsden, R. T. (2005). Auditory cortical activation and speech perception in cochlear implant users: Effects of implant experience and duration of deafness. Hearing Research, 205(1–2), 184–192. http://doi.org/10.1016/j.heares.2005.03.016

Hart, B., & Risley, T. R. (1995). Meaningful differences in the everyday experience of young American children. Baltimore, MD: P. H. Brookes.

Hart, B., & Risley, T. R. (2003). The early catastrophe: The 30 million word gap by age 3. American Educator, 27(1), 4–9.

Hassanzadeh, S. (2012). Outcomes of cochlear implantation in deaf children of deaf parents: Comparative study. The Journal of Laryngology & Otology, 126(10), 989–994. http://doi.org/10.1017/S0022215112001909

Hauthal, N., Sandmann, P., Debener, S., & Thorne, J. (2013). Visual movement perception in deaf and hearing individuals. Advances in Cognitive Psychology, 9(2), 53–61. http://doi.org/10.5709/acp-0131-z

Herzog, H., Lamprecht, A., Kühn, A., Roden, W., Vosteen, K.-H., & Feinendegen, L. E. (1991). Cortical activation in profoundly deaf patients during cochlear implant stimulation demonstrated by H215O PET. Journal of Computer Assisted Tomography, 15(3), 369.

Hodges, A. V., Ash, M. D., Balkany, T. J., Schloffman, J. J., & Butts, S. L. (1999). Speech perception results in children with cochlear implants: Contributing factors. Otolaryngology—Head and Neck Surgery, 121(1), 31–34. http://doi.org/10.1016/S0194-5998(99)70119-1

Hoff-Ginsberg, E. (1998). The relation of birth order and socioeconomic status to children’s language experience and language development. Applied Psycholinguistics, 19(4), 603–629. http://doi.org/10.1017/S0142716400010389

Holmer, E., Heimann, M., & Rudner, M. (2016). Evidence of an association between sign language phonological awareness and word reading in deaf and hard-of-hearing children. Research in Developmental Disabilities, 48, 145–159. http://doi.org/10.1016/j.ridd.2015.10.008

Horn, D. L., Davis, R. A. O., Pisoni, D. B., & Miyamoto, R. T. (2005). Development of visual attention skills in prelingually deaf children who use cochlear implants. Ear and Hearing, 26(4), 389–408.

Horn, D. L., Fagan, M. K., Dillon, C. M., Pisoni, D. B., & Miyamoto, R. T. (2007). Visual-motor integration skills of prelingually deaf children: Implications for pediatric cochlear implantation. The Laryngoscope, 117(11), 2017–2025. http://doi.org/10.1097/MLG.0b013e3181271401

Horn, D. L., Pisoni, D. B., Sanders, M., & Miyamoto, R. T. (2005). Behavioral assessment of prelingually deaf children before cochlear implantation. The Laryngoscope, 115(9), 1603–1611. http://doi.org/10.1097/01.mlg.0000171018.97692.c0

Ito, J., Iwasaki, Y., Sakakibara, J., & Yonekura, Y. (1993). Positron emission tomography of auditory sensation in deaf patients and patients with cochlear implants. The Annals of Otology, Rhinology, and Laryngology, 102(10), 797–801. http://doi.org/10.1177/000348949310201011

Jiwani, S., Papsin, B. C., & Gordon, K. A. (2013). Central auditory development after long-term cochlear implant use. Clinical Neurophysiology, 124(9), 1868–1880. http://doi.org/10.1016/j.clinph.2013.03.023

Kirk, E., Howlett, N., Pine, K. J., & Fletcher, B. C. (2012). To sign or not to sign? The impact of encouraging infants to gesture on infant language and maternal mind-mindedness. Child Development, 84(2), 574–590. http://doi.org/10.1111/j.1467-8624.2012.01874.x

Kral, A., & Sharma, A. (2012). Developmental neuroplasticity after cochlear implantation. Trends in Neurosciences, 35(2), 111–122. http://doi.org/10.1016/j.tins.2011.09.004

Kuhl, P. K. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5(11), 831–843. http://doi.org/10.1038/nrn1533

Lee, H.-J., Giraud, A.-L., Kang, E., Oh, S. H., Kang, H., Kim, C. S., & Lee, D. S. (2007). Cortical activity at rest predicts cochlear implantation outcome. Cerebral Cortex, 17(4), 909–917. http://doi.org/10.1093/cercor/bhl001

Leigh, J., Dettman, S., Dowell, R., & Briggs, R. (2013). Communication development in children who receive a cochlear implant by 12 months of age. Otology & Neurotology, 34(3), 443–450. http://doi.org/10.1097/mao.0b013e3182814d2c

Leporé, N., Vachon, P., Lepore, F., Chou, Y.-Y., Voss, P., Brun, C. C., et al. (2010). 3D mapping of brain differences in native signing congenitally and prelingually deaf subjects. Human Brain Mapping, 31(7), 970–978. http://doi.org/10.1002/hbm.20910

Limb, C. J., Molloy, A. T., Jiradejvong, P., & Braun, A. R. (2010). Auditory cortical activity during cochlear implant-mediated perception of spoken language, melody, and rhythm. Journal of the Association for Research in Otolaryngology, 11(1), 133–143. http://doi.org/10.1007/s10162-009-0184-9

Litovsky, R. Y., Johnstone, P. M., Godar, S., & Agrawal, S. (2006). Bilateral cochlear implants in children: Localization acuity measured with minimum audible angle. Ear and Hearing, 27(1), 43–59. http://doi.org/10.1097/01.aud.0000194515.28023.4b

Lomber, S. G., Meredith, M. A., & Kral, A. (2010). Cross-modal plasticity in specific auditory cortices underlies visual compensations in the deaf. Nature Neuroscience, 13(11), 1421–1427. http://doi.org/10.1038/nn.2653

Lovett, R. E. S., Kitterick, P. T., Hewitt, C. E., & Summerfield, A. Q. (2010). Bilateral or unilateral cochlear implantation for deaf children: An observational study. Archives of Disease in Childhood, 95(2), 107–112. http://doi.org/10.1136/adc.2009.160325

Marentette, P. F., & Mayberry, R. I. (2000). Principles for an emerging phonological system: A case study of early ASL acquisition. In C. Chamberlain, J. P. Morford, & R. I. Mayberry (Eds.), Language acquisition by eye (pp. 71–90). Mahwah, NJ: Psychology Press.

Marschark, M., Sapere, P., Convertino, C. M., Mayer, C., Wauters, L., & Sarchet, T. (2009). Are deaf students’ reading challenges really about reading? American Annals of the Deaf, 154(4), 357–370. http://doi.org/10.1353/aad.0.0111

Mayberry, R. I. (1993). First-language acquisition after childhood differs from second-language acquisition: The case of American Sign Language. Journal of Speech, Language, and Hearing Research, 36(6), 1258–1270. http://doi.org/10.1044/jshr.3606.1258

Mayberry, R. I., & Lock, E. (2003). Age constraints on first versus second language acquisition: Evidence for linguistic plasticity and epigenesis. Brain and Language, 87(3), 369–384. http://doi.org/10.1016/S0093-934X(03)00137-8

Mayberry, R. I., Chen, J.-K., Witcher, P., & Klein, D. (2011a). Age of acquisition effects on the functional organization of language in the adult brain. Brain and Language, 119(1), 16–29. http://doi.org/10.1016/j.bandl.2011.05.007

Mayberry, R. I., Del Giudice, A. A., & Lieberman, A. M. (2011b). Reading achievement in relation to phonological coding and awareness in deaf readers: A meta-analysis. Journal of Deaf Studies and Deaf Education, 16(2), 164–188. http://doi.org/10.1093/deafed/enq049

Mayberry, R. I., Lock, E., & Kazmi, H. (2002). Development: Linguistic ability and early language exposure. Nature, 417(6884), 38. http://doi.org/10.1038/417038a

McQuarrie, L., & Parrila, R. (2014). Literacy and linguistic development in bilingual deaf children: Implications of the “and” for phonological processing. American Annals of the Deaf, 159(4), 372–384.

Moeller, M. P. (2000). Early intervention and language development in children who are deaf and hard of hearing. Pediatrics, 106(3), e43. http://doi.org/10.1542/peds.106.3.e43

Mohammed, T., Campbell, R., MacSweeney, M., Milne, E., Hansen, P., & Coleman, M. (2005). Speechreading skill and visual movement sensitivity are related in deaf speechreaders. Perception, 34(2), 205–216. http://doi.org/10.1068/p5211

Moog, J. S., & Geers, A. E. (2010). Early educational placement and later language outcomes for children with cochlear implants. Otology & Neurotology, 31(8), 1315–1319. http://doi.org/10.1097/mao.0b013e3181eb3226

Mortensen, M. V., Mirz, F., & Gjedde, A. (2006). Restored speech comprehension linked to activity in left inferior prefrontal and right temporal cortices in postlingual deafness. NeuroImage, 31(2), 842–852. http://doi.org/10.1016/j.neuroimage.2005.12.020

Mueller, V., Sepulveda, A., & Rodriguez, S. (2013). The effects of baby sign training on child development. Early Child Development and Care, 184(8), 1178–1191. http://doi.org/10.1080/03004430.2013.854780

Muise-Hennessey, A., Tremblay, A., White, N. C., McWhinney, S. R., Zaini, W. H., Maessen, H., et al. (2015). Age of onset and duration of deafness influence neuroplastic reorganization of biological motion processing in deaf non-signers. Poster presented at the annual meeting of the Organization for Human Brain Mapping, Honolulu, HI.

Muise-Hennessey, A., Tremblay, A., White, N. C., McWhinney, S. R., Zaini, W. H., Maessen, H., et al. (2016). Age of onset and duration of deafness drive brain organization for biological motion perception in non-signers. DalSpace. Halifax, NS, Canada. http://dalspace.library.dal.ca/handle/10222/72221

Naito, Y., Okazawa, H., Honjo, I., Hirano, S., Takahashi, H., Shiomi, Y., et al. (1995). Cortical activation with sound stimulation in cochlear implant users demonstrated by positron emission tomography. Cognitive Brain Research, 2(3), 207–214. http://doi.org/10.1016/0926-6410(95)90009-8

Neville, H. J., Bavelier, D., Corina, D., Rauschecker, J. P., Karni, A., Lalwani, A., et al. (1998). Cerebral organization for language in deaf and hearing subjects: Biological constraints and effects of experience. Proceedings of the National Academy of Sciences USA, 95(3), 922–929. http://doi.org/10.1073/pnas.95.3.922

Neville, H. J., & Lawson, D. (1987). Attention to central and peripheral visual space in a movement detection task. III. Separate effects of auditory deprivation and acquisition of a visual language. Brain Research, 405(2), 284–294.

Newport, E. L. (1990). Maturational constraints on language learning. Cognitive Science, 14(1), 11–28. http://doi.org/10.1207/s15516709cog1401_2