
Effects of Vowel Auditory Training on Concurrent Speech Segregation in Hearing Impaired Children

Annals of Otology, Rhinology & Laryngology 2015, Vol. 124(1) 13-20. © The Author(s) 2014. DOI: 10.1177/0003489414540604

Hossein Talebi, PhD1, Abdollah Moossavi, MD2, Yones Lotfi, MD1, and Soghrat Faghihzadeh, PhD3

Abstract

Objective: This clinical trial investigated concurrent speech segregation ability in hearing impaired children. Auditory behavioral responses and auditory late responses (ALRs) were compared between test and control groups prior to vowel auditory training and after 3 and 6 months of training, to determine the effects of bottom-up training on concurrent speech segregation in hearing impaired children.

Methods: Auditory behavioral responses for 5 vowels and ALRs for double synthetic vowels with controlled physical properties were recorded on a fixed timetable in 30 hearing impaired children (test group = 15, control group = 15).

Results: Identification score and reaction time data showed that after 6 months of training the test group was approximately proficient for some vowels (P < .05 for vowels /æ/, /e/, and /u:/) and needed less time to process them. N1-P2 amplitude, which indexes vowel change detection and reflects central auditory speech representation without active client participation, increased in the test group (P < .05).

Conclusion: The present study showed training-related improvements in concurrent speech segregation, providing evidence for bottom-up training based on F0 differences in auditory scene analysis and its neural underpinnings.

Keywords: auditory scene analysis, ASA, concurrent speech segregation, hearing impaired children, vowel auditory training

Introduction

We often experience a complex acoustic environment, with auditory information originating from several simultaneously active sources that overlap in many acoustic parameters. Nevertheless, we are able to identify auditory events and hear distinct auditory objects. Auditory scene analysis (ASA) is the process of segregating sound inputs that originate from different sound sources and integrating those that belong together.1 Accordingly, segregation and integration are the 2 fundamental aspects of ASA. Bregman1 holds that the formation of auditory streams is the result of processes of sequential and simultaneous segregation. Sequential segregation separates and connects sense data over time, whereas simultaneous segregation selects, from the data arriving at the same time, those components that are probably parts of the same sound. Simultaneous segregation exploits properties of the incoming mixture that tend to be true whenever a subset of its components comes from a common source. For example, there is a broad class of sounds called "periodic," such as the human voice, in which all the component frequencies (harmonic relations) are integer multiples of a common fundamental (the fundamental frequency, or F0). The auditory system takes advantage of this fact: if it detects, in the input, a subset of frequencies that are all multiples of a common fundamental, it strengthens its tendency to treat this subset as a single distinct sound.1 Much evidence supporting this view shows that bottom-up sensory processes handle the segregation of sounds to distinct sources at a pre-attentive level of acoustic processing,2-4 indicating that auditory streams are sorted prior to stimulus selection.
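To make the harmonicity cue concrete, the following sketch (our illustration, not part of the study; the tolerance value is an arbitrary assumption) groups the spectral components of a two-voice mixture by testing which frequencies lie near integer multiples of a candidate fundamental:

```python
# Illustrative sketch of harmonicity-based grouping (Bregman's cue for
# simultaneous segregation). The 2% tolerance is an assumed value.

def group_by_fundamental(components_hz, f0_hz, tolerance=0.02):
    """Return the components within `tolerance` (relative error) of an
    integer multiple of f0_hz; candidates for a single source."""
    grouped = []
    for f in components_hz:
        harmonic_number = round(f / f0_hz)
        if harmonic_number >= 1 and abs(f - harmonic_number * f0_hz) <= tolerance * f:
            grouped.append(f)
    return grouped

# A mixture of two voices: one with F0 = 100 Hz, one with F0 = 130 Hz.
mixture = [100, 130, 200, 260, 300, 390, 400, 520]
print(group_by_fundamental(mixture, 100.0))  # [100, 200, 300, 400]
print(group_by_fundamental(mixture, 130.0))  # [130, 260, 390, 520]
```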

1Department of Audiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
2Department of Audiology, Iran University of Medical Sciences, Tehran, Iran
3Department of Social Medicine, School of Medicine, Zanjan University of Medical Sciences, Zanjan, Iran

Corresponding Author: Abdollah Moossavi, MD, Department of Audiology, Iran University of Medical Sciences, 134875459, Tehran, Iran. Email: [email protected]



If automatic, low-level systems handle the sorting of the incoming mixture of sounds, this could facilitate the ability to select external sources for further processing, such as the understanding of speech. Demany5 proposed that stream segregation processes are operative very early in life. In addition, Sussman et al6 demonstrated that the mechanisms for simultaneous auditory segregation operate similarly in school-age children and adults when frequency proximity is the cue for segregation. Winkler et al7 showed that newborn infants, like adults, are able to segregate concurrent streams of sound and organize the auditory input; they pointed out that neonates have the potential capabilities required for shaping auditory conceptual objects. Segregating concurrent streams of sound is a critical step in sound organization,1 a prerequisite for perceiving objects and forming conceptual objects.8 Concurrent segregation of sound sequences uses the temporal behavior of various acoustic parameters, of which spectral pitch is the most effective.9

It is believed that segregation of the acoustic components of speech, such as vowels, is very important for speech perception in children.10 These speech sounds convey much information through F0, the lowest frequency component, and its harmonics, together with the formants (F1, F2, etc). This acoustic information (F0 and its harmonic relations), carrying cues for pitch and timbre perception, is a basis for speech perception, especially in natural auditory environments.10 Although vowels are relatively simple sound stimuli, they are periodic signals that give rise to pitch perception.10 F0 and first-formant contrasts may be especially perceptually salient in the segregation of vowels and pitch cues.11 Most children with severe hearing loss are more likely to have residual hearing in the low frequencies than in the higher ones. As vowels are prerequisites for speech formation and perception,10 we hypothesized that improving vowel perception in hearing impaired children would result in better speech understanding. The present study is the first investigation to connect concurrent speech segregation (as a basis of speech perception in complex auditory environments) and auditory rehabilitation (vowel auditory training). In other words, we investigated the effects of vowel auditory training on concurrent speech segregation and speech perception in children with hearing loss.

Event related potentials (ERPs) are noninvasive measurements of cortical brain activity in response to sensory events. Because of their high temporal resolution (on the order of milliseconds) and time-locking to stimulus events, they give information about the timing of the cognitive processes evoked by a given sound. Event related potentials provide distinctive signatures for the detection of sound change. Of particular importance are auditory late responses (ALRs), which reflect sound or change detection.

To examine automatic simultaneous speech segregation in hearing impaired children, we used an electrophysiological index of auditory processing that does not require the participant to actively process the sounds or to indicate his or her perception of them. The N1-P2 complex is an obligatory ERP that can reflect central auditory speech representation without active participation.12-16 The N1 response reaches maximal amplitude at frontocentral sites.17 The N1-P2 complex is thought to reflect synchronous neural activation of structures in the thalamocortical segment of the central nervous system in response to auditory stimulation.18-20 This complex could be used to monitor neurophysiologic changes during speech-sound acquisition after auditory rehabilitation, hearing aid use, or any other form of auditory learning.21-23 The N1-P2 complex has proved to be a stable measure when recorded from electrode Cz, showing reliable latency and amplitude results from test to retest.24 Some evidence demonstrated an increase in N1-P2 amplitude after auditory training.25 Increases in N1-P2 amplitude are thought to reflect increases in neural synchrony.24 Changes in neural firing patterns coinciding with learned behaviors are consistent with Hebbian principles of neural plasticity.26

Although the mismatch negativity (MMN) provides insight into physiologic processes underlying speech discrimination and training-related plasticity, this ERP may not be the most efficient response for assessing speech-sound representation in individuals.27,28 The MMN is difficult to extract from background electroencephalographic noise, often requires prolonged testing time and offline analyses, and may be absent even in normal listeners.27-29 The N1-P2 complex may provide a practical tool for clinicians because this response can be collected using most commercially available systems and requires little testing time and minimal offline data manipulation compared to other ERPs. N1-P2 reflects changes in neural activity that coincide with improved perception, a one-to-one relationship between changes in perception and physiology.30-32 Recent evidence indicated improved identification of synthetic double vowels as the difference between their F0s increases; it also showed that learning improves this ability, evidenced by an increase in P2 amplitude.33

In this study, hearing impaired children were presented with 5 vowels and 20 double vowels (pairs of synthetic vowels differing in F0) to identify during auditory behavioral recording (identification scores and reaction times) and ALRs, before and after 3 and 6 months of vowel auditory training. We hypothesized that hearing impaired children would have more difficulty identifying vowels, as assessed by identification scores and reaction times, prior to training. In addition, we hypothesized that changes in processing would be paralleled in the ERPs.


Materials and Methods

Thirty hearing impaired children with symmetrical binaural moderate to severe sensorineural hearing loss participated in this study. All of the participants used binaural behind-the-ear (BTE) hearing aids. They were 4 to 6 years old and included 15 female and 15 male participants, who were divided equally into control and test groups; the groups were matched for age and degree of hearing loss. They were all right-handed native speakers of Farsi (Persian). Both groups received traditional rehabilitation programs for their disability; in addition, the test group received the vowel auditory training. Participant recruitment and follow-up occurred between August 22, 2012, and April 20, 2013.

Ethics Statement

The parents of the participants gave written informed consent after the testing procedure was explained to them. The treatment procedure was performed in accordance with the Declaration of Helsinki. Our study was registered with the Iranian Registry of Clinical Trials (registration code IRCT2013011912174N1). The ethics committee of the University of Social Welfare and Rehabilitation Sciences approved our research.

Stimuli and Procedure

The stimuli were 5 recorded steady-state Persian vowels: "AE" as in /æ/, "ER" as in /e/, "AH" as in /ɒː/, "EE" as in /i:/, and "OO" as in /u:/. A male speaker produced these vowels in an acoustic chamber. Each vowel was 200 milliseconds in duration (16 bits, 48-kHz sample rate). F0 and formant frequencies were held constant, and the source signal was the same in all 5 vowels, simulating "equal vocal effort." To create the double vowels, the researchers summed the digital waveforms of 2 phonetically different vowels. Each pair contained 1 vowel with F0 set at 100 Hz; the other vowel's F0 was 0.5 semitones higher (1 semitone = 1/12 octave). Snyder and Alain34 showed that performance of young and older adults improved gradually with increasing ΔF0 up to 0.5 semitones, after which there was no significant improvement in performance from 0.5 semitones to 4 semitones ΔF0. Each vowel was paired with every other vowel, giving a total of 20 different pairs. The stimuli were presented binaurally through TDH-49 headphones at 80 dB nHL, randomized in blocks of 20 trials; the full presentation took approximately 1 hour. These served as the stimuli for the ALR recordings. Prior to data collection, participants were presented with each stimulus to become familiarized with the task and the response.
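A minimal sketch of how such double-vowel stimuli could be generated is shown below. This is our illustration, not the study's actual synthesis procedure: the crude source-filter approximation and the formant values for /æ/ and /i:/ are placeholder assumptions, whereas the 100-Hz base F0, the 0.5-semitone ΔF0, the 200-ms duration, and the 48-kHz sample rate come from the text.

```python
# Sketch: build a double vowel by summing two synthetic vowels whose
# F0s differ by 0.5 semitones. Formant values are illustrative only.
import numpy as np

FS = 48_000          # 48-kHz sample rate, as in the study
DUR = 0.200          # 200-ms steady-state vowels, as in the study

def synth_vowel(f0, formants, dur=DUR, fs=FS):
    """Crude source-filter vowel: harmonics of f0, each weighted by
    simple resonance gains centered on the given formant frequencies."""
    t = np.arange(int(dur * fs)) / fs
    wave = np.zeros_like(t)
    for k in range(1, int((fs / 2) // f0)):          # harmonics below Nyquist
        freq = k * f0
        gain = sum(1.0 / (1.0 + ((freq - fm) / 80.0) ** 2) for fm in formants)
        wave += gain * np.sin(2 * np.pi * freq * t)
    return wave / np.max(np.abs(wave))

semitone = 2 ** (1 / 12)                             # 1 semitone = 1/12 octave
f0_low = 100.0
f0_high = f0_low * semitone ** 0.5                   # ΔF0 = 0.5 semitones ≈ 102.9 Hz

# Hypothetical formant frequencies for /æ/ and /i:/ (placeholders).
vowel_ae = synth_vowel(f0_low,  formants=[660, 1700, 2400])
vowel_ee = synth_vowel(f0_high, formants=[270, 2300, 3000])

double_vowel = vowel_ae + vowel_ee                   # summed digital waveforms
double_vowel /= np.max(np.abs(double_vowel))         # normalize for playback
```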

Vowel Auditory Training

Vowel auditory training objectives typically are designed to contrast different vowels.11 We trained 5 vowels—/æ/, /e/, /ɒː/, /i:/, and /u:/—in 15 hearing impaired children (test group) using nonsense syllables. These syllables combined unvoiced consonants with the vowels, as follows: /Pæ/, /Shæ/, /Sæ/, /Hæ/, and /Kæ/; the same combinations were repeated for the other vowels. The syllables were presented orally from behind the children. After each presentation, we asked the children to identify and verbally reproduce the syllables. The training was conducted in 2 sessions (each lasting ~2 hours) per week. We recorded correct identification scores and reaction times for the 5 vowels prior to vowel auditory training and again after 3 and 6 months of training. Auditory late responses were recorded in parallel with the auditory behavioral responses (test duration ~90 minutes).

Auditory Late Responses

Prior to vowel auditory training, we recorded electrophysiological and behavioral responses of the test group, which received vowel auditory training. Electrophysiological responses were collected (rate: 1 per 2 seconds, ie, 0.5/s; artifact rejection: ±100 µV; gain: 10 000-20 000; trials per average: 25-50), digitized, and filtered (band-pass 1-30 Hz) from an array of 5 electrodes (2 inverting: Cz; 2 noninverting: mastoids; ground: forehead) using BioLogic Auditory Evoked Potential System version 7.0.0 software. Event related potentials were stored for offline analysis. Eye movements were monitored with electrodes placed at the outer canthi and at the superior and inferior margins of the orbit. The analysis epoch included 100 milliseconds of prestimulus activity and 500 milliseconds of poststimulus activity. For each participant, ERPs were then averaged separately for each double vowel, and we measured the amplitude of the N1-P2 component of the auditory late responses. After 3 and 6 months of vowel auditory training, the procedures were repeated.
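The sketch below schematizes the offline steps these recording parameters imply: band-pass filtering at 1-30 Hz, epoching from -100 to 500 ms, ±100 µV artifact rejection, averaging, and a peak-to-peak N1-P2 measure. It is not the BioLogic pipeline; the EEG sample rate and the N1/P2 search windows are conventional assumptions, not values stated in the paper.

```python
# Schematic ERP averaging and N1-P2 measurement (assumptions flagged).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000                      # assumed EEG sample rate in Hz (not stated)
PRE, POST = 0.100, 0.500       # 100-ms prestimulus, 500-ms poststimulus epoch

def average_erp(eeg_uv, onsets_s, fs=FS):
    """Band-pass 1-30 Hz, epoch around each onset, baseline-correct,
    reject epochs exceeding ±100 µV, and average the survivors."""
    b, a = butter(2, [1 / (fs / 2), 30 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg_uv)
    epochs = []
    for onset in onsets_s:
        i0 = int((onset - PRE) * fs)
        epoch = filtered[i0:i0 + int((PRE + POST) * fs)]
        epoch = epoch - epoch[: int(PRE * fs)].mean()   # prestimulus baseline
        if np.max(np.abs(epoch)) <= 100.0:              # reject > ±100 µV
            epochs.append(epoch)
    return np.mean(epochs, axis=0)                      # assumes >= 1 accepted

def n1_p2_amplitude(erp, fs=FS):
    """Peak-to-peak N1-P2: N1 minimum in ~80-150 ms and P2 maximum in
    ~150-250 ms post-onset (assumed, conventional windows)."""
    ms = lambda t: int((PRE + t) * fs)                  # post-onset time -> index
    n1 = erp[ms(0.080):ms(0.150)].min()
    p2 = erp[ms(0.150):ms(0.250)].max()
    return p2 - n1
```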

Auditory Behavioral Responses

Auditory behavioral responses, including identification scores (in percentages) and reaction times (in milliseconds), were recorded before and after 3 and 6 months of vowel auditory training for the test group and, without training, for the control group. All of the data were collected at the University of Social Welfare and Rehabilitation Sciences.

Statistical Analysis

As this study was a parallel clinical trial with 2 groups, data were compared between the test and control groups using repeated-measures analysis of variance (ANOVA). In addition, Pearson's correlation was used to assess the relationship between the auditory behavioral and ERP responses.
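As a hedged illustration of these analyses in common Python tools, the sketch below runs a repeated-measures ANOVA over sessions and a Pearson correlation; the data frame is hypothetical, and the original analyses were not necessarily run this way. Note that statsmodels' AnovaRM handles only the within-subject (session) factor; the group-by-session comparison reported in the paper is a mixed design and would need a mixed-design ANOVA.

```python
# Hedged sketch of the reported analyses (hypothetical data layout).
import pandas as pd
from scipy.stats import pearsonr
from statsmodels.stats.anova import AnovaRM

# Long format: one row per child per session (illustrative values only).
df = pd.DataFrame({
    "child":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "session": ["pre", "3mo", "6mo"] * 3,
    "rt_ms":   [736, 645, 599, 741, 639, 595, 727, 679, 628],
    "n1p2_uv": [5.6, 5.5, 6.7, 5.7, 5.6, 6.8, 5.5, 5.4, 6.6],
})

# Within-subject effect of session on reaction time (eg, within the test group).
print(AnovaRM(df, depvar="rt_ms", subject="child", within=["session"]).fit())

# Pearson correlation between N1-P2 amplitude and reaction time.
r, p = pearsonr(df["n1p2_uv"], df["rt_ms"])
print(f"r = {r:.3f}, P = {p:.3f}")
```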


Table 1. Group Mean Identification Scores (%) of the 5 Vowels Used in Our Study.a

Group / Vowel                               First Session    3 Months Later   6 Months Later
Control (without vowel auditory training)
  /æ/                                       68.00 ± 14.73    73.33 ± 12.34    70.67 ± 10.32
  /e/                                       65.33 ± 11.87    69.33 ± 14.86    68.00 ± 10.14
  /ɒː/                                      66.67 ± 14.47    73.33 ± 14.47    73.33 ± 12.34
  /i:/                                      62.67 ± 14.86    69.33 ± 14.86    69.33 ± 10.32
  /u:/                                      58.67 ± 14.07    66.67 ± 17.99    64.00 ± 13.52
Test (with vowel auditory training)
  /æ/                                       64.00 ± 15.49    77.33 ± 10.32    96.00 ± 8.28
  /e/                                       53.33 ± 12.34    72.00 ± 10.14    90.67 ± 10.32
  /ɒː/                                      61.33 ± 14.07    76.00 ± 13.52    92.00 ± 10.14
  /i:/                                      50.67 ± 12.79    66.67 ± 9.75     92.00 ± 10.14
  /u:/                                      57.33 ± 12.79    72.00 ± 12.64    93.33 ± 9.75

a Values are mean ± standard deviation. The data show that both groups of participants were approximately proficient in correctly identifying vowels.

Table 2. Group Mean Reaction Times (ms) Needed to Identify the Vowels Used in Our Study.a

Group / Vowel                               First Session     3 Months Later    6 Months Later
Control (without vowel auditory training)
  /æ/                                       818.67 ± 110.18   818.00 ± 67.31    818.33 ± 63.32
  /e/                                       802.67 ± 97.35    788.00 ± 73.60    793.33 ± 79.16
  /ɒː/                                      802.00 ± 89.61    709.33 ± 76.48    707.33 ± 74.20
  /i:/                                      805.07 ± 127.87   736.00 ± 92.10    749.33 ± 108.59
  /u:/                                      767.33 ± 106.33   748.67 ± 121.99   761.33 ± 120.34
Test (with vowel auditory training)
  /æ/                                       736.00 ± 86.42    645.33 ± 67.28    599.33 ± 66.05
  /e/                                       741.33 ± 122.17   638.67 ± 84.67    595.33 ± 68.75
  /ɒː/                                      726.67 ± 112.22   678.67 ± 124.89   628.00 ± 123.82
  /i:/                                      782.00 ± 180.20   717.33 ± 241.55   655.33 ± 155.60
  /u:/                                      722.67 ± 130.95   654.67 ± 141.26   611.33 ± 126.76

a Values are mean ± standard deviation.

Results and Analysis

Behavioral Data

To demonstrate the ability to perceive the vowels used in this study, we analyzed the auditory behavioral responses of both groups. This measure reflects identification scores and reaction times before and after 3 and 6 months of vowel auditory training. The test group became nearly proficient at correctly identifying some vowels (Table 1). There was a significant difference between the test and control groups in correctly identifying 2 vowels (F = 5.320, P = .029 for /æ/; F = 2.023, P = .166 for /e/; F = 1.891, P = .180 for /ɒː/; F = 0.573, P = .455 for /i:/; and F = 6.669, P = .015 for /u:/), suggesting that training affected the identification scores of vowels /æ/ and /u:/. Also, there was a significant difference between the reaction times of the test and control groups for vowels /æ/ (F = 43.20, P < .001), /e/ (F = 19.42, P < .001), and /u:/ (F = 4.75, P = .038), suggesting an effect of vowel auditory training on decreasing reaction times for at least some vowels (Table 2). Figure 1 shows the group mean reaction time (RT) values with ±2 standard errors for vowel /u:/ in both groups.

Overall, participants in the control group tended to take more time than those in the test group to produce their responses. The RT data presented in Figure 1 suggest that vowel identification became faster in the test group with vowel auditory training.

Electrophysiological Data

The effect of vowel auditory training on N1-P2 amplitude was significant for 10 double-vowel stimuli (F = 7.233, P = .008 for AE.EE; F = 7.649, P = .006 for AE.ER; F = 13.473, P = .001 for AH.AE; F = 16.448, P < .001 for AH.ER; F = 7.403, P = .007 for EE.AE; F = 29.575, P < .001 for EE.AH; F = 15.100, P < .001 for EE.ER; F = 25.791, P < .001 for EE.OO; F = 9.699, P = .003 for ER.OO; and F = 8.340, P = .005 for OO.ER) and marginally significant for AE.AH (F = 3.424, P = .060). These results demonstrate that N1-P2 amplitude increased in response to at least half of the double-vowel stimuli after 3 and 6 months of vowel auditory training. Table 3 shows the group mean N1-P2 amplitudes for the control and test groups.


Figure 1. Group mean reaction time (RT) values (±2 standard errors) for vowel /u:/ across the 3 test periods. RT values decreased after vowel auditory training in the test group (bottom panel); the control group (top panel) tended to take more time than the test group to produce its responses.

Table 3. Group Mean N1-P2 Amplitudes Elicited by Double-Vowel Stimuli for Control and Test Groups.

Group                                       First Session   Second Session (3 Months Later)   Third Session (6 Months Later)   P Value
Control (without vowel auditory training)   5.81 ± 2.55     5.84 ± 2.38                       5.71 ± 2.35                      .209
Test (with vowel auditory training)         5.64 ± 2.86     5.56 ± 2.61                       6.71 ± 2.79                      < .001a

Values are mean ± SD.
a Bonferroni test indicated a statistically significant difference in the test group between the first and third sessions (6 months of auditory training) and between the second (3 months of auditory training) and third sessions.

A Bonferroni test indicated a statistically significant difference in the test group between the first and third sessions (6 months of auditory training) and between the second (3 months of auditory training) and third sessions. In addition, Figure 2 shows the group mean ERPs elicited by the double-vowel stimuli for the control and test groups separately. Both groups showed a characteristic N1-P2 response following stimulus onset, with larger responses in the test group. This amplitude difference could in part be due to improvement of N1-P2 responses in the test group as a result of vowel auditory training.

The N1-P2 amplitude correlated significantly with reaction times in identification of vowels in the test group (Table 4).

Discussion

The ability to correctly identify single and double vowels improved after vowel auditory training. In the present study, overall, the control group was less accurate in identifying vowels than the test group, suggesting that hearing impaired children have more difficulty


Figure 2. Group mean ERPs elicited by the double-vowel stimuli: test and control groups' event related potential waveforms measured from electrode Cz for the 3 periods (prior to and after 3 and 6 months of vowel auditory training).

Table 4. Correlation Between N1-P2 Amplitude and Reaction Times in Test Group.a

N1-P2 Amplitude vs Reaction Time (same period)   Pearson Correlation   Significance (2-Tailed)
Prior to vowel auditory training                 .670                  .006
After 3 months of vowel auditory training        .650                  .009
After 6 months of vowel auditory training        .818                  < .001

a The N1-P2 amplitude correlated significantly with reaction times for identification of vowels.

identifying vowels without vowel auditory training. This problem may in part be related to deficits in parsing concurrent sounds.35-38

Behavioral Results

The test group showed statistically significant changes in RT results (for 3 vowels) after 3 and 6 months of training relative to the pretraining results (P < .05). The current results did not show a persistent, significant decrease in RTs for vowels in the control group, although this group exhibited an overall but nonsignificant increase in the identification scores of some vowels over the 6 months. In the test group, vowel auditory training produced a substantial decrease in RTs for all of the vowels. The analysis of the RT data suggested that hearing impaired children without vowel auditory training required more processing time for correct vowel identification. We also found statistically significant improvements in the identification scores for vowels /æ/ and /u:/.


It is apparent that vowel identification improved after 6 months of vowel auditory training. Together, the accuracy (correct identification score) and RT data suggest that vowel auditory training leads to decreased stimulus processing time. This is consistent with a study by Atienza et al39 demonstrating the improved time course of perceptual learning as revealed by changes in reaction times after auditory training.

According to our findings, acoustic training with nonsense syllables (bottom-up stimuli) improves the auditory information reaching the higher auditory system and, through auditory plasticity, results in the ultimate betterment of higher order (top-down) processes such as speech perception. This hypothesis could be studied comprehensively in future research.

Electrophysiological Results



The electrophysiological results provide comprehensive time-locked evidence of training-related processes.40-43 In the present study, we examined changes of N1-P2 amplitude associated with vowel auditory training. The N1-P2 complex event related potential holds promise as a clinical tool for assessing changes in neural activity associated with auditory rehabilitation.24 Overall, ERP N1-P2 amplitude was larger in the test group, an effect at least partly attributable to vowel auditory training. N1-P2 amplitude showed a significant increase after 3 and 6 months of vowel auditory training (for at least half of the double vowels) relative to the control group. Furthermore, the N1-P2 amplitude changes correlated significantly with vowel identification in the test group.

In this study, the training-related increase in N1-P2 amplitude suggests improvement in the early bottom-up parsing of concurrent vowels without active participation. This interpretation is based on the fact that the N1-P2 amplitude becomes larger after auditory training.24 In addition, the presence of the N1-P2 complex in response to an auditory stimulus provides physiologic evidence that sensory information has arrived at the auditory cortex. Hence, this complex reflects the presence and registration of audible stimuli (a bottom-up process); although necessary for discrimination (a top-down process), its presence does not by itself indicate discrimination.44,45 In the present study, we showed with this electrophysiologic response that hearing impaired children receiving vowel auditory training with nonsense syllables, which precludes top-down, high-level cognitive processing such as categorical perception, gain greater ability to detect concurrent speech sounds without active participation. The increased N1-P2 amplitude may be related to improvements in frequency selectivity, which might affect the ability to form accurate F0 representations for concurrent vowels.46

Acknowledgments

The authors are especially thankful to Al Bregman, who, after a very helpful electronic discussion, suggested E. Sussman for further follow-up of the project, and to Elyse S. Sussman for introducing Joel S. Snyder (Department of Psychology, University of Nevada, Las Vegas), who kindly provided co-supervision and aid in this study and who modestly declined to be named as a co-author. The authors also thank all the children and their families for their participation in this research. This study was conducted as a PhD thesis at the University of Social Welfare and Rehabilitation Sciences.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

References

1. Bregman AS, ed. Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, MA: Bradford Books/MIT Press; 1990.
2. Ritter W, Sussman E, Molholm S. Evidence that the mismatch negativity system works on the basis of objects. Neuroreport. 2000;11:61-63.
3. Sussman E, Ritter W, Vaughan HG Jr. Attention affects the organization of auditory input associated with the mismatch negativity system. Brain Res. 1998;789:130-138.
4. Sussman E, Ritter W, Vaughan HG Jr. An investigation of the auditory streaming effect using event-related brain potentials. Psychophysiology. 1999;36:22-34.
5. Demany L. Auditory stream segregation in infancy. Infant Behav Dev. 1982;5:261-276.
6. Sussman E, Ceponiene R, Shestakova A, Naatanen R, Winkler I. Auditory stream segregation processes operate similarly in school-aged children and adults. Hear Res. 2001;153:108-114.
7. Winkler I, Kushnerenko E, Horváth J, et al. Newborn infants can organize the auditory world. PNAS. 2003;100:11812-11815.
8. Leslie AM, Xu F, Tremoulet PD, Scholl BJ. Indexing and the object concept: developing 'what' and 'where' systems. Trends Cognit Sci. 1998;2:10-18.
9. Bregman AS. Auditory scene analysis. In: Dallos P, Oertel D, eds. The Senses: A Comprehensive Reference. San Diego, CA: Elsevier; 2008:861-870.
10. Katz J, Medwetsky L, Burkard R, Hood LJ. Handbook of Clinical Audiology. Baltimore, MD: Lippincott Williams & Wilkins; 2009.
11. Tye-Murray N. Foundations of Aural Rehabilitation: Children, Adults, and Their Family Members. 2nd ed. Clifton Park, NY: Thomson Delmar Learning; 2004.
12. Martin BA, Sigal A, Kurtzberg D, Stapells DR. The effects of decreased audibility produced by high-pass noise masking


on cortical event-related potentials to speech sounds /ba/ and /da/. J Acoust Soc Am. 1997;101:1585-1599.
13. Ostroff JM, Martin BA, Boothroyd A. Cortical evoked response to acoustic change within a syllable. Ear Hear. 1998;19:290-297.
14. Sharma A, Dorman M. Cortical auditory evoked potential correlates of categorical perception of voice-onset time. J Acoust Soc Am. 1999;106:1078-1083.
15. Sharma A, Dorman M. Neurophysiologic correlates of cross-language phonetic perception. J Acoust Soc Am. 2000;107:2697-2703.
16. Whiting KA, Martin BA, Stapells DR. The effects of broadband noise masking on cortical event-related potentials to speech sounds /ba/ and /da/. Ear Hear. 1998;19:218-231.
17. Vaughan HG Jr, Ritter W. The sources of auditory evoked responses recorded from the human scalp. Electroencephalogr Clin Neurophysiol. 1970;28:360-367.
18. Naatanen R, Picton T. The N1 wave of the human electric and magnetic response to sound: a review and analysis of the component structure. Psychophysiology. 1987;24:375-425.
19. Wolpaw JR, Penry JK. A temporal component of the auditory evoked response. Electroencephalogr Clin Neurophysiol. 1975;39:609-620.
20. Woods D. The component structure of the N1 wave of the human auditory evoked potential. In: Karmos G, Molnar M, Csepe V, Czigler I, Desmedt J, eds. Perspectives of Event-Related Potentials Research. Amsterdam, Netherlands: Elsevier; 1995:102-109.
21. Korczak PA, Kurtzberg D, Stapells DR. Effects of sensorineural hearing loss and personal hearing aids on cortical event-related potential and behavioral measures of speech-sound processing. Ear Hear. 2005;26(2):165-185.
22. Tremblay KL, Billings CJ, Friesen LM, Souza PE. Neural representation of amplified speech sounds. Ear Hear. 2006;27(2):93-103.
23. Tremblay KL, Kraus N. Beyond the ear: central auditory plasticity. Otorhinolaryngologia. 2002;52(3):93-100.
24. Tremblay K, Kraus N, McGee T, Ponton C, Otis B. Central auditory plasticity: changes in the N1-P2 complex after speech-sound training. Ear Hear. 2001;22:79-90.
25. Hayes E, Warrier CM, Nicol T, Zecker SG, Kraus N. Neural plasticity following auditory training in children with learning problems. Clin Neurophysiol. 2003;114:673-684.
26. Hebb DO. Organization of Behavior. New York, NY: John Wiley & Sons; 1949.
27. McGee T, Kraus N, Nicol T. Is it really a mismatch negativity? An assessment of methods for determining response validity in individual subjects. Electroencephalogr Clin Neurophysiol. 1997;104:359-368.
28. Ponton C, Don M, Eggermont JJ, Kwong B. Integrated mismatch negativity (MMNi): a noise free representation of evoked responses allowing single point distribution-free statistical tests. Electroencephalogr Clin Neurophysiol. 1997;104:381-382.
29. Hall JW. New Handbook of Auditory Evoked Responses. Boston, MA: Pearson; 2006.
30. Kraus N, McGee T, Carrell T, King C, Tremblay K, Nicol T. Central auditory system plasticity associated with speech discrimination training. J Cogn Neurosci. 1995;7:25-32.

31. Tremblay K, Kraus N, Carrell T, McGee T. Central auditory system plasticity: generalization to novel stimuli following listening training. J Acoust Soc Am. 1997;102:3762-3773.
32. Tremblay K, Kraus N, McGee T. The time course of auditory perceptual learning: neurophysiologic changes during speech-sound training. Neuroreport. 1998;9:3557-3560.
33. Reinke KS, He Y, Wang C, Alain C. Perceptual learning modulates sensory evoked response during vowel segregation. Brain Res Cogn Brain Res. 2003;17(3):781-791.
34. Snyder JS, Alain C. Age-related changes in neural activity associated with concurrent vowel segregation. Brain Res Cogn Brain Res. 2005;24:492-499.
35. Divenyi PL, Haupt KM. Audiological correlates of speech understanding deficits in elderly listeners with mild-to-moderate hearing loss. I. Age and lateral asymmetry effects. Ear Hear. 1997;18(1):42-61.
36. Divenyi PL, Haupt KM. Audiological correlates of speech understanding deficits in elderly listeners with mild-to-moderate hearing loss. II. Correlation analysis. Ear Hear. 1997;18(2):100-113.
37. Divenyi PL, Haupt KM. Audiological correlates of speech understanding deficits in elderly listeners with mild-to-moderate hearing loss. III. Factor representation. Ear Hear. 1997;18(3):189-201.
38. Grimault N, Micheyl C, Carlyon RP, Arthaud P, Collet L. Perceptual auditory stream segregation of sequences of complex sounds in subjects with normal and impaired hearing. Br J Audiol. 2001;35(3):173-182.
39. Atienza M, Cantero JL, Dominguez-Marin E. The time course of neural changes underlying auditory perceptual learning. Learn Mem. 2002;9:138-150.
40. Courchesne E. Neurophysiological correlates of cognitive development: changes in long-latency event-related potentials from childhood to adulthood. Electroencephalogr Clin Neurophysiol. 1978;45:468-482.
41. Ponton CW, Don M, Eggermont JJ, Waring MD, Masuda A. Auditory system plasticity in children after long periods of complete deafness. Neuroreport. 1996;8:61-65.
42. Ponton CW, Don M, Eggermont JJ, Waring MD, Masuda A. Maturation of human cortical auditory function: differences between normal hearing and cochlear implant children. Ear Hear. 1996;17:430-437.
43. Ponton CW, Eggermont JJ, Kwong B, Don M. Maturation of human central auditory system activity: evidence from multi-channel evoked potentials. Electroencephalogr Clin Neurophysiol. 2000;111:220-236.
44. Martin BA, Boothroyd A. Cortical, auditory, event-related potentials in response to periodic and aperiodic stimuli with the same spectral envelope. Ear Hear. 1999;20(1):33-44.
45. Martin BA, Tremblay KL, Stapells DR. Principles and applications of cortical auditory evoked potentials. In: Burkard R, Don M, Eggermont J, eds. Auditory Evoked Potentials. Baltimore, MD: Lippincott Williams & Wilkins; 2007:482-507.
46. Sommers MS, Humes LE. Auditory filter shapes in normal-hearing, noise-masked normal, and elderly listeners. J Acoust Soc Am. 1993;93:2903-2914.
