J Am Acad Audiol 25:133–140 (2014)

Effects of Hearing Aid Settings for Electric-Acoustic Stimulation
DOI: 10.3766/jaaa.25.2.2
Margaret T. Dillon*, Emily Buss*, Harold C. Pillsbury*, Oliver F. Adunka*, Craig A. Buchman*, Marcia C. Adunka†

Abstract

Background: Cochlear implant (CI) recipients with postoperative hearing preservation may utilize an ipsilateral bimodal listening condition known as electric-acoustic stimulation (EAS). Studies on EAS have reported significant improvements in speech perception over CI-alone listening conditions. Adjusting the hearing aid (HA) settings to match prescriptive targets routinely used in the programming of conventional amplification may provide additional gains in speech perception.

Purpose: To investigate the difference in users' speech perception scores when listening with the recommended HA settings for EAS patients versus HA settings adjusted to match National Acoustic Laboratories' nonlinear fitting procedure version 1 (NAL-NL1) targets.

Research Design: Prospective analysis of the influence of HA settings.

Study Sample: Nine EAS recipients with greater than 12 mo of listening experience with the DUET speech processor.

Intervention: Subjects were tested in the EAS listening condition with two different HA setting configurations. Speech perception materials included consonant-nucleus-consonant (CNC) words in quiet, AzBio sentences in 10-talker speech babble at a signal-to-noise ratio (SNR) of +10 dB, and the Bamford-Kowal-Bench sentences in noise (BKB-SIN) test.

Data Collection and Analysis: The speech perception performance on each test measure was compared between the two HA configurations.

Results: Subjects experienced a significant improvement in speech perception with the HA settings adjusted to match NAL-NL1 targets over the recommended HA settings.

Conclusions: EAS subjects have been shown to experience improvements in speech perception when listening to ipsilateral combined stimulation. This population's abilities may be underestimated with current HA settings. Tailoring the HA output to the patient's individual hearing loss offers improved outcomes on speech perception measures.
Key Words: cochlear implant, combined stimulation, electric-acoustic stimulation

Abbreviations: BKB-SIN = Bamford-Kowal-Bench sentences in noise; CI = cochlear implant; CNC = consonant-nucleus-consonant; EAS = electric-acoustic stimulation; HA = hearing aid; LTASS = long-term average speech spectrum; MSTB = Minimum Speech Test Battery; NAL-NL1 = National Acoustic Laboratories' nonlinear fitting procedure version 1; RECD = real-ear-to-coupler difference; SNR = signal-to-noise ratio; UCL = uncomfortable loudness

*University of North Carolina at Chapel Hill, Department of Otolaryngology/Head and Neck Surgery; †Department of Audiology, UNC Healthcare

Correspondence: Margaret T. Dillon, AuD, 170 Manning Drive, CB#7070, Physician's Office Building G190, Chapel Hill, NC 27517; Phone: 919-966-5251; Fax: 919-966-7941; E-mail: [email protected]

This work was supported in part by MED-EL Corporation.



The ability to preserve residual hearing after cochlear implantation and subsequently provide electric and acoustic stimulation in the same ear is known as electric-acoustic stimulation (EAS; von Ilberg et al, 1999). In this ipsilateral listening condition, the hearing aid (HA) provides acoustic stimulation at low frequencies, while the cochlear implant (CI) provides electric stimulation at higher frequencies. Initially, this was achieved by using two independent devices in the same ear: an in-the-ear (ITE) HA and a CI speech processor. Today, the DUET (MED-EL Corporation) and Hybrid (Cochlear Corporation) speech processors combine electric and acoustic technologies into a single unit.

Multiple reports have shown improvements in speech perception with EAS, in both quiet and noise, over preoperative listening conditions (Gantz et al, 2005; Kiefer et al, 2005; Adunka et al, 2010). Further, EAS users experience a significant speech perception benefit in noise over conventional CI recipients with a full insertion and a full-frequency CI map (Lorens et al, 2008). In consideration of these improvements, recommendations are needed regarding programming of the acoustic and electric components in tandem for maximum speech perception ability.

In the conventional CI population, optimization of a patient's map is critical for maximizing speech perception outcomes (Skinner, 2003; Baudhuin et al, 2012). Mapping techniques, such as loudness balancing (Dawson et al, 1997), have been shown to improve speech perception outcomes. Rarely is a global fitting recommendation appropriate for all CI patients, confirming the importance of the audiologist's expertise in optimized fitting of electric stimulation. Similarly, fitting recommendations exist for bimodal CI users, who listen with a CI in one ear and a HA in the contralateral ear.
It is well documented that most CI patients who have residual hearing in the contralateral ear experience improvements in speech perception with the addition of a HA in that ear (Armstrong et al, 1997; Ching et al, 2004; Kong et al, 2005; Gifford et al, 2007; Potts et al, 2009). When fitting bimodal patients, recommendations include prescriptive fitting of the HA, individually fine-tuning the gain and frequency response (Ching, Psarros, et al, 2001; Ching et al, 2004), and balancing the loudness between the HA and CI (Ching, Psarros, et al, 2001; Keilmann et al, 2009).

Presently, programming recommendations for EAS describe how the acoustic and electric modality settings are determined, with some attention given to ensuring that the signals provided by the two devices are complementary. Previous research has evaluated the amount of information overlap presented by the two modalities (Vermeire et al, 2008), the determination of the low-frequency cutoff for electric stimulation (Kiefer et al, 2005), and the amount of residual hearing required to support a benefit from acoustic amplification (Helbig and Baumann, 2010). Attention to programming parameters is crucial, as Polak et al (2010) reported that even relatively minor changes in the HA settings may negatively influence a patient's speech perception. Further investigation is needed to determine how to optimize the fitting of EAS patients to offer the maximum speech perception benefit.

Current HA setting recommendations for the DUET speech processor are based on the work of Polak et al (2010). These recommendations include gain settings based on the half-gain rule and slope determined by half of the difference between thresholds at 250 and 500 Hz. Though subjects showed improved speech perception with these settings as compared to other configurations evaluated in that study, application of a prescriptive method currently used in programming conventional amplification may offer additional benefit. In bimodal patients, the National Acoustic Laboratories' nonlinear fitting procedure version 1 (NAL-NL1; Byrne et al, 2001) prescriptive method has been used in programming of the contralateral HA (Ching et al, 2004). The NAL-NL1 rationale is to maximize speech intelligibility with amplification set at or below the levels required to restore normal loudness, as compared to the half-gain rule's aim of loudness equalization across frequencies (Byrne et al, 2001; Ching, Dillon, et al, 2001; Dillon, 2001). NAL-NL1 targets are generated from a patient's residual hearing threshold at each frequency and the intensity level of the speech signal, an approach that accounts for the loss in acuity associated with different degrees of hearing loss even when the signal is audible.
This strategy may offer a more appropriate fit of the acoustic portion of the DUET speech processor than current recommendations, especially for patients with greater degrees of hearing loss, where equalizing loudness across frequency may not support intelligibility of the speech signal (Byrne et al, 2001). Programming the HA to match NAL-NL1 targets may improve the audibility of the acoustic portion of the signal; however, it is unclear how much additional benefit would be obtained with this fitting method in DUET users.

Previous work in bimodal listeners has shown that even limited amounts of acoustic low-frequency information provide substantial benefit for speech perception when presented in conjunction with electric stimulation. Zhang, Dorman, et al (2010) assessed the minimum amount of acoustic information needed for bimodal users to show improvements in speech perception. A benefit of low-frequency acoustic information was observed in both quiet and noise, even when the stimulus was low-pass filtered at 125 Hz, indicating that even a very sparse acoustic signal is beneficial. It is unclear, however, how much acoustic information is required to maximize performance. Zhang, Dorman, et al (2010) reported asymptotic performance in quiet with the 125 Hz low-pass filtered stimulus, whereas Zhang, Spahr, et al (2010) reported additional benefits when increasing the low-pass filter cutoff to 750 Hz. Both studies indicate additional benefits with increasing filter cutoffs in noise. These findings with bimodal listening suggest that increased low-frequency audibility of the acoustic stimulus could benefit the performance of EAS users, especially in background noise.

The purpose of this project was to evaluate the influence of low-frequency acoustic stimulation on EAS recipients' speech perception. Specifically, the research question was whether the HA targets routinely used by audiologists in the fitting of conventional amplification could support better speech perception outcomes than the current manufacturer-recommended settings associated with the MED-EL EAS clinical trial.

METHODS

The study site's institutional review board (IRB) approved this project, and informed consent was obtained from each subject. Nine subjects were previous participants in the MED-EL EAS clinical trial assessing the efficacy and safety of the MED-EL (Innsbruck, Austria) EAS system. All participants were implanted with either the PULSAR or SONATA FlexEAS internal device and fit with the DUET external speech processor. All subjects had completed the 12 mo post-initial-EAS-activation test interval, which was an individual's endpoint for the clinical trial. Listening experience with the DUET speech processor ranged from 1.1 to 5.0 yr (M = 2.0 yr, SD = 1.3 yr). Age at implantation ranged from 39.0 to 69.5 yr (M = 55.4 yr, SD = 10.1 yr). Residual unaided thresholds in the implanted ear at the time of testing are displayed in Table 1. All subjects had documented stability in speech perception performance.

The DUET Speech Processor

The DUET speech processor offers HA and CI technologies in a single device and was provided to subjects as part of the EAS clinical trial. The CI portion of the device closely resembles the Tempo speech processor, with three volume settings and three map options. Frequency selection for the low-frequency cutoff of electric stimulation ranges from 200 to 1100 Hz. The amplification frequency range of the acoustic portion is 125–1800 Hz. The compression ratio of the HA portion of the device is fixed at 1.33:1, and trimpots are used to adjust four parameters: gain, slope, volume, and kneepoint. The range of values is 27 to 42 dB SPL for gain, 40 to 70 dB SPL for kneepoint, and 0 to 40 dB SPL for volume. The slope may be adjusted to scale back the low-frequency gain at a rate of 0, 6, 12, or 18 dB/octave.

Cochlear Implant Programming

Maps for electric stimulation were created using the MED-EL CI-Studio software. All subjects were fit with the Continuous Interleaved Sampling (CIS) signal coding strategy (Wilson et al, 1991). All subjects had a high-frequency cutoff of electric stimulation of 8500 Hz. The low-frequency cutoff was determined by identifying the frequency where the measured unaided threshold in the implanted ear fell at or below 65 dB HL. The low-frequency cutoff of electric stimulation had been determined at a previous test interval, with values included in Table 1. For this study, the electric portion of the DUET speech processor remained unchanged at the time of testing, allowing subjects to listen with their familiar, everyday CI map.

Hearing Aid Programming

Two HA settings were evaluated, one that followed the manufacturer fitting recommendations (labeled

Table 1. Unaided Thresholds and Program Settings

         Unaided thresholds (dB HL)     Implant                HA, Protocol                      HA, NAL-NL1 matched
Subject  250  500  750  1000  1500      Low-freq cutoff (Hz)   Gain (dB)  Slope (dB/oct)  Target (Hz)   Gain (dB)  Slope (dB/oct)  Target (Hz)
S1        45   75   75    75    95      500                    37.5       18               250          42         18              1000
S2        30   60   80    90   105      750                    30         18               250          32         18               500
S3        70   60   75    95   100      500                    30          0               250          37         12               500
S4        45   80   75    80   100      500                    40         18               750          42         18               750
S5        30   50   70   105   115      750                    25          6               250          32         18               500
S6        50   65   75    80    80      500                    32.5        6               500          34.5       18               750
S7        35   60   80   105   115      750                    30         12               250          34.5       18               500
S8        50   75   80    95   110      750                    37.5       12               250          42         18               250
S9        20   55   85    95   105      750                    27.5       18              1000          34.5       18               750

Note: The unaided thresholds were measured at the time of testing. The low-frequency cutoff for electric stimulation was determined at a prior interval and was not adjusted during the test session. The "Target" column indicates the highest frequency that met NAL-NL1 targets for each HA setting configuration.



“Protocol”) and one where the parameters were adjusted to match NAL-NL1 targets (labeled “NAL-NL1 matched”). The gain and slope values for both HA settings are detailed in Table 1. Programming of both settings did not take into consideration the low-frequency cutoff of electric stimulation; that is, HA settings were determined solely on the basis of unaided thresholds.

In the EAS clinical trial, programming of the acoustic component followed the manufacturer recommendations. This included adjusting the gain based on the half-gain rule (500 Hz unaided threshold [dB HL] / 2) (Lybarger, 1944). Slope was defined as one-half of the difference between the unaided thresholds (dB HL) at 250 and 500 Hz; the resulting value determined whether the low-frequency gain would be filtered by 0, 6, 12, or 18 dB/octave. If the calculation for gain or slope resulted in a value beyond the limits of the device, the setting was moved to the maximum or minimum value. For example, a subject with a threshold of 20 dB HL at 250 Hz and 50 dB HL at 500 Hz would be programmed with 25 dB of gain. The programming guidelines recommend setting the kneepoint and volume to the subject's comfort level. Volume was adjusted to subject preference when listening to the combined input of the HA and CI to achieve a loudness balance between the two modes of stimulation. The kneepoint was held constant across all subjects at 55 dB SPL, a value recommended by the study protocol.

The NAL-NL1 matched settings were created by adjusting the slope, gain, and volume parameters to match NAL-NL1 targets. The measured unaided thresholds (250–6000 Hz) were entered to generate NAL-NL1 targets with the Verifit system (Audioscan). The simulated real-ear function of the Verifit system was used to verify that changes to the HA output met NAL-NL1 targets using the long-term average speech spectrum (LTASS); target levels were defined in terms of SPL. The acoustic tonehook of the DUET speech processor was connected to the 2 cc coupler, and average real-ear-to-coupler difference (RECD) and uncomfortable loudness (UCL) values were selected. The presentation level selected for the speech signal was 70 dB SPL, which corresponded to the sound-field presentation level used to assess aided speech perception in the EAS clinical trial. The volume was set to the maximum for all subjects.

Table 1 indicates the highest frequency at which targets were met with the Protocol versus the NAL-NL1 matched settings. A target was considered matched if the LTASS line fell within 3 dB of the frequency's calculated NAL-NL1 target. As the frequency range of the DUET speech processor is 125–1800 Hz, the highest possible target to match was 1500 Hz. Due to limitations in programming frequency selectivity, it was not always the case that all lower-frequency targets were also met.
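As a concrete sketch of the two rules above, the fragment below computes a Protocol gain and raw slope from a subject's unaided thresholds and finds the highest frequency at which a measured LTASS output falls within 3 dB of its NAL-NL1 target. The gain limits, the measured and target output values, and the function names are illustrative assumptions, not values from the study; the snapping of slope to the available 0/6/12/18 dB/octave trimpot steps is device-specific and is not modeled here.

```python
def protocol_settings(thr_250, thr_500, gain_limits=(25.0, 42.0)):
    """Manufacturer ('Protocol') fitting rules sketched from the text.

    Gain follows the half-gain rule applied to the 500 Hz unaided
    threshold, clamped to assumed device limits (the true limits are
    set by the DUET trimpots). Slope is half the 250-500 Hz threshold
    difference, before snapping to an available trimpot step.
    """
    lo, hi = gain_limits
    gain = min(max(thr_500 / 2.0, lo), hi)
    slope = max((thr_500 - thr_250) / 2.0, 0.0)  # dB/octave, unsnapped
    return gain, slope


def highest_matched_target(measured_spl, targets_spl, tol_db=3.0):
    """Highest frequency (Hz) at which the measured LTASS output falls
    within tol_db of its NAL-NL1 target (the 'Target' column of Table 1)."""
    matched = [f for f, t in targets_spl.items()
               if f in measured_spl and abs(measured_spl[f] - t) <= tol_db]
    return max(matched) if matched else None


# Worked example from the text: 20 dB HL at 250 Hz, 50 dB HL at 500 Hz.
print(protocol_settings(20, 50))  # (25.0, 15.0)

# Hypothetical verification run: targets met through 750 Hz, missed at 1000 Hz.
measured = {250: 62, 500: 58, 750: 55, 1000: 40}
targets = {250: 60, 500: 57, 750: 53, 1000: 50}
print(highest_matched_target(measured, targets))  # 750
```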

Speech Perception Evaluation

Aided speech perception testing was conducted in a soundproof booth, with the subject seated 1 m from the loudspeaker at 0° azimuth. During the subjects' previous participation in the EAS clinical trial, recorded word and sentence materials were presented at 70 dB SPL. Since some subjects were at ceiling on the clinical trial test materials, testing for this study was completed with recorded material from the new CI Minimum Speech Test Battery (MSTB), including consonant-nucleus-consonant (CNC; Peterson and Lehiste, 1962) words in quiet, AzBio sentences (Spahr and Dorman, 2004; Spahr et al, 2012) in 10-talker babble at a signal-to-noise ratio (SNR) of +10 dB, and the Bamford-Kowal-Bench sentences in noise (BKB-SIN) test (Bench et al, 1979). These materials were presented at 60 dB SPL, the recommended presentation level for the MSTB.

Since EAS recipients typically have substantial residual hearing in both ears, the influence of residual hearing in the contralateral ear was considered. The audiometer provides two channels to present either recorded materials or masking. Presentation of the CNC and BKB-SIN materials required only one channel, which allowed the contralateral ear to be effectively masked using the second channel via a deeply inserted ER-1 eartip. The AzBio sentences required one channel for the sentences and the second channel for the 10-talker babble, which were summed before presentation through the loudspeaker. Since masking of the contralateral ear could not be achieved for this measure, a deeply inserted eartip was used to attenuate signals to the contralateral ear.

Subjects had previous listening experience with the Protocol settings, while the NAL-NL1 matched settings were adjusted acutely. Subjects were blinded to the specific HA setting in each test condition; however, some noted an inability to hear a difference in sound quality between the two settings when listening in quiet.
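For illustration, summing a sentence channel with a babble channel at a fixed SNR (as was done for the AzBio materials at +10 dB) can be sketched as below. The signals here are synthetic noise stand-ins for the recorded sentence and 10-talker babble, and the RMS-based scaling is a generic approach, not the actual implementation used by the audiometer or the MSTB materials.

```python
import numpy as np

def scale_noise_for_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise RMS ratio equals `snr_db`."""
    rms = lambda x: float(np.sqrt(np.mean(np.square(x))))
    target_noise_rms = rms(speech) / (10.0 ** (snr_db / 20.0))
    return noise * (target_noise_rms / rms(noise))

# Synthetic one-second stand-ins for a recorded sentence and babble track.
rng = np.random.default_rng(0)
speech = rng.normal(0.0, 1.0, 16000)
babble = rng.normal(0.0, 0.3, 16000)

scaled = scale_noise_for_snr(speech, babble, 10.0)
mix = speech + scaled  # summed signal, as presented through one loudspeaker

snr = 20.0 * np.log10(np.sqrt(np.mean(speech**2)) / np.sqrt(np.mean(scaled**2)))
print(round(snr, 1))  # 10.0
```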
Data Analysis

Speech perception data were analyzed using parametric statistical methods and commercially available software routines. The speech perception scores for each task were compared between the two listening conditions (Protocol versus NAL-NL1 matched HA settings) using two-tailed t-tests with a significance level of α = 0.05. The pattern of significance was the same whether or not the percent-correct data were subjected to an arcsine transformation; results for the analyses of percent correct are reported below.
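A minimal sketch of this kind of analysis, using made-up percent-correct scores rather than the study's raw data: the arcsine-square-root transform stabilizes the variance of proportion scores, and a paired two-tailed t statistic is computed for the two HA conditions. The function names and example scores are illustrative.

```python
import math

def arcsine_transform(p):
    """Arcsine-square-root transform, variance-stabilizing for proportions."""
    return math.asin(math.sqrt(p))

def paired_t(x, y):
    """Paired t statistic for equal-length score lists (n-1 df, two-tailed)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)
    return mean / math.sqrt(var / n)

# Illustrative proportion-correct scores (NOT the study's raw data).
protocol = [0.60, 0.70, 0.65, 0.80, 0.75]
nal_nl1  = [0.70, 0.78, 0.72, 0.88, 0.86]

t_raw = paired_t(protocol, nal_nl1)
t_asin = paired_t([arcsine_transform(p) for p in protocol],
                  [arcsine_transform(p) for p in nal_nl1])

# The sign agrees with or without the transform: scores are higher in
# the second (NAL-NL1 matched) condition, so the paired t is negative.
print(t_raw < 0, t_asin < 0)  # True True
```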



RESULTS

Word recognition scores in quiet, as measured with CNC words, for the two HA settings are documented in Figure 1. One subject (S3) did not complete this test condition due to time limitations, so results from eight subjects are displayed. Scores achieved when listening with the Protocol settings ranged from 60 to 96% (M = 74.8%, SD = 14.1%), and those for the NAL-NL1 matched settings ranged from 68 to 100% (M = 85.0%, SD = 11.3%).

Figure 1. Percent correct on CNC words in quiet (n = 8) for the two HA settings, indicated on the abscissa. Individual subjects are indicated by symbol shape and coloring, as defined in Table 1; group means are indicated by horizontal lines.

Figure 2 displays the speech perception scores for AzBio sentences presented in speech babble at +10 dB SNR. When listening with the Protocol HA settings, scores ranged from 50 to 90% (M = 68.3%, SD = 13.8%), and those for the NAL-NL1 matched settings ranged from 60 to 97% (M = 79.8%, SD = 11.9%).

Figure 2. Percent correct on AzBio sentences in 10-talker speech babble at +10 dB SNR (n = 9). Plotting conventions follow those of Figure 1.

A repeated-measures ANOVA was conducted for speech perception on CNC words in quiet and AzBio sentences at +10 dB SNR, as both tests are scored by percent correct. There was a significant difference between speech perception abilities with the Protocol versus NAL-NL1 matched HA settings (F(1,7) = 31.46, p = 0.001) but no interaction between HA setting and test measure (F(1,7) = 0.02, p = 0.88). Since one subject did not complete the CNC words measure, a paired t-test was also completed comparing AzBio scores for the two HA conditions; subjects showed a significant improvement (t(8) = −5.82, p < 0.001) in speech perception when tested with the NAL-NL1 matched HA settings as compared to the Protocol HA settings.

The results of the BKB-SIN test, plotted in Figure 3, are reported as the SNR at which the subject understood 50% of the sentences; a lower SNR value indicates better performance. Results ranged from 3 to 10.5 dB (M = 6.9 dB, SD = 2.3 dB) for the Protocol settings and from 1 to 9 dB (M = 4.9 dB, SD = 3.0 dB) for the NAL-NL1 matched settings. There was a significant difference between the two settings on this measure as well (t(8) = 2.42, p < 0.05).

DISCUSSION

A significant improvement in speech perception abilities was noted when subjects listened with the NAL-NL1 matched settings over their former Protocol HA settings, in both quiet and noise test conditions. All subjects elected to continue listening with the NAL-NL1 matched settings at the completion of the test session.

Figure 3. The SNR (dB) required to achieve 50% correct understanding of BKB sentences in four-talker background noise (n = 9); a lower value indicates better performance. Plotting conventions follow those of Figures 1 and 2.

In bimodal populations utilizing a HA contralateral to a CI, suboptimal HA settings are suspected as a source of variability in speech perception outcomes. Harris and Hay-McCutcheon (2010) evaluated the output of acoustic amplification in bimodal listeners, finding that the majority of subjects had insufficient gain when evaluated with the Desired Sensation Level input/output prescription method for adults (DSL[i/o]). It has been suggested that attention to the acoustic component in the bimodal population may reduce the variability in speech perception outcomes. Using a prescription method to fit the HA and having subjects complete a loudness-balancing task between HA and CI may also improve speech perception performance in bimodal listeners (Keilmann et al, 2009).

As in bimodal hearing, insufficient gain provided by the HA also appears to affect performance in EAS patients. Previous research has shown that the combination of CI and HA in EAS patients with residual hearing supports an improvement over preoperative performance (Gantz et al, 2005; Gstoettner et al, 2008; Adunka et al, 2010); however, this population's abilities may be underestimated if procedures for fitting the HA fail to provide sufficient acoustic gain. The significant improvement in speech perception experienced with the NAL-NL1 matched settings over the Protocol settings may be due to access to additional acoustic information and/or more suitable gain in the low frequencies.

In the present dataset, adjustments to the HA parameters to match NAL-NL1 targets extended the amount of available acoustic information for some subjects, significantly improving performance scores. This may have provided better access to, or representation of, low-frequency cues, which could benefit the listener in a number of ways. Kong et al (2005) found that adding low-frequency acoustic information to electric stimulation in bimodal listeners improved speech recognition in noise and music appreciation. Further, Carroll et al (2011) noted the importance of fundamental frequency perception for improved speech perception in noise. Better representation of acoustic low-frequency information may improve speech perception abilities, despite the possible introduction of acoustic information into regions of the cochlea that respond to the electric stimulation provided by the CI. With greater amounts of residual hearing preserved postoperatively, improved programming techniques and advanced amplification systems may support even better speech perception outcomes in EAS recipients.

It should be noted that not all adjustments to the HA parameters to match NAL-NL1 targets resulted in an increase in the overlap of electric and acoustic stimulation as compared to the protocol recommendations. For instance, subject S9 could detect acoustic information out to 1000 Hz when listening with the Protocol HA settings, and the adjustments to match NAL-NL1 targets did not provide acoustic information as far into the midfrequencies. This subject nevertheless experienced an improvement when listening with the NAL-NL1 matched settings, which may have provided greater low-frequency gain and thus greater access to this acoustic information.

The NAL-NL1 prescription method was selected for this study because it is used routinely with adults in the fitting of conventional HAs. It accounts for the severity of the patient's hearing loss in the target estimation to achieve effective audibility (Byrne et al, 2001; Ching, Dillon, et al, 2001; Dillon, 2001) without a focus on loudness normalization. Vermeire et al (2008) found that EAS patients with a wide variety of low- to mid-frequency hearing losses can benefit from acoustic amplification, and the NAL-NL1 prescription method accommodates those variations better than the current fitting recommendations. Additionally, the NAL-NL1 prescription method accounts for differences in the presentation level of the signal in combination with the degree of hearing loss. In the present report, the signal presentation level when programming the NAL-NL1 matched settings was 70 dB SPL; at this intensity level, the targets may have been providing equal loudness (Dillon, 2001). Retrospectively, acoustic output was also evaluated with a 60 dB SPL input; at this level, the output fell at or slightly below targets with the NAL-NL1 matched settings. To fully assess the benefits of the NAL-NL1 prescription method over the protocol recommendations, supplementary comparisons between the two methods should be conducted at different intensity levels. Further, verification of the output in this study used simulated real-ear measures with average RECD and UCL values. A more optimal fit may be achieved by incorporating real-ear measures; however, the abbreviated method used here provided an improved response without significantly extending the duration of the clinical visit.
Despite the improvements noted here, other fitting methods and HA technologies should be explored, as they may reveal additional improvements for this patient population in speech perception or perceived sound quality. Dunn et al (2010) adjusted the acoustic settings of the Hybrid speech processor to match NAL-RP targets; investigation is needed to determine whether EAS recipients would experience a difference in speech perception when the acoustic portion is matched to NAL-RP versus NAL-NL1 targets. Further, programming options for newer generations of combined processors, such as the DUET2 (Lorens et al, 2012), should be evaluated.

Another factor to consider in evaluating the present results is that subjects had longstanding listening experience with the Protocol HA settings, while the NAL-NL1 matched HA settings were assessed without the benefit of an extended period of acclimatization. During the randomization of the HA settings, however, some subjects reported that they were unable to tell a difference between the sound quality of the two settings during brief discussions in quiet; this suggests that the relative familiarity of the fits may not have affected performance. Longitudinal data are being collected to monitor whether speech perception abilities continue to improve with the NAL-NL1 matched HA settings.

CONCLUSIONS

Critical evaluation of programming methods and patient performance may improve speech perception and subsequent quality-of-life outcomes for EAS patients. As candidacy for cochlear implantation expands to include patients with greater degrees of residual hearing, the role of the cochlear implant audiologist likewise expands to include the coordination of devices (CI and HA). Previous work has shown that optimizing HA settings can significantly benefit contralateral bimodal hearing; the data presented herein suggest that EAS patients could also benefit from this approach.

Acknowledgments. The authors would like to thank Drs. Martha Mundy, Barbara Winslow-Warren, and Andrea Hillock-Dunn from the UNC Department of Allied Health, Division of Speech and Hearing Sciences, for their insightful comments on this project.

REFERENCES

Adunka OF, Pillsbury HC, Adunka MC, Buchman CA. (2010) Is electric acoustic stimulation better than conventional cochlear implantation for speech perception in quiet? Otol Neurotol 31(7):1049–1054.

Baudhuin J, Cadieux J, Firszt JB, Reeder RM, Maxson JL. (2012) Optimization of programming parameters in children with the Advanced Bionics cochlear implant. J Am Acad Audiol 23(5):302–312.

Bench J, Kowal A, Bamford J. (1979) The BKB (Bamford-Kowal-Bench) sentence lists for partially-hearing children. Br J Audiol 13(3):108–112.

Byrne D, Dillon H, Ching T, Katsch R, Keidser G. (2001) NAL-NL1 procedure for fitting nonlinear hearing aids: characteristics and comparisons with other procedures. J Am Acad Audiol 12(1):37–51.

Carroll J, Tiaden S, Zeng FG. (2011) Fundamental frequency is critical to speech perception in noise in combined acoustic and electric hearing. J Acoust Soc Am 130(4):2054–2062.

Ching TY, Dillon H, Katsch R, Byrne D. (2001) Maximizing effective audibility in hearing aid fitting. Ear Hear 22(3):212–224.

Ching TY, Incerti P, Hill M. (2004) Binaural benefits for adults who use hearing aids and cochlear implants in opposite ears. Ear Hear 25(1):9–21.

Ching TYC, Psarros C, Hill M, Dillon H, Incerti P. (2001) Should children who use cochlear implants wear hearing aids in the opposite ear? Ear Hear 22(5):365–380.

Dawson PW, Skok M, Clark GM. (1997) The effect of loudness imbalance between electrodes in cochlear implant users. Ear Hear 18(2):156–165.

Dillon H. (2001) Hearing Aids. New York: Thieme Medical Publishers.

Dunn CC, Perreau A, Gantz B, Tyler RS. (2010) Benefits of localization and speech perception with multiple noise sources in listeners with a short-electrode cochlear implant. J Am Acad Audiol 21(1):44–51.

Gantz BJ, Turner C, Gfeller KE, Lowder MW. (2005) Preservation of hearing in cochlear implant surgery: advantages of combined electrical and acoustical speech processing. Laryngoscope 115(5):796–802.

Gifford RH, Dorman MF, McKarns SA, Spahr AJ. (2007) Combined electric and contralateral acoustic hearing: word and sentence recognition with bimodal hearing. J Speech Lang Hear Res 50(4):835–843.

Gstoettner WK, van de Heyning P, O'Connor AF, et al. (2008) Electric acoustic stimulation of the auditory system: results of a multicentre investigation. Acta Otolaryngol 128(9):968–975.

Harris MS, Hay-McCutcheon M. (2010) An analysis of hearing aid fittings in adults using cochlear implants and contralateral hearing aids. Laryngoscope 120(12):2484–2488.

Helbig S, Baumann U. (2010) Acceptance and fitting of the DUET device—a combined speech processor for electric acoustic stimulation. Adv Otorhinolaryngol 67:81–87.

Keilmann AM, Bohnert AM, Gosepath J, Mann WJ. (2009) Cochlear implant and hearing aid: a new approach to optimizing the fitting in this bimodal situation. Eur Arch Otorhinolaryngol 266(12):1879–1884.

Kiefer J, Pok M, Adunka O, et al. (2005) Combined electric and acoustic stimulation of the auditory system: results of a clinical study. Audiol Neurootol 10(3):134–144.

Kong YY, Stickney GS, Zeng FG. (2005) Speech and melody recognition in binaurally combined acoustic and electric hearing. J Acoust Soc Am 117(3, Pt. 1):1351–1361.

Lorens A, Polak M, Piotrowska A, Skarzynski H. (2008) Outcomes of treatment of partial deafness with cochlear implantation: a DUET study. Laryngoscope 118(2):288–294.

Lorens A, Zgoda M, Skarzynski H. (2012) A new audio processor for combined electric and acoustic stimulation for the treatment of partial deafness. Acta Otolaryngol 132(7):739–750.

Lybarger SF. (July 3, 1944) U.S. Patent Application SN 543,278.

Peterson GE, Lehiste I. (1962) Revised CNC lists for auditory tests. J Speech Hear Disord 27:62–70.

Polak M, Lorens A, Helbig S, McDonald S, Vermeire K. (2010) Fitting of the hearing system affects partial deafness cochlear implant performance. Cochlear Implants Int 11(Suppl. 1):117–121.

Potts LG, Skinner MW, Litovsky RA, Strube MJ, Kuk F. (2009) Recognition and localization of speech by adult cochlear implant recipients wearing a digital hearing aid in the nonimplanted ear (bimodal hearing). J Am Acad Audiol 20(6):353–373.

Skinner MW. (2003) Optimizing cochlear implant speech performance. Ann Otol Rhinol Laryngol Suppl 191:4–13.

Spahr AJ, Dorman MF. (2004) Performance of subjects fit with the Advanced Bionics CII and Nucleus 3G cochlear implant devices. Arch Otolaryngol Head Neck Surg 130(5):624–628.

139 Delivered by Ingenta to: University of South Florida IP : 5.8.47.115 On: Tue, 28 Jun 2016 17:31:25

Journal of the American Academy of Audiology/Volume 25, Number 2, 2014

Spahr AJ, Dorman MF, Litvak LM, et al. (2012) Development and validation of the AzBio sentence lists. Ear Hear 33(1): 112–117.

Wilson BS, Finley CC, Lawson DT, Wolford RD, Eddington DK, Rabinowitz WM. (1991) Better speech recognition with cochlear implants. Nature 352(6332):236–238.

Vermeire K, Anderson I, Flynn M, Van de Heyning P. (2008) The influence of different speech processor and hearing aid settings on speech perception outcomes in electric acoustic stimulation patients. Ear Hear 29(1):76–86.

Zhang T, Dorman MF, Spahr AJ. (2010) Information from the voice fundamental frequency (F0) region accounts for the majority of the benefit when acoustic stimulation is added to electric stimulation. Ear Hear 31(1):63–69.

von Ilberg C, Kiefer J, Tillein J, et al. (1999) Electric-acoustic stimulation of the auditory system. New technology for severe hearing loss. ORL J Otorhinolaryngol Relat Spec 61(6): 334–340.

Zhang T, Spahr AJ, Dorman MF. (2010) Frequency overlap between electric and acoustic stimulation and speech-perception benefit in patients with combined electric and acoustic stimulation. Ear Hear 31(2):195–201.
