Hearing Research 317 (2014) 41–49


Research paper

An examination of the effects of broadband air-conduction masker on the speech intelligibility of speech-modulated bone-conduction ultrasound

Tadashi Nishimura a, *, Tadao Okayasu a, Osamu Saito a, Ryota Shimokura a, Akinori Yamashita a, Toshiaki Yamanaka a, Hiroshi Hosoi b, Tadashi Kitahara a

a Department of Otolaryngology – Head & Neck Surgery, Nara Medical University, 840 Shijo-cho Kashihara, Nara 634-8522, Japan
b Nara Medical University, 840 Shijo-cho Kashihara, Nara 634-8522, Japan

Article info

Article history:
Received 3 June 2014
Received in revised form 18 September 2014
Accepted 24 September 2014
Available online 5 October 2014

Abstract

Ultrasound can be heard by bone conduction, and speech-modulated bone-conducted ultrasound (BCU) delivers speech information to the human ear. One of the recognition mechanisms is the demodulation of the signals. Because some profoundly deaf individuals can also hear speech-modulated BCU, another mechanism may also contribute to the recognition of speech-modulated BCU. In this study, eight volunteers with normal hearing participated. The intelligibility of speech-modulated BCU was measured using a numeral word list under masking conditions. Because the masker can mask the demodulated sounds, evaluation of the masking reveals the contribution of demodulation to the recognition of speech-modulated BCU. In the current results, the masking of speech-modulated BCU differed from that of the original, non-modulated speech. Although the masking shifted the recognition curve for the original speech upward, the same result was not observed for the speech-modulated BCU. The masking generated differences in the correct answers among the words for the speech-modulated BCU. The current results suggest the importance of the envelope of the modulated ultrasonic signal to recognition under masking conditions. Both demodulation and direct ultrasonic stimulation contribute to the recognition of speech-modulated BCU in normal hearing individuals, and direct ultrasonic stimulation plays an important role in recognition for the profoundly deaf.
© 2014 Elsevier B.V. All rights reserved.

Abbreviations: ACAS, air-conducted audible sound; ANOVA, analysis of variance; BCU, bone-conducted ultrasound; MEG, magnetoencephalography
* Corresponding author. Tel.: +81 744 22 3051; fax: +81 744 24 6844. E-mail address: [email protected] (T. Nishimura).
http://dx.doi.org/10.1016/j.heares.2014.09.012
0378-5955/© 2014 Elsevier B.V. All rights reserved.

1. Introduction

In general, it is thought that ultrasound cannot be perceived by the human ear. However, when ultrasound is delivered via bone conduction, it can be heard, at least at frequencies below 100 kHz (Pumphrey, 1950; Corso, 1963; Haeff and Knox, 1963; Dieroff and Ertel, 1975). This phenomenon has been reported and investigated for approximately 70 years (Gavreau, 1948), and the characteristics of ultrasonic hearing have been shown to differ from those of air-conducted audible sound (ACAS) hearing. For instance, the pitch is independent of the frequency, the dynamic range is extremely narrow (Deatherage et al., 1954; Nishimura et al., 2003, 2009), and the hearing is rarely masked by ACAS (Bellucci and Schuneider, 1962; Dieroff and Ertel, 1975). Of greatest interest is the hearing of the profoundly deaf. Some of them can hear bone-conducted ultrasound (BCU), even though ACAS cannot be perceived (Lenhardt et al., 1991; Hosoi et al., 1998; Imaizumi et al., 2001). This characteristic may contribute to the development of novel medical treatments, such as hearing aids and tinnitus treatment devices, for the profoundly deaf (Koizumi et al., 2014). Conventional hearing aids provide no benefit to the profoundly deaf owing to their severe hearing loss. Cochlear implants have been applied to such patients. However, implantation necessitates surgery, and its benefit cannot be guaranteed before the surgery. If an effective hearing device using BCU is developed, the communication ability of the profoundly deaf could be improved while avoiding the risks of surgery. In order to present speech signals using BCU, the ultrasound has to be converted, that is, modulated by speech signals.



Table 1
Numeral word list. The list consists of six words and six presentation orders.

5 7 2 3 6 4
2 4 7 5 3 6
4 6 3 2 7 5
3 5 6 4 2 7
7 2 5 6 4 3
6 3 4 7 5 2

With regard to the recognition of speech signals presented by BCU, previous studies showed high intelligibility of monosyllable-modulated BCU (Okamoto et al., 2005; Yamashita et al., 2009a). The presentation of both audio and visual information improves the intelligibility of speech-modulated BCU (Yamashita et al., 2009b). In communication, not only linguistic but also non-linguistic aspects (such as prosodic information) are important. A previous magnetoencephalography (MEG) study showed that prosodic changes can be discriminated in speech-modulated BCU (Okayasu et al., 2014). These previous findings may contribute to advancements in the development of ultrasonic hearing aids. One of the mechanisms underlying the recognition of speech-modulated BCU is the demodulation of the ultrasonic signals (Fujimoto et al., 2005).

In the transmission pathway to the cochlea or in the cochlea itself, bone-conducted stimuli are possibly demodulated, owing to non-linearity, to produce the original speech signals. The demodulated sounds may contribute to hearing. Another possible mechanism is that direct cochlear activation by the ultrasonic stimuli delivers the speech signals; the envelope of this activation may contribute to hearing. The demodulated sounds cannot be heard by the profoundly deaf. If demodulation alone contributed to the recognition, they could not understand the speech signals. Lenhardt et al. (1991) reported that hearing-impaired individuals could discriminate BCU speech signals. In an MEG study, the auditory cortex was activated by BCU even in the profoundly deaf, and mismatch fields were recorded in a speech-modulated BCU discrimination task (Hosoi et al., 1998). These results indicate the possible role of a mechanism other than demodulation. With regard to the perception mechanism of ultrasonic stimuli, the inner hair cells are activated by ultrasonic stimuli without the generation of ACAS by non-linearity (Nishimura et al., 2003). In contrast to ACAS, the outer hair cells do not function in ultrasonic perception (Nishimura et al., 2011; Okayasu et al., 2013). Ultrasound broadly excites the cochlear basal turn, and the range of the excitation does not obviously change with frequency (Nishimura et al., 2003).

Fig. 1. Waveforms of the speech signal. The waveforms in the left and right parts indicate the signals of the original speech and the speech-modulated bone-conduction sounds, respectively.



Fig. 2. Power spectral density plots of the speech signal. The power spectral density plots in the left and right parts indicate the signals of the original speech and the speech-modulated bone-conduction sounds, respectively.

Therefore, the difference in the cochlear region activated by ultrasonic stimuli does not contribute to the discrimination of speech signals. Difference limens for frequency depend on both the place mechanism and phase locking in the peripheral auditory system (Moore and Ernst, 2012). According to previous studies, phase locking works for frequencies up to 6 kHz (Oxenham et al., 2011). Thus, frequency differences in the ultrasonic range cannot be discriminated.

On the other hand, the envelope of the activation by speech-modulated BCU does not vary at rates beyond 6 kHz, because the frequency content of the original speech sound lies within 6 kHz. Temporal codes in the peripheral auditory system are therefore capable of delivering the speech information to the central auditory system. As mentioned above, the perception mechanism of speech-modulated BCU in the profoundly deaf is probably different from that in normal hearing individuals.
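As a numerical illustration of this band-limited-envelope argument (a sketch under simplified assumptions, not an analysis from the paper: a 1-kHz tone stands in for the speech signal, and the amplitude modulation scheme described in Section 2 is assumed), the envelope of a 30-kHz carrier modulated in this way contains essentially no energy above the audio band, here checked above 6 kHz:

```python
import numpy as np
from scipy.signal import hilbert
from numpy.fft import rfft, rfftfreq

FS = 192_000   # sampling rate (Hz); must exceed twice the 30-kHz carrier
FC = 30_000    # carrier frequency (Hz)

t = np.arange(0, 0.1, 1.0 / FS)
speech = 0.8 * np.sin(2 * np.pi * 1_000 * t)           # toy 1-kHz stand-in for speech
u = 0.5 * (1.0 + speech) * np.sin(2 * np.pi * FC * t)  # DSB-TC modulated signal

envelope = np.abs(hilbert(u))            # analytic (Hilbert) envelope
spectrum = np.abs(rfft(envelope))
freqs = rfftfreq(len(envelope), 1.0 / FS)

# Fraction of the envelope spectrum above 6 kHz is negligible compared with the total.
hi_band = spectrum[freqs > 6_000].sum() / spectrum.sum()
print(f"fraction of envelope spectrum above 6 kHz: {hi_band:.4f}")
```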



Fig. 3. Power spectral density plot of speech-weighted noise.

The data obtained in normal hearing individuals are not always applicable to the development of ultrasonic hearing devices for the profoundly deaf. Therefore, the difference in the mechanisms underlying the recognition has to be considered. In this study, the contribution of demodulation to the recognition of speech-modulated BCU was evaluated, and the participation of direct ultrasonic stimulation was investigated. Demodulated sounds are perceived in the same manner as the non-modulated original speech signals; therefore, they are masked by ACAS maskers. In contrast, direct ultrasonic stimulation is not easily masked by ACAS maskers, because BCU is not masked by sounds below 8 kHz (Nishimura et al., 2011). By evaluating the change in the intelligibility of speech-modulated BCU caused by masking, the contributions of demodulation and direct ultrasonic stimulation can be established. The intelligibility and the differences in the scores of correct answers among words were compared between speech-modulated BCU and the original speech signals.

Fig. 4. The intelligibility of speech-modulated bone-conducted ultrasound over three trials under five masker conditions. The vertical bars indicate standard deviations.

Fig. 5. The average intelligibility of speech-modulated bone-conducted ultrasound under five masker conditions. The vertical bars indicate standard deviations.



Fig. 6. The average scores of the correct answers for each masker condition. The vertical bars indicate standard deviations.

2. Materials and methods

Eight volunteers (2 females and 6 males; 27–37 years old) participated in this experiment. Their thresholds in conventional audiometry were within 20 dB HL for both ears. Participants provided written informed consent before being enrolled. The experimental procedure was approved by the ethics committee of Nara Medical University.

Considering the difficulty of understanding speech-modulated BCU, speech audiometry was performed using a numeral word list. This is the standard list used for the speech recognition threshold test in Japan, published by the Japan Audiological Society (2010). The list consists of six numeral words, which are presented in six presentation orders (Table 1). In this study, the six presentation orders were presented in a random order. As for the stimuli, a 30-kHz sine wave was used as the ultrasonic carrier. Amplitude modulation was based on a double-sideband transmitted-carrier scheme with a modulation depth of 1.0. The modulated signal was calculated using the following formula:

U(t) = \frac{1}{2}\left(1 + \frac{S(t)}{S_c}\right)\sin(2\pi f_c t),

where S(t) was the speech signal, S_c was the peak amplitude of a sinusoidal wave whose equivalent continuous A-weighted sound pressure level was equal to that of the speech signal, and f_c was the carrier frequency (30 kHz). Figs. 1 and 2 show the waveforms and power spectral density plots of the signals, respectively. The original speech signals contained frequencies from 200 Hz to 8 kHz.

Prior to each measurement, the threshold for 30-kHz BCU was measured using an ascending technique with 1-dB steps. Tone bursts of 300 ms (including rising/falling ramps of 50 ms) were used as the stimuli. The threshold was defined as 0 dB sensation level (SL) for the ultrasonic carrier, and the intensity of the modulated BCU was expressed as the intensity of the ultrasonic carrier. We also measured the BCU thresholds in the presence of masking and confirmed that the BCU threshold was not elevated by more than 3 dB when the masker was presented.

In the measurement of speech recognition, speech-modulated BCU was presented with decreasing intensity. The ultrasonic carrier was presented 1.5 s before the stimulus onset. The intensity of the ultrasonic carrier was decreased 1.5 s after the stimulus offset, and the carrier continued until the next word presentation. The initial intensity of BCU was 22.5 dB SL, which was decreased in steps of 2.5 dB per word; the intensity of the final, sixth word was 10 dB SL. The ultrasonic transducer was fixed on the forehead with a headband, using the standard fixation force of 5.4 N. Earphones were worn on both ears. A speech-weighted noise (Fig. 3) was employed as the masker and presented binaurally. Five masker intensity conditions (none, 0, 10, 20, and 30 dB) were administered in random order.
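Purely for illustration, the following Python sketch generates a speech-modulated BCU stimulus according to the formula above. It is not the authors' code: NumPy/SciPy are assumed, the input file name is hypothetical, and S_c is approximated by the peak amplitude of the speech signal rather than by the equivalent A-weighted level used in the paper.

```python
import numpy as np
from scipy.io import wavfile

FC = 30_000.0   # ultrasonic carrier frequency (Hz), as in the paper

def modulate_dsb_tc(speech, fs, fc=FC, depth=1.0):
    """Double-sideband transmitted-carrier modulation of a speech signal.

    U(t) = 0.5 * (1 + depth * S(t)/Sc) * sin(2*pi*fc*t). Here Sc is taken as the
    peak amplitude of the speech signal, a simplifying stand-in for the
    A-weighted-level definition given in the text.
    """
    speech = speech.astype(np.float64)
    sc = np.max(np.abs(speech))                 # assumed stand-in for Sc
    t = np.arange(len(speech)) / fs
    return 0.5 * (1.0 + depth * speech / sc) * np.sin(2.0 * np.pi * fc * t)

if __name__ == "__main__":
    # 'word.wav' is a hypothetical mono speech file; its sampling rate must
    # exceed 2 * fc (e.g., 96 kHz) for the 30-kHz carrier to be representable.
    fs, speech = wavfile.read("word.wav")
    bcu = modulate_dsb_tc(speech, fs)
    wavfile.write("word_bcu.wav", fs, (bcu * 32767).astype(np.int16))
```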



Fig. 7. The average scores of the correct answers for each word under five masker conditions. The vertical bars indicate standard deviations.

The measurements were carried out three times, with an interval of approximately one week between the repeated measurements.

As a control, masking was also measured when the original speech signals were presented by a conventional bone vibrator. The masking levels of the speech-weighted noise were randomly set at 10, 20, and 30 dB, and the initial presentation levels were 15, 25, and 35 dB, respectively. If none of the six first words were recognized, the initial presentation levels were changed. These words were also presented with a 2.5-dB decrease in intensity per word. When the intelligibility was 0% at a certain intensity, the intelligibility below this intensity was considered to be 0%. The bone vibrator (BR-41; Rion, Tokyo, Japan) was fixed on the forehead with a headband, using the standard fixation force of 5.4 N. The masking was presented binaurally through the earphones.

The ultrasound was generated by a function generator (WF1946; NF Electronic Instruments Co., Yokohama, Japan), and the modulation of the ultrasound by the speech signal was performed with the same function generator. The ultrasonic signal was amplified by a high-speed power amplifier (HSA4011; NF Electronic Instruments Co.). The intensities were controlled logarithmically, so that the dB scale could be used, through attenuators (PA5; Tucker-Davis Technologies, Gainesville, FL, USA). The experiments were carried out in a soundproof room.
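As a side note on the logarithmic level control, the following sketch (a simplification for illustration, not the actual attenuator interface) shows how the dB SL presentation schedule for speech-modulated BCU maps to linear amplitude factors relative to the threshold amplitude:

```python
# Presentation schedule used for speech-modulated BCU: 22.5 dB SL for the
# first word, decreasing by 2.5 dB per word down to 10 dB SL for the sixth.
def linear_gain(db_sl):
    """Amplitude factor relative to the 0-dB-SL (threshold) amplitude."""
    return 10.0 ** (db_sl / 20.0)

schedule_db = [22.5 - 2.5 * i for i in range(6)]
schedule_gain = [round(linear_gain(db), 2) for db in schedule_db]
print(schedule_db)     # [22.5, 20.0, 17.5, 15.0, 12.5, 10.0]
print(schedule_gain)   # [13.34, 10.0, 7.5, 5.62, 4.22, 3.16]
```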

2.1. Data analysis

Because speech-modulated BCU is not ordinarily heard, repetition of the measurements might improve the intelligibility. To investigate this repetition effect, the intelligibility was compared among the three trials. The data were analyzed using a two-way repeated-measures analysis of variance (ANOVA), with the number of trials and intensity as within-subject factors. As for masking, the average intelligibility was analyzed by a two-way repeated-measures ANOVA, with masking and intensity as within-subject factors. Furthermore, the average scores of the correct answers for each word were also analyzed: the average scores of the correct answers were compared among the six words by a two-way repeated-measures ANOVA, with word and intensity as within-subject factors. These statistical analyses were performed with SPSS, ver. 22 (International Business Machines Corporation, Armonk, New York). Bonferroni's method was used for post-hoc comparisons, and the significance level was set at 0.05. With regard to the original speech signals, the recognition curves were compared among the three masking conditions, and the scores of the correct answers for the six words were compared.
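The ANOVAs were run in SPSS; purely as an illustration of the same analysis structure, a two-way repeated-measures ANOVA with masking and intensity as within-subject factors could be reproduced in Python with statsmodels (the file and column names below are hypothetical, and one averaged score per subject and cell is assumed):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one intelligibility score per subject,
# masker condition, and presentation intensity (dB SL).
df = pd.read_csv("intelligibility_long.csv")   # columns: subject, masker, intensity, score

# Two-way repeated-measures ANOVA mirroring the analysis described in the text.
res = AnovaRM(df, depvar="score", subject="subject",
              within=["masker", "intensity"]).fit()
print(res.anova_table)
```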


Fig. 8. Intelligibility of bone-conduction original speech signals under three masker conditions. The vertical bars indicate standard deviations.

Data regarding the subjects' confusion patterns were analyzed using confusion matrices. Responses to the signals, including the absence of a response, were aggregated within each masking condition.
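As an illustration of this aggregation (not the authors' code; the response-log format is hypothetical), row-normalized confusion matrices of the kind shown in Fig. 10 can be built as follows:

```python
import pandas as pd

# Hypothetical response log: one row per presentation, with the presented word,
# the subject's response ('A' = other word, 'B' = no response), and the masker level.
log = pd.read_csv("responses.csv")   # columns: masker, presented, response

def confusion_percent(rows):
    """Row-normalized confusion matrix (percentage of responses per presented word)."""
    counts = pd.crosstab(rows["presented"], rows["response"])
    return counts.div(counts.sum(axis=1), axis=0) * 100.0

# One matrix per masking condition, as in Fig. 10.
for masker, rows in log.groupby("masker"):
    print(f"masker = {masker}")
    print(confusion_percent(rows).round(1))
```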


3. Results

Fig. 4 shows the intelligibility of speech-modulated BCU in the three trials. No significant effect of the number of trials was revealed for any of the masking conditions, although the intelligibility significantly depended on the intensity. Fig. 5 shows the average intelligibility of speech-modulated BCU for the five masker conditions. The two-way repeated-measures ANOVA revealed statistically significant effects of masking and intensity. The intelligibilities for the 20- and 30-dB masker conditions were significantly lower than those for the other conditions. The intelligibility for the 30-dB masker condition was significantly lower than that for the 20-dB masker condition. Fig. 6 shows the average scores of the correct answers for each masker condition. As the masking level increased, the average scores of the correct answers decreased, except for "6/roku/". In the presence of the 20- and 30-dB maskers, the average scores of the correct answers for "6/roku/" were significantly superior to those for the other words. Fig. 7 shows the average scores of the correct answers for each word. The average scores of the correct answers for "6/roku/" did not change with the masker. In contrast, the average scores of the correct answers for the other five words were significantly influenced by masking.

Fig. 8 shows the intelligibility of the bone-conduction original speech signals for the three masker conditions. The recognition curve shifted upward depending on the masker level. Fig. 9 shows the average scores of the correct answers for each masker condition. No obvious characteristic was observed; the scores of the correct answers for "6/roku/" were not outstanding and were relatively low. Fig. 10 demonstrates the confusion matrices calculated from the results for speech-modulated BCU and the original speech signal. For speech-modulated BCU, most responses were either correct answers or absent responses with a low-intensity masker; in addition, confusion with other words was infrequent. As the masking level increased, confusion occurred more frequently for the signals "2/ni/," "3/saN/," "4/yoN/," and "5/go/." For the original speech signal, most responses were either correct answers or absent responses under all masking conditions.

4. Discussion

Speech-modulated BCU is not ordinarily heard, and none of the current subjects had been trained to hear it. We expected that the intelligibility would increase as the trials proceeded owing to a repetition effect. However, no improvement was observed with the increase in the number of trials. Previous studies showed high intelligibility of monosyllable-modulated BCU (Okamoto et al., 2005; Yamashita et al., 2009a). Because a syllable is a unit of organization for a sequence of speech sounds, the degree of difficulty of speech audiometry using monosyllables is high. In this study, the subjects were instructed to answer which numeral word was presented. Under the condition without masking, all words at a sufficient intensity were recognized by all subjects.

Fig. 9. The average scores of the correct answers under each masker condition.



Fig. 10. Confusion matrices based on the results of speech-modulated bone-conducted ultrasound (A–E) and original speech signals (F–H) under different masking conditions. Presentation signals are represented on the left axis, and responses across the top axis. "A" and "B" indicate other responses besides the six numeral words and no response, respectively. Blocks with larger grey values (i.e., darker shading) indicate higher appearance frequencies for those pairs. When the appearance frequency was higher than 50%, the block is marked fully black. Numerical values in the cells indicate the percentage of the appearance frequency.

Thus, it is not so surprising that the intelligibility was high even in the first trial.

In the presence of masking, the speech recognition curve for the original speech signal shifted to the right depending on the masker intensity. For a non-modulated speech signal, the signal-to-noise ratio is a more important factor than the signal intensity (Dirks et al., 1982; Studebaker et al., 1999). A low signal-to-noise ratio worsens the intelligibility, and the current results agree with this notion. In contrast, while the masking influenced the intelligibility of the speech-modulated BCU, the curve did not shift upward as it did for the original speech signal. The intelligibility at low intensities showed almost no change with the presentation of the masker, whereas the intelligibility at high intensities decreased depending on the masker intensity. These differences in the effect of masking suggest that the recognition of speech-modulated BCU differs from that of the original speech signal.

The perception mechanism of speech-modulated BCU has not been fully elucidated. Not only demodulated speech signals but also direct ultrasonic stimulation might contribute to the recognition. Although 30-kHz BCU (the carrier wave) was not masked by the current masker, the masking influenced the results, implying the contribution of demodulation.

In contrast, demodulated sounds will be masked by the masker to the same extent as the original speech signal. If the recognition were performed solely by demodulation, the recognition curve would shift upward depending on the masking intensity. The difference in the results between speech-modulated BCU and the original speech signal indicates the importance of direct ultrasonic stimulation, particularly under masking conditions.

For speech-modulated BCU, no differences in intelligibility among the words were observed under the condition of no masking. However, the masking generated differences in the correct answers among the words. In particular, the recognition of "6/roku/" was maintained even under the 30-dB masking condition. For the original speech signal, all words were influenced by the masker, and the correct answers for "6/roku/" were relatively low and dependent on the masking. The current word list consists of six numeral words and was used repeatedly in the experiment. The recognition mechanism of speech-modulated BCU therefore cannot be established from the current results alone. At the least, the current findings indicate a difference between speech-modulated BCU and the original speech signal, and that not only demodulated sound but also direct ultrasonic stimulation contributes to the recognition.
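To make the demodulation argument concrete, the following minimal sketch, assuming the non-linearity can be approximated by a square-law device, passes the DSB-TC signal from Section 2 through squaring and low-pass filtering and recovers a baseband waveform that follows the speech envelope (with some harmonic distortion). The cutoff frequency, filter order, and toy 440-Hz "speech" are illustrative choices, not values from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 192_000   # sampling rate (Hz), chosen high enough for a 30-kHz carrier
FC = 30_000    # carrier frequency (Hz)

t = np.arange(0, 0.05, 1.0 / FS)
speech = np.sin(2 * np.pi * 440 * t)                   # toy stand-in for S(t)/Sc
u = 0.5 * (1.0 + speech) * np.sin(2 * np.pi * FC * t)  # DSB-TC signal U(t)

# Square-law non-linearity: u**2 contains a baseband term proportional to
# (1 + S/Sc)**2 plus components around 2*fc, which a low-pass filter removes.
b, a = butter(4, 8_000 / (FS / 2))     # 8-kHz low-pass (illustrative)
demod = filtfilt(b, a, u ** 2)

# The demodulated waveform should correlate strongly with the squared envelope.
envelope = (1.0 + speech) ** 2
print(np.corrcoef(demod, envelope)[0, 1])   # close to 1
```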


With regard to temporal resolution, phase locking contributes to the frequency difference limen for frequencies up to 6 kHz (Oxenham et al., 2011). Therefore, in the ultrasonic frequency range, differences in frequency are not coded by the peripheral auditory system. On the other hand, the envelope of the modulated ultrasonic signal basically corresponds to the original signal. The envelope can be coded by the peripheral auditory system and transmitted to the central nervous system. In the current study, as the masking level increased, the demodulated sounds were strongly masked and could not contribute to hearing. In contrast, the direct ultrasonic stimulation was not masked by the current masker. The temporal codes of the ultrasonic stimulation may therefore contribute to hearing during masker presentation.

The characteristics of the confusion patterns for speech-modulated BCU were different from those for the original speech signal (Fig. 10). For the latter, most of the responses, except for the absent responses, were correct. Thus, the decrease in intelligibility derived from the absence of responses. These results indicate that confusion rarely occurred in the recognition test using the current numeral word list. In contrast, for speech-modulated BCU, the frequency of confusion for "2/ni/," "3/saN/," "4/yoN/," and "5/go/" increased with the masker intensity. Under high-intensity masking conditions, the demodulated sound is strongly masked by the speech-weighted noise, so subjects are forced to respond based on the unmasked ultrasonic stimulation. However, the current data suggest that the information transmitted solely by ultrasonic stimulation is insufficient to recognize the above-mentioned words. This results in more frequent confusion for speech-modulated BCU under masker presentation.

The Japanese numeral words "2/ni/," "3/saN/," "4/yoN/," and "5/go/" are one-syllable words, with 3 and 4 being two-mora words and 2 and 5 being one-mora words. On the other hand, 6 and 7 are two-syllable, two-mora words. Among the one-syllable, one-mora words, "2/ni/" was often confused with "5/go/" but not vice versa; "5/go/" was confused with various words. As for the one-syllable, two-mora words, "3/saN/" and "4/yoN/" were often confused with "4/yoN/" and "2/ni/," respectively. Therefore, the recognition of one-syllable words from the information transmitted solely by ultrasonic stimulation is difficult, and the difference in mora count is not beneficial to the recognition of these words. The two-syllable words have a longer duration than the other four words. For "6/roku/," the intelligibility was not influenced by the masker, and confusion with other words was rare. For "7/nana/," while the intelligibility decreased with increasing masker intensity, confusion still did not occur often. Furthermore, confusion between "6/roku/" and "7/nana/" was rarely observed. The envelope of "6/roku/" is characteristic (Fig. 1): a silent interval is contained between the first and last syllables, which probably contributes to the high response accuracy and lack of confusion under the masker presentation conditions. Thus, it is suggested that the envelope is an important cue for the recognition of speech-modulated BCU.

Previous findings on the intelligibility of speech-modulated BCU suggested that hearing devices utilizing BCU could be developed in the near future (Lenhardt et al., 1991; Hosoi et al., 1998).
However, it has been difficult for the profoundly deaf to obtain a benefit as sufficient as that obtained by normal hearing individuals (Shimokura et al., 2012; Matsui et al., 2013). The profoundly deaf cannot hear demodulated sounds, whereas normal hearing individuals can. As the current findings indicate, both demodulated sounds and direct ultrasonic stimulation contribute to the recognition of speech-modulated BCU, which is responsible for the difference in the benefit of ultrasonic hearing aids between normal hearing individuals and the profoundly deaf. In developing such a hearing device, investigation in normal hearing individuals is also of importance.


However, the benefits of demodulation and direct ultrasonic stimulation have to be separated in such investigations. In order to develop hearing aids for the profoundly deaf, the experiments have to be carried out under conditions in which the demodulated sounds are masked, as in the current study.

Acknowledgments

This study was supported by a Grant-in-Aid for Young Scientists (B) (20791217) and a Grant-in-Aid for Scientific Research (B) (26282130) from the Japan Society for the Promotion of Science (JSPS).

References

Bellucci, R., Schuneider, D., 1962. Some observations on ultrasonic perception in man. Ann. Otol. Rhinol. Laryngol. 71, 719–726.
Corso, J.F., 1963. Bone-conduction thresholds for sonic and ultrasonic frequencies. J. Acoust. Soc. Am. 35, 1738–1743.
Deatherage, H., Jeffress, J.A., Blodgett, H.C., 1954. A note on the audibility of intense ultrasonic sound. J. Acoust. Soc. Am. 26, 582.
Dieroff, H.G., Ertel, H., 1975. Some thoughts on the perception of ultrasonics by man. Arch. Otorhinolaryngol. 209, 277–299.
Dirks, D.D., Morgan, D.E., Dubno, J.R., 1982. A procedure for quantifying the effects of noise on speech recognition. J. Speech Hear. Disord. 47, 114–123.
Fujimoto, K., Nakagawa, S., Tonoike, M., 2005. Nonlinear explanation for bone-conducted ultrasonic hearing. Hear. Res. 204, 210–215.
Gavreau, V., 1948. Audibilité de sons de fréquence élevée. Compt Rendu 226, 2053–2054.
Haeff, A.V., Knox, C., 1963. Perception of ultrasound. Science 139, 590–592.
Hosoi, H., Imaizumi, S., Sakaguchi, T., Tonoike, M., Murata, K., 1998. Activation of the auditory cortex by ultrasound. Lancet 351, 496–497.
Imaizumi, S., Hosoi, H., Sakaguchi, T., Watanabe, Y., Sadato, N., Nakamura, S., Waki, A., Yonekura, Y., 2001. Ultrasound activates the auditory cortex of profoundly deaf subjects. Neuroreport 12, 583–586.
Japan Audiological Society, 2010. Method of speech audiometry. Audiol. Jpn. 46, 622–637.
Koizumi, T., Nishimura, T., Yamashita, A., Yamanaka, T., Imamura, T., Hosoi, H., 2014. Residual inhibition of tinnitus induced by 30-kHz bone-conducted ultrasound. Hear. Res. 310, 48–53.
Lenhardt, M.L., Skellett, R., Wang, P., Clarke, A.M., 1991. Human ultrasonic speech perception. Science 253, 82–85.
Matsui, T., Shimokura, R., Nishimura, T., Hosoi, H., Nakagawa, S., 2013. Speech intelligibility of hearing impaired participants in long-term training of bone-conducted ultrasonic hearing aid. Proc. Meet. Acoust. 19, 050088. http://dx.doi.org/10.1121/1.4799193.
Moore, B.C., Ernst, S.M., 2012. Frequency difference limens at high frequencies: evidence for a transition from a temporal to a place code. J. Acoust. Soc. Am. 132, 1542–1547.
Nishimura, T., Nakagawa, S., Sakaguchi, T., Hosoi, H., 2003. Ultrasonic masker clarifies ultrasonic perception in man. Hear. Res. 175, 171–177.
Nishimura, T., Okayasu, T., Uratani, Y., Fukuda, F., Saito, O., Hosoi, H., 2011. Peripheral perception mechanism of ultrasonic hearing. Hear. Res. 277, 176–183.
Nishimura, T., Nakagawa, S., Yamashita, A., Sakaguchi, T., Hosoi, H., 2009. N1m amplitude growth function for bone-conducted ultrasound. Acta Otolaryngol. Suppl. 562, 28–33.
Okamoto, Y., Nakagawa, S., Fujimoto, K., Tonoike, M., 2005. Intelligibility of bone-conducted ultrasonic speech. Hear. Res. 208, 107–113.
Okayasu, T., Nishimura, T., Yamashita, A., Saito, O., Fukuda, F., Yanai, S., Hosoi, H., 2013. Human ultrasonic hearing is induced by a direct ultrasonic stimulation of the cochlea. Neurosci. Lett. 539, 71–76.
Okayasu, T., Nishimura, T., Nakagawa, S., Yamashita, A., Nagatani, Y., Uratani, Y., Yamanaka, T., Hosoi, H., 2014. Evaluation of prosodic and segmental change in speech-modulated bone-conducted ultrasound by mismatch fields. Neurosci. Lett. 559, 117–121.
Oxenham, A.J., Micheyl, C., Keebler, M.V., Loper, A., Santurette, S., 2011. Pitch perception beyond the traditional existence region of pitch. Proc. Natl. Acad. Sci. U. S. A. 108, 7629–7634.
Pumphrey, R., 1950. Upper limit of frequency for human hearing. Nature 166, 571.
Shimokura, R., Fukuda, F., Hosoi, H., 2012. A case study of auditory rehabilitation in a profoundly deaf participant using a bone-conducted ultrasonic hearing aid. Behav. Sci. Res. 50, 187–198.
Studebaker, G.A., Sherbecoe, R.L., McDaniel, D.M., Gwaltney, C.A., 1999. Monosyllabic word recognition at higher-than-normal speech and noise levels. J. Acoust. Soc. Am. 105, 2431–2444.
Yamashita, A., Nishimura, T., Nagatani, Y., Okayasu, T., Koizumi, T., Sakaguchi, T., Hosoi, H., 2009a. Comparison between bone-conducted ultrasound and audible sound in speech recognition. Acta Otolaryngol. Suppl. 562, 34–39.
Yamashita, A., Nishimura, T., Nagatani, Y., Sakaguchi, T., Okayasu, T., Yanai, S., Hosoi, H., 2009b. The effect of visual information in speech signals by bone-conducted ultrasound. Neuroreport 21, 119–122.
