The effect of air pressure change on bone conduction (BC) hearing thresholds in the occluded ear was investigated. The pump-manometer system of an impedance bridge was used to change the air pressure in the ear canal of twenty-two normally hearing subjects. BC thresholds were measured with: (1) open ear; (2) the ear canal occluded with a probe tube and application of 0 daPa air pressure; and (3) the ear canal occluded with a probe tube and application of -350 daPa air pressure. Thresholds were lower in condition 2 than in condition 1, the difference decreasing from 27 dB at 250 Hz to 4.5 dB at 2000 Hz. Thresholds were higher in condition 3 than in condition 2. The results are interpreted in terms of changes in the relative contribution of the three routes of transmission for BC sound produced by occlusion and by a static pressure difference.
Objective: To study the development of the bilateral benefit in children using bilateral cochlear implants by measurements of speech recognition and sound localization. Design: Bilateral and unilateral speech recognition in quiet, in multi-source noise, and horizontal sound localization were measured on three occasions during a two-year period, without controlling for age or implant experience. Longitudinal and cross-sectional analyses were performed. Results were compared to cross-sectional data from children with normal hearing. Study sample: Seventy-eight children aged 5.1-11.9 years, with a mean bilateral cochlear implant experience of 3.3 years and a mean age of 7.8 years at inclusion in the study. Thirty children with normal hearing aged 4.8-9.0 years provided normative data. Results: For children with cochlear implants, bilateral and unilateral speech recognition in quiet was comparable, whereas a bilateral benefit for speech recognition in noise and sound localization was found at all three test occasions. Absolute performance was lower than in children with normal hearing. Early bilateral implantation facilitated sound localization. Conclusions: A bilateral benefit for speech recognition in noise and sound localization continues to exist over time for children with bilateral cochlear implants, but no relative improvement is found after three years of bilateral cochlear implant experience.
Objective: To compare bilateral and unilateral speech recognition in quiet and in multi-source noise, and horizontal sound localization of low and high frequency sounds in children with bilateral cochlear implants. Design: Bilateral performance was compared to performance of the implanted side with the best monaural speech recognition in quiet result. Parental reports were collected in a questionnaire. Results from the CI children were compared to binaural and monaural performance of normal-hearing peers. Study sample: Sixty-four children aged 5.1-11.9 years who were daily users of bilateral cochlear implants. Thirty normal-hearing children aged 4.8-9.0 years were recruited as controls. Results and Conclusions: Group data showed a statistically significant bilateral speech recognition and sound localization benefit, both behaviorally and in parental reports. The bilateral speech recognition benefit was smaller in quiet than in noise. The majority of subjects localized high and low frequency sounds significantly better than chance using bilateral implants, while localization accuracy was close to chance using unilateral implants. Binaural normal-hearing performance was better than bilateral performance in implanted children across tests, while bilaterally implanted children showed better localization than normal-hearing children under acute monaural conditions.
The audiogram predicts less than a third of the variance in speech reception thresholds (SRTs) for hearing-impaired (HI) listeners properly fit with individualized frequency-dependent gain. The remaining variance is often attributed to a combination of suprathreshold distortion in the auditory pathway and non-auditory factors such as cognitive processing. Distinguishing between these factors requires a measure of suprathreshold auditory processing to account for the non-cognitive contributions. Preliminary results in 12 HI listeners identified a correlation between spectrotemporal modulation (STM) sensitivity and speech intelligibility in noise presented over headphones. The current study (presented at IHCON 2014, August 13-17, 2014) assessed the effectiveness of STM sensitivity as a measure of suprathreshold auditory function to predict free-field SRTs in noise for a larger group of 47 HI listeners with hearing aids. SRTs were measured for Hagerman sentences presented at 65 dB SPL in stationary speech-weighted noise or four-talker babble. Pre-recorded speech and masker stimuli were played through a small anechoic chamber equipped with a master hearing aid programmed with individualized gain. The output from an IEC711 Ear Simulator was played binaurally through insert earphones. Three processing algorithms were examined: linear gain, linear gain plus noise reduction, or fast-acting compressive gain. STM stimuli consist of spectrally rippled noise with spectral-peak frequencies that shift over time. STM with a 2-cycle/octave spectral-ripple density and a 4-Hz modulation rate was applied to a 2-kHz lowpass-filtered pink-noise carrier. Stimuli were presented over headphones at 80 dB SPL (±5-dB roving). The threshold modulation depth was estimated adaptively in a two-alternative forced-choice task. STM sensitivity was strongly correlated (R2 = 0.48) with the global SRT (i.e., the SRTs averaged across masker and processing conditions).
The high-frequency pure-tone average (3-8 kHz) and age together accounted for 23% of the variance in global SRT. STM sensitivity accounted for an additional 28% of the variance in global SRT (total R2 = 0.51) when combined with these two other metrics in a multiple-regression analysis. Correlations between STM sensitivity and SRTs for individual conditions were weaker for noise reduction than for the other algorithms, and marginally stronger for babble than for stationary noise. The results are discussed in the context of previous work suggesting that STM sensitivity for low rates and low carrier frequencies is impaired by a reduced ability to use temporal fine-structure information to detect slowly shifting spectral peaks. STM detection is a fast, simple test of suprathreshold auditory function that accounts for a substantial proportion of variability in hearing-aid outcomes for speech perception in noise.
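The variance-partitioning logic described above (audiogram and age first, then the added predictive value of STM sensitivity) can be sketched as a hierarchical ordinary-least-squares regression. The data below are synthetic stand-ins, not the study's measurements; only the modeling steps mirror the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 47  # listeners, as in the study

# Synthetic illustrative predictors (NOT the study's data):
hfa = rng.normal(50, 15, n)   # high-frequency pure-tone average, dB HL
age = rng.normal(70, 8, n)    # years
stm = rng.normal(-15, 5, n)   # STM modulation-depth threshold, dB
# SRT constructed so each predictor carries some independent variance
srt = 0.05 * hfa + 0.05 * age + 0.4 * stm + rng.normal(0, 1.5, n)

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared([hfa, age], srt)        # audiogram + age alone
r2_full = r_squared([hfa, age, stm], srt)   # adding STM sensitivity
print(f"R2 (HFA + age)       = {r2_base:.2f}")
print(f"R2 (+ STM threshold) = {r2_full:.2f}")
print(f"Delta R2             = {r2_full - r2_base:.2f}")
```

Because the models are nested, adding STM sensitivity can only increase R2; the interesting question, as in the abstract, is by how much.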
The audiogram predicts <30% of the variance in speech-reception thresholds (SRTs) for hearing-impaired (HI) listeners fitted with individualized frequency-dependent gain. The remaining variance could reflect suprathreshold distortion in the auditory pathways or nonauditory factors such as cognitive processing. The relationship between a measure of suprathreshold auditory function, spectrotemporal modulation (STM) sensitivity, and SRTs in noise was examined for 154 HI listeners fitted with individualized frequency-specific gain. SRTs were measured for 65-dB SPL sentences presented in speech-weighted noise or four-talker babble to an individually programmed master hearing aid, with the output of an ear-simulating coupler played through insert earphones. Modulation-depth detection thresholds were measured over headphones for STM (2 cycles/octave density, 4-Hz rate) applied to an 85-dB SPL, 2-kHz lowpass-filtered pink-noise carrier. SRTs were correlated with both the high-frequency (2-6 kHz) pure-tone average (HFA; R2 = .31) and STM sensitivity (R2 = .28). Combined with the HFA, STM sensitivity significantly improved the SRT prediction (ΔR2 = .13; total R2 = .44). The remaining unaccounted variance might be attributable to variability in cognitive function and other dimensions of suprathreshold distortion. STM sensitivity was most critical in predicting SRTs for listeners <65 years old or with HFA <53 dB HL. Results are discussed in the context of previous work suggesting that STM sensitivity for low rates and low-frequency carriers is impaired by a reduced ability to use temporal fine-structure information to detect dynamic spectra. STM detection is a fast test of suprathreshold auditory function for frequencies <2 kHz that complements the HFA to predict variability in hearing-aid outcomes for speech perception in noise.
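The adaptive modulation-depth measurement described in these abstracts is typically run as a transformed up-down staircase; a common choice is the 2-down-1-up rule, which converges on the 70.7%-correct point. The sketch below simulates such a track for a hypothetical listener with an assumed logistic psychometric function; the threshold, slope, and step sizes are all illustrative, not the studies' values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical listener: probability correct in 2AFC follows a logistic
# function of modulation depth (dB re 100% modulation), chance = 50%.
true_threshold = -12.0  # assumed for illustration

def p_correct(depth_db):
    return 0.5 + 0.5 / (1 + np.exp(-(depth_db - true_threshold)))

# 2-down-1-up staircase: two correct in a row -> harder; one miss -> easier.
depth, step = 0.0, 4.0
n_correct, last_move = 0, 0.0
reversals = []
for _ in range(400):
    if len(reversals) >= 8:
        break
    if rng.random() < p_correct(depth):
        n_correct += 1
        if n_correct < 2:
            continue                    # wait for two correct before stepping
        n_correct, move = 0, -step
    else:
        n_correct, move = 0, +step
    if last_move and (move > 0) != (last_move > 0):
        reversals.append(depth)         # direction changed: record a reversal
        step = max(step / 2, 1.0)       # shrink step after each reversal
    last_move = move
    depth = min(depth + move, 0.0)      # depth cannot exceed full modulation

threshold_est = np.mean(reversals[-4:]) # average the last reversals
print(f"estimated threshold: {threshold_est:.1f} dB")
```

Averaging the final reversals gives a threshold estimate near the 70.7%-correct point of the assumed psychometric function.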
Bone conduction (BC) is the transmission of sound to the inner ear through the bones of the skull. This type of transmission is used in humans fitted with BC hearing aids as well as to distinguish between conductive and sensorineural hearing losses. The objective of the present study was to develop a finite-element (FE) model of the human skull based on cryosectional images of a female cadaver head in order to gain a better understanding of BC sound transmission. The model's BC behavior was then validated in terms of sound transmission against experimental data published in the literature. Results showed that the responses of the simulated skull FE model were consistent with the experimentally reported data.
Bone conduction (BC) sound is the perception of sound transmitted through the skull bones and surrounding tissues. To better understand BC sound perception and the interaction with surrounding tissues, the power transmission of BC sound was investigated in a three-dimensional finite-element model of a whole human head. BC sound transmission was simulated in the FE model, and the power dissipation as well as the power flow following a mechanical vibration at the mastoid process behind the ear were analyzed. The simulations show that the skull bone (comprising the cortical bone and diploë) has the highest BC power flow and thereby provides most of the power transmission for BC sound. The soft tissues were the second most important medium for BC sound power transmission, while the least BC power transmission was through the brain and the surrounding cerebrospinal fluid (CSF) inside the cranial vault. The vibrations transmitted in the skull are mainly concentrated at the skull base when the stimulation is at the mastoid. Other vibration transmission pathways of importance are located at the occipital bone at the posterior side of the head, while the transmission of sound power through the face, forehead, and vertex is minor. The power flow between the skull bone and the skull interior indicates that some BC power is transmitted to and from the skull interior, but the transmission of sound power through the brain seems to be minimal and only local to the brain-bone interface.
A whole-head finite element model for simulation of bone-conducted (BC) sound transmission was developed. The geometry and structures were identified from cryosectional images of a female human head, and eight different components were included in the model: cerebrospinal fluid, brain, three layers of bone, soft tissue, eye, and cartilage. The skull bone was modeled as a sandwich structure with an inner and outer layer of cortical bone and soft spongy bone (diploë) in between. The behavior of the finite element model was validated against experimental data on mechanical point impedance, vibration of the cochlear promontories, and transcranial BC sound transmission. The experimental data were obtained in both cadaver heads and live humans. The simulations showed multiple low-frequency resonances, where the first was caused by rotation of the head and the second was close in frequency to average resonances obtained in cadaver heads. At higher frequencies, the simulated impedance was within one standard deviation of the average experimental data. The acceleration response at the cochlear promontory was overall lower for the simulations compared with experiments, but the overall tendencies were similar. Even if the current model cannot predict results in a specific individual, it can be used for understanding the characteristics of BC sound transmission in general. (C) 2016 Acoustical Society of America.
Nowadays, many different kinds of bone-conduction devices (BCDs) are available for hearing rehabilitation. Most studies of these devices fail to compare the different types of BCDs under the same conditions. Moreover, most comparisons are between two BCDs in the same subject, or two BCDs in different subjects, failing to provide an overview of the results across several BCDs. Another issue is that some BCDs require surgical procedures that prevent comparison of the BCDs in the same person. In this study, four types of skin-drive BCDs, three direct-drive BCDs, and one oral device were evaluated in a finite-element model of the human head that was able to simulate all BCDs under the same conditions. The evaluation was conducted using both a dynamic force as input and an electric voltage to a model of a BCD vibrator unit. The results showed that the direct-drive BCDs and the oral device gave vibration responses within 10 dB at the cochlea. The skin-drive BCDs had similar or even better cochlear vibration responses than the direct-drive BCDs at low frequencies, but the direct-drive BCDs gave up to 30 dB higher cochlear vibration responses at high frequencies. The study also investigated the mechanical point impedance at the interface between the BCD and the head, providing information that explains some of the differences seen in the results. For example, when the skin-drive BCD attachment area becomes too small, the transducer cannot provide an output force similar to the devices with larger attachment surfaces.
Background: Bone conduction (BC) is an alternative to air conduction for stimulating the inner ear. In general, BC stimulation is applied at a specific location directly on the skull bone or through the skin covering the skull bone. The stimulation propagates to the ipsilateral and contralateral cochlea, mainly via the skull bone and possibly via other skull contents. This study aims to investigate the wave propagation on the surface of the skull bone during BC stimulation at the forehead and at the ipsilateral mastoid. Methods: Measurements were performed in five human cadaveric whole heads. The electromagnetic transducer from a BCHA (bone conducting hearing aid), a Baha(R) Cordelle II transducer in particular, was attached to a percutaneously implanted screw or positioned with a 5-Newton steel headband at the mastoid and forehead. The Baha transducer was driven directly with single-tone signals in the frequency range of 0.25-8 kHz, while skull bone vibrations were measured at multiple points on the skull using a scanning laser Doppler vibrometer (SLDV) system and a 3D LDV system. The 3D velocity components, defined by the 3D LDV measurement coordinate system, were transformed into tangent (in-plane) and normal (out-of-plane) components in a local intrinsic coordinate system at each measurement point, based on the cadaver head's shape as estimated from the spatial locations of all measurement points. Results: Rigid-body-like motion was dominant at low frequencies below 1 kHz, and clear transverse traveling waves were observed at high frequencies above 2 kHz for both measurement systems. The surface wave propagation speed was approximately 450 m/s at 8 kHz, corresponding to a transcranial time interval of 0.4 ms. The 3D velocity measurements confirmed the complex space- and frequency-dependent response of the cadaver heads indicated by the 1D data from the SLDV system.
Comparison between the tangent and normal motion components, extracted by transforming the 3D velocity components into a local coordinate system, indicates that the normal component, with spatially varying phase, is dominant above 2 kHz, consistent with local bending vibration modes and traveling surface waves. Conclusion: Both SLDV and 3D LDV data indicate that sound transmission in the skull bone causes rigid-body-like motion at low frequencies, whereas transverse deformations and traveling waves were observed above 2 kHz, with propagation speeds of approximately 450 m/s at 8 kHz. (C) 2017 Elsevier B.V. All rights reserved.
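The tangent/normal decomposition described above can be sketched as follows: the local surface normal at a measurement point is estimated by a least-squares plane fit to neighboring points, and the measured 3D velocity is projected onto that normal. The coordinates and velocity values below are invented for illustration:

```python
import numpy as np

# Hypothetical cluster of neighbouring LDV measurement points (metres)
# on a nearly flat patch of skull; values are illustrative only.
pts = np.array([[ 0.00,  0.00,  0.000],
                [ 0.01,  0.00,  0.001],
                [ 0.00,  0.01,  0.001],
                [-0.01,  0.00,  0.000],
                [ 0.00, -0.01, -0.001]])
v = np.array([0.2, 0.1, 1.5])  # measured 3D velocity (mm/s), mostly out-of-plane

# Local surface normal = smallest singular vector of the centred point
# cloud (equivalent to a least-squares plane fit).
_, _, vt = np.linalg.svd(pts - pts.mean(axis=0))
normal = vt[-1]

v_normal = (v @ normal) * normal   # out-of-plane (bending) component
v_tangent = v - v_normal           # in-plane component
print("normal  magnitude:", round(float(np.linalg.norm(v_normal)), 3))
print("tangent magnitude:", round(float(np.linalg.norm(v_tangent)), 3))

# Sanity check on the reported transit time: a transverse wave at
# 450 m/s crossing roughly 18 cm of skull (an assumed head width)
# takes 0.18 / 450 s, i.e. about 0.4 ms.
print("transit time (ms):", round(0.18 / 450 * 1e3, 2))
```

The two components are orthogonal by construction and sum back to the measured velocity, which is a useful check when implementing the transform.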
Objective: This study explores the influence of stimulation position on bone conduction (BC) hearing sensitivity with a BC transducer attached using a headband. Design: (1) The cochlear promontory motion was measured in cadaver heads using laser Doppler vibrometry while seven different positions around the pinna were stimulated using a bone-anchored hearing aid transducer attached using a headband. (2) The BC hearing thresholds were measured in human subjects, with the bone vibrator Radioear B71 attached to the same seven stimulation positions. Study sample: Three cadaver heads and twenty participants. Results: Stimulation at a position superior-anterior to the pinna generated the largest promontory motion and the lowest BC thresholds. Stimulation at the positions superior to the pinna, the mastoid, and posterior-inferior to the pinna showed similar magnitudes of promontory motion and similar levels of BC thresholds. Conclusion: Stimulation on the regions superior to the pinna, the mastoid, and posterior-inferior to the pinna provides stable BC transmission and is insensitive to small changes of the stimulation position. Therefore, it is reliable to use the mastoid to determine BC thresholds in clinical audiometry. However, stimulation at a position superior-anterior to the pinna provides more efficient BC transmission than stimulation on the mastoid.
One limitation of the Bone Anchored Hearing Aid (Baha) is insufficient amplification for patients with moderate to severe sensorineural hearing loss. Therefore, we investigated whether bone-conducted (BC) sound transmission improves when the stimulation approaches the cochlea. The influence of the squamosal suture on BC sound transmission was also investigated. Both sides of the heads of seven human cadavers were used, and vibrational stimulation was applied at eight positions on each side over a frequency range of 0.1-10 kHz. A laser Doppler vibrometer was used to measure the resulting velocity of the cochlear promontory. It was found that the velocity of the promontory increases as the stimulation position approaches the cochlea; this was especially apparent at distances within 2.5 cm from the ear canal opening and when the stimulation position was in the opened mastoid. At frequencies above 500 Hz there was on average a 10 to 20 dB greater vibrational response at the cochlea when the stimulation was close to the cochlea compared with the normal Baha position. Moreover, even though there were general indications of attenuation of BC sound when passing the squamosal suture, an effect of the suture could not be conclusively determined.
Hypothesis: The velocity response at the contralateral cochlea from bone-conducted (BC) stimulation depends on the stimulation position.
Background: BC sound transmission in the human skull is complex and differs from that of air-conducted sound. BC sound stimulates both cochleae with different amplitudes and time delays, influencing hearing perception in a way that is not completely understood. One important parameter is the stimulation position on the human skull.
Method: By applying BC stimulation at 8 positions on both sides of 7 human cadaver skulls, the contralateral velocity response of the cochlear promontory was investigated in the frequency range of 0.1 to 10 kHz. Using previous data from ipsilateral stimulation, the transcranial transmission (TT) and the effects of bilateral stimulation on one cochlea were calculated.
Results: The contralateral transmission from the 8 positions showed small differences, but the TT showed a generally increased cochlear separation when the stimulation position approached the cochlea. The effect of simultaneous bilateral stimulation was calculated, showing a low-frequency negative effect for correlated signals, whereas uncorrelated signals gave a 3-dB gain. At higher frequencies, there was less interaction of the combined stimulation because of the greater intercochlear separation. Also, the greatest time difference between ipsilateral transmission and contralateral transmission was at positions close to the cochlea.
Conclusion: The stimulation position only slightly affects the amplitude and phase of the contralateral cochlear velocity response. However, because of the great influence from the ipsilateral transmission, a position close to the cochlea would be beneficial for patients with bilateral BC hearing aids.
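The summation effects reported above can be checked with a few lines of arithmetic: two equal-level coherent signals add as amplitudes (up to +6 dB in phase, with cancellation near antiphase, which produces the low-frequency negative effect), whereas uncorrelated signals add as powers (+3 dB). A minimal sketch:

```python
import numpy as np

def coherent_gain_db(phase_rad):
    """Level change, re one signal alone, when an equal-level coherent
    signal with the given phase offset is added (amplitude summation)."""
    return 20 * np.log10(abs(1 + np.exp(1j * phase_rad)))

print(f"in phase      : {coherent_gain_db(0.0):+.1f} dB")          # +6 dB
print(f"near antiphase: {coherent_gain_db(0.9 * np.pi):+.1f} dB")  # cancellation
print(f"uncorrelated  : {10 * np.log10(2):+.1f} dB")               # powers add: +3 dB
```

At higher frequencies, larger and more variable interaural phase differences (greater intercochlear separation) average out this interference, consistent with the reduced interaction reported in the Results.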
The vibration velocity of the lateral semicircular canal and the cochlear promontory was measured in 16 subjects with a unilateral middle ear common cavity, using a laser Doppler vibrometer, when the stimulation was by bone conduction (BC). Four stimulation positions were used: three ipsilateral positions and one contralateral position. Masked BC pure-tone thresholds were measured with stimulation at the same four positions. Valid vibration data were obtained at frequencies between 0.3 and 5.0 kHz. Large intersubject variation of the results was found with both methods. The difference in cochlear velocity with BC stimulation at the four positions varied as a function of frequency, while the tone thresholds showed a tendency toward lower thresholds with stimulation at positions close to the cochlea. The correlation between the vibration velocities of the two measuring sites of the otic capsule was high. Also, relative median data showed similar trends for both vibration and threshold measurements. However, due to the high variability of both vibration and perceptual data, low correlation between the two methods was found at the individual level. The results from this study indicate that human hearing perception of BC sound can be estimated from measures of cochlear vibration of the otic capsule. They also show that vibration measurements of the cochlea in cadaver heads are similar to those measured in live humans.
Previous research has shown deficits in vocal emotion recognition in sub-populations of individuals with hearing loss, making this a high-priority research topic. However, previous research has only examined vocal emotion recognition using verbal material, in which emotions are expressed through emotional prosody. There is evidence that older individuals with hearing loss suffer from deficits in general prosody recognition, not specific to emotional prosody. No study has examined the recognition of non-verbal vocalizations, which constitute another important source for the vocal communication of emotions. It might be the case that individuals with hearing loss have specific difficulties in recognizing emotions expressed through prosody in speech, but not non-verbal vocalizations. We aim to examine whether vocal emotion recognition difficulties in middle-aged to older individuals with sensorineural mild-to-moderate hearing loss are better explained by deficits in vocal emotion recognition specifically, or by deficits in prosody recognition generally, by including both sentences and non-verbal expressions. Furthermore, some of the studies that have concluded that individuals with mild-to-moderate hearing loss have deficits in vocal emotion recognition ability have also found that the use of hearing aids does not improve recognition accuracy in this group. We aim to examine the effects of linear amplification and audibility on the recognition of different emotions expressed both verbally and non-verbally. Besides examining accuracy for different emotions, we will also look at patterns of confusion (which specific emotions are mistaken for other specific emotions, and at which rates) during both amplified and non-amplified listening, and we will analyze all material acoustically and relate the acoustic content to performance. Together these analyses will provide clues to the effects of amplification on the perception of different emotions.
For these purposes, a total of 70 middle-aged to older individuals, half with mild-to-moderate hearing loss and half with normal hearing, will perform a computerized forced-choice vocal emotion recognition task with and without amplification.
Few studies have examined which acoustic features of speech can be used to distinguish between different emotions, and how combinations of acoustic parameters contribute to identification of emotions. The aim of the present study was to investigate which acoustic parameters in Swedish speech are most important for differentiation between, and identification of, the emotions anger, fear, happiness, sadness, and surprise in Swedish sentences. One-way ANOVAs were used to compare acoustic parameters between the emotions, and both simple and multiple logistic regression models were used to examine the contribution of different acoustic parameters to differentiation between emotions. Results showed differences between emotions for several acoustic parameters in Swedish speech: surprise was the most distinct emotion, with significant differences compared to the other emotions across a range of acoustic parameters, while anger and happiness did not differ from each other on any parameter. The logistic regression models showed that fear was the best-predicted emotion while happiness was the most difficult to predict. Frequency- and spectral-balance-related parameters were best at predicting fear. Amplitude- and temporal-related parameters were most important for surprise, while a combination of frequency-, amplitude-, and spectral-balance-related parameters is important for sadness. Assuming that there are similarities between acoustic models and how listeners infer emotions in speech, the results suggest that individuals with hearing loss, who have reduced frequency-detection abilities, may have greater difficulty than normal-hearing individuals in identifying fear in Swedish speech. Since happiness and fear relied primarily on amplitude- and spectral-balance-related parameters, their detection is probably facilitated more by hearing aid use.
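A minimal sketch of the kind of logistic-regression modeling described above, fit by gradient descent on two synthetic acoustic features; the feature values and class separations are invented for illustration and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic illustration: mean F0 (Hz) and spectral balance (dB)
# for two hypothetical emotion classes.
n = 100
fear = np.column_stack([rng.normal(260, 25, n), rng.normal(-14, 3, n)])
sadness = np.column_stack([rng.normal(190, 25, n), rng.normal(-22, 3, n)])
X = np.vstack([fear, sadness])
y = np.r_[np.ones(n), np.zeros(n)]  # 1 = fear, 0 = sadness

# Standardise the features, then fit logistic regression by gradient descent.
X = (X - X.mean(axis=0)) / X.std(axis=0)
Xb = np.column_stack([np.ones(len(X)), X])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / len(y)   # gradient of the log-loss

acc = np.mean(((1 / (1 + np.exp(-Xb @ w))) > 0.5) == y)
print(f"classification accuracy: {acc:.2f}")
```

The fitted weights indicate how strongly each acoustic parameter contributes to separating the emotions, which parallels the study's use of simple versus multiple logistic regression to rank parameter importance.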
The participants in the Eriksholm Workshop on Wideband Absorbance Measures of the Middle Ear developed statements for this consensus article on the final morning of the Workshop, with the discussion motivated by the presentations of the first two days. The article is divided into three general areas: terminology, research needs, and clinical application. The varied terminology in the area was seen as potentially confusing, and there was consensus on adopting an organizational structure that grouped the family of measures under the term wideband acoustic immittance (WAI) and dropped the term transmittance in favor of absorbance. There is clearly still a need to conduct research on WAI measurements. Several areas of research were emphasized, including the establishment of a greater WAI normative database, especially developmental norms, and more data on a variety of disorders; increased research on the temporal aspects of WAI; and methods to ensure the validity of test data. The area of clinical application will require training of clinicians in WAI technology. The clinical implementation of WAI would be facilitated by developing feature detectors for various pathologies that, for example, might combine data across ear-canal pressures or probe frequencies.
Objectives: The output performance of a novel semi-implantable transcutaneous bone conduction device was compared to an established percutaneous bone-anchored hearing system using cadaver heads. The influence on the implanted actuator's performance of actuator position, tissue growth below the actuator, and mounting on the bone surface versus in a flattened bone bed was investigated. Materials and Methods: The percutaneous and the new transcutaneous device were sequentially implanted at two sites in five human cadaver heads: 55 mm superior-posterior to the ear canal opening (position A) and, closer to the cochlea, about 20 mm inferior-posterior to the ear canal opening behind the pinna on the mastoid (position B). The ipsi- and contralateral cochlear promontory (CP) velocity magnitude responses to percutaneous and transcutaneous stimulation were measured using laser Doppler vibrometry. In addition, the CP vibration with the transcutaneous device placed directly on the skull bone surface was compared with placement in a flattened bone bed at a depth of about 3 mm. Finally, the influence of placing a thin silicone interposition layer under the implanted transducer was also explored. Results: The percutaneous device provided about an 11 dB higher average CP vibration level than the transcutaneous device at frequencies between 0.5 and 10 kHz. The ipsilateral CP vibration responses with stimulation at position B were on average 13 dB higher compared to stimulation at position A. The placement of the transcutaneous transducer at position B provided similar or higher average vibration magnitudes than the percutaneous transducer at position A. The 3 mm deep flattened bone bed had no significant effect on the output performance.
Placing a thin silicone layer under the transcutaneous transducer had no significant influence on the output of the transcutaneous device. Conclusions: Our results using the CP vibration responses show that, at frequencies above 500 Hz, the new transcutaneous device at position B provides output levels similar to the percutaneous device at position A. The results also indicate that neither a bone bed for the placement of the transcutaneous transducer nor a simulated tissue growth between the actuator and the bone affects the output performance of the device.
Objective: The percutaneous bone-anchored hearing aid (BAHA) is an important rehabilitation alternative for patients who have conductive or mixed hearing loss. However, these devices use a percutaneous, bone-anchored implant that has several reported drawbacks. A transcutaneous bone conduction implant system (BCI) is proposed as an alternative to the percutaneous system because it leaves the skin intact. The BCI transmits the signal through the intact skin to a permanently implanted transducer via an induction loop system. The aim of this study was to compare the electroacoustic performance of the BAHA Classic-300 with a full-scale BCI on a cadaver head in a sound field. The BCI comprised the audio processor of the Vibrant Soundbridge connected to a balanced vibration transducer (balanced electromagnetic separation transducer).
Methods: Implants with snap abutments were placed in the parietal bone (Classic-300) and 15-mm deep in the temporal bone (BCI). The vibration responses at the ipsilateral and contralateral cochlear promontories were measured with a laser Doppler vibrometer, with the beam aimed through the ear canal.
Results: Results show that the BCI produces approximately 5 dB higher maximum output level and has a slightly lower distortion than the Classic-300 at the ipsilateral promontorium at speech frequencies. At the contralateral promontorium, the maximum output level was considerably lower for the BCI than for the Classic-300 except in the 1-2 kHz range, where it was similar.
Conclusion: Present results support the proposal that a BCI system can be a realistic alternative to a BAHA.
Percutaneous bone-anchored hearing aids (BAHA) are today an important rehabilitation alternative for patients suffering from conductive or mixed hearing loss. Despite their success, they are associated with drawbacks such as skin infections, accidental or spontaneous loss of the bone implant, and patient refusal of treatment due to stigma. A novel bone conduction implant (BCI) system has been proposed as an alternative to the BAHA system because it leaves the skin intact. Such a BCI system has now been developed, and the encapsulated transducer uses a non-screw attachment to a hollow recess in the lateral portion of the temporal bone. The aim of this study is to describe the basic engineering principles and some preclinical results obtained with the new BCI system. Laser Doppler vibrometer measurements on three cadaver heads show that the new BCI system produces a 0-10 dB higher maximum output acceleration level at the ipsilateral promontory relative to a conventional ear-level BAHA at speech frequencies. At the contralateral promontory, the maximum output acceleration level was considerably lower for the BCI than for the BAHA.
This book aims to facilitate the exchange of ideas between otosurgeons and engineers on common topics such as middle ear function, tympanoplasty, implantable hearing devices, and ear prostheses. Due to recent advances in technology, gene-therapy and tissue-engineering procedures will also be important issues in the treatment of middle ear disorders.
The auditory system helps regulate phonation. A speaker's perception of their own voice is likely to be of both emotional and functional significance. Although many investigations have observed deviating voice qualities in individuals who are prelingually deaf or profoundly hearing impaired, less is known regarding how older adults with acquired hearing impairments perceive their own voice and potential voice problems. Purpose: The purpose of this study was to investigate problems relating to phonation and self-perceived voice sound quality in older adults based on hearing ability and the use of hearing aids. Method: This was a cross-sectional study, with 290 participants divided into 3 groups (matched by age and gender): (a) individuals with hearing impairments who did not use hearing aids (n = 110), (b) individuals with hearing impairments who did use hearing aids (n = 110), and (c) individuals with no hearing impairments (n = 70). All participants underwent a pure-tone audiometry exam; completed standardized questionnaires regarding their hearing, voice, and general health; and were recorded speaking in a soundproof room. Results: The hearing aid users surpassed the benchmarks for having a voice disorder on the Voice Handicap Index (VHI; Jacobson et al., 1997) at almost double the rate predicted by the Swedish normative values for their age range, although there was no significant difference in acoustical measures between any of the groups. Both groups with hearing impairments scored significantly higher on the VHI than the control group, indicating more impairment. It remains inconclusive how much hearing loss versus hearing aids separately contribute to the difference in voice problems. The total scores on the Hearing Handicap Inventory for the Elderly (Ventry & Weinstein, 1982), in combination with the variables gender and age, explained 21.9% of the variance on the VHI.
Perceiving one's own voice as distorted, dull, or hollow had a strong negative association with general satisfaction with the sound quality of one's own voice. In addition, groupwise differences in own-voice descriptions suggest that a negative perception of one's voice could be influenced by alterations caused by hearing aid processing. Conclusions: The results indicate that hearing impairments and hearing aids affect several aspects of vocal satisfaction in older adults. A greater understanding of how hearing impairments and hearing aids relate to voice problems may contribute to better voice and hearing care.
Dissatisfaction with the sound of one's own voice is common among hearing-aid users. Little is known regarding how hearing impairment and hearing aids separately affect own-voice perception. This study examined own-voice perception and associated issues before and after a hearing-aid fitting for new hearing-aid users and a refitting for experienced users, to investigate whether it was possible to differentiate between the effects of (unaided) hearing impairment and hearing aids. Further aims were to investigate whether First-Time and Experienced users, as well as users with dome and mold inserts, differed in the severity of own-voice problems. The study had a cohort design with three groups: First-Time hearing-aid users going from unaided to aided hearing (n = 70), Experienced hearing-aid users replacing their old hearing aids (n = 70), and an unaided control group (n = 70). The control group was surveyed once and the hearing-aid users twice: once before hearing-aid fitting/refitting and once after. The results demonstrated that own-voice problems are common among both First-Time and Experienced hearing-aid users with either dome- or mold-type fittings, while people with near-normal hearing who do not use hearing aids report few problems. Hearing aids increased ratings of own-voice problems among First-Time users, particularly those with mold inserts. The results suggest that altered auditory feedback, whether through unaided hearing impairment or through hearing aids, is likely both to change own-voice perception and to complicate regulation of vocal intensity, but hearing aids are the primary reason for poor perceived sound quality of one's own voice.
Objective: The aim of this study was to develop and evaluate a Swedish version of the Hearing In Noise Test for Children (HINT-C). Design: In the first part, the Swedish HINT lists for adults were evaluated with children at three signal-to-noise ratios (SNRs): -4, -1, and +2 dB. Lists including sentences not reaching 50% recognition at +2 dB SNR were excluded, and the remaining lists constituted the HINT-C. In the second part, the HINT-C was evaluated in children and adults using an adaptive procedure to determine the SNR for 50% correctly repeated sentences. Study sample: In the first part, 112 children aged 6-11 years participated, while another 28 children and 9 adults participated in the second part. Results: Eight out of the 24 tested adult HINT lists did not meet the inclusion criteria. The remaining 16 lists formed the Swedish HINT-C, which was evaluated in children 6-11 years old. A regression analysis showed that the predicted SNR threshold (dB) was 0.495 - 0.365*age (years + months/12), and the children reached the mean adult score at an age of 10.5 years. Conclusions: A Swedish version of the HINT-C was developed and evaluated in children six years and older.
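The reported regression can be written out directly. The sketch below is only an illustration of the published equation; the function name and interface are assumptions, not part of the study.

```python
def predicted_snr_threshold(years: int, months: int = 0) -> float:
    """Predicted SNR threshold (dB) for 50% sentence recognition on the
    Swedish HINT-C, per the reported regression:
    threshold = 0.495 - 0.365 * age, with age in decimal years."""
    age = years + months / 12.0
    return 0.495 - 0.365 * age

# Younger children need a more favorable (higher) SNR than older children:
# e.g. predicted_snr_threshold(6) > predicted_snr_threshold(11)
```

At age 10.5 years (10 years, 6 months) the equation gives 0.495 - 0.365 * 10.5 = -3.34 dB, the age at which the abstract reports children reach the mean adult score.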
Objective: To compare recordings of bone conduction (BC) stimulated auditory brainstem responses (ABRs) obtained using the newer BC transducer Radioear B81 and the conventional BC transducer Radioear B71. The balanced electromagnetic separation transducer (BEST) design found in the B81 may influence ABR magnitudes and latencies, as well as electrical artefacts. Design: ABRs to tone burst stimuli of 500 Hz, 2000 Hz, and 4000 Hz, click stimulation, and broad-band chirp stimulation at 20 and 50 dB nHL were recorded. For each device, stimulus, and intensity level, the ABR Jewett wave V amplitude and latency were obtained. The device-related electrical stimulus artefacts in the ABR recordings were also analysed by calculating the Hilbert envelope of the peri-stimulus recording segments. Study sample: Twenty-three healthy adults with normal hearing were included in the study. Results: The ABRs obtained with the B81 were similar to those obtained with the B71 in terms of ABR wave V amplitude and latency. However, the B81 produced smaller electrical artefacts than the B71, and this difference was statistically significant. Conclusions: The BC transducer Radioear B81 provides ABRs comparable to the Radioear B71 while causing smaller artefacts.
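The artefact-quantification step named in the abstract, taking the Hilbert envelope of peri-stimulus recording segments, can be sketched as follows. This is a generic FFT-based analytic-signal implementation, not the authors' analysis code, and the function name is an assumption.

```python
import numpy as np

def hilbert_envelope(x: np.ndarray) -> np.ndarray:
    """Amplitude envelope of a real signal via the analytic signal
    (FFT-based Hilbert transform): zero the negative frequencies,
    double the positive ones, and take the magnitude of the result."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

# Sanity check: a pure cosine with an integer number of cycles
# has a flat envelope of 1.
t = np.arange(256) / 256.0
env = hilbert_envelope(np.cos(2 * np.pi * 8 * t))
```

Summarising the envelope within a peri-stimulus window (e.g. its peak) then yields a single artefact magnitude per device and stimulus condition for statistical comparison.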
Nickel-yttria-stabilized zirconia (Ni-YSZ) cermet is widely used as an anode material in solid oxide fuel cells (SOFCs); however, Ni re-oxidation causes critical problems due to volume expansion, which causes high thermal stress. We fabricated a Ni-YSZ anode functional layer (AFL), which is an essential component in high-performance SOFCs, and re-oxidized it to investigate the related three-dimensional (3D) microstructural and thermo-mechanical effects. A 3D model of the re-oxidized AFL was generated using focused ion beam-scanning electron microscope (FIB-SEM) tomography. Re-oxidation of the Ni phase caused significant volumetric expansion, which was confirmed via image analysis and calculation of the volume fraction, connectivity, and two-phase boundary density. Finite element analysis (FEA) with simulated heating to 500-900 degrees C confirmed that the thermal stress in re-oxidized Ni-YSZ is concentrated at the boundaries between YSZ and re-oxidized NiO (nickel oxide). NiO is subjected to more stress than YSZ. Stress exceeding the fracture stress of 8 mol% YSZ appears primarily at 800 degrees C or higher. The stress is also more severe near the electrolyte-anode boundary than in the Ni-YSZ cermet and the YSZ regions. This may be responsible for the electrolyte membrane delamination and fracture that are observed during high-temperature operation.
Ceramic-metal composites (CMCs) have been used for various high-temperature applications, including combustion engines, steam and gas turbines, industrial heaters, and ceramic fuel cells. Reliable use of CMCs at elevated temperatures, however, is very difficult in practice for the following reasons. First, the melting and sublimation points of the constituent solids differ, causing undesired diffusion and mixing of elements across material boundaries that degrade the function of the materials. Second, maintaining the temperature and pressure regimes required for the desired phases of the component materials is challenging during operation in many practical cases. Lastly, the thermal expansion rates of the two materials differ significantly, frequently causing mechanical stresses and fractures. There have been numerous efforts to evaluate and design CMC materials to minimize these thermo-mechanical stresses. Among various techniques, focused ion beam-scanning electron microscope (FIB-SEM) tomography has proved to be a state-of-the-art technique for obtaining 3D compositional and structural information on CMC materials. In this study, we evaluated the thermal stresses applied to nickel-zirconia CMCs by using FIB-SEM 3D tomography and finite element analysis.
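The stress mechanism named above, mismatched thermal expansion between the metal and ceramic phases, has a textbook first-order estimate that complements the full FEA: the biaxial mismatch stress sigma = E * delta_alpha * delta_T / (1 - nu). This formula and the example values below are generic assumptions for illustration, not numbers or methods taken from the study.

```python
def thermal_mismatch_stress(E_pa: float, nu: float,
                            d_alpha: float, d_temp: float) -> float:
    """First-order biaxial thermal mismatch stress (Pa) in a constrained
    layer: sigma = E * delta_alpha * delta_T / (1 - nu), where
    E_pa    = Young's modulus of the constrained phase (Pa),
    nu      = Poisson's ratio,
    d_alpha = CTE mismatch between the two phases (1/K),
    d_temp  = temperature change from the stress-free state (K)."""
    return E_pa * d_alpha * d_temp / (1.0 - nu)

# Illustrative placeholder values only (E = 200 GPa, nu = 0.3,
# CTE mismatch 3e-6 /K, heated by 800 K):
sigma = thermal_mismatch_stress(200e9, 0.3, 3e-6, 800.0)
```

Even with modest CTE mismatch, such an estimate reaches hundreds of MPa over SOFC-like temperature swings, which is why the phase-boundary stress concentrations resolved by FIB-SEM-based FEA matter for fracture prediction.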
A third window, i.e., a cochlear fluid pathway other than the oval window and round window, is considered to be a significant factor in bone-conducted hearing. A three-dimensional finite element model of the human ear consisting of the middle ear and cochlea was used to investigate the effect of third windows on bone-conducted hearing. This study aims to identify the third window that yields cochlear responses consistent with previous studies in air-conducted hearing and that causes the asymmetry of the volume velocity ratio between the oval window and round window in bone-conducted hearing. The preliminary results show that the cochlear aqueduct and the vestibular aqueduct with high impedance do not affect the basilar membrane velocity in air-conducted hearing. In contrast, in bone-conducted hearing, the direction in which the structure is shaken by the bone-conducted stimulation, as well as the third window, can be a significant factor causing the asymmetry of the volume velocity ratio found by Stenfelt et al.
A three-dimensional finite-element (FE) model of a human dry skull was devised for simulation of human bone-conduction (BC) hearing. Although a dry skull is a simplification of the real, complex human skull, such a model is valuable for understanding basic BC hearing processes. For validation of the model, the mechanical point impedance of the skull as well as the acceleration of the ipsilateral and contralateral cochlear bone was computed and compared to experimental results. Simulation results showed reasonable consistency between the mechanical point impedance and the experimental measurements when Young's modulus for the skull and the polyurethane was set to 7.3 GPa and 1 MPa, with loss factors of 0.01 and 0.1 at 1 kHz, respectively. Moreover, the acceleration in the medial-lateral direction showed the best correspondence with the published experimental data, whereas the acceleration in the inferior-superior direction showed the largest discrepancy. However, the results were reasonable considering that different geometries were used for the 3D FE skull and the skull used in the published experimental study. The dry skull model is a first step toward understanding the BC hearing mechanism in a human head, and the simulation results can be used to predict the vibration pattern of the bone surrounding the middle and inner ear during BC stimulation.
A three-dimensional finite-element (FE) model of a human head including the auditory periphery was developed to obtain a better understanding of bone-conducted (BC) hearing. The model was validated by comparison of cochlear and head responses in both air-conducted (AC) and BC hearing with experimental data. Specifically, the FE model provided cochlear responses, such as basilar membrane velocity and intracochlear pressure, corresponding to BC stimulation applied at the mastoid or at the conventional bone-anchored hearing aid (BAHA) position. This is a strength of the model, because it is difficult to obtain cochlear responses experimentally for BC stimulation applied at a specific position on the head surface. In addition, there have been few studies based on an FE model that can calculate the head and cochlear responses simultaneously from a BC stimulation. Moreover, in this study, the intracochlear sound pressure at multiple positions along the BM length was calculated and used to clarify the effect of stimulating force direction on the cochlear and promontory velocities in BC hearing. Also, the relationship between BC and AC stimulation and the basilar membrane velocity in the FE model was used to calculate the stimulation level at hearing thresholds, which had previously been investigated only by psychoacoustical methods.
Nowadays, several options are available to treat patients with conductive or mixed hearing loss. Whenever surgical intervention is not possible or contra-indicated, and amplification by a conventional hearing device (e.g., behind-the-ear device) is not feasible, then implantable hearing devices are an indispensable next option. Implantable bone-conduction devices and middle-ear implants have advantages but also limitations concerning complexity/invasiveness of the surgery, medical complications, and effectiveness. To counsel the patient, the clinician should have a good overview of the options with regard to safety and reliability as well as unequivocal technical performance data. The present consensus document is the outcome of an extensive iterative process including ENT specialists, audiologists, health-policy scientists, and representatives/technicians of the main companies in this field. This document should provide a first framework for procedures and technical characterization to enhance effective communication between these stakeholders, improving health care.
Listening to speech in noise depletes cognitive resources, affecting speech processing. The present study investigated how remaining resources, or cognitive spare capacity (CSC), can be deployed by young adults with normal hearing. We administered a test of CSC (CSCT; Mishra et al., 2013) along with a battery of established cognitive tests to 20 participants with normal hearing. In the CSCT, lists of two-digit numbers were presented with and without visual cues in quiet, as well as in steady-state and speech-like noise at a high intelligibility level. In low load conditions, two numbers were recalled according to instructions inducing executive processing (updating, inhibition), and in high load conditions the participants were additionally instructed to recall one extra number, which was always the first item in the list. In line with previous findings, results showed that CSC was sensitive to memory load and executive function but generally not related to working memory capacity (WMC). Furthermore, CSCT scores in quiet were lowered by visual cues, probably due to distraction. In steady-state noise, the presence of visual cues improved CSCT scores, probably by enabling better encoding. Contrary to our expectation, CSCT performance was disrupted more in steady-state than in speech-like noise, although only without visual cues, possibly because selective attention could be used to ignore the speech-like background and provide an enriched representation of target items in working memory, similar to that obtained in quiet. This interpretation is supported by a consistent association between CSCT scores and updating skills.
PURPOSE:
The purpose of the present study was to evaluate the new Cognitive Spare Capacity Test (CSCT), which measures aspects of working memory capacity for heard speech in the audiovisual and auditory-only modalities of presentation.
METHOD:
In Experiment 1, 20 young adults with normal hearing performed the CSCT and an independent battery of cognitive tests. In the CSCT, they listened to and recalled 2-digit numbers according to instructions inducing executive processing at 2 different memory loads. In Experiment 2, 10 participants performed a less executively demanding free recall task using the same stimuli.
RESULTS:
CSCT performance demonstrated an effect of memory load and was associated with independent measures of executive function and inference making but not with general working memory capacity. Audiovisual presentation was associated with lower CSCT scores but higher free recall performance scores.
CONCLUSIONS:
CSCT is an executively challenging test of the ability to process heard speech. It captures cognitive aspects of listening related to sentence comprehension that are quantitatively and qualitatively different from working memory capacity. Visual information provided in the audiovisual modality of presentation can hinder executive processing in working memory of nondegraded speech material.