liu.se — Search for publications in DiVA
1 - 28 of 28
  • 1.
    Amundin, Mats
    et al.
Linköping University, Department of Physics, Chemistry and Biology, Zoology. Linköping University, The Institute of Technology.
    Starkhammar, Josefin
    Evander, Mikael
    Almqvist, Monica
    Lindström, Kjell
    Persson, Hans W.
An echolocation visualization and interface system for dolphin research (2008). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 123, no 2, p. 1188-1194. Article in journal (Refereed)
    Abstract [en]

The present study describes the development and testing of a tool for dolphin research. This tool was able to visualize dolphin echolocation signals and also functioned as an acoustically operated "touch screen." The system consisted of a matrix of hydrophones attached to a semitransparent screen, which was lowered in front of an underwater acrylic panel in a dolphin pool. When a dolphin aimed its sonar beam at the screen, the hydrophones measured the received sound pressure levels. These hydrophone signals were then transferred to a computer, where they were translated into a video image corresponding to the dynamic sound pressure variations in the sonar beam and the location of the beam axis. The image was continuously projected back onto the hydrophone matrix screen, giving the dolphin immediate visual feedback on its sonar output. The system offers a whole new experimental methodology in dolphin research, and since it is software-based, many different kinds of scientific questions can be addressed. The results were promising and motivate further development of the system and studies of the sonar and cognitive abilities of dolphins. © 2008 Acoustical Society of America.

  • 2.
    Chang, You
    et al.
    Linköping University, Department of Clinical and Experimental Medicine, Division of Neuro and Inflammation Science. Linköping University, Faculty of Medicine and Health Sciences.
    Kim, Namkeun
    Incheon National University, South Korea.
    Stenfelt, Stefan
    Linköping University, Department of Clinical and Experimental Medicine, Division of Neuro and Inflammation Science. Linköping University, Faculty of Medicine and Health Sciences.
The development of a whole-head human finite-element model for simulation of the transmission of bone-conducted sound (2016). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 140, no 3, p. 1635-1651. Article in journal (Refereed)
    Abstract [en]

A whole-head finite element model for simulation of bone conducted (BC) sound transmission was developed. The geometry and structures were identified from cryosectional images of a female human head, and eight different components were included in the model: cerebrospinal fluid, brain, three layers of bone, soft tissue, eye, and cartilage. The skull bone was modeled as a sandwich structure with inner and outer layers of cortical bone and soft spongy bone (diploe) in between. The behavior of the finite element model was validated against experimental data on mechanical point impedance, vibration of the cochlear promontories, and transcranial BC sound transmission. The experimental data were obtained in both cadaver heads and live humans. The simulations showed multiple low-frequency resonances, where the first was caused by rotation of the head and the second was close in frequency to the average resonances obtained in cadaver heads. At higher frequencies, the simulated impedance was within one standard deviation of the average experimental data. The acceleration response at the cochlear promontory was overall lower in the simulations than in the experiments, but the general tendencies were similar. Even if the current model cannot predict results in a specific individual, it can be used for understanding the characteristics of BC sound transmission in general. (C) 2016 Acoustical Society of America.

  • 3.
Dreschler, W. A.
    et al.
AMC, Clinical and Experimental Audiology, Amsterdam, Netherlands.
    van Esch, T
AMC, Clinical and Experimental Audiology, Amsterdam, Netherlands.
    Larsby, Birgitta
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Hällgren, Mathias
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Lutman, Mark
    University of Southampton, Southampton, UK.
    Lyzenga, Johannes
    Vrije Universiteit Medical Center, Amsterdam, Netherlands.
    Vorman, M
    Hoerzentrum Oldenburg, Hoerzentrum Oldenburg, Oldenburg, Germany.
    Kollmeier, B
    Universität Oldenburg, Medizinische Physik, Oldenburg, Germany.
Characterizing the individual ear by the "Auditory Profile" (2008). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 123, no 5, article id 3714. Article in journal (Refereed)
    Abstract [en]

This paper describes a new approach to auditory diagnostics, which is one of the central themes of the EU project HEARCOM. For this purpose we defined a so-called "Auditory Profile" that can be assessed for each individual listener using a standardized battery of audiological tests that, in addition to the pure-tone audiogram, focus on loudness perception, frequency resolution, temporal acuity, speech perception, binaural functioning, listening effort, subjective hearing abilities, and cognition. For the sake of testing time, only summary tests are included from each of these areas, but the broad approach of characterizing auditory communication problems by means of standardized tests is expected to have added value over traditional testing in understanding the reasons for poor speech reception. The Auditory Profile may also be relevant in the field of auditory rehabilitation and in the design of acoustical environments. The results of an international 5-center study (in 4 countries and 4 languages) will be presented, and the relevance of a broad but well-standardized approach will be discussed.

  • 4.
    Eklund, Robert
    Telia Research AB, Spoken Language Processing, Haninge, Sweden.
A Comparative Study of Focus Realization in Three Swedish Dialects (1996). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 99, no 4, p. 2492-2492. Article in journal (Refereed)
    Abstract [en]

State-of-the-art speech recognition and speech translation systems do not currently make use of prosodic information. Utterances often have one or more constituents semantically focused by prosodic means, and detection of the focus/foci of an utterance is crucial for a correct interpretation of the speech signal. Thus, a semantic model of focus should be linked to a model describing the acoustic-phonetic correlates of the speech. However, variability exists at both the semantic and the prosodic ends. Semantically different kinds of foci might be associated with specific prosodic gestures. Also, a semantically specific type of focus might be realised in different ways in different varieties of a given language, since general intonational patterns vary between dialects. In this paper, focus realisation in three different dialects of Swedish is investigated. Subjects from Stockholm, Göteborg and Malmö recorded three sets of four sentences where focus was systematically put on four different constituents by having the subjects answer wh-questions. Since Swedish is a language with two tonal accents, words with these accents both in and out of focus were included. Dialectal as well as individual variation in focus realisation is described, with emphasis on invariant and optional phenomena.

  • 5.
    Eklund, Robert
    et al.
    Telia Research AB, System Res. Spoken Language Processing.
    Lyberg, Bertil
    Telia Research AB, System Res. Spoken Language Processing.
Inclusion of a prosodic module in spoken language translation (1995). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 98, no 5, p. 2894-2895, article id 2aSC27. Article in journal (Refereed)
    Abstract [en]

Current speech recognition systems mainly work on statistical bases and make no use of the information signalled by prosody, i.e., the segment durations and fundamental frequency contour of the speech signal. In more advanced applications of speech recognition, such as speech-to-speech translation systems, it is necessary to include the linguistic information conveyed by prosody. Earlier research has shown that prosody conveys information at the syntactic, semantic, and pragmatic levels. The degree of linguistic information conveyed by prosody varies between languages, from languages such as English, with a relatively low degree of prosodic disambiguation, via tone-accent languages such as Swedish, to pure tone languages. The inclusion of a prosodic module in speech translation systems is not only vital in order to link the source language to the target language, but could also be used to enhance speech recognition proper. Besides syntactic and semantic information, properties such as dialect, sociolect, sex, and attitude are signalled by prosody. Speech-to-speech translation systems that do not transfer this type of information will be of limited value for person-to-person communication. A tentative architecture for the inclusion of a prosodic module in a speech-to-speech translation system is presented.

  • 6.
    Hellgren, Johan
    et al.
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Neuroscience and Locomotion, Technical Audiology.
    Lunner, Thomas
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Neuroscience and Locomotion, Technical Audiology.
    Arlinger, Stig
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Neuroscience and Locomotion, Technical Audiology.
System identification of feedback in hearing aids (1999). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 105, p. 3481-3496. Article in journal (Refereed)
  • 7.
    Hellgren, Johan
    et al.
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Neuroscience and Locomotion, Technical Audiology.
    Lunner, Thomas
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Neuroscience and Locomotion, Technical Audiology.
    Arlinger, Stig
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Neuroscience and Locomotion, Technical Audiology.
Variations in the feedback of hearing aids (1999). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 106, p. 2821-2833. Article in journal (Refereed)
  • 8.
    Håkansson, Bo
    et al.
    Chalmers.
    Carlsson, Peder
    Chalmers.
    Brandt, Anders
    Chalmers.
    Stenfelt, Stefan
Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
Linearity of sound propagation through the human skull in vivo (1996). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 99, no 4, p. 2239-2243. Article in journal (Refereed)
  • 9.
    Kjems, Ulrik
    et al.
Oticon A/S, Smørum, Denmark.
Boldt, Jesper B
Oticon A/S, Smørum, Denmark.
Pedersen, Michael S
Oticon A/S, Smørum, Denmark.
    Lunner, Thomas
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences. Oticon Research Centre Eriksholm, Snekkersten, Denmark.
    Wang, DeLiang
    Ohio State University, Columbus, USA.
Role of mask pattern in intelligibility of ideal binary-masked noisy speech (2009). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 126, no 3, p. 1415-1426. Article in journal (Refereed)
    Abstract [en]

    Intelligibility of ideal binary masked noisy speech was measured on a group of normal hearing individuals across mixture signal to noise ratio (SNR) levels, masker types, and local criteria for forming the binary mask. The binary mask is computed from time-frequency decompositions of target and masker signals using two different schemes: an ideal binary mask computed by thresholding the local SNR within time-frequency units and a target binary mask computed by comparing the local target energy against the long-term average speech spectrum. By depicting intelligibility scores as a function of the difference between mixture SNR and local SNR threshold, alignment of the performance curves is obtained for a large range of mixture SNR levels. Large intelligibility benefits are obtained for both sparse and dense binary masks. When an ideal mask is dense with many ones, the effect of changing mixture SNR level while fixing the mask is significant, whereas for more sparse masks the effect is small or insignificant.
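For readers unfamiliar with the technique, the ideal binary mask described in this abstract — keep a time-frequency unit when its local SNR exceeds a local criterion, discard it otherwise — can be sketched as follows. This is a minimal illustration, not code from the paper; the function name, the `lc_db` parameter, and the toy magnitude arrays are assumptions made here for demonstration.

```python
import numpy as np

def ideal_binary_mask(target_tf, masker_tf, lc_db=0.0):
    """Compute an ideal binary mask from time-frequency (T-F) magnitude
    decompositions of the target and masker signals.

    A T-F unit is retained (mask = 1) when its local SNR in dB exceeds
    the local criterion lc_db; otherwise it is discarded (mask = 0).
    """
    eps = 1e-12  # avoid log of zero
    local_snr_db = 20.0 * np.log10((np.abs(target_tf) + eps) /
                                   (np.abs(masker_tf) + eps))
    return (local_snr_db > lc_db).astype(float)

# Toy example: a 2x2 grid of T-F magnitudes
target = np.array([[1.0, 0.1],
                   [0.5, 2.0]])
masker = np.array([[0.5, 1.0],
                   [0.5, 0.1]])
mask = ideal_binary_mask(target, masker, lc_db=0.0)
# Units where the target dominates the masker are kept
```

Raising `lc_db` relative to the mixture SNR makes the mask sparser (fewer ones), which is the axis along which the study above aligns its intelligibility curves.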

  • 10.
    Koelewijn, Thomas
    et al.
VU University Medical Center, Amsterdam, Netherlands.
    Zekveld, Adriana
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Festen, Joost M.
VU University Medical Center, Amsterdam, Netherlands.
    Kramer, Sophia E.
VU University Medical Center, Amsterdam, Netherlands.
The influence of informational masking on speech perception and pupil response in adults with hearing impairment (2014). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 135, no 3, p. 1596-1606. Article in journal (Refereed)
    Abstract [en]

A recent pupillometry study on adults with normal hearing indicates that the pupil response during speech perception (cognitive processing load) is strongly affected by the type of speech masker. The current study extends these results by recording the pupil response in 32 participants with hearing impairment (mean age 59 years) while they were listening to sentences masked by fluctuating noise or a single talker. Efforts were made to improve the audibility of all sounds by means of spectral shaping. Additionally, participants performed tests measuring verbal working memory capacity, inhibition of interfering information in working memory, and linguistic closure. The results showed worse speech reception thresholds for speech masked by single-talker speech compared to fluctuating noise. In line with previous results for participants with normal hearing, the pupil response was larger when listening to speech masked by a single talker compared to fluctuating noise. Regression analysis revealed that larger working memory capacity and better inhibition of interfering information were related to better speech reception thresholds, but these variables did not account for inter-individual differences in the pupil response. In conclusion, people with hearing impairment show more cognitive load during speech processing when there is interfering speech compared to fluctuating noise. (C) 2014 Acoustical Society of America.

  • 11.
    Lidestam, Björn
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Psychology. Linköping University, Faculty of Arts and Sciences.
    Moradi, Shahram
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Pettersson, Rasmus
    Linköping University, Department of Clinical and Experimental Medicine, Division of Neuroscience. Linköping University, Faculty of Health Sciences.
    Ricklefs, Theodor
    Linköping University, Department of Clinical and Experimental Medicine, Division of Neuroscience. Linköping University, Faculty of Health Sciences.
Audiovisual training is better than auditory-only training for auditory-only speech-in-noise identification (2014). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 136, no 2, p. EL142-EL147. Article in journal (Refereed)
    Abstract [en]

    The effects of audiovisual versus auditory training for speech-in-noise identification were examined in 60 young participants. The training conditions were audiovisual training, auditory-only training, and no training (n = 20 each). In the training groups, gated consonants and words were presented at 0 dB signal-to-noise ratio; stimuli were either audiovisual or auditory-only. The no-training group watched a movie clip without performing a speech identification task. Speech-in-noise identification was measured before and after the training (or control activity). Results showed that only audiovisual training improved speech-in-noise identification, demonstrating superiority over auditory-only training.

  • 12.
    Munhall, K. G.
    et al.
Queen’s University, Kingston, Ontario, Canada.
MacDonald, E. N.
Queen’s University, Kingston, Ontario, Canada.
Byrne, S. K.
Queen’s University, Kingston, Ontario, Canada.
Johnsrude, Ingrid
Queen’s University, Kingston, Ontario, Canada.
Talkers alter vowel production in response to real-time formant perturbation even when instructed not to compensate (2009). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 125, no 1, p. 384-390. Article in journal (Refereed)
    Abstract [en]

    Talkers show sensitivity to a range of perturbations of auditory feedback (e.g., manipulation of vocal amplitude, fundamental frequency and formant frequency). Here, 50 subjects spoke a monosyllable (“head”), and the formants in their speech were shifted in real time using a custom signal processing system that provided feedback over headphones. First and second formants were altered so that the auditory feedback matched subjects’ production of “had.” Three different instructions were tested: (1) control, in which subjects were naïve about the feedback manipulation, (2) ignore headphones, in which subjects were told that their voice might sound different and to ignore what they heard in the headphones, and (3) avoid compensation, in which subjects were informed in detail about the manipulation and were told not to compensate. Despite explicit instruction to ignore the feedback changes, subjects produced a robust compensation in all conditions. There were no differences in the magnitudes of the first or second formant changes between groups. In general, subjects altered their vowel formant values in a direction opposite to the perturbation, as if to cancel its effects. These results suggest that compensation in the face of formant perturbation is relatively automatic, and the response is not easily modified by conscious strategy.

  • 14.
    Neher, Tobias
    et al.
    Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark.
    Lunner, Thomas
    Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark.
    Hopkins, Kathryn
    University of Manchester, United Kingdom .
    Moore, Brian C. J.
    University of Cambridge, United Kingdom .
Binaural temporal fine structure sensitivity, cognitive function, and spatial speech recognition of hearing-impaired listeners (L) (2012). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 131, no 4, p. 2561-2564. Article in journal (Refereed)
    Abstract [en]

    The relationships between spatial speech recognition (SSR; the ability to understand speech in complex spatial environments), binaural temporal fine structure (TFS) sensitivity, and three cognitive tasks were assessed for 17 hearing-impaired listeners. Correlations were observed between SSR, TFS sensitivity, and two of the three cognitive tasks, which became non-significant when age effects were controlled for, suggesting that reduced TFS sensitivity and certain cognitive deficits may share a common age-related cause. The third cognitive measure was also significantly correlated with SSR, but not with TFS sensitivity or age, suggesting an independent non-age-related cause.

  • 15.
    Reinfeldt, Sabine
    et al.
    Chalmers University of Technology.
    Stenfelt, Stefan
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Good, Tobias
    Chalmers University of Technology.
    Håkansson, Bo
    Chalmers University of Technology.
Examination of bone-conducted transmission from sound field excitation measured by thresholds, ear-canal sound pressure, and skull vibrations (2007). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 121, no 3, p. 1576-1587. Article in journal (Refereed)
    Abstract [en]

Bone conduction (BC) relative to air conduction (AC) sound field sensitivity is here defined as the perceived difference between a sound field transmitted to the ear by BC and by AC. Previous investigations of BC-AC sound field sensitivity have used different estimation methods and report estimates that vary by up to 20 dB at some frequencies. In this study, the BC-AC sound field sensitivity was investigated by hearing threshold shifts, ear canal sound pressure measurements, and skull bone vibrations measured with an accelerometer. The vibration measurements produced valid estimates at 400 Hz and below, and the threshold shifts produced valid estimates at 500 Hz and above, while the ear canal sound pressure measurements were found to be erroneous for estimating the BC-AC sound field sensitivity. By combining the present results with others, the BC-AC sound field sensitivity is proposed to be frequency independent at 50 to 60 dB at frequencies up to 900 Hz. At higher frequencies, it is frequency dependent, with minima of 40 to 50 dB at 2 and 8 kHz and a maximum of 50 to 60 dB at 4 kHz. The BC-AC sound field sensitivity is the theoretical limit of the maximum attenuation achievable with ordinary hearing protection devices. © 2007 Acoustical Society of America.

  • 16.
    Saremi, Amin
    et al.
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Stenfelt, Stefan
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
Effect of metabolic presbyacusis on cochlear responses: A simulation approach using a physiologically-based model (2013). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 134, no 4, p. 2833-2851. Article in journal (Refereed)
    Abstract [en]

    In the presented model, electrical, acoustical, and mechanical elements of the cochlea are explicitly integrated into a signal transmission line where these elements convey physiological interpretations of the human cochlear structures. As a result, this physiologically-motivated model enables simulation of specific cochlear lesions such as presbyacusis. The hypothesis is that high-frequency hearing loss in older adults may be due to metabolic presbyacusis whereby age-related cellular/chemical degenerations in the lateral wall of the cochlea cause a reduction in the endocochlear potential. The simulations quantitatively confirm this hypothesis and emphasize that even if the outer and inner hair cells are totally active and intact, metabolic presbyacusis alone can significantly deteriorate the cochlear functionality. Specifically, in the model, as the endocochlear potential decreases, the transduction mechanism produces less receptor current such that there is a reduction in the battery of the somatic motor. This leads to a drastic decrease in cochlear amplification and frequency sensitivity, as well as changes in position-frequency map (tuning pattern) of the cochlea. In addition, the simulations show that the age-related reduction of the endocochlear potential significantly inhibits the firing rate of the auditory nerve which might contribute to the decline of temporal resolution in the aging auditory system.

  • 17.
    Starkhammar, Josefin
    et al.
    Lund University, Lund, Sweden.
    Amundin, Mats
    Linköping University, Department of Physics, Chemistry and Biology, Zoology. Linköping University, The Institute of Technology.
    Nilsson, Johan
    Lund University, Lund, Sweden.
    Jansson, Tomas
    Lund University, Lund, Sweden.
    Kuczaj, Stan A
    University of South Mississippi, Hattiesburg, MS, USA.
    Almqvist, Monica
    Lund University, Lund, Sweden.
    Persson, Hans W
    Lund University, Lund, Sweden.
Editorial: 47-channel burst-mode recording hydrophone system enabling measurements of the dynamic echolocation behavior of free-swimming dolphins (2009). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 126, no 3, p. 959-962. Article in journal (Other academic)
    Abstract [en]

Detailed echolocation behavior studies on free-swimming dolphins require a measurement system that incorporates multiple hydrophones (often > 16). However, the high data flow rate of previous systems has limited their usefulness, since only minute-long recordings have been manageable. To address this problem, this report describes a 47-channel burst-mode recording hydrophone system that enables highly resolved full-beamwidth measurements on multiple free-swimming dolphins during prolonged recording periods. The system facilitates a wide range of biosonar studies, since it eliminates the need to restrict the movement of animals in order to study the fine details of their sonar beams.

  • 18.
    Stenfelt, Stefan
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Neuroscience and Locomotion, Technical Audiology.
Middle ear ossicles motion at hearing thresholds with air conduction and bone conduction stimulation (2006). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 119, no 5, p. 2848-2858. Article in journal (Refereed)
    Abstract [en]

Hearing threshold data with bone conduction and air conduction stimulation are combined with physiological and mechanical measurements of middle ear ossicle vibration to compute the vibration level of the ossicles at threshold stimulation. By comparing the displacements of the stapes footplate with the two stimulation modalities, and assuming the vibration of the stapes footplate to be the input to the cochlea when stimulation is by air conduction, the importance of middle ear ossicle inertia with bone conduction stimulation is evaluated. Given the limitations of the analysis, the results indicate that the inertia of the middle ear is not an important contribution to the perception of BC sound for frequencies below 1.5 kHz, but it seems to contribute to the perception of bone conducted sound between 1.5 and 3.5 kHz. At frequencies above 4 kHz, the analysis failed, since the input to the cochlea is probably not through the oval window with bone conduction stimulation. Comparison with basilar membrane vibration data verified the calculations for frequencies between 0.8 and 3.5 kHz. It was also found that the fluid flow at the round window, rather than at the oval window, reflects the stimulation of the basilar membrane with bone conduction stimulation. © 2006 Acoustical Society of America.

  • 19.
    Stenfelt, Stefan
    et al.
Chalmers.
    Goode, RL
    Stanford University.
Transmission properties of bone conducted sound: Measurements in cadaver heads (2005). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 118, no 4, p. 2373-2391. Article in journal (Refereed)
    Abstract [en]

In the past, only a few investigations have measured vibration at the cochlea with bone conduction stimulation, and dry skulls were used in those investigations. In this paper, the transmission properties of bone conducted sound in the human head are presented, measured as the three-dimensional vibration at the cochlear promontory in six intact cadaver heads. The stimulation was provided at 27 positions on the skull surface and two positions close to the cochlea; the mechanical point impedance was measured at all positions. Cochlear promontory vibration levels in the three perpendicular directions were normally within 5 dB. With the stimulation applied on the ipsilateral side, the response decreased, and the accumulated phase increased, with the distance between the cochlea and the excitation position. No significant changes were obtained when the excitations were on the contralateral side. In terms of vibration level, the best stimulation position is on the mastoid close to the cochlea; the worst is at the midline of the skull. The transcranial transmission was close to 0 dB for frequencies up to 700 Hz; above that, it decreased at 12 dB/decade. Wave transmission at the skull base was found to be nondispersive at frequencies above 2 kHz, whereas it varied with frequency at the cranial vault. (c) 2005 Acoustical Society of America.

  • 20.
    Stenfelt, Stefan
    et al.
    Chalmers.
    Hato, N
    Stanford University.
    Goode, RL
    Stanford University.
    Fluid volume displacement at the oval and round windows with air and bone conduction stimulation (2004). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 115, no 2, p. 797-812. Article in journal (Refereed)
    Abstract [en]

    The fluids in the cochlea are normally considered incompressible, so the fluid volume displacements of the oval window (OW) and the round window (RW) should be equal and of opposite phase. However, other channels, such as the cochlear and vestibular aqueducts, may affect the fluid flow. To test whether the OW and RW fluid flows are equal and of opposite phase, the volume displacement was assessed by multiple-point measurements at the windows with a laser Doppler vibrometer. This was done during air conduction (AC) stimulation in seven fresh human temporal bones, and with bone conduction (BC) stimulation in eight temporal bones and one human cadaver head. With AC stimulation, the average volume displacements of the two windows are within 3 dB, and the phase difference is close to 180° for the frequency range 0.1 to 10 kHz. With BC stimulation, the average volume displacement difference between the two windows is greater: below 2 kHz, the volume displacement at the RW is 5 to 15 dB greater than at the OW, and above 2 kHz more fluid is displaced at the OW. With BC stimulation, lesions at the OW caused only minor changes of the fluid flow at the RW.

  • 21.
    Stenfelt, Stefan
    et al.
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Hato, Naohito
    Stanford University.
    Goode, Richard
    Stanford University.
    Factors contributing to bone conduction: The middle ear (2002). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 111, no 2, p. 947-959. Article in journal (Refereed)
  • 22.
    Stenfelt, Stefan
    et al.
    Chalmers University Technology.
    Håkansson, B
    Chalmers University Technology.
    Tjellström, A
    Sahlgrenska University Hospital.
    Vibration characteristics of bone conducted sound in vitro (2000). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 107, no 1, p. 422-431. Article in journal (Refereed)
    Abstract [en]

    A dry skull with added damping material was used to investigate the vibratory pattern of bone conducted sound. Three orthogonal vibration responses of the cochleae were measured, by means of miniature accelerometers, in the frequency range 0.1-10 kHz. The exciter was attached to the temporal, parietal, and frontal bones, one at a time. In the transmission response to the ipsilateral cochlea, a profound low-frequency antiresonance (attenuation) was found, verified psychoacoustically, and shown to yield a distinct lateralization effect. It was also shown that, for the ipsilateral side, the direction of excitation coincides with that of maximum response. At the contralateral cochlea, no such dominating response direction was found for frequencies above the first skull resonance. An overall higher response level was achieved at the ipsilateral cochlea, for the total energy transmission in general and for the direction of excitation specifically, when the transducer was attached to the excitation point closest to the cochlea. The transcranial attenuation was found to be frequency dependent, with values from -5 to 10 dB for the energy transmission and -30 to 40 dB for measurements in a single direction, with a tendency toward higher attenuation at the higher frequencies.

  • 23.
    Stenfelt, Stefan
    et al.
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Wild, Tim
    Stanford University.
    Hato, Naohito
    Stanford University.
    Goode, Richard
    Stanford University.
    Factors contributing to bone conduction: The outer ear (2003). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 113, no 2, p. 902-913. Article in journal (Refereed)
  • 24.
    Stenfelt, Stefan
    et al.
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Zeitooni, Mehrnaz
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Binaural hearing ability with mastoid applied bilateral bone conduction stimulation in normal hearing subjects (2013). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 134, no 1, p. 481-493. Article in journal (Refereed)
    Abstract [en]

    The ability to use binaural cues when stimulation was provided by bilaterally applied bone conduction (BC) transducers was investigated in 20 normal hearing participants. The results with BC stimulation were compared with normal air conduction (AC) stimulation through earphones. Binaural hearing ability was tested by spatial release from masking, binaural intelligibility level difference (BILD), binaural masking level difference (BMLD) using chirp stimulation, and a test of the precedence effect. In all tests, the participants showed a benefit of bilateral BC stimulation, indicating use of binaural cues. In the speech-based tests, the binaural benefit for BC stimulation was approximately half that with AC stimulation. For the BC BMLD test with chirp stimulation, there were indications of superposition of the ipsilateral and contralateral pathways at the cochlear level affecting the results. The precedence effect test showed significantly worse results for BC stimulation than for AC stimulation with low-frequency stimulation, while the results were close for high-frequency stimulation; broad-band stimulation gave results that were slightly worse than the high-frequency results.

  • 25.
    Wang, DeLiang
    et al.
    Ohio State University, USA.
    Kjems, Ulrik
    Oticon A/S, Smørum, Denmark.
    Pedersen, Michael S
    Oticon A/S, Smørum, Denmark.
    Boldt, Jesper B
    Oticon A/S, Smørum, Denmark.
    Lunner, Thomas
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences. Oticon Research Centre Eriksholm, Snekkersten, Denmark.
    Speech intelligibility in background noise with ideal binary time-frequency masking (2009). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 125, no 4, p. 2336-2347. Article in journal (Refereed)
    Abstract [en]

    Ideal binary time-frequency masking is a signal separation technique that retains mixture energy in time-frequency units where local signal-to-noise ratio exceeds a certain threshold and rejects mixture energy in other time-frequency units. Two experiments were designed to assess the effects of ideal binary masking on speech intelligibility of both normal-hearing (NH) and hearing-impaired (HI) listeners in different kinds of background interference. The results from Experiment 1 demonstrate that ideal binary masking leads to substantial reductions in speech-reception threshold for both NH and HI listeners, and the reduction is greater in a cafeteria background than in a speech-shaped noise. Furthermore, listeners with hearing loss benefit more than listeners with normal hearing, particularly for cafeteria noise, and ideal masking nearly equalizes the speech intelligibility performances of NH and HI listeners in noisy backgrounds. The results from Experiment 2 suggest that ideal binary masking in the low-frequency range yields larger intelligibility improvements than in the high-frequency range, especially for listeners with hearing loss. The findings from the two experiments have major implications for understanding speech perception in noise, computational auditory scene analysis, speech enhancement, and hearing aid design.
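    The masking rule described above — keep a time-frequency unit if its local signal-to-noise ratio exceeds a threshold, otherwise discard it — can be sketched in a few lines. This is a minimal illustration, assuming magnitude spectrograms of the premixed speech and noise are available (the "ideal" oracle setting the abstract describes); the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def ideal_binary_mask(speech_tf, noise_tf, lc_db=0.0):
    """Ideal binary mask: 1 for T-F units whose local SNR (in dB)
    exceeds the local criterion lc_db, 0 elsewhere.

    speech_tf, noise_tf: magnitude spectrograms (freq x time) of the
    premixed speech and noise signals.
    """
    eps = 1e-12  # avoid log of zero
    local_snr_db = 20.0 * np.log10((speech_tf + eps) / (noise_tf + eps))
    return (local_snr_db > lc_db).astype(float)

# Toy example: 3 frequency channels x 2 time frames
speech = np.array([[1.0, 0.1], [0.5, 0.5], [0.01, 2.0]])
noise = np.array([[0.1, 1.0], [0.5, 0.5], [1.0, 0.2]])
mask = ideal_binary_mask(speech, noise)   # binary gains per T-F unit
masked_mixture = mask * (speech + noise)  # mixture energy retained only where SNR > 0 dB
```

    In a real system the spectrograms would come from a short-time Fourier transform or a gammatone filterbank, and the mask would be applied before resynthesis; the oracle mask is a research tool for bounding separation performance, not a deployable algorithm.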

  • 26.
    Wang, DeLiang
    et al.
    Department of Computer Science & Engineering, and Center for Cognitive Science, The Ohio State University, Columbus, USA.
    Kjems, Ulrik
    Oticon A/S, Smørum, Denmark .
    Pedersen, Michael S
    Oticon A/S, Smørum, Denmark.
    Boldt, Jesper B
    Oticon A/S, Smørum, Denmark.
    Lunner, Thomas
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences. Oticon Research Centre Eriksholm, Snekkersten, Denmark and Department of Clinical and Experimental Medicine, and Technical Audiology, Linköping University, S-58183 Linköping, Sweden.
    Speech perception of noise with binary gains (2008). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 124, no 4, p. 2303-2307. Article in journal (Refereed)
    Abstract [en]

    For a given mixture of speech and noise, an ideal binary time-frequency mask is constructed by comparing speech energy and noise energy within local time-frequency units. It is observed that listeners achieve nearly perfect speech recognition from gated noise with binary gains prescribed by the ideal binary mask. Only 16 filter channels and a frame rate of 100 Hz are sufficient for high intelligibility. The results show that, despite a dramatic reduction of speech information, a pattern of binary gains provides an adequate basis for speech perception.

  • 27.
    Wong, Lena L. N.
    et al.
    University of Hong Kong.
    Ng, Hoi Ning Elaine
    The University of Hong Kong.
    Soli, Sigfrid D.
    House Ear Institute, Los Angeles, California, USA.
    Characterization of speech understanding in various types of noise (2012). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 132, no 4, p. 2642-2651. Article in journal (Refereed)
    Abstract [en]

    This study examined (1) the effects of noise on speech understanding and (2) whether performance in real-life noises could be predicted based on performance in steady-state speech-spectrum-shaped noise. The noise conditions included a steady-state speech-spectrum-shaped noise and six types of real-life noise. Thirty normal-hearing adults were tested using sentence materials from the Cantonese Hearing In Noise Test (CHINT). To achieve the first aim, the performance–intensity function slopes in these noise conditions were estimated and compared. Variations in performance–intensity function slopes were attributed to differences in the amount of amplitude fluctuations and the presence of competing background speech. How well the data obtained in real-life noises fit the performance–intensity functions obtained in steady-state speech-spectrum-shaped noises was examined for the second aim of the study. Four out of six types of noise yielded performance–intensity function slopes similar to that in steady-state speech-spectrum-shaped noise. After accounting for individual differences in sentence reception threshold (SRT) and the offset between the signal-to-noise ratio for 50% intelligibility across different types of noise, performance in steady-state speech-spectrum-shaped noise was found to predict well the performance in most of the real-life noise conditions.
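    The performance–intensity function discussed above is commonly modeled as a logistic psychometric function of signal-to-noise ratio, parameterized by the SRT (the SNR giving 50% intelligibility) and the slope at that midpoint. The sketch below assumes that standard form; the abstract does not specify the fitting procedure used, and the parameter values are illustrative.

```python
import math

def pi_function(snr_db, srt_db, slope):
    """Logistic performance-intensity function.

    snr_db: signal-to-noise ratio in dB.
    srt_db: sentence reception threshold, the SNR at 50% intelligibility.
    slope: slope of the function at the midpoint, in proportion per dB
           (e.g., 0.10 means 10 percentage points per dB near the SRT).
    Returns predicted proportion of sentences understood (0 to 1).
    """
    # The factor 4*slope makes the derivative at snr_db == srt_db equal
    # to `slope`, since d/dx [1/(1+e^-kx)] at x=0 is k/4.
    return 1.0 / (1.0 + math.exp(-4.0 * slope * (snr_db - srt_db)))

# At the SRT, intelligibility is 50% by construction
p_mid = pi_function(snr_db=-2.0, srt_db=-2.0, slope=0.10)  # -> 0.5
```

    Under this model, a shallower slope (as reported for noises with amplitude fluctuations or competing speech) means a given dB change in SNR shifts intelligibility less, which is why the slope comparison across noise types is informative.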

  • 28.
    Zekveld, Adriana A
    et al.
    Department of ENT/Audiology and the EMGO+ Institute for Health and Care Research, VU University Medical Center, Amsterdam, The Netherlands.
    Rudner, Mary
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid S
    Department of ENT/Audiology and the EMGO+ Institute for Health and Care Research, VU University Medical Center, Amsterdam, The Netherlands.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    The effects of working memory capacity and semantic cues on the intelligibility of speech in noise (2013). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 134, no 3, p. 2225-2234. Article in journal (Refereed)
    Abstract [en]

    This study examined how semantically related information facilitates the intelligibility of spoken sentences in the presence of masking sound, and how this facilitation is influenced by masker type and by individual differences in cognitive functioning. Dutch sentences were masked by stationary noise, fluctuating noise, or an interfering talker. Each sentence was preceded by a text cue; cues were either three words that were semantically related to the sentence or three unpronounceable nonwords. Speech reception thresholds were adaptively measured. Additional measures included working memory capacity (reading span and size comparison span), linguistic closure ability (text reception threshold), and delayed sentence recognition. Word cues facilitated speech perception in noise similarly for all masker types. Cue benefit was related to reading span performance when the masker was interfering speech, but not when other maskers were used, and it did not correlate with text reception threshold or size comparison span. Better reading span performance was furthermore associated with enhanced delayed recognition of sentences preceded by word relative to nonword cues, across masker types. The results suggest that working memory capacity is associated with release from informational masking by semantically related information, and additionally with the encoding, storage, or retrieval of speech content in memory.
