Publications (10 of 181)
Ekberg, M., Stavrinos, G., Andin, J., Stenfelt, S. & Dahlström, Ö. (2023). Acoustic Features Distinguishing Emotions in Swedish Speech. Journal of Voice, Article ID S0892-1997(23)00103-0.
2023 (English). In: Journal of Voice, ISSN 0892-1997, E-ISSN 1873-4588, article id S0892-1997(23)00103-0. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

Few studies have examined which acoustic features of speech can be used to distinguish between different emotions, and how combinations of acoustic parameters contribute to identification of emotions. The aim of the present study was to investigate which acoustic parameters in Swedish speech are most important for differentiation between, and identification of, the emotions anger, fear, happiness, sadness, and surprise in Swedish sentences. One-way ANOVAs were used to compare acoustic parameters between the emotions, and both simple and multiple logistic regression models were used to examine the contribution of different acoustic parameters to differentiation between emotions. Results showed differences between emotions for several acoustic parameters in Swedish speech: surprise was the most distinct emotion, with significant differences compared to the other emotions across a range of acoustic parameters, while anger and happiness did not differ from each other on any parameter. The logistic regression models showed that fear was the best-predicted emotion while happiness was the most difficult to predict. Frequency- and spectral-balance-related parameters were best at predicting fear. Amplitude- and temporal-related parameters were most important for surprise, while a combination of frequency-, amplitude-, and spectral-balance-related parameters was important for sadness. Assuming that there are similarities between acoustic models and how listeners infer emotions in speech, the results suggest that individuals with hearing loss, who have reduced frequency detection abilities, may have greater difficulties than normal-hearing individuals in identifying fear in Swedish speech. Since happiness and fear relied primarily on amplitude- and spectral-balance-related parameters, their detection is probably facilitated more by hearing aid use.
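The statistical approach outlined in the abstract (one-way ANOVAs comparing acoustic parameters across emotions, plus logistic regression predicting an emotion from acoustic parameters) can be illustrated with a minimal Python sketch. This is not the authors' code: the toy data and the parameter names (meanF0, rmsAmp, hnr) are hypothetical placeholders.

```python
# Minimal sketch (hypothetical data and variable names) of an ANOVA-plus-logistic-regression
# analysis of acoustic parameters across emotion categories.
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
emotions = ["anger", "fear", "happiness", "sadness", "surprise"]

# Toy data: one row per utterance with a few acoustic parameters.
df = pd.DataFrame({
    "emotion": rng.choice(emotions, size=200),
    "meanF0": rng.normal(200, 30, size=200),    # frequency-related parameter (Hz)
    "rmsAmp": rng.normal(0.1, 0.02, size=200),  # amplitude-related parameter
    "hnr": rng.normal(15, 4, size=200),         # spectral-balance-related parameter (dB)
})

# One-way ANOVA: does meanF0 differ between the five emotions?
groups = [df.loc[df.emotion == e, "meanF0"] for e in emotions]
F, p = f_oneway(*groups)
print(f"meanF0 across emotions: F = {F:.2f}, p = {p:.3f}")

# Multiple logistic regression: fear vs. the other emotions from all three parameters.
X = df[["meanF0", "rmsAmp", "hnr"]].to_numpy()
y = (df["emotion"] == "fear").astype(int)
model = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficients (fear vs. rest):", dict(zip(["meanF0", "rmsAmp", "hnr"], model.coef_[0])))
```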

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Acoustic features, Emotions, Speech
National Category
Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:liu:diva-193879 (URN), 10.1016/j.jvoice.2023.03.010 (DOI), 37045739 (PubMedID), 2-s2.0-85152131048 (Scopus ID)
Available from: 2023-05-17 Created: 2023-05-17 Last updated: 2023-12-28. Bibliographically approved
Roeoesli, C. & Stenfelt, S. (2022). Editorial: Special issue on acoustic implant technology. Hearing Research, 421, Article ID 108538.
2022 (English). In: Hearing Research, ISSN 0378-5955, E-ISSN 1878-5891, Vol. 421, article id 108538. Article in journal, Editorial material (Other academic). Published.
Place, publisher, year, edition, pages
Amsterdam, Netherlands: Elsevier, 2022
National Category
Otorhinolaryngology
Identifiers
urn:nbn:se:liu:diva-186482 (URN), 10.1016/j.heares.2022.108538 (DOI), 000812157500009 (), 35654632 (PubMedID), 2-s2.0-85131439427 (Scopus ID)
Available from: 2022-06-28 Created: 2022-06-28 Last updated: 2022-07-07. Bibliographically approved
Ekberg, M., Andin, J., Stenfelt, S. & Dahlström, Ö. (2022). Effects of mild-to-moderate sensorineural hearing loss and signal amplification on vocal emotion recognition in middle-aged–older individuals. PLOS ONE, 17(1), Article ID e0261354.
2022 (English). In: PLOS ONE, E-ISSN 1932-6203, Vol. 17, no 1, article id e0261354. Article in journal (Refereed). Published.
Abstract [en]

Previous research has shown deficits in vocal emotion recognition in sub-populations of individuals with hearing loss, making this a high-priority research topic. However, previous research has only examined vocal emotion recognition using verbal material, in which emotions are expressed through emotional prosody. There is evidence that older individuals with hearing loss suffer from deficits in general prosody recognition, not specific to emotional prosody. No study has examined the recognition of non-verbal vocalizations, which constitute another important source for the vocal communication of emotions. It might be the case that individuals with hearing loss have specific difficulties in recognizing emotions expressed through prosody in speech, but not non-verbal vocalizations. We aim to examine whether vocal emotion recognition difficulties in middle-aged to older individuals with sensorineural mild-to-moderate hearing loss are better explained by deficits in vocal emotion recognition specifically, or by deficits in prosody recognition generally, by including both sentences and non-verbal expressions. Furthermore, some of the studies which have concluded that individuals with mild-to-moderate hearing loss have deficits in vocal emotion recognition ability have also found that the use of hearing aids does not improve recognition accuracy in this group. We aim to examine the effects of linear amplification and audibility on the recognition of different emotions expressed both verbally and non-verbally. Besides examining accuracy for different emotions, we will also look at patterns of confusion (which specific emotions are mistaken for other specific emotions, and at which rates) during both amplified and non-amplified listening, and we will analyze all material acoustically and relate the acoustic content to performance. Together these analyses will provide clues to the effects of amplification on the perception of different emotions. For these purposes, a total of 70 middle-aged to older individuals, half with mild-to-moderate hearing loss and half with normal hearing, will perform a computerized forced-choice vocal emotion recognition task with and without amplification.
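As a rough illustration of the planned confusion analysis (not the study's actual analysis code), the sketch below tabulates per-emotion accuracy and a confusion matrix separately for amplified and non-amplified listening. The trial-level data are invented.

```python
# Minimal sketch (hypothetical trial data) of accuracy and confusion tabulation for a
# forced-choice vocal emotion recognition task, split by listening condition.
import pandas as pd

# Toy trial-level data: intended emotion, participant's response, listening condition.
trials = pd.DataFrame({
    "intended": ["anger", "anger", "fear", "fear", "happiness", "sadness"],
    "response": ["anger", "happiness", "fear", "sadness", "happiness", "sadness"],
    "amplified": [True, True, False, False, True, False],
})

for amplified, sub in trials.groupby("amplified"):
    # Confusion matrix: rows = intended emotion, columns = chosen response.
    confusion = pd.crosstab(sub["intended"], sub["response"])
    # Per-emotion accuracy: proportion of trials where the response matched the intended emotion.
    accuracy = (sub["intended"] == sub["response"]).groupby(sub["intended"]).mean()
    print(f"amplified = {amplified}")
    print(confusion, "\n", accuracy, "\n")
```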

Place, publisher, year, edition, pages
Public Library of Science (PLoS), 2022
National Category
Applied Psychology
Identifiers
urn:nbn:se:liu:diva-188114 (URN), 10.1371/journal.pone.0261354 (DOI), 34995305 (PubMedID), 2-s2.0-85122938516 (Scopus ID)
Available from: 2022-09-05 Created: 2022-09-05 Last updated: 2023-12-28. Bibliographically approved
Stenfelt, S., Lunner, T., Ng, E., Lidestam, B., Zekveld, A., Sörqvist, P., . . . Rönnberg, J. (2016). Auditory, signal processing, and cognitive factors influencing speech perception in persons with hearing loss fitted with hearing aids – the N200 study. Paper presented at IHCON2016, International Hearing Aid Research Conference, Tahoe City, California, USA, August 10–14, 2016, Article ID B46.
2016 (English). Conference paper, Poster (with or without abstract) (Other academic).
Abstract [en]

Objective: The aim of the current study was to assess aided speech-in-noise outcomes and relate those measures to auditory sensitivity and processing, different types of cognitive processing abilities, and signal processing in hearing aids.

Material and method: Participants were 200 hearing-aid wearers, with a mean age of 60.8 years, 43% females, and average hearing thresholds in the better ear of 37.4 dB HL. Tests of auditory function were hearing thresholds, DPOAEs, tests of fine structure processing, IHC dead regions, spectro-temporal modulation, and speech recognition in quiet (PB words). Tests of cognitive processing function were tests of phonological skills, working memory, executive functions and inference-making abilities, and general cognitive tests (e.g., tests of cognitive decline and IQ). The outcome test variables were the Hagerman sentences at 50 and 80% speech recognition levels, using two different noises (stationary speech-weighted noise and 4-talker babble) and three types of signal processing (linear gain, fast acting compression, and linear gain plus a non-ideal binary mask). Another sentence test included typical and atypical sentences with contextual cues that were tested both audio-visually and in an auditory-only mode. Moreover, the HINT and SSQ were administered.

Analysis: Factor analyses were performed separately for the auditory, cognitive, and outcome tests.

Results: The auditory tests resulted in two factors labeled SENSITIVITY and TEMPORAL FINE STRUCTURE, the cognitive tests in one factor (COGNITION), and the outcome tests in the two factors termed NO CONTEXT and CONTEXT, which relate to the level of context in the different outcome tests. When age was partialled out, COGNITION was moderately correlated with the TEMPORAL FINE STRUCTURE and NO CONTEXT factors but only weakly correlated with the CONTEXT factor. SENSITIVITY correlated weakly with TEMPORAL FINE STRUCTURE and CONTEXT, and moderately with NO CONTEXT, while TEMPORAL FINE STRUCTURE showed a weak correlation with CONTEXT and a moderate correlation with NO CONTEXT. CONTEXT and NO CONTEXT had a moderate correlation. Moreover, the overall results of the Hagerman sentences showed 0.9 dB worse SNR with fast acting compression compared with linear gain and 5.5 dB better SNR with linear gain and noise reduction compared with only linear gain.

Conclusions: For hearing aid wearers, the ability to recognize speech in noise is associated with both sensory and cognitive processing abilities when the speech materials have low internal context. These associations are less prominent when the speech material has contextual cues.
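A minimal sketch, with made-up factor scores, of the partial-correlation step referred to in the Results above: correlating two factors after partialling out age, implemented here as correlating the residuals of a linear regression of each factor on age. Variable names and values are hypothetical; only the sample size and mean age come from the abstract.

```python
# Minimal sketch (hypothetical data) of a partial correlation with age partialled out.
import numpy as np

rng = np.random.default_rng(1)
n = 200                                   # the study had ~200 hearing-aid wearers
age = rng.normal(60.8, 8.0, size=n)       # toy ages, centred near the reported mean
cognition = -0.03 * age + rng.normal(size=n)                       # toy COGNITION factor scores
no_context = -0.04 * age + 0.3 * cognition + rng.normal(size=n)    # toy NO CONTEXT factor scores

def residualize(y, x):
    """Residuals of an ordinary least-squares regression of y on x (with intercept)."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_partial = np.corrcoef(residualize(cognition, age), residualize(no_context, age))[0, 1]
print(f"partial correlation of COGNITION and NO CONTEXT (age partialled out): {r_partial:.2f}")
```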

National Category
Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:liu:diva-159504 (URN)
Conference
IHCON2016, International Hearing Aid Research Conference, Tahoe City, California, USA, August 10–14, 2016
Available from: 2019-08-09 Created: 2019-08-09 Last updated: 2021-12-28. Bibliographically approved
Dobrev, I., Stenfelt, S., Roosli, C., Bolt, L., Pfiffner, F., Gerig, R., . . . Hoon Sim, J. (2016). Influence of stimulation position on the sensitivity for bone conduction hearing aids without skin penetration. International Journal of Audiology, 55(8), 439-446
2016 (English). In: International Journal of Audiology, ISSN 1499-2027, E-ISSN 1708-8186, Vol. 55, no 8, p. 439-446. Article in journal (Refereed). Published.
Abstract [en]

Objective: This study explores the influence of stimulation position on bone conduction (BC) hearing sensitivity with a BC transducer attached using a headband. Design: (1) The cochlear promontory motion was measured in cadaver heads using laser Doppler vibrometry while seven different positions around the pinna were stimulated using a bone anchored hearing aid transducer attached using a headband. (2) The BC hearing thresholds were measured in human subjects, with the bone vibrator Radioear B71 attached to the same seven stimulation positions. Study sample: Three cadaver heads and twenty participants. Results: Stimulation on a position superior-anterior to the pinna generated the largest promontory motion and the lowest BC thresholds. Stimulations on the positions superior to the pinna, the mastoid, and posterior-inferior to the pinna showed similar magnitudes of promontory motion and similar levels of BC thresholds. Conclusion: Stimulations on the regions superior to the pinna, the mastoid, and posterior-inferior to the pinna provide stable BC transmission, and are insensitive to small changes of the stimulation position. Therefore, it is reliable to use the mastoid to determine BC thresholds in clinical audiometry. However, stimulation on a position superior-anterior to the pinna provides more efficient BC transmission than stimulation on the mastoid.

Place, publisher, year, edition, pages
Taylor & Francis, 2016
Keywords
Bone conduction (BC); BC hearing aid (BCHA); BC hearing threshold; cochlear promontory; mastoid; skin penetration; skull bone
National Category
Otorhinolaryngology
Identifiers
urn:nbn:se:liu:diva-130314 (URN), 10.3109/14992027.2016.1172120 (DOI), 000378816400002 (), 27139310 (PubMedID)
Available from: 2016-07-31 Created: 2016-07-28 Last updated: 2017-11-28
Stenfelt, S. (2016). Model predictions for bone conduction perception in the human. Hearing Research (15), 30076-30079
2016 (English). In: Hearing Research, ISSN 0378-5955, E-ISSN 1878-5891, no 15, p. 30076-30079. Article in journal (Refereed). Published.
Abstract [en]

Five different pathways are often suggested as important for bone conducted (BC) sound: (1) sound pressure in the ear canal, (2) inertia of the middle ear ossicles, (3) inertia of the inner ear fluid, (4) compression of the inner ear space, and (5) pressure transmission from the skull interior. The relative importance of these pathways was investigated with an acoustic-impedance model of the inner ear. The model incorporated data of BC generated ear canal sound pressure, middle ear ossicle motion, cochlear promontory vibration, and intracranial sound pressure. With BC stimulation at the mastoid, the inner ear inertia dominated the excitation of the cochlea but inner ear compression and middle ear inertia were within 10 dB for almost the entire frequency range of 0.1-10 kHz. Ear canal sound pressure gave little contribution at the low and high frequencies, but was around 15 dB below the total contribution at the mid frequencies. Intracranial sound pressure gave responses similar to the others at low frequencies, but decreased with frequency to a level of 55 dB below the total contribution at 10 kHz. When the BC inner ear model was evaluated against AC stimulation at threshold levels, the results were close up to approximately 4 kHz but deviated significantly at higher frequencies.
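The abstract reports each pathway's contribution in dB relative to the total excitation. A minimal numerical sketch of that bookkeeping is shown below; the complex pathway values are invented for illustration and are not taken from the model.

```python
# Minimal sketch (invented values) of expressing individual bone-conduction pathway
# contributions in dB relative to their phase-sensitive (complex) total.
import numpy as np

# Hypothetical complex contributions of each pathway to cochlear excitation at one frequency.
pathways = {
    "ear canal sound pressure":    0.05 + 0.02j,
    "middle ear ossicle inertia":  0.30 - 0.10j,
    "inner ear fluid inertia":     1.00 + 0.00j,
    "inner ear compression":       0.40 + 0.20j,
    "intracranial sound pressure": 0.02 - 0.01j,
}

total = sum(pathways.values())  # pathways add as complex (phase-sensitive) quantities
for name, value in pathways.items():
    rel_db = 20 * np.log10(abs(value) / abs(total))
    print(f"{name:28s} {rel_db:6.1f} dB re total")
```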

Place, publisher, year, edition, pages
Elsevier, 2016
Keywords
Bone conduction; Fluid inertia; Inner ear compression; Inner ear model; Middle ear inertia
National Category
Otorhinolaryngology
Identifiers
urn:nbn:se:liu:diva-125727 (URN), 10.1016/j.heares.2015.10.014 (DOI), 000386417900017 (), 26657096 (PubMedID)
Funder
EU, FP7, Seventh Framework Programme, 600933
Available from: 2016-03-01 Created: 2016-03-01 Last updated: 2017-11-30
Rudner, M., Mishra, S., Stenfelt, S., Lunner, T. & Rönnberg, J. (2016). Seeing the talker’s face improves free recall of speech for young adults with normal hearing but not older adults with hearing loss. Journal of Speech, Language and Hearing Research, 59, 590-599
2016 (English). In: Journal of Speech, Language and Hearing Research, ISSN 1092-4388, E-ISSN 1558-9102, Vol. 59, p. 590-599. Article in journal (Refereed). Published.
Abstract [en]

Purpose: Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers.

Method: Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13 two-digit numbers, with alternating male and female talkers. Lists were presented in quiet as well as in stationary and speech-like noise at a signal-to-noise ratio giving approximately 90% intelligibility. Amplification compensated for loss of audibility.

Results: Seeing the talker's face improved free recall performance for the younger but not the older group. Poorer performance in background noise was contingent on individual differences in working memory capacity. The effect of seeing the talker's face did not differ in quiet and noise.

Conclusions: We have argued that the absence of an effect of seeing the talker's face for older adults with hearing loss may be due to modulation of audiovisual integration mechanisms caused by an interaction between task demands and participant characteristics. In particular, we suggest that executive task demands and interindividual executive skills may play a key role in determining the benefit of seeing the talker's face during a speech-based cognitive task.

National Category
Psychology (excluding Applied Psychology); General Language Studies and Linguistics
Identifiers
urn:nbn:se:liu:diva-126019 (URN), 10.1044/2015_JSLHR-H-15-0014 (DOI), 000386781500016 ()
Note

Funding agencies: Swedish Council for Working Life and Social Research [2007-0788].

The previous status of this article was Manuscript, and the working title was "Updating ability reduces the negative effect of noise on memory of speech for persons with age-related hearing loss".

Available from: 2016-03-11 Created: 2016-03-11 Last updated: 2022-09-29. Bibliographically approved
Asp, F., Mäki-Torkko, E., Karltorp, E., Harder, H., Hergils, L., Eskilsson, G. & Stenfelt, S. (2015). A longitudinal study of the bilateral benefit in children with bilateral cochlear implants. International Journal of Audiology, 54(2), 77-88
2015 (English). In: International Journal of Audiology, ISSN 1499-2027, E-ISSN 1708-8186, Vol. 54, no 2, p. 77-88. Article in journal (Refereed). Published.
Abstract [en]

Objective: To study the development of the bilateral benefit in children using bilateral cochlear implants by measurements of speech recognition and sound localization. Design: Bilateral and unilateral speech recognition in quiet and in multi-source noise, and horizontal sound localization, were measured on three occasions during a two-year period, without controlling for age or implant experience. Longitudinal and cross-sectional analyses were performed. Results were compared to cross-sectional data from children with normal hearing. Study sample: Seventy-eight children aged 5.1-11.9 years, with a mean bilateral cochlear implant experience of 3.3 years and a mean age of 7.8 years at inclusion in the study. Thirty children with normal hearing aged 4.8-9.0 years provided normative data. Results: For children with cochlear implants, bilateral and unilateral speech recognition in quiet was comparable, whereas a bilateral benefit for speech recognition in noise and sound localization was found at all three test occasions. Absolute performance was lower than in children with normal hearing. Early bilateral implantation facilitated sound localization. Conclusions: A bilateral benefit for speech recognition in noise and sound localization continues to exist over time for children with bilateral cochlear implants, but no relative improvement is found after three years of bilateral cochlear implant experience.

Place, publisher, year, edition, pages
Informa Healthcare, 2015
Keywords
Bilateral cochlear implants; children; release from masking; sound localization
National Category
Otorhinolaryngology
Identifiers
urn:nbn:se:liu:diva-114231 (URN), 10.3109/14992027.2014.973536 (DOI), 000347971300003 (), 25428567 (PubMedID)
Note

Funding agencies: Tysta Skolan Foundation; Stockholm County Council; Karolinska Institutet; Karolinska University Hospital

Available from: 2015-02-16 Created: 2015-02-16 Last updated: 2024-01-10
Kim, N. K. & Stenfelt, S. (2015). A Possible Third Window for Bone Conducted Hearing: Cochlear Aqueduct vs. Vestibular Aqueduct. In: Mechanics of hearing: Protein to perception. Paper presented at 12th International Workshop on the Mechanics of Hearing, Cape Sounio, Greece, 23–29 June 2014 (pp. 060016-1-060016-4). American Institute of Physics (AIP), 1703(060016)
2015 (English). In: Mechanics of hearing: Protein to perception, American Institute of Physics (AIP), 2015, Vol. 1703, no 060016, p. 060016-1-060016-4. Conference paper, Published paper (Refereed).
Abstract [en]

A third window, which is another cochlear fluid pathway different from the oval window and round window, is considered to be a significant factor in bone-conducted hearing. A three-dimensional finite element model of the human ear consisting of the middle ear and cochlea was used to investigate the effect of the third windows on bone-conducted hearing. This study aims to identify the third window that yields cochlear responses consistent with previous studies in air-conducted hearing, and that causes the asymmetry of the volume velocity ratio between the oval window and round window in bone-conducted hearing. The preliminary results show that the cochlear aqueduct and the vestibular aqueduct with high impedance do not affect the basilar membrane velocity in air-conducted hearing. In contrast, in bone-conducted hearing, the direction of the shaking structure for the bone-conducted stimulation, as well as the third window, can be a significant factor causing the asymmetry of the volume velocity ratio found by Stenfelt et al.

Place, publisher, year, edition, pages
American Institute of Physics (AIP), 2015
Series
AIP Conference Proceedings, ISSN 0094-243X
National Category
Clinical Medicine
Identifiers
urn:nbn:se:liu:diva-127074 (URN), 10.1063/1.4939371 (DOI), 000372065400058 (), 978-0-7354-1350-4 (ISBN)
Conference
12th International Workshop on the Mechanics of Hearing, Cape Sounio, Greece, 23–29 June 2014
Available from: 2016-04-13 Created: 2016-04-13 Last updated: 2016-04-28. Bibliographically approved
Stenfelt, S. (2015). Cochlear Boundary Motion During Bone Conduction Stimulation: Implications for Inertial and Compressional Excitation of the Cochlea. In: Mechanics of hearing: Protein to perception. Paper presented at 12th International Workshop on the Mechanics of Hearing. American Institute of Physics (AIP), 1703(060005)
2015 (English). In: Mechanics of hearing: Protein to perception, American Institute of Physics (AIP), 2015, Vol. 1703, no 060005. Conference paper, Published paper (Refereed).
Abstract [en]

It is well accepted that the perception of bone conducted (BC) sound in the human relies on multiple pathways. Of these pathways, the inertial forces in the cochlear fluid and compression and expansion of the cochlear space have been suggested to be the most important. However, the frequency ranges where these two pathways dominate have not been clarified. This was investigated here using a box model of the inner ear to estimate wall motion for a one-dimensional BC longitudinal skull vibration. Based on the dimensions of the inner ear and a BC wave speed of 400 m/s, the magnitude of the inertial motion of the cochlea was almost identical to the magnitude of the BC excitation except at the highest frequencies investigated (10 kHz). The compression (differential motion) was almost 100 times smaller than the inertial motion at 100 Hz but increased with frequency, and at 5.9 kHz and above the compression motion was greater than the inertial motion of the cochlea. These data were further analyzed in an impedance model of the cochlea and vestibule where the cochlear fluid, basilar membrane, oval window, round window, and cochlear and vestibular aqueducts were represented by acoustical impedances. That analysis showed that for a normal cochlea, the inertial excitation dominates the basilar membrane excitation for the whole frequency range investigated (0.1 to 10 kHz). However, when the oval window was immobilized, simulating otosclerosis, the inertial effect diminished and the compressional excitation mode improved, resulting in a dominant excitation from the compression of the cochlea at frequencies of 1.2 kHz and above. Also, the simulated BC hearing losses with otosclerosis according to this model were almost identical to the Carhart's notch seen clinically.
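As a toy reconstruction, not the paper's box model, one can compare the differential (compression-like) and common (inertia-like) motion of two boundaries separated by a distance L along a one-dimensional travelling wave of speed c: their magnitude ratio is 2|sin(pi f L / c)|, which grows with frequency. The wave speed of 400 m/s comes from the abstract; the dimension L below is a hypothetical value chosen only so that the ratio crosses unity near the reported 5.9 kHz.

```python
# Toy sketch (simplified assumption, not the paper's model): differential vs. common
# boundary motion for a 1-D travelling skull wave crossing the cochlea.
import numpy as np

c = 400.0   # BC wave speed from the abstract (m/s)
L = 0.0113  # hypothetical cochlear dimension (m), chosen for illustration only

for f in [100.0, 1000.0, 5900.0, 10000.0]:
    ratio = 2 * abs(np.sin(np.pi * f * L / c))  # compression / inertial magnitude ratio
    print(f"{f:7.0f} Hz: compression/inertial = {ratio:.3f} ({20 * np.log10(ratio):6.1f} dB)")
```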

Place, publisher, year, edition, pages
American Institute of Physics (AIP), 2015
Series
AIP Conference Proceedings, ISSN 0094-243X
National Category
Clinical Medicine
Identifiers
urn:nbn:se:liu:diva-127078 (URN), 10.1063/1.4939360 (DOI), 000372065400047 (), 978-0-7354-1350-4 (ISBN)
Conference
12th International Workshop on the Mechanics of Hearing, Cape Sounio, Greece, 23–29 June 2014
Available from: 2016-04-13 Created: 2016-04-13 Last updated: 2016-04-13
Identifiers
ORCID iD: orcid.org/0000-0003-3350-8997