Lidestam, Björn
Publications (10 of 50)
Stenfelt, S., Lunner, T., Ng, E., Lidestam, B., Zekveld, A., Sörqvist, P., . . . Rönnberg, J. (2016). Auditory, signal processing, and cognitive factors influencing speech perception in persons with hearing loss fitted with hearing aids – the N200 study. Paper presented at IHCON2016, International Hearing Aid Research Conference, Tahoe City, California, USA, August 10–14, 2016, Article ID B46.
Auditory, signal processing, and cognitive factors influencing speech perception in persons with hearing loss fitted with hearing aids – the N200 study
Show others...
2016 (English) Conference paper, Poster (with or without abstract) (Other academic)
Abstract [en]

Objective: The aim of the current study was to assess aided speech-in-noise outcomes and relate those measures to auditory sensitivity and processing, different types of cognitive processing abilities, and signal processing in hearing aids.

Material and method: Participants were 200 hearing-aid wearers, with a mean age of 60.8 years, 43% females, and average hearing thresholds in the better ear of 37.4 dB HL. Tests of auditory function were hearing thresholds, DPOAEs, tests of fine-structure processing, IHC dead regions, spectro-temporal modulation, and speech recognition in quiet (PB words). Tests of cognitive processing function were tests of phonological skills, working memory, executive functions and inference-making abilities, and general cognitive tests (e.g., tests of cognitive decline and IQ). The outcome test variables were the Hagerman sentences at 50% and 80% speech recognition levels, using two different noises (stationary speech-weighted noise and 4-talker babble) and three types of signal processing (linear gain, fast-acting compression, and linear gain plus a non-ideal binary mask). Another sentence test included typical and atypical sentences with contextual cues, tested both audiovisually and in an auditory-only mode. Moreover, HINT and SSQ were administered.

Analysis: Factor analyses were performed separately for the auditory, cognitive, and outcome tests.

Results: The auditory tests resulted in two factors labeled SENSITIVITY and TEMPORAL FINE STRUCTURE, the cognitive tests in one factor (COGNITION), and the outcome tests in the two factors termed NO CONTEXT and CONTEXT, which relate to the level of context in the different outcome tests. When age was partialled out, COGNITION was moderately correlated with the TEMPORAL FINE STRUCTURE and NO CONTEXT factors but only weakly correlated with the CONTEXT factor. SENSITIVITY correlated weakly with TEMPORAL FINE STRUCTURE and CONTEXT, and moderately with NO CONTEXT, while TEMPORAL FINE STRUCTURE showed a weak correlation with CONTEXT and a moderate correlation with NO CONTEXT. CONTEXT and NO CONTEXT had a moderate correlation. Moreover, the overall results of the Hagerman sentences showed 0.9 dB worse SNR with fast-acting compression compared with linear gain, and 5.5 dB better SNR with linear gain and noise reduction compared with linear gain only.
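Partialling out age, as in the analysis above, amounts to regressing each pair of factor scores on age and correlating the residuals. A minimal sketch of that step (illustrative only, not the study's analysis code; the function name and the simulated scores are hypothetical):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y with covariate z (e.g., age) partialled
    out: regress x and y on z, then correlate the residuals."""
    design = np.column_stack([np.ones_like(z), z])               # intercept + covariate
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]  # residuals of x on z
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]  # residuals of y on z
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical factor scores, both mildly age-dependent, for illustration
rng = np.random.default_rng(0)
age = rng.uniform(40.0, 80.0, 200)
cognition = -0.05 * age + rng.normal(0.0, 1.0, 200)
no_context = -0.05 * age + 0.5 * cognition + rng.normal(0.0, 1.0, 200)
print(partial_corr(cognition, no_context, age))
```

The same residualization extends to several covariates by adding columns to the design matrix.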

Conclusions: For hearing aid wearers, the ability to recognize speech in noise is associated with both sensory and cognitive processing abilities when the speech materials have low internal context. These associations are less prominent when the speech material has contextual cues.

National Category
Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:liu:diva-159504 (URN)
Conference
IHCON2016, International Hearing Aid Research Conference, Tahoe City, California, USA, August 10–14, 2016
Available from: 2019-08-09 Created: 2019-08-09 Last updated: 2019-08-09 Bibliographically approved
Moradi, S., Lidestam, B. & Rönnberg, J. (2016). Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli. TRENDS IN HEARING, 20, Article ID 2331216516653355.
Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli
2016 (English) In: TRENDS IN HEARING, ISSN 2331-2165, Vol. 20, article id 2331216516653355. Article in journal (Refereed) Published
Abstract [en]

The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context.
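An isolation point, as defined above, can be operationalized from per-gate responses as the presentation time of the earliest gate from which identification is correct and stays correct. A minimal sketch under that assumed, simplified scoring rule (the study's exact procedure may differ; the function name is hypothetical):

```python
def isolation_point(correct, gate_ms):
    """Return the presentation time (ms) of the earliest gate from which the
    response is correct on that gate and on every later gate; None if the
    stimulus is never stably identified.

    `correct` is a per-gate sequence of booleans; gate i ends at
    (i + 1) * gate_ms.
    """
    ip = None
    for i, ok in enumerate(correct):
        if ok and ip is None:
            ip = (i + 1) * gate_ms  # start of a (possibly) stable correct run
        elif not ok:
            ip = None               # identification changed; reset
    return ip

# e.g. wrong, wrong, correct, correct, correct with 40-ms gates -> 120 ms
print(isolation_point([False, False, True, True, True], 40))
```

Under this rule a late change of response pushes the IP forward, which matches the intuition that the stimulus was not yet reliably identified.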

Place, publisher, year, edition, pages
SAGE PUBLICATIONS INC, 2016
Keywords
audiovisual speech perception; EHA users; ENH listeners; gating paradigm
National Category
General Language Studies and Linguistics
Identifiers
urn:nbn:se:liu:diva-130430 (URN), 10.1177/2331216516653355 (DOI), 000379790000003 (), 27317667 (PubMedID)
Note

Funding Agencies|Swedish Research Council [349-2007-8654]

Available from: 2016-08-07 Created: 2016-08-05 Last updated: 2018-01-10
Henricson, C., Lidestam, B., Lyxell, B. & Möller, C. (2015). Cognitive skills and reading in adults with Usher syndrome type 2. Frontiers in Psychology, 6(326)
Cognitive skills and reading in adults with Usher syndrome type 2
2015 (English) In: Frontiers in Psychology, ISSN 1664-1078, E-ISSN 1664-1078, Vol. 6, no 326. Article in journal (Refereed) Published
Abstract [en]

Objective: To investigate working memory (WM), phonological skills, lexical skills, and reading comprehension in adults with Usher syndrome type 2 (USH2). Design: The participants performed tests of phonological processing, lexical access, WM, and reading comprehension. The design of the test situation and tests was specifically considered for use with persons with low vision in combination with hearing impairment. The performance of the group with USH2 on the different cognitive measures was compared to that of a matched control group with normal hearing and vision (NVH). Study Sample: Thirteen participants with USH2 aged 21-60 years and a control group of 10 individuals with NVH, matched on age and level of education. Results: The group with USH2 displayed significantly lower performance on tests of phonological processing, and on measures requiring both fast visual judgment and phonological processing. There was a larger variation in performance among the individuals with USH2 than in the matched control group. Conclusion: The performance of the group with USH2 indicated similar problems with phonological processing skills and phonological WM as in individuals with long-term hearing loss. The group with USH2 also had significantly longer reaction times, indicating that processing of visual stimuli is difficult due to the visual impairment. These findings point toward the difficulties in accessing information that persons with USH2 experience, and could be part of the explanation of why individuals with USH2 report high levels of fatigue and feelings of stress (Wahlqvist et al., 2013).

Place, publisher, year, edition, pages
Frontiers, 2015
Keywords
deafblindness; Usher syndrome; phonological skill; lexical skill; working memory; reading
National Category
Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:liu:diva-117380 (URN), 10.3389/fpsyg.2015.00326 (DOI), 000351714000001 (), 25859232 (PubMedID)
Note

Funding Agencies|Swedish Research Council Forte; Audiological Research Centre in Örebro

Available from: 2015-04-24 Created: 2015-04-24 Last updated: 2017-12-04
Moradi, S., Lidestam, B. & Rönnberg, J. (2015). Comparison of gated audiovisual speech perception between elderly hearing-aid users and elderly normal-hearing listeners. Paper presented at Conference on Cognitive Hearing Science for Communication (CHCCOM2015), Linköping, June 14-17, 2015.
Comparison of gated audiovisual speech perception between elderly hearing-aid users and elderly normal-hearing listeners
2015 (English) Conference paper, Poster (with or without abstract) (Other academic)
Abstract [en]

The addition of visual cues to auditory signals amplified by hearing aids results in better identification of speech stimuli than unaided audiovisual or aided auditory-only presentation (Walden et al., 2001). An important question that remains unexplored is whether hearing-aid users have the same ability for audiovisual speech perception as their age-matched normal-hearing counterparts.

Here we present preliminary findings from 18 elderly hearing-aid users and 18 elderly normal-hearing listeners in gated audiovisual identification of different types of speech stimuli (consonants, words, and final words in low-predictability and high-predictability sentences). In terms of isolation point (IP; the shortest time from the onset of a speech stimulus required for its correct identification), results showed that elderly hearing-aid users needed longer IPs for identification of consonants and words than elderly normal-hearing individuals in the quiet condition. There were no differences between the two groups in the IPs needed for identification of final words embedded in low-predictability or high-predictability sentences. In terms of accuracy, both the elderly hearing-aid and elderly normal-hearing groups reached ceiling in audiovisual identification of speech stimuli in the quiet condition.

National Category
Otorhinolaryngology
Identifiers
urn:nbn:se:liu:diva-123283 (URN)
Conference
Conference on Cognitive Hearing Science for Communication (CHCCOM2015), Linköping, June 14-17, 2015
Available from: 2015-12-09 Created: 2015-12-09 Last updated: 2015-12-17
Moradi, S., Lidestam, B. & Rönnberg, J. (2015). Greater explicit cognitive resources support speech-in-noise identification in elderly normal-hearing listeners. Paper presented at Conference on Cognitive Hearing Science for Communication (CHCCOM2015), Linköping, June 14-17, 2015.
Greater explicit cognitive resources support speech-in-noise identification in elderly normal-hearing listeners
2015 (English) Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

Prior studies have demonstrated that the cognitive capacity of listeners is a key factor in speech-in-noise tests in young normal-hearing listeners (e.g., Moradi et al., 2014) and hearing-impaired individuals (e.g., Foo et al., 2007; Rudner et al., 2012). In addition, aging is associated with declines in sensory and cognitive functions that may impair speech perception in noisy conditions.

The present study aimed to investigate the relationships between working memory and attentional capacities and speech-in-noise identification in elderly normal-hearing listeners. Twenty-four native Swedish speakers (13 women and 11 men) with normal hearing were recruited to participate in the study. The mean age of the participants was 71.5 years (SD = 3.1 years, range: 66–77 years). The reading span test (RST) and the Paced Auditory Serial Attention Test (PASAT) were used to measure working memory capacity and attentional capacity, respectively. Speech-in-noise identification was measured using the HINT (at the 50% correct level) and the Hagerman test (at the 80% correct level). Results showed that individuals with greater working memory and attentional capacities performed better on the HINT and Hagerman tests. These findings support the notion that listeners' explicit cognitive resources play a critical role in the identification of speech stimuli under degraded listening conditions.

National Category
Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:liu:diva-123282 (URN)
Conference
Conference on Cognitive Hearing Science for Communication (CHCCOM2015), Linköping, June 14-17, 2015
Available from: 2015-12-09 Created: 2015-12-09 Last updated: 2015-12-17
Henricson, C., Lyxell, B., Lidestam, B. & Möller, C. (2015). Reading skill in five children with Usher Syndrome type 1 and Cochlear implants.
Reading skill in five children with Usher Syndrome type 1 and Cochlear implants
2015 (English) Manuscript (preprint) (Other academic)
Abstract [en]

Objective: The aim of this study was to explore and describe reading skill in children with Usher syndrome type 1 who have cochlear implants (USH1+CI), and to position their performance relative to that of three control groups: children with normal hearing (NH), children with hearing impairment and hearing aids (HI+HA), and children with other types of deafness and CI (other CI).

Method: Reading comprehension and decoding were measured in five children with USH1+CI aged 7.5–16 years. The children participated in a test session of 2–2.5 hours and performed tests of reading skill, WM, phonological skills, and lexical skills.

Results: Four of the children with USH1+CI achieved results similar to those of the control group with NH on the measures of reading skill. One child with USH1+CI performed below all control groups. Three of the children with USH1+CI performed well both on the measures of phonological skill and on the tests of reading skill. Overall, the groups performed similarly on the tests of reading skill.

Conclusions: Three of the children with USH1+CI decoded non-words with a phonological decoding strategy, similar to the strategy applied by the control group with NH. Two of the children with USH1+CI relied on an orthographic decoding strategy, possibly drawing on cognitive skills other than phonological ones.

Keywords
Reading skill; Usher syndrome type 1; Cochlear Implant; phonological skills; working memory
National Category
Social Sciences Interdisciplinary; Psychology
Identifiers
urn:nbn:se:liu:diva-120112 (URN)
Available from: 2015-07-09 Created: 2015-07-09 Last updated: 2018-01-11 Bibliographically approved
Lidestam, B. (2014). Audiovisual presentation of video-recorded stimuli at a high frame rate. Behavior Research Methods, 46(2), 499-516
Audiovisual presentation of video-recorded stimuli at a high frame rate
2014 (English) In: Behavior Research Methods, ISSN 1554-351X, E-ISSN 1554-3528, Vol. 46, no 2, p. 499-516. Article in journal (Refereed) Published
Abstract [en]

A method for creating and presenting video-recorded, synchronized audiovisual stimuli at a high frame rate, which would be highly useful for psychophysical studies on, for example, just-noticeable differences and gating, is presented. The steps include recording audio and video separately using an exact synchronization signal, editing the recordings and finding exact synchronization points, and presenting the synchronized audiovisual stimuli at a desired frame rate on a cathode ray tube display using MATLAB and Psychophysics Toolbox 3. The methods from an empirical gating study (Moradi, Lidestam, and Rönnberg, Frontiers in Psychology 4: 359, 2013) are presented as an example of the implementation of playback at 120 fps.
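The synchronization bookkeeping described above reduces to mapping each video frame to the audio sample that must begin playing with it, given one known synchronization point. A sketch in Python (the paper's own implementation uses MATLAB and Psychophysics Toolbox 3; the sample rate and function name here are assumptions for illustration):

```python
AUDIO_RATE_HZ = 48_000  # assumed audio sampling rate
VIDEO_FPS = 120         # target playback rate from the paper

def audio_sample_for_frame(frame_index, sync_frame, sync_sample):
    """Map a video frame index to the audio sample that should start with it,
    given one known synchronization point (sync_frame aligned with
    sync_sample). At 48 kHz and 120 fps, each frame spans 400 samples."""
    samples_per_frame = AUDIO_RATE_HZ / VIDEO_FPS
    return round(sync_sample + (frame_index - sync_frame) * samples_per_frame)

print(audio_sample_for_frame(10, 0, 0))  # frame 10 begins 4000 samples in
```

Keeping the mapping relative to a measured sync point, rather than assuming playback starts at sample zero, is what absorbs editing offsets between the separately recorded audio and video tracks.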

Place, publisher, year, edition, pages
Springer Verlag (Germany), 2014
Keywords
Psychophysics; Frame rate; Audiovisual; Synchronization; Temporal resolution
National Category
Basic Medicine
Identifiers
urn:nbn:se:liu:diva-110292 (URN), 10.3758/s13428-013-0394-2 (DOI), 000340226200017 (), 24197711 (PubMedID)
Available from: 2014-09-05 Created: 2014-09-05 Last updated: 2018-01-11
Lidestam, B., Moradi, S., Pettersson, R. & Ricklefs, T. (2014). Audiovisual training is better than auditory-only training for auditory-only speech-in-noise identification. Journal of the Acoustical Society of America, 136(2), EL142-EL147
Audiovisual training is better than auditory-only training for auditory-only speech-in-noise identification
2014 (English) In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 136, no 2, p. EL142-EL147. Article in journal (Refereed) Published
Abstract [en]

The effects of audiovisual versus auditory training for speech-in-noise identification were examined in 60 young participants. The training conditions were audiovisual training, auditory-only training, and no training (n = 20 each). In the training groups, gated consonants and words were presented at 0 dB signal-to-noise ratio; stimuli were either audiovisual or auditory-only. The no-training group watched a movie clip without performing a speech identification task. Speech-in-noise identification was measured before and after the training (or control activity). Results showed that only audiovisual training improved speech-in-noise identification, demonstrating superiority over auditory-only training.

Place, publisher, year, edition, pages
American Institute of Physics (AIP), 2014
Keywords
Audiovisual training; audio training; speech-in-noise identification
National Category
Applied Psychology
Identifiers
urn:nbn:se:liu:diva-108989 (URN), 10.1121/1.4890200 (DOI), 000341178100014 (), 25096138 (PubMedID)
Funder
Swedish Research Council, 2006-6917
Available from: 2014-07-19 Created: 2014-07-19 Last updated: 2017-12-05 Bibliographically approved
Thorslund, B., Ahlström, C., Peters, B., Eriksson, O., Lidestam, B. & Lyxell, B. (2014). Cognitive workload and visual behavior in elderly drivers with hearing loss. European Transport Research Review, 6(4), 377-385
Cognitive workload and visual behavior in elderly drivers with hearing loss
Show others...
2014 (English) In: European Transport Research Review, ISSN 1867-0717, E-ISSN 1866-8887, Vol. 6, no 4, p. 377-385. Article in journal (Refereed) Published
Abstract [en]

Purpose

To examine eye tracking data and compare visual behavior in individuals with normal hearing (NH) and with moderate hearing loss (HL) during two types of driving conditions: normal driving and driving while performing a secondary task.

Methods

24 participants with HL and 24 with NH were exposed to normal driving and to driving with a secondary task (observation and recall of 4 visually displayed letters). Eye movement behavior was assessed during normal driving by the following performance indicators: number of glances away from the road; mean duration of glances away from the road; maximum duration of glances away from the road; and percentage of time looking at the road. During driving with the secondary task, eye movement data were assessed in terms of number of glances to the secondary task display, mean duration of glances to the secondary task display, and maximum duration of glances to the secondary task display. The secondary task performance was assessed as well, counting the number of correct letters, the number of skipped letters, and the number of correct letters ignoring order.
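All of the glance-based indicators listed above can be derived from a per-sample gaze-on-road trace. A minimal sketch (an illustrative reduction; real eye-tracker output and the study's exact operationalizations are richer, and the function name is hypothetical):

```python
def glance_metrics(on_road, sample_ms):
    """From a boolean per-sample gaze trace, compute: number of glances away
    from the road, their mean and maximum duration (ms), and the percentage
    of time spent looking at the road."""
    glances, run = [], 0
    for sample in on_road:
        if not sample:
            run += 1                        # extend the current away-glance
        elif run:
            glances.append(run * sample_ms)  # away-glance just ended
            run = 0
    if run:
        glances.append(run * sample_ms)      # trace ended mid-glance
    n = len(glances)
    mean_ms = sum(glances) / n if n else 0.0
    max_ms = max(glances) if n else 0
    pct_on_road = 100.0 * sum(on_road) / len(on_road)
    return n, mean_ms, max_ms, pct_on_road
```

The same run-length logic, applied to a gaze-on-display trace, yields the secondary-task glance indicators.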

Results

While driving with the secondary task, drivers with HL looked in the rear-view mirror twice as often as during normal driving, and twice as often as drivers with NH regardless of condition. During the secondary task, the HL group looked away from the road more frequently, but for shorter durations, than the NH group. Drivers with HL had fewer correct letters and more skipped letters than drivers with NH.

Conclusions

Differences in visual behavior between drivers with NH and with HL are bound to the driving condition. When driving with a secondary task, drivers with HL spend as much time looking away from the road as drivers with NH, but with more frequent and shorter glances away. Secondary-task performance is lower for the HL group, suggesting that this group is less willing to perform the task. The results also indicate that drivers with HL use fewer but more focused glances away than drivers with NH, and that they perform a visual scan of the surrounding traffic environment before looking away towards the secondary-task display.

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2014
Keywords
Hearing loss; Driving simulator; Visual behavior; Cognitive workload
National Category
Social Sciences Interdisciplinary; Other Medical Sciences not elsewhere specified
Identifiers
urn:nbn:se:liu:diva-111932 (URN), 10.1007/s12544-014-0139-z (DOI), 000209729200003 (), 2-s2.0-84920249351 (Scopus ID)
Available from: 2014-11-10 Created: 2014-11-10 Last updated: 2018-01-11 Bibliographically approved
Lidestam, B., Holgersson, J. & Moradi, S. (2014). Comparison of informational vs. energetic masking effects on speechreading performance. Frontiers in Psychology, 5(639)
Comparison of informational vs. energetic masking effects on speechreading performance
2014 (English) In: Frontiers in Psychology, ISSN 1664-1078, E-ISSN 1664-1078, Vol. 5, no 639. Article in journal (Refereed) Published
Abstract [en]

The effects of two types of auditory distracters (steady-state noise vs. four-talker babble) on visual-only speechreading accuracy were tested against a baseline (silence) in 23 participants with above-average speechreading ability. Their task was to speechread high-frequency Swedish words. They were asked to rate their own performance and effort, and to report how distracting each type of auditory distracter was. Only the four-talker babble impeded speechreading accuracy. This suggests competition for phonological processing, since the four-talker babble demands phonological processing, which is also required for the speechreading task. Better accuracy was associated with lower self-rated effort in silence; no other correlations were found.

Keywords
speech perception; cognition; speechreading; informational masking; energetic masking
National Category
Psychology
Identifiers
urn:nbn:se:liu:diva-108990 (URN), 10.3389/fpsyg.2014.00639 (DOI), 000338723500001 ()
Funder
Swedish Research Council, 2006–6917
Available from: 2014-07-19 Created: 2014-07-19 Last updated: 2017-12-05 Bibliographically approved