Search for publications in DiVA (liu.se)
51 - 67 of 67
  • 51.
    Rudner, Mary
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Linköping University, Linnaeus Centre HEAD.
    Foo, Catharina
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Snekkersten, Oticon A/S, Eriksholm Research Centre.
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Linköping University, Linnaeus Centre HEAD.
    Lunner, Thomas
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Snekkersten, Oticon A/S, Eriksholm Research Centre.
    Phonological mismatch makes aided speech recognition in noise cognitively taxing. 2007. In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 28, p. 879-892. Article in journal (Refereed)
    Abstract [en]

    Objectives: The working memory framework for Ease of Language Understanding predicts that speech processing becomes more effortful, thus requiring more explicit cognitive resources, when there is mismatch between speech input and phonological representations in long-term memory. To test this prediction, we changed the compression release settings in the hearing instruments of experienced users and allowed them to train for 9 weeks with the new settings. After training, aided speech recognition in noise was tested with both the trained settings and orthogonal settings. We postulated that training would lead to acclimatization to the trained setting, which in turn would involve establishment of new phonological representations in long-term memory. Further, we postulated that after training, testing with orthogonal settings would give rise to phonological mismatch, associated with more explicit cognitive processing.

    Design: Thirty-two participants (mean age = 70.3 years, SD = 7.7) with bilateral sensorineural hearing loss (pure-tone average = 46.0 dB HL, SD = 6.5), bilaterally fitted for more than 1 year with digital, two-channel, nonlinear signal processing hearing instruments and chosen from the patient population at Linköping University Hospital, were randomly assigned to 9 weeks of training with new, fast (40 ms) or slow (640 ms), compression release settings in both channels. Aided speech recognition in noise performance was tested according to a design with three within-group factors: test occasion (T1, T2), test setting (fast, slow), and type of noise (unmodulated, modulated), and one between-group factor: experience setting (fast, slow), for two types of speech materials: the highly constrained Hagerman sentences and the less-predictable Hearing in Noise Test (HINT). Complex cognitive capacity was measured using the reading span and letter monitoring tests.

    Prediction: We predicted that speech recognition in noise at T2 with mismatched experience and test settings would be associated with more explicit cognitive processing and thus stronger correlations with complex cognitive measures, as well as poorer performance if complex cognitive capacity was exceeded.

    Results: Under mismatch conditions, stronger correlations were found between performance on speech recognition with the Hagerman sentences and reading span, along with poorer speech recognition for participants with low reading span scores. No consistent mismatch effect was found with HINT.

    Conclusions: The mismatch prediction generated by the working memory framework for Ease of Language Understanding is supported for speech recognition in noise with the highly constrained Hagerman sentences but not the less-predictable HINT.
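    The analysis logic of this design, correlating speech-in-noise scores with reading span separately for matched and mismatched settings, can be sketched as follows. This is an illustrative Python sketch with invented scores, not the study's data or analysis code; it only shows the predicted pattern of a stronger correlation under mismatch.

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Invented data: reading span scores and % correct speech recognition
reading_span = [10, 14, 18, 22, 26]
matched      = [72, 70, 74, 71, 73]  # settings match training: flat scores
mismatched   = [55, 60, 66, 70, 75]  # mismatch: scores track reading span
print(pearson_r(reading_span, matched))     # 0.3 (weak)
print(pearson_r(reading_span, mismatched))  # ≈ 0.998 (strong)
```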

  • 52.
    Rudner, Mary
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Keidser, Gitte
    National Acoustic Laboratories, Australia.
    Hygge, Staffan
    University of Gävle, Sweden.
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Better visuospatial working memory in adults who report profound deafness compared to those with normal or poor hearing: data from the UK Biobank resource. 2016. In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 37, no. 5, p. 620-622. Article in journal (Refereed)
    Abstract [en]

    Experimental work has shown better visuospatial working memory (VSWM) in profoundly deaf individuals than in those with normal hearing. Other data, including data from the UK Biobank resource, show poorer VSWM in individuals with poorer hearing. Using the same database, the authors investigated VSWM in individuals who reported profound deafness. Included in this study were 112 participants who were profoundly deaf, 1310 with poor hearing, and 74,635 with normal hearing. All participants performed a card-pair matching task as a test of VSWM. Although variance in VSWM performance was large among profoundly deaf participants, at group level it was superior to that of participants with normal hearing and with poor hearing. VSWM in adults is related to hearing status, but the association is not linear. Future studies should investigate the mechanism behind enhanced VSWM in profoundly deaf adults.

  • 53.
    Samuelsson, A-K
    et al.
    Hydén, Dag
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Neuroscience and Locomotion, Oto-Rhino-Laryngology and Head & Neck Surgery. Östergötlands Läns Landsting, RC - Rekonstruktionscentrum, ÖNH - Öron- Näsa- Halskliniken.
    Roberg, Magnus
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Molecular and Clinical Medicine, Infectious Diseases. Östergötlands Läns Landsting, Centre for Medicine, Department of Infectious Diseases in Östergötland.
    Skogh, Thomas
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Molecular and Clinical Medicine, Rheumatology. Östergötlands Läns Landsting, Centre for Medicine, Department of Rheumatology in Östergötland.
    Evaluation of anti-hsp70 antibody screening in sudden deafness. 2003. In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 24, no. 3, p. 233-235. Article in journal (Refereed)
    Abstract [en]

    Objective: To assess the diagnostic utility of anti-hsp70 antibody screening in sudden deafness. Design: Sera from 27 patients with sudden deafness and 100 healthy blood donors were analyzed by Western blotting (WB) for the presence of antibodies against 68 kD heat shock protein (anti-hsp70). Results: 19% of the patient sera and 14% of the control sera tested positive; the difference was not statistically significant. Conclusions: The anti-hsp70 WB test lacks clinical utility for diagnostic screening in patients with sudden deafness.
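    The reported positivity rates (19% of 27 patient sera, i.e., about 5 samples, versus 14 of 100 controls) can be checked with a two-sided Fisher's exact test. This is an illustrative sketch only; the abstract does not state which significance test the authors used.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sums the hypergeometric probabilities of all tables with the same
    margins that are no more probable than the observed table."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    def prob(x):  # probability of the table whose top-left cell is x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Patients: 5 positive / 22 negative; controls: 14 positive / 86 negative
p = fisher_exact_two_sided(5, 22, 14, 86)
print(f"p = {p:.2f}")  # well above 0.05, consistent with the reported non-significance
```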

  • 54.
    Saunders, Gabrielle H.
    et al.
    Portland VA Medical Centre, OR USA; Oregon Health and Science University, OR 97201 USA.
    Frederick, Melissa T.
    Portland VA Medical Centre, OR USA.
    Silverman, ShienPei C.
    Portland VA Medical Centre, OR USA.
    Nielsen, Claus
    Oticon AS, Denmark.
    Laplante-Lévesque, Ariane
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Oticon AS, Denmark.
    Description of Adults Seeking Hearing Help for the First Time According to Two Health Behavior Change Approaches: Transtheoretical Model (Stages of Change) and Health Belief Model. 2016. In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 37, no. 3, p. 324-333. Article in journal (Refereed)
    Abstract [en]

    Objectives: Several models of health behavior change are commonly used in health psychology. This study applied the constructs delineated by two models, the transtheoretical model (in which readiness for health behavior change can be described with the stages of precontemplation, contemplation, and action) and the health belief model (in which susceptibility, severity, benefits, barriers, self-efficacy, and cues to action are thought to determine the likelihood of health behavior change), to adults seeking hearing help for the first time. Design: One hundred eighty-two participants (mean age: 69.5 years) were recruited following an initial hearing assessment by an audiologist. Participants' mean four-frequency pure-tone average was 35.4 dB HL, with 25.8% having no hearing impairment, 50.5% having a slight impairment, and 23.1% having a moderate or severe impairment according to the World Health Organization definition of hearing loss. Participants' hearing-related attitudes and beliefs toward hearing health behaviors were examined using the University of Rhode Island Change Assessment (URICA) and the health beliefs questionnaire (HBQ), which assess the constructs of the transtheoretical model and the health belief model, respectively. Participants also provided demographic information and completed the hearing handicap inventory (HHI) to assess participation restrictions, and the psychosocial impact of hearing loss (PIHL) to assess the extent to which hearing impacts competence, self-esteem, and adaptability. Results: Degree of hearing impairment was associated with participation restrictions, perceived competence, self-esteem and adaptability, and the attitudes and beliefs measured by the URICA and the HBQ. As degree of impairment increased, participation restrictions measured by the HHI and impacts of hearing loss measured by the PIHL increased. The majority of first-time help seekers in this study were in the action stage of change. Furthermore, relative to individuals with less hearing impairment, individuals with more hearing impairment were at more advanced stages of change as measured by the URICA (i.e., higher contemplation and action scores relative to their precontemplation score), and they perceived fewer barriers and more susceptibility, severity, benefits, and cues to action as measured by the HBQ. Multiple regression analyses showed participation restrictions (HHI scores) to be a highly significant predictor of stages of change, explaining 30% to 37% of the variance; duration of hearing difficulty and the perceived benefits, severity, self-efficacy, and cues to action assessed by the HBQ were also significant predictors. Conclusions: The main predictors of stages of change in first-time help seekers were reported participation restrictions and duration of hearing difficulty, with constructs from the health belief model also explaining some of the variance in stages-of-change scores. The transtheoretical model and the health belief model are valuable for understanding hearing health behaviors and can be applied when developing interventions to promote help seeking.

  • 55.
    Signoret, Carine
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Hearing impairment and perceived clarity of predictable speech. 2019. In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 40, no. 5, p. 1140-1148. Article in journal (Refereed)
    Abstract [en]

    Objectives: The precision of stimulus-driven information is less critical for comprehension when accurate knowledge-based predictions of the upcoming stimulus can be generated. A recent study in listeners without hearing impairment (HI) has shown that form- and meaning-based predictability independently and cumulatively enhance perceived clarity of degraded speech. In the present study, we investigated whether form- and meaning-based predictability enhanced the perceptual clarity of degraded speech for individuals with moderate to severe sensorineural HI, a group for whom such enhancement may be particularly important.

    Design: Spoken sentences with high or low semantic coherence were degraded by noise-vocoding and preceded by matching or nonmatching text primes. Matching text primes allowed generation of form-based predictions while semantic coherence allowed generation of meaning-based predictions.

    Results: The results showed that both form- and meaning-based predictions make degraded speech seem clearer to individuals with HI. The benefit of form-based predictions was seen across levels of speech quality and was greater for individuals with HI in the present study than for individuals without HI in our previous study. However, for individuals with HI, the benefit of meaning-based predictions was only apparent when the speech was slightly degraded. When it was more severely degraded, the benefit of meaning-based predictions was only seen when matching text primes preceded the degraded speech. The benefit in terms of perceptual clarity of meaning-based predictions was positively related to verbal fluency but not working memory performance.

    Conclusions: Taken together, these results demonstrate that, for individuals with HI, form-based predictability has a robust effect on perceptual clarity that is greater than the effect previously shown for individuals without HI. However, when speech quality is moderately or severely degraded, meaning-based predictability is contingent on form-based predictability. Further, the ability to mobilize the lexicon seems to contribute to the strength of meaning-based predictions. Whereas individuals without HI may be able to devote explicit working memory capacity for storing meaning-based predictions, individuals with HI may already be using all available explicit capacity to process the degraded speech and thus become reliant on explicit skills such as their verbal fluency to generate useful meaning-based predictions.

  • 56.
    Singh, Gurjit
    et al.
    University of Toronto, Canada; Toronto Rehabilitation Institute, Canada.
    Pichora-Fuller, Kathleen M
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Hayes, Donald
    Unitron Hearing Ltd., Kitchener, Canada.
    von Schroeder, Herbert P.
    University of Toronto, Canada.
    Carnahan, Heather
    University of Toronto, Canada; Women's College Hospital, Toronto, Canada.
    The Aging Hand and the Ergonomics of Hearing Aid Controls. 2013. In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 34, no. 1, p. E1-E13. Article in journal (Refereed)
    Abstract [en]

    Objectives: The authors investigated the effects of hand function and aging on the ability to manipulate different hearing instrument controls. Over the past quarter century, hearing aids and hearing aid controls have become increasingly miniaturized. It is important to investigate the aging hand and hearing aid ergonomics because most hearing aid wearers are adults aged 65 years and above, who may have difficulty handling these devices.

    Design: In Experiment 1, the effect of age on the ability to manipulate two different open-fit behind-the-ear style hearing aids was investigated by comparing the performance of 20 younger (18-25 years of age), 20 young-old (60-70 years of age), and 20 older adults (71-80 years of age). In Experiment 2, the ability to manipulate 11 different hearing instrument controls was investigated in 28 older adults who self-reported having arthritis in their hand, wrist, or finger and 28 older adults who did not report arthritis. For both experiments, the relationship between performance on the measures of ability to manipulate the devices and performance on a battery of tests to assess hand function was investigated.

    Results: In Experiment 1, age-related differences in performance were observed in all the tasks assessing hand function and in the tasks assessing ability to manipulate a hearing aid. In Experiment 2, although minimal differences were observed between the two groups, significant differences were observed depending on the type of hearing instrument control. Performance on several of the objective tests of hand function was associated with the ability to manipulate hearing instruments.

    Conclusions: The overall pattern of findings suggests that haptic (touch) sensitivity in the fingertips and manual dexterity, as well as disability, pain, and joint stiffness of the hand, all contribute to the successful operation of a hearing instrument. However, although aging is associated with declining hand function and co-occurring declines in the ability to manipulate a hearing instrument, for the sample of individuals in this study, including those who self-reported having arthritis, only minimal declines were observed.

  • 57.
    Smeds, Henrik
    et al.
    Karolinska Inst, Sweden; Karolinska Univ Hosp, Sweden.
    Wales, Jeremy
    Karolinska Inst, Sweden; Karolinska Univ Hosp, Sweden.
    Karltorp, Eva
    Karolinska Inst, Sweden; Karolinska Univ Hosp, Sweden.
    Anderlid, Britt-Marie
    Karolinska Inst, Sweden; Karolinska Univ Hosp, Sweden.
    Henricson, Cecilia
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Asp, Filip
    Karolinska Inst, Sweden.
    Anmyr, Lena
    Karolinska Inst, Sweden; Karolinska Univ Hosp, Sweden.
    Lagerstedt-Robinson, Kristina
    Karolinska Inst, Sweden; Karolinska Univ Hosp, Sweden.
    Löfkvist, Ulrika
    Karolinska Inst, Sweden; Uppsala Univ, Sweden.
    X-linked Malformation Deafness: Neurodevelopmental Symptoms Are Common in Children With IP3 Malformation and Mutation in POU3F4. 2022. In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 43, no. 1, p. 53-69. Article in journal (Refereed)
    Abstract [en]

    Objective: Incomplete partition type 3 (IP3) malformation deafness is a rare hereditary cause of congenital or rapidly progressive hearing loss. The children present with a severe to profound mixed hearing loss, and temporal bone imaging shows a typical inner ear malformation classified as IP3. Cochlear implantation is one option for hearing restoration in severe cases. Little is known about other specific difficulties these children might exhibit, for instance possible neurodevelopmental symptoms. Material and methods: Ten children aged 2;0 to 9;6 years with IP3 malformation deafness (nine boys and one girl) with cochlear implants were evaluated with a retrospective chart review in combination with an additional extensive multidisciplinary assessment day. Hearing, language, cognition, and mental ill-health were compared with a control group of ten children aged 1;6 to 14;5 years with cochlear implants (seven boys and three girls) with another genetic cause of deafness: mutations in the GJB2 gene. Results: Mutations in POU3F4 were found in nine of the 10 children with IP3 malformation. Children with IP3 malformation deafness had an atypical outcome, with a low level of speech recognition (especially in noise), executive functioning deficits, delayed or impaired speech as well as atypical lexical-semantic and pragmatic abilities, and mental ill-health issues. Parents of children with IP3 malformation were more likely to report that they were worried about their child's psychosocial wellbeing. Controls, however, had more age-typical results in all these domains. Eight of 10 children in the experimental group had high nonverbal cognitive ability despite their broad range of neurodevelopmental symptoms. Conclusions: While cochlear implantation is a feasible alternative for children with IP3 malformation deafness, co-occurring neurodevelopmental anomalies, such as attention deficit hyperactivity disorder or developmental language disorder, and mental ill-health issues require an extensive and consistent multidisciplinary team approach during childhood to support their overall habilitation.

  • 58.
    Smith, Sherri L.
    et al.
    Vet Affairs Medical Centre, TN USA; East Tennessee State University, TN 37614 USA.
    Pichora-Fuller, Kathleen
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, The Swedish Institute for Disability Research. Linköping University, Faculty of Arts and Sciences. University of Toronto, Canada; University of Health Network, Canada; Baycrest Hospital, Canada.
    Alexander, Genevieve
    Vet Affairs Medical Centre, TN USA.
    Development of the Word Auditory Recognition and Recall Measure: A Working Memory Test for Use in Rehabilitative Audiology. 2016. In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 37, no. 6, p. E360-E376. Article in journal (Refereed)
    Abstract [en]

    Objectives: The purpose of this study was to develop the Word Auditory Recognition and Recall Measure (WARRM) and to conduct the inaugural evaluation of the performance of younger adults with normal hearing, older adults with normal to near-normal hearing, and older adults with pure-tone hearing loss on the WARRM. Design: The WARRM is a new test designed for concurrently assessing word recognition and auditory working memory performance in adults who may have pure-tone hearing loss. The test consists of 100 monosyllabic words based on widely used speech-recognition test materials. The 100 words are presented in recall set sizes of 2, 3, 4, 5, and 6 items, with 5 trials in each set size. The WARRM yields a word-recognition score and a recall score. The WARRM was administered to all participants in three listener groups under two processing conditions in a mixed model (between-subjects, repeated measures) design. The between-subjects factor was group, with 48 younger listeners with normal audiometric thresholds (younger listeners with normal hearing [YNH]), 48 older listeners with normal thresholds through 3000 Hz (older listeners with normal hearing [ONH]), and 48 older listeners with sensorineural hearing loss (older listeners with hearing loss [OHL]). The within-subjects factor was WARRM processing condition (no additional task or with an alphabet judgment task). The associations between results on the WARRM test and results on a battery of other auditory and memory measures were examined. Results: Word-recognition performance on the WARRM was not affected by processing condition or set size and was near ceiling for the YNH and ONH listeners (99 and 98%, respectively) with both groups performing significantly better than the OHL listeners (83%). The recall results were significantly better for the YNH, ONH, and OHL groups with no processing (93, 84, and 75%, respectively) than with the alphabet processing (86, 77, and 70%). 
In both processing conditions, recall was best for YNH, followed by ONH, and worst for OHL listeners. WARRM recall scores were significantly correlated with other memory measures. In addition, WARRM recall scores were correlated with results on the Words-In-Noise (WIN) test for the OHL listeners in the no processing condition and for ONH listeners in the alphabet processing condition. Differences in the WIN and recall scores of these groups are consistent with the interpretation that the OHL listeners found listening to be sufficiently demanding to affect recall even in the no processing condition, whereas the ONH group listeners did not find it so demanding until the additional alphabet processing task was added. Conclusions: These findings demonstrate the feasibility of incorporating an auditory memory test into a word-recognition test to obtain measures of both word recognition and working memory simultaneously. The correlation of WARRM recall with scores from other memory measures is evidence of construct validity. The observation of correlations between the WIN thresholds with each of the older groups and recall scores in certain processing conditions suggests that recall depends on listeners' word-recognition abilities in noise in combination with the processing demands of the task. The recall score provides additional information beyond the pure-tone audiogram and word-recognition scores that may help rehabilitative audiologists assess the listening abilities of patients with hearing loss.
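    The test structure described above, recall set sizes of 2 to 6 items with five trials each, accounts exactly for the 100 test words, and the two scores are simple percentages. A minimal sketch under those assumptions; the scoring helper and demo responses are hypothetical, not the published scoring procedure.

```python
# Set sizes 2-6 with 5 trials each: 5 * (2+3+4+5+6) = 100 words in total
set_sizes = [2, 3, 4, 5, 6]
trials_per_size = 5
total_words = sum(s * trials_per_size for s in set_sizes)
print(total_words)  # 100

def score(trials):
    """Hypothetical scoring: each trial is a (recognized, recalled) pair of
    per-word True/False lists; returns (% recognized, % recalled)."""
    recognized = [w for rec, _ in trials for w in rec]
    recalled = [w for _, rcl in trials for w in rcl]
    return (100 * sum(recognized) / len(recognized),
            100 * sum(recalled) / len(recalled))

# Demo: one 2-item trial and one 3-item trial (invented responses)
demo = [([True, True], [True, False]),
        ([True, True, True], [True, True, False])]
recognition_pct, recall_pct = score(demo)
print(recognition_pct, recall_pct)  # 100.0 60.0
```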

  • 59.
    Smith, Sherri L
    et al.
    East Tennessee State University, USA.
    Pichora-Fuller, Kathleen
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Wilson, Richard H
    East Tennessee State University, USA.
    MacDonald, Ewen N
    Technical University of Denmark, Lyngby.
    Word Recognition for Temporally and Spectrally Distorted Materials: The Effects of Age and Hearing Loss. 2012. In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 33, no. 3, p. 349-366. Article in journal (Refereed)
    Abstract [en]

    Objectives: The purpose of Experiment 1 was to measure word recognition in younger adults with normal hearing when speech or babble was temporally or spectrally distorted. In Experiment 2, older listeners with near-normal hearing and with hearing loss (for pure tones) were tested to evaluate their susceptibility to changes in speech level and distortion types. The results across groups and listening conditions were compared to assess the extent to which the effects of the distortions on word recognition resembled the effects of age-related differences in auditory processing or pure-tone hearing loss.

    Design: In Experiment 1, word recognition was measured in 16 younger adults with normal hearing using Northwestern University Auditory Test No. 6 words in quiet and the Words-in-Noise test distorted by temporal jittering, spectral smearing, or combined jittering and smearing. Another 16 younger adults were evaluated in four conditions using the Words-in-Noise test in combinations of unaltered or jittered speech and unaltered or jittered babble. In Experiment 2, word recognition in quiet and in babble was measured in 72 older adults with near-normal hearing and 72 older adults with hearing loss in four conditions: unaltered, jittered, smeared, and combined jittering and smearing.

    Results: For the listeners in Experiment 1, word recognition was poorer in the distorted conditions compared with the unaltered condition. The signal to noise ratio at 50% correct word recognition was 4.6 dB for the unaltered condition, 6.3 dB for the jittered, 6.8 dB for the smeared, 6.9 dB for the double-jitter, and 8.2 dB for the combined jitter-smear conditions. Jittering both the babble and speech signals did not significantly reduce performance compared with jittering only the speech. In Experiment 2, the older listeners with near-normal hearing and hearing loss performed best in the unaltered condition, followed by the jitter and smear conditions, with the poorest performance in the combined jitter-smear condition in both quiet and noise. Overall, listeners with near-normal hearing performed better than listeners with hearing loss by approximately 30% in quiet and approximately 6 dB in noise. In the quiet distorted conditions, when the level of the speech was increased, performance improved for the hearing loss group but decreased for the older group with near-normal hearing. Recognition performance of younger listeners in the jitter-smear condition and the performance of older listeners with near-normal hearing in the unaltered conditions were similar. Likewise, the performance of older listeners with near-normal hearing in the jitter-smear condition and the performance of older listeners with hearing loss in the unaltered conditions were similar.

    Conclusions: The present experiments advance our understanding of how spectral or temporal distortions of the fine structure of speech affect word recognition in older listeners with and without clinically significant hearing loss. The Speech Intelligibility Index was able to predict group differences, but not the effects of distortion. Individual differences in performance were similar across all distortion conditions, with both age and hearing loss being implicated. The speech materials needed to be both spectrally and temporally distorted to mimic the effects of age-related differences in auditory processing and hearing loss.
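    The signal-to-noise ratios at 50% correct quoted in the Results are thresholds read from a psychometric function. A minimal sketch of one common way to obtain such a threshold, linear interpolation between the two bracketing points; the sample data are invented, and this is not necessarily the fitting procedure used in the study.

```python
def srt50(snrs, pct_correct):
    """SNR at 50% correct, linearly interpolated between the two
    (SNR, % correct) points that bracket 50%; SNRs must be ascending."""
    points = list(zip(snrs, pct_correct))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 <= 50 <= y1:
            return x0 + (50 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("50% point is not bracketed by the data")

# Invented psychometric data: % correct rises with SNR
print(srt50([2, 4, 6, 8], [20, 40, 60, 80]))  # 5.0 (dB SNR)
```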

  • 60.
    Voss, Susan
    et al.
    Smith College, Northampton, Massachusetts, USA.
    Stenfelt, Stefan
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Neely, Stephen
    Boys Town National Research Hospital, Omaha, Nebraska, USA.
    Rosowski, John
    Massachusetts Eye and Ear Infirmary, Boston, Massachusetts, USA.
    Factors that introduce intrasubject variability into ear-canal absorbance measurements. 2013. In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 34, Supplement 1, p. 60s-64s. Article in journal (Refereed)
    Abstract [en]

    Wideband immittance measures can be useful in analyzing acoustic sound flow through the ear and also have diagnostic potential for the identification of conductive hearing loss as well as causes of conductive hearing loss. To interpret individual measurements, the variability in test–retest data must be described and quantified. Contributors to variability in ear-canal absorbance–based measurements are described in this article. These include assumptions related to methodologies and issues related to the probe fit within the ear and potential acoustic leaks. Evidence suggests that variations in ear-canal cross-sectional area or measurement location are small relative to variability within a population. Data are shown to suggest that the determination of the Thévenin equivalent of the ER-10C probe introduces minimal variability and is independent of the foam ear tip itself. It is suggested that acoustic leaks in the coupling of the ear tip to the ear canal lead to substantial variations and that this issue needs further work in terms of potential criteria to identify an acoustic leak. In addition, test–retest data from the literature are reviewed.
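    Ear-canal absorbance in this literature is conventionally derived from the measured pressure reflectance R as A = 1 - |R|^2. A minimal sketch of that conversion; the reflectance values below are invented for illustration.

```python
def absorbance(reflectance):
    """Power absorbance from complex pressure reflectance: A = 1 - |R|^2."""
    return [1 - abs(r) ** 2 for r in reflectance]

# Invented complex reflectance at three frequencies
R = [0.9, 0.5 + 0.5j, 0.1]
print([round(a, 2) for a in absorbance(R)])  # [0.19, 0.5, 0.99]
```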

  • 61.
    Wang, Yang
    et al.
    Vrije Univ Amsterdam, Netherlands; Amsterdam Publ Hlth Res Inst, Netherlands; Oticon AS, Denmark.
    Naylor, Graham
    MRC, Scotland.
    Kramer, Sophia E.
    Vrije Univ Amsterdam, Netherlands; Amsterdam Publ Hlth Res Inst, Netherlands.
    Zekveld, Adriana
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, The Swedish Institute for Disability Research. Linköping University, Faculty of Arts and Sciences. Vrije Univ Amsterdam, Netherlands; Amsterdam Publ Hlth Res Inst, Netherlands.
    Wendt, Dorothea
    Oticon AS, Denmark; Tech Univ Denmark, Denmark.
    Ohlenforst, Barbara
    Vrije Univ Amsterdam, Netherlands; Amsterdam Publ Hlth Res Inst, Netherlands; Oticon AS, Denmark.
    Lunner, Thomas
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Oticon AS, Denmark; Tech Univ Denmark, Denmark.
Relations Between Self-Reported Daily-Life Fatigue, Hearing Status, and Pupil Dilation During a Speech Perception in Noise Task (2018). In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 39, no 3, p. 573-582. Article in journal (Refereed)
    Abstract [en]

Objective: People with hearing impairment are likely to experience higher levels of fatigue because of effortful listening in daily communication. This hearing-related fatigue might not only constrain their work performance but also result in withdrawal from major social roles. Therefore, it is important to understand the relationships between fatigue, listening effort, and hearing impairment by examining the evidence from both subjective and objective measurements. The aim of the present study was to investigate these relationships by assessing subjectively measured daily-life fatigue (self-report questionnaires) and objectively measured listening effort (pupillometry) in both normally hearing and hearing-impaired participants. Design: Twenty-seven normally hearing and 19 age-matched participants with hearing impairment were included in this study. Two self-report fatigue questionnaires, the Need For Recovery scale and the Checklist Individual Strength, were given to the participants before the test session to evaluate subjectively measured daily fatigue. Participants were asked to perform a speech reception threshold test with a single-talker masker targeting a 50% correct response criterion. The pupil diameter was recorded during speech processing, and we used peak pupil dilation (PPD) as the main outcome measure of the pupillometry. Results: No correlation was found between subjectively measured fatigue and hearing acuity, nor was a group difference found between the normally hearing and the hearing-impaired participants on the fatigue scores. A significant negative correlation was found between self-reported fatigue and PPD. A similar correlation was found between the Speech Intelligibility Index required for 50% correct and PPD. Multiple regression analysis showed that factors representing "hearing acuity" and "self-reported fatigue" had equal and independent associations with the PPD during the speech-in-noise test.
Less fatigue and better hearing acuity were associated with a larger pupil dilation. Conclusions: To the best of our knowledge, this is the first study to investigate the relationship between a subjective measure of daily-life fatigue and an objective measure of pupil dilation, as an indicator of listening effort. These findings help to provide an empirical link between pupil responses, as observed in the laboratory, and daily-life fatigue.
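Peak pupil dilation, the main pupillometry outcome in the study above, is conventionally computed as the maximum baseline-corrected pupil diameter during a trial. A minimal sketch under assumed conventions (a pre-stimulus baseline window; trace values and sampling layout are hypothetical):

```python
# Peak pupil dilation (PPD): the maximum pupil diameter during a trial
# after subtracting the mean of a pre-stimulus baseline window.
# Trace values and window length are hypothetical.

def peak_pupil_dilation(trace_mm, baseline_samples):
    """Subtract the mean of the first `baseline_samples` points, then
    take the maximum of the remaining (post-onset) samples."""
    baseline = sum(trace_mm[:baseline_samples]) / baseline_samples
    return max(d - baseline for d in trace_mm[baseline_samples:])

# Hypothetical trace in mm: flat baseline, then dilation while listening.
trace = [3.50, 3.52, 3.48, 3.50, 3.60, 3.75, 3.82, 3.78, 3.65]
print(round(peak_pupil_dilation(trace, baseline_samples=4), 2))  # -> 0.32
```

Baseline correction is what lets PPD be compared across trials and participants despite differences in resting pupil size.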

  • 62.
    Wasmann, Jan-Willem A.
    et al.
    Radboud Univ Nijmegen Med Ctr, Netherlands.
    Lanting, Cris P.
    Radboud Univ Nijmegen Med Ctr, Netherlands.
    Huinck, Wendy J.
    Radboud Univ Nijmegen Med Ctr, Netherlands.
    Mylanus, Emmanuel A. M.
    Radboud Univ Nijmegen Med Ctr, Netherlands.
    van der Laak, Jeroen
    Linköping University, Department of Health, Medicine and Caring Sciences, Division of Diagnostics and Specialist Medicine. Linköping University, Faculty of Medicine and Health Sciences. Region Östergötland, Center for Diagnostics, Clinical pathology. Radboud Univ Nijmegen Med Ctr, Netherlands.
    Govaerts, Paul J.
    The Eargrp, Belgium.
    Swanepoel, De Wet
    Univ Pretoria, South Africa.
    Moore, David R.
    Cincinnati Childrens Hosp Med Ctr, OH 45229 USA; Univ Cincinnati, OH USA; Univ Manchester, England.
    Barbour, Dennis L.
    Washington Univ, MO 63110 USA.
Computational Audiology: New Approaches to Advance Hearing Health Care in the Digital Age (2021). In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 42, no 6, p. 1499-1507. Article in journal (Refereed)
    Abstract [en]

The global digital transformation enables computational audiology for advanced clinical applications that can reduce the global burden of hearing loss. In this article, we describe emerging hearing-related artificial intelligence applications and argue for their potential to improve the access, precision, and efficiency of hearing health care services. We also raise awareness of risks that must be addressed to enable a safe digital transformation in audiology. We envision a future where computational audiology is implemented via interoperable systems using shared data, and where health care providers adopt expanded roles within a network of distributed expertise. This effort should take place in a health care system where privacy, the responsibility of each stakeholder, and patient safety and autonomy are all guarded by design.

  • 63.
    Wendt, Dorothea
    et al.
    Eriksholm Res Ctr, Denmark; Tech Univ Denmark, Denmark.
    Hietkamp, Renskje K.
    Eriksholm Res Ctr, Denmark.
    Lunner, Thomas
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Eriksholm Res Ctr, Denmark; Tech Univ Denmark, Denmark.
Impact of Noise and Noise Reduction on Processing Effort: A Pupillometry Study (2017). In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 38, no 6, p. 690-700. Article in journal (Refereed)
    Abstract [en]

    Objectives: Speech perception in adverse listening situations can be exhausting. Hearing loss particularly affects processing demands, as it requires increased effort for successful speech perception in background noise. Signal processing in hearing aids and noise reduction (NR) schemes aim to counteract the effect of noise and reduce the effort required for speech recognition in adverse listening situations. The present study examined the benefit of NR schemes, applying a combination of a digital NR and directional microphones, for reducing the processing effort during speech recognition. Design: The effect of noise (intelligibility level) and different NR schemes on effort were evaluated by measuring the pupil dilation of listeners. In 2 different experiments, performance accuracy and peak pupil dilation (PPD) were measured in 24 listeners with hearing impairment while they performed a speech recognition task. The listeners were tested at 2 different signal to noise ratios corresponding to either the individual 50% correct (L50) or the 95% correct (L95) performance level in a 4-talker babble condition with and without the use of a NR scheme. Results: In experiment 1, the PPD differed in response to both changes in the speech intelligibility level (L50 versus L95) and NR scheme. The PPD increased with decreasing intelligibility, indicating higher processing effort under the L50 condition compared with the L95 condition. Moreover, the PPD decreased when the NR scheme was applied, suggesting that the processing effort was reduced. In experiment 2, 2 hearing aids using different NR schemes (fast-acting and slow-acting) were compared. Processing effort changed as indicated by the PPD depending on the hearing aids and therefore on the NR scheme. Larger PPDs were measured for the slow-acting NR scheme. Conclusions: The benefit of applying an NR scheme was demonstrated for both L50 and L95, that is, a situation at which the performance level was at a ceiling. 
This opens up new ways of evaluating hearing aids in situations where traditional speech reception measures are not sensitive.
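The individualized L50 level in the study above is typically found with an adaptive procedure; a one-up/one-down staircase is the classic way to target 50% correct. The sketch below uses assumed parameters (2 dB steps, 0 dB starting SNR), not necessarily the study's settings:

```python
# One-up/one-down adaptive staircase: lower the SNR after a correct
# response, raise it after an incorrect one, so the track converges on
# the 50%-correct speech reception threshold (L50). Step size and
# starting SNR are illustrative.

def srt_staircase(responses, start_snr_db=0.0, step_db=2.0):
    """Return the SNR (dB) presented on each trial for a sequence of
    correct/incorrect responses."""
    snr = start_snr_db
    track = []
    for correct in responses:
        track.append(snr)
        snr += -step_db if correct else step_db
    return track

# Hypothetical run: True = sentence repeated correctly.
print(srt_staircase([True, True, False, True, False, False, True]))
# -> [0.0, -2.0, -4.0, -2.0, -4.0, -2.0, 0.0]
```

In practice the threshold is then estimated by averaging the SNRs at the staircase reversals (or over the final trials) once the track has stabilized around the target performance level.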

  • 64.
    Zeitooni, Mehrnaz
    et al.
    Linköping University, Department of Clinical and Experimental Medicine, Division of Neuro and Inflammation Science. Linköping University, Faculty of Medicine and Health Sciences.
    Mäki-Torkko, Elina
    Linköping University, Department of Clinical and Experimental Medicine, Division of Neuro and Inflammation Science. Linköping University, Faculty of Medicine and Health Sciences.
    Stenfelt, Stefan
    Linköping University, Department of Clinical and Experimental Medicine, Division of Neuro and Inflammation Science. Linköping University, Faculty of Medicine and Health Sciences.
Binaural Hearing Ability With Bilateral Bone Conduction Stimulation in Subjects With Normal Hearing: Implications for Bone Conduction Hearing Aids (2016). In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 37, no 6, p. 690-702. Article in journal (Refereed)
    Abstract [en]

    OBJECTIVES: The purpose of this study is to evaluate binaural hearing ability in adults with normal hearing when bone conduction (BC) stimulation is bilaterally applied at the bone conduction hearing aid (BCHA) implant position as well as at the audiometric position on the mastoid. The results with BC stimulation are compared with bilateral air conduction (AC) stimulation through earphones.

    DESIGN: Binaural hearing ability is investigated with tests of spatial release from masking and binaural intelligibility level difference using sentence material, binaural masking level difference with tonal chirp stimulation, and precedence effect using noise stimulus.

    RESULTS: In all tests, results with bilateral BC stimulation at the BCHA position illustrate an ability to extract binaural cues similar to BC stimulation at the mastoid position. The binaural benefit is overall greater with AC stimulation than BC stimulation at both positions. The binaural benefit for BC stimulation at the mastoid and BCHA position is approximately half in terms of decibels compared with AC stimulation in the speech based tests (spatial release from masking and binaural intelligibility level difference). For binaural masking level difference, the binaural benefit for the two BC positions with chirp signal phase inversion is approximately twice the benefit with inverted phase of the noise. The precedence effect results with BC stimulation at the mastoid and BCHA position are similar for low frequency noise stimulation but differ with high-frequency noise stimulation.

    CONCLUSIONS: The results confirm that binaural hearing processing with bilateral BC stimulation at the mastoid position is also present at the BCHA implant position. This indicates the ability for binaural hearing in patients with good cochlear function when using bilateral BCHAs.

  • 65.
    Zekveld, Adriana
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Vrije Univ Amsterdam, Netherlands.
    Kramer, Sophia E.
    Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery and Amsterdam Public Health research institute VU University Medical Center, Amsterdam, The Netherlands.
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
In a Concurrent Memory and Auditory Perception Task, the Pupil Dilation Response Is More Sensitive to Memory Load Than to Auditory Stimulus Characteristics (2019). In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 40, no 2, p. 272-286. Article in journal (Refereed)
    Abstract [en]

    Objectives: Speech understanding may be cognitively demanding, but it can be enhanced when semantically related text cues precede auditory sentences. The present study aimed to determine whether (a) providing text cues reduces pupil dilation, a measure of cognitive load, during listening to sentences, (b) repeating the sentences aloud affects recall accuracy and pupil dilation during recall of cue words, and (c) semantic relatedness between cues and sentences affects recall accuracy and pupil dilation during recall of cue words.

    Design: Sentence repetition following text cues and recall of the text cues were tested. Twenty-six participants (mean age, 22 years) with normal hearing listened to masked sentences. On each trial, a set of four-word cues was presented visually as text preceding the auditory presentation of a sentence whose meaning was either related or unrelated to the cues. On each trial, participants first read the cue words, then listened to a sentence. Following this they spoke aloud either the cue words or the sentence, according to instruction, and finally on all trials orally recalled the cues. Peak pupil dilation was measured throughout listening and recall on each trial. Additionally, participants completed a test measuring the ability to perceive degraded verbal text information and three working memory tests (a reading span test, a size-comparison span test, and a test of memory updating).

    Results: Cue words that were semantically related to the sentence facilitated sentence repetition but did not reduce pupil dilation. Recall was poorer and there were more intrusion errors when the cue words were related to the sentences. Recall was also poorer when sentences were repeated aloud. Both behavioral effects were associated with greater pupil dilation. Larger reading span capacity and smaller size-comparison span were associated with larger peak pupil dilation during listening. Furthermore, larger reading span and greater memory updating ability were both associated with better cue recall overall.

    Conclusions: Although sentence-related word cues facilitate sentence repetition, our results indicate that they do not reduce cognitive load during listening in noise with a concurrent memory load. As expected, higher working memory capacity was associated with better recall of the cues. Unexpectedly, however, semantic relatedness with the sentence reduced word cue recall accuracy and increased intrusion errors, suggesting an effect of semantic confusion. Further, speaking the sentence aloud also reduced word cue recall accuracy, probably due to articulatory suppression. Importantly, imposing a memory load during listening to sentences resulted in the absence of formerly established strong effects of speech intelligibility on the pupil dilation response. This nullified intelligibility effect demonstrates that the pupil dilation response to a cognitive (memory) task can completely overshadow the effect of perceptual factors on the pupil dilation response. This highlights the importance of taking cognitive task load into account during auditory testing.

  • 66.
    Zekveld, Adriana
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Festen, Joost M.
    VU University Medical Center Amsterdam, The Netherlands.
    Van Beek, Johannes H M
    VU University Medical Center Amsterdam, The Netherlands.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
The influence of semantically related and unrelated text cues on the intelligibility of sentences in noise (2011). In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 32, no 6, p. 16-25. Article in journal (Refereed)
    Abstract [en]

    Objectives: In two experiments with different subject groups, we explored the relationship between semantic context and intelligibility by examining the influence of visually presented, semantically related, and unrelated three-word text cues on perception of spoken sentences in stationary noise across a range of speech-to-noise ratios (SNRs). In addition, in Experiment (Exp) 2, we explored the relationship between individual differences in cognitive factors and the effect of the cues on speech intelligibility.

    Design: In Exp 1, cues had been generated by participants themselves in a previous test session (own) or by someone else (alien). These cues were either appropriate for that sentence (match) or for a different sentence (mismatch). A condition with nonword cues, generated by the experimenter, served as a control. Experimental sentences were presented at three SNRs (dB SNR) corresponding to the entirely correct repetition of 29%, 50%, or 71% of sentences (speech reception thresholds; SRTs). In Exp 2, semantically matching or mismatching cues and nonword cues were presented before sentences at SNRs corresponding to SRTs of 16% and 29%. The participants in Exp 2 also performed tests of verbal working memory capacity and the ability to read partially masked text.

    Results: In Exp 1, matching cues improved perception relative to the nonword and mismatching cues, with largest benefits at the SNR corresponding to 29% performance in the SRT task. Mismatching cues did not impair speech perception relative to the nonword cue condition, and no difference in the effect of own and alien matching cues was observed. In Exp 2, matching cues improved speech perception as measured using both the percentage of correctly reported words and the percentage of entirely correctly reported sentences. Mismatching cues reduced the percentage of repeated words (but not the sentence-based scores) compared with the nonword cue condition. Working memory capacity and ability to read partly masked sentences were positively associated with the number of sentences repeated entirely correctly in the mismatch condition at the 29% SNR.

    Conclusions: In difficult listening conditions, both relevant and irrelevant semantic context can influence speech perception in noise. High working memory capacity and good linguistic skills are associated with a greater ability to inhibit irrelevant context when uncued sentence intelligibility is around 29% correct.

  • 67.
    Zekveld, Adriana
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Festen, Joost M
    Vrije University of Amsterdam Medical Centre.
    van Beek, Johannes H M
    Vrije University of Amsterdam Medical Centre.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
The Influence of Semantically Related and Unrelated Text Cues on the Intelligibility of Sentences in Noise (2011). In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 32, no 6, p. E16-E25. Article in journal (Refereed)
    Abstract [en]

Objectives: In two experiments with different subject groups, we explored the relationship between semantic context and intelligibility by examining the influence of visually presented, semantically related, and unrelated three-word text cues on perception of spoken sentences in stationary noise across a range of speech-to-noise ratios (SNRs). In addition, in Experiment (Exp) 2, we explored the relationship between individual differences in cognitive factors and the effect of the cues on speech intelligibility.

Design: In Exp 1, cues had been generated by participants themselves in a previous test session (own) or by someone else (alien). These cues were either appropriate for that sentence (match) or for a different sentence (mismatch). A condition with nonword cues, generated by the experimenter, served as a control. Experimental sentences were presented at three SNRs (dB SNR) corresponding to the entirely correct repetition of 29%, 50%, or 71% of sentences (speech reception thresholds; SRTs). In Exp 2, semantically matching or mismatching cues and nonword cues were presented before sentences at SNRs corresponding to SRTs of 16% and 29%. The participants in Exp 2 also performed tests of verbal working memory capacity and the ability to read partially masked text.

Results: In Exp 1, matching cues improved perception relative to the nonword and mismatching cues, with largest benefits at the SNR corresponding to 29% performance in the SRT task. Mismatching cues did not impair speech perception relative to the nonword cue condition, and no difference in the effect of own and alien matching cues was observed. In Exp 2, matching cues improved speech perception as measured using both the percentage of correctly reported words and the percentage of entirely correctly reported sentences. Mismatching cues reduced the percentage of repeated words (but not the sentence-based scores) compared with the nonword cue condition. Working memory capacity and ability to read partly masked sentences were positively associated with the number of sentences repeated entirely correctly in the mismatch condition at the 29% SNR.

Conclusions: In difficult listening conditions, both relevant and irrelevant semantic context can influence speech perception in noise. High working memory capacity and good linguistic skills are associated with a greater ability to inhibit irrelevant context when uncued sentence intelligibility is around 29% correct.
