liu.se: Search for publications in DiVA
1 - 27 of 27
  • 1.
    Andin, Josefine
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Fransson, Peter
    Karolinska Inst, Sweden.
    Dahlström, Örjan
    Linköping University, Department of Behavioural Sciences and Learning, Psychology. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    The neural basis of arithmetic and phonology in deaf signing individuals (2019). In: Language, Cognition and Neuroscience, ISSN 2327-3798, E-ISSN 2327-3801, Vol. 34, no 7, p. 813-825. Article in journal (Refereed)
    Abstract [en]

    Deafness is generally associated with poor mental arithmetic, possibly due to neuronal differences in arithmetic processing across language modalities. Here, we investigated for the first time the neuronal networks supporting arithmetic processing in adult deaf signers. Deaf signing adults and hearing non-signing peers performed arithmetic and phonological tasks during fMRI scanning. At whole brain level, activation patterns were similar across groups. Region of interest analyses showed that although both groups activated phonological processing regions in the left inferior frontal gyrus to a similar extent during both phonological and multiplication tasks, deaf signers showed significantly more activation in the right horizontal portion of the intraparietal sulcus. This region is associated with magnitude manipulation along the mental number line. This pattern of results suggests that deaf signers rely more on magnitude manipulation than hearing non-signers during multiplication, but that phonological involvement does not differ significantly between groups. Abbreviations: AAL: Automated Anatomy Labelling; fMRI: functional magnetic resonance imaging; HIPS: horizontal portion of the intraparietal sulcus; lAG: left angular gyrus; lIFG: left inferior frontal gyrus; rHIPS: right horizontal portion of the intraparietal sulcus
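
The region-of-interest comparison summarised in this abstract (per-group activation in a predefined parietal ROI during the multiplication task) can be illustrated with a minimal sketch using nilearn and scipy. The ROI mask, image file names, and group sizes below are hypothetical placeholders, not the authors' actual data or pipeline.

```python
# Minimal sketch of an ROI group comparison, assuming one contrast image per
# subject and a binary ROI mask; all file names are hypothetical.
from nilearn.maskers import NiftiMasker
from scipy import stats

masker = NiftiMasker(mask_img="rHIPS_roi.nii.gz")  # stand-in for the right HIPS ROI

def roi_mean(contrast_imgs):
    """Per-subject mean contrast value within the ROI."""
    return masker.fit_transform(contrast_imgs).mean(axis=1)

deaf = roi_mean([f"deaf_{i:02d}_multiplication.nii.gz" for i in range(1, 17)])
hearing = roi_mean([f"hearing_{i:02d}_multiplication.nii.gz" for i in range(1, 17)])

# Between-group comparison of ROI activation during the multiplication task.
t, p = stats.ttest_ind(deaf, hearing)
print(f"rHIPS, multiplication contrast: t = {t:.2f}, p = {p:.3f}")
```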

  • 2.
    Andin, Josefine
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Fransson, Peter
    Karolinska Inst, Sweden.
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    fMRI Evidence of Magnitude Manipulation during Numerical Order Processing in Congenitally Deaf Signers (2018). In: Neural Plasticity, ISSN 2090-5904, E-ISSN 1687-5443, article id 2576047. Article in journal (Refereed)
    Abstract [en]

    Congenital deafness is often compensated by early sign language use leading to typical language development with corresponding neural underpinnings. However, deaf individuals are frequently reported to have poorer numerical abilities than hearing individuals and it is not known whether the underlying neuronal networks differ between groups. In the present study, adult deaf signers and hearing nonsigners performed digit and letter order tasks during functional magnetic resonance imaging. We found the neuronal networks recruited in the two tasks to be generally similar across groups, with significant activation in the dorsal visual stream for the letter order task, suggesting letter identification and position encoding. For the digit order task, no significant activation was found for either of the two groups. Region of interest analyses on parietal numerical processing regions revealed different patterns of activation across groups. Importantly, deaf signers showed significant activation in the right horizontal portion of the intraparietal sulcus for the digit order task, suggesting engagement of magnitude manipulation during numerical order processing in this group.

  • 3.
    Blomberg, Rina
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Danielsson, Henrik
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Söderlund, Göran B. W.
    Western Norway Univ Appl Sci, Norway.
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Speech Processing Difficulties in Attention Deficit Hyperactivity Disorder (2019). In: Frontiers in Psychology, ISSN 1664-1078, E-ISSN 1664-1078, Vol. 10, article id 1536. Article in journal (Refereed)
    Abstract [en]

    The large body of research that forms the ease of language understanding (ELU) model emphasizes the important contribution of cognitive processes when listening to speech in adverse conditions; however, speech-in-noise (SIN) processing is yet to be thoroughly tested in populations with cognitive deficits. The purpose of the current study was to contribute to the field in this regard by assessing SIN performance in a sample of adolescents with attention deficit hyperactivity disorder (ADHD) and comparing results with age-matched controls. This population was chosen because core symptoms of ADHD include developmental deficits in cognitive control and working memory capacity and because these top-down processes are thought to reach maturity during adolescence in individuals with typical development. The study utilized natural language sentence materials under experimental conditions that manipulated the dependency on cognitive mechanisms in varying degrees. In addition, participants were tested on cognitive capacity measures of complex working memory-span, selective attention, and lexical access. Primary findings were in support of the ELU-model. Age was shown to significantly covary with SIN performance, and after controlling for age, ADHD participants demonstrated greater difficulty than controls with the experimental manipulations. In addition, overall SIN performance was strongly predicted by individual differences in cognitive capacity. Taken together, the results highlight the general disadvantage persons with deficient cognitive capacity have when attending to speech in typically noisy listening environments. Furthermore, the consistently poorer performance observed in the ADHD group suggests that auditory processing tasks designed to tax attention and working memory capacity may prove to be beneficial clinical instruments when diagnosing ADHD.
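
A hedged sketch of the kind of analysis described in this abstract: testing whether group (ADHD vs. control) predicts speech-in-noise (SIN) performance after controlling for age, and whether individual differences in cognitive capacity predict overall SIN performance. The data file and column names are hypothetical, not the study's materials.

```python
# Regression sketch with hypothetical columns: sin_score, group, age,
# wm_span, attention, lexical_access.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sin_adhd.csv")

# Group effect with age as a covariate (ANCOVA-style model).
m1 = smf.ols("sin_score ~ C(group) + age", data=df).fit()
print(m1.summary())

# Individual differences: cognitive capacity measures as predictors of SIN performance.
m2 = smf.ols("sin_score ~ wm_span + attention + lexical_access + age", data=df).fit()
print(round(m2.rsquared, 3), m2.params)
```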

  • 4.
    Cardin, Velia
    et al.
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. UCL, England; Univ East Anglia, England.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    De Oliveira, Rita F.
    London South Bank Univ, England.
    Andin, Josefine
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Su, Merina T.
    UCL GOS Inst Child Hlth, England.
    Beese, Lilli
    UCL, England.
    Woll, Bencie
    UCL, England.
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    The Organization of Working Memory Networks is Shaped by Early Sensory Experience (2018). In: Cerebral Cortex, ISSN 1047-3211, E-ISSN 1460-2199, Vol. 28, no 10, p. 3540-3554. Article in journal (Refereed)
    Abstract [en]

    Early deafness results in crossmodal reorganization of the superior temporal cortex (STC). Here, we investigated the effect of deafness on cognitive processing. Specifically, we studied the reorganization, due to deafness and sign language (SL) knowledge, of linguistic and nonlinguistic visual working memory (WM). We conducted an fMRI experiment in groups that differed in their hearing status and SL knowledge: deaf native signers, hearing native signers, and hearing nonsigners. Participants performed a 2-back WM task and a control task. Stimuli were signs from British Sign Language (BSL) or moving nonsense objects in the form of point-light displays. We found characteristic WM activations in fronto-parietal regions in all groups. However, deaf participants also recruited bilateral posterior STC during the WM task, independently of the linguistic content of the stimuli, and showed less activation in fronto-parietal regions. Resting-state connectivity analysis showed increased connectivity between frontal regions and STC in deaf compared to hearing individuals. WM for signs did not elicit differential activations, suggesting that SL WM does not rely on modality-specific linguistic processing. These findings suggest that WM networks are reorganized due to early deafness, and that the organization of cognitive networks is shaped by the nature of the sensory inputs available during development.

  • 5.
    Foo, Catharina
    et al.
    CDD IBV.
    Rudner, Mary
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences, The Swedish Institute for Disability Research.
    Rönnberg, Jerker
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences, The Swedish Institute for Disability Research.
    Lunner, Thomas
    Technical Audiology, INR.
    Recognition of speech in noise with new hearing instrument compression release settings requires explicit cognitive storage and processing capacity (2007). In: Journal of the American Academy of Audiology, ISSN 1050-0545, Vol. 18, no 7, p. 618-631. Article in journal (Refereed)
    Abstract [en]

    Evidence suggests that cognitive capacity predicts the ability to benefit from specific compression release settings in non-linear digital hearing instruments. Previous studies have investigated the predictive value of various cognitive tests in relation to aided speech recognition in noise using compression release settings that have been experienced for a certain period. However, the predictive value of cognitive tests with new settings, to which the user has not had the opportunity to become accustomed, has not been studied. In the present study, we compare the predictive values of two cognitive tests, reading span and letter monitoring, in relation to aided speech recognition in noise for 32 habitual hearing instrument users using new compression release settings. We found that reading span was a strong predictor of speech recognition in noise with new compression release settings. This result generalizes previous findings for experienced test settings to new test settings, for both speech recognition in noise tests used in the present study, Hagerman sentences and HINT. Letter monitoring, on the other hand, was not found to be a strong predictor of speech recognition in noise with new compression release settings.

  • 6.
    Holmer, Emil
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Heimann, Mikael
    Linköping University, Department of Behavioural Sciences and Learning, Psychology. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Computerized Sign Language-Based Literacy Training for Deaf and Hard-of-Hearing Children (2017). In: Journal of Deaf Studies and Deaf Education, ISSN 1081-4159, E-ISSN 1465-7325, Vol. 22, no 4, p. 404-421. Article in journal (Refereed)
    Abstract [en]

    Strengthening the connections between sign language and written language may improve reading skills in deaf and hard-of-hearing (DHH) signing children. The main aim of the present study was to investigate whether computerized sign language-based literacy training improves reading skills in DHH signing children who are learning to read. Further, longitudinal associations between sign language skills and developing reading skills were investigated. Participants were recruited from Swedish state special schools for DHH children, where pupils are taught in both sign language and spoken language. Reading skills were assessed at five occasions and the intervention was implemented in a cross-over design. Results indicated that reading skills improved over time and that development of word reading was predicted by the ability to imitate unfamiliar lexical signs, but there was only weak evidence that it was supported by the intervention. These results demonstrate for the first time a longitudinal link between sign-based abilities and word reading in DHH signing children who are learning to read. We suggest that the active construction of novel lexical forms may be a supramodal mechanism underlying word reading development.

  • 7.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Working Memory for Linguistic and Non-linguistic Manual Gestures: Evidence, Theory, and Application (2018). In: Frontiers in Psychology, ISSN 1664-1078, E-ISSN 1664-1078, Vol. 9, article id 679. Article, review/survey (Refereed)
    Abstract [en]

    Linguistic manual gestures are the basis of sign languages used by deaf individuals. Working memory and language processing are intimately connected and thus when language is gesture-based, it is important to understand related working memory mechanisms. This article reviews work on working memory for linguistic and non-linguistic manual gestures and discusses theoretical and applied implications. Empirical evidence shows that there are effects of load and stimulus degradation on working memory for manual gestures. These effects are similar to those found for working memory for speech-based language. Further, there are effects of pre-existing linguistic representation that are partially similar across language modalities. But above all, deaf signers score higher than hearing non-signers on an n-back task with sign-based stimuli, irrespective of their semantic and phonological content, but not with non-linguistic manual actions. This pattern may be partially explained by recent findings relating to cross-modal plasticity in deaf individuals. It suggests that in linguistic gesture-based working memory, semantic aspects may outweigh phonological aspects when processing takes place under challenging conditions. The close association between working memory and language development should be taken into account in understanding and alleviating the challenges faced by deaf children growing up with cochlear implants as well as other clinical populations.

  • 8.
    Rudner, Mary
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Danielsson, Henrik
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Lyxell, Björn
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Univ Oslo, Norway.
    Lunner, Thomas
    Oticon AS, Denmark.
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Visual Rhyme Judgment in Adults With Mild-to-Severe Hearing Loss (2019). In: Frontiers in Psychology, ISSN 1664-1078, E-ISSN 1664-1078, Vol. 10, article id 1149. Article in journal (Refereed)
    Abstract [en]

    Adults with poorer peripheral hearing have slower phonological processing speed measured using visual rhyme tasks, and it has been suggested that this is due to fading of phonological representations stored in long-term memory. Representations of both vowels and consonants are likely to be important for determining whether or not two printed words rhyme. However, it is not known whether the relation between phonological processing speed and hearing loss is specific to the lower frequency ranges which characterize vowels or higher frequency ranges that characterize consonants. We tested the visual rhyme ability of 212 adults with hearing loss. As in previous studies, we found that rhyme judgments were slower and less accurate when there was a mismatch between phonological and orthographic information. A substantial portion of the variance in the speed of making correct rhyme judgment decisions was explained by lexical access speed. Reading span, a measure of working memory, explained further variance in match but not mismatch conditions, but no additional variance was explained by auditory variables. This pattern of findings suggests possible reliance on a lexico-semantic word-matching strategy for solving the rhyme judgment task. Future work should investigate the relation between adoption of a lexico-semantic strategy during phonological processing tasks and hearing aid outcome.
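
The incremental-variance reasoning in this abstract can be sketched as a hierarchical regression: enter lexical access speed first, then reading span, and inspect the change in R². The data frame and column names are hypothetical placeholders.

```python
# Hierarchical regression sketch (hypothetical columns: rt_match = correct rhyme
# judgment speed in the match condition, lexical_access, reading_span).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rhyme_judgment.csv")

step1 = smf.ols("rt_match ~ lexical_access", data=df).fit()
step2 = smf.ols("rt_match ~ lexical_access + reading_span", data=df).fit()

print("R2, lexical access only:", round(step1.rsquared, 3))
print("R2, plus reading span:  ", round(step2.rsquared, 3))
print("Delta R2 (reading span):", round(step2.rsquared - step1.rsquared, 3))
```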

  • 9.
    Rudner, Mary
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Holmer, Emil
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Signoret, Carine
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Conversation in noise: investigating the effect of signal degradation on response generation (2017). Conference paper (Other academic)
  • 10.
    Rudner, Mary
    et al.
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Karlsson Foo, Catharina
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning.
    Rönnberg, Jerker
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences, The Swedish Institute for Disability Research.
    Lunner, Thomas
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Neuroscience and Locomotion, Technical Audiology.
    Retracted Article: Phonological mismatch makes aided speech recognition in noise cognitively taxing (2007). In: Ear and Hearing, ISSN 0196-0202, Vol. 28, issue 6, pp. 879-892. Other (Other academic)
    Abstract [en]

    OBJECTIVES: The working memory framework for Ease of Language Understanding predicts that speech processing becomes more effortful, thus requiring more explicit cognitive resources, when there is mismatch between speech input and phonological representations in long-term memory. To test this prediction, we changed the compression release settings in the hearing instruments of experienced users and allowed them to train for 9 weeks with the new settings. After training, aided speech recognition in noise was tested with both the trained settings and orthogonal settings. We postulated that training would lead to acclimatization to the trained setting, which in turn would involve establishment of new phonological representations in long-term memory. Further, we postulated that after training, testing with orthogonal settings would give rise to phonological mismatch, associated with more explicit cognitive processing. DESIGN: Thirty-two participants (mean = 70.3 years, SD = 7.7) with bilateral sensorineural hearing loss (pure-tone average = 46.0 dB HL, SD = 6.5), bilaterally fitted for more than 1 year with digital, two-channel, nonlinear signal processing hearing instruments and chosen from the patient population at the Linköping University Hospital were randomly assigned to 9 weeks training with new, fast (40 ms) or slow (640 ms), compression release settings in both channels. Aided speech recognition in noise performance was tested according to a design with three within-group factors: test occasion (T1, T2), test setting (fast, slow), and type of noise (unmodulated, modulated) and one between-group factor: experience setting (fast, slow) for two types of speech materials: the highly constrained Hagerman sentences and the less-predictable Hearing in Noise Test (HINT). Complex cognitive capacity was measured using the reading span and letter monitoring tests. PREDICTION: We predicted that speech recognition in noise at T2 with mismatched experience and test settings would be associated with more explicit cognitive processing and thus stronger correlations with complex cognitive measures, as well as poorer performance if complex cognitive capacity was exceeded. RESULTS: Under mismatch conditions, stronger correlations were found between performance on speech recognition with the Hagerman sentences and reading span, along with poorer speech recognition for participants with low reading span scores. No consistent mismatch effect was found with HINT. CONCLUSIONS: The mismatch prediction generated by the working memory framework for Ease of Language Understanding is supported for speech recognition in noise with the highly constrained Hagerman sentences but not the less-predictable HINT. © 2007 Lippincott Williams & Wilkins, Inc.
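
The mismatch prediction in this abstract boils down to comparing the correlation between reading span and aided speech recognition under matched versus mismatched settings. A minimal sketch, assuming a wide-format data file with hypothetical column names:

```python
# Correlate reading span with speech recognition at T2 for matched vs. mismatched
# compression release settings (Hagerman sentences). Columns are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("compression_training.csv")  # reading_span, srt_matched, srt_mismatched

for cond in ("srt_matched", "srt_mismatched"):
    r, p = stats.pearsonr(df["reading_span"], df[cond])
    print(f"{cond}: r = {r:.2f}, p = {p:.3f}")
```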

  • 11.
    Rudner, Mary
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Lyberg-Ahlander, Viveka
    Lund Univ, Sweden.
    Brännström, Jonas
    Lund Univ, Sweden.
    Nirme, Jens
    Lund Univ, Sweden.
    Pichora-Fuller, M. K.
    Univ Toronto, Canada.
    Sahlén, Birgitta
    Lund Univ, Sweden.
    Listening Comprehension and Listening Effort in the Primary School Classroom (2018). In: Frontiers in Psychology, ISSN 1664-1078, E-ISSN 1664-1078, Vol. 9, article id 1193. Article in journal (Refereed)
    Abstract [en]

    In the primary school classroom, children are exposed to multiple factors that combine to create adverse conditions for listening to and understanding what the teacher is saying. Despite the ubiquity of these conditions, there is little knowledge concerning the way in which various factors combine to influence listening comprehension and the effortfulness of listening. The aim of the present study was to investigate the combined effects of background noise, voice quality, and visual cues on children's listening comprehension and effort. To achieve this aim, we performed a set of four well-controlled, yet ecologically valid, experiments with 245 eight-year-old participants. Classroom listening conditions were simulated using a digitally animated talker with a dysphonic (hoarse) voice and background babble noise composed of several children talking. Results show that even low levels of babble noise interfere with listening comprehension, and there was some evidence that this effect was reduced by seeing the talker's face. Dysphonia did not significantly reduce listening comprehension scores, but it was considered unpleasant and made listening seem difficult, probably by reducing motivation to listen. We found some evidence that listening comprehension performance under adverse conditions is positively associated with individual differences in executive function. Overall, these results suggest that multiple factors combine to influence listening comprehension and effort for child listeners in the primary school classroom. The constellation of these room, talker, modality, and listener factors should be taken into account in the planning and design of educational and learning activities.

  • 12.
    Rudner, Mary
    et al.
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences, The Swedish Institute for Disability Research.
    Rönnberg, Jerker
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences, The Swedish Institute for Disability Research.
    Towards a functional ontology for working memory for sign and speech (2006). In: Cognitive Processing, ISSN 1612-4790, Vol. 7, p. S183-S186. Article in journal (Refereed)
  • 13.
    Rudner, Mary
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Seeto, Mark
    Natl Acoust Labs, Australia; HEARing CRC, Australia.
    Keidser, Gitte
    Natl Acoust Labs, Australia; HEARing CRC, Australia.
    Johnson, Blake
    Macquarie Univ, Australia.
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Poorer Speech Reception Threshold in Noise Is Associated With Lower Brain Volume in Auditory and Cognitive Processing Regions (2019). In: Journal of Speech, Language and Hearing Research, ISSN 1092-4388, E-ISSN 1558-9102, Vol. 62, no 4, p. 1117-1130. Article in journal (Refereed)
    Abstract [en]

    Purpose: Hearing loss is associated with changes in brain volume in regions supporting auditory and cognitive processing. The purpose of this study was to determine whether there is a systematic association between hearing ability and brain volume in cross-sectional data from a large nonclinical cohort of middle-aged adults available from the UK Biobank Resource (http://www.ukbiobank.ac.uk). Method: We performed a set of regression analyses to determine the association between speech reception threshold in noise (SRTn) and global brain volume as well as predefined regions of interest (ROIs) based on T1-weighted structural images, controlling for hearing-related comorbidities and cognition as well as demographic factors. In a 2nd set of analyses, we additionally controlled for hearing aid (HA) use. We predicted statistically significant associations globally and in ROIs including auditory and cognitive processing regions, possibly modulated by HA use. Results: Whole-brain gray matter volume was significantly lower for individuals with poorer SRTn. Furthermore, the volume of 9 predicted ROIs including both auditory and cognitive processing regions was lower for individuals with poorer SRTn. The greatest percentage difference (-0.57%) in ROI volume relating to a 1 SD worsening of SRTn was found in the left superior temporal gyrus. HA use did not substantially modulate the pattern of association between brain volume and SRTn. Conclusions: In a large middle-aged nonclinical population, poorer hearing ability is associated with lower brain volume globally as well as in cortical and subcortical regions involved in auditory and cognitive processing, but there was no conclusive evidence that this effect is moderated by HA use. This pattern of results supports the notion that poor hearing leads to reduced volume in brain regions recruited during speech understanding under challenging conditions. These findings should be tested in future longitudinal, experimental studies.
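
The ROI regression described in this abstract, including the percentage difference in volume per 1 SD worsening of SRTn, can be sketched as follows. The data file, reduced covariate set, and column names are hypothetical simplifications of the UK Biobank variables.

```python
# ROI volume as a function of SRTn, controlling for a reduced covariate set
# (hypothetical columns). The percentage change per 1 SD of SRTn is derived
# from the unstandardized coefficient.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("biobank_subset.csv")  # stg_left_volume, srtn, age, sex, cognition

model = smf.ols("stg_left_volume ~ srtn + age + C(sex) + cognition", data=df).fit()

pct_per_sd = 100 * model.params["srtn"] * df["srtn"].std() / df["stg_left_volume"].mean()
print(f"Volume difference per 1 SD worsening of SRTn: {pct_per_sd:.2f}%")
```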

  • 14.
    Rudner, Mary
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Signoret, Carine
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Holmer, Emil
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Auditory Semantic Illusions: investigating the effect of semantic cues on speech understanding (2017). Conference paper (Other academic)
  • 15.
    Rönnberg, Jerker
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Holmer, Emil
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Cognitive hearing science and ease of language understanding (2019). In: International Journal of Audiology, ISSN 1499-2027, E-ISSN 1708-8186, Vol. 58, no 5, p. 247-261. Article, review/survey (Refereed)
    Abstract [en]

    Objective: The current update of the Ease of Language Understanding (ELU) model evaluates the predictive and postdictive aspects of speech understanding and communication. Design: The aspects scrutinised concern: (1) Signal distortion and working memory capacity (WMC), (2) WMC and early attention mechanisms, (3) WMC and use of phonological and semantic information, (4) hearing loss, WMC and long-term memory (LTM), (5) WMC and effort, and (6) the ELU model and sign language. Study Samples: Relevant literature based on own or others' data was used. Results: Expectations 1-4 are supported whereas 5-6 are constrained by conceptual issues and empirical data. Further strands of research were addressed, focussing on WMC and contextual use, and on WMC deployment in relation to hearing status. A wider discussion of task demands, concerning, for example, inference-making and priming, is also introduced and related to the overarching ELU functions of prediction and postdiction. Finally, some new concepts and models that have been inspired by the ELU-framework are presented and discussed. Conclusions: The ELU model has been productive in generating empirical predictions/expectations, the majority of which have been confirmed. Nevertheless, new insights and boundary conditions need to be experimentally tested to further shape the model.

  • 16.
    Rönnberg, Jerker
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Lunner, Thomas
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Oticon AS, Denmark.
    Ng, Elaine Hoi Ning
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Lidestam, Björn
    Linköping University, Department of Behavioural Sciences and Learning, Psychology. Linköping University, Faculty of Arts and Sciences.
    Zekveld, Adriana
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, The Swedish Institute for Disability Research. Linköping University, Faculty of Arts and Sciences. Vrije University of Amsterdam, Netherlands; Vrije University of Amsterdam, Netherlands.
    Sörqvist, Patrik
    Linköping University, Department of Behavioural Sciences, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences. University of Gavle, Sweden.
    Lyxell, Björn
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Träff, Ulf
    Linköping University, Department of Behavioural Sciences and Learning, Psychology. Linköping University, Faculty of Arts and Sciences.
    Yumba, Wycliffe
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Classon, Elisabet
    Linköping University, Department of Clinical and Experimental Medicine, Division of Neuro and Inflammation Science. Linköping University, Faculty of Medicine and Health Sciences. Linköping University, The Swedish Institute for Disability Research. Region Östergötland, Local Health Care Services in Central Östergötland, Department of Acute Internal Medicine and Geriatrics.
    Hällgren, Mathias
    Linköping University, Department of Clinical and Experimental Medicine, Division of Neuro and Inflammation Science. Linköping University, Faculty of Medicine and Health Sciences. Region Östergötland, Anaesthetics, Operations and Specialty Surgery Center, Department of Otorhinolaryngology in Linköping.
    Larsby, Birgitta
    Linköping University, Department of Clinical and Experimental Medicine, Division of Neuro and Inflammation Science. Linköping University, Faculty of Medicine and Health Sciences.
    Signoret, Carine
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Pichora-Fuller, Kathleen
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, The Swedish Institute for Disability Research. Linköping University, Faculty of Arts and Sciences. University of Toronto, Canada; University of Health Network, Canada; Baycrest Hospital, Canada.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Danielsson, Henrik
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Stenfelt, Stefan
    Linköping University, Department of Clinical and Experimental Medicine, Division of Neuro and Inflammation Science. Linköping University, Faculty of Medicine and Health Sciences.
    Hearing impairment, cognition and speech understanding: exploratory factor analyses of a comprehensive test battery for a group of hearing aid users, the n200 study (2016). In: International Journal of Audiology, ISSN 1499-2027, E-ISSN 1708-8186, Vol. 55, no 11, p. 623-642. Article in journal (Refereed)
    Abstract [en]

    Objective: The aims of the current n200 study were to assess the structural relations between three classes of test variables (i.e. HEARING, COGNITION and aided speech-in-noise OUTCOMES) and to describe the theoretical implications of these relations for the Ease of Language Understanding (ELU) model. Study sample: Participants were 200 hard-of-hearing hearing-aid users, with a mean age of 60.8 years. Forty-three percent were females and the mean hearing threshold in the better ear was 37.4 dB HL. Design: LEVEL 1 factor analyses extracted one factor per test and/or cognitive function based on a priori conceptualizations. The more abstract LEVEL 2 factor analyses were performed separately for the three classes of test variables. Results: The HEARING test variables resulted in two LEVEL 2 factors, which we labelled SENSITIVITY and TEMPORAL FINE STRUCTURE; the COGNITIVE variables in one COGNITION factor only, and OUTCOMES in two factors, NO CONTEXT and CONTEXT. COGNITION predicted the NO CONTEXT factor to a stronger extent than the CONTEXT outcome factor. TEMPORAL FINE STRUCTURE and SENSITIVITY were associated with COGNITION and all three contributed significantly and independently to especially the NO CONTEXT outcome scores (R² = 0.40). Conclusions: All LEVEL 2 factors are important theoretically as well as for clinical assessment.
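
The LEVEL 2 step described in this abstract is an exploratory factor analysis run separately within each class of test variables. A minimal sketch for the HEARING class, assuming hypothetical LEVEL 1 scores and a two-factor solution:

```python
# Exploratory factor analysis sketch (hypothetical LEVEL 1 hearing scores).
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

hearing = pd.read_csv("n200_hearing_level1_scores.csv")
X = StandardScaler().fit_transform(hearing)

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)

# Factor labels are assigned after inspecting the loadings; shown here for illustration.
loadings = pd.DataFrame(fa.components_.T, index=hearing.columns,
                        columns=["SENSITIVITY", "TEMPORAL FINE STRUCTURE"])
print(loadings.round(2))
```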

  • 17.
    Rönnberg, Jerker
    et al.
    Linköping University, Department of Behavioural Sciences, The Swedish Institute for Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Karlsson Foo, Catharina
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Lunner, Thomas
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology . Linköping University, Faculty of Health Sciences.
    Cognition counts: A working memory system for ease of language understanding (ELU) (2008). In: International Journal of Audiology, ISSN 1499-2027, E-ISSN 1708-8186, Vol. 47, no Suppl. 2, p. 99-105. Article in journal (Refereed)
    Abstract [en]

    A general working memory system for ease of language understanding (ELU; Rönnberg, 2003a) is presented. The purpose of the system is to describe and predict the dynamic interplay between explicit and implicit cognitive functions, especially in conditions of poorly perceived or poorly specified linguistic signals. In relation to speech understanding, the system is based on (1) the quality and precision of phonological representations in long-term memory, (2) phonologically mediated lexical access speed, and (3) explicit storage and processing resources. If there is a mismatch between phonological information extracted from the speech signal and the phonological information represented in long-term memory, the system is assumed to produce a mismatch signal that invokes explicit processing resources. In the present paper, we focus on four aspects of the model which have led to the current, updated version: the language generality assumption; the mismatch assumption; chronological age; and the episodic buffer function of rapid, automatic multimodal binding of phonology (RAMBPHO). We evaluate the language generality assumption in relation to sign language and speech, and the mismatch assumption in relation to signal processing in hearing aids. Further, we discuss the effects of chronological age and the implications of RAMBPHO.

  • 18.
    Shirnin, Denis
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Lyxell, Björn
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Dahlström, Örjan
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Blomberg, Rina
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Signoret, Carine
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Speech perception in noise: prediction patterns of neural pre-activation in lexical processing (2017). Conference paper (Other academic)
    Abstract [en]

    The purpose of this study is to examine whether the neural correlates of lexical expectations could be used to predict speech in noise perception. We analyse magnetoencephalography (MEG) data from 20 normal hearing participants, who read a set of couplets (a pair of phrases with rhyming end words) prior to the experiment. During the experiment, the participants are asked to listen to the couplets, whose intelligibility is set to 80%. However, the last word is pronounced with a delay of 1600 ms (i.e. expectation gap) and is masked at 50% intelligibility. At the end of each couplet, the participants are asked to indicate if the last word was correct, i.e. corresponding to the expected word. Given the oscillatory characteristics of neural patterns of lexical expectations during the expectation gap, can we predict the participant’s actual perception of the last word? In order to approach this research question, we aim to identify the correlation patterns between instances of neural pre-activation occurring during the expectation gap and the type of answer given. According to the sequential design of the experiment, the expectation gap is placed 4400 ms prior to the time interval dedicated to the participant’s answer. A machine learning approach has been chosen as the main tool for pattern recognition.
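
The pattern-recognition step described in this abstract can be sketched as a cross-validated classifier that predicts the participant's answer from features extracted in the expectation-gap window. The feature matrix and label files are hypothetical placeholders for the real MEG-derived features.

```python
# Cross-validated decoding sketch: predict the given answer from expectation-gap
# MEG features (hypothetical .npy files).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.load("gap_window_features.npy")  # (n_trials, n_features), e.g. band power per sensor
y = np.load("answer_labels.npy")        # 1 = last word reported as expected, 0 = not

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("Cross-validated AUC:", round(scores.mean(), 2))
```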

  • 19.
    Signoret, Carine
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Blomberg, Rina
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Dahlström, Örjan
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Phonological expectations override semantic mismatch during speech in noise perception (2017). Conference paper (Other academic)
    Abstract [en]

    Perception of speech in noise is modulated by stimulus-driven and knowledge-driven processes. In the ELU model, working memory capacity (WM) has been proposed to play a determinant role in the resolution of a mismatch between knowledge-based predictions and stimulus-based processing. However, the neural correlates and the temporal course of the mismatch resolution have not been investigated. After exposure to 48 semantically coherent couplets, 20 normal-hearing participants were tested in a MEG study. Couplet sentences were presented in background noise with 80% intelligibility, except for the last word of the couplet, which was presented with 50% intelligibility. This last word could be either 1) the word that was in the exposure couplet, or 2) a phonologically related but semantically incorrect word, or 3) a semantically coherent but phonologically incorrect word, or 4) a semantically and phonologically incorrect word. Before the presentation of the last word, participants had time to predict it and their task was to answer if the presented word was the correct one. Behavioural results showed more errors in condition 2 than in conditions 3 or 4, suggesting that phonological compatibility overrides semantic mismatch when intelligibility is poor. Preliminary results of the neural correlates reflecting the role of WM in the mismatch resolution between the knowledge-driven and stimulus-driven processes will be presented.

  • 20.
    Signoret, Carine
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Blomberg, Rina
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Dahlström, Örjan
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Andersen, L. Møller
    Dept. of Clin. Neurosci., The Natl. Res. Facility for Magnetoencephalography, Karolinska Institute, Sweden.
    Lundqvist, D.
    Dept. of Clin. Neurosci., The Natl. Res. Facility for Magnetoencephalography, Karolinska Institute, Sweden.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Resolving discrepancies between incoming auditory information and linguistic expectations (2018). In: Neuroscience 2018: 48th annual meeting of Society for Neuroscience. Society for Neuroscience, 2018. Conference paper (Other academic)
    Abstract [en]

    Speech perception in noise is dependent on stimulus-driven and knowledge-driven processes. Here we investigate the neural correlates and time course of discrepancies between incoming auditory information (i.e. stimulus-driven processing) and linguistic expectations (knowledge-driven processing) by including 20 normal hearing adults in a MEG study. Participants read 48 rhyming sentence pairs beforehand. In the scanner, they listened to sentences that corresponded exactly to the read sentences except that the last word (presented after a 1600 ms delay and with 50% intelligibility) was only correct in half of the cases. Otherwise, it was 1) phonologically but not semantically related, 2) semantically but not phonologically related, or 3) neither phonologically nor semantically related to the sentence. Participants indicated by button press whether the last word matched the sentence they had read outside the scanner. Behavioural results showed more errors in condition 1 than in conditions 2 or 3, suggesting that phonological compatibility overrides semantic discrepancy when intelligibility is poor. Event-related field analysis demonstrated larger activity on frontal sites for correct than unrelated words, suggesting that the former were more accurately expected than the latter. An early M170 component was also observed, possibly reflecting expectation violation in the auditory modality. Dipole analysis will reveal whether M170 could be modulated by type of linguistic discrepancy. Distributed-network analysis will further our understanding of the time course and neural correlates of discrepancies between incoming auditory information and linguistic expectations.
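
The event-related field contrast described in this abstract (correct vs. unrelated final words, with attention to frontal sensors and the roughly 170 ms window) could be sketched in MNE-Python as follows; the epochs file and condition names are hypothetical.

```python
# ERF sketch: average epochs per condition and form a difference wave
# (correct minus unrelated). File and event names are hypothetical.
import mne

epochs = mne.read_epochs("last_word-epo.fif")
evoked_correct = epochs["correct"].average()
evoked_unrelated = epochs["unrelated"].average()

diff = mne.combine_evoked([evoked_correct, evoked_unrelated], weights=[1, -1])
diff.plot_joint(times=[0.17])  # inspect topography around the M170 latency
```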

  • 21.
    Signoret, Carine
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Blomberg, Rina
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Dahlström, Örjan
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Modulation of the neural expectation violation marker during speech perception in noise (2018). Conference paper (Other academic)
  • 22.
    Signoret, Carine
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Johnsrude, Ingrid
    Univ Western Ontario, Sch Commun Sci & Disorders, London, Canada; Univ Western Ontario, Dept Psychol, London, Canada.
    Classon, Elisabet
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Combined Effects of Form- and Meaning-Based Predictability on Perceived Clarity of Speech (2018). In: Journal of Experimental Psychology: Human Perception and Performance, ISSN 0096-1523, E-ISSN 1939-1277, Vol. 44, no 2, p. 277-285. Article in journal (Refereed)
    Abstract [en]

    The perceptual clarity of speech is influenced by more than just the acoustic quality of the sound; it also depends on contextual support. For example, a degraded sentence is perceived to be clearer when the content of the speech signal is provided with matching text (i.e., form-based predictability) before hearing the degraded sentence. Here, we investigate whether sentence-level semantic coherence (i.e., meaning-based predictability) enhances perceptual clarity of degraded sentences, and if so, whether the mechanism is the same as that underlying enhancement by matching text. We also ask whether form- and meaning-based predictability are related to individual differences in cognitive abilities. Twenty participants listened to spoken sentences that were either clear or degraded by noise vocoding and rated the clarity of each item. The sentences had either high or low semantic coherence. Each spoken word was preceded by the homologous printed word (matching text), or by a meaningless letter string (nonmatching text). Cognitive abilities were measured with a working memory test. Results showed that perceptual clarity was significantly enhanced both by matching text and by semantic coherence. Importantly, high coherence enhanced the perceptual clarity of the degraded sentences even when they were preceded by matching text, suggesting that the effects of form- and meaning-based predictions on perceptual clarity are independent and additive. However, when working memory capacity indexed by the Size-Comparison Span Test was controlled for, only form-based predictions enhanced perceptual clarity, and then only at some sound quality levels, suggesting that prediction effects are to a certain extent dependent on cognitive abilities.
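
The additive-effects claim in this abstract corresponds to a 2 x 2 within-subject analysis of the clarity ratings (text prime x semantic coherence) in which both main effects are significant without an interaction. A minimal sketch with a hypothetical long-format data file:

```python
# Repeated-measures ANOVA sketch (hypothetical columns: subject, prime
# ["match"/"nonmatch"], coherence ["high"/"low"], clarity rating).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("clarity_ratings_long.csv")

aov = AnovaRM(df, depvar="clarity", subject="subject",
              within=["prime", "coherence"]).fit()
print(aov.anova_table)
```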

  • 23.
    Signoret, Carine
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Hearing impairment and perceived clarity of predictable speech (2019). In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 40, no 5, p. 1140-1148. Article in journal (Refereed)
    Abstract [en]

    Objectives: The precision of stimulus-driven information is less critical for comprehension when accurate knowledge-based predictions of the upcoming stimulus can be generated. A recent study in listeners without hearing impairment (HI) has shown that form- and meaning-based predictability independently and cumulatively enhance perceived clarity of degraded speech. In the present study, we investigated whether form- and meaning-based predictability enhanced the perceptual clarity of degraded speech for individuals with moderate to severe sensorineural HI, a group for whom such enhancement may be particularly important.

    Design: Spoken sentences with high or low semantic coherence were degraded by noise-vocoding and preceded by matching or nonmatching text primes. Matching text primes allowed generation of form-based predictions while semantic coherence allowed generation of meaning-based predictions.

    Results: The results showed that both form- and meaning-based predictions make degraded speech seem clearer to individuals with HI. The benefit of form-based predictions was seen across levels of speech quality and was greater for individuals with HI in the present study than for individuals without HI in our previous study. However, for individuals with HI, the benefit of meaning-based predictions was only apparent when the speech was slightly degraded. When it was more severely degraded, the benefit of meaning-based predictions was only seen when matching text primes preceded the degraded speech. The benefit in terms of perceptual clarity of meaning-based predictions was positively related to verbal fluency but not working memory performance.

    Conclusions: Taken together, these results demonstrate that, for individuals with HI, form-based predictability has a robust effect on perceptual clarity that is greater than the effect previously shown for individuals without HI. However, when speech quality is moderately or severely degraded, meaning-based predictability is contingent on form-based predictability. Further, the ability to mobilize the lexicon seems to contribute to the strength of meaning-based predictions. Whereas individuals without HI may be able to devote explicit working memory capacity to storing meaning-based predictions, individuals with HI may already be using all available explicit capacity to process the degraded speech and thus become reliant on explicit skills, such as verbal fluency, to generate useful meaning-based predictions.
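    The individual-differences result above (the clarity benefit of meaning-based predictions relating to verbal fluency but not working memory) can be illustrated with a simple per-participant analysis. The sketch below is illustrative only; the column names and the use of Pearson correlations are assumptions, not the authors' exact analysis.

    import pandas as pd
    from scipy.stats import pearsonr

    def meaning_benefit_correlations(df: pd.DataFrame) -> dict:
        """df columns (assumed): participant, coherence ('high'/'low'),
        clarity_rating, verbal_fluency, working_memory."""
        subj = df.groupby("participant").agg(verbal_fluency=("verbal_fluency", "first"),
                                             working_memory=("working_memory", "first"))
        cells = df.pivot_table(index="participant", columns="coherence",
                               values="clarity_rating", aggfunc="mean")
        subj["benefit"] = cells["high"] - cells["low"]   # meaning-based clarity benefit
        return {name: pearsonr(subj["benefit"], subj[name])
                for name in ("verbal_fluency", "working_memory")}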

  • 24.
    Signoret, Carine
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    The contribution of phonological and semantic knowledge to the perceptual clarity of degraded speech2016Conference paper (Other academic)
  • 25.
    Signoret, Carine
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    The interplay of phonological and semantic knowledge during perception of degraded speech2016Conference paper (Other academic)
    Abstract [en]

    The perceptual clarity of speech depends not only on the acoustic quality of the sound, but also on linguistic support. In a set of three experiments, we investigated the interplay of phonological and semantic knowledge during speech perception in persons with normal hearing (NH) and impaired hearing (IH). Participants listened to grammatically correct spoken Swedish sentences at different sound quality levels (clear or degraded by noise vocoding). The sentences were more or less coherent, and each spoken word was preceded by a visual presentation, 200 ms beforehand, of either the same word (matching prime) or a consonant string (non-matching prime). Analysis of variance on rated clarity showed significant interactions between coherence and prime type: coherence was beneficial both with and without matching primes for NH, but only with matching primes for IH, although three-way interactions including sound quality level somewhat modified this picture. Preliminary fMRI results from NH suggest that processing of semantic coherence in the absence of matching primes is supported by the right middle temporal gyrus. These findings suggest that, when no phonological information is available, NH listeners mobilize long-term semantic representations to successfully utilize the semantic information in spoken sentences that are moderately degraded. Future work should investigate what prevents IH listeners from doing the same.
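    One way to test the coherence-by-prime-type interaction on rated clarity reported above is a repeated-measures ANOVA run within each listener group (a full NH-versus-IH comparison would require a mixed design). The sketch below uses statsmodels with assumed column names; it illustrates the type of analysis, not the authors' code.

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    def clarity_anova(ratings: pd.DataFrame):
        """ratings: one mean clarity rating per participant and condition cell, with
        columns (assumed): participant, coherence, prime, sound_quality, clarity."""
        model = AnovaRM(data=ratings, depvar="clarity", subject="participant",
                        within=["coherence", "prime", "sound_quality"])
        return model.fit().anova_table   # F, degrees of freedom, and p per effect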

  • 26.
    Stenfelt, Stefan
    et al.
    Linköping University, Department of Clinical and Experimental Medicine, Division of Neuro and Inflammation Science. Linköping University, Faculty of Medicine and Health Sciences.
    Lunner, Thomas
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Eriksholm Research Centre, Oticon A/S, Helsingor, Denmark.
    Ng, Elaine
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Lidestam, Björn
    Linköping University, Department of Behavioural Sciences and Learning, Psychology. Linköping University, Faculty of Arts and Sciences.
    Zekveld, Adriana
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. VU University Medical Center, Amsterdam, Netherlands.
    Sörqvist, Patrik
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. University of Gävle, Gävle, Sweden.
    Lyxell, Björn
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Träff, Ulf
    Linköping University, Department of Behavioural Sciences and Learning, Psychology. Linköping University, Faculty of Arts and Sciences.
    Yumba, Wycliffe
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Classon, Elisabet
    Linköping University, Department of Clinical and Experimental Medicine, Division of Neuro and Inflammation Science. Linköping University, Faculty of Medicine and Health Sciences. Linköping University, The Swedish Institute for Disability Research.
    Hällgren, Mathias
    Linköping University, Department of Clinical and Experimental Medicine, Division of Neuro and Inflammation Science. Linköping University, Faculty of Medicine and Health Sciences.
    Larsby, Birgitta
    Linköping University, Department of Clinical and Experimental Medicine, Division of Neuro and Inflammation Science. Linköping University, Faculty of Medicine and Health Sciences. Linköping University, The Swedish Institute for Disability Research.
    Signoret, Carine
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Pichora-Fuller, Kathleen
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. University of Toronto, Toronto, Canada.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Danielsson, Henrik
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Auditory, signal processing, and cognitive factors influencing speech perception in persons with hearing loss fitted with hearing aids – the N200 study2016Conference paper (Other academic)
    Abstract [en]

    Objective: The aim of the current study was to assess aided speech-in-noise outcomes and relate those measures to auditory sensitivity and processing, different types of cognitive processing abilities, and signal processing in hearing aids.

    Material and method: Participants were 200 hearing-aid wearers (mean age 60.8 years, 43% female) with average better-ear hearing thresholds of 37.4 dB HL. Tests of auditory function were hearing thresholds, DPOAEs, tests of fine-structure processing, IHC dead regions, spectro-temporal modulation, and speech recognition in quiet (PB words). Tests of cognitive processing function were tests of phonological skills, working memory, executive functions and inference-making abilities, and general cognitive tests (e.g., tests of cognitive decline and IQ). The outcome test variables were the Hagerman sentences at 50 and 80% speech recognition levels, using two different noises (stationary speech-weighted noise and 4-talker babble) and three types of signal processing (linear gain, fast-acting compression, and linear gain plus a non-ideal binary mask). Another sentence test included typical and atypical sentences with contextual cues, presented both audio-visually and in an auditory-only mode. Moreover, the HINT and the SSQ were administered.

    Analysis: Factor analyses were performed separately for the auditory, cognitive, and outcome tests.

    Results: The auditory tests resulted in two factors labeled SENSITIVITY and TEMPORAL FINE STRUCTURE, the cognitive tests in one factor (COGNITION), and the outcome tests in two factors termed NO CONTEXT and CONTEXT, which reflect the level of context in the different outcome tests. When age was partialled out, COGNITION was moderately correlated with the TEMPORAL FINE STRUCTURE and NO CONTEXT factors but only weakly correlated with the CONTEXT factor. SENSITIVITY correlated weakly with TEMPORAL FINE STRUCTURE and CONTEXT, and moderately with NO CONTEXT, while TEMPORAL FINE STRUCTURE showed a weak correlation with CONTEXT and a moderate correlation with NO CONTEXT. CONTEXT and NO CONTEXT had a moderate correlation. Moreover, the overall results of the Hagerman sentences showed 0.9 dB worse SNR with fast-acting compression compared with linear gain, and 5.5 dB better SNR with linear gain plus noise reduction compared with linear gain alone.

    Conclusions: For hearing aid wearers, the ability to recognize speech in noise is associated with both sensory and cognitive processing abilities when the speech materials have low internal context. These associations are less prominent when the speech material has contextual cues.
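    The analysis pipeline described in this entry (factor analysis of each test battery, followed by correlations between factor scores with age partialled out) could look roughly like the sketch below. The factor method (maximum-likelihood factor analysis with varimax rotation via scikit-learn), the partial-correlation routine (pingouin), and the column names are assumptions for illustration, not the N200 analysis code.

    import pandas as pd
    from sklearn.decomposition import FactorAnalysis
    import pingouin as pg

    def factor_scores(tests: pd.DataFrame, n_factors: int) -> pd.DataFrame:
        """Standardize one test battery and return its factor scores."""
        z = (tests - tests.mean()) / tests.std()
        fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
        return pd.DataFrame(fa.fit_transform(z), index=tests.index,
                            columns=[f"F{i + 1}" for i in range(n_factors)])

    def age_partialled_corr(df: pd.DataFrame, x: str, y: str) -> pd.DataFrame:
        """Correlation between two factor scores with age partialled out."""
        return pg.partial_corr(data=df, x=x, y=y, covar="age", method="pearson")

    For example, if COGNITION is the single factor extracted from the cognitive battery and NO_CONTEXT one of the outcome factors, something like age_partialled_corr(df, "COGNITION", "NO_CONTEXT") would give the kind of age-controlled correlation reported above.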

  • 27.
    Zekveld, Adriana
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Vrije Univ Amsterdam, Netherlands.
    Kramer, Sophia E.
    Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery and Amsterdam Public Health research institute VU University Medical Center, Amsterdam, The Netherlands.
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    In a Concurrent Memory and Auditory Perception Task, the Pupil Dilation Response Is More Sensitive to Memory Load Than to Auditory Stimulus Characteristics2019In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 40, no 2, p. 272-286Article in journal (Refereed)
    Abstract [en]

    Objectives: Speech understanding may be cognitively demanding, but it can be enhanced when semantically related text cues precede auditory sentences. The present study aimed to determine whether (a) providing text cues reduces pupil dilation, a measure of cognitive load, during listening to sentences, (b) repeating the sentences aloud affects recall accuracy and pupil dilation during recall of cue words, and (c) semantic relatedness between cues and sentences affects recall accuracy and pupil dilation during recall of cue words.

    Design: Sentence repetition following text cues and recall of the text cues were tested. Twenty-six participants (mean age, 22 years) with normal hearing listened to masked sentences. On each trial, a set of four-word cues was presented visually as text preceding the auditory presentation of a sentence whose meaning was either related or unrelated to the cues. Participants first read the cue words, then listened to the sentence. Following this, they spoke aloud either the cue words or the sentence, according to instruction, and finally, on all trials, orally recalled the cues. Peak pupil dilation was measured throughout listening and recall on each trial. Additionally, participants completed a test measuring the ability to perceive degraded verbal text information and three working memory tests (a reading span test, a size-comparison span test, and a test of memory updating).

    Results: Cue words that were semantically related to the sentence facilitated sentence repetition but did not reduce pupil dilation. Recall was poorer and there were more intrusion errors when the cue words were related to the sentences. Recall was also poorer when sentences were repeated aloud. Both behavioral effects were associated with greater pupil dilation. Larger reading span capacity and smaller size-comparison span were associated with larger peak pupil dilation during listening. Furthermore, larger reading span and greater memory updating ability were both associated with better cue recall overall.

    Conclusions: Although sentence-related word cues facilitate sentence repetition, our results indicate that they do not reduce cognitive load during listening in noise with a concurrent memory load. As expected, higher working memory capacity was associated with better recall of the cues. Unexpectedly, however, semantic relatedness with the sentence reduced word cue recall accuracy and increased intrusion errors, suggesting an effect of semantic confusion. Further, speaking the sentence aloud also reduced word cue recall accuracy, probably due to articulatory suppression. Importantly, imposing a memory load during listening to sentences resulted in the absence of the previously established strong effect of speech intelligibility on the pupil dilation response. This nullified intelligibility effect demonstrates that the pupil dilation response to a cognitive (memory) task can completely overshadow the effect of perceptual factors, which highlights the importance of taking cognitive task load into account during auditory testing.
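    The dependent measure in this study, peak pupil dilation per trial, is typically computed as the maximum pupil size in an analysis window relative to a pre-stimulus baseline. The sketch below shows that computation under assumed window lengths and NaN-coded blinks; it is not the study's preprocessing pipeline.

    import numpy as np

    def peak_pupil_dilation(trace, fs, baseline_s=1.0, window=(0.0, 6.0)):
        """Baseline-corrected peak pupil dilation for one trial.

        trace: 1-D pupil-size samples starting baseline_s seconds before stimulus
        onset; fs: sampling rate in Hz; NaNs mark blinks (assumption)."""
        onset = int(baseline_s * fs)                 # sample index of stimulus onset
        base = np.nanmean(trace[:onset])             # mean pre-onset pupil size
        lo = onset + int(window[0] * fs)
        hi = onset + int(window[1] * fs)
        return np.nanmax(np.asarray(trace, dtype=float)[lo:hi] - base)  # peak re. baseline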
