liu.se: Search for publications in DiVA
51-92 of 92
  • 51.
    Lyxell, Björn
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences. Östergötlands Läns Landsting, Anaesthetics, Operations and Specialty Surgery Center, Department of Otorhinolaryngology in Linköping.
    Wass, Malin
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Sahlén, Birgitta
    Department of Clinical Sciences, Lunds Universitet, Lund, Sweden.
    Uhlén, Inger
    Department of Clinical Science, Intervention and Technology, Karolinska Institute, Stockholm, Sweden.
    Samuelsson, Christina
    Linköping University, Department of Clinical and Experimental Medicine, Speech and Language Pathology. Linköping University, Faculty of Health Sciences.
    Asker-Arnason, Lena
    Department of Clinical Sciences, Lunds Universitet, Lund, Sweden.
    Ibertsson, Tina
    Department of Clinical Sciences, Lunds Universitet, Lund, Sweden.
    Mäki-Torkko, Elina
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Larsby, Birgitta
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Hällgren, Mathias
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology.
    Development of cognitive and reading skills in deaf children with CIs, 2011. In: Cochlear Implants International, ISSN 1467-0100, E-ISSN 1754-7628, Vol. 12, no Suppl 1, p. 98-100. Article in journal (Refereed)

  • 52.
    Nygren, Maria
    et al.
    Linköping University, Department of Clinical and Experimental Medicine, Pediatrics. Linköping University, Faculty of Health Sciences.
    Ludvigsson, Johnny
    Linköping University, Department of Clinical and Experimental Medicine, Pediatrics. Linköping University, Faculty of Health Sciences. Östergötlands Läns Landsting, Center of Paediatrics and Gynaecology and Obstetrics, Department of Paediatrics in Linköping.
    Carstensen, John
    Linköping University, Department of Medical and Health Sciences, Division of Health and Society. Linköping University, Faculty of Health Sciences.
    Sepa Frostell, Anneli
    Linköping University, Department of Clinical and Experimental Medicine, Pediatrics. Linköping University, Faculty of Health Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Family psychological stress early in life and development of type 1 diabetes: The ABIS prospective study, 2013. In: Diabetes Research and Clinical Practice, ISSN 0168-8227, E-ISSN 1872-8227, Vol. 100, no 2, p. 257-264. Article in journal (Refereed)
    Abstract [en]

    Aims: This study investigated whether psychological stress in the family during the child's first year of life is associated with the risk of childhood type 1 diabetes (T1D). According to the beta-cell stress hypothesis, all factors that increase the need for, or the resistance to, insulin may be regarded as risk factors for T1D. Methods: Among 8921 children from the general population with questionnaire data from one parent at the child's birth and at 1 year of age, 42 cases of T1D were identified up to 11-13 years of age. Additionally, 15 cases with multiple diabetes-related autoantibodies were detected in a sub-sample of 2649 children. Results: Cox regression analyses showed no significant associations between serious life events (hazard ratio 0.7 for yes vs. no [95% CI 0.2-1.9], p = 0.47), parenting stress (0.9 per scale score [0.5-1.7], p = 0.79), or parental dissatisfaction (0.6 per scale score [0.3-1.2], p = 0.13) during the first year of life and later diagnosis of T1D, after controlling for socioeconomic, demographic, and diabetes-related factors. Inclusion of children with multiple autoantibodies did not alter the results. Conclusions: No association between psychological stress early in life and development of T1D could be confirmed.
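
    To make the reported analysis concrete, the sketch below shows how a Cox proportional-hazards model of this kind could be set up in Python with the lifelines package. It is a minimal illustration only: the data file and every column name (follow_up_years, t1d_diagnosis, serious_life_event, parenting_stress, parental_dissatisfaction, ses_index) are hypothetical placeholders, not the ABIS dataset or the authors' analysis code.

    ```python
    # Minimal sketch of a Cox proportional-hazards analysis similar in spirit to the
    # study described above. The CSV file and all column names are hypothetical.
    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.read_csv("abis_like_cohort.csv")  # hypothetical file: one row per child
    columns = [
        "follow_up_years",           # time to T1D diagnosis or censoring (years)
        "t1d_diagnosis",             # event indicator: 1 = diagnosed, 0 = censored
        "serious_life_event",        # exposure in first year of life (1 = yes, 0 = no)
        "parenting_stress",          # scale score
        "parental_dissatisfaction",  # scale score
        "ses_index",                 # socioeconomic/demographic adjustment variable
    ]

    cph = CoxPHFitter()
    cph.fit(df[columns], duration_col="follow_up_years", event_col="t1d_diagnosis")
    cph.print_summary()  # the exp(coef) column gives hazard ratios with 95% CIs
    ```

    In such output, a hazard ratio close to 1 with a confidence interval spanning 1, like the 0.7 [0.2-1.9] reported above, would be read as no detectable association.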

  • 53.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Modalities of Mind: Modality-specific and nonmodality-specific aspects of working memory for sign and speech, 2005. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Language processing is underpinned by working memory and while working memory for signed languages has been shown to display some of the characteristics of working memory for speech-based languages, there are a range of anomalous effects related to the inherently visuospatial modality of signed languages. On the basis of these effects, four research questions were addressed in a series of studies:

    1. Are differences in working memory storage for sign and speech reflected in neural representation?

    2. Do the neural networks supporting speech-sign switching during a working memory task reflect executive or semantic processes?

    3. Is working memory for sign language enhanced by a spatial style of information presentation?

    4. Do the neural networks supporting word reversal indicate tongue-twisting or mind-twisting?

    The results of the studies showed that:

    1. Working memory for sign and speech is supported by a combination of modality-specific and nonmodality-specific neural networks.

    2. Switching between sign and speech during a working memory task is supported by semantic rather than executive processes.

    3. Working memory performance in educationally promoted native deaf signers is enhanced by a spatial style of presentation.

    4. Word reversal is a matter of mind-twisting, rather than tongue-twisting.

    These findings indicate that working memory for sign and speech has modality-specific components as well as nonmodality-specific components. Modality-specific aspects can be explained in terms of Wilson’s (2001) sensorimotor account, which is based on the component model (Baddeley, 2000), given that the functionality of the visuospatial sketchpad is extended to include language processing. Nonmodality-specific working memory processing is predicted by Rönnberg’s (2003) model of cognitive involvement in language processing. However, the modality-free, cross-modal and extra-modal aspects of working memory processing revealed in the present work can be explained in terms of the central executive and the episodic buffer, providing the functionality and neural representation of the episodic buffer are extended.

    A functional ontology is presented which ties cognitive processes to their neural representation, along with a model explaining modality-specific findings relating to sign language cognition. Predictions of the ontology and the model are discussed in relation to future work.

    List of papers
    1. Neural correlates of working memory for sign language
    2004 (English). In: Cognitive Brain Research, ISSN 0926-6410, Vol. 20, no 2, p. 165-182. Article in journal (Refereed). Published
    Abstract [en]

    Eight, early bilingual, sign language interpreters participated in a PET study, which compared working memory for Swedish Sign Language (SSL) with working memory for audiovisual Swedish speech. The interaction between language modality and memory task was manipulated in a within-subjects design. Overall, the results show a previously undocumented, language modality-specific working memory neural architecture for SSL, which relies on a network of bilateral temporal, bilateral parietal and left premotor activation. In addition, differential activation in the right cerebellum was found for the two language modalities. Similarities across language modality are found in Broca's area for all tasks and in the anterior left inferior frontal lobe for semantic retrieval. The bilateral parietal activation pattern for sign language bears similarity to neural activity during, e.g., nonverbal visuospatial tasks, and it is argued that this may reflect generation of a virtual spatial array. Aspects of the data suggesting an age of acquisition effect are also considered. Furthermore, it is discussed why the pattern of parietal activation cannot be explained by factors relating to perception, production or recoding of signs, or to task difficulty. The results are generally compatible with Wilson's [Psychon. Bull. Rev. 8 (2001) 44] account of working memory.

    Keywords
    Working memory, Sign language, Speech, Language modality, PET
    National Category
    Social Sciences
    Identifiers
    urn:nbn:se:liu:diva-13354 (URN), 10.1016/j.cogbrainres.2004.03.002 (DOI)
    Available from: 2005-09-21 Created: 2005-09-21 Last updated: 2017-11-06
    2. Neural representation of binding lexical signs and words in the episodic buffer of working memory
    2007 (English). In: Neuropsychologia, ISSN 0028-3932, E-ISSN 1873-3514, Vol. 45, no 10, p. 2258-2276. Article in journal (Refereed). Published
    Abstract [en]

    The episodic buffer accommodates formation and maintenance of unitary multidimensional representations based on information in different codes from different sources. Formation, based on submorphemic units, engages posterior brain regions, while maintenance engages frontal regions. Using a hybrid fMRI design, that allows separate analysis of transient and sustained components, an n-back task and an experimental group of 13 hearing native signers, with experience of Swedish Sign Language and Swedish since birth, we investigated binding of lexical signs and words in working memory. Results show that the transient component of these functions is supported by a buffer-specific network of posterior regions including the right middle temporal lobe, possibly relating to binding of phonological loop representations with semantic representations in long-term memory, as well as a loop-specific network, in line with predictions of a functional relationship between loop and buffer. The left hippocampus was engaged in transient and sustained components of buffer processing, possibly reflecting the meaningful nature of the stimuli. Only a minor role was found for executive functions in line with other recent work. A novel representation of the sustained component of working memory for audiovisual language in the right inferior temporal lobe may be related to perception of speech-related facial gestures. Previous findings of sign and speech loop representation in working memory were replicated and extended. Together, these findings support the notion of a module that mediates between codes and sources, such as the episodic buffer, and further our understanding of its nature.

    Keywords
    Binding, Episodic buffer, Working memory, Sign language, fMRI
    National Category
    Human Computer Interaction
    Identifiers
    urn:nbn:se:liu:diva-13355 (URN), 10.1016/j.neuropsychologia.2007.02.017 (DOI)
    Note

    On the day of the defence date the title of this article was "Speech-sign switching in working memory is supported by semantic networks".

    Available from: 2005-09-21 Created: 2005-09-21 Last updated: 2018-01-13
    3. Explicit processing demands reveal language modality specific organization of working memory
    2008 (English). In: Journal of Deaf Studies and Deaf Education, ISSN 1081-4159, E-ISSN 1465-7325, Vol. 13, no 4, p. 466-484. Article in journal (Refereed). Published
    Abstract [en]

    The working memory model for Ease of Language Understanding (ELU) predicts that processing differences between language modalities emerge when cognitive demands are explicit. This prediction was tested in three working memory experiments with participants who were Deaf Signers (DS), Hearing Signers (HS), or Hearing Nonsigners (HN). Easily nameable pictures were used as stimuli to avoid confounds relating to sensory modality. Performance was largely similar for DS, HS, and HN, suggesting that previously identified intermodal differences may be due to differences in retention of sensory information. When explicit processing demands were high, differences emerged between DS and HN, suggesting that although working memory storage in both groups is sensitive to temporal organization, retrieval is not sensitive to temporal organization in DS. A general effect of semantic similarity was also found. These findings are discussed in relation to the ELU model.

    National Category
    Social Sciences
    Identifiers
    urn:nbn:se:liu:diva-13356 (URN), 10.1093/deafed/enn005 (DOI)
    Note

    On the day of the defence date the title of this article was "Space for compensation: Further support for a visuospatial array for temporary storage in working memory for deaf native signers".

    Available from: 2005-09-21 Created: 2005-09-21 Last updated: 2017-12-13
    4. Perceptual saliency in the visual channel enhances explicit language processing
    2004 (English). In: Iranian Audiology, ISSN 1735-045X, Vol. 3, no 1, p. 16-26. Article in journal (Refereed). Published
    National Category
    Social Sciences
    Identifiers
    urn:nbn:se:liu:diva-13357 (URN)
    Available from: 2005-09-21 Created: 2005-09-21 Last updated: 2017-11-06
    5. Reversing spoken items: mind twisting not tongue twisting
    2005 (English). In: Brain and Language, ISSN 0093-934X, Vol. 92, no 1, p. 78-90. Article in journal (Refereed). Published
    Abstract [en]

    Using 12 participants we conducted an fMRI study involving two tasks, word reversal and rhyme judgment, based on pairs of natural speech stimuli, to study the neural correlates of manipulating auditory imagery under taxing conditions. Both tasks engaged the left anterior superior temporal gyrus, reflecting previously established perceptual mechanisms. Engagement of the left inferior frontal gyrus in both tasks relative to baseline could only be revealed by applying small volume corrections to the region of interest, suggesting that phonological segmentation played only a minor role and providing further support for factorial dissociation of rhyming and segmentation in phonological awareness. Most importantly, subtraction of rhyme judgment from word reversal revealed activation of the parietal lobes bilaterally and the right inferior frontal cortex, suggesting that the dynamic manipulation of auditory imagery involved in mental reversal of words seems to engage mechanisms similar to those involved in visuospatial working memory and mental rotation. This suggests that reversing spoken items is a matter of mind twisting rather than tongue twisting and provides support for a link between language processing and manipulation of mental imagery.

    Keywords
    Speech; Auditory imagery; Word reversal; Parietal lobes; Spatial processing; Rhyme judgment; fMRI
    National Category
    Social Sciences
    Identifiers
    urn:nbn:se:liu:diva-13358 (URN), 10.1016/j.bandl.2004.05.010 (DOI)
    Available from: 2005-09-21 Created: 2005-09-21 Last updated: 2017-11-06
  • 54.
    Rudner, Mary
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    The HEAD Graduate School, 2008. In: 4th Annual Meeting of the Centre for Communication Science, Stockholm, October 15-17, 2008. Conference paper (Other academic)
  • 55.
    Rudner, Mary
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    The HEAD Graduate School, 2008. In: Invited lecture, Cognitive Scientists at Work, Linköping, October 21, 2008. Conference paper (Other academic)
  • 56.
    Rudner, Mary
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Time and space in working memory for sign and speech, 2008. In: Invited lecture, Research seminar within the project Better interpretation for people with deafblindness, Örebro, November 6, 2008. Conference paper (Other academic)
  • 57.
    Rudner, Mary
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Working memory for sign and speech, 2008. In: Annual Psychiatry Meeting, Odigos/Mogård, Getå, May 12-13, 2008. Conference paper (Other academic)
  • 58.
    Rudner, Mary
    et al.
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Andin, Josefine
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Rönnberg, Jerker
    Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research.
    Differences in temporal and spatial processing mechanisms in working memory for signed and spoken language, 2009. In: The 11th European Congress of Psychology, Oslo, Norway, 7-10 July 2009. Conference paper (Refereed)
    Abstract [en]

    Objectives: Working memory (WM) capacity is similar for signed (SL) and spoken (SpL) language, yet underlying temporal and spatial processing mechanisms may not be identical. To investigate this, two studies with deaf native signers (DS) and hearing non-signers (HN) were conducted. Methods: DS and matched HN groups performed WM tasks with varying temporal and spatial demands in study 1 at encoding (temporal, spatial and mixed presentation styles) and in study 2 at retrieval (forward and backward span) and with abstract spatial demands (math span). Results: DS performance was inferior with high temporal demands at encoding (temporal style) and retrieval (forward span). There was no difference between groups with high spatial order demands at encoding (spatial style) or retrieval (backward span). DS performance was worse when abstract spatial processing was involved (math span). Conclusion: WM processing mechanisms for SL and SpL differ for temporal information at encoding and retrieval and for abstract spatial information.

  • 59.
    Rudner, Mary
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Andin, Josefine
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Working memory, deafness and sign language, 2009. In: Scandinavian Journal of Psychology, ISSN 1467-9450, Vol. 50, no 5, p. 495-505. Article in journal (Refereed)
    Abstract [en]

    Working memory (WM) for sign language has an architecture similar to that for speech-based languages at both functional and neural levels. However, there are some processing differences between language modalities that are not yet fully explained, although a number of hypotheses have been mooted. This article reviews some of the literature on differences in sensory, perceptual and cognitive processing systems induced by auditory deprivation and sign language use and discusses how these differences may contribute to differences in WM architecture for signed and speech-based languages. In conclusion, it is suggested that left-hemisphere reorganization of the motion-processing system as a result of native sign-language use may interfere with the development of the order processing system in WM.

  • 60.
    Rudner, Mary
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Davidsson, Lena
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Effects of Age on the Temporal Organization of Working Memory in Deaf Signers, 2010. In: Aging, Neuropsychology and Cognition, ISSN 1382-5585, Vol. 17, no 3, p. 360-383. Article in journal (Refereed)
    Abstract [en]

    Deaf native signers have a general working memory (WM) capacity similar to that of hearing non-signers but are less sensitive to the temporal order of stored items at retrieval. General WM capacity declines with age, but little is known of how cognitive aging affects WM function in deaf signers. We investigated WM function in elderly deaf signers (EDS) and an age-matched comparison group of hearing non-signers (EHN) using a paradigm designed to highlight differences in temporal and spatial processing of item and order information. EDS performed worse than EHN on both item and order recognition using a temporal style of presentation. Reanalysis together with earlier data showed that with the temporal style of presentation, order recognition performance for EDS was also lower than for young adult deaf signers. Older participants responded more slowly than younger participants. These findings suggest that apart from age-related slowing irrespective of sensory and language status, there is an age-related difference specific to deaf signers in the ability to retain order information in WM when temporal processing demands are high. This may be due to neural reorganisation arising from sign language use. Concurrent spatial information with the Mixed style of presentation resulted in enhanced order processing for all groups, suggesting that concurrent temporal and spatial cues may enhance learning for both deaf and hearing groups. These findings support and extend the WM model for Ease of Language Understanding.

  • 61.
    Rudner, Mary
    et al.
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Foo, Catharina
    Division of Cognition, Development and Disability (CDD), Linköping University.
    Lunner, Thomas
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology .
    Rönnberg, Jerker
    Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research.
    Aided speech recognition in noise, perceived effort and explicit cognitive capacity, 2008. In: International Hearing Aid Research Conference (IHCON 2008), Lake Tahoe, California, 13-17 August 2008. Conference paper (Refereed)
    Abstract [en]

    Speech recognition in noise is an effortful process requiring explicit cognitive processing. It may be influenced by level and type of noise and by the signal processing algorithms employed when hearing is aided. These complex relationships may be understood in terms of the working memory model for Ease of Language Understanding (ELU, Rönnberg et al., in press). This model predicts that under challenging listening conditions, explicit cognitive processing demands will be high and that persons with good explicit cognitive capacity will be better listeners. Previous work has suggested that they may also find listening less effortful (Behrens et al., 2004; Larsby et al., 2005; in press). We studied this issue by including subjective effort ratings in a larger study designed to investigate aided speech recognition in noise and cognition. 32 experienced hearing aid users participated. Effort was rated using a visual analogue scale and the speech material was the Hagerman sentences presented at three fixed speech-to-noise ratios of +10 dB, +4 dB and -2 dB. Effort was rated in modulated and unmodulated noise with fast and slow compression release settings, after each of two nine-week training sessions with the same settings. Speech recognition performance was tested objectively under the same conditions using an adaptive procedure. Order of testing was balanced. Explicit cognitive capacity was measured using the reading span test. ANOVAs and correlations were computed. Preliminary results showed that decreasing SNR led to greater perceived effort and that the difference in perceived effort between the highest and the lowest SNR was greater in unmodulated noise than in modulated noise. Speech recognition performance in unmodulated noise generally correlated with effort ratings under similar conditions but in modulated noise it generally did not. Effort ratings correlated with reading span performance at the lowest SNR (-2 dB) but only in unmodulated noise after the first training session. These preliminary findings show that subjective ratings of the effort involved in aided speech recognition covary with noise level and performance but that these effects are reduced by noise modulation. Further, the perceived effort of aided speech recognition at low SNR may be related to explicit cognitive capacity as measured by the reading span test. However, we only find evidence of this in unmodulated noise after the first training session. These findings extend previous work on perceived effort and cognitive capacity and provide further evidence that type of noise is an important factor in this relationship.

  • 62.
    Rudner, Mary
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Foo, Catharina
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Lunner, Thomas
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology . Linköping University, Faculty of Health Sciences.
    Cognition and aided speech recognition in noise: specific role for cognitive factors following nine-week experience with adjusted compression settings in hearing aids, 2009. In: Scandinavian Journal of Psychology, ISSN 1467-9450, Vol. 50, no 5, p. 405-418. Article in journal (Refereed)
    Abstract [en]

    The working memory model for Ease of Language Understanding (ELU) proposes that language understanding under taxing conditions is related to explicit cognitive capacity. We refer to this as the mismatch hypothesis, since phonological representations based on the processing of speech under established conditions may not be accessed so readily when input conditions change and a match becomes problematic. Then, cognitive capacity requirements may differ from those used for processing speech hitherto. In the present study, we tested this hypothesis by investigating the relationship between aided speech recognition in noise and cognitive capacity in experienced hearing aid users when there was either a match or mismatch between processed speech input and established phonological representations. The settings in the existing digital hearing aids of the participants were adjusted to one of two different compression settings which processed the speech signal in qualitatively different ways ("fast" or "slow"). Testing took place after a 9-week period of experience with the new setting. Speech recognition was tested under different noise conditions and with match or mismatch (i.e. alternative compression setting) manipulations of the input signal. Individual cognitive capacity was measured using a reading span test and a letter monitoring test. Reading span, a reliable measure of explicit cognitive capacity, predicted speech recognition performance under mismatch conditions when processed input was incongruent with recently established phonological representations, due to the specific hearing aid setting. Cognitive measures were not main predictors of performance under match conditions. These findings are in line with the ELU model.
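
    As a rough illustration of the correlational logic described above, the following sketch computes the correlation between reading span and aided speech recognition separately for match and mismatch conditions. The data file and column names are hypothetical placeholders, not the study's materials or analysis script.

    ```python
    # Sketch of a match/mismatch correlation analysis in the spirit of the abstract
    # above. The CSV file and its column names are hypothetical.
    import pandas as pd
    from scipy.stats import pearsonr

    # Hypothetical long-format data: one row per participant and condition, with
    # columns: participant, condition ("match" / "mismatch"),
    # srt_db (adaptive speech reception threshold, dB SNR), reading_span (score).
    df = pd.read_csv("aided_speech_recognition.csv")

    for condition, grp in df.groupby("condition"):
        r, p = pearsonr(grp["reading_span"], grp["srt_db"])
        print(f"{condition}: r = {r:.2f}, p = {p:.3f}, n = {len(grp)}")
    ```

    On the mismatch account sketched in the abstract, one would expect a reliable correlation in the mismatch condition and a weaker or absent one under match conditions.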

  • 63.
    Rudner, Mary
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Fransson, Peter
    Department of Clinical Neuroscience, Karolinska University Hospital, Stockholm, Sweden.
    Nyberg, Lars
    Department of Radiation Sciences and Integrative Medical Biology, Umeå University, Umeå, Sweden.
    Ingvar, Martin
    Department of Clinical Neuroscience, Karolinska University Hospital, Stockholm, Sweden.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Neural representation of binding lexical signs and words in the episodic buffer of working memory, 2007. In: Neuropsychologia, ISSN 0028-3932, E-ISSN 1873-3514, Vol. 45, no 10, p. 2258-2276. Article in journal (Refereed)
    Abstract [en]

    The episodic buffer accommodates formation and maintenance of unitary multidimensional representations based on information in different codes from different sources. Formation, based on submorphemic units, engages posterior brain regions, while maintenance engages frontal regions. Using a hybrid fMRI design, that allows separate analysis of transient and sustained components, an n-back task and an experimental group of 13 hearing native signers, with experience of Swedish Sign Language and Swedish since birth, we investigated binding of lexical signs and words in working memory. Results show that the transient component of these functions is supported by a buffer-specific network of posterior regions including the right middle temporal lobe, possibly relating to binding of phonological loop representations with semantic representations in long-term memory, as well as a loop-specific network, in line with predictions of a functional relationship between loop and buffer. The left hippocampus was engaged in transient and sustained components of buffer processing, possibly reflecting the meaningful nature of the stimuli. Only a minor role was found for executive functions in line with other recent work. A novel representation of the sustained component of working memory for audiovisual language in the right inferior temporal lobe may be related to perception of speech-related facial gestures. Previous findings of sign and speech loop representation in working memory were replicated and extended. Together, these findings support the notion of a module that mediates between codes and sources, such as the episodic buffer, and further our understanding of its nature.

  • 64.
    Rudner, Mary
    et al.
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Karlsson, Catharina
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Sundewall-Thorén, Elisabet
    Oticon A/S, Research Centre Eriksholm, Snekkersten, Denmark.
    Lunner, Thomas
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology.
    Rönnberg, Jerker
    Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research.
    Phonological mismatch and explicit cognitive processing in a sample of 102 hearing-aid users, 2008. In: International Journal of Audiology, ISSN 1499-2027, E-ISSN 1708-8186, Vol. 47, no 2, p. S91-S98. Article in journal (Refereed)
    Abstract [en]

    Rudner et al (2008) showed that when compression release settings are manipulated in the hearing instruments of Swedish habitual users, the resulting mismatch between the phonological form of the input speech signal and representations stored in long-term memory leads to greater engagement of explicit cognitive processing under taxing listening conditions. The mismatch effect is manifest in significant correlations between performance on cognitive tests and aided-speech-recognition performance in modulated noise and/or with fast compression release settings. This effect is predicted by the ELU model (Rönnberg et al, 2008). In order to test whether the mismatch effect can be generalized across languages, we examined two sets of aided speech recognition data collected from a Danish population where two cognitive tests, reading span and letter monitoring, had been administered. A reanalysis of all three datasets, including 102 participants, demonstrated the mismatch effect. These findings suggest that the effect of phonological mismatch, as predicted by the ELU model (Rönnberg et al, this issue) and tapped by the reading span test, is a stable phenomenon across these two Scandinavian languages.

  • 65.
    Rudner, Mary
    et al.
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Karlsson Foo, Catharina
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Rönnberg, Jerker
    Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research.
    Lunner, Thomas
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology .
    Notice of retraction: unintentional errors in "Phonological mismatch makes aided speech recognition in noise cognitively taxing" (Ear & Hear. 2007;28[6]), in Ear and Hearing, ISSN 0196-0202, Vol. 29, issue 5, p. 814, 2008. Other (Other academic)

  • 66.
    Rudner, Mary
    et al.
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Karlsson Foo, Catharina
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning.
    Rönnberg, Jerker
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences, The Swedish Institute for Disability Research.
    Lunner, Thomas
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Neuroscience and Locomotion, Technical Audiology.
    Retracted Article: Phonological mismatch makes aided speech recognition in noise cognitively taxing, in Ear and Hearing, ISSN 0196-0202, Vol. 28, issue 6, pp. 879-892, 2007. Other (Other academic)
    Abstract [en]

    OBJECTIVES: The working memory framework for Ease of Language Understanding predicts that speech processing becomes more effortful, thus requiring more explicit cognitive resources, when there is mismatch between speech input and phonological representations in long-term memory. To test this prediction, we changed the compression release settings in the hearing instruments of experienced users and allowed them to train for 9 weeks with the new settings. After training, aided speech recognition in noise was tested with both the trained settings and orthogonal settings. We postulated that training would lead to acclimatization to the trained setting, which in turn would involve establishment of new phonological representations in long-term memory. Further, we postulated that after training, testing with orthogonal settings would give rise to phonological mismatch, associated with more explicit cognitive processing. DESIGN: Thirty-two participants (mean = 70.3 years, SD = 7.7) with bilateral sensorineural hearing loss (pure-tone average = 46.0 dB HL, SD = 6.5), bilaterally fitted for more than 1 year with digital, two-channel, nonlinear signal processing hearing instruments and chosen from the patient population at the Linköping University Hospital, were randomly assigned to 9 weeks training with new, fast (40 ms) or slow (640 ms), compression release settings in both channels. Aided speech recognition in noise performance was tested according to a design with three within-group factors: test occasion (T1, T2), test setting (fast, slow), and type of noise (unmodulated, modulated) and one between-group factor: experience setting (fast, slow) for two types of speech materials: the highly constrained Hagerman sentences and the less-predictable Hearing in Noise Test (HINT). Complex cognitive capacity was measured using the reading span and letter monitoring tests. PREDICTION: We predicted that speech recognition in noise at T2 with mismatched experience and test settings would be associated with more explicit cognitive processing and thus stronger correlations with complex cognitive measures, as well as poorer performance if complex cognitive capacity was exceeded. RESULTS: Under mismatch conditions, stronger correlations were found between performance on speech recognition with the Hagerman sentences and reading span, along with poorer speech recognition for participants with low reading span scores. No consistent mismatch effect was found with HINT. CONCLUSIONS: The mismatch prediction generated by the working memory framework for Ease of Language Understanding is supported for speech recognition in noise with the highly constrained Hagerman sentences but not the less-predictable HINT. © 2007 Lippincott Williams & Wilkins, Inc.

  • 67.
    Rudner, Mary
    et al.
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Karlsson Foo, Catharina
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning.
    Sundewall-Thorén, Elisabeth
    Oticon A/S, Research Centre Eriksholm, Snekkersten, Denmark.
    Lunner, Thomas
    Linköping University, Faculty of Health Sciences. Linköping University, Department of Neuroscience and Locomotion, Technical Audiology.
    Rönnberg, Jerker
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences, The Swedish Institute for Disability Research.
    Phonological mismatch and explicit cognitive processing in a sample of 100 hearing aid users, 2007. In: From Signal to Dialogue: Dynamic Aspects of Hearing, Language and Cognition, 2007. Conference paper (Other academic)
  • 68.
    Rudner, Mary
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    The role of the episodic buffer in working memory for language processing, 2008. In: Cognitive Processing, ISSN 1612-4782, Vol. 9, no 1, p. 19-28. Article in journal (Refereed)
    Abstract [en]

    A body of work has accumulated to show that the cognitive process of binding information from different mnemonic and sensory sources as well as in different linguistic modalities can be fractionated from general executive functions in working memory both functionally and neurally. This process has been defined in terms of the episodic buffer (Baddeley in Trends Cogn Sci 4(11):417-423, 2000). This paper considers behavioural, neuropsychological and neuroimaging data that elucidate the role of the episodic buffer in language processing. We argue that the episodic buffer seems to be truly multimodal in function and that while formation of unitary multidimensional representations in the episodic buffer seems to engage posterior neural networks, maintenance of such representations is supported by frontal networks. Although the episodic buffer is not necessarily supported by executive processes and seems to be supported by different neural networks, it may operate in tandem with the central executive during effortful language processing. There is also evidence to suggest engagement of the phonological loop during buffer processing. The hippocampus seems to play a role in formation but not maintenance of representations in the episodic buffer of working memory.

  • 69.
    Rudner, Mary
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Explicit processing demands reveal language modality specific organization of working memory, 2008. In: Journal of Deaf Studies and Deaf Education, ISSN 1081-4159, E-ISSN 1465-7325, Vol. 13, no 4, p. 466-484. Article in journal (Refereed)
    Abstract [en]

    The working memory model for Ease of Language Understanding (ELU) predicts that processing differences between language modalities emerge when cognitive demands are explicit. This prediction was tested in three working memory experiments with participants who were Deaf Signers (DS), Hearing Signers (HS), or Hearing Nonsigners (HN). Easily nameable pictures were used as stimuli to avoid confounds relating to sensory modality. Performance was largely similar for DS, HS, and HN, suggesting that previously identified intermodal differences may be due to differences in retention of sensory information. When explicit processing demands were high, differences emerged between DS and HN, suggesting that although working memory storage in both groups is sensitive to temporal organization, retrieval is not sensitive to temporal organization in DS. A general effect of semantic similarity was also found. These findings are discussed in relation to the ELU model.

  • 70.
    Rudner, Mary
    et al.
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Rönnberg, Jerker
    Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research.
    Modality specific differences in working memory for sign and speech, 2008. In: HEAD Graduate School First Summer Workshop, Rimforsa, June 9-10, 2008. Conference paper (Other academic)
    Abstract [en]

    The working memory model for Ease of Language Understanding (ELU) predicts that processing differences between language modalities emerge when cognitive demands are explicit. Previous behavioural and neurocognitive work has shown that cognitive processing differences may be related to the different spatial and temporal processing demands involved in sign language and speech. In a set of working memory experiments with participants who were Deaf Signers (DS), Hearing Signers (HS) or Hearing Nonsigners (HN), we manipulated level of explicit processing required as well as temporal and spatial demands. Easily nameable pictures were used as stimuli to avoid confounds relating to sensory modality. When explicit processing demands were low, performance was largely similar for DS, HS and HN. However, when explicit and temporal processing demands were high, DS did not perform as well as HN. This effect was compounded by oral education. These findings suggest that temporal organization is not as prominent in working memory for sign language as it is in working memory for speech. A general effect of semantic similarity was also found. These findings are discussed in relation to the ELU model.

  • 71.
    Rudner, Mary
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Perceptual saliency in the visual channel enhances explicit language processing, 2004. In: Iranian Audiology, ISSN 1735-045X, Vol. 3, no 1, p. 16-26. Article in journal (Refereed)
  • 72.
    Rudner, Mary
    et al.
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Rönnberg, Jerker
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences, The Swedish Institute for Disability Research.
    Transient and sustained components of working memory for sign language, 2007. In: 9th Nordic Meeting in Neuropsychology, 2007. Conference paper (Refereed)
  • 73.
    Rudner, Mary
    et al.
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Rönnberg, Jerker
    Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research.
    Davidsson, L.
    Temporal and spatial processing in working memory for sign and speech, 2008. In: First European Congress of Neuropsychology, Edinburgh, September 7-9, 2008. Conference paper (Other academic)
  • 74.
    Rudner, Mary
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Hugdahl, Kenneth
    Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway.
    Reversing spoken items: mind twisting not tongue twisting, 2005. In: Brain and Language, ISSN 0093-934X, Vol. 92, no 1, p. 78-90. Article in journal (Refereed)
    Abstract [en]

    Using 12 participants we conducted an fMRI study involving two tasks, word reversal and rhyme judgment, based on pairs of natural speech stimuli, to study the neural correlates of manipulating auditory imagery under taxing conditions. Both tasks engaged the left anterior superior temporal gyrus, reflecting previously established perceptual mechanisms. Engagement of the left inferior frontal gyrus in both tasks relative to baseline could only be revealed by applying small volume corrections to the region of interest, suggesting that phonological segmentation played only a minor role and providing further support for factorial dissociation of rhyming and segmentation in phonological awareness. Most importantly, subtraction of rhyme judgment from word reversal revealed activation of the parietal lobes bilaterally and the right inferior frontal cortex, suggesting that the dynamic manipulation of auditory imagery involved in mental reversal of words seems to engage mechanisms similar to those involved in visuospatial working memory and mental rotation. This suggests that reversing spoken items is a matter of mind twisting rather than tongue twisting and provides support for a link between language processing and manipulation of mental imagery.

  • 75.
    Rudner, Mary
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Lunner, Thomas
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Working Memory Supports Listening in Noise for Persons with Hearing Impairment, 2011. In: Journal of the American Academy of Audiology, ISSN 1050-0545, Vol. 22, no 3, p. 156-167. Article in journal (Refereed)
    Abstract [en]

    Background: Previous studies have demonstrated a relation between cognitive capacity, in particular working memory, and the ability to understand speech in noise with different types of hearing aid signal processing. Purpose: The present study investigates the relation between working memory capacity and the speech recognition performance of persons with hearing impairment under both aided and unaided conditions, following a period of familiarization to both fast- and slow-acting compression settings in the participants' own hearing aids. Research Design: Speech recognition was tested in modulated and steady state noise with fast and slow compression release settings (for aided conditions) with each of two materials. Working memory capacity was also measured. Study Sample: Thirty experienced hearing aid users with a mean age of 70 yr (SD = 7.8) and pure-tone average hearing threshold across the frequencies 0.25, 0.5, 1, 2, 3, 4, and 6 kHz (PTA(7)) for both ears of 45.8 dB HL (SD = 6.6). Intervention: 9 wk experience with each of fast-acting and slow-acting compression. Data Collection and Analysis: Speech recognition data were analyzed using repeated measures analysis of variance with the within-subjects factors of material (high constraint, low constraint), noise type (steady state, modulated), and compression (fast, slow), and the between-subjects factor working memory capacity (high, low). Results: With high constraint material, there were three-way interactions including noise type and working memory as well as compression, in aided conditions, and performance level, in unaided conditions, but no effects of either working memory or compression with low constraint material. Investigation of simple main effects showed a significant effect of working memory during speech recognition under conditions of both "high degradation" (modulated noise, fast-acting compression, low signal-to-noise ratio [SNR]) and "low degradation" (steady state noise, slow-acting compression, high SNR). The finding of superior performance of persons with high working memory capacity in modulated noise with fast-acting compression agrees with findings of previous studies including a familiarization period of at least 9 wk, in contrast to studies with familiarization of 4 wk or less that have shown that persons with lower cognitive capacity may benefit from slow-acting compression. Conclusions: Working memory is a crucial factor in speech understanding in noise for persons with hearing impairment, irrespective of whether hearing is aided or unaided. Working memory supports speech understanding in noise under conditions of both "high degradation" and "low degradation." A subcomponent view of working memory may contribute to our understanding of these phenomena. The effect of cognition on speech understanding in modulated noise with fast-acting compression may only pertain after a period of 4-9 wk of familiarization; prior to such a period, persons with lower cognitive capacity may benefit more from slow-acting compression.
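
    As a simplified illustration of the within-subjects part of the design described above (material x noise type x compression), the sketch below runs a repeated-measures ANOVA with statsmodels. The data file and column names are hypothetical placeholders; the between-subjects working-memory grouping reported in the abstract would require a mixed design, for example fitting the model separately per group or using a mixed-effects model.

    ```python
    # Simplified sketch of a repeated-measures ANOVA over within-subject factors,
    # in the spirit of the design above. File and column names are hypothetical.
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Hypothetical long-format data: one row per participant and cell of the
    # material x noise_type x compression design, with 'srt_db' as the outcome.
    # AnovaRM assumes a balanced design: one observation per subject per cell.
    df = pd.read_csv("speech_in_noise_scores.csv")

    res = AnovaRM(
        data=df,
        depvar="srt_db",
        subject="participant",
        within=["material", "noise_type", "compression"],
    ).fit()
    print(res)  # F tests for main effects and interactions of the within factors
    ```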

  • 76.
    Rönnberg, Jerker
    et al.
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences, The Swedish Institute for Disability Research.
    Karlsson Foo, Catharina
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning.
    Rudner, Mary
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Chronological aging and ease of speech understanding, 2007. In: Aging and Speech Communication: 2nd International and Interdisciplinary Research Conference, 2007. Conference paper (Other academic)
  • 77.
    Rönnberg, Jerker
    et al.
    Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research.
    Rudner, Mary
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Cognitive Hearing Science at Linköping University, 2008. In: Invited seminar, Dept. of ENT/Audiology, EMGO Institute, VU University Amsterdam, April 1, 2008. Conference paper (Other academic)

  • 78.
    Rönnberg, Jerker
    et al.
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences, The Swedish Institute for Disability Research.
    Rudner, Mary
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Cognitive neuroscience of signed language: the case of working memory.2006In: Festschrift for Lars-Göran Nilsson,2006, 2006Conference paper (Refereed)
  • 79.
    Rönnberg, Jerker
    et al.
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences, The Swedish Institute for Disability Research.
    Rudner, Mary
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Organizers of symposium: Cognition and sign language2007In: 9th Nordic Meeting in Neuropsychology,2007, 2007Conference paper (Other academic)
  • 80.
    Rönnberg, Jerker
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Ingvar, Martin
    Department of Clinical Neuroscience, Karolinska Institutet, Karolinska Hospital, Stockholm, Sweden.
    Neural correlates of working memory for sign language2004In: Cognitive Brain Research, ISSN 0926-6410, Vol. 20, no 2, p. 165-182Article in journal (Refereed)
    Abstract [en]

    Eight early bilingual sign language interpreters participated in a PET study, which compared working memory for Swedish Sign Language (SSL) with working memory for audiovisual Swedish speech. The interaction between language modality and memory task was manipulated in a within-subjects design. Overall, the results show a previously undocumented, language modality-specific working memory neural architecture for SSL, which relies on a network of bilateral temporal, bilateral parietal and left premotor activation. In addition, differential activation in the right cerebellum was found for the two language modalities. Similarities across language modality are found in Broca's area for all tasks and in the anterior left inferior frontal lobe for semantic retrieval. The bilateral parietal activation pattern for sign language bears similarity to neural activity during, e.g., nonverbal visuospatial tasks, and it is argued that this may reflect generation of a virtual spatial array. Aspects of the data suggesting an age of acquisition effect are also considered. Furthermore, it is discussed why the pattern of parietal activation cannot be explained by factors relating to perception, production or recoding of signs, or to task difficulty. The results are generally compatible with Wilson's [Psychon. Bull. Rev. 8 (2001) 44] account of working memory.

  • 81.
    Rönnberg, Jerker
    et al.
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences, The Swedish Institute for Disability Research.
    Rudner, Mary
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Karlsson Foo, Catharina
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning.
    A working memory system for ease of speech understanding2007In: Fourth International Adult Aural Rehabilitation Conference and ISAC 2007,2007, 2007Conference paper (Other academic)
  • 82.
    Rönnberg, Jerker
    et al.
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences, The Swedish Institute for Disability Research.
    Rudner, Mary
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Karlsson Foo, Catharina
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning.
    Cognition counts: A general working memory system for ease of language understanding (ELU)2007In: From Signal to Dialogue: Dynamic Aspects of Hearing, Language and Cognition,2007, 2007Conference paper (Other academic)
  • 83.
    Rönnberg, Jerker
    et al.
    Linköping University, Department of Behavioural Sciences, The Swedish Institute for Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Karlsson Foo, Catharina
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Lunner, Thomas
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology . Linköping University, Faculty of Health Sciences.
    Cognition counts: A working memory system for ease of language understanding (ELU)2008In: International Journal of Audiology, ISSN 1499-2027, E-ISSN 1708-8186, Vol. 47, no Suppl. 2, p. 99-105Article in journal (Refereed)
    Abstract [en]

    A general working memory system for ease of language understanding (ELU; Rönnberg, 2003a) is presented. The purpose of the system is to describe and predict the dynamic interplay between explicit and implicit cognitive functions, especially in conditions of poorly perceived or poorly specified linguistic signals. In relation to speech understanding, the system is based on (1) the quality and precision of phonological representations in long-term memory, (2) phonologically mediated lexical access speed, and (3) explicit storage and processing resources. If there is a mismatch between phonological information extracted from the speech signal and the phonological information represented in long-term memory, the system is assumed to produce a mismatch signal that invokes explicit processing resources. In the present paper, we focus on four aspects of the model that have led to the current, updated version: the language generality assumption; the mismatch assumption; chronological age; and the episodic buffer function of rapid, automatic multimodal binding of phonology (RAMBPHO). We evaluate the language generality assumption in relation to sign language and speech, and the mismatch assumption in relation to signal processing in hearing aids. Further, we discuss the effects of chronological age and the implications of RAMBPHO.
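    The mismatch assumption described above can be read as a simple control flow: while the phonological information extracted from the signal matches a representation in long-term memory, lexical access proceeds implicitly; a mismatch invokes explicit processing resources. The Python sketch below illustrates only that control flow; the toy lexicon, the similarity measure, and the threshold are invented for illustration and are not part of the ELU model's specification.

        # Illustrative control-flow sketch of the ELU mismatch assumption
        # (simplified; the match criterion and threshold are invented).
        from difflib import SequenceMatcher

        LEXICON = {"cat", "cap", "can", "dog"}   # toy stand-in for phonological long-term memory
        MATCH_THRESHOLD = 0.75                   # hypothetical match criterion

        def understand(phonological_input: str) -> str:
            """Implicit lexical access if the input matches long-term memory; otherwise mismatch."""
            best = max(LEXICON, key=lambda w: SequenceMatcher(None, w, phonological_input).ratio())
            similarity = SequenceMatcher(None, best, phonological_input).ratio()
            if similarity >= MATCH_THRESHOLD:
                return f"implicit: accessed '{best}'"  # rapid, automatic route
            return f"mismatch: explicit processing invoked for '{phonological_input}'"

        print(understand("cat"))   # clear signal -> implicit route
        print(understand("kxt"))   # degraded signal -> mismatch -> explicit resources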

  • 84.
    Rönnberg, Niklas
    et al.
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Stenfelt, Stefan
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research.
    AIST - Ett test av lyssningsansträngning2011Conference paper (Other academic)
    Abstract [sv]

    Hearing aid fitting can be seen as a process aimed at reducing a person's listening effort, but it is unclear how listening effort is best measured objectively. The Auditory Inference Span Test (AIST) is therefore intended to be developed into a clinical instrument for use during hearing aid fitting, measuring a patient's effort in perceiving speech.

    The AIST is a combined hearing, memory, and processing test. It builds on the idea that the more cognitive resources are consumed in processing and understanding speech, the fewer cognitive resources remain for remembering and storing the speech information. The test uses the Hagerman sentences in noise, and the participant has to remember and process the information in the speech material in order to answer questions about its content. Scores on the questions and reaction times are measured as indices of listening effort. Data from pilot tests indicate that the AIST can become a test well suited to clinical use for measuring listening effort.

     

  • 85.
    Rönnberg, Niklas
    et al.
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Stenfelt, Stefan
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research.
    Testing effort for speech comprehension using the individuals’ cognitive spare capacity - the Auditory Inference Span test2010Conference paper (Other academic)
    Abstract [en]

    Modern hearing aids use a multitude of parameters to give the user an optimal speech signal. Fitting of the hearing aid becomes largely a matter of craftsmanship because of the limited data available on the patient's hearing status (primarily an audiogram). A hearing in noise test (SNR threshold) is often used to evaluate the fitting. However, testing the SNR threshold as done in clinical use today is not ecologically valid. Another way to think about hearing aid fitting is to ease the listening effort.

    Therefore, we propose the Auditory Inference Span Test (AIST) as a clinical tool during hearing aid fitting to assess the patient’s effort to understand speech. AIST is a combined auditory, memory, and processing test. It relies on the idea that the more cognitive resources are required to process and understand speech, the fewer cognitive resources are available for storage of the speech information. In AIST, sentences are presented in noise and afterwards the patient is required to recall and process the information from the sentences. Correctness and answering speed are measured, and the scores relate to the effort required to understand the speech.

    Data from piloting tests indicate that the AIST is well suited as a clinical test for listening effort.

  • 86.
    Rönnberg, Niklas
    et al.
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Stenfelt, Stefan
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Testing listening effort for speech comprehension using the individuals’ cognitive spare capacity2011In: Audiology Research, ISSN 2039-4330, E-ISSN 2039-4349, Vol. 1, no 1, p. 82-85Article in journal (Refereed)
    Abstract [en]

    Most hearing aid fittings today are based almost solely on the patient’s audiogram. Although the loss of gain in the cochlea is important, a more optimal fitting requires more individual parameters of the patient’s cochlear loss together with the patient’s cognitive abilities to process the auditory signal (Stenfelt & Rönnberg, 2009; Edwards, 2007). Moreover, the evaluation of the fitting is often based on a speech-in-noise task, and the aim is to improve the individual patient’s signal-to-noise ratio (SNR) thresholds. As a consequence, hearing aid fitting may be seen as a process aimed at improving the patient’s SNR threshold rather than at improving communication ability. However, subsequent to a hearing aid fitting, there can be great differences in SNR improvement between patients who have identical hearing impairment in terms of threshold data (the audiogram). The reasons are certainly complex, but one contributing factor may be individual differences in cognitive capacity and associated listening effort. Another way to think about amplified hearing is to ease a subject’s listening effort (Sarampalis et al., 2009). When the speech signal is degraded by noise or by a hearing impairment, more higher-order cognitive or top-down processes are required to perceive and understand the signal, and listening is therefore more effortful. It is assumed that a hearing aid would ease the listening effort for a hearing impaired person. However, it is not clear how to measure listening effort. We here present a test that will tap into the different cognitive aspects of listening effort, the Auditory Inference Span Test (AIST). The AIST is a dual-task hearing-in-noise test that combines auditory and memory processing and is well suited as a clinical test for listening effort.

  • 87.
    Rönnberg, Niklas
    et al.
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Stenfelt, Stefan
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research.
    The Auditory Inference Span Test – Developing a test for cognitive aspects of listening effort for speech comprehension2010Conference paper (Other academic)
    Abstract [en]

    Most hearing aid fittings today are based almost solely on the patient’s audiogram. However, more individual parameters of the patient’s hearing thresholds, together with the patient’s cognitive abilities to process the auditory signal, are required. Hearing aid fitting may be seen as a process aimed at improving the patient’s hearing thresholds rather than at improving communication ability. Another way to think about hearing aid fitting is to ease the patient’s listening effort. However, it is not clear how to measure listening effort.

    Therefore, we propose the Auditory Inference Span Test (AIST) as a clinical tool during hearing aid fitting to assess the patient’s effort to understand speech. AIST is a combined auditory, memory, and processing test. It relies on the idea that the more cognitive resources are required to process and understand speech, the fewer cognitive resources are available for storage of the speech information. In AIST, sentences are presented in noise and afterwards the patient is required to recall and process the information from the sentences. Correctness and reaction time are recorded as measures of perceived listening effort.

    Data from pilot tests indicate that the AIST is well suited as a clinical test for listening effort. In a future study, to verify that the AIST is sensitive to cognitive capacity, the test will be evaluated against measurements of the subject's cognitive capacity as well as the subject's hearing thresholds. For a clinical test, the requirement is that it is fast and easy to administer. The AIST takes no more than fifteen minutes to complete, and the aim is to further shorten this time and adapt the test for clinical use. This would make the AIST a usable instrument for testing listening effort using the individuals' cognitive spare capacity.

     

  • 88.
    Rönnberg, Niklas
    et al.
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Stenfelt, Stefan
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research.
    Lunner, Thomas
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research.
    An objective measure of listening effort: The Auditory Inference Span Test2011Conference paper (Refereed)
    Abstract [en]

    One aim of hearing aid fitting is to ease the patient’s effort in understanding speech, i.e., the listening effort needed to perceive speech in different sound environments. To obtain a good hearing aid fitting, knowledge about the patient’s auditory as well as cognitive abilities seems to be important. However, listening effort is usually not included as a fitting criterion, partly because it is not clear how to measure listening effort objectively.

    The Auditory Inference Span Test (AIST) is a dual-task hearing-in-noise test that combines auditory and memory processing. The basis for the test is that when more cognitive resources are required for understanding speech, fewer cognitive resources are available for storage and processing of the speech information. In AIST, Hagerman sentences are presented in noise and the subject is required to recall and process the sentence information. Recall ability is tested under different cognitive loads. Button-press responses are recorded and used as an estimate of listening effort. In a pilot study, listeners showed decreasing accuracy with increasing cognitive load and longer reaction times at maximum cognitive load, suggesting that the test may be suited as a clinical test of listening effort.

    In an ongoing study, the AIST is being evaluated in relation to other auditory and cognitive measures: baseline audiometry (the audiogram), a speech-in-noise test (Hagerman sentences), a text-based dual processing and storage test (reading span), an updating test (letter memory), and a subjective rating of listening effort. Data from this study will be presented.

  • 89.
    Rönnberg, Niklas
    et al.
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Stenfelt, Stefan
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Lunner, Thomas
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Health Sciences.
    Testing listening effort for speech comprehension2011Conference paper (Other academic)
    Abstract [en]

    One aim of hearing aid fitting is to reduce the effort of understanding speech, especially in noisy environments. For a good hearing aid fitting, knowledge about the patient’s auditory abilities is necessary, but knowledge about cognitive abilities may also be important.

     

    The Auditory Inference Span Test (AIST) is a dual-task hearing-in-noise test that combines auditory and memory processing. In AIST, Hagerman sentences are presented in steady state speech-shaped noise at -2 dB, -4 dB, or -6 dB SNR, and the subject is required to recall and process the information from the sentences by giving button-press responses to multiple-choice questions, thereby assessing what the subject could infer from what was heard.

     

    The AIST will be administered to 40 normal-hearing subjects (29 to date), and performance will be related to speech reception threshold, working memory capacity, and updating ability, as well as to subjective ratings of listening effort. Preliminary results show a greater SNR-related improvement in AIST scores at low SNRs than can be explained by improved audibility alone, consistent with a release of memory resources due to reduced listening effort. There is also a trend towards a positive relationship between AIST scores and individual working memory capacity and updating ability.
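    The AIST records above describe a dual-task design in which accuracy on multiple-choice questions and reaction time are collected per signal-to-noise ratio and cognitive-load level. A minimal Python sketch of how such trial data could be summarized into per-condition accuracy and median reaction time follows; the trial records and field layout are invented for illustration and are not the authors' implementation.

        # Minimal scoring sketch: per-condition accuracy and median reaction time
        # from AIST-style trial records (invented example data).
        from collections import defaultdict
        from statistics import median

        trials = [
            # (snr_db, load_level, correct, reaction_time_s)
            (-2, 1, True, 1.4), (-2, 2, True, 1.9), (-2, 3, False, 2.8),
            (-4, 1, True, 1.6), (-4, 2, False, 2.4), (-4, 3, False, 3.1),
            (-6, 1, True, 1.8), (-6, 2, False, 2.9), (-6, 3, False, 3.5),
        ]

        by_condition = defaultdict(list)
        for snr, load, correct, rt in trials:
            by_condition[(snr, load)].append((correct, rt))

        for (snr, load), results in sorted(by_condition.items()):
            accuracy = sum(c for c, _ in results) / len(results)
            med_rt = median(rt for _, rt in results)
            print(f"SNR {snr:+d} dB, load {load}: accuracy {accuracy:.0%}, median RT {med_rt:.2f} s")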

  • 90.
    Simonsson, Maria
    et al.
    Linköping University, Faculty of Educational Sciences. Linköping University, Department of Social and Welfare Studies.
    Eckert, Gisela
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability.
    Hur går det för glutenintoleranta barn i förskolan?2006In: Mat vitaminer och bättre hälsa vid celiaki. Den 8:e Celiaki dagen i Norrköping,2006, 2006Conference paper (Other academic)
  • 91.
    Wass, Malin
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Lyxell, Björn
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences. Östergötlands Läns Landsting, Reconstruction Centre, Department of ENT - Head and Neck Surgery UHL.
    Sahlén, Birgitta
    Department of Logopedics, Lund University, Sweden.
    Asker-Árnason, Lena
    Department of Logopedics, Lund University, Sweden.
    Ibertsson, Tina
    Department of Logopedics, Lund University, Sweden.
    Mäki-Torkko, Elina
    Linköping University, Department of Clinical and Experimental Medicine, Oto-Rhino-Laryngology and Head & Neck Surgery. Linköping University, Faculty of Health Sciences.
    Hällgren, Mattias
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Larsby, Birgitta
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Cognitive Skills and Reading Ability in Children with Cochlear Implants2010In: Cochlear Implants International, ISSN 1467-0100, E-ISSN 1754-7628, Vol. 11, no Suppl. 1, p. 395-398Article in journal (Refereed)
    Abstract [en]

    n/a

  • 92.
    Zekveld, Adriana
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Festen, Joost M
    Vrije University of Amsterdam Medical Centre.
    van Beek, Johannes H M
    Vrije University of Amsterdam Medical Centre.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    The Influence of Semantically Related and Unrelated Text Cues on the Intelligibility of Sentences in Noise2011In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 32, no 6, p. E16-E25Article in journal (Refereed)
    Abstract [en]

    Objectives: In two experiments with different subject groups, we explored the relationship between semantic context and intelligibility by examining the influence of visually presented, semantically related, and unrelated three-word text cues on perception of spoken sentences in stationary noise across a range of speech-to-noise ratios (SNRs). In addition, in Experiment (Exp) 2, we explored the relationship between individual differences in cognitive factors and the effect of the cues on speech intelligibility.

    Design: In Exp 1, cues had been generated by participants themselves in a previous test session (own) or by someone else (alien). These cues were either appropriate for that sentence (match) or for a different sentence (mismatch). A condition with nonword cues, generated by the experimenter, served as a control. Experimental sentences were presented at three SNRs (dB SNR) corresponding to the entirely correct repetition of 29%, 50%, or 71% of sentences (speech reception thresholds; SRTs). In Exp 2, semantically matching or mismatching cues and nonword cues were presented before sentences at SNRs corresponding to SRTs of 16% and 29%. The participants in Exp 2 also performed tests of verbal working memory capacity and the ability to read partially masked text.

    Results: In Exp 1, matching cues improved perception relative to the nonword and mismatching cues, with largest benefits at the SNR corresponding to 29% performance in the SRT task. Mismatching cues did not impair speech perception relative to the nonword cue condition, and no difference in the effect of own and alien matching cues was observed. In Exp 2, matching cues improved speech perception as measured using both the percentage of correctly reported words and the percentage of entirely correctly reported sentences. Mismatching cues reduced the percentage of repeated words (but not the sentence-based scores) compared with the nonword cue condition. Working memory capacity and ability to read partly masked sentences were positively associated with the number of sentences repeated entirely correctly in the mismatch condition at the 29% SNR.

    Conclusions: In difficult listening conditions, both relevant and irrelevant semantic context can influence speech perception in noise. High working memory capacity and good linguistic skills are associated with a greater ability to inhibit irrelevant context when uncued sentence intelligibility is around 29% correct.
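    The SNRs in this study were chosen to correspond to speech reception thresholds (SRTs), i.e. the SNRs at which a fixed proportion of sentences (29%, 50%, or 71%) is repeated entirely correctly. The Python sketch below illustrates that mapping by inverting a logistic psychometric function; the midpoint and slope values are invented, and the actual SRTs in the study were measured empirically.

        # Illustrative mapping between target proportion correct and SNR, assuming a
        # logistic psychometric function with invented midpoint and slope.
        import math

        def snr_for_proportion(target: float, midpoint_db: float = -6.0, slope: float = 0.8) -> float:
            """SNR (dB) at which the logistic function reaches the target proportion correct."""
            return midpoint_db + math.log(target / (1.0 - target)) / slope

        for p in (0.29, 0.50, 0.71):
            print(f"SRT for {p:.0%} of sentences entirely correct: {snr_for_proportion(p):.1f} dB SNR")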
