liu.se — Search for publications in DiVA
Rönnberg, Jerker
Publications (10 of 671)
Rudner, M., Keidser, G., Hygge, S. & Rönnberg, J. (2016). Better visuospatial working memory in adults who report profound deafness compared to those with normal or poor hearing: data from the UK Biobank resource. Ear and Hearing, 37(5), 620-622.
Better visuospatial working memory in adults who report profound deafness compared to those with normal or poor hearing: data from the UK Biobank resource
2016 (English). In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 37, no. 5, pp. 620-622. Article in journal (Refereed). Published.
Abstract [en]

Experimental work has shown better visuospatial working memory (VSWM) in profoundly deaf individuals compared to those with normal hearing. Other data, including the UK Biobank resource, show poorer VSWM in individuals with poorer hearing. Using the same database, the authors investigated VSWM in individuals who reported profound deafness. Included in this study were 112 participants who were profoundly deaf, 1310 with poor hearing, and 74,635 with normal hearing. All participants performed a card-pair matching task as a test of VSWM. Although variance in VSWM performance was large among profoundly deaf participants, at group level it was superior to that of participants with normal and with poor hearing. VSWM in adults is related to hearing status, but the association is not linear. Future studies should investigate the mechanisms behind enhanced VSWM in profoundly deaf adults.

Place, publisher, year, edition, pages
Lippincott Williams & Wilkins, 2016
National Category
Other Health Sciences
Identifiers
urn:nbn:se:liu:diva-126479 (URN); 10.1097/AUD.0000000000000314 (DOI); 000395797700020 (); 27232076 (PubMedID)
Available from: 2016-03-29 Created: 2016-03-29 Last updated: 2017-11-30
Moradi, S., Lidestam, B. & Rönnberg, J. (2016). Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli. Trends in Hearing, 20, Article ID 2331216516653355.
Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli
2016 (English). In: Trends in Hearing, ISSN 2331-2165, Vol. 20, article id 2331216516653355. Article in journal (Refereed). Published.
Abstract [en]

The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context.
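The isolation-point measure defined in this abstract (the shortest gate duration at which a listener identifies a stimulus correctly, without changing the answer at any longer gate) can be illustrated with a short sketch. The data and the `isolation_point` helper below are hypothetical illustrations of the gating paradigm, not the authors' materials or analysis code:

```python
# Minimal sketch (hypothetical data): computing an isolation point (IP)
# from gated-presentation responses. The IP is taken as the shortest gate
# duration at which the answer is correct and stays correct at all
# longer gates.

def isolation_point(responses, target):
    """responses: list of (gate_ms, answer) pairs in increasing gate order."""
    ip = None
    for gate_ms, answer in responses:
        if answer == target:
            if ip is None:
                ip = gate_ms  # first gate of a so-far-stable correct run
        else:
            ip = None  # answer changed, so earlier correct gates do not count
    return ip  # None if the stimulus was never stably identified

trial = [(40, "ba"), (80, "da"), (120, "ga"), (160, "ga"), (200, "ga")]
print(isolation_point(trial, "ga"))  # 120
```

Under this definition, a longer IP means the listener needed more of the initial speech signal before settling on the correct identification.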

Place, publisher, year, edition, pages
SAGE Publications, 2016
Keyword
audiovisual speech perception; EHA users; ENH listeners; gating paradigm
National Category
General Language Studies and Linguistics
Identifiers
urn:nbn:se:liu:diva-130430 (URN); 10.1177/2331216516653355 (DOI); 000379790000003 (); 27317667 (PubMedID)
Note

Funding agencies: Swedish Research Council [349-2007-8654]

Available from: 2016-08-07 Created: 2016-08-05 Last updated: 2018-01-10
Sörqvist, P., Dahlström, Ö., Karlsson, T. & Rönnberg, J. (2016). Concentration: The Neural Underpinnings of How Cognitive Load Shields Against Distraction. Frontiers in Human Neuroscience, 10, Article ID 221.
Concentration: The Neural Underpinnings of How Cognitive Load Shields Against Distraction
2016 (English). In: Frontiers in Human Neuroscience, ISSN 1662-5161, E-ISSN 1662-5161, Vol. 10, article id 221. Article in journal (Refereed). Published.
Abstract [en]

Whether cognitive load and other aspects of task difficulty increase or decrease distractibility is the subject of much debate in contemporary psychology. One camp argues that cognitive load usurps executive resources, which could otherwise be used for attentional control, and that cognitive load therefore increases distraction. The other camp argues that cognitive load demands high levels of concentration (focal task engagement), which suppresses peripheral processing and therefore decreases distraction. In this article, we employed a functional magnetic resonance imaging (fMRI) protocol to explore whether higher cognitive load in a visually presented task suppresses task-irrelevant auditory processing in cortical and subcortical areas. The results show that selectively attending to an auditory stimulus facilitates its neural processing in the auditory cortex, and that switching the locus of attention to the visual modality decreases the neural response in the auditory cortex. When the cognitive load of the task presented in the visual modality increases, the neural response to the auditory stimulus is further suppressed, along with increased activity in networks related to effortful attention. Taken together, the results suggest that higher cognitive load decreases peripheral processing of task-irrelevant information, which decreases distractibility as a side effect of the increased activity in a focused-attention network.

Place, publisher, year, edition, pages
Frontiers Media SA, 2016
Keyword
working memory; selective attention; concentration; cognitive load; distraction
National Category
Neurosciences
Identifiers
urn:nbn:se:liu:diva-129158 (URN); 10.3389/fnhum.2016.00221 (DOI); 000376059100002 (); 27242485 (PubMedID)
Note

Funding agencies: Stiftelsen Riksbankens Jubileumsfond [P11-0617:1]; Swedish Research Council [2015-01116]

Available from: 2016-06-13 Created: 2016-06-13 Last updated: 2018-01-10
Danielsson, H., Henry, L., Messer, D., Carney, D. P. J. & Rönnberg, J. (2016). Developmental delays in phonological recoding among children and adolescents with Down syndrome and Williams syndrome. Research in Developmental Disabilities, 55, 64-76.
Developmental delays in phonological recoding among children and adolescents with Down syndrome and Williams syndrome
2016 (English). In: Research in Developmental Disabilities, ISSN 0891-4222, E-ISSN 1873-3379, Vol. 55, pp. 64-76. Article in journal (Refereed). Published.
Abstract [en]

This study examined the development of phonological recoding in short-term memory (STM) span tasks among two clinical groups with contrasting STM and language profiles: those with Down syndrome (DS) and Williams syndrome (WS). Phonological recoding was assessed by comparing: (1) performance on phonologically similar and dissimilar items (phonological similarity effects, PSE); and (2) items with short and long names (word length effects, WLE). Participant groups included children and adolescents with DS (n = 29), WS (n = 25) and typical development (n = 51), all with average mental ages around 6 years. The group with WS, contrary to predictions based on their relatively strong verbal STM and language abilities, showed no evidence for phonological recoding. Those in the group with DS, with weaker verbal STM and language abilities, showed positive evidence for phonological recoding (PSE), but to a lesser degree than the typical group (who showed PSE and WLE). These findings provide new information about the memory systems of these groups of children and adolescents, and suggest that STM processes involving phonological recoding do not fit with the usual expectations of the abilities of children and adolescents with WS and DS.

Place, publisher, year, edition, pages
Pergamon-Elsevier Science Ltd, 2016
Keyword
Down syndrome; Williams syndrome; Phonological recoding; Phonological similarity effect; Word length effect; Visual similarity effect; Short-term memory
National Category
Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:liu:diva-130262 (URN); 10.1016/j.ridd.2016.03.012 (DOI); 000378455100007 (); 27043367 (PubMedID)
Note

Funding agencies: Swedish Research Council for Health, Working Life and Welfare [FAS 2010-0739]

Available from: 2016-08-01 Created: 2016-07-28 Last updated: 2017-11-28
Cardin, V., Smittenaar, R. C., Orfanidou, E., Rönnberg, J., Capek, C. M., Rudner, M. & Woll, B. (2016). Differential activity in Heschl's gyrus between deaf and hearing individuals is due to auditory deprivation rather than language modality. NeuroImage, 124, 96-106.
Differential activity in Heschl's gyrus between deaf and hearing individuals is due to auditory deprivation rather than language modality
2016 (English). In: NeuroImage, ISSN 1053-8119, E-ISSN 1095-9572, Vol. 124, pp. 96-106. Article in journal (Refereed). Published.
Abstract [en]

Sensory cortices undergo crossmodal reorganisation as a consequence of sensory deprivation. Congenital deafness in humans represents a particular case with respect to other types of sensory deprivation, because cortical reorganisation is not only a consequence of auditory deprivation, but also of language-driven mechanisms. Visual crossmodal plasticity has been found in secondary auditory cortices of deaf individuals, but it is still unclear if reorganisation also takes place in primary auditory areas, and how this relates to language modality and auditory deprivation.

Here, we dissociated the effects of language modality and auditory deprivation on crossmodal plasticity in Heschl's gyrus as a whole, and in cytoarchitectonic region Te1.0 (likely to contain the core auditory cortex). Using fMRI, we measured the BOLD response to viewing sign language in congenitally or early deaf individuals with and without sign language knowledge, and in hearing controls.

Results show that differences between hearing and deaf individuals are due to a reduction in activation caused by visual stimulation in the hearing group, which is more significant in Te1.0 than in Heschl's gyrus as a whole. Furthermore, differences between deaf and hearing groups are due to auditory deprivation, and there is no evidence that the modality of language used by deaf individuals contributes to crossmodal plasticity in Heschl's gyrus.

Keyword
Heschl's gyrus; Deafness; Sign language; Speech; fMRI
National Category
Neurosciences
Identifiers
urn:nbn:se:liu:diva-123221 (URN); 10.1016/j.neuroimage.2015.08.073 (DOI); 000366646700011 (); 26348556 (PubMedID)
Note

Funding agencies: Riksbankens Jubileumsfond [P2008-0481:1-E]; Swedish Council for Working Life and Social Research [2008-0846]; Swedish Research Council [349-2007-8654]; Economic and Social Research Council of Great Britain [RES-620-28-6001, RES-620-28-0002]

Available from: 2015-12-08 Created: 2015-12-08 Last updated: 2018-01-10
Rönnberg, J. (2016). Hearing with your ears, listening with your brain. APS Observer, 29(2).
Hearing with your ears, listening with your brain
2016 (English). In: APS Observer, ISSN 1050-4672, Vol. 29, no. 2. Article in journal (Other academic). Published.
National Category
Psychology
Identifiers
urn:nbn:se:liu:diva-126351 (URN)
Available from: 2016-03-22 Created: 2016-03-22 Last updated: 2016-04-11
Cardin, V., Orfanidou, E., Kästner, L., Rönnberg, J., Woll, B., Capek, C. & Rudner, M. (2016). Monitoring Different Phonological Parameters of Sign Language Engages the Same Cortical Language Network but Distinctive Perceptual Ones. Journal of cognitive neuroscience, 28(1), 20-40.
Monitoring Different Phonological Parameters of Sign Language Engages the Same Cortical Language Network but Distinctive Perceptual Ones
2016 (English). In: Journal of Cognitive Neuroscience, ISSN 0898-929X, E-ISSN 1530-8898, Vol. 28, no. 1, pp. 20-40. Article in journal (Refereed). Published.
Abstract [en]

The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine if brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer RTs and stronger activations in an action observation network in all participants and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.

National Category
Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:liu:diva-123220 (URN); 10.1162/jocn_a_00872 (DOI); 000365750400003 (); 26351993 (PubMedID)
Note

Funding agencies: Riksbankens Jubileumsfond [P2008-0481:1-E]; Swedish Council for Working Life and Social Research [2008-0846]; Swedish Research Council (Linnaeus Centre HEAD); Economic and Social Research Council of Great Britain [RES-620-28-6001, RES-620-28-6002]

Available from: 2015-12-08 Created: 2015-12-08 Last updated: 2017-12-01
Ellis, R., Molander, P., Rönnberg, J., Lyxell, B., Andersson, G. & Lunner, T. (2016). Predicting Speech-in-Noise Recognition from Performance on the Trail Making Test: Results from a Large-Scale Internet Study. Ear and Hearing, 37(1), 73-79.
Predicting Speech-in-Noise Recognition from Performance on the Trail Making Test: Results from a Large-Scale Internet Study
2016 (English). In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 37, no. 1, pp. 73-79. Article in journal (Refereed). Published.
Abstract [en]

Objective: The aim of the study was to investigate the utility of an internet-based version of the trail making test (TMT) to predict performance on a speech-in-noise perception task.

Design: Data were taken from a sample of 1509 listeners aged between 18 and 91 years. Participants completed computerized versions of the TMT and an adaptive speech-in-noise recognition test. All testing was conducted via the internet.

Results: The results indicate that better performance on both the simple and complex subtests of the TMT is associated with better speech-in-noise recognition scores. Thirty-eight percent of the participants had scores on the speech-in-noise test that indicated the presence of a hearing loss.

Conclusions: The findings suggest that the TMT may be a useful tool in the assessment, and possibly the treatment, of speech-recognition difficulties. The results indicate that the relation between speech-in-noise recognition and TMT performance relates both to the capacity of the TMT to index processing speed and to the more complex cognitive abilities also implicated in TMT performance.
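The reported association (better, i.e. faster, TMT performance going with better speech-in-noise recognition) is the kind of relation a simple regression captures. The sketch below uses entirely invented numbers and a hand-rolled ordinary least squares helper purely to illustrate the direction of such an association; it is not the study's data or analysis code:

```python
# Hypothetical illustration: regressing a speech-in-noise score on
# TMT completion time. All numbers are invented.

def ols_slope_intercept(x, y):
    """Ordinary least squares fit of y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Invented data: longer TMT times (seconds) pair with poorer
# speech-in-noise recognition scores (% correct).
tmt_time = [20, 25, 30, 35, 40, 45]
sin_score = [90, 85, 82, 78, 74, 70]

a, b = ols_slope_intercept(tmt_time, sin_score)
print(round(b, 2))  # -0.78: negative slope, slower TMT with poorer recognition
```

A negative slope in this toy fit mirrors the direction of the reported relation; the study itself, of course, involves far richer modeling of processing speed and higher-level cognitive abilities.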

Place, publisher, year, edition, pages
Lippincott Williams & Wilkins, 2016
Keyword
Cognition; Internet screening; Speech-in-noise perception; Trail making test
National Category
Otorhinolaryngology; Other Medical Sciences not elsewhere specified
Identifiers
urn:nbn:se:liu:diva-123218 (URN); 10.1097/AUD.0000000000000218 (DOI); 000367343400008 (); 26317162 (PubMedID)
Note

Funding agencies: Swedish Council for Working Life and Social Research (Forte) [2009-0055]

Available from: 2015-12-08 Created: 2015-12-08 Last updated: 2017-05-03
Rudner, M., Orfanidou, E., Cardin, V., Capek, C. M., Woll, B. & Rönnberg, J. (2016). Preexisting semantic representation improves working memory performance in the visuospatial domain. Memory & Cognition, 44(4), 608-620.
Preexisting semantic representation improves working memory performance in the visuospatial domain
2016 (English). In: Memory & Cognition, ISSN 0090-502X, E-ISSN 1532-5946, Vol. 44, no. 4, pp. 608-620. Article in journal (Refereed). Published.
Abstract [en]

Working memory (WM) for spoken language improves when the to-be-remembered items correspond to preexisting representations in long-term memory. We investigated whether this effect generalizes to the visuospatial domain by administering a visual n-back WM task to deaf signers and hearing signers, as well as to hearing nonsigners. Four different kinds of stimuli were presented: British Sign Language (BSL; familiar to the signers), Swedish Sign Language (SSL; unfamiliar), nonsigns, and nonlinguistic manual actions. The hearing signers performed better with BSL than with SSL, demonstrating a facilitatory effect of preexisting semantic representation. The deaf signers also performed better with BSL than with SSL, but only when WM load was high. No effect of preexisting phonological representation was detected. The deaf signers performed better than the hearing nonsigners with all sign-based materials, but this effect did not generalize to nonlinguistic manual actions. We argue that deaf signers, who are highly reliant on visual information for communication, develop expertise in processing sign-based items, even when those items do not have preexisting semantic or phonological representations. Preexisting semantic representation, however, enhances the quality of the gesture-based representations temporarily maintained in WM by this group, thereby releasing WM resources to deal with increased load. Hearing signers, on the other hand, may make strategic use of their speech-based representations for mnemonic purposes. The overall pattern of results is in line with flexible-resource models of WM.

Place, publisher, year, edition, pages
Springer, 2016
Keyword
Working memory; Visuospatial; Sign language; Deafness; Semantic
National Category
Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:liu:diva-126032 (URN); 10.3758/s13421-016-0585-z (DOI); 000374335500007 (); 26800983 (PubMedID)
Note

Funding agencies: Riksbankens jubileumsfond [P2008-0481:1-E]; Economic and Social Research Council of Great Britain [RES-620-28-6001, RES-620-28-0002]

Available from: 2016-03-11 Created: 2016-03-11 Last updated: 2017-11-30
Rudner, M., Mishra, S., Stenfelt, S., Lunner, T. & Rönnberg, J. (2016). Seeing the talker’s face improves free recall of speech for young adults with normal hearing but not older adults with hearing loss. Journal of Speech, Language and Hearing Research, 59, 590-599.
Seeing the talker's face improves free recall of speech for young adults with normal hearing but not older adults with hearing loss
2016 (English). In: Journal of Speech, Language and Hearing Research, ISSN 1092-4388, E-ISSN 1558-9102, Vol. 59, pp. 590-599. Article in journal (Refereed). Published.
Abstract [en]

Purpose Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers.

Method Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13 two-digit numbers, with alternating male and female talkers. Lists were presented in quiet as well as in stationary and speech-like noise at a signal-to-noise ratio giving approximately 90% intelligibility. Amplification compensated for loss of audibility.

Results Seeing the talker's face improved free recall performance for the younger but not the older group. Poorer performance in background noise was contingent on individual differences in working memory capacity. The effect of seeing the talker's face did not differ in quiet and noise.

Conclusions We have argued that the absence of an effect of seeing the talker's face for older adults with hearing loss may be due to modulation of audiovisual integration mechanisms caused by an interaction between task demands and participant characteristics. In particular, we suggest that executive task demands and interindividual executive skills may play a key role in determining the benefit of seeing the talker's face during a speech-based cognitive task.

National Category
Psychology (excluding Applied Psychology); General Language Studies and Linguistics
Identifiers
urn:nbn:se:liu:diva-126019 (URN); 10.1044/2015_JSLHR-H-15-0014 (DOI); 000386781500016 ()
Note

Funding agencies: Swedish Council for Working Life and Social Research [2007-0788].

The previous status of this article was Manuscript, and the working title was "Updating ability reduces the negative effect of noise on memory of speech for persons with age-related hearing loss".

Available from: 2016-03-11 Created: 2016-03-11 Last updated: 2018-01-10. Bibliographically approved.