liu.se — Search for publications in DiVA
Rönnberg, Jerker
Publications (10 of 677)
Zekveld, A., Kramer, S. E., Rönnberg, J. & Rudner, M. (2019). In a Concurrent Memory and Auditory Perception Task, the Pupil Dilation Response Is More Sensitive to Memory Load Than to Auditory Stimulus Characteristics. Ear and Hearing, 40(2), 272-286
2019 (English). In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 40, no. 2, p. 272-286. Article in journal (Refereed). Published.
Abstract [en]

Objectives: Speech understanding may be cognitively demanding, but it can be enhanced when semantically related text cues precede auditory sentences. The present study aimed to determine whether (a) providing text cues reduces pupil dilation, a measure of cognitive load, during listening to sentences, (b) repeating the sentences aloud affects recall accuracy and pupil dilation during recall of cue words, and (c) semantic relatedness between cues and sentences affects recall accuracy and pupil dilation during recall of cue words.

Design: Sentence repetition following text cues and recall of the text cues were tested. Twenty-six participants (mean age, 22 years) with normal hearing listened to masked sentences. On each trial, a set of four word cues was presented visually as text preceding the auditory presentation of a sentence whose meaning was either related or unrelated to the cues. Participants first read the cue words, then listened to the sentence. Following this, they spoke aloud either the cue words or the sentence, according to instruction, and finally, on all trials, orally recalled the cues. Peak pupil dilation was measured throughout listening and recall on each trial. Additionally, participants completed a test measuring the ability to perceive degraded verbal text information and three working memory tests (a reading span test, a size-comparison span test, and a test of memory updating).

Results: Cue words that were semantically related to the sentence facilitated sentence repetition but did not reduce pupil dilation. Recall was poorer and there were more intrusion errors when the cue words were related to the sentences. Recall was also poorer when sentences were repeated aloud. Both behavioral effects were associated with greater pupil dilation. Larger reading span capacity and smaller size-comparison span were associated with larger peak pupil dilation during listening. Furthermore, larger reading span and greater memory updating ability were both associated with better cue recall overall.

Conclusions: Although sentence-related word cues facilitate sentence repetition, our results indicate that they do not reduce cognitive load during listening in noise with a concurrent memory load. As expected, higher working memory capacity was associated with better recall of the cues. Unexpectedly, however, semantic relatedness with the sentence reduced word cue recall accuracy and increased intrusion errors, suggesting an effect of semantic confusion. Further, speaking the sentence aloud also reduced word cue recall accuracy, probably due to articulatory suppression. Importantly, imposing a memory load during listening to sentences resulted in the absence of formerly established strong effects of speech intelligibility on the pupil dilation response. This nullified intelligibility effect demonstrates that the pupil dilation response to a cognitive (memory) task can completely overshadow the effect of perceptual factors on the pupil dilation response. This highlights the importance of taking cognitive task load into account during auditory testing.

Place, publisher, year, edition, pages
Lippincott Williams & Wilkins, 2019
Keywords
Listening effort; Memory processing; Pupil dilation response; Speech perception
National Category
Other Medical Sciences not elsewhere specified
Identifiers
urn:nbn:se:liu:diva-155558 (URN); 10.1097/AUD.0000000000000612 (DOI); 000459769700006 (ISI); 29923867 (PubMedID); 2-s2.0-85061056453 (Scopus ID)
Note

Funding Agencies: Swedish Research Council

Available from: 2019-03-26. Created: 2019-03-26. Last updated: 2019-06-27. Bibliographically approved.
Signoret, C., Blomberg, R., Dahlström, Ö., Rudner, M. & Rönnberg, J. (2018). Modulation of the neural expectation violation marker during speech perception in noise. Paper presented at MEGNord 2018 Conference, Stockholm, Sweden, May 16-18, 2018.
2018 (English). Conference paper, Poster (with or without abstract) (Other academic).
National Category
Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:liu:diva-159500 (URN)
Conference
MEGNord 2018 Conference, Stockholm, Sweden, May 16-18 2018
Available from: 2019-08-09. Created: 2019-08-09. Last updated: 2019-08-20. Bibliographically approved.
Zekveld, A. A., Pronk, M., Danielsson, H. & Rönnberg, J. (2018). Reading Behind the Lines: The Factors Affecting the Text Reception Threshold in Hearing Aid Users. Journal of Speech, Language and Hearing Research, 61(3), 762-775
2018 (English). In: Journal of Speech, Language and Hearing Research, ISSN 1092-4388, E-ISSN 1558-9102, Vol. 61, no. 3, p. 762-775. Article in journal (Refereed). Published.
Abstract [en]

Purpose: The visual Text Reception Threshold (TRT) test (Zekveld et al., 2007) was designed to assess modality-general factors relevant for speech perception in noise. In the last decade, the test has been adopted in audiology labs worldwide. The first aim of this study was to examine which factors best predict interindividual differences in the TRT. Second, we aimed to assess the relationships between the TRT and the speech reception thresholds (SRTs) estimated in various conditions.

Method: First, we reviewed studies reporting relationships between the TRT and auditory and/or cognitive factors and formulated specific hypotheses regarding the TRT predictors. These hypotheses were tested using a prediction model applied to a rich data set of 180 hearing aid users. In separate association models, we tested the relationships between the TRT and the various SRTs and subjective hearing difficulties, while taking into account potential confounding variables.

Results: The results of the prediction model indicate that the TRT is predicted by the ability to fill in missing words in incomplete sentences, by lexical access speed, and by working memory capacity. Furthermore, in line with previous studies, a moderate association between higher age, poorer pure-tone hearing acuity, and poorer TRTs was observed. Better TRTs were associated with better SRTs for the correct perception of 50% of Hagerman matrix sentences in a 4-talker babble, as well as with better subjective ratings of speech perception. Age and pure-tone hearing thresholds significantly confounded these associations. The associations of the TRT with SRTs estimated in other conditions, and with subjective qualities of hearing, were not statistically significant when adjusting for age and pure-tone average.

Conclusions: We conclude that the abilities tapped by the TRT test include processes relevant for speeded lexical decision making when completing partly masked sentences, and that these processes require working memory capacity. Furthermore, the TRT is associated with the SRT of hearing aid users as estimated in a challenging condition that includes informational masking, and with experienced difficulties with speech perception in daily-life conditions. The current results underline the value of using the TRT test in studies involving speech perception and aid in the interpretation of findings acquired using the test.

Place, publisher, year, edition, pages
American Speech-Language-Hearing Association, 2018
National Category
Psychology
Identifiers
urn:nbn:se:liu:diva-146119 (URN); 10.1044/2017_JSLHR-H-17-0196 (DOI); 000428251900022 (ISI); 29450534 (PubMedID)
Note

Funding Agencies: This research was supported by Linnaeus Centre HEAD excellence center grant 349-2007-8654 from the Swedish Research Council and by a program grant from FORTE (grant 2012-1693), awarded to Jerker Rönnberg.

Available from: 2018-03-28 Created: 2018-03-28 Last updated: 2019-06-27
Signoret, C., Blomberg, R., Dahlström, Ö., Andersen, L. M., Lundqvist, D., Rudner, M. & Rönnberg, J. (2018). Resolving discrepancies between incoming auditory information and linguistic expectations. In: Neuroscience 2018: 48th annual meeting of Society for Neuroscience. Paper presented at 48th annual meeting of Society for Neuroscience, San Diego, CA, USA, Nov 3-7, 2018. Society for Neuroscience
2018 (English). In: Neuroscience 2018: 48th annual meeting of Society for Neuroscience. Society for Neuroscience, 2018. Conference paper, Poster (with or without abstract) (Other academic).
Abstract [en]

Speech perception in noise is dependent on stimulus-driven and knowledge-driven processes. Here we investigate the neural correlates and time course of discrepancies between incoming auditory information (i.e. stimulus-driven processing) and linguistic expectations (knowledge-driven processing) in an MEG study including 20 normal-hearing adults. Participants read 48 rhyming sentence pairs beforehand. In the scanner, they listened to sentences that corresponded exactly to the read sentences, except that the last word (presented after a 1600-ms delay and with 50% intelligibility) was correct in only half of the cases. Otherwise, it was 1) phonologically but not semantically related, 2) semantically but not phonologically related, or 3) neither phonologically nor semantically related to the sentence. Participants indicated by button press whether the last word matched the sentence they had read outside the scanner. Behavioural results showed more errors in condition 1 than in conditions 2 or 3, suggesting that phonological compatibility overrides semantic discrepancy when intelligibility is poor. Event-related field analysis demonstrated larger activity at frontal sites for correct than for unrelated words, suggesting that the former were more strongly expected than the latter. An early M170 component was also observed, possibly reflecting expectation violation in the auditory modality. Dipole analysis will reveal whether the M170 is modulated by the type of linguistic discrepancy. Distributed-network analysis will further our understanding of the time course and neural correlates of discrepancies between incoming auditory information and linguistic expectations.

Place, publisher, year, edition, pages
Society for Neuroscience, 2018
National Category
Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:liu:diva-159499 (URN)
Conference
48th annual meeting of Society for Neuroscience, San Diego, CA, USA, Nov 3-7, 2018
Available from: 2019-08-09. Created: 2019-08-09. Last updated: 2019-08-09. Bibliographically approved.
Shirnin, D., Lyxell, B., Dahlström, Ö., Blomberg, R., Rudner, M., Rönnberg, J. & Signoret, C. (2017). Speech perception in noise: prediction patterns of neural pre-activation in lexical processing. Paper presented at the Fourth International Conference on Cognitive Hearing Science for Communication (CHSCOM2017), Linköping, Sweden, June 18-22, 2017. Swedish Institute for Disability Research, Linköping University, Article ID 65.
2017 (English). Conference paper, Poster (with or without abstract) (Other academic).
Abstract [en]

The purpose of this study is to examine whether the neural correlates of lexical expectations can be used to predict speech perception in noise. We analyse magnetoencephalography (MEG) data from 20 normal-hearing participants, who read a set of couplets (pairs of phrases with rhyming end words) prior to the experiment. During the experiment, the participants are asked to listen to the couplets, whose intelligibility is set to 80%. However, the last word is pronounced with a delay of 1600 ms (i.e. an expectation gap) and is masked at 50% intelligibility. At the end of each couplet, the participants are asked to indicate whether the last word was correct, i.e. corresponded to the expected word. Given the oscillatory characteristics of neural patterns of lexical expectations during the expectation gap, can we predict the participant's actual perception of the last word? To approach this research question, we aim to identify correlation patterns between instances of neural pre-activation occurring during the expectation gap and the type of answer given. According to the sequential design of the experiment, the expectation gap is placed 4400 ms prior to the time interval dedicated to the participant's answer. A machine learning approach was chosen as the main tool for pattern recognition.

Place, publisher, year, edition, pages
Swedish Institute for Disability Research, Linköping University, 2017
National Category
Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:liu:diva-159501 (URN)
Conference
Fourth International Conference on Cognitive Hearing Science for Communication (CHSCOM2017), Linköping, Sweden, June 18-22, 2017
Available from: 2019-08-09. Created: 2019-08-09. Last updated: 2019-08-09. Bibliographically approved.
Stenfelt, S., Lunner, T., Ng, E., Lidestam, B., Zekveld, A., Sörqvist, P., . . . Rönnberg, J. (2016). Auditory, signal processing, and cognitive factors influencing speech perception in persons with hearing loss fitted with hearing aids – the N200 study. Paper presented at IHCON2016, International Hearing Aid Research Conference, Tahoe City, California, USA, August 10-14, 2016. Article ID B46.
2016 (English). Conference paper, Poster (with or without abstract) (Other academic).
Abstract [en]

Objective: The aim of the current study was to assess aided speech-in-noise outcomes and relate those measures to auditory sensitivity and processing, different types of cognitive processing abilities, and signal processing in hearing aids.

Material and method: Participants were 200 hearing aid wearers (mean age 60.8 years; 43% female) with average hearing thresholds in the better ear of 37.4 dB HL. Tests of auditory function included hearing thresholds, DPOAEs, tests of fine-structure processing, IHC dead regions, spectro-temporal modulation, and speech recognition in quiet (PB words). Tests of cognitive function included tests of phonological skills, working memory, executive functions and inference-making abilities, and general cognitive tests (e.g., tests of cognitive decline and IQ). The outcome test variables were the Hagerman sentences at 50% and 80% speech recognition levels, using two different noises (stationary speech-weighted noise and 4-talker babble) and three types of signal processing (linear gain, fast-acting compression, and linear gain plus a non-ideal binary mask). Another sentence test included typical and atypical sentences with contextual cues, tested both audio-visually and in an auditory-only mode. Moreover, HINT and SSQ were administered.

Analysis: Factor analyses were performed separately for the auditory, cognitive, and outcome tests.

Results: The auditory tests resulted in two factors labeled SENSITIVITY and TEMPORAL FINE STRUCTURE, the cognitive tests in one factor (COGNITION), and the outcome tests in two factors, NO CONTEXT and CONTEXT, reflecting the level of context in the different outcome tests. When age was partialled out, COGNITION was moderately correlated with the TEMPORAL FINE STRUCTURE and NO CONTEXT factors but only weakly correlated with the CONTEXT factor. SENSITIVITY correlated weakly with TEMPORAL FINE STRUCTURE and CONTEXT, and moderately with NO CONTEXT, while TEMPORAL FINE STRUCTURE showed a weak correlation with CONTEXT and a moderate correlation with NO CONTEXT. CONTEXT and NO CONTEXT had a moderate correlation. Moreover, the overall results of the Hagerman sentences showed 0.9 dB worse SNR with fast-acting compression compared with linear gain, and 5.5 dB better SNR with linear gain plus noise reduction compared with linear gain only.

Conclusions: For hearing aid wearers, the ability to recognize speech in noise is associated with both sensory and cognitive processing abilities when the speech materials have low internal context. These associations are less prominent when the speech material has contextual cues.

National Category
Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:liu:diva-159504 (URN)
Conference
IHCON2016, International Hearing Aid Research Conference, Tahoe City, California, USA, August 10–14, 2016
Available from: 2019-08-09. Created: 2019-08-09. Last updated: 2019-08-09. Bibliographically approved.
Rudner, M., Keidser, G., Hygge, S. & Rönnberg, J. (2016). Better visuospatial working memory in adults who report profound deafness compared to those with normal or poor hearing: data from the UK Biobank resource. Ear and Hearing, 37(5), 620-622
2016 (English). In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 37, no. 5, p. 620-622. Article in journal (Refereed). Published.
Abstract [en]

Experimental work has shown better visuospatial working memory (VSWM) in profoundly deaf individuals than in those with normal hearing. Other data, including from the UK Biobank resource, show poorer VSWM in individuals with poorer hearing. Using the same database, the authors investigated VSWM in individuals who reported profound deafness. Included in this study were 112 participants who were profoundly deaf, 1,310 with poor hearing, and 74,635 with normal hearing. All participants performed a card-pair matching task as a test of VSWM. Although variance in VSWM performance was large among profoundly deaf participants, at the group level it was superior to that of participants with both normal and poor hearing. VSWM in adults is related to hearing status, but the association is not linear. Future studies should investigate the mechanism behind enhanced VSWM in profoundly deaf adults.

Place, publisher, year, edition, pages
Lippincott Williams & Wilkins, 2016
National Category
Other Health Sciences
Identifiers
urn:nbn:se:liu:diva-126479 (URN); 10.1097/AUD.0000000000000314 (DOI); 000395797700020 (ISI); 27232076 (PubMedID)
Available from: 2016-03-29 Created: 2016-03-29 Last updated: 2017-11-30
Moradi, S., Lidestam, B. & Rönnberg, J. (2016). Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli. Trends in Hearing, 20, Article ID 2331216516653355.
2016 (English). In: Trends in Hearing, ISSN 2331-2165, Vol. 20, article id 2331216516653355. Article in journal (Refereed). Published.
Abstract [en]

The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy for audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context.

Place, publisher, year, edition, pages
Sage Publications, 2016
Keywords
audiovisual speech perception; EHA users; ENH listeners; gating paradigm
National Category
General Language Studies and Linguistics
Identifiers
urn:nbn:se:liu:diva-130430 (URN); 10.1177/2331216516653355 (DOI); 000379790000003 (ISI); 27317667 (PubMedID)
Note

Funding Agencies: Swedish Research Council [349-2007-8654]

Available from: 2016-08-07 Created: 2016-08-05 Last updated: 2018-01-10
Sörqvist, P., Dahlström, Ö., Karlsson, T. & Rönnberg, J. (2016). Concentration: The Neural Underpinnings of How Cognitive Load Shields Against Distraction. Frontiers in Human Neuroscience, 10(221)
2016 (English). In: Frontiers in Human Neuroscience, ISSN 1662-5161, E-ISSN 1662-5161, Vol. 10, no. 221. Article in journal (Refereed). Published.
Abstract [en]

Whether cognitive load and other aspects of task difficulty increase or decrease distractibility is the subject of much debate in contemporary psychology. One camp argues that cognitive load usurps executive resources, which otherwise could be used for attentional control, and that cognitive load therefore increases distraction. The other camp argues that cognitive load demands high levels of concentration (focal task engagement), which suppresses peripheral processing and therefore decreases distraction. In this article, we employed a functional magnetic resonance imaging (fMRI) protocol to explore whether higher cognitive load in a visually presented task suppresses task-irrelevant auditory processing in cortical and subcortical areas. The results show that selectively attending to an auditory stimulus facilitates its neural processing in the auditory cortex, and that switching the locus of attention to the visual modality decreases the neural response in the auditory cortex. When the cognitive load of the task presented in the visual modality increases, the neural response to the auditory stimulus is further suppressed, along with increased activity in networks related to effortful attention. Taken together, the results suggest that higher cognitive load decreases peripheral processing of task-irrelevant information, which decreases distractibility as a side effect of the increased activity in a focused-attention network.

Place, publisher, year, edition, pages
Frontiers Media SA, 2016
Keywords
working memory; selective attention; concentration; cognitive load; distraction
National Category
Neurosciences
Identifiers
urn:nbn:se:liu:diva-129158 (URN); 10.3389/fnhum.2016.00221 (DOI); 000376059100002 (ISI); 27242485 (PubMedID)
Note

Funding Agencies: Stiftelsen Riksbankens Jubileumsfond [P11-0617:1]; Swedish Research Council [2015-01116]

Available from: 2016-06-13 Created: 2016-06-13 Last updated: 2018-04-07
Danielsson, H., Henry, L., Messer, D., Carney, D. P. J. & Rönnberg, J. (2016). Developmental delays in phonological recoding among children and adolescents with Down syndrome and Williams syndrome. Research in Developmental Disabilities, 55, 64-76
2016 (English). In: Research in Developmental Disabilities, ISSN 0891-4222, E-ISSN 1873-3379, Vol. 55, p. 64-76. Article in journal (Refereed). Published.
Abstract [en]

This study examined the development of phonological recoding in short-term memory (STM) span tasks among two clinical groups with contrasting STM and language profiles: those with Down syndrome (DS) and Williams syndrome (WS). Phonological recoding was assessed by comparing: (1) performance on phonologically similar and dissimilar items (phonological similarity effects, PSE); and (2) items with short and long names (word length effects, WLE). Participant groups included children and adolescents with DS (n = 29), WS (n = 25) and typical development (n = 51), all with average mental ages around 6 years. The group with WS, contrary to predictions based on their relatively strong verbal STM and language abilities, showed no evidence for phonological recoding. Those in the group with DS, with weaker verbal STM and language abilities, showed positive evidence for phonological recoding (PSE), but to a lesser degree than the typical group (who showed PSE and WLE). These findings provide new information about the memory systems of these groups of children and adolescents, and suggest that STM processes involving phonological recoding do not fit with the usual expectations of the abilities of children and adolescents with WS and DS.

Place, publisher, year, edition, pages
Pergamon-Elsevier Science Ltd, 2016
Keywords
Down syndrome; Williams syndrome; Phonological recoding; Phonological similarity effect; Word length effect; Visual similarity effect; Short-term memory
National Category
Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:liu:diva-130262 (URN); 10.1016/j.ridd.2016.03.012 (DOI); 000378455100007 (ISI); 27043367 (PubMedID)
Note

Funding Agencies: Swedish Research Council for Health, Working Life and Welfare [FAS 2010-0739]

Available from: 2016-08-01 Created: 2016-07-28 Last updated: 2017-11-28