Rudner, Mary
Publications (10 of 274)
Signoret, C. & Rudner, M. (2019). Hearing impairment and perceived clarity of predictable speech. Ear and Hearing
Hearing impairment and perceived clarity of predictable speech
2019 (English). In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

Objectives: The precision of stimulus-driven information is less critical for comprehension when accurate knowledge-based predictions of the upcoming stimulus can be generated. A recent study in listeners without hearing impairment (HI) has shown that form- and meaning-based predictability independently and cumulatively enhance perceived clarity of degraded speech. In the present study, we investigated whether form- and meaning-based predictability enhanced the perceptual clarity of degraded speech for individuals with moderate to severe sensorineural HI, a group for whom such enhancement may be particularly important.

Design: Spoken sentences with high or low semantic coherence were degraded by noise-vocoding and preceded by matching or nonmatching text primes. Matching text primes allowed generation of form-based predictions while semantic coherence allowed generation of meaning-based predictions.
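
Noise-vocoding removes spectral detail from speech while preserving the slow amplitude envelope within each frequency channel; fewer channels produce more degraded speech. As a rough, purely illustrative sketch (not the authors' stimulus-preparation code; the channel count, filter order, and band edges here are assumptions), a minimal vocoder could look like this:

```python
# Minimal noise-vocoder sketch; parameter values are illustrative only.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=6, f_lo=100.0, f_hi=8000.0):
    """Replace spectral detail with channel envelopes imposed on noise."""
    # Log-spaced band edges spanning the speech-relevant range (f_hi < fs/2).
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    noise = np.random.randn(len(speech))
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        envelope = np.abs(hilbert(band))      # slow amplitude envelope
        carrier = sosfiltfilt(sos, noise)     # noise limited to the same band
        out += envelope * carrier
    return out / np.max(np.abs(out))          # normalise to avoid clipping
```

Varying n_channels is one common way of stepping through levels of degradation such as those used in designs of this kind.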

Results: The results showed that both form- and meaning-based predictions make degraded speech seem clearer to individuals with HI. The benefit of form-based predictions was seen across levels of speech quality and was greater for individuals with HI in the present study than for individuals without HI in our previous study. However, for individuals with HI, the benefit of meaning-based predictions was only apparent when the speech was slightly degraded. When it was more severely degraded, the benefit of meaning-based predictions was only seen when matching text primes preceded the degraded speech. The perceptual clarity benefit of meaning-based predictions was positively related to verbal fluency but not to working memory performance.

Conclusions: Taken together, these results demonstrate that, for individuals with HI, form-based predictability has a robust effect on perceptual clarity that is greater than the effect previously shown for individuals without HI. However, when speech quality is moderately or severely degraded, meaning-based predictability is contingent on form-based predictability. Further, the ability to mobilize the lexicon seems to contribute to the strength of meaning-based predictions. Whereas individuals without HI may be able to devote explicit working memory capacity to storing meaning-based predictions, individuals with HI may already be using all available explicit capacity to process the degraded speech and thus become reliant on explicit skills such as verbal fluency to generate useful meaning-based predictions.

Place, publisher, year, edition, pages
Lippincott Williams & Wilkins, 2019
Keywords
Cognitive abilities, Lexical, Linguistic abilities, Noise-vocoding, Perceptual clarity, Phonological, Predictability, Semantic, Speech
National Category
Psychology
Identifiers
urn:nbn:se:liu:diva-155636 (URN); 10.1097/AUD.0000000000000689 (DOI); 30624251 (PubMedID)
Available from: 2019-03-21 Created: 2019-03-21 Last updated: 2019-03-28. Bibliographically approved.
Zekveld, A., Kramer, S. E., Rönnberg, J. & Rudner, M. (2019). In a Concurrent Memory and Auditory Perception Task, the Pupil Dilation Response Is More Sensitive to Memory Load Than to Auditory Stimulus Characteristics. Ear and Hearing, 40(2), 272-286
In a Concurrent Memory and Auditory Perception Task, the Pupil Dilation Response Is More Sensitive to Memory Load Than to Auditory Stimulus Characteristics
2019 (English). In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 40, no. 2, p. 272-286. Article in journal (Refereed). Published.
Abstract [en]

Objectives: Speech understanding may be cognitively demanding, but it can be enhanced when semantically related text cues precede auditory sentences. The present study aimed to determine whether (a) providing text cues reduces pupil dilation, a measure of cognitive load, during listening to sentences, (b) repeating the sentences aloud affects recall accuracy and pupil dilation during recall of cue words, and (c) semantic relatedness between cues and sentences affects recall accuracy and pupil dilation during recall of cue words.

Design: Sentence repetition following text cues and recall of the text cues were tested. Twenty-six participants (mean age, 22 years) with normal hearing listened to masked sentences. On each trial, a set of four-word cues was presented visually as text preceding the auditory presentation of a sentence whose meaning was either related or unrelated to the cues. On each trial, participants first read the cue words, then listened to a sentence. Following this they spoke aloud either the cue words or the sentence, according to instruction, and finally on all trials orally recalled the cues. Peak pupil dilation was measured throughout listening and recall on each trial. Additionally, participants completed a test measuring the ability to perceive degraded verbal text information and three working memory tests (a reading span test, a size-comparison span test, and a test of memory updating).
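
For orientation, peak pupil dilation is typically computed per trial as the maximum pupil size within an analysis window, expressed relative to a pre-onset baseline. A minimal sketch under assumed window lengths (this is not the authors' pipeline, and real data would first need blink interpolation and filtering):

```python
# Baseline-corrected peak pupil dilation for one trial; window lengths
# and sampling rate are illustrative assumptions.
import numpy as np

def peak_pupil_dilation(trace, fs, t_onset, baseline_s=1.0, window_s=4.0):
    """Peak dilation above the pre-onset baseline.

    trace   : 1-D array of pupil-diameter samples (blinks already interpolated)
    fs      : sampling rate in Hz
    t_onset : stimulus onset in seconds from the start of the trace
    """
    onset = int(t_onset * fs)
    baseline = trace[onset - int(baseline_s * fs):onset].mean()
    window = trace[onset:onset + int(window_s * fs)]
    return window.max() - baseline
```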

Results: Cue words that were semantically related to the sentence facilitated sentence repetition but did not reduce pupil dilation. Recall was poorer and there were more intrusion errors when the cue words were related to the sentences. Recall was also poorer when sentences were repeated aloud. Both behavioral effects were associated with greater pupil dilation. Larger reading span capacity and smaller size-comparison span were associated with larger peak pupil dilation during listening. Furthermore, larger reading span and greater memory updating ability were both associated with better cue recall overall.

Conclusions: Although sentence-related word cues facilitate sentence repetition, our results indicate that they do not reduce cognitive load during listening in noise with a concurrent memory load. As expected, higher working memory capacity was associated with better recall of the cues. Unexpectedly, however, semantic relatedness with the sentence reduced word cue recall accuracy and increased intrusion errors, suggesting an effect of semantic confusion. Further, speaking the sentence aloud also reduced word cue recall accuracy, probably due to articulatory suppression. Importantly, imposing a memory load during listening to sentences resulted in the absence of the previously established strong effects of speech intelligibility on the pupil dilation response. This nullified intelligibility effect demonstrates that the pupil dilation response to a cognitive (memory) task can completely overshadow the effect of perceptual factors, which highlights the importance of taking cognitive task load into account during auditory testing.

Place, publisher, year, edition, pages
Lippincott Williams & Wilkins, 2019
Keywords
Listening effort; Memory processing; Pupil dilation response; Speech perception
National Category
Other Medical Sciences not elsewhere specified
Identifiers
urn:nbn:se:liu:diva-155558 (URN); 10.1097/AUD.0000000000000612 (DOI); 000459769700006 (); 29923867 (PubMedID); 2-s2.0-85061056453 (Scopus ID)
Note

Funding Agencies|Swedish Research Council

Available from: 2019-03-26 Created: 2019-03-26 Last updated: 2019-06-27. Bibliographically approved.
Rudner, M., Lyberg-Ahlander, V., Brännstrom, J., Nirme, J., Pichora-Fuller, M. K. & Sahlen, B. (2018). Listening Comprehension and Listening Effort in the Primary School Classroom. Frontiers in Psychology, 9, Article ID 1193.
Listening Comprehension and Listening Effort in the Primary School Classroom
2018 (English). In: Frontiers in Psychology, ISSN 1664-1078, E-ISSN 1664-1078, Vol. 9, article id 1193. Article in journal (Refereed). Published.
Abstract [en]

In the primary school classroom, children are exposed to multiple factors that combine to create adverse conditions for listening to and understanding what the teacher is saying. Despite the ubiquity of these conditions, there is little knowledge concerning the way in which various factors combine to influence listening comprehension and the effortfulness of listening. The aim of the present study was to investigate the combined effects of background noise, voice quality, and visual cues on children's listening comprehension and effort. To achieve this aim, we performed a set of four well-controlled, yet ecologically valid, experiments with 245 eight-year-old participants. Classroom listening conditions were simulated using a digitally animated talker with a dysphonic (hoarse) voice and background babble noise composed of several children talking. Results show that even low levels of babble noise interfere with listening comprehension, and there was some evidence that this effect was reduced by seeing the talker's face. Dysphonia did not significantly reduce listening comprehension scores, but it was considered unpleasant and made listening seem difficult, probably by reducing motivation to listen. We found some evidence that listening comprehension performance under adverse conditions is positively associated with individual differences in executive function. Overall, these results suggest that multiple factors combine to influence listening comprehension and effort for child listeners in the primary school classroom. The constellation of these room, talker, modality, and listener factors should be taken into account in the planning and design of educational and learning activities.

Place, publisher, year, edition, pages
Frontiers Media SA, 2018
Keywords
effort; motivation; listening comprehension; classroom; context; multi-talker babble noise; dysphonic voice; cognition
National Category
Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:liu:diva-149849 (URN); 10.3389/fpsyg.2018.01193 (DOI); 000438407500001 ()
Note

Funding Agencies|Swedish Research Council; Marcus and Amalia Wallenberg Foundation

Available from: 2018-08-02 Created: 2018-08-02 Last updated: 2018-08-20
Signoret, C., Blomberg, R., Dahlström, Ö., Rudner, M. & Rönnberg, J. (2018). Modulation of the neural expectation violation marker during speech perception in noise. Paper presented at MEGNord 2018 Conference, Stockholm, Sweden, May 16-18, 2018.
Modulation of the neural expectation violation marker during speech perception in noise.
2018 (English). Conference paper, Poster (with or without abstract) (Other academic).
National Category
Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:liu:diva-159500 (URN)
Conference
MEGNord 2018 Conference, Stockholm, Sweden, May 16-18 2018
Available from: 2019-08-09 Created: 2019-08-09 Last updated: 2019-08-20. Bibliographically approved.
Signoret, C., Blomberg, R., Dahlström, Ö., Andersen, L. M., Lundqvist, D., Rudner, M. & Rönnberg, J. (2018). Resolving discrepancies between incoming auditory information and linguistic expectations. In: Neuroscience 2018: 48th annual meeting of Society for Neuroscience. Paper presented at 48th annual meeting of Society for Neuroscience, San Diego, CA, USA, Nov 3-7, 2018. Society for Neuroscience
Resolving discrepancies between incoming auditory information and linguistic expectations
2018 (English). In: Neuroscience 2018: 48th annual meeting of Society for Neuroscience, Society for Neuroscience, 2018. Conference paper, Poster (with or without abstract) (Other academic).
Abstract [en]

Speech perception in noise is dependent on stimulus-driven and knowledge-driven processes. Here we investigate the neural correlates and time course of discrepancies between incoming auditory information (i.e. stimulus-driven processing) and linguistic expectations (knowledge-driven processing) in a MEG study of 20 normal-hearing adults. Participants read 48 rhyming sentence pairs beforehand. In the scanner, they listened to sentences that corresponded exactly to the read sentences, except that the last word (presented after a 1600 ms delay and at 50% intelligibility) was correct in only half of the cases. Otherwise, it was 1) phonologically but not semantically related, 2) semantically but not phonologically related, or 3) neither phonologically nor semantically related to the sentence. Participants indicated by button press whether the last word matched the sentence they had read outside the scanner. Behavioural results showed more errors in condition 1 than in conditions 2 or 3, suggesting that phonological compatibility overrides semantic discrepancy when intelligibility is poor. Event-related field analysis demonstrated larger activity at frontal sites for correct than for unrelated words, suggesting that the former were more accurately expected than the latter. An early M170 component was also observed, possibly reflecting expectation violation in the auditory modality. Dipole analysis will reveal whether the M170 is modulated by the type of linguistic discrepancy. Distributed-network analysis will further our understanding of the time course and neural correlates of discrepancies between incoming auditory information and linguistic expectations.
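
Event-related field analyses of this kind typically epoch the MEG recording around final-word onset, average per condition, and contrast conditions around the latency of interest (here roughly 170 ms). A hedged sketch using MNE-Python, in which the file name and event codes are hypothetical placeholders, not the study's actual pipeline:

```python
# Illustrative event-related field contrast; the file name and event
# codes below are hypothetical, not taken from the study.
import mne

raw = mne.io.read_raw_fif("subject01_speech_meg.fif", preload=True)
events = mne.find_events(raw)
event_id = {"correct": 1, "phonological": 2, "semantic": 3, "unrelated": 4}

# Epoch around final-word onset with a pre-stimulus baseline.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0), preload=True)

# Average per condition and form the correct-minus-unrelated contrast.
evoked_correct = epochs["correct"].average()
evoked_unrelated = epochs["unrelated"].average()
diff = mne.combine_evoked([evoked_correct, evoked_unrelated], weights=[1, -1])
diff.plot_topomap(times=[0.17])  # inspect the field pattern near M170 latency
```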

Place, publisher, year, edition, pages
Society for Neuroscience, 2018
National Category
Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:liu:diva-159499 (URN)
Conference
48th annual meeting of Society for Neuroscience, San Diego, CA, USA, Nov 3-7, 2018
Available from: 2019-08-09 Created: 2019-08-09 Last updated: 2019-08-09. Bibliographically approved.
Holmer, E., Heimann, M. & Rudner, M. (2017). Computerized Sign Language-Based Literacy Training for Deaf and Hard-of-Hearing Children. Journal of Deaf Studies and Deaf Education, 22(4), 404-421
Computerized Sign Language-Based Literacy Training for Deaf and Hard-of-Hearing Children
2017 (English). In: Journal of Deaf Studies and Deaf Education, ISSN 1081-4159, E-ISSN 1465-7325, Vol. 22, no. 4, p. 404-421. Article in journal (Refereed). Published.
Abstract [en]

Strengthening the connections between sign language and written language may improve reading skills in deaf and hard-of-hearing (DHH) signing children. The main aim of the present study was to investigate whether computerized sign language-based literacy training improves reading skills in DHH signing children who are learning to read. Further, longitudinal associations between sign language skills and developing reading skills were investigated. Participants were recruited from Swedish state special schools for DHH children, where pupils are taught in both sign language and spoken language. Reading skills were assessed at five occasions and the intervention was implemented in a cross-over design. Results indicated that reading skills improved over time and that development of word reading was predicted by the ability to imitate unfamiliar lexical signs, but there was only weak evidence that it was supported by the intervention. These results demonstrate for the first time a longitudinal link between sign-based abilities and word reading in DHH signing children who are learning to read. We suggest that the active construction of novel lexical forms may be a supramodal mechanism underlying word reading development.

Place, publisher, year, edition, pages
Oxford: Oxford University Press, 2017
National Category
Language Technology (Computational Linguistics); Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-141161 (URN); 10.1093/deafed/enx023 (DOI); 000412206300006 (); 28961874 (PubMedID)
Note

Funding agencies: Swedish Research Council for Health, Working Life and Welfare [2008-0846]; Swedish Hearing Foundation [B2015/480]

Available from: 2017-09-25 Created: 2017-09-25 Last updated: 2018-01-13. Bibliographically approved.
Shirnin, D., Lyxell, B., Dahlström, Ö., Blomberg, R., Rudner, M., Rönnberg, J. & Signoret, C. (2017). Speech perception in noise: prediction patterns of neural pre-activation in lexical processing. Paper presented at Fourth International Conference on Cognitive Hearing Science for Communication (CHSCOM2017), Linköping, Sweden, June 18-22, 2017. Swedish Institute for Disability Research, Linköping University, Article ID 65.
Speech perception in noise: prediction patterns of neural pre-activation in lexical processing
2017 (English). Conference paper, Poster (with or without abstract) (Other academic).
Abstract [en]

The purpose of this study is to examine whether the neural correlates of lexical expectations can be used to predict speech-in-noise perception. We analyse magnetoencephalography (MEG) data from 20 normal-hearing participants, who read a set of couplets (pairs of phrases with rhyming end words) prior to the experiment. During the experiment, the participants listen to the couplets, whose intelligibility is set to 80%. The last word, however, is pronounced after a delay of 1600 ms (the expectation gap) and masked to 50% intelligibility. At the end of each couplet, the participants indicate whether the last word was correct, i.e. corresponded to the expected word. Given the oscillatory characteristics of the neural patterns of lexical expectation during the expectation gap, can we predict the participant's actual perception of the last word? To approach this research question, we aim to identify correlation patterns between instances of neural pre-activation occurring during the expectation gap and the type of answer given. In the sequential design of the experiment, the expectation gap is placed 4400 ms before the time interval dedicated to the participant's answer. A machine learning approach was chosen as the main tool for pattern recognition.
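
One purely illustrative reading of that final step: the pattern-recognition stage could be a cross-validated classifier mapping pre-activation features extracted from the expectation gap to the type of answer given. In the sketch below, the feature matrix and labels are random placeholders standing in for the MEG-derived features and behavioural responses described above:

```python
# Cross-validated decoding sketch; X and y are random placeholders for
# expectation-gap MEG features and answer labels, not the study's data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((240, 40))   # trials x features (e.g. band power per sensor)
y = rng.integers(0, 2, 240)          # answer type per trial

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
print(f"Mean decoding accuracy: {scores.mean():.2f}")
```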

Place, publisher, year, edition, pages
Swedish Institute for Disability Research, Linköping University, 2017
National Category
Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:liu:diva-159501 (URN)
Conference
Fourth International Conference on Cognitive Hearing Science for Communication (CHSCOM2017), Linköping, Sweden, June 18-22, 2017
Available from: 2019-08-09 Created: 2019-08-09 Last updated: 2019-08-09. Bibliographically approved.
Stenfelt, S., Lunner, T., Ng, E., Lidestam, B., Zekveld, A., Sörqvist, P., . . . Rönnberg, J. (2016). Auditory, signal processing, and cognitive factors influencing speech perception in persons with hearing loss fitted with hearing aids – the N200 study. Paper presented at IHCON2016, International Hearing Aid Research Conference, Tahoe City, California, USA, August 10–14, 2016. Article ID B46.
Auditory, signal processing, and cognitive factors influencing speech perception in persons with hearing loss fitted with hearing aids – the N200 study
2016 (English). Conference paper, Poster (with or without abstract) (Other academic).
Abstract [en]

Objective: The aim of the current study was to assess aided speech-in-noise outcomes and relate those measures to auditory sensitivity and processing, different types of cognitive processing abilities, and signal processing in hearing aids.

Material and method: Participants were 200 hearing-aid wearers (mean age 60.8 years, 43% female) with average hearing thresholds in the better ear of 37.4 dB HL. Tests of auditory function were hearing thresholds, DPOAEs, tests of fine-structure processing, IHC dead regions, spectro-temporal modulation, and speech recognition in quiet (PB words). Tests of cognitive processing function were tests of phonological skills, working memory, executive functions and inference-making abilities, and general cognitive tests (e.g., tests of cognitive decline and IQ). The outcome test variables were the Hagerman sentences at 50% and 80% speech recognition levels, using two different noises (stationary speech-weighted noise and 4-talker babble) and three types of signal processing (linear gain, fast-acting compression, and linear gain plus a non-ideal binary mask). Another sentence test included typical and atypical sentences with contextual cues, tested both audio-visually and in an auditory-only mode. Moreover, the HINT and the SSQ were administered.

Analysis: Factor analyses were performed separately for the auditory, cognitive, and outcome tests.

Results: The auditory tests resulted in two factors, labeled SENSITIVITY and TEMPORAL FINE STRUCTURE; the cognitive tests in one factor (COGNITION); and the outcome tests in two factors, termed NO CONTEXT and CONTEXT, which relate to the level of context in the different outcome tests. When age was partialled out, COGNITION was moderately correlated with the TEMPORAL FINE STRUCTURE and NO CONTEXT factors but only weakly correlated with the CONTEXT factor. SENSITIVITY correlated weakly with TEMPORAL FINE STRUCTURE and CONTEXT, and moderately with NO CONTEXT, while TEMPORAL FINE STRUCTURE showed a weak correlation with CONTEXT and a moderate correlation with NO CONTEXT. CONTEXT and NO CONTEXT had a moderate correlation. Moreover, the overall results of the Hagerman sentences showed 0.9 dB worse SNR with fast-acting compression compared with linear gain, and 5.5 dB better SNR with linear gain plus noise reduction compared with linear gain only.
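
The age-partialled correlations reported here can be obtained by residualising both factor scores against age and correlating the residuals. A minimal sketch (the argument names are placeholders for the factor scores described above, not the study's data):

```python
# Partial correlation with a covariate removed; inputs are 1-D float arrays.
import numpy as np
from scipy import stats

def partial_corr(x, y, covar):
    """Pearson correlation of x and y after removing the linear effect of covar."""
    A = np.column_stack([covar, np.ones_like(covar)])  # covariate + intercept
    rx = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]  # residuals of x on covar
    ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]  # residuals of y on covar
    return stats.pearsonr(rx, ry)

# e.g. r, p = partial_corr(cognition, no_context, age)
```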

Conclusions: For hearing aid wearers, the ability to recognize speech in noise is associated with both sensory and cognitive processing abilities when the speech materials have low internal context. These associations are less prominent when the speech material has contextual cues.

National Category
Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:liu:diva-159504 (URN)
Conference
IHCON2016, International Hearing Aid Research Conference, Tahoe City, California, USA, August 10–14, 2016
Available from: 2019-08-09 Created: 2019-08-09 Last updated: 2019-08-09. Bibliographically approved.
Rudner, M., Keidser, G., Hygge, S. & Rönnberg, J. (2016). Better visuospatial working memory in adults who report profound deafness compared to those with normal or poor hearing: data from the UK Biobank resource. Ear and Hearing, 37(5), 620-622
Better visuospatial working memory in adults who report profound deafness compared to those with normal or poor hearing: data from the UK Biobank resource
2016 (English). In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 37, no. 5, p. 620-622. Article in journal (Refereed). Published.
Abstract [en]

Experimental work has shown better visuospatial working memory (VSWM) in profoundly deaf individuals compared to those with normal hearing. Other data, including data from the UK Biobank resource, show poorer VSWM in individuals with poorer hearing. Using the same database, the authors investigated VSWM in individuals who reported profound deafness. Included in this study were 112 participants who were profoundly deaf, 1,310 with poor hearing, and 74,635 with normal hearing. All participants performed a card-pair matching task as a test of VSWM. Although variance in VSWM performance was large among profoundly deaf participants, at group level it was superior to that of participants with either normal or poor hearing. VSWM in adults is related to hearing status, but the association is not linear. Future studies should investigate the mechanism behind enhanced VSWM in profoundly deaf adults.

Place, publisher, year, edition, pages
Lippincott Williams & Wilkins, 2016
National Category
Other Health Sciences
Identifiers
urn:nbn:se:liu:diva-126479 (URN); 10.1097/AUD.0000000000000314 (DOI); 000395797700020 (); 27232076 (PubMedID)
Available from: 2016-03-29 Created: 2016-03-29 Last updated: 2017-11-30
Rudner, M. (2016). Cognitive spare capacity as an index of listening effort. Ear and Hearing, 37, 69S-76S
Cognitive spare capacity as an index of listening effort
2016 (English). In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 37, p. 69S-76S. Article in journal (Refereed). Published.
Place, publisher, year, edition, pages
Lippincott Williams & Wilkins, 2016
National Category
Basic Medicine
Identifiers
urn:nbn:se:liu:diva-126016 (URN); 10.1097/AUD.0000000000000302 (DOI); 000379372100008 ()
Available from: 2016-03-11 Created: 2016-03-11 Last updated: 2018-01-10