The interplay of phonological and semantic knowledge during perception of degraded speech
2016 (English) Conference paper, Poster (Other academic)
The perceptual clarity of speech depends not only on the acoustic quality of the sound, but also on linguistic support. In a set of three experiments, we investigated the interplay of phonological and semantic knowledge during speech perception in persons with normal hearing (NH) and impaired hearing (IH). Participants listened to grammatically correct spoken Swedish sentences at different sound quality levels (clear or degraded by noise vocoding). The sentences were more or less semantically coherent, and each spoken word was preceded 200 ms earlier by a visually presented prime: either the word itself (matching prime) or a consonant string (non-matching prime). Analysis of variance on rated clarity showed significant interactions between coherence and prime type: NH listeners benefited from coherence both with and without matching primes, whereas IH listeners benefited only with matching primes, although three-way interactions involving sound quality somewhat modified this picture. Preliminary fMRI results from NH listeners suggest that processing of semantic coherence in the absence of matching primes is supported by the right middle temporal gyrus. These findings suggest that, when no phonological information is available, NH listeners mobilize long-term semantic representations to successfully exploit the semantic information in moderately degraded spoken sentences. Future work should investigate what prevents IH listeners from doing the same.
Identifiers: URN: urn:nbn:se:liu:diva-132172; OAI: oai:DiVA.org:liu-132172; DiVA: diva2:1038771
33rd World Congress of Audiology 2016, September 18-21, Vancouver, Canada