Visual phonemic ambiguity and speechreading
2006 (English). In: Journal of Speech, Language and Hearing Research, ISSN 1092-4388, Vol. 49, no. 4, pp. 835-847. Article in journal (Refereed) Published
Purpose: To study the role of visual perception of phonemes in the visual perception of sentences and words among normal-hearing individuals. Method: Twenty-four normal-hearing adults identified consonants, words, and sentences spoken by either a human or a synthetic talker. The synthetic talker was programmed with identical parameters within phoneme groups, hypothetically resulting in simplified articulation. Proportions of correctly identified phonemes per participant, condition, and task, as well as sensitivity to single consonants and clusters of consonants, were measured. Groups of mutually exclusive consonants were used for sensitivity analyses and hierarchical cluster analyses. Results: Consonant identification performance did not differ as a function of talker, nor did average sensitivity to single consonants. The bilabial and labiodental clusters were the most readily identified and the most cohesive for both talkers. Word and sentence identification was better for the human talker than for the synthetic talker. The participants were more sensitive to the clusters of the least visible consonants with the human talker than with the synthetic talker. Conclusions: It is suggested that the ability to distinguish between clusters of the least visually distinct phonemes is important in speechreading. Specifically, it reduces the number of lexical candidates and thereby facilitates lexical identification. © American Speech-Language-Hearing Association.
Place, publisher, year, edition, pages
2006. Vol. 49, no. 4, pp. 835-847
Keywords
Articulation, Normal hearing, Speechreading, Students
Identifiers
URN: urn:nbn:se:liu:diva-50163
DOI: 10.1044/1092-4388(2006/059)
OAI: oai:DiVA.org:liu-50163
DiVA: diva2:271059