The general aim of this thesis was to test the effects of paralinguistic (emotional) and prior contextual (topical) cues on the perception of poorly specified visual, auditory, and audiovisual speech. The specific purposes were to (1) examine whether facially displayed emotions can facilitate speechreading performance; (2) study the mechanism behind such facilitation; (3) map the information-processing factors involved in the processing of poorly specified speech; and (4) present a comprehensive conceptual framework for speech perception that takes the specification of the signal into account. Experimental and correlational designs were used, and 399 normal-hearing adults participated in seven experiments. The main conclusions are summarised as follows. (a) Speechreading can be facilitated by paralinguistic information in the form of facially displayed emotions. (b) The facilitatory effect of emitted emotional cues is mediated by their degree of specification in transmission and their ambiguity as percepts, and by how distinct the perceived emotions, combined with topical cues, are as cues for lexical access. (c) Facially displayed emotions affect speech perception by conveying semantic cues; neither enhanced articulatory distinctiveness nor an emotion-related state in the perceiver is needed for facilitation. (d) The combined findings suggest that emotional and topical cues constrain activation spreading in the lexicon. (e) Both bottom-up and top-down factors are associated with the perception of poorly specified speech, indicating that variation in information-processing abilities is a crucial factor for perception when sensory input is impoverished. A conceptual framework for speech perception is presented, comprising the specification of linguistic and paralinguistic information as well as the distinctiveness of primes. Generalisations of the findings to other forms of paralanguage and language processing are discussed.