liu.se: Search for publications in DiVA
1 - 45 of 45
  • 1.
    Dahlström, Örjan
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid
    Department of Psychology and Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Stenfelt, Stefan
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Health Sciences.
    Individual differences in working memory capacity modulate frontal cortical activity while listening to speech in noise. 2012. Conference paper (Other academic)
  • 2.
    Dahlström, Örjan
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Working Memory Processing for Sign and Speech in Broca's area. 2011. Conference paper (Other academic)
  • 3.
    Davis, M.H.
    et al.
    Medical Research Council Cognition and Brain Sciences Unit.
    Ford, M.A.
    Medical Research Council Cognition and Brain Sciences Unit.
    Kherif, F.
    University of Lausanne.
    Johnsrude, Ingrid
    Queen's University.
    Does semantic context benefit speech understanding through top-down processes? Evidence from time-resolved sparse fMRI. 2011. In: Journal of Cognitive Neuroscience, ISSN 0898-929X, E-ISSN 1530-8898, Vol. 23, no 12, p. 3914-3932. Article in journal (Refereed)
    Abstract [en]

    When speech is degraded, word report is higher for semantically coherent sentences (e.g., her new skirt was made of denim) than for anomalous sentences (e.g., her good slope was done in carrot). Such increased intelligibility is often described as resulting from “top–down” processes, reflecting an assumption that higher-level (semantic) neural processes support lower-level (perceptual) mechanisms. We used time-resolved sparse fMRI to test for top–down neural mechanisms, measuring activity while participants heard coherent and anomalous sentences presented in speech envelope/spectrum noise at varying signal-to-noise ratios (SNR). The timing of BOLD responses to more intelligible speech provides evidence of hierarchical organization, with earlier responses in peri-auditory regions of the posterior superior temporal gyrus than in more distant temporal and frontal regions. Despite Sentence content × SNR interactions in the superior temporal gyrus, prefrontal regions respond after auditory/perceptual regions. Although we cannot rule out top–down effects, this pattern is more compatible with a purely feedforward or bottom–up account, in which the results of lower-level perceptual processing are passed to inferior frontal regions. Behavioral and neural evidence that sentence content influences perception of degraded speech does not necessarily imply “top–down” neural processes.

  • 4.
    Heinrich, A.
    et al.
    MRC Cognition and Brain Sciences Unit.
    Carlyon, R.P.
    MRC Cognition and Brain Sciences Unit.
    Davis, M.H.
    MRC Cognition and Brain Sciences Unit.
    Johnsrude, Ingrid
    Queen's University.
    The continuity illusion does not depend on attentional state: fMRI evidence from illusory vowels. 2011. In: Journal of Cognitive Neuroscience, ISSN 0898-929X, E-ISSN 1530-8898, Vol. 23, no 10, p. 2675-2689. Article in journal (Refereed)
    Abstract [en]

    We investigate whether the neural correlates of the continuity illusion, as measured using fMRI, are modulated by attention. As we have shown previously, when two formants of a synthetic vowel are presented in an alternating pattern, the vowel can be identified if the gaps in each formant are filled with bursts of plausible masking noise, causing the illusory percept of a continuous vowel (“Illusion” condition). When the formant-to-noise ratio is increased so that noise no longer plausibly masks the formants, the formants are heard as interrupted (“Illusion Break” condition) and vowels are not identifiable. A region of the left middle temporal gyrus (MTG) is sensitive both to intact synthetic vowels (two formants present simultaneously) and to Illusion stimuli, compared to Illusion Break stimuli. Here, we compared these conditions in the presence and absence of attention. We examined fMRI signal for different sound types under three attentional conditions: full attention to the vowels; attention to a visual distracter; or attention to an auditory distracter. Crucially, although a robust main effect of attentional state was observed in many regions, the effect of attention did not differ systematically for the illusory vowels compared to either intact vowels or to the Illusion Break stimuli in the left STG/MTG vowel-sensitive region. This result suggests that illusory continuity of vowels is an obligatory perceptual process, and operates independently of attentional state. An additional finding was that the sensitivity of primary auditory cortex to the number of sound onsets in the stimulus was modulated by attention.

  • 5.
    Hervais-Adelman, Alexis G.
    et al.
    University of Geneva Medical School, Geneva, Switzerland.
    Carlyon, Robert P.
    MRC Cognition and Brain Sciences Unit, Cambridge, UK.
    Johnsrude, Ingrid
    Department of Psychology, Queen’s University, Kingston, Ontario, Canada.
    Davis, Matthew H.
    MRC Cognition and Brain Sciences Unit, Cambridge, UK.
    Brain regions recruited for the effortful comprehension of noise-vocoded words. 2012. In: Language and Cognitive Processes (Print), ISSN 0169-0965, E-ISSN 1464-0732, Vol. 27, no 7-8, p. 1145-1166. Article in journal (Refereed)
    Abstract [en]

    We used functional magnetic resonance imaging (fMRI) to investigate the neural basis of comprehension and perceptual learning of artificially degraded [noise vocoded (NV)] speech. Fifteen participants were scanned while listening to 6-channel vocoded words, which are difficult for naïve listeners to comprehend, but can be readily learned with appropriate feedback presentations. During three test blocks, we compared responses to potentially intelligible NV words, incomprehensible distorted words and clear speech. Training sessions were interleaved with the test sessions and included paired presentation of clear then noise-vocoded words: a type of feedback that enhances perceptual learning. Listeners' comprehension of NV words improved significantly as a consequence of training. Listening to NV compared to clear speech activated left insula, and prefrontal and motor cortices. These areas, which are implicated in speech production, may play an active role in supporting the comprehension of degraded speech. Elevated activation in the precentral gyrus during paired clear-then-distorted presentations that enhance learning further suggests a role for articulatory representations of speech in perceptual learning of degraded speech.

  • 6.
    Hervais-Adelman, Alexis G.
    et al.
    Functional Brain Mapping Lab.
    Davis, Matthew H.
    MRC Cognition and Brain Sciences Unit, Cambridge, UK.
    Johnsrude, Ingrid
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Taylor, Karen J.
    Carlyon, Robert P.
    MRC Cognition and Brain Sciences Unit, Cambridge, UK.
    Generalization of perceptual learning of vocoded speech. 2011. In: Journal of Experimental Psychology: Human Perception and Performance, ISSN 0096-1523, E-ISSN 1939-1277, Vol. 37, no 1, p. 283-295. Article in journal (Refereed)
    Abstract [en]

    Recent work demonstrates that learning to understand noise-vocoded (NV) speech alters sublexical perceptual processes but is enhanced by the simultaneous provision of higher-level, phonological, but not lexical content (Hervais-Adelman, Davis, Johnsrude, & Carlyon, 2008), consistent with top-down learning (Davis, Johnsrude, Hervais-Adelman, Taylor, & McGettigan, 2005; Hervais-Adelman et al., 2008). Here, we investigate whether training listeners with specific types of NV speech improves intelligibility of vocoded speech with different acoustic characteristics. Transfer of perceptual learning would provide evidence for abstraction from variable properties of the speech input. In Experiment 1, we demonstrate that learning of NV speech in one frequency region generalizes to an untrained frequency region. In Experiment 2, we assessed generalization among three carrier signals used to create NV speech: noise bands, pulse trains, and sine waves. Stimuli created using these three carriers possess the same slow, time-varying amplitude information and are equated for naïve intelligibility but differ in their temporal fine structure. Perceptual learning generalized partially, but not completely, among different carrier signals. These results delimit the functional and neural locus of perceptual learning of vocoded speech. Generalization across frequency regions suggests that learning occurs at a stage of processing at which some abstraction from the physical signal has occurred, while incomplete transfer across carriers indicates that learning occurs at a stage of processing that is sensitive to acoustic features critical for speech perception (e.g., noise, periodicity).

  • 7.
    Huyck, Julia J.
    et al.
    Department of Psychology and Centre for Neuroscience Studies, Queen’s University.
    Johnsrude, Ingrid
    Department of Psychology and Centre for Neuroscience Studies, Queen’s University.
    Rapid perceptual learning of noise-vocoded speech requires attention. 2012. In: Journal of the Acoustical Society of America - Express Letters, Vol. 131, no 3, p. 236-242. Article in journal (Refereed)
    Abstract [en]

    Humans are able to adapt to unfamiliar forms of speech (such as accented, time-compressed, or noise-vocoded speech) quite rapidly. Can such perceptual learning occur when attention is directed away from the speech signal? Here, participants were simultaneously exposed to noise-vocoded sentences, auditory distractors, and visual distractors. One group attended to the speech, listening to each sentence and reporting what they heard. Two other groups attended to either the auditory or visual distractors, performing a target-detection task. Only the attend-speech group benefited from the exposure when subsequently reporting noise-vocoded sentences. Thus, attention to noise-vocoded speech appears necessary for learning.

  • 8.
    Johnsrude, Ingrid S.
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Queen's University, Canada.
    Mackey, Allison
    Queen's University, Canada.
    Hakyemez, Hélène
    Queen's University, Canada.
    Alexander, Elizabeth
    Queen's University, Canada.
    Trang, Heather P.
    Queen's University, Canada.
    Carlyon, Robert P.
    MRC Cognition & Brain Sciences Unit, Cambridge, England.
    Swinging at a Cocktail Party: Voice Familiarity Aids Speech Perception in the Presence of a Competing Voice. 2013. In: Psychological Science, ISSN 0956-7976, E-ISSN 1467-9280, Vol. 24, no 10, p. 1995-2004. Article in journal (Refereed)
    Abstract [en]

    People often have to listen to someone speak in the presence of competing voices. Much is known about the acoustic cues used to overcome this challenge, but almost nothing is known about the utility of cues derived from experience with particular voices: cues that may be particularly important for older people and others with impaired hearing. Here, we use a version of the coordinate-response-measure procedure to show that people can exploit knowledge of a highly familiar voice (their spouse's) not only to track it better in the presence of an interfering stranger's voice, but also, crucially, to ignore it so as to comprehend a stranger's voice more effectively. Although performance declines with increasing age when the target voice is novel, there is no decline when the target voice belongs to the listener's spouse. This finding indicates that older listeners can exploit their familiarity with a speaker's voice to mitigate the effects of sensory and cognitive decline.

  • 9.
    Kitada, Ryo
    et al.
    National Institute for Physiological Sciences, Okazaki, Japan.
    Johnsrude, Ingrid
    Queen's University, Kingston, Canada.
    Kochiyama, Takanori
    ATR Brain Activity Imaging Center, Seika-cho, Japan.
    Lederman, Susan J.
    Queen's University, Kingston, Canada.
    Brain networks involved in haptic and visual identification of facial expressions of emotion: An fMRI study. 2010. In: NeuroImage, ISSN 1053-8119, E-ISSN 1095-9572, Vol. 49, no 2, p. 1677-1689. Article in journal (Refereed)
    Abstract [en]

    Previous neurophysiological and neuroimaging studies have shown that a cortical network involving the inferior frontal gyrus (IFG), inferior parietal lobe (IPL) and cortical areas in and around the posterior superior temporal sulcus (pSTS) region is employed in action understanding by vision and audition. However, the brain regions that are involved in action understanding by touch are unknown. Lederman et al. (2007) recently demonstrated that humans can haptically recognize facial expressions of emotion (FEE) surprisingly well. Here, we report a functional magnetic resonance imaging (fMRI) study in which we test the hypothesis that the IFG, IPL and pSTS regions are involved in haptic, as well as visual, FEE identification. Twenty subjects haptically or visually identified facemasks with three different FEEs (disgust, neutral and happiness) and casts of shoes (shoes) of three different types. The left posterior middle temporal gyrus, IPL, IFG and bilateral precentral gyrus were activated by FEE identification relative to that of shoes, regardless of sensory modality. By contrast, an inferomedial part of the left superior parietal lobule was activated by haptic, but not visual, FEE identification. Other brain regions, including the lingual gyrus and superior frontal gyrus, were activated by visual identification of FEEs, relative to haptic identification of FEEs. These results suggest that haptic and visual FEE identification rely on distinct but overlapping neural substrates including the IFG, IPL and pSTS region.

  • 10.
    Kitada, Ryo
    et al.
    Queen's University, Kingston, Canada.
    Johnsrude, Ingrid
    Queen's University, Kingston, Canada.
    Kochiyama, Takanori
    ATR Brain Activity Imaging Center, Seika-cho, Japan.
    Lederman, Susan J.
    Queen's University, Kingston, Canada.
    Functional specialization and convergence in the occipito-temporal cortex supporting haptic and visual identification of human faces and body parts: An fMRI study. 2009. In: Journal of Cognitive Neuroscience, ISSN 0898-929X, E-ISSN 1530-8898, Vol. 21, no 10, p. 2027-2045. Article in journal (Refereed)
    Abstract [en]

    Humans can recognize common objects by touch extremely well whenever vision is unavailable. Despite its importance to a thorough understanding of human object recognition, the neuroscientific study of this topic has been relatively neglected. To date, the few published studies have addressed the haptic recognition of nonbiological objects. We now focus on haptic recognition of the human body, a particularly salient object category for touch. Neuroimaging studies demonstrate that regions of the occipito-temporal cortex are specialized for visual perception of faces (fusiform face area, FFA) and other body parts (extrastriate body area, EBA). Are the same category-sensitive regions activated when these components of the body are recognized haptically? Here, we use fMRI to compare brain organization for haptic and visual recognition of human body parts. Sixteen subjects identified exemplars of faces, hands, feet, and nonbiological control objects using vision and haptics separately. We identified two discrete regions within the fusiform gyrus (FFA and the haptic face region) that were each sensitive to both haptically and visually presented faces; however, these two regions differed significantly in their response patterns. Similarly, two regions within the lateral occipito-temporal area (EBA and the haptic body region) were each sensitive to body parts in both modalities, although the response patterns differed. Thus, although the fusiform gyrus and the lateral occipito-temporal cortex appear to exhibit modality-independent, category-sensitive activity, our results also indicate a degree of functional specialization related to sensory modality within these structures.

  • 11.
    Munhall, K. G.
    et al.
    Queen’s University, Kingston, Ontario, Canada.
    MacDonald, E. N.
    Queen’s University, Kingston, Ontario, Canada.
    Byrne, S. K.
    Queen’s University, Kingston, Ontario, Canada.
    Johnsrude, Ingrid
    Queen’s University, Kingston, Ontario, Canada.
    Talkers alter vowel production in response to real-time formant perturbation even when instructed not to compensate. 2009. In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 125, no 1, p. 384-390. Article in journal (Refereed)
    Abstract [en]

    Talkers show sensitivity to a range of perturbations of auditory feedback (e.g., manipulation of vocal amplitude, fundamental frequency and formant frequency). Here, 50 subjects spoke a monosyllable (“head”), and the formants in their speech were shifted in real time using a custom signal processing system that provided feedback over headphones. First and second formants were altered so that the auditory feedback matched subjects’ production of “had.” Three different instructions were tested: (1) control, in which subjects were naïve about the feedback manipulation, (2) ignore headphones, in which subjects were told that their voice might sound different and to ignore what they heard in the headphones, and (3) avoid compensation, in which subjects were informed in detail about the manipulation and were told not to compensate. Despite explicit instruction to ignore the feedback changes, subjects produced a robust compensation in all conditions. There were no differences in the magnitudes of the first or second formant changes between groups. In general, subjects altered their vowel formant values in a direction opposite to the perturbation, as if to cancel its effects. These results suggest that compensation in the face of formant perturbation is relatively automatic, and the response is not easily modified by conscious strategy.

  • 12.
    Peelle, Jonathan E.
    et al.
    MRC Cognition and Brain Sciences Unit, Cambridge, UK.
    Johnsrude, Ingrid
    Queen’s University, Kingston, ON, Canada.
    Davis, Matthew H.
    MRC Cognition and Brain Sciences Unit, Cambridge, UK.
    Commentary: Hierarchical processing for speech in human auditory cortex and beyond. 2010. In: Frontiers in Human Neuroscience, ISSN 1662-5161, E-ISSN 1662-5161, Vol. 4, no 51, p. 1-3. Article in journal (Refereed)
  • 13.
    Ramezani, Mahdi
    et al.
    University of British Columbia, Canada.
    Abolmaesumi, Purang
    University of British Columbia, Canada.
    Marble, Kris
    Queen's University, Canada.
    Trang, Heather
    Queen's University, Canada.
    Johnsrude, Ingrid
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Queen's University, Canada.
    Fusion analysis of functional MRI data for classification of individuals based on patterns of activation. 2015. In: Brain Imaging and Behavior, ISSN 1931-7557, Vol. 9, no 2, p. 149-161. Article in journal (Refereed)
    Abstract [en]

    Classification of individuals based on patterns of brain activity observed in functional MRI contrasts may be helpful for diagnosis of neurological disorders. Prior work for classification based on these patterns have primarily focused on using a single contrast, which does not take advantage of complementary information that may be available in multiple contrasts. Where multiple contrasts are used, the objective has been only to identify the joint, distinct brain activity patterns that differ between groups of subjects; not to use the information to classify individuals. Here, we use joint Independent Component Analysis (jICA) within a Support Vector Machine (SVM) classification method, and take advantage of the relative contribution of activation patterns generated from multiple fMRI contrasts to improve classification accuracy. Young (age: 19-26) and older (age: 57-73) adults (16 each) were scanned while listening to noise alone and to speech degraded with noise, half of which contained meaningful context that could be used to enhance intelligibility. Functional contrasts based on these conditions (and a silent baseline condition) were used within jICA to generate spatially independent joint activation sources and their corresponding modulation profiles. Modulation profiles were used within a non-linear SVM framework to classify individuals as young or older. Results demonstrate that a combination of activation maps across the multiple contrasts yielded an area under ROC curve of 0.86, superior to classification resulting from individual contrasts. Moreover, class separability, measured by a divergence criterion, was substantially higher when using the combination of activation maps.
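    The fusion approach described in this abstract can be illustrated with a minimal sketch. This is not the authors' pipeline: it uses synthetic data, scikit-learn's FastICA on concatenated contrast maps as a simplified stand-in for joint ICA, and an RBF-kernel SVM on the resulting per-subject modulation profiles; all names and numbers are illustrative assumptions.

    ```python
    # Illustrative sketch of jICA-style fusion + SVM classification (synthetic data).
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_subj, n_vox = 32, 500                  # 16 young + 16 older, toy voxel count
    labels = np.array([0] * 16 + [1] * 16)   # 0 = young, 1 = older

    # Two fMRI contrast maps per subject (e.g., speech-in-noise vs. noise-alone),
    # with a weak group effect injected so classification is learnable.
    c1 = rng.normal(size=(n_subj, n_vox)) + labels[:, None] * 0.3
    c2 = rng.normal(size=(n_subj, n_vox)) + labels[:, None] * 0.2

    # "Joint" step: concatenate the contrasts along the voxel axis so each
    # independent source spans both maps, as in joint ICA.
    joint = np.hstack([c1, c2])              # shape (32, 1000)

    ica = FastICA(n_components=8, random_state=0)
    sources = ica.fit_transform(joint)       # per-subject modulation profiles

    # Non-linear SVM on the modulation profiles, evaluated by cross-validation.
    acc = cross_val_score(SVC(kernel="rbf"), sources, labels, cv=4).mean()
    print(f"cross-validated accuracy: {acc:.2f}")
    ```

    The key design point mirrored from the abstract is that classification features come from the modulation profiles of sources estimated jointly across multiple contrasts, rather than from any single contrast map alone.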

  • 14.
    Ramezani, Mahdi
    et al.
    University of British Columbia, Canada.
    Abolmaesumi, Purang
    University of British Columbia, Canada.
    Tahmasebi, Amir
    Philips Research North America, NY 10510, USA.
    Bosma, Rachael
    Queen's University, Canada.
    Tong, Ryan
    Queen's University, Canada.
    Hollenstein, Tom
    Queen's University, Canada.
    Harkness, Kate
    Queen's University, Canada.
    Johnsrude, Ingrid
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Queen's University, Canada.
    Fusion analysis of first episode depression: Where brain shape deformations meet local composition of tissue. 2015. In: NeuroImage: Clinical, ISSN 0353-8842, E-ISSN 2213-1582, Vol. 7, p. 114-121. Article in journal (Refereed)
    Abstract [en]

    Computational neuroanatomical techniques that are used to evaluate the structural correlates of disorders in the brain typically measure regional differences in gray matter or white matter, or measure regional differences in the deformation fields required to warp individual datasets to a standard space. Our aim in this study was to combine measurements of regional tissue composition and of deformations in order to characterize a particular brain disorder (here, major depressive disorder). We use structural Magnetic Resonance Imaging (MRI) data from young adults in a first episode of depression, and from an age- and sex-matched group of non-depressed individuals, and create population gray matter (GM) and white matter (WM) tissue average templates using DARTEL groupwise registration. We obtained GM and WM tissue maps in the template space, along with the deformation fields required to co-register the DARTEL template and the GM and WM maps in the population. These three features, reflecting tissue composition and shape of the brain, were used within a joint independent components analysis (jICA) to extract spatially independent joint sources and their corresponding modulation profiles. Coefficients of the modulation profiles were used to capture differences between depressed and non-depressed groups. The combination of hippocampal shape deformations and local composition of tissue (but neither shape nor local composition of tissue alone) was shown to discriminate reliably between individuals in a first episode of depression and healthy controls, suggesting that brain structural differences between depressed and non-depressed individuals do not simply reflect chronicity of the disorder but are there from the very outset.

  • 15.
    Ramezani, Mahdi
    et al.
    University of British Columbia, Canada.
    Johnsrude, Ingrid
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research. Queen's University, Canada.
    Rasoulian, Abtin
    University of British Columbia, Canada.
    Bosma, Rachael
    Queen's University, Canada.
    Tong, Ryan
    Queen's University, Canada.
    Hollenstein, Tom
    Queen's University, Canada.
    Harkness, Kate
    Queen's University, Canada.
    Abolmaesumi, Purang
    University of British Columbia, Canada.
    Temporal-lobe morphology differs between healthy adolescents and those with early-onset of depression. 2014. In: NeuroImage: Clinical, ISSN 2213-1582, Vol. 6, p. 145-155. Article in journal (Refereed)
    Abstract [en]

    Major depressive disorder (MDD) has previously been linked to structural changes in several brain regions, particularly in the medial temporal lobes (Bellani, Baiano, Brambilla, 2010; Bellani, Baiano, Brambilla, 2011). This has been determined using voxel-based morphometry, segmentation algorithms, and analysis of shape deformations (Bell-McGinty et al., 2002; Bergouignan et al., 2009; Posener et al., 2003; Vasic et al., 2008; Zhao et al., 2008): these are methods in which information related to the shape and the pose (the size, and anatomical position and orientation) of structures is lost. Here, we incorporate information about shape and pose to measure structural deformation in adolescents and young adults with and without depression (as measured using the Beck Depression Inventory and Diagnostic and Statistical Manual of Mental Disorders criteria). As a hypothesis-generating study, a significance level of p < 0.05, uncorrected for multiple comparisons, was used, so that subtle morphological differences in brain structures between adolescent depressed individuals and control participants could be identified. We focus on changes in cortical and subcortical temporal structures, and use a multi-object statistical pose and shape model to analyze imaging data from 16 females (aged 16-21) and 3 males (aged 18) with early-onset MDD, and 25 female and 1 male normal control participants, drawn from the same age range. The hippocampus, parahippocampal gyrus, putamen, and superior, inferior and middle temporal gyri in both hemispheres of the brain were automatically segmented using the LONI Probabilistic Brain Atlas (Shattuck et al., 2008) in MNI space. Points on the surface of each structure in the atlas were extracted and warped to each participant's structural MRI. These surface points were analyzed to extract the pose and shape features.
Pose differences were detected between the two groups, particularly in the left and right putamina, right hippocampus, and left and right inferior temporal gyri. Shape differences were detected between the two groups, particularly in the left hippocampus and in the left and right parahippocampal gyri. Furthermore, pose measures were significantly correlated with BDI score across the whole (clinical and control) sample. Since the clinical participants were experiencing their very first episodes of MDD, morphological alteration in the medial temporal lobe appears to be an early sign of MDD, and is unlikely to result from treatment with antidepressants. Pose and shape measures of morphology, which are not usually analyzed in neuromorphometric studies, appear to be sensitive to depressive symptomatology.

  • 16.
    Rodd, J.M.
    et al.
    Davis, M.H.
    Johnsrude, Ingrid
    Linköping University, Department of Behavioural Sciences and Learning.
    The role of the LIFG in language comprehension: Evidence from the timecourse of neural responses to ambiguous words in sentences. In: Cerebral Cortex, ISSN 1047-3211, E-ISSN 1460-2199. Article in journal (Refereed)
  • 17.
    Rodd, J.M.
    et al.
    University College London.
    Johnsrude, Ingrid
    Queen’s University.
    Davis, M.H.
    MRC Cognition and Brain Sciences Unit.
    The role of domain-general frontal systems in language comprehension: Evidence from dual-task interference and semantic ambiguity. 2010. In: Brain and Language, ISSN 0093-934X, E-ISSN 1090-2155, Vol. 115, no 3, p. 182-188. Article in journal (Refereed)
    Abstract [en]

    Neuroimaging studies have shown that the left inferior frontal gyrus (LIFG) plays a critical role in semantic and syntactic aspects of speech comprehension. It appears to be recruited when listeners are required to select the appropriate meaning or syntactic role for words within a sentence. However, this region is also recruited during tasks not involving sentence materials, suggesting that the systems involved in processing ambiguous words within sentences are also recruited for more domain-general tasks that involve the selection of task-relevant information. We use a novel dual-task methodology to assess whether the cognitive system(s) that are engaged in selecting word meanings are also involved in non-sentential tasks. In Experiment 1, listeners were slower to decide whether a visually presented letter was in upper or lower case when the sentence that they were simultaneously listening to contained words with multiple meanings (homophones), compared to closely matched sentences without homophones. Experiment 2 indicates that this interference effect is not tied to the occurrence of the homophone itself, but rather occurs when listeners must reinterpret a sentence that was initially misparsed. These results suggest some overlap between the cognitive system involved in semantic disambiguation and the domain-general process of response selection required for the case-judgement task. This cognitive overlap may reflect neural overlap in the networks supporting these processes, and is consistent with the proposal that domain-general selection processes in inferior frontal regions are critical for language comprehension.

  • 18.
    Rudner, Mary
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Orfanidou, Eleni
    University of Crete, Department of Psychology.
    Capek, Sheryl M.
    University of Manchester.
    Andin, Josefine
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Karlsson, Thomas
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Dahlström, Örjan
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Kästner, Lena
    Ruhr-University, Bochum.
    Cardin, Velia
    University College London, Department of Cognitive, Perceptual and Brain Sciences.
    Fransson, Peter
    Karolinska University Hospital, Stockholm, Sweden.
    Ingvar, Martin
    Karolinska University Hospital, Stockholm, Sweden.
    Johnsrude, Ingrid
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Woll, Bencie
    University College London, Cognitive, Perceptual and Brain Sciences.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Sign Language phonology and its role in neurocognition. 2011. Conference paper (Other academic)
  • 19.
    Rönnberg, Jerker
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Dahlström, Örjan
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Sörqvist, Patrik
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Lunner, Tomas
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Stenfelt, Stefan
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Speech understanding in noise: the role of working memory capacity. 2012. In: 41st International Congress and Exposition on Noise Control Engineering 2012 (INTER-NOISE 2012) / [ed] Burroughs, C., Institute of Noise Control Engineering, 2012, Vol. 10, p. 508-516. Conference paper (Other academic)
  • 20.
    Rönnberg, Jerker
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Sörqvist, Patrik
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Dahlström, Örjan
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Lunner, Thomas
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Stenfelt, Stefan
    Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Speech in noise and ease of language understanding: When and how working memory capacity plays a role. 2012. In: Acoustics 2012, 2012. Conference paper (Refereed)
  • 21.
    Rönnberg, Jerker
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Sörqvist, Patrik
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Dahlström, Örjan
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid S
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Stenfelt, Stefan
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Clinical and Experimental Medicine, Technical Audiology. Linköping University, Faculty of Health Sciences.
    Speech in noise and ease of language understanding: When and how working memory capacity plays a role. 2012. Conference paper (Refereed)
    Abstract [en]

    A working memory based model for Ease of Language Understanding (ELU) has been developed (Rönnberg, 2003; Rönnberg et al., 2008; Rönnberg et al., 2011). It predicts that speech understanding in adverse, mismatching noise conditions is dependent on explicit processing resources such as working memory capacity (WMC). This presentation will examine the details of this prediction by addressing some recent data on (1) how brainstem responses are modulated by working memory load and WMC, (2) how cortical correlates of speech understanding in noise are modulated by WMC, and (3) how WMC determines episodic long-term memory for spoken discourse masked by speech.

  • 22.
    Rönnberg, Jerker
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Sörqvist, Patrik
    Linköping University, Department of Behavioural Sciences, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences. Department of Building, Energy and Environmental Engineering, University of Gävle, Gävle, Sweden.
    Dahlström, Örjan
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Johnsrude, Ingrid
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Behavioural Sciences, The Swedish Institute for Disability Research. Department of Psychology, Queen's University, Kingston, Ontario, Canada .
    Stenfelt, Stefan
    Linköping University, Department of Clinical and Experimental Medicine, Division of Neuroscience. Linköping University, Faculty of Health Sciences. Department of Clinical and Experimental Medicine, Linköping University, Sweden.
    Speech in Noise and Ease of Language Understanding: When and how working memory capacity plays a role. 2012. Conference paper (Other academic)
    Abstract [en]

    This paper concerns the role of working memory capacity in speech understanding under challenging listening conditions. The theoretical model that has driven most of the research reported in this paper is the Ease of Language Understanding (ELU) model (Rönnberg, 2003; Rönnberg et al., 2008). The ELU model is part of a larger scientific endeavor called cognitive hearing science.

  • 23.
    Signoret, Carine
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Andin, Josefine
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid
    Linköping University, The Swedish Institute for Disability Research. Brain and Mind Institute, National Centre for Audiology, School of Communication Sciences and Disorders, Western University, London, Ontario, Canada.
    Rudner, Mary
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Cumulative effects of prior knowledge and semantic coherence during speech perception: an fMRI study. 2015. Conference paper (Other academic)
    Abstract [en]

    Semantic coherence and prior knowledge enhance perceptual clarity of degraded speech. A recent study by our team showed that these two effects interact, such that the perceptual clarity of noise-vocoded speech (NVS) is still enhanced by semantic coherence when prior knowledge is available from text cues, and prior knowledge enhances perceptual clarity of NVS even when semantic coherence is low (Signoret et al., 2015). Here, we investigated the neural correlates of this interaction. We predicted 1) an effect of matching cues for both sentences with high and low semantic coherence in left-lateralized perisylvian areas (Zekveld et al., 2012) and right superior temporal gyrus (Wild et al., 2012), but stronger for low- than for high-coherence sentences, since more resources are required to process sentences with low semantic coherence in the left inferior frontal gyrus (Obleser and Kotz, 2010), and 2) an effect of semantic coherence in temporal and inferior frontal cortex (Lau et al., 2008). The additive effect of semantic coherence when matching cues were provided should be observed in the angular gyrus (Obleser and Kotz, 2010). Twenty participants (age: M = 25.14, SD = 5.01) listened to sentences and performed an unrelated attentional task during sparse-imaging fMRI. The sentences had high or low semantic coherence, and were either clear, degraded (6-band NV) or unintelligible (1-band NV). Each spoken word was preceded (200 ms) by either a matching cue or a consonant string. Preliminary results revealed significant main effects of Cue (F(1,228) = 21.26; p < .05 FWE) in the left precentral gyrus, the left inferior frontal gyrus and the left middle temporal gyrus, confirming the results of Zekveld et al. (2012), but neither the main effect of Coherence nor the interaction between Cue and Coherence survived FWE correction. In accordance with our predictions, contrasts revealed a greater effect of matching cues for low- than for high-coherence sentences (t(19) = 6.25; p < .05 FWE) in the left superior temporal gyrus as well as the left inferior frontal gyrus (BA 44 and 45), suggesting greater involvement of both top-down and bottom-up processing mechanisms during integration of prior knowledge with the auditory signal when sentence coherence is lower. There was a marginally greater effect of semantic coherence (t(19) = 3.58; p < .001 unc) even when matching cues were provided, in the left angular gyrus, the left middle frontal gyrus and the right superior frontal gyrus, suggesting greater involvement of top-down activation of semantic concepts, executive processes and the phonological store during integration of prior knowledge with the auditory signal when the semantic content of the speech is more readily available.

  • 24.
    Signoret, Carine
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Andin, Josefine
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid
    Linköping University, The Swedish Institute for Disability Research. School of Communication Sciences and Disorders, University of Western Ontario, Canada.
    Rudner, Mary
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    The interplay between prior knowledge and semantic coherence during processing of degraded speech: an fMRI study. 2015. In: Abstract book: Third International Conference on Cognitive Hearing Science for Communication, 2015, p. 181-181. Conference paper (Other academic)
    Abstract [en]

    Degraded speech is rendered more intelligible both by semantic coherence and preceding text cues. Recently, we showed that the perceptual clarity of noise-vocoded speech (NVS) is still enhanced by semantic coherence when cues are provided and that prior knowledge enhances perceptual clarity of NVS when semantic coherence is low (Signoret et al., 2015). Here, we investigated the neural correlates of this interaction. Twenty participants listened to sentences and performed an unrelated attentional task during sparse-imaging fMRI. The sentences had high or low semantic coherence, and were either clear, degraded (6-band NV) or unintelligible (1-band NV). Each spoken word was preceded (200 ms) by either a matching cue or a consonant string. Preliminary results revealed significant main effects of both Coherence and Cue in the superior temporal gyrus bilaterally and a significant interaction between Coherence and Cue when speech was degraded, in superior and middle temporal gyri bilaterally and left precentral gyrus. Investigation of this interaction revealed greater activation for high compared to low coherent sentences when cues were provided in the left-lateralized regions and greater activation without than with cues when semantic coherence was low in bilateral regions. The opposite contrasts elicited no significant activation. This pattern of results indicates that the increases in perceptual clarity of NVS attributable to semantic coherence and prior knowledge are supported by similar neural mechanisms organized in bilateral temporal regions, but that when perceptual clarity is optimized by both factors, it is supported by left-lateralized mechanisms.

  • 25.
    Signoret, Carine
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Johnsrude, Ingrid
    Univ Western Ontario, Sch Commun Sci & Disorders, London, Canada; Univ Western Ontario, Dept Psychol, London, Canada.
    Classon, Elisabet
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Combined Effects of Form- and Meaning-Based Predictability on Perceived Clarity of Speech. 2018. In: Journal of Experimental Psychology: Human Perception and Performance, ISSN 0096-1523, E-ISSN 1939-1277, Vol. 44, no 2, p. 277-285. Article in journal (Refereed)
    Abstract [en]

    The perceptual clarity of speech is influenced by more than just the acoustic quality of the sound; it also depends on contextual support. For example, a degraded sentence is perceived to be clearer when the content of the speech signal is provided with matching text (i.e., form-based predictability) before hearing the degraded sentence. Here, we investigate whether sentence-level semantic coherence (i.e., meaning-based predictability), enhances perceptual clarity of degraded sentences, and if so, whether the mechanism is the same as that underlying enhancement by matching text. We also ask whether form- and meaning-based predictability are related to individual differences in cognitive abilities. Twenty participants listened to spoken sentences that were either clear or degraded by noise vocoding and rated the clarity of each item. The sentences had either high or low semantic coherence. Each spoken word was preceded by the homologous printed word (matching text), or by a meaningless letter string (nonmatching text). Cognitive abilities were measured with a working memory test. Results showed that perceptual clarity was significantly enhanced both by matching text and by semantic coherence. Importantly, high coherence enhanced the perceptual clarity of the degraded sentences even when they were preceded by matching text, suggesting that the effects of form- and meaning-based predictions on perceptual clarity are independent and additive. However, when working memory capacity indexed by the Size-Comparison Span Test was controlled for, only form-based predictions enhanced perceptual clarity, and then only at some sound quality levels, suggesting that prediction effects are to a certain extent dependent on cognitive abilities.

  • 26.
    Signoret, Carine
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid
    Department of Psychology, Queen's University, Canada.
    Classon, Elisabet
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Does semantic context facilitate perceptual clarity? 2012. Conference paper (Other academic)
    Abstract [en]

    Giving people an opportunity to hear an unintelligible noise-vocoded (NV) sentence after they know its identity produces pop-out, a clearer percept of the NV sentence (Davis, Johnsrude, Hervais-Adelman, Taylor, & McGettigan, 2005), which can be measured using a magnitude-estimation procedure (Wild, Davis, & Johnsrude, 2012). Pop-out appears to occur when the auditory system is able to match input with top-down predictions that can be used to perceptually organize/'explain' that input. Semantically coherent sentences (e.g. "his new clothes were from France") are more predictable than matched anomalous sentences (e.g. "his great streets were from Smith"), raising the possibility that semantic information may also give rise to pop-out. In the present study we investigated how the magnitude of the pop-out effect produced by prior knowledge in the form of identical text cues (100% predictable) compared to that produced by semantic coherence. Twenty normal-hearing native Swedish-speaking participants listened to Swedish NV (1, 3, 6 and 12 bands) and clear sentences, and rated the clarity on a 7-point Likert scale. The sentences were semantically coherent or anomalous. Each spoken word was preceded (200 ms) by either its text equivalent or a consonant string of matched length. We observed the expected main effects of speech quality and text cues on clarity ratings (Wild et al., 2012). Semantically coherent sentences were rated as clearer than anomalous sentences, even when both types of sentences were preceded by identical text cues, suggesting that the effect of semantic context on perceptual clarity is not entirely due to greater predictability.

  • 27.
    Signoret, Carine
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Department of Psychology, Queen’s University, Canada.
    Classon, Elisabet
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Lexical access speed determines the role of working memory in pop-out. 2013. In: Abstract book: Second International Conference on Cognitive Hearing Science for Communication, 2013. Conference paper (Other academic)
    Abstract [en]

    Prior knowledge about what is going to be said produces a clearer percept of unintelligible noise-vocoded (NV) sentences. This is called the pop-out effect and can be measured using a magnitude-estimation procedure. Sentence coherence substantially improves intelligibility of NV sentences, suggesting that semantic context may produce a pop-out effect. Moreover, understanding speech in challenging conditions is supported by cognitive skills such as working-memory capacity and inference-making. In the present study, we investigated whether a pop-out effect could be identified for sentence coherence and whether such a pop-out effect would be additive to the pop-out effect generated by prior knowledge. Twenty normal-hearing native Swedish-speaking participants listened to Swedish NV (1, 3, 6 and 12 bands) and clear sentences, and rated the clarity on a 7-point scale. The sentences were semantically coherent (e.g. "his new clothes were from France") or incoherent (e.g. "his great streets were from Smith"). Each spoken word was preceded (200 ms) by either its text equivalent or a consonant string of matched length. We found a pop-out effect due to sentence coherence as well as a pop-out effect due to prior knowledge. These two effects interacted, suggesting that they are supported by different mechanisms. Lexical access speed predicted the magnitude of pop-out due to prior knowledge. Further, in participants with slow lexical access speed, working memory capacity predicted pop-out magnitude, while in participants with high lexical access speed, pop-out magnitude was best predicted by inference-making ability.

  • 28.
    Tahmasebi, A.
    et al.
    Baycrest Centre for Geriatric Care.
    Davis, M.H.
    Medical Research Council Cognition and Brain Sciences Unit.
    Wild, C.
    Queen’s University.
    Rodd, J.M.
    University College London.
    Hakyemez, H.
    Queen’s University.
    Johnsrude, Ingrid
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Is the link between anatomical macrostructure and function equally strong at all cognitive levels of processing? 2012. In: Cerebral Cortex, ISSN 1047-3211, E-ISSN 1460-2199, Vol. 22, no 7, p. 1593-1603. Article in journal (Refereed)
    Abstract [en]

    Whereas low-level sensory processes can be linked to macroanatomy with great confidence, the degree to which high-level cognitive processes map onto anatomy is less clear. If function respects anatomy, more accurate intersubject anatomical registration should result in better functional alignment. Here, we use auditory functional magnetic resonance imaging and compare the effectiveness of affine and nonlinear registration methods for aligning anatomy and functional activation across subjects. Anatomical alignment was measured using normalized cross-correlation within functionally defined regions of interest. Functional overlap was assessed using t-statistics from the group analyses and the degree to which group statistics predict high and consistent signal change in individual data sets. In regions related to early stages of auditory processing, nonlinear registration resulted in more accurate anatomical registration and stronger functional overlap among subjects compared with affine. In frontal and temporal areas reflecting high-level processing of linguistic meaning, nonlinear registration also improved the accuracy of anatomical registration. However, functional overlap across subjects was not enhanced in these regions. Therefore, functional organization, relative to anatomy, is more variable in the frontal and temporal areas supporting meaning-based processes than in areas devoted to sensory/perceptual auditory processing. This demonstrates for the first time that functional variability increases systematically between regions supporting lower and higher cognitive processes.
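    The anatomical-alignment measure described in this abstract, normalized cross-correlation within a region of interest, can be illustrated with a short sketch. The function name and array handling below are our own, not the authors' implementation:

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Normalized cross-correlation of two same-shaped image patches.

    Returns a value in [-1, 1]; 1 indicates the patches are identical up
    to a linear intensity change. Illustrative helper only.
    """
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a -= a.mean()  # remove mean intensity so only spatial pattern matters
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0
```

    In a study of this kind, such a score would presumably be computed over each functionally defined region of interest after affine or nonlinear registration; higher values indicate better anatomical alignment across subjects.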

  • 29.
    Tahmasebi, A.M.
    et al.
    Queen's University.
    Abolmaesumi, P.
    University of British Columbia.
    Wild, C.
    Queen's University.
    Johnsrude, Ingrid
    Queen's University.
    A validation framework for probabilistic maps using Heschl's gyrus as a model. 2010. In: NeuroImage, ISSN 1053-8119, E-ISSN 1095-9572, Vol. 50, no 2, p. 532-544. Article in journal (Refereed)
    Abstract [en]

    Probabilistic maps are useful in functional neuroimaging research for anatomical labeling and for data analysis. The degree to which a probability map can accurately estimate the location of a structure of interest in a new individual depends on many factors, including variability in the morphology of the structure of interest over subjects, the registration (normalization procedure and template) applied to align the brains among individuals for constructing a probability map, and the registration used to map a new subject's data set to the frame of the probabilistic map. Here, we take Heschl's gyrus (HG) as our structure of interest, and explore the impact of different registration methods on the accuracy with which a probabilistic map of HG can approximate HG in a new individual. We assess and compare the goodness of fit of probability maps generated using five different registration techniques, as well as evaluating the goodness of fit of a previously published probabilistic map of HG generated using affine registration (Penhune et al., 1996). The five registration techniques are: three groupwise registration techniques (implicit reference-based or IRG, DARTEL, and BSpline-based); a high-dimensional pairwise registration (HAMMER) as well as a segmentation-based registration (unified segmentation of SPM5). The accuracy of the resulting maps in labeling HG was assessed using evidence-based diagnostic measures within a leave-one-out cross-validation framework. Our results demonstrated that IRG and DARTEL outperformed the other registration techniques in terms of sensitivity, specificity and positive predictive value (PPV). All the techniques displayed relatively low sensitivity rates, despite high PPV, indicating that the generated probability maps provide accurate but conservative estimates of the location and extent of HG in new individuals.
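    The evidence-based diagnostic measures this abstract reports (sensitivity, specificity, PPV) reduce to counts from a voxel-wise confusion matrix between the map's predicted labels and the true labeled structure. A minimal sketch under our own naming, not the published code:

```python
def diagnostic_measures(true_labels, predicted_labels):
    """Sensitivity, specificity and positive predictive value for binary
    voxel labels (1 = inside the structure, 0 = outside).
    Illustrative helper; not the authors' implementation."""
    pairs = list(zip(true_labels, predicted_labels))
    tp = sum(t == 1 and p == 1 for t, p in pairs)  # hits
    tn = sum(t == 0 and p == 0 for t, p in pairs)  # correct rejections
    fp = sum(t == 0 and p == 1 for t, p in pairs)  # false alarms
    fn = sum(t == 1 and p == 0 for t, p in pairs)  # misses
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, specificity, ppv
```

    A conservative map of the kind the abstract describes shows exactly this signature: few false alarms (high PPV) but many misses (low sensitivity). In a leave-one-out framework, the map would be built from all subjects but one and scored against the held-out subject's labels.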

  • 30.
    Tahmasebi, Amir M.
    et al.
    Queen's University, Kingston, Canada.
    Abolmaesumi, Purang
    Queen's University, Kingston, Canada.
    Geng, Xiujuan
    National Institute on Drug Abuse, NIH, USA.
    Morosan, Patricia
    Institute of Medicine, Research Center Juelich, Germany.
    Amunts, Katrin
    Brain Imaging Center West, Germany.
    Christensen, Gary E.
    University of Iowa, USA.
    Johnsrude, Ingrid
    Queen's University, Kingston, Canada.
    A new approach for creating customizable cytoarchitectonic probabilistic maps without a template. 2009. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2009: 12th International Conference, London, UK, September 20-24, 2009, Proceedings, Part II / [ed] Guang-Zhong Yang, David Hawkes, Daniel Rueckert, Alison Noble, Chris Taylor, Springer, 2009, p. 795-802. Chapter in book (Refereed)
    Abstract [en]

    We present a novel technique for creating template-free probabilistic maps of the cytoarchitectonic areas using a groupwise registration. We use the technique to transform 10 human post-mortem structural MR data sets, together with their corresponding cytoarchitectonic information, to a common space. We have targeted the cytoarchitectonically defined subregions of the primary auditory cortex. Thanks to the template-free groupwise registration, the created maps are not macroanatomically biased towards a specific geometry/topology. The advantage of groupwise versus pairwise registration in avoiding such anatomical bias is better revealed in studies with a small number of subjects and a high degree of variability among the individuals, such as the post-mortem data. A leave-one-out cross-validation method was used to compare the sensitivity, specificity and positive predictive value of the proposed and published maps. We observe a significant improvement in localization of cytoarchitectonically defined subregions in primary auditory cortex using the proposed maps. The proposed maps can be tailored to any subject space by registering the subject image to the average of the groupwise-registered post-mortem images.

  • 31.
    Tahmasebi, Amir M.
    et al.
    Queen’s University, Kingston, ON, Canada.
    Abolmaesumi, Purang
    Queen’s University, Kingston, ON, Canada.
    Zheng, Zane Z.
    Queen’s University, Kingston, ON, Canada.
    Munhall, Kevin G.
    Queen’s University, Kingston, ON, Canada.
    Johnsrude, Ingrid
    Queen’s University, Kingston, ON, Canada.
    Reducing inter-subject anatomical variation: Effect of normalization method on sensitivity of functional magnetic resonance imaging data analysis in auditory cortex and the superior temporal region.2009In: NeuroImage, ISSN 1053-8119, E-ISSN 1095-9572, Vol. 47, no 4, p. 1522-1531Article in journal (Refereed)
    Abstract [en]

    Conventional group analysis of functional MRI (fMRI) data usually involves spatial alignment of anatomy across participants by registering every brain image to an anatomical reference image. Due to the high degree of inter-subject anatomical variability, a low-resolution average anatomical model is typically used as the target template, and/or smoothing kernels are applied to the fMRI data to increase the overlap among subjects’ image data. However, such smoothing can make it difficult to resolve small regions such as subregions of auditory cortex when anatomical morphology varies among subjects. Here, we use data from an auditory fMRI study to show that using a high-dimensional registration technique (HAMMER) results in an enhanced functional signal-to-noise ratio (fSNR) for functional data analysis within auditory regions, with more localized activation patterns. The technique is validated against DARTEL, a high-dimensional diffeomorphic registration, as well as against commonly used low-dimensional normalization techniques such as the techniques provided with SPM2 (cosine basis functions) and SPM5 (unified segmentation) software packages. We also systematically examine how spatial resolution of the template image and spatial smoothing of the functional data affect the results. Only the high-dimensional technique (HAMMER) appears to be able to capitalize on the excellent anatomical resolution of a single-subject reference template, and, as expected, smoothing increased fSNR, but at the cost of spatial resolution. In general, results demonstrate significant improvement in fSNR using HAMMER compared to analysis after normalization using DARTEL, or conventional normalization such as cosine basis function and unified segmentation in SPM, with more precisely localized activation foci, at least for activation in the region of auditory cortex.

  • 32.
    Wayne, Rachel V.
    et al.
    Queen's University, Canada.
    Johnsrude, Ingrid S.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    The Role of Visual Speech Information in Supporting Perceptual Learning of Degraded Speech2012In: Journal of experimental psychology. Applied, ISSN 1076-898X, E-ISSN 1939-2192, Vol. 18, no 4, p. 419-435Article in journal (Refereed)
    Abstract [en]

    Following cochlear implantation, hearing-impaired listeners must adapt to speech as heard through their prosthesis. Visual speech information (VSI; the lip and facial movements of speech) is typically available in everyday conversation. Here, we investigate whether learning to understand a popular auditory simulation of speech as transduced by a cochlear implant (noise-vocoded [NV] speech) is enhanced by the provision of VSI. Experiment 1 demonstrates that provision of VSI concurrently with a clear auditory form of an utterance as feedback after each NV utterance during training does not enhance learning over clear auditory feedback alone, suggesting that VSI does not play a special role in retuning of perceptual representations of speech. Experiment 2 demonstrates that provision of VSI concurrently with NV speech (a simulation of typical real-world experience) facilitates perceptual learning of NV speech, but only when an NV-only repetition of each utterance is presented after the composite NV/VSI form during training. Experiment 3 shows that this more efficient learning of NV speech is probably due to the additional listening effort required to comprehend the utterance when clear feedback is never provided and is not specifically due to the provision of VSI. Our results suggest that rehabilitation after cochlear implantation does not necessarily require naturalistic audiovisual input, but may be most effective when (a) training utterances are relatively intelligible (approximately 85% of words reported correctly during effortful listening), and (b) the individual has the opportunity to map what they know of an utterance's linguistic content onto the degraded form.

  • 33.
    Wild, Conor J.
    et al.
    Queen's University, Kingston ON Canada.
    Davis, Matthew H.
    Medical Research Council Cognition and Brain Sciences Unit, Cambridge, UK.
    Johnsrude, Ingrid
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Human auditory cortex is sensitive to the perceived clarity of speech2012In: NeuroImage, ISSN 1053-8119, E-ISSN 1095-9572, Vol. 60, no 2, p. 1490-1502Article in journal (Refereed)
    Abstract [en]

    Feedback connections among auditory cortical regions may play an important functional role in processing naturalistic speech, which is typically considered a problem solved through serial feed-forward processing stages. Here, we used fMRI to investigate whether activity within primary auditory cortex (PAC) is sensitive to the perceived clarity of degraded sentences. A region-of-interest analysis using probabilistic cytoarchitectonic maps of PAC revealed a modulation of activity, in the most primary-like subregion (area Te1.0), related to the intelligibility of naturalistic speech stimuli that cannot be driven by stimulus differences. Importantly, this effect was unique to those conditions accompanied by a perceptual increase in clarity. Connectivity analyses suggested sources of input to PAC are higher-order temporal, frontal and motor regions. These findings are incompatible with feed-forward models of speech perception, and suggest that this problem belongs amongst modern perceptual frameworks in which the brain actively predicts sensory input, rather than just passively receiving it.

  • 34.
    Wild, Conor J
    et al.
    Queen's University, Canada.
    Yusuf, Afiqah
    Queen's University, Canada.
    Wilson, Daryl E
    Queen's University, Canada.
    Peelle, Jonathan E
    MRC Cognition and Brain Sciences Unit, United Kingdom.
    Davis, Matthew H
    MRC Cognition and Brain Sciences Unit, United Kingdom.
    Johnsrude, Ingrid S
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Effortful listening: the processing of degraded speech depends critically on attention.2012In: Journal of Neuroscience, ISSN 0270-6474, E-ISSN 1529-2401, Vol. 32, no 40, p. 14010-21Article in journal (Refereed)
    Abstract [en]

    The conditions of everyday life are such that people often hear speech that has been degraded (e.g., by background noise or electronic transmission) or when they are distracted by other tasks. However, it remains unclear what role attention plays in processing speech that is difficult to understand. In the current study, we used functional magnetic resonance imaging to assess the degree to which spoken sentences were processed under distraction, and whether this depended on the acoustic quality (intelligibility) of the speech. On every trial, adult human participants attended to one of three simultaneously presented stimuli: a sentence (at one of four acoustic clarity levels), an auditory distracter, or a visual distracter. A postscan recognition test showed that clear speech was processed even when not attended, but that attention greatly enhanced the processing of degraded speech. Furthermore, speech-sensitive cortex could be parcellated according to how speech-evoked responses were modulated by attention. Responses in auditory cortex and areas along the superior temporal sulcus (STS) took the same form regardless of attention, although responses to distorted speech in portions of both posterior and anterior STS were enhanced under directed attention. In contrast, frontal regions, including left inferior frontal gyrus, were only engaged when listeners were attending to speech and these regions exhibited elevated responses to degraded, compared with clear, speech. We suggest this response is a neural marker of effortful listening. Together, our results suggest that attention enhances the processing of degraded speech by engaging higher-order mechanisms that modulate perceptual auditory processing.

  • 35.
    Zekveld, Adriana A
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. ENT/audiology, VU University Medical Center, the Netherlands.
    Rudner, Mary
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Department of Psychology, Queen’s University, Canada.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Working memory capacity mediates the facilitative effect of semantically related cues on the intelligibility of speech in noise2013Conference paper (Other academic)
    Abstract [en]

    This study assessed the influence of masker type, working memory capacity (reading span and size comparison span) and linguistic closure ability (text reception threshold) on the benefit obtained from semantically related text cues during perception of speech in noise. Sentences were masked by stationary noise, fluctuating noise, or an interfering talker. Each sentence was preceded by three text cues that were either words that were semantically related to the sentence or unpronounceable nonwords. Speech perception thresholds were adaptively measured and delayed sentence recognition was subsequently assessed. Word cues facilitated speech perception in noise. The amount of benefit did not depend on masker type, but benefit correlated with reading span when speech was masked by interfering speech. Cue benefit was not related to reading span when other maskers were used and did not correlate with the text reception threshold or size comparison span. Larger working-memory capacity was furthermore associated with enhanced delayed recall of sentences preceded by word cues relative to nonword cues. This suggests that working memory capacity may be associated with release from informational masking by semantically related information, with keeping the cues in mind while disambiguating the sentence, and with encoding of speech content into long-term memory.

  • 36.
    Zekveld, Adriana
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Section Audiology, Dept. of Otolaryngology-Head and Neck Surgery and EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, Netherlands .
    Heslenfeld, D.J.
    Department of Psychology, VU University, BT Amsterdam, Netherlands.
    Johnsrude, Ingrid
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Department of Psychology, Queen's University, Kingston, Canada; The School of Communication Sciences and Disorders and The Brain and Mind Institute, Natural Sciences Centre, Western University, London, Canada .
    Versfeld, N.J.
    Section Audiology, Dept. of Otolaryngology-Head and Neck Surgery and EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, Netherlands.
    Kramer, S.E.
    Section Audiology, Dept. of Otolaryngology-Head and Neck Surgery and EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, Netherlands.
    The eye as a window to the listening brain: Neural correlates of pupil size as a measure of cognitive listening load2014In: NeuroImage, ISSN 1053-8119, E-ISSN 1095-9572, Vol. 101, p. 76-86Article in journal (Refereed)
    Abstract [en]

    An important aspect of hearing is the degree to which listeners have to deploy effort to understand speech. One promising measure of listening effort is task-evoked pupil dilation. Here, we use functional magnetic resonance imaging (fMRI) to identify the neural correlates of pupil dilation during comprehension of degraded spoken sentences in 17 normal-hearing listeners. Subjects listened to sentences degraded in three different ways: the target female speech was masked by fluctuating noise, by speech from a single male speaker, or the target speech was noise-vocoded. The degree of degradation was individually adapted such that 50% or 84% of the sentences were intelligible. Control conditions included clear speech in quiet, and silent trials. The peak pupil dilation was larger for the 50% compared to the 84% intelligibility condition, and largest for speech masked by the single-talker masker, followed by speech masked by fluctuating noise, and smallest for noise-vocoded speech. Activation in the bilateral superior temporal gyrus (STG) showed the same pattern, with most extensive activation for speech masked by the single-talker masker. Larger peak pupil dilation was associated with more activation in the bilateral STG, bilateral ventral and dorsal anterior cingulate cortex and several frontal brain areas. A subset of the temporal region sensitive to pupil dilation was also sensitive to speech intelligibility and degradation type. These results show that pupil dilation during speech perception in challenging conditions reflects both auditory and cognitive processes that are recruited to cope with degraded speech and the need to segregate target speech from interfering sounds. © 2014 Elsevier Inc.

  • 37.
    Zekveld, Adriana
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Festen, Joost M.
    VU University Medical Center Amsterdam, The Netherlands.
    Van Beek, Johannes H M
    VU University Medical Center Amsterdam, The Netherlands.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    The influence of semantically related and unrelated text cues on the intelligibility of sentences in noise2011In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 32, no 6, p. 16-25Article in journal (Refereed)
    Abstract [en]

    Objectives: In two experiments with different subject groups, we explored the relationship between semantic context and intelligibility by examining the influence of visually presented, semantically related, and unrelated three-word text cues on perception of spoken sentences in stationary noise across a range of speech-to-noise ratios (SNRs). In addition, in Experiment (Exp) 2, we explored the relationship between individual differences in cognitive factors and the effect of the cues on speech intelligibility.

    Design: In Exp 1, cues had been generated by participants themselves in a previous test session (own) or by someone else (alien). These cues were either appropriate for that sentence (match) or for a different sentence (mismatch). A condition with nonword cues, generated by the experimenter, served as a control. Experimental sentences were presented at three SNRs (dB SNR) corresponding to the entirely correct repetition of 29%, 50%, or 71% of sentences (speech reception thresholds; SRTs). In Exp 2, semantically matching or mismatching cues and nonword cues were presented before sentences at SNRs corresponding to SRTs of 16% and 29%. The participants in Exp 2 also performed tests of verbal working memory capacity and the ability to read partially masked text.

    Results: In Exp 1, matching cues improved perception relative to the nonword and mismatching cues, with largest benefits at the SNR corresponding to 29% performance in the SRT task. Mismatching cues did not impair speech perception relative to the nonword cue condition, and no difference in the effect of own and alien matching cues was observed. In Exp 2, matching cues improved speech perception as measured using both the percentage of correctly reported words and the percentage of entirely correctly reported sentences. Mismatching cues reduced the percentage of repeated words (but not the sentence-based scores) compared with the nonword cue condition. Working memory capacity and ability to read partly masked sentences were positively associated with the number of sentences repeated entirely correctly in the mismatch condition at the 29% SNR.

    Conclusions: In difficult listening conditions, both relevant and irrelevant semantic context can influence speech perception in noise. High working memory capacity and good linguistic skills are associated with a greater ability to inhibit irrelevant context when uncued sentence intelligibility is around 29% correct.

  • 38.
    Zekveld, Adriana
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Festen, Joost M.
    ENT/Audiology & EMGO Institute for Health and Care Research, VU University medical center Amsterdam.
    van Beek, Johannes H M
    Vrije University of Amsterdam Medical Centre.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    The influence of semantically related and unrelated text cues on the intelligibility of sentences in noise2010Conference paper (Other academic)
  • 39.
    Zekveld, Adriana
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Cognition, Development and Disability. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid
    Linköping University, Department of Behavioural Sciences and Learning. Linköping University, Faculty of Arts and Sciences.
    Festen, Joost M
    Vrije University of Amsterdam Medical Centre.
    van Beek, Johannes H M
    Vrije University of Amsterdam Medical Centre.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    The Influence of Semantically Related and Unrelated Text Cues on the Intelligibility of Sentences in Noise2011In: Ear and Hearing, ISSN 0196-0202, E-ISSN 1538-4667, Vol. 32, no 6, p. E16-E25Article in journal (Refereed)
    Abstract [en]

    Objectives: In two experiments with different subject groups, we explored the relationship between semantic context and intelligibility by examining the influence of visually presented, semantically related, and unrelated three-word text cues on perception of spoken sentences in stationary noise across a range of speech-to-noise ratios (SNRs). In addition, in Experiment (Exp) 2, we explored the relationship between individual differences in cognitive factors and the effect of the cues on speech intelligibility.

    Design: In Exp 1, cues had been generated by participants themselves in a previous test session (own) or by someone else (alien). These cues were either appropriate for that sentence (match) or for a different sentence (mismatch). A condition with nonword cues, generated by the experimenter, served as a control. Experimental sentences were presented at three SNRs (dB SNR) corresponding to the entirely correct repetition of 29%, 50%, or 71% of sentences (speech reception thresholds; SRTs). In Exp 2, semantically matching or mismatching cues and nonword cues were presented before sentences at SNRs corresponding to SRTs of 16% and 29%. The participants in Exp 2 also performed tests of verbal working memory capacity and the ability to read partially masked text.

    Results: In Exp 1, matching cues improved perception relative to the nonword and mismatching cues, with largest benefits at the SNR corresponding to 29% performance in the SRT task. Mismatching cues did not impair speech perception relative to the nonword cue condition, and no difference in the effect of own and alien matching cues was observed. In Exp 2, matching cues improved speech perception as measured using both the percentage of correctly reported words and the percentage of entirely correctly reported sentences. Mismatching cues reduced the percentage of repeated words (but not the sentence-based scores) compared with the nonword cue condition. Working memory capacity and ability to read partly masked sentences were positively associated with the number of sentences repeated entirely correctly in the mismatch condition at the 29% SNR.

    Conclusions: In difficult listening conditions, both relevant and irrelevant semantic context can influence speech perception in noise. High working memory capacity and good linguistic skills are associated with a greater ability to inhibit irrelevant context when uncued sentence intelligibility is around 29% correct.

  • 40.
    Zekveld, Adriana
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Johnsrude, Ingrid
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    Heslenfeld, D
    Festen, J
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences. Linköping University, The Swedish Institute for Disability Research.
    (Mis)match in the brain: Neural correlates of primed speech understanding in noise2010Conference paper (Other academic)
  • 41.
    Zekveld, Adriana
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Heslenfeld, Dirk J.
    Vrije University Amsterdam.
    Rönnberg, Jerker
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    An fMRI study on the influence of semantically related and unrelated text cues on the intelligibility of sentences in noise2011Conference paper (Other academic)
  • 42.
    Zekveld, Adriana
    et al.
    Linköping University, The Swedish Institute for Disability Research. Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Heslenfeld, Dirk J
    Vrije University of Amsterdam, Netherlands .
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Behavioral and fMRI evidence that cognitive ability modulates the effect of semantic context on speech intelligibility2012In: Brain and Language, ISSN 0093-934X, E-ISSN 1090-2155, Vol. 122, no 2, p. 103-113Article in journal (Refereed)
    Abstract [en]

    Text cues facilitate the perception of spoken sentences to which they are semantically related (Zekveld, Rudner, et al., 2011). In this study, semantically related and unrelated cues preceding sentences evoked more activation in middle temporal gyrus (MTG) and inferior frontal gyrus (IFG) than nonword cues, regardless of acoustic quality (speech in noise or speech in quiet). Larger verbal working memory (WM) capacity (reading span) was associated with greater intelligibility benefit obtained from related cues, with less speech-related activation in the left superior temporal gyrus and left anterior IFG, and with more activation in right medial frontal cortex for related versus unrelated cues. Better ability to comprehend masked text was associated with greater ability to disregard unrelated cues, and with more activation in left angular gyrus (AG). We conclude that individual differences in cognitive abilities are related to activation in a speech-sensitive network including left MTG, IFG and AG during cued speech perception.

  • 43.
    Zekveld, Adriana
    et al.
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Rudner, Mary
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Johnsrude, Ingrid
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Heslenfeld, Dirk J
    VU University, Amsterdam, Netherlands .
    Rönnberg, Jerker
    Linköping University, Department of Behavioural Sciences and Learning, Disability Research. Linköping University, Faculty of Arts and Sciences.
    Corrigendum to “Behavioral and fMRI evidence that cognitive ability modulates the effect of semantic context on speech intelligibility” [Brain Lang. 122 (2012) 103–113]2012In: Brain and Language, ISSN 0093-934X, E-ISSN 1090-2155, Vol. 123, no 2, p. 143-143Article in journal (Refereed)
  • 44.
    Zheng, Zane Z.
    et al.
    Queen's University, Ontario, Canada.
    Munhall, Kevin G.
    Queen's University, Ontario, Canada.
    Johnsrude, Ingrid
    Queen's University, Ontario, Canada.
    Functional overlap between regions involved in speech perception and in monitoring one's own voice during speech production2010In: Journal of cognitive neuroscience, ISSN 0898-929X, E-ISSN 1530-8898, Vol. 22, no 8, p. 1770-1781Article in journal (Refereed)
    Abstract [en]

    The fluency and the reliability of speech production suggest a mechanism that links motor commands and sensory feedback. Here, we examined the neural organization supporting such links by using fMRI to identify regions in which activity during speech production is modulated according to whether auditory feedback matches the predicted outcome or not and by examining the overlap with the network recruited during passive listening to speech sounds. We used real-time signal processing to compare brain activity when participants whispered a consonant–vowel–consonant word (“Ted”) and either heard this clearly or heard voice-gated masking noise. We compared this to when they listened to yoked stimuli (identical recordings of “Ted” or noise) without speaking. Activity along the STS and superior temporal gyrus bilaterally was significantly greater if the auditory stimulus was (a) processed as the auditory concomitant of speaking and (b) did not match the predicted outcome (noise). The network exhibiting this Feedback Type × Production/Perception interaction includes a superior temporal gyrus/middle temporal gyrus region that is activated more when listening to speech than to noise. This is consistent with speech production and speech perception being linked in a control system that predicts the sensory outcome of speech acts and that processes an error signal in speech-sensitive regions when this and the sensory data do not match.

  • 45.
    Zheng, Z.Z.
    et al.
    Queen's University.
    MacDonald, E.
    Queen's University.
    Munhall, K.
    Queen's University.
    Johnsrude, Ingrid
    Queen's University.
    Perceiving a stranger's voice as being one's own: a 'rubber voice' illusion?2011In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 6, no 4, p. e18655-Article in journal (Refereed)
    Abstract [en]

    We describe an illusion in which a stranger's voice, when presented as the auditory concomitant of a participant's own speech, is perceived as a modified version of their own voice. When the congruence between utterance and feedback breaks down, the illusion is also broken. Compared to a baseline condition in which participants heard their own voice as feedback, hearing a stranger's voice induced robust changes in the fundamental frequency (F0) of their production. Moreover, the shift in F0 appears to be feedback dependent, since shift patterns depended reliably on the relationship between the participant's own F0 and the stranger-voice F0. The shift in F0 was evident both when the illusion was present and after it was broken, suggesting that auditory feedback from production may be used separately for self-recognition and for vocal motor control. Our findings indicate that self-recognition of voices, like other body attributes, is malleable and context dependent.
