Socialization with relatives, friends and colleagues is often regarded as one of the main ingredients of life. Our thoughts, beliefs and ways of life are affected by socialization. The question discussed in this chapter is how social interaction affects our mental processes, especially our memory processes.
Disease states and injuries in organs and tissues cause various functional deviations, which in turn give rise to symptoms that the person in question can observe and suffer from. These cause functional impairments that affect the individual's ability to function and cope in his or her daily environment at home, in various work situations, and in various social contexts.
This study examined the extent to which different measures of speechreading performance correlated with particular cognitive abilities in a population of hearing-impaired people. Although the three speechreading tasks (isolated word identification, sentence comprehension, and text tracking) were highly intercorrelated, they tapped different cognitive skills. In this population, younger participants were better speechreaders, and, when age was taken into account, speech tracking correlated primarily with (written) lexical decision speed. In contrast, speechreading for sentence comprehension correlated most strongly with performance on a phonological processing task (written pseudohomophone detection) but also on a span measure that may have utilized visual, nonverbal memory for letters. We discuss the implications of this pattern.
Evidence suggests that the lag reported in mathematics for deaf signers derives from difficulties related to verbal processing of numbers, whereas magnitude processing seems unaffected by deafness. Neuroimaging evidence from hearing individuals suggests that verbal processing of numbers engages primarily the left angular gyrus (lAG), whereas magnitude processing engages primarily the horizontal portion of the right intraparietal sulcus (rHIP). In an ROI analysis of brain imaging data from 16 adult deaf signers and 16 adult hearing non-signers, who did not differ in sex, age or education, we examined whether activity in lAG and rHIP changed as a result of task (multiplication vs subtraction) and group (deaf signers and hearing non-signers). We found a significant main effect of brain region (F(1,30) = 117.00, p < .001, η_p² = .80) and an interaction effect between region and group (F(1,30) = 20.70, p < .001, η_p² = .41). Further analyses showed that there were no significant differences in average activation between groups in lAG (F(1,30) = 0.16, p = .70). However, in rHIP deaf signers showed significantly greater average activation than non-signers (F(1,30) = 15.20, p < .001, η_p² = .34). There were no significant differences in activation between subtraction and multiplication (F(1,30) = 0.66, p = .42) and no behavioural differences between groups (F(1,30) = 1.70, p = .20). These results suggest that when engaging in arithmetic tasks, deaf signers successfully make use of qualitatively different processes than hearing non-signers, with a stronger emphasis on brain regions related to magnitude manipulation.
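The effect sizes reported above follow directly from the F statistics and their degrees of freedom. A minimal sketch (the helper function is ours, not from the study) recovering partial eta squared from a reported F(1,30):

```python
# Hypothetical helper: recover partial eta squared from a reported F test.
# eta_p^2 = (F * df_effect) / (F * df_effect + df_error)

def partial_eta_squared(F, df_effect, df_error):
    """Effect size for a reported F(df_effect, df_error) statistic."""
    return (F * df_effect) / (F * df_effect + df_error)

# Values reported in the abstract (df = 1, 30 throughout):
print(round(partial_eta_squared(117.00, 1, 30), 2))  # main effect of region -> 0.8
print(round(partial_eta_squared(20.70, 1, 30), 2))   # region x group -> 0.41
print(round(partial_eta_squared(15.20, 1, 30), 2))   # rHIP group difference -> 0.34
```

This is one standard conversion between F values and η_p² for fixed-effects ANOVA terms; it reproduces the three effect sizes quoted in the abstract.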
Profoundly deaf individuals sometimes have difficulty with arithmetic and phonological tasks. In the present study we investigate whether these differences can be attributed to differences in recruitment of neurobiological networks. Seventeen hearing non-signers (HN) and sixteen deaf signers (DS), matched on age, gender and non-verbal intelligence, took part in an fMRI study. In the scanner, three digit/letter pairs were visually presented and the participants performed six different blocked tasks tapping processing of digit and letter order, multiplication, subtraction and phonological ability. Data were analysed using two 2x2x2 ANOVAs: process (arithmetic, language) x level (high, low) x group (DS, HN). A main effect of process revealed language networks in the left inferior frontal gyrus, supramarginal gyrus, fusiform gyrus and insula. Arithmetic networks included the left middle orbital gyrus and superior medial gyrus. A main effect of level revealed low-level processing (digit/letter order) in the right middle occipital gyrus and the right precuneus, and high-level processing (subtraction/multiplication/phonological ability) in the left inferior frontal gyrus. There was no main effect of group, but there was a significant task x group interaction in the right temporal pole, which in DS (but not HN) was activated more for arithmetic than language processing (p_FWE = .022) when multiplication was included in the analysis. This region is implicated in conceptual representation. These results suggest that both arithmetic and language are processed similarly by DS and HN, with possible between-group differences in the use of conceptual representation in arithmetic and language tasks.
Evidence suggests that the lag reported in mathematics for deaf signers derives from difficulties related to the verbal system of number processing as described in the triple code model. For hearing individuals the verbal system has been shown to be recruited for both arithmetic and language tasks. In the present study we investigate for the first time neuronal representations of arithmetic in deaf signers. We examine if the neural network supporting arithmetic and language, including the horizontal portion of the intraparietal sulcus (HIPS), the superior parietal lobule (SPL) bilaterally, the left angular gyrus (AG), pars opercularis (POPE) and pars triangularis (PTRI) of the left inferior frontal gyrus (IFG), is differently recruited for deaf and hearing individuals. Imaging data were collected from 16 deaf signers and 16 well-matched hearing nonsigners, using the same stimulus material for all tasks, but with different cues. During multiplication, deaf signers recruited rHIPS more than hearing non-signers, suggesting greater involvement of magnitude manipulation processes related to the quantity system, whereas there was no evidence that the verbal system was recruited. Further, there was no support for the notion of a common representation of phonology for sign and speech as previously suggested.
In hearing individuals, multiplication relies mainly on the phonological loop while subtraction relies on the visuo-spatial sketchpad (VSSP; Lee & Kang, 2002). Little is known about arithmetic neural networks in deaf signers (DS). Since DS often perform worse than hearing non-signers (HN) on arithmetic in general and multiplication in particular (Traxler, 2000), we hypothesized that there are strategic differences in how the groups recruit the phonological loop in multiplication, but not in subtraction, leading to differential activation of phonological processing areas in the left inferior frontal gyrus (Broca’s area). We investigated this using a blocked fMRI design in which 9 DS and 17 HN, matched on age, gender, education and non-verbal intelligence (Raven & Raven, 1998), were tested on tasks of multiplication, subtraction and phonology (rhyme). The contrasts rhyme versus multiplication and rhyme versus subtraction were examined across groups within the region of interest defined by a probability map of Broca’s area (Amunts, 1999). We observed a significant interaction between task (multiplication and rhyme) and group (F = 12.64, p = .034, FWE-corrected), where the HN showed higher activation for rhyme than for multiplication (T = 4.55, p = .001, FWE-corrected), whereas there were no differences in activation between tasks for DS. For subtraction versus rhyme, no interaction with group was found. These results suggest that there are differences between DS and HN in the phonology-dependent neural networks in Broca’s area used during multiplication, which may be part of the explanation for poorer performance in DS.
Arithmetic and language processing involve similar neural networks, but the relative engagement remains unclear. In the present study we used fMRI to compare activation for phonological, multiplication and subtraction tasks, keeping the stimulus material constant, within a predefined language-calculation network including left inferior frontal gyrus and angular gyrus (AG) as well as superior parietal lobule and the intraparietal sulcus bilaterally. Results revealed a generally left lateralized activation pattern within the language-calculation network for phonology and a bilateral activation pattern for arithmetic, and suggested regional differences between tasks. In particular, we found a more prominent role for phonology than arithmetic in pars opercularis of the left inferior frontal gyrus but domain generality in pars triangularis. Parietal activation patterns demonstrated greater engagement of the visual and quantity systems for calculation than language. This set of findings supports the notion of a common, but regionally differentiated, language-calculation network. (C) 2015 The Authors. Published by Elsevier Inc.
Similar working memory (WM) for lexical items has been demonstrated for signers and non-signers while short-term memory (STM) is regularly poorer in deaf than hearing individuals. In the present study, we investigated digit-based WM and STM in Swedish and British deaf signers and hearing non-signers. To maintain good experimental control we used printed stimuli throughout and held response mode constant across groups. We showed that deaf signers have similar digit-based WM performance, despite shorter digit spans, compared to well-matched hearing non-signers. We found no difference between signers and non-signers on STM span for letters chosen to minimize phonological similarity or in the effects of recall direction. This set of findings indicates that similar WM for signers and non-signers can be generalized from lexical items to digits and suggests that poorer STM in deaf signers compared to hearing non-signers may be due to differences in phonological similarity across the language modalities of sign and speech.
Deaf students generally lag several years behind hearing peers in arithmetic, but little is known about the mechanisms behind this. In the present study we investigated how phonological skills interact with arithmetic. Eighteen deaf signers and eighteen hearing non-signers took part in an experiment that manipulated arithmetic and phonological knowledge in the language modalities of sign and speech. Independent tests of alphabetical and native language phonological skills were also administered. There was no difference in performance between groups on subtraction, but hearing non-signers performed better than deaf signers on multiplication. For the deaf signers but not the hearing non-signers, multiplicative reasoning was associated with both alphabetical and phonological skills. This indicates that deaf signing adults rely on language processes to solve multiplication tasks, possibly because automatization of multiplication is less well established in deaf adults.
n/a
The audiogram predicts less than a third of the variance in speech reception thresholds (SRTs) for hearing-impaired (HI) listeners properly fit with individualized frequency-dependent gain. The remaining variance is often attributed to a combination of suprathreshold distortion in the auditory pathway and non-auditory factors such as cognitive processing. Distinguishing between these factors requires a measure of suprathreshold auditory processing to account for the non-cognitive contributions. Preliminary results in 12 HI listeners identified a correlation between spectrotemporal modulation (STM) sensitivity and speech intelligibility in noise presented over headphones. The current study assessed the effectiveness of STM sensitivity as a measure of suprathreshold auditory function to predict free-field SRTs in noise for a larger group of 47 HI listeners with hearing aids.

SRTs were measured for Hagerman sentences presented at 65 dB SPL in stationary speech-weighted noise or four-talker babble. Pre-recorded speech and masker stimuli were played through a small anechoic chamber equipped with a master hearing aid programmed with individualized gain. The output from an IEC711 Ear Simulator was played binaurally through insert earphones. Three processing algorithms were examined: linear gain, linear gain plus noise reduction, or fast-acting compressive gain.

STM stimuli consist of spectrally rippled noise with spectral-peak frequencies that shift over time. STM with a 2-cycle/octave spectral-ripple density and a 4-Hz modulation rate was applied to a 2-kHz lowpass-filtered pink-noise carrier. Stimuli were presented over headphones at 80 dB SPL (±5-dB roving). The threshold modulation depth was estimated adaptively in a two-alternative forced-choice task.

STM sensitivity was strongly correlated (R² = 0.48) with the global SRT (i.e., the SRTs averaged across masker and processing conditions). The high-frequency pure-tone average (3-8 kHz) and age together accounted for 23% of the variance in global SRT. STM sensitivity accounted for an additional 28% of the variance in global SRT (total R² = 0.51) when combined with these two other metrics in a multiple-regression analysis. Correlations between STM sensitivity and SRTs for individual conditions were weaker for noise reduction than for the other algorithms, and marginally stronger for babble than for stationary noise.

The results are discussed in the context of previous work suggesting that STM sensitivity for low rates and low carrier frequencies is impaired by a reduced ability to use temporal fine-structure information to detect slowly shifting spectral peaks. STM detection is a fast, simple test of suprathreshold auditory function that accounts for a substantial proportion of variability in hearing-aid outcomes for speech perception in noise.
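The hierarchical-regression logic described above (incremental variance explained when STM sensitivity is added to the audiogram and age) can be sketched as follows. All data here are synthetic stand-ins, since the individual measurements are not given in the abstract, and `r2()` is an illustrative helper, not the study's analysis code:

```python
import numpy as np

# Synthetic stand-ins for the predictors and outcome described above.
rng = np.random.default_rng(0)
n = 47
hf_pta = rng.normal(50, 10, n)          # high-frequency pure-tone average (dB HL)
age = rng.normal(70, 8, n)              # listener age (years)
stm = rng.normal(0, 1, n)               # STM modulation-depth threshold (z-scored)
srt = 0.3 * hf_pta + 0.2 * age + 4.0 * stm + rng.normal(0, 5, n)  # global SRT

def r2(predictors, y):
    """Coefficient of determination for an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

base = r2([hf_pta, age], srt)           # audiogram + age only
full = r2([hf_pta, age, stm], srt)      # adding STM sensitivity
print(f"incremental R2 from STM: {full - base:.2f}")
```

The difference `full - base` is the incremental R² attributed to STM sensitivity, analogous to the 28% increment reported in the abstract.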
Purpose: This research aimed to increase the analogy between text reception threshold (TRT) and speech reception threshold (SRT) and to examine the TRT's value in estimating cognitive abilities important for speech comprehension in noise.
Method: We administered five TRT versions, SRT tests in stationary (SRTSTAT) and modulated (SRTMOD) noise, and two cognitive tests: a reading span (RSpan) test for working memory capacity, and a letter-digit-substitution test for information processing speed. Fifty-five normal hearing adults (18–78 years, mean = 44) participated. We examined mutual associations of the tests and their predictive value for the SRTs with correlation and linear regression analyses.
Results: SRTs and TRTs were well associated, also when controlling for age. Correlations for the SRTSTAT were generally lower than for the SRTMOD. The cognitive tests correlated with the SRTs only when age was not controlled for. Age and the TRTs were the only significant predictors of SRTMOD. SRTSTAT was predicted by level of education and some of the TRT versions.
Conclusions: TRTs and SRTs are robustly associated, nearly independent of age. The association between SRTs and RSpan is largely age-dependent. The TRT test and the RSpan test measure different non-auditory components of linguistic processing relevant for speech perception in noise.
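"Controlling for age" in the analyses above amounts to a partial correlation, which can be computed by residualizing both measures on age. A minimal sketch on synthetic data (the variable names and effect sizes are illustrative assumptions, not values from the study):

```python
import numpy as np

# Synthetic stand-ins: both thresholds worsen with age, and TRT also
# contributes to SRT independently of age.
rng = np.random.default_rng(1)
n = 55
age = rng.uniform(18, 78, n)
trt = 0.05 * age + rng.normal(0, 1, n)              # text reception threshold
srt = 0.05 * age + 0.8 * trt + rng.normal(0, 1, n)  # speech reception threshold

def residualize(y, x):
    """Residuals of y after regressing out x (with intercept)."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

raw_r = np.corrcoef(trt, srt)[0, 1]
partial_r = np.corrcoef(residualize(trt, age), residualize(srt, age))[0, 1]
print(f"raw r = {raw_r:.2f}, age-partialled r = {partial_r:.2f}")
```

A TRT-SRT association that survives residualization on age, as in this sketch, is what the abstract means by "nearly independent of age".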
A sound localization aid based on eyeglasses with three microphones and four vibrators was tested in a sound-treated acoustic test room and in an ordinary office. A digital signal-processing algorithm determined the source angle, which was transformed into eight vibrator codes, each corresponding to a 45-degree sector. The instrument was tested on nine deaf and three deaf-blind individuals. The results show an average hit rate of about 80% in the sound-treated room, with 100% for the front 135-degree sector. The results in a realistic communication situation in an ordinary office room were 70% correct based on single presentations and 95% correct when more realistic criteria for an adequate reaction were used. Ten of the twelve subjects were interested in participating in field tests using a planned miniaturized version.
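The quantization step described above (an estimated source angle mapped to one of eight 45-degree vibrator sectors) can be sketched as follows. The sector numbering and the convention of centring sector 0 on straight ahead are our assumptions, not taken from the paper:

```python
# Hypothetical sketch of the angle-to-vibrator-code mapping: quantize any
# source angle (degrees) into eight 45-degree sectors, with sector 0
# centred on straight ahead (0 degrees).

def sector_code(angle_deg):
    """Map a source angle in degrees (any value) to a sector code 0-7."""
    # Shift by half a sector so each code is centred on a multiple of 45 deg.
    return int(((angle_deg + 22.5) % 360) // 45)

print(sector_code(0))     # straight ahead -> 0
print(sector_code(90))    # right -> 2
print(sector_code(-90))   # left -> 6
print(sector_code(180))   # behind -> 4
```

Each code would then drive the vibrator pattern for its sector; the 100% score for the front 135-degree sector corresponds to codes 7, 0 and 1 in this numbering.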