Speech Sounds
Selected Abstracts

The perception of speech sounds by the human brain as reflected by the mismatch negativity (MMN) and its magnetic equivalent (MMNm)
PSYCHOPHYSIOLOGY, Issue 1 2001. Risto Näätänen

This article outlines the contribution of the mismatch negativity (MMN), and its magnetic equivalent (MMNm), to our understanding of how the human brain perceives speech sounds. MMN data indicate that each sound, speech or nonspeech, develops a neural representation in the neurophysiological substrate of auditory sensory memory that corresponds to the percept of that sound. The accuracy of this representation, which determines the accuracy of discrimination between different sounds, can be probed with the MMN separately for any auditory feature (e.g., frequency or duration) or for stimulus types such as phonemes. Furthermore, MMN data show that the perception of phonemes, and probably also of larger linguistic units (syllables and words), is based on language-specific phonetic traces developed in the posterior part of the left-hemisphere auditory cortex. These traces serve as recognition models for the corresponding speech sounds when listening to speech. MMN studies further suggest that these language-specific traces for the mother tongue develop during the first few months of life, and that the MMN can also index the development of such traces for a foreign language learned later in life. MMN data have also revealed neuronal populations in the human brain that encode acoustic invariances specific to each speech sound, which could explain correct speech perception despite acoustic variation across speakers and word contexts.
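The MMN discussed above is conventionally obtained with a passive oddball paradigm: a frequent "standard" sound is occasionally replaced by a rare "deviant", and the MMN is the difference between the averaged responses to deviants and standards, a negativity peaking roughly 150-250 ms after change onset. A minimal NumPy sketch of that arithmetic, with synthetic epochs standing in for real EEG data (all array names, amplitudes, and timing values below are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                            # sampling rate (Hz), illustrative
t = np.arange(-0.1, 0.5, 1 / fs)    # epoch window around stimulus onset (s)

def simulate_epochs(n_epochs, mmn_amplitude):
    """Synthetic single-trial epochs: noise plus, for deviants,
    a negative deflection peaking ~200 ms after stimulus onset."""
    noise = rng.normal(0.0, 2.0, size=(n_epochs, t.size))
    deflection = -mmn_amplitude * np.exp(-((t - 0.2) ** 2) / (2 * 0.04 ** 2))
    return noise + deflection

# Oddball sequence: ~85% standards, ~15% deviants
standards = simulate_epochs(850, mmn_amplitude=0.0)
deviants = simulate_epochs(150, mmn_amplitude=3.0)

# Average within condition, then subtract: deviant - standard = MMN
mmn_wave = deviants.mean(axis=0) - standards.mean(axis=0)

peak_idx = np.argmin(mmn_wave)      # the MMN is a negativity
print(f"MMN peak {mmn_wave[peak_idx]:.2f} (a.u.) at {t[peak_idx] * 1e3:.0f} ms")
```

With real recordings the same deviant-minus-standard subtraction is applied to baseline-corrected, artifact-rejected averages; the simulation only illustrates the logic of the difference wave.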
Neurophysiologic evaluation of early cognitive development in high-risk infants and toddlers
DEVELOPMENTAL DISABILITIES RESEARCH REVIEWS, Issue 4 2005. Raye-Ann deRegnier

New knowledge of the perceptual, discriminative, and memory capabilities of very young infants has opened the door to further evaluation of these abilities in infants who have risk factors for cognitive impairments. A neurophysiologic technique that has been very useful in this regard is the recording of event-related potentials (ERPs). The ERP technique is widely used by cognitive neuroscientists to study cognitive abilities such as discrimination, attention, and memory. The method has many attractive attributes for use in infants and children: it is relatively inexpensive, does not require sedation, has excellent temporal resolution, and can be used to evaluate early cognitive development in preverbal infants with limited behavioral repertoires. In healthy infants and children, ERPs have been used to gain a further understanding of early cognitive development and the effect of experience on brain function. Recently, ERPs have been used to elucidate atypical memory development in infants of diabetic mothers, difficulties with perception and discrimination of speech sounds in infants at risk for dyslexia, and multiple areas of cognitive difference in extremely premature infants. Atypical findings in high-risk infants have correlated with later cognitive outcomes, but the sensitivity and specificity of the technique have not been studied, so evaluation of individual infants is not yet possible. With further research, this technique may prove very useful in identifying children with cognitive deficits during infancy. Because even young infants can be examined with ERPs, the technique is also likely to be helpful in developing focused early intervention programs designed to improve cognitive function in high-risk infants and toddlers. © 2005 Wiley-Liss, Inc. MRDD Research Reviews 2005;11:317–324.

Tuned to the signal: the privileged status of speech for young infants
DEVELOPMENTAL SCIENCE, Issue 3 2004. Athena Vouloumanos

Do young infants treat speech as a special signal compared with structurally similar non-speech sounds? We presented 2- to 7-month-old infants with nonsense speech sounds and complex non-speech analogues. The non-speech analogues retain many of the spectral and temporal properties of the speech signal, including the pitch contour information known to be salient to young listeners, and thus provide a stringent test for a potential listening bias for speech. Our results show that infants as young as 2 months of age listened longer to the speech sounds. This listening selectivity indicates that early-functioning biases direct infants' attention to speech, granting speech a special status in relation to other sounds.

Evidence for early specialized processing of speech formant information in anterior and posterior human auditory cortex
EUROPEAN JOURNAL OF NEUROSCIENCE, Issue 4 2010. Barrie A. Edmonds

Many speech sounds, such as vowels, exhibit a characteristic pattern of spectral peaks, referred to as formants, whose frequency positions depend both on the phonological identity of the sound (e.g. vowel type) and on the vocal-tract length of the speaker. This study investigates the processing of formant information relating to vowel type and vocal-tract length in human auditory cortex by measuring electroencephalographic (EEG) responses to synthetic unvoiced vowels and spectrally matched noises. The results revealed specific sensitivity to vowel formant information in both anterior (planum polare) and posterior (planum temporale) regions of auditory cortex. The vowel-specific responses in these two areas appeared to have different temporal dynamics: the anterior source produced a sustained response for as long as the incoming sound was a vowel, whereas the posterior source responded transiently when the sound changed from a noise to a vowel, or when there was a change in vowel type. Moreover, the posterior source appeared to be largely invariant to changes in vocal-tract length. The current findings indicate that the initial extraction of vowel type from formant information is complete by the level of non-primary auditory cortex, suggesting that speech-specific processing may involve primary auditory cortex, or even subcortical structures. This challenges the view that specific sensitivity to speech emerges only beyond unimodal auditory cortex.
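Formants like those manipulated in the study above are, in practice, commonly estimated from a speech waveform by linear predictive coding (LPC): an all-pole filter is fitted to the signal and formant frequencies are read off the angles of the filter's poles. A sketch of this standard procedure using librosa's LPC routine; the file name, model order rule of thumb, and frequency limits are illustrative choices, not taken from the study:

```python
import numpy as np
import librosa  # librosa.lpc is available in librosa >= 0.7

def estimate_formants(y, sr, order=None, fmin=90, fmax=5000):
    """Estimate formant frequencies via LPC: fit an all-pole model,
    then convert pole angles to frequencies in Hz."""
    if order is None:
        order = 2 + sr // 1000              # common rule of thumb
    a = librosa.lpc(y, order=int(order))    # LPC coefficients, a[0] == 1
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]       # keep one of each conjugate pair
    freqs = np.angle(roots) * sr / (2 * np.pi)
    return np.sort(freqs[(freqs > fmin) & (freqs < fmax)])

# Illustrative use on a recorded vowel (file name hypothetical):
# y, sr = librosa.load("vowel_u.wav", sr=16000)
# f1, f2 = estimate_formants(y, sr)[:2]
```

Real formant trackers add pre-emphasis, frame-by-frame windowing, and bandwidth criteria for rejecting spurious poles; this sketch shows only the core pole-angle computation.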
Neural bases of categorization of simple speech and nonspeech sounds
HUMAN BRAIN MAPPING, Issue 8 2006. Fatima T. Husain

Categorization is fundamental to our perception and understanding of the environment, yet little is known about the neural bases underlying the categorization of sounds. Using human functional magnetic resonance imaging (fMRI), we compared brain responses during a category discrimination task with those during an auditory discrimination task using identical sets of sounds. Our stimuli differed along two dimensions: a speech–nonspeech dimension and a fast–slow temporal dynamics dimension. For both tasks, all stimuli activated primary and nonprimary auditory cortices in the temporal lobe, as well as parietal and frontal cortices. When the activation patterns for the category discrimination task were compared with those for the auditory discrimination task, a core group of regions beyond the auditory cortices, including the inferior and middle frontal gyri, dorsomedial frontal gyrus, and intraparietal sulcus, was preferentially activated both for familiar speech categories and for novel nonspeech categories. A number of studies have shown these regions to play a role in working memory tasks. Additionally, the categorization of nonspeech sounds activated the left middle frontal gyrus and right parietal cortex to a greater extent than did the categorization of speech sounds. Processing the temporal aspects of the stimuli had a greater impact on the left lateralization of the categorization network than did other factors, particularly in the inferior frontal gyrus, suggesting that there is no inherent left-hemisphere advantage in the categorical processing of speech stimuli, or for the categorization task itself. Hum Brain Mapp, 2005. © 2005 Wiley-Liss, Inc.

Vowel sound extraction in anterior superior temporal cortex
HUMAN BRAIN MAPPING, Issue 7 2006. Jonas Obleser

We investigated the functional neuroanatomy of vowel processing by comparing attentive auditory perception of natural German vowels to perception of nonspeech band-passed noise stimuli using functional magnetic resonance imaging (fMRI). More specifically, we considered the mapping in auditory cortex of the first and second formants, which spectrally characterize vowels and are closely linked to phonological features. Multiple exemplars of natural German vowels were presented in sequences alternating either mainly along the first formant (e.g., [u]-[o], [i]-[e]) or along the second formant (e.g., [u]-[i], [o]-[e]). In fixed-effects and random-effects analyses, vowel sequences elicited more activation than nonspeech noise in the anterior superior temporal cortex (aST) bilaterally. Partial segregation of different vowel categories was observed within the activated regions, suggestive of a speech-sound map across the cortical surface. Our results add to the growing evidence that speech sounds, as one of the behaviorally most relevant classes of auditory objects, are analyzed and categorized in aST. These findings also support the notion of an auditory "what" stream, with highly object-specialized areas anterior to primary auditory cortex. Hum. Brain Mapp, 2005. © 2005 Wiley-Liss, Inc.
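The study above turns on the fact that a vowel's identity is largely determined by its position in (F1, F2) space: the [u]-[o] and [i]-[e] sequences vary mainly along F1, while the [u]-[i] and [o]-[e] sequences vary mainly along F2. A toy nearest-centroid classifier over rough textbook formant values makes this geometry concrete; the centroid figures below are approximate illustrations, not the study's stimulus measurements:

```python
import numpy as np

# Approximate (F1, F2) centroids in Hz for the four German vowels named
# in the study; rough textbook figures, for illustration only.
VOWEL_CENTROIDS = {
    "i": (280, 2250),
    "e": (400, 2000),
    "o": (450, 850),
    "u": (320, 700),
}

def classify_vowel(f1, f2):
    """Nearest-centroid classification in (log F1, log F2) space.
    Log scaling roughly equalizes the weight of the two axes."""
    point = np.log([f1, f2])
    dists = {
        vowel: np.linalg.norm(point - np.log(centroid))
        for vowel, centroid in VOWEL_CENTROIDS.items()
    }
    return min(dists, key=dists.get)

print(classify_vowel(300, 2300))  # -> 'i'
print(classify_vowel(460, 820))   # -> 'o'
```

The table also shows why the study's pairings isolate one formant at a time: [u] and [o] share a low F2 but differ in F1, whereas [u] and [i] share a low F1 but differ sharply in F2.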
Mirror Neurons, the Motor System and Language: From the Motor Theory to Embodied Cognition and Beyond
LINGUISTICS & LANGUAGE COMPASS (ELECTRONIC), Issue 6 2009. Jonathan H. Venezia

The motor theory of speech perception states that phonetic segments in the acoustic speech stream activate stored motor commands in the brain, which give rise to the perception of discrete speech sounds. The motor theory fell out of favor when growing evidence from lesion and behavioral studies made aspects of the theory appear untenable. However, with the recent discovery of mirror neurons and their potential role in action understanding, interest in the motor theory of speech perception has been renewed. We review the function and properties of mirror neurons in monkeys, and briefly describe the current literature on the role of a putative human mirror system in cognition and language processing. Further, we describe the evidence proposed for the involvement of the motor system in receptive speech processing, and point out ambiguities in the literature that arise from the tight coupling of sensory and motor processes in speech comprehension. We offer an alternative account in which sensory representations in superior temporal cortex are mapped onto frontal production networks. We cite evidence that the motor theory fails to accurately describe receptive speech processing, and conclude that speech representations are fundamentally sensory in nature.