Temporal Sulcus (temporal + sulcus)

Kinds of Temporal Sulcus

  • superior temporal sulcus


  • Selected Abstracts


    A comparison of five fMRI protocols for mapping speech comprehension systems

    EPILEPSIA, Issue 12 2008
    Jeffrey R. Binder
    Summary. Aims: Many fMRI protocols for localizing speech comprehension have been described, but there has been little quantitative comparison of these methods. We compared five such protocols in terms of areas activated, extent of activation, and lateralization. Methods: fMRI BOLD signals were measured in 26 healthy adults during passive listening and active tasks using words and tones. Contrasts were designed to identify speech perception and semantic processing systems. Activation extent and lateralization were quantified by counting activated voxels in each hemisphere for each participant. Results: Passive listening to words produced bilateral superior temporal activation. After controlling for prelinguistic auditory processing, only a small area in the left superior temporal sulcus responded selectively to speech. Active tasks engaged an extensive bilateral attention and executive processing network. Optimal results (consistent activation and a strongly lateralized pattern) were obtained by contrasting an active semantic decision task with a tone decision task. There was striking similarity between the network of brain regions activated by the semantic task and the network of brain regions that showed task-induced deactivation, suggesting that semantic processing occurs during the resting state. Conclusions: fMRI protocols for mapping speech comprehension systems differ dramatically in pattern, extent, and lateralization of activation. Brain regions involved in semantic processing were identified only when an active, nonlinguistic task was used as a baseline, supporting the notion that semantic processing occurs whenever attentional resources are not controlled. Identification of these lexical-semantic regions is particularly important for predicting language outcome in patients undergoing temporal lobe surgery. [source]
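
    The lateralization measure mentioned in this abstract is based on counting activated voxels per hemisphere. As a rough illustration only (not the authors' pipeline), the sketch below computes such counts and a standard laterality index LI = (L - R)/(L + R) from a thresholded statistical map; the array name, threshold, and midline split are all assumptions.

    ```python
    # Illustrative sketch only (not the authors' code): count suprathreshold
    # voxels per hemisphere in a statistical map and derive a laterality index.
    # Assumes the first array axis runs left -> right; the threshold is arbitrary.
    import numpy as np

    def lateralization_index(stat_map, threshold=3.0):
        """Return (left_count, right_count, LI) for a thresholded 3-D map."""
        midline = stat_map.shape[0] // 2            # crude left/right split
        active = stat_map > threshold               # "activated" voxels
        left = int(active[:midline].sum())
        right = int(active[midline:].sum())
        li = (left - right) / max(left + right, 1)  # avoid division by zero
        return left, right, li

    # Synthetic example: inject extra signal into the left half of the volume.
    rng = np.random.default_rng(0)
    demo = rng.normal(size=(64, 64, 40))
    demo[:20, 30:40, 10:20] += 5.0
    print(lateralization_index(demo))
    ```

    With this convention the index runs from +1 (only left-hemisphere voxels active) to -1 (only right-hemisphere voxels active), with values near 0 indicating bilateral activation.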


    Neuronal substrates of gaze following in monkeys

    EUROPEAN JOURNAL OF NEUROSCIENCE, Issue 8 2009
    Simone Kamphuis
    Abstract Human and non-human primates follow the gaze of their conspecifics to identify objects of common interest. Whereas humans rely on eye-gaze for such purposes, monkeys preferentially use head-gaze information. Functional magnetic resonance imaging (fMRI) studies have delineated an area in the human superior temporal sulcus (STS), which is specifically activated when subjects actively follow the eye-gaze of others. Similarly, using fMRI, we have identified an analogous region in the monkey's middle STS that responds during gaze following. Hence, although humans and monkeys might rely on different directional cues to guide their attention, they seem to deploy a similar and possibly homologous cortical area to follow the gaze of a conspecific. Our results support the idea that the eyes developed a new social function in human evolution, most likely to support cooperative, mutual social interactions, building on a phylogenetically old STS module for the processing of head cues. [source]


    Heteromodal connections supporting multisensory integration at low levels of cortical processing in the monkey

    EUROPEAN JOURNAL OF NEUROSCIENCE, Issue 11 2005
    Céline Cappe
    Abstract While multisensory integration is thought to occur in higher hierarchical cortical areas, recent studies in man and monkey have revealed plurisensory modulations of activity in areas previously thought to be unimodal. To determine the cortical network involved in multisensory interactions, we performed multiple injections of different retrograde tracers in unimodal auditory (core), somatosensory (1/3b) and visual (V2 and MT) cortical areas of the marmoset. We found three types of heteromodal connections linking unimodal sensory areas. Visuo-somatosensory projections were observed originating from visual areas [probably the ventral and dorsal fundus of the superior temporal area (FSTv and FSTd), and the middle temporal crescent (MTc)] toward areas 1/3b. Somatosensory projections to the auditory cortex were present from S2 and the anterior bank of the lateral sulcus. Finally, a visuo-auditory projection was found to arise from an area anterior to the superior temporal sulcus (STS) toward the auditory core. Injections in different sensory regions allowed us to define the frontal convexity and the temporal opercular caudal cortex as putative polysensory areas. A quantitative analysis of the laminar distribution of projecting neurons showed that heteromodal connections could be either feedback or feedforward. Taken together, our results provide the anatomical pathway for multisensory integration at low levels of information processing in the primate and argue against a strict hierarchical model. [source]
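
    The feedforward/feedback distinction reported above rests on the laminar distribution of retrogradely labelled neurons. A commonly used summary of such data, shown here only as a hedged illustration and not necessarily the measure used in this study, is the percentage of labelled neurons found in the supragranular layers (%SLN); the counts and the 50% cut-off below are invented.

    ```python
    # Hedged illustration (not necessarily this study's metric): the percentage
    # of supragranular labelled neurons (%SLN). High values are usually read as
    # feedforward-like, low values as feedback-like; all counts are invented.
    def percent_sln(supragranular_count, infragranular_count):
        total = supragranular_count + infragranular_count
        return 100.0 * supragranular_count / total if total else float("nan")

    def classify_projection(sln, cutoff=50.0):
        return "feedforward-like" if sln >= cutoff else "feedback-like"

    # Example: 30 labelled neurons in layers II/III, 120 in layers V/VI.
    sln = percent_sln(30, 120)             # -> 20.0
    print(sln, classify_projection(sln))   # -> 20.0 feedback-like
    ```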


    Reciprocal connections between olfactory structures and the cortex of the rostral superior temporal sulcus in the Macaca fascicularis monkey

    EUROPEAN JOURNAL OF NEUROSCIENCE, Issue 10 2005
    A. Mohedano-Moriano
    Abstract Convergence of sensory modalities in the nonhuman primate cerebral cortex is still poorly understood. We present an anatomical tracing study in which polysensory association cortex located at the fundus and upper bank of the rostral superior temporal sulcus presents reciprocal connections with primary olfactory structures. At the same time, projections from this polysensory area reach multiple primary olfactory centres. Retrograde (Fast Blue) and anterograde (biotinylated dextran-amine and 3H-amino acids) tracers were injected into primary olfactory structures and rostral superior temporal sulcus. Retrograde tracers restricted to the anterior olfactory nucleus resulted in labelled neurons in the rostral portion of the upper bank and fundus of superior temporal sulcus. Injections of biotinylated dextran-amine at the fundus and upper bank of the superior temporal sulcus confirmed this projection by labelling axons in the dorsal and lateral portions of the anterior olfactory nucleus, as well as piriform, periamygdaloid and entorhinal cortices. Retrograde tracer injections at the rostral superior temporal sulcus resulted in neuronal labelling in the anterior olfactory nucleus, piriform, periamygdaloid and entorhinal cortices, thus providing confirmation of the reciprocity between primary olfactory structures and the cortex at the rostral superior temporal sulcus. The reciprocal connections between the rostral part of superior temporal sulcus and primary olfactory structures represent a convergence for olfactory and other sensory modalities at the cortex of the rostral temporal lobe. [source]


    Comparative cytoarchitectonic analysis of the human and the macaque ventrolateral prefrontal cortex and corticocortical connection patterns in the monkey

    EUROPEAN JOURNAL OF NEUROSCIENCE, Issue 2 2002
    M. Petrides
    A comparison of the cytoarchitecture of the human and the macaque monkey ventrolateral prefrontal cortex demonstrated a region in the monkey that exhibits the architectonic characteristics of area 45 in the human brain. This region occupies the dorsal part of the ventrolateral prefrontal convexity just below area 9/46v. Rostroventral to area 45 in the human brain lies a large cortical region labelled as area 47 by Brodmann. The ventrolateral component of this region extending as far as the lateral orbital sulcus has architectonic characteristics similar to those of the ventrolateral prefrontal region labelled by Walker as area 12 in the macaque monkey. We designated this region in both the human and the monkey ventrolateral prefrontal cortex as area 47/12. Thus, area 47/12 designates the specific part of the zone previously labelled as area 47 in the human brain that has the same overall architectonic pattern as that of Walker's area 12 in the macaque monkey brain. The cortical connections of these two areas were examined in the monkey by injecting fluorescent retrograde tracers. Although both area 45 and area 47/12 as defined here had complex multimodal input, they could be differentiated in terms of some of their inputs. Retrograde tracers restricted to area 47/12 resulted in heavy labelling of neurons in the rostral inferotemporal visual association cortex and in temporal limbic areas (i.e. perirhinal and parahippocampal cortex). In contrast, injections of tracers into the dorsally adjacent area 45 demonstrated strong labelling in the superior temporal gyrus (i.e. the auditory association cortex) and the multimodal cortex in the upper bank of the superior temporal sulcus. [source]


    The role of the superior temporal sulcus and the mirror neuron system in imitation

    HUMAN BRAIN MAPPING, Issue 9 2010
    Pascal Molenberghs
    Abstract It has been suggested that in humans the mirror neuron system provides a neural substrate for imitation behaviour, but the relative contributions of different brain regions to the imitation of manual actions are still a matter of debate. To investigate the role of the mirror neuron system in imitation we used fMRI to examine patterns of neural activity under four different conditions: (1) passive observation of a pantomimed action (e.g., hammering a nail); (2) imitation of an observed action; (3) execution of an action in response to a word cue; and (4) self-selected execution of an action. A network of cortical areas, including the left supramarginal gyrus, left superior parietal lobule, left dorsal premotor area and bilateral superior temporal sulcus (STS), was significantly active across all four conditions. Crucially, within this network the STS bilaterally was the only region in which activity was significantly greater for action imitation than for the passive observation and execution conditions. We suggest that the role of the STS in imitation is not merely to passively register observed biological motion, but rather to actively represent visuomotor correspondences between one's own actions and the actions of others. Hum Brain Mapp, 2010. © 2010 Wiley-Liss, Inc. [source]
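
    The analysis described above combines a conjunction across all four conditions with a contrast isolating imitation-specific activity. The toy sketch below illustrates that logic on voxelwise statistical maps; the array names, synthetic data, and z-threshold are assumptions rather than details taken from the study.

    ```python
    # Toy sketch (not the study's pipeline): a conjunction across the four
    # condition maps plus a contrast isolating voxels where imitation exceeds
    # observation and both execution conditions. Data and threshold are invented.
    import numpy as np

    z = 2.3  # illustrative voxelwise threshold
    rng = np.random.default_rng(1)
    observe, imitate, cued_exec, self_exec = rng.normal(size=(4, 64, 64, 40))

    # Conjunction: voxels significant in every condition (shared network).
    conjunction = (observe > z) & (imitate > z) & (cued_exec > z) & (self_exec > z)

    # Imitation-specific voxels: part of the shared network AND stronger for
    # imitation than for passive observation and both execution conditions.
    imitation_specific = (conjunction & (imitate > observe)
                          & (imitate > cued_exec) & (imitate > self_exec))

    print(int(conjunction.sum()), int(imitation_specific.sum()))
    ```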


    Primary and multisensory cortical activity is correlated with audiovisual percepts

    HUMAN BRAIN MAPPING, Issue 4 2010
    Margo McKenna Benoit
    Abstract Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect, where visual /ka/ and auditory /pa/ fuse into another percept such as /ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/, voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/, voice /ka/). This paradigm was predicted to show increased release from adaptation (i.e. stronger brain activation) when the fourth movie and the related percept were increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre-central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion. Hum Brain Mapp, 2010. © 2009 Wiley-Liss, Inc. [source]
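
    The inference in this paradigm rests on "release from adaptation": a larger response to the fourth movie the more it differs from the adapted /pa/ train. The minimal sketch below illustrates that comparison on hypothetical per-trial amplitudes; all numbers and condition labels are invented for illustration.

    ```python
    # Minimal sketch of the release-from-adaptation comparison (not the authors'
    # analysis): response to the fourth movie in each condition minus the fully
    # repeated /pa/ baseline. All amplitudes are invented, in arbitrary units.
    import numpy as np

    amp = {
        "repeated_pa":        np.array([0.9, 1.0, 1.1, 0.8]),  # (i)   no change
        "congruent_ka":       np.array([1.6, 1.8, 1.5, 1.7]),  # (ii)  full change
        "mcgurk_lips_ka":     np.array([1.2, 1.3, 1.1, 1.4]),  # (iii) illusion-inducing
        "non_mcgurk_lips_pa": np.array([1.5, 1.7, 1.6, 1.8]),  # (iv)  non-fusing
    }

    baseline = amp["repeated_pa"].mean()
    for condition, trials in amp.items():
        release = trials.mean() - baseline   # release from adaptation
        print(f"{condition:>18s}: release = {release:+.2f}")
    ```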


    Brain responses to auditory and visual stimulus offset: Shared representations of temporal edges

    HUMAN BRAIN MAPPING, Issue 3 2009
    Marcus Herdener
    Abstract Edges are crucial for the formation of coherent objects from sequential sensory inputs within a single modality. Moreover, temporally coincident boundaries of perceptual objects across different sensory modalities facilitate crossmodal integration. Here, we used functional magnetic resonance imaging in order to examine the neural basis of temporal edge detection across modalities. Onsets of sensory inputs are not only related to the detection of an edge but also to the processing of novel sensory inputs. Thus, we used transitions from input to rest (offsets) as convenient stimuli for studying the neural underpinnings of visual and acoustic edge detection per se. We found, besides modality-specific patterns, shared visual and auditory offset-related activity in the superior temporal sulcus and insula of the right hemisphere. Our data suggest that right hemispheric regions known to be involved in multisensory processing are crucial for detection of edges in the temporal domain across both visual and auditory modalities. This operation is likely to facilitate cross-modal object feature binding based on temporal coincidence. Hum Brain Mapp, 2009. © 2008 Wiley-Liss, Inc. [source]


    Functional segregation of cortical language areas by sentence repetition

    HUMAN BRAIN MAPPING, Issue 5 2006
    Ghislaine Dehaene-Lambertz
    Abstract The functional organization of the perisylvian language network was examined using a functional MRI (fMRI) adaptation paradigm with spoken sentences. In Experiment 1, a given sentence was presented every 14.4 s and repeated two, three, or four times in a row. The study of the temporal properties of the BOLD response revealed a temporal gradient along the dorsal-ventral and rostral-caudal directions: from Heschl's gyrus, where the fastest responses were recorded, responses became progressively slower toward the posterior part of the superior temporal gyrus and toward the temporal poles and the left inferior frontal gyrus, where the slowest responses were observed. Repetition induced a decrease in amplitude and a speeding up of the BOLD response in the superior temporal sulcus (STS), while the most superior temporal regions were not affected. In Experiment 2, small blocks of six sentences were presented in which either the speaker voice or the linguistic content of the sentence, or both, were repeated. Data analyses revealed a clear asymmetry: while two clusters in the left superior temporal sulcus showed identical repetition suppression whether the sentences were produced by the same speaker or different speakers, the homologous right regions were sensitive to sentence repetition only when the speaker voice remained constant. Thus, left hemispheric regions encode linguistic content while homologous right regions encode more details about extralinguistic features such as speaker voice. The results demonstrate the feasibility of using sentence-level adaptation to probe the functional organization of cortical language areas. Hum Brain Mapp, 2006. © 2006 Wiley-Liss, Inc. [source]
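
    Two effects reported above, repetition suppression (smaller responses) and a speeding-up of the BOLD response, can be summarised per region by the peak amplitude and time-to-peak of a trial-averaged time course. The sketch below illustrates that summary on synthetic data; the sampling rate and response shapes are assumptions, not values from the study.

    ```python
    # Rough sketch on synthetic data (assumed TR and response shapes): summarise
    # each trial-averaged response by its peak amplitude and time-to-peak, to
    # capture repetition suppression and the speeding-up of the BOLD response.
    import numpy as np

    TR = 1.2                  # assumed sampling interval in seconds
    t = np.arange(12) * TR    # acquisition times of the trial-averaged response

    def peak_and_latency(timecourse):
        """Return (peak amplitude, time-to-peak in seconds)."""
        i = int(np.argmax(timecourse))
        return float(timecourse[i]), float(i * TR)

    def synth(amplitude, peak_s):
        """Invented Gaussian-shaped BOLD response for demonstration."""
        return amplitude * np.exp(-((t - peak_s) ** 2) / 8.0)

    # Hypothetical STS responses to the 1st, 2nd and 3rd presentation of a
    # sentence: smaller and earlier with each repetition.
    for rep, (amp, lat) in enumerate([(1.0, 7.2), (0.7, 6.0), (0.5, 4.8)], start=1):
        print(rep, peak_and_latency(synth(amp, lat)))
    ```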


    Topographical and laminar distribution of cortical input to the monkey entorhinal cortex

    JOURNAL OF ANATOMY, Issue 2 2007
    A. Mohedano-Moriano
    Abstract The hippocampal formation plays a prominent role in episodic memory formation and consolidation. It is likely that episodic memory representations are constructed from cortical information that is mostly funnelled through the entorhinal cortex to the hippocampus. The entorhinal cortex in turn returns processed information to the neocortex. Retrograde tracing studies have shown that neocortical afferents to the entorhinal cortex originate almost exclusively in polymodal association cortical areas. However, retrograde studies do not address the question of the laminar and topographical distribution of cortical projections within the entorhinal cortex. We examined material from 60 Macaca fascicularis monkeys in which cortical deposits of either 3H-amino acids or biotinylated dextran-amine as anterograde tracers were made into different cortical areas (the frontal, cingulate, temporal and parietal cortices). The various cortical inputs to the entorhinal cortex present a heterogeneous topographical distribution. Some projections terminate throughout the entorhinal cortex (afferents from medial area 13 and the posterior parahippocampal cortex), while others have a more limited termination, with an emphasis that is either rostral (lateral orbitofrontal cortex, agranular insular cortex, anterior cingulate cortex, perirhinal cortex, unimodal visual association cortex), intermediate (upper bank of the superior temporal sulcus, unimodal auditory association cortex) or caudal (parietal and retrosplenial cortices). Many of these inputs overlap, particularly within the rostrolateral portion of the entorhinal cortex. Some projections were directed mainly to the superficial layers (I-III) while others were heavier in the deep layers (V-VI), although areas of dense projections typically spanned all layers. A primary report will provide a detailed analysis of the regional and laminar organization of these projections. Here we provide a general overview of these projections in relation to the known neuroanatomy of the entorhinal cortex. [source]


    The anatomy of language: contributions from functional neuroimaging

    JOURNAL OF ANATOMY, Issue 3 2000
    CATHY J. PRICE
    This article illustrates how functional neuroimaging can be used to test the validity of neurological and cognitive models of language. Three models of language are described: the 19th Century neurological model, which describes both the anatomy and the cognitive components of auditory and visual word processing, and two 20th Century cognitive models that are not constrained by anatomy but emphasise two different routes to reading that are not present in the neurological model. A series of functional imaging studies are then presented which show that, as predicted by the 19th Century neurologists, auditory and visual word repetition engage the left posterior superior temporal and posterior inferior frontal cortices. More specifically, the roles Wernicke and Broca assigned to these regions lie respectively in the posterior superior temporal sulcus and the anterior insula. In addition, a region in the left posterior inferior temporal cortex is activated for word retrieval, thereby providing a second route to reading, as predicted by the 20th Century cognitive models. This region and its function may have been missed by the 19th Century neurologists because selective damage to it is rare. The angular gyrus, previously linked to the visual word form system, is shown to be part of a distributed semantic system that can be accessed by objects and faces as well as speech. Other components of the semantic system include several regions in the inferior and middle temporal lobes. From these functional imaging results, a new anatomically constrained model of word processing is proposed which reconciles the anatomical ambitions of the 19th Century neurologists with the cognitive finesse of the 20th Century cognitive models. The review focuses on single-word processing and does not attempt to discuss how words are combined to generate sentences or how several languages are learned and interchanged. Progress in unravelling these and other related issues will depend on the integration of behavioural, computational and neurophysiological approaches, including neuroimaging. [source]


    Mechanisms of face perception in humans: A magneto- and electro-encephalographic study

    NEUROPATHOLOGY, Issue 1 2005
    Shoko Watanabe
    We have been studying the underlying mechanisms of face perception in humans using magneto- (MEG) and electro-encephalography (EEG) including (1) perception by viewing the static face, (2) differences in perception by viewing the eyes and whole face, (3) the face inversion effect, (4) the effect of gaze direction, (5) perception of eye motion, (6) perception of mouth motion, and (7) the interaction between auditory and visual stimuli related to the vowel sounds. In this review article, we mainly summarize our results obtained on 3, 5, and 6 above. With the presentation of both upright and inverted unfamiliar faces, the inferior temporal cortex (IT) centered on the fusiform gyrus, and the lateral temporal cortex (LT) near the superior temporal sulcus were activated simultaneously, but independently, between 140 and 200 ms post-stimulus. The right hemisphere IT and LT were both active in all subjects, and those in the left hemisphere in half of the subjects. Latencies with inverted faces relative to those with upright faces were longer in the right hemisphere, and shorter in the left hemisphere. Since the activated regions under upright and those under inverted face stimuli did not show a significant difference, we consider that differences in processing upright versus inverted faces are attributable to temporal processing differences rather than to processing of information by different brain regions. When viewing the motion of the mouth and eyes, a large clear MEG component, 1M (mean peak latency of approximately 160 ms), was elicited to both mouth and eye movement, and was generated mainly in the occipito-temporal border, at human MT/V5. The 1M to mouth movement and the 1M to eye movement showed no significant difference in amplitude or generator location. Therefore, our results indicate that human MT/V5 is active in the perception of both mouth and eye motion, and that the perception of movement of facial parts is probably processed similarly. [source]