Target Face (target + face)

Selected Abstracts


Facial Emotion Recognition after Curative Nondominant Temporal Lobectomy in Patients with Mesial Temporal Sclerosis

EPILEPSIA, Issue 8 2006
Shearwood McClelland III
Summary: Purpose: The right (nondominant) amygdala is crucial for processing facial emotion recognition (FER). Patients with temporal lobe epilepsy (TLE) associated with mesial temporal sclerosis (MTS) often incur right amygdalar damage, resulting in impaired FER if TLE onset occurred before age 6 years. Consequently, early right mesiotemporal insult has been hypothesized to impair plasticity, resulting in FER deficits, whereas damage after age 5 years results in no deficit. The authors performed this study to test this hypothesis in a uniformly seizure-free postsurgical population. Methods: Controls (n = 10), early-onset patients (n = 7), and late-onset patients (n = 5) were recruited. All patients had undergone nondominant anteromedial temporal lobectomy (AMTL), had Wada-confirmed left-hemisphere language dominance and memory support, had MTS on both preoperative MRI and biopsy, and were Engel class I at 5 years postoperatively. Using a standardized (Ekman and Friesen) human face series, subjects were asked to match the affect of one of two faces to that of a simultaneously presented target face. Target faces expressed fear, anger, or happiness. Results: Statistical analysis revealed that the early-onset group had significantly impaired FER (measured by percentage of faces correct) for fear (p = 0.036), whereas the FER of the late-onset group for fear was comparable to that of controls. FER for anger and happiness was comparable across all three groups. Conclusions: Despite seizure freedom after AMTL, early TLE onset continues to impair FER for frightened expressions (but not for angry or happy expressions), whereas late TLE onset does not, with no indication that AMTL itself impaired FER. These results indicate that proper development of the right amygdala is necessary for optimal fear recognition, with other neural processes unable to compensate for early amygdalar damage. [source]
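A minimal sketch of the group comparison implied by this abstract, assuming per-subject percent-correct scores on the fear trials. The abstract does not name the statistical test, so an independent-samples t-test stands in, and all scores below are hypothetical placeholders.

```python
# Hypothetical per-subject percent-correct FER scores for fear expressions,
# compared between controls (n = 10) and early-onset patients (n = 7).
from scipy import stats

controls_fear = [92, 88, 95, 90, 85, 93, 89, 91, 94, 87]  # hypothetical
early_onset_fear = [70, 65, 72, 68, 74, 60, 66]           # hypothetical

t, p = stats.ttest_ind(controls_fear, early_onset_fear)
print(f"fear FER, controls vs. early onset: t = {t:.2f}, p = {p:.3f}")
```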


The influence of children's self-report trait anxiety and depression on visual search for emotional faces

THE JOURNAL OF CHILD PSYCHOLOGY AND PSYCHIATRY AND ALLIED DISCIPLINES, Issue 3 2003
Julie A. Hadwin
Background: This study presents two experiments that investigated the relationship between 7- and 10-year-olds' levels of self-report trait anxiety and depression and their visual search for threatening (angry faces) and non-threatening (happy and neutral faces) stimuli. Method: In both experiments a visual search paradigm was used to measure participants' reaction times to detect the presence or absence of angry, happy or neutral schematic faces (Experiment 1) or cartoon drawings (Experiment 2). On target present trials, a target face was displayed alongside three, five or seven distractor items. On target absent trials all items were distractors. Results: Both experiments demonstrated that on target absent (but not present) trials, increased levels of anxiety were associated with significantly faster search times in the angry face condition, but not in the neutral condition. In Experiment 2 there was a trend towards a significant relationship between anxiety and search times for happy faces on target absent trials. There were no effects of depression on search times in any condition. Conclusion: The results support previous work highlighting a specific link between anxiety and attention to threat in childhood. [source]
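A standard way to summarize this kind of paradigm is a search slope: the increase in reaction time per additional display item. The sketch below, with entirely hypothetical mean RTs, assumes display sizes of 4, 6 and 8 (the target plus three, five or seven distractors, per the abstract); a shallower slope on angry-absent trials would mirror the reported anxiety effect.

```python
# Search-slope analysis: regress mean RT (ms) on display size per condition.
import numpy as np

display_sizes = np.array([4, 6, 8])
rt_angry_absent = np.array([980.0, 1100.0, 1215.0])     # hypothetical means
rt_neutral_absent = np.array([1010.0, 1190.0, 1370.0])  # hypothetical means

for label, rts in [("angry absent", rt_angry_absent),
                   ("neutral absent", rt_neutral_absent)]:
    slope, intercept = np.polyfit(display_sizes, rts, 1)
    print(f"{label}: {slope:.1f} ms/item (intercept {intercept:.0f} ms)")
```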


Articulatory suppression attenuates the verbal overshadowing effect: a role for verbal encoding in face identification

APPLIED COGNITIVE PSYCHOLOGY, Issue 2 2006
Lee H. V. Wickham
Verbal overshadowing is the phenomenon that verbally describing a face between presentation and test can impair identification of the face (Schooler & Engstler-Schooler, 1990). This study examined the effects of articulatory suppression and distinctiveness upon the magnitude of the verbal overshadowing effect. Participants engaged in articulatory suppression or a control task whilst viewing a target face. They then either described the face or completed a distractor task before selecting the target face from a line-up. This was repeated for 12 trials. Articulatory suppression impaired identification performance overall, and reduced the negative effects of description to non-significance, whereas the control group demonstrated the standard verbal overshadowing effect. Typical faces showed verbal overshadowing, whereas distinctive faces did not. These results are consistent with the view that verbal overshadowing arises because the description of the target face creates a verbal code that interferes with a verbal code created spontaneously during encoding. Copyright © 2006 John Wiley & Sons, Ltd. [source]
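One way the interaction described here could be examined is to test the description cost on line-up accuracy separately within each encoding condition. The sketch below uses entirely hypothetical counts and Fisher's exact test; the abstract does not report which analysis was used. Verbal overshadowing would appear as a description cost under the control task that vanishes under articulatory suppression.

```python
# Hypothetical (correct, incorrect) line-up identification counts per cell
# of the 2 x 2 design: encoding task x post-encoding activity.
from scipy import stats

cells = {
    ("control", "describe"):    (38, 34),
    ("control", "distractor"):  (52, 20),
    ("suppress", "describe"):   (33, 39),
    ("suppress", "distractor"): (35, 37),
}

for encoding in ("control", "suppress"):
    table = [cells[(encoding, "describe")], cells[(encoding, "distractor")]]
    odds, p = stats.fisher_exact(table)
    print(f"{encoding}: description effect, odds ratio = {odds:.2f}, p = {p:.3f}")
```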


Accurate automatic visible speech synthesis of arbitrary 3D models based on concatenation of diviseme motion capture data

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5 2004
Jiyong Ma
Abstract We present a technique for accurate automatic visible speech synthesis from textual input. When provided with a speech waveform and the text of a spoken sentence, the system produces accurate visible speech synchronized with the audio signal. To develop the system, we collected motion capture data from a speaker's face during production of a set of words containing all diviseme sequences in English. The motion capture points from the speaker's face are retargeted to the vertices of the polygons of a 3D face model. When synthesizing a new utterance, the system locates the required sequence of divisemes, shrinks or expands each diviseme based on the desired phoneme segment durations in the target utterance, then moves the polygons in the regions of the lips and lower face to correspond to the spatial coordinates of the motion capture data. The motion mapping is realized by a key-shape mapping function learned from a set of viseme examples in the source and target faces. A well-posed numerical algorithm estimates the shape-blending coefficients. Time warping, motion vector blending at the juncture of two divisemes, and an algorithm to search for the optimal concatenation of visible speech are also developed to produce the final concatenative motion sequence. Copyright © 2004 John Wiley & Sons, Ltd. [source]
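A minimal sketch of one plausible reading of the key-shape mapping step: each captured source-face frame is expressed as a blend of source key shapes, and the same blending coefficients then drive the corresponding target key shapes. The abstract does not give the exact formulation; non-negative least squares is one standard well-posed choice, and all dimensions and data below are illustrative.

```python
# Retarget a captured frame via shape-blending coefficients (assumed NNLS form).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_points, n_keys = 30, 5  # flattened mocap points, number of key shapes

src_keys = rng.normal(size=(3 * n_points, n_keys))  # source viseme key shapes
tgt_keys = rng.normal(size=(3 * n_points, n_keys))  # matching target key shapes
frame = src_keys @ np.array([0.6, 0.3, 0.1, 0.0, 0.0])  # a captured frame

coeffs, residual = nnls(src_keys, frame)  # shape-blending coefficients
target_frame = tgt_keys @ coeffs          # retargeted frame on the 3D model
print("coefficients:", np.round(coeffs, 2), "residual:", round(residual, 4))
```

Solving in the source key-shape basis and reusing the coefficients on the target basis is what makes the mapping independent of the target model's geometry.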