Motion Capture Data
Selected Abstracts

Compression of Human Motion Capture Data Using Motion Pattern Indexing
COMPUTER GRAPHICS FORUM, Issue 1 2009
Qin Gu
I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism; E.4 [Coding and Information Theory]: Data Compaction and Compression
Abstract: In this work, a novel scheme is proposed to compress human motion capture data based on hierarchical structure construction and motion pattern indexing. For a given sequence of 3D motion capture data of a human body, the 3D markers are first organized into a hierarchy in which each node corresponds to a meaningful part of the human body. The motion sequence corresponding to each body part is then coded separately. Based on the observation that there is a high degree of spatial and temporal correlation among the 3D marker positions, we identify motion patterns that form a database for each meaningful body part. A sequence of motion capture data can then be efficiently represented as a series of motion pattern indices. As a result, a higher compression ratio is achieved compared with prior art, especially for long sequences of motion capture data with repetitive motion styles. Another distinction of this work is that it provides flexible and intuitive global and local distortion control. [source]

A hybrid approach for simulating human motion in constrained environments
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2010
Jia Pan
Abstract: We present a new algorithm to generate plausible motions for high-DOF human-like articulated figures in constrained environments with multiple obstacles. Our approach is general and makes no assumptions about the articulated model or the environment. The algorithm combines hierarchical model decomposition with sample-based planning to efficiently compute a collision-free path in tight spaces.
Furthermore, we use path perturbation and replanning techniques to satisfy the kinematic and dynamic constraints on the motion. To generate realistic human-like motion, we present a new motion blending algorithm that refines the path computed by the planner with motion capture data, producing a smooth and plausible trajectory. We demonstrate results for placing and lifting an object, walking, and bending with a 38-DOF articulated model. Copyright © 2010 John Wiley & Sons, Ltd. [source]

Interactive animation of virtual humans based on motion capture data
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5-6 2009
Franck Multon
Abstract: This paper presents a novel, parametric framework for synthesizing new character motions from existing motion capture data. Our framework can perform morphological adaptation as well as kinematic and physically based corrections. These solvers are organized in layers so that they can easily be combined. Taking locomotion as an example, the system automatically adapts the motion data to the size of the synthetic figure and to its environment; the character correctly steps over complex ground shapes and counteracts external forces applied to the body. Our framework is based on a frame-based solver, which makes it possible to animate hundreds of humanoids with different morphologies in real time. It is particularly suitable for interactive applications such as video games and virtual reality, where a user interacts in unpredictable ways. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Accurate automatic visible speech synthesis of arbitrary 3D models based on concatenation of diviseme motion capture data
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5 2004
Jiyong Ma
Abstract: We present a technique for accurate automatic visible speech synthesis from textual input.
When provided with a speech waveform and the text of a spoken sentence, the system produces accurate visible speech synchronized with the audio signal. To develop the system, we collected motion capture data from a speaker's face during production of a set of words containing all diviseme sequences in English. The motion capture points from the speaker's face are retargeted to the vertices of the polygons of a 3D face model. When synthesizing a new utterance, the system locates the required sequence of divisemes, shrinks or expands each diviseme based on the desired phoneme segment durations in the target utterance, and then moves the polygons in the regions of the lips and lower face to match the spatial coordinates of the motion capture data. The motion mapping is realized by a key-shape mapping function learned from a set of viseme examples in the source and target faces. A well-posed numerical algorithm estimates the shape blending coefficients. Time warping and motion vector blending at the juncture of two divisemes, together with an algorithm that searches for the optimal concatenated visible speech, produce the final concatenative motion sequence. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Pedestrian Reactive Navigation for Crowd Simulation: a Predictive Approach
COMPUTER GRAPHICS FORUM, Issue 3 2007
Sébastien Paris
Abstract: This paper addresses the problem of autonomous navigation for virtual pedestrians in crowd simulation. It describes a method for resolving interactions between pedestrians and avoiding inter-pedestrian collisions. Our approach is agent-based and predictive: each agent perceives surrounding agents and extrapolates their trajectories in order to react to potential collisions. We aim at realistic results, so the proposed model is calibrated from experimental motion capture data. Our method is shown to be valid and resolves major drawbacks of previous approaches, such as oscillations due to a lack of anticipation. We first describe the mathematical representation used in our model, then detail its implementation, and finally present its calibration and validation on real data. [source]

Visyllable Based Speech Animation
COMPUTER GRAPHICS FORUM, Issue 3 2003
Sumedha Kshirsagar
Abstract: Visemes are the visual counterparts of phonemes. Traditionally, the speech animation of 3D synthetic faces involves extraction of visemes from input speech followed by the application of co-articulation rules to generate realistic animation. In this paper, we take a novel approach to speech animation, using visyllables, the visual counterparts of syllables.
The approach results in a concatenative visyllable-based speech animation system. The key contribution of this paper lies in two main areas. First, we define a set of visyllable units for spoken English, along with the associated phonological rules for valid syllables. Based on these rules, we have implemented a syllabification algorithm that segments a given phoneme stream into syllables and subsequently into visyllables. Second, we have recorded a database of visyllables using a facial motion capture system. The recorded visyllable units are post-processed semi-automatically to ensure continuity at the vowel boundaries of the visyllables. We define each visyllable in terms of Facial Movement Parameters (FMPs), which are obtained from a statistical analysis of the facial motion capture data. The FMPs allow a compact representation of the visyllables, and they also facilitate the formulation of rules for boundary matching and smoothing after concatenating the visyllable units. Ours is the first visyllable-based speech animation system. The proposed technique is easy to implement, effective for both real-time and non-real-time applications, and results in realistic speech animation. Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism [source]
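The pattern-indexing compression scheme summarized in the first abstract can be illustrated with a minimal sketch: fixed-length windows of marker data are clustered into a small pattern codebook, and the sequence is then stored as a list of codebook indices. This is only an illustration of the general idea, not the authors' implementation; here plain k-means stands in for their per-body-part pattern databases, and all function names and parameters are hypothetical.

```python
import numpy as np

def build_pattern_codebook(windows, n_patterns, n_iter=20, seed=0):
    """Cluster fixed-length motion windows into a small codebook
    via plain k-means (a stand-in for the paper's pattern database)."""
    rng = np.random.default_rng(seed)
    centers = windows[rng.choice(len(windows), n_patterns, replace=False)]
    for _ in range(n_iter):
        # assign each window to its nearest pattern
        d = np.linalg.norm(windows[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_patterns):
            if np.any(labels == k):  # keep old center if the cluster is empty
                centers[k] = windows[labels == k].mean(axis=0)
    return centers

def encode(sequence, centers, win):
    """Represent a (frames x channels) sequence as pattern indices."""
    windows = sequence.reshape(-1, win * sequence.shape[1])
    d = np.linalg.norm(windows[:, None, :] - centers[None, :, :], axis=2)
    return d.argmin(axis=1)

def decode(indices, centers, win, n_channels):
    """Reconstruct an approximate sequence from pattern indices."""
    return centers[indices].reshape(-1, n_channels)
```

On a long, repetitive sequence the index stream is far smaller than the raw marker data, which mirrors the abstract's claim that repetitive motion styles compress best; distortion is controlled by the codebook size.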
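The predictive step in the pedestrian-navigation abstract amounts to extrapolating neighbouring agents' trajectories and checking the predicted closest approach. A hedged sketch under a constant-velocity assumption (the function names are illustrative, not the paper's):

```python
import numpy as np

def time_to_closest_approach(p_a, v_a, p_b, v_b):
    """Time at which two agents moving at constant velocity are closest."""
    dp, dv = p_b - p_a, v_b - v_a
    denom = dv @ dv
    if denom < 1e-12:       # identical velocities: separation is constant
        return 0.0
    return max(0.0, -(dp @ dv) / denom)  # clamp: the past is irrelevant

def predicted_min_distance(p_a, v_a, p_b, v_b, horizon=5.0):
    """Extrapolate both trajectories and return the minimum predicted
    separation within the anticipation horizon."""
    t = min(time_to_closest_approach(p_a, v_a, p_b, v_b), horizon)
    return float(np.linalg.norm((p_b + t * v_b) - (p_a + t * v_a)))
```

An agent would compare this predicted minimum distance against a comfort radius and adjust speed or heading before the collision occurs, which is what removes the oscillations that purely reactive models exhibit.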
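The "shrink or expand each diviseme" and boundary-blending steps described in the two concatenative speech-animation abstracts can be pictured as uniform time warping plus a cross-fade at segment junctures. This is a deliberately simplified illustration (linear resampling and linear blending weights), not the papers' actual time-warping or FMP-based smoothing algorithms:

```python
import numpy as np

def time_warp(frames, target_len):
    """Uniformly stretch or shrink a (frames x channels) motion segment
    to a target frame count by linear interpolation."""
    frames = np.asarray(frames, dtype=float)
    src = np.linspace(0.0, 1.0, len(frames))
    dst = np.linspace(0.0, 1.0, target_len)
    return np.column_stack([np.interp(dst, src, frames[:, c])
                            for c in range(frames.shape[1])])

def blend_boundary(tail, head, overlap):
    """Cross-fade the last `overlap` frames of one segment with the
    first `overlap` frames of the next (motion vector blending)."""
    w = np.linspace(0.0, 1.0, overlap)[:, None]
    return (1.0 - w) * tail[-overlap:] + w * head[:overlap]
```

In a full system each diviseme or visyllable segment would first be warped to the phoneme durations predicted for the target utterance, then concatenated with blended boundaries to avoid visible pops at the junctures.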