Animation


Kinds of Animation

  • cartoon animation
  • character animation
  • computer animation
  • facial animation

Terms modified by Animation

  • animation sequence

Selected Abstracts


    Geometry-Driven Local Neighbourhood Based Predictors for Dynamic Mesh Compression

    COMPUTER GRAPHICS FORUM, Issue 6 2010
    Libor Váša
    Computer Graphics [I.3.7]: Animation. Abstract: Dynamic mesh compression seeks a compact representation of a surface animation while keeping the artifacts introduced by the representation as small as possible. In this paper, we present two geometric predictors, which are suitable for PCA-based compression schemes. The predictors exploit knowledge about the geometrical meaning of the data, which allows a more accurate prediction, and thus a more compact representation. We also provide rate/distortion curves showing that our approach outperforms current PCA-based compression methods by more than 20%. [source]
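
    The geometry-driven predictors are the paper's contribution, but the PCA backbone they plug into is standard and easy to sketch. Below is a minimal baseline, assuming vertex trajectories are stacked frame by frame in a matrix; the names and toy data are illustrative, not from the paper.

    ```python
    import numpy as np

    def pca_compress_animation(V, k):
        """Baseline PCA compression of an animated mesh.

        V: (F, 3N) array -- F frames, N vertices, xyz flattened per frame.
        k: number of principal components kept (k << F).
        Returns the mean frame, k basis vectors, and per-frame coefficients.
        """
        mean = V.mean(axis=0)
        U, S, Vt = np.linalg.svd(V - mean, full_matrices=False)
        basis = Vt[:k]                 # (k, 3N) principal directions
        coeffs = (V - mean) @ basis.T  # (F, k) per-frame weights
        return mean, basis, coeffs

    def pca_decompress(mean, basis, coeffs):
        return mean + coeffs @ basis

    # Toy example: 50 frames of a 100-vertex mesh.
    rng = np.random.default_rng(0)
    V = rng.normal(size=(50, 300))
    mean, basis, coeffs = pca_compress_animation(V, k=10)
    V_rec = pca_decompress(mean, basis, coeffs)
    print("RMS error:", np.sqrt(np.mean((V - V_rec) ** 2)))
    ```

    The paper's predictors would then replace plain storage of the coefficients with a geometric prediction from already-decoded local neighbourhoods, shrinking the residuals that must be encoded.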


    ProcDef: Local-to-global Deformation for Skeleton-free Character Animation

    COMPUTER GRAPHICS FORUM, Issue 7 2009
    Takashi Ijiri
    Abstract Animations of characters with flexible bodies such as jellyfish, snails, and hearts are difficult to design using traditional skeleton-based approaches. A standard approach is keyframing, but adjusting the shape of the flexible body for each keyframe is tedious. In addition, the character cannot dynamically adjust its motion to respond to the environment or user input. This paper introduces a new procedural deformation framework (ProcDef) for designing and driving animations of such flexible objects. Our approach is to synthesize global motions procedurally by integrating local deformations. ProcDef provides an efficient design scheme for local deformation patterns; the user can control the orientation and magnitude of local deformations as well as the propagation of deformation signals by specifying line charts and volumetric fields. We also present a fast and robust deformation algorithm based on shape-matching dynamics and show some example animations to illustrate the feasibility of our framework. [source]
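
    The deformation algorithm is based on shape-matching dynamics; the sketch below shows the textbook form of that building block (optimal rigid fit via SVD, then a pull toward the matched rest shape), assuming equal particle masses. It illustrates the integrator only, not ProcDef's local-to-global signal propagation.

    ```python
    import numpy as np

    def shape_matching_step(x, x0, v, alpha=0.5, dt=1/60):
        """One step of shape-matching dynamics (Mueller et al. style).

        x, x0: (N, 3) current and rest positions; v: (N, 3) velocities.
        alpha: stiffness in [0, 1] pulling particles toward the matched shape.
        """
        c, c0 = x.mean(axis=0), x0.mean(axis=0)
        p, q = x - c, x0 - c0
        A = p.T @ q                      # covariance of current vs. rest shape
        U, _, Vt = np.linalg.svd(A)
        R = U @ Vt                       # closest rotation (polar decomposition)
        if np.linalg.det(R) < 0:         # avoid reflections
            U[:, -1] *= -1
            R = U @ Vt
        goal = q @ R.T + c               # rest shape rigidly fitted to x
        v = v + alpha * (goal - x) / dt
        x = x + v * dt
        return x, v
    ```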


    Animating Quadrupeds: Methods and Applications

    COMPUTER GRAPHICS FORUM, Issue 6 2009
    Ljiljana Skrba
    I.3.7 [Computer Graphics]: 3D Graphics and Realism, Animation. Abstract: Films like Shrek, Madagascar, The Chronicles of Narnia and Charlotte's Web all have something in common: realistic quadruped animations. While the animation of animals has been popular for a long time, the technical challenges associated with creating highly realistic, computer-generated creatures have been receiving increasing attention recently. The entertainment, education and medical industries have increased the demand for simulation of realistic animals in computer graphics. In order to achieve this, several challenges need to be overcome: gathering and processing data that embodies the natural motion of an animal, which is made more difficult by the fact that most animals cannot be easily motion-captured; building accurate kinematic models for animals, with adapted animation skeletons in particular; and developing either kinematic or physically-based animation methods, by embedding some a priori knowledge about the way that quadrupeds locomote and/or adopting examples of real motion. In this paper, we present an overview of the common techniques used to date for realistic quadruped animation. This includes an outline of the various ways that realistic quadruped motion can be achieved, through video-based acquisition, physics-based models, inverse kinematics or some combination of the above. [source]


    Physically Guided Animation of Trees

    COMPUTER GRAPHICS FORUM, Issue 2 2009
    Ralf Habel
    Abstract This paper presents a new method to animate the interaction of a tree with wind both realistically and in real time. The main idea is to combine statistical observations with physical properties in two major parts of tree animation. First, the interaction of a single branch with the forces applied to it is approximated by a novel, efficient two-step nonlinear deformation method, allowing arbitrary continuous deformations and circumventing the need to segment a branch to model its deformation behavior. Second, the interaction of wind with the dynamic system representing a tree is statistically modeled. By precomputing the response function of branches to turbulent wind in frequency space, the motion of a branch can be synthesized efficiently by sampling a 2D motion texture. Using a hierarchical form of vertex displacement, both methods can be combined in a single vertex shader, fully leveraging the power of modern GPUs to realistically animate thousands of branches and tens of thousands of leaves at practically no cost. [source]
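
    The precomputed frequency-space response can be pictured with a standard spectral-synthesis trick: shape random phases with the branch's response curve and inverse-FFT the result into a loopable 1D motion texture that is sampled per frame. The resonance curve below is an assumed stand-in, not the paper's measured response.

    ```python
    import numpy as np

    def synthesize_motion_texture(response, n=1024, seed=0):
        """Precompute a looping 1D motion texture from a frequency response.

        response: callable giving the branch's spectral amplitude per
        frequency bin (e.g., a resonance peak at its natural frequency).
        Returns n displacement samples that can be tiled over time.
        """
        rng = np.random.default_rng(seed)
        freqs = np.fft.rfftfreq(n, d=1/60.0)           # 60 Hz animation rate
        phases = rng.uniform(0, 2*np.pi, freqs.size)   # random turbulence phases
        spectrum = response(freqs) * np.exp(1j * phases)
        spectrum[0] = 0.0                               # no DC offset
        return np.fft.irfft(spectrum, n)

    # Assumed toy response: a damped resonance around 1.5 Hz.
    lorentzian = lambda f, f0=1.5, w=0.3: 1.0 / (1.0 + ((f - f0) / w) ** 2)
    texture = synthesize_motion_texture(lorentzian)
    frame = 137
    displacement = texture[frame % texture.size]       # runtime lookup per branch
    ```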


    Transferring the Rig and Animations from a Character to Different Face Models

    COMPUTER GRAPHICS FORUM, Issue 8 2008
    Verónica Costa Orvalho
    I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism, Animation. Abstract: We introduce a facial deformation system that allows artists to define and customize a facial rig and later apply the same rig to different face models. The method uses a set of landmarks that define specific facial features and deforms the rig anthropometrically. We find the correspondence of the main attributes of a source rig, transfer them to different three-dimensional (3D) face models and automatically generate a sophisticated facial rig. The method is general and can be used with any type of rig configuration. We show how the landmarks, combined with other deformation methods, can adapt different influence objects (NURBS surfaces, polygon surfaces, lattices) and skeletons from a source rig to individual face models, allowing high-quality geometric or physically-based animations. We describe how it is possible to deform the source facial rig, apply the same deformation parameters to different face models and obtain unique expressions. We enable the reuse of existing animation scripts and show how shapes mix smoothly with one another in different face models. We describe how our method can easily be integrated into an animation pipeline. We end with the results of tests done with major film and game companies to show the strength of our proposal. [source]
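
    The rig is deformed anthropometrically from landmark correspondences; one common way to realize such landmark-driven warps is radial-basis-function interpolation, sketched here with an illustrative triharmonic kernel and made-up landmark sets (the actual system combines several deformation methods).

    ```python
    import numpy as np

    def rbf_warp(src_landmarks, dst_landmarks, points, eps=1e-9):
        """Warp rig elements from a source face to a target face.

        Fits one radial-basis interpolant per coordinate mapping the source
        landmarks onto the target landmarks, then applies it to arbitrary
        points (e.g., joint positions or lattice vertices of the source rig).
        """
        d = np.linalg.norm(src_landmarks[:, None] - src_landmarks[None], axis=-1)
        phi = d ** 3                                  # triharmonic kernel r^3
        w = np.linalg.solve(phi + eps * np.eye(len(phi)),
                            dst_landmarks - src_landmarks)
        d_p = np.linalg.norm(points[:, None] - src_landmarks[None], axis=-1)
        return points + (d_p ** 3) @ w

    # Hypothetical landmarks: eye corners, nose tip, mouth corners...
    src = np.array([[0,0,0],[1,0,0],[0,1,0],[1,1,0],[0.5,0.5,0.3]], float)
    dst = src * np.array([1.2, 0.9, 1.0])             # a wider, shorter face
    rig_points = np.array([[0.5, 0.2, 0.1], [0.25, 0.8, 0.0]])
    print(rbf_warp(src, dst, rig_points))
    ```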


    Layered Performance Animation with Correlation Maps

    COMPUTER GRAPHICS FORUM, Issue 3 2007
    Michael Neff
    Abstract Performance has a spontaneity and "aliveness" that can be difficult to capture in more methodical animation processes such as keyframing. Performance animation has traditionally been limited to low degree-of-freedom characters or has required expensive hardware. We present a performance-based animation system for humanoid characters that requires no special hardware, relying only on mouse and keyboard input. We deal with the problem of controlling such a high degree-of-freedom model with low degree-of-freedom input through the use of correlation maps, which employ 2D mouse input to modify a set of expressively relevant character parameters. Control can be continuously varied by rapidly switching between these maps. We present flexible techniques for varying and combining these maps and a simple process for defining them. The tool is highly configurable, presenting suitable defaults for novices and supporting a high degree of customization and control for experts. Animation can be recorded in a single pass, or multiple layers can be used to increase detail. Results from a user study indicate that novices are able to produce reasonable animations within their first hour of using the system. We also show more complicated results for walking and for a standing character that gestures and dances. [source]
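
    A correlation map can be pictured as a small table of gains that spreads 2D mouse motion across several expressively related parameters at once. The sketch below is a minimal rendition of that idea; the parameter names and gain values are invented for illustration.

    ```python
    import numpy as np

    class CorrelationMap:
        """Maps 2D input to a set of character parameters (a sketch).

        Each map stores, per parameter, a rest value and a pair of gains so
        that mouse x/y deflection modulates expressively related parameters
        together. The names below are illustrative, not from the paper.
        """
        def __init__(self, rest, gains):
            self.rest = np.asarray(rest, float)    # (P,) default pose parameters
            self.gains = np.asarray(gains, float)  # (P, 2) response to (dx, dy)

        def apply(self, mouse_xy):
            return self.rest + self.gains @ np.asarray(mouse_xy, float)

    # One map over 3 parameters (e.g., shoulder, elbow, wrist angles).
    reach_map = CorrelationMap(rest=[0.0, 0.3, 0.0], gains=[[1.0, 0.2],
                                                            [0.5, 1.0],
                                                            [0.1, 0.4]])
    pose = reach_map.apply([0.4, -0.1])   # one sampled mouse position per frame
    print(pose)
    ```

    Switching the active map (say, from a reaching map to a gesturing map) re-targets the same 2D input to a different parameter group, which is how a low degree-of-freedom device can drive a high degree-of-freedom character.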


    Wrinkling Coarse Meshes on the GPU

    COMPUTER GRAPHICS FORUM, Issue 3 2006
    J. Loviscach
    The simulation of complex layers of folds of cloth can be handled through algorithms which take the physical dynamics into account. In many cases, however, it is sufficient to generate wrinkles on a piece of garment which mostly appears spread out. This paper presents a fully GPU-based, easy-to-control, and robust method to generate and render plausible, detailed folds for this setting. The simulation is driven by an animated mesh. A relaxation step ensures that the behavior remains globally consistent. The resulting wrinkle field controls the lighting and distorts the texture in a way which closely simulates an actually deformed surface. No highly tessellated mesh is required to compute the position of the folds or to render them. Furthermore, the solution provides a 3D paint interface through which the user may bias the computation in such a way that folds already appear in the rest pose. Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Animation; I.3.7 [Computer Graphics]: Color, shading, shadowing, and texture [source]


    Kinematics, Dynamics, Biomechanics: Evolution of Autonomy in Game Animation

    COMPUTER GRAPHICS FORUM, Issue 3 2005
    Steve Collins
    The believable portrayal of character performances is critical in engaging the immersed player in interactive entertainment. The story, the emotion and the relationship between the player and the world they are interacting within are hugely dependent on how appropriately the world's characters look, move and behave. We're concerned here with the characters' motion; with next-generation game consoles like the Xbox 360™ and PlayStation® 3, the graphical representation of characters will take a major step forward, which places even more emphasis on the motion of the character. The behavior of the character is driven by story and design, which are adapted to game context by the game's AI system. The motion of the characters populating the game's world, however, is evolving into an interesting blend of kinematics, dynamics, biomechanics and AI-driven motion planning. Our goal here is to present the technologies involved in creating what are essentially character automata: emotionless and largely brainless character shells that nevertheless exhibit enough "behavior" to move as directed while adapting to the environment through sensing and actuating responses. This abstracts the complexities of low-level motion control, dynamics, collision detection etc. and allows the game's artificial intelligence system to direct these characters at a higher level. While much research has already been conducted in this area and some great results have been published, we will present the particular issues that face game developers working on current and next-generation consoles, and how these technologies may be integrated into game production pipelines so as to facilitate the creation of character performances in games. The challenges posed by limited memory and CPU bandwidth (though this is changing somewhat with the next generation) and the challenges of integrating these solutions with current game design approaches lead to some interesting problems, some of which the industry has solutions for and some of which remain largely unsolved. [source]


    ACM/EG Symposium on Computer Animation 2004

    COMPUTER GRAPHICS FORUM, Issue 4 2004
    Ronan Boulic
    No abstract is available for this article. [source]


    A System for View-Dependent Animation

    COMPUTER GRAPHICS FORUM, Issue 3 2004
    Parag Chaudhuri
    In this paper, we present a novel system for facilitating the creation of stylized view-dependent 3D animation. Our system harnesses the skill and intuition of a traditionally trained animator by providing a convivial sketch-based 2D-to-3D interface. A base mesh model of the character can be modified to match an input sketch closely, with minimal user interaction. To do this, we recover the best camera from the intended view direction in the sketch using robust computer vision techniques. This aligns the mesh model with the sketch. We then deform the 3D character in two stages: first we reconstruct the best matching skeletal pose from the sketch, and then we deform the mesh geometry. We introduce techniques to incorporate deformations in the view-dependent setting. This allows us to set up view-dependent models for animation. Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism, Animation. [Figure 7 caption: Our system takes as input a sketch (a) and a base mesh model (b), then recovers a camera to orient the base mesh (c), reconstructs the skeleton pose (d), and finally deforms the mesh to find the best possible match with the sketch (e).] [source]


    Visyllable Based Speech Animation

    COMPUTER GRAPHICS FORUM, Issue 3 2003
    Sumedha Kshirsagar
    Visemes are the visual counterparts of phonemes. Traditionally, the speech animation of 3D synthetic faces involves extraction of visemes from input speech, followed by the application of co-articulation rules to generate realistic animation. In this paper, we take a novel approach to speech animation, using visyllables, the visual counterpart of syllables. The approach results in a concatenative visyllable-based speech animation system. The key contribution of this paper lies in two main areas. Firstly, we define a set of visyllable units for spoken English, along with the associated phonological rules for valid syllables. Based on these rules, we have implemented a syllabification algorithm that allows segmentation of a given phoneme stream into syllables and subsequently visyllables. Secondly, we have recorded a database of visyllables using a facial motion capture system. The recorded visyllable units are post-processed semi-automatically to ensure continuity at the vowel boundaries of the visyllables. We define each visyllable in terms of the Facial Movement Parameters (FMP). The FMPs are obtained as a result of the statistical analysis of the facial motion capture data. The FMPs allow a compact representation of the visyllables. Further, the FMPs also facilitate the formulation of rules for boundary matching and smoothing after concatenating the visyllable units. Ours is the first visyllable-based speech animation system. The proposed technique is easy to implement, effective for real-time as well as non-real-time applications, and results in realistic speech animation. Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism [source]


    Reanimating Faces in Images and Video

    COMPUTER GRAPHICS FORUM, Issue 3 2003
    V. Blanz
    This paper presents a method for photo-realistic animation that can be applied to any face shown in a single image or a video. The technique does not require example data of the person's mouth movements, and the image to be animated is not restricted in pose or illumination. Video reanimation allows for head rotations and speech in the original sequence, but neither of these motions is required. In order to animate novel faces, the system transfers mouth movements and expressions across individuals, based on a common representation of different faces and facial expressions in a vector space of 3D shapes and textures. This space is computed from 3D scans of neutral faces, and scans of facial expressions. The 3D model's versatility with respect to pose and illumination is conveyed to photo-realistic image and video processing by a framework of analysis and synthesis algorithms: The system automatically estimates 3D shape and all relevant rendering parameters, such as pose, from single images. In video, head pose and mouth movements are tracked automatically. Reanimated with new mouth movements, the 3D face is rendered into the original images. Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Animation [source]
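
    What makes the transfer work is the common vector space: once all faces are registered into one shape space, an expression is a displacement vector that can be added to any identity. A toy sketch of that single idea, with random vectors standing in for registered 3D scans:

    ```python
    import numpy as np

    def transfer_expression(neutral_a, expressive_a, neutral_b, strength=1.0):
        """Add person A's expression delta to person B's neutral shape.
        Shapes are flattened (3N,) vertex arrays in dense correspondence."""
        expression_offset = expressive_a - neutral_a  # person-independent delta
        return neutral_b + strength * expression_offset

    rng = np.random.default_rng(1)
    neutral_a = rng.normal(size=300)                   # person A, neutral scan
    smile_a = neutral_a + 0.1 * rng.normal(size=300)   # person A, smiling
    neutral_b = rng.normal(size=300)                   # person B, neutral scan
    smile_b = transfer_expression(neutral_a, smile_a, neutral_b)
    ```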


    Local Physical Models for Interactive Character Animation

    COMPUTER GRAPHICS FORUM, Issue 3 2002
    Sageev Oore
    Our goal is to design and build a tool for the creation of expressive character animation. Virtual puppetry, also known as performance animation, is a technique in which the user interactively controls a character's motion. In this paper we introduce local physical models for performance animation and describe how they can augment an existing kinematic method to achieve very effective animation control. These models approximate specific physically generated aspects of a character's motion. They automate certain behaviours, while still letting the user override such motion via a PD controller if so desired. Furthermore, they can be tuned to ignore certain undesirable effects, such as the risk of having a character fall over, by ignoring corresponding components of the force. Although local physical models are a quite simple approximation to real physical behaviour, we show that they are extremely useful for interactive character control, and contribute positively to the expressiveness of the character's motion. In this paper, we develop such models at the knees and ankles of an interactively animated 3D anthropomorphic character, and demonstrate a resulting animation. This approach can be applied in a straightforward way to other joints. Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism, Interaction Techniques [source]
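
    The PD controller mentioned as the override mechanism is a standard damped spring on a joint angle. A minimal single-joint version, with illustrative gains:

    ```python
    import numpy as np

    def pd_torque(theta, omega, theta_target, kp=60.0, kd=8.0):
        """PD controller at a joint: springs toward the target angle,
        damped by the angular velocity. Gains are illustrative, not
        the paper's tuned values."""
        return kp * (theta_target - theta) - kd * omega

    # Minimal simulation of a single 1-DOF joint settling onto a target.
    theta, omega, dt, inertia = 0.0, 0.0, 1/120.0, 1.0
    for step in range(600):
        tau = pd_torque(theta, omega, theta_target=0.8)
        omega += tau / inertia * dt
        theta += omega * dt
    print(round(theta, 3))   # ~0.8 after settling
    ```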


    Dynamic Textures for Image-based Rendering of Fine-Scale 3D Structure and Animation of Non-rigid Motion

    COMPUTER GRAPHICS FORUM, Issue 3 2002
    Dana Cobzas
    The problem of capturing real-world scenes and then accurately rendering them is particularly difficult for fine-scale 3D structure. Similarly, it is difficult to capture, model and animate non-rigid motion. We present a method where small image changes are captured as a time-varying (dynamic) texture. In particular, a coarse geometry is obtained from a sample set of images using structure from motion. This geometry is then used to subdivide the scene and to extract approximately stabilized texture patches. The residual statistical variability in the texture patches is captured using a PCA basis of spatial filters. The filter coefficients are parameterized by camera pose and object motion. To render new poses and motions, new texture patches are synthesized by modulating the texture basis. The texture is then warped back onto the coarse geometry. We demonstrate how the texture modulation and projective homography-based warps can be achieved in real time using hardware-accelerated OpenGL. Experiments comparing dynamic texture modulation to standard texturing are presented for objects with complex geometry (a flower) and non-rigid motion (human arm motion capturing the non-rigidities in the joints, and creasing of the shirt). Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Image Based Rendering [source]


    Control of Feature-point-driven Facial Animation Using a Hypothetical Face

    COMPUTER GRAPHICS FORUM, Issue 4 2001
    Ming-Shing Su
    A new approach to the generation of feature-point-driven facial animation is presented. In the proposed approach, a hypothetical face is used to control the animation of a face model. The hypothetical face is constructed by connecting some predefined facial feature points to create a net, so that each facet of the net is represented by a Coons surface. Deformation of the face model is controlled by changing the shape of the hypothetical face, which is performed by changing the locations of feature points and their tangents. Experimental results show that this hypothetical-face-based method can generate facial expressions which are visually almost identical to those of a real face. [source]
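
    A bilinearly blended Coons patch interpolates its four boundary curves, so each facet of the hypothetical face is fully determined by the feature-point curves on its border. A sketch of the standard evaluation formula with toy boundary curves (the real system also controls tangents, which this omits):

    ```python
    import numpy as np

    def coons_patch(c0, c1, d0, d1, u, v):
        """Evaluate a bilinearly blended Coons patch at (u, v).

        c0(u), c1(u): bottom and top boundary curves; d0(v), d1(v): left and
        right boundary curves, sharing corner points with c0 and c1.
        """
        ruled_u = (1 - v) * c0(u) + v * c1(u)
        ruled_v = (1 - u) * d0(v) + u * d1(v)
        corners = ((1 - u) * (1 - v) * c0(0) + u * (1 - v) * c0(1)
                   + (1 - u) * v * c1(0) + u * v * c1(1))
        return ruled_u + ruled_v - corners

    # Toy boundaries: straight edges of a unit quad with a lifted top edge.
    c0 = lambda u: np.array([u, 0.0, 0.0])
    c1 = lambda u: np.array([u, 1.0, 0.2 * np.sin(np.pi * u)])
    d0 = lambda v: (1 - v) * c0(0) + v * c1(0)
    d1 = lambda v: (1 - v) * c0(1) + v * c1(1)
    print(coons_patch(c0, c1, d0, d1, 0.5, 0.5))
    ```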


    Animation for Russian Conversation, by MERRILL, JAMES, JULIA MIKHAILOVA, & MARIA ALLEY

    MODERN LANGUAGE JOURNAL, Issue 1 2010
    MARK J. ELSON
    No abstract is available for this article. [source]


    Cartoons from Another Planet: Japanese Animation as Cross-Cultural Communication

    THE JOURNAL OF AMERICAN CULTURE, Issue 1-2 2001
    Shinobu Price
    First page of article [source]


    Volume fraction based miscible and immiscible fluid animation

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2010
    Kai Bao
    Abstract We propose a volume fraction based approach to effectively simulate miscible and immiscible flows simultaneously. In this method, a volume fraction is introduced for each fluid component, and the mutual interactions between different fluids are simulated by tracking the evolution of the volume fractions. Different techniques are employed to handle miscible and immiscible interactions, and special treatments are introduced to handle flows involving multiple fluids and different kinds of interactions at the same time. With this method, second-order accuracy is preserved in both space and time. The experimental results show that the proposed method handles both immiscible and miscible interactions between fluids well and generates much richer mixing detail. The method also shows good controllability: different mixing effects can be obtained by adjusting the dynamic viscosities and diffusion coefficients. Copyright © 2010 John Wiley & Sons, Ltd. [source]
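
    Tracking volume fractions amounts to maintaining one scalar field per fluid that is advected with the flow and, for miscible pairs, diffused. The sketch below shows only the diffuse-and-renormalize part, with a first-order explicit scheme for brevity; the paper preserves second-order accuracy and couples the fractions to a full flow solver.

    ```python
    import numpy as np

    def mix_step(fractions, diff_coeffs, dt=0.01, dx=1.0):
        """Explicit diffusion step for per-fluid volume fractions on a 2D grid.

        fractions: (K, H, W) array, one field per fluid, summing to 1 per cell.
        diff_coeffs: (K,) diffusion coefficients; larger values mix faster,
        which is the controllability knob for miscible behavior (an
        immiscible pair would simply not diffuse into each other).
        """
        out = fractions.copy()
        for k, D in enumerate(diff_coeffs):
            f = fractions[k]
            lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                   + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2
            out[k] = f + dt * D * lap
        return out / out.sum(axis=0, keepdims=True)  # re-enforce sum-to-one

    # Two fluids, initially separated left/right on a 64x64 periodic grid.
    f = np.zeros((2, 64, 64)); f[0, :, :32] = 1.0; f[1, :, 32:] = 1.0
    for _ in range(100):
        f = mix_step(f, diff_coeffs=[0.5, 0.5])
    ```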


    Interactive animation of virtual humans based on motion capture data

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5-6 2009
    Franck Multon
    Abstract This paper presents a novel, parametric framework for synthesizing new character motions from existing motion capture data. Our framework can conduct morphological adaptation as well as kinematic and physically-based corrections. All these solvers are organized in layers so that they can easily be combined. Given locomotion as an example, the system automatically adapts the motion data to the size of the synthetic figure and to its environment; the character will correctly step over complex ground shapes and counteract external forces applied to the body. Our framework is based on a frame-based solver. This enables animating hundreds of humanoids with different morphologies in real time. It is particularly suitable for interactive applications such as video games and virtual reality, where a user interacts in an unpredictable way. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Stylized lighting for cartoon shader

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2-3 2009
    Hideki Todo
    Abstract In the context of non-photorealistic imaging, such as digital cel animation, lighting is symbolic and stylized to depict the scene's mood and the geometric or physical features of the objects in the scene. Stylized light and shade should therefore be intentionally animated rather than rigorously simulated. However, it is difficult to achieve smooth animation of light and shade stylized according to a user's intention, because such stylization cannot be achieved using just conventional 3D lighting. To address this problem, we propose a 3D stylized lighting method, focusing on several stylized effects, including straight lighting, edge lighting, and detail lighting, which are important features in hand-drawn cartoon animation. Our method is an extension of the conventional cartoon shader and introduces a light coordinate system for light shape control with smooth animations of light and shade. We also extend the toon mapping process for detailed feature lighting. Implementing these algorithms in a real-time cartoon shader, our prototype system allows the interactive creation of stylized lighting animations. We show several animation results obtained by our method to illustrate its usefulness and effectiveness. Copyright © 2009 John Wiley & Sons, Ltd. [source]
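
    The conventional cartoon shader that the method extends quantizes Lambertian shading through a 1D ramp; the stylized-lighting controls then reshape where those bands fall. A minimal version of the baseline (thresholds and band intensities are illustrative):

    ```python
    import numpy as np

    def toon_shade(normal, light_dir,
                   thresholds=(0.3, 0.6, 1.0), bands=(0.3, 0.65, 1.0)):
        """Quantize Lambertian shading into discrete bands, as a conventional
        cartoon shader does via a 1D ramp texture lookup."""
        n = normal / np.linalg.norm(normal)
        l = light_dir / np.linalg.norm(light_dir)
        lambert = max(float(np.dot(n, l)), 0.0)
        for t, intensity in zip(thresholds, bands):
            if lambert <= t:
                return intensity
        return bands[-1]

    print(toon_shade(np.array([0, 0, 1.0]), np.array([0.3, 0.2, 1.0])))
    ```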


    Interactive shadowing for 2D Anime

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2-3 2009
    Eiji Sugisaki
    Abstract In this paper, we propose an instant shadow generation technique for 2D animation, especially Japanese Anime. In traditional 2D Anime production, the entire animation, including shadows, is drawn by hand, so it takes a long time to complete. Shadows play an important role in the creation of symbolic visual effects. However, shadows are not always drawn, due to time constraints and a lack of animators, especially when the production schedule is tight. To solve this problem, we develop an easy shadowing approach that enables animators to easily create a layer of shadow and its animation based on the character's shapes. Our approach is both instant and intuitive. The only inputs required are the character or object shapes in the input animation sequence, with the alpha values generally used in the Anime production pipeline. First, shadows are automatically rendered on a virtual plane by using a shadow map based on these inputs. The rendered shadows can then be edited by simple operations and simplified by a Gaussian filter. Several special effects, such as blurring, can be applied to the rendered shadow at the same time. Compared to existing approaches, ours is more efficient and effective at handling automatic shadowing in real time. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Directable animation of elastic bodies with point-constraints

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2008
    Ryo Kondo
    Abstract We propose a simple framework for making elastic body animations with point constraints. In general, a physics-based approach to constraint animation offers a variety of animations with physically correct realism, achieved by solving the equations of motion. However, in the digital animation industry, solving the equations of motion is an indirect path to creating more art-directed animations that maintain a plausible realism. Our algorithms provide animators a practical way to make elastic body animations with plausible realism, while effectively using point constraints to offer directorial control. The animation examples illustrate that our framework creates a wide variety of point-constraint animations of elastic objects with greater directability than existing methods. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Extended spatial keyframing for complex character animation

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2008
    Byungkuk Choi
    Abstract As 3D computer animation becomes more accessible to novice users, it becomes possible for these users to create high-quality animations. This paper introduces a system for creating highly articulated character animations with an intuitive setup, more powerful than the previous research, Spatial Keyframing (SK). As the main purpose of SK was the rapid generation of primitive animation rather than quality animation, we propose Extended Spatial Keyframing (ESK), which exploits a global control structure coupled with multiple sets of spatial keyframes, and hierarchical relationships between controllers. The generated structure can be flexibly embedded into a given rigged character, and the system enables the character to be animated delicately by user performance. During the performance, the movement of the highest-ranking controllers across the control hierarchy is recorded in layered style to increase the level of detail of the final motions. Copyright © 2008 John Wiley & Sons, Ltd. [source]
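
    In spatial keyframing, each key pose is attached to a 3D position of a controller, and dragging the controller blends the poses; a common realization is radial-basis interpolation from controller position to pose vector, sketched here with toy keys. ESK then layers many such controllers in a hierarchy.

    ```python
    import numpy as np

    def fit_spatial_keyframes(key_positions, key_poses, eps=1e-9):
        """Fit an RBF interpolator: controller position -> full pose vector.

        key_positions: (K, 3) controller locations; key_poses: (K, P) poses
        (e.g., all joint angles). Uses the biharmonic kernel phi(r) = r.
        """
        d = np.linalg.norm(key_positions[:, None] - key_positions[None], axis=-1)
        return np.linalg.solve(d + eps * np.eye(len(d)), key_poses)

    def evaluate_pose(weights, key_positions, p):
        d = np.linalg.norm(p - key_positions, axis=-1)
        return d @ weights

    keys = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
    poses = np.array([[0.0, 0.0], [1.0, 0.2], [0.3, 1.0]])  # 2 joint angles
    w = fit_spatial_keyframes(keys, poses)
    print(evaluate_pose(w, keys, np.array([0.5, 0.5, 0.0])))
    ```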


    Physiologically correct animation of the heart

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2008
    Kyoungju Park
    Abstract Physiologically correct animation of the heart should incorporate the non-homogeneous and nonlinear motions of the heart. We therefore introduce a methodology that estimates deformations from volume images and utilizes them for animation. Since volume images are acquired at regular slicing intervals, they miss information between slices, and deformation can be recovered only on the slices. The estimated finite element models (FEMs) therefore result in coarse meshes with chunky elements whose sizes depend on the slice intervals. Thus, we introduce a method of generating a detailed model using implicit surfaces and transferring a deformation from a FEM to implicit surfaces. An implicit surface heart model is reconstructed using contour data points and then cross-parameterized to the heart FEM, whose time-varying deformation has been estimated by tracking the inside of the heart wall. The implicit surface heart model is composed of four heart walls that are blended into one model. A correspondence map between the source and the target meshes is made using the template fitting method. Deformation coupling transfers the deformation of a coarse heart FEM model to a detailed implicit model by factorizing linear equations. We demonstrate the system and show the resulting deformation of an implicit heart model. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Myriad: scalable VR via peer-to-peer connectivity, PC clustering, and transient inconsistency

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2007
    Benjamin Schaeffer
    Abstract Distributed scene graphs are important in virtual reality, both in collaborative virtual environments and in cluster rendering. Modern scalable visualization systems have high local throughput, but collaborative virtual environments (VEs) over a wide-area network (WAN) share data at much lower rates. This complicates the use of one scene graph across the whole application. Myriad is an extension of the Syzygy VR toolkit in which individual scene graphs form a peer-to-peer network. Myriad connections filter scene graph updates and create flexible relationships between nodes of the scene graph. Myriad's sharing is fine-grained: which properties of individual scene graph nodes to share is specified dynamically (in C++ or Python). Myriad permits transient inconsistency, relaxing resource requirements in collaborative VEs. A test application, WorldWideCrowd, demonstrates collaborative prototyping of a 300-avatar crowd animation viewed on two PC-cluster displays and edited on low-powered laptops, desktops, and over a WAN. We have further used our framework to facilitate collaborative educational experiences and as a vehicle for undergraduates to experiment with shared virtual worlds. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    A kaleidoscope as a cyberworld and its animation: linear architecture and modeling based on an incrementally modular abstraction hierarchy

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2006
    Tosiyasu L. Kunii
    Abstract An incrementally modular abstraction hierarchy is known to effectively linearize cyberworlds and virtual worlds, which are combinatorially exploding and hard to manage. It climbs down from the general level to the specific model, preserving the higher-level modules as invariants. This not only prevents the combinatorial explosion but also benefits the reuse, development, testing and validation of cyberworld resources. By applying this incrementally modular abstraction hierarchy to a kaleidoscope animation, its architecture and modeling are specified in this paper as a typical case of cyberworlds. In particular, a homotopy lifting property and a homotopy extension property, which satisfy a duality relation, are also described to show how a kaleidoscope world is systematically created top-down from the whole system and bottom-up from the components. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Automatic muscle generation for character skin deformation

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2006
    Xiaosong Yang
    Abstract As skin shape depends on the underlying anatomical structure, anatomy-based techniques usually afford greater realism than the traditional skeleton-driven approach. On the downside, however, they run against the current animation workflow, as the animator has to model many individual muscles before the final skin layer arrives, resulting in an unintuitive modelling process. In this paper, we present a new anatomy-based technique that allows the animator to start from an already modelled character. Muscles having visible influence on the skin shape at the rest pose are extracted automatically by studying the surface geometry of the skin. The extracted muscles are then used to deform the skin in areas where complex deformations exist. The remaining skin areas, unaffected or hardly affected by the muscles, are handled by the skeleton-driven technique, allowing both techniques to play to their strengths. In order for the extracted muscles to produce realistic local skin deformation during animation, muscle bulging and special movements are both represented. Whereas the former ensures volume preservation, the latter allows a muscle not only to deform along a straight path, but also to slide and bend around joints and bones, producing sophisticated muscle movements and deformations. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    As-consistent-As-possible compositing of virtual objects and video sequences

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2006
    Guofeng Zhang
    Abstract We present an efficient approach that merges virtual objects into video sequences taken by a freely moving camera in a realistic manner. The composition is visually and geometrically consistent through three main steps. First, a robust camera tracking algorithm based on key frames is proposed, which precisely recovers the focal length with a novel multi-frame strategy. Next, the 3D models of the relevant real scenes are reconstructed by means of an extended multi-baseline algorithm. Finally, the virtual objects, in the form of 3D models, are integrated into the real scenes, with special care given to interaction consistency, including shadow casting, occlusions, and object animation. A variety of experiments demonstrate the robustness and efficiency of our approach. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Generation of tree movement sound effects

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5 2005
    Katsutsugu Matsuyama
    Abstract This paper presents a method for automatically generating sound effects for an animation of branches and leaves moving in the wind. Each tree is divided into branches and leaves, and an independent sound effect generation process is employed for each element. The individual results are then compounded into one sound effect. For the branches, we employ an approach based on the frequencies of experimentally obtained Kármán vortex streets. For the leaves, we use the leaf blade state as the input and assume a virtual musical instrument that uses wave tables as the sound source. All computations can be performed independently for each frame step, so each frame's sound can be generated as soon as its animation step completes. The results of the implementation of the approach are presented, and it is shown that the process offers the possibility of real-time operation through the use of parallel computing techniques. Copyright © 2005 John Wiley & Sons, Ltd. [source]
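
    For the branches, the relevant physics is the Kármán vortex street behind a cylinder, whose shedding frequency is f = St·U/d with a Strouhal number of roughly 0.2 over the relevant Reynolds-number range. A sketch of that relation, with a single sine partial as an assumed stand-in for the paper's wave-table synthesis:

    ```python
    import numpy as np

    def shedding_frequency(wind_speed, branch_diameter, strouhal=0.2):
        """Frequency of the Karman vortex street behind a cylinder:
        f = St * U / d, with St ~ 0.2 in the relevant Reynolds range."""
        return strouhal * wind_speed / branch_diameter

    def branch_tone(wind_speed, diameter, duration=1.0, sr=44100):
        f = shedding_frequency(wind_speed, diameter)
        t = np.arange(int(duration * sr)) / sr
        return np.sin(2 * np.pi * f * t)   # one partial per branch, then mixed

    print(shedding_frequency(wind_speed=5.0, branch_diameter=0.02))  # ~50 Hz
    ```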


    Natural head motion synthesis driven by acoustic prosodic features

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2005
    Carlos Busso
    Abstract Natural head motion is important to realistic facial animation and engaging human–computer interaction. In this paper, we present a novel data-driven approach to synthesize appropriate head motion by sampling from trained hidden Markov models (HMMs). First, while an actress recited a corpus specifically designed to elicit various emotions, her 3D head motion was captured and further processed to construct a head motion database that included synchronized speech information. Then, an HMM for each discrete head motion representation (derived directly from data using vector quantization) was created using acoustic prosodic features derived from speech. Finally, first-order Markov models and interpolation techniques were used to smooth the synthesized sequence. Our comparison experiments and novel synthesis results show that synthesized head motions follow the temporal dynamic behavior of real human subjects. Copyright © 2005 John Wiley & Sons, Ltd. [source]
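
    The smoothing stage uses first-order Markov models over vector-quantized head-motion states. A minimal sketch of sampling such a chain and upsampling the result (transition matrix and codebook values are invented for illustration):

    ```python
    import numpy as np

    def sample_head_motion(transitions, codebook, start, n_frames, seed=0):
        """Sample a head-motion sequence from a first-order Markov model.

        transitions: (S, S) row-stochastic matrix over discrete head-motion
        states; codebook: (S, 3) representative Euler angles per state.
        Linear interpolation stands in for the paper's smoothing step.
        """
        rng = np.random.default_rng(seed)
        states = [start]
        for _ in range(n_frames - 1):
            states.append(rng.choice(len(codebook), p=transitions[states[-1]]))
        keys = codebook[np.array(states)]
        # Upsample 15 Hz states to 60 Hz with linear interpolation.
        t_key = np.arange(len(keys))
        t_out = np.linspace(0, len(keys) - 1, 4 * len(keys))
        return np.stack([np.interp(t_out, t_key, keys[:, k]) for k in range(3)], 1)

    T = np.array([[0.8, 0.2], [0.3, 0.7]])
    angles = np.array([[0.0, 0.0, 0.0], [5.0, -2.0, 1.0]])   # pitch/yaw/roll deg
    motion = sample_head_motion(T, angles, start=0, n_frames=40)
    ```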