Interactive Applications


Selected Abstracts


Interactive animation of virtual humans based on motion capture data

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5-6 2009
Franck Multon
Abstract This paper presents a novel, parametric framework for synthesizing new character motions from existing motion capture data. The framework performs morphological adaptation as well as kinematic and physically based corrections; these solvers are organized in layers so that they can easily be combined. Taking locomotion as an example, the system automatically adapts the motion data to the size of the synthetic figure and to its environment: the character correctly steps over complex ground shapes and counteracts external forces applied to its body. The framework is built on a frame-based solver, which makes it possible to animate hundreds of humanoids with different morphologies in real time. It is particularly suitable for interactive applications such as video games and virtual reality, where the user interacts in unpredictable ways. Copyright © 2009 John Wiley & Sons, Ltd. [source]
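The morphological-adaptation layer described above can be illustrated with a toy sketch: scale a captured skeleton frame to a character of a different size and re-seat it on the local ground. This is only a minimal illustration of the idea, not the paper's solver; the function name, the uniform-scaling assumption, and the Y-up convention are all assumptions for the example.

```python
import numpy as np

def retarget_frame(joint_positions, src_height, dst_height, ground_offset=0.0):
    """Adapt one motion-capture frame to a character of a different size.

    joint_positions: (N, 3) array of world-space joint positions.
    Assumes uniform scaling by body height and a Y-up axis (column 1).
    """
    scale = dst_height / src_height
    adapted = np.asarray(joint_positions, dtype=float) * scale
    # re-seat the character on the local ground height of the environment
    adapted[:, 1] += ground_offset
    return adapted
```

A per-frame function like this is what makes the frame-based approach cheap enough to run for many characters at once: each frame is adapted independently, with no global optimization over the whole clip.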


Analytical inverse kinematics with body posture control

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2 2008
Marcelo Kallmann
Abstract This paper presents a novel whole-body analytical inverse kinematics (IK) method integrating collision avoidance and customizable body control for animating reaching tasks in real time. Whole-body control is achieved by interpolating pre-designed key body postures, which are organized as a function of the direction to the goal to be reached. Arm postures are computed by the analytical IK solution for human-like arms and legs, extended with a simple new search method for finding postures that avoid joint limits and collisions. In addition, a new IK resolution is presented that directly solves for joints parameterized in the swing-and-twist decomposition. The overall method is simple to implement, fast, and accurate, and therefore suitable for interactive applications controlling the hands of characters. The source code of the IK implementation is provided. Copyright © 2007 John Wiley & Sons, Ltd. [source]
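The core of any analytical arm IK is the closed-form two-link solution via the law of cosines. The sketch below shows that building block in the plane; it is not the paper's whole-body method (which handles 3D limbs, swing-and-twist parameterization, and collision avoidance), and the clamping of unreachable targets is an assumption added for robustness.

```python
import math

def two_link_ik(target_x, target_y, l1, l2):
    """Closed-form planar two-link IK (law of cosines).

    Returns (shoulder, elbow) angles in radians for link lengths l1, l2.
    Targets outside the reachable annulus are clamped to its boundary.
    """
    d = math.hypot(target_x, target_y)
    d = min(max(d, abs(l1 - l2) + 1e-9), l1 + l2 - 1e-9)
    # interior elbow angle from the law of cosines
    cos_elbow = (d * d - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # shoulder angle: aim at the target, then subtract the wrist offset
    shoulder = math.atan2(target_y, target_x) - \
        math.atan2(l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

Because the solution is closed-form rather than iterative, it is constant-time per query, which is what makes analytical IK attractive for the interactive reaching tasks the abstract describes.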


Interactive soft-touch dynamic deformations

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3 2007
Hui Chen
Abstract In many interactive applications, such as online computer games, interactive cartoon design, and virtual prototyping, it is crucial for users to be able to touch, grasp, and manipulate objects of interest through the sense of touch. In this paper, we propose an interactive haptic deformation approach that combines the dynamic simulation of mass-spring systems with the flexible control of free-form deformation for touch-enabled soft-object deformation. By distributing the mass, spring, and damping coefficients of the object over the bounding Bézier volume lattice, the deformation of the object in response to the haptic avatar follows physical laws and runs at interactive rates. Both homogeneous and inhomogeneous materials are simulated. Anchor nodes for the haptic input can be specified to create striking special effects during interactive haptic deformation. Interactive haptic deformations of three types of tropical fish, Angel, Demekin, and GuppyBlueGrass, have been used to simulate vivid swimming in a virtual ocean scene. The proposed approach provides touch-enabled input and efficient, flexible deformation control, letting objects move in a dynamic, cartoon-style deforming manner. Copyright © 2007 John Wiley & Sons, Ltd. [source]
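The mass-spring dynamics underlying this kind of deformation can be sketched with a single explicit-Euler integration step: accumulate Hooke spring forces, apply damping, and advance unpinned particles. This is a generic mass-spring step under stated assumptions (explicit Euler, velocity-proportional damping, a `pinned` mask standing in for the anchor nodes), not the paper's Bézier-lattice formulation.

```python
import numpy as np

def step_mass_spring(pos, vel, springs, rest, k, damping, masses, dt, pinned):
    """One explicit-Euler step of a mass-spring system.

    pos, vel: (N, 3) arrays; springs: list of (i, j) index pairs;
    rest: rest lengths per spring; pinned: (N,) boolean mask of anchors.
    """
    forces = np.zeros_like(pos)
    for (i, j), rest_len in zip(springs, rest):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        if length > 1e-12:
            # Hooke's law along the spring direction
            f = k * (length - rest_len) * (d / length)
            forces[i] += f
            forces[j] -= f
    forces -= damping * vel  # simple velocity-proportional damping
    acc = forces / masses[:, None]
    vel = np.where(pinned[:, None], 0.0, vel + dt * acc)
    pos = pos + dt * vel
    return pos, vel
```

Varying `k`, `damping`, and `masses` per region is the kind of coefficient distribution that lets one lattice model both homogeneous and inhomogeneous materials.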


Tactics-Based Behavioural Planning for Goal-Driven Rigid Body Control

COMPUTER GRAPHICS FORUM, Issue 8 2009
Stefan Zickler
Computer Graphics [I.3.7]: Animation; Artificial Intelligence [I.2.8]: Plan execution, formation, and generation; Computer Graphics [I.3.5]: Physically based modelling

Abstract Controlling rigid body dynamic simulations can pose a difficult challenge when constraints exist on the bodies' goal states and on the sequence of intermediate states in the resulting animation. Manually adjusting individual rigid body control actions (forces and torques) can become a very labour-intensive and non-trivial task, especially if the domain includes a large number of bodies or requires complicated chains of inter-body collisions to achieve the desired goal state. Furthermore, some interactive applications that rely on rigid body models, such as video games, can offer no control guidance by a human animator at runtime. In this work, we present techniques to automatically generate intelligent control actions for rigid body simulations. We introduce sampling-based motion planning methods that model goal-driven behaviour through the use of non-deterministic Tactics, which consist of intelligent, sampling-based control blocks called Skills. We introduce and compare two variations of a Tactics-driven planning algorithm, namely Behavioural Kinodynamic Rapidly-Exploring Random Trees (BK-RRT) and Behavioural Kinodynamic Balanced Growth Trees (BK-BGT). We show how our planner can be applied to automatically compute control sequences for challenging physics-based domains, and that it scales to control problems involving several hundred interacting bodies, each carrying unique goal constraints. [source]
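The kinodynamic RRT idea at the heart of BK-RRT can be illustrated on a toy system: instead of connecting states geometrically, the planner samples controls, forward-integrates them, and keeps the best extension, so every tree edge is a physically executable action. The sketch below uses a 2D point with velocity controls as a stand-in for rigid-body dynamics; the goal bias, the five-control sample budget, and all parameter values are assumptions of this example, not the paper's Tactics/Skills machinery.

```python
import random

def kinodynamic_rrt(start, goal, goal_radius, bounds,
                    max_iters=5000, dt=0.2, seed=1):
    """Minimal kinodynamic RRT for a 2D point moved by velocity controls.

    Returns the control sequence from start to the goal region, or None.
    """
    rng = random.Random(seed)
    nodes = [start]
    parents = {0: None}
    controls = {0: None}
    for _ in range(max_iters):
        # sample a target state (with a small goal bias)
        if rng.random() < 0.1:
            target = goal
        else:
            target = (rng.uniform(*bounds[0]), rng.uniform(*bounds[1]))
        # nearest existing node to the sampled target (Voronoi bias)
        near = min(range(len(nodes)),
                   key=lambda i: (nodes[i][0] - target[0]) ** 2 +
                                 (nodes[i][1] - target[1]) ** 2)
        # sample a few controls, forward-integrate, keep the best extension
        best = None
        for _ in range(5):
            u = (rng.uniform(-1, 1), rng.uniform(-1, 1))
            nxt = (nodes[near][0] + u[0] * dt, nodes[near][1] + u[1] * dt)
            d = (nxt[0] - target[0]) ** 2 + (nxt[1] - target[1]) ** 2
            if best is None or d < best[0]:
                best = (d, nxt, u)
        nodes.append(best[1])
        parents[len(nodes) - 1] = near
        controls[len(nodes) - 1] = best[2]
        if ((best[1][0] - goal[0]) ** 2 +
                (best[1][1] - goal[1]) ** 2) <= goal_radius ** 2:
            # walk back to the root, collecting the control sequence
            seq, i = [], len(nodes) - 1
            while parents[i] is not None:
                seq.append(controls[i])
                i = parents[i]
            return list(reversed(seq))
    return None
```

In the paper's setting, the "integrate one control" step would instead run the rigid-body simulator under a sampled Skill, which is why the resulting plans remain physically valid when replayed.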


Real-Time Depth-of-Field Rendering Using Point Splatting on Per-Pixel Layers

COMPUTER GRAPHICS FORUM, Issue 7 2008
Sungkil Lee
Abstract We present a real-time method for rendering a depth-of-field effect based on per-pixel layered splatting, where source pixels are scattered onto one of three layers of a destination pixel. In addition, the missing information behind foreground objects is filled in with an additional image of the areas occluded by nearer objects. The method produces high-quality depth-of-field results even in the presence of partial occlusion, without the major artifacts often present in previous real-time methods. It can also be applied to simulating defocused highlights. The entire framework is accelerated on the GPU, enabling real-time post-processing for both off-line and interactive applications. [source]
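Depth-of-field splatting methods of this kind size each splat by the pixel's circle of confusion under a thin-lens camera model. The helper below shows that standard formula; it is background common to such methods rather than code from this paper, and the parameter names are this example's own.

```python
def circle_of_confusion(depth, focal_depth, focal_length, aperture):
    """Thin-lens circle-of-confusion diameter for a point at `depth`.

    All distances share one unit (e.g. metres); `aperture` is the lens
    diameter. Points at `focal_depth` map to a diameter of zero.
    """
    return abs(aperture * focal_length * (depth - focal_depth) /
               (depth * (focal_depth - focal_length)))
```

In a splatting renderer, this diameter (converted to pixels) determines how far each source pixel is scattered, and comparing it against per-layer depth thresholds decides which of the destination pixel's layers receives the splat.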