Copyright

Selected Abstracts


    'MERELY MECHANICAL': ON THE ORIGINS OF PHOTOGRAPHIC COPYRIGHT IN FRANCE AND GREAT BRITAIN

    ART HISTORY, Issue 1 2008
    ANNE MCCAULEY
    The invention of the medium of photography and its commercialization as a cheap multiple during the 1850s and 1860s led to challenges to extant copyright laws in France and Great Britain. This paper traces the ways that debates over photographic copyright confronted current understandings of originality and mechanization and repeated arguments that had already been raised by laws governing prints and casts. The British Fine Arts Copyright Act of 1862, which extended statutory protection to all photographs, is contrasted with French cases, which struggled to accommodate photographs within the fine arts as defined by the copyright law of 1793. [source]


    Haptic-constraint modeling based on interactive metaballs

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5 2010
    Hui Chen
    Abstract Adding interactive haptic-constraint sensations is important in interactive computer gaming and 3D shape design. Usually, constraints are set on vertices of the object to drive the deformation, but simulating dynamic force constraints in interactive design remains a challenging task. In this paper, we propose a novel haptic-constraint modeling method based on interactive metaballs, in which the haptic-constraint tools are attracted to the target location and then control the touch-enabled deformation within the constrained areas. The interactive force feedback helps designers accurately deform the target regions and finely carve details on the objects as intended. Our work studies how to apply touch sensation in such constrained deformations using interactive metaballs, so that users can truly feel and control the soft-touch objects during the deforming interactions. Experimental results show that the interacting interface we have developed gives users an intuitive, dynamic sense of touch during haptic manipulation. Copyright © 2010 John Wiley & Sons, Ltd. [source]
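
    As a rough illustration of the metaball machinery this abstract builds on (the kernel and the threshold convention below are generic textbook choices, not the authors' formulation), a scalar field summed from smooth per-ball kernels can mark the constrained region around a haptic tool:

```python
import numpy as np

def metaball_field(points, centers, radii):
    """Scalar field from a sum of smooth kernels; a level set of this field
    can delimit the constrained deformation region around the tool."""
    field = np.zeros(len(points))
    for c, r in zip(centers, radii):
        d2 = np.sum((points - c) ** 2, axis=1) / r ** 2
        inside = d2 < 1.0
        field[inside] += (1.0 - d2[inside]) ** 2  # falloff vanishes at the ball radius
    return field

pts = np.array([[0.0, 0.0, 0.0], [0.9, 0.0, 0.0], [3.0, 0.0, 0.0]])
f = metaball_field(pts, centers=np.array([[0.0, 0.0, 0.0]]), radii=[1.0])
```

    Points whose field value exceeds a chosen threshold would belong to the constrained area; coupling this to force feedback, as the paper does, is out of scope for this sketch.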


    A hybrid approach for simulating human motion in constrained environments

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2010
    Jia Pan
    Abstract We present a new algorithm to generate plausible motions for high-DOF human-like articulated figures in constrained environments with multiple obstacles. Our approach is general and makes no assumptions about the articulated model or the environment. The algorithm combines hierarchical model decomposition with sample-based planning to efficiently compute a collision-free path in tight spaces. Furthermore, we use path perturbation and replanning techniques to satisfy the kinematic and dynamic constraints on the motion. In order to generate realistic human-like motion, we present a new motion blending algorithm that refines the path computed by the planner with motion capture data to compute a smooth and plausible trajectory. We demonstrate the results of generating motion corresponding to placing or lifting an object, walking, and bending for a 38-DOF articulated model. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    Choreographing emotional facial expressions

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2010
    Robin J.S. Sloan
    Abstract While much is known about the appearance and human perception of emotional facial expressions, researchers and professionals experience difficulties when attempting to create believable animated characters. Methods for automating or capturing dynamic facial expressions have come on in leaps and bounds in recent years, resulting in increasingly realistic characters. However, accurate replication of naturalistic movement does not necessarily ensure authentic character performance. In this paper, the authors present a project which makes use of creative animation practices and artistic reflection as methods of research. The output of animation practice is tested experimentally by measuring observer perception and comparing the results with artistic observations and predictions. Ultimately, the authors aim to demonstrate that animation practice can generate new knowledge about dynamic character performance, and that arts-based methods can and should be considered valuable tools in a field often dominated by technical methods of research. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    Situation agents: agent-based externalized steering logic

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2010
    Matthew Schuerman
    Abstract We present a simple and intuitive method for encapsulating part of agents' steering and coordinating abilities into a new class of agents, called situation agents. Situation agents have all the abilities of typical agents. In addition, they can influence the steering decisions of any agent, including other situation agents, within their sphere of influence. Encapsulating steering logic into moving agents is a powerful abstraction which provides more flexibility and efficiency than traditional informed environment approaches, and works with many of the current steering methodologies. We demonstrate our proposed approach in a number of challenging scenarios. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    Inhomogeneous volumetric Laplacian deformation for rhinoplasty planning and simulation system

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2010
    Sheng-hui Liao
    Abstract This paper presents an intuitive rhinoplasty planning and simulation system that provides high-quality prediction of postoperative appearance and automatically designs patient-specific nose prostheses. The key component is a novel volumetric Laplacian deformation tool inspired by state-of-the-art differential surface deformation techniques. Working in the volumetric domain and incorporating inhomogeneous material properties from CT data make the new approach suitable for soft-tissue simulation. In particular, the system employs a sketch-contour-driven deformation interface, which provides realistic 3D rhinoplasty simulation through intuitive and straightforward 2D manipulation. Once the appearance is satisfactory, the change in soft tissue before and after simulation is used to generate the individual prosthesis model automatically. Clinical validation using post-operative CT data demonstrated that the system provides high-quality predictions, and the surgeons who used it confirmed that the planning system is attractive and has potential for daily clinical practice. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    Volume fraction based miscible and immiscible fluid animation

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2010
    Kai Bao
    Abstract We propose a volume fraction based approach to effectively simulate miscible and immiscible flows simultaneously. In this method, a volume fraction is introduced for each fluid component, and the mutual interactions between different fluids are simulated by tracking the evolution of the volume fractions. Different techniques are employed to handle miscible and immiscible interactions, and special treatments are introduced to handle flows involving multiple fluids and different kinds of interactions at the same time. With this method, second-order accuracy is preserved in both space and time. The experimental results show that the proposed method handles both immiscible and miscible interactions between fluids well and generates much richer mixing detail. The method also shows good controllability: different mixing effects can be obtained by adjusting the dynamic viscosities and diffusion coefficients. Copyright © 2010 John Wiley & Sons, Ltd. [source]
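
    A minimal sketch of the volume-fraction idea (a 1D explicit diffusion step on a periodic grid; the paper's solver is second-order and far more elaborate, so this is illustrative only): two fractions that always sum to one are mixed by tracking their evolution.

```python
import numpy as np

def mix_step(phi, D, dt, dx):
    """One explicit diffusion step of a volume-fraction field on a periodic grid."""
    lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx ** 2
    return phi + dt * D * lap

phi_a = np.where(np.arange(64) < 32, 1.0, 0.0)  # fluid A fills the left half
phi_b = 1.0 - phi_a                             # fluid B fills the right half
for _ in range(200):                            # miscible mixing: fractions diffuse
    phi_a = mix_step(phi_a, D=0.1, dt=0.1, dx=1.0)
    phi_b = mix_step(phi_b, D=0.1, dt=0.1, dx=1.0)
```

    Because the update is linear, the fractions keep summing to one everywhere, which is the invariant a volume-fraction method must maintain; a larger diffusion coefficient D gives faster mixing.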


    Stable stylized wireframe rendering

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2010
    Chen Tang
    Abstract Stylized wireframe rendering of 3D models is widely used in animation software to depict the configuration of deformable models in a comprehensible way. However, owing to inherent flaws in traditional depth-test-based rendering, the shape of lines is not preserved under continuous movement or deformation of models: severe aliasing, such as flickering artifacts, often appears when objects rendered in line form animate, especially with thick or dashed lines. To remove this artifact, we propose a novel fast line-drawing method with high visual fidelity for wireframe depiction that depends only on the intrinsic topology of primitives, without any preprocessing step or pre-stored adjacency information. In contrast to previous widely used solutions, our method achieves highly accurate visibility and a clear, stable line appearance without flickering, even for thick and dashed lines, maintaining uniform width and a steady configuration as the model moves or animates; it is therefore well suited to animation systems. In addition, our approach is easy to implement and control, without any additional parameters to be estimated by users. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    Virtual humans elicit socially anxious interactants' verbal self-disclosure

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2010
    Sin-Hwa Kang
    Abstract We explored the relationship between interactants' social anxiety and the interactional fidelity of virtual humans. We specifically addressed whether the contingent non-verbal feedback of virtual humans affects the association between interactants' social anxiety and their verbal self-disclosure. This was investigated across three experimental conditions in which participants interacted with real-human videos and virtual humans in computer-mediated interview interactions. The results demonstrated that socially anxious people revealed more information, and more intimate information, about themselves when interacting with a virtual human than with a real-human video, whereas less socially anxious people did not show this difference. We discuss the implications of this association between the interactional fidelity of virtual humans and social anxiety in a human interactant for the design of embodied virtual agents for social skills training and psychotherapy. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    Interactive animation of virtual humans based on motion capture data

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5-6 2009
    Franck Multon
    Abstract This paper presents a novel, parametric framework for synthesizing new character motions from existing motion capture data. Our framework can conduct morphological adaptation as well as kinematic and physically based corrections. All these solvers are organized in layers so that they can easily be combined. Given locomotion as an example, the system automatically adapts the motion data to the size of the synthetic figure and to its environment; the character will correctly step over complex ground shapes and counteract external forces applied to the body. Our framework is based on a frame-based solver, which makes it possible to animate hundreds of humanoids with different morphologies in real time. It is particularly suitable for interactive applications such as video games and virtual reality, where a user interacts in an unpredictable way. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Combined compression and simplification of dynamic 3D meshes

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 4 2009
    Libor Vá
    Abstract We present a new approach to dynamic mesh compression, which combines compression with simplification to achieve improved compression results, natural support for incremental transmission, and level of detail. The algorithm allows fast progressive transmission of dynamic 3D content. Our scheme exploits both the temporal and spatial coherency of the input data, and is especially efficient for the case of highly detailed dynamic meshes. The algorithm can be seen as an ultimate extension of the clustering and local coordinate frame (LCF)-based approaches, where each vertex is expressed within its own specific coordinate system. The presented results show that we have achieved better compression efficiency than state-of-the-art methods. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Spatial camera orientation control by rotation-minimizing directed frames

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 4 2009
    Rida T. Farouki
    Abstract The use of rotation-minimizing directed frames (RMDFs) for defining smoothly varying camera orientations along given spatial paths, in real or virtual environments, is proposed. A directed frame on a space curve is a varying orthonormal basis for ℝ³ in which one frame vector coincides with the unit polar vector from the origin to each curve point, and such a frame is rotation-minimizing if its angular velocity vector maintains a vanishing component along that polar vector. To facilitate computation of rotation-minimizing directed frames, it is shown that the basic theory is equivalent to the established theory for rotation-minimizing adapted frames, for which one frame vector coincides with the tangent at each curve point, if one replaces the given space curve by its anti-hodograph (i.e., indefinite integral). A family of polynomial curves on which RMDFs can be computed exactly by rational function integration, the Pythagorean (P) curves, is also introduced, together with algorithms for their construction. Copyright © 2009 John Wiley & Sons, Ltd. [source]
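
    Rotation-minimizing adapted frames, to which the abstract reduces the directed-frame problem via the anti-hodograph, are commonly approximated numerically by the double-reflection method. The sketch below is that generic numerical scheme, an assumption on our part, not the paper's exact rational-integration algorithm:

```python
import numpy as np

def rmf(points, tangents, r0):
    """Propagate a rotation-minimizing frame vector along a sampled curve
    (double-reflection method; tangents are assumed unit length)."""
    frames = [np.asarray(r0, dtype=float)]
    for i in range(len(points) - 1):
        v1 = points[i + 1] - points[i]
        c1 = v1 @ v1
        rl = frames[i] - (2.0 / c1) * (v1 @ frames[i]) * v1      # first reflection
        tl = tangents[i] - (2.0 / c1) * (v1 @ tangents[i]) * v1
        v2 = tangents[i + 1] - tl
        c2 = v2 @ v2
        frames.append(rl - (2.0 / c2) * (v2 @ rl) * v2)          # second reflection
    return np.array(frames)

s = np.linspace(0.0, np.pi, 50)
pts = np.stack([np.cos(s), np.sin(s), np.zeros_like(s)], axis=1)   # planar arc
tans = np.stack([-np.sin(s), np.cos(s), np.zeros_like(s)], axis=1)
F = rmf(pts, tans, np.array([0.0, 0.0, 1.0]))
```

    On a planar arc with the initial vector normal to the plane, the propagated vector stays fixed, exactly the zero-twist behavior a rotation-minimizing frame should exhibit.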


    Stylized lighting for cartoon shader

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2-3 2009
    Hideki Todo
    Abstract In the context of non-photorealistic imaging, such as digital cel animation, lighting is symbolic and stylized to depict the scene's mood and the geometric or physical features of the objects in the scene. Stylized light and shade should therefore be intentionally animated rather than rigorously simulated. However, it is difficult to achieve smooth animation of light and shade stylized according to a user's intention, because such stylization cannot be achieved with conventional 3D lighting alone. To address this problem, we propose a 3D stylized lighting method focusing on several stylized effects, including straight lighting, edge lighting, and detail lighting, which are important features in hand-drawn cartoon animation. Our method extends the conventional cartoon shader and introduces a light coordinate system for light-shape control with smooth animation of light and shade. We also extend the toon mapping process for detailed feature lighting. Implementing these algorithms in a real-time cartoon shader, our prototype system allows the interactive creation of stylized lighting animations. We show several animation results obtained by our method to illustrate its usefulness and effectiveness. Copyright © 2009 John Wiley & Sons, Ltd. [source]
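
    The conventional cartoon shader that this method extends quantizes the Lambert diffuse term into flat bands. A minimal toon-mapping sketch (the band count and clamping are generic choices, not the paper's extended mapping):

```python
import numpy as np

def toon_shade(normal, light_dir, levels=3):
    """Quantize the clamped Lambert term n·l into `levels` flat bands."""
    n = np.asarray(normal) / np.linalg.norm(normal)
    ld = np.asarray(light_dir) / np.linalg.norm(light_dir)
    diffuse = max(float(n @ ld), 0.0)
    return np.floor(diffuse * levels) / levels  # step function instead of smooth shading
```

    Replacing the floor with a lookup texture is the usual "toon map"; the paper's contribution is making such maps controllable and smoothly animatable.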


    Furstyling on angle-split shell textures

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2-3 2009
    Bin Sheng
    Abstract This paper presents a new method for modeling and rendering fur with a wide variety of furstyles. We simulate virtual fur using shell textures, i.e., multiple layers of textured slices, chosen for their generality and efficiency. As shell textures usually suffer from inherent visual gap errors due to their uniform discretization, we present the angle-split shell textures (ASST) approach, which classifies shell textures into different types with different numbers of texture layers by splitting the space of viewing angles between the fur orientation and the view direction. Our system can render fur with biological patterns, and utilizes vector and scalar fields on the ASST to control geometric variations of the furry shape. Users can intuitively shape the fur by applying combing, blowing, and interpolating effects in real time. Our approach is straightforward to implement without complex data structures, and achieves real-time performance for dynamic fur appearances. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Perceptual 3D pose distance estimation by boosting relational geometric features

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2-3 2009
    Cheng Chen
    Abstract Traditional pose similarity functions based on joint coordinates or rotations often do not conform to human perception. We propose a new perceptual pose distance: Relational Geometric Distance that accumulates the differences over a set of features that reflects the geometric relations between different body parts. An extensive relational geometric feature pool that contains a large number of potential features is defined, and the features effective for pose similarity estimation are selected using a set of labeled data by Adaboost. The extensive feature pool guarantees that a wide diversity of features is considered, and the boosting ensures that the selected features are optimized when used jointly. Finally, the selected features form a pose distance function that can be used for novel poses. Experiments show that our method outperforms others in emulating human perception in pose similarity. Our method can also adapt to specific motion types and capture the features that are important for pose similarity of a certain motion type. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Fast simulation of skin sliding

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2-3 2009
    Xiaosong Yang
    Abstract Skin sliding is the phenomenon of the skin moving over underlying layers of fat, muscle, and bone. Due to the complex interconnections between these separate layers and their differing elasticity properties, it is difficult to model and expensive to compute. We present a novel method to simulate this phenomenon in real time by remeshing the surface based on a parameter-space resampling. To evaluate the surface parametrization, we borrow a technique from structural engineering known as the force density method (FDM), which solves for an energy-minimizing form with a sparse linear system. Our method creates a realistic approximation of skin sliding in real time, reducing texture distortions in the region of the deformation. In addition, it is flexible, simple to use, and can be incorporated into any animation pipeline. Copyright © 2009 John Wiley & Sons, Ltd. [source]
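
    The force density method mentioned above reduces form-finding to a single linear solve: with a fixed force density q per edge, the free node positions satisfy a q-weighted graph-Laplacian system with the anchored nodes moved to the right-hand side. A small dense sketch (the paper applies this to surface parametrization; the chain example here is purely illustrative):

```python
import numpy as np

def force_density_positions(edges, q, n, anchors):
    """Solve the FDM system for free node positions of a pin-jointed net.

    edges: (i, j) node pairs; q: force density per edge;
    anchors: {node index: fixed coordinate vector}."""
    C = np.zeros((len(edges), n))                   # branch-node incidence matrix
    for k, (i, j) in enumerate(edges):
        C[k, i], C[k, j] = 1.0, -1.0
    D = C.T @ (q[:, None] * C)                      # force-density-weighted Laplacian
    free = [i for i in range(n) if i not in anchors]
    fix = sorted(anchors)
    Xf = np.array([anchors[i] for i in fix])
    rhs = -D[np.ix_(free, fix)] @ Xf                # move anchor terms to the RHS
    X = np.zeros((n, Xf.shape[1]))
    X[fix] = Xf
    X[free] = np.linalg.solve(D[np.ix_(free, free)], rhs)
    return X

X = force_density_positions(
    edges=[(0, 1), (1, 2), (2, 3)], q=np.ones(3), n=4,
    anchors={0: np.array([0.0, 0.0]), 3: np.array([3.0, 0.0])})
```

    For this three-edge chain with unit force densities and anchored endpoints, the solve places the free nodes at the energy-minimizing positions (1, 0) and (2, 0); a production version would use a sparse solver, as the abstract notes.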


    Interactive shadowing for 2D Anime

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2-3 2009
    Eiji Sugisaki
    Abstract In this paper, we propose an instant shadow generation technique for 2D animation, especially Japanese Anime. In traditional 2D Anime production, the entire animation, including shadows, is drawn by hand, which takes a long time to complete. Shadows play an important role in the creation of symbolic visual effects. However, shadows are not always drawn, owing to time constraints and a shortage of animators, especially when the production schedule is tight. To solve this problem, we develop an easy shadowing approach that enables animators to quickly create a layer of shadow and its animation based on the character's shapes. Our approach is both instant and intuitive. The only inputs required are the character or object shapes in the input animation sequence, with the alpha values generally used in the Anime production pipeline. First, shadows are automatically rendered on a virtual plane by using a shadow map based on these inputs. The rendered shadows can then be edited by simple operations and simplified with a Gaussian filter, and several special effects, such as blurring, can be applied at the same time. Compared to existing approaches, ours is more efficient and effective at handling automatic shadowing in real time. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Coherence aware GPU-based ray casting for virtual colonoscopy

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2009
    Taek Hee Lee
    Abstract In this paper, we propose a GPU-based volume ray casting method for virtual colonoscopy that generates high-quality rendered images at a large screen size. Using temporal coherence for ray casting, empty-space leaping can be done efficiently by reprojecting the first-hit points of the previous frame; however, such approaches can produce artifacts, such as holes or illegal starting positions, due to the insufficient resolution of the first-hit points. To eliminate these artifacts, we use a triangle mesh of first-hit points and check the intersection of each triangle with the corresponding real surface. Illegal starting positions can be avoided by replacing a false triangle cutting the real surface with five newly generated triangles. The proposed algorithm is well suited to recent GPU architectures with Shader Model 4.0, which support not only fast rasterization of a triangle mesh but also many flexible vertex operations. Experimental results on an ATI 2900 with DirectX 10 show perspective volume renderings at over 24 fps at a 1024 × 1024 screen size without any loss of image quality. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    GPU-based interactive visualization framework for ultrasound datasets

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2009
    Sukhyun Lim
    Abstract Ultrasound imaging is widely used in medical areas. By transmitting ultrasound signals into the human body, their echoed signals can be rendered to represent the shape of internal organs. Although its image quality is inferior to that of CT or MR, ultrasound is widely used for its speed and reasonable cost. Volume rendering techniques provide methods for rendering a 3D volume dataset intuitively. We present a visualization framework for ultrasound datasets that uses programmable graphics hardware. For this, we convert ultrasound coordinates into Cartesian form; however, since the physical storage and representation spaces of ultrasound datasets differ, we apply different sampling intervals adaptively for each ray. In addition, we exploit multiple filtered datasets to reduce noise, which allows an adequate filter size to be determined automatically. As a result, our approach enables interactive volume rendering of ultrasound datasets on a consumer-level PC. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    A comparative study of awareness methods for peer-to-peer distributed virtual environments

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5 2008
    S. Rueda
    Abstract The increasing popularity of multi-player online games is leading to the widespread use of large-scale Distributed Virtual Environments (DVEs) nowadays. In these systems, peer-to-peer (P2P) architectures have been proposed as an efficient and scalable solution for supporting massively multi-player applications. However, the main challenge for P2P architectures consists of providing each avatar with updated information about which other avatars are its neighbors. This problem is known as the awareness problem. In this paper, we propose a comparative study of the performance provided by those awareness methods that are supposed to fully solve the awareness problem. This study is performed using well-known performance metrics in distributed systems. Moreover, while the evaluations shown in the literature are performed by executing P2P simulations on a single (sequential) computer, this paper evaluates the performance of the considered methods on actually distributed systems. The evaluation results show that only a single method actually provides full awareness to avatars. This method also provides the best performance results. Copyright © 2008 John Wiley & Sons, Ltd. [source]
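
    The awareness problem the study evaluates is, at its core, a neighbor query: each avatar must learn which other avatars currently fall inside its area of interest (AOI). A centralized ground-truth sketch (useful as the baseline against which a distributed method's awareness rate can be measured; the function name and circular AOI are illustrative assumptions):

```python
import numpy as np

def aware_neighbors(positions, i, radius):
    """Ground-truth awareness set: indices of avatars inside avatar i's AOI."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    return sorted(j for j in range(len(positions)) if j != i and d[j] <= radius)

pos = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
near = aware_neighbors(pos, 0, 2.0)
```

    A P2P awareness method is "fully solving" the problem when, for every avatar, its locally computed neighbor set matches this centralized answer at every instant.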


    Generalized minimum-norm perspective shadow maps

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5 2008
    Fan Zhang
    Abstract Shadow mapping has been extensively used for real-time shadow rendering in 3D computer games, though it suffers from the inherent aliasing problems due to its image-based nature. This paper presents an enhanced variant of light space perspective shadow maps to optimize perspective aliasing distribution in possible general cases where the light and view directions are not orthogonal. To be mathematically sound, the generalized representation of perspective aliasing errors has been derived in detail. Our experiments have shown the enhanced shadow quality using our algorithm in dynamic scenes. Copyright © 2008 John Wiley & Sons, Ltd. [source]
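
    For readers unfamiliar with the underlying technique: shadow mapping stores, per texel, the depth nearest the light, and shades a point as shadowed if a nearer occluder shares its texel. A deliberately tiny 1D sketch (the paper's actual contribution, a perspective reparametrization that redistributes aliasing error, is not shown here):

```python
import numpy as np

def build_shadow_map(samples, res):
    """Keep the nearest depth per texel, as seen from the light (1D for brevity)."""
    zbuf = np.full(res, np.inf)
    for x, z in samples:                 # x in [0, 1): light-space coord, z: depth
        t = int(x * res)
        zbuf[t] = min(zbuf[t], z)
    return zbuf

def in_shadow(x, z, zbuf, bias=1e-3):
    """A point is shadowed if its texel stores a strictly nearer occluder."""
    return z > zbuf[int(x * len(zbuf))] + bias

zbuf = build_shadow_map([(0.25, 1.0), (0.25, 3.0), (0.7, 2.0)], res=8)
```

    The aliasing the abstract discusses arises exactly from the finite `res`: many shaded points can map to one texel, and perspective-warped variants such as this paper's change how that resolution is distributed across the view.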


    Directable animation of elastic bodies with point-constraints

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2008
    Ryo Kondo
    Abstract We propose a simple framework for making elastic body animation with point constraints. In general, a physics-based approach to constraint animation offers a variety of animations with physically correct realism, achieved by solving the equations of motion. However, in the digital animation industry, solving the equations of motion is an indirect path to creating more art-directed animations that maintain a plausible realism. Our algorithms provide animators a practical way to make elastic body animations with plausible realism, while effectively using point constraints to offer directorial control. The animation examples illustrate that our framework creates a wide variety of point-constraint animations of elastic objects with greater directability than existing methods. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Extended spatial keyframing for complex character animation

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2008
    Byungkuk Choi
    Abstract As 3D computer animation becomes more accessible, novice users are increasingly able to create high-quality animations. This paper introduces a system for creating highly articulated character animations with a more intuitive setup than the previous research, Spatial Keyframing (SK). Because the main purpose of SK was rapid generation of primitive animation rather than quality animation, we propose Extended Spatial Keyframing (ESK), which exploits a global control structure coupled with multiple sets of spatial keyframes, and hierarchical relationships between controllers. The generated structure can be flexibly embedded into a given rigged character, and the system enables the character to be animated delicately by user performance. During the performance, the movement of the highest-ranking controllers across the control hierarchy is recorded in a layered style to increase the level of detail of the final motions. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    A social agent pedestrian model

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2008
    Andrew Park
    Abstract This paper presents a social agent pedestrian model based on experiments with human subjects. Research in criminology and environmental psychology shows that certain features of the urban environment generate fear in people, causing them to take alternate routes. The Crime Prevention Through Environmental Design (CPTED) strategy has been implemented to reduce fear of crime and crime itself. Our initial prototype of a pedestrian model was developed based on these findings of criminology research. In the course of validating our model, we constructed a virtual environment (VE) that resembles a well-known fear-generating area, in which several decision points were set up. Sixty human subjects were invited to navigate the VE, and their choices of routes and their comments during post-experiment interviews were analyzed using statistical techniques and content analysis. Through our experimental results, we gained new insights into pedestrians' behavior and suggest a new, enhanced, and articulated agent model of a pedestrian. Our research provides not only a realistic pedestrian model but also a new methodology for criminology research. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Video completion and synthesis

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2008
    Chunxia Xiao
    Abstract This paper presents a new exemplar-based framework for video completion, allowing aesthetically pleasing completion of large space-time holes. We regard video completion as a discrete global optimization on a 3D graph embedded in the space-time video volume. We introduce a new objective function that enforces global spatio-temporal consistency among the patches that fill the hole and those surrounding it, in terms of both color similarity and motion similarity. The optimization is solved by a novel algorithm, called weighted priority belief propagation (BP), which alleviates the problems of slow convergence and intolerable storage size that arise with standard BP. The objective function can also handle video texture synthesis by extending an input video texture to a larger texture region. Experiments on a wide variety of video examples with complex dynamic scenes demonstrate the advantages of our method over existing techniques: salient structures and motion information are much better restored. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Perspective-aware cartoon clips synthesis

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2008
    Yueting Zhuang
    Abstract In this paper we propose an approach that allows users to synthesize cartoon clips according to the perspective of the background image. To construct the cartoons smoothly, the character's edge distance and motion-direction distance are shown to be the factors affecting human perception in similarity evaluation, and both are utilized in cartoon clip synthesis. When the generated cartoons are applied to a background image containing perspective, the size of the character is adjusted according to a scaling factor calculated from the vanishing line. The experimental results demonstrate that our approach synthesizes cartoon clips more smoothly than other single-frame reuse strategies, and the generated cartoons, when applied to the background image, are well accepted by human perception. Copyright © 2008 John Wiley & Sons, Ltd. [source]
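
    The scaling-factor step can be sketched directly: under linear perspective, a character standing on the ground plane appears with an on-screen size proportional to the screen-space distance from its feet to the vanishing (horizon) line. The function below is an illustrative reading of that step, not the authors' exact formula:

```python
def perspective_scale(y_feet, y_horizon, y_ref_feet, ref_scale=1.0):
    """Scale factor for a character whose feet sit at y_feet, relative to a
    reference character at y_ref_feet, given the horizon line y_horizon."""
    return ref_scale * (y_feet - y_horizon) / (y_ref_feet - y_horizon)
```

    A character placed halfway between the reference position and the horizon is thus drawn at half the reference size, which matches the intuition of objects shrinking toward the vanishing line.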


    Snap: A time critical decision-making framework for MOUT simulations

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2008
    Shang-Ping Ting
    Abstract Deliberative reasoning based on the rational analysis of various alternatives often requires too much information and may be too slow in time-critical situations. In these situations, humans rely mainly on their intuitions rather than on structured decision-making processes. An important and challenging problem in Military Operations on Urban Terrain (MOUT) simulations is how to generate realistic tactical behaviors for the non-player characters (also known as bots), as these bots often need to make quick decisions in time-critical and uncertain situations. In this paper, we describe our work on Snap, a time-critical decision-making framework for the bots in MOUT simulations. The novel features of Snap are case-based reasoning (CBR) and thin slicing. CBR is used to make quick decisions by comparing the current situation with cases drawn from past experience. Thin slicing is used to model the human ability to quickly form situation awareness in uncertain and complex situations from key cues in partial information. To assess the effectiveness of Snap, we have integrated it into Twilight City, a virtual environment for MOUT simulations. Experimental results show that Snap is very effective in generating quick decisions in time-critical situations in MOUT simulations. Copyright © 2008 John Wiley & Sons, Ltd. [source]
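    The CBR retrieval step can be illustrated with a toy function: situations are feature vectors, and the bot reuses the action of the nearest stored case. The feature encoding and action names here are hypothetical; Snap's actual case matching is certainly richer:

```python
import math

def retrieve_action(situation, case_base):
    # Case-based reasoning at its simplest: reuse the action of the
    # stored case whose feature vector is closest to the situation.
    best = min(case_base,
               key=lambda case: math.dist(case["features"], situation))
    return best["action"]

# Hypothetical two-feature case base for a MOUT bot.
cases = [
    {"features": (1.0, 0.0), "action": "take_cover"},
    {"features": (0.0, 1.0), "action": "advance"},
]
print(retrieve_action((0.9, 0.1), cases))  # take_cover
```

    Retrieval of this kind is fast precisely because it replaces deliberative search with a single nearest-neighbor lookup over experience.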


    3D virtual simulator for breast plastic surgery

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2008
    Youngjun Kim
    Abstract We propose novel 3D virtual simulation software for breast plastic surgery. Our software comprises two processes: 3D torso modeling and virtual simulation of the surgical result. First, image-based modeling is performed to obtain the female subject's 3D torso data. Our image-based modeling method utilizes a template model, which is deformed according to the patient's photographs. For the deformation, we apply Procrustes analysis and radial basis functions (RBF). To enhance realism, the subject's photographs are mapped onto the mesh. Second, from the modeled subject data, we simulate the subject's appearance after plastic surgery by morphing the shape of the breasts. We solve the simulation problem with an example-based approach: the subject's post-operative shape is predicted from the relations between paired sets of feature points extracted from previous patients' photographs taken before and after surgery. Copyright © 2008 John Wiley & Sons, Ltd. [source]
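    RBF-driven landmark deformation of the kind the modeling step relies on can be sketched as follows. The Gaussian kernel and its `sigma` are assumptions for illustration (thin-plate splines are another common choice), and the template-fitting details are omitted:

```python
import numpy as np

def rbf_warp(src, dst, pts, sigma=1.0):
    # Deform pts by a Gaussian RBF interpolant that maps each source
    # landmark in src exactly onto its matching landmark in dst.
    def kernel(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    K = kernel(src, src)
    weights = np.linalg.solve(K, dst - src)  # interpolate displacements
    return pts + kernel(pts, src) @ weights

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = src + np.array([[0.1, 0.0], [0.0, 0.2], [0.0, 0.0]])
warped = rbf_warp(src, dst, src)
print(np.allclose(warped, dst))  # True
```

    The same interpolant evaluated at arbitrary mesh vertices carries the landmark correspondences smoothly across the whole template.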


    Physiologically correct animation of the heart

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2008
    Kyoungju Park
    Abstract Physiologically correct animation of the heart should incorporate the non-homogeneous and nonlinear motions of the heart. We therefore introduce a methodology that estimates deformations from volume images and utilizes them for animation. Since volume images are acquired at regular slicing intervals, they miss the information between slices, and deformation can be recovered only on the slices themselves. The estimated finite element models (FEMs) are therefore coarse meshes of bulky elements whose sizes depend on the slice intervals. Thus, we introduce a method of generating a detailed model using implicit surfaces and transferring the deformation from the FEM to the implicit surfaces. An implicit-surface heart model is reconstructed from contour data points and then cross-parameterized to the heart FEM, whose time-varying deformation has been estimated by tracking the inner surfaces of the heart wall. The implicit-surface heart model is composed of four heart walls blended into one model. A correspondence map between the source and target meshes is built with a template-fitting method. Deformation coupling then transfers the deformation of the coarse heart FEM to the detailed implicit model by factorizing linear equations. We demonstrate the system and show the resulting deformation of an implicit heart model. Copyright © 2008 John Wiley & Sons, Ltd. [source]
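    As a stand-in for the deformation-coupling solve (which the abstract only names), the following shows the simplest way to carry displacements from a coarse mesh to a detailed surface: inverse-distance weighting over the k nearest coarse vertices. This is an editorial sketch, not the paper's factorized linear system:

```python
import numpy as np

def transfer_displacements(coarse_pts, coarse_disp, detail_pts, k=3):
    # Move each detail vertex by an inverse-distance weighted average
    # of the displacements of its k nearest coarse vertices.
    moved = np.empty_like(detail_pts)
    for i, p in enumerate(detail_pts):
        d = np.linalg.norm(coarse_pts - p, axis=1)
        nearest = np.argsort(d)[:k]
        w = 1.0 / (d[nearest] + 1e-8)  # avoid division by zero
        w /= w.sum()
        moved[i] = p + w @ coarse_disp[nearest]
    return moved
```

    A sanity check: if every coarse vertex carries the same displacement, every detail vertex is translated by exactly that amount, whatever k is.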


    Analytical inverse kinematics with body posture control

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2 2008
    Marcelo Kallmann
    Abstract This paper presents a novel whole-body analytical inverse kinematics (IK) method that integrates collision avoidance and customizable body control for animating reaching tasks in real time. Whole-body control is achieved by interpolating pre-designed key body postures, which are organized as a function of the direction to the goal to be reached. Arm postures are computed by the analytical IK solution for human-like arms and legs, extended with a new, simple search method for achieving postures that avoid joint limits and collisions. In addition, a new IK resolution is presented that directly solves for joints parameterized in the swing-and-twist decomposition. The overall method is simple to implement, fast, and accurate, and therefore suitable for interactive applications controlling the hands of characters. The source code of the IK implementation is provided. Copyright © 2007 John Wiley & Sons, Ltd. [source]
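    The flavor of an analytical IK solution can be shown with the classic planar two-link case solved by the law of cosines; the paper's swing-and-twist parameterization of full human-like limbs is more involved, so treat this only as a sketch of the closed-form style:

```python
import math

def two_link_ik(x, y, l1, l2):
    # Planar two-link IK via the law of cosines: return the shoulder
    # and elbow angles placing the end effector at (x, y).
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)  # elbow angle ("elbow-down" branch)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Forward-kinematics check: the solved angles reach the target.
t1, t2 = two_link_ik(1.0, 1.0, 1.0, 1.0)
ex = math.cos(t1) + math.cos(t1 + t2)
ey = math.sin(t1) + math.sin(t1 + t2)
print(round(ex, 6), round(ey, 6))  # 1.0 1.0
```

    Because the solution is closed-form rather than iterative, it runs in constant time per query, which is what makes analytical IK attractive for interactive character control.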