Real-time
Terms modified by Real-time

Selected Abstracts

Dewetting of an Organic Semiconductor Thin Film Observed in Real-time
ADVANCED ENGINEERING MATERIALS, Issue 4 2009
Stefan Kowarik
We study the growth and the post-growth dewetting process of the organic semiconductor diindenoperylene (DIP) using real-time X-ray reflectivity measurements. We show that a DIP monolayer deposited in UHV onto silicon oxide dewets via the formation of bilayer islands. From the time-resolved structural data we estimate the rate constant for interlayer diffusion of DIP molecules. Post-growth AFM measurements confirm the conclusions from the X-ray data and show the morphology of the dewetted film. [source]

Real-time ultrasound-guided spinal anesthesia in patients with a challenging spinal anatomy: two case reports
ACTA ANAESTHESIOLOGICA SCANDINAVICA, Issue 2 2010
K. J. CHIN
Spinal anesthesia may be challenging in patients with poorly palpable surface landmarks or abnormal spinal anatomy. Pre-procedural ultrasound imaging of the lumbar spine can help by providing additional anatomical information, thus permitting a more accurate estimation of the appropriate needle insertion site and trajectory. However, actual needle insertion in the pre-puncture ultrasound-assisted technique remains a 'blind' procedure. We describe two patients with an abnormal spinal anatomy in whom ultrasound-assisted spinal anesthesia was unsuccessful. Successful dural puncture was subsequently achieved using a technique of real-time ultrasound-guided spinal anesthesia. This may be a useful option in patients in whom landmark-guided and ultrasound-assisted techniques have failed. [source]

Safety and efficacy of sonographic-guided random real-time core needle biopsy of the liver
JOURNAL OF CLINICAL ULTRASOUND, Issue 3 2009
Siddharth A. Padia MD
Abstract: Purpose. To determine the safety and efficacy of real-time, sonographic-guided, random percutaneous needle biopsy of the liver in a tertiary medical center. Method. From an IRB-approved biopsy database, all patients who had random liver biopsy performed over a 24-month period were selected. In 350 patients, 539 random percutaneous needle biopsies of the liver were performed under real-time sonographic visualization. The following were recorded from the electronic medical record: patient demographics; indication for biopsy procedure; radiologist's name; needle type and gauge and number of passes; use and amount of IV sedation or anesthesia; adequacy of the specimen; and complications following the procedure. Result. Of 539 biopsies, 378 (70%) biopsy procedures were performed on liver transplant recipients. Of the biopsy procedures in nontransplant patients, 81/161 (50%) concurrently underwent biopsy of a focal liver mass. An 18-gauge automated core biopsy needle was used in 536/539 (99%). Median number of passes per biopsy procedure was 1 (mean, 1.7; range, 1–6). Sedation using midazolam and fentanyl was used in 483/539 (90%). There were only 8 inadequate specimens (1.5%, [2.3, upper 95% confidence limit, fully described in Statistical Analysis]). Complications were identified in 11/539 biopsy procedures (2.0%, [2.6, upper 95% confidence limit]): 5 with severe postprocedural pain, 3 with symptomatic hemorrhage, 2 with infection, and 1 with a rash. There were no sedation-related complications and no deaths related to the procedure. Conclusion. Real-time, sonographic-guided, random core-needle liver biopsy is a safe and highly effective procedure. © 2009 Wiley Periodicals, Inc. J Clin Ultrasound 2009 [source]
Real-time accelerated interactive MRI with adaptive TSENSE and UNFOLD
MAGNETIC RESONANCE IN MEDICINE, Issue 2 2003
Michael A. Guttman
Abstract: Reduced field-of-view (FOV) acceleration using time-adaptive sensitivity encoding (TSENSE) or unaliasing by Fourier encoding the overlaps using the temporal dimension (UNFOLD) can improve the depiction of motion in real-time MRI. However, increased computational resources are required to maintain a high frame rate and low latency in image reconstruction and display. A high-performance software system has been implemented to perform TSENSE and UNFOLD reconstructions for real-time MRI with interactive, on-line display. Images were displayed in the scanner room to investigate image-guided procedures. Examples are shown for normal volunteers and cardiac interventional experiments in animals using a steady-state free precession (SSFP) sequence. In order to maintain adequate image quality for interventional procedures, the imaging rate was limited to seven frames per second after an acceleration factor of 2 with a voxel size of 1.8 × 3.5 × 8 mm. Initial experiences suggest that TSENSE and UNFOLD can each improve the compromise between spatial and temporal resolution in real-time imaging, and can function well in interactive imaging. UNFOLD places no additional constraints on receiver coils, and is therefore more flexible than SENSE methods; however, the temporal image filtering can blur motion and reduce the effective acceleration. Methods are proposed to overcome the challenges presented by the use of TSENSE in interactive imaging. TSENSE may be temporarily disabled after changing the imaging plane to avoid transient artifacts as the sensitivity coefficients adapt. For imaging with a combination of surface and interventional coils, a hybrid reconstruction approach is proposed whereby UNFOLD is used for the interventional coils, and TSENSE with or without UNFOLD is used for the surface coils. Magn Reson Med 50:315–321, 2003. Published 2003 Wiley-Liss, Inc. [source]

Real-time, online teaching to enhance undergraduate learning
MEDICAL EDUCATION, Issue 11 2009
David C Howlett
No abstract is available for this article. [source]

Intermethod Reliability of Real-time Versus Delayed Videotaped Evaluation of a High-fidelity Medical Simulation Septic Shock Scenario
ACADEMIC EMERGENCY MEDICINE, Issue 9 2009
Justin B. Williams MD
Abstract: Objectives: High-fidelity medical simulation (HFMS) is increasingly utilized in resident education and evaluation. No criterion standard of assessing performance currently exists. This study compared the intermethod reliability of real-time versus videotaped evaluation of HFMS participant performance. Methods: Twenty-five emergency medicine residents and one transitional resident participated in a septic shock HFMS scenario. Four evaluators assessed the performance of participants on technical (26-item yes/no completion) and nontechnical (seven-item, five-point Likert scale assessment) scorecards. Two evaluators provided assessment in real time, and two provided delayed videotape review. After 13 scenarios, evaluators crossed over and completed the scenarios in the opposite method. Real-time evaluations were completed immediately at the end of the simulation; videotape reviewers were allowed to review the scenarios with no time limit. Agreement between raters was tested using the intraclass correlation coefficient (ICC), with Cronbach's alpha used to measure consistency among items on the scores on the checklists. Results: Bland-Altman plot analysis of both conditions revealed substantial agreement between the real-time and videotaped review scores by reviewers. The mean difference between the reviewers was 0.0 (95% confidence interval [CI] = −3.7 to 3.6) on the technical evaluation and −1.6 (95% CI = −11.4 to 8.2) on the nontechnical scorecard assessment. Comparison of evaluations for the videotape technical scorecard demonstrated a Cronbach's alpha of 0.914, with an ICC of 0.842 (95% CI = 0.679 to 0.926), and the real-time technical scorecard demonstrated a Cronbach's alpha of 0.899, with an ICC of 0.817 (95% CI = 0.633 to 0.914), demonstrating excellent intermethod reliability. Comparison of evaluations for the videotape nontechnical scorecard demonstrated a Cronbach's alpha of 0.888, with an ICC of 0.798 (95% CI = 0.600 to 0.904), and the real-time nontechnical scorecard demonstrated a Cronbach's alpha of 0.833, with an ICC of 0.714 (95% CI = 0.457 to 0.861), demonstrating substantial interrater reliability. The raters were consistent in agreement on performance within each level of training, as the analysis of variance demonstrated no significant differences between the technical scorecard (p = 0.176) and nontechnical scorecard (p = 0.367). Conclusions: Real-time and videotape-based evaluations of resident performance of both technical and nontechnical skills during an HFMS septic shock scenario provided equally reliable methods of assessment. [source]
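As a reading aid for the reliability statistics in the abstract above, the following is a minimal sketch of how Cronbach's alpha and a Shrout–Fleiss intraclass correlation coefficient can be computed from a subjects-by-raters score matrix. It is not the authors' analysis code, and the exact ICC form used in the study is not stated in the abstract, so ICC(2,1) (two-way random effects, absolute agreement) is assumed here; the example scores are made up.

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_subjects, k_items) array. Classic internal-consistency formula."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of the summed scores
    return (k / (k - 1)) * (1.0 - item_var / total_var)

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement (Shrout & Fleiss)."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between-subject variation
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between-rater variation
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical data: 6 participants, total checklist scores from 2 raters.
scores = np.array([[22, 21], [18, 19], [25, 24], [15, 17], [20, 20], [23, 22]])
print(cronbach_alpha(scores), icc_2_1(scores))
```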
Real-time multiplex PCR assay for detection of Yersinia pestis and Yersinia pseudotuberculosis
APMIS, Issue 1 2009
PIRJO MATERO
A multiplex real-time polymerase chain reaction (PCR) assay was developed for the detection of Yersinia pestis and Yersinia pseudotuberculosis. The assay includes four primer pairs, two of which are specific for Y. pestis, one for Y. pestis and Y. pseudotuberculosis and one for bacteriophage lambda; the latter was used as an internal amplification control. The Y. pestis-specific target genes in the assay were ypo2088, a gene coding for a putative methyltransferase, and the pla gene coding for the plasminogen activator. In addition, the wzz gene was used as a target to specifically identify both Y. pestis and the closely related Y. pseudotuberculosis group. The primer and probe sets described for the different genes can be used either in single or in multiplex PCR assays because the individual probes were designed with different fluorochromes. The assays were found to be both sensitive and specific; the lower limit of detection was 10–100 fg of extracted Y. pestis or Y. pseudotuberculosis total DNA. The sensitivity of the tetraplex assay was determined to be 1 cfu for the ypo2088 and pla probes labelled with FAM and JOE fluorescent dyes, respectively. [source]

Interactive animation of virtual humans based on motion capture data
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5-6 2009
Franck Multon
Abstract: This paper presents a novel, parametric framework for synthesizing new character motions from existing motion capture data. Our framework can conduct morphological adaptation as well as kinematic and physically-based corrections. All these solvers are organized in layers so that they can easily be combined. Given locomotion as an example, the system automatically adapts the motion data to the size of the synthetic figure and to its environment; the character will correctly step over complex ground shapes and counteract external forces applied to the body. Our framework is based on a frame-based solver, which makes it possible to animate hundreds of humanoids with different morphologies in real-time. It is particularly suitable for interactive applications such as video games and virtual reality, where a user interacts in an unpredictable way. Copyright © 2009 John Wiley & Sons, Ltd. [source]
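The "solvers organized in layers" idea in the abstract above can be pictured as a per-frame pipeline in which each correction layer takes the current pose and returns an adjusted one. The sketch below is only an architectural illustration under that assumption; the layer names and the simple scaling and ground-clearance rules are hypothetical and not taken from the paper.

```python
from typing import Callable, Dict, List

Pose = Dict[str, tuple]  # joint name -> (x, y, z) world position, simplified

def morphological_layer(scale: float) -> Callable[[Pose], Pose]:
    """Adapt source-actor joint positions to a target figure by uniform scaling."""
    def apply(pose: Pose) -> Pose:
        return {j: (x * scale, y * scale, z * scale) for j, (x, y, z) in pose.items()}
    return apply

def ground_clearance_layer(ground_height: float) -> Callable[[Pose], Pose]:
    """Kinematic correction: keep the feet from sinking below the ground plane."""
    def apply(pose: Pose) -> Pose:
        fixed = dict(pose)
        for joint in ("left_foot", "right_foot"):
            if joint in fixed and fixed[joint][1] < ground_height:
                x, _, z = fixed[joint]
                fixed[joint] = (x, ground_height, z)
        return fixed
    return apply

def solve_frame(pose: Pose, layers: List[Callable[[Pose], Pose]]) -> Pose:
    """Frame-based solver: run the correction layers in order on a single frame."""
    for layer in layers:
        pose = layer(pose)
    return pose

layers = [morphological_layer(scale=0.9), ground_clearance_layer(ground_height=0.0)]
frame = {"hip": (0.0, 1.0, 0.0), "left_foot": (0.1, -0.02, 0.0), "right_foot": (-0.1, 0.05, 0.0)}
print(solve_frame(frame, layers))
```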
Fast simulation of skin sliding
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2-3 2009
Xiaosong Yang
Abstract: Skin sliding is the phenomenon of the skin moving over underlying layers of fat, muscle and bone. Due to the complex interconnections between these separate layers and their differing elasticity properties, it is difficult to model and expensive to compute. We present a novel method to simulate this phenomenon in real-time by remeshing the surface based on a parameter-space resampling. In order to evaluate the surface parametrization, we borrow a technique from structural engineering known as the force density method (FDM), which solves for an energy-minimizing form with a sparse linear system. Our method creates a realistic approximation of skin sliding in real-time, reducing texture distortions in the region of the deformation. In addition it is flexible, simple to use, and can be incorporated into any animation pipeline. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Interactive shadowing for 2D Anime
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2-3 2009
Eiji Sugisaki
Abstract: In this paper, we propose an instant shadow generation technique for 2D animation, especially Japanese Anime. In traditional 2D Anime production, the entire animation, including shadows, is drawn by hand, so it takes a long time to complete. Shadows play an important role in the creation of symbolic visual effects. However, shadows are not always drawn, due to time constraints and a lack of animators, especially when the production schedule is tight. To solve this problem, we develop an easy shadowing approach that enables animators to easily create a layer of shadow and its animation based on the character's shapes. Our approach is both instant and intuitive. The only inputs required are the character or object shapes in the input animation sequence, with the alpha values generally used in the Anime production pipeline. First, shadows are automatically rendered on a virtual plane by using a Shadow Map [1] based on these inputs. Then the rendered shadows can be edited by simple operations and simplified with a Gaussian filter. Several special effects such as blurring can be applied to the rendered shadow at the same time. Compared to existing approaches, ours is more efficient and effective at handling automatic shadowing in real-time. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Analytical inverse kinematics with body posture control
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2 2008
Marcelo Kallmann
Abstract: This paper presents a novel whole-body analytical inverse kinematics (IK) method integrating collision avoidance and customizable body control for animating reaching tasks in real-time. Whole-body control is achieved with the interpolation of pre-designed key body postures, which are organized as a function of the direction to the goal to be reached. Arm postures are computed by the analytical IK solution for human-like arms and legs, extended with a new simple search method for achieving postures that avoid joint limits and collisions. In addition, a new IK resolution is presented that directly solves for joints parameterized in the swing-and-twist decomposition. The overall method is simple to implement, fast, and accurate, and therefore suitable for interactive applications controlling the hands of characters. The source code of the IK implementation is provided. Copyright © 2007 John Wiley & Sons, Ltd. [source]
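The swing-and-twist parameterization mentioned in the abstract above splits a joint rotation into a twist about a chosen axis and a swing perpendicular to it. The following is a minimal, self-contained sketch of the standard quaternion swing–twist decomposition; it illustrates the parameterization only and is not the paper's IK solver.

```python
import numpy as np

def normalize(q):
    return q / np.linalg.norm(q)

def quat_conjugate(q):
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_multiply(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    ])

def swing_twist(q, twist_axis):
    """Decompose a unit quaternion as q = swing * twist, with twist about twist_axis."""
    q = normalize(np.asarray(q, dtype=float))
    axis = np.asarray(twist_axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    v = q[1:]                                  # vector part of q
    proj = np.dot(v, axis) * axis              # projection onto the twist axis
    twist = np.array([q[0], *proj])
    if np.linalg.norm(twist) < 1e-9:           # 180-degree swing: twist is identity
        twist = np.array([1.0, 0.0, 0.0, 0.0])
    twist = normalize(twist)
    swing = quat_multiply(q, quat_conjugate(twist))
    return swing, twist

# Example: decompose a rotation with respect to the local x (bone) axis.
q = normalize(np.array([0.8, 0.2, 0.5, 0.1]))
swing, twist = swing_twist(q, [1.0, 0.0, 0.0])
print(swing, twist, quat_multiply(swing, twist))   # the product reconstructs q
```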
Approximating character biomechanics with real-time weighted inverse kinematics
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 4-5 2007
Michael Meredith
Abstract: In this paper we show how the expensive, offline dynamic simulations of character motions can be approximated using the cheaper weighted inverse kinematics (WIK)-based approach. We first show how a dynamics-based approach can be used to produce a motion that is representative of a real target actor using the motion of a different source actor and the biomechanics of the target actor. This is compared against a process that uses WIK to achieve the same motion-mapping goal without direct biomechanical input. The parallels between the results of the two approaches are described and further reasoned from a mathematical perspective. Thus we demonstrate how character biomechanics can be approximated with real-time WIK. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Real-time locomotion control by sensing gloves
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5 2006
Taku Komura
Abstract: Sensing gloves are often used as an input device for virtual 3D games. We propose a new method to control characters such as humans or animals in real-time by using sensing gloves. Based on existing motion data of the body, a new method to map the hand motion of the user to the locomotion of 3D characters in real-time is proposed. The method was applied to control the locomotion of characters such as humans or dogs. Various motions such as trotting, running, hopping, and turning could be produced. As the computational cost of our method is low, the response of the system is short enough to satisfy the real-time requirements that are essential for games. Using our method, users can control their characters more intuitively and precisely than with previous control devices such as mice, keyboards or joysticks. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Real-time simulation of watery paint
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2005
Tom Van Laerhoven
Abstract: Existing work on applications for thin watery paint is mostly focused on the automatic generation of painterly-style images from input images, ignoring the fact that painting is a process that intuitively should be interactive. Efforts to create real-time interactive systems are limited to a single paint medium, and results often suffer from a trade-off between real-timeness and simulation complexity. We report on the design of a new system that allows the real-time, interactive creation of images with thin watery paint. We mainly target the simulation of watercolor, but the system is also capable of simulating gouache and Oriental black ink. The motion of paint is governed by both physically based and heuristic rules in a layered canvas design. A final image is rendered by optically composing the layers using the Kubelka–Munk diffuse reflectance model. All algorithms that participate in the dynamics phase and the rendering phase of the simulation are implemented on graphics hardware. Images made with the system contain the typical effects that can be recognized in images produced with real thin paint, like the dark-edge effect, watercolor glazing, wet-on-wet painting and the use of different pigment types. Copyright © 2005 John Wiley & Sons, Ltd. [source]
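For reference, the Kubelka–Munk model mentioned in the abstract above expresses the reflectance and transmittance of a pigment layer from its absorption and scattering coefficients, and stacked layers are combined by accounting for inter-reflection between them. Below is a minimal single-wavelength sketch of those standard K–M equations; the pigment coefficients are made-up values, and the paper's GPU implementation applies this per colour channel across whole canvas layers.

```python
import math

def km_layer(K, S, d):
    """Reflectance R and transmittance T of a layer with absorption K,
    scattering S (> 0) and thickness d (standard Kubelka-Munk hyperbolic form)."""
    a = 1.0 + K / S
    b = math.sqrt(a * a - 1.0)
    sh = math.sinh(b * S * d)
    ch = math.cosh(b * S * d)
    denom = a * sh + b * ch
    return sh / denom, b / denom

def km_composite(R1, T1, R2, T2):
    """Composite layer 1 over layer 2, including inter-reflection between them."""
    denom = 1.0 - R1 * R2
    return R1 + (T1 * T1 * R2) / denom, (T1 * T2) / denom

# Hypothetical pigments: a thin wash over a denser layer (single wavelength).
R_top, T_top = km_layer(K=0.2, S=0.6, d=0.5)
R_bot, T_bot = km_layer(K=0.9, S=0.4, d=1.0)
print(km_composite(R_top, T_top, R_bot, T_bot))
```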
The virtual interaction panel: an easy control tool in augmented reality systems
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2004
M. L. Yuan
Abstract: In this paper, we propose and develop an easy control tool called the Virtual Interaction Panel (VirIP), which can be used to control Augmented Reality (AR) systems. This tool is composed of two parts: the design of the VirIPs and the tracking of an interaction pen using a Restricted Coulomb Energy (RCE) neural network. The VirIP is composed of virtual buttons, which carry meaningful information and can be activated by an interaction pen during the augmentation process. The interaction pen is a general pen-like object with a certain color distribution. It is tracked using an RCE network in real-time and used to trigger the VirIPs for AR systems. In our system, only one camera is used for capturing the real world. Therefore, 2D information is used to trigger the virtual buttons to control the AR systems. The proposed method is real-time because the RCE-based image segmentation for a small region is fast. It can be used to control AR systems quite easily without any annoying sensors attached to entangling cables. This proposed method has good potential in many AR applications in manufacturing, such as assembly without the need for object recognition, collaborative product design, system control, etc. Copyright © 2004 John Wiley & Sons, Ltd. [source]

A real-time computer-controlled simulator: For control systems
COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 2 2008
I. H. Altas
Abstract: A real-time simulator to accompany automatic control system courses is introduced. The design and realization methods and processes are discussed. The simulator is basically a computer-controlled system that implements the developed, user-friendly virtual interface software to control the speed of a small DC motor. The virtual interface includes digital implementation models of the classical proportional, integral, and derivative controllers, all combinations of them, as well as a fuzzy logic controller. The user is able to select and adjust the parameters of any desired controller that is defined and represented virtually. © 2008 Wiley Periodicals, Inc. Comput Appl Eng Educ 16: 115–126, 2008; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20130 [source]
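As a companion to the controller models mentioned in the abstract above, here is a minimal sketch of a discrete, positional-form PID update such as a virtual-interface motor speed loop might run each sample period. The gains, sample time and toy motor model are arbitrary illustration values, not those of the described simulator.

```python
class DiscretePID:
    """Positional-form PID: u = Kp*e + Ki*Ts*sum(e) + Kd*(e - e_prev)/Ts."""

    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.ts
        derivative = (error - self.prev_error) / self.ts
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Illustration: drive a crude first-order motor model toward 100 rad/s.
pid = DiscretePID(kp=0.8, ki=2.0, kd=0.01, ts=0.01)
speed = 0.0
for _ in range(200):
    u = pid.update(setpoint=100.0, measurement=speed)
    speed += 0.01 * (-0.5 * speed + 5.0 * u)   # toy motor dynamics, not the real plant
print(round(speed, 1))
```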
Virtual laboratory: A distributed collaborative environment
COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 1 2004
Tiranee Achalakul
Abstract: This article proposes the design framework of a distributed, real-time collaborative architecture. The architecture concept allows information to be fused, disseminated, and interpreted collaboratively and in real-time among researchers living across continents. The architecture is designed based on the distributed object technology DCOM. In our framework, every module can be viewed as an object. These objects communicate and exchange data with one another via a set of interfaces and connection points. We constructed the virtual laboratory based on the proposed architecture. The laboratory allows multiple analysts to work collaboratively through a standard web browser using a set of tools, namely chat, whiteboard, audio/video exchange, file transfer and application sharing. Several existing technologies, such as NetMeeting, are integrated to provide collaborative functions. Finally, the virtual laboratory quality evaluation is described with an example application of remote collaboration in satellite image fusion and analysis. © 2004 Wiley Periodicals, Inc. Comput Appl Eng Educ 12: 44–53, 2004; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20008 [source]

Embedded Implicit Stand-Ins for Animated Meshes: A Case of Hybrid Modelling
COMPUTER GRAPHICS FORUM, Issue 1 2010
D. Kravtsov
Abstract: In this paper, we address shape modelling problems encountered in computer animation and computer games development that are difficult to solve just using polygonal meshes. Our approach is based on a hybrid-modelling concept that combines polygonal meshes with implicit surfaces. A hybrid model consists of an animated polygonal mesh and an approximation of this mesh by a convolution surface stand-in that is embedded within it or is attached to it. The motions of both objects are synchronised using a rigging skeleton. We model the interaction between an animated mesh object and a viscoelastic substance, which is normally represented in an implicit form. Our approach is aimed at achieving verisimilitude rather than physically based simulation. The adhesive behaviour of the viscous object is modelled using geometric blending operations on the corresponding implicit surfaces. Another application of this approach is the creation of metamorphosing implicit surface parts that are attached to an animated mesh. A prototype implementation of the proposed approach and several examples of modelling and animation with near real-time preview times are presented. [source]

Direct Visualization of Deformation in Volumes
COMPUTER GRAPHICS FORUM, Issue 3 2009
Stef Busking
Abstract: Deformation is a topic of interest in many disciplines. In particular, in medical research, deformations of surfaces and even entire volumetric structures are of interest. Clear visualization of such deformations can lead to important insight into growth processes and progression of disease. We present new techniques for direct focus+context visualization of deformation fields representing transformations between pairs of volumetric datasets. Typically, such fields are computed by performing a non-rigid registration between two data volumes. Our visualization is based on direct volume rendering and uses the GPU to compute and interactively visualize features of these deformation fields in real-time. We integrate visualization of the deformation field with visualization of the scalar volume affected by the deformations. Furthermore, we present a novel use of texturing in volume-rendered visualizations to show additional properties of the vector field on surfaces in the volume. [source]
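One widely used local feature of such deformation fields is the determinant of the Jacobian of the mapping, which measures local volume change (values below 1 indicate compression, above 1 expansion). The sketch below computes it on the CPU with NumPy for a displacement field sampled on a regular grid; it only illustrates the kind of per-voxel feature such a visualization can colour-code and is not the paper's GPU implementation.

```python
import numpy as np

def jacobian_determinant(displacement, spacing=(1.0, 1.0, 1.0)):
    """displacement: (D, H, W, 3) array giving u(x); the deformation is phi(x) = x + u(x).
    Returns det(I + grad u) per voxel."""
    d, h, w, _ = displacement.shape
    jac = np.zeros((d, h, w, 3, 3))
    for i in range(3):                                    # displacement component u_i
        grads = np.gradient(displacement[..., i], *spacing, axis=(0, 1, 2))
        for j in range(3):                                # derivative along grid axis j
            jac[..., i, j] = grads[j]
    jac += np.eye(3)                                      # d(phi)/dx = I + du/dx
    return np.linalg.det(jac)

# Synthetic example: a field that uniformly stretches the volume by 10% along axis 0.
grid = np.zeros((16, 16, 16, 3))
coord = np.arange(16).reshape(16, 1, 1)
grid[..., 0] = 0.1 * coord                                # u_0 = 0.1 * x_0
print(jacobian_determinant(grid).mean())                  # approximately 1.1
```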
Fast GPU-based Adaptive Tessellation with CUDA
COMPUTER GRAPHICS FORUM, Issue 2 2009
Michael Schwarz
Abstract: Compact surface descriptions like higher-order surfaces are popular representations for both modeling and animation. However, for fast graphics-hardware-assisted rendering, they usually need to be converted to triangle meshes. In this paper, we introduce a new framework for performing on-the-fly crack-free adaptive tessellation of surface primitives completely on the GPU. Utilizing CUDA and its flexible memory write capabilities, we parallelize the tessellation task at the level of single surface primitives. We are hence able to derive tessellation factors, perform surface evaluation and generate the tessellation topology in real-time, even for large collections of primitives. We demonstrate the power of our framework by applying it to both bicubic rational Bézier patches and PN triangles. [source]

Shrinkability Maps for Content-Aware Video Resizing
COMPUTER GRAPHICS FORUM, Issue 7 2008
Yi-Fei Zhang
Abstract: A novel method is given for content-aware video resizing, i.e. targeting video to a new resolution (which may involve an aspect ratio change) from the original. We precompute a per-pixel cumulative shrinkability map which takes into account both the importance of each pixel and the need for continuity in the resized result. (If both x and y resizing are required, two separate shrinkability maps are used; otherwise one suffices.) A random walk model is used for efficient offline computation of the shrinkability maps. The latter are stored with the video to create a multi-sized video, which permits arbitrary-sized new versions of the video to be created later very efficiently in real-time, e.g. by a video-on-demand server supplying video streams to multiple devices with different resolutions. These shrinkability maps are highly compressible, so the resulting multi-sized videos are typically less than three times the size of the original compressed video. A scaling function operates on the multi-sized video to give the new pixel locations in the result, yielding a high-quality content-aware resized video. Despite the great efficiency and low storage requirements of our method, we produce results of comparable quality to state-of-the-art methods for content-aware image and video resizing. [source]

Real-Time Depth-of-Field Rendering Using Point Splatting on Per-Pixel Layers
COMPUTER GRAPHICS FORUM, Issue 7 2008
Sungkil Lee
Abstract: We present a real-time method for rendering a depth-of-field effect based on per-pixel layered splatting, where source pixels are scattered onto one of the three layers of a destination pixel. In addition, the missing information behind foreground objects is filled with an additional image of the areas occluded by nearer objects. The method creates high-quality depth-of-field results even in the presence of partial occlusion, without the major artifacts often present in previous real-time methods. The method can also be applied to simulating defocused highlights. The entire framework is accelerated by the GPU, enabling real-time post-processing for both off-line and interactive applications. [source]
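The per-pixel layer idea in the abstract above relies on knowing how blurred each source pixel should be, which in a thin-lens camera model is given by the circle-of-confusion (CoC) diameter. The sketch below computes a signed CoC from depth and buckets pixels into near/focus/far layers; the thin-lens formula is standard, but the three-way classification threshold and parameter values here are made-up illustrations, not the paper's exact criteria.

```python
import numpy as np

def signed_coc(depth, focus_dist, focal_len, aperture_diam):
    """Signed circle-of-confusion diameter (thin-lens model, all lengths in metres).
    Negative values: nearer than the focus plane; positive: farther away."""
    return aperture_diam * focal_len * (depth - focus_dist) / (depth * (focus_dist - focal_len))

def classify_layers(depth, focus_dist, focal_len=0.05, aperture_diam=0.025, in_focus_coc=1e-4):
    coc = signed_coc(depth, focus_dist, focal_len, aperture_diam)
    layers = np.full(depth.shape, "focus", dtype=object)
    layers[coc < -in_focus_coc] = "near"      # blurred foreground, may occlude the focus layer
    layers[coc > in_focus_coc] = "far"        # blurred background
    return coc, layers

# Toy 1D "depth buffer" in metres, focused at 2 m.
depth = np.array([0.8, 1.5, 2.0, 2.5, 6.0])
coc, layers = classify_layers(depth, focus_dist=2.0)
print(np.round(coc * 1000, 3), layers)        # CoC in millimetres per pixel
```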
High-Quality Adaptive Soft Shadow Mapping
COMPUTER GRAPHICS FORUM, Issue 3 2007
Gaël Guennebaud
Abstract: The recent soft shadow mapping technique [GBP06] allows the rendering in real-time of convincing soft shadows on complex and dynamic scenes using a single shadow map. While attractive, this method suffers from shadow overestimation and becomes both expensive and approximate when dealing with large penumbrae. This paper proposes new solutions removing these limitations and hence providing an efficient and practical technique for soft shadow generation. First, we propose a new visibility computation procedure based on the detection of occluder contours that is more accurate and faster while reducing aliasing. Secondly, we present a shadow map multi-resolution strategy that keeps the computation complexity almost independent of the light size while maintaining high-quality rendering. Finally, we propose a view-dependent adaptive strategy that automatically reduces the screen resolution in regions of large penumbrae, thus allowing us to keep very high frame rates in any situation. [source]

Visyllable Based Speech Animation
COMPUTER GRAPHICS FORUM, Issue 3 2003
Sumedha Kshirsagar
Visemes are the visual counterpart of phonemes. Traditionally, the speech animation of 3D synthetic faces involves extraction of visemes from input speech followed by the application of co-articulation rules to generate realistic animation. In this paper, we take a novel approach for speech animation, using visyllables, the visual counterpart of syllables. The approach results in a concatenative, visyllable-based speech animation system. The key contribution of this paper lies in two main areas. Firstly, we define a set of visyllable units for spoken English along with the associated phonological rules for valid syllables. Based on these rules, we have implemented a syllabification algorithm that allows segmentation of a given phoneme stream into syllables and subsequently visyllables. Secondly, we have recorded a database of visyllables using a facial motion capture system. The recorded visyllable units are post-processed semi-automatically to ensure continuity at the vowel boundaries of the visyllables. We define each visyllable in terms of the Facial Movement Parameters (FMP). The FMPs are obtained as a result of the statistical analysis of the facial motion capture data. The FMPs allow a compact representation of the visyllables. Further, the FMPs also facilitate the formulation of rules for boundary matching and smoothing after concatenating the visyllable units. Ours is the first visyllable-based speech animation system. The proposed technique is easy to implement, effective for real-time as well as non-real-time applications, and results in realistic speech animation. Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism [source]
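Concatenating recorded units requires some smoothing where two parameter tracks meet, as the abstract above notes for the FMP curves at visyllable boundaries. The sketch below shows only the generic idea, a linear crossfade over a few overlapping frames between two parameter tracks; the paper's actual FMP boundary-matching and smoothing rules are more specific, and the parameter tracks here are invented.

```python
import numpy as np

def crossfade_concatenate(track_a, track_b, overlap):
    """Concatenate two (frames x parameters) tracks, blending the last `overlap`
    frames of track_a with the first `overlap` frames of track_b."""
    track_a, track_b = np.asarray(track_a, float), np.asarray(track_b, float)
    w = np.linspace(0.0, 1.0, overlap)[:, None]           # 0 -> 1 blend weights
    blended = (1.0 - w) * track_a[-overlap:] + w * track_b[:overlap]
    return np.vstack([track_a[:-overlap], blended, track_b[overlap:]])

# Two hypothetical 2-parameter tracks (e.g., jaw opening, lip rounding).
a = np.tile([1.0, 0.2], (10, 1))
b = np.tile([0.4, 0.8], (8, 1))
out = crossfade_concatenate(a, b, overlap=4)
print(out.shape, out[6:12])                               # smooth transition around the join
```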
Dynamic Textures for Image-based Rendering of Fine-Scale 3D Structure and Animation of Non-rigid Motion
COMPUTER GRAPHICS FORUM, Issue 3 2002
Dana Cobza
The problem of capturing real world scenes and then accurately rendering them is particularly difficult for fine-scale 3D structure. Similarly, it is difficult to capture, model and animate non-rigid motion. We present a method where small image changes are captured as a time-varying (dynamic) texture. In particular, a coarse geometry is obtained from a sample set of images using structure from motion. This geometry is then used to subdivide the scene and to extract approximately stabilized texture patches. The residual statistical variability in the texture patches is captured using a PCA basis of spatial filters. The filter coefficients are parameterized in camera pose and object motion. To render new poses and motions, new texture patches are synthesized by modulating the texture basis. The texture is then warped back onto the coarse geometry. We demonstrate how the texture modulation and projective homography-based warps can be achieved in real-time using hardware-accelerated OpenGL. Experiments comparing dynamic texture modulation to standard texturing are presented for objects with complex geometry (a flower) and non-rigid motion (human arm motion capturing the non-rigidities in the joints, and creasing of the shirt). Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Image Based Rendering [source]

Real-Time OD Estimation Using Automatic Vehicle Identification and Traffic Count Data
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 1 2002
Michael P. Dixon
A key input to many advanced traffic management operations strategies is the origin–destination (OD) matrix. In order to examine the possibility of estimating OD matrices in real-time, two constrained OD estimators, based on generalized least squares and Kalman filtering, were developed and tested. A one-at-a-time processing method was introduced to provide an efficient, organized framework for incorporating observations from multiple data sources in real-time. The estimators were tested under different conditions based on the type of prior OD information available, the type of assignment available, and the type of link volume model used. The performance of the Kalman filter estimators was also compared to that of the generalized least squares estimator to provide insight regarding their performance characteristics relative to one another for given scenarios. Automatic vehicle identification (AVI) tag counts were used so that observed and estimated OD parameters could be compared. While the approach was motivated using AVI data, the methodology can be generalized to any situation where traffic counts are available and origin volumes can be estimated reliably. The primary means by which AVI data were utilized was through the incorporation of prior observed OD information as measurements, the inclusion of a deterministic link volume component that makes use of OD data extracted from the latest time interval from which all trips have been completed, and the use of link choice proportions estimated from link travel time data. [source]
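To make the Kalman-filter formulation above concrete: if the state is the vector of OD flows and link counts are observed through an assignment (link-choice proportion) matrix, a random-walk transition gives the usual predict/update recursion. The sketch below is a generic linear Kalman filter under those assumptions, with made-up matrices; it is not the constrained estimator developed in the paper (in particular, it omits the constraints and the one-at-a-time processing of observations).

```python
import numpy as np

def kalman_step(x, P, y, H, Q, R):
    """One predict/update step for a random-walk OD state.
    x: OD flow estimate, P: its covariance, y: observed link counts,
    H: link-choice proportions mapping OD flows to link volumes."""
    # Predict (random walk: flows carry over, uncertainty grows by Q).
    x_pred = x
    P_pred = P + Q
    # Update with the link-count measurement y = H x + noise.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy network: 3 OD pairs observed on 2 links (proportions are hypothetical).
H = np.array([[0.7, 0.2, 0.0],
              [0.3, 0.8, 1.0]])
x = np.array([100.0, 80.0, 50.0])          # prior OD flows (vehicles per interval)
P = np.eye(3) * 400.0
Q, R = np.eye(3) * 25.0, np.eye(2) * 100.0
y = np.array([95.0, 160.0])                # observed link counts
x, P = kalman_step(x, P, y, H, Q, R)
print(np.round(x, 1))
```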
Novel software architecture for rapid development of magnetic resonance applications
CONCEPTS IN MAGNETIC RESONANCE, Issue 3 2002
Josef Debbins
Abstract: As the pace of clinical magnetic resonance (MR) procedures grows, the need for an MR scanner software platform on which developers can rapidly prototype, validate, and produce product applications becomes paramount. A software architecture has been developed for a commercial MR scanner that employs state-of-the-art software technologies including Java, C++, DICOM, XML, and so forth. This system permits graphical (drag and drop) assembly of applications built on simple processing building blocks, including pulse sequences, a user interface, reconstruction and postprocessing, and database control. The application developer (researcher or commercial) can assemble these building blocks to create custom applications. The developer can also write source code directly to create new building blocks and add these to the collection of components, which can be distributed worldwide over the internet. The application software and its components are developed in Java, which assures platform portability across any host computer that supports a Java Virtual Machine. The downloaded executable portion of the application is executed in compiled C++ code, which assures mission-critical real-time execution during fast MR acquisition and data processing on dedicated embedded hardware that supports C or C++. This combination permits flexible and rapid MR application development across virtually any combination of computer configurations and operating systems, and yet it allows for very high performance execution on actual scanner hardware. Applications, including prescan, are inherently real-time enabled and can be aggregated and customized to form "superapplications," wherein one or more applications work with another to accomplish the clinical objective with a very high transition speed between applications. © 2002 Wiley Periodicals, Inc. Concepts in Magnetic Resonance (Magn Reson Engineering) 15: 216–237, 2002 [source]

Concurrent workload mapping for multicore security systems
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2009
Benfano Soewito
Abstract: Multicore-based network processors are promising components for building real-time and scalable security systems that protect networks and systems. The parallel nature of the processing system makes it challenging for application developers to concurrently program security systems for high performance. In this paper we present an automatic programming methodology that considers application complexity, traffic variation, and attack signature updates. In particular, our mapping algorithm concurrently takes advantage of parallelism at the level of tasks, applications, and packets to achieve optimal performance. We present results that show the effectiveness of the analysis and mapping, and the performance of the methodology. Copyright © 2009 John Wiley & Sons, Ltd. [source]
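The mapping problem described above, distributing analysis work across cores for throughput, is often illustrated with a simple greedy longest-processing-time heuristic: sort tasks by cost and always assign the next task to the least-loaded core. The sketch below shows only that generic heuristic with invented task costs; the paper's algorithm additionally exploits parallelism across applications and packets and updates mappings as signatures and traffic change, none of which is modelled here.

```python
import heapq

def greedy_map(task_costs, num_cores):
    """Longest-processing-time-first mapping of task costs to cores.
    Returns per-core task lists and the resulting makespan."""
    cores = [(0.0, i, []) for i in range(num_cores)]      # (load, core id, tasks)
    heapq.heapify(cores)
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        load, core_id, tasks = heapq.heappop(cores)       # least-loaded core first
        tasks.append(task)
        heapq.heappush(cores, (load + cost, core_id, tasks))
    mapping = {core_id: tasks for _, core_id, tasks in cores}
    makespan = max(load for load, _, _ in cores)
    return mapping, makespan

# Hypothetical per-task costs (e.g., inspection stages), in microseconds.
costs = {"http_inspect": 40, "tcp_reassembly": 35, "pattern_match": 60,
         "flow_track": 20, "logging": 10}
print(greedy_map(costs, num_cores=2))
```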
HLA real-time extension
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2004
Hui Zhao
Abstract: The IEEE 1516 Standard 'High Level Architecture (HLA)' and its implementation, the 'Run-Time Infrastructure (RTI)', define a general-purpose network communication mechanism for Distributed Interactive Simulation (DIS). However, they do not address the real-time requirements of DIS. Current operating system technologies can provide real-time processing through real-time operating systems (RTOSs), and the Internet is also moving to an age of Quality of Service (QoS), providing delay- and jitter-bounded services. With the availability of RTOSs and IP QoS, it is possible for HLA to be extended to take advantage of these technologies in order to construct an architecture for Real-Time DIS (RT-DIS). This extension will be a critical aspect of applications in virtual medicine, distributed virtual environments, weapon simulation, aerospace simulation and others. This paper outlines the current real-time technology with respect to operating systems and the network infrastructure. After summarizing the requirements and our experiences with RT-DIS, we present a proposal for an HLA real-time extension and an architecture for a real-time RTI. Much as real-time CORBA (Common Object Request Broker Architecture) grew after the base CORBA standard suite matured, Real-Time HLA is a natural extension following the standardization of HLA as IEEE 1516 in September 2000. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Grid services for earthquake science
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 6-7 2002
Geoffrey Fox
Abstract: We describe an information system architecture for the ACES (Asia–Pacific Cooperation for Earthquake Simulation) community. It addresses several key features of the field: simulations at multiple scales that need to be coupled together; real-time and archival observational data, which need to be analyzed for patterns and linked to the simulations; a variety of important algorithms including partial differential equation solvers, particle dynamics, signal processing and data analysis; a natural three-dimensional space (plus time) setting for both visualization and observations; and the linkage of the field to real-time events, both as an aid to crisis management and to scientific discovery. We also address the need to support education and research for a field whose computational sophistication is rapidly increasing and spans a broad range. The information system assumes that all significant data is defined by an XML layer, which could be virtual, but whose existence ensures that all data is object-based and can be accessed and searched in this form. The various capabilities needed by ACES are defined as grid services, which conform to emerging standards and are implemented with different levels of fidelity and performance appropriate to the application. Grid services can be composed in a hierarchical fashion to address complex problems. The real-time needs of the field are addressed by high-performance implementations of data transfer and simulation services. Further, the environment is linked to real-time collaboration to support interactions between scientists in geographically distant locations. Copyright © 2002 John Wiley & Sons, Ltd. [source]