Rendering Techniques

Selected Abstracts


GPU-based interactive visualization framework for ultrasound datasets

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2009
Sukhyun Lim
Abstract Ultrasound imaging is widely used in medicine. Ultrasound signals transmitted into the human body are echoed back, and the echoes can be rendered to represent the shape of internal organs. Although its image quality is inferior to that of CT or MR, ultrasound is widely used for its speed and reasonable cost. Volume rendering techniques provide intuitive ways to render 3D volume datasets. We present a visualization framework for ultrasound datasets that uses programmable graphics hardware. For this, we convert ultrasound coordinates into Cartesian form. Because the physical storage space and the representation space of ultrasound datasets differ, we adaptively apply a different sampling interval to each ray. In addition, we exploit multiple filtered datasets to reduce noise, so that an adequate filter size can be determined without manual adjustment. As a result, our approach enables interactive volume rendering of ultrasound datasets on a consumer-level PC. Copyright © 2009 John Wiley & Sons, Ltd. [source]
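
The framework above hinges on converting ultrasound acquisition coordinates into Cartesian form before ray casting. Below is a minimal CPU sketch of that scan-conversion step for a single 2D B-mode fan, written in NumPy; it is not the authors' GPU implementation, and the names (scan_convert, r_max, theta_span) as well as the nearest-neighbour lookup are illustrative assumptions. The adaptive per-ray sampling intervals and the multi-filter noise reduction described in the abstract are omitted.

```python
import numpy as np

def scan_convert(polar_img, r_max, theta_span, out_shape):
    """Resample a polar-coordinate B-mode image (rows = depth samples,
    columns = beam angles) onto a Cartesian grid using nearest-neighbour lookup."""
    n_r, n_theta = polar_img.shape
    h, w = out_shape
    # Cartesian grid with the transducer at the origin, y pointing into the body.
    xs = np.linspace(-r_max, r_max, w)
    ys = np.linspace(0.0, r_max, h)
    x, y = np.meshgrid(xs, ys)
    r = np.hypot(x, y)                    # radial distance of each output pixel
    theta = np.arctan2(x, y)              # angle measured from the centre beam
    # Map (r, theta) back to indices into the acquired polar image.
    ri = np.clip(r / r_max * (n_r - 1), 0, n_r - 1).astype(int)
    ti = np.clip((theta / theta_span + 0.5) * (n_theta - 1), 0, n_theta - 1).astype(int)
    out = polar_img[ri, ti]
    out[(r > r_max) | (np.abs(theta) > theta_span / 2)] = 0.0   # outside the fan
    return out

# Example: a synthetic 256 x 128 fan resampled to a 512 x 512 Cartesian image.
fan = np.random.rand(256, 128).astype(np.float32)
cartesian = scan_convert(fan, r_max=1.0, theta_span=np.deg2rad(60), out_shape=(512, 512))
```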


A framework for fusion methods and rendering techniques of multimodal volume data

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2 2004
Maria Ferre
Abstract Many different direct volume rendering methods have been developed to visualize 3D scalar fields on uniform rectilinear grids. However, little work has been done on simultaneously rendering several properties of the same 3D region measured with different registration devices or at different instants of time. The demand for this type of visualization is rapidly increasing in scientific applications such as medicine, in which the visual integration of multiple modalities allows a better comprehension of the anatomy and a perception of its relationships with activity. This paper presents different strategies of direct multimodal volume rendering (DMVR). It is restricted to voxel models with a known 3D rigid alignment transformation. The paper evaluates at which steps of the rendering pipeline the data fusion must be realized in order to accomplish the desired visual integration and to provide fast re-renders when some fusion parameters are modified. In addition, it analyses how existing monomodal visualization algorithms can be extended to multiple datasets and compares their efficiency and computational cost. Copyright © 2004 John Wiley & Sons, Ltd. [source]
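
The central question of the paper is at which stage of the pipeline the modalities should be fused. As an illustration of the two extremes commonly distinguished in DMVR, the sketch below blends two co-registered volumes either before classification (merge the scalar values, then apply one transfer function) or after classification (classify each modality separately, then blend colours and opacities). The toy transfer function, the equal weights and the function names are assumptions, not the paper's definitions.

```python
import numpy as np

def classify(volume, tf_rgb, tf_alpha):
    """Toy transfer function: look up colour and opacity for scalar values in [0, 1]."""
    idx = np.clip((volume * (len(tf_rgb) - 1)).astype(int), 0, len(tf_rgb) - 1)
    return tf_rgb[idx], tf_alpha[idx]

def fuse_before_classification(vol_a, vol_b, w=0.5):
    """Property-level fusion: blend the registered scalar fields first,
    then classify the fused volume with a single transfer function."""
    return w * vol_a + (1.0 - w) * vol_b

def fuse_after_classification(rgb_a, alpha_a, rgb_b, alpha_b, w=0.5):
    """Colour-level fusion: classify each modality with its own transfer
    function, then blend the resulting colours and opacities per voxel."""
    return w * rgb_a + (1.0 - w) * rgb_b, w * alpha_a + (1.0 - w) * alpha_b

# Two co-registered 32^3 volumes (e.g. CT and a functional modality), values in [0, 1].
ct = np.random.rand(32, 32, 32).astype(np.float32)
pet = np.random.rand(32, 32, 32).astype(np.float32)
tf_rgb = np.linspace([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], 256)   # grey ramp
tf_alpha = np.linspace(0.0, 1.0, 256)

rgb1, a1 = classify(fuse_before_classification(ct, pet), tf_rgb, tf_alpha)
rgb_ct, a_ct = classify(ct, tf_rgb, tf_alpha)
rgb_pet, a_pet = classify(pet, tf_rgb, tf_alpha)
rgb2, a2 = fuse_after_classification(rgb_ct, a_ct, rgb_pet, a_pet)
```

Where the fusion happens determines both the visual result and how much of the pipeline must be re-executed when a fusion parameter changes; weighing these trade-offs per pipeline stage is what the paper evaluates.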


Interaction-Dependent Semantics for Illustrative Volume Rendering

COMPUTER GRAPHICS FORUM, Issue 3 2008
Peter Rautek
In traditional illustration, the choice of appropriate styles and rendering techniques is guided by the intention of the artist. For illustrative volume visualizations, it is difficult to specify a mapping between the 3D data and the visual representation that preserves the intention of the user. The semantic layers concept establishes this mapping with a linguistic formulation of rules that directly map data features to rendering styles. With semantic layers, fuzzy logic is used to evaluate the user-defined illustration rules in a preprocessing step. In this paper we introduce interaction-dependent rules that are evaluated for each frame and are therefore computationally more expensive. Enabling interaction-dependent rules, however, allows the use of a new class of semantics, resulting in more expressive interactive illustrations. We show that the evaluation of the fuzzy logic can be done on the graphics hardware, enabling the efficient use of interaction-dependent semantics. Further, we introduce the flat rendering mode and discuss how different rendering parameters are influenced by the rule base. Our approach provides high-quality illustrative volume renderings at interactive frame rates, guided by the specification of illustration rules. [source]
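
As a concrete illustration of interaction-dependent semantics, the sketch below evaluates one hypothetical rule ("IF density IS high AND voxel IS near the cursor THEN opacity IS high") per voxel with fuzzy logic on the CPU; the trapezoidal membership functions, thresholds and names are assumptions made here for illustration, and the paper's contribution is performing such per-frame evaluations on graphics hardware.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership function rising on [a, b] and falling on [c, d]."""
    rise = (x - a) / max(b - a, 1e-6)
    fall = (d - x) / max(d - c, 1e-6)
    return np.clip(np.minimum(rise, fall), 0.0, 1.0)

def evaluate_rule(density, dist_to_cursor):
    """Interaction-dependent rule: IF density IS high AND voxel IS near-the-cursor
    THEN opacity IS high. The fuzzy AND is taken as the minimum of the memberships."""
    mu_high_density = trapezoid(density, 0.5, 0.7, 1.0, 1.0)
    mu_near_cursor = trapezoid(dist_to_cursor, -1.0, 0.0, 0.1, 0.3)
    return np.minimum(mu_high_density, mu_near_cursor)     # rule activation in [0, 1]

# Per-voxel data: normalised density and distance to the point under the cursor.
density = np.random.rand(64, 64, 64).astype(np.float32)
dist_to_cursor = np.random.rand(64, 64, 64).astype(np.float32)
activation = evaluate_rule(density, dist_to_cursor)
opacity = 0.05 + activation * (0.90 - 0.05)   # defuzzify into a rendering parameter
```

Because dist_to_cursor changes with every mouse move, such a rule must be re-evaluated each frame, which is why moving the evaluation to the GPU matters.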


Illustrative Hybrid Visualization and Exploration of Anatomical and Functional Brain Data

COMPUTER GRAPHICS FORUM, Issue 3 2008
W. M. Jainek
Abstract Common practice in brain research and brain surgery involves the multi-modal acquisition of brain anatomy and brain activation data. These highly complex three-dimensional data have to be displayed simultaneously in order to convey spatial relationships. Unique challenges in information and interaction design have to be solved in order to keep the visualization sufficiently complete and uncluttered at the same time. The visualization method presented in this paper addresses these issues by using a hybrid combination of polygonal rendering of brain structures and direct volume rendering of activation data. Advanced rendering techniques including illustrative display styles and ambient occlusion calculations enhance the clarity of the visual output. The presented rendering pipeline produces real-time frame rates and offers a high degree of configurability. Newly designed interaction and measurement tools are provided, which enable the user to explore the data at large, but also to inspect specific features closely. We demonstrate the system in the context of a cognitive neurosciences dataset. An initial informal evaluation shows that our visualization method is deemed useful for clinical research. [source]
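
One standard way to combine polygonal surface rendering with direct volume rendering, as the hybrid approach above does for brain structures and activation data, is to terminate each volume ray at the depth of the nearest opaque surface and blend the surface colour behind the accumulated volume colour. The per-ray sketch below shows only that compositing idea; it is not the authors' pipeline, and the sample positions, colours and surface depth are assumed inputs.

```python
import numpy as np

def hybrid_composite(sample_rgbs, sample_alphas, sample_ts, surface_rgb, surface_t):
    """Front-to-back compositing of volume samples along one ray, stopped at the
    depth of an opaque polygonal surface whose colour is then blended in behind."""
    rgb = np.zeros(3)
    alpha = 0.0
    for c, a, t in zip(sample_rgbs, sample_alphas, sample_ts):
        if t >= surface_t:              # reached the surface: stop sampling the volume
            break
        rgb += (1.0 - alpha) * a * c    # standard front-to-back accumulation
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                # early ray termination
            break
    rgb += (1.0 - alpha) * surface_rgb  # the opaque surface closes the ray
    return rgb

# One ray with four volume samples and a surface hit at depth t = 0.6.
ts = np.array([0.1, 0.3, 0.5, 0.7])
rgbs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]], dtype=float)
alphas = np.array([0.2, 0.3, 0.4, 0.5])
pixel = hybrid_composite(rgbs, alphas, ts, surface_rgb=np.array([0.8, 0.8, 0.8]), surface_t=0.6)
```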


Carpal bone movements in gripping action of the giant panda (Ailuropoda melanoleuca)

JOURNAL OF ANATOMY, Issue 2 2001
HIDEKI ENDO
The movement of the carpal bones in gripping was clarified in the giant panda (Ailuropoda melanoleuca) by means of macroscopic anatomy, computed tomography (CT) and related 3-dimensional (3-D) volume rendering techniques. In the gripping action, 3-D CT images demonstrated that the radial and 4th carpal bones largely rotate or flex to the radial and ulnar sides respectively. This indicates that these carpal bones on both sides enable the panda to flex the palm from the forearm and to grasp objects by the manipulation mechanism that includes the radial sesamoid. In the macroscopic observations, we found that the smooth articulation surfaces are enlarged between the radial carpal and the radius on the radial side, and between the 4th and ulnar carpals on the ulnar side. The panda skilfully grasps using a double pincer-like apparatus with the huge radial sesamoid and accessory carpal. [source]


The use of three-dimensional computed tomography for assessing patients before laparoscopic adrenal-sparing surgery

BJU INTERNATIONAL, Issue 5 2006
Michael Mitterberger
OBJECTIVE To evaluate the efficacy of three-dimensional computed tomography (3D-CT) in delineating the relationship of the adrenal mass to adjacent normal structures in preparation for laparoscopic partial adrenalectomy. PATIENTS AND METHODS Multislice CT (1 mm slices, 0.5 s rotation time) was used to evaluate 12 patients before adrenal-sparing surgery for aldosterone-producing adenoma or phaeochromocytoma. The CT data were reconstructed using two rendering techniques: (i) volume rendering with the modified VOLREN software (Johns Hopkins Hospital, Baltimore, MD, USA), which allowed interactive 3D examination of the whole data volume within a few minutes; (ii) surface rendering of only the structures of interest (kidney, adrenal tumour, vessels), shown in different colours and depicted together in a 3D scene using the software package 3DVIEWNIX. RESULTS In all, 14 adrenal masses in 12 patients were evaluated with 3D-CT; the number and location of lesions were identified accurately in all cases with both rendering techniques. The coloured surface-rendered images showed a consistently better delineation of the adrenal tumour from the normal tissue than did the volume-rendering technique. With this information, all laparoscopic partial adrenalectomies could be completed as planned. CONCLUSIONS Interactive visualization of volume-rendered CT images was helpful for planning and successfully performing the procedure, but coloured surface-rendered CT provided more convenient, immediate and accurate intraoperative information. [source]
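
The surface representations described in (ii), one coloured mesh per segmented structure, can be approximated with an isosurface extraction such as marching cubes, as sketched below using scikit-image; the label values, colours and helper names are hypothetical, and the study itself used the 3DVIEWNIX package rather than this code.

```python
import numpy as np
from skimage import measure

def extract_surfaces(label_volume, label_colours, spacing=(1.0, 1.0, 1.0)):
    """Extract one triangle mesh per segmented structure (kidney, adrenal tumour,
    vessels, ...) so each can be displayed in its own colour in a 3D scene."""
    meshes = {}
    for label, colour in label_colours.items():
        mask = (label_volume == label).astype(np.float32)
        if not mask.any():
            continue
        verts, faces, normals, _ = measure.marching_cubes(mask, level=0.5, spacing=spacing)
        meshes[label] = {"verts": verts, "faces": faces, "normals": normals, "colour": colour}
    return meshes

# Hypothetical segmentation labels: 1 = kidney, 2 = adrenal tumour, 3 = vessels.
labels = np.zeros((64, 64, 64), dtype=np.uint8)
labels[20:40, 20:40, 20:40] = 2
meshes = extract_surfaces(labels,
                          {1: (0.8, 0.6, 0.5), 2: (0.9, 0.2, 0.2), 3: (0.2, 0.2, 0.9)},
                          spacing=(1.0, 0.7, 0.7))   # assumed voxel spacing in mm (1 mm slices per the study; in-plane size illustrative)
```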