Volume Rendering

Selected Abstracts


Interaction-Dependent Semantics for Illustrative Volume Rendering

COMPUTER GRAPHICS FORUM, Issue 3 2008
Peter Rautek
In traditional illustration, the choice of appropriate styles and rendering techniques is guided by the intention of the artist. For illustrative volume visualizations, it is difficult to specify a mapping between the 3D data and the visual representation that preserves the intention of the user. The semantic layers concept establishes this mapping with a linguistic formulation of rules that directly map data features to rendering styles. With semantic layers, fuzzy logic is used to evaluate the user-defined illustration rules in a preprocessing step. In this paper we introduce interaction-dependent rules that are evaluated for each frame and are therefore computationally more expensive. Enabling interaction-dependent rules, however, allows the use of a new class of semantics, resulting in more expressive interactive illustrations. We show that the evaluation of the fuzzy logic can be done on the graphics hardware, enabling the efficient use of interaction-dependent semantics. Further, we introduce the flat rendering mode and discuss how different rendering parameters are influenced by the rule base. Our approach provides high-quality illustrative volume renderings at interactive frame rates, guided by the specification of illustration rules. [source]
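As a rough illustration of the rule-evaluation idea described above, the following Python sketch evaluates one hypothetical fuzzy rule per sample. The membership functions, thresholds, and the rule itself are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of fuzzy rule evaluation for illustrative rendering.
# The rule and membership shapes below are illustrative assumptions.

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a, 1 on [b, c], 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def evaluate_rule(density, distance_to_cursor):
    """Rule: IF density IS high AND sample IS near-cursor THEN apply style.
    Fuzzy AND is taken as the minimum of the antecedent memberships."""
    density_high = trapezoid(density, 0.5, 0.7, 1.0, 1.1)
    near_cursor = trapezoid(distance_to_cursor, -0.1, 0.0, 10.0, 30.0)
    return min(density_high, near_cursor)  # membership degree in [0, 1]

print(evaluate_rule(0.8, 5.0))   # both antecedents fully true -> 1.0
print(evaluate_rule(0.8, 50.0))  # far from the cursor -> 0.0
```

An interaction-dependent rule of this kind must be re-evaluated every frame, since `distance_to_cursor` changes as the user moves the pointer, which is why the paper moves the evaluation onto graphics hardware.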


Interactive Volume Rendering with Dynamic Ambient Occlusion and Color Bleeding

COMPUTER GRAPHICS FORUM, Issue 2 2008
Timo Ropinski
Abstract We propose a method for rendering volumetric data sets at interactive frame rates while supporting dynamic ambient occlusion as well as an approximation to color bleeding. In contrast to ambient occlusion approaches for polygonal data, techniques for volumetric data sets face additional challenges, since changing rendering parameters, such as the transfer function or the thresholding, can alter the structure of the data set and thus the light interactions drastically. Therefore, during a preprocessing step that is independent of the rendering parameters, we capture light interactions for all combinations of structures extractable from a volumetric data set. To compute the light interactions between the different structures, we combine this preprocessed information during rendering based on the rendering parameters defined interactively by the user. Thus our method supports interactive exploration of a volumetric data set while still giving the user control over the most important rendering parameters. For instance, if the user alters the transfer function to extract different structures from a volumetric data set, the light interactions between the extracted structures are captured in the rendering while still allowing interactive frame rates. Compared to known local illumination models for volume rendering, our method does not introduce any substantial rendering overhead and can be integrated easily into existing volume rendering applications. In this paper we explain our approach, discuss the implications for interactive volume rendering, and present the achieved results. [source]
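To make the combination step concrete, here is a minimal Python sketch of the general idea: pairwise light interactions between structures are precomputed once, then weighted at render time by the opacity the current transfer function assigns to each structure. The matrix contents and the linear combination are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_structures = 4
# occlusion[i, j]: precomputed mean occlusion that structure j casts onto
# structure i (illustrative random values standing in for the preprocess).
occlusion = rng.uniform(0.0, 0.3, size=(n_structures, n_structures))

def ambient_term(structure_opacity):
    """Combine precomputed occlusion, weighted by per-structure opacity.

    structure_opacity[j] in [0, 1] comes from the interactively set transfer
    function; a fully transparent structure contributes no occlusion, so the
    ambient term updates instantly when the user changes the classification.
    """
    occluded = occlusion @ structure_opacity      # occlusion received per structure
    return np.clip(1.0 - occluded, 0.0, 1.0)      # ambient visibility factor

alpha = np.array([1.0, 0.0, 0.5, 1.0])            # structure 1 made transparent
print(ambient_term(alpha))
```

The point of the sketch is that only the cheap combination runs per frame; the expensive structure-to-structure computation happens once, independently of the transfer function.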


Instant Volumetric Understanding with Order-Independent Volume Rendering

COMPUTER GRAPHICS FORUM, Issue 3 2004
Benjamin Mora
Rapid, visual understanding of volumetric datasets is a crucial outcome of a good volume rendering application, but few current volume rendering systems deliver this result. Our goal is to reduce the volumetric surfing that is required to understand volumetric features by conveying more information in fewer images. In order to achieve this goal, and in contrast with most current methods which still use optical models and alpha blending, our approach reintroduces the order-independent contribution of every sample along the ray in order to give an equiprobable visualization of all the volume samples. Therefore, we demonstrate how order-independent sampling can be suitable for fast volume understanding, show useful extensions to MIP and X-ray-like renderings, and, finally, point out the special advantage of using stereo visualization in these models to circumvent the lack of depth cues. Categories and Subject Descriptors: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism. [source]
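The order-independence property is easy to see in code. In this sketch (ours, not the authors'), MIP and an X-ray-like average give the same result for a ray traversed in either direction, while classic front-to-back alpha blending does not:

```python
import numpy as np

samples = np.array([0.2, 0.9, 0.4, 0.7])   # scalar values along one ray

mip = samples.max()     # maximum intensity projection
xray = samples.mean()   # X-ray-like average of all samples

def alpha_blend(vals, alpha=0.5):
    """Classic front-to-back compositing: the result depends on sample order."""
    color, transm = 0.0, 1.0
    for v in vals:
        color += transm * alpha * v
        transm *= (1.0 - alpha)
    return color

assert mip == samples[::-1].max()     # unchanged under ray reversal
assert xray == samples[::-1].mean()   # unchanged under ray reversal
print(alpha_blend(samples), alpha_blend(samples[::-1]))  # differ in general
```

Because every sample contributes equally, no structure is hidden behind opaque material, which is the "equiprobable visualization" the abstract refers to; the price is the loss of depth cues that the authors address with stereo viewing.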


Fast Volume Rendering and Data Classification Using Multiresolution in Min-Max Octrees

COMPUTER GRAPHICS FORUM, Issue 3 2000
Feng Dong
Large-sized volume datasets have recently become commonplace and users are now demanding that volume-rendering techniques to visualise such data provide acceptable results on relatively modest computing platforms. The widespread use of the Internet for the transmission and/or rendering of volume data is also exerting increasing demands on software providers. Multiresolution can address these issues in an elegant way. One of the fastest volume-rendering algorithms is that proposed by Lacroute & Levoy [1], which is based on shear-warp factorisation and min-max octrees (MMOs). Unfortunately, since an MMO captures only a single resolution of a volume dataset, this method is unsuitable for rendering datasets in a multiresolution form. This paper adapts the above algorithm to multiresolution volume rendering to enable near-real-time interaction to take place on a standard PC. It also permits the user to modify classification functions and/or resolution during rendering with no significant loss of rendering speed. A newly-developed data structure based on the MMO is employed, the multiresolution min-max octree (M³O), which captures the spatial coherence for datasets at all resolutions. Speed is enhanced by the use of multiresolution opacity transfer functions for rapidly determining and discarding transparent dataset regions. Some experimental results on sample volume datasets are presented. [source]
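The following sketch shows the underlying min-max-octree idea: each node stores the min/max scalar value of its subtree, and a whole subtree is skipped when the opacity transfer function is zero over that value range. The node layout and names are our illustrative assumptions, not the paper's M³O structure.

```python
def opacity_nonzero(tf_opacity, vmin, vmax):
    """True if the transfer function is non-transparent anywhere in [vmin, vmax]."""
    return any(tf_opacity[v] > 0.0 for v in range(vmin, vmax + 1))

def visible_leaves(node, tf_opacity, out):
    """Collect the leaves that must actually be rendered."""
    vmin, vmax, children = node["min"], node["max"], node["children"]
    if not opacity_nonzero(tf_opacity, vmin, vmax):
        return                          # whole subtree is transparent: skip it
    if not children:
        out.append(node["id"])          # visible leaf
        return
    for child in children:
        visible_leaves(child, tf_opacity, out)

# Tiny two-level tree over an 8-bit value range [0, 255]
tree = {"id": 0, "min": 0, "max": 255, "children": [
    {"id": 1, "min": 0,   "max": 50,  "children": []},   # air: skipped below
    {"id": 2, "min": 120, "max": 200, "children": []},   # tissue: rendered
]}
tf = [0.0] * 100 + [1.0] * 156          # everything below value 100 is transparent
out = []
visible_leaves(tree, tf, out)
print(out)                              # -> [2]
```

Because the min/max ranges are independent of the classification, the same tree keeps working when the user edits the opacity transfer function mid-session, which is what makes interactive reclassification cheap.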


GPU-based interactive visualization framework for ultrasound datasets

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2009
Sukhyun Lim
Abstract Ultrasound imaging is widely used in medical areas. By transmitting ultrasound signals into the human body, their echoed signals can be rendered to represent the shape of internal organs. Although its image quality is inferior to that of CT or MR, ultrasound is widely used for its speed and reasonable cost. Volume rendering techniques provide methods for rendering the 3D volume dataset intuitively. We present a visualization framework for ultrasound datasets that uses programmable graphics hardware. For this, we convert ultrasound coordinates into Cartesian form. Since the physical storage and representation spaces of ultrasound datasets differ, however, we adaptively apply different sampling intervals for each ray. In addition, we exploit multiple filtered datasets in order to reduce noise; our method determines an adequate filter size automatically, without requiring the user to choose one. As a result, our approach enables interactive volume rendering for ultrasound datasets using a consumer-level PC. Copyright © 2009 John Wiley & Sons, Ltd. [source]
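The coordinate conversion mentioned above can be sketched as a simple fan-to-Cartesian mapping; the probe geometry here is a made-up placeholder, not the paper's actual acquisition model:

```python
import math

def fan_to_cartesian(angle_rad, depth):
    """Map a (beam angle, depth-along-beam) ultrasound sample to x/y coordinates."""
    x = depth * math.sin(angle_rad)
    y = depth * math.cos(angle_rad)
    return x, y

# A sample straight down the centre beam stays on the axis...
print(fan_to_cartesian(0.0, 40.0))            # -> (0.0, 40.0)
# ...while samples on outer beams spread out, so equal steps in depth cover
# unequal Cartesian distances across the fan. This mismatch between storage
# space and representation space is what motivates the adaptive per-ray
# sampling intervals described in the abstract.
print(fan_to_cartesian(math.radians(30), 40.0))
```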


A framework for fusion methods and rendering techniques of multimodal volume data

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2 2004
Maria Ferre
Abstract Many different direct volume rendering methods have been developed to visualize 3D scalar fields on uniform rectilinear grids. However, little work has been done on rendering simultaneously various properties of the same 3D region measured with different registration devices or at different instants of time. The demand for this type of visualization is rapidly increasing in scientific applications such as medicine in which the visual integration of multiple modalities allows a better comprehension of the anatomy and a perception of its relationships with activity. This paper presents different strategies of direct multimodal volume rendering (DMVR). It is restricted to voxel models with a known 3D rigid alignment transformation. The paper evaluates at which steps of the rendering pipeline the data fusion must be realized in order to accomplish the desired visual integration and to provide fast re-renders when some fusion parameters are modified. In addition, it analyses how existing monomodal visualization algorithms can be extended to multiple datasets and it compares their efficiency and their computational cost. Copyright © 2004 John Wiley & Sons, Ltd. [source]
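As a minimal sketch of the fusion-stage question the paper evaluates, consider fusing two modalities before classification versus after it. The toy transfer function below is linear, so both orders agree; with a realistic nonlinear transfer function they diverge, which is exactly why the stage at which fusion happens matters. The arrays and weights are illustrative placeholders.

```python
import numpy as np

ct = np.array([0.2, 0.8, 0.5])    # sample values from modality A
pet = np.array([0.9, 0.1, 0.4])   # sample values from modality B

def classify(v):
    """Toy 1D transfer function: a linear gray ramp (RGB all equal to v)."""
    return np.stack([v, v, v], axis=-1)

# Property fusion: combine the scalar values first, then classify once.
prop_fused = classify(0.5 * ct + 0.5 * pet)

# Color fusion: classify each modality separately, then blend the colors.
color_fused = 0.5 * classify(ct) + 0.5 * classify(pet)

# Equal here only because classify() is linear; a nonlinear transfer
# function breaks this equality, so the two pipeline stages give
# visually different integrations of the modalities.
print(np.allclose(prop_fused, color_fused))
```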


Projected slabs: approximation of perspective projection and error analysis

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5 2001
A. Vilanova Bartrolí
Abstract Virtual endoscopy is a promising medical application for volume-rendering techniques where perspective projection is mandatory. Most of the acceleration techniques for direct volume rendering use parallel projection. This paper presents an algorithm to approximate perspective volume rendering using parallel projected slabs. The introduced error due to the approximation is investigated. An analytical study of the maximum and average error is made. This method is applied to VolumePro 500. Based on the error analysis, the basic algorithm is improved. This improvement increases the frame rate, keeping the global maximum error bounded. The usability of the algorithm is shown through the virtual endoscopic investigation of various types of medical data sets. Copyright © 2002 John Wiley & Sons, Ltd. [source]


An Exploratory Technique for Coherent Visualization of Time-varying Volume Data

COMPUTER GRAPHICS FORUM, Issue 3 2010
A. Tikhonova
Abstract The selection of an appropriate global transfer function is essential for visualizing time-varying simulation data. This is especially challenging when the global data range is not known in advance, as is often the case in remote and in-situ visualization settings. Since the data range may vary dramatically as the simulation progresses, volume rendering using local transfer functions may not be coherent for all time steps. We present an exploratory technique that enables coherent classification of time-varying volume data. Unlike previous approaches, which require pre-processing of all time steps, our approach lets the user explore the transfer function space without accessing the original 3D data. This is useful for interactive visualization, and absolutely essential for in-situ visualization, where the entire simulation data range is not known in advance. Our approach generates a compact representation of each time step at rendering time in the form of ray attenuation functions, which are used for subsequent operations on the opacity and color mappings. The presented approach offers interactive exploration of time-varying simulation data that alleviates the cost associated with reloading and caching large data sets. [source]
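A simplified sketch of the core trick: keep a compact per-ray record of the sampled scalar values so that a new opacity mapping can be applied later without touching the original 3D data. Raw sample lists stand in here for the paper's more compact ray attenuation functions.

```python
import numpy as np

# Per-ray summaries captured once at rendering time (illustrative values);
# the original volume is never needed again for re-classification.
ray_samples = {
    (0, 0): np.array([0.1, 0.6, 0.3]),
    (0, 1): np.array([0.8, 0.9]),
}

def reapply_opacity(samples, opacity_fn):
    """Recompute a ray's accumulated opacity under a new opacity mapping."""
    transmittance = np.prod(1.0 - opacity_fn(samples))
    return 1.0 - transmittance

# The user picks a new mapping interactively; no volume reload is required.
linear = lambda v: v
print(reapply_opacity(ray_samples[(0, 1)], linear))
```

This is why the approach suits in-situ settings: transfer-function exploration operates on the per-ray summaries, whose size scales with the image rather than the simulation data.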


Visual Support for Interactive Post-Interventional Assessment of Radiofrequency Ablation Therapy

COMPUTER GRAPHICS FORUM, Issue 3 2010
Christian Rieder
Abstract Percutaneous radiofrequency (RF) ablation is a minimally invasive, image-guided therapy for the treatment of liver tumors. The assessment of the ablation area (coagulation) is performed to verify the treatment success as an essential part of the therapy. Traditionally, pre- and post-interventional CT images are used to visually compare the shape, size, and position of tumor and coagulation. In this work, we present a novel visualization as well as a navigation tool, the so-called tumor map. The tumor map is a pseudo-cylindrical mapping of the tumor surface onto a 2D image. It is used for a combined visualization of all ablation zones of the tumor to allow a reliable therapy assessment. Additionally, the tumor map serves as an interactive tool for intuitive navigation within the 3D volume rendering of the tumor vicinity as well as with familiar 2D viewers. [source]


Direct Visualization of Deformation in Volumes

COMPUTER GRAPHICS FORUM, Issue 3 2009
Stef Busking
Abstract Deformation is a topic of interest in many disciplines. In particular in medical research, deformations of surfaces and even entire volumetric structures are of interest. Clear visualization of such deformations can lead to important insight into growth processes and progression of disease. We present new techniques for direct focus+context visualization of deformation fields representing transformations between pairs of volumetric datasets. Typically, such fields are computed by performing a non-rigid registration between two data volumes. Our visualization is based on direct volume rendering and uses the GPU to compute and interactively visualize features of these deformation fields in real-time. We integrate visualization of the deformation field with visualization of the scalar volume affected by the deformations. Furthermore, we present a novel use of texturing in volume rendered visualizations to show additional properties of the vector field on surfaces in the volume. [source]


Illustrative Hybrid Visualization and Exploration of Anatomical and Functional Brain Data

COMPUTER GRAPHICS FORUM, Issue 3 2008
W. M. Jainek
Abstract Common practice in brain research and brain surgery involves the multi-modal acquisition of brain anatomy and brain activation data. These highly complex three-dimensional data have to be displayed simultaneously in order to convey spatial relationships. Unique challenges in information and interaction design have to be solved in order to keep the visualization sufficiently complete and uncluttered at the same time. The visualization method presented in this paper addresses these issues by using a hybrid combination of polygonal rendering of brain structures and direct volume rendering of activation data. Advanced rendering techniques including illustrative display styles and ambient occlusion calculations enhance the clarity of the visual output. The presented rendering pipeline produces real-time frame rates and offers a high degree of configurability. Newly designed interaction and measurement tools are provided, which enable the user to explore the data at large, but also to inspect specific features closely. We demonstrate the system in the context of a cognitive neurosciences dataset. An initial informal evaluation shows that our visualization method is deemed useful for clinical research. [source]


A transgenic mouse that reveals cell shape and arrangement during ureteric bud branching

GENESIS: THE JOURNAL OF GENETICS AND DEVELOPMENT, Issue 2 2009
Xuan Chi
3-D images showing the outlines of individual cells in the ureteric bud tips of a Hoxb7/myr-Venus transgenic mouse kidney at embryonic day 15.5. Image stacks were acquired with a Bio-Rad laser scanning confocal microscope equipped with an Olympus U PlanApo/IR water lens (60×/NA 1.2). The images were processed using blind deconvolution (AutoDeblur, Media Cybernetics, Bethesda, MD), followed by volume rendering (Volocity software). See the article by Chi et al. in this issue. [source]


Interactive visualization of quantum-chemistry data

ACTA CRYSTALLOGRAPHICA SECTION A, Issue 5 2010
Yun Jang
Simulation and computation in chemistry studies have improved as computational power has increased over recent decades. Many types of chemistry simulation results are available, from atomic level bonding to volumetric representations of electron density. However, tools for the visualization of the results from quantum-chemistry computations are still limited to showing atomic bonds and isosurfaces or isocontours corresponding to certain isovalues. In this work, we study the volumetric representations of the results from quantum-chemistry computations, and evaluate and visualize the representations directly on a modern graphics processing unit without resampling the result in grid structures. Our visualization tool handles the direct evaluation of the approximated wavefunctions described as a combination of Gaussian-like primitive basis functions. For visualizations, we use a slice-based volume-rendering technique with a two-dimensional transfer function, volume clipping and illustrative rendering in order to reveal and enhance the quantum-chemistry structure. Since there is no need to resample the volume from the functional representations for the volume rendering, two issues, data transfer and resampling resolution, can be ignored; therefore, it is possible to explore interactively a large amount of different information in the computation results. [source]
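Direct evaluation of such functional representations can be sketched in a few lines; the coefficients, exponents, and centres below are arbitrary placeholders, and only s-type Gaussian primitives are shown:

```python
import math

primitives = [
    # (coefficient c_i, exponent alpha_i, centre R_i = (x, y, z))
    (0.7, 1.5, (0.0, 0.0, 0.0)),
    (0.3, 0.4, (0.0, 0.0, 1.0)),
]

def wavefunction(x, y, z):
    """psi(r) = sum_i c_i * exp(-alpha_i * |r - R_i|^2), s-type primitives only."""
    psi = 0.0
    for c, alpha, (cx, cy, cz) in primitives:
        r2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
        psi += c * math.exp(-alpha * r2)
    return psi

# Any point can be evaluated on demand, e.g. at each step along a viewing
# ray, with no fixed-grid resampling between computation and rendering.
print(wavefunction(0.0, 0.0, 0.0))
```

Evaluating the basis expansion per sample is what removes the data-transfer and resampling-resolution concerns the abstract mentions: there is no intermediate grid whose resolution could be wrong or whose size could limit interactivity.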


The use of three-dimensional computed tomography for assessing patients before laparoscopic adrenal-sparing surgery

BJU INTERNATIONAL, Issue 5 2006
Michael Mitterberger
OBJECTIVE To evaluate the efficacy of three-dimensional computed tomography (3D-CT) in delineating the relationship of the adrenal mass to adjacent normal structures in preparation for laparoscopic partial adrenalectomy. PATIENTS AND METHODS Multislice CT (1 mm slices, 0.5 s rotation time) was used to evaluate 12 patients before adrenal-sparing surgery for aldosterone-producing adenoma or phaeochromocytoma. The CT data were reconstructed using two rendering techniques: (i) volume rendering with the modified VOLREN software (Johns Hopkins Hospital, Baltimore, MD, USA), which allowed interactive 3D examination of the whole data volume within a few minutes; (ii) surface representations of only the interesting structures (kidney, adrenal tumour, vessels), represented in different colours and depicted together in a 3D scene using the software package 3DVIEWNIX. RESULTS In all, 14 adrenal masses in 12 patients were evaluated with 3D-CT; the number and location of lesions were accurate in all cases with both rendering techniques. The coloured surface-rendered images showed a consistently better delineation of the adrenal tumour from the normal tissue than did the volume-rendering technique. From this information all laparoscopic partial adrenalectomies could be completed as planned. CONCLUSIONS Interactive visualization of volume-rendered CT images was helpful for the planning and successful performance of the procedure, but coloured surface-rendered CT provided more convenient, immediate and accurate intraoperative information. [source]


Cardiovascular computed tomographic angiography evaluation following unsuccessful invasive angiography: The clinical utility of 3D volume rendering

CATHETERIZATION AND CARDIOVASCULAR INTERVENTIONS, Issue 5 2010
Ambarish Gopal MD
Abstract In an appropriate clinical setting, cardiac CT angiography (CCT) can be used as a safe and effective noninvasive imaging modality for defining coronary arterial anatomy by providing detailed three-dimensional anatomic information that may be difficult to obtain with invasive coronary angiography (ICA). We present a patient where coronary angiography by ICA was unsuccessful and where the subsequent CCT proved to be very useful in providing us relevant information. © 2009 Wiley-Liss, Inc. [source]


Imaging microscopy of the middle and inner ear: Part I: CT microscopy

CLINICAL ANATOMY, Issue 8 2004
John I. Lane
Abstract Anatomic definition of the middle ear and bony labyrinth in the clinical setting remains limited despite significant technological advances in computed tomography (CT). Recent developments in ultra-high resolution imaging for use in the research laboratory on small animals and pathologic specimens have given rise to the field of imaging microscopy. We have taken advantage of this technique to image a human temporal bone cadaver specimen to delineate middle ear and labyrinthine structures, only seen previously using standard light microscopy. This approach to the study of the inner ear avoids tissue destruction inherent in histological preparations. We present high-resolution MicroCT images of the middle ear and bony labyrinth to highlight the utility of this technique in teaching radiologists and otolaryngologists clinically relevant temporal bone anatomy. This study is not meant to function as a complete anatomic atlas of the temporal bone. We have selected several structures that are routinely delineated on clinical scanners to highlight the utility of imaging microscopy in displaying critical anatomic relationships in three orthogonal planes. These anatomic relationships can be further enhanced using 3D volume rendering. Clin. Anat. 17:607–612, 2004. © 2004 Wiley-Liss, Inc. [source]


Coherence aware GPU-based ray casting for virtual colonoscopy

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2009
Taek Hee Lee
Abstract In this paper, we propose GPU-based volume ray casting for virtual colonoscopy that generates high-quality rendered images at a large screen size. Using temporal coherence for ray casting, empty-space leaping can be done efficiently by reprojecting the first-hit points of the previous frame; however, these approaches can produce artifacts such as holes or illegal starting positions due to the insufficient resolution of the first-hit points. To eliminate these artifacts, we use a triangle mesh of first-hit points and check the intersection of each triangle with the corresponding real surface. Illegal starting positions can be avoided by replacing a false triangle cutting the real surface with five newly generated triangles. The proposed algorithm is well suited to the recent GPU architecture with Shader Model 4.0, which supports not only fast rasterization of a triangle mesh but also many flexible vertex operations. Experimental results on an ATI 2900 with DirectX 10 show perspective volume renderings of over 24 fps at a 1024 × 1024 screen size without any loss of image quality. Copyright © 2008 John Wiley & Sons, Ltd. [source]


Septation of the anorectal and genitourinary tracts in the human embryo: Crucial role of the catenoidal shape of the urorectal sulcus

BIRTH DEFECTS RESEARCH, Issue 4 2002
Daniel S. Rogers
Background Previous studies of the tracheoesophageal sulcus and the sulci of the developing heart have suggested that the catenoidal or saddle-shaped configuration of the sulcus had mechanical properties that were important to developmental processes by causing regional growth limitation. We examined the development of the human perineal region to determine if a similar configuration exists in relation to the urorectal septum. We wished to re-examine the controversial issue of the role of the urorectal sulcus in the partitioning of the cloaca. Methods Digitally scanned photomicrographs of serial histologic sections of embryos from Carnegie stages 13, 15, 18, and 22, obtained from the Carnegie Embryological Collection, were used. Each image was digitally stacked, aligned, and isolated using image-editing software. Images were compiled using 3-D image-visualization software (T-Vox), into full 3-D voxel-based volume renderings. Similarly, digital models were made of the urogenital sinus, anorectum, cloaca, allantois, mesonephric ducts, ureters, and kidneys by isolating their associated epithelium in each histologic section and compiling the data in T-Vox. Methods were developed to create registration models for determining the exact position and orientation of outlined structures within the embryos. Results Models were oriented and resectioned to determine the configuration of the urorectal sulcus. The results show that the urorectal sulcus maintains a catenoidal configuration during the developmental period studied and, thus, would be expected to limit caudal growth of the urorectal septum. Conclusion The observations support the concept that the urorectal septum is a passive structure that does not actively divide the cloaca into urogenital and anorectal components. Teratology 66:144–152, 2002. © 2002 Wiley-Liss, Inc. [source]