
Kinds of Rendering

  • surface rendering
  • volume rendering

Terms modified by Rendering

  • rendering system
  • rendering techniques

Selected Abstracts

    Sparsely Precomputing The Light Transport Matrix for Real-Time Rendering

    Fu-Chung Huang
    Precomputation-based methods have enabled real-time rendering with natural illumination, all-frequency shadows, and global illumination. However, a major bottleneck is the precomputation time, which can take hours to days. While the final real-time data structures are typically heavily compressed with clustered principal component analysis and/or wavelets, a full light transport matrix still needs to be precomputed for a synthetic scene, often by exhaustive sampling and raytracing. This is expensive and makes rapid prototyping of new scenes prohibitive. In this paper, we show that the precomputation can be made much more efficient by adaptive and sparse sampling of light transport. We first select a small subset of "dense vertices", where we sample the angular dimensions more completely (but still adaptively). The remaining "sparse vertices" require only a few angular samples, isolating features of the light transport. They can then be interpolated from nearby dense vertices using locally low-rank approximations. We demonstrate sparse sampling and precomputation 5× faster than previous methods. [source]

    Anomalous Dispersion in Predictive Rendering

    Andrea Weidlich
    Abstract In coloured media, the index of refraction does not decrease monotonically with increasing wavelength, but behaves in a distinctly non-monotonic way. This behaviour is called anomalous dispersion and results from the fact that the absorption of a material influences its index of refraction. So far, this interesting fact has not been widely acknowledged by the graphics community. In this paper, we demonstrate how to calculate the correct refractive index for a material from its absorption spectrum with the Kramers-Kronig relation, and we discuss for which types of objects this effect is relevant in practice. [source]
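The Kramers-Kronig relation the abstract refers to links the real part of the refractive index to the absorption (extinction) spectrum. A minimal numerical sketch is below; the function name, the uniform-grid assumption, and the crude principal-value treatment (skipping the singular sample) are all illustrative, not the paper's method.

```python
import numpy as np

def kk_refractive_index(omega, kappa):
    """Real refractive index from the extinction coefficient kappa(omega)
    via a discretized Kramers-Kronig relation:

        n(w) - 1 = (2/pi) * P.V. integral of w' k(w') / (w'^2 - w^2) dw'

    The principal value is approximated by skipping the singular sample;
    a uniform frequency grid is assumed.
    """
    omega = np.asarray(omega, dtype=float)
    kappa = np.asarray(kappa, dtype=float)
    d_omega = omega[1] - omega[0]              # uniform grid assumed
    n = np.ones_like(omega)
    for i, w in enumerate(omega):
        mask = np.arange(len(omega)) != i      # crude P.V.: drop the pole
        integrand = omega[mask] * kappa[mask] / (omega[mask] ** 2 - w ** 2)
        n[i] += (2.0 / np.pi) * np.sum(integrand) * d_omega
    return n
```

With a single absorption line, the resulting n(omega) rises and falls around the line instead of varying monotonically, which is exactly the anomalous-dispersion behaviour described above.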

    Gradient-based Interpolation and Sampling for Real-time Rendering of Inhomogeneous, Single-scattering Media

    Zhong Ren
    Abstract We present a real-time rendering algorithm for inhomogeneous, single scattering media, where all-frequency shading effects such as glows, light shafts, and volumetric shadows can all be captured. The algorithm first computes source radiance at a small number of sample points in the medium, then interpolates these values at other points in the volume using a gradient-based scheme that is efficiently applied by sample splatting. The sample points are dynamically determined based on a recursive sample splitting procedure that adapts the number and locations of sample points for accurate and efficient reproduction of shading variations in the medium. The entire pipeline can be easily implemented on the GPU to achieve real-time performance for dynamic lighting and scenes. Rendering results of our method are shown to be comparable to those from ray tracing. [source]
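The gradient-based interpolation step described above can be sketched as a first-order blend: each sample contributes its value extrapolated by its gradient, weighted by proximity. This schematic Python stands in for the paper's splatting-based GPU scheme; all names and the inverse-distance weighting are illustrative assumptions.

```python
import numpy as np

def gradient_interpolate(x, samples, values, gradients, eps=1e-8):
    """Interpolate a scalar field (e.g. source radiance) at point x from
    sparse samples, using value-plus-gradient (first-order Taylor)
    extrapolation blended with inverse-squared-distance weights."""
    x = np.asarray(x, dtype=float)
    acc, w_sum = 0.0, 0.0
    for p, v, g in zip(samples, values, gradients):
        d = x - np.asarray(p, dtype=float)
        w = 1.0 / (np.dot(d, d) + eps)         # nearer samples dominate
        acc += w * (v + np.dot(np.asarray(g, dtype=float), d))
        w_sum += w
    return acc / w_sum
```

Because each sample extrapolates with its gradient, a linear field is reproduced exactly from any sample set, which is why far fewer samples suffice than with plain value interpolation.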

    Real-Time Depth-of-Field Rendering Using Point Splatting on Per-Pixel Layers

    Sungkil Lee
    Abstract We present a real-time method for rendering a depth-of-field effect based on the per-pixel layered splatting where source pixels are scattered on one of the three layers of a destination pixel. In addition, the missing information behind foreground objects is filled with an additional image of the areas occluded by nearer objects. The method creates high-quality depth-of-field results even in the presence of partial occlusion, without major artifacts often present in the previous real-time methods. The method can also be applied to simulating defocused highlights. The entire framework is accelerated by GPU, enabling real-time post-processing for both off-line and interactive applications. [source]
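In splatting-based depth-of-field methods like the one above, the footprint over which a source pixel is scattered is typically sized by its circle of confusion. A minimal thin-lens sketch follows; the function and parameter names are illustrative and not the paper's notation.

```python
def coc_diameter(z, focus_dist, focal_len, aperture):
    """Thin-lens circle-of-confusion diameter for an object at depth z
    (all distances in the same units, z and focus_dist beyond the lens).

    Pixels at the focus distance have a zero-sized circle of confusion;
    the diameter grows as depth departs from the focal plane, which is
    what drives how far a source pixel is splatted.
    """
    return aperture * abs(z - focus_dist) / z * focal_len / (focus_dist - focal_len)
```

A layered splatter would bucket source pixels into (for example) foreground, in-focus, and background layers by comparing z against the focus distance, then scatter each pixel over a disc of this diameter.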

    Ptex: Per-Face Texture Mapping for Production Rendering

    Brent Burley
    Explicit parameterization of subdivision surfaces for texture mapping adds significant cost and complexity to film production. Most parameterization methods currently in use require setup effort, and none are completely general. We propose a new texture mapping method for Catmull-Clark subdivision surfaces that requires no explicit parameterization. Our method, Ptex, stores a separate texture per quad face of the subdivision control mesh, along with a novel per-face adjacency map, in a single texture file per surface. Ptex uses the adjacency data to perform seamless anisotropic filtering of multi-resolution textures across surfaces of arbitrary topology. Just as importantly, Ptex requires no manual setup and scales to models of arbitrary mesh complexity and texture detail. Ptex has been successfully used to texture all of the models in an animated theatrical short and is currently being applied to an entire animated feature. Ptex has eliminated UV assignment from our studio and significantly increased the efficiency of our pipeline. [source]

    Interaction-Dependent Semantics for Illustrative Volume Rendering

    Peter Rautek
    In traditional illustration the choice of appropriate styles and rendering techniques is guided by the intention of the artist. For illustrative volume visualizations it is difficult to specify the mapping between the 3D data and the visual representation that preserves the intention of the user. The semantic layers concept establishes this mapping with a linguistic formulation of rules that directly map data features to rendering styles. With semantic layers, fuzzy logic is used to evaluate the user-defined illustration rules in a preprocessing step. In this paper we introduce interaction-dependent rules that are evaluated for each frame and are therefore computationally more expensive. Enabling interaction-dependent rules, however, allows the use of a new class of semantics, resulting in more expressive interactive illustrations. We show that the evaluation of the fuzzy logic can be done on the graphics hardware, enabling the efficient use of interaction-dependent semantics. Furthermore, we introduce the flat rendering mode and discuss how different rendering parameters are influenced by the rule base. Our approach provides high-quality illustrative volume renderings at interactive frame rates, guided by the specification of illustration rules. [source]

    Interactive Volume Rendering with Dynamic Ambient Occlusion and Color Bleeding

    Timo Ropinski
    Abstract We propose a method for rendering volumetric data sets at interactive frame rates while supporting dynamic ambient occlusion as well as an approximation to color bleeding. In contrast to ambient occlusion approaches for polygonal data, techniques for volumetric data sets have to face additional challenges, since by changing rendering parameters, such as the transfer function or the thresholding, the structure of the data set and thus the light interactions may vary drastically. Therefore, during a preprocessing step which is independent of the rendering parameters we capture light interactions for all combinations of structures extractable from a volumetric data set. In order to compute the light interactions between the different structures, we combine this preprocessed information during rendering based on the rendering parameters defined interactively by the user. Thus our method supports interactive exploration of a volumetric data set but still gives the user control over the most important rendering parameters. For instance, if the user alters the transfer function to extract different structures from a volumetric data set the light interactions between the extracted structures are captured in the rendering while still allowing interactive frame rates. Compared to known local illumination models for volume rendering our method does not introduce any substantial rendering overhead and can be integrated easily into existing volume rendering applications. In this paper we will explain our approach, discuss the implications for interactive volume rendering and present the achieved results. [source]

    Dynamic Sampling and Rendering of Algebraic Point Set Surfaces

    Gaël Guennebaud
    Abstract Algebraic Point Set Surfaces (APSS) define a smooth surface from a set of points using local moving least-squares (MLS) fitting of algebraic spheres. In this paper we first revisit the spherical fitting problem and provide a new, more generic solution that includes intuitive parameters for curvature control of the fitted spheres. As a second contribution we present a novel real-time rendering system of such surfaces using a dynamic up-sampling strategy combined with a conventional splatting algorithm for high quality rendering. Our approach also includes a new view dependent geometric error tailored to efficient and adaptive up-sampling of the surface. One of the key features of our system is its high degree of flexibility that enables us to achieve high performance even for highly dynamic data or complex models by exploiting temporal coherence at the primitive level. We also address the issue of efficient spatial search data structures with respect to construction, access and GPU friendliness. Finally, we present an efficient parallel GPU implementation of the algorithms and search structures. [source]

    Volume and Isosurface Rendering with GPU-Accelerated Cell Projection

    R. Marroquim
    Abstract We present an efficient Graphics Processing Unit (GPU) based implementation of the Projected Tetrahedra (PT) algorithm. By reducing most of the CPU-GPU data transfer, the algorithm achieves interactive frame rates (up to 2.0 M Tets/s) on current graphics hardware. Since no topology information is stored, it requires substantially less memory than recent interactive ray casting approaches. The method uses a two-pass GPU approach with two fragment shaders. This work includes extended volume inspection capabilities by supporting interactive transfer function editing and isosurface highlighting using a Phong illumination model. [source]

    Data Preparation for Real-time High Quality Rendering of Complex Models

    Reinhard Klein
    The capability of current 3D acquisition systems to digitize the geometry and reflection behaviour of objects, as well as the sophisticated application of CAD techniques, leads to rapidly growing digital models which pose new challenges for interaction and visualization. Due to the sheer size of the geometry as well as the texture and reflection data, which are often in the range of several gigabytes, efficient techniques for analyzing, compressing and rendering are needed. In this talk I will present some of the research we did in our graphics group over the past years, motivated by industrial partners, in order to automate the data preparation step and allow for real-time high-quality rendering, e.g. in the context of VR applications. Strengths and limitations of the different techniques will be discussed and future challenges will be identified. The presentation will go along with live demonstrations. [source]

    Temporally Coherent Irradiance Caching for High Quality Animation Rendering

    Miłosław Smyk
    First page of article [source]

    Instant Volumetric Understanding with Order-Independent Volume Rendering

    Benjamin Mora
    Rapid, visual understanding of volumetric datasets is a crucial outcome of a good volume rendering application, but few current volume rendering systems deliver this result. Our goal is to reduce the volumetric surfing that is required to understand volumetric features by conveying more information in fewer images. In order to achieve this goal, and in contrast with most current methods which still use optical models and alpha blending, our approach reintroduces the order-independent contribution of every sample along the ray in order to have an equiprobable visualization of all the volume samples. Therefore, we demonstrate how order-independent sampling can be suitable for fast volume understanding, show useful extensions to MIP and X-ray-like renderings, and, finally, point out the special advantage of using stereo visualization in these models to circumvent the lack of depth cues. Categories and Subject Descriptors: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism. [source]
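The order-independent combinations the abstract contrasts with alpha blending can be illustrated minimally: an X-ray-like average and a maximum-intensity projection (MIP) both give every sample along the ray an equal, order-independent chance to contribute. A small sketch (names illustrative):

```python
import numpy as np

def xray_and_mip(samples):
    """Order-independent combinations of the scalar samples along one
    viewing ray: the mean (X-ray-like) and the maximum (MIP).

    Unlike front-to-back alpha blending, neither result changes if the
    samples are visited in a different order.
    """
    s = np.asarray(samples, dtype=float)
    return s.mean(), s.max()
```

Reversing the sample order leaves both results unchanged, whereas alpha compositing the same samples front-to-back versus back-to-front generally would not.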

    Hardware-Accelerated Rendering of Photo Hulls

    Ming Li
    This paper presents an efficient hardware-accelerated method for novel view synthesis from a set of images or videos. Our method is based on the photo hull representation, which is the maximal photo-consistent shape. We avoid the explicit reconstruction of photo hulls by adopting a view-dependent plane-sweeping strategy. From the target viewpoint slicing planes are rendered with reference views projected onto them. Graphics hardware is exploited to verify the photo-consistency of each rasterized fragment. Visibilities with respect to reference views are properly modeled, and only photo-consistent fragments are kept and colored in the target view. We present experiments with real images and animation sequences. Thanks to the more accurate shape of the photo hull representation, our method generates more realistic rendering results than methods based on visual hulls. Currently, we achieve rendering frame rates of 2-3 fps. Compared to a pure software implementation, the performance of our hardware-accelerated method is approximately 7 times faster. Categories and Subject Descriptors (according to ACM CCS): CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism. [source]

    Dynamic Textures for Image-based Rendering of Fine-Scale 3D Structure and Animation of Non-rigid Motion

    Dana Cobzaş
    The problem of capturing real world scenes and then accurately rendering them is particularly difficult for fine-scale 3D structure. Similarly, it is difficult to capture, model and animate non-rigid motion. We present a method where small image changes are captured as a time-varying (dynamic) texture. In particular, a coarse geometry is obtained from a sample set of images using structure from motion. This geometry is then used to subdivide the scene and to extract approximately stabilized texture patches. The residual statistical variability in the texture patches is captured using a PCA basis of spatial filters. The filter coefficients are parameterized by camera pose and object motion. To render new poses and motions, new texture patches are synthesized by modulating the texture basis. The texture is then warped back onto the coarse geometry. We demonstrate how the texture modulation and projective homography-based warps can be achieved in real-time using hardware-accelerated OpenGL. Experiments comparing dynamic texture modulation to standard texturing are presented for objects with complex geometry (a flower) and non-rigid motion (human arm motion, capturing the non-rigidities in the joints and the creasing of the shirt). Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Image Based Rendering [source]

    Artistic Surface Rendering Using Layout of Text

    Tatiana Surazhsky
    An artistic rendering method for free-form surfaces, aided by half-toned text laid out on the given surface, is presented. The layout of the text is computed using symbolic composition of the free-form parametric surface S(u, v) with cubic or linear Bézier curve segments C(t) = (cu(t), cv(t)) comprising the outline of the text symbols. Once the layout is constructed on the surface, a shading process is applied to the text, affecting the width of the symbols as well as their color, according to some shader function. The shader function depends on the surface orientation and the view direction, as well as the color and the direction or position of the light source. [source]
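The core composition idea above, mapping a glyph-outline curve C(t) in the (u, v) parameter plane through the surface S(u, v), can be sketched numerically. The paper performs the composition symbolically; this point-sampled Python stand-in (all names illustrative) only demonstrates the idea.

```python
import numpy as np

def bezier2d(ctrl, t):
    """Evaluate a cubic Bezier segment C(t) = (c_u(t), c_v(t)) in the
    surface's (u, v) parameter plane from four 2D control points."""
    c = np.asarray(ctrl, dtype=float)            # shape (4, 2)
    u = 1.0 - t
    return u**3 * c[0] + 3*u**2*t * c[1] + 3*u*t**2 * c[2] + t**3 * c[3]

def layout_on_surface(S, ctrl, n=32):
    """Compose a glyph-outline Bezier segment with a parametric surface:
    sample C(t) and push each (u, v) sample through S(u, v) to get a
    polyline lying on the surface."""
    ts = np.linspace(0.0, 1.0, n)
    return np.array([S(*bezier2d(ctrl, t)) for t in ts])
```

For a curved S the resulting polyline follows the surface, so the text outline bends with the geometry rather than being projected flat.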

    Rendering: Input and Output

    H. Rushmeier
    Rendering is the process of creating an image from numerical input data. In the past few years our ideas about methods for acquiring the input data and the form of the output have expanded. The availability of inexpensive cameras and scanners has influenced how we can obtain data needed for rendering. Input for rendering ranges from sets of images to complex geometric descriptions with detailed BRDF data. The images that are rendered may be simply arrays of RGB values, or they may be arrays with vectors or matrices of data defined for each pixel. The rendered images may not be intended for direct display, but may be textures for geometries that are to be transmitted to be rendered on another system. A broader range of parameters now needs to be taken into account to render images that are perceptually consistent across displays that range from CAVEs to personal digital assistants. This presentation will give an overview of how new hardware and new applications have changed traditional ideas of rendering input and output. [source]

    Fast Volume Rendering and Data Classification Using Multiresolution in Min-Max Octrees

    Feng Dong
    Large-sized volume datasets have recently become commonplace and users are now demanding that the volume-rendering techniques used to visualise such data provide acceptable results on relatively modest computing platforms. The widespread use of the Internet for the transmission and/or rendering of volume data is also exerting increasing demands on software providers. Multiresolution can address these issues in an elegant way. One of the fastest volume-rendering algorithms is that proposed by Lacroute & Levoy [1], which is based on shear-warp factorisation and min-max octrees (MMOs). Unfortunately, since an MMO captures only a single resolution of a volume dataset, this method is unsuitable for rendering datasets in a multiresolution form. This paper adapts the above algorithm to multiresolution volume rendering to enable near-real-time interaction to take place on a standard PC. It also permits the user to modify classification functions and/or resolution during rendering with no significant loss of rendering speed. A newly developed data structure based on the MMO is employed, the multiresolution min-max octree (M³O), which captures the spatial coherence of datasets at all resolutions. Speed is enhanced by the use of multiresolution opacity transfer functions for rapidly determining and discarding transparent dataset regions. Some experimental results on sample volume datasets are presented. [source]
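The min-max octree idea underlying the abstract is simple: each node stores only the minimum and maximum scalar value of its brick, so a renderer can discard a whole subtree whenever the opacity transfer function is zero over [vmin, vmax]. A simplified single-node sketch is below; it is not the paper's M³O structure, and the sampled transparency test is an illustrative approximation (a real implementation would use a summed or integrated transfer-function table).

```python
import numpy as np

class MinMaxNode:
    """One node of a min-max octree over a brick of volume samples.

    Storing only (vmin, vmax) per node enables empty-space skipping:
    if the opacity transfer function is zero everywhere on the value
    range [vmin, vmax], no sample in the brick can be visible.
    """
    def __init__(self, brick):
        b = np.asarray(brick, dtype=float)
        self.vmin, self.vmax = float(b.min()), float(b.max())

    def is_transparent(self, opacity_tf, n_probe=16):
        # opacity_tf maps a scalar value to opacity in [0, 1]; probing
        # the value range is an approximate stand-in for a proper
        # precomputed transparency table over the transfer function.
        vs = np.linspace(self.vmin, self.vmax, n_probe)
        return all(opacity_tf(v) == 0.0 for v in vs)
```

Because the test depends only on the value range, changing the transfer function interactively never requires rebuilding the octree, only re-querying it, which is what makes classification changes cheap.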

    Incidence and cytological features of pulmonary hamartomas indeterminate on CT scan

    CYTOPATHOLOGY, Issue 3 2008
    A. Saqi
    Objective: Pulmonary hamartomas have a characteristic heterogeneous radiological appearance. However, when composed predominantly of an undifferentiated mesenchymal fibromyxoid component, their homogeneous appearance on computed tomography is indeterminate for malignancy. Rendering an accurate preoperative diagnosis in these cases can alter management. The aim of this study was to determine the incidence and accuracy of cytodiagnosis for hamartomas 'indeterminate' by imaging. Methods: We retrospectively reviewed records for hamartomas diagnosed by transthoracic fine needle aspiration (FNA), including immediate impressions and final diagnoses. Cytological features evaluated included the presence of fibromyxoid stroma, bronchioloalveolar cell hyperplasia, fibroadipose tissue, cartilage and smooth muscle. Results: Eighteen (1.3%) hamartomas were identified from 1355 transthoracic FNAs. The immediate impression was hamartoma in 13 (72%), carcinoid in one (6%), mucinous bronchioloalveolar carcinoma in two (11%) and non-diagnostic in two (11%). The final diagnosis of hamartoma in cases diagnosed as carcinoid, mucinous bronchioloalveolar carcinoma and non-diagnostic on immediate impression was rendered following assessment of all cytological material. Conclusion: Overall, FNAs are highly reliable for diagnosing hamartomas even when composed principally of undifferentiated mesenchymal fibromyxoid stroma, especially with the aid of all available preparations including Diff-Quik smears, Papanicolaou smears, ThinPreps and cell block material. [source]

    Rendering the World Unsafe: 'Vulnerability' as Western Discourse

    DISASTERS, Issue 1 2001
    Gregory Bankoff
    Disasters seem destined to be major issues of academic enquiry in the new century, if for no other reason than that they are inseparably linked to questions of environmental conservation, resource depletion and migration patterns in an increasingly globalised world. Unfortunately, inadequate attention has been directed at considering the historical roots of the discursive framework within which hazard is generally presented, and how that might reflect particular cultural values to do with the way in which certain regions or zones of the world are usually imagined. This paper argues that tropicality, development and vulnerability form part of one and the same essentialising and generalising cultural discourse that denigrates large regions of the world as disease-ridden, poverty-stricken and disaster-prone. [source]

    Rendering "More Equal": Eve's Changing Discourse in Paradise Lost

    MILTON QUARTERLY, Issue 3 2003
    Elisabeth Liebert
    First page of article [source]

    Hermeneutics of Translation: A Critical Consideration of the Term Dao in Two Renderings of the Analects

    Marc Andre Matten

    Stable stylized wireframe rendering

    Chen Tang
    Abstract Stylized wireframe rendering of 3D models is widely used in animation software in order to depict the configuration of deformable models in comprehensible ways. However, owing to inherent flaws in traditional depth-test-based rendering, the shape of lines cannot be preserved under continuous movement or deformation of models. Severe aliasing, such as flickering artifacts, often appears when objects rendered as lines animate, especially with thick or dashed lines. To eliminate this artifact, unlike traditional approaches, we propose a novel, fast line-drawing method with high visual fidelity for wireframe depiction that depends only on the intrinsic topology of primitives, without any preprocessing step or pre-stored adjacency information. In contrast to previous widely used solutions, our method offers highly accurate visibility and a clear, stable line appearance without flickering, even for thick and dashed lines, with uniform width and a steady configuration as the model moves or animates, making it well suited to animation systems. In addition, our approach can be easily implemented and controlled without any additional pre-estimated parameters supplied by users. Copyright © 2010 John Wiley & Sons, Ltd. [source]

    GPU-based interactive visualization framework for ultrasound datasets

    Sukhyun Lim
    Abstract Ultrasound imaging is widely used in medical areas. By transmitting ultrasound signals into the human body, their echoed signals can be rendered to represent the shape of internal organs. Although its image quality is inferior to that of CT or MR, ultrasound is widely used for its speed and reasonable cost. Volume rendering techniques provide methods for rendering a 3D volume dataset intuitively. We present a visualization framework for ultrasound datasets that uses programmable graphics hardware. For this, we convert ultrasound coordinates into Cartesian form. Since the physical storage and representation spaces of ultrasound datasets differ, however, we adaptively apply different sampling intervals for each ray. In addition, we exploit multiple filtered datasets in order to reduce noise; with our method, an adequate filter size can be determined without manual tuning. As a result, our approach enables interactive volume rendering of ultrasound datasets using a consumer-level PC. Copyright © 2009 John Wiley & Sons, Ltd. [source]

    Generalized minimum-norm perspective shadow maps

    Fan Zhang
    Abstract Shadow mapping has been extensively used for real-time shadow rendering in 3D computer games, though it suffers from the inherent aliasing problems due to its image-based nature. This paper presents an enhanced variant of light space perspective shadow maps to optimize perspective aliasing distribution in possible general cases where the light and view directions are not orthogonal. To be mathematically sound, the generalized representation of perspective aliasing errors has been derived in detail. Our experiments have shown the enhanced shadow quality using our algorithm in dynamic scenes. Copyright © 2008 John Wiley & Sons, Ltd. [source]

    Myriad: scalable VR via peer-to-peer connectivity, PC clustering, and transient inconsistency

    Benjamin Schaeffer
    Abstract Distributed scene graphs are important in virtual reality, both in collaborative virtual environments and in cluster rendering. Modern scalable visualization systems have high local throughput, but collaborative virtual environments (VEs) over a wide-area network (WAN) share data at much lower rates. This complicates the use of one scene graph across the whole application. Myriad is an extension of the Syzygy VR toolkit in which individual scene graphs form a peer-to-peer network. Myriad connections filter scene graph updates and create flexible relationships between nodes of the scene graph. Myriad's sharing is fine-grained: the properties of individual scene graph nodes to share are dynamically specified (in C++ or Python). Myriad permits transient inconsistency, relaxing resource requirements in collaborative VEs. A test application, WorldWideCrowd, demonstrates collaborative prototyping of a 300-avatar crowd animation viewed on two PC-cluster displays and edited on low-powered laptops, desktops, and over a WAN. We have further used our framework to facilitate collaborative educational experiences and as a vehicle for undergraduates to experiment with shared virtual worlds. Copyright © 2006 John Wiley & Sons, Ltd. [source]

    Real-time navigating crowds: scalable simulation and rendering

    Julien Pettré
    Abstract This paper introduces a framework for real-time simulation and rendering of crowds navigating in a virtual environment. The solution first consists of a specific environment preprocessing technique giving rise to navigation graphs, which are then used by the navigation and simulation tasks. Second, navigation planning interactively provides various solutions to user queries, allowing a crowd to be spread by individualizing trajectories. A scalable simulation model enables the management of large crowds while saving computation time for rendering tasks. Pedestrian graphical models are divided into three rendering fidelities ranging from billboards to dynamic meshes, allowing close-up views of detailed digital actors with a large variety of locomotion animations. Examples illustrate our method in several environments with crowds of up to 35,000 pedestrians with real-time performance. Copyright © 2006 John Wiley & Sons, Ltd. [source]

    Real-time cartoon animation of smoke

    Haitao He
    Abstract In this paper, we present a practical framework to generate cartoon-style animations of smoke, which consists of two components: a smoke simulator and a rendering system. In the simulation stage, the smoke is modelled as a set of smoothed particles, and physical parameters such as velocity and force are defined directly on the particles. The smoke is rendered in flicker-free cartoon style with two-tone shading and silhouettes. Both the simulation and rendering are intuitive and easy to implement. In a moderate-scale scene, an impressive cartoon animation is generated with about a thousand particles at real-time frame rates. Copyright © 2005 John Wiley & Sons, Ltd. [source]

    Multiple path-based approach to image-based street walkthrough

    Dong Hoon Lee
    Abstract Image-based rendering for walkthroughs in virtual environments has many advantages over the geometry-based approach, due to fast construction of the environment and photo-realistic rendered results. In image-based rendering techniques, rays from a set of input images are collected and a novel view image is rendered by resampling the stored rays. Such techniques, however, are currently limited to a closed capture space. In this paper, we propose a multiple-path-based capture configuration that can handle a large-scale scene, and a disparity-based warping method for novel view generation. To acquire the disparity image, we segment the input image into vertical slit segments using a robust and inexpensive way of detecting vertical depth discontinuities. The depth slit segments, instead of depth pixels, reduce the processing time for novel view generation. We also discuss a dynamic cache strategy that supports real-time walkthroughs in large and complex street environments. The efficiency of the proposed method is demonstrated with several experiments. Copyright © 2005 John Wiley & Sons, Ltd. [source]

    Mixing virtual and real scenes in the site of ancient Pompeii

    George Papagiannakis
    Abstract This paper presents an innovative 3D reconstruction of ancient fresco paintings through the real-time revival of their fauna and flora, featuring groups of virtual animated characters with artificial-life dramaturgical behaviours in an immersive, fully mobile augmented reality (AR) environment. The main goal is to push the limits of current AR and virtual storytelling technologies and to explore the processes of mixed narrative design of fictional spaces (e.g. fresco paintings) where visitors can experience a high degree of realistic immersion. Based on a captured/real-time video sequence of the real scene in a video-see-through HMD set-up, these scenes are enhanced by the seamless accurate real-time registration and 3D rendering of realistic complete simulations of virtual flora and fauna (virtual humans and plants) in a real-time storytelling scenario-based environment. Thus the visitor of the ancient site is presented with an immersive and innovative multi-sensory interactive trip to the past. Copyright © 2005 John Wiley & Sons, Ltd. [source]

    A framework for fusion methods and rendering techniques of multimodal volume data

    Maria Ferre
    Abstract Many different direct volume rendering methods have been developed to visualize 3D scalar fields on uniform rectilinear grids. However, little work has been done on rendering simultaneously various properties of the same 3D region measured with different registration devices or at different instants of time. The demand for this type of visualization is rapidly increasing in scientific applications such as medicine in which the visual integration of multiple modalities allows a better comprehension of the anatomy and a perception of its relationships with activity. This paper presents different strategies of direct multimodal volume rendering (DMVR). It is restricted to voxel models with a known 3D rigid alignment transformation. The paper evaluates at which steps of the rendering pipeline the data fusion must be realized in order to accomplish the desired visual integration and to provide fast re-renders when some fusion parameters are modified. In addition, it analyses how existing monomodal visualization algorithms can be extended to multiple datasets and it compares their efficiency and their computational cost. Copyright © 2004 John Wiley & Sons, Ltd. [source]