Hardware

Kinds of Hardware

  • computer hardware
  • graphics hardware
  • programmable graphics hardware
  • scanner hardware

Terms modified by Hardware

  • hardware architecture
  • hardware component
  • hardware implementation
  • hardware structure

Selected Abstracts


    Practical CFD Simulations on Programmable Graphics Hardware using SMAC

    COMPUTER GRAPHICS FORUM, Issue 4 2005
    Carlos E. Scheidegger
    Abstract The explosive growth in integration technology and the parallel nature of rasterization-based graphics APIs (Application Programming Interfaces) changed the panorama of consumer-level graphics: today, GPUs (Graphics Processing Units) are cheap, fast and ubiquitous. We show how to harness the computational power of GPUs and solve the incompressible Navier-Stokes fluid equations significantly faster (more than one order of magnitude on average) than on CPU solvers of comparable cost. While past approaches typically used Stam's implicit solver, we use a variation of SMAC (Simplified Marker and Cell). SMAC is widely used in engineering applications, where experimental reproducibility is essential. Thus, we show that the GPU is a viable and affordable processor for scientific applications. Our solver works with general rectangular domains (possibly with obstacles), implements a variety of boundary conditions and incorporates energy transport through the traditional Boussinesq approximation. Finally, we discuss the implications of our solver in light of future GPU features, and possible extensions such as three-dimensional domains and free-boundary problems. [source]
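The core of a MAC-type solver such as SMAC is the projection step that enforces incompressibility. Below is a minimal CPU-side sketch of that step, not the paper's GPU implementation: it assumes a periodic collocated grid, forward-difference divergence with the matching backward-difference pressure gradient, and a plain Jacobi solve of the pressure Poisson equation.

```python
import numpy as np

def divergence(u, v, h):
    """Forward-difference divergence on a periodic grid (MAC-like staggering)."""
    return ((np.roll(u, -1, 1) - u) + (np.roll(v, -1, 0) - v)) / h

def project(u, v, h, iters=400):
    """Make (u, v) approximately divergence-free:
    solve lap(p) = div(u, v) by Jacobi iteration, then subtract grad(p)."""
    div = divergence(u, v, h)
    p = np.zeros_like(u)
    for _ in range(iters):  # one Jacobi sweep == one render pass on the GPU
        p = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
             np.roll(p, 1, 1) + np.roll(p, -1, 1) - h * h * div) / 4.0
    # backward-difference gradient, the discrete adjoint of the divergence above
    u = u - (p - np.roll(p, 1, 1)) / h
    v = v - (p - np.roll(p, 1, 0)) / h
    return u, v
```

On graphics hardware each Jacobi sweep becomes a fragment-shader pass over a texture holding p, which is where the order-of-magnitude speedup comes from.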


    SIMD Optimization of Linear Expressions for Programmable Graphics Hardware

    COMPUTER GRAPHICS FORUM, Issue 4 2004
    Chandrajit Bajaj
    Abstract The increased programmability of graphics hardware allows efficient graphics processing unit (GPU) implementations of a wide range of general computations on commodity PCs. An important factor in such implementations is how to fully exploit the SIMD computing capacities offered by modern graphics processors. Linear expressions of the form y = Ax + b, where A is a matrix and x and b are vectors, constitute one of the most basic operations in many scientific computations. In this paper, we propose a SIMD code optimization technique that enables efficient shader codes to be generated for evaluating linear expressions. It is shown that performance can be improved considerably by efficiently packing arithmetic operations into four-wide SIMD instructions through reordering of the operations in linear expressions. We demonstrate that the presented technique can be used effectively for programming both vertex and pixel shaders for a variety of mathematical applications, including integrating differential equations and solving a sparse linear system of equations using iterative methods. [source]
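The packing idea can be illustrated on the CPU: accumulate y = Ax + b four output components at a time, so each inner update corresponds to one vec4 multiply-add instruction slot. This is only a sketch of the grouping; the paper generates actual shader code.

```python
import numpy as np

def linexpr_simd4(A, x, b):
    """Evaluate y = A @ x + b, accumulating four output components per update
    to mimic one four-wide SIMD multiply-add per instruction slot."""
    y = b.astype(float).copy()
    for j in range(A.shape[1]):              # x[j] broadcast to a vec4 register
        for i in range(0, A.shape[0], 4):    # four outputs per packed madd
            y[i:i+4] += A[i:i+4, j] * x[j]
    return y
```

Reordering the operations so that full four-wide groups dominate is exactly what determines shader efficiency here.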


    DiFi: Fast 3D Distance Field Computation Using Graphics Hardware

    COMPUTER GRAPHICS FORUM, Issue 3 2004
    Avneesh Sud
    We present an algorithm for fast computation of discretized 3D distance fields using graphics hardware. Given a set of primitives and a distance metric, our algorithm computes the distance field for each slice of a uniform spatial grid by rasterizing the distance functions of the primitives. We compute bounds on the spatial extent of the Voronoi region of each primitive. These bounds are used to cull and clamp the distance functions rendered for each slice. Our algorithm is applicable to all geometric models and does not make any assumptions about connectivity or a manifold representation. We have used our algorithm to compute distance fields of large models composed of tens of thousands of primitives on high resolution grids. Moreover, we demonstrate its application to medial axis evaluation and proximity computations. As compared to earlier approaches, we are able to achieve an order of magnitude improvement in the running time. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Distance fields, Voronoi regions, graphics hardware, proximity computations [source]
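The slice-wise strategy can be sketched in scalar code: for each slice, evaluate every primitive's distance function over the slice and keep the minimum, which is the role the depth test plays on the GPU. This brute-force sketch uses point primitives only and omits the paper's Voronoi-region culling.

```python
import numpy as np

def distance_field(points, shape, spacing=1.0):
    """Discretized 3D distance field: per slice, 'rasterize' each primitive's
    distance function and keep the minimum (what the GPU depth test computes)."""
    nz, ny, nx = shape
    ys, xs = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    field = np.full(shape, np.inf)
    for z in range(nz):                        # one slice of the uniform grid
        for px, py, pz in points:              # distance function per primitive
            d = np.sqrt((xs * spacing - px) ** 2 +
                        (ys * spacing - py) ** 2 +
                        (z * spacing - pz) ** 2)
            field[z] = np.minimum(field[z], d)  # closest primitive wins
    return field
```

Culling primitives whose Voronoi region cannot intersect the slice is what turns this O(slices x primitives) loop into something interactive.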


    Interactive Visualization with Programmable Graphics Hardware

    COMPUTER GRAPHICS FORUM, Issue 3 2002
    Thomas Ertl
    One of the main scientific goals of visualization is the development of algorithms and appropriate data models which facilitate interactive visual analysis and direct manipulation of the increasingly large data sets which result from simulations running on massive parallel computer systems, from measurements employing fast high-resolution sensors, or from large databases and hierarchical information spaces. This task can only be achieved with the optimization of all stages of the visualization pipeline: filtering, compression, and feature extraction of the raw data sets, adaptive visualization mappings which allow the users to choose between speed and accuracy, and exploiting new graphics hardware features for fast and high-quality rendering. The recent introduction of advanced programmability in widely available graphics hardware has already led to impressive progress in the area of volume visualization. However, besides the acceleration of the final rendering, flexible graphics hardware is increasingly being used also for the mapping and filtering stages of the visualization pipeline, thus giving rise to new levels of interactivity in visualization applications. The talk will present recent results of applying programmable graphics hardware in various visualization algorithms covering volume data, flow data, terrains, NPR rendering, and distributed and remote applications. [source]


    Hardware-Based Volumetric Knit-Wear

    COMPUTER GRAPHICS FORUM, Issue 3 2002
    Katja Daubert
    We present a hardware-based, volumetric approach for rendering knit wear at very interactive rates. A single stitch is represented by a volumetric texture with each voxel storing the main direction of the strands of yarn inside it. We render the knit wear in layers using an approximation of the Banks model. Our hardware implementation allows specular and diffuse material properties to change from one voxel to the next. This enables us to represent yarn made up of different components or render garments with complicated color patterns. Furthermore, our approach can handle self-shadowing of the stitches, and can easily be adapted to also include view-independent scattering. The resulting shader lends itself naturally to mip-mapping, and requires no reordering of the base geometry, making it versatile and easy to use. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Hardware Applications Volumetric Textures [source]
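The Banks model shades a strand from its tangent direction alone, since a thin strand has no unique surface normal. Below is a sketch of the commonly used Kajiya-Kay formulation of strand shading; the coefficients kd, ks, and shininess are illustrative defaults, not the paper's values.

```python
import numpy as np

def normalize(v):
    v = np.asarray(v, float)
    return v / np.linalg.norm(v)

def strand_shade(tangent, light, view, kd=0.8, ks=0.3, shininess=16):
    """Banks-style shading of a strand with tangent T (Kajiya-Kay form):
    since the strand normal is ambiguous, terms depend only on L.T and V.T."""
    T, L, V = normalize(tangent), normalize(light), normalize(view)
    lt, vt = np.dot(L, T), np.dot(V, T)
    diffuse = np.sqrt(max(0.0, 1.0 - lt * lt))        # sin of angle(L, T)
    spec_cos = (np.sqrt(max(0.0, 1.0 - lt * lt)) *
                np.sqrt(max(0.0, 1.0 - vt * vt)) - lt * vt)
    specular = max(0.0, spec_cos) ** shininess
    return kd * diffuse + ks * specular
```

Storing a yarn direction per voxel, as the paper does, means exactly such a tangent-based model can be evaluated per voxel with spatially varying kd and ks.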


    Internet-assisted Real-time Experiments Using the Internet: Hardware and Software Considerations

    JOURNAL OF FOOD SCIENCE EDUCATION, Issue 1 2005
    R. Paul Singh
    ABSTRACT: The spectacular increase in Internet-based applications during the past decade has had a significant impact on the education delivery paradigms. The user interactivity aspect of the Internet has provided new opportunities to instructors to incorporate its use in developing new learning systems. The use of the Internet in carrying out live experiments has been a subject of interest that shows considerable promise. Using the common Internet browsers, it has become possible to develop engaging laboratory exercises that allow the user to operate experimental equipment from remote locations. To increase the availability of such experiments on the Internet, it would be beneficial to share methods employed in developing software and hardware of Internet-assisted experiments, among interested instructors. The objective of this paper is to present a description of the hardware and software required to create Internet-assisted laboratories. [source]


    Spatial Hardware and Software

    ARCHITECTURAL DESIGN, Issue 3 2008
    Rochus Urban Hinkel
    Abstract In light of his visit in 2007 to the Documenta 12 art institution in Kassel, Germany, Rochus Urban Hinkel speculates on the reciprocity of 'spatial hardware' and 'spatial software' to create interior atmosphere. This essay traverses between the two as he takes us through the exhibition spaces housed in the temporary urban and industrial 'gallery' environment, Aue Pavilion. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    GPU-based interactive visualization framework for ultrasound datasets

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2009
    Sukhyun Lim
    Abstract Ultrasound imaging is widely used in medical areas. By transmitting ultrasound signals into the human body, their echoed signals can be rendered to represent the shape of internal organs. Although its image quality is inferior to that of CT or MR, ultrasound is widely used for its speed and reasonable cost. Volume rendering techniques provide methods for rendering the 3D volume dataset intuitively. We present a visualization framework for ultrasound datasets that uses programmable graphics hardware. For this, we convert ultrasound coordinates into Cartesian form. Since the physical storage and representation spaces of ultrasound datasets differ, we apply different sampling intervals adaptively for each ray. In addition, we exploit multiple filtered datasets to reduce noise, which allows an adequate filter size to be determined without manual tuning. As a result, our approach enables interactive volume rendering for ultrasound datasets, using a consumer-level PC. Copyright © 2009 John Wiley & Sons, Ltd. [source]
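Converting ultrasound (acoustic) coordinates into Cartesian form amounts to a polar-to-Cartesian resample of the sector data. A nearest-neighbour sketch for a 2D sector is shown below, assuming a ±90° field of view with the probe at the origin; a real probe geometry, interpolation, and the paper's per-ray adaptive sampling would differ.

```python
import numpy as np

def scan_convert(polar, r_max, out_n):
    """Resample an (n_r, n_theta) ultrasound sector from polar (depth, angle)
    coordinates onto an out_n x out_n Cartesian grid, nearest-neighbour."""
    n_r, n_t = polar.shape
    xs = np.linspace(-r_max, r_max, out_n)
    zs = np.linspace(0.0, r_max, out_n)
    X, Z = np.meshgrid(xs, zs)
    r = np.sqrt(X**2 + Z**2)
    theta = np.arctan2(X, Z)                      # angle from the probe axis
    ri = np.round(r / r_max * (n_r - 1)).astype(int)
    ti = np.round((theta + np.pi / 2) / np.pi * (n_t - 1)).astype(int)
    out = np.zeros((out_n, out_n))
    valid = (ri < n_r) & (ti >= 0) & (ti < n_t)   # inside the scanned sector
    out[valid] = polar[ri[valid], ti[valid]]
    return out
```

On the GPU this lookup becomes a dependent texture fetch per fragment, which is why the conversion fits the programmable pipeline so naturally.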


    A survey of mobile and wireless technologies for augmented reality systems

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2008
    George Papagiannakis
    Abstract Recent advances in hardware and software for mobile computing have enabled a new breed of mobile augmented reality (AR) systems and applications. A new class of computing called 'augmented ubiquitous computing' has resulted from the convergence of wearable computing, wireless networking, and mobile AR interfaces. In this paper, we provide a survey of different mobile and wireless technologies and how they have impacted AR. Our goal is to place them into different categories so that it becomes easier to understand the state of the art and to help identify new directions of research. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Real-time simulation of watery paint

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2005
    Tom Van Laerhoven
    Abstract Existing work on applications for thin watery paint is mostly focused on automatic generation of painterly-style images from input images, ignoring the fact that painting is a process that intuitively should be interactive. Efforts to create real-time interactive systems are limited to a single paint medium and results often suffer from a trade-off between real-time performance and simulation complexity. We report on the design of a new system that allows the real-time, interactive creation of images with thin watery paint. We mainly target the simulation of watercolor, but the system is also capable of simulating gouache and Oriental black ink. The motion of paint is governed by both physically based and heuristic rules in a layered canvas design. A final image is rendered by optically composing the layers using the Kubelka-Munk diffuse reflectance model. All algorithms that participate in the dynamics phase and the rendering phase of the simulation are implemented on graphics hardware. Images made with the system contain the typical effects that can be recognized in images produced with real thin paint, like the dark-edge effect, watercolor glazing, wet-on-wet painting and the use of different pigment types. Copyright © 2005 John Wiley & Sons, Ltd. [source]
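The Kubelka-Munk composition used in the rendering phase combines the reflectance R and transmittance T of two stacked layers while accounting for the infinite inter-reflections between them. A minimal per-wavelength sketch:

```python
def km_composite(r1, t1, r2, t2):
    """Kubelka-Munk composition of layer 1 over layer 2.
    The 1/(1 - r1*r2) factor sums the geometric series of inter-reflections."""
    denom = 1.0 - r1 * r2
    r = r1 + (t1 * t1 * r2) / denom   # light enters, bounces off layer 2, exits
    t = (t1 * t2) / denom             # light transmitted through both layers
    return r, t
```

In practice r and t are evaluated per color channel (or per sampled wavelength) from each pigment layer's absorption and scattering coefficients.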


    Planetary gear set and automatic transmission simulation for machine design courses

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 3 2003
    Scott T. Dennis
    Abstract Due to their unique ability to provide a variety of gear ratios in a very compact space, planetary gear systems are seen in many applications from small powered screw drivers to automobile automatic transmissions. The versatile planetary gear device is often studied as part of an undergraduate mechanical engineering program. Textbook presentations typically illustrate how the different planetary gear components are connected. Understanding of the operation of the planetary gear set can be enhanced using actual hardware or simulations that show how the components move relative to each other. The Department of Engineering Mechanics at the United States Air Force Academy has developed a computer simulation of the planetary gear set and the Chrysler 42LE automatic transmission. Called "PG-Sim," the dynamic simulations complement a static textbook presentation. PG-Sim is used in several of our courses and assessment data clearly indicates students' appreciation of its visual and interactive features. In this paper, we present an overview of PG-Sim and then describe how the simulation courseware facilitates understanding of the planetary gear system. © 2003 Wiley Periodicals, Inc. Comput Appl Eng Educ 11: 144-155, 2003; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.10045 [source]
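The kinematics such a simulation animates reduce to the Willis equation relating the sun, ring, and carrier speeds of a simple planetary set. A sketch, with illustrative tooth counts and speeds (not values from PG-Sim):

```python
def planetary_speed(n_sun, n_ring, w_sun=None, w_ring=None, w_carrier=None):
    """Willis equation for a simple planetary gear set:
    (w_sun - w_carrier) / (w_ring - w_carrier) = -N_ring / N_sun.
    Pass exactly two known speeds; the missing one is returned."""
    rho = -n_ring / n_sun
    if w_sun is None:
        return w_carrier + rho * (w_ring - w_carrier)
    if w_ring is None:
        return w_carrier + (w_sun - w_carrier) / rho
    return (w_sun - rho * w_ring) / (1.0 - rho)   # solve for the carrier
```

For example, holding the ring (w_ring = 0) with a 30-tooth sun and 90-tooth ring gives the familiar 1 + N_ring/N_sun = 4:1 reduction from sun to carrier.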


    Fast and Efficient Skinning of Animated Meshes

    COMPUTER GRAPHICS FORUM, Issue 2 2010
    L. Kavan
    Abstract Skinning is a simple yet popular deformation technique combining compact storage with efficient hardware accelerated rendering. While skinned meshes (such as virtual characters) are traditionally created by artists, previous work proposes algorithms to construct skinning automatically from a given vertex animation. However, these methods typically perform well only for a certain class of input sequences and often require long pre-processing times. We present an algorithm based on iterative coordinate descent optimization which handles arbitrary animations and produces more accurate approximations than previous techniques, while using only standard linear skinning without any modifications or extensions. To overcome the computational complexity associated with the iterative optimization, we work in a suitable linear subspace (obtained by quick approximate dimensionality reduction) and take advantage of the typically very sparse vertex weights. As a result, our method requires about one or two orders of magnitude less pre-processing time than previous methods. [source]
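Standard linear blend skinning, which the method targets without modification or extension, transforms each vertex by a weight-blended combination of bone matrices: v' = sum_i w_i (M_i v). A sketch exploiting the sparsity of the weights, as the paper does:

```python
import numpy as np

def skin(vertices, weights, bone_mats):
    """Linear blend skinning: v' = sum_i w_i * (M_i @ v).
    vertices: (n, 3); weights: (n, n_bones), sparse; bone_mats: 4x4 each."""
    vh = np.hstack([vertices, np.ones((len(vertices), 1))])  # homogeneous coords
    out = np.zeros_like(vertices, dtype=float)
    for i, M in enumerate(bone_mats):
        w = weights[:, i:i + 1]
        if not np.any(w):          # sparse weights: skip non-influencing bones
            continue
        out += w * (vh @ M.T)[:, :3]
    return out
```

Fitting the weights and bone matrices to a given vertex animation (rather than evaluating them, as above) is the optimization the paper accelerates.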


    Time-Adaptive Lines for the Interactive Visualization of Unsteady Flow Data Sets

    COMPUTER GRAPHICS FORUM, Issue 8 2009
    N. Cuntz
    I.3.3 [Computer Graphics]: Line and Curve Generation; I.3.1 [Computer Graphics]: Parallel Processing Abstract The quest for the ideal flow visualization reveals two major challenges: interactivity and accuracy. Interactivity stands for explorative capabilities and real-time control. Accuracy is a prerequisite for every professional visualization in order to provide a reliable base for analysis of a data set. Geometric flow visualization has a long tradition and comes in very different flavors. Among these, stream, path and streak lines are known to be very useful for both 2D and 3D flows. Despite their importance in practice, appropriate algorithms suited for contemporary hardware are rare. In particular, the adaptive construction of the different line types is not sufficiently studied. This study provides a profound representation and discussion of stream, path and streak lines. Two algorithms are proposed for efficiently and accurately generating these lines using modern graphics hardware. Each includes a scheme for adaptive time-stepping. The adaptivity for stream and path lines is achieved through a new processing idea we call 'selective transform feedback'. The adaptivity for streak lines combines adaptive time-stepping and a geometric refinement of the curve itself. Our visualization is applied, among others, to a data set representing a simulated typhoon. The storage as a set of 3D textures requires special attention. Both algorithms explicitly support this storage, as well as the use of precomputed adaptivity information. [source]
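Adaptive time-stepping for line integration is commonly done by step doubling: compare one full step against two half steps and shrink or grow the step from the discrepancy. A CPU sketch with RK4 follows; the paper's GPU scheme (selective transform feedback) differs, and the tolerance and growth factors here are illustrative.

```python
import numpy as np

def rk4_step(f, x, h):
    """One classical Runge-Kutta 4 step of x' = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def adaptive_streamline(f, x0, length, h=0.1, tol=1e-5):
    """Integrate a stream line with step-doubling error control."""
    x, t = np.asarray(x0, float), 0.0
    pts = [x]
    while t < length:
        full = rk4_step(f, x, h)
        half = rk4_step(f, rk4_step(f, x, h / 2), h / 2)
        err = np.linalg.norm(full - half)
        if err > tol and h > 1e-4:
            h *= 0.5                  # too inaccurate: refine the step
            continue
        x, t = half, t + h            # accept the more accurate result
        pts.append(x)
        if err < tol / 16:
            h *= 2.0                  # smooth region: coarsen the step
    return np.array(pts)
```

Path and streak lines replace the steady field f(x) with a time-dependent f(x, t), but the step-control logic stays the same.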


    Wind projection basis for real-time animation of trees

    COMPUTER GRAPHICS FORUM, Issue 2 2009
    Julien Diener
    This paper presents a real-time method to animate complex scenes of thousands of trees under a user-controllable wind load. Firstly, modal analysis is applied to extract the main modes of deformation from the mechanical model of a 3D tree. The novelty of our contribution is to precompute a new basis of the modal stress of the tree under wind load. At runtime, this basis allows us to replace the modal projection of the external forces with a direct mapping for any directional wind. We show that this approach can be efficiently implemented on graphics hardware. This modal animation can be simulated at low computation cost even for large scenes containing thousands of trees. [source]
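Modal animation drives a handful of decoupled oscillators, one per deformation mode, and maps their amplitudes back through the modal basis to get displacements. The sketch below performs the force projection explicitly each step; the paper's contribution is precisely to precompute this projection as a direct map per wind direction. Damping, frequencies, and the semi-implicit integrator here are generic choices.

```python
import numpy as np

def modal_wind_step(q, qdot, wind_force, modal_basis, omega, zeta=0.05, dt=0.01):
    """One semi-implicit Euler step of the decoupled modal oscillators
    q_i'' + 2*zeta*omega_i*q_i' + omega_i^2*q_i = f_i,
    where f_i is the wind force projected onto mode i.
    Displacements are recovered as modal_basis @ q."""
    f = modal_basis.T @ wind_force                       # modal projection
    qdot = qdot + dt * (f - 2 * zeta * omega * qdot - omega**2 * q)
    q = q + dt * qdot
    return q, qdot
```

Since each mode is an independent scalar ODE, thousands of trees amount to thousands of cheap oscillator updates, which is what makes large scenes tractable.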


    Interaction-Dependent Semantics for Illustrative Volume Rendering

    COMPUTER GRAPHICS FORUM, Issue 3 2008
    Peter Rautek
    In traditional illustration the choice of appropriate styles and rendering techniques is guided by the intention of the artist. For illustrative volume visualizations it is difficult to specify the mapping between the 3D data and the visual representation that preserves the intention of the user. The semantic layers concept establishes this mapping with a linguistic formulation of rules that directly map data features to rendering styles. With semantic layers fuzzy logic is used to evaluate the user defined illustration rules in a preprocessing step. In this paper we introduce interaction-dependent rules that are evaluated for each frame and are therefore computationally more expensive. Enabling interaction-dependent rules, however, allows the use of a new class of semantics, resulting in more expressive interactive illustrations. We show that the evaluation of the fuzzy logic can be done on the graphics hardware enabling the efficient use of interaction-dependent semantics. Further we introduce the flat rendering mode and discuss how different rendering parameters are influenced by the rule base. Our approach provides high quality illustrative volume renderings at interactive frame rates, guided by the specification of illustration rules. [source]
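A fuzzy illustration rule such as "if density is high and curvature is low then the style weight is strong" evaluates membership functions on the data features and combines them with a t-norm. A minimal sketch using the minimum t-norm; the membership shapes, feature names, and thresholds are illustrative, not the paper's rule base.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def evaluate_rule(antecedents):
    """Fuzzy AND of the antecedent memberships (minimum t-norm)."""
    return min(antecedents)

# hypothetical rule: "if density is high and curvature is low -> strong style"
density_is_high = triangular(0.8, 0.5, 1.0, 1.5)    # membership 0.6
curvature_is_low = triangular(0.1, -0.5, 0.0, 0.5)  # membership 0.8
style_weight = evaluate_rule([density_is_high, curvature_is_low])
```

Evaluating such memberships per sample in a fragment shader is what the paper means by moving the fuzzy logic onto graphics hardware for interaction-dependent rules.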


    Lighting and Occlusion in a Wave-Based Framework

    COMPUTER GRAPHICS FORUM, Issue 2 2008
    Remo Ziegler
    Abstract We present novel methods to enhance Computer Generated Holography (CGH) by introducing a complex-valued wave-based occlusion handling method. This offers a very intuitive and efficient interface to introduce optical elements featuring physically-based light interaction exhibiting depth-of-field, diffraction, and glare effects. Furthermore, an efficient and flexible evaluation of lit objects on a full-parallax hologram leads to more convincing images. Previous illumination methods for CGH are not able to change the illumination settings of rendered holograms. In this paper we propose a novel method for real-time lighting of rendered holograms in order to change the appearance of a previously captured holographic scene. These functionalities are features of a bigger wave-based rendering framework which can be combined with 2D framebuffer graphics. We present an algorithm which uses graphics hardware to accelerate the rendering. [source]


    Volume and Isosurface Rendering with GPU-Accelerated Cell Projection

    COMPUTER GRAPHICS FORUM, Issue 1 2008
    R. Marroquim
    Abstract We present an efficient Graphics Processing Unit (GPU)-based implementation of the Projected Tetrahedra (PT) algorithm. By reducing most of the CPU-GPU data transfer, the algorithm achieves interactive frame rates (up to 2.0 M Tets/s) on current graphics hardware. Since no topology information is stored, it requires substantially less memory than recent interactive ray casting approaches. The method uses a two-pass GPU approach with two fragment shaders. This work includes extended volume inspection capabilities by supporting interactive transfer function editing and isosurface highlighting using a Phong illumination model. [source]


    Layered Performance Animation with Correlation Maps

    COMPUTER GRAPHICS FORUM, Issue 3 2007
    Michael Neff
    Abstract Performance has a spontaneity and "aliveness" that can be difficult to capture in more methodical animation processes such as keyframing. Access to performance animation has traditionally been limited to either low degree of freedom characters or required expensive hardware. We present a performance-based animation system for humanoid characters that requires no special hardware, relying only on mouse and keyboard input. We deal with the problem of controlling such a high degree of freedom model with low degree of freedom input through the use of correlation maps which employ 2D mouse input to modify a set of expressively relevant character parameters. Control can be continuously varied by rapidly switching between these maps. We present flexible techniques for varying and combining these maps and a simple process for defining them. The tool is highly configurable, presenting suitable defaults for novices and supporting a high degree of customization and control for experts. Animation can be recorded on a single pass, or multiple layers can be used to increase detail. Results from a user study indicate that novices are able to produce reasonable animations within their first hour of using the system. We also show more complicated results for walking and a standing character that gestures and dances. [source]
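A correlation map in this spirit can be sketched as a fixed linear mapping from the 2D mouse input to several expressively relevant character parameters. The class name, parameter names, gains, and rest pose below are hypothetical illustrations, not the paper's maps.

```python
import numpy as np

class CorrelationMap:
    """Maps low-DOF 2D mouse input to a set of character parameters via a
    fixed linear correlation (one column of gains per mouse axis)."""
    def __init__(self, names, gains, rest):
        self.names = names
        self.gains = np.asarray(gains, float)   # (n_params, 2)
        self.rest = np.asarray(rest, float)     # resting parameter values
    def apply(self, dx, dy):
        return self.rest + self.gains @ np.array([dx, dy])

# hypothetical map: mouse x sways the spine, mouse y raises both arms together
sway = CorrelationMap(
    ["spine_bend", "arm_l_raise", "arm_r_raise"],
    [[1.0, 0.0], [0.0, 0.8], [0.0, 0.8]],
    [0.0, 0.1, 0.1])
pose = sway.apply(0.5, 1.0)
```

Rapidly switching between several such maps, and layering recorded passes, is what lets two input dimensions steer a full humanoid.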


    Hardware-Accelerated Rendering of Photo Hulls

    COMPUTER GRAPHICS FORUM, Issue 3 2004
    Ming Li
    This paper presents an efficient hardware-accelerated method for novel view synthesis from a set of images or videos. Our method is based on the photo hull representation, which is the maximal photo-consistent shape. We avoid the explicit reconstruction of photo hulls by adopting a view-dependent plane-sweeping strategy. From the target viewpoint slicing planes are rendered with reference views projected onto them. Graphics hardware is exploited to verify the photo-consistency of each rasterized fragment. Visibilities with respect to reference views are properly modeled, and only photo-consistent fragments are kept and colored in the target view. We present experiments with real images and animation sequences. Thanks to the more accurate shape of the photo hull representation, our method generates more realistic rendering results than methods based on visual hulls. Currently, we achieve rendering frame rates of 2-3 fps. Compared to a pure software implementation, the performance of our hardware-accelerated method is approximately 7 times faster. Categories and Subject Descriptors (according to ACM CCS): CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism. [source]


    Projective Texture Mapping with Full Panorama

    COMPUTER GRAPHICS FORUM, Issue 3 2002
    Dongho Kim
    Projective texture mapping is used to project a texture map onto scene geometry. It has been used in many applications, since it eliminates the assignment of fixed texture coordinates and provides a good method of representing synthetic images or photographs in image-based rendering. But conventional projective texture mapping has limitations in the field of view and the degree of navigation because only simple rectangular texture maps can be used. In this work, we propose the concept of panoramic projective texture mapping (PPTM). It projects cubic or cylindrical panorama onto the scene geometry. With this scheme, any polygonal geometry can receive the projection of a panoramic texture map, without using fixed texture coordinates or modeling many projective texture mappings. For fast real-time rendering, a hardware-based rendering method is also presented. Applications of PPTM include a panorama viewer similar to QuickTime VR and navigation in the panoramic scene, which can be created by image-based modeling techniques. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Viewing Algorithms; I.3.7 [Computer Graphics]: Color, Shading, Shadowing, and Texture [source]


    Dynamic Textures for Image-based Rendering of Fine-Scale 3D Structure and Animation of Non-rigid Motion

    COMPUTER GRAPHICS FORUM, Issue 3 2002
    Dana Cobza
    The problem of capturing real world scenes and then accurately rendering them is particularly difficult for fine-scale 3D structure. Similarly, it is difficult to capture, model and animate non-rigid motion. We present a method where small image changes are captured as a time varying (dynamic) texture. In particular, a coarse geometry is obtained from a sample set of images using structure from motion. This geometry is then used to subdivide the scene and to extract approximately stabilized texture patches. The residual statistical variability in the texture patches is captured using a PCA basis of spatial filters. The filter coefficients are parameterized in camera pose and object motion. To render new poses and motions, new texture patches are synthesized by modulating the texture basis. The texture is then warped back onto the coarse geometry. We demonstrate how the texture modulation and projective homography-based warps can be achieved in real-time using hardware accelerated OpenGL. Experiments comparing dynamic texture modulation to standard texturing are presented for objects with complex geometry (a flower) and non-rigid motion (human arm motion capturing the non-rigidities in the joints, and creasing of the shirt). Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Image Based Rendering [source]
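The texture model is a PCA basis over the stabilized patches: a new patch is synthesized as the mean plus coefficient-modulated basis filters. A minimal sketch follows; in the paper the coefficients come from a parameterization in camera pose and object motion, whereas here they are simply projections.

```python
import numpy as np

def fit_texture_basis(patches, k):
    """PCA basis of stabilized texture patches.
    patches: (n_samples, n_pixels), one flattened patch per row."""
    mean = patches.mean(axis=0)
    U, S, Vt = np.linalg.svd(patches - mean, full_matrices=False)
    return mean, Vt[:k]                  # k spatial filters (principal rows)

def synthesize(mean, basis, coeffs):
    """New patch = mean + coefficient-modulated basis filters."""
    return mean + coeffs @ basis
```

The modulation step is a small matrix-vector product per patch, which is why it maps cleanly onto hardware-accelerated texture combiners.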


    Rendering: Input and Output

    COMPUTER GRAPHICS FORUM, Issue 3 2001
    H. Rushmeier
    Rendering is the process of creating an image from numerical input data. In the past few years our ideas about methods for acquiring the input data and the form of the output have expanded. The availability of inexpensive cameras and scanners has influenced how we can obtain data needed for rendering. Input for rendering ranges from sets of images to complex geometric descriptions with detailed BRDF data. The images that are rendered may be simply arrays of RGB values, or they may be arrays with vectors or matrices of data defined for each pixel. The rendered images may not be intended for direct display, but may be textures for geometries that are to be transmitted to be rendered on another system. A broader range of parameters now need to be taken into account to render images that are perceptually consistent across displays that range from CAVEs to personal digital assistants. This presentation will give an overview of how new hardware and new applications have changed traditional ideas of rendering input and output. [source]


    Are Points the Better Graphics Primitives?

    COMPUTER GRAPHICS FORUM, Issue 3 2001
    Markus Gross
    Since the early days of graphics, the computer-based representation of three-dimensional geometry has been one of the core research fields. Today, various sophisticated geometric modelling techniques including NURBS or implicit surfaces allow the creation of 3D graphics models with increasingly complex shape. In spite of these methods the triangle has survived over decades as the king of graphics primitives, striking the right balance between descriptive power and computational burden. As a consequence, today's consumer graphics hardware is heavily tailored for high-performance triangle processing. In addition, a new generation of geometry processing methods including hierarchical representations, geometric filtering, or feature detection fosters the concept of triangle meshes for graphics modelling. Unlike triangles, points have, surprisingly, been neglected as a graphics primitive. Although they have been included in APIs for many years, it is only recently that point samples have experienced a renaissance in computer graphics. Conceptually, points provide a mere discretization of geometry without explicit storage of topology. Thus, point samples reduce the representation to the essentials needed for rendering and enable us to generate highly optimized object representations. Although the loss of topology poses great challenges for graphics processing, the latest generation of algorithms features high-performance rendering, point/pixel shading, anisotropic texture mapping, and advanced signal processing of point-sampled geometry. This talk will give an overview of how recent research results in the processing of triangles and points are changing our traditional way of thinking about surface representations in computer graphics, and will discuss the question: Are Points the Better Graphics Primitives? [source]


    Drawing for Illustration and Annotation in 3D

    COMPUTER GRAPHICS FORUM, Issue 3 2001
    David Bourguignon
    We present a system for sketching in 3D, which strives to preserve the degree of expression, imagination, and simplicity of use achieved by 2D drawing. Our system directly uses user-drawn strokes to infer the sketches representing the same scene from different viewpoints, rather than attempting to reconstruct a 3D model. This is achieved by interpreting strokes as indications of a local surface silhouette or contour. Strokes thus deform and disappear progressively as we move away from the original viewpoint. They may be occluded by objects indicated by other strokes, or, in contrast, be drawn above such objects. The user draws on a plane which can be positioned explicitly or relative to other objects or strokes in the sketch. Our system is interactive, since we use fast algorithms and graphics hardware for rendering. We present applications to education, design, architecture and fashion, where 3D sketches can be used alone or as an annotation of an existing 3D model. [source]


    Proportional-Integral-Plus Control of an Intelligent Excavator

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 1 2004
    Jun Gu
    Previous work using LUCIE was based on the ubiquitous PI/PID control algorithm, tuned on-line, and implemented in a rather ad hoc manner. By contrast, the present research utilizes new hardware and advanced model-based control system design methods to improve the joint control and so provide smoother, more accurate movement of the excavator arm. In this article, a novel nonlinear simulation model of the system is developed for MATLAB/SIMULINK©, allowing for straightforward refinement of the control algorithm and initial evaluation. The PIP controller is compared with a conventionally tuned PID algorithm, with the final designs implemented on-line for the control of dipper angle. The simulated responses and preliminary implementation results demonstrate the feasibility of the approach. [source]
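For context, the conventionally tuned PI/PID baseline the article compares against can be sketched as a discrete PI loop driving a joint angle toward a setpoint. The first-order joint model `x[k+1] = a*x[k] + b*u[k]` and all gains below are illustrative assumptions, not values from the article; the actual PIP design uses model-based state-variable feedback, which is beyond this sketch.

```python
def simulate_pi(kp=2.0, ki=5.0, dt=0.01, steps=500,
                setpoint=1.0, a=0.9, b=0.1):
    """Discrete PI control of a hypothetical first-order joint model.

    Every numeric value here is illustrative; the point is the loop
    structure, not the tuning.
    """
    x, integral = 0.0, 0.0
    history = []
    for _ in range(steps):
        err = setpoint - x              # dipper-angle tracking error
        integral += err * dt            # integral action removes steady-state offset
        u = kp * err + ki * integral    # PI control signal
        x = a * x + b * u               # plant update
        history.append(x)
    return history
```

Running the loop shows the angle settling at the setpoint; a PIP controller would replace the fixed PI gains with state feedback derived from an identified model of the arm.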


    Novel software architecture for rapid development of magnetic resonance applications

    CONCEPTS IN MAGNETIC RESONANCE, Issue 3 2002
    Josef Debbins
    Abstract As the pace of clinical magnetic resonance (MR) procedures grows, the need for an MR scanner software platform on which developers can rapidly prototype, validate, and produce product applications becomes paramount. A software architecture has been developed for a commercial MR scanner that employs state-of-the-art software technologies including Java, C++, DICOM, XML, and so forth. This system permits graphical (drag and drop) assembly of applications built on simple processing building blocks, including pulse sequences, a user interface, reconstruction and postprocessing, and database control. The application developer (researcher or commercial) can assemble these building blocks to create custom applications. The developer can also write source code directly to create new building blocks and add these to the collection of components, which can be distributed worldwide over the internet. The application software and its components are developed in Java, which assures platform portability across any host computer that supports a Java Virtual Machine. The downloaded executable portion of the application is executed in compiled C++ code, which assures mission-critical real-time execution during fast MR acquisition and data processing on dedicated embedded hardware that supports C or C++. This combination permits flexible and rapid MR application development across virtually any combination of computer configurations and operating systems, and yet it allows for very high performance execution on actual scanner hardware. Applications, including prescan, are inherently real-time enabled and can be aggregated and customized to form "superapplications," wherein one or more applications work with another to accomplish the clinical objective with a very high transition speed between applications. © 2002 Wiley Periodicals, Inc. Concepts in Magnetic Resonance (Magn Reson Engineering) 15: 216–237, 2002 [source]
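The building-block assembly described above can be pictured as function composition: each block consumes the previous block's output. The real system wires Java components graphically and executes compiled C++ on the scanner; the Python sketch below only illustrates the composition idea, and the block names (`acquire`, `reconstruct`, `postprocess`) are hypothetical stand-ins, not the scanner's actual API.

```python
class Block:
    """One processing building block; chains with `>>` to mimic the
    drag-and-drop assembly of an application pipeline."""

    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def __rshift__(self, other):
        # Composing two blocks yields a new block running them in sequence.
        return Block(f"{self.name}>>{other.name}",
                     lambda data: other.fn(self.fn(data)))

    def run(self, data):
        return self.fn(data)

# Hypothetical stand-ins for a pulse sequence, reconstruction, and postprocessing.
acquire = Block("acquire", lambda n: list(range(n)))
reconstruct = Block("reconstruct", lambda xs: [x * x for x in xs])
postprocess = Block("postprocess", lambda xs: sum(xs))

app = acquire >> reconstruct >> postprocess
result = app.run(4)
```

A researcher-written block slots in the same way: define a new `Block` and insert it into the chain, which mirrors how the architecture lets custom components join the distributed collection.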