Descriptors

Distribution by Scientific Domains
Distribution within Chemistry

Kinds of Descriptors

  • additional descriptor
  • chemical descriptor
  • community descriptor
  • electronic descriptor
  • fourier descriptor
  • important descriptor
  • molecular descriptor
  • new descriptor
  • physicochemical descriptor
  • sensory descriptor
  • structural descriptor
  • structure descriptor
  • useful descriptor

Terms modified by Descriptors

  • descriptor selection
  • descriptor system
  • descriptor used

Selected Abstracts


    CONSUMER EVALUATION OF MILK AUTHENTICITY EXPLAINED BOTH BY CONSUMER BACKGROUND CHARACTERISTICS AND BY PRODUCT SENSORY DESCRIPTORS

    JOURNAL OF SENSORY STUDIES, Issue 6 2007
    L.W. FRANDSEN
    ABSTRACT Consumer authenticity tests were used to elicit consumer response to the influence of fodder and storage time on the flavor of cow milk. A panel of professional tasters was used to provide a descriptive profile of the sensory characteristics of the milk. Consumer background characteristics were collected through a questionnaire covering demographic and consumption-pattern variables as well as assessments on two attitude scales: a modified set of food neophobia questions and a set of milk xenophobia questions. A multivariate data analytical method (L-shaped partial least squares regression) was used to model the variation in the authenticity evaluation simultaneously from two different sources: the storage/feed effects as described by the sensory panel and the consumer background variables. Results showed that milk samples with storage/feed characteristics were evaluated as "foreign" (not Danish) by some consumer segments. PRACTICAL APPLICATIONS Very small differences in a food product, here milk, sometimes cannot be discerned by standard sensory methods. The test in this article, the authenticity test, is able to assess such differences. This article studies whether consumer characteristics influence the results of the authenticity test, to see whether the test is broadly applicable. With respect to milk, fodder and storage time give rise to a number of factors that affect acceptance of the milk. These factors can be of use to milk producers, and the differences in product acceptance between consumer groups may help milk producers target products at specific consumer segments. [source]


    Strain Energies as a Steric Descriptor in QSAR Calculations

    MOLECULAR INFORMATICS, Issue 7 2004
    Catherine
    Abstract The difference between the calculated heats of formation of gauche and anti conformers of monosubstituted propanes was determined and used as a new steric parameter (AG60 value) in QSAR calculations. The dihedral angle of the gauche conformation was fixed at 60° during the calculation to force interaction of the gauche groups. AG60 values are a thermodynamically determined steric measure in contrast to the Taft steric parameter, which is based upon kinetic measurements of ester hydrolyses. In comparisons to published QSAR studies, AG60 values correlated steric effects and biological activities very similarly to the Taft parameter. The average of r2 values from five QSAR studies for the Taft parameter was 0.887, while AG60 values averaged 0.883. Direct comparison of the Taft parameter and AG60 values showed a poor correlation (r2=0.300), indicating the two parameters are fundamentally different methods of measuring steric bulk. [source]
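
    A minimal Python sketch of how such a steric parameter could be computed and used in a one-descriptor QSAR regression. The heats of formation and activity values below are hypothetical placeholders, not data from the study; only the definition of the parameter (gauche minus anti heat of formation) follows the abstract.

        import numpy as np

        def ag60(h_gauche: float, h_anti: float) -> float:
            """Steric parameter defined as the difference between the calculated
            heats of formation (e.g., kcal/mol) of the gauche (dihedral fixed at
            60 degrees) and anti conformers of a monosubstituted propane."""
            return h_gauche - h_anti

        # Hypothetical heats of formation for a few substituents (illustrative only).
        substituent_heats = {
            "Me":  (-24.9, -25.3),
            "Et":  (-30.1, -30.9),
            "iPr": (-35.2, -36.4),
        }
        steric = np.array([ag60(g, a) for g, a in substituent_heats.values()])

        # One-descriptor QSAR: fit log(1/C) = a * AG60 + b by least squares.
        activity = np.array([0.40, 0.95, 1.70])      # hypothetical log(1/C) values
        a, b = np.polyfit(steric, activity, deg=1)
        pred = a * steric + b
        r2 = 1 - np.sum((activity - pred) ** 2) / np.sum((activity - activity.mean()) ** 2)
        print(f"slope={a:.3f}  intercept={b:.3f}  r^2={r2:.3f}")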


    Prediction of Carbonic Anhydrase Activation by Tri-/Tetrasubstituted-pyridinium-azole Compounds: A Computational Approach using Novel Topochemical Descriptor

    MOLECULAR INFORMATICS, Issue 7 2004
    Sanjay Bajaj
    Abstract A novel, highly discriminating adjacency-cum-distance based topochemical descriptor, termed the Superadjacency topochemical index, has been derived and its discriminating power investigated with regard to activation of Carbonic anhydrase (CA) isozyme-I by tri-/tetrasubstituted-pyridinium-azole compounds. The new index is not only sensitive to the presence of heteroatoms but also overcomes the problem of degeneracy of many topological descriptors. The discriminating power of the Superadjacency topochemical index was found to be far superior to those of the distance-based Wiener's index and the adjacency-based Molecular connectivity index. The values of Wiener's index, the path-one Molecular connectivity index and the Superadjacency topochemical index were computed for each of the 42 substituted-pyridinium-azole compounds comprising the dataset. The resultant data were analyzed and suitable models developed after identification of the active ranges. Subsequently, a biological activity was assigned to each of the compounds in the dataset using these models, which was then compared with the reported activation constants for Carbonic Anhydrase isozyme-I. Excellent correlations were observed between the activation constants of CA isozyme-I and all the topological/topochemical descriptors. The overall accuracy of prediction was about 91% for the models based upon the Molecular connectivity index and Wiener's index, and 96% for the model based upon the Superadjacency topochemical index. Surprisingly, the accuracy of prediction in the active range was found to be 100% in all the models. Thus the proposed index offers vast potential for structure-activity/structure-property studies. [source]
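
    The Superadjacency index itself is not specified in the abstract, so the sketch below implements only the two reference descriptors it is compared against: the distance-based Wiener index and the path-one (Randić) molecular connectivity index, computed on a hydrogen-suppressed molecular graph. The example graph (2-methylbutane) is illustrative.

        from collections import deque
        from itertools import combinations
        from math import sqrt

        # Hydrogen-suppressed graph of 2-methylbutane (illustrative): atom -> neighbours.
        graph = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2], 4: [1]}

        def bfs_distances(graph, start):
            """Topological (bond-count) distances from one atom to all others."""
            dist = {start: 0}
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for v in graph[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            return dist

        def wiener_index(graph):
            """Sum of topological distances over all unordered atom pairs."""
            dist = {u: bfs_distances(graph, u) for u in graph}
            return sum(dist[u][v] for u, v in combinations(graph, 2))

        def randic_index(graph):
            """Path-one molecular connectivity index: sum over bonds of 1/sqrt(d_u * d_v)."""
            deg = {u: len(vs) for u, vs in graph.items()}
            edges = {tuple(sorted((u, v))) for u in graph for v in graph[u]}
            return sum(1.0 / sqrt(deg[u] * deg[v]) for u, v in edges)

        print("Wiener index:", wiener_index(graph))      # 18 for 2-methylbutane
        print("Randic index:", round(randic_index(graph), 4))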


    3-Methyl-3-sulfanylhexan-1-ol as a Major Descriptor for the Human Axilla-Sweat Odour Profile

    CHEMISTRY & BIODIVERSITY, Issue 7 2004
    Myriam Troccaz
    This study sets out to redress the lack of knowledge in the area of volatile sulfur compounds (VSCs) in axillary sweat malodour. Sterile odourless underarm sweat (500 ml) was collected from 30 male volunteers after excessive sweating. Five strains of bacteria, Corynebacterium tuberculostearicum, Corynebacterium minutissimum, Staphylococcus epidermidis, Staphylococcus haemolyticus, and Bacillus licheniformis, were isolated and characterised for their ability to generate an authentic axillary odour from the sweat material collected. As expected, all five bacterial strains produced strong sweat odours. Surprisingly, after extensive olfactive evaluation, the strain of Staphylococcus haemolyticus produced the most sulfury sweat character. This strain was then chosen as the change agent for the 500 ml of odourless underarm sweat collected. After bacterial incubation, the 500-ml sample was further processed for GC-olfactometry (GC-O) and GC/MS analysis. GC-O of an extract free of organic acids provided three zones of interest: the first was chicken-sulfury, the second onion-like, and the third sweat- and clary sage-like. From the third zone, a new impact molecule, (R)- or (S)-3-methyl-3-sulfanylhexan-1-ol, was isolated and identified by GC/MS, MD-GC, and GC-AED (atomic emission detector). (S)-3-Methyl-3-sulfanylhexan-1-ol was sniff-evaluated upon elution from a chiral GC column and was described as sweaty and onion-like; its opposite enantiomer, (R)-3-methyl-3-sulfanylhexan-1-ol, was described as fruity and grapefruit-like. The (S)-form was found to be the major enantiomer (75%). [source]


    Handwritten Thai Character Recognition Using Fourier Descriptors and Genetic Neural Networks

    COMPUTATIONAL INTELLIGENCE, Issue 3 2002
    Pisit Phokharatkul
    This article presents a method to solve the rotated and scaled character recognition problem using Fourier descriptors and genetic neural networks. The contours of the character image are extracted and separated into the outer contour and the inner, or loop, contours. The loop contours are a special characteristic of Thai characters, called the head of the character. These special features of Thai characters (loop contours) are used at the rough classification stage, and Fourier descriptors with genetic neural networks are used at the fine classification stage. The Fourier descriptors are computed from the outer contour of a character and fed to the network. These features are recognized by a multilayer neural network. Genetic algorithms (GAs) are utilized to compute the weights of the neural network optimally and to reduce uncertain states in the neural network's output. Experimental results have shown that the combination of Fourier descriptors with genetic neural networks, loop features, and local curvature characteristics of similar characters is a powerful tool for successfully classifying Thai characters. The recognition rate of this method is 99.12% for 1200 examples of handwritten Thai words (a total of 13,500 characters) written by 60 persons. [source]
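
    A hedged sketch of rotation-, scale- and translation-invariant Fourier descriptors for a closed contour, the kind of feature the fine-classification stage described above consumes. The normalization choices and the toy circle contour are assumptions, not the authors' exact pipeline.

        import numpy as np

        def fourier_descriptors(contour_xy: np.ndarray, n_coeffs: int = 16) -> np.ndarray:
            """Rotation-, scale-, translation- and start-point-invariant Fourier
            descriptors of a closed contour given as an (N, 2) array of points."""
            z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # complex boundary signal
            F = np.fft.fft(z)
            F[0] = 0                                       # drop DC term -> translation invariance
            mags = np.abs(F)                               # magnitudes -> rotation/start-point invariance
            mags /= mags[1] + 1e-12                        # normalise by first harmonic -> scale invariance
            return np.concatenate([mags[1:1 + n_coeffs // 2], mags[-n_coeffs // 2:]])

        # Toy example: a circle sampled at 64 points; its energy sits in the first harmonic.
        t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
        circle = np.stack([np.cos(t), np.sin(t)], axis=1)
        print(np.round(fourier_descriptors(circle, 8), 3))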


    Multiresolution Random Accessible Mesh Compression

    COMPUTER GRAPHICS FORUM, Issue 3 2006
    Junho Kim
    This paper presents a novel approach for mesh compression, which we call multiresolution random accessible mesh compression. In contrast to previous mesh compression techniques, the approach enables us to progressively decompress an arbitrary portion of a mesh without decoding other non-interesting parts. This simultaneous support of random accessibility and progressiveness is accomplished by adapting selective refinement of a multiresolution mesh to the mesh compression domain. We present a theoretical analysis of our connectivity coding scheme and provide several experimental results. The performance of our coder is about 11 bits for connectivity and 21 bits for geometry with 12-bit quantization, which can be considered reasonably good under the constraint that no fixed neighborhood information can be used for coding to support decompression in a random order. Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling [source]
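
    A small sketch of the 12-bit uniform quantization step mentioned for the geometry. The connectivity coder and the entropy coding are not reproduced here, and the helper names are hypothetical.

        import numpy as np

        def quantize_vertices(verts: np.ndarray, bits: int = 12):
            """Uniformly quantize vertex positions to integer grid coordinates.
            Returns the integer codes plus the (origin, step) needed to dequantize."""
            lo = verts.min(axis=0)
            hi = verts.max(axis=0)
            step = (hi - lo) / (2 ** bits - 1)
            codes = np.round((verts - lo) / np.where(step > 0, step, 1)).astype(np.uint16)
            return codes, lo, step

        def dequantize_vertices(codes, lo, step):
            return lo + codes * step

        verts = np.random.rand(1000, 3).astype(np.float64)
        codes, lo, step = quantize_vertices(verts, bits=12)
        error = np.abs(dequantize_vertices(codes, lo, step) - verts).max()
        print(f"max reconstruction error: {error:.2e}")   # bounded by step/2 per axis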


    Wrinkling Coarse Meshes on the GPU

    COMPUTER GRAPHICS FORUM, Issue 3 2006
    J. Loviscach
    The simulation of complex layers of folds of cloth can be handled through algorithms which take the physical dynamics into account. In many cases, however, it is sufficient to generate wrinkles on a piece of garment which mostly appears spread out. This paper presents a corresponding fully GPU-based, easy-to-control, and robust method to generate and render plausible and detailed folds. This simulation is generated from an animated mesh. A relaxation step ensures that the behavior remains globally consistent. The resulting wrinkle field controls the lighting and distorts the texture in a way which closely simulates an actually deformed surface. No highly tessellated mesh is required to compute the position of the folds or to render them. Furthermore, the solution provides a 3D paint interface through which the user may bias the computation in such a way that folds already appear in the rest pose. Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Animation, I.3.7 [Computer Graphics]: Color, shading, shadowing, and texture [source]


    Sweep-based Freeform Deformations

    COMPUTER GRAPHICS FORUM, Issue 3 2006
    Seung-Hyun Yoon
    We propose a sweep-based approach to the freeform deformation of three-dimensional objects. Instead of using a volume enclosing the whole object, we approximate only its deformable parts using sweep surfaces. The vertices on the object boundary are bound to the sweep surfaces and follow their deformation. Several sweep surfaces can be organized into a hierarchy so that they interact with each other in a controlled manner. Thus we can support intuitively plausible shape deformation of objects of arbitrary topology with multiple control handles. A sweep-based approach also provides important advantages such as volume preservation. We demonstrate the effectiveness of our technique in several examples. Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computational Geometry and Object Modeling]: Curve, surface, solid, and object representations [source]


    Implicit Surface Modelling with a Globally Regularised Basis of Compact Support

    COMPUTER GRAPHICS FORUM, Issue 3 2006
    C. Walder
    We consider the problem of constructing a globally smooth analytic function that represents a surface implicitly by way of its zero set, given sample points with surface normal vectors. The contributions of the paper include a novel means of regularising multi-scale compactly supported basis functions that leads to the desirable interpolation properties previously only associated with fully supported bases. We also provide a regularisation framework for simpler and more direct treatment of surface normals, along with a corresponding generalisation of the representer theorem lying at the core of kernel-based machine learning methods. We demonstrate the techniques on 3D problems of up to 14 million data points, as well as 4D time series data and four-dimensional interpolation between three-dimensional shapes. Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Curve, surface, solid, and object representations [source]


    A System for View-Dependent Animation

    COMPUTER GRAPHICS FORUM, Issue 3 2004
    Parag Chaudhuri
    In this paper, we present a novel system for facilitating the creation of stylized view-dependent 3D animation. Our system harnesses the skill and intuition of a traditionally trained animator by providing a convivial sketch-based 2D to 3D interface. A base mesh model of the character can be modified to match closely to an input sketch, with minimal user interaction. To do this, we recover the best camera from the intended view direction in the sketch using robust computer vision techniques. This aligns the mesh model with the sketch. We then deform the 3D character in two stages: first we reconstruct the best matching skeletal pose from the sketch, and then we deform the mesh geometry. We introduce techniques to incorporate deformations in the view-dependent setting. This allows us to set up view-dependent models for animation. Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Animation. Figure 7: Our system takes as input a sketch (a) and a base mesh model (b), then recovers a camera to orient the base mesh (c), then reconstructs the skeleton pose (d), and finally deforms the mesh to find the best possible match with the sketch (e). [source]


    Dye Advection Without the Blur: A Level-Set Approach for Texture-Based Visualization of Unsteady Flow

    COMPUTER GRAPHICS FORUM, Issue 3 2004
    D. Weiskopf
    Dye advection is an intuitive and versatile technique to visualize both steady and unsteady flow. Dye can be easily combined with noise-based dense vector field representations and is an important element in user-centric visual exploration processes. However, fast texture-based implementations of dye advection rely on linear interpolation operations that lead to severe diffusion artifacts. In this paper, a novel approach for dye advection is proposed to avoid this blurring and to achieve long and clearly defined streaklines or extended streak-like patterns. The interface between dye and background is modeled as a level-set within a signed distance field. The level-set evolution is governed by the underlying flow field and is computed by a semi-Lagrangian method. A reinitialization technique is used to counteract the distortions introduced by the level-set evolution and to maintain a level-set function that represents a local distance field. This approach works for 2D and 3D flow fields alike. It is demonstrated how the texture-based level-set representation lends itself to an efficient GPU implementation and therefore facilitates interactive visualization. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image Generation I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism [source]
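
    A minimal 2D sketch of semi-Lagrangian advection of a signed-distance level-set, the core step described above. The grid, velocity field, and interpolation choices are illustrative, and the reinitialization step that restores the distance property is only noted in a comment.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def advect_level_set(phi: np.ndarray, u: np.ndarray, v: np.ndarray, dt: float) -> np.ndarray:
            """One semi-Lagrangian step: trace each grid point backwards along the
            velocity field (u, v) and sample phi there (linear interpolation).
            A separate reinitialization step would restore the distance property."""
            ny, nx = phi.shape
            y, x = np.mgrid[0:ny, 0:nx].astype(float)
            # Backtrace departure points (grid units; velocities given in cells/step).
            xd = x - dt * u
            yd = y - dt * v
            return map_coordinates(phi, [yd, xd], order=1, mode='nearest')

        # Toy example: a circular dye front advected by a uniform flow to the right.
        ny, nx = 64, 64
        y, x = np.mgrid[0:ny, 0:nx]
        phi = np.sqrt((x - 20.0) ** 2 + (y - 32.0) ** 2) - 8.0   # signed distance to a circle
        u = np.ones_like(phi); v = np.zeros_like(phi)
        for _ in range(10):
            phi = advect_level_set(phi, u, v, dt=1.0)
        print("front centre has moved right:", np.unravel_index(np.argmin(phi), phi.shape))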


    Instant Volumetric Understanding with Order-Independent Volume Rendering

    COMPUTER GRAPHICS FORUM, Issue 3 2004
    Benjamin Mora
    Rapid, visual understanding of volumetric datasets is a crucial outcome of a good volume rendering application, but few current volume rendering systems deliver this result. Our goal is to reduce the volumetric surfing that is required to understand volumetric features by conveying more information in fewer images. In order to achieve this goal, and in contrast with most current methods which still use optical models and alpha blending, our approach reintroduces the order-independent contribution of every sample along the ray in order to have an equiprobable visualization of all the volume samples. Therefore, we demonstrate how order independent sampling can be suitable for fast volume understanding, show useful extensions to MIP and X-ray like renderings, and, finally, point out the special advantage of using stereo visualization in these models to circumvent the lack of depth cues. Categories and Subject Descriptors: I.3.3 [Computer Graphics]: Picture/Image Generation, I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism. [source]
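
    A toy sketch contrasting order-dependent alpha blending with the order-independent MIP and X-ray style compositing the abstract refers to; the sample values are arbitrary.

        import numpy as np

        def composite_alpha(samples, alphas):
            """Front-to-back alpha blending: the order of samples matters."""
            color, transparency = 0.0, 1.0
            for c, a in zip(samples, alphas):
                color += transparency * a * c
                transparency *= (1.0 - a)
            return color

        def composite_mip(samples):
            """Maximum intensity projection: order independent."""
            return np.max(samples)

        def composite_xray(samples):
            """X-ray style accumulation (average attenuation): order independent."""
            return np.mean(samples)

        ray = np.array([0.1, 0.9, 0.3, 0.2])
        alpha = np.array([0.2, 0.8, 0.5, 0.1])
        print("alpha blend :", round(composite_alpha(ray, alpha), 3))
        print("reversed    :", round(composite_alpha(ray[::-1], alpha[::-1]), 3))  # differs
        print("MIP         :", composite_mip(ray))        # same for any sample order
        print("X-ray mean  :", round(composite_xray(ray), 3))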


    DiFi: Fast 3D Distance Field Computation Using Graphics Hardware

    COMPUTER GRAPHICS FORUM, Issue 3 2004
    Avneesh Sud
    We present an algorithm for fast computation of discretized 3D distance fields using graphics hardware. Given a set of primitives and a distance metric, our algorithm computes the distance field for each slice of a uniform spatial grid by rasterizing the distance functions of the primitives. We compute bounds on the spatial extent of the Voronoi region of each primitive. These bounds are used to cull and clamp the distance functions rendered for each slice. Our algorithm is applicable to all geometric models and does not make any assumptions about connectivity or a manifold representation. We have used our algorithm to compute distance fields of large models composed of tens of thousands of primitives on high resolution grids. Moreover, we demonstrate its application to medial axis evaluation and proximity computations. As compared to earlier approaches, we are able to achieve an order of magnitude improvement in the running time. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Distance fields, Voronoi regions, graphics hardware, proximity computations [source]
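
    As a point of reference for what the GPU algorithm accelerates, here is a brute-force CPU sketch of a discretized 3D distance field for point primitives under the Euclidean metric. The grid resolution and primitives are illustrative, and none of the Voronoi-based culling described above is implemented.

        import numpy as np

        def brute_force_distance_field(points: np.ndarray, res: int = 32):
            """Reference (CPU) computation of a discretized 3D distance field on a
            res^3 grid over the unit cube, for point primitives and the Euclidean
            metric. The GPU method described above instead rasterizes per-primitive
            distance functions slice by slice and culls them with Voronoi bounds."""
            axis = (np.arange(res) + 0.5) / res
            gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
            grid = np.stack([gx, gy, gz], axis=-1)                  # (res, res, res, 3)
            # Distance from every voxel centre to every primitive; keep the minimum.
            diff = grid[..., None, :] - points[None, None, None, :, :]
            return np.sqrt((diff ** 2).sum(-1)).min(-1)             # (res, res, res)

        pts = np.random.rand(50, 3)
        field = brute_force_distance_field(pts, res=32)
        print("closest voxel distance:", field.min(), "farthest:", field.max())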


    GPU-Based Nonlinear Ray Tracing

    COMPUTER GRAPHICS FORUM, Issue 3 2004
    Daniel Weiskopf
    In this paper, we present a mapping of nonlinear ray tracing to the GPU which avoids any data transfer back to main memory. The rendering process consists of the following parts: ray setup according to the camera parameters, ray integration, ray-object intersection, and local illumination. Bent rays are approximated by polygonal lines that are represented by textures. Ray integration is based on an iterative numerical solution of ordinary differential equations whose initial values are determined during ray setup. To improve the rendering performance, we propose acceleration techniques such as early ray termination and adaptive ray integration. Finally, we discuss a variety of applications that range from the visualization of dynamical systems to the general relativistic visualization in astrophysics and the rendering of the continuous refraction in media with varying density. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image Generation I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism [source]
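
    A CPU sketch of the ray-integration step: bent rays as an initial-value ODE solved with simple fixed Euler steps, producing the polygonal-line approximation mentioned above. The acceleration function here is a toy stand-in, not a physically meaningful refraction or relativity model.

        import numpy as np

        def integrate_ray(origin, direction, accel, dt=0.01, steps=500):
            """Integrate a bent ray x''(t) = accel(x, v) with fixed Euler steps
            (velocity updated first, then position), returning the polyline of
            positions that a GPU version would store in textures."""
            x = np.asarray(origin, dtype=float)
            v = np.asarray(direction, dtype=float)
            path = [x.copy()]
            for _ in range(steps):
                a = accel(x, v)
                v = v + dt * a
                x = x + dt * v
                path.append(x.copy())
            return np.array(path)

        # Toy medium: rays are deflected towards the plane y = 0 (illustrative only).
        def toy_accel(x, v):
            return np.array([0.0, -4.0 * x[1], 0.0])

        path = integrate_ray(origin=[0.0, 1.0, 0.0], direction=[1.0, 0.0, 0.0], accel=toy_accel)
        print("ray start:", path[0], "ray end:", np.round(path[-1], 3))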


    Hardware-Accelerated Rendering of Photo Hulls

    COMPUTER GRAPHICS FORUM, Issue 3 2004
    Ming Li
    This paper presents an efficient hardware-accelerated method for novel view synthesis from a set of images or videos. Our method is based on the photo hull representation, which is the maximal photo-consistent shape. We avoid the explicit reconstruction of photo hulls by adopting a view-dependent plane-sweeping strategy. From the target viewpoint, slicing planes are rendered with reference views projected onto them. Graphics hardware is exploited to verify the photo-consistency of each rasterized fragment. Visibilities with respect to reference views are properly modeled, and only photo-consistent fragments are kept and colored in the target view. We present experiments with real images and animation sequences. Thanks to the more accurate shape of the photo hull representation, our method generates more realistic rendering results than methods based on visual hulls. Currently, we achieve rendering frame rates of 2-3 fps. Compared to a pure software implementation, the performance of our hardware-accelerated method is approximately 7 times faster. Categories and Subject Descriptors (according to ACM CCS): CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism. [source]
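
    A sketch of the per-fragment photo-consistency test at the heart of plane-sweep photo hull rendering: project a candidate 3D point into the reference views, gather colours, and keep the fragment only if they agree. The camera matrices, the variance threshold, and the omission of visibility handling are simplifying assumptions.

        import numpy as np

        def photo_consistent(point_h: np.ndarray, projections: list, images: list,
                             threshold: float = 0.01) -> bool:
            """Return True if the colours that the reference views observe at a
            candidate 3D point (homogeneous, shape (4,)) agree closely enough.
            `projections` are 3x4 camera matrices; visibility handling is omitted."""
            colours = []
            for P, img in zip(projections, images):
                x, y, w = P @ point_h
                u, v = int(round(x / w)), int(round(y / w))
                if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
                    colours.append(img[v, u])
            if len(colours) < 2:
                return False                      # not enough observations to decide
            return np.var(np.asarray(colours, dtype=float), axis=0).mean() < threshold

        # Illustrative use with two tiny synthetic views and identity-like cameras.
        img = np.full((4, 4, 3), 0.5)
        P = np.hstack([np.eye(3), np.zeros((3, 1))])
        print(photo_consistent(np.array([1.0, 1.0, 1.0, 1.0]), [P, P], [img, img]))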


    Out-of-core compression and decompression of large n -dimensional scalar fields

    COMPUTER GRAPHICS FORUM, Issue 3 2003
    Lawrence Ibarria
    We present a simple method for compressing very large and regularly sampled scalar fields. Our method is particularly attractive when the entire data set does not fit in memory and when the sampling rate is high relative to the feature size of the scalar field in all dimensions. Although we report results only for particular dimensionalities, the proposed approach may be applied to higher dimensions. The method is based on the new Lorenzo predictor, introduced here, which estimates the value of the scalar field at each sample from the values at processed neighbors. The predicted values are exact when the n-dimensional scalar field is an implicit polynomial of degree n-1. Surprisingly, when the residuals (differences between the actual and predicted values) are encoded using arithmetic coding, the proposed method often outperforms wavelet compression in an L∞ sense. The proposed approach may be used both for lossy and lossless compression and is well suited for out-of-core compression and decompression, because a trivial implementation, which sweeps through the data set reading it once, requires maintaining only a small buffer in core memory, whose size barely exceeds a single (n-1)-dimensional slice of the data. Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Compression, scalar fields, out-of-core. [source]
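
    A sketch of the n-dimensional Lorenzo predictor as described: each sample is predicted from the already-processed corners of its unit hypercube with alternating signs, which reduces to the parallelogram rule left + top - top-left in 2D; the residuals would then be entropy coded. The boundary handling (treating out-of-range neighbours as zero) is an assumption.

        import numpy as np
        from itertools import product

        def lorenzo_predict(data: np.ndarray, index: tuple) -> float:
            """Predict data[index] from the other corners of the unit hypercube that
            has `index` as its 'upper' corner: neighbours offset by a non-empty subset
            of axes contribute with sign (-1)**(|subset|+1). In 2D this is the
            parallelogram rule left + top - top-left."""
            n = data.ndim
            pred = 0.0
            for offsets in product((0, 1), repeat=n):
                k = sum(offsets)
                if k == 0:
                    continue                       # that corner is the sample itself
                nb = tuple(i - o for i, o in zip(index, offsets))
                if any(c < 0 for c in nb):
                    continue                       # outside the grid: treat as zero
                pred += (-1) ** (k + 1) * data[nb]
            return pred

        # The predictor is exact for low-degree polynomials, so residuals are tiny.
        y, x = np.mgrid[0:8, 0:8].astype(float)
        field = 2.0 * x + 3.0 * y + 1.0                     # degree-1 polynomial in 2D
        residual = field[4, 5] - lorenzo_predict(field, (4, 5))
        print("residual on a linear field:", residual)      # 0.0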


    Hierarchical Context-based Pixel Ordering

    COMPUTER GRAPHICS FORUM, Issue 3 2003
    Ziv Bar-Joseph
    Abstract We present a context-based scanning algorithm which reorders the input image using a hierarchical representation of the image. Our algorithm optimally orders (permutes) the leaves corresponding to the pixels by minimizing the sum of distances between neighboring pixels. The reordering results in an improved autocorrelation between nearby pixels, which leads to a smoother image. This allows us, for the first time, to improve image compression rates using context-based scans. The results presented in this paper greatly improve upon previous work in both compression rate and running time. Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling I.3.6 [Computer Graphics]: Methodology and Techniques [source]


    Adaptive Logarithmic Mapping For Displaying High Contrast Scenes

    COMPUTER GRAPHICS FORUM, Issue 3 2003
    F. Drago
    We propose a fast, high quality tone mapping technique to display high contrast images on devices with a limited dynamic range of luminance values. The method is based on logarithmic compression of luminance values, imitating the human response to light. A bias power function is introduced to adaptively vary logarithmic bases, resulting in good preservation of details and contrast. To improve contrast in dark areas, changes to the gamma correction procedure are proposed. Our adaptive logarithmic mapping technique is capable of producing perceptually tuned images with high dynamic content and works at interactive speed. We demonstrate a successful application of our tone mapping technique with a high dynamic range video player that enables adjustment of optimal viewing conditions for any kind of display while taking into account user preference concerning brightness, contrast compression, and detail reproduction. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Image Processing and Computer Vision]: Image Representation [source]
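
    A sketch of logarithmic luminance compression with a bias power function that adaptively varies the logarithm base, following a common formulation of this technique; the specific constants (interpolating the base between 2 and 10) and the final clipping are assumptions and may differ from the paper.

        import numpy as np

        def adaptive_log_tonemap(lum: np.ndarray, bias: float = 0.85, ld_max: float = 100.0) -> np.ndarray:
            """Map world luminance `lum` to display luminance by logarithmic
            compression whose base varies per pixel via a bias power function."""
            lw_max = lum.max()
            lwa = lum / lw_max                                   # luminance relative to scene max
            # Bias power function interpolates the logarithm base between 2 and 10.
            bias_exp = np.log(bias) / np.log(0.5)
            denom = np.log(2.0 + 8.0 * np.power(lwa, bias_exp))
            ld = (ld_max * 0.01 / np.log10(lw_max + 1.0)) * np.log(lum + 1.0) / denom
            return np.clip(ld, 0.0, 1.0)

        # Toy HDR luminance spanning five orders of magnitude.
        lum = np.logspace(-2, 3, 6)
        print(np.round(adaptive_log_tonemap(lum), 3))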


    Hierarchical Higher Order Face Cluster Radiosity for Global Illumination Walkthroughs of Complex Non-Diffuse Environments

    COMPUTER GRAPHICS FORUM, Issue 3 2003
    Enrico Gobbetti
    We present an algorithm for simulating global illumination in scenes composed of highly tessellated objects with diffuse or moderately glossy reflectance. The solution method is a higher order extension of the face cluster radiosity technique. It combines face clustering, multiresolution visibility, vector radiosity, and higher order bases with a modified progressive shooting iteration to rapidly produce visually continuous solutions with limited memory requirements. The output of the method is a vector irradiance map that partitions input models into areas where global illumination is well approximated using the selected basis. The programming capabilities of modern commodity graphics architectures are exploited to render illuminated models directly from the vector irradiance map, exploiting hardware acceleration for approximating view dependent illumination during interactive walkthroughs. Using this algorithm, visually compelling global illumination solutions for scenes of over one million input polygons can be computed in minutes and examined interactively on common graphics personal computers. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture and Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism. [source]


    Visyllable Based Speech Animation

    COMPUTER GRAPHICS FORUM, Issue 3 2003
    Sumedha Kshirsagar
    Visemes are the visual counterpart of phonemes. Traditionally, the speech animation of 3D synthetic faces involves extraction of visemes from input speech followed by the application of co-articulation rules to generate realistic animation. In this paper, we take a novel approach for speech animation, using visyllables, the visual counterpart of syllables. The approach results in a concatenative visyllable based speech animation system. The key contribution of this paper lies in two main areas. Firstly, we define a set of visyllable units for spoken English along with the associated phonological rules for valid syllables. Based on these rules, we have implemented a syllabification algorithm that allows segmentation of a given phoneme stream into syllables and subsequently visyllables. Secondly, we have recorded the database of visyllables using a facial motion capture system. The recorded visyllable units are post-processed semi-automatically to ensure continuity at the vowel boundaries of the visyllables. We define each visyllable in terms of the Facial Movement Parameters (FMP). The FMPs are obtained as a result of the statistical analysis of the facial motion capture data. The FMPs allow a compact representation of the visyllables. Further, the FMPs also facilitate the formulation of rules for boundary matching and smoothing after concatenating the visyllable units. Ours is the first visyllable based speech animation system. The proposed technique is easy to implement, effective for real-time as well as non-real-time applications, and results in realistic speech animation. Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism [source]


    Reanimating Faces in Images and Video

    COMPUTER GRAPHICS FORUM, Issue 3 2003
    V. Blanz
    This paper presents a method for photo-realistic animation that can be applied to any face shown in a single image or a video. The technique does not require example data of the person's mouth movements, and the image to be animated is not restricted in pose or illumination. Video reanimation allows for head rotations and speech in the original sequence, but neither of these motions is required. In order to animate novel faces, the system transfers mouth movements and expressions across individuals, based on a common representation of different faces and facial expressions in a vector space of 3D shapes and textures. This space is computed from 3D scans of neutral faces and scans of facial expressions. The 3D model's versatility with respect to pose and illumination is conveyed to photo-realistic image and video processing by a framework of analysis and synthesis algorithms: the system automatically estimates 3D shape and all relevant rendering parameters, such as pose, from single images. In video, head pose and mouth movements are tracked automatically. Reanimated with new mouth movements, the 3D face is rendered into the original images. Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Animation [source]


    Deferred, Self-Organizing BSP Trees

    COMPUTER GRAPHICS FORUM, Issue 3 2002
    Sigal Ar
    Abstract BSP trees and KD trees are fundamental data structures for collision detection in walkthrough environments. A basic issue in the construction of these hierarchical data structures is the choice of cutting planes. Rather than base these choices solely on the properties of the scene, we propose using information about how the tree is used in order to determine its structure. We demonstrate how this leads to the creation of BSP trees that are small, do not require much preprocessing time, and respond very efficiently to sequences of collision queries. Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling I.3.6 [Computer Graphics]: Graphics data structures and data types, Interaction techniques I.3.7 [Computer Graphics]: Virtual reality [source]


    Local Physical Models for Interactive Character Animation

    COMPUTER GRAPHICS FORUM, Issue 3 2002
    Sageev Oore
    Our goal is to design and build a tool for the creation of expressive character animation. Virtual puppetry, also known as performance animation, is a technique in which the user interactively controls a character's motion. In this paper we introduce local physical models for performance animation and describe how they can augment an existing kinematic method to achieve very effective animation control. These models approximate specific physically-generated aspects of a character's motion. They automate certain behaviours, while still letting the user override such motion via a PD-controller if he so desires. Furthermore, they can be tuned to ignore certain undesirable effects, such as the risk of having a character fall over, by ignoring corresponding components of the force. Although local physical models are a quite simple approximation to real physical behaviour, we show that they are extremely useful for interactive character control, and contribute positively to the expressiveness of the character's motion. In this paper, we develop such models at the knees and ankles of an interactively-animated 3D anthropomorphic character, and demonstrate a resulting animation. This approach can be applied in a straight-forward way to other joints. Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism, Interaction Techniques [source]
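
    A toy sketch of the PD controller mentioned above, driving a single joint angle towards a user target on a unit-inertia joint. The gains, time step, and single-degree-of-freedom dynamics are illustrative assumptions rather than the paper's character model.

        def pd_torque(theta, theta_dot, theta_target, kp=40.0, kd=8.0):
            """Proportional-derivative control torque driving a joint angle towards
            a user-specified target while damping its angular velocity."""
            return kp * (theta_target - theta) - kd * theta_dot

        # Toy single-joint simulation (unit inertia), illustrative only.
        theta, theta_dot, target, dt = 0.0, 0.0, 1.0, 0.01
        for step in range(300):
            tau = pd_torque(theta, theta_dot, target)
            theta_dot += tau * dt          # integrate angular acceleration (I = 1)
            theta += theta_dot * dt
        print(f"joint angle after 3 s: {theta:.3f} (target {target})")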


    Projective Texture Mapping with Full Panorama

    COMPUTER GRAPHICS FORUM, Issue 3 2002
    Dongho Kim
    Projective texture mapping is used to project a texture map onto scene geometry. It has been used in many applications, since it eliminates the assignment of fixed texture coordinates and provides a good method of representing synthetic images or photographs in image-based rendering. But conventional projective texture mapping has limitations in the field of view and the degree of navigation because only simple rectangular texture maps can be used. In this work, we propose the concept of panoramic projective texture mapping (PPTM). It projects a cubic or cylindrical panorama onto the scene geometry. With this scheme, any polygonal geometry can receive the projection of a panoramic texture map, without using fixed texture coordinates or modeling many projective texture mappings. For fast real-time rendering, a hardware-based rendering method is also presented. Applications of PPTM include a panorama viewer similar to QuicktimeVR and navigation in the panoramic scene, which can be created by image-based modeling techniques. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Viewing Algorithms; I.3.7 [Computer Graphics]: Color, Shading, Shadowing, and Texture [source]
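
    A small sketch of the lookup underlying cylindrical panoramic projection: mapping a direction from the projection centre to (u, v) coordinates in a panoramic texture. The axis convention, the linear elevation parameterization, and the vertical field of view are assumptions; a real implementation would evaluate this per fragment on the GPU.

        import numpy as np

        def cylindrical_panorama_uv(direction: np.ndarray, v_fov: float = np.pi / 2) -> np.ndarray:
            """Map unit direction vectors (N, 3) from the projection centre to (u, v)
            texture coordinates in a cylindrical panorama. u follows the azimuth
            around the +Z axis; v follows the elevation, clamped to the panorama's
            assumed vertical field of view."""
            d = direction / np.linalg.norm(direction, axis=-1, keepdims=True)
            azimuth = np.arctan2(d[..., 1], d[..., 0])            # [-pi, pi]
            elevation = np.arcsin(np.clip(d[..., 2], -1.0, 1.0))  # [-pi/2, pi/2]
            u = (azimuth + np.pi) / (2.0 * np.pi)
            v = np.clip(elevation / v_fov + 0.5, 0.0, 1.0)
            return np.stack([u, v], axis=-1)

        dirs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
        print(np.round(cylindrical_panorama_uv(dirs), 3))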


    Dynamic Textures for Image-based Rendering of Fine-Scale 3D Structure and Animation of Non-rigid Motion

    COMPUTER GRAPHICS FORUM, Issue 3 2002
    Dana Cobza
    The problem of capturing real world scenes and then accurately rendering them is particularly difficult for fine-scale 3D structure. Similarly, it is difficult to capture, model and animate non-rigid motion. We present a method where small image changes are captured as a time varying (dynamic) texture. In particular, a coarse geometry is obtained from a sample set of images using structure from motion. This geometry is then used to subdivide the scene and to extract approximately stabilized texture patches. The residual statistical variability in the texture patches is captured using a PCA basis of spatial filters. The filter coefficients are parameterized by camera pose and object motion. To render new poses and motions, new texture patches are synthesized by modulating the texture basis. The texture is then warped back onto the coarse geometry. We demonstrate how the texture modulation and projective homography-based warps can be achieved in real-time using hardware accelerated OpenGL. Experiments comparing dynamic texture modulation to standard texturing are presented for objects with complex geometry (a flower) and non-rigid motion (human arm motion capturing the non-rigidities in the joints, and creasing of the shirt). Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Image Based Rendering [source]


    Hardware-Based Volumetric Knit-Wear

    COMPUTER GRAPHICS FORUM, Issue 3 2002
    Katja Daubert
    We present a hardware-based, volumetric approach for rendering knit wear at very interactive rates. A single stitch is represented by a volumetric texture with each voxel storing the main direction of the strands of yarn inside it. We render the knit wear in layers using an approximation of the Banks model. Our hardware implementation allows specular and diffuse material properties to change from one voxel to the next. This enables us to represent yarn made up of different components or render garments with complicated color patterns. Furthermore, our approach can handle self-shadowing of the stitches, and can easily be adapted to also include view-independent scattering. The resulting shader lends itself naturally to mip-mapping, and requires no reordering of the base geometry, making it versatile and easy to use. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Hardware Applications Volumetric Textures [source]


    Free-form sketching with variational implicit surfaces

    COMPUTER GRAPHICS FORUM, Issue 3 2002
    Olga Karpenko
    With the advent of sketch-based methods for shape construction, there's a new degree of power available in the rapid creation of approximate shapes. Sketch [Zeleznik, 1996] showed how a gesture-based modeler could be used to simplify conventional CSG-like shape creation. Teddy [Igarashi, 1999] extended this to more free-form models, getting much of its power from its "inflation" operation (which converted a simple closed curve in the plane into a 3D shape whose silhouette, from the current point of view, was that curve on the view plane) and from an elegant collection of gestures for attaching additional parts to a shape, cutting a shape, and deforming it. But despite the powerful collection of tools in Teddy, the underlying polygonal representation of shapes intrudes on the results in many places. In this paper, we discuss our preliminary efforts at using variational implicit surfaces [Turk, 2000] as a representation in a free-form modeler. We also discuss the implementation of several operations within this context, and a collection of user-interaction elements that work well together to make modeling interesting hierarchies simple. These include "stroke inflation" via implicit functions, blob-merging, automatic hierarchy construction, and local surface modification via silhouette oversketching. We demonstrate our results by creating several models. Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Modeling packages I.3.6 [Computer Graphics]: Interaction techniques [source]
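
    A hedged sketch of fitting a variational implicit surface in the spirit of [Turk, 2000]: interpolate zero at surface points and a small positive value at points offset along the normals, using an |r|^3 radial basis plus a linear polynomial, then evaluate the resulting implicit function. The kernel choice, offset scheme, and dense solve are simplifications, not the authors' exact formulation.

        import numpy as np

        def fit_variational_implicit(points: np.ndarray, normals: np.ndarray, eps: float = 0.01):
            """Fit an implicit function f with f = 0 at surface points and f = eps at
            points offset along the normals, using the |r|^3 radial basis plus a
            linear polynomial (a simplified variational-implicit formulation)."""
            centres = np.vstack([points, points + eps * normals])
            values = np.concatenate([np.zeros(len(points)), np.full(len(points), eps)])
            n = len(centres)
            r = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
            A = np.zeros((n + 4, n + 4))
            A[:n, :n] = r ** 3
            P = np.hstack([np.ones((n, 1)), centres])          # 1, x, y, z
            A[:n, n:] = P
            A[n:, :n] = P.T
            rhs = np.concatenate([values, np.zeros(4)])
            coeffs = np.linalg.solve(A, rhs)
            w, poly = coeffs[:n], coeffs[n:]

            def f(x):
                d = np.linalg.norm(x - centres, axis=-1)
                return d ** 3 @ w + poly[0] + x @ poly[1:]
            return f

        # Toy example: samples on a unit sphere; f is ~0 on the sphere and, with this
        # sign convention, typically negative inside and positive outside.
        rng = np.random.default_rng(0)
        pts = rng.normal(size=(60, 3)); pts /= np.linalg.norm(pts, axis=1, keepdims=True)
        f = fit_variational_implicit(pts, pts)
        print(round(f(np.zeros(3)), 3), round(f(np.array([0.0, 0.0, 2.0])), 3))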


    A Guide to Understanding and Developing Performance-Level Descriptors

    EDUCATIONAL MEASUREMENT: ISSUES AND PRACTICE, Issue 4 2008
    Marianne Perie
    There has been much discussion recently about why the percentage of students scoring Proficient or above varies as much as it does on state assessments across the country. However, most of these discussions center on the leniency or rigor of the cut score. Yet, the cut score is developed in a standard-setting process that depends heavily on the definition for each level of performance. Good performance-level descriptors (PLDs) can be the foundation of an assessment program, driving everything from item development to cut scores to reporting. PLDs should be written using a multistep process. First, policymakers determine the number and names of the levels. Next, they develop policy definitions specifying the level of rigor intended by each level, regardless of the grade or subject to which it is applied. Finally, content experts and education leaders should supplement these policy definitions with specific statements related to the content standards for each assessment. This article describes a process for developing PLDs, contrasts that with current state practice, and discusses the implication for interpreting the word "proficient," which is the keystone of No Child Left Behind. [source]


    Patient Descriptors in Injection Drug Abuse

    ACADEMIC EMERGENCY MEDICINE, Issue 6 2002
    Barbara Herbert MD
    No abstract is available for this article. [source]


    Development of Flavor Descriptors for Pawpaw Fruit Puree: A Step Toward the Establishment of a Native Tree Fruit Industry

    FAMILY & CONSUMER SCIENCES RESEARCH JOURNAL, Issue 2 2006
    Melani W. Duffrin
    The pawpaw (Asimina triloba) is a native tree fruit with potential as a high-value niche crop for farmers in fresh-market and processing ventures. With a flavor resembling a combination of banana, mango, and pineapple, this fruit could compete with exported specialty fruits in the United States such as mango and papaya. The study objective was to develop a descriptive language for frozen pawpaw fruit puree, thereby assisting growers in the selection of superior varieties for fresh-market and processing ventures. Panelists generated 13 visual, 17 flavor, and 12 texture puree descriptors. Using these descriptors with fruit collected from Southeast Ohio (SEO) wild patches and two varieties (1-23 and 10-35), panelists identified both sour and bitter tastes in SEO puree compared to puree from either variety. The varieties also displayed positive characteristics of stronger melon and fresh flavors compared to SEO puree. Additional language descriptors for pawpaw puree may be needed. [source]