Preprocessing Step

Selected Abstracts


Stable stylized wireframe rendering

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2010
Chen Tang
Abstract Stylized wireframe rendering of 3D models is widely used in animation software to depict the configuration of deformable models in a comprehensible way. However, owing to inherent flaws in traditional depth-test-based rendering technology, the shape of lines cannot be preserved under continuous movement or deformation of the model. Severe aliasing, such as flickering artifacts, often appears when objects rendered in line form are animated, especially when rendered with thick or dashed lines. To overcome this artifact, unlike the traditional approach, we propose a novel fast line-drawing method with high visual fidelity for wireframe depiction that depends only on the intrinsic topology of the primitives, without any preprocessing step or pre-stored adjacency information. In contrast to previous widely used solutions, our method offers highly accurate visibility and a clear, stable line appearance without flickering, even for thick and dashed lines, with uniform width and a steady configuration as the model moves or animates, so that it is strongly suitable for animation systems. In addition, our approach can be easily implemented and controlled without any additional parameters to be estimated in advance by the user. Copyright © 2010 John Wiley & Sons, Ltd. [source]


Interaction-Dependent Semantics for Illustrative Volume Rendering

COMPUTER GRAPHICS FORUM, Issue 3 2008
Peter Rautek
In traditional illustration the choice of appropriate styles and rendering techniques is guided by the intention of the artist. For illustrative volume visualizations it is difficult to specify the mapping between the 3D data and the visual representation in a way that preserves the intention of the user. The semantic layers concept establishes this mapping with a linguistic formulation of rules that directly map data features to rendering styles. With semantic layers, fuzzy logic is used to evaluate the user-defined illustration rules in a preprocessing step. In this paper we introduce interaction-dependent rules that are evaluated for each frame and are therefore computationally more expensive. Enabling interaction-dependent rules, however, allows the use of a new class of semantics, resulting in more expressive interactive illustrations. We show that the evaluation of the fuzzy logic can be done on the graphics hardware, enabling the efficient use of interaction-dependent semantics. Furthermore, we introduce the flat rendering mode and discuss how different rendering parameters are influenced by the rule base. Our approach provides high-quality illustrative volume renderings at interactive frame rates, guided by the specification of illustration rules. [source]
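The core rule-evaluation idea can be sketched with a single toy fuzzy rule. The membership functions, the particular rule ("dense and close to the focus becomes opaque"), and the opacity mapping below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of fuzzy rule evaluation in the spirit of semantic layers.
# All membership functions and the rule itself are assumed for illustration.

def mu_high(x):
    """Ramp membership for 'high' on a [0, 1] normalised attribute."""
    return min(1.0, max(0.0, (x - 0.5) / 0.3))

def mu_low(x):
    """'Low' is taken as the complement of 'high', for simplicity."""
    return 1.0 - mu_high(x)

def evaluate_rule(density, distance_to_focus):
    """Rule: IF density IS high AND distance IS low THEN opacity IS high.
    Fuzzy AND is the minimum of the antecedent memberships."""
    firing = min(mu_high(density), mu_low(distance_to_focus))
    base_opacity = 0.1
    # defuzzified style value: blend base opacity toward full opacity
    return base_opacity + (1.0 - base_opacity) * firing

opacity = evaluate_rule(density=0.9, distance_to_focus=0.1)
```

Because each rule reduces to a few arithmetic operations per sample, the same evaluation maps naturally onto a fragment shader, which is what makes per-frame (interaction-dependent) evaluation feasible.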


Interactive Volume Rendering with Dynamic Ambient Occlusion and Color Bleeding

COMPUTER GRAPHICS FORUM, Issue 2 2008
Timo Ropinski
Abstract We propose a method for rendering volumetric data sets at interactive frame rates while supporting dynamic ambient occlusion as well as an approximation to color bleeding. In contrast to ambient occlusion approaches for polygonal data, techniques for volumetric data sets face additional challenges, since by changing rendering parameters, such as the transfer function or the thresholding, the structure of the data set and thus the light interactions may vary drastically. Therefore, during a preprocessing step that is independent of the rendering parameters, we capture light interactions for all combinations of structures extractable from a volumetric data set. To compute the light interactions between the different structures, we combine this preprocessed information during rendering based on the rendering parameters defined interactively by the user. Thus, our method supports interactive exploration of a volumetric data set while still giving the user control over the most important rendering parameters. For instance, if the user alters the transfer function to extract different structures from a volumetric data set, the light interactions between the extracted structures are captured in the rendering while still allowing interactive frame rates. Compared to known local illumination models for volume rendering, our method does not introduce any substantial rendering overhead and can be integrated easily into existing volume rendering applications. In this paper we explain our approach, discuss the implications for interactive volume rendering, and present the achieved results. [source]
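The precompute-then-recombine idea can be illustrated on a drastically simplified case: a 1-D "volume" whose voxels are already classified into intensity bins, with occlusion contributions stored per bin so that a transfer-function change only recombines counts instead of re-scanning the volume. The binning and the trivial neighbourhood model are assumptions made purely to keep the sketch short.

```python
# Sketch: per-voxel occlusion contributions are precomputed PER STRUCTURE
# (here: per intensity bin), then combined at render time for whichever
# bins the current transfer function keeps visible.

def precompute_occlusion(volume, n_bins, radius=1):
    """For each voxel, count neighbours falling into each intensity bin."""
    n = len(volume)  # a 1-D 'volume' keeps the sketch short
    table = [[0] * n_bins for _ in range(n)]
    for i in range(n):
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            if i != j:
                table[i][volume[j]] += 1
    return table

def occlusion(table, visible_bins):
    """Combine the precomputed counts for the bins kept by the transfer function."""
    return [sum(row[b] for b in visible_bins) for row in table]

vol = [0, 1, 1, 2, 0, 2]                 # voxel intensities, already binned
table = precompute_occlusion(vol, n_bins=3)
occ_all = occlusion(table, {0, 1, 2})    # everything visible
occ_b1 = occlusion(table, {1})           # only structure 1 visible
```

The precompute runs once per data set; `occlusion` is the cheap per-frame recombination that keeps the exploration interactive.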


A universal metric for sequential MIMO detection

EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 8 2007
Christian Kuhn
Conventionally, detection in multiple-antenna systems is based on a tree search or a lattice search with a metric that can be computed by recursively accumulating the corresponding metric increments for a given hypothesis. For that purpose, a multiple-antenna detector traditionally applies a preprocessing step to obtain the search metric in a suitable form. In contrast, we present a reformulation of the search metric that directly allows for an appropriate evaluation of the metric on the underlying structure, without the need for a computationally costly preprocessing step. Unlike the traditional approach, the new metric can also be applied when the system has fewer receive antennas than transmit antennas. We present simulation results in which the new metric is applied for turbo detection involving the list-sequential (LISS) detector that was pioneered by Joachim Hagenauer. Copyright © 2007 John Wiley & Sons, Ltd. [source]
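The search that the metric serves can be sketched as follows: symbol hypotheses are extended layer by layer and the Euclidean metric ||y − Hs||² is minimised over the tree of candidate vectors. The exhaustive depth-first enumeration below stands in for the pruned list search of the actual detector, and the tiny real-valued channel is an assumption for illustration.

```python
# Sketch of tree-search MIMO detection: brute-force depth-first enumeration
# of symbol hypotheses, minimising the Euclidean metric ||y - H s||^2.
# A real detector would prune this tree; the channel here is a toy example.

def detect(H, y, alphabet):
    """Return the symbol vector minimising ||y - H s||^2."""
    n_rx, n_tx = len(H), len(H[0])

    def metric(s):
        total = 0.0
        for i in range(n_rx):  # one squared-residual term per receive antenna
            r = y[i] - sum(H[i][j] * s[j] for j in range(n_tx))
            total += r * r
        return total

    best, best_m = None, float("inf")
    stack = [[]]                          # partial hypotheses, extended per layer
    while stack:
        s = stack.pop()
        if len(s) == n_tx:
            m = metric(s)
            if m < best_m:
                best, best_m = s, m
        else:
            for a in alphabet:
                stack.append(s + [a])
    return best, best_m

H = [[1.0, 0.5], [0.2, 1.0]]              # 2x2 real channel (toy)
s_hat, m = detect(H, y=[1.5, 1.2], alphabet=[-1.0, 1.0])
```

Here `y` was generated noiselessly from the transmit vector [1, 1], so the search recovers it with zero metric; the article's contribution is a metric form that supports this accumulation without the usual QR-type preprocessing.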


A rapidly converging filtered-error algorithm for multichannel active noise control

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 7 2007
A. P. Berkhoff
Abstract In this paper, a multichannel adaptive control algorithm is described which has good convergence properties while having relatively small computational complexity. This complexity is similar to that of the filtered-error algorithm. In order to obtain these properties, the algorithm is based on a preprocessing step for the actuator signals that uses a stable and causal inverse of the minimum-phase part of the transfer path between actuators and error sensors, the secondary path. This algorithm is known in the literature as the postconditioned filtered-error algorithm; it improves the convergence rate in the case that the minimum-phase part of the secondary path increases the eigenvalue spread. However, the convergence rate of this algorithm suffers from delays in the adaptation path, because adaptation rates have to be reduced for larger delays. The contribution of this paper is to modify the postconditioned filtered-error scheme in such a way that the adaptation rate can be set to a higher value. Consequently, the scheme also provides good convergence if the system contains significant delays. Furthermore, a regularized extension of the scheme is given that can be used to limit the actuator signals. Copyright © 2006 John Wiley & Sons, Ltd. [source]
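The adaptation loop that underlies this family of algorithms can be sketched in its simplest single-channel form, with the secondary path reduced to a known scalar gain. This is a deliberately minimal stand-in for the multichannel, minimum-phase-inverse scheme of the paper; the signals, gain, and step size are assumptions.

```python
# Minimal single-channel filtered-error LMS sketch. The secondary path is a
# known scalar gain g (a gross simplification of the real transfer path).
import random

def filtered_error_lms(x, d, g, mu, n_taps):
    """Adapt FIR weights w so that g * (w convolved with x) cancels d."""
    w = [0.0] * n_taps
    buf = [0.0] * n_taps
    errors = []
    for n in range(len(x)):
        buf = [x[n]] + buf[:-1]                     # newest sample first
        y = sum(wi * bi for wi, bi in zip(w, buf))  # actuator signal
        e = d[n] - g * y                            # residual at error sensor
        # gradient step on the reference filtered through the secondary path
        w = [wi + mu * e * g * bi for wi, bi in zip(w, buf)]
        errors.append(e)
    return w, errors

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(2000)]
d = [0.8 * xi for xi in x]                          # disturbance correlated with x
w, errors = filtered_error_lms(x, d, g=1.0, mu=0.05, n_taps=4)
```

With a delay-free scalar path the loop converges quickly; the paper's point is precisely that real secondary paths contain delays and non-minimum-phase parts that force the step size down, which its modified scheme avoids.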


Color reduction for complex document images

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 1 2009
Nikos Nikolaou
Abstract A new technique for color reduction of complex document images is presented in this article. It significantly reduces the number of colors in the document image (to fewer than 15 colors in most cases) so as to obtain solid characters and uniform local backgrounds. Therefore, this technique can be used as a preprocessing step by text information extraction applications. Specifically, using the edge map of the document image, a representative set of samples is chosen that constructs a 3D color histogram. Based on these samples in the 3D color space, a relatively large number of colors (usually no more than 100) is obtained by a simple clustering procedure. The final colors are obtained by applying a mean-shift-based procedure. Also, an edge-preserving smoothing filter is used as a preprocessing stage that significantly enhances the quality of the initial image. Experimental results prove the method's capability of producing correctly segmented complex color documents in which the character elements can be easily extracted as connected components. © 2009 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 19, 14–26, 2009 [source]
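The second stage, merging a coarse palette down to a handful of final colors, can be sketched with a plain mean-shift mode seek over RGB triples. The toy palette and bandwidth below are assumptions; the article's actual procedure operates on edge-map samples of a real document.

```python
# Sketch of mean-shift-style palette merging: each coarse colour is shifted
# toward the mean of its neighbours until nearby colours collapse to modes.

def mean_shift_palette(colors, bandwidth, n_iter=20):
    """Merge nearby palette colours into modes via flat-kernel mean shift."""
    pts = [list(c) for c in colors]
    for _ in range(n_iter):
        new = []
        for p in pts:
            nb = [q for q in pts
                  if sum((a - b) ** 2 for a, b in zip(p, q)) <= bandwidth ** 2]
            new.append([sum(q[k] for q in nb) / len(nb) for k in range(3)])
        pts = new
    # collapse points that ended on (numerically) the same mode
    modes = []
    for p in pts:
        if not any(sum((a - b) ** 2 for a, b in zip(p, m)) < 1.0 for m in modes):
            modes.append(p)
    return modes

# a coarse palette: two near-whites, two near-blacks, one dark red
coarse = [(250, 250, 250), (245, 248, 252), (10, 12, 8), (0, 0, 0), (128, 0, 0)]
final = mean_shift_palette(coarse, bandwidth=30.0)
```

The two whites and the two blacks each merge into one mode while the dark red survives, mirroring how the method collapses ~100 coarse colors to under 15 solid document colors.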


A fast spherical inflation method of the cerebral cortex by deformation of a simplex mesh on the polar coordinates

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 1 2008
Min Jeong Kwon
Abstract The convoluted shape of the cerebral cortex makes it difficult to analyze and visualize the neuronal activation area. One way of overcoming this problem is to use a spherical inflation method to draw the cerebral cortex on a spherical surface. The task of mapping the cortical surface onto a sphere faces several obstacles, namely, the overlap between the polygons of the surface, the heavy computational demand, and the geometric distortions inherent in the process. This article proposes a method of mapping the three-dimensional (3D) cortical surface, represented as a simplex mesh, onto a sphere surface that avoids any overlap between the polygons and minimizes both the geometric distortions and the computation time. The proposed method consists of two steps: preprocessing and refinement. In the preprocessing step, the 3D cortical surface is mapped onto a sphere without any overlap between the polygons by iterative deformation. In the refinement step, the mapped surface is adjusted to minimize its linear distortion. The experimental results show the efficiency and performance of the proposed mapping method. © 2008 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 18, 9–16, 2008 [source]
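The iterative-deformation idea of the preprocessing step can be sketched by alternating Laplacian smoothing with projection onto the unit sphere. The tiny squashed-octahedron mesh, the step size, and the iteration count are all assumptions for illustration; they stand in for a cortical simplex mesh with thousands of vertices.

```python
# Sketch of iterative spherical inflation: smooth each vertex toward the
# centroid of its neighbours, then re-normalise it onto the unit sphere.
import math

def inflate(verts, neighbors, n_iter=50, step=0.5):
    v = [list(p) for p in verts]
    for _ in range(n_iter):
        new = []
        for i, p in enumerate(v):
            # Laplacian smoothing: move toward the neighbour centroid
            c = [sum(v[j][k] for j in neighbors[i]) / len(neighbors[i])
                 for k in range(3)]
            q = [p[k] + step * (c[k] - p[k]) for k in range(3)]
            # inflation: project back onto the unit sphere
            r = math.sqrt(sum(x * x for x in q)) or 1.0
            new.append([x / r for x in q])
        v = new
    return v

# a squashed octahedron: 6 vertices, each opposite pair sharing 4 neighbours
verts = [(2, 0, 0), (-2, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 0.5), (0, 0, -0.5)]
neighbors = [[2, 3, 4, 5], [2, 3, 4, 5], [0, 1, 4, 5],
             [0, 1, 4, 5], [0, 1, 2, 3], [0, 1, 2, 3]]
sphere = inflate(verts, neighbors)
```

This basic scheme says nothing about the overlap-free guarantee or distortion minimisation that the article adds; it only shows the smooth-and-project loop at the heart of inflation methods.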


Correlation optimized warping and dynamic time warping as preprocessing methods for chromatographic data

JOURNAL OF CHEMOMETRICS, Issue 5 2004
Giorgio Tomasi
Abstract Two different algorithms for time alignment as a preprocessing step in linear factor models are studied. Correlation optimized warping and dynamic time warping are both presented in the literature as methods that can eliminate shift-related artifacts from measurements by correcting a sample vector towards a reference. In this study both the theoretical properties and the practical implications of using signal warping as preprocessing for chromatographic data are investigated. The connection between the two algorithms is also discussed. The findings are illustrated by means of a case study of principal component analysis on a real data set with manifest retention time artifacts: extracts from coffee samples stored under different packaging conditions for varying storage times. We concluded that, for the data presented here, dynamic time warping with rigid slope constraints and correlation optimized warping are superior to unconstrained dynamic time warping; both considerably simplify interpretation of the factor model results. Unconstrained dynamic time warping was found to be too flexible for this chromatographic data set, resulting in an overcompensation of the observed shifts and suggesting the unsuitability of this preprocessing method for this type of signal. Copyright © 2004 John Wiley & Sons, Ltd. [source]
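The unconstrained variant discussed above is the classic dynamic-programming recursion, sketched here on toy chromatogram-like vectors (the signals are assumptions, not the coffee data of the study):

```python
# Classic unconstrained dynamic time warping: DP over all monotone alignments.

def dtw_distance(a, b):
    """Cost of the best monotone alignment of signal a onto signal b."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match
    return D[n][m]

reference = [0, 0, 1, 5, 1, 0, 0]        # a peak at position 3
shifted   = [0, 0, 0, 1, 5, 1, 0]        # the same peak, retention-shifted
d_warped = dtw_distance(reference, shifted)
d_point = sum(abs(x - y) for x, y in zip(reference, shifted))
```

The warped distance is zero while the pointwise distance is large, which is exactly why warping helps factor models; the study's caveat is that this same flexibility, left unconstrained, can also warp away genuine chemical differences, hence the slope constraints.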


Computation of Likelihood Ratios in Fingerprint Identification for Configurations of Any Number of Minutiae

JOURNAL OF FORENSIC SCIENCES, Issue 1 2007
Cédric Neumann M.Sc.
ABSTRACT: Recent court challenges have highlighted the need for statistical research on fingerprint identification. This paper proposes a model for computing likelihood ratios (LRs) to assess the evidential value of comparisons with any number of minutiae. The model considers minutiae type, direction, and relative spatial relationships. It expands on previous work on three minutiae by adopting spatial modeling using radial triangulation and a probabilistic distortion model for assessing the numerator of the LR. The model has been tested on a sample of 686 ulnar loops and 204 arches. Feature vectors used for statistical analysis have been obtained following a preprocessing step based on Gabor filtering and image processing to extract minutiae data. The metric used to assess similarity between two feature vectors is based on a Euclidean distance measure. Tippett plots and rates of misleading evidence have been used as performance indicators of the model. The model has shown encouraging behavior, with low rates of misleading evidence and an LR power that increases significantly with the number of minutiae. The LRs that it provides are highly indicative of identity of source in a significant proportion of cases, even when considering configurations with few minutiae. In contrast with previous research, the model, in addition to minutia type and direction, incorporates spatial relationships of minutiae without introducing probabilistic independence assumptions. The model also accounts for finger distortion. [source]
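The score-based LR idea, evaluating a distance score under within-source and between-source distributions, can be sketched as follows. The Gaussian score models, their parameters, and the toy feature vectors are assumptions for illustration; the paper fits its distributions from real minutiae data.

```python
# Sketch of a score-based likelihood ratio: the Euclidean distance between
# two feature vectors is evaluated under assumed within-source and
# between-source score distributions.
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def gaussian_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def likelihood_ratio(mark, print_, within=(1.0, 0.5), between=(6.0, 2.0)):
    """LR = p(score | same source) / p(score | different sources)."""
    s = euclidean(mark, print_)
    return gaussian_pdf(s, *within) / gaussian_pdf(s, *between)

close_pair = likelihood_ratio([1.0, 2.0, 0.5], [1.2, 2.1, 0.4])  # similar features
far_pair = likelihood_ratio([1.0, 2.0, 0.5], [5.0, 6.0, 3.0])    # dissimilar ones
```

An LR above 1 supports the same-source proposition and below 1 the different-source proposition; Tippett plots summarise how such LRs distribute over many known same-source and different-source comparisons.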


Generalizable mass spectrometry mining used to identify disease state biomarkers from blood serum

PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 9 2003
Padraic Neville
Abstract We bring a "spectrum" of classical data mining and statistical analysis methods to bear on discrimination of two groups of spectra from 24 diseased and 17 normal patients. Our primary goal is to accurately estimate the generalizability of this small dataset. After an aggressive preprocessing step that reduces consideration to only 55 peaks, we conduct over 35 out-of-sample cross-validation simulations of logistic regression, binary decision trees, and linear discriminant analysis. Misclassification rates grow worse as the size of the holdout sample increases, with many exceeding 30 percent. The ability to generalize is clearly tempered by the statistical, instrumentation, and biophysical characteristics of the study. [source]
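The study's central exercise, repeated out-of-sample evaluation on a small sample, can be sketched with a trivial classifier on synthetic data. The nearest-centroid classifier, the Gaussian "peak intensity" features, and the holdout fraction are all assumptions; the point is only the repeated train/test split machinery.

```python
# Sketch of repeated holdout cross-validation on a small two-class problem.
import random

def nearest_centroid_error(data, labels, holdout_frac, n_reps=50, rng=None):
    """Mean misclassification rate over repeated random holdout splits."""
    rng = rng or random.Random(0)
    n, p = len(data), len(data[0])
    errs = []
    for _ in range(n_reps):
        idx = list(range(n))
        rng.shuffle(idx)
        k = max(1, int(holdout_frac * n))
        test, train = idx[:k], idx[k:]
        cents = {}
        for c in set(labels):
            pts = [data[i] for i in train if labels[i] == c]
            cents[c] = [sum(q[d] for q in pts) / len(pts) for d in range(p)]
        wrong = 0
        for i in test:
            pred = min(cents, key=lambda c: sum(
                (data[i][d] - cents[c][d]) ** 2 for d in range(p)))
            wrong += pred != labels[i]
        errs.append(wrong / k)
    return sum(errs) / n_reps

rng = random.Random(1)
data = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(20)] + \
       [[rng.gauss(3, 1), rng.gauss(3, 1)] for _ in range(20)]
labels = [0] * 20 + [1] * 20
err_small = nearest_centroid_error(data, labels, holdout_frac=0.1)
```

With 41 patients, increasing `holdout_frac` shrinks the training set sharply, which is the mechanism behind the worsening misclassification rates the study reports.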


Uniform random sampling of planar graphs in linear time

RANDOM STRUCTURES AND ALGORITHMS, Issue 4 2009
Éric Fusy
Abstract This article introduces new algorithms for the uniform random generation of labelled planar graphs. The approach relies on Boltzmann samplers, as recently developed by Duchon, Flajolet, Louchard, and Schaeffer. It combines the Boltzmann framework, a suitable use of rejection, a new combinatorial bijection found by Fusy, Poulalhon, and Schaeffer, as well as a precise analytic description of the generating functions counting planar graphs, which was recently obtained by Giménez and Noy. This gives rise to an extremely efficient algorithm for the random generation of planar graphs. There is a preprocessing step of some fixed small cost, and the expected time complexity of generation is quadratic for exact-size uniform sampling and linear for uniform approximate-size sampling. This greatly improves on the best previously known time complexity for exact-size uniform sampling of planar graphs with n vertices, which was a little over O(n^7). © 2009 Wiley Periodicals, Inc. Random Struct. Alg., 2009 [source]
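The Boltzmann-plus-rejection recipe is easiest to see on a much simpler class than planar graphs: plane binary trees, whose generating function satisfies B(x) = 1 + x·B(x)². The parameter x, the size window, and the size cap below are assumptions chosen for the sketch.

```python
# Sketch of a Boltzmann sampler for plane binary trees with rejection to an
# approximate size window, echoing the article's use of rejection.
import math
import random

def boltzmann_size(x, rng, cap=10000):
    """Sample only the SIZE (number of internal nodes) of a Boltzmann tree.
    Each pending subtree is a leaf with probability 1/B(x), else a binary node."""
    B = (1 - math.sqrt(1 - 4 * x)) / (2 * x)   # value of the generating function
    p_leaf = 1 / B
    size, pending = 0, 1                        # pending subtrees to expand
    while pending:
        pending -= 1
        if rng.random() >= p_leaf:              # internal node: two children
            size += 1
            pending += 2
            if size > cap:
                return None                     # runaway sample, reject it
    return size

def sample_in_window(x, lo, hi, rng):
    """Rejection loop: resample until the size lands in [lo, hi]."""
    while True:
        s = boltzmann_size(x, rng)
        if s is not None and lo <= s <= hi:
            return s

rng = random.Random(7)
sizes = [sample_in_window(0.24, 5, 50, rng) for _ in range(200)]
```

Conditioned on its size, a Boltzmann sample is uniform over objects of that size, so the rejection loop yields approximate-size uniform samples, the regime for which the article achieves linear expected time on planar graphs.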


Exploring QSAR for Substituted 2-Sulfonyl-Phenyl-Indol Derivatives as Potent and Selective COX-2 Inhibitors Using Different Chemometrics Tools

CHEMICAL BIOLOGY & DRUG DESIGN, Issue 6 2008
Mehdi Khoshneviszadeh
Selective inhibition of cyclooxygenase-2 (COX-2) is an important strategy in the design of potent anti-inflammatory compounds with significantly reduced side effects. The present quantitative structure–activity relationship (QSAR) study attempts to explore the structural and physicochemical requirements of 2-sulfonyl-phenyl-indol derivatives (n = 30) for COX-2 inhibitory activity using chemical, topological, geometrical, and quantum descriptors. Statistical techniques such as stepwise regression, multiple linear regression with factor analysis as the data preprocessing step (FA-MLR), principal component regression analysis, and genetic algorithm partial least squares analysis were applied to derive the QSAR models. The generated equations were statistically validated using cross-validation and an external test set. The quality of the equations obtained from stepwise multiple linear regression, FA-MLR, principal component regression analysis, and PLS was in the acceptable statistical range. The best multiple linear regression equation, obtained with factor analysis as the preprocessing step (FA-MLR), could predict 77.5% of the variance of the COX-2 inhibitory activity, whereas that derived from genetic algorithm partial least squares could predict 84.2% of the variance. The results of the QSAR models suggest the importance of lipophilicity, electronegativity, molecular area, and steric parameters for COX-2 inhibitory activity. [source]
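The factor-analysis-before-regression idea can be sketched with a minimal principal component regression: descriptors are centred, the leading component is extracted, and activity is regressed on the component score. The toy descriptor matrix and activities are assumptions standing in for the study's 30 compounds and their descriptor pool.

```python
# Minimal principal component regression sketch: power iteration for the
# leading component of the centred descriptors, then 1-D least squares.
import math

def first_pc(X, n_iter=100):
    """Leading eigenvector of X^T X by power iteration (X already centred)."""
    p = len(X[0])
    v = [1.0] * p
    for _ in range(n_iter):
        Xv = [sum(row[j] * v[j] for j in range(p)) for row in X]
        w = [sum(X[i][j] * Xv[i] for i in range(len(X))) for j in range(p)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

def pcr_fit(X, y):
    """Fit activity on the first principal component score; return a predictor."""
    p = len(X[0])
    means = [sum(row[j] for row in X) / len(X) for j in range(p)]
    Xc = [[row[j] - means[j] for j in range(p)] for row in X]
    v = first_pc(Xc)
    t = [sum(row[j] * v[j] for j in range(p)) for row in Xc]   # component scores
    y_mean = sum(y) / len(y)
    beta = sum(ti * (yi - y_mean) for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)
    return lambda row: y_mean + beta * sum((row[j] - means[j]) * v[j]
                                           for j in range(p))

# toy data: activity driven by a latent factor shared by both descriptors
X = [[1.0, 2.1], [2.0, 3.9], [3.0, 6.2], [4.0, 8.1], [5.0, 9.8]]
y = [1.1, 2.0, 3.1, 3.9, 5.0]
predict = pcr_fit(X, y)
```

Regressing on factor scores rather than raw, collinear descriptors is what FA-MLR and PCR share; this sketch uses one component, whereas the study retains several and validates them by cross-validation.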


Optimal job splitting on a multi-slot machine with applications in the printing industry

NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 3 2010
Ali Ekici
Abstract In this article, we define a scheduling/packing problem called the Job Splitting Problem, motivated by practices in the printing industry. There are n types of items to be produced on an m-slot machine. A particular assignment of the types to the slots is called a "run" configuration and requires a setup cost. Once a run begins, production continues according to that configuration, and the "length" of the run represents the quantity produced in each slot during that run. For each unit of production in excess of demand, there is a waste cost. Our goal is to construct a production plan, i.e., a set of runs, such that the total setup and waste cost is minimized. We show that the problem is strongly NP-hard and propose two integer programming formulations, several preprocessing steps, and two heuristics. We also provide a worst-case bound for one of the heuristics. Extensive tests on real-world and randomly generated instances show that the heuristics are both fast and effective, finding near-optimal solutions. © 2010 Wiley Periodicals, Inc. Naval Research Logistics, 2010 [source]
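The trade-off between setup and waste costs can be sketched with a simple greedy plan builder: each run assigns the m slots to item types roughly in proportion to remaining demand and lasts until some type's demand is met. The greedy rule and the cost parameters are illustrative assumptions, not the article's formulations or heuristics.

```python
# Greedy sketch of the Job Splitting trade-off: each run assigns slots in
# proportion to remaining demand and runs until some type is satisfied.
import math

def greedy_runs(demand, m, setup_cost=100.0, waste_cost=1.0):
    """Build runs until all demand is met; return (total cost, plan)."""
    demand = dict(demand)
    total_cost, plan = 0.0, []
    while any(d > 0 for d in demand.values()):
        active = {t: d for t, d in demand.items() if d > 0}
        total = sum(active.values())
        # proportional slot assignment; every active type gets at least 1 slot
        slots = {t: max(1, round(m * d / total)) for t, d in active.items()}
        while sum(slots.values()) > m:            # trim any over-allocation
            big = max(slots, key=slots.get)
            slots[big] -= 1
        # run until the quickest-to-finish type has its demand covered
        length = min(math.ceil(active[t] / slots[t]) for t in slots)
        plan.append((slots, length))
        total_cost += setup_cost
        for t in slots:
            produced = slots[t] * length
            total_cost += waste_cost * max(0, produced - demand[t])
            demand[t] = max(0, demand[t] - produced)
    return total_cost, plan

cost, plan = greedy_runs({"A": 1000, "B": 500, "C": 100}, m=8)
```

On this instance the greedy plan uses two runs with no waste, so the cost is two setups; in general a plan with fewer runs saves setups but over-produces low-demand types, which is the tension the article's integer programs resolve optimally.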