Feature Extraction

Selected Abstracts


Multi-scale Feature Extraction on Point-Sampled Surfaces

COMPUTER GRAPHICS FORUM, Issue 3 2003
Mark Pauly
We present a new technique for extracting line-type features on point-sampled geometry. Given an unstructured point cloud as input, our method first applies principal component analysis on local neighborhoods to classify points according to the likelihood that they belong to a feature. Using hysteresis thresholding, we then compute a minimum spanning graph as an initial approximation of the feature lines. To smooth out the features while maintaining a close connection to the underlying surface, we use an adaptation of active contour models. Central to our method is a multi-scale classification operator that allows feature analysis at multiple scales, using the size of the local neighborhoods as a discrete scale parameter. This significantly improves the reliability of the detection phase and makes our method more robust in the presence of noise. To illustrate the usefulness of our method, we have implemented a non-photorealistic point renderer to visualize point-sampled surfaces as line drawings of their extracted feature curves. [source]
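
The core classification step lends itself to a compact sketch. The following Python uses the common covariance-eigenvalue "surface variation" measure over k-nearest neighborhoods of increasing size; the measure and the scale set are assumptions standing in for the paper's exact operator.

```python
# Sketch of multi-scale PCA classification on a point cloud (assumptions:
# the surface-variation ratio and the neighborhood sizes are illustrative).
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k):
    """Smallest-eigenvalue ratio of the local covariance at each point."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    var = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        nbhd = points[nbrs] - points[nbrs].mean(axis=0)
        evals = np.linalg.eigvalsh(nbhd.T @ nbhd)  # ascending order
        var[i] = evals[0] / evals.sum()            # ~0 on flat regions
    return var

def multiscale_feature_weight(points, scales=(8, 16, 32)):
    # Average the variation over several neighborhood sizes (the discrete
    # scale parameter), damping the influence of noise at any single scale.
    return np.mean([surface_variation(points, k) for k in scales], axis=0)
```

Points whose averaged weight passes the hysteresis thresholds would then seed the minimum spanning graph of feature lines.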


Feature Extraction for Traffic Incident Detection Using Wavelet Transform and Linear Discriminant Analysis

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 4 2000
A. Samant
To eliminate false alarms, an effective traffic incident detection algorithm must be able to extract incident-related features from the traffic patterns. A robust feature-extraction algorithm also helps reduce the dimension of the input space for a neural network model without any significant loss of related traffic information, resulting in a substantial reduction in the network size, the effect of random traffic fluctuations, the number of required training samples, and the computational resources required to train the neural network. This article presents an effective traffic feature-extraction model using the discrete wavelet transform (DWT) and linear discriminant analysis (LDA). The DWT is first applied to raw traffic data, and the finest-resolution coefficients representing the random fluctuations of traffic are discarded. Next, LDA is applied to the filtered signal for further feature extraction and to reduce the dimensionality of the problem. The results of LDA are used as input to a neural network model for traffic incident detection. [source]
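
A minimal sketch of the two-stage extraction described above, assuming PyWavelets and scikit-learn are available; the wavelet family, decomposition depth, and equal-length signal windows are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def dwt_denoised_features(signals, wavelet="db4", level=3):
    """DWT each traffic window and drop the finest-resolution detail."""
    feats = []
    for s in signals:  # equal-length 1-D arrays of raw traffic data
        coeffs = pywt.wavedec(s, wavelet, level=level)
        coeffs[-1] = np.zeros_like(coeffs[-1])  # random-fluctuation band
        feats.append(np.concatenate(coeffs))
    return np.asarray(feats)

# y: incident / incident-free labels; the LDA output feeds the neural network.
# lda = LinearDiscriminantAnalysis(n_components=1)
# X_reduced = lda.fit_transform(dwt_denoised_features(X_raw), y)
```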


Feature extraction by autoregressive spectral analysis using maximum likelihood estimation: internal carotid arterial Doppler signals

EXPERT SYSTEMS, Issue 4 2008
Elif Derya Übeyli
Abstract: In this study, Doppler signals recorded from the internal carotid artery (ICA) of 97 subjects were processed on a personal computer using classical and model-based methods. The fast Fourier transform (classical method) and autoregressive (model-based method) methods were selected for processing the ICA Doppler signals. The parameters of the autoregressive method were found by using maximum likelihood estimation. The Doppler power spectra of the ICA Doppler signals were obtained by using these spectral analysis techniques. The variations in the shape of the Doppler spectra as a function of time were presented in the form of sonograms in order to obtain medical information. These Doppler spectra and sonograms were then used to compare the applied methods in terms of their frequency resolution and their effectiveness in determining stenosis and occlusion in the ICA. Reliable information on haemodynamic alterations in the ICA can be obtained by evaluation of these sonograms. [source]
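
The AR spectral step can be sketched as follows; note that the Yule-Walker equations are used here as a simpler stand-in for the maximum likelihood estimation used in the paper, and the model order is an assumption.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_power_spectrum(x, p=10, n_freq=512, fs=1.0):
    """AR(p) power spectral density of one Doppler signal segment."""
    x = np.asarray(x, float) - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)  # autocorrelation
    a = solve_toeplitz(r[:p], r[1:p + 1])        # AR coefficients
    sigma2 = r[0] - a @ r[1:p + 1]               # driving-noise variance
    f = np.linspace(0.0, fs / 2, n_freq)
    z = np.exp(-2j * np.pi * np.outer(f / fs, np.arange(1, p + 1)))
    return f, sigma2 / np.abs(1.0 - z @ a) ** 2

# Stacking the spectra of successive windows over time yields the sonogram.
```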


Interactive Visualization with Programmable Graphics Hardware

COMPUTER GRAPHICS FORUM, Issue 3 2002
Thomas Ertl
One of the main scientific goals of visualization is the development of algorithms and appropriate data models which facilitate interactive visual analysis and direct manipulation of the increasingly large data sets which result from simulations running on massively parallel computer systems, from measurements employing fast high-resolution sensors, or from large databases and hierarchical information spaces. This task can only be achieved with the optimization of all stages of the visualization pipeline: filtering, compression, and feature extraction of the raw data sets; adaptive visualization mappings which allow the users to choose between speed and accuracy; and exploiting new graphics hardware features for fast and high-quality rendering. The recent introduction of advanced programmability in widely available graphics hardware has already led to impressive progress in the area of volume visualization. However, besides accelerating the final rendering, flexible graphics hardware is also increasingly being used for the mapping and filtering stages of the visualization pipeline, thus giving rise to new levels of interactivity in visualization applications. The talk will present recent results of applying programmable graphics hardware in various visualization algorithms covering volume data, flow data, terrains, NPR rendering, and distributed and remote applications. [source]


Nondestructive Evaluation of Elastic Properties of Concrete Using Simulation of Surface Waves

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 8 2008
Jae Hong Kim
In this study, to extract information from a surface waveform beyond the simple wave velocity, artificial intelligence engines are employed to estimate simulation parameters, that is, the properties of elastic materials. The developed artificial neural networks are trained with a numerical database whose stability has been secured. In the process, an appropriate shape of the force-time function for an impact load is assumed so as to avoid the Gibbs phenomenon, and the proposed principal wavelet-component analysis accomplishes feature extraction on the wavelet-transformed signal. The results of the estimation are validated with experiments focused on concrete materials. [source]
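
The abstract names "principal wavelet-component analysis" only at a high level; one plausible reading, used purely as an illustration, is a wavelet decomposition followed by PCA, with the resulting components feeding a neural-network estimator. All names and settings below are assumptions.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

def wavelet_pca_features(waveforms, wavelet="db4", level=4, n_components=10):
    """PCA over wavelet coefficients of simulated surface waveforms."""
    coeffs = np.asarray([np.concatenate(pywt.wavedec(w, wavelet, level=level))
                         for w in waveforms])
    pca = PCA(n_components=n_components)
    return pca.fit_transform(coeffs), pca

# Hypothetical training call on the simulated database:
# X, pca = wavelet_pca_features(simulated_waveforms)
# est = MLPRegressor(hidden_layer_sizes=(32,)).fit(X, elastic_properties)
```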


Semi-Automatic 3D Reconstruction of Urban Areas Using Epipolar Geometry and Template Matching

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 7 2006
José Miguel Sales Dias
The main challenge is to compute the relevant information (building height and volume, roof description, and texture) algorithmically, because it is very time consuming and thus expensive to produce it manually for large urban areas. The algorithm requires some initial calibration input and is able to compute the above-mentioned building characteristics from the stereo pair, given the availability of 2D CAD data and the digital elevation model of the same area, with no knowledge of the camera pose or its intrinsic parameters. To achieve this, we have used epipolar geometry, homography computation, and automatic feature extraction, and we have solved the feature correspondence problem in the stereo pair by using template matching. [source]
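
A generic OpenCV illustration of two of the named ingredients, template matching for correspondences and homography estimation from the matched pairs; this is a sketch under assumed patch sizes and thresholds, not the authors' exact pipeline.

```python
import cv2
import numpy as np

def match_in_right_image(right_img, template):
    """Locate a left-image patch in the right image (8-bit grayscale)."""
    scores = cv2.matchTemplate(right_img, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(scores)
    return best_loc, best_score  # top-left corner and correlation score

# With at least four matched point pairs (left_pts, right_pts):
# H, mask = cv2.findHomography(np.float32(left_pts), np.float32(right_pts),
#                              cv2.RANSAC, 5.0)
```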


An Adaptive Conjugate Gradient Neural Network-Wavelet Model for Traffic Incident Detection

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 4 2000
H. Adeli
Artificial neural networks are known to be effective in solving problems involving pattern recognition and classification. The traffic incident-detection problem can be viewed as recognizing incident patterns from incident-free patterns. A neural network classifier has to be trained first using incident and incident-free traffic data. The dimensionality of the training input data is high, and the embedded incident characteristics are not easily detectable. In this article we present a computational model for automatic traffic incident detection using the discrete wavelet transform, linear discriminant analysis, and neural networks. The wavelet transform and linear discriminant analysis are used for feature extraction, denoising, and effective preprocessing of data before an adaptive neural network model is used to detect traffic incidents. Simulated as well as actual traffic data are used to test the model. For incidents with a duration of more than 5 minutes, the incident-detection model yields a detection rate of nearly 100 percent and a false-alarm rate of about 1 percent for two- or three-lane freeways. [source]
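
A minimal sketch of the decision stage, assuming features already produced by the wavelet/LDA preprocessing described above; scikit-learn's MLP stands in for the paper's adaptive conjugate gradient network, and the layer size is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def detection_and_false_alarm_rates(y_true, y_pred):
    """Detection rate on incident windows, false alarms on clear windows."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    detection = (y_pred[y_true == 1] == 1).mean()
    false_alarm = (y_pred[y_true == 0] == 1).mean()
    return detection, false_alarm

# clf = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs")
# clf.fit(X_feat_train, y_train)          # X_feat_*: wavelet/LDA features
# rates = detection_and_false_alarm_rates(y_test, clf.predict(X_feat_test))
```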


A method of new filter design based on the co-occurrence histogram

ELECTRICAL ENGINEERING IN JAPAN, Issue 1 2009
Takayuki Fujiwara
Abstract We have proposed that the co-occurrence frequency image (CFI), based on the co-occurrence frequency histogram of the gray values of an image, can be used in a new scheme for image feature extraction. This paper proposes new enhancement filters to achieve sharpening and smoothing of images. These filters are very similar in result but quite different in process from those which have been used previously. Thus, we show the possibility of a new paradigm for basic image enhancement filters making use of the CFI. © 2008 Wiley Periodicals, Inc. Electr Eng Jpn, 166(1): 36-42, 2009; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/eej.20699 [source]
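
The co-occurrence histogram underlying the CFI can be computed directly; this sketch assumes an integer-valued grayscale image and a single displacement vector, both illustrative choices.

```python
import numpy as np

def cooccurrence_histogram(img, dx=1, dy=0, levels=256):
    """Joint histogram of gray values at offset (dx, dy).

    img must be a 2-D array of integers in [0, levels).
    """
    h, w = img.shape
    a = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = img[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    hist = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(hist, (a.ravel(), b.ravel()), 1)  # count gray-value pairs
    return hist
```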


Analysis of electrocardiographic changes in partial epileptic patients by combining eigenvector methods and support vector machines

EXPERT SYSTEMS, Issue 3 2009
Elif Derya Übeyli
Abstract: In the present study, the diagnostic accuracy of support vector machines (SVMs) on electrocardiogram (ECG) signals is evaluated. Two types of ECG beats (normal and partial epilepsy) were obtained from the PhysioBank database. Decision making was performed in two stages: feature extraction by eigenvector methods and classification using the SVM trained on the extracted features. The present research demonstrates that the power levels of the power spectral densities obtained by eigenvector methods are features which represent the ECG signals well, and that SVMs trained on these features achieve high classification accuracies. [source]
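
A hedged sketch of the two stages: a MUSIC-style eigenvector pseudospectrum supplies the power-level features, and an SVM trained on them separates the beat classes. Embedding size, subspace dimension, and SVM settings are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def music_power_levels(x, m=32, n_sources=8, n_freq=128):
    """MUSIC-style pseudospectrum power levels (dB) for one ECG beat."""
    snaps = np.lib.stride_tricks.sliding_window_view(np.asarray(x, float), m)
    R = snaps.T @ snaps / len(snaps)      # sample correlation matrix
    _, vecs = np.linalg.eigh(R)           # eigenvalues in ascending order
    noise = vecs[:, :m - n_sources]       # noise-subspace eigenvectors
    freqs = np.linspace(0.0, 0.5, n_freq)
    steer = np.exp(-2j * np.pi * np.outer(np.arange(m), freqs))
    denom = np.sum(np.abs(noise.conj().T @ steer) ** 2, axis=0)
    return 10.0 * np.log10(1.0 / denom)

# X = np.array([music_power_levels(b) for b in beats])  # beats: 1-D arrays
# clf = SVC(kernel="rbf").fit(X, labels)  # normal vs partial-epilepsy beats
```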


Probabilistic neural networks combined with wavelet coefficients for analysis of electroencephalogram signals

EXPERT SYSTEMS, Issue 2 2009
Elif Derya Übeyli
Abstract: In this paper, the probabilistic neural network is presented for classification of electroencephalogram (EEG) signals. Decision making is performed in two stages: feature extraction by wavelet transform and classification using the classifiers trained on the extracted features. The purpose is to determine an optimum classification scheme for this problem and also to infer clues about the extracted features. The present research demonstrates that the wavelet coefficients obtained by the wavelet transform are features which represent the EEG signals well. The conclusions indicate that the probabilistic neural network trained on the wavelet coefficients achieves high classification accuracies (the total classification accuracy is 97.63%). [source]
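
In its simplest form, the probabilistic neural network is a Parzen-window classifier over the wavelet-coefficient vectors: each training vector becomes a Gaussian kernel centre, and the class with the largest summed response wins. The smoothing parameter below is an assumption.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=1.0):
    """Gaussian Parzen-window (PNN-style) classification."""
    classes = np.unique(y_train)
    scores = np.empty((len(X_test), len(classes)))
    for j, c in enumerate(classes):
        centres = X_train[y_train == c]
        d2 = ((X_test[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        scores[:, j] = np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1)
    return classes[np.argmax(scores, axis=1)]

# X_*: wavelet-coefficient vectors (e.g. concatenated pywt.wavedec output
# per EEG segment); y_train: class labels.
```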


Feature-space clustering for fMRI meta-analysis

HUMAN BRAIN MAPPING, Issue 3 2001
Cyril Goutte
Abstract Clustering functional magnetic resonance imaging (fMRI) time series has emerged in recent years as a possible alternative to parametric modeling approaches. Most of the work so far has been concerned with clustering raw time series. In this contribution we investigate the applicability of a clustering method applied to features extracted from the data. This approach is extremely versatile and encompasses previously published results [Goutte et al., 1999] as special cases. A typical application is in data reduction: as the increase in temporal resolution of fMRI experiments routinely yields fMRI sequences containing several hundred images, it is sometimes necessary to invoke feature extraction to reduce the dimensionality of the data space. A second interesting application is in the meta-analysis of fMRI experiments, where features are obtained from a possibly large number of single-voxel analyses. In particular, this allows checking the differences and agreements between different methods of analysis. Both approaches are illustrated on an fMRI data set involving visual stimulation, and we show that the feature-space clustering approach yields nontrivial results and, in particular, shows interesting differences from individual voxel analyses performed with traditional methods. Hum. Brain Mapping 13:165-183, 2001. © 2001 Wiley-Liss, Inc. [source]
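
A hedged sketch of feature-space clustering: each voxel time series is reduced to a small feature vector (here the paradigm-correlation strength and its best lag, simple stand-ins for the paper's features), and the feature vectors are then clustered.

```python
import numpy as np
from sklearn.cluster import KMeans

def voxel_features(ts, paradigm, max_lag=5):
    """ts: (n_voxels, n_scans) array; paradigm: (n_scans,) stimulus signal."""
    feats = []
    for v in ts:
        v = (v - v.mean()) / (v.std() + 1e-12)
        cc = [np.corrcoef(np.roll(paradigm, lag), v)[0, 1]
              for lag in range(max_lag + 1)]
        feats.append([np.max(cc), float(np.argmax(cc))])  # strength, delay
    return np.asarray(feats)

# labels = KMeans(n_clusters=4, n_init=10).fit_predict(voxel_features(ts, p))
```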


Finger vein recognition using minutia-based alignment and local binary pattern-based feature extraction

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 3 2009
Eui Chul Lee
Abstract With recent increases in security requirements, biometrics such as fingerprints, faces, and irises have been widely used in many recognition applications, including door access control, personal authentication for computers, Internet banking, automatic teller machines, and border-crossing controls. Finger vein recognition uses the unique patterns of finger veins to identify individuals at a high level of accuracy. This article proposes a new finger vein recognition method using minutia-based alignment and local binary pattern (LBP)-based feature extraction. Our study offers three novel contributions compared to previous works. First, we use minutia points, such as bifurcation and ending points of the finger vein region, for image alignment. Second, instead of using the whole finger vein region, we use several extracted minutia points and a simple affine transform for alignment, which can be performed at high computational speed. Third, after aligning the finger vein image based on minutia points, we extract a unique finger vein code using an LBP, which significantly reduces the false rejection error and thus the equal error rate (EER). Our resulting EER was 0.081% with a total processing time of 118.6 ms. © 2009 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 19, 179-186, 2009 [source]
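
A minimal 8-neighbour LBP histogram illustrating the code-extraction step (the paper's minutia-based alignment and exact LBP variant are not reproduced here).

```python
import numpy as np

def lbp_histogram(img):
    """256-bin local binary pattern histogram of a 2-D grayscale array."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nbr = img[1 + dy:img.shape[0] - 1 + dy,
                  1 + dx:img.shape[1] - 1 + dx]
        code |= (nbr >= c).astype(np.uint8) << bit  # one bit per neighbour
    return np.bincount(code.ravel(), minlength=256)  # the vein-code features
```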


An efficient approach to texture-based image retrieval

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 5 2007
Mahmoud R. Hejazi
Abstract In this article, we present an efficient approach for image retrieval based on the textural information of an image, such as orientation, directionality, and regularity. For this purpose, we apply the nonlinear modified discrete Radon transform to estimate these visual contents. We then utilize texture orientation to construct the rotated Gabor transform for extraction of the rotation-invariant texture feature. The rotation-invariant texture feature, directionality, and regularity are the main features used in the proposed approach for similarity assessment. Experimental results on a large number of texture and aerial images from standard databases show that the proposed schemes for feature extraction and image retrieval significantly outperform previous works, including methods based on the MPEG-7 texture descriptors. © 2008 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 17, 295-302, 2007 [source]
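
One ingredient above, directional Gabor responses made approximately rotation-invariant by shifting the orientation profile to its dominant peak, can be sketched as follows; kernel shape, frequency, and the shift heuristic are assumptions, not the paper's rotated Gabor transform.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma=3.0, size=21):
    """Complex Gabor kernel at the given frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rot = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return env * np.exp(2j * np.pi * freq * rot)

def orientation_profile(img, freq=0.2, n_theta=8):
    """Mean Gabor response energy per orientation, dominant peak first."""
    e = np.array([np.abs(fftconvolve(img, gabor_kernel(freq, t),
                                     mode="same")).mean()
                  for t in np.linspace(0, np.pi, n_theta, endpoint=False)])
    return np.roll(e, -int(np.argmax(e)))  # crude rotation invariance
```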


Using dendronal signatures for feature extraction and retrieval

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 4 2000
Luojian Chen
A dendrone is a hierarchical thresholding structure that can be automatically generated from a complex image. The dendrone structure captures the connectedness of objects and subobjects during successive brightness thresholding. Based on connectedness and changes in intensity contours, dendronic representations of objects in images capture the coarse-to-fine unfolding of finer and finer detail, creating a unique signature for target objects that is invariant to lighting, scale, and placement of the object within the image. Subdendrones within the hierarchy are recognizable as objects within the picture. Complex composite images can be autonomously analyzed to determine if they contain the unique dendronic signatures of particular target objects of interest. In this paper, we describe the initial design of the dendronic image characterization environment (DICE) for the generation of dendronic signatures from complex multiband remote imagery. By comparing subdendrones within an image to dendronic signatures of target objects of interest, DICE can be used to match/retrieve target features from a library of composite images. The DICE framework can organize and support a number of alternative object recognition and comparison techniques, depending on the application domain. © 2001 John Wiley & Sons, Inc. Int J Imaging Syst Technol 11, 243-253, 2000 [source]
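
The successive-thresholding idea behind a dendrone can be sketched by tracking how connected bright components appear and split as the brightness threshold rises; a full dendrone also records the parent-child relations between components, which this scalar summary (an assumption for brevity) omits.

```python
import numpy as np
from scipy import ndimage

def threshold_signature(img, n_steps=32):
    """Connected-component counts over successive brightness thresholds."""
    signature = []
    for t in np.linspace(img.min(), img.max(), n_steps):
        _, n_components = ndimage.label(img >= t)
        signature.append((float(t), n_components))  # components split as t rises
    return signature
```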


Range error detection caused by occlusion in non-coaxial LADARs for scene interpretation

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 10 2005
Bingbing Liu
When processing laser detection and ranging (LADAR) sensor data for scene interpretation, for example, for the purposes of feature extraction and/or data association in mobile robotics, most previous work models such devices as producing range data which follows a normal distribution. In this paper, it is demonstrated that commonly used LADARs suffer from incorrect range readings at changes in surface reflectivity and/or range discontinuities, which can have a much more detrimental effect on such algorithms than random noise. Most LADARs fall into two categories: coaxial and separated transmitter and receiver configurations. The latter offer the advantage that optical crosstalk is eliminated, since it can be guaranteed that all of the transmitted light leaves the LADAR and is not in any way partially reflected within it due to the beam-splitting techniques necessary in coaxial LADARs. However, they can introduce a significant disparity effect, as the reflected laser energy from the target can be partially occluded from the receiver. As well as demonstrating that false range values can result due to this occlusion effect from scanned LADARs, the main contribution of this paper is that the occurrence of these values can be reliably predicted by monitoring the received signal strength and a quantity we refer to as the "transceiver separation angle" of the rotating mirror. This paper will demonstrate that a correct understanding of such systematic errors is essential for the correct further processing of the data. A useful design criterion for the optical separation of the receiver and transmitter is also derived for noncoaxial LADARs, based on the minimum detectable signal amplitude of a LADAR and environmental edge constraints. By investigating the effects of various sensor and environmental parameters on occlusion, some advice is given on how to make use of noncoaxial LADARs correctly so as to avoid range errors when scanning environmental discontinuities. © 2005 Wiley Periodicals, Inc. [source]


A classifying procedure for signalling turning points

JOURNAL OF FORECASTING, Issue 3 2004
Lasse Koskinen
Abstract A Hidden Markov Model (HMM) is used to classify an out-of-sample observation vector into either of two regimes. This leads to a procedure for making probability forecasts for changes of regimes in a time series, i.e. for turning points. Instead of estimating past turning points using maximum likelihood, the model is estimated with respect to known past regimes. This makes it possible to perform feature extraction and estimation for different forecasting horizons. The inference aspect is emphasized by including a penalty for a wrong decision in the cost function. The method, here called a 'Markov Bayesian Classifier (MBC)', is tested by forecasting turning points in the Swedish and US economies, using leading data. Clear and early turning point signals are obtained, contrasting favourably with earlier HMM studies. Some theoretical arguments for this are given. Copyright © 2004 John Wiley & Sons, Ltd. [source]
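
Because past regimes are known, the model can be fitted by supervised counting instead of maximum likelihood over hidden states; the sketch below assumes a scalar leading indicator, regimes coded 0/1, and Gaussian observation densities.

```python
import numpy as np
from scipy.stats import norm

def fit_mbc(x, regimes):
    """Class-conditional Gaussians and transition matrix from labeled data."""
    params = [(x[regimes == k].mean(), x[regimes == k].std()) for k in (0, 1)]
    A = np.zeros((2, 2))
    for a, b in zip(regimes[:-1], regimes[1:]):
        A[a, b] += 1.0                      # count observed regime switches
    return params, A / A.sum(axis=1, keepdims=True)

def regime_probabilities(x_new, params, A, p0=(0.5, 0.5)):
    """Forward recursion: probability of each regime given new data."""
    p = np.asarray(p0, float)
    for obs in x_new:
        like = np.array([norm.pdf(obs, m, s) for m, s in params])
        p = like * (p @ A)
        p /= p.sum()
    return p  # a turning point is signalled when the likely regime flips
```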


Exploiting statistical properties of wavelet coefficient for face detection and recognition

PROCEEDINGS IN APPLIED MATHEMATICS & MECHANICS, Issue 1 2007
Naseer Al-Jawad
Wavelet transforms (WTs) are widely accepted as an essential tool for image processing and analysis. Image and video compression, image watermarking, content-based image retrieval, face recognition, texture analysis, and image feature extraction are but a few examples. The WT provides an alternative tool for short-time analysis of quasi-stationary signals, such as speech and image signals, in contrast to the traditional short-time Fourier transform. The discrete wavelet transform (DWT) is a special case of the WT that provides a compact representation of a signal in the time and frequency domain. In particular, wavelet transforms are capable of representing smooth patterns as well as anomalies (e.g. edges and sharp corners) in images. We focus here on using the statistical properties of wavelet transforms for facial feature detection, which allows us to extract the facial features/edges of an image easily. A wavelet sub-band segmentation method has been developed and used to clean up the non-significant wavelet coefficients in wavelet sub-band (k) based on the (k-1) sub-band. Moreover, erosion, one of the fundamental operations in morphological image processing, has been used to reduce unwanted edges in certain directions. For face detection, face template profiles have been built for both the face and the eyes at different wavelet sub-band levels to achieve better computational performance; these profiles are used to match the profiles extracted from the wavelet domain of the input image using the dynamic time warping (DTW) technique. The smallest DTW distance identifies the face and eye locations. The performance of face feature distances and ratios has also been tested for face verification purposes. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
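
The matching step is dynamic time warping; a minimal DTW distance between an extracted profile and a stored template looks like this.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]  # smallest distance over templates locates face and eyes
```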


Improved classification of crystallization images using data fusion and multiple classifiers

ACTA CRYSTALLOGRAPHICA SECTION D, Issue 8 2008
Samarasena Buchala
Identifying the conditions that will produce diffraction-quality crystals can require very many crystallization experiments. The use of robots has increased the number of experiments performed in most laboratories, while in structural genomics centres tens of thousands of experiments can be produced every day. Reliable automated evaluation of these experiments is becoming increasingly important. A more robust classification is achieved by combining different methods of feature extraction with the use of multiple classifiers. [source]
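
A hedged sketch of fusing multiple classifiers by soft voting over combined image-feature vectors; the particular classifiers and fusion rule are illustrative assumptions, not the paper's ensemble.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# X: fused feature vectors from several extraction methods; y: image classes.
ensemble = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier()),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",  # average predicted class probabilities across models
)
# ensemble.fit(X_train, y_train); predictions = ensemble.predict(X_test)
```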


Integrated state evaluation for the images of crystallization droplets utilizing linear and nonlinear classifiers

ACTA CRYSTALLOGRAPHICA SECTION D, Issue 9 2006
Kuniaki Kawabata
In a typical crystallization process, researchers evaluate the protein crystallization growth states based on visual impressions and repeatedly assign scores throughout the growth process. Although the development of crystallization robotic systems has generally realised the automation of the setup and storage of crystallization samples, evaluation of crystallization states has not yet been completely automated. The method presented here attempts to categorize individual crystallization droplet images into five classes using multiple classifiers. In particular, linear and nonlinear classifiers are utilized. The algorithm comprises pre-processing, feature extraction from images using texture analysis, and a categorization process using linear discriminant analysis (LDA) and a support vector machine (SVM). The performance of this method has been evaluated by comparing its results with those obtained by a human expert; the concordance rate was 84.4%. [source]


Comparison between Principal Component Analysis and Independent Component Analysis in Electroencephalograms Modelling

BIOMETRICAL JOURNAL, Issue 2 2007
C. Bugli
Abstract Principal Component Analysis (PCA) is a classical technique in statistical data analysis, feature extraction and data reduction, aiming at explaining observed signals as a linear combination of orthogonal principal components. Independent Component Analysis (ICA) is a technique of array processing and data analysis, aiming at recovering unobserved signals or 'sources' from observed mixtures, exploiting only the assumption of mutual independence between the signals. The separation of the sources by ICA has great potential in applications such as the separation of sound signals (like voices mixed in multiple simultaneous recordings), in telecommunication or in the treatment of medical signals. However, ICA is not yet often used by statisticians. In this paper, we shall present ICA in a statistical framework and compare this method with PCA for electroencephalogram (EEG) analysis. We shall see that ICA provides a more useful data representation than PCA, for instance, for the representation of a particular characteristic of the EEG named the event-related potential (ERP). (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
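
The contrast the abstract draws is easy to reproduce on toy data with scikit-learn: PCA decorrelates the mixtures into orthogonal components, while ICA recovers the independent source waveforms. The two-source mixing setup is an assumption for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]  # independent sources
mixed = sources @ rng.normal(size=(2, 2)).T             # observed mixtures

pca_est = PCA(n_components=2).fit_transform(mixed)      # orthogonal components
ica_est = FastICA(n_components=2).fit_transform(mixed)  # ~source waveforms
# ica_est matches the sources up to order, sign, and scale; pca_est does not.
```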


Combining wavelet-based feature extractions with relevance vector machines for stock index forecasting

EXPERT SYSTEMS, Issue 2 2008
Shian-Chang Huang
Abstract: The relevance vector machine (RVM) is a Bayesian version of the support vector machine, which with a sparse model representation has appeared to be a powerful tool for time-series forecasting. The RVM has demonstrated better performance over other methods such as neural networks or autoregressive integrated moving average based models. This study proposes a hybrid model that combines wavelet-based feature extractions with RVM models to forecast stock indices. The time series of explanatory variables are decomposed using some wavelet bases and the extracted time-scale features serve as inputs of an RVM to perform the non-parametric regression and forecasting. Compared with traditional forecasting models, our proposed method performs best. The root-mean-squared forecasting errors are significantly reduced. [source]