Vector Quantization (vector + quantization)
Selected Abstracts

Natural head motion synthesis driven by acoustic prosodic features
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2005
Carlos Busso
Abstract: Natural head motion is important to realistic facial animation and engaging human-computer interaction. In this paper, we present a novel data-driven approach to synthesize appropriate head motion by sampling from trained hidden Markov models (HMMs). First, while an actress recited a corpus specifically designed to elicit various emotions, her 3D head motion was captured and further processed to construct a head motion database that included synchronized speech information. Then, an HMM for each discrete head motion representation (derived directly from data using vector quantization) was created by using acoustic prosodic features derived from speech. Finally, first-order Markov models and interpolation techniques were used to smooth the synthesized sequence. Our comparison experiments and novel synthesis results show that the synthesized head motions follow the temporal dynamic behavior of real human subjects. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Strategies for fault classification in transmission lines, using learning vector quantization neural networks
EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 4 2006
A. J. Mazón
Abstract: This paper analyses different approaches to fault classification in two-terminal overhead transmission lines using learning vector quantization (LVQ) neural networks, and verifies their efficiency. The objective is to classify the fault using the fundamental 50/60 Hz components of the fault and pre-fault voltage and current magnitudes, measured in each phase at the reference end. The accuracy of these methods has been checked using properly validated fault simulation software developed with MATLAB. This software allows faults to be simulated at any location along the line, to obtain the fault and pre-fault voltage and current values. With these values, the fault can be classified. Copyright © 2006 John Wiley & Sons, Ltd. [source]
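To make the LVQ idea in the Mazón abstract above concrete, the following is a minimal LVQ1 sketch in Python/NumPy: prototypes are nudged toward samples of their own class and pushed away from samples of other classes. The feature layout (per-phase fault and pre-fault voltage and current magnitudes) and the fault classes are stand-ins invented for the demo, not the paper's data or implementation.

```python
# Minimal LVQ1 sketch with illustrative, randomly generated data.
import numpy as np

def train_lvq1(X, y, n_prototypes_per_class=2, lr=0.05, epochs=50, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    # Initialise prototypes from random samples of each class.
    protos, proto_labels = [], []
    for c in classes:
        idx = rng.choice(np.flatnonzero(y == c), n_prototypes_per_class, replace=False)
        protos.append(X[idx])
        proto_labels.extend([c] * n_prototypes_per_class)
    W = np.vstack(protos).astype(float)
    wy = np.array(proto_labels)

    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(W - X[i], axis=1)
            j = np.argmin(d)                      # best-matching prototype
            step = lr * (X[i] - W[j])
            W[j] += step if wy[j] == y[i] else -step
    return W, wy

def classify(W, wy, X):
    d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
    return wy[np.argmin(d, axis=1)]

# Toy usage: 12 features (3-phase V and I, fault and pre-fault), 4 fault classes.
X = np.random.default_rng(1).normal(size=(200, 12))
y = np.random.default_rng(2).integers(0, 4, size=200)
W, wy = train_lvq1(X, y)
print(classify(W, wy, X[:5]))
```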
Modelling small-business credit scoring by using logistic regression, neural networks and decision trees
INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 3 2005
Mirta Bensic
Abstract: Previous research on credit scoring that used statistical and intelligent methods was mostly focused on commercial and consumer lending. The main purpose of this paper is to extract important features for credit scoring in small-business lending on a relatively small dataset gathered under specific transitional economic conditions. To do this, we compare the accuracy of the best models extracted by different methodologies, such as logistic regression, neural networks (NNs), and CART decision trees. Four different NN algorithms are tested (backpropagation, radial basis function, probabilistic, and learning vector quantization networks) using the forward nonlinear variable selection strategy. Although the test of differences in proportion and McNemar's test do not show a statistically significant difference between the models tested, the probabilistic NN model produces the highest hit rate and the lowest type I error. According to the measures of association, the best NN model also shows the highest degree of association with the data, and it yields the lowest total relative cost of misclassification for all scenarios examined. The best model extracts a set of important features for small-business credit scoring for the observed sample, emphasizing credit programme characteristics, as well as the entrepreneur's personal and business characteristics, as the most important ones. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Predicting direction shifts on Canadian-US exchange rates with artificial neural networks
INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 2 2001
Jefferson T. Davis
Abstract: The paper presents a variety of neural network models applied to Canadian-US exchange rate data. Networks such as backpropagation, modular, radial basis function, learning vector quantization, fuzzy ARTMAP, and genetic reinforcement learning are examined. The purpose is to compare the performance of these networks for predicting direction (sign change) shifts in daily returns. For this classification problem, the neural nets proved superior to the naïve model, and most of the neural nets were slightly superior to the logistic model. Using multiple previous days' returns as inputs to train and test the backpropagation and logistic models resulted in no increase in classification accuracy. The models were not able to detect a systematic effect of previous days' returns, up to fifteen days prior to the prediction day, that would increase model performance. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Image coding based on wavelet feature vector
INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 2 2005
Shinfeng D. Lin
Abstract: In this article, an efficient image coding scheme that takes advantage of feature vectors in the wavelet domain is proposed. First, a multi-stage discrete wavelet transform is applied to the image. Then, the wavelet feature vectors are extracted from the wavelet-decomposed subimages by collecting the corresponding wavelet coefficients. Finally, the image is coded into a bit-stream by applying vector quantization (VQ) to the extracted wavelet feature vectors. In the encoder, the wavelet feature vectors are encoded with a codebook whose codeword dimension is smaller than that of the wavelet feature vector; in this way, the coding system greatly improves its efficiency. To fully reconstruct the image, however, the received indexes are decoded in the decoder with a codebook whose codeword dimension equals that of the wavelet feature vector, so the quality of the reconstructed images is well preserved. The proposed scheme achieves good compression efficiency by (1) using the correlation among wavelet coefficients, (2) placing different emphasis on wavelet coefficients at different decomposition levels, and (3) preserving the most important information of the image by coding the lowest-pass subimage individually. In our experiments, simulation results show that the proposed scheme outperforms recent VQ-based and wavelet-based image coding techniques, respectively. Moreover, the proposed scheme is also suitable for very low bit rate image coding. © 2005 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 15, 123-130, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20045 [source]
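As a rough illustration of the wavelet-domain VQ pipeline described in the Lin abstract above, the following Python sketch decomposes an image with PyWavelets, groups detail coefficients into small feature vectors, and quantizes them against a k-means codebook from scipy.cluster.vq. The block size, codebook size, and the choice to keep the lowest-pass subimage uncoded are illustrative assumptions, not the paper's parameters.

```python
# Sketch: wavelet decomposition, feature-vector extraction, and VQ encoding.
import numpy as np
import pywt
from scipy.cluster.vq import kmeans2, vq

def encode(image, wavelet="haar", levels=3, codebook_size=64):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    lowpass = coeffs[0]                      # coded separately (here: kept as-is)
    # Build feature vectors by flattening each detail subband into 2x2 blocks.
    vectors = []
    for (cH, cV, cD) in coeffs[1:]:
        for band in (cH, cV, cD):
            h, w = band.shape
            blocks = band[: h - h % 2, : w - w % 2].reshape(h // 2, 2, w // 2, 2)
            vectors.append(blocks.transpose(0, 2, 1, 3).reshape(-1, 4))
    vectors = np.vstack(vectors)
    codebook, _ = kmeans2(vectors, codebook_size, minit="++")
    indices, _ = vq(vectors, codebook)       # the transmitted index stream
    return lowpass, codebook, indices

img = np.random.default_rng(0).random((64, 64))
lowpass, codebook, indices = encode(img)
print(lowpass.shape, codebook.shape, indices[:10])
```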
On the fast search algorithms for vector quantization encoding
INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 5 2002
Wen-Shiung Chen
Abstract: One of the major difficulties arising in vector quantization (VQ) is its high encoding time complexity. Based on the well-known partial distance search (PDS) method and a special ordering of the codewords in the VQ codebook, two simple and efficient methods are introduced to reduce the encoding time complexity of fast full-search vector quantization. Exploiting the "move-to-front" method, which may reach a small distortion as early as possible, in combination with the PDS algorithm is shown to improve the encoding efficiency of the PDS method. Because of the energy-compaction property of the DCT domain, searching a DCT-domain codebook can be speeded up further. The experimental results show that our fast algorithms significantly reduce the search time of VQ encoding. © 2003 Wiley Periodicals, Inc. Int J Imaging Syst Technol 12, 204-210, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10030 [source]

Wavelet-based adaptive vector quantization for still-image coding
INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 4 2002
Wen-Shiung Chen
Abstract: Wavelet transform coding (WTC) with vector quantization (VQ) has been shown to be efficient for image compression. An adaptive vector quantization coding scheme with the Gold-Washing dynamic codebook-refining mechanism in the wavelet domain, called symmetric wavelet transform-based adaptive vector quantization (SWT-GW-AVQ), is proposed for still-image coding in this article. The experimental results show that the GW codebook-refining mechanism is very efficient when working in the wavelet domain rather than the spatial domain, and that the SWT-GW-AVQ coding scheme may improve the peak signal-to-noise ratio (PSNR) of the reconstructed images at a lower encoding time. © 2002 Wiley Periodicals, Inc. Int J Imaging Syst Technol 12, 166-174, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10024 [source]

The analysis of motor vehicle crash clusters using the vector quantization technique
JOURNAL OF ADVANCED TRANSPORTATION, Issue 3 2010
Lorenzo Mussone
Abstract: In this paper, a powerful tool for analyzing motor vehicle crash data based on the vector quantization (VQ) technique is demonstrated. The technique uses an approximation of the probability density function of a stochastic vector without assuming an "a priori" distribution. A self-organizing map (SOM) is used to transform accident data from an N-dimensional space onto a two-dimensional plane. The SOM retains all the original data yet provides an effective visual tool for describing patterns, such as the frequency at which a particular category of events occurs, enabling new relationships to be identified. Accident data from three cities in Italy (Turin, Milan, and Legnano) are used to illustrate the usefulness of the technique. Crashes are aggregated and clustered by type, severity, and other dimensions. The paper includes a discussion of how this method can be used to further improve safety analysis. Copyright © 2010 John Wiley & Sons, Ltd. [source]
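The SOM projection used in the Mussone crash-cluster abstract above can be sketched with a few lines of NumPy, as below. The grid size, learning schedule, and the six crash attributes are invented for the demo; a real analysis would use the actual accident records and a tuned map.

```python
# Bare-bones self-organizing map: project N-dimensional records onto a 2-D grid.
import numpy as np

def train_som(data, grid=(10, 10), epochs=100, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.random((rows, cols, data.shape[1]))
    gy, gx = np.mgrid[0:rows, 0:cols]          # grid coordinates for the neighbourhood
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)
        sigma = sigma0 * np.exp(-t / epochs)
        for x in data[rng.permutation(len(data))]:
            d = np.linalg.norm(W - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
            dist2 = (gy - by) ** 2 + (gx - bx) ** 2
            h = np.exp(-dist2 / (2 * sigma ** 2))              # neighbourhood weights
            W += lr * h[..., None] * (x - W)
    return W

def map_to_grid(W, data):
    d = np.linalg.norm(W[None, ...] - data[:, None, None, :], axis=3)
    flat = d.reshape(len(data), -1)
    return np.column_stack(np.unravel_index(np.argmin(flat, axis=1), W.shape[:2]))

crashes = np.random.default_rng(1).random((300, 6))   # 6 hypothetical crash attributes
som = train_som(crashes)
print(map_to_grid(som, crashes)[:5])                   # grid cell assigned to each crash
```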
Automated classification of crystallization experiments using wavelets and statistical texture characterization techniques
JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 1 2008
D. Watts
Abstract: A method is presented for the classification of protein crystallization images based on image decomposition using the wavelet transform. The distribution of wavelet coefficient values in each sub-band image is modelled by a generalized Gaussian distribution to provide discriminatory variables. These statistical descriptors, together with second-order statistics obtained from joint probability distributions, are used with learning vector quantization to classify protein crystallization images. [source]

A Comparison of Neural Network, Statistical Methods, and Variable Choice for Life Insurers' Financial Distress Prediction
JOURNAL OF RISK AND INSURANCE, Issue 3 2006
Patrick L. Brockett
Abstract: This study examines the effect of the statistical/mathematical model selected, and of the variable set considered, on the ability to identify financially troubled life insurers. The models considered are two artificial neural network methods (back-propagation and learning vector quantization (LVQ)) and two more standard statistical methods (multiple discriminant analysis and logistic regression analysis). The variable sets considered are the insurance regulatory information system (IRIS) variables, the financial analysis solvency tracking (FAST) variables, the Texas early warning information system (EWIS) variables, and a set of twenty-two variables selected by us in conjunction with the research staff at TDI and a review of the insolvency prediction literature. The results show that back-propagation (BP) and LVQ outperform the traditional statistical approaches for all four variable sets, with a consistent superiority across the two evaluation criteria (total misclassification cost and resubstitution risk), and that the twenty-two-variable and Texas EWIS variable sets are more efficient than the IRIS and FAST variable sets for identifying financially troubled life insurers in most comparisons. [source]

Supporting user-subjective categorization with self-organizing maps and learning vector quantization
JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 4 2005
Dina Goren-Bar
Abstract: Today, most document categorization in organizations is done manually; at work, we save hundreds of files and e-mail messages in folders every day. While automatic document categorization has been widely studied, much challenging research still remains to support user-subjective categorization. This study evaluates and compares the application of self-organizing maps (SOMs) and learning vector quantization (LVQ) to automatic document classification, using a set of documents from an organization, in a specific domain, manually classified by a domain expert. After running the SOM and LVQ, we asked the user to reclassify the documents that the system had misclassified. The results show that despite the subjective nature of human categorization, automatic document categorization methods correlate well with subjective, personal categorization, and that the LVQ method outperforms the SOM. The reclassification process revealed an interesting pattern: about 40% of the documents were classified according to their original categorization, about 35% according to the system's categorization (the users changed the original categorization), and the remainder received a different (new) categorization. Based on these results we conclude that automatic support for subjective categorization is feasible; however, an exact match is probably impossible due to the users' changing categorization behavior. [source]
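As a sketch of how LVQ-style prototypes could support the user-subjective categorization described in the Goren-Bar abstract above, the following Python class keeps one prototype vector per folder, suggests the nearest folder for a new document, and applies an LVQ-style update when the user files the document elsewhere. The folder names, vector dimension, and single-prototype design are assumptions for illustration, not the study's system.

```python
# Sketch: nearest-prototype folder suggestion with LVQ-style feedback updates.
import numpy as np

class FolderLVQ:
    def __init__(self, folders, dim, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.folders = list(folders)
        self.W = rng.random((len(self.folders), dim))   # one prototype per folder
        self.lr = lr

    def suggest(self, doc_vec):
        d = np.linalg.norm(self.W - doc_vec, axis=1)
        return self.folders[int(np.argmin(d))]

    def feedback(self, doc_vec, chosen_folder):
        """Update prototypes after the user files (or re-files) a document."""
        j = int(np.argmin(np.linalg.norm(self.W - doc_vec, axis=1)))
        k = self.folders.index(chosen_folder)
        if j == k:
            self.W[j] += self.lr * (doc_vec - self.W[j])      # reinforce the winner
        else:
            self.W[j] -= self.lr * (doc_vec - self.W[j])      # push wrong winner away
            self.W[k] += self.lr * (doc_vec - self.W[k])      # pull chosen folder closer

model = FolderLVQ(["projects", "invoices", "personal"], dim=50)
doc = np.random.default_rng(1).random(50)       # stand-in for a term-count vector
print(model.suggest(doc))
model.feedback(doc, "invoices")                 # user overrides the suggestion
print(model.suggest(doc))
```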
Adaptive thinning of atmospheric observations in data assimilation with vector quantization and filtering methods
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 613 2005
T. Ochotta
Abstract: In data assimilation for numerical weather prediction, measurements from various observation systems are combined with background data to define initial states for the forecasts. Current and future observation systems, in particular satellite instruments, produce large numbers of measurements with high spatial and temporal density. Such datasets significantly increase the computational cost of the assimilation and, moreover, can violate the assumption of spatially independent observation errors. To ameliorate these problems, we propose two greedy thinning algorithms, which reduce the number of assimilated observations while retaining the essential information content of the data. In the first method, the number of points in the output set is increased iteratively, using a clustering method with a distance metric that combines spatial distance with the difference in observation values. In the second scheme, we iteratively estimate the redundancy of the current observation set and remove the most redundant data points. We evaluate the proposed methods with respect to a geometric error measure and compare them with a uniform sampling scheme. We obtain good representations of the original data with thinnings that retain only a small portion of the observations. We also evaluate our thinnings of ATOVS satellite data using the assimilation system of the Deutscher Wetterdienst. The impact of the thinning on the analysed fields and on the subsequent forecasts is discussed. Copyright © 2005 Royal Meteorological Society [source]
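A minimal sketch of the second thinning scheme described above (iteratively removing the most redundant observation) might look like the following Python snippet. The redundancy score, and the weighting alpha between spatial distance and observation-value difference, are assumptions for the demo, not the metric used in the paper.

```python
# Sketch: greedy thinning by repeatedly dropping the most redundant observation,
# scored as closeness to its nearest remaining neighbour in (position, value) space.
import numpy as np

def greedy_thin(positions, values, keep, alpha=1.0):
    pts = np.hstack([positions, alpha * values[:, None]])
    active = list(range(len(pts)))
    while len(active) > keep:
        sub = pts[active]
        # Pairwise distances among the remaining observations.
        d = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)
        nearest = d.min(axis=1)
        # The observation closest to another one carries the least new information.
        most_redundant = int(np.argmin(nearest))
        active.pop(most_redundant)
    return np.array(active)

rng = np.random.default_rng(0)
lonlat = rng.uniform(0, 10, size=(200, 2))        # synthetic observation locations
obs = rng.normal(size=200)                        # synthetic observed values
kept = greedy_thin(lonlat, obs, keep=50)
print(len(kept), kept[:10])
```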