Quantization

Kinds of Quantization

  • learning vector quantization
  • vector quantization


Selected Abstracts


    Quantization of the ab initio nonadiabatic coupling matrix: The C2H molecule as a case study

    INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 4-5 2001
    Michael Baer
    Abstract The observation that, for a given sub-Hilbert space, diabatic potentials, just like adiabatic potentials, have to be single-valued in configuration space led to the unavoidable conclusion that the relevant nonadiabatic coupling matrix (i.e., the matrix that contains the vectorial electronic nonadiabatic coupling terms) has to be quantized along any contour in configuration space. In the present article this statement is tested with respect to the three (excited) states of the C2H molecule, i.e., the 2²A′, 3²A′, and 4²A′ states. For this purpose ab initio electronic nonadiabatic coupling matrices were calculated along various contours surrounding the relevant conical intersections (one conical intersection between the 2²A′ and 3²A′ states and two conical intersections between the 3²A′ and 4²A′ states). Employing the line-integral technique it was shown that as long as the contour that surrounds the (2,3) conical intersection is close enough to the CI and avoids the two (3,4) conical intersections, the 2×2 nonadiabatic coupling matrices are quantized. However they fail to be quantized for contours that also surround one or two of the other conical intersections. In this case one is obliged to employ the three-state nonadiabatic coupling matrix. Doing that, it was shown that it is the 3×3 matrices that satisfy the quantization condition. © 2001 John Wiley & Sons, Inc. Int J Quantum Chem, 2001 [source]


    Natural head motion synthesis driven by acoustic prosodic features

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2005
    Carlos Busso
    Abstract Natural head motion is important to realistic facial animation and engaging human–computer interaction. In this paper, we present a novel data-driven approach to synthesize appropriate head motion by sampling from trained hidden Markov models (HMMs). First, while an actress recited a corpus specifically designed to elicit various emotions, her 3D head motion was captured and further processed to construct a head motion database that included synchronized speech information. Then, an HMM for each discrete head motion representation (derived directly from data using vector quantization) was created by using acoustic prosodic features derived from speech. Finally, first-order Markov models and interpolation techniques were used to smooth the synthesized sequence. Our comparison experiments and novel synthesis results show that synthesized head motions follow the temporal dynamic behavior of real human subjects. Copyright © 2005 John Wiley & Sons, Ltd. [source]
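
    The discrete head-motion representation mentioned above comes from vector quantizing the captured motion data. Below is a minimal sketch of such a quantization step using Lloyd's (k-means) codebook training; the per-frame feature layout (pitch, yaw, roll) and the codebook size are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def train_vq_codebook(features, k=16, iters=50, seed=0):
    """Lloyd's algorithm: learn a k-entry codebook for head-motion features.

    features: (N, d) array, e.g. per-frame (pitch, yaw, roll) angles (assumed layout).
    Returns (codebook, labels), where labels[i] is the discrete symbol of frame i.
    """
    features = np.asarray(features, dtype=float)
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each frame to its nearest codeword (Euclidean distance)
        d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each codeword to the centroid of the frames assigned to it
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = features[labels == j].mean(axis=0)
    return codebook, labels

# Example: 1000 frames of (pitch, yaw, roll) head rotation quantized to 16 symbols,
# which can then serve as the discrete states driving one HMM per symbol.
frames = np.random.randn(1000, 3)
codebook, symbols = train_vq_codebook(frames, k=16)
```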


    Exact and Robust (Self-)Intersections for Polygonal Meshes

    COMPUTER GRAPHICS FORUM, Issue 2 2010
    Marcel Campen
    Abstract We present a new technique to implement operators that modify the topology of polygonal meshes at intersections and self-intersections. Depending on the modification strategy, this effectively results in operators for Boolean combinations or for the construction of outer hulls that are suited for mesh repair tasks and accurate mesh-based front tracking of deformable materials that split and merge. By combining an adaptive octree with nested binary space partitions (BSP), we can guarantee exactness (= correctness) and robustness (= completeness) of the algorithm while still achieving higher performance and less memory consumption than previous approaches. The efficiency and scalability in terms of runtime and memory is obtained by an operation localization scheme. We restrict the essential computations to those cells in the adaptive octree where intersections actually occur. Within those critical cells, we convert the input geometry into a plane-based BSP-representation which allows us to perform all computations exactly even with fixed precision arithmetics. We carefully analyze the precision requirements of the involved geometric data and predicates in order to guarantee correctness and show how minimal input mesh quantization can be used to safely rely on computations with standard floating point numbers. We properly evaluate our method with respect to precision, robustness, and efficiency. [source]


    Multiresolution Random Accessible Mesh Compression

    COMPUTER GRAPHICS FORUM, Issue 3 2006
    Junho Kim
    This paper presents a novel approach for mesh compression, which we call multiresolution random accessible mesh compression. In contrast to previous mesh compression techniques, the approach enables us to progressively decompress an arbitrary portion of a mesh without decoding other non-interesting parts. This simultaneous support of random accessibility and progressiveness is accomplished by adapting selective refinement of a multiresolution mesh to the mesh compression domain. We present a theoretical analysis of our connectivity coding scheme and provide several experimental results. The performance of our coder is about 11 bits for connectivity and 21 bits for geometry with 12-bit quantization, which can be considered reasonably good under the constraint that no fixed neighborhood information can be used for coding to support decompression in a random order. Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling [source]


    Pattern recognition in capillary electrophoresis data using dynamic programming in the wavelet domain

    ELECTROPHORESIS, Issue 13 2008
    Gerardo A. Ceballos
    Abstract A novel approach for CE data analysis based on pattern recognition techniques in the wavelet domain is presented. Low-resolution, denoised electropherograms are obtained by applying several preprocessing algorithms including denoising, baseline correction, and detection of the region of interest in the wavelet domain. The resultant signals are mapped into character sequences using first derivative information and multilevel peak height quantization. Next, a local alignment algorithm is applied on the coded sequences for peak pattern recognition. We also propose 2-D and 3-D representations of the found patterns for fast visual evaluation of the variability of chemical substance concentrations in the analyzed samples. The proposed approach is tested on the analysis of intracerebral microdialysate data obtained by CE and LIF detection, achieving a correct detection rate of about 85% with a processing time of less than 0.3 s per 25,000-point electropherogram. Using a local alignment algorithm on low-resolution denoised electropherograms might have a great impact on high-throughput CE, since the proposed methodology will substitute fast, automatic pattern recognition analysis for slow, time-consuming, human-based visual pattern recognition methods. [source]
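
    As a rough illustration of the coding step described above (first-derivative peak detection followed by multilevel peak-height quantization into a character sequence), here is a minimal sketch; the level boundaries, alphabet, and normalization are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

def encode_electropherogram(signal, levels=(0.2, 0.4, 0.6, 0.8), alphabet="abcde"):
    """Map a denoised, low-resolution electropherogram to a character sequence.

    Peaks are located from sign changes of the first derivative, and their
    heights are quantized into len(levels)+1 bins, one letter per bin.
    The boundaries and alphabet here are illustrative, not the paper's values.
    """
    signal = np.asarray(signal, dtype=float)
    d = np.diff(signal)
    # a peak is where the first derivative changes from positive to non-positive
    peak_idx = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    heights = signal[peak_idx] / signal.max()      # normalize heights to [0, 1]
    bins = np.digitize(heights, levels)            # multilevel quantization
    return "".join(alphabet[b] for b in bins)

# Two coded sequences can then be compared with a local alignment
# (Smith-Waterman-style) routine to recognize shared peak patterns.
```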


    Strategies for fault classification in transmission lines, using learning vector quantization neural networks

    EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 4 2006
    A. J. Mazón
    Abstract This paper analyses different approaches to fault classification in two-terminal overhead transmission lines using learning vector quantization (LVQ) neural networks, and verifies their efficiency. The objective is to classify the fault using the fundamental 50/60 Hz components of the fault and pre-fault voltage and current magnitudes. These magnitudes are measured in each phase at the reference end. The accuracy of these methods has been checked using properly validated fault simulation software developed with MATLAB. This software allows faults to be simulated at any location along the line, to obtain the fault and pre-fault voltage and current values. With these values, the fault can be classified. Copyright © 2006 John Wiley & Sons, Ltd. [source]
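
    For orientation, a minimal sketch of the LVQ1 update rule on which such a classifier is built is given below; the feature layout, learning rate, and prototype initialization are illustrative assumptions rather than the configuration used in the paper.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=30):
    """LVQ1 training: the winning (nearest) prototype is pulled toward a training
    sample of the same fault class and pushed away from samples of other classes.

    X: (N, d) features, e.g. per-phase fundamental voltage/current magnitudes
    (assumed layout); y: fault-type labels such as 'AG', 'BC', 'ABCG'."""
    P = np.array(prototypes, dtype=float)
    for _ in range(epochs):
        for x, label in zip(X, y):
            j = int(np.argmin(np.linalg.norm(P - x, axis=1)))   # winning prototype
            sign = 1.0 if proto_labels[j] == label else -1.0
            P[j] += sign * lr * (x - P[j])
    return P

def lvq_classify(x, prototypes, proto_labels):
    """Assign the fault class of the nearest prototype."""
    return proto_labels[int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))]
```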


    Trellis coded quantization/trellis coded continuous phase modulation over Rician channel with imperfect phase reference

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 5 2004
    Osman Nuri Ucan
    In this paper, to improve bandwidth efficiency and error performance, trellis coded quantization/trellis coded modulation (TCQ/TCM) and continuous phase modulation (CPM) are combined, and a trellis coded quantization/trellis coded continuous phase modulation (TCQ/TCCPM) scheme is introduced. Here, we use the TCQ/TCM system as source coding because of its advantage over classical joint systems in terms of decoding time and complexity. We also present CPM for TCQ/TCM signals, since CPM provides low spectral occupancy and is suitable for power- and bandwidth-limited channels. The bit error performance of TCQ/TCCPM schemes is derived taking into account quantization noise over a Rician channel with an imperfect phase reference. The analytical upper bounds are obtained using the Chernoff bounding technique, combined with the modified generating functional approach, with no channel state information (CSI) and no side information for the phase noise process. As an example, a TCQ/TCCPM scheme for 16CPFSK with modulation index h = 1/2 (16CPFSK-TCQ/TCCPM) is investigated and compared to TCQ/TCM for 16PSK (16PSK-TCQ/TCM). It is shown that 16CPFSK-TCQ/TCCPM has better bit error performance than 16PSK-TCQ/TCM at all signal-to-noise ratios (SNR), and that the quantization noise effect increases at high SNR values for both uniform and optimum quantization. Copyright © 2004 AEI. [source]


    Emergent 4D gravity from matrix models

    FORTSCHRITTE DER PHYSIK/PROGRESS OF PHYSICS, Issue 4-5 2008
    H. Steinacker
    Abstract Recent progress in the understanding of gravity on noncommutative spaces is discussed. A gravity theory naturally emerges from matrix models of noncommutative gauge theory. The effective metric depends on the dynamical Poisson structure, absorbing the degrees of freedom of the would-be U(1) gauge field. The gravity action is induced upon quantization. [source]


    Modelling small-business credit scoring by using logistic regression, neural networks and decision trees

    INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 3 2005
    Mirta Bensic
    Previous research on credit scoring that used statistical and intelligent methods was mostly focused on commercial and consumer lending. The main purpose of this paper is to extract important features for credit scoring in small-business lending, using a relatively small dataset collected under specific transitional economic conditions. To do this, we compare the accuracy of the best models extracted by different methodologies, such as logistic regression, neural networks (NNs), and CART decision trees. Four different NN algorithms are tested, including backpropagation, radial basis function network, probabilistic NN, and learning vector quantization, by using the forward nonlinear variable selection strategy. Although the test of differences in proportion and McNemar's test do not show a statistically significant difference in the models tested, the probabilistic NN model produces the highest hit rate and the lowest type I error. According to the measures of association, the best NN model also shows the highest degree of association with the data, and it yields the lowest total relative cost of misclassification for all scenarios examined. The best model extracts a set of important features for small-business credit scoring for the observed sample, emphasizing credit programme characteristics, as well as the entrepreneur's personal and business characteristics, as the most important ones. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Predicting direction shifts on Canadian–US exchange rates with artificial neural networks

    INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 2 2001
    Jefferson T. Davis
    The paper presents a variety of neural network models applied to Canadian–US exchange rate data. Networks such as backpropagation, modular, radial basis functions, linear vector quantization, fuzzy ARTMAP, and genetic reinforcement learning are examined. The purpose is to compare the performance of these networks for predicting direction (sign change) shifts in daily returns. For this classification problem, the neural nets proved superior to the naïve model, and most of the neural nets were slightly superior to the logistic model. Using multiple previous days' returns as inputs to train and test the backpropagation and logistic models resulted in no increased classification accuracy. The models were not able to detect a systematic effect of previous days' returns up to fifteen days prior to the prediction day that would increase model performance. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Defect-tolerant nanoelectronic pattern classifiers

    INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 3 2007
    Jung Hoon Lee
    Abstract Mixed-signal neuromorphic networks ('CrossNets'), based on hybrid CMOS/nanodevice circuits, may provide unprecedented performance for important pattern classification tasks. The synaptic weights necessary for such tasks may be imported from an external 'precursor' network with either continuous or discrete synaptic weights (in the former case, with quantization ('clipping') due to the binary character of the elementary synaptic nanodevices, i.e., latching switches). Alternatively, the weights may be adjusted 'in situ' (inside the CrossNet) using a pseudo-stochastic method, or set up using a mixed-mode method partly employing external circuitry. Our calculations have shown that CrossNet pattern classifiers, using any of these synaptic weight adjustment methods, may be remarkably resilient. For example, in a CrossNet with synapses in the form of two small square arrays with 4 × 4 nanodevices each, the resulting weight discreteness may have a virtually negligible effect on the classification fidelity, while the fraction of defective devices which affects the performance substantially ranges from about 20% to as high as 90% (!), depending on the training method. Copyright © 2007 John Wiley & Sons, Ltd. [source]
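
    A minimal sketch of the weight import with 'clipping' described above, i.e., quantizing continuous precursor-network weights to the two states of a binary synaptic device; the ±1 coding and the threshold are illustrative assumptions, since the actual hardware mapping is device specific.

```python
import numpy as np

def clip_weights(w_continuous, theta=0.0):
    """Quantize ('clip') continuous precursor-network weights to the two states
    of a binary latching-switch synapse. The +/-1 coding and the threshold theta
    are illustrative assumptions, not taken from the paper."""
    return np.where(np.asarray(w_continuous) >= theta, 1.0, -1.0)

# Example: a 4 x 4 block of imported weights mapped onto binary nanodevices
w = np.random.randn(4, 4)
w_binary = clip_weights(w)
```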


    Image coding based on wavelet feature vector

    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 2 2005
    Shinfeng D. Lin
    Abstract In this article, an efficient image coding scheme that takes advantage of feature vectors in the wavelet domain is proposed. First, a multi-stage discrete wavelet transform is applied on the image. Then, the wavelet feature vectors are extracted from the wavelet-decomposed subimages by collecting the corresponding wavelet coefficients. Finally, the image is coded into a bit-stream by applying vector quantization (VQ) on the extracted wavelet feature vectors. In the encoder, the wavelet feature vectors are encoded with a codebook in which the dimension of the codeword is less than that of the wavelet feature vector. In this way, the coding system can greatly improve its efficiency. However, to fully reconstruct the image, the received indexes in the decoder are decoded with a codebook in which the dimension of the codeword is the same as that of the wavelet feature vector. Therefore, the quality of reconstructed images can be preserved well. The proposed scheme achieves good compression efficiency by the following three methods: (1) using the correlation among wavelet coefficients; (2) placing different emphasis on wavelet coefficients at different decomposition levels; (3) preserving the most important information of the image by coding the lowest-pass subimage individually. In our experiments, simulation results show that the proposed scheme outperforms recent VQ-based image coding schemes and wavelet-based image coding techniques, respectively. Moreover, the proposed scheme is also suitable for very low bit rate image coding. © 2005 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 15, 123–130, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20045 [source]


    On the fast search algorithms for vector quantization encoding

    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 5 2002
    Wen-Shiung Chen
    Abstract One of the major difficulties arising in vector quantization (VQ) is its high encoding time complexity. Based on the well-known partial distance search (PDS) method and a special ordering of the codewords in the VQ codebook, two simple and efficient methods are introduced for fast full-search vector quantization to reduce encoding time complexity. The exploitation of the "move-to-front" method, which may find a smaller distortion as early as possible, combined with the PDS algorithm, is shown to improve the encoding efficiency of the PDS method. Because of the energy-compaction property of the DCT domain, search in a DCT-domain codebook may be further sped up. The experimental results show that our fast algorithms may significantly reduce the search time of VQ encoding. © 2003 Wiley Periodicals, Inc. Int J Imaging Syst Technol 12, 204–210, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10030 [source]
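
    The partial distance search idea underlying these methods can be sketched as follows: accumulate the squared distance to a codeword dimension by dimension and abandon that codeword as soon as the partial sum exceeds the best distance found so far. This is a generic PDS sketch, not the paper's exact implementation; the move-to-front reordering and the DCT-domain codebook are omitted.

```python
import numpy as np

def pds_encode(x, codebook):
    """Full-search VQ encoding with the partial distance search (PDS) trick.

    The running squared distance to a codeword is abandoned as soon as it
    exceeds the best distance found so far; the result is identical to an
    exhaustive full search, only cheaper."""
    best_idx, best_dist = -1, float("inf")
    for i, c in enumerate(codebook):
        dist = 0.0
        for xd, cd in zip(x, c):
            dist += (xd - cd) ** 2
            if dist >= best_dist:      # early termination: this codeword cannot win
                break
        else:                          # all dimensions accumulated -> new best match
            best_idx, best_dist = i, dist
    return best_idx, best_dist

# Example: encode a 16-dimensional image block against a 256-entry codebook
codebook = np.random.randn(256, 16)
block = np.random.randn(16)
idx, squared_error = pds_encode(block, codebook)
```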


    Wavelet-based adaptive vector quantization for still-image coding

    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 4 2002
    Wen-Shiung Chen
    Abstract Wavelet transform coding (WTC) with vector quantization (VQ) has been shown to be efficient in the application of image compression. An adaptive vector quantization coding scheme with the Gold-Washing dynamic codebook-refining mechanism in the wavelet domain, called symmetric wavelet transform-based adaptive vector quantization (SWT-GW-AVQ), is proposed for still-image coding in this article. The experimental results show that the GW codebook-refining mechanism working in the wavelet domain rather than the spatial domain is very efficient, and the SWT-GW-AVQ coding scheme may improve the peak signal-to-noise ratio (PSNR) of the reconstructed images with a lower encoding time. © 2002 Wiley Periodicals, Inc. Int J Imaging Syst Technol 12, 166–174, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10024 [source]


    On holographic transform compression of images

    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 5 2000
    Alfred M. Bruckstein
    Abstract Lossy transform compression of images is successful and widespread. The JPEG standard uses the discrete cosine transform on blocks of the image and a bit allocation process that takes advantage of the uneven energy distribution in the transform domain. For most images, 10:1 compression ratios can be achieved with no visible degradations. However, suppose that multiple versions of the compressed image exist in a distributed environment such as the internet, and several of them could be made available upon request. The classical approach would provide no improvement in the image quality if more than one version of the compressed image became available. In this paper, we propose a method, based on multiple description scalar quantization, that yields decompressed image quality that improves with the number of compressed versions available. © 2001 John Wiley & Sons, Inc. Int J Imaging Syst Technol, 11, 292–314, 2000 [source]


    Algebraic modifications to second quantization for non-Hermitian complex-scaled Hamiltonians with application to a quadratically convergent multiconfigurational self-consistent field method

    INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 6 2005
    Danny L. Yeager
    Abstract The algebraic structure for creation and annihilation operators defined on orthogonal orbitals is generalized to permit easy development of bound-state techniques involving the use of non-Hermitian Hamiltonians arising from the use of complex-scaling or complex-absorbing potentials in the treatment of electron scattering resonances. These extensions are made possible by an orthogonal transformation of complex biorthogonal orbitals and states as opposed to the customary unitary transformation of real orthogonal orbitals and states and preserve all other formal and numerical simplicities of existing bound-state methods. The ease of application is demonstrated by deriving the modified equations for implementation of a quadratically convergent multiconfigurational self-consistent field (MCSCF) method for complex-scaled Hamiltonians but the generalizations are equally applicable for the extension of other techniques such as single and multireference coupled cluster (CC) and many-body perturbation theory (MBPT) methods for their use in the treatment of resonances. This extends the domain of applicability of MCSCF, CC, MBPT, and methods based on MCSCF states to an accurate treatment of resonances while still using L2 real basis sets. Modification of all other bound-state methods and codes should be similarly straightforward. © 2005 Wiley Periodicals, Inc. Int J Quantum Chem, 2005 [source]


    Minimal data rate stabilization of nonlinear systems over networks with large delays

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 10 2010
    C. De Persis
    Abstract Control systems over networks with a finite data rate can be conveniently modeled as hybrid (impulsive) systems. For the class of nonlinear systems in feedforward form, we design a hybrid controller which guarantees stability in spite of the measurement noise due to quantization and of an arbitrarily large delay affecting the communication channel. The rate at which feedback packets are transmitted from the sensors to the actuators is shown to be arbitrarily close to the infimal one. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Average consensus on networks with quantized communication

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 16 2009
    Paolo Frasca
    Abstract This work presents a contribution to the solution of the average agreement problem on a network with quantized links. Starting from the well-known linear diffusion algorithm, we propose a simple and effective adaptation that is able to preserve the average of the states and to drive the system near the consensus value when a uniform quantization is applied to the communication between agents. The properties of this algorithm are investigated both by a worst-case analysis and by a probabilistic analysis, and are shown to depend on the spectral properties of the evolution matrix. Special attention is devoted to the issue of the dependence of the performance on the number of agents, and several examples are given. Copyright © 2008 John Wiley & Sons, Ltd. [source]
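
    A generic sketch of an average-preserving diffusion update with uniformly quantized communication is given below; the update law and the parameters are illustrative and are not claimed to be the paper's exact algorithm, but they show why a symmetric weight matrix keeps the state average invariant even though only quantized values are exchanged.

```python
import numpy as np

def quantized_consensus(x0, A, eps=0.2, steps=200, delta=0.1):
    """Average-preserving consensus with uniformly quantized communication.

    Each agent broadcasts q(x_i) (uniform quantizer with step delta) and updates
        x_i <- x_i + eps * sum_j A_ij * (q(x_j) - q(x_i)).
    With a symmetric weight matrix A, the pairwise terms cancel in the global sum,
    so the average of the states is preserved exactly despite quantization.
    This is a generic sketch of the idea, not the paper's exact update law.
    """
    q = lambda v: delta * np.round(v / delta)          # uniform quantizer
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        qx = q(x)
        x = x + eps * (A @ qx - A.sum(axis=1) * qx)    # vectorized diffusion step
    return x

# Example: 4 agents on a ring; the mean of the initial states (2.5) is preserved.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
x = quantized_consensus([0.0, 1.0, 2.0, 7.0], A)
print(x.mean())
```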


    Robust and efficient quantization and coding for control of multidimensional linear systems under data rate constraints

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 10-11 2007
    K. Li
    Abstract Recently, we reported results on coding strategies for scalar feedback systems with data-rate-limited feedback channels in which the data-rate constraints are time varying. Such rate-varying channels are typically encountered in communication networks in which links between nodes are subject to noise, congestion, and intermittent disruption. The present paper describes results of extending this research into the multidimensional domain. An important consideration is that for systems of dimension greater than one, many classical feedback designs cannot be realized for operation near the theoretical minimum possible data rate. A novel control coding scheme will be presented, and in terms of this, it will be shown that the advantages of coarse signal quantization that had been reported earlier for scalar systems remain in the multidimensional case. The key is to allocate the communication bandwidth efficiently among faster and slower modes. We discuss various strategies that allocate bandwidth by scheduling the time slots assigned to each mode. In particular, we propose a 'robust attention varying' technique, whose merit will be discussed in terms of its robustness with respect to time-varying communication channel capacity and also in terms of how well it operates when the feedback channel capacity is near the theoretical minimum data rate. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    The analysis of motor vehicle crash clusters using the vector quantization technique

    JOURNAL OF ADVANCED TRANSPORTATION, Issue 3 2010
    Lorenzo Mussone
    Abstract In this paper, a powerful tool for analyzing motor vehicle crash data based on the vector quantization (VQ) technique is demonstrated. The technique uses an approximation of a probability density function for a stochastic vector without assuming an "a priori" distribution. A self-organizing map (SOM) is used to transform accident data from an N-dimensional space into a two-dimensional plane. The SOM retains all the original data yet provides an effective visual tool for describing patterns such as the frequency at which a particular category of events occurs. This enables new relationships to be identified. Accident data from three cities in Italy (Turin, Milan, and Legnano) are used to illustrate the usefulness of the technique. Crashes are aggregated and clustered by type, severity, and along other dimensions. The paper includes discussion as to how this method can be utilized to further improve safety analysis. Copyright © 2010 John Wiley & Sons, Ltd. [source]
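
    For reference, a minimal self-organizing map training loop of the kind used to project N-dimensional crash records onto a two-dimensional grid is sketched below; the grid size, learning-rate schedule, and neighbourhood width are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM: maps N-dimensional records (e.g. crash attributes) onto a
    2D grid of units whose weight vectors approximate the data density."""
    data = np.asarray(data, dtype=float)
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.dstack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
    n_steps, t = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = t / n_steps
            lr = lr0 * (1.0 - frac)                 # decaying learning rate
            sigma = sigma0 * (1.0 - frac) + 1e-3    # shrinking neighbourhood
            # best-matching unit for this sample
            bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(axis=2)), (h, w))
            # Gaussian neighbourhood pulls units near the BMU toward the sample
            g = np.exp(-((coords - np.array(bmu)) ** 2).sum(axis=2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
            t += 1
    return weights

# Usage: som = train_som(crash_features) with crash_features an (N, d) array;
# each record is then visualized at the grid position of its best-matching unit.
```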


    Automated classification of crystallization experiments using wavelets and statistical texture characterization techniques

    JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 1 2008
    D. Watts
    A method is presented for the classification of protein crystallization images based on image decomposition using the wavelet transform. The distribution of wavelet coefficient values in each sub-band image is modelled by a generalized Gaussian distribution to provide discriminatory variables. These statistical descriptors, together with second-order statistics obtained from joint probability distributions, are used with learning vector quantization to classify protein crystallization images. [source]


    On the direct calculation of the free energy of quantization for molecular systems in the condensed phase

    JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 4 2009
    Daan P. Geerke
    Abstract Using the path integral formalism or the Feynman-Hibbs approach, various expressions for the free energy of quantization for a molecular system in the condensed phase can be derived. These lead to alternative methods to directly compute quantization free energies from molecular dynamics computer simulations, which were investigated with an eye to their practical use. For a test system of liquid neon, two methods are shown to be most efficient for a direct evaluation of the excess free energy of quantization. One of them makes use of path integral simulations in combination with a single-step free energy perturbation approach and was previously reported in the literature. The other method employs a Feynman-Hibbs effective Hamiltonian together with the thermodynamic integration formalism. However, both methods are found to give less accurate results for the excess free energy of quantization than the estimate obtained from explicit path integral calculations on the excess free energy of the neon liquid in the classical and quantum mechanical limit. Suggestions are made to make both methods more accurate. © 2008 Wiley Periodicals, Inc. J Comput Chem 2009 [source]
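
    For orientation, the lowest-order Feynman-Hibbs effective pair potential on which such an effective Hamiltonian is commonly based has the standard form (quoted here as background, not taken from the paper):

```latex
V_{\mathrm{FH}}(r) \;=\; V(r) \;+\; \frac{\hbar^{2}\beta}{24\,\mu}\,\nabla^{2}V(r),
\qquad \beta = \frac{1}{k_{\mathrm{B}}T},
```

    where μ is the reduced mass of the interacting pair. The excess free energy of quantization then follows from thermodynamic integration between the classical potential V and the effective potential V_FH, as mentioned in the abstract.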


    A Comparison of Neural Network, Statistical Methods, and Variable Choice for Life Insurers' Financial Distress Prediction

    JOURNAL OF RISK AND INSURANCE, Issue 3 2006
    Patrick L. Brockett
    This study examines the effect of the statistical/mathematical model selected and the variable set considered on the ability to identify financially troubled life insurers. Models considered are two artificial neural network methods (back-propagation and learning vector quantization (LVQ)) and two more standard statistical methods (multiple discriminant analysis and logistic regression analysis). The variable sets considered are the insurance regulatory information system (IRIS) variables, the financial analysis solvency tracking (FAST) variables, and Texas early warning information system (EWIS) variables, and a data set consisting of twenty-two variables selected by us in conjunction with the research staff at TDI and a review of the insolvency prediction literature. The results show that the back-propagation (BP) and LVQ outperform the traditional statistical approaches for all four variable sets with a consistent superiority across the two different evaluation criteria (total misclassification cost and resubstitution risk criteria), and that the twenty-two variables and the Texas EWIS variable sets are more efficient than the IRIS and the FAST variable sets for identification of financially troubled life insurers in most comparisons. [source]


    Supporting user-subjective categorization with self-organizing maps and learning vector quantization

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 4 2005
    Dina Goren-Bar
    Today, most document categorization in organizations is done manually. We save at work hundreds of files and e-mail messages in folders every day. While automatic document categorization has been widely studied, much challenging research still remains to support user-subjective categorization. This study evaluates and compares the application of self-organizing maps (SOMs) and learning vector quantization (LVQ) with automatic document classification, using a set of documents from an organization, in a specific domain, manually classified by a domain expert. After running the SOM and LVQ we requested the user to reclassify documents that were misclassified by the system. Results show that despite the subjective nature of human categorization, automatic document categorization methods correlate well with subjective, personal categorization, and the LVQ method outperforms the SOM. The reclassification process revealed an interesting pattern: About 40% of the documents were classified according to their original categorization, about 35% according to the system's categorization (the users changed the original categorization), and the remainder received a different (new) categorization. Based on these results we conclude that automatic support for subjective categorization is feasible; however, an exact match is probably impossible due to the users' changing categorization behavior. [source]


    First-Order Schemes in the Numerical Quantization Method

    MATHEMATICAL FINANCE, Issue 1 2003
    V. Bally
    The numerical quantization method is a grid method that relies on the approximation of the solution to a nonlinear problem by piecewise constant functions. Its purpose is to compute a large number of conditional expectations along the path of the associated diffusion process. We give here an improvement of this method by describing a first-order scheme based on piecewise linear approximations. Main ingredients are correction terms in the transition probability weights. We emphasize the fact that in the case of optimal quantization, many of these correcting terms vanish. We think that this is a strong argument to use it. The problem of pricing and hedging American options is investigated and a priori estimates of the errors are proposed. [source]


    Stability of quantization dimension and quantization for homogeneous Cantor measures

    MATHEMATISCHE NACHRICHTEN, Issue 8 2007
    Marc Kesseböhmer
    Abstract We effect a stabilization formalism for dimensions of measures and discuss the stability of upper and lower quantization dimension. For instance, we show for a Borel probability measure with compact support that its stabilized upper quantization dimension coincides with its packing dimension and that the upper quantization dimension is finitely stable but not countably stable. Also, under suitable conditions explicit dimension formulae for the quantization dimension of homogeneous Cantor measures are provided. This allows us to construct examples showing that the lower quantization dimension is not even finitely stable. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
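
    As background, the standard definitions of the n-point quantization error and of the upper and lower quantization dimensions of order r (in the sense of Graf and Luschgy) read:

```latex
e_{n,r}(\mu) \;=\; \inf_{\#\alpha \le n}\Bigl(\int \min_{a\in\alpha} \|x-a\|^{r}\, d\mu(x)\Bigr)^{1/r},
\qquad
\overline{D}_{r}(\mu) \;=\; \limsup_{n\to\infty}\frac{\log n}{-\log e_{n,r}(\mu)},
\quad
\underline{D}_{r}(\mu) \;=\; \liminf_{n\to\infty}\frac{\log n}{-\log e_{n,r}(\mu)}.
```

    The stability questions addressed above concern how these quantities behave under finite and countable combinations of measures, which is where the upper and lower dimensions behave differently.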


    HRTEM, Raman and optical study of CdS1−xSex nanocrystals embedded in silicate glass

    PHYSICA STATUS SOLIDI (A) APPLICATIONS AND MATERIALS SCIENCE, Issue 13 2004
    V. Bellani
    Abstract We studied CdS1−xSex nanocrystals embedded in a silicate glass by means of complementary techniques such as high-resolution transmission electron microscopy (HRTEM), micro-Raman spectroscopy, and optical transmission and reflectivity. Transmission electron microscopy gives complete information on the crystallization and size distribution of the nanocrystals, while Raman scattering is particularly useful for determining the composition of the nanocrystals in low-concentration or small-crystallite-size composites. Having the size distribution and composition of the nanocrystals, we have explained the transmission spectra of the studied samples. Optical transmission spectra evidence the quantization of the electronic states of the nanoparticle system, with a size distribution described by a Gaussian function. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    Light–matter interaction in finite-size plasma systems

    PHYSICA STATUS SOLIDI (B) BASIC SOLID STATE PHYSICS, Issue 10 2007
    W. Hoyer
    Abstract It is well known that electromagnetic waves with frequencies below the plasma frequency cannot propagate inside an electron plasma. For plasmas with infinite extension, this property can be mathematically described by a Bogoliubov transformation of the photonic operators. More generally, the presence of finite-size electron plasmas such as laser-induced atmospheric light strings or metallic nanostructures including metamaterials leads to a modification of the light–matter interaction. It is shown how this geometric property can be fully accounted for with the help of adapted mode functions used for the quantization of the electromagnetic field. In addition to the analytical derivations, numerical results for luminescence spectra out of quasi-two-dimensional, planar plasma sheets are presented. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    The role of breathers in the anomalous decay of luminescence

    PHYSICA STATUS SOLIDI (C) - CURRENT TOPICS IN SOLID STATE PHYSICS, Issue 10 2006
    Eva Mihóková
    Abstract Luminescence of alkali halides doped with heavy ns² ions exhibits an anomaly in the slow-component emission decay. The anomaly is explained by the formation of a discrete breather in the immediate neighborhood of the impurity. We study the properties of these breathers, their phase-space structure, robustness, and propensity for formation. Under a wide range of parameters and interionic potentials they form 2-dimensional Kolmogorov-Arnold-Moser tori (less than generic) in phase space. We show strobed views of these tori, useful in quantization. All features support the thesis of breather formation as the explanation for the luminescence decay anomaly that first motivated our breather proposal. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    Adaptive thinning of atmospheric observations in data assimilation with vector quantization and filtering methods

    THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 613 2005
    T. Ochotta
    Abstract In data assimilation for numerical weather prediction, measurements of various observation systems are combined with background data to define initial states for the forecasts. Current and future observation systems, in particular satellite instruments, produce large numbers of measurements with high spatial and temporal density. Such datasets significantly increase the computational costs of the assimilation and, moreover, can violate the assumption of spatially independent observation errors. To ameliorate these problems, we propose two greedy thinning algorithms, which reduce the number of assimilated observations while retaining the essential information content of the data. In the first method, the number of points in the output set is increased iteratively. We use a clustering method with a distance metric that combines spatial distance with difference in observation values. In a second scheme, we iteratively estimate the redundancy of the current observation set and remove the most redundant data points. We evaluate the proposed methods with respect to a geometric error measure and compare them with a uniform sampling scheme. We obtain good representations of the original data with thinnings retaining only a small portion of observations. We also evaluate our thinnings of ATOVS satellite data using the assimilation system of the Deutscher Wetterdienst. Impact of the thinning on the analysed fields and on the subsequent forecasts is discussed. Copyright © 2005 Royal Meteorological Society [source]