Random Noise (random + noise)


Selected Abstracts


A Lepskij-type Stopping-Rule for Newton-type Methods with Random Noise

PROCEEDINGS IN APPLIED MATHEMATICS & MECHANICS, Issue 1 2005
Frank Bauer
Regularized Newton methods are one of the most popular approaches for the solution of inverse problems in differential equations. Since these problems are usually ill-posed, an appropriate stopping rule is an essential ingredient of such methods. In this paper we suggest an a-posteriori stopping rule of Lepskij type which is appropriate for data perturbed by random noise. The numerical results for this rule look promising. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Signal de-noising in magnetic resonance spectroscopy using wavelet transforms

CONCEPTS IN MAGNETIC RESONANCE, Issue 6 2002
Hector F. Cancino-De-Greiff
Abstract Computer signal processing is used for quantitative data analysis (QDA) in magnetic resonance spectroscopy (MRS). The main difficulty in QDA is that MRS signals appear to be contaminated with random noise. Noise reduction can be achieved by coherent averaging, but it is not always possible to average many MRS waveforms. Wavelet shrinkage de-noising (WSD) is a technique that can be employed in this case. The potentialities of WSD in MRS, alone and combined with the Cadzow algorithm, are analyzed through computer simulations. The results can facilitate an appropriate application of WSD, as well as a deeper understanding of this technique. © 2002 Wiley Periodicals, Inc. Concepts Magn Reson 14: 388-401, 2002 [source]
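As a companion to the abstract above, here is a minimal sketch of wavelet shrinkage de-noising of a one-dimensional signal using the PyWavelets package; the wavelet family, decomposition level, and universal soft threshold are assumptions of this illustration, not the settings analyzed in the article.

```python
# Minimal wavelet shrinkage de-noising sketch (illustrative; not the article's exact settings).
import numpy as np
import pywt

def wavelet_shrinkage_denoise(signal, wavelet="db4", level=4):
    # Decompose the signal into approximation and detail coefficients.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate the noise level from the finest detail coefficients (robust MAD estimate).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # Universal (VisuShrink) threshold.
    threshold = sigma * np.sqrt(2.0 * np.log(len(signal)))
    # Soft-threshold every detail band, keep the approximation untouched.
    denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Synthetic example: a decaying oscillation (crudely mimicking an FID) plus random noise.
t = np.linspace(0, 1, 1024)
clean = np.exp(-5 * t) * np.cos(2 * np.pi * 50 * t)
noisy = clean + 0.1 * np.random.randn(t.size)
recovered = wavelet_shrinkage_denoise(noisy)
```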


A covariance-adaptive approach for regularized inversion in linear models

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2007
Christopher Kotsakis
SUMMARY The optimal inversion of a linear model under the presence of additive random noise in the input data is a typical problem in many geodetic and geophysical applications. Various methods have been developed and applied for the solution of this problem, ranging from the classic principle of least-squares (LS) estimation to other more complex inversion techniques such as the Tikhonov-Philips regularization, truncated singular value decomposition, generalized ridge regression, numerical iterative methods (Landweber, conjugate gradient) and others. In this paper, a new type of optimal parameter estimator for the inversion of a linear model is presented. The proposed methodology is based on a linear transformation of the classic LS estimator and it satisfies two basic criteria. First, it provides a solution for the model parameters that is optimally fitted (in an average quadratic sense) to the classic LS parameter solution. Second, it complies with an external user-dependent constraint that specifies a priori the error covariance (CV) matrix of the estimated model parameters. The formulation of this constrained estimator offers a unified framework for the description of many regularization techniques that are systematically used in geodetic inverse problems, particularly for those methods that correspond to an eigenvalue filtering of the ill-conditioned normal matrix in the underlying linear model. The contribution of our study lies in the fact that it adds an alternative perspective on the statistical properties and the regularization mechanism of many inversion techniques commonly used in geodesy and geophysics, by interpreting them as a family of 'CV-adaptive' parameter estimators that obey a common optimality criterion and differ only in the pre-selected form of their error CV matrix under a fixed model design. [source]
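The eigenvalue-filtering view mentioned above can be made concrete with a small sketch: Tikhonov regularization of a linear model damps the small eigenvalues of the normal matrix through filter factors applied in the singular-value basis. The design matrix, noise level, and regularization parameter below are arbitrary illustrations, not those of any geodetic application in the article.

```python
# Sketch: Tikhonov regularization of a linear model y = A x + noise, written as an
# eigenvalue filter on the normal matrix N = A^T A (illustrative, not the paper's estimator).
import numpy as np

def tikhonov_filtered_solution(A, y, alpha):
    # The plain least-squares solution uses N^{-1} A^T y with N = A^T A.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Filter factors f_i = s_i^2 / (s_i^2 + alpha) damp the small-eigenvalue directions
    # of N that make the plain LS solution unstable under random noise.
    f = s**2 / (s**2 + alpha)
    return Vt.T @ (f / s * (U.T @ y))

# Ill-conditioned toy design matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10)) @ np.diag(np.logspace(0, -6, 10))
x_true = np.ones(10)
y = A @ x_true + 1e-4 * rng.standard_normal(50)
x_reg = tikhonov_filtered_solution(A, y, alpha=1e-6)
```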


Joint inversion of multiple data types with the use of multiobjective optimization: problem formulation and application to the seismic anisotropy investigations

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2007
E. Kozlovskaya
SUMMARY In geophysical studies the problem of joint inversion of multiple experimental data sets obtained by different methods is conventionally considered as a scalar one. Namely, a solution is found by minimization of a linear combination of functions describing the fit of the values predicted from the model to each set of data. In the present paper we demonstrate that this standard approach is not always justified and propose to consider a joint inversion problem as a multiobjective optimization problem (MOP), for which the misfit function is a vector. The method is based on analysis of two types of solutions to MOP considered in the space of misfit functions (objective space). The first one is a set of complete optimal solutions that minimize all the components of a vector misfit function simultaneously. The second one is a set of Pareto optimal solutions, or trade-off solutions, for which it is not possible to decrease any component of the vector misfit function without increasing at least one other. We investigate the connection between the standard formulation of a joint inversion problem and the multiobjective formulation and demonstrate that the standard formulation is a particular case of scalarization of a multiobjective problem using a weighted sum of component misfit functions (objectives). We illustrate the multiobjective approach with a non-linear problem of the joint inversion of shear wave splitting parameters and longitudinal wave residuals. Using synthetic data and real data from three passive seismic experiments, we demonstrate that random noise in the data and inexact model parametrization destroy the complete optimal solution, which degenerates into a fairly large Pareto set. As a result, non-uniqueness of the problem of joint inversion increases. If the random noise in the data is the only source of uncertainty, the Pareto set expands around the true solution in the objective space. In this case the 'ideal point' method of scalarization of multiobjective problems can be used. If the uncertainty is due to inexact model parametrization, the Pareto set in the objective space deviates strongly from the true solution. In this case all scalarization methods fail to find the solution close to the true one and a change of model parametrization is necessary. [source]
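To make the scalarization point concrete, the following toy sketch sweeps the weight in a weighted-sum scalarization of two invented misfit functions; each weight yields one trade-off (Pareto-type) solution in the objective space. The misfit functions and model parameters stand in for, and are not, the splitting-parameter and travel-time objectives of the article.

```python
# Toy illustration of weighted-sum scalarization of a two-objective misfit vector.
import numpy as np
from scipy.optimize import minimize

def misfit1(m):
    return (m[0] - 1.0) ** 2 + 0.5 * m[1] ** 2

def misfit2(m):
    return 0.5 * (m[0] + 1.0) ** 2 + (m[1] - 2.0) ** 2

pareto_candidates = []
for w in np.linspace(0.0, 1.0, 21):
    # Scalarized objective: a weighted sum of the component misfits.
    scalar = lambda m, w=w: w * misfit1(m) + (1.0 - w) * misfit2(m)
    res = minimize(scalar, x0=np.zeros(2))
    pareto_candidates.append((misfit1(res.x), misfit2(res.x)))
# Each weight picks out one trade-off (Pareto-optimal) point in the objective space;
# no single weight minimizes both misfits at once unless a complete optimal solution exists.
```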


Design of an FIR filter for the displacement reconstruction using measured acceleration in low-frequency dominant structures

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 4 2010
Hae Sung Lee
Abstract This paper presents a new class of displacement reconstruction scheme using only acceleration measured from a structure. For a given set of acceleration data, the reconstruction problem is formulated as a boundary value problem in which the acceleration is approximated by the second-order central finite difference of displacement. The displacement is reconstructed by minimizing the least-squared errors between measured and approximated acceleration within a finite time interval. An overlapping time window is introduced to improve the accuracy of the reconstructed displacement. The displacement reconstruction problem becomes ill-posed because the boundary conditions at both ends of each time window are not known a priori. Furthermore, random noise in measured acceleration causes physically inadmissible errors in the reconstructed displacement. A Tikhonov regularization scheme is adopted to alleviate the ill-posedness. It is shown that the proposed method is equivalent to an FIR filter designed in the time domain. The fundamental characteristics of the proposed method are presented in the frequency domain using the transfer function and the accuracy function. The validity of the proposed method is demonstrated by a numerical example, a laboratory experiment and a field test. Copyright © 2009 John Wiley & Sons, Ltd. [source]
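A minimal sketch of the core reconstruction idea, under simplifying assumptions: the displacement is sought so that its second-order central finite difference matches the measured acceleration in a least-squares sense, with a simple Tikhonov penalty standing in for the paper's regularization scheme; the overlapping time windows and the FIR-filter interpretation are omitted here.

```python
# Sketch: reconstruct displacement u whose central-difference second derivative best
# matches measured acceleration, with Tikhonov regularization against ill-posedness.
import numpy as np

def reconstruct_displacement(acc, dt, lam=1e-2):
    n = acc.size
    # Second-order central finite-difference operator: (u[i-1] - 2 u[i] + u[i+1]) / dt^2.
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i] = 1.0
        D[i, i + 1] = -2.0
        D[i, i + 2] = 1.0
    D /= dt**2
    # Minimize ||D u - acc_interior||^2 + lam ||u||^2 (a simple Tikhonov term;
    # the paper's regularization functional and window weighting may differ).
    lhs = D.T @ D + lam * np.eye(n)
    rhs = D.T @ acc[1:-1]
    return np.linalg.solve(lhs, rhs)

# Synthetic test: a sinusoidal displacement and its noisy acceleration.
dt = 0.01
t = np.arange(0.0, 2.0, dt)
true_disp = np.sin(2 * np.pi * 1.0 * t)
acc = -(2 * np.pi * 1.0) ** 2 * true_disp + 0.05 * np.random.randn(t.size)
u_hat = reconstruct_displacement(acc, dt)
```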


Perceptual denoising of color images

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 3 2010
Ilka A. Netravali
Abstract Denoising of color images is a trade-off between sharpness of an image and perceived noise. We formulate a novel optimization problem that can maximize sharpness of an image while limiting the perceived noise under a model of visibility of additive random noise. We derive a closed-form expression for an optimal two-dimensional finite impulse response filter, show its uniqueness and existence, and present simulation results for black and white as well as color images. Simulation results show remarkable reduction in perceptibility of noise, while preserving sharpness. The computational burden required for the optimal filter is reduced by a new ad hoc filter that is simple but has near-optimal performance. © 2010 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 20, 215-222, 2010. [source]


Image reconstruction for a partially immersed imperfectly conducting cylinder by genetic algorithm

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 4 2009
Wei Chien
Abstract This article presents a computational approach to the imaging of a partially immersed imperfectly conducting cylinder. An imperfectly conducting cylinder of unknown shape and conductivity scatters the incident transverse magnetic (TM) wave in free space while the scattered field is recorded outside. Based on the boundary condition and the measured scattered field, a set of nonlinear integral equations is derived, and the inverse scattering problem is reformulated as an optimization problem. A genetic algorithm (GA) is then used to reconstruct the shape and the conductivity of the partially immersed imperfectly conducting cylinder by searching for the global extremum of the cost function. Numerical results demonstrate that good reconstruction can be obtained even when the initial guess is far away from the exact one, a situation in which gradient-based methods often become trapped in a local extremum. In addition, the effect of random noise on the reconstruction is investigated. © 2009 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 19, 299-305, 2009 [source]
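A generic real-coded genetic algorithm of the kind referred to above can be sketched in a few lines; the selection, crossover, and mutation operators, their rates, and the multimodal test cost below are illustrative assumptions rather than the article's implementation, in which the cost function measures the misfit between measured and computed scattered fields.

```python
# Minimal real-coded genetic algorithm sketch for global minimization of a cost function.
import numpy as np

def genetic_minimize(cost, bounds, pop_size=40, generations=200, mutation=0.1, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.array([cost(ind) for ind in pop])
        order = np.argsort(fitness)
        elite = pop[order[: pop_size // 2]]                    # selection: keep the better half
        parents = elite[rng.integers(0, elite.shape[0], size=(pop_size, 2))]
        alpha = rng.uniform(size=(pop_size, dim))
        children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]       # blend crossover
        children += mutation * (hi - lo) * rng.standard_normal((pop_size, dim))  # mutation
        pop = np.clip(children, lo, hi)
    fitness = np.array([cost(ind) for ind in pop])
    return pop[np.argmin(fitness)]

# Example: a multimodal test cost where gradient methods easily stall in local minima.
cost = lambda x: np.sum(x**2) + 10 * np.sum(1 - np.cos(2 * np.pi * x))  # Rastrigin-like
best = genetic_minimize(cost, (np.full(4, -5.0), np.full(4, 5.0)))
```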


Hidden Markov model-based real-time transient identifications in nuclear power plants

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 8 2002
Kee-Choon Kwon
In this article, a transient identification method based on a stochastic approach with the hidden Markov model (HMM) has been suggested and evaluated experimentally for the classification of nine types of transients in nuclear power plants (NPPs). A transient is defined as the process by which a plant proceeds from a normal state to an abnormal state. Identification of the type of transient during an early accident stage in NPPs is crucial for proper action selection. A transient can be identified by its unique time-dependent patterns in the principal variables. The HMM, a doubly stochastic process, can be applied to transient identification, which is a spatial and temporal classification problem under a statistical pattern-recognition framework. An HMM is trained for each transient from a set of training data by the maximum-likelihood estimation method, which uses a forward-backward algorithm and the Baum-Welch re-estimation algorithm. The transient identification is determined by calculating which model has the highest probability for the given test data using the Viterbi algorithm. Several experimental tests have been performed with different normalization methods, clustering algorithms, and numbers of states in the HMM. Additional tests, including superimposing random noise, adding systematic error, and presenting untrained transients, were performed to verify the method's performance and robustness. The proposed real-time transient identification system has been shown to have many advantages, although some problems should still be solved before applying it to an operating NPP. Further efforts are being made to improve the system performance and robustness in order to demonstrate reliability and accuracy to the required level. © 2002 Wiley Periodicals, Inc. [source]
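A minimal sketch of the classify-by-best-model idea described above: one HMM is trained per transient type and a test sequence is labelled by the model with the highest Viterbi log-probability. Gaussian emissions and the hmmlearn package are assumptions of this sketch, not the implementation used in the article.

```python
# Train one HMM per transient type, then identify a test sequence by the best-scoring model.
import numpy as np
from hmmlearn import hmm

def train_models(training_sets, n_states=4):
    # training_sets: dict mapping transient name -> list of (T_i x n_features) observation arrays.
    models = {}
    for name, sequences in training_sets.items():
        X = np.vstack(sequences)
        lengths = [len(seq) for seq in sequences]
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
        model.fit(X, lengths)                     # Baum-Welch (EM) re-estimation
        models[name] = model
    return models

def identify_transient(models, observation):
    # decode() scores the observation with the Viterbi algorithm; pick the best-scoring model.
    return max(models, key=lambda name: models[name].decode(observation)[0])
```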


Comparison of the numerical stability of methods for anharmonic calculations of vibrational molecular energies

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 10 2007
Petr Dan
Abstract On model examples, we compare the performance of the vibrational self-consistent field, variational, and four perturbational schemes used for computations of vibrational energies of semi-rigid molecules, with emphasis on the numerical stability. Although the accuracy of the energies is primarily dependent on the quality of the potential energy surface, approximate approaches to the anharmonic vibrational problem often do not converge to the same results due to the approximations involved. For furan, the sensitivity to variations of the anharmonic potential was systematically investigated by adding random noise to the cubic and quartic constants. The self-consistent field methods proved to be the most resistant to the potential variations. The second-order perturbational techniques are sensitive to random degeneracies and provided the least stable results. However, their stability could be significantly improved by a simple generalization of the perturbational formula. The variational configuration interaction is practically limited by the size of the matrix that can be diagonalized for larger molecules; however, relatively fewer states need to be involved than for smaller ones, which works in favor of the computation. © 2007 Wiley Periodicals, Inc. J Comput Chem, 2007 [source]


Evolutionary history shapes the association between developmental instability and population-level genetic variation in three-spined sticklebacks

JOURNAL OF EVOLUTIONARY BIOLOGY, Issue 8 2009
S. VAN DONGEN
Abstract Developmental instability (DI) is the sensitivity of a developing trait to random noise and can be measured by degrees of directionally random asymmetry [fluctuating asymmetry (FA)]. FA has been shown to increase with loss of genetic variation and inbreeding as measures of genetic stress, but associations vary among studies. Directional selection and evolutionary change of traits have been hypothesized to increase the average levels of FA of these traits and to increase the association strength between FA and population-level genetic variation. We test these two hypotheses in three-spined stickleback (Gasterosteus aculeatus L.) populations that recently colonized the freshwater habitat. Some traits, like lateral bone plates, length of the pelvic spine, frontal gill rakers and eye size, evolved in response to selection regimes during colonization. Other traits, like distal gill rakers and number of pelvic fin rays, did not show such phenotypic shifts. Contrary to a priori predictions, average FA did not systematically increase in traits that were under presumed directional selection, and the increases observed in a few traits were likely to be attributable to other factors. However, traits under directional selection did show a weak but significantly stronger negative association between FA and selectively neutral genetic variation at the population level compared with the traits that did not show an evolutionary change during colonization. These results support our second prediction, providing evidence that selection history can shape associations between DI and population-level genetic variation at neutral markers, which potentially reflect genetic stress. We argue that this might explain at least some of the observed heterogeneities in the patterns of asymmetry. [source]


Range error detection caused by occlusion in non-coaxial LADARs for scene interpretation

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 10 2005
Bingbing Liu
When processing laser detection and ranging (LADAR) sensor data for scene interpretation, for example, for the purposes of feature extraction and/or data association in mobile robotics, most previous work models such devices as producing range data that follows a normal distribution. In this paper, it is demonstrated that commonly used LADARs suffer from incorrect range readings at changes in surface reflectivity and/or range discontinuities, which can have a much more detrimental effect on such algorithms than random noise. Most LADARs fall into two categories: coaxial and separated transmitter and receiver configurations. The latter offer the advantage that optical crosstalk is eliminated, since it can be guaranteed that all of the transmitted light leaves the LADAR and is not in any way partially reflected within it due to the beam-splitting techniques necessary in coaxial LADARs. However, they can introduce a significant disparity effect, as the reflected laser energy from the target can be partially occluded from the receiver. As well as demonstrating that false range values can result from this occlusion effect in scanned LADARs, the main contribution of this paper is that the occurrence of these values can be reliably predicted by monitoring the received signal strength and a quantity we refer to as the "transceiver separation angle" of the rotating mirror. This paper will demonstrate that a correct understanding of such systematic errors is essential for the correct further processing of the data. A useful design criterion for the optical separation of the receiver and transmitter is also derived for noncoaxial LADARs, based on the minimum detectable signal amplitude of a LADAR and environmental edge constraints. By investigating the effects of various sensor and environmental parameters on occlusion, some advice is given on how to make use of noncoaxial LADARs correctly so as to avoid range errors when scanning environmental discontinuities. © 2005 Wiley Periodicals, Inc. [source]


Neural network approach to firm grip in the presence of small slips

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 6 2001
A. M. Al-Fahed Nuseirat
This paper presents a two stage method for constructing a firm grip that can tolerate small slips of the fingertips. The fingers are assumed to be of frictionless contact type. The first stage was to formulate the interaction in the gripper-object system as a linear complementarity problem (LCP). Then it was solved using a special neural network to find minimal fingers forces. The second stage was to use the obtained results in the first stage as a static mapping in training another neural network. The second neural network training included emulating the slips by random noise in the form of changes in the positions of the contact points relative to the reference coordinate system. This noisy training increased robustness against unexpected changes in fingers positions. Genetic algorithms were used in training the second neural network as global optimization techniques. The resulting neural network is a robust, reliable, and stable controller for rigid bodies that can be handled by a robot gripper. © 2001 John Wiley & Sons, Inc. [source]


Heterogeneous growth of cordierite in low P/T Tsukuba metamorphic rocks from central Japan

JOURNAL OF METAMORPHIC GEOLOGY, Issue 2 2001
K. Miyazaki
Abstract This paper examines the spatial statistics of matrix minerals and complex patterned cordierite porphyroblasts in the low-pressure, high-temperature (low P/T) Tsukuba metamorphic rocks from central Japan, using a density correlation function. The cordierite-producing reaction is sillimanite + biotite + quartz = K-feldspar + cordierite + water. The density correlation function shows that quartz is distributed randomly. However, the density correlation functions of biotite, plagioclase and K-feldspar show that their spatial distributions are clearly affected by the formation of cordierite porphyroblasts. These observations suggest that cordierite growth occurred through a selective growth mechanism: quartz adjacent to cordierite has a tendency to prevent the growth of cordierite, whereas other matrix minerals adjacent to cordierite have a tendency to enhance the growth of cordierite. The density correlation functions of complex patterned cordierite porphyroblasts show power-law behaviour. A selective growth mechanism alone cannot explain the origin of the power-law behaviour. Comparison of the morphology and fractal dimension of cordierite with two-dimensional sections from a three-dimensional diffusion-limited aggregation (DLA) suggests that the formation of cordierite porphyroblasts can be modelled as a DLA process. DLA is the simple statistical model for the universal fractal pattern developed in a macroscopic diffusion field. Diffusion-controlled growth interacting with a random field is essential to the formation of a DLA-like pattern. The selective growth mechanism will provide a random noise for the growth of cordierite due to random distribution of quartz. Therefore, a selective growth mechanism coupled with diffusion-controlled growth is proposed to explain the power-law behaviour of the density correlation function of complex patterned cordierite. The results in this paper suggest that not only the growth kinetics but also the spatial distribution of matrix minerals affect the progress of the metamorphic reaction and pattern formation of metamorphic rocks. [source]
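For readers unfamiliar with DLA, the following minimal two-dimensional lattice sketch illustrates the growth process invoked above; the lattice size, particle count, and launch/escape radii are arbitrary, and the mineral-specific selective-growth rule discussed in the abstract is not modelled.

```python
# Minimal 2-D lattice DLA (diffusion-limited aggregation) sketch.
import numpy as np

def grow_dla(n_particles=500, size=201, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=bool)
    c = size // 2
    grid[c, c] = True                                   # seed site at the centre
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    launch_r = 5
    for _ in range(n_particles):
        # Release a random walker on a circle just outside the current aggregate.
        theta = rng.uniform(0.0, 2.0 * np.pi)
        x = int(c + launch_r * np.cos(theta))
        y = int(c + launch_r * np.sin(theta))
        while True:
            dx, dy = steps[rng.integers(4)]
            x += dx
            y += dy
            r = np.hypot(x - c, y - c)
            if r > launch_r + 10 or not (0 < x < size - 1 and 0 < y < size - 1):
                break                                   # walker wandered off; abandon it
            # Stick when any 4-neighbour already belongs to the aggregate.
            if grid[x - 1, y] or grid[x + 1, y] or grid[x, y - 1] or grid[x, y + 1]:
                grid[x, y] = True
                launch_r = max(launch_r, int(r) + 5)
                break
    return grid

cluster = grow_dla()
```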


Estimation of integrated squared density derivatives from a contaminated sample

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 4 2002
A. Delaigle
Summary. We propose a kernel estimator of integrated squared density derivatives, from a sample that has been contaminated by random noise. We derive asymptotic expressions for the bias and the variance of the estimator and show that the squared bias term dominates the variance term. This coincides with results that are available for non-contaminated observations. We then discuss the selection of the bandwidth parameter when estimating integrated squared density derivatives based on contaminated data. We propose a data-driven bandwidth selection procedure of the plug-in type and investigate its finite sample performance via a simulation study. [source]


A model of non-isothermal degradation of nutrients, pigments and enzymes

JOURNAL OF THE SCIENCE OF FOOD AND AGRICULTURE, Issue 3 2004
Maria G Corradini
Abstract Published isothermal degradation curves for chlorophyll A and thiamine in the range 100-150 °C and the inactivation curves of polyphenol oxidase (PPO) in the range 50-80 °C could be described by the model C(t)/C0 = exp[-b(T)t^n], where C(t) and C0 are the momentary and initial concentrations, respectively, b(T) a temperature-dependent 'rate parameter' and n a constant. This suggested that the temporal degradation/inactivation events of all three had a Weibull distribution with a practically constant shape factor. The temperature dependence of the 'rate parameter' could be described by the log logistic model b(T) = loge{1 + exp[k(T - Tc)]}, where Tc is a marker of the temperature level where the degradation/inactivation occurs at a significant rate and k the steepness of the b(T) increase once this temperature range has been exceeded. These two models were combined to produce a non-isothermal degradation/inactivation model, similar to one recently developed for microbial inactivation. It is based on the assumption that the local slope of the non-isothermal decay curve, i.e. the momentary decay rate, is the slope of the isothermal curve at the momentary temperature at the time that corresponds to the momentary concentration of the still intact or active molecules. This model, in the form of a differential equation, was solved numerically to produce degradation/inactivation curves under temperature profiles that included heating and cooling and oscillating temperatures. Such simulations can be used to assess the impact of planned commercial heat processes on the stability of compounds of nutritional and quality concern and the efficacy of methods to inactivate enzymes. Simulated decay curves on which random noise was superimposed were used to demonstrate that the degradation/inactivation parameters, k and Tc, can be calculated directly from non-isothermal decay curves, provided that the validity of the Weibullian and log logistic models and the constancy of the shape factor n can be assumed. Copyright © 2004 Society of Chemical Industry [source]
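A numerical sketch of the non-isothermal model described above: the momentary decay rate is taken as the isothermal Weibullian rate at the momentary temperature, evaluated at the time t* that corresponds to the momentary concentration ratio. The parameter values, temperature profile, and time units are illustrative assumptions, not the fitted values reported in the article.

```python
# Non-isothermal Weibullian degradation sketch with a log-logistic rate parameter b(T).
import numpy as np
from scipy.integrate import solve_ivp

n = 1.1                 # Weibullian shape factor (assumed constant)
k, Tc = 0.25, 110.0     # steepness and temperature marker of the log-logistic b(T)

def b(T):
    # Log-logistic temperature dependence of the rate parameter.
    return np.log(1.0 + np.exp(k * (T - Tc)))

def temperature_profile(t):
    # Simple heat-hold-cool profile (degrees C versus minutes), purely illustrative.
    return np.interp(t, [0.0, 10.0, 30.0, 45.0], [25.0, 130.0, 130.0, 25.0])

def decay_rate(t, y):
    # y[0] is the momentary concentration ratio C(t)/C0.
    ratio = min(max(y[0], 1e-12), 1.0 - 1e-12)
    bt = b(temperature_profile(t))
    t_star = (-np.log(ratio) / bt) ** (1.0 / n)     # isothermal time matching the ratio
    return [-bt * n * t_star ** (n - 1.0) * ratio]

# Start marginally below 1 to step off the degenerate fixed point of the rate at C/C0 = 1.
sol = solve_ivp(decay_rate, (0.0, 45.0), [0.9999], max_step=0.1)
retained_fraction = sol.y[0]    # C(t)/C0 along the temperature profile
```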


Model Development in Thermal Styrene Polymerization

MACROMOLECULAR SYMPOSIA, Issue 1 2007
Bryan Matthews
Abstract Summary: The thermal polymerization of styrene is usually modeled by relying on a reaction scheme and a set of equations developed more than three decades ago by Hui and Hamielec. Many detailed models of styrene polymerization are available in the open literature, and most are based on the work of Hui and Hamielec, which has made it virtually the standard for explaining the behavior of polystyrene reactors. The model of Hui and Hamielec describes monomer conversion data very well, but discrepancies are seen between observed and predicted values of the number and weight average molecular weights, Mn and Mw. Discrepancies in number average molecular weight appear to be the result of random noise. Discrepancies in weight average molecular weight grow as the polymerization temperature decreases, and some of the trends observed in the residuals over the entire temperature range cannot be attributed to random noise. Hui and Hamielec attributed the observed deficiencies to a standard deviation of ±10% in their GPC measurements. A new data set with an experimental error of 2% for average molecular weights is presented. The set contains measured values of Mn, Mw and Mz, so the polymerization scheme has been extended to include third-order moments. The data set also includes the effect of ethylbenzene as a chain transfer agent. We present the results of comparing model predictions to our measurements and the adjustments made to the original set of kinetic parameters published by Hui and Hamielec. [source]


Bayesian galaxy shape measurement for weak lensing surveys - II. Application to simulations

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 1 2008
ABSTRACT In this paper, we extend the Bayesian model fitting shape measurement method presented in Miller et al., and use the method to estimate the shear from the Shear TEsting Programme (STEP) simulations. The method uses a fast model fitting algorithm that uses realistic galaxy profiles and analytically marginalizes over the position and amplitude of the model by doing the model fitting in Fourier space. This is used to find the full posterior probability in ellipticity. The shear is then estimated in a Bayesian way from this posterior probability surface. The Bayesian estimation allows measurement bias arising from the presence of random noise to be removed. In this paper, we introduce an iterative algorithm that can be used to estimate the intrinsic ellipticity prior and show that this is accurate and stable. We present results using the STEP parametrization that relates the input shear γT to the estimated shear γM by introducing a bias m and an offset c: γM - γT = mγT + c. The average number density of galaxies used in the STEP1 analysis was 9 per square arcminute; for STEP2 the number density was 30 per square arcminute. By using the method to estimate the shear from the STEP1 simulations, we find the method to have a shear bias of m = 0.006 ± 0.005 and a variation in shear offset with point spread function type of σc = 0.0002. Using the method to estimate the shear from the STEP2 simulations, we find that the shear bias and offset are m = 0.002 ± 0.016 and c = -0.0007 ± 0.0006, respectively. In addition, we find that the bias and offset are stable to changes in the magnitude and size of the galaxies. Biases at this level should leave cosmological constraints from future weak lensing surveys robust to systematic effects in shape measurement. Finally, we present an alternative to the STEP parametrization by using a quality factor that relates the intrinsic shear variance in a simulation to the variance in shear that is measured, and show that the method presented has an average of Q ≈ 100, which is at least a factor of 10 better than other shape measurement methods. [source]
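The STEP parametrization quoted above amounts to a straight-line fit of the shear measurement error against the true shear; the following sketch estimates m and c from synthetic numbers chosen only for illustration, not from STEP data.

```python
# Sketch of the STEP bias parametrization gamma_M - gamma_T = m * gamma_T + c:
# given true (input) and measured shears, m and c follow from a straight-line fit.
import numpy as np

rng = np.random.default_rng(1)
gamma_true = rng.uniform(-0.05, 0.05, size=1000)
# Pretend the measurement carries a small multiplicative bias, an offset, and noise.
gamma_meas = (1.0 + 0.006) * gamma_true + 0.0002 + 0.003 * rng.standard_normal(1000)

m, c = np.polyfit(gamma_true, gamma_meas - gamma_true, deg=1)
print(f"shear bias m = {m:.4f}, offset c = {c:.5f}")
```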


Ionization-induced star formation , I. The collect-and-collapse model

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2007
J. E. Dale
ABSTRACT We conduct smoothed particle hydrodynamics simulations of the 'collect-and-collapse' scenario for star formation triggered by an expanding H ii region. We simulate the evolution of a spherical uniform molecular cloud with an ionizing source at its centre. The gas in the cloud is self-gravitating, although the cloud is prevented from globally collapsing. We find that the shell driven by the H ii region fragments to form numerous self-gravitating objects. We repeat our calculations at four numerical resolutions to ensure that they are converged. We compare our results to the analytical model of Whitworth et al. and show that our simulations and the predictions of Whitworth et al. are in good agreement in the sense that the shell fragments at the time and radius predicted by Whitworth et al. to within 20 and 25 per cent, respectively. Most of the fragments produced in our two highest resolution calculations are approximately half the mass of those predicted by Whitworth et al., but this conclusion is robust against both numerical resolution and the presence of random noise (local fluctuations in density of a factor of ~2) in the initial gas distribution. We conclude that such noise has little impact on the fragmentation process. [source]


Experimental and theoretical charge-density study of a tetranuclear cobalt carbonyl complex

ACTA CRYSTALLOGRAPHICA SECTION B, Issue 6 2009
Jacob Overgaard
Details of the complex bonding environment present in the molecular centre of an alkyne-bridged dicobalt complex have been examined using a combination of experimental and theoretical charge-density modelling for two compounds which share a central Co2C2 tetrahedral moiety as their common motif. Topological analysis of the experimental electron density illustrates the problem of separating the Co-C bond-critical points (b.c.p.s) from the intervening ring-critical point (r.c.p.), due largely to the flat nature of the electron density in the CoC2 triangles. Such a separation of critical points is immediately obtained from a topological analysis of the theoretical electron density as well as from the multipole-projected theoretical density; however, the addition of random noise to the theoretical structure factors prior to multipole modelling leads to a failure in consistently distinguishing two b.c.p.s and one r.c.p. in such close proximity within the particular environment of this Co2C2 centre. [source]


Nondipolar Content of T Wave Derived from a Myocardial Source Simulation with Increased Repolarization Inhomogeneity

ANNALS OF NONINVASIVE ELECTROCARDIOLOGY, Issue 2 2009
Milos Kesek M.D., Ph.D.
Background: Several conditions with repolarization disturbances are associated with increased levels of nondipolar components of the T wave. The nondipolar content has been proposed as a measure of repolarization inhomogeneity. This computer simulation study examines the link between increased nondipolar components and increased repolarization inhomogeneity in an established model. Methods: The simulation was performed with the Ecgsim software, which uses the equivalent double-layer source model. In the model, the shape of the transmembrane potential is derived from biological recordings. Increased repolarization inhomogeneity was simulated globally by increasing the variance in action potential duration and locally by introducing changes mimicking acute myocardial infarction. We synthesized surface ECG recordings with 12, 18, and 300 leads. The T-wave residue was calculated by singular value decomposition. The study examined the effects of the number of ECG leads, changes in the definition of the end of the T wave, and random noise added to the signal. Results: A normal myocardial source gave a low level of nondipolar content. Increased nondipolar content was observed in the two types of increased repolarization inhomogeneity. Noise gave a large increase in the nondipolar content. The sensitivity of the result to noise increased when a higher number of principal components were used in the computation. Conclusions: The nondipolar content of the T wave was associated with repolarization inhomogeneity in the computer model. The measure was very sensitive to noise, especially when principal components of high order were included in the computations. An increased number of ECG leads resulted in an increased signal-to-noise ratio. [source]
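The nondipolar-content calculation referred to above can be sketched as follows: a singular value decomposition of the multi-lead T-wave matrix, with the first three components treated as the dipolar part and the power in the remaining components reported as the relative T-wave residue. The synthetic leads, lead count, and normalization below are assumptions of this illustration, not the study's exact procedure.

```python
# Relative T-wave residue (nondipolar content) from an SVD of the multi-lead T-wave matrix.
import numpy as np

def t_wave_residue(t_wave, n_dipolar=3):
    # t_wave: (n_leads x n_samples) matrix of T-wave amplitudes.
    s = np.linalg.svd(t_wave, compute_uv=False)
    total_power = np.sum(s**2)
    nondipolar_power = np.sum(s[n_dipolar:] ** 2)
    return nondipolar_power / total_power

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
# Build a mostly dipolar synthetic T wave (three basis shapes mixed into 12 leads) plus noise.
basis = np.vstack([np.sin(np.pi * t), np.sin(2 * np.pi * t), t * (1 - t)])
leads = rng.standard_normal((12, 3)) @ basis + 0.01 * rng.standard_normal((12, 200))
print(t_wave_residue(leads))
```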