Regularization Parameter

Selected Abstracts


Iterative generalized cross-validation for fusing heteroscedastic data of inverse ill-posed problems

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2009
Peiliang Xu
SUMMARY The method of generalized cross-validation (GCV) has been widely used to determine the regularization parameter, because the criterion minimizes the average predicted residuals of measured data and depends solely on the data. This data-driven advantage is valid only if the variance-covariance matrix of the data can be represented as the product of a given positive definite matrix and an unknown scalar noise variance. In practice, important geophysical inverse ill-posed problems have often been solved by combining different types of data. The stochastic model of the measurements in this case contains a number of different unknown variance components. Although the weighting factors, or equivalently the variance components, have been shown to significantly affect joint inversion results of geophysical ill-posed problems, they have been either assumed known or chosen empirically. No solid statistical foundation is yet available to correctly determine the weighting factors of different types of data in joint geophysical inversion. We extend the GCV method to accommodate both the regularization parameter and the variance components. The extended version of GCV essentially consists of two steps: one estimates the variance components with the regularization parameter fixed, and the other determines the regularization parameter by the GCV method with the variance components fixed. We simulate two examples: a purely mathematical integral equation of the first kind, modified from the first example of Phillips (1962), and a typical geophysical example of downward continuation to recover the gravity anomalies on the surface of the Earth from satellite measurements. Based on the two simulated examples, we extensively compare the iterative GCV method with existing methods; the comparisons show that the method works well, correctly recovering the unknown variance components and determining the regularization parameter. In other words, our method lets the data speak for themselves, decide the correct weighting factors of different types of geophysical data, and determine the regularization parameter. In addition, we derive an unbiased estimator of the noise variance by correcting the biases of the regularized residuals. A simplified formula to save computation time is also given. The two new estimators of the noise variance are compared with six existing methods through numerical simulations, which show that the two new estimators perform as well as Wahba's estimator for highly ill-posed problems and outperform the existing methods for moderately ill-posed problems. [source]
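To make the two-step alternation concrete, the following is a minimal numerical sketch of an iterative GCV loop for a linear discrete model with several data groups of unknown variance. It is an illustration only, not Xu's estimator: the variance update below is a crude residual average without the bias correction the abstract describes, and the helper functions and group structure are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def tikhonov(A, W, b, lam):
    # Regularized weighted normal equations:
    # x = (A^T W A + lam I)^{-1} A^T W b
    n = A.shape[1]
    return np.linalg.solve(A.T @ W @ A + lam * np.eye(n), A.T @ W @ b)

def gcv_score(lam, A, W, b):
    # GCV(lam) = m ||W^{1/2}(b - A x)||^2 / trace(I - H)^2 with the
    # influence matrix H = A (A^T W A + lam I)^{-1} A^T W.
    m, n = A.shape
    M = np.linalg.solve(A.T @ W @ A + lam * np.eye(n), A.T @ W)
    r = b - A @ (M @ b)
    return m * (r @ W @ r) / np.trace(np.eye(m) - A @ M) ** 2

def weight_matrix(groups, sig2):
    # Diagonal weights 1/sigma_i^2 for each data group.
    return np.diag(np.concatenate(
        [np.full(len(bi), 1.0 / s2) for (_, bi), s2 in zip(groups, sig2)]))

def iterative_gcv(groups, n_iter=10, lam=1e-2):
    # groups: list of (A_i, b_i) data blocks with unknown variances.
    sig2 = [1.0] * len(groups)            # initial variance components
    A = np.vstack([Ai for Ai, _ in groups])
    b = np.concatenate([bi for _, bi in groups])
    for _ in range(n_iter):
        # Step 1: fix lam, re-estimate the variance components
        # (crude residual averages; Xu corrects their bias).
        x = tikhonov(A, weight_matrix(groups, sig2), b, lam)
        sig2 = [np.mean((bi - Ai @ x) ** 2) for Ai, bi in groups]
        # Step 2: fix the variances, choose lam by minimizing GCV
        # over a log10 grid.
        W = weight_matrix(groups, sig2)
        res = minimize_scalar(lambda t: gcv_score(10.0 ** t, A, W, b),
                              bounds=(-8.0, 2.0), method='bounded')
        lam = 10.0 ** res.x
    return tikhonov(A, weight_matrix(groups, sig2), b, lam), lam, sig2
```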


A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2004
Colin G. Farquharson
SUMMARY Two automatic ways of estimating the regularization parameter in underdetermined, minimum-structure-type solutions to non-linear inverse problems are compared: the generalized cross-validation and L-curve criteria. Both criteria provide a means of estimating the regularization parameter when only the relative sizes of the measurement uncertainties in a set of observations are known. The criteria, which are established components of linear inverse theory, are applied to the linearized inverse problem at each iteration in a typical iterative, linearized solution to the non-linear problem. The particular inverse problem considered here is the simultaneous inversion of electromagnetic loop-loop data for 1-D models of both electrical conductivity and magnetic susceptibility. The performance of each criterion is illustrated with inversions of a variety of synthetic and field data sets. In the great majority of examples tested, both criteria successfully determined suitable values of the regularization parameter, and hence credible models of the subsurface. [source]
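As an illustration of the second criterion, the sketch below picks the regularization parameter at the corner (point of maximum curvature) of the L-curve for a linear or linearized problem. The Tikhonov solve and the finite-difference curvature estimate are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def l_curve_corner(A, b, lambdas):
    # Trace the L-curve (log residual norm vs. log solution norm)
    # over a grid of regularization parameters and return the value
    # at the corner, i.e. the point of maximum curvature.
    rho, eta = [], []
    n = A.shape[1]
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
        rho.append(np.log(np.linalg.norm(b - A @ x)))
        eta.append(np.log(np.linalg.norm(x)))
    rho, eta = np.array(rho), np.array(eta)
    # Finite-difference curvature of the parametric curve (rho, eta).
    d1r, d1e = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    kappa = (d1r * d2e - d2r * d1e) / (d1r ** 2 + d1e ** 2) ** 1.5
    return lambdas[np.argmax(kappa)]

# e.g. lam = l_curve_corner(J, residual, np.logspace(-8, 2, 60))
```

In a Gauss-Newton treatment of the non-linear problem, such a selection would be repeated on the linearized subproblem at every iteration, as the abstract describes.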


2D data modelling by electrical resistivity tomography for complex subsurface geology

GEOPHYSICAL PROSPECTING, Issue 2 2006
E. Cardarelli
ABSTRACT A new tool for two-dimensional apparent-resistivity data modelling and inversion is presented. The study is developed according to the idea that the best way to deal with the ill-posedness of geoelectrical inverse problems is to construct algorithms that allow flexible control of the physical and mathematical elements involved in the resolution. The forward problem is solved through a finite-difference algorithm, whose main features are a versatile user-defined discretization of the domain and a new approach to the solution of the inverse Fourier transform. The inversion procedure is based on an iterative smoothness-constrained least-squares algorithm. As mentioned, the code is constructed to ensure flexibility in resolution. This is first achieved by starting the inversion from an arbitrarily defined model. In our approach, a Jacobian matrix is calculated at each iteration, using a generalization of Cohn's network sensitivity theorem. Another versatile feature is the possibility of introducing a priori information about the solution. Regions of the domain can be constrained to vary between two limits (lower and upper bounds) by using inequality constraints. A second possibility is to include the starting model in the objective function used to determine an improved estimate of the unknown parameters, and so to constrain the solution towards that model. Furthermore, the possibility either of defining a discretization of the domain that exactly fits the underground structures or of refining the mesh of the grid leads to more accurate solutions. Control of the mathematical elements in the inversion algorithm is also allowed: the smoothness matrix can be modified in order to penalize roughness in any one direction, and while an empirical way of assigning the regularization parameter (damping) is provided, the user can also assign it manually at each iteration. An appropriate tool was constructed for handling the inversion results, for example to correct reconstructed models and to check the effects of such changes on the calculated apparent resistivity. Tests on synthetic and real data, in particular in handling indeterminate cases, show that the flexible approach is a good way to build a detailed picture of the prospected area. [source]
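A minimal sketch of one iteration of a smoothness-constrained least-squares update of the general kind the abstract describes is given below. The roughness operator R, the reference model m_ref and the box projection standing in for true inequality constraints are assumptions of the sketch, not the authors' algorithm.

```python
import numpy as np

def smoothness_constrained_step(J, Wd, d_obs, d_pred, m, m_ref, R, beta,
                                lo=None, hi=None):
    # One Gauss-Newton update for the damped objective
    #   ||Wd (d_obs - d_pred - J dm)||^2
    #     + beta ||R (m + dm - m_ref)||^2 ,
    # where J is the Jacobian recomputed at each iteration, R the
    # roughness operator (weighting its rows differently penalizes
    # roughness more in one direction than another) and beta the
    # regularization (damping) parameter.
    lhs = J.T @ Wd.T @ Wd @ J + beta * R.T @ R
    rhs = J.T @ Wd.T @ Wd @ (d_obs - d_pred) - beta * R.T @ R @ (m - m_ref)
    dm = np.linalg.solve(lhs, rhs)
    m_new = m + dm
    if lo is not None:                  # crude box projection standing
        m_new = np.clip(m_new, lo, hi)  # in for inequality constraints
    return m_new
```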


Application of Krylov subspaces to SPECT imaging

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 5 2002
P. Calvini
The application of the conjugate gradient (CG) algorithm to the problem of data reconstruction in SPECT imaging indicates that most of the useful information is already contained in Krylov subspaces of small dimension, ranging from 9 (two-dimensional case) to 15 (three-dimensional case). On this basis, the proposed approach can be summarized as follows: construct a basis spanning a Krylov subspace of suitable dimension and project the projector-backprojector matrix (a 10⁶ × 10⁶ matrix in the three-dimensional case) onto that subspace. In this way, one is led to a problem of low dimensionality, for which regularized solutions can be obtained easily and quickly. The required SPECT activity map is expanded as a linear combination of the basis elements spanning the Krylov subspace, and the regularization acts by modifying the coefficients of this expansion. By means of a suitable graphical interface, the tuning of the regularization parameter(s) can be performed interactively, based on visual inspection of one or more slices cut from a reconstruction. © 2003 Wiley Periodicals, Inc. Int J Imaging Syst Technol 12, 217–228, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10026 [source]
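The following sketch illustrates the core idea under the assumption that the projector-backprojector matrix is symmetric and available matrix-free: build a small Krylov basis by a Lanczos recurrence, project, and regularize the small system. The Tikhonov-style damping of the projected coefficients is a stand-in for whatever regularization the interactive tuning applies, and no reorthogonalization is attempted.

```python
import numpy as np

def krylov_regularized(apply_H, g, k, lam):
    # Build an orthonormal basis V of the Krylov subspace
    # span{g, H g, ..., H^(k-1) g} by the Lanczos three-term
    # recurrence (H symmetric, applied matrix-free), project the
    # problem onto it and solve the small k x k regularized system.
    n = g.shape[0]
    V = np.zeros((n, k))
    V[:, 0] = g / np.linalg.norm(g)
    T = np.zeros((k, k))                    # projected matrix V^T H V
    for j in range(k):
        w = apply_H(V[:, j])
        for i in range(max(0, j - 1), j + 1):
            T[i, j] = V[:, i] @ w
            w -= T[i, j] * V[:, i]
        if j + 1 < k:
            beta = np.linalg.norm(w)
            T[j + 1, j] = beta
            V[:, j + 1] = w / beta
    # Tikhonov-damped coefficients in the subspace; the activity map
    # is the expansion f = V y, so the regularization acts only on
    # the expansion coefficients y.
    y = np.linalg.solve(T + lam * np.eye(k), V.T @ g)
    return V @ y
```

With k of order 10-15, as the abstract reports, the projected solve is trivially cheap, which is what makes interactive re-tuning of lam feasible.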


Constrained total least-squares computations for high-resolution image reconstruction with multisensors

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 1 2002
Michael K. Ng
Multiple undersampled images of a scene are often obtained by using a charge-coupled device (CCD) detector array of sensors that are shifted relative to each other by subpixel displacements. This geometry of sensors, where each sensor has a subarray of sensing elements of suitable size, has been popular in the task of attaining spatial resolution enhancement from the acquired low-resolution degraded images that comprise the set of observations. With the objective of improving the performance of the signal-processing algorithms in the presence of the ubiquitous perturbation errors of displacements around the ideal subpixel locations (because of imperfections in fabrication), in addition to noisy observations, the errors-in-variables or total least-squares method is used in this paper. A regularized constrained total least-squares (RCTLS) solution to the problem is given, which requires the minimization of a nonconvex and nonlinear cost functional. Simulations indicate that the choice of the regularization parameter significantly influences the quality of the solution. The L-curve method is used to select the theoretically optimum value of the regularization parameter, instead of the unsound but expedient trial-and-error approach. The expected superiority of this RCTLS approach over the conventional least-squares theory-based algorithm is substantiated by example. © 2002 John Wiley & Sons, Inc. Int J Imaging Syst Technol 12, 35–42, 2002 [source]
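For orientation, here is a heavily simplified sketch of a regularized TLS-type objective of the kind the abstract describes, minimized with a general-purpose quasi-Newton method; the classical TLS fidelity term and the Tikhonov-style penalty below are illustrative assumptions, not the authors' RCTLS functional.

```python
import numpy as np
from scipy.optimize import minimize

def regularized_tls(A, b, L, lam, x0):
    # Simplified regularized total least-squares surrogate: the
    # classical TLS fidelity ||Ax - b||^2 / (1 + ||x||^2), which
    # accounts for errors in both A and b, plus a smoothness penalty
    # lam ||L x||^2.  The cost is nonconvex and nonlinear, as the
    # abstract notes, so a local minimizer is all we get.
    def cost(x):
        r = A @ x - b
        return (r @ r) / (1.0 + x @ x) + lam * np.linalg.norm(L @ x) ** 2
    return minimize(cost, x0, method='BFGS').x
```

The parameter lam in such a sketch would then be chosen by the L-curve criterion rather than by trial and error, as the abstract argues.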


Modular solvers for image restoration problems using the discrepancy principle

NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 5 2002
Peter Blomgren
Abstract Many problems in image restoration can be formulated either as an unconstrained non-linear minimization problem, usually with a Tikhonov-like regularization, where the regularization parameter has to be determined; or as a fully constrained problem, where an estimate of the noise level, either the variance or the signal-to-noise ratio, is available. The two formulations are mathematically equivalent. However, in practice it is much easier to develop algorithms for the unconstrained problem, and it is not always obvious how to adapt such methods to solve the corresponding constrained problem. In this paper, we present a new method which can make use of any existing convergent method for the unconstrained problem to solve the constrained one. The new method is based on a Newton iteration applied to an extended system of non-linear equations, which couples the constraint and the regularized problem, but it does not require knowledge of the Jacobian of the irregularity functional. The existing solver is used only as a black box which, for a fixed regularization parameter, returns an improved solution to the unconstrained minimization problem given an initial guess. The new modular solver enables us to easily solve the constrained image restoration problem; the solver automatically identifies the regularization parameter during the iterative solution process. We present some numerical results, which indicate that even in the worst case the constrained solver requires only about twice as much work as the unconstrained one, and in some instances it can even be faster. Copyright © 2002 John Wiley & Sons, Ltd. [source]
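A minimal sketch of the modular idea, with a secant iteration standing in for the paper's Jacobian-free Newton scheme on the extended system: the black-box solver is called repeatedly with a fixed regularization parameter, and the outer iteration adjusts that parameter until the residual matches the known noise level (the discrepancy principle). The `black_box(lam, x0)` and `residual_norm(x)` interfaces are assumptions of the sketch.

```python
def discrepancy_solver(black_box, residual_norm, target,
                       lam0=1e-2, lam1=1e-1, tol=1e-4, max_iter=30):
    # Find lam such that residual_norm(x(lam)) == target, where
    # x(lam) = black_box(lam, x0) is the regularized solution returned
    # by any convergent unconstrained solver, warm-started from the
    # previous solution.  A secant update on lam stands in for the
    # paper's Newton iteration on the extended coupled system.
    x = black_box(lam0, None)
    f0 = residual_norm(x) - target
    for _ in range(max_iter):
        x = black_box(lam1, x)            # warm start from last solve
        f1 = residual_norm(x) - target
        if abs(f1) < tol * target:
            return x, lam1
        lam0, lam1 = lam1, lam1 - f1 * (lam1 - lam0) / (f1 - f0)
        lam1 = max(lam1, 1e-12)           # keep the parameter positive
        f0 = f1
    return x, lam1
```

Each outer step costs one call to the unconstrained solver, which is consistent with the abstract's observation that the constrained solve is at worst about twice the work of the unconstrained one.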


The practice of non-parametric estimation by solving inverse problems: the example of transformation models

THE ECONOMETRICS JOURNAL, Issue 3 2010
Frédérique Fève
Summary This paper considers a semi-parametric version of the transformation model φ(Y) = β′X + U under exogeneity or instrumental-variables assumptions (E(U|X) = 0 or E(U|W) = 0 for instruments W). This model is used as an example to illustrate the practice of estimation by solving linear functional equations. The paper focuses in particular on the data-driven selection of the regularization parameter and of the bandwidths. Simulation experiments illustrate the relevance of this approach. [source]


Analysis of a regularized, time-staggered discretization method and its link to the semi-implicit method

ATMOSPHERIC SCIENCE LETTERS, Issue 2 2005
J. Frank
Abstract A key aspect of the recently proposed Hamiltonian particle-mesh (HPM) method is its time-staggered discretization combined with a regularization of the continuous governing equations. In this article, the time discretization aspect of the HPM method is analysed for the linearized, rotating, shallow-water equations with orography, and the combined effect of time-staggering and regularization is compared analytically with the popular two-time-level semi-implicit time discretization of the unregularized equations. It is found that the two approaches are essentially equivalent, provided the regularization parameter is chosen appropriately in terms of the time step Δt. The article treats space as a continuum and, hence, its analysis is not limited to the HPM method. Copyright © 2005 Royal Meteorological Society [source]


Upper and lower bounds for natural frequencies: A property of the smoothed finite element methods

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 2 2010
Zhi-Qian Zhang
Abstract The node-based smoothed finite element method (NS-FEM) using triangular elements has been found capable of producing upper-bound solutions (to the exact solutions) for force-driven static solid mechanics problems, owing to its monotonic 'soft' behaviour. This paper aims to formulate an NS-FEM for lower bounds of the natural frequencies in free vibration problems. To make the NS-FEM temporally stable, an α-FEM is devised by combining the compatible and smoothed strain fields in a partition-of-unity fashion controlled by α ∈ [0, 1], so that the properties of the stiff FEM and the monotonically soft NS-FEM models can be properly combined for a desired purpose. For temporally stabilizing the NS-FEM, α is chosen small so that it acts like a 'regularization parameter' making the NS-FEM stable, yet with sufficient softness to ensure lower bounds for the natural frequency solution. Our numerical studies demonstrate that (1) using a proper α, the spurious non-zero-energy modes can be removed and the NS-FEM becomes temporally stable; (2) the stabilized NS-FEM becomes a general approach for solids to obtain lower bounds on the exact natural frequencies over the whole spectrum; (3) the α-FEM can even be tuned to obtain nearly exact natural frequencies. Copyright © 2010 John Wiley & Sons, Ltd. [source]
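The sketch below conveys the spirit of the α-FEM for natural frequencies, under a loudly flagged simplification: the stiff FEM and soft NS-FEM stiffness matrices are blended linearly in α, whereas the paper combines the strain fields themselves in a partition-of-unity fashion, which need not reduce to a linear matrix combination.

```python
import numpy as np
from scipy.linalg import eigh

def blended_frequencies(K_fem, K_ns, M, alpha):
    # Schematic alpha-FEM blend of the stiff standard-FEM stiffness
    # with the overly soft NS-FEM stiffness.  NOTE: this linear matrix
    # blend is an illustrative assumption, not the paper's strain-field
    # combination.
    K = alpha * K_fem + (1.0 - alpha) * K_ns
    # Generalized eigenproblem K phi = omega^2 M phi for the
    # free-vibration frequencies omega.
    w2 = eigh(K, M, eigvals_only=True)
    return np.sqrt(np.clip(w2, 0.0, None))

# Intuition from the abstract: alpha = 1 (stiff FEM) bounds the exact
# frequencies from above, alpha = 0 (soft NS-FEM) from below but with
# spurious modes; a small alpha stabilizes the soft model while keeping
# the lower-bound property.
```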


Automatic tuning of L2-SVM parameters employing the extended Kalman filter

EXPERT SYSTEMS, Issue 2 2009
Tingting Mu
Abstract: We show that the tuning of multiple parameters for a 2-norm support vector machine (L2-SVM) can be viewed as an identification problem for a nonlinear dynamic system. Benefiting from the reachable smooth nonlinearity of the L2-SVM, we propose to employ the extended Kalman filter to tune the kernel and regularization parameters automatically. The proposed method is validated on three public benchmark data sets and compared with the gradient descent approach as well as the genetic algorithm in terms of classification accuracy and computing time. Experimental results demonstrate the effectiveness of the proposed method: higher classification accuracy, faster training and less sensitivity to the initial settings. [source]
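As a sketch of the idea, the hyperparameters can be treated as the state of a nonlinear system whose output is a validation loss, updated by a standard EKF step. The `loss` callback and the finite-difference output Jacobian below are illustrative stand-ins for the analytic derivatives that the smooth L2-SVM makes available.

```python
import numpy as np

def ekf_tune(loss, theta0, target=0.0, n_steps=20, q=1e-4, r=1e-2):
    # State: theta = [log kernel width, log regularization constant].
    # Observation: a smooth validation loss, driven toward `target`
    # by extended Kalman filter updates.  q and r are process and
    # measurement noise variances (assumed scalars for the sketch).
    theta = np.asarray(theta0, dtype=float)
    d = len(theta)
    P = np.eye(d)                            # state covariance
    for _ in range(n_steps):
        y = loss(theta)
        eps = 1e-4                           # finite-difference step
        H = np.array([(loss(theta + eps * e) - y) / eps
                      for e in np.eye(d)])   # output Jacobian (1 x d)
        P = P + q * np.eye(d)                # predict: inflate covariance
        S = H @ P @ H + r                    # innovation variance
        K = P @ H / S                        # Kalman gain
        theta = theta + K * (target - y)     # measurement update
        P = P - np.outer(K, H @ P)           # covariance update
    return theta
```

Working in log coordinates keeps the kernel and regularization parameters positive without explicit constraints, which is one plausible reason the approach is reported to be insensitive to initial settings.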


Time continuity in cohesive finite element modeling

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 5 2003
Katerina D. Papoulia
Abstract We introduce the notion of time continuity for the analysis of cohesive zone interface finite element models. We focus on 'initially rigid' models, in which an interface is inactive until the traction across it reaches a critical level. We argue that methods in this class are time discontinuous unless special provision is made to avoid it. Time discontinuity leads to pitfalls in numerical implementations: oscillatory behaviour, non-convergence in time and dependence on non-physical regularization parameters. These problems arise at least partly from the attempt to extend uniaxial traction-displacement relationships to multiaxial loading. We also argue that any formulation of a time-continuous functional traction-displacement cohesive model entails encoding the value of the traction components at incipient softening into the model, and we exhibit an example of such a model. Most of our numerical experiments concern explicit dynamics. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Parameter identification with weightless regularization

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2001
Tomonari Furukawa
Although regularization has increased the popularity of parameter identification through its capability of deriving a stable solution, a significant problem is that the solution depends upon the regularization parameters chosen. This paper presents a technique for deriving solutions without the use of such parameters and, further, an optimization method that can work efficiently for the problems of concern. Numerical examples show that the technique can efficiently search for appropriate solutions. Copyright © 2001 John Wiley & Sons, Ltd. [source]


Analysis of the bounded variation and the G regularization for nonlinear inverse problems

MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 9 2010
I. Cimrák
Abstract We analyze the energy method for inverse problems. We study the unconstrained minimization of an energy functional consisting of a least-squares fidelity term and two regularization terms: the seminorm in the BV space and the norm in the G space. We consider a coercive (non)linear operator modelling the forward problem. We establish uniqueness and stability results for the minimization problems; the stability is studied with respect to perturbations in the data, in the operator, and in the regularization parameters. We also establish convergence results for the general minimization schemes. Copyright © 2009 John Wiley & Sons, Ltd. [source]
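In the notation suggested by the abstract, the functional being minimized has the schematic form below, with F the coercive (non)linear forward operator, d the data, and α, β the two regularization parameters with respect to which stability is studied; the exact weighting of the terms and any splitting of u in the paper may differ.

```latex
\min_{u}\; J(u) \;=\; \tfrac{1}{2}\,\bigl\|F(u)-d\bigr\|_{L^2}^2
\;+\; \alpha\,|u|_{BV} \;+\; \beta\,\|u\|_{G},
\qquad \alpha,\beta>0 .
```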


A direct method for a regularized least-squares problem

NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 8 2009
Tommy Elfving
Abstract We consider a linear system of the form A1x1 + A2x2 + ε = b1, where the vector ε consists of identically distributed random variables, all with mean zero, and the unknowns are split into two groups x1 and x2. In this model there are usually more unknowns than observations, and the resulting linear system is most often consistent, having an infinite number of solutions; hence some constraint on the parameter vector x is needed. One possibility is to avoid rapid variation in, e.g., the parameters x2. We formulate the problem as a partially regularized least-squares problem and propose a direct solution method based on the QR decomposition of matrix blocks. Further, we consider regularization using one and two regularization parameters, respectively. We also discuss the choice of regularization parameters and extend Reinsch's method to the case with two parameters; the cross-validation technique is treated as well. We present test examples taken from an application in modelling of substance transport in rivers. Copyright © 2009 John Wiley & Sons, Ltd. [source]
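A minimal sketch of the one-parameter case: stack the fidelity term and the penalty on x2 into a single least-squares problem and let a dense solver do the work. The paper's structured block-QR approach is more efficient; np.linalg.lstsq here is only a stand-in, and the operator L2 penalizing rapid variation in x2 is an assumption of the sketch.

```python
import numpy as np

def partially_regularized_ls(A1, A2, b, L2, mu):
    # Solve  min ||A1 x1 + A2 x2 - b||^2 + mu ||L2 x2||^2 :
    # only the x2 block is regularized, which discourages rapid
    # variation in those parameters alone while leaving x1 free.
    n1 = A1.shape[1]
    top = np.hstack([A1, A2])
    bot = np.hstack([np.zeros((L2.shape[0], n1)), np.sqrt(mu) * L2])
    rhs = np.concatenate([b, np.zeros(L2.shape[0])])
    x, *_ = np.linalg.lstsq(np.vstack([top, bot]), rhs, rcond=None)
    return x[:n1], x[n1:]        # split back into (x1, x2)
```

The two-parameter variant of the abstract would add a second stacked block mu2 * L1 x1 (or a second operator on x2), with the pair (mu1, mu2) chosen by an extension of Reinsch's method or by cross-validation.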


Analysis of a regularized, time-staggered discretization applied to a vertical slice model

ATMOSPHERIC SCIENCE LETTERS, Issue 4 2006
Mark Dubal
Abstract A regularized and time-staggered discretization of the two-dimensional, vertical slice Euler equation set is described and analysed. A linear normal mode analysis of the time-discrete system indicates that unconditional stability is obtained, for appropriate values of the regularization parameters, for both the hydrostatic and non-hydrostatic cases. Furthermore, when these parameters take their optimal values, the stability behaviour of the normal modes is identical to that obtained from a semi-implicit discretization of the unregularized equations. © Crown Copyright 2006. Reproduced with the permission of the Controller of HMSO. Published by John Wiley & Sons, Ltd. [source]