Least-squares Methods

Selected Abstracts


Addressing non-uniqueness in linearized multichannel surface wave inversion

GEOPHYSICAL PROSPECTING, Issue 1 2009
Michele Cercato
ABSTRACT The multichannel analysis of surface waves (MASW) method is based on the inversion of observed Rayleigh-wave phase-velocity dispersion curves to estimate the shear-wave velocity profile of the site under investigation. This inverse problem is nonlinear and is often solved using 'local' or linearized inversion strategies. Among linearized inversion algorithms, least-squares methods are widely used in research and prevalent in commercial software; the main drawback of this class of methods is their limited capability to explore the model parameter space. The possibility for the estimated solution to be trapped in local minima of the objective function strongly depends on the degree of non-uniqueness of the problem, which can be reduced by an adequate model parameterization and/or by imposing constraints on the solution. In this article, a linearized algorithm based on inequality constraints is introduced for the inversion of observed dispersion curves; this provides a flexible way to insert a priori information as well as physical constraints into the inversion process. As linearized inversion methods are strongly dependent on the choice of the initial model and on the accuracy of partial-derivative calculations, these factors are carefully reviewed. Attention is also focused on the appraisal of the inverted solution, using resolution analysis and uncertainty estimation together with a posteriori effective-velocity modelling. The efficiency and stability of the proposed approach are demonstrated using both synthetic and real data; in the latter case, cross-hole S-wave velocity measurements are blind-compared with the results of the inversion process. [source]
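The kind of linearized, inequality-constrained least-squares update described above can be illustrated with a small numerical sketch. The snippet below is not Cercato's algorithm: the forward model, the layer parameterization, and the velocity bounds are invented placeholders, and SciPy's lsq_linear merely stands in for a generic box-constrained linear least-squares solver inside a damped Gauss-Newton loop. Because the abstract stresses sensitivity to the initial model and to partial-derivative accuracy, the sketch keeps both the starting model and the finite-difference Jacobian explicit.

```python
import numpy as np
from scipy.optimize import lsq_linear

def forward(m, freqs):
    """Hypothetical forward model: 'phase velocities' predicted from the
    layer shear velocities m. A stand-in only, not a real dispersion solver."""
    w = np.exp(-np.outer(freqs / 10.0, np.arange(len(m))))
    w /= w.sum(axis=1, keepdims=True)
    return w @ m

def jacobian_fd(m, freqs, h=1.0):
    """Finite-difference partial derivatives (their accuracy matters,
    as the abstract stresses)."""
    J = np.empty((len(freqs), len(m)))
    for j in range(len(m)):
        mp = m.copy()
        mp[j] += h
        J[:, j] = (forward(mp, freqs) - forward(m, freqs)) / h
    return J

def constrained_gn_step(m, freqs, d_obs, lb, ub, damping=1e-2):
    """One damped Gauss-Newton update with box (inequality) constraints:
    minimize ||J dm - r||^2 + damping ||dm||^2  subject to  lb <= m + dm <= ub."""
    r = d_obs - forward(m, freqs)
    J = jacobian_fd(m, freqs)
    A = np.vstack([J, np.sqrt(damping) * np.eye(len(m))])
    b = np.concatenate([r, np.zeros(len(m))])
    step = lsq_linear(A, b, bounds=(lb - m, ub - m))
    return m + step.x

freqs = np.linspace(5.0, 50.0, 20)                 # Hz
m_true = np.array([180.0, 300.0, 500.0])           # "true" layer velocities (m/s)
d_obs = forward(m_true, freqs)                     # synthetic dispersion data
m = np.full(3, 250.0)                              # initial model (a critical choice)
lb, ub = np.full(3, 100.0), np.full(3, 800.0)      # a priori velocity bounds
for _ in range(10):
    m = constrained_gn_step(m, freqs, d_obs, lb, ub)
print(np.round(m, 1))
```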


A comparison of modern data analysis methods for X-ray and neutron specular reflectivity data

JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 5 2007
A. Van Der Lee
Data analysis methods for specular X-ray or neutron reflectivity are compared. The methods that have been developed over the years can be classified into different types. The so-called classical methods are based on Parratt's or Abelès' formalism and rely on minimization using more or less evolved Levenberg–Marquardt or simplex routines. A second class uses the same formalism, but optimization is carried out using simulated annealing or genetic algorithms. A third class uses alternative expressions for the reflectivity, such as the Born approximation or the distorted Born approximation. This makes it easier to invert the specular data directly, coupled or not with classical least-squares or iterative methods using over-relaxation or charge-flipping techniques. A fourth class uses mathematical methods founded in scattering theory to determine the phase of the scattered waves, but has to be coupled in certain cases with (magnetic) reference layers. The strengths and weaknesses of a number of these methods are evaluated using simulated and experimental data. It is shown that genetic algorithms are by far superior to traditional and advanced least-squares methods, but that they fail when the layers are less well defined. In the latter case, the methods from the third or fourth class are the better choice, because they permit at least a first estimate of the density profile to be obtained, which can then be refined using the classical methods of the first class. It is also shown that different analysis programs may calculate different reflectivities for a similar chemical system. One reason for this is that the layers are represented either by chemical composition or by scattering-length or electron densities, between which the conversion of the absorptive part is not straightforward. A second important reason is that the routines that describe the convolution with the instrumental resolution function are not identical. [source]
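As a hedged illustration of the comparison above, the sketch below fits a single-layer reflectivity curve, computed with a bare Parratt recursion (no roughness, no resolution convolution), first with a local Levenberg–Marquardt least-squares routine and then with SciPy's differential_evolution, which plays the role of the genetic-algorithm class discussed in the article. None of the programs actually compared there are reproduced, and all parameter values are made up.

```python
import numpy as np
from scipy.optimize import least_squares, differential_evolution

def parratt_reflectivity(q, sld, thickness):
    """Specular reflectivity from the bare Parratt recursion.
    sld: scattering-length densities [ambient, layer 1, ..., substrate] (1/A^2);
    thickness: thicknesses of the internal layers only (A)."""
    q = np.asarray(q, dtype=complex)
    kz = [np.sqrt((q / 2.0) ** 2 - 4.0 * np.pi * (s - sld[0])) for s in sld]
    R = np.zeros_like(q)                          # nothing reflected inside the substrate
    for j in range(len(sld) - 2, -1, -1):         # recurse from the substrate upward
        r = (kz[j] - kz[j + 1]) / (kz[j] + kz[j + 1])
        phase = 1.0 if j == len(sld) - 2 else np.exp(2j * kz[j + 1] * thickness[j])
        R = (r + R * phase) / (1.0 + r * R * phase)
    return np.abs(R) ** 2

q = np.linspace(0.02, 0.30, 150)                      # momentum transfer (1/A)
true_sld, true_d = [0.0, 3.5e-6, 2.07e-6], [120.0]    # made-up film on an Si-like substrate
data = parratt_reflectivity(q, true_sld, true_d)

def residuals(p):                                 # p = [film SLD, film thickness]
    model = parratt_reflectivity(q, [0.0, p[0], 2.07e-6], [p[1]])
    return np.log10(model + 1e-15) - np.log10(data + 1e-15)   # fit on a log scale

# local (Levenberg-Marquardt) least squares: fast, but dependent on the start model
local = least_squares(residuals, x0=[1.0e-6, 60.0], method="lm")
# global, genetic-algorithm-like search: slower, far less dependent on the start
glob = differential_evolution(lambda p: np.sum(residuals(p) ** 2),
                              bounds=[(1e-7, 1e-5), (20.0, 300.0)], seed=0)
print("local :", local.x)
print("global:", glob.x)
```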


Prediction and nonparametric estimation for time series with heavy tails

JOURNAL OF TIME SERIES ANALYSIS, Issue 3 2002
PETER HALL
Motivated by prediction problems for time series with heavy-tailed marginal distributions, we consider methods based on 'local least absolute deviations' for estimating a regression median from dependent data. Unlike more conventional 'local median' methods, which are in effect based on locally fitting a polynomial of degree 0, techniques founded on local least absolute deviations have quadratic bias right up to the boundary of the design interval. Also in contrast to local least-squares methods based on linear fits, the order of magnitude of the variance does not depend on the tail-weight of the error distribution. To make these points clear, we develop theory describing local applications to time series of both least-squares and least-absolute-deviations methods, showing, for example, that in the case of heavy-tailed data the conventional local-linear least-squares estimator suffers from an additional bias term as well as increased variance. [source]
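A minimal sketch of the two estimators contrasted above: a local-linear fit computed once with a weighted L2 (least-squares) loss and once with a weighted L1 (least absolute deviations) loss, on simulated heavy-tailed data. The Gaussian kernel, the bandwidth, and the noise model are arbitrary choices for illustration and do not reproduce the authors' time-series asymptotics.

```python
import numpy as np
from scipy.optimize import minimize

def local_linear_fit(x, y, x0, h, loss="l2"):
    """Local-linear estimate of the regression function at x0.
    loss='l2' is ordinary local least squares; loss='l1' is the
    local least absolute deviations idea sketched here."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)           # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    # weighted least-squares fit (also used as the starting point for L1)
    beta_ls = np.linalg.lstsq(X * np.sqrt(w)[:, None],
                              y * np.sqrt(w), rcond=None)[0]
    if loss == "l2":
        return beta_ls[0]
    obj = lambda b: np.sum(w * np.abs(y - X @ b))    # weighted L1 objective
    res = minimize(obj, beta_ls, method="Nelder-Mead")
    return res.x[0]                                  # intercept = fitted median at x0

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 400))
y = np.sin(2 * np.pi * x) + rng.standard_t(df=1.5, size=400)   # heavy-tailed errors
grid = np.linspace(0.05, 0.95, 10)
ls  = [local_linear_fit(x, y, g, h=0.08, loss="l2") for g in grid]
lad = [local_linear_fit(x, y, g, h=0.08, loss="l1") for g in grid]
print(np.round(ls, 2))     # least squares: inflated variance under heavy tails
print(np.round(lad, 2))    # least absolute deviations: tracks sin(2*pi*x) more stably
```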


On the subdomain-Galerkin/least squares method for 2- and 3-D mixed elliptic problems with reaction terms

NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 6 2002
Suh-Yuh Yang
Abstract In this article we apply the subdomain-Galerkin/least squares method, first proposed by Chang and Gunzburger for first-order elliptic systems without reaction terms in the plane, to solve second-order non-selfadjoint elliptic problems in two- and three-dimensional bounded domains with triangular or tetrahedral regular triangulations. This method can be viewed as a combination of a direct cell-vertex finite volume discretization step and an algebraic least-squares minimization step, in which the pressure is approximated by piecewise linear elements and the flux by the lowest-order Raviart–Thomas space. This combined approach has the advantages of both finite volume and least-squares methods. Among other things, the combined method is not subject to the Ladyzhenskaya–Babuška–Brezzi condition, and the resulting linear system is symmetric and positive definite. An optimal error estimate in the H1(Ω) × H(div; Ω) norm is derived. An equivalent residual-type a posteriori error estimator is also given. © 2002 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 18: 738–751, 2002; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/num.10030. [source]
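The claim that a least-squares minimization step yields a symmetric positive definite system can be illustrated with a much simpler one-dimensional analogue. The sketch below is not the subdomain-Galerkin/least squares method itself (no triangulation, no Raviart–Thomas elements): it writes a 1D reaction-diffusion problem as a first-order system, discretizes it with cell-centred finite differences, and minimizes the discrete residual in the least-squares sense, so that the resulting normal matrix is symmetric and positive definite.

```python
import numpy as np

# Toy problem: -u'' + c u = f on (0,1), u(0)=u(1)=0, written as the first-order
# system q + u' = 0, q' + c u = f with q = -u'. Exact solution u = sin(pi x).
n, c = 50, 1.0
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
xm = 0.5 * (x[:-1] + x[1:])                       # cell midpoints
f = (np.pi ** 2 + c) * np.sin(np.pi * xm)

# Unknowns: u_1..u_{n-1} (u_0 = u_n = 0 eliminated) followed by q_0..q_n.
nu, nq = n - 1, n + 1
A = np.zeros((2 * n, nu + nq))
b = np.zeros(2 * n)

def ucol(i):       # column index of u_i, or None if it is a boundary value (= 0)
    return i - 1 if 1 <= i <= n - 1 else None

for k in range(n):                                # two residuals per cell
    # residual 1: (u_{k+1} - u_k)/h + (q_k + q_{k+1})/2 = 0
    # residual 2: (q_{k+1} - q_k)/h + c (u_k + u_{k+1})/2 = f(midpoint)
    for i, su, sq in [(k, -1.0 / h, 0.5), (k + 1, 1.0 / h, 0.5)]:
        if ucol(i) is not None:
            A[2 * k, ucol(i)] += su
        A[2 * k, nu + i] += sq
    for i, su, sq in [(k, 0.5 * c, -1.0 / h), (k + 1, 0.5 * c, 1.0 / h)]:
        if ucol(i) is not None:
            A[2 * k + 1, ucol(i)] += su
        A[2 * k + 1, nu + i] += sq
    b[2 * k + 1] = f[k]

# Least-squares minimization of the discrete residual: the normal matrix
# N = A^T A is symmetric positive definite, so any SPD solver applies.
N, rhs = A.T @ A, A.T @ b
print("symmetric:", np.allclose(N, N.T), " min eigenvalue:", np.linalg.eigvalsh(N).min())
sol = np.linalg.solve(N, rhs)
u = sol[:nu]
print("max error in u:", np.abs(u - np.sin(np.pi * x[1:-1])).max())
```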


Design of experiments with unknown parameters in variance

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 3 2002
Valerii V. Fedorov
Abstract Model fitting when the variance function depends on unknown parameters is a common problem in many areas of research. Iterated estimators which are asymptotically equivalent to maximum likelihood estimators are proposed and their convergence is discussed. From a computational point of view, these estimators are very close to iteratively reweighted least-squares methods. The additive structure of the corresponding information matrices allows us to apply convex design theory, which leads to optimal design algorithms. We conclude with examples which illustrate how to bridge our general results with specific applied needs. In particular, a model with experimental costs is introduced and studied within the normalized design paradigm. Copyright © 2002 John Wiley & Sons, Ltd. [source]
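For readers unfamiliar with the computational side, the following is a generic sketch of iteratively reweighted least squares for a linear model whose error variance depends on unknown parameters. It is not the iterated estimator or the design algorithm of the paper; the log-linear variance model, the bias correction, and all data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(-1, 1, n)
X = np.column_stack([np.ones(n), x])            # mean model:     beta0 + beta1 * x
Z = np.column_stack([np.ones(n), x])            # log-variance model: gamma0 + gamma1 * x
beta_true, gamma_true = np.array([1.0, 2.0]), np.array([-1.0, 1.5])
y = X @ beta_true + rng.normal(size=n) * np.exp(0.5 * (Z @ gamma_true))

def wls(X, y, w):
    """Weighted least squares with weights w (inverse variances)."""
    sw = np.sqrt(w)
    return np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

beta = wls(X, y, np.ones(n))                    # start from ordinary least squares
gamma = np.zeros(Z.shape[1])
for _ in range(20):
    w = np.exp(-(Z @ gamma))                    # weights = 1 / estimated variance
    beta = wls(X, y, w)                         # re-estimate the mean parameters
    r2 = (y - X @ beta) ** 2
    # crude variance-parameter update: regress log squared residuals on Z;
    # the intercept is then shifted by -E[log chi^2_1] ~ 1.27 to reduce bias
    gamma = np.linalg.lstsq(Z, np.log(r2 + 1e-12), rcond=None)[0]
    gamma[0] += 1.2704
print("beta ", np.round(beta, 2), " (true", beta_true, ")")
print("gamma", np.round(gamma, 2), " (true", gamma_true, ")")
```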