Least-squares

Kinds of Least-squares

  • generalized least-square
  • moving least-square

  • Terms modified by Least-squares

  • least-square algorithm
  • least-square analysis
  • least-square approximation
  • least-square criterion
  • least-square estimation
  • least-square estimator
  • least-square fit
  • least-square functional
  • least-square method
  • least-square methods
  • least-square procedure
  • least-square refinement
  • least-square regression
  • least-square regression analysis

  • Selected Abstracts


    Least-square-based radial basis collocation method for solving inverse problems of Laplace equation from noisy data

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 1 2010
    Xian-Zhong Mao
Abstract The inverse problem of the 2D Laplace equation involves estimating unknown boundary values or the locations of boundary shape from noisy observations on over-specified boundary or internal data points. The application of the radial basis collocation method (RBCM), a meshless and non-iterative numerical scheme, directly reduces this inverse boundary value problem (IBVP) to a single-step solution of a system of linear algebraic equations in which the coefficient matrix is inherently ill-conditioned. To resolve the instability observed in the conventional RBCM, this paper proposes an effective procedure that builds an over-determined linear system and combines it with a least-squares technique to restore the stability of the solution. The present work investigates three examples of IBVPs using over-specified boundary conditions or internal data with simulated noise and obtains stable and accurate results. The results show that the least-square-based radial basis collocation method (LS-RBCM) offers a significant advantage of good stability against large noise levels compared with the conventional RBCM. Copyright © 2010 John Wiley & Sons, Ltd. [source]
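
    A minimal numerical sketch of the stabilization idea (not the authors' code: the 1-D geometry, multiquadric kernel, shape parameter and noise level are illustrative assumptions). The point is that assembling more collocation equations than RBF coefficients and solving in a least-squares sense avoids inverting a square, ill-conditioned collocation matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
centres = np.linspace(0.0, 1.0, 20)   # RBF centres (one unknown coefficient each)
colloc = np.linspace(0.0, 1.0, 80)    # over-specified collocation points
c = 0.3                               # multiquadric shape parameter (assumed)

def mq(x, xc):
    # Multiquadric radial basis function.
    return np.sqrt((x - xc) ** 2 + c ** 2)

A = mq(colloc[:, None], centres[None, :])        # 80 x 20 over-determined system
u_exact = np.sin(np.pi * colloc)                 # stand-in for boundary/internal data
b = u_exact + 0.05 * rng.standard_normal(colloc.size)   # simulated noise

# Least-squares solve of the rectangular system restores stability.
coef, *_ = np.linalg.lstsq(A, b, rcond=None)
print("rms misfit:", np.sqrt(np.mean((A @ coef - b) ** 2)))
```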


    On-line identification of non-linear hysteretic structural systems using a variable trace approach

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 9 2001
    Jeng-Wen Lin
Abstract In this paper, an adaptive on-line parametric identification algorithm based on the variable trace approach is presented for the identification of non-linear hysteretic structures. At each time step, this recursive least-square-based algorithm updates the diagonal elements of the adaptation gain matrix by comparing the values of the estimated parameters between two consecutive time steps. Such an approach enforces a smooth convergence of the parameter values, allows fast tracking of parameter changes and remains adaptive as time progresses. The effectiveness and efficiency of the proposed algorithm are shown by considering the effects of excitation amplitude, measurement units, larger sampling time intervals and measurement noise. The cases of exact-, under- and over-parameterization of the structural model have been analysed. The proposed algorithm is also quite effective in identifying time-varying structural parameters to simulate cumulative damage in structural systems. Copyright © 2001 John Wiley & Sons, Ltd. [source]
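
    A compact sketch of the recursive least-squares core of such an algorithm (standard RLS with a forgetting factor on synthetic data; the paper's variable-trace update of the diagonal of the gain matrix is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
theta_true = np.array([0.8, -0.4, 1.5])   # "structural" parameters to track
theta = np.zeros(n)                       # running estimate
P = 1e3 * np.eye(n)                       # adaptation gain (covariance) matrix
lam = 0.98                                # forgetting factor keeps the filter adaptive

for _ in range(500):
    phi = rng.standard_normal(n)          # regressor at this time step
    y = phi @ theta_true + 0.05 * rng.standard_normal()
    k = P @ phi / (lam + phi @ P @ phi)   # gain vector
    theta = theta + k * (y - phi @ theta) # parameter update from the residual
    P = (P - np.outer(k, phi @ P)) / lam  # gain-matrix update

print(theta)   # approaches theta_true
```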


    Arbitrary placement of local meshes in a global mesh by the interface-element method (IEM)

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 15 2003
Hyun-Gyu Kim
Abstract A new method is proposed to place local meshes in a global mesh with the aid of the interface-element method (IEM). The interface-elements use moving least-squares (MLS)-based shape functions to join partitioned finite-element domains with non-matching interfaces. The supports of nodes are defined so as to satisfy the continuity condition on the interfaces by introducing pseudonodes on the boundaries of interface regions. In particular, the weight functions of nodes on the boundaries of interface regions span only neighbouring nodes, ensuring that the resulting shape functions are identical to those of the adjoining finite elements. The completeness of the shape functions of the interface-elements up to the order of the basis provides a reasonable transfer of strain fields through the non-matching interfaces between partitioned domains. Owing to these advantages of the IEM, local meshes can easily be inserted at arbitrary places in a global mesh. Several numerical examples show the effectiveness of this technique for the modelling of local regions in a global domain. Copyright © 2003 John Wiley & Sons, Ltd. [source]


    Anomalies in the Foundations of Ridge Regression: Some Clarifications

    INTERNATIONAL STATISTICAL REVIEW, Issue 2 2010
    Prasenjit Kapat
Summary Several anomalies in the foundations of ridge regression from the perspective of constrained least-squares (LS) problems were pointed out by Jensen & Ramirez (2008). Some of these so-called anomalies, attributed to the non-monotonic behaviour of the norm of unconstrained ridge estimators and the consequent lack of sufficiency of Lagrange's principle, are shown to be incorrect. It is noted in this paper that, for a fixed Y, the norms of unconstrained ridge estimators corresponding to a given basis are indeed strictly monotone. Furthermore, the conditions for sufficiency of Lagrange's principle are valid for a suitable range of the constraint parameter. The discrepancy arose in the context of one data set due to confusion between estimates of the parameter vector β corresponding to different parametrizations (choices of basis) and/or constraint norms. In order to avoid such confusion, it is suggested that the parameter β corresponding to each basis be labelled appropriately. [source]
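
    The monotonicity claim is easy to verify numerically. A small sketch with synthetic X and y, where β(k) = (X'X + kI)⁻¹X'y is the unconstrained ridge estimator for ridge parameter k:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 4))
y = rng.standard_normal(50)

ks = np.linspace(0.0, 10.0, 25)
norms = [np.linalg.norm(np.linalg.solve(X.T @ X + k * np.eye(4), X.T @ y))
         for k in ks]
# For a fixed y and basis, the ridge-estimator norm decreases strictly in k.
assert all(a > b for a, b in zip(norms, norms[1:]))
```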


Application of support vector regression for developing soft sensors for nonlinear processes

    THE CANADIAN JOURNAL OF CHEMICAL ENGINEERING, Issue 5 2010
    Saneej B. Chitralekha
Abstract The field of soft sensor development has gained significant importance in the recent past with the development of efficient and easily employable computational tools for this purpose. The basic idea is to convert the information contained in the input–output data collected from the process into a mathematical model. Such a mathematical model can be used as a cost-efficient substitute for hardware sensors. The Support Vector Regression (SVR) tool is one such computational tool that has recently received much attention in the system identification literature, especially because of its successes in building nonlinear blackbox models. The main feature of the algorithm is the use of a nonlinear kernel transformation to map the input variables into a feature space so that their relationship with the output variable becomes linear in the transformed space. The method has excellent generalisation capabilities for high-dimensional nonlinear problems because kernels such as the radial basis function have good approximation properties. Another attractive feature of the method is its convex optimization formulation, which eliminates the problem of local minima while identifying the nonlinear models. In this work, we demonstrate the application of SVR as an efficient and easy-to-use tool for developing soft sensors for nonlinear processes. In an industrial case study, we illustrate the development of a steady-state Melt Index soft sensor for an industrial-scale ethylene vinyl acetate (EVA) polymer extrusion process using SVR. The SVR-based soft sensor, valid over a wide range of melt indices, outperformed the existing nonlinear least-square-based soft sensor in terms of lower prediction errors. In the two remaining case studies, we demonstrate the application of SVR for developing soft sensors in the form of dynamic models for two nonlinear processes: a simulated pH neutralisation process and a laboratory-scale twin screw polymer extrusion process. A heuristic procedure is proposed for developing a dynamic nonlinear-ARX model-based soft sensor using SVR, in which the optimal delay and orders are determined automatically from the input–output data. [source]
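
    A hypothetical soft-sensor sketch in the same spirit, using scikit-learn's SVR with an RBF kernel on synthetic process data (the input variables, hyperparameters and train/test split are illustrative assumptions, not the paper's industrial set-up):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, size=(200, 3))   # e.g. temperatures, pressures, speed
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(200)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:150], y[:150])                 # "historical" plant data
pred = model.predict(X[150:])               # soft-sensor estimates on new data
print("held-out RMSE:", np.sqrt(np.mean((pred - y[150:]) ** 2)))
```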


    Dynamic Sampling and Rendering of Algebraic Point Set Surfaces

    COMPUTER GRAPHICS FORUM, Issue 2 2008
    Gaël Guennebaud
    Abstract Algebraic Point Set Surfaces (APSS) define a smooth surface from a set of points using local moving least-squares (MLS) fitting of algebraic spheres. In this paper we first revisit the spherical fitting problem and provide a new, more generic solution that includes intuitive parameters for curvature control of the fitted spheres. As a second contribution we present a novel real-time rendering system of such surfaces using a dynamic up-sampling strategy combined with a conventional splatting algorithm for high quality rendering. Our approach also includes a new view dependent geometric error tailored to efficient and adaptive up-sampling of the surface. One of the key features of our system is its high degree of flexibility that enables us to achieve high performance even for highly dynamic data or complex models by exploiting temporal coherence at the primitive level. We also address the issue of efficient spatial search data structures with respect to construction, access and GPU friendliness. Finally, we present an efficient parallel GPU implementation of the algorithms and search structures. [source]


THE INTERACTION OF ANTISOCIAL PROPENSITY AND LIFE-COURSE VARYING PREDICTORS OF DELINQUENT BEHAVIOR: DIFFERENCES BY METHOD OF ESTIMATION AND IMPLICATIONS FOR THEORY

    CRIMINOLOGY, Issue 2 2007
    GRAHAM C. OUSEY
Recent criminological research has explored the extent to which stable propensity and life-course perspectives may be integrated to provide a more comprehensive explanation of variation in individual criminal offending. One line of these integrative efforts focuses on the ways that stable individual characteristics may interact with, or modify, the effects of life-course varying social factors. Given their consistency with the long-standing view that person–environment interactions contribute to variation in human social behavior, these theoretical integration attempts have great intuitive appeal. However, a review of past criminological research suggests that conceptual and empirical complexities have, so far, somewhat dampened the development of a coherent theoretical understanding of the nature of interaction effects between stable individual antisocial propensity and time-varying social variables. In this study, we outline and empirically assess several of the sometimes conflicting hypotheses regarding the ways that antisocial propensity moderates the influence of time-varying social factors on delinquent offending. Unlike some prior studies, however, we explicitly measure the interactive effects of stable antisocial propensity and time-varying measures of selected social variables on changes in delinquent offending. In addition, drawing on recent research suggesting that the relative ubiquity of interaction effects in past studies may stem partly from the poorly suited application of linear statistical models to delinquency data, we alternatively test our interaction hypotheses using least-squares and tobit estimation frameworks. Our findings suggest that the method of estimation matters, with interaction effects appearing readily in the former but not in the latter. The implications of these findings for future conceptual and empirical work on stable-propensity/time-varying social variable interaction effects are discussed. [source]


Parameter identification of framed structures using an improved finite element model-updating method, Part I: formulation and verification

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 5 2007
    Eunjong Yu
Abstract In this study, we formulate an improved finite element model-updating method to address the numerical difficulties associated with ill conditioning and rank deficiency. These complications are frequently encountered in model-updating problems, and occur when the identification of a larger number of physical parameters is attempted than is warranted by the information content of the experimental data. Based on the standard bounded variables least-squares (BVLS) method, which incorporates the usual upper/lower-bound constraints, the proposed method (henceforth referred to as BVLSrc) is equipped with novel sensitivity-based relative constraints. The relative constraints are automatically constructed using the correlation coefficients between the sensitivity vectors of the updating parameters. The veracity and effectiveness of BVLSrc are investigated through the simulated, yet realistic, forced-vibration testing of a simple framed structure using its frequency response function as input data. By comparing the results of BVLSrc with those obtained via the competing pure BVLS and regularization methods, we show that BVLSrc and regularization methods yield approximate solutions with similar and sufficiently high accuracy, while the pure BVLS method yields physically inadmissible solutions. We further demonstrate that BVLSrc is computationally more efficient because, unlike regularization methods, it does not require laborious a priori calculations to determine an optimal penalty parameter, and its results are far less sensitive to the initial estimates of the updating parameters. Copyright © 2006 John Wiley & Sons, Ltd. [source]
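
    The bounded-variables least-squares core of such a scheme can be sketched with SciPy's lsq_linear, which minimizes ||Ax - b||^2 subject to box constraints on x (the sensitivity-based relative constraints of BVLSrc are not shown; the system below is synthetic):

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(4)
A = rng.standard_normal((40, 5))            # sensitivity matrix (stand-in)
x_true = np.array([0.5, 1.2, 0.9, 1.8, 0.3])
b = A @ x_true + 0.01 * rng.standard_normal(40)

# Upper/lower bounds keep the updated physical parameters admissible.
res = lsq_linear(A, b, bounds=(0.1 * np.ones(5), 2.0 * np.ones(5)))
print(res.x)
```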


    Allometric scaling of maximum metabolic rate: the influence of temperature

    FUNCTIONAL ECOLOGY, Issue 4 2008
    C. R. White
Summary
    1. Maximum aerobic metabolic rate, measured in terms of the rate of oxygen consumption during exercise (VO2max), is well known to scale to body mass (M) with an exponent greater than the value of 0·75 predicted by models based on the geometry of systems that supply nutrients.
    2. Recently, the observed scaling for VO2max (∝M0·872) has been hypothesized to arise because of the temperature dependence of biological processes, and because large species show a greater increase in muscle temperature when exercising than do small species.
    3. Based on this hypothesis, we predicted that VO2max will be positively related to ambient temperature, because heat loss is restricted at high temperatures and body temperature is likely to be elevated to a greater extent than during exercise in the cold.
    4. This prediction was tested using a comparative phylogenetic generalized least-squares (PGLS) approach, and 34 measurements of six species of rodent (20·5–939 g) maximally exercising at temperatures from −16 to 30 °C.
    5. VO2max is unrelated to testing temperature, but is negatively related to acclimation temperature. We conclude that prolonged cold exposure increases exercise-induced VO2max by acting as a form of aerobic training in mammals, and that the elevated muscle temperatures of large species do not explain the scaling of VO2max across taxa. [source]


    Joint full-waveform analysis of off-ground zero-offset ground penetrating radar and electromagnetic induction synthetic data for estimating soil electrical properties

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2010
    D. Moghadas
SUMMARY A joint analysis of the full-waveform information content of ground penetrating radar (GPR) and electromagnetic induction (EMI) synthetic data was investigated to reconstruct the electrical properties of multilayered media. The GPR and EMI systems operate in zero-offset, off-ground mode and are designed using vector network analyser technology. The inverse problem is formulated in the least-squares sense. We compared four approaches for GPR and EMI data fusion. The first two techniques consisted of defining a single objective function, applying different weighting methods. As a first approach, we weighted the EMI and GPR data using the inverse of the data variance. The ideal point method was also employed as a second weighting scenario. The third approach is the naive Bayesian method and the fourth technique corresponds to GPR–EMI and EMI–GPR sequential inversions. Synthetic GPR and EMI data were generated for the particular case of a two-layered medium. Analysis of the objective function response surfaces from the first two approaches demonstrated the benefit of combining the two sources of information. However, due to the variations of the GPR and EMI model sensitivities with respect to the medium electrical properties, the formulation of an optimal objective function based on the weighting methods is not straightforward. While the Bayesian method relies on assumptions with respect to the statistical distribution of the parameters, it may constitute a relevant alternative for GPR and EMI data fusion. Sequential inversions of different configurations for a two-layered medium show that in the case of high conductivity or permittivity for the first layer, the inversion scheme cannot fully retrieve the soil hydrogeophysical parameters. In the case of low permittivity and conductivity for the first layer, however, the GPR–EMI inversion provides proper estimates of the parameter values compared with the EMI–GPR inversion. [source]
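
    The first fusion approach, weighting each data set by the inverse of its variance inside a single least-squares objective, can be sketched as follows (the two forward models are toy stand-ins for the GPR and EMI simulators):

```python
import numpy as np
from scipy.optimize import least_squares

def fwd_a(m, t):              # stand-in "GPR" forward model (assumed form)
    return m[0] * np.exp(-m[1] * t)

def fwd_b(m, t):              # stand-in "EMI" forward model (assumed form)
    return m[0] + m[1] * t

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 30)
m_true = np.array([1.0, 2.0])
s_a, s_b = 0.05, 0.20         # data standard deviations
d_a = fwd_a(m_true, t) + s_a * rng.standard_normal(t.size)
d_b = fwd_b(m_true, t) + s_b * rng.standard_normal(t.size)

def residuals(m):
    # Each block is scaled by 1/sigma, i.e. weighted by the inverse variance.
    return np.concatenate([(fwd_a(m, t) - d_a) / s_a,
                           (fwd_b(m, t) - d_b) / s_b])

print(least_squares(residuals, x0=[0.5, 0.5]).x)
```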


    Full waveform inversion of seismic waves reflected in a stratified porous medium

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2010
    Louis De Barros
    SUMMARY In reservoir geophysics applications, seismic imaging techniques are expected to provide as much information as possible on fluid-filled reservoir rocks. Since seismograms are, to some degree, sensitive to the mechanical parameters and fluid properties of porous media, inversion methods can be devised to directly estimate these quantities from the waveforms obtained in seismic reflection experiments. An inversion algorithm that uses a generalized least-squares, quasi-Newton approach is described to determine the porosity, permeability, interstitial fluid properties and mechanical parameters of porous media. The proposed algorithm proceeds by iteratively minimizing a misfit function between observed data and synthetic wavefields computed with the Biot theory. Simple models consisting of plane-layered, fluid-saturated and poro-elastic media are considered to demonstrate the concept and evaluate the performance of such a full waveform inversion scheme. Numerical experiments show that, when applied to synthetic data, the inversion procedure can accurately reconstruct the vertical distribution of a single model parameter, if all other parameters are perfectly known. However, the coupling between some of the model parameters does not permit the reconstruction of several model parameters at the same time. To get around this problem, we consider composite parameters defined from the original model properties and from a priori information, such as the fluid saturation rate or the lithology, to reduce the number of unknowns. Another possibility is to apply this inversion algorithm to time-lapse surveys carried out for fluid substitution problems, such as CO2 injection, since in this case only a few parameters may vary as a function of time. We define a two-step differential inversion approach which allows us to reconstruct the fluid saturation rate in reservoir layers, even though the medium properties are poorly known. [source]


    A covariance-adaptive approach for regularized inversion in linear models

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2007
    Christopher Kotsakis
SUMMARY The optimal inversion of a linear model under the presence of additive random noise in the input data is a typical problem in many geodetic and geophysical applications. Various methods have been developed and applied for the solution of this problem, ranging from the classic principle of least-squares (LS) estimation to other more complex inversion techniques such as Tikhonov–Phillips regularization, truncated singular value decomposition, generalized ridge regression and numerical iterative methods (Landweber, conjugate gradient), among others. In this paper, a new type of optimal parameter estimator for the inversion of a linear model is presented. The proposed methodology is based on a linear transformation of the classic LS estimator and it satisfies two basic criteria. First, it provides a solution for the model parameters that is optimally fitted (in an average quadratic sense) to the classic LS parameter solution. Second, it complies with an external user-dependent constraint that specifies a priori the error covariance (CV) matrix of the estimated model parameters. The formulation of this constrained estimator offers a unified framework for the description of many regularization techniques that are systematically used in geodetic inverse problems, particularly those methods that correspond to an eigenvalue filtering of the ill-conditioned normal matrix in the underlying linear model. The value of our study lies in the fact that it adds an alternative perspective on the statistical properties and the regularization mechanism of many inversion techniques commonly used in geodesy and geophysics, by interpreting them as a family of 'CV-adaptive' parameter estimators that obey a common optimality criterion and differ only in the pre-selected form of their error CV matrix under a fixed model design. [source]
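
    The eigenvalue-filtering view can be made concrete with a short SVD sketch: truncated SVD and Tikhonov regularization differ only in the filter factors applied to the components of the classic LS solution (synthetic system; these are the standard textbook filters, not the paper's CV-adaptive estimator itself):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
beta = U.T @ b

alpha = 0.5
f_tik = s**2 / (s**2 + alpha**2)       # Tikhonov filter factors
f_tsvd = (s >= s[3]).astype(float)     # truncated SVD: keep 4 largest values

x_tik = Vt.T @ (f_tik * beta / s)      # filtered LS solutions
x_tsvd = Vt.T @ (f_tsvd * beta / s)
print(np.linalg.norm(x_tik), np.linalg.norm(x_tsvd))
```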


    Contemporary kinematics of the southern Aegean and the Mediterranean Ridge

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2004
    Corné Kreemer
SUMMARY This study focuses on the kinematics of the southern Aegean and the Mediterranean Ridge (MR). A quantification of the deformation of the MR is essential both for evaluating physical models of accretionary wedges in general and for obtaining a self-consistent model of the surface deformation over the entire Nubia–Eurasia (NU–EU) plate boundary zone in the eastern Mediterranean. Previous kinematic studies have not properly considered the deformation field south of the Hellenic arc. Although this study focuses on the deformation field of the MR, we also discuss the kinematics of the southern Aegean, because the geometry and movement of the Hellenic arc determine to a large extent the kinematic boundary conditions for kinematic studies of the MR. We calculate a continuous velocity and strain rate field by interpolating model velocities that are fitted in a least-squares sense to published Global Positioning System (GPS) velocities. In the interpolation, we use information from a detailed data set of onshore and offshore active faulting to place constraints on the expected style and direction of the model strain rate field. In addition, we use the orientations of tracks left by seamounts travelling into the wedge to further constrain the offshore deformation pattern. Our model results highlight the presence of active shear partitioning within the Mediterranean Ridge. High compressional strain rates between the ridge crest and the deformation front accommodate approximately 60–70 per cent of the total motion over the wedge, and the outward growth rate of the frontal thrust is ∼4 mm yr−1. Strain partitioning within the wedge leads to 19–23 mm yr−1 of dextral motion at the wedge–backstop contact of the western MR, whereas the Pliny and Strabo trenches in the eastern MR accommodate 21–23 mm yr−1 of sinistral motion. The backstop of the western MR is kinematically part of the southern Aegean, which moves as a single block [the Aegean block (AE)] at 33–34 mm yr−1 in the direction of S24°W ± 1° towards stable Nubia (NU). Our model confirms that there is a clear divergence between the western and eastern Hellenic arc and we argue for a causal relation between the outward motion of the arc and the gradient in the regional geoid anomaly. Our results suggest that a significant driving source of the surface velocity field lies south of the Hellenic arc and only for the southeastern Aegean could there be some effect as a result of gravitational collapse associated with density differences within the overriding plate. [source]


    Constraints on earthquake epicentres independent of seismic velocity models

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2004
    T. Nicholson
SUMMARY We investigate the constraints that may be placed on earthquake epicentres without assuming a model for seismic wave-speed variation within the Earth. This allows location improvements achieved using 1-D or 3-D models to be put into perspective. A simple arrival-order misfit criterion is proposed that may be used in standard location schemes. The arrival-order misfit criterion does not use a seismic velocity model but simply assumes that the traveltime curve for a particular phase is monotonic with distance. Greater robustness is achieved by including a contribution from every possible pairing of stations, and the effect of timing inconsistencies is reduced by smoothing. An expression is found that relates the smoothing parameter to the number of observations. A typical event is studied in detail to demonstrate the properties of the misfit function. A pathological case is shown that illustrates that, like other location methods, the arrival-order misfit is susceptible to poor station distribution. 25 ground-truth and 5000 other teleseismically observed events are relocated and the arrival-order locations compared to those found using a least-squares approach and a 1-D earth model. The arrival-order misfit is found to be surprisingly accurate when more than 50 observations are used and may be useful in obtaining a model-independent epicentre estimate in regions of poorly known velocity structure or the starting point for another location scheme. [source]
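
    A minimal sketch of such an arrival-order misfit (an illustrative implementation, not the authors' code): for a trial epicentre, count the station pairs whose observed arrival order contradicts their epicentral-distance order.

```python
import numpy as np
from itertools import combinations

def arrival_order_misfit(trial, stations, t_obs):
    # Assumes only that traveltime increases monotonically with distance.
    d = np.linalg.norm(stations - trial, axis=1)
    return sum(1 for i, j in combinations(range(len(t_obs)), 2)
               if (d[i] - d[j]) * (t_obs[i] - t_obs[j]) < 0)

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
true_epi = np.array([0.4, 0.6])
t_obs = np.linalg.norm(stations - true_epi, axis=1)   # ideal noise-free picks

print(arrival_order_misfit(true_epi, stations, t_obs))              # 0 discordant pairs
print(arrival_order_misfit(np.array([2.0, 0.0]), stations, t_obs))  # > 0
```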


    Velocity analysis based on data correlation

    GEOPHYSICAL PROSPECTING, Issue 6 2008
    T. Van Leeuwen
    ABSTRACT Several methods exist to automatically obtain a velocity model from seismic data via optimization. Migration velocity analysis relies on an imaging condition and seeks the velocity model that optimally focuses the migrated image. This approach has been proven to be very successful. However, most migration methods use simplified physics to make them computationally feasible and herein lies the restriction of migration velocity analysis. Waveform inversion methods use the full wave equation to model the observed data and more complicated physics can be incorporated. Unfortunately, due to the band-limited nature of the data, the resulting inverse problem is highly nonlinear. Simply fitting the data in a least-squares sense by using a gradient-based optimization method is sometimes problematic. In this paper, we propose a novel method that measures the amount of focusing in the data domain rather than the image domain. As a first test of the method, we include some examples for 1D velocity models and the convolutional model. [source]


    Comparison of waveform inversion, part 3: amplitude approach

    GEOPHYSICAL PROSPECTING, Issue 4 2007
    Sukjoon Pyun
ABSTRACT In the second paper of this three-part series, we studied the case of conventional and logarithmic phase-only approaches to full-waveform inversion. Here, we concentrate on deriving amplitude-only approaches for both the conventional and logarithmic methods. We define two amplitude-only objective functions by simply assuming that the phase of the modelled wavefield is equal to that of the observed wavefield. We do this for both the conventional least-squares approach and the logarithmic approach of Shin and Min. We show that these functions can be optimized using the same reverse-time propagation algorithm as the full conventional methodology. Although the residuals in this case are not really residual wavefields, they can both be considered and utilized in that sense. In contrast to the case for our phase-only algorithms, we show through numerical tests that the conventional amplitude-only inversion is better than the logarithmic method. [source]


    A numerical comparison of 2D resistivity imaging with 10 electrode arrays

    GEOPHYSICAL PROSPECTING, Issue 5 2004
    Torleif Dahlin
ABSTRACT Numerical simulations are used to compare the resolution and efficiency of 2D resistivity imaging surveys for 10 electrode arrays. The arrays analysed include pole-pole (PP), pole-dipole (PD), half-Wenner (HW), Wenner-α (WN), Schlumberger (SC), dipole-dipole (DD), Wenner-β (WB), γ-array (GM), multiple or moving gradient array (GD) and midpoint-potential-referred measurement (MPR) arrays. Five synthetic geological models, simulating a buried channel, a narrow conductive dike, a narrow resistive dike, dipping blocks and covered waste ponds, were used to examine the surveying efficiency (anomaly effects, signal-to-noise ratios) and the imaging capabilities of these arrays. The responses to variations in the data density and noise sensitivities of these electrode configurations were also investigated using robust (L1-norm) inversion and smoothness-constrained least-squares (L2-norm) inversion for the five synthetic models. The results show the following. (i) GM and WN are less contaminated by noise than the other electrode arrays. (ii) The relative anomaly effects for the different arrays vary with the geological models. However, the relatively high anomaly effects of PP, GM and WB surveys do not always give a high-resolution image. PD, DD and GD can yield better resolution images than GM, PP, WN and WB, although they are more susceptible to noise contamination. SC is also a strong candidate but is expected to give more edge effects. (iii) The imaging quality of these arrays is relatively robust with respect to reductions in the data density of a multi-electrode layout within the tested ranges. (iv) The robust inversion generally gives better imaging results than the L2-norm inversion, especially with noisy data, except for the dipping block structure presented here. (v) GD and MPR are well suited to multichannel surveying and GD may produce images that are comparable to those obtained with DD and PD. Accordingly, the GD, PD, DD and SC arrays are strongly recommended for 2D resistivity imaging, where the final choice will be determined by the expected geology, the purpose of the survey and logistical considerations. [source]


Adaptive subtraction of multiples using the L1-norm

    GEOPHYSICAL PROSPECTING, Issue 1 2004
    A. Guitton
ABSTRACT A strategy for multiple removal consists of estimating a model of the multiples and then adaptively subtracting this model from the data by estimating shaping filters. A possible and efficient way of computing these filters is by minimizing the difference or misfit between the input data and the filtered multiples in a least-squares sense. Therefore, the signal is assumed to have minimum energy and to be orthogonal to the noise. Some problems arise when these conditions are not met. For instance, for strong primaries with weak multiples, we might fit the multiple model to the signal (primaries) and not to the noise (multiples). Consequently, when the signal does not exhibit minimum energy, we propose using the L1-norm, as opposed to the L2-norm, for the filter estimation step. This choice comes from the well-known fact that the L1-norm is robust to 'large' amplitude differences when measuring data misfit. The L1-norm is approximated by a hybrid L1/L2-norm minimized with an iteratively reweighted least-squares (IRLS) method. The hybrid norm is obtained by applying a simple weight to the data residual. This technique is an excellent approximation to the L1-norm. We illustrate our method with synthetic and field data where internal multiples are attenuated. We show that the L1-norm leads to much improved attenuation of the multiples when the minimum energy assumption is violated. In particular, the multiple model is fitted to the multiples in the data only, while preserving the primaries. [source]
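
    A compact IRLS sketch for adaptive subtraction on synthetic traces (a short filter with wrap-around edge effects ignored; the weight below is one common hybrid L1/L2 choice, not necessarily the paper's exact form): each reweighting iteration solves a weighted least-squares problem, and the weights damp large residuals so strong primaries are not fitted.

```python
import numpy as np

def irls_subtract(data, mult, nf=11, niter=10, eps=0.1):
    # Convolution matrix of the multiple model for an nf-point shaping filter.
    M = np.column_stack([np.roll(mult, k) for k in range(nf)])
    f = np.linalg.lstsq(M, data, rcond=None)[0]        # plain L2 start
    for _ in range(niter):
        r = data - M @ f
        w = 1.0 / np.sqrt(1.0 + (r / eps) ** 2)        # hybrid L1/L2 weights
        f = np.linalg.lstsq(M * w[:, None], w * data, rcond=None)[0]
    return data - M @ f                                # estimated primaries

rng = np.random.default_rng(7)
mult = rng.standard_normal(400)                        # multiple model
primaries = np.zeros(400)
primaries[100] = 5.0                                   # strong, sparse primary
data = primaries + np.convolve(mult, [0.8, -0.2], mode="same")
est = irls_subtract(data, mult)
print(est[100])   # primary largely preserved
```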


    A new method to discriminate between a valid IP response and EM coupling effects

    GEOPHYSICAL PROSPECTING, Issue 6 2002
    Jianping Xiang
ABSTRACT The problem of discrimination between a valid induced polarization (IP) response and electromagnetic (EM) coupling effects is considered and an effective solution is provided. First, a finite dimensional approximation to the Cole-Cole model is investigated. Using the least-squares approach, the parameters of the approximate model are obtained. Next, based on the analysis of overvoltage, a finite dimensional structure of the IP model is produced. Using this overvoltage-based structure, a specific finite dimensional approximation of the Cole-Cole model is proposed. Summarizing the analysis of the finite dimensional IP model, it is concluded that the proposed IP model, which fits the field data much better than the traditional Cole-Cole model, is essentially an RC-circuit. From a circuit-analysis point of view, it is well known that an electromagnetic effect can be described by an RL-circuit. The simulation results on experimental data support this view. According to this observation, a new method to discriminate between a valid IP response and EM coupling effects is proposed as follows: (i) use a special finite dimensional model for IP–EM systems; (ii) obtain the parameters of the model using a least-squares approach; (iii) separate the RC-type terms and the RL-type terms: the former model the IP behaviour, the latter represent the EM part. Simulation on experimental data shows that the method is very simple and effective. [source]


    Comparison of Linear Regression Models for Quantitative Geochemical Analysis: An Example Using X-Ray Fluorescence Spectrometry

    GEOSTANDARDS & GEOANALYTICAL RESEARCH, Issue 3 2005
    Mirna Guevara
analytical calibration; linear regression; geochemical reference materials; analytical geochemistry; error propagation law. This paper presents statistical aspects related to the calibration process and a comparison of different regression approaches of relevance to almost all analytical techniques. The models for ordinary least-squares (OLS), weighted least-squares (WLS) and maximum likelihood fitting (MLF) were evaluated and, as a case study, X-ray fluorescence (XRF) calibration curves for major elements in geochemical reference materials were used. The results showed that the WLS and MLF models were statistically more consistent than the usually applied OLS approach. The use of uncertainties on both independent and dependent variables during the calibration process, and the calculation of the final uncertainty on individual results using error propagation equations, are the novel aspects of this work. [source]
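
    The OLS-versus-WLS comparison can be sketched with statsmodels on a synthetic heteroscedastic calibration curve (the linear response and the uncertainty model are illustrative assumptions):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
conc = np.linspace(1.0, 50.0, 30)        # reference concentrations
sigma = 0.02 * conc                      # uncertainty grows with concentration
intensity = 1.0 + 3.0 * conc + sigma * rng.standard_normal(30)

X = sm.add_constant(conc)
ols = sm.OLS(intensity, X).fit()
wls = sm.WLS(intensity, X, weights=1.0 / sigma**2).fit()  # weights = 1/variance
print("OLS:", ols.params, " WLS:", wls.params)
```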


    Using multilevel models for assessing the variability of multinational resource use and cost data

    HEALTH ECONOMICS, Issue 2 2005
    Richard Grieve
    Abstract Multinational economic evaluations often calculate a single measure of cost-effectiveness using cost data pooled across several countries. To assess the validity of pooling international cost data the reasons for cost variation across countries need to be assessed. Previously, ordinary least-squares (OLS) regression models have been used to identify factors associated with variability in resource use and total costs. However, multilevel models (MLMs), which accommodate the hierarchical structure of the data, may be more appropriate. This paper compares these different techniques using a multinational dataset comprising case-mix, resource use and cost data on 1300 stroke admissions from 13 centres in 11 European countries. OLS and MLMs were used to estimate the effect of patient and centre-level covariates on the total length of hospital stay (LOS) and total cost. MLMs with normal and gamma distributions for the data within centres were compared. The results from the OLS model showed that both patient and centre-level covariates were associated with LOS and total cost. The estimates from the MLMs showed that none of the centre-level characteristics were associated with LOS, and the level of spending on health was the centre-level variable most highly associated with total cost. We conclude that using OLS models for assessing international variation can lead to incorrect inferences, and that MLMs are more appropriate for assessing why resource use and costs vary across centres. Copyright © 2004 John Wiley & Sons, Ltd. [source]
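
    The contrast between pooled OLS and a random-intercept multilevel model can be sketched with statsmodels on synthetic centre-grouped data (variable names and effect sizes are invented for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n_centres, n_pat = 13, 100
centre = np.repeat(np.arange(n_centres), n_pat)
centre_eff = rng.normal(0.0, 2.0, n_centres)[centre]   # between-centre variation
severity = rng.uniform(0.0, 10.0, centre.size)
los = 5.0 + 0.8 * severity + centre_eff + rng.normal(0.0, 1.0, centre.size)
df = pd.DataFrame({"los": los, "severity": severity, "centre": centre})

pooled = smf.ols("los ~ severity", data=df).fit()                 # ignores clustering
mlm = smf.mixedlm("los ~ severity", data=df, groups=df["centre"]).fit()
print(pooled.params["severity"], mlm.params["severity"])
```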


    Semi-analytical solution for a slug test in partially penetrating wells including the effect of finite-thickness skin

    HYDROLOGICAL PROCESSES, Issue 18 2008
    Hund-Der Yeh
    Abstract This paper presents a new semi-analytical solution for a slug test in a well partially penetrating a confined aquifer, accounting for the skin effect. This solution is developed based on the solution for a constant-flux pumping test and a formula given by Peres and co-workers in 1989. The solution agrees with that of Cooper and co-workers and the KGS model when the well is fully penetrating. The present solution can be applied to simulate the temporal and spatial head distributions in both the skin and formation zones. It can also be used to demonstrate the influences of skin type or skin thickness on the well water level and to estimate the hydraulic parameters of the skin and formation zones using a least-squares approach. The results of this study indicate that the determination of hydraulic conductivity using a conventional slug-test data analysis that neglects the presence of a skin zone will give an incorrect result if the aquifer has a skin zone. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Least-square support vector machine applied to settlement of shallow foundations on cohesionless soils

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 17 2008
    Pijush Samui
Abstract This paper examines the potential of the least-squares support vector machine (LSSVM) in the prediction of the settlement of shallow foundations on cohesionless soil. In LSSVM, Vapnik's ε-insensitive loss function has been replaced by a cost function that corresponds to a form of ridge regression. The LSSVM involves equality instead of inequality constraints and works with a least-squares cost function. The five input variables used for the LSSVM prediction of settlement are footing width (B), footing length (L), footing net applied pressure (P), average standard penetration test value (N) and footing embedment depth (d). Comparisons between the LSSVM and some of the traditional interpretation methods are also presented. The LSSVM has been used to compute error bars. The results presented in this paper clearly highlight that the LSSVM is a robust tool for the prediction of the settlement of shallow foundations on cohesionless soil. Copyright © 2008 John Wiley & Sons, Ltd. [source]
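
    A minimal LSSVM regression sketch on synthetic data (an illustrative implementation, not the paper's code): the equality constraints reduce training to a single linear solve with an RBF kernel matrix K, namely [0 1'; 1 K + I/gamma][b; alpha] = [0; y].

```python
import numpy as np

def rbf(A, B, s=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * s**2))

rng = np.random.default_rng(10)
X = rng.uniform(-3.0, 3.0, size=(60, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(60)

gamma, n = 100.0, X.shape[0]                 # regularization constant (assumed)
K = rbf(X, X)
A = np.vstack([np.concatenate([[0.0], np.ones(n)]),
               np.column_stack([np.ones(n), K + np.eye(n) / gamma])])
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

X_new = np.linspace(-3.0, 3.0, 5)[:, None]
print(rbf(X_new, X) @ alpha + b)             # LSSVM predictions
```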


    Parameter identification for lined tunnels in a viscoplastic medium

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 12 2002
    B. Lecampion
Abstract This paper is dedicated to the identification of the constitutive parameters of elasto-viscoplastic constitutive laws from measurements performed on deep underground cavities (typically tunnels). This inverse problem is solved by the minimization of a cost functional of least-squares type. The exact gradient is computed by the direct differentiation method and the descent is performed using the Levenberg–Marquardt algorithm. The method is presented for lined or unlined structures and is applied to an elasto-viscoplastic constitutive law of the Perzyna class. Several identification problems are presented in one and two dimensions for different tunnel geometries. The measurements used were obtained from a preliminary numerical simulation and perturbed with white noise. The identified responses match the measurements. We also discuss the use of the sensitivity analysis of the system, provided by the direct differentiation method, for the optimization of in situ monitoring. The sensitivity distribution in space and time indicates the measurement locations as well as the observation times needed for reliable identification. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    An accelerated algorithm for parameter identification in a hierarchical plasticity model accounting for material constraints

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 3 2001
    L. Simoni
Abstract The parameter identification procedure proposed in this paper is based on the solution of an inverse problem, which relies on the minimization of an error function of least-squares type. The ensuing optimization problem, which is a constrained one owing to the presence of physical links between the optimization parameters, is solved by means of a particular technique of the feasible-direction type, which is modified and improved when the problem reduces to an unconstrained one. The algorithm is particularly efficient in the presence of hierarchical material models. The numerical properties of the proposed procedure are discussed and its behaviour is compared with standard optimization methods when applied to constrained and unconstrained problems. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    An EM-like reconstruction method for diffuse optical tomography

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 9 2010
Caifang Wang
Abstract Diffuse optical tomography (DOT) is an optical imaging modality which provides the spatial distributions of optical parameters inside an object. The forward model of DOT is described by the diffusion approximation of the radiative transfer equation, while the inverse problem of DOT is to reconstruct the optical parameters from boundary measurements. In this paper, an EM-like iterative reconstruction method specifically for the steady-state DOT problem is developed. Previous iterative reconstruction methods are mostly based on the assumption that the measurement noise is Gaussian, and are of least-squares type. In this paper, with the assumption that the boundary measurements have independent and identical Poisson distributions, the inverse problem of DOT is solved by maximizing a log-likelihood functional with inequality constraints, and an EM-like reconstruction algorithm is then developed according to the Kuhn–Tucker condition. The proposed algorithm is a variant of the well-known EM algorithm. The performance of the proposed algorithm is tested with three-dimensional numerical simulation. Copyright © 2010 John Wiley & Sons, Ltd. [source]
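
    Under the Poisson-noise assumption, an EM-type reconstruction takes the classic multiplicative form; the generic sketch below is for a linear forward model y ~ Poisson(Ax) with x >= 0 (the standard EM iteration for Poisson inverse problems, not the paper's DOT-specific algorithm):

```python
import numpy as np

rng = np.random.default_rng(11)
A = rng.uniform(0.0, 1.0, size=(50, 10))     # stand-in linear forward operator
x_true = rng.uniform(1.0, 5.0, 10)
y = rng.poisson(A @ x_true).astype(float)    # Poisson-distributed measurements

x = np.ones(10)                              # positive starting estimate
colsum = A.sum(axis=0)
for _ in range(200):
    # Multiplicative EM update; preserves non-negativity automatically.
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / colsum
print(x)
```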


    Analysis of thick functionally graded plates by local integral equation method

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 8 2007
    J. Sladek
Abstract Analysis of functionally graded plates under static and dynamic loads is presented by the meshless local Petrov–Galerkin (MLPG) method. The plate bending problem is described by Reissner–Mindlin theory. Both isotropic and orthotropic material properties are considered in the analysis. A weak formulation for the set of governing equations in the Reissner–Mindlin theory with a unit test function is transformed into local integral equations considered on local subdomains in the mean surface of the plate. Nodal points are randomly spread on this surface and each node is surrounded by a circular subdomain, rendering integrals which can be simply evaluated. The meshless approximation based on the moving least-squares (MLS) method is employed in the numerical implementation. Numerical results for simply supported and clamped plates are presented. Copyright © 2006 John Wiley & Sons, Ltd. [source]


CBS versus GLS stabilization of the incompressible Navier–Stokes equations and the role of the time step as stabilization parameter

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 2 2002
    R. Codina
Abstract In this work we compare two apparently different stabilization procedures for the finite element approximation of the incompressible Navier–Stokes equations. The first is the characteristic-based split (CBS). It combines the characteristic Galerkin method to deal with convection-dominated flows with a classical splitting technique, which in some cases allows us to use equal velocity–pressure interpolations. The second approach is the Galerkin-least-squares (GLS) method, in which a least-squares form of the element residual is added to the basic Galerkin equations. It is shown that both formulations display similar stabilization mechanisms, provided the stabilization parameter of the GLS method is identified with the time step of the CBS approach. This identification can be understood from a formal Fourier analysis of the linearized problem. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Complex variable moving least-squares method: a meshless approximation technique

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 1 2007
    K. M. Liew
Abstract Based on the moving least-squares (MLS) approximation, we propose a new approximation method: the complex variable moving least-squares (CVMLS) approximation. With the CVMLS approximation, the trial function of a two-dimensional problem is formed with a one-dimensional basis function. The number of unknown coefficients in the trial function of the CVMLS approximation is smaller than in the trial function of the MLS approximation, and we can thus select fewer nodes in the meshless method that is formed from the CVMLS approximation than are required in the meshless method based on the MLS approximation, with no loss of precision. The meshless method that is derived from the CVMLS approximation also has greater computational efficiency. From the CVMLS approximation, we propose a new meshless method for two-dimensional elasticity problems, the complex variable meshless method (CVMM), and derive the formulae of the CVMM for two-dimensional elasticity problems. Compared with the conventional meshless method, the CVMM has greater precision and computational efficiency. For the purposes of demonstration, some selected numerical examples are solved using the CVMM. Copyright © 2006 John Wiley & Sons, Ltd. [source]
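
    A 1-D moving least-squares sketch showing the basic MLS machinery that the CVMLS approximation builds on (an illustrative implementation with a linear basis and a compactly supported Gaussian-type weight; the complex-variable construction itself is not reproduced):

```python
import numpy as np

def mls_1d(x_eval, x_nodes, u_nodes, radius=0.3):
    # At each evaluation point, fit a local linear basis by weighted
    # least squares with a weight centred on that point.
    u_h = np.empty_like(x_eval)
    for k, x in enumerate(x_eval):
        w = np.exp(-((x_nodes - x) / radius) ** 2)
        w[np.abs(x_nodes - x) > radius] = 0.0          # compact support
        P = np.column_stack([np.ones_like(x_nodes), x_nodes])
        a = np.linalg.solve(P.T @ (w[:, None] * P), P.T @ (w * u_nodes))
        u_h[k] = a[0] + a[1] * x
    return u_h

x_nodes = np.linspace(0.0, 1.0, 21)
u_nodes = np.sin(2 * np.pi * x_nodes)
x_eval = np.linspace(0.05, 0.95, 10)
err = mls_1d(x_eval, x_nodes, u_nodes) - np.sin(2 * np.pi * x_eval)
print("max error:", np.abs(err).max())
```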