Inversion Problem
Selected Abstracts

Joint inversion of multiple data types with the use of multiobjective optimization: problem formulation and application to the seismic anisotropy investigations
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2007
E. Kozlovskaya

SUMMARY In geophysical studies the problem of joint inversion of multiple experimental data sets obtained by different methods is conventionally considered as a scalar one. Namely, a solution is found by minimization of a linear combination of functions describing the fit of the values predicted from the model to each set of data. In the present paper we demonstrate that this standard approach is not always justified and propose to consider a joint inversion problem as a multiobjective optimization problem (MOP), for which the misfit function is a vector. The method is based on analysis of two types of solutions to the MOP considered in the space of misfit functions (the objective space). The first is the set of complete optimal solutions, which minimize all the components of the vector misfit function simultaneously. The second is the set of Pareto optimal solutions, or trade-off solutions, for which no component of the vector misfit function can be decreased without increasing at least one other. We investigate the connection between the standard formulation of a joint inversion problem and the multiobjective formulation, and demonstrate that the standard formulation is a particular case of scalarization of a multiobjective problem using a weighted sum of component misfit functions (objectives). We illustrate the multiobjective approach with the non-linear problem of the joint inversion of shear wave splitting parameters and longitudinal wave residuals. Using synthetic data and real data from three passive seismic experiments, we demonstrate that random noise in the data and inexact model parametrization destroy the complete optimal solution, which degenerates into a fairly large Pareto set.
As a result, the non-uniqueness of the joint inversion problem increases. If random noise in the data is the only source of uncertainty, the Pareto set expands around the true solution in the objective space; in this case the 'ideal point' method of scalarization of multiobjective problems can be used. If the uncertainty is due to inexact model parametrization, the Pareto set in the objective space deviates strongly from the true solution; in this case all scalarization methods fail to find a solution close to the true one, and a change of model parametrization is necessary.

Migration velocity analysis and waveform inversion
GEOPHYSICAL PROSPECTING, Issue 6 2008
William W. Symes

ABSTRACT Least-squares inversion of seismic reflection waveform data can reconstruct remarkably detailed models of subsurface structure and take into account essentially any physics of seismic wave propagation that can be modelled. However, the waveform inversion objective has many spurious local minima, so convergence of descent methods (mandatory because of problem size) to useful Earth models requires accurate initial estimates of the long-scale velocity structure. Migration velocity analysis, on the other hand, is capable of correcting substantially erroneous initial estimates of velocity at long scales. It is based on prestack depth migration, which is in turn based on linearized acoustic modelling (the Born, or single-scattering, approximation). Two major variants of prestack depth migration, using binning of surface data and Claerbout's survey-sinking concept respectively, are in widespread use. Each type of prestack migration produces an image volume depending on redundant parameters and supplies a condition on the image volume which expresses consistency between the data and the velocity model, and is hence a basis for velocity analysis.
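The weighted-sum scalarization and Pareto trade-off discussed in the first abstract can be sketched with a toy pair of objectives. The quadratics below are illustrative stand-ins, not the seismic misfit functions of the paper:

```python
import numpy as np

# Two toy misfit functions of a single model parameter m.
# They stand in for the component misfits of a joint inversion.
def f1(m):
    return (m - 1.0) ** 2

def f2(m):
    return (m + 1.0) ** 2

models = np.linspace(-2.0, 2.0, 401)
F = np.column_stack([f1(models), f2(models)])  # objective space

# Pareto optimality: a model is non-dominated if no other model is
# at least as good in both objectives and strictly better in one.
def is_dominated(i):
    better_eq = np.all(F <= F[i], axis=1)
    strictly = np.any(F < F[i], axis=1)
    return np.any(better_eq & strictly)

pareto = np.array([m for i, m in enumerate(models) if not is_dominated(i)])

# Weighted-sum scalarization: each weight w selects one Pareto point.
for w in (0.25, 0.5, 0.75):
    total = w * F[:, 0] + (1.0 - w) * F[:, 1]
    m_best = models[np.argmin(total)]
    print(f"w = {w:.2f} -> m = {m_best:+.2f}")
```

Sweeping the weight traces out points on the Pareto front; the standard scalar joint inversion corresponds to fixing one such weight in advance.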
The survey-sinking (depth-oriented) approach to prestack migration is less subject to kinematic artefacts than the binning-based (surface-oriented) approach. Because kinematic artefacts strongly violate the consistency or semblance conditions, this observation suggests that velocity analysis based on depth-oriented prestack migration may be more appropriate in kinematically complex areas. An appropriate choice of objective (differential semblance) turns either form of migration velocity analysis into an optimization problem for which Newton-like methods exhibit little tendency to stagnate at non-global minima. The extended modelling concept links migration velocity analysis to the apparently unrelated waveform inversion approach to the estimation of Earth structure: from this point of view, migration velocity analysis is a solution method for the linearized waveform inversion problem. Extended modelling also provides a basis for a non-linear generalization of migration velocity analysis. Preliminary numerical evidence suggests a new approach to non-linear waveform inversion, which may combine the global convergence of velocity analysis with the physical fidelity of model-based data fitting.

Minimum weighted norm wavefield reconstruction for AVA imaging
GEOPHYSICAL PROSPECTING, Issue 6 2005
Mauricio D. Sacchi

ABSTRACT Seismic wavefield reconstruction is posed as an inversion problem in which, from inadequate and incomplete data, we attempt to recover the data we would have acquired with a denser distribution of sources and receivers. A minimum weighted norm interpolation method is proposed to interpolate prestack volumes before wave-equation amplitude-versus-angle imaging. Synthetic and real data were used to investigate the effectiveness of our wavefield reconstruction scheme when preconditioning seismic data for wave-equation amplitude-versus-angle imaging.
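The reconstruction idea in the abstract above can be sketched in one dimension: pose interpolation as a damped least-squares inverse problem with a sampling operator and a roughness-weighting norm. The operator, weights and damping below are illustrative choices, not those of the paper:

```python
import numpy as np

# Minimum-weighted-norm reconstruction of an irregularly sampled trace,
# a 1-D stand-in for interpolating decimated prestack volumes.
rng = np.random.default_rng(0)
n = 100
t = np.linspace(0, 1, n)
true = np.sin(2 * np.pi * 3 * t)           # "complete" wavefield

keep = np.sort(rng.choice(n, size=40, replace=False))
S = np.eye(n)[keep]                         # sampling operator
d = S @ true                                # incomplete data

# A second-difference roughness operator defines the weighted norm.
D = np.diff(np.eye(n), n=2, axis=0)
eps = 1e-2

# Solve (S^T S + eps D^T D) m = S^T d  (damped least squares).
A = S.T @ S + eps * (D.T @ D)
m = np.linalg.solve(A, S.T @ d)

print("rms reconstruction error:", np.sqrt(np.mean((m - true) ** 2)))
```

The damping term fills the unsampled traces with the smoothest model consistent with the recorded samples, which is the role the weighted norm plays in the reconstruction.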
An implementation of radiative transfer in the cosmological simulation code gadget
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 3 2009
Margarita Petkova

ABSTRACT We present a novel numerical implementation of radiative transfer in the cosmological smoothed particle hydrodynamics (SPH) simulation code gadget. It is based on a fast, robust and photon-conserving integration scheme in which the radiation transport problem is approximated in terms of moments of the transfer equation, using a variable Eddington tensor as a closure relation, following the Optically Thin Variable Eddington Tensor suggestion of Gnedin & Abel. We derive a suitable anisotropic diffusion operator for use in the SPH discretization of the local photon transport, and combine this with an implicit solver that guarantees robustness and photon conservation. This entails inverting a huge, sparsely populated matrix that is distributed in memory in our parallel code; we solve this task iteratively with a conjugate gradient scheme. Finally, to model photon sink processes we consider ionization and recombination of hydrogen, represented with a chemical network that is evolved with an implicit time integration scheme. We present several tests of our implementation, including single and multiple sources in static uniform density fields with and without temperature evolution, shadowing by a dense clump, and multiple sources in a static cosmological density field. All tests agree quite well with analytical computations or with predictions from other radiative transfer codes, except for shadowing. However, unlike most other radiative transfer codes presently in use for studying re-ionization, our new method can be used on the fly during dynamical cosmological simulations, allowing simultaneous treatment of galaxy formation and the re-ionization of the Universe.
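The implicit photon-transport step described above reduces to repeatedly solving a large, sparse, symmetric positive-definite linear system. A minimal matrix-free conjugate-gradient sketch follows; the tridiagonal diffusion-like matrix is a toy example, not the actual gadget discretization:

```python
import numpy as np

# Conjugate gradient for A x = b, with A supplied only as a
# matrix-vector product (the natural form for a distributed sparse matrix).
def conjugate_gradient(A_mul, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A_mul(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A_mul(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy SPD tridiagonal matrix (an implicit 1-D diffusion step),
# applied without ever forming the matrix explicitly.
n = 50
def A_mul(v):
    out = 2.5 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

b = np.ones(n)
x = conjugate_gradient(A_mul, b)
print("residual norm:", np.linalg.norm(b - A_mul(x)))
```

The matrix-free formulation matters here: only the product with a vector must be parallelized, which is why CG pairs well with a matrix distributed in memory.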
Particle Size Distributions from Static Light Scattering with Regularized Non-Negative Least Squares Constraints
PARTICLE & PARTICLE SYSTEMS CHARACTERIZATION, Issue 6 2006
Alejandro R. Roig

Abstract Simulated static light scattering data produced by several particle size distributions (PSDs) of spherical particles in dilute solution are analyzed with a regularized non-negative least squares method (r-NNLS). Strong fluctuations in broad PSDs obtained from direct application of NNLS are suppressed through an averaging procedure, as introduced long ago for the inversion problem in dynamic light scattering. A positive correlation was obtained between the best PSD from several averaging schemes and the condition number of the respective data transfer matrices. The performance of the method is found to be similar to that of constrained regularization (CONTIN), which also uses NNLS as a starting solution but incorporates a further regularizing strategy.

Thermal versus dynamical tropopause in upper-tropospheric balanced flow anomalies
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 562 2000
V. Wirth

Abstract This paper systematically investigates differences between the thermal and the dynamical tropopause for upper-tropospheric balanced flow anomalies. Idealized cyclonic and anticyclonic anomalies are considered, which are either axisymmetric or plane symmetric. Given a distribution of potential vorticity (PV), the inversion problem is solved numerically to obtain the corresponding balanced flow (i.e. wind and temperature). The control parameter is the aspect ratio of the PV anomaly, which governs the partitioning into a thermal and a dynamical anomaly. For PV anomalies of intermediate and tall aspect ratios, the location of the thermal tropopause differs significantly from that of the dynamical tropopause.
The thermal tropopause is rather indistinct for intermediate aspect ratios, while it is sharp and well defined for both tall and shallow anomalies. A barotropic deformation flow field superimposed on a plane-symmetric anomaly induces an ageostrophic wind which modifies the static stability throughout the PV anomaly, such that the thermal and dynamical tropopauses evolve differently. Recent observations concerning the correlation between the thermal and ozone tropopauses can be interpreted consistently in terms of the present results.
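As a grossly simplified analogue of the PV inversion described above, one can invert a barotropic vorticity anomaly for its balanced streamfunction and non-divergent wind spectrally on a doubly periodic domain. The abstract's calculation inverts full Ertel PV for wind and temperature; this 2-D toy keeps only the elliptic-inversion idea, with an invented Gaussian anomaly:

```python
import numpy as np

# Invert a barotropic vorticity anomaly q for the balanced flow:
# solve laplacian(psi) = q spectrally, then u = -dpsi/dy, v = dpsi/dx.
n, L = 128, 1.0e6                           # grid points, domain size (m)
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x)

# Gaussian cyclonic vorticity anomaly (amplitude 1e-4 s^-1, radius 100 km).
q = 1e-4 * np.exp(-((X - L/2)**2 + (Y - L/2)**2) / (1.0e5)**2)

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k)
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                              # avoid division by zero

psi_hat = -np.fft.fft2(q) / K2              # invert the Laplacian
psi_hat[0, 0] = 0.0                         # pick the zero-mean solution
psi = np.real(np.fft.ifft2(psi_hat))

# Balanced, non-divergent wind from the streamfunction.
u = -np.real(np.fft.ifft2(1j * KY * np.fft.fft2(psi)))
v = np.real(np.fft.ifft2(1j * KX * np.fft.fft2(psi)))
print("max balanced wind (m/s):", np.hypot(u, v).max())
```

The elliptic character of the inversion is what makes the balanced flow, and hence the dynamical tropopause, respond non-locally to the shape of the PV anomaly, which is the mechanism the aspect-ratio experiments above exploit.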