Regularization

Terms modified by Regularization

  • regularization parameter
  • regularization procedure
  • regularization scheme
  • regularization technique
  • regularization term

Selected Abstracts


    A graphical generalized implementation of SENSE reconstruction using Matlab

    CONCEPTS IN MAGNETIC RESONANCE, Issue 3 2010
    Hammad Omer
    Abstract Parallel acquisition of Magnetic Resonance Imaging (MRI) data has the potential to significantly reduce the scan time. SENSE is one of the many techniques for the reconstruction of parallel MRI images. A generalized algorithm for SENSE reconstruction, together with its theoretical background, is presented. This algorithm can be used for SENSE reconstruction for any acceleration factor between 2 and 8, for either phase-encode direction (horizontal or vertical), with or without regularization. The user can select a particular type of regularization. A GUI-based implementation of the algorithm is also given. Signal-to-noise ratio, artefact power, and the g-factor map are used to quantify the quality of reconstruction. The effects of different acceleration factors on these parameters are also discussed. The GUI-based implementation of SENSE reconstruction provides an easy selection of the various parameters needed for reconstruction of parallel MRI images and helps in an efficient reconstruction and analysis of the quality of reconstruction. © 2010 Wiley Periodicals, Inc. Concepts Magn Reson Part A 36A: 178–186, 2010. [source]
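
    The paper's implementation is a Matlab GUI, but the core unfolding step is compact enough to sketch. Below is a minimal numpy sketch of Tikhonov-regularized SENSE unfolding for one aliased pixel, assuming the coil sensitivity matrix is already known; the function name and toy data are illustrative, not the authors' code.

```python
import numpy as np

def sense_unfold(aliased, sens, lam=0.01):
    """Tikhonov-regularized SENSE unfolding at one aliased pixel.

    aliased : (n_coils,) complex coil values at the aliased pixel
    sens    : (n_coils, R) complex coil sensitivities at the R locations
              folded onto this pixel (R = acceleration factor)
    lam     : regularization parameter
    """
    S = sens
    A = S.conj().T @ S + lam * np.eye(S.shape[1])    # regularized normal matrix
    return np.linalg.solve(A, S.conj().T @ aliased)  # unfolded pixel values

# toy example: 4 coils, acceleration factor R = 2
rng = np.random.default_rng(0)
truth = np.array([1.0 + 0.5j, 0.3 - 0.2j])           # the two folded pixels
S = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))
aliased = S @ truth + 0.01 * rng.normal(size=4)      # noisy coil measurements
print(sense_unfold(aliased, S, lam=0.01))            # ~ truth
```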


    On Constraining Pilot Point Calibration with Regularization in PEST

    GROUND WATER, Issue 6 2009
    Michael N. Fienen
    Ground water model calibration has made great advances in recent years, with practical tools such as PEST being instrumental in making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models: pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, the additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into the underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. [source]
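
    As a hedged illustration of why the Tikhonov constraint matters: the sketch below (generic numpy, not PEST, and none of PEST's actual control variables) shows preferred-value regularization pulling an underdetermined pilot-point estimate toward prior values, with the regularization weight trading data misfit against departure from the prior.

```python
import numpy as np

def tikhonov_preferred_value(G, d, m0, lam):
    """Preferred-value Tikhonov solution of an underdetermined problem:
    minimize ||G m - d||^2 + lam^2 * ||m - m0||^2,
    pulling poorly informed pilot-point parameters toward m0."""
    n = G.shape[1]
    A = G.T @ G + lam**2 * np.eye(n)
    return np.linalg.solve(A, G.T @ d + lam**2 * m0)

# toy problem: 20 pilot-point parameters, only 5 observations
rng = np.random.default_rng(1)
G = rng.normal(size=(5, 20))
d = G @ rng.normal(size=20)
m0 = np.zeros(20)                       # preferred (prior) parameter values
for lam in (0.01, 1.0, 100.0):          # weak -> strong regularization
    m = tikhonov_preferred_value(G, d, m0, lam)
    print(lam, np.linalg.norm(G @ m - d), np.linalg.norm(m - m0))
```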


    Regularized semiparametric model identification with application to nuclear magnetic resonance signal quantification with unknown macromolecular base-line

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 3 2006
    Diana M. Sima
    Summary. We formulate and solve a semiparametric fitting problem with regularization constraints. The model that we focus on is composed of a parametric non-linear part and a nonparametric part that can be reconstructed via splines. Regularization is employed to impose a certain degree of smoothness on the nonparametric part. Semiparametric regression is presented as a generalization of non-linear regression, and all important differences that arise from the statistical and computational points of view are highlighted. We motivate the problem formulation with a biomedical signal processing application. [source]
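
    The role of the smoothness penalty on the nonparametric part can be illustrated with a discrete surrogate. The sketch below uses a Whittaker-style second-difference penalty rather than the authors' spline machinery, and assumes the parametric peak is already known so that only the base-line is estimated; all names and data are illustrative.

```python
import numpy as np

def whittaker_smooth(y, lam=10.0):
    """Discrete smoothness regularization (Whittaker smoother):
    minimize ||y - f||^2 + lam * ||D2 f||^2 for f on the sample grid,
    mimicking the role of the spline penalty on the nonparametric part."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)            # second-difference operator
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# toy semiparametric signal: smooth base-line + sharp parametric peak + noise
t = np.linspace(0, 1, 200)
rng = np.random.default_rng(2)
baseline = np.sin(2 * np.pi * t)
peak = 2.0 * np.exp(-0.5 * ((t - 0.5) / 0.02) ** 2)
y = baseline + peak + 0.1 * rng.normal(size=t.size)
baseline_hat = whittaker_smooth(y - peak, lam=50.0)  # peak assumed known here
```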


    Regularization of the non-stationary Schrödinger operator

    MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 5 2009
    Paula Cerejeiras
    Abstract In this paper we prove an L^p-decomposition where one of the components is the kernel of a first-order differential operator that factorizes the non-stationary Schrödinger operator Δ − i∂_t. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Regularization and preconditioning of KKT systems arising in nonnegative least-squares problems

    NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 1 2009
    Stefania Bellavia
    Abstract A regularized Newton-like method for solving nonnegative least-squares problems is proposed and analysed in this paper. A preconditioner for KKT systems arising in the method is introduced and spectral properties of the preconditioned matrix are analysed. A bound on the condition number of the preconditioned matrix is provided. The bound does not depend on the interior-point scaling matrix. Preliminary computational results confirm the effectiveness of the preconditioner and fast convergence of the iterative method established by the analysis performed in this paper. Copyright © 2008 John Wiley & Sons, Ltd. [source]
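
    A hedged sketch of the regularized problem only (not the authors' Newton-like interior-point method or their KKT preconditioner): Tikhonov-regularized nonnegative least squares can be solved by stacking a scaled identity under the matrix and calling a standard NNLS routine.

```python
import numpy as np
from scipy.optimize import nnls

def regularized_nnls(A, b, lam=1e-3):
    """Tikhonov-regularized nonnegative least squares:
    minimize ||A x - b||^2 + lam * ||x||^2  subject to x >= 0,
    solved by augmenting A with sqrt(lam) * I."""
    m, n = A.shape
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, _ = nnls(A_aug, b_aug)
    return x

# ill-conditioned toy problem
rng = np.random.default_rng(3)
A = rng.normal(size=(30, 10)) @ np.diag(10.0 ** -np.arange(10))
x_true = np.abs(rng.normal(size=10))
b = A @ x_true + 1e-6 * rng.normal(size=30)
print(regularized_nnls(A, b, lam=1e-8))
```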


    Impact of Right Ventricular Pacing Sites on Exercise Capacity during Ventricular Rate Regularization in Patients with Permanent Atrial Fibrillation

    PACING AND CLINICAL ELECTROPHYSIOLOGY, Issue 12 2009
    HUNG-FAT TSE M.D., Ph.D.
    Background: The deleterious effects of right ventricular apical (RVA) pacing may offset the potential benefit of ventricular rate (VR) regularization and rate adaptation during exercise in patients with atrial fibrillation (AF). Methods: We studied 30 patients with permanent AF and symptomatic bradycardia who received pacemaker implantation with RVA (n = 15) or right ventricular septal (RVS, n = 15) pacing. All the patients underwent acute cardiopulmonary exercise testing using VVI mode (VVI-OFF) and VVI mode with the VR regularization (VRR) algorithm on (VVI-ON). Results: There were no significant differences in the baseline characteristics between the two groups, except that paced QRS duration was significantly shorter during RVS pacing than during RVA pacing (138.9 ± 5 vs 158.4 ± 6.1 ms, P = 0.035). Overall, VVI-ON mode increased the peak exercise VR, exercise time, metabolic equivalents (METs), and peak oxygen consumption (VO2max), and decreased the VR variability compared with VVI-OFF mode during exercise (P < 0.05), suggesting that VRR pacing improved exercise capacity. However, further analysis of the impact of VRR pacing at the different pacing sites revealed that only patients with RVS pacing, but not those with RVA pacing, had significantly increased exercise time, METs, and VO2max during VVI-ON compared with VVI-OFF, despite similar changes in peak exercise VR and VR variability. Conclusion: In patients with permanent AF, VRR pacing at the RVS, but not at the RVA, improved exercise capacity during exercise. [source]


    On Regularization of Pseudoperturbation Method for Sharpening of Approximately Given Generalized Jordan Chains

    PROCEEDINGS IN APPLIED MATHEMATICS & MECHANICS, Issue 1 2003
    Boris Loginov Prof. Dr.-Ing.
    A combination of pseudoperturbation and Newton–Kantorovich (N.-K.) methods is applied to sharpen approximately given eigenvalues and generalized Jordan chains (GJChs) of an operator function that depends linearly on the spectral parameter. [source]


    Array-conditioned deconvolution of multiple-component teleseismic recordings

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2010
    C.-W. Chen
    SUMMARY We investigate the applicability of an array-conditioned deconvolution technique, developed for analysing borehole seismic exploration data, to teleseismic receiver functions and data pre-processing steps for scattered wavefield imaging. This multichannel deconvolution technique constructs an approximate inverse filter to the estimated source signature by solving an overdetermined set of deconvolution equations, using an array of receivers detecting a common source. We find that this technique improves the efficiency and automation of the receiver function calculation and data pre-processing workflow. We apply this technique to synthetic experiments and to teleseismic data recorded by a dense array in northern Canada. Our results show that this optimal deconvolution automatically determines and subsequently attenuates the noise in the data, enhancing P-to-S converted phases in seismograms with various noise levels. In this context, array-conditioned deconvolution presents a new, effective and automatic means for processing large amounts of array data, as it does not require any ad hoc regularization; the regularization is achieved naturally by using the noise present in the array itself. [source]
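
    A minimal sketch of the central idea that the array's own noise supplies the regularization, assuming a common source-wavelet estimate and pre-event noise windows are available. This damped spectral division is a simplified stand-in for the authors' overdetermined inverse-filter construction; all names and toy data are illustrative.

```python
import numpy as np

def array_deconvolve(traces, source, noise_windows):
    """Frequency-domain deconvolution of each trace by an estimated source
    wavelet, damped by the noise power measured on the array itself.

    traces        : (n_receivers, n_samples) recorded seismograms
    source        : (n_samples,) estimated source signature
    noise_windows : (n_receivers, n_samples) pre-event noise segments
    """
    n = traces.shape[1]
    S = np.fft.rfft(source, n)
    # regularization level from the array's own noise, not an ad hoc constant
    noise_power = np.mean(np.abs(np.fft.rfft(noise_windows, n, axis=1)) ** 2,
                          axis=0)
    D = np.fft.rfft(traces, n, axis=1)
    R = D * np.conj(S) / (np.abs(S) ** 2 + noise_power)
    return np.fft.irfft(R, n, axis=1)

# toy usage: 5 receivers sharing one wavelet, plus array noise estimates
rng = np.random.default_rng(4)
src = np.exp(-np.linspace(-3, 3, 64) ** 2)
tr = np.vstack([np.convolve(src, rng.normal(size=64))[:64] for _ in range(5)])
out = array_deconvolve(tr, src, 0.01 * rng.normal(size=(5, 64)))
```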


    A covariance-adaptive approach for regularized inversion in linear models

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2007
    Christopher Kotsakis
    SUMMARY The optimal inversion of a linear model under the presence of additive random noise in the input data is a typical problem in many geodetic and geophysical applications. Various methods have been developed and applied for the solution of this problem, ranging from the classic principle of least-squares (LS) estimation to other more complex inversion techniques such as the Tikhonov–Phillips regularization, truncated singular value decomposition, generalized ridge regression, numerical iterative methods (Landweber, conjugate gradient) and others. In this paper, a new type of optimal parameter estimator for the inversion of a linear model is presented. The proposed methodology is based on a linear transformation of the classic LS estimator and it satisfies two basic criteria. First, it provides a solution for the model parameters that is optimally fitted (in an average quadratic sense) to the classic LS parameter solution. Second, it complies with an external user-dependent constraint that specifies a priori the error covariance (CV) matrix of the estimated model parameters. The formulation of this constrained estimator offers a unified framework for the description of many regularization techniques that are systematically used in geodetic inverse problems, particularly for those methods that correspond to an eigenvalue filtering of the ill-conditioned normal matrix in the underlying linear model. The value of our study lies in the fact that it adds an alternative perspective on the statistical properties and the regularization mechanism of many inversion techniques commonly used in geodesy and geophysics, by interpreting them as a family of 'CV-adaptive' parameter estimators that obey a common optimality criterion and differ only in the pre-selected form of their error CV matrix under a fixed model design. [source]
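
    A hedged sketch of the covariance constraint only: the transform below maps the LS estimate to one whose formal error covariance equals a user-specified target. The paper's estimator additionally optimizes closeness to the LS solution, which this sketch does not reproduce; all names and toy data are assumptions.

```python
import numpy as np
from scipy.linalg import sqrtm

def cv_adaptive_estimate(A, y, sigma2, C_target):
    """Linear transform of the classic LS estimate so that the estimated
    parameters carry a user-specified error covariance C_target.
    M = C_target^(1/2) C_ls^(-1/2) satisfies M C_ls M^T = C_target."""
    N = A.T @ A
    x_ls = np.linalg.solve(N, A.T @ y)        # classic LS solution
    C_ls = sigma2 * np.linalg.inv(N)          # its error covariance
    M = sqrtm(C_target) @ np.linalg.inv(sqrtm(C_ls))
    return np.real(M) @ x_ls

# toy usage with a diagonal target covariance
rng = np.random.default_rng(10)
A = rng.normal(size=(50, 4))
y = A @ np.ones(4) + 0.1 * rng.normal(size=50)
print(cv_adaptive_estimate(A, y, 0.01, 0.01 * np.eye(4)))
```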


    Geodetic imaging: reservoir monitoring using satellite interferometry

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2002
    D. W. Vasco
    Summary Fluid fluxes within subsurface reservoirs give rise to surface displacements, particularly over periods of a year or more. Observations of such deformation provide a powerful tool for mapping fluid migration within the Earth, providing new insights into reservoir dynamics. In this paper we use Interferometric Synthetic Aperture Radar (InSAR) range changes to infer subsurface fluid volume strain at the Coso geothermal field. Furthermore, we conduct a complete model assessment, using an iterative approach to compute model parameter resolution and covariance matrices. The method is a generalization of a Lanczos-based technique which allows us to include fairly general regularization, such as roughness penalties. We find that we can resolve quite detailed lateral variations in volume strain both within the reservoir depth range (0.4–2.5 km) and below the geothermal production zone (2.5–5.0 km). The fractional volume change in all three layers of the model exceeds the estimated model parameter uncertainty by a factor of two or more. In the reservoir depth interval (0.4–2.5 km), the predominant volume change is associated with northerly and westerly oriented faults and their intersections. However, below the geothermal production zone proper (the depth range 2.5–5.0 km), there is the suggestion that both north- and northeast-trending faults may act as conduits for fluid flow. [source]


    Seismic data reconstruction using multidimensional prediction filters

    GEOPHYSICAL PROSPECTING, Issue 2 2010
    M. Naghizadeh
    ABSTRACT In this paper we discuss a beyond-alias multidimensional implementation of the multi-step autoregressive reconstruction algorithm for data with missing spatial samples. The multi-step autoregressive method is summarized as follows: vital low-frequency information is first regularized using a Fourier-based method (minimum weighted norm interpolation); the reconstructed data are then used to estimate prediction filters that are used to interpolate higher frequencies. This article discusses the extension of the multi-step autoregressive method to data with more than one spatial dimension. Synthetic and real data examples are used to examine the performance of the proposed method. Field data are used to illustrate the applicability of multidimensional multi-step autoregressive operators for the regularization of seismic data. [source]


    Operator-oriented CRS interpolation

    GEOPHYSICAL PROSPECTING, Issue 6 2009
    German Hoecht
    ABSTRACT In common-reflection-surface imaging the reflection arrival time field is parameterized by operators that are of higher dimension or order than in conventional methods. Using the common-reflection-surface approach locally in the unmigrated prestack data domain opens a potential for trace regularization and interpolation. In most data interpolation methods based on local coherency estimation, a single operator is designed for a target sample and the output amplitude is defined as a weighted average along the operator. This approach may fail in the presence of interfering events or strong amplitude and phase variations. In this paper we introduce an alternative scheme in which there is no need for an operator to be defined at the target sample itself. Instead, the amplitude at a target sample is constructed from multiple operators estimated at different positions. In this case one operator may contribute to the construction of several target samples. Vice versa, a target sample might receive contributions from different operators. Operators are determined on a grid which can be sparser than the output grid. This allows the computational costs to be decreased dramatically. In addition, the use of multiple operators for a single target sample stabilizes the interpolation results and implicitly allows several contributions in the case of interfering events. Due to the considerable computational expense, common-reflection-surface interpolation is limited to work in subsets of the prestack data. We present the general workflow of a common-reflection-surface-based regularization/interpolation for 3D data volumes. This workflow has been applied to an OBC common-receiver volume and binned common-offset subsets of a 3D marine data set. The impact of a common-reflection-surface regularization is demonstrated by means of a subsequent time migration. In comparison to the time migrations of the original and DMO-interpolated data, the results show particular improvements in the continuity of reflection events. This gain is confirmed by automatic picking of a horizon in the stacked time migrations. [source]


    Damage-viscoplastic consistency model for rock fracture in heterogeneous rocks under dynamic loading

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 10 2010
    Timo Saksala
    Abstract This paper presents a damage-viscoplastic consistency model for numerical simulation of brittle fracture in heterogeneous rocks. The model is based on a combination of the recent viscoplastic consistency model by Wang and the isotropic damage concept with separate damage variables in tension and compression. This approach does not suffer from ill-posedness, caused by strain softening, of the underlying boundary/initial value problem since viscoplasticity provides the regularization by introducing a length scale effect under dynamic loading conditions. The model uses the Mohr–Coulomb yield criterion with the Rankine criterion as a tensile cut-off. The damage law in compression is calibrated via the degradation index concept of Fang and Harrison. Thereby, the model is able to capture the brittle-to-ductile transition occurring in confined compression at a certain level of confinement. The heterogeneity of rock is accounted for by the statistical approach based on the Weibull distribution. Numerical simulations of confined compression test in plane strain conditions demonstrate a good agreement with the experiments at both the material point and structural levels as the fracture modes are realistically predicted. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    A dual mortar approach for 3D finite deformation contact with consistent linearization

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 11 2010
    Alexander Popp
    Abstract In this paper, an approach for three-dimensional frictionless contact based on a dual mortar formulation and using a primal–dual active set strategy for direct constraint enforcement is presented. We focus on linear shape functions, but briefly address higher order interpolation as well. The study builds on previous work by the authors for two-dimensional problems. First and foremost, the ideas of a consistently linearized dual mortar scheme and of an interpretation of the active set search as a semi-smooth Newton method are extended to the 3D case. This allows for solving all types of nonlinearities (i.e. geometrical, material and contact) within one single Newton scheme. Owing to the dual Lagrange multiplier approach employed, this advantage is not accompanied by an undesirable increase in system size as the Lagrange multipliers can be condensed from the global system of equations. Moreover, it is pointed out that the presented method does not make use of any regularization of contact constraints. Numerical examples illustrate the efficiency of our method and the high quality of results in 3D finite deformation contact analysis. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    A fictitious energy approach for shape optimization

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2010
    M. Scherer
    Abstract This paper deals with shape optimization of continuous structures. As in early works on shape optimization, coordinates of boundary nodes of the FE-domain are directly chosen as design variables. Convergence problems and problems with jagged shapes are eliminated by a new regularization technique: an artificial inequality constraint added to the optimization problem limits a fictitious total strain energy that measures the shape change of the design with respect to a reference design. The energy constraint defines a feasible design space whose size can be varied by one parameter, the upper energy limit. By construction, the proposed regularization is applicable to a wide range of problems; although in this paper, the application is restricted to linear elastostatic problems. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Simultaneous untangling and smoothing of moving grids

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 7 2008
    Ezequiel J. López
    Abstract In this work, a technique for simultaneous untangling and smoothing of meshes is presented. It is based on an extension of an earlier mesh smoothing strategy developed to solve the computational mesh dynamics stage in fluid–structure interaction problems. In moving grid problems, mesh untangling is necessary when element inversion happens as a result of a moving domain boundary. The smoothing strategy, formerly published by the authors, is defined in terms of the minimization of a functional associated with the mesh distortion by using a geometric indicator of the element quality. This functional becomes discontinuous when an element has null volume, making it impossible to obtain a valid mesh from an invalid one. To circumvent this drawback, the functional proposed is transformed in order to guarantee its continuity for the whole space of nodal coordinates, thus achieving the untangling technique. This regularization depends on one parameter, making the recovery of the original functional possible as this parameter tends to 0. This feature is very important: consequently, it is necessary to regularize the functional in order to make the mesh valid; then, it is advisable to use the original functional to make the smoothing optimal. Finally, the simultaneous untangling and smoothing technique is applied to several test cases, including 2D and 3D meshes with simplicial elements. As an additional example, the application of this technique to a mesh generation case is presented. Copyright © 2008 John Wiley & Sons, Ltd. [source]
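
    One concrete way such a one-parameter regularization can look (an assumption drawn from the untangling literature, not necessarily the authors' exact transformation) is a smooth, strictly positive surrogate for the element Jacobian determinant:

```python
import numpy as np

def regularized_det(sigma, delta):
    """Smooth surrogate for the element Jacobian determinant sigma:
    h(sigma) = (sigma + sqrt(sigma^2 + 4*delta^2)) / 2.
    h -> sigma as delta -> 0, yet h > 0 even for inverted elements
    (sigma <= 0), so a distortion functional built on 1/h stays finite
    and untangling becomes possible."""
    return 0.5 * (sigma + np.sqrt(sigma**2 + 4.0 * delta**2))

# inverted, degenerate and valid elements all get a positive surrogate
print(regularized_det(np.array([-1.0, 0.0, 1.0]), delta=0.1))
```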


    Asymptotic numerical methods for unilateral contact

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 6 2006
    W. Aggoune
    Abstract New algorithms based upon the asymptotic numerical method (ANM) are proposed to solve unilateral contact problems. ANM leads to a representation of a solution path in terms of series or Padé approximants. To get a smooth solution path, a hyperbolic relation between contact forces and clearance is introduced. Three key points are discussed: the influence of the regularization of the contact law, the discretization of the contact force by Lagrange multipliers and prediction–correction algorithms. Simple benchmarks are considered to evaluate the relevance of the proposed algorithms. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Local maximum-entropy approximation schemes: a seamless bridge between finite elements and meshfree methods

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 13 2006
    M. Arroyo
    Abstract We present a one-parameter family of approximation schemes, which we refer to as local maximum-entropy approximation schemes, that bridges continuously two important limits: Delaunay triangulation and maximum-entropy (max-ent) statistical inference. Local max-ent approximation schemes represent a compromise, in the sense of Pareto optimality, between the competing objectives of unbiased statistical inference from the nodal data and the definition of local shape functions of least width. Local max-ent approximation schemes are entirely defined by the node set and the domain of analysis, and the shape functions are positive, interpolate affine functions exactly, and have a weak Kronecker-delta property at the boundary. Local max-ent approximation may be regarded as a regularization, or thermalization, of Delaunay triangulation which effectively resolves the degenerate cases resulting from the lack of uniqueness of the triangulation. Local max-ent approximation schemes can be taken as a convenient basis for the numerical solution of PDEs in the style of meshfree Galerkin methods. In test cases characterized by smooth solutions we find that the accuracy of local max-ent approximation schemes is vastly superior to that of finite elements. Copyright © 2005 John Wiley & Sons, Ltd. [source]
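
    The dual Newton iteration that evaluates local max-ent shape functions is short enough to sketch. The version below follows the standard formulation (Gaussian locality prior of width controlled by beta, zeroth- and first-order consistency constraints) and is illustrative rather than the authors' code.

```python
import numpy as np

def local_maxent(x, nodes, beta, tol=1e-10, maxit=50):
    """Local max-ent shape functions at evaluation point x:
    maximize entropy with locality prior exp(-beta*|xi - x|^2),
    subject to sum(p) = 1 and sum(p*(xi - x)) = 0, by Newton on the dual."""
    dx = nodes - x                        # (n, d) nodal offsets
    lam = np.zeros(dx.shape[1])
    for _ in range(maxit):
        f = -beta * np.sum(dx**2, axis=1) + dx @ lam
        p = np.exp(f - f.max())
        p /= p.sum()                      # candidate shape functions
        r = p @ dx                        # first-moment residual -> 0
        if np.linalg.norm(r) < tol:
            break
        J = (dx * p[:, None]).T @ dx - np.outer(r, r)
        lam -= np.linalg.solve(J, r)      # Newton step on the dual variable
    return p

# 1D usage: positivity, partition of unity, exact reproduction of x
nodes = np.linspace(0.0, 1.0, 6)[:, None]
p = local_maxent(np.array([0.37]), nodes, beta=40.0)
print(p @ nodes[:, 0], p.sum())           # ~0.37 and 1.0
```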


    An orthotropic damage model for masonry structures

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 2 2002
    Luisa Berto
    Abstract An orthotropic damage model specifically developed for the analysis of brittle masonry subjected to in-plane loading is described. Four independent internal damage parameters, one in compression and one in tension for each of the two natural axes of the masonry, are defined allowing the stiffness recovery at crack closure as well as the different inelastic behaviour along each natural axis to be considered. The damage field of the material is defined in terms of four equivalent stresses and results, in the space of the in-plane effective stresses, in a double pyramid with a rectangular base where the slopes of the faces correspond to the internal friction angle of the material. The equivalent stresses also control the growth of the damage parameters. The returning path from the effective to the damaged stresses is given by multiplication by a fourth-rank damage effect tensor, which is a function of the damage parameters and of the effective stress state. Mesh size regularization is achieved by means of an enhanced local method taking into account the finite element size. Good agreement has been found in the comparison between numerical results and experimental data both for masonry shear panels and for a large-scale masonry holed wall. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    Improved implicit integrators for transient impact problems – geometric admissibility within the conserving framework

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 2 2002
    T. A. Laursen
    Abstract The value of energy and momentum conserving algorithms has been well established for the analysis of highly non-linear systems, including those characterized by the nonsmooth non-linearities of an impact event. This work proposes an improved integration scheme for frictionless dynamic contact, seeking to preserve the stability properties of exact energy and momentum conservation without the heretofore unavoidable compromise of violating geometric admissibility as established by the contact constraints. The physically motivated introduction of a discrete contact velocity provides an algorithmic framework that ensures exact conservation locally while remaining independent of the choice of constraint treatment, thus making full conservation equally possible in conjunction with a penalty regularization as with an exact Lagrange multiplier enforcement. The discrete velocity effects are incorporated as a post-convergence update to the system velocities, and thus have no direct effect on the non-linear solution of the displacement equilibrium equation. The result is a robust implicit algorithmic treatment of dynamic frictionless impact, appropriate for large deformations and fully conservative for a range of geometric constraints. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    On the formulation of closest-point projection algorithms in elastoplasticity – part I: The variational structure

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 2 2002
    F. Armero
    Abstract We present in this paper the characterization of the variational structure behind the discrete equations defining the closest-point projection approximation in elastoplasticity. Rate-independent and viscoplastic formulations are considered in the infinitesimal and the finite deformation range, the latter in the context of isotropic finite-strain multiplicative plasticity. Primal variational principles in terms of the stresses and stress-like hardening variables are presented first, followed by the formulation of dual principles incorporating explicitly the plastic multiplier. Augmented Lagrangian extensions are also presented allowing a complete regularization of the problem in the constrained rate-independent limit. The variational structure identified in this paper leads to the proper framework for the development of new improved numerical algorithms for the integration of the local constitutive equations of plasticity as it is undertaken in Part II of this work. Copyright © 2001 John Wiley & Sons, Ltd. [source]
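
    For readers who want the discrete equations in their simplest concrete form: the closest-point projection reduces, for 1D rate-independent plasticity with linear isotropic hardening, to the classic return map sketched below. This is a textbook instance, not the paper's general variational formulation; the material constants are illustrative.

```python
import numpy as np

def return_map_1d(eps, eps_p, alpha, E=200e3, H=10e3, sigma_y=250.0):
    """Closest-point projection (return mapping) for 1D rate-independent
    plasticity with linear isotropic hardening: project the trial stress
    back onto the yield surface. Units are illustrative (MPa)."""
    sig_tr = E * (eps - eps_p)                  # elastic trial stress
    f_tr = abs(sig_tr) - (sigma_y + H * alpha)  # trial yield function
    if f_tr <= 0.0:
        return sig_tr, eps_p, alpha             # elastic step, no projection
    dgamma = f_tr / (E + H)                     # plastic multiplier, closed form
    sig = sig_tr - E * dgamma * np.sign(sig_tr)
    return sig, eps_p + dgamma * np.sign(sig_tr), alpha + dgamma

# drive a strain ramp and watch the stress harden past yield
eps_p, alpha = 0.0, 0.0
for eps in np.linspace(0.0, 0.004, 9):
    sig, eps_p, alpha = return_map_1d(eps, eps_p, alpha)
    print(f"{eps:.4f} {sig:8.2f}")
```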


    Parameter identification with weightless regularization

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2001
    Tomonari Furukawa
    Although regularization has increased the popularity of parameter identification through its ability to deliver a stable solution, a significant problem is that the solution depends upon the regularization parameters chosen. This paper presents a technique for deriving solutions without the use of these parameters and, further, an optimization method that can work efficiently for the problems of concern. Numerical examples show that the technique can efficiently search for appropriate solutions. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Suppression of vortex shedding for flow around a circular cylinder using optimal control

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 1 2002
    C. Homescu
    Abstract An adjoint formulation is employed for the optimal control of flow around a rotating cylinder, governed by the unsteady Navier–Stokes equations. The main objective consists of suppressing Kármán vortex shedding in the wake of the cylinder by controlling the angular velocity of the rotating body, which can be constant in time or time-dependent. Since the numerical control problem is ill-posed, regularization is employed. An empirical logarithmic law relating the regularization coefficient to the Reynolds number was derived for 60 ≤ Re ≤ 140. Optimal values of the angular velocity of the cylinder are obtained for Reynolds numbers ranging from Re = 60 to Re = 1000. The results obtained by the computational optimal control method agree with previously obtained experimental and numerical observations. A significant reduction of the amplitude of the variation of the drag coefficient is obtained for the optimized values of the rotation rate. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    Iterative ultrasonic signal and image deconvolution for estimation of the complex medium response

    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 6 2005
    Zhiping Mu
    Abstract The ill-conditioned inverse problem of estimating ultrasonic medium responses by deconvolution of RF signals is investigated. The primary difference between the proposed method and others is that the medium response function is assumed to be complex-valued rather than restricted to being real-valued. Derived from the complex medium model, complex Wiener filtering is presented, and a Hilbert transform related limitation to inverse filtering type methods is discussed. We introduce a nonparametric iterative algorithm, the least squares method with point count regularization (LSPC). The algorithm is successfully applied to simulated and experimental data and demonstrates the capability of recovering both the real and imaginary parts of the medium response. The simulation results indicate that the LSPC method can outperform Wiener filters and improve the resolution of the ultrasound system by factors as high as 3.7. Experimental results using a single element transducer and a conventional medical ultrasound system with a linear array transducer show that despite the errors in pulse estimation and the noise in the RF signals, excellent results can be obtained, demonstrating the stability and robustness of the algorithm. © 2006 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 15, 266–277, 2005 [source]
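
    The complex Wiener filtering step is easy to sketch in the frequency domain. The code below is a hedged stand-in, not the LSPC iteration; the SNR-based damping constant and all toy data are assumptions.

```python
import numpy as np

def complex_wiener_deconv(rf, pulse, snr=100.0):
    """Frequency-domain Wiener estimate of a complex-valued medium
    response h from rf = pulse (*) h + noise (circular convolution)."""
    n = len(rf)
    P = np.fft.fft(pulse, n)
    R = np.fft.fft(rf, n)
    H = R * np.conj(P) / (np.abs(P) ** 2 + np.mean(np.abs(P) ** 2) / snr)
    return np.fft.ifft(H)                  # complex medium response estimate

# toy data: complex reflectivity convolved with a real pulse, plus noise
rng = np.random.default_rng(5)
pulse = np.sin(np.linspace(0, 4 * np.pi, 32)) * np.hanning(32)
h_true = np.zeros(256, dtype=complex)
h_true[[60, 100, 101]] = [1.0, 0.4 + 0.3j, -0.5j]
rf = np.fft.ifft(np.fft.fft(pulse, 256) * np.fft.fft(h_true))
rf += 0.01 * (rng.normal(size=256) + 1j * rng.normal(size=256))
h_hat = complex_wiener_deconv(rf, pulse)
print(np.round(h_hat[[60, 100, 101]], 2))  # ~ the complex reflectivity
```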


    A practical approach for estimating illumination distribution from shadows using a single image

    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 2 2005
    Taeone Kim
    Abstract This article presents a practical method that estimates illumination distribution from shadows using only a single image. The shadows are assumed to be cast on a textured, Lambertian surface by an object of known shape. Previous methods for illumination estimation from shadows usually require that the reflectance property of the surface on which shadows are cast be constant or uniform, or need an additional image to cancel out the effects of varying albedo of the textured surface on illumination estimation. But, our method deals with an estimation problem for which surface albedo information is not available. In this case, the estimation problem corresponds to an underdetermined one. We show that the combination of regularization by correlation and some user-specified information can be a practical method for solving the underdetermined problem. In addition, as an optimization tool for solving the problem, we develop a constrained Non-Negative Quadratic Programming (NNQP) technique into which not only regularization but also multiple linear constraints induced by user-specified information are easily incorporated. We test and validate our method on both synthetic and real images and present some experimental results. © 2005 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 15, 143–154, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20047 [source]


    Neural network-based image restoration using scaled residual with space-variant regularization

    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 6 2002
    E. Salari
    Abstract Image restoration aims to recover the original scene from its degraded version. This paper presents a new method for image restoration. In this technique, an evaluation function which combines a scaled residual with space-variant regularization is established and minimized using a Hopfield network to obtain a restored image from a noise-corrupted and blurred image. Simulation results demonstrate that the proposed evaluation function leads to a more efficient restoration process which offers fast convergence and improved restored image quality. © 2003 Wiley Periodicals, Inc. Int J Imaging Syst Technol 12, 247–253, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10034 [source]
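
    The structure of such an evaluation function can be made explicit with plain gradient descent, which is effectively what a Hopfield network's dynamics perform on the energy. The 1D sketch below, with an illustrative constant regularization map (space-variant in general), is an assumption-laden stand-in for the paper's formulation, not its algorithm.

```python
import numpy as np

def restore(g, psf, lam_map, n_iter=200, step=0.1):
    """Gradient descent on E(f) = ||g - h*f||^2 + sum(lam_i * (L f)_i^2),
    with L a discrete Laplacian and lam_map a space-variant weight.
    Circular convolution via FFT; 1D for brevity."""
    n = len(g)
    H = np.fft.rfft(psf, n)
    lap = np.array([1.0, -2.0, 1.0])
    f = g.copy()
    for _ in range(n_iter):
        resid = np.fft.irfft(np.fft.rfft(f) * H, n) - g        # h*f - g
        grad_fid = np.fft.irfft(np.fft.rfft(resid) * np.conj(H), n)
        Lf = np.convolve(f, lap, mode='same')
        grad_reg = np.convolve(lam_map * Lf, lap, mode='same')  # L^T(lam.Lf)
        f -= step * 2.0 * (grad_fid + grad_reg)
    return f

rng = np.random.default_rng(9)
psf = np.ones(5) / 5.0                                # box blur
f_true = (np.arange(128) > 64).astype(float)          # step edge
g = np.fft.irfft(np.fft.rfft(f_true) * np.fft.rfft(psf, 128), 128)
g += 0.01 * rng.normal(size=128)
f_hat = restore(g, psf, lam_map=0.05 * np.ones(128))
```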


    Flexible constraints for regularization in learning from data

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 6 2004
    Eyke Hüllermeier
    By its very nature, inductive inference performed by machine learning methods mainly is data driven. Still, the incorporation of background knowledge, if available, can help to make inductive inference more efficient and to improve the quality of induced models. Fuzzy set-based modeling techniques provide a convenient tool for making expert knowledge accessible to computational methods. In this article, we exploit such techniques within the context of the regularization (penalization) framework of inductive learning. The basic idea is to express knowledge about an underlying data-generating process in terms of flexible constraints and to penalize those models violating these constraints. An optimal model is one that achieves an optimal trade-off between fitting the data and satisfying the constraints. © 2004 Wiley Periodicals, Inc. [source]
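
    A minimal sketch of the penalization idea with a crisp (non-fuzzy) constraint, assuming the background knowledge "the underlying function is non-decreasing"; the polynomial model and all names are illustrative, not the article's fuzzy-constraint machinery.

```python
import numpy as np
from scipy.optimize import minimize

def monotone_penalized_fit(x, y, degree=5, mu=100.0):
    """Least-squares polynomial fit plus a penalty whenever the fitted
    function violates the constraint that it be non-decreasing: the model
    trades data fit against constraint satisfaction, as in regularization."""
    V = np.vander(x, degree + 1)                 # columns x^d ... x^0
    powers = np.arange(degree, -1, -1)
    dV = np.zeros_like(V)
    dV[:, :-1] = V[:, 1:] * powers[:-1]          # derivative basis

    def objective(c):
        misfit = V @ c - y
        violation = np.minimum(dV @ c, 0.0)      # negative slope = violation
        return misfit @ misfit + mu * violation @ violation

    c0 = np.linalg.lstsq(V, y, rcond=None)[0]    # unconstrained start
    return minimize(objective, c0).x

rng = np.random.default_rng(8)
x = np.linspace(0, 1, 40)
y = np.sqrt(x) + 0.1 * rng.normal(size=40)       # noisy increasing data
coeffs = monotone_penalized_fit(x, y)
```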


    Bayesian regularization: application to calibration in NIR spectroscopy

    JOURNAL OF CHEMOMETRICS, Issue 11 2009
    C. E. Alciaturi
    Abstract The use of a Bayesian regularization algorithm is proposed for calibration in near-infrared (NIR) spectroscopy with linear models. The algorithm used in this work is based upon the concepts developed by MacKay for inference and model comparison in artificial neural networks. It is demonstrated that this algorithm is fast, easy to use, and shows good generalization properties without previous dimensionality reduction. Examples are shown for NIR spectroscopy calibration and synthetic data. Copyright © 2009 John Wiley & Sons, Ltd. [source]
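
    MacKay's evidence approximation for a linear model fits in a few lines. The sketch below re-estimates the prior precision alpha and the noise precision beta from the data, which is what removes the need for a hand-tuned ridge level; it follows the standard textbook updates, not necessarily the authors' exact implementation.

```python
import numpy as np

def bayesian_ridge(Phi, t, n_iter=50):
    """MacKay-style evidence approximation for linear calibration:
    iteratively re-estimate the weight prior precision alpha and the
    noise precision beta, so the regularization level is set by the data."""
    N, M = Phi.shape
    alpha, beta = 1.0, 1.0
    eig = np.linalg.eigvalsh(Phi.T @ Phi)
    for _ in range(n_iter):
        S_inv = alpha * np.eye(M) + beta * Phi.T @ Phi
        m = beta * np.linalg.solve(S_inv, Phi.T @ t)  # posterior mean weights
        lam = beta * eig
        gamma = np.sum(lam / (alpha + lam))           # effective n. of params
        alpha = gamma / (m @ m)
        beta = (N - gamma) / np.sum((t - Phi @ m) ** 2)
    return m, alpha, beta

# toy calibration: 40 "spectra" with 10 channels, linear property
rng = np.random.default_rng(6)
Phi = rng.normal(size=(40, 10))
w = rng.normal(size=10)
t = Phi @ w + 0.1 * rng.normal(size=40)
m, alpha, beta = bayesian_ridge(Phi, t)
print(np.round(m - w, 2), alpha, beta)
```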


    Regression by L1 regularization of smart contrasts and sums (ROSCAS) beats PLS and elastic net in latent variable model

    JOURNAL OF CHEMOMETRICS, Issue 5 2009
    Cajo J. F. ter Braak
    Abstract This paper proposes a regression method, ROSCAS, which regularizes smart contrasts and sums of regression coefficients by an L1 penalty. The contrasts and sums are based on the sample correlation matrix of the predictors and are suggested by a latent variable regression model. The contrasts express the idea that a priori correlated predictors should have similar coefficients. The method has excellent predictive performance in situations, where there are groups of predictors with each group representing an independent feature that influences the response. In particular, when the groups differ in size, ROSCAS can outperform LASSO, elastic net, partial least squares (PLS) and ridge regression by a factor of two or three in terms of mean squared error. In other simulation setups and on real data, ROSCAS performs competitively. Copyright © 2009 John Wiley & Sons, Ltd. [source]