Noisy Observations (noisy + observation)
Selected Abstracts

Constrained total least-squares computations for high-resolution image reconstruction with multisensors
INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 1 2002
Michael K. Ng

Multiple undersampled images of a scene are often obtained by using a charge-coupled device (CCD) detector array of sensors that are shifted relative to each other by subpixel displacements. This geometry of sensors, where each sensor has a subarray of sensing elements of suitable size, has been popular in the task of attaining spatial resolution enhancement from the acquired low-resolution degraded images that comprise the set of observations. With the objective of improving the performance of the signal processing algorithms in the presence of the ubiquitous perturbation errors of displacements around the ideal subpixel locations (because of imperfections in fabrication), in addition to noisy observations, the errors-in-variables, or total least-squares, method is used in this paper. A regularized constrained total least-squares (RCTLS) solution to the problem is given, which requires the minimization of a nonconvex and nonlinear cost functional. Simulations indicate that the choice of the regularization parameter significantly influences the quality of the solution. The L-curve method is used to select the theoretically optimum value of the regularization parameter instead of the unsound but expedient trial-and-error approach. The expected superiority of this RCTLS approach over the conventional least-squares theory-based algorithm is substantiated by example. © 2002 John Wiley & Sons, Inc. Int J Imaging Syst Technol 12, 35-42, 2002 [source]
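The RCTLS functional above is nonconvex and specific to the multisensor model, but the role the L-curve plays in selecting the regularization parameter can be illustrated on the simpler Tikhonov-regularized least-squares problem. A minimal Python sketch, not the paper's algorithm; the toy blurring operator, noise level, and finite-difference corner-finding heuristic are assumptions:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def l_curve_corner(A, b, lambdas):
    """Pick lambda at the corner (maximum curvature) of the L-curve,
    the log-log plot of solution norm against residual norm."""
    rho, eta = [], []
    for lam in lambdas:
        x = tikhonov_solve(A, b, lam)
        rho.append(np.log(np.linalg.norm(A @ x - b)))  # log residual norm
        eta.append(np.log(np.linalg.norm(x)))          # log solution norm
    d1r, d1e = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    # Curvature of the parametric curve (rho(lam), eta(lam))
    kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
    return lambdas[np.argmax(np.abs(kappa))]

# Toy ill-posed problem: Gaussian blurring operator plus noisy observations
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 80)
A = np.exp(-50.0 * (t[:, None] - t[None, :]) ** 2)
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 0.01 * rng.standard_normal(t.size)

lambdas = np.logspace(-6, 1, 60)
lam = l_curve_corner(A, b, lambdas)
x_hat = tikhonov_solve(A, b, lam)
print(f"L-curve choice of regularization parameter: {lam:.3e}")
```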
Efficient estimation of three-dimensional curves and their derivatives by free-knot regression splines, applied to the analysis of inner carotid artery centrelines
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 3 2009
Laura M. Sangalli

We deal with the problem of efficiently estimating a three-dimensional curve and its derivatives, starting from a discrete and noisy observation of the curve. This problem now arises in many applied contexts, thanks to the advent of devices that provide three-dimensional images and measures, such as three-dimensional scanners in medical diagnostics. Our research, in particular, stems from the need for accurate estimation of the curvature of an artery from image reconstructions of three-dimensional angiographies. This need has emerged within the AneuRisk project, a scientific endeavour that aims to investigate the role of vessel morphology, blood fluid dynamics, and biomechanical properties of the vascular wall in the pathogenesis of cerebral aneurysms. We develop a regression technique that exploits free-knot splines in a novel setting to estimate three-dimensional curves and their derivatives. We thoroughly compare this technique with a classical regression method, local polynomial smoothing, showing that three-dimensional free-knot regression splines yield more accurate and efficient estimates. [source]

Global statistical analysis of MISR aerosol data: a massive data product from NASA's Terra satellite
ENVIRONMETRICS, Issue 7 2007
Tao Shi

In climate models, aerosol forcing is the major source of uncertainty in climate forcing over the industrial period. To reduce this uncertainty, instruments on satellites have been put in place to collect global data. However, missing and noisy observations impose considerable difficulties for scientists researching the global distribution of aerosols, aerosol transportation, and comparisons between satellite observations and global-climate-model outputs. In this paper, we fit a Spatial Mixed Effects (SME) statistical model to predict the missing values, denoise the observed values, and quantify the spatial-prediction uncertainties. The computations associated with the SME model scale linearly in the number of data points, which makes it feasible to process massive global satellite data. We apply the methodology, which is called Fixed Rank Kriging (FRK), to the level-3 Aerosol Optical Depth (AOD) dataset collected by NASA's Multi-angle Imaging SpectroRadiometer (MISR) instrument flying on the Terra satellite. Overall, our results were superior to those from non-statistical methods and, importantly, FRK has an uncertainty measure associated with it that can be used for comparisons over different regions or at different time points. Copyright © 2007 John Wiley & Sons, Ltd. [source]
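The linear scaling claimed for the SME model comes from its fixed-rank covariance: with Z = S eta + eps and a low-dimensional random effect eta, the n x n covariance inverse reduces to an r x r solve through the Sherman-Morrison-Woodbury identity. A one-dimensional Python sketch of that predictor follows; the bisquare basis, the choice K = I, and all parameter values are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def bisquare_basis(s, centers, radius):
    """Local bisquare basis functions evaluated at 1-D locations s."""
    d = np.abs(s[:, None] - centers[None, :]) / radius
    return np.where(d < 1.0, (1.0 - d**2) ** 2, 0.0)

def frk_predict(s_obs, z, s_pred, centers, radius, K, sigma2):
    """Fixed-rank kriging predictor for Z = S eta + eps,
    eta ~ N(0, K) with r components, eps ~ N(0, sigma2 I).
    Woodbury turns the n x n covariance inverse into an r x r solve,
    so the cost is linear in the number of observations n."""
    S = bisquare_basis(s_obs, centers, radius)    # n x r
    Sp = bisquare_basis(s_pred, centers, radius)  # m x r
    M = np.linalg.inv(K) + (S.T @ S) / sigma2     # r x r
    # w = (S K S^T + sigma2 I)^{-1} z, by Sherman-Morrison-Woodbury
    w = z / sigma2 - S @ np.linalg.solve(M, S.T @ z) / sigma2**2
    return Sp @ (K @ (S.T @ w))                   # cross-covariance times w

rng = np.random.default_rng(1)
s_obs = np.sort(rng.uniform(0.0, 10.0, 500))
z = np.sin(s_obs) + 0.3 * rng.standard_normal(500)  # noisy observations
centers = np.linspace(0.0, 10.0, 15)                # rank r = 15
K = np.eye(15)                                      # assumed covariance of eta
s_pred = np.linspace(0.0, 10.0, 200)                # fills gaps in the data
y_hat = frk_predict(s_obs, z, s_pred, centers, radius=2.0, K=K, sigma2=0.09)
```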
Basin-Scale Transmissivity and Storativity Estimation Using Hydraulic Tomography
GROUND WATER, Issue 5 2008
Kristopher L. Kuhlman

While tomographic inversion has been successfully applied to laboratory- and field-scale tests, here we address the new issue of scale that arises when extending the method to a basin. Specifically, we apply the hydraulic tomography (HT) concept to jointly interpret four multiwell aquifer tests in a synthetic basin, illustrating the superiority of this approach over a more traditional Theis analysis of the same tests. Transmissivity and storativity are estimated for each element of a regional numerical model using the geostatistically based sequential successive linear estimator (SSLE) inverse solution method. We find that HT inversion is an effective strategy for incorporating data from potentially disparate aquifer tests into a basin-wide aquifer property estimate. The robustness of the SSLE algorithm is investigated by considering the effects of noisy observations, changing the variance of the true aquifer parameters, and supplying incorrect initial and boundary conditions to the inverse model. Ground water flow velocities and total confined storage are used as metrics to compare true and estimated parameter fields; they quantify the effectiveness of HT and SSLE compared to a Theis solution methodology. We discuss alternative software that can be used for implementing tomographic inversion. [source]

Least-square-based radial basis collocation method for solving inverse problems of Laplace equation from noisy data
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 1 2010
Xian-Zhong Mao

The inverse problem of the 2D Laplace equation involves estimating unknown boundary values or the locations of the boundary shape from noisy observations on an over-specified boundary or at internal data points. The application of the radial basis collocation method (RBCM), a meshless and non-iterative numerical scheme, reduces this inverse boundary value problem (IBVP) to a single-step solution of a system of linear algebraic equations in which the coefficient matrix is inherently ill-conditioned. To overcome the instability observed in the conventional RBCM, this paper proposes an effective procedure that builds an over-determined linear system and combines it with a least-squares technique to restore the stability of the solution. The present work investigates three examples of IBVPs using over-specified boundary conditions or internal data with simulated noise and obtains stable and accurate results. It shows that the least-square-based radial basis collocation method (LS-RBCM) offers the significant advantage of good stability against large noise levels compared with the conventional RBCM. Copyright © 2010 John Wiley & Sons, Ltd. [source]
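A minimal Python illustration of the over-determined idea: multiquadric collocation for the Laplace equation on the unit square, with three sides over-specified by noisy Dirichlet data and the solution recovered on the fourth side by a least-squares solve. The geometry, shape parameter, and harmonic test solution are assumptions of this sketch, not the paper's examples:

```python
import numpy as np

def mq(r2, c2):
    """Multiquadric radial basis function sqrt(r^2 + c^2)."""
    return np.sqrt(r2 + c2)

def mq_laplacian(r2, c2):
    """2-D Laplacian of the multiquadric, (r^2 + 2c^2) / (r^2 + c^2)^(3/2)."""
    return (r2 + 2.0 * c2) / (r2 + c2) ** 1.5

def pairwise_r2(p, q):
    """Squared distances between point sets p (n x 2) and q (m x 2)."""
    return ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)

def u_exact(p):
    """Harmonic test solution u = x^2 - y^2 (an assumption of this sketch)."""
    return p[:, 0] ** 2 - p[:, 1] ** 2

rng = np.random.default_rng(2)
c2 = 0.5 ** 2                                  # assumed shape parameter

# RBF centers on a coarse grid over the unit square
gx = np.linspace(0.0, 1.0, 8)
centers = np.array([(x, y) for x in gx for y in gx])

# Collocation points: interior (PDE residual) + three accessible sides
# (noisy Dirichlet data), denser than the centers -> over-determined system
gi = np.linspace(0.05, 0.95, 12)
interior = np.array([(x, y) for x in gi for y in gi])
t = np.linspace(0.0, 1.0, 40)
known_bnd = np.concatenate([
    np.column_stack([t, np.zeros_like(t)]),    # y = 0
    np.column_stack([np.zeros_like(t), t]),    # x = 0
    np.column_stack([np.ones_like(t), t])])    # x = 1
data = u_exact(known_bnd) + 0.01 * rng.standard_normal(len(known_bnd))

A = np.vstack([mq_laplacian(pairwise_r2(interior, centers), c2),
               mq(pairwise_r2(known_bnd, centers), c2)])
rhs = np.concatenate([np.zeros(len(interior)), data])
coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)  # least-squares RBCM step

# Recover u on the unspecified side y = 1
unknown_bnd = np.column_stack([t, np.ones_like(t)])
u_rec = mq(pairwise_r2(unknown_bnd, centers), c2) @ coef
print("max error on unknown side:", np.abs(u_rec - u_exact(unknown_bnd)).max())
```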
An efficient approach for computing non-Gaussian ARMA model coefficients using Pisarenko's method
INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 3 2005
Adnan Al-Smadi

This paper addresses the problem of estimating the coefficients of a general autoregressive moving average (ARMA) model from only the third-order cumulants (TOCs) of noisy observations of the system output. The observed signal may be corrupted by additive coloured Gaussian noise. The system is driven by a zero-mean independent and identically distributed (i.i.d.) non-Gaussian sequence; the input itself is not observed. The unknown model coefficients are obtained using eigenvalue-eigenvector decomposition. The derivation of this procedure extends the Pisarenko harmonic autocorrelation-based (PHA) method to third-order statistics. It is shown that the desired vector of ARMA coefficients corresponds to the eigenvector associated with the minimum eigenvalue of a data covariance matrix of TOCs. The proposed method is also compared with well-known algorithms as well as with the PHA method. Copyright © 2005 John Wiley & Sons, Ltd. [source]

OPTIMAL CONSUMPTION AND PORTFOLIO DECISIONS WITH PARTIALLY OBSERVED REAL PRICES
MATHEMATICAL FINANCE, Issue 2 2009
Alain Bensoussan

We consider optimal consumption and portfolio investment problems of an investor who is interested in maximizing his utilities from consumption and terminal wealth, subject to a random inflation in the consumption basket price over time. We consider two cases: (i) when the investor observes the basket price and (ii) when he receives only noisy observations of the basket price. We derive the optimal policies and show that a modified Mutual Fund Theorem consisting of three funds holds in both cases. The compositions of the funds in the two cases are the same, but in general the investor's allocations of his wealth into these funds will differ. However, in the particular case when the investor has constant relative risk-aversion (CRRA) utility, his optimal investment allocations into these funds are also the same in both cases. [source]

High-fidelity spectroscopy at the highest resolutions
ASTRONOMISCHE NACHRICHTEN, Issue 5 2010
D. Dravins

High-fidelity spectroscopy presents challenges for both observations and instrument design. High-resolution and high-accuracy spectra are required for verifying hydrodynamic stellar atmospheres and for resolving intergalactic absorption-line structures in quasars. Even with great photon fluxes from large telescopes with matching spectrometers, precise measurements of line profiles and wavelength positions encounter various physical, observational, and instrumental limits. The analysis may be limited by astrophysical and telluric blends, lack of suitable lines, imprecise laboratory wavelengths, or instrumental imperfections. To some extent, such limits can be pushed by forming averages over many similar spectral lines, thus averaging away small random blends and wavelength errors. In situations where theoretical predictions of lineshapes and shifts can be made accurately (e.g., hydrodynamic models of solar-type stars), the consistency between noisy observations and theoretical predictions may be verified; however, this is not feasible for, e.g., the complex of intergalactic metal lines in spectra of distant quasars, where the primary data must come from observations. To more fully resolve lineshapes and interpret wavelength shifts in stars and quasars alike, spectral resolutions of order R = 300 000 or more are required, a level that is becoming (but is not yet) available. A grand challenge remains: to design efficient spectrometers with resolutions approaching R = 1 000 000 for the forthcoming generation of extremely large telescopes. (© 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
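The line-averaging strategy mentioned above is simple to sketch: interpolate each line onto a common velocity grid and average, so that uncorrelated noise and small random blends shrink roughly as the square root of the number of lines. A toy Python version with synthetic Gaussian lines; the sampling, line list, depths, and widths are all assumptions:

```python
import numpy as np

C = 299_792.458  # speed of light, km/s

def average_line_profile(wavelength, flux, line_centers, v_grid):
    """Average many spectral lines on a common velocity grid.
    Uncorrelated noise and small random blends shrink roughly as
    1/sqrt(N_lines) in the mean profile."""
    profiles = []
    for lam0 in line_centers:
        v = C * (wavelength - lam0) / lam0     # Doppler velocity scale
        profiles.append(np.interp(v_grid, v, flux))
    return np.mean(profiles, axis=0)

# Synthetic demo: identical Gaussian absorption lines plus photon noise
rng = np.random.default_rng(3)
wl = np.linspace(5000.0, 5100.0, 200_000)      # dense sampling (assumed)
line_centers = np.linspace(5005.0, 5095.0, 30) # 30 "similar" lines
flux = np.ones_like(wl)
for lam0 in line_centers:
    v = C * (wl - lam0) / lam0
    flux -= 0.4 * np.exp(-0.5 * (v / 3.0) ** 2)   # 3 km/s wide lines
flux += 0.02 * rng.standard_normal(wl.size)

v_grid = np.linspace(-20.0, 20.0, 81)
mean_profile = average_line_profile(wl, flux, line_centers, v_grid)
```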