Random Fields (random + field)


Kinds of Random Fields

  • Gaussian random field
  • Markov random field


  • Selected Abstracts


    Gaussian Markov Random Fields: Theory and Applications

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 3 2007
    Peter Congdon
    No abstract is available for this article. [source]


    Random fields–union intersection tests for detecting functional connectivity in EEG/MEG imaging

    HUMAN BRAIN MAPPING, Issue 8 2009
    Felix Carbonell
    Abstract Electrophysiological (EEG/MEG) imaging challenges statistics by providing two views of the same underlying spatio-temporal brain activity: a topographic view (EEG/MEG) and a tomographic view (EEG/MEG source reconstructions). In common practice, statistical parametric mapping (SPM) is developed separately for each of these two situations. In particular, assessing the statistical significance of functional connectivity is a major challenge in these types of studies. This work introduces statistical tests that simultaneously assess the significance of the spatio-temporal correlation structure between ERP/ERF components and that of their generating sources. We introduce a greatest root statistic as the multivariate test statistic for detecting functional connectivity between two sets of EEG/MEG measurements at a given time instant. We use new results in random field theory to solve the multiple comparisons problem resulting from the correlated test statistics at each time instant. In general, our approach using the union-intersection (UI) principle provides a framework for hypothesis testing about any linear combination of sensor data, which allows the analysis of the correlation structure of both topographic and tomographic views. The performance of the proposed method is illustrated with real ERP data obtained from a face recognition experiment. Hum Brain Mapp 2009. © 2009 Wiley-Liss, Inc. [source]


    Variable smoothing in Bayesian intrinsic autoregressions

    ENVIRONMETRICS, Issue 8 2007
    Mark J. Brewer
    Abstract We introduce an adapted form of the Markov random field (MRF) for Bayesian spatial smoothing with small-area data. This new scheme allows the amount of smoothing to vary in different parts of a map by employing area-specific smoothing parameters, related to the variance of the MRF. We take an empirical Bayes approach, using variance information from a standard MRF analysis to provide prior information for the smoothing parameters of the adapted MRF. The scheme is shown to produce proper posterior distributions for a broad class of models. We test our method on both simulated and real data sets, and for the simulated data sets, the new scheme is found to improve modelling of both slowly-varying levels of smoothness and discontinuities in the response surface. Copyright © 2007 John Wiley & Sons, Ltd. [source]
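The kind of Markov random field smoothing this abstract builds on can be sketched in a few lines. This is a minimal illustration, not the authors' adapted MRF: it uses a 1-D chain of areas, a single global smoothing parameter, and invented data and noise levels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small-area data: noisy observations of a piecewise-smooth signal
n = 50
truth = np.where(np.arange(n) < 25, 1.0, 3.0)
y = truth + rng.normal(0.0, 0.7, n)

# Intrinsic-CAR structure matrix Q (graph Laplacian of a chain of neighbouring areas)
Q = np.diag(np.r_[1.0, np.full(n - 2, 2.0), 1.0])
Q -= np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

def mrf_smooth(y, tau, sigma2=0.49):
    """Posterior mean of x given y, with y ~ N(x, sigma2*I) and x ~ ICAR(tau)."""
    A = np.eye(len(y)) / sigma2 + tau * Q
    return np.linalg.solve(A, y / sigma2)

x_hat = mrf_smooth(y, tau=5.0)
# Smoothing shrinks neighbour-to-neighbour roughness relative to the raw data
assert np.abs(np.diff(x_hat)).mean() < np.abs(np.diff(y)).mean()
```

The authors' scheme replaces the single `tau` with area-specific smoothing parameters informed by a first-pass MRF fit; the sketch only shows the global-parameter baseline.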


    Space varying coefficient models for small area data

    ENVIRONMETRICS, Issue 5 2003
    Renato M. Assunção
    Abstract Many spatial regression problems using area data require more flexible forms than the usual linear predictor for modelling the dependence of responses on covariates. One direction for doing this is to allow the coefficients to vary as smooth functions of the area's geographical location. After presenting examples from the scientific literature where these spatially varying coefficients are justified, we briefly review some of the available alternatives for this kind of modelling. We concentrate on a Bayesian approach for generalized linear models proposed by the author which uses a Markov random field to model the coefficients' spatial dependency. We show that, for normally distributed data, Gibbs sampling can be used to sample from the posterior and we prove a result showing the equivalence between our model and other usual spatial regression models. We illustrate our approach with a number of rather complex applied problems, showing that the method is computationally feasible and provides useful insights in substantive problems. Copyright © 2003 John Wiley & Sons, Ltd. [source]


    A high frequency kriging approach for non-stationary environmental processes

    ENVIRONMETRICS, Issue 5 2001
    Montserrat Fuentes
    Abstract Emission reductions were mandated in the Clean Air Act Amendments of 1990 with the expectation that they would result in major reductions in the concentrations of atmospherically transported pollutants. The emission reductions are intended to reduce public health risks and to protect sensitive ecosystems. To determine whether the emission reductions are having the intended effect on atmospheric concentrations, monitoring data must be analyzed taking into consideration the spatial structure shown by the data. Maps of pollutant concentrations and fluxes are useful over different geopolitical boundaries, to discover when, where, and to what extent the U.S. Nation's air quality is improving or declining. Since the spatial covariance structure shown by the data changes with location, the standard kriging methodology for spatial interpolation cannot be used because it assumes stationarity of the process. We present a new methodology for spatial interpolation of non-stationary processes. In this method the field is represented locally as a stationary isotropic random field, but the parameters of the stationary random field are allowed to vary across space. A procedure for interpolation is presented that uses an expression for the spectral density at high frequencies. New fitting algorithms are developed using spectral approaches. In cases where the data are distributed exactly or approximately on a lattice, it is argued that spectral approaches have potentially enormous computational benefits compared with maximum likelihood. The methods are extended to interpolation questions using approximate Bayesian approaches to account for parameter uncertainty. We develop applications to obtain the total loading of pollutant concentrations and fluxes over different geo-political boundaries. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Geostatistical Analysis of Rainfall

    GEOGRAPHICAL ANALYSIS, Issue 2 2010
    David I. F. Grimes
    Rainfall can be modeled as a spatially correlated random field superimposed on a background mean value; therefore, geostatistical methods are appropriate for the analysis of rain gauge data. Nevertheless, there are certain typical features of these data that must be taken into account to produce useful results, including the generally non-Gaussian mixed distribution, the inhomogeneity and low density of observations, and the temporal and spatial variability of spatial correlation patterns. Many studies show that rigorous geostatistical analysis performs better than other available interpolation techniques for rain gauge data. Important elements are the use of climatological variograms and the appropriate treatment of rainy and nonrainy areas. Benefits of geostatistical analysis for rainfall include ease of estimating areal averages, estimation of uncertainties, and the possibility of using secondary information (e.g., topography). Geostatistical analysis also facilitates the generation of ensembles of rainfall fields that are consistent with a given set of observations, allowing for a more realistic exploration of errors and their propagation in downstream models, such as those used for agricultural or hydrological forecasting. This article provides a review of geostatistical methods used for kriging, exemplified where appropriate by daily rain gauge data from Ethiopia. [source]
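As a companion to the review above, here is a bare-bones ordinary-kriging sketch: 1-D, exponential covariance, no nugget. The gauge locations, values and covariance parameters are invented for illustration.

```python
import numpy as np

# Hypothetical rain-gauge readings at scattered 1-D locations
x_obs = np.array([0.0, 1.0, 2.5, 4.0, 6.0])
z_obs = np.array([5.0, 6.5, 4.0, 3.0, 4.5])

def cov(h, sill=2.0, length=2.0):
    """Exponential covariance C(h) = sill * exp(-|h| / length)."""
    return sill * np.exp(-np.abs(h) / length)

def ordinary_krige(x0):
    n = len(x_obs)
    # Ordinary-kriging system: covariance block plus a Lagrange-multiplier row/column
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(x_obs[:, None] - x_obs[None, :])
    K[n, n] = 0.0
    k = np.ones(n + 1)
    k[:n] = cov(x0 - x_obs)
    w = np.linalg.solve(K, k)
    return w[:n] @ z_obs

# With no nugget, kriging interpolates the gauges exactly
assert abs(ordinary_krige(2.5) - 4.0) < 1e-8
z_mid = ordinary_krige(3.2)   # smooth estimate between gauges
```

A real rainfall application would add the climatological-variogram fitting and rainy/non-rainy treatment the review discusses.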


    A random field model for generating synthetic microstructures of functionally graded materials

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 7 2008
    Sharif Rahman
    Abstract This article presents a new level-cut, inhomogeneous, filtered Poisson random field model for representing two-phase microstructures of statistically inhomogeneous, functionally graded materials with fully penetrable embedded particles. The model involves an inhomogeneous, filtered Poisson random field comprising a sum of deterministic kernel functions that are scaled by random variables and a cut of the filtered Poisson field above a specified level. The resulting level-cut field depends on the Poisson intensity, level, kernel functions, random scaling variables, and random rotation matrices. A reconstruction algorithm including model calibration and Monte Carlo simulation is presented for generating samples of two-phase microstructures of statistically inhomogeneous media. Numerical examples demonstrate that the model developed is capable of producing a wide variety of two- and three-dimensional microstructures of functionally graded composites containing particles of various sizes, shapes, densities, gradations, and orientations. An example involving finite element analyses of random microstructures, leading to statistics of effective properties of functionally graded composites, illustrates the usefulness of the proposed model. Copyright © 2008 John Wiley & Sons, Ltd. [source]
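A toy version of the level-cut, filtered Poisson construction the abstract outlines: Gaussian kernels at Poisson-scattered points, scaled by random variables, summed, and cut above a level. The x-graded intensity and all numerical values are assumptions for illustration, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(9)

n = 128
xx, yy = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")

# Inhomogeneous Poisson points: density increasing with x mimics a graded material
n_pts = rng.poisson(60)
px = np.sqrt(rng.uniform(0.0, 1.0, n_pts))
py = rng.uniform(0.0, 1.0, n_pts)
amp = rng.uniform(0.5, 1.5, n_pts)       # random kernel scalings

# Filtered Poisson field: sum of scaled Gaussian kernels
field = np.zeros((n, n))
for cx, cy, a in zip(px, py, amp):
    field += a * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * 0.03 ** 2))

phase = field > 0.5                      # level cut: the particle phase
# Particle volume fraction increases along the grading direction
left = phase[: n // 2].mean()
right = phase[n // 2 :].mean()
assert right > left
```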


    On the investigation of shell buckling due to random geometrical imperfections implemented using Karhunen–Loève expansions

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 12 2008
    K. J. Craig
    Abstract For the accurate prediction of the collapse behaviour of thin cylindrical shells, it is accepted that geometrical and other imperfections in material properties and loading have to be accounted for in the simulation. There are different methods of incorporating imperfections, depending on the availability of accurate imperfection data. The current paper uses a spectral decomposition of geometrical uncertainty (Karhunen–Loève expansions). To specify the covariance of the required random field, two methods are used. First, available experimentally measured imperfection fields are used as input for a principal component analysis based on pattern recognition literature, thereby reducing the cost of the eigenanalysis. Second, the covariance function is specified analytically and the resulting Fredholm integral equation of the second kind is solved using a wavelet-Galerkin approach. Experimentally determined correlation lengths are used as input for the analytical covariance functions. The above procedure enables the generation of imperfection fields for applications where the geometry is slightly modified from the original measured geometry. For example, 100 shells are perturbed with the resulting random fields obtained from both methods, and the results in the form of temporal normal forces during buckling, as simulated using LS-DYNA®, as well as the statistics of a Monte Carlo analysis of the 100 shells in each case are presented. Although numerically determined mean values of the limit load of the current and another numerical study differ from the experimental results due to the omission of imperfections other than geometrical, the coefficients of variation are shown to be in close agreement. Copyright © 2007 John Wiley & Sons, Ltd. [source]
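The Karhunen–Loève step (eigen-decomposing a covariance and generating random fields from a truncated expansion) can be sketched as follows. This is a 1-D illustration with an assumed squared-exponential covariance, not the measured imperfection covariances used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Discretized covariance on a 1-D grid; the correlation length is an assumed value
n, corr_len = 200, 0.2
x = np.linspace(0.0, 1.0, n)
C = np.exp(-(((x[:, None] - x[None, :]) / corr_len) ** 2))

# Karhunen-Loeve: eigen-decompose the covariance and keep the leading modes
lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]           # sort descending
lam = np.clip(lam, 0.0, None)                # guard against tiny negative eigenvalues
m = 20                                       # truncation level
xi = rng.standard_normal(m)                  # independent N(0,1) KL coordinates
field = phi[:, :m] @ (np.sqrt(lam[:m]) * xi) # one realization of the imperfection field

captured = lam[:m].sum() / lam.sum()
assert captured > 0.99   # here 20 modes capture essentially all the variance
```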


    Heterogeneous growth of cordierite in low P/T Tsukuba metamorphic rocks from central Japan

    JOURNAL OF METAMORPHIC GEOLOGY, Issue 2 2001
    K. Miyazaki
    Abstract This paper examines the spatial statistics of matrix minerals and complex patterned cordierite porphyroblasts in the low-pressure, high-temperature (low P/T) Tsukuba metamorphic rocks from central Japan, using a density correlation function. The cordierite-producing reaction is sillimanite + biotite + quartz = K-feldspar + cordierite + water. The density correlation function shows that quartz is distributed randomly. However, the density correlation functions of biotite, plagioclase and K-feldspar show that their spatial distributions are clearly affected by the formation of cordierite porphyroblasts. These observations suggest that cordierite growth occurred through a selective growth mechanism: quartz adjacent to cordierite has a tendency to prevent the growth of cordierite, whereas other matrix minerals adjacent to cordierite have a tendency to enhance the growth of cordierite. The density correlation functions of complex patterned cordierite porphyroblasts show power-law behaviour. A selective growth mechanism alone cannot explain the origin of the power-law behaviour. Comparison of the morphology and fractal dimension of cordierite with two-dimensional sections from a three-dimensional diffusion-limited aggregation (DLA) suggests that the formation of cordierite porphyroblasts can be modelled as a DLA process. DLA is the simple statistical model for the universal fractal pattern developed in a macroscopic diffusion field. Diffusion-controlled growth interacting with a random field is essential to the formation of a DLA-like pattern. The selective growth mechanism will provide a random noise for the growth of cordierite due to random distribution of quartz. Therefore, a selective growth mechanism coupled with diffusion-controlled growth is proposed to explain the power-law behaviour of the density correlation function of complex patterned cordierite. 
The results in this paper suggest that not only the growth kinetics but also the spatial distribution of matrix minerals affect the progress of the metamorphic reaction and pattern formation of metamorphic rocks. [source]
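A minimal on-lattice diffusion-limited aggregation (DLA) sketch of the growth process invoked above: walkers released on a circle perform random walks and stick on first contact with the cluster. The lattice size, launch radius and particle count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 64
occ = np.zeros((N, N), dtype=bool)
occ[N // 2, N // 2] = True                        # seed particle
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def grow_one():
    while True:                                   # relaunch until a walker sticks
        ang = rng.uniform(0.0, 2.0 * np.pi)
        i = int(N // 2 + 20 * np.cos(ang))        # launch on a circle round the seed
        j = int(N // 2 + 20 * np.sin(ang))
        for d in rng.integers(0, 4, size=20000):  # capped random walk
            if occ[i - 1:i + 2, j - 1:j + 2].any():
                occ[i, j] = True                  # touching the cluster: stick here
                return
            di, dj = steps[d]
            i, j = i + di, j + dj
            if not (1 <= i < N - 1 and 1 <= j < N - 1):
                break                             # wandered off the grid: relaunch

for _ in range(80):
    grow_one()

assert occ.sum() == 81                            # seed + 80 aggregated walkers
```

The resulting cluster is branched rather than compact; measuring its density correlation function would show the power-law behaviour the paper compares cordierite against.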


    Blur-generated non-separable space–time models

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 4 2000
    Patrick E. Brown
    Statistical space–time modelling has traditionally been concerned with separable covariance functions, meaning that the covariance function is a product of a purely temporal function and a purely spatial function. We draw attention to a physical dispersion model which could model phenomena such as the spread of an air pollutant. We show that this model has a non-separable covariance function. The model is well suited to a wide range of realistic problems which will be poorly fitted by separable models. The model operates successively in time: the spatial field at time t + 1 is obtained by 'blurring' the field at time t and adding a spatial random field. The model is first introduced at discrete time steps, and the limit is taken as the length of the time steps goes to 0. This gives a consistent continuous model with parameters that are interpretable in continuous space and independent of sampling intervals. Under certain conditions the blurring must be a Gaussian smoothing kernel. We also show that the model is generated by a stochastic differential equation which has been studied by several researchers previously. [source]
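One discrete-time step of the blur-and-add construction is easy to simulate. The sketch below applies a circular Gaussian blur via the FFT and adds a mild decay factor to keep the toy model stationary; the paper's continuous-time limit has no such factor, and all parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 200
x = np.arange(n)
kernel = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)   # Gaussian blur kernel
kernel /= kernel.sum()
k_hat = np.fft.fft(np.fft.ifftshift(kernel))        # centre the kernel at index 0

def step(field):
    """X_{t+1} = decay * blur(X_t) + fresh spatial innovation."""
    blurred = np.real(np.fft.ifft(np.fft.fft(field) * k_hat))
    return 0.9 * blurred + rng.normal(0.0, 0.3, n)

fields = [rng.normal(0.0, 1.0, n)]
for _ in range(30):
    fields.append(step(fields[-1]))

# After burn-in, nearby times are more correlated than distant times:
# the blur propagates the field forward while innovations decorrelate it
late = fields[10:]
c = lambda a, b: np.corrcoef(a, b)[0, 1]
lag_corr = lambda k: np.mean([c(late[t], late[t + k]) for t in range(10)])
assert lag_corr(1) > lag_corr(8)
```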


    A continuous latent spatial model for crack initiation in bone cement

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 1 2008
    Elizabeth A. Heron
    Summary. Hip replacements provide a means of achieving a higher quality of life for individuals who have, through aging or injury, accumulated damage to their natural joints. This is a very common operation, with over a million people a year benefiting from the procedure. The replacements themselves fail mainly as a result of the mechanical loosening of the components of the artificial joint due to damage accumulation. This damage accumulation consists of the initiation and growth of cracks in the bone cement which is used to fixate the replacement in the human body. The data come from laboratory experiments that are designed to assess the effectiveness of the bone cement in resisting damage. We examine the properties of the bone cement, with the aim being to estimate the effect that both observable and unobservable spatially varying factors have on causing crack initiation. To do this, an explicit model for the damage process is constructed taking into account the tension and compression at different locations in the specimens. A gamma random field is used to model any latent spatial factors that may be influential in crack initiation. Bayesian inference is carried out for the parameters of this field and related covariates by using Markov chain Monte Carlo techniques. [source]


    Non-ideal evolution of non-axisymmetric, force-free magnetic fields in a magnetar

    MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2008
    A. Mastrano
    ABSTRACT Recent numerical magnetohydrodynamic calculations by Braithwaite and collaborators support the 'fossil field' hypothesis regarding the origin of magnetic fields in compact stars and suggest that the resistive evolution of the fossil field can explain the reorganization and decay of magnetar magnetic fields. Here, these findings are modelled analytically by allowing the stellar magnetic field to relax through a quasi-static sequence of non-axisymmetric, force-free states, by analogy with spheromak relaxation experiments, starting from a random field. Under the hypothesis that the force-free modes approach energy equipartition in the absence of resistivity, the output of the numerical calculations is semiquantitatively recovered: the field settles down to a linked poloidal–toroidal configuration, which inflates and becomes more toroidal as time passes. A qualitatively similar (but not identical) end state is reached if the magnetic field evolves by exchanging helicity between small and large scales according to an α-dynamo-like mean-field mechanism, arising from the fluctuating electromotive force produced by the initial random field. The impossibility of matching a force-free internal field to a potential exterior field is discussed in the magnetar context. [source]


    Stochastic perturbation approach to the wavelet-based analysis

    NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 4 2004
    M. Kamiński
    Abstract The wavelet-based decomposition of random variables and fields is proposed here in the context of application of the stochastic second order perturbation technique. A general methodology is employed for the first two probabilistic moments of a linear algebraic equations system solution, which are obtained instead of a single solution projection in the deterministic case. The perturbation approach application allows determination of the closed formulas for a wavelet decomposition of random fields. Next, these formulas are tested by symbolic projection of some elementary random field. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Organic glasses: cluster structure of the random energy landscape

    ANNALEN DER PHYSIK, Issue 12 2009
    S.V. Novikov
    Abstract An appropriate model for the random energy landscape in organic glasses is a spatially correlated Gaussian field. We calculated the distribution of the average value of a Gaussian random field in a finite domain. The results of the calculation demonstrate a strong dependence of the width of the distribution on the spatial correlations of the field. Comparison with simulation results for the cluster-size distribution indicates that the distribution of the average field could serve as a useful tool for estimating the asymptotic behavior of the cluster-size distribution for "deep" clusters, where the value of the field on each site is much greater than the rms disorder. We also demonstrate a significant modification of the properties of energetic disorder in organic glasses in the vicinity of the electrode. [source]
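The central observation, that the width of the distribution of a domain average depends strongly on the field's spatial correlation, is quick to reproduce for a 1-D Gaussian field. The exponential correlation form and all lengths below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(10)

def avg_std(corr_len, n=64, n_samp=2000):
    """Std of the domain average of a unit-variance Gaussian field of given correlation length."""
    x = np.arange(n)
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n))    # jitter for numerical stability
    fields = L @ rng.standard_normal((n, n_samp))    # one correlated field per column
    return fields.mean(axis=0).std()

s_short = avg_std(1.0)    # nearly independent sites: averaging cancels fluctuations
s_long = avg_std(20.0)    # strongly correlated sites: the average stays broad
assert s_long > 2 * s_short
```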


    Generalized Hierarchical Multivariate CAR Models for Areal Data

    BIOMETRICS, Issue 4 2005
    Xiaoping Jin
    Summary In the fields of medicine and public health, a common application of areal data models is the study of geographical patterns of disease. When we have several measurements recorded at each spatial location (for example, information on p ≥ 2 diseases from the same population groups or regions), we need to consider multivariate areal data models in order to handle the dependence among the multivariate components as well as the spatial dependence between sites. In this article, we propose a flexible new class of generalized multivariate conditionally autoregressive (GMCAR) models for areal data, and show how it enriches the MCAR class. Our approach differs from earlier ones in that it directly specifies the joint distribution for a multivariate Markov random field (MRF) through the specification of simpler conditional and marginal models. This in turn leads to a significant reduction in the computational burden in hierarchical spatial random effect modeling, where posterior summaries are computed using Markov chain Monte Carlo (MCMC). We compare our approach with existing MCAR models in the literature via simulation, using average mean square error (AMSE) and a convenient hierarchical model selection criterion, the deviance information criterion (DIC; Spiegelhalter et al., 2002, Journal of the Royal Statistical Society, Series B, 64, 583–639). Finally, we offer a real-data application of our proposed GMCAR approach that models lung and esophagus cancer death rates during 1991–1998 in Minnesota counties. [source]
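A univariate proper CAR model, the building block from which GMCAR models are assembled, can be sketched on a small lattice. The precision form Q = τ(D − ρW) is the standard proper-CAR parameterization; the ρ and τ values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Adjacency of a 4x4 lattice of areas
side = 4
n = side * side
W = np.zeros((n, n))
for i in range(side):
    for j in range(side):
        k = i * side + j
        if i + 1 < side: W[k, k + side] = W[k + side, k] = 1.0
        if j + 1 < side: W[k, k + 1] = W[k + 1, k] = 1.0
D = np.diag(W.sum(1))

rho, tau = 0.9, 1.0
Q = tau * (D - rho * W)              # proper-CAR precision matrix

# Draw samples x ~ N(0, Q^{-1}) via the Cholesky factor of Q
L = np.linalg.cholesky(Q)
z = rng.standard_normal((n, 5000))
x = np.linalg.solve(L.T, z)          # one sample per column

S = np.cov(x)
corr = S / np.sqrt(np.outer(np.diag(S), np.diag(S)))
# Adjacent areas are more strongly correlated than opposite corners
assert corr[0, 1] > corr[0, n - 1]
```

The GMCAR class then couples several such fields through conditional and marginal specifications rather than a single joint precision.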


    Spatial point-process statistics: concepts and application to the analysis of lead contamination in urban soil

    ENVIRONMETRICS, Issue 4 2005
    Christian Walter
    Abstract This article explores the use of spatial point-process analysis as an aid to describe topsoil lead distribution in urban environments. The data used were collected in Glebe, an inner suburb of Sydney. The approach focuses on the locations of punctual events defining a point pattern, which can be statistically described through local intensity estimates and between-point distance functions. F-, G- and K-surfaces of a marked spatial point pattern were described and used to estimate nearest distance functions over a sliding band of quantiles belonging to the marking variable. This provided a continuous view of the point pattern properties as a function of the marking variable. Several random fields were simulated by selecting points from random, clustered or regular point processes and diffusing them. Recognition of the underlying point process using variograms derived from dense sampling was difficult because, structurally, the variograms were very similar. Point-event distance functions were useful complementary tools that, in most cases, enabled clear recognition of the clustered processes. Spatial sampling quantile point pattern analysis was defined and applied to the Glebe data set. The analysis showed that the highest lead concentrations were strongly clustered. The comparison of this data set with the simulation confidence limits of a Poisson process, a short-radius clustered point process and a geostatistical simulation showed a random process for the third quartile of lead concentrations but strong clustering for the data in the upper quartile. Thus the distribution of topsoil lead concentrations over Glebe may have resulted from several contamination processes, mainly from regular or random processes with large diffusion ranges and short-range clustered processes for the hot spots. Point patterns with the same characteristics as the Glebe experimental pattern could be generated by separate additive geostatistical simulation. Spatial sampling quantile point-pattern statistics can, in an easy and accurate way, be used complementarily with geostatistical methods. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    The bootstrap and cross-validation in neuroimaging applications: Estimation of the distribution of extrema of random fields for single volume tests, with an application to ADC maps

    HUMAN BRAIN MAPPING, Issue 10 2007
    Roberto Viviani
    Abstract We discuss the assessment of signal change in single magnetic resonance images (MRI) based on quantifying significant departure from a reference distribution estimated from a large sample of normal subjects. The parametric approach is to build a test based on the expected distribution of extrema in random fields. However, in conditions where the variance is not uniform across the volume and the smoothness of the images is moderate to low, this test may be rather conservative. Furthermore, parametric tests are limited to datasets for which distributional assumptions hold. This paper investigates resampling methods that improve statistical tests for signal changes in single images in such adverse conditions, and that can be used for the assessment of images taken for clinical purposes. Two methods, the bootstrap and cross-validation, are compared. It is shown that the bootstrap may fail to provide a good estimate of the distribution of extrema of parametric maps. In contrast, calibration of the significance threshold by means of cross-validation (or related sampling without replacement techniques) address three issues at once: improved power, better voxel-by-voxel estimate of variance by local pooling, and adaptation to departures from ideal distributional assumptions on the signal. We apply the cross-validated tests to apparent diffusion coefficient maps, a type of MRI capable of detecting changes in the microstructural organization of brain parenchyma. We show that deviations from parametric assumptions are strong enough to cast doubt on the correctness of parametric tests for these images. As case studies, we present parametric maps of lesions in patients suffering from stroke and glioblastoma at different stages of evolution. Hum Brain Mapp 2007. © 2007 Wiley-Liss, Inc. [source]
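The resampling idea here, calibrating a threshold for the extremum of a statistic map by repeatedly subsampling a normal reference group without replacement, can be sketched as follows. The group sizes, voxel count and the t-like statistic are illustrative, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical normal reference group: 40 subjects, 500 voxels
n_subj, n_vox = 40, 500
ref = rng.normal(0.0, 1.0, (n_subj, n_vox))

maxima = []
for _ in range(200):
    idx = rng.permutation(n_subj)[:20]            # subsample without replacement
    m = ref[idx].mean(0) / (ref[idx].std(0, ddof=1) / np.sqrt(20))
    maxima.append(np.abs(m).max())                # extremum of the t-like map
thresh = np.quantile(maxima, 0.95)

# The calibrated threshold controls the family-wise error over all voxels,
# so it sits well above the single-voxel 97.5% normal quantile of 1.96
assert thresh > 1.96
```

For real ADC maps the reference distribution is non-Gaussian and spatially non-uniform, which is exactly when this calibration beats the parametric extrema formula.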


    Apparent/spurious multifractality of data sampled from fractional Brownian/Lévy motions

    HYDROLOGICAL PROCESSES, Issue 15 2010
    Shlomo P. Neuman
    Abstract Many earth and environmental variables appear to be self-affine (monofractal) or multifractal with spatial (or temporal) increments having exceedance probability tails that decay as powers of −α, where 1 < α ≤ 2. The literature considers self-affine and multifractal modes of scaling to be fundamentally different, the first arising from additive and the second from multiplicative random fields or processes. We demonstrate theoretically that data having finite support, sampled across a finite domain from one or several realizations of an additive Gaussian field constituting fractional Brownian motion (fBm) characterized by α = 2, give rise to positive square (or absolute) increments which behave as if the field was multifractal when in fact it is monofractal. Sampling such data from additive fractional Lévy motions (fLm) with 1 < α < 2 causes them to exhibit spurious multifractality. Deviations from apparent multifractal behaviour at small and large lags are due to nonzero data support and finite domain size, unrelated to noise or undersampling (the causes cited for such deviations in the literature). Our analysis is based on a formal decomposition of anisotropic fLm (fBm when α = 2) into a continuous hierarchy of statistically independent and homogeneous random fields, or modes, which captures the above behaviour in terms of only E + 3 parameters where E is Euclidean dimension. Although the decomposition is consistent with a hydrologic rationale proposed by Neuman (2003), its mathematical validity is independent of such a rationale. Our results suggest that it may be worth checking how closely variables considered in the literature to be multifractal (e.g. experimental and simulated turbulent velocities, some simulated porous flow velocities, landscape elevations, rain intensities, river network area and width functions, river flow series, soil water storage and physical properties) fit the simpler monofractal model considered in this paper (such an effort would require paying close attention to the support and sampling window scales of the data). Parsimony would suggest associating variables found to fit both models equally well with the latter. Copyright © 2010 John Wiley & Sons, Ltd. [source]
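The monofractal scaling at issue can be checked empirically with structure functions. The sketch below uses ordinary Brownian motion (fBm with H = 1/2, i.e. α = 2) and recovers the linear scaling ζ(q) = qH expected of a self-affine record; record length and lags are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)

# Brownian motion sampled on a grid: a monofractal record with H = 0.5
n = 2 ** 16
bm = np.cumsum(rng.normal(0.0, 1.0, n))

lags = np.array([1, 2, 4, 8, 16, 32])

def zeta(q):
    """Scaling exponent of the order-q structure function S_q(lag) ~ lag**zeta(q)."""
    Sq = [np.mean(np.abs(bm[lag:] - bm[:-lag]) ** q) for lag in lags]
    return np.polyfit(np.log(lags), np.log(Sq), 1)[0]

# Monofractal: zeta(q) is linear in q with slope H = 0.5
assert abs(zeta(2) - 1.0) < 0.1
assert abs(zeta(1) - 0.5) < 0.05
```

Applying the same estimator to finite-support, finite-domain samples is where the spurious curvature of ζ(q) discussed in the abstract shows up.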


    Is representative elementary area defined by a simple mixing of variable small streams in headwater catchments?

    HYDROLOGICAL PROCESSES, Issue 5 2010
    Yuko Asano
    Abstract The spatial variability of hydrology may decrease with an increase in catchment area as a result of mixing of numerous small-scale hydrological conditions. At some point, it is possible that a threshold area, the representative elementary area (REA), can be identified beyond which an average hydrologic response occurs. This hypothesis has been tested mainly via numerical simulations, with only a few field studies involving simple mixing. We tested this premise quantitatively using dissolved silica (SiO2) concentrations at 96 locations, from zero-order hollow discharges through sixth-order streams, collected under low-flow conditions within the 4.27-km² Fudoji catchment. The catchment, in the Tanakami Mountains of central Japan, possesses a simple topography consisting almost solely of hillslopes and stream channels, with uniform bedrock geology, soil type and land use. Dissolved SiO2 provides a useful tracer in hydrological studies insofar as it is responsive to flowpath depth on hillslopes of uniform geology. Our results demonstrate that even in a catchment with an almost homogeneous geology and simple topography, dissolved SiO2 concentrations in zero-order hollow discharges varied widely in space and became similar among sampling locations with areas of more than 10⁻¹–10⁰ km². Relationships between stream order and the standard deviation of SiO2 concentration closely matched the theoretical predictions from simple mixing of random fields. That is, our field data supported the existence of the REA and showed that the REA was produced by the simple mixing of numerous small-scale hydrological conditions. The study emphasizes the need to consider both the heterogeneous nature of small-scale hydrology and the landscape structure when assessing the characteristics of catchment runoff. Copyright © 2010 John Wiley & Sons, Ltd. [source]
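The simple-mixing prediction is straightforward to simulate: a stream of higher order averages more independent hollow-scale signals, so between-catchment spread falls roughly as the inverse square root of the number mixed. The SiO2-like values and the hollows-per-order counts below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical hollow-scale SiO2 signals: 500 replicate landscapes, 1024 hollows each
n_hollows = 1024
silica = rng.normal(10.0, 4.0, (500, n_hollows))

spread = {}
for order, n_mix in enumerate([1, 4, 16, 64, 256], start=1):
    mixed = silica[:, :n_mix].mean(axis=1)   # catchment-scale simple mixture
    spread[order] = mixed.std()              # between-catchment spread at this order

# Spread shrinks monotonically with mixing scale, ~1/sqrt(n_mix)
assert spread[1] > spread[3] > spread[5]
assert spread[5] < spread[1] / 10            # 256 hollows: roughly a 16-fold reduction
```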


    Estimating the snow water equivalent on the Gatineau catchment using hierarchical Bayesian modelling

    HYDROLOGICAL PROCESSES, Issue 4 2006
    Ousmane Seidou
    Abstract One of the most important parameters for spring runoff forecasting is the snow water equivalent on the watershed, often estimated by kriging using in situ measurements and, in some cases, by remote sensing. It is known that kriging techniques provide little information on uncertainty aside from the kriging variance. In this paper, two approaches using Bayesian hierarchical modelling are compared with ordinary kriging; Bayesian hierarchical modelling is a flexible and general statistical approach that uses observations and prior knowledge to make inferences on both unobserved data (snow water equivalent at locations on the watershed where there are no measurements) and on the parameters (influence of the covariables, spatial interactions between the values of the process at various sites). The first approach models snow water equivalent as a Gaussian spatial process whose mean varies in space, and the other uses the theory of Markov random fields. Although kriging and the Bayesian models give similar point estimates, the latter provide more information on the distribution of the snow water equivalent. Furthermore, kriging may considerably underestimate interpolation error. Copyright © 2006 Environment Canada. Published by John Wiley & Sons, Ltd. [source]
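
A minimal simple-kriging sketch shows the two quantities the abstract contrasts with the Bayesian output: a point estimate and a kriging variance, but no full predictive distribution. The station coordinates, SWE values, exponential covariance and its parameters are all hypothetical; a Bayesian hierarchical model would place priors on what is fixed here.

```python
import numpy as np

def exp_cov(d, sill=1.0, corr_len=50.0):
    """Hypothetical exponential covariance of normalized SWE (distance in km)."""
    return sill * np.exp(-d / corr_len)

x_obs = np.array([0.0, 30.0, 80.0, 120.0])   # snow-course locations (km)
swe   = np.array([110., 95., 130., 150.])    # observed SWE (mm), made up
x0 = 60.0                                    # unmeasured prediction site

D = np.abs(x_obs[:, None] - x_obs[None, :])  # station-station distances
K = exp_cov(D)                               # covariance between observations
k0 = exp_cov(np.abs(x_obs - x0))             # covariance with the target site

w = np.linalg.solve(K, k0)                   # simple-kriging weights
mu = swe.mean()                              # known-mean assumption
pred = mu + w @ (swe - mu)                   # kriging point estimate
var = exp_cov(0.0) - w @ k0                  # kriging variance (normalized units)
```

The single number `var` is the only uncertainty statement kriging returns, which is the limitation motivating the hierarchical Bayesian alternatives.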


    Effect of spatial variability of cross-correlated soil properties on bearing capacity of strip footing

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 1 2010
    Sung Eun Cho
    Abstract Geotechnical engineering problems are characterized by many sources of uncertainty. Some of these sources are connected to the uncertainties of soil properties involved in the analysis. In this paper, a numerical procedure for a probabilistic analysis that considers the spatial variability of cross-correlated soil properties is presented and applied to study the bearing capacity of spatially random soil with different autocorrelation distances in the vertical and horizontal directions. The approach integrates a commercial finite difference method and random field theory into the framework of a probabilistic analysis. Two-dimensional cross-correlated non-Gaussian random fields are generated based on a Karhunen–Loève expansion in a manner consistent with a specified marginal distribution function, an autocorrelation function, and cross-correlation coefficients. A Monte Carlo simulation is then used to determine the statistical response based on the random fields. A series of analyses was performed to study the effects of uncertainty due to the spatial heterogeneity on the bearing capacity of a rough strip footing. The simulations provide insight into the application of uncertainty treatment to geotechnical problems and show the importance of the spatial variability of soil properties with regard to the outcome of a probabilistic assessment. Copyright © 2009 John Wiley & Sons, Ltd. [source]
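
The generation step can be sketched in discretized 1-D form: eigendecompose the autocovariance (a discrete Karhunen–Loève expansion), correlate the KL coefficients of two fields through a Cholesky factor of the cross-correlation matrix, and map the Gaussian fields to a non-Gaussian (here lognormal) marginal. The correlation length, the cross-correlation of −0.5 and the lognormal parameter are illustrative, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(1)

n, L, corr_len = 100, 10.0, 2.0              # grid, domain (m), corr. length (m)
x = np.linspace(0.0, L, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # autocovariance

vals, vecs = np.linalg.eigh(C)
idx = np.argsort(vals)[::-1][:20]            # truncate: keep 20 largest KL modes
lam, phi = vals[idx], vecs[:, idx]

rho = -0.5                                   # e.g. cohesion-friction cross-corr.
Lc = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

xi = Lc @ rng.standard_normal((2, len(idx))) # cross-correlated KL coefficients
g = (phi * np.sqrt(lam)) @ xi.T              # two correlated Gaussian fields
fields = np.exp(0.1 * g)                     # lognormal marginal (sigma_ln = 0.1)
```

Each Monte Carlo realization would feed a pair of such fields into the finite difference bearing-capacity solve.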


    Analysis and implementation issues for the numerical approximation of parabolic equations with random coefficients

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 6-7 2009
    F. Nobile
    Abstract We consider the problem of numerically approximating statistical moments of the solution of a time-dependent linear parabolic partial differential equation (PDE), whose coefficients and/or forcing terms are spatially correlated random fields. The stochastic coefficients of the PDE are approximated by truncated Karhunen–Loève expansions driven by a finite number of uncorrelated random variables. After approximating the stochastic coefficients, the original stochastic PDE turns into a new deterministic parametric PDE of the same type, the dimension of the parameter set being equal to the number of random variables introduced. After proving that the solution of the parametric PDE problem is analytic with respect to the parameters, we consider global polynomial approximations based on tensor product, total degree or sparse polynomial spaces and constructed by either a Stochastic Galerkin or a Stochastic Collocation approach. We derive convergence rates for the different cases and present numerical results that show how these approaches are a valid alternative to the more traditional Monte Carlo Method for this class of problems. Copyright © 2009 John Wiley & Sons, Ltd. [source]
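
The collocation idea can be sketched with a single Gaussian parameter, using a closed-form surrogate u(y) = exp(a*y) in place of the parametric PDE solve; because the parametric solution is analytic, a handful of Gauss–Hermite collocation points beats plain Monte Carlo by many digits. The surrogate, the coefficient a and the sample sizes are invented for illustration.

```python
import numpy as np

a = 0.3

def u(y):
    # Stand-in for the parametric PDE solution evaluated at one spatial point.
    return np.exp(a * y)

exact_mean = np.exp(a**2 / 2)                # E[exp(aY)] for Y ~ N(0, 1)

# Stochastic collocation: 5 probabilists' Gauss-Hermite nodes and weights.
nodes, weights = np.polynomial.hermite_e.hermegauss(5)
colloc_mean = (weights @ u(nodes)) / np.sqrt(2 * np.pi)

# Plain Monte Carlo with 10,000 "PDE solves" for comparison.
rng = np.random.default_rng(0)
mc_mean = u(rng.standard_normal(10_000)).mean()
```

Five collocation solves land within quadrature precision of the exact mean, while ten thousand Monte Carlo solves retain an O(N^-1/2) error, which is the trade-off the abstract quantifies in higher dimensions.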


    On the investigation of shell buckling due to random geometrical imperfections implemented using Karhunen–Loève expansions

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 12 2008
    K. J. Craig
    Abstract For the accurate prediction of the collapse behaviour of thin cylindrical shells, it is accepted that geometrical and other imperfections in material properties and loading have to be accounted for in the simulation. There are different methods of incorporating imperfections, depending on the availability of accurate imperfection data. The current paper uses a spectral decomposition of geometrical uncertainty (Karhunen–Loève expansions). To specify the covariance of the required random field, two methods are used. First, available experimentally measured imperfection fields are used as input for a principal component analysis based on the pattern recognition literature, thereby reducing the cost of the eigenanalysis. Second, the covariance function is specified analytically and the resulting Fredholm integral equation of the second kind is solved using a wavelet-Galerkin approach. Experimentally determined correlation lengths are used as input for the analytical covariance functions. The above procedure enables the generation of imperfection fields for applications where the geometry is slightly modified from the original measured geometry. For example, 100 shells are perturbed with the resulting random fields obtained from both methods, and the results in the form of temporal normal forces during buckling, as simulated using LS-DYNA®, as well as the statistics of a Monte Carlo analysis of the 100 shells in each case are presented. Although numerically determined mean values of the limit load of the current and another numerical study differ from the experimental results due to the omission of imperfections other than geometrical ones, the coefficients of variation are shown to be in close agreement. Copyright © 2007 John Wiley & Sons, Ltd. [source]
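
The first (data-driven) route can be sketched with PCA via the SVD: a set of measured imperfection fields yields empirical KL modes, from which new random imperfection fields are drawn by recombining modes with random amplitudes. Since no real scans are available here, the "measurements" are synthesized from three hidden modes plus noise purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical measured imperfection data: rows are scanned shells, columns
# are circumferential grid points. Real scan data would replace this block.
n_shells, n_points = 30, 200
modes_true = rng.standard_normal((3, n_points))
coeffs = rng.standard_normal((n_shells, 3))
measured = coeffs @ modes_true + 0.01 * rng.standard_normal((n_shells, n_points))

# PCA of the centred data: right singular vectors are the empirical KL modes.
mean_imp = measured.mean(axis=0)
U, s, Vt = np.linalg.svd(measured - mean_imp, full_matrices=False)

# Draw one new imperfection field from the k retained modes.
k = 3
xi = rng.standard_normal(k)                      # random mode amplitudes
new_field = mean_imp + (s[:k] / np.sqrt(n_shells - 1)) * xi @ Vt[:k]
```

Each perturbed shell in a Monte Carlo buckling study would add one such `new_field` to the nominal geometry.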


    Parameter estimation in Bayesian reconstruction of SPECT images: An aid in nuclear medicine diagnosis

    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 1 2004
    Antonio López
    Abstract Despite the adequacy of Bayesian methods to reconstruct nuclear medicine SPECT (single-photon emission computed tomography) images, they are rarely used in everyday medical practice. This is primarily because of their computational cost and the need to appropriately select the prior model hyperparameters. We propose a simple procedure for the estimation of these hyperparameters and the reconstruction of the original image and test the procedure on both synthetic and real SPECT images. The experimental results demonstrate that the proposed hyperparameter estimation method produces satisfactory reconstructions. Although we have used generalized Gaussian Markov random fields (GGMRF) as prior models, the proposed estimation method can be applied to any priors with convex potential and tractable partition function with respect to the scale hyperparameter. © 2004 Wiley Periodicals, Inc. Int J Imaging Syst Technol 14, 21–27, 2004; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20003 [source]
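
For intuition about the prior's role, a 1-D denoising stand-in for the tomographic problem with the quadratic (p = 2) member of the GGMRF family reduces the MAP estimate to a linear solve. The scale hyperparameter `lam`, which the article's method would estimate rather than fix, and the piecewise-constant test signal are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

n, lam = 50, 4.0
truth = np.where(np.arange(n) < n // 2, 1.0, 3.0)   # piecewise-constant signal
y = truth + 0.3 * rng.standard_normal(n)            # noisy observations

D = np.diff(np.eye(n), axis=0)                      # first-difference operator
# MAP with a quadratic MRF prior: argmin ||x - y||^2 + lam * ||D x||^2,
# solved via its normal equations.
x_map = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
```

With the general GGMRF potential (p not equal to 2) the objective stays convex but nonquadratic, which is why hyperparameter selection and iterative solvers dominate the computational cost the abstract mentions.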


    The local theory of the cosmic skeleton

    MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 2 2009
    D. Pogosyan
    ABSTRACT The local theory of the critical lines of two- and three-dimensional random fields that underlie the cosmic structures is presented. In the context of the cosmological matter distribution, the subset of critical lines of the three-dimensional density field serves to delineate the skeleton of the observed filamentary structure at large scales. A stiff approximation used to quantitatively describe the filamentary skeleton shows that the flux of the skeleton lines is related to the average Gaussian curvature of the (N − 1)-dimensional sections of the field. The distribution of the length of the critical lines with threshold is analysed in detail, while the extended descriptors of the skeleton (its curvature and singular points) are introduced and briefly described. Theoretical predictions are compared to measurements of the skeleton in realizations of Gaussian random fields in two and three dimensions. It is found that the stiff approximation accurately predicts the shape of the differential length, allows for analytical insight and yields explicit closed-form solutions. Finally, it provides a simple classification of the singular points of the critical lines: (i) critical points; (ii) bifurcation points and (iii) sloping plateaux. [source]


    Multiscale morphology of the galaxy distribution

    MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 3 2007
    Enn Saar
    ABSTRACT Many statistical methods have been proposed in recent years for analysing the spatial distribution of galaxies. Very few of them, however, can properly handle the border effects of complex observational sample volumes. In this paper, we first show how to calculate the Minkowski Functionals (MFs) taking these border effects into account. We then present a multiscale extension of the MFs which gives us more information about how the galaxies are spatially distributed. A range of examples using Gaussian random fields illustrates the results. Finally, we have applied the Multiscale Minkowski Functionals (MMFs) to the 2dF Galaxy Redshift Survey data. The MMFs clearly indicate an evolution of morphology with scale. We also compare the real 2dF catalogue with mock catalogues and find that Λ cold dark matter simulations roughly fit the data, except at the finest scale. [source]
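
The simplest Minkowski functional, the excursion-set area fraction, has the closed Gaussian form erfc(nu/sqrt(2))/2 and is easy to check on a synthetic field. The grid size, smoothing scale and threshold below are arbitrary, and border effects are sidestepped here only because an FFT-generated field is periodic; handling real survey borders is exactly the hard part the paper addresses.

```python
import numpy as np
from math import erfc

rng = np.random.default_rng(4)

# Smoothed Gaussian random field: white noise filtered in Fourier space.
n = 256
white = rng.standard_normal((n, n))
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
filt = np.exp(-(kx**2 + ky**2) / (2 * 0.1**2))        # Gaussian low-pass
field = np.fft.ifft2(np.fft.fft2(white) * filt).real
field = (field - field.mean()) / field.std()          # zero mean, unit variance

# First Minkowski functional: area fraction above threshold nu, versus the
# analytic Gaussian prediction.
nu = 1.0
area_fraction = (field > nu).mean()
expected = erfc(nu / np.sqrt(2)) / 2
```

Deviations of the measured functionals from these Gaussian curves are what signal non-Gaussian morphology in the galaxy distribution.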


    Stochastic perturbation approach to the wavelet-based analysis

    NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 4 2004
    M. Kami
    Abstract The wavelet-based decomposition of random variables and fields is proposed here in the context of the application of the stochastic second-order perturbation technique. A general methodology is employed for the first two probabilistic moments of the solution of a linear algebraic equation system, which are obtained instead of a single solution projection as in the deterministic case. Application of the perturbation approach allows determination of closed formulas for a wavelet decomposition of random fields. Next, these formulas are tested by symbolic projection of an elementary random field. Copyright © 2004 John Wiley & Sons, Ltd. [source]
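
A single Haar level illustrates the property such perturbation formulas rely on: an orthonormal wavelet decomposition redistributes, but exactly preserves, the second moment of a sampled random field. The field here is plain white noise chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

x = rng.standard_normal(64)             # sampled 1-D random field

# One Haar analysis level: orthonormal averaging and differencing of pairs.
avg = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
det = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients

energy_x = np.sum(x**2)
energy_w = np.sum(avg**2) + np.sum(det**2)   # equal by orthonormality
```

Because the transform is orthonormal, second-moment formulas derived in the wavelet domain carry over exactly to the original field.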


    Relaxor-based thin film memories and the depolarizing field problem

    PHYSICA STATUS SOLIDI (A) APPLICATIONS AND MATERIALS SCIENCE, Issue 6 2007
    Manuel I. Marqués
    Abstract A simple thin film memory model based on first-neighbor interactions is studied in detail. We find that the minimum film thickness (D) required for spontaneous polarization, as a function of the lateral size of the memory (L), the screening of the charges at the substrate (S) and the strength of the ferroelectric interaction (J), is D = SL/2J. We propose a new mechanism for miniaturizing thin film memories to a single layer, based on the use of relaxor ferroelectrics instead of regular ferroelectrics. Under the hypothesis of an internal organization of the random fields inside the nanofilm, we show analytically how it should be possible to miniaturize the memory to a width as small as D = 1 for any value of L, J and S. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
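
The article's critical-thickness relation can be written down directly; the parameter values in this sketch are arbitrary, since the model's units and magnitudes are not given in the abstract.

```python
def critical_thickness(S, L, J):
    """Minimum thickness D = S*L/(2*J) (model units) for spontaneous polarization."""
    return S * L / (2 * J)

# Illustrative values only: screening S, lateral size L, interaction J.
d_min = critical_thickness(1.0, 10.0, 5.0)
```

The relaxor mechanism the abstract proposes would remove this L-dependence, allowing D = 1 regardless of the lateral size.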


    An investigation of tests for linearity and the accuracy of likelihood based inference using random fields

    THE ECONOMETRICS JOURNAL, Issue 2 2002
    Christian M. Dahl
    Summary We analyze the random field regression model approach recently suggested by Hamilton (2001, Econometrica, 69, 537–73). We show through extensive simulation studies that although the random field approach is indeed very closely related to the non-parametric spline smoother it seems to offer several advantages over the latter. First, tests for neglected nonlinearity based on Hamilton's random field approach seem to be more powerful than existing test statistics developed within the context of the multivariate spline smoother approach. Second, the convergence properties of the random field approach in limited samples appear to be significantly better than those of the multivariate spline smoother. Finally, when compared to the popular neural network approach the random field approach also performs very well. These results provide strong support for the view of Harvey and Koopman (2000, Econometrics Journal, 3, 84–107) that model-based kernels or splines have a sounder statistical justification than those typically used in non-parametric work. [source]


    Increased Apoptosis in Human Amnion is Associated with Labor at Term

    AMERICAN JOURNAL OF REPRODUCTIVE IMMUNOLOGY, Issue 5 2000
    CHAUR-DONG HSU
    PROBLEM: To determine whether increased apoptosis in human amnion is associated with labor at term. METHOD OF STUDY: Human amnion specimens were obtained from term patients with vaginal delivery (n=5) or who underwent elective Cesarean section (C/S) without labor (n=5). Apoptosis was assessed by the TUNEL (terminal deoxynucleotidyl transferase dUTP nick end labeling) assay. All nucleated amnion epithelial cells, stained with propidium iodide, were identified by red fluorescence; TUNEL-positive apoptotic nuclei were identified by green fluorescence. Five random fields of each specimen were counted by blinded investigators. The percentage of apoptotic nuclei among total nuclei (apoptotic index) was calculated and compared between the two groups (25 microscopic fields per group). RESULTS: Patients with term labor had a significantly higher mean apoptotic index in amnion epithelial cells than those with elective C/S without labor (27.3±4.1% versus 3.6±1.6%, P<0.001). CONCLUSIONS: Our data indicate that apoptosis in human amnion is significantly increased in association with labor at term. [source]
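
The apoptotic index is straightforward arithmetic over the counted fields; the per-field counts below are invented solely to show the computation and are not data from the study.

```python
def apoptotic_index(tunel_positive, total_nuclei):
    """Percentage of TUNEL-positive nuclei among all nuclei in one field."""
    return 100.0 * tunel_positive / total_nuclei

# Hypothetical (TUNEL-positive, total) counts for the five random fields
# of one specimen.
fields = [(55, 200), (48, 180), (61, 210), (50, 190), (52, 205)]
indices = [apoptotic_index(p, t) for p, t in fields]
mean_index = sum(indices) / len(indices)
```

Per-specimen means computed this way are what the study's group comparison (labor versus elective C/S) is based on.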