Statistical Uncertainty (statistical + uncertainty)



Selected Abstracts


Calcium Isotopic Composition of Various Reference Materials and Seawater

GEOSTANDARDS & GEOANALYTICAL RESEARCH, Issue 1 2003
Dorothee Hippler
Keywords: calcium isotopic composition; seawater; palaeoceanography; NIST SRM 915a
A compilation of δ44/40Ca (δ44/42Ca) data sets of different calcium reference materials is presented, based on measurements in three different laboratories (Institute of Geological Sciences, Bern; Centre de Géochimie de la Surface, Strasbourg; GEOMAR, Kiel) to support the establishment of a calcium isotope reference standard. Samples include a series of international and internal Ca reference materials, including NIST SRM 915a, seawater, two calcium carbonates and a CaF2 reference sample. The deviations in δ44/40Ca for selected pairs of reference samples have been defined and are consistent within statistical uncertainties in all three laboratories. Emphasis has been placed on characterising both NIST SRM 915a as an internationally available high-purity Ca reference sample and seawater as representative of an important and widely available geological reservoir. The difference between δ44/40Ca of NIST SRM 915a and seawater is defined as -1.88 ± 0.04‰ (δ44/42Ca(NIST SRM 915a/SW) = -0.94 ± 0.07‰). The conversion of values referenced to NIST SRM 915a to seawater can be described by the simplified equation δ44/40Ca(Sa/SW) = δ44/40Ca(Sa/NIST SRM 915a) - 1.88 (δ44/42Ca(Sa/SW) = δ44/42Ca(Sa/NIST SRM 915a) - 0.94). We propose the use of NIST SRM 915a as a general Ca isotope reference standard, with seawater being defined as the major reservoir with respect to oceanographic studies. [source]
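
As a quick numeric illustration of the conversion equation above, a minimal sketch in Python (the function name and example value are hypothetical, not from the paper):

```python
def delta_ca_vs_seawater(delta44_40_vs_nist915a):
    """Convert a delta44/40Ca value referenced to NIST SRM 915a into the
    seawater reference frame, using the -1.88 per mil offset reported above."""
    return delta44_40_vs_nist915a - 1.88

# Example: a sample measured at +0.50 per mil relative to NIST SRM 915a
# corresponds to about -1.38 per mil relative to seawater.
print(delta_ca_vs_seawater(0.50))
```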


An analysis of uncertainty in non-equilibrium and equilibrium geothermobarometry

JOURNAL OF METAMORPHIC GEOLOGY, Issue 9 2004
J. R. ASHWORTH
Abstract In statistically optimised P–T estimation, the contributions to overall uncertainty from different sources are represented by ellipses. One source, for a diffusion-controlled reaction at non-equilibrium, is diffusion modelling of the reaction texture. This modelling is used to estimate ratios, Q, between free-energy differences, ΔG, of reactions among mineral end-members, to replace the equilibrium condition ΔG = 0. The associated uncertainty is compared with those already inherent in the equilibrium case (from end-member data, activity models and mineral compositions). A compact matrix formulation is introduced for activity coefficients, and their partial derivatives governing error propagation. The non-equilibrium example studied is a corona reaction with the assemblage Grt–Opx–Cpx–Pl–Qtz. Two garnet compositions are used, from opposite sides of the corona. In one of them, affected by post-reaction Fe–Mg exchange with pyroxene, the problem of reconstructing the original composition is overcome by direct use of ratios between chemical-potential differences, given by the diffusion modelling. The number of geothermobarometers in the optimisation is limited by near-degeneracies. Their weightings are affected by strong correlations among Q ratios. Uncertainty from diffusion modelling is not large in comparison with other sources. Overall precision is limited mainly by uncertainties in activity models. Hypothetical equilibrium P–T are also estimated for both garnet compositions. By this approach, departure from equilibrium can be measured, with statistical uncertainties. For the example, the result for difference from equilibrium pressure is 1.2 ± 0.7 kbar. [source]
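
The picture of independent uncertainty contributions represented by ellipses can be sketched as follows; the covariance values are invented for illustration and are not taken from the paper. Independent P–T covariance contributions add, and the combined ellipse follows from the eigen-decomposition of the summed covariance matrix.

```python
import numpy as np

# Hypothetical P-T covariance contributions (units: kbar^2, kbar*K, K^2)
# from end-member data, activity models and mineral compositions.
cov_end_members = np.array([[0.10, 1.0], [1.0, 400.0]])
cov_activity    = np.array([[0.30, 3.0], [3.0, 900.0]])
cov_composition = np.array([[0.05, 0.5], [0.5, 100.0]])

# For independent sources the covariances add.
cov_total = cov_end_members + cov_activity + cov_composition

# Semi-axes of the 1-sigma uncertainty ellipse in P-T space.
eigenvalues, _ = np.linalg.eigh(cov_total)
print(np.sqrt(eigenvalues))
```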


About estimation of fitted parameters' statistical uncertainties in EXAFS.

JOURNAL OF SYNCHROTRON RADIATION, Issue 3 2005
Critical approach on usual and Monte Carlo methods
An important step in X-ray absorption spectroscopy (XAS) analysis is the fitting of a model to the experimental spectra, with a view to obtaining structural parameters. It is important to estimate the errors on these parameters, and three methods are used for this purpose. This article presents the conditions for applying these methods. It is shown that the usual equation is not applicable for fitting in R space or on filtered XAS data; a formula is established to treat these cases, and the equivalence between the usual formula and the brute-force method is evidenced. Lastly, the problem of the nonlinearity of the XAS models and a comparison with Monte Carlo methods are addressed. [source]
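
A generic sketch of the Monte Carlo route to fitted-parameter uncertainties discussed here (a toy model and synthetic data, not the paper's EXAFS formulae): refit many noise-perturbed replicas of the best-fit spectrum and take the spread of the fitted parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, amp, freq):
    """Toy damped oscillation standing in for an EXAFS-like signal."""
    return amp * np.sin(freq * x) * np.exp(-0.1 * x)

rng = np.random.default_rng(0)
x = np.linspace(1.0, 12.0, 200)
y_obs = model(x, 1.0, 2.0) + rng.normal(0.0, 0.05, x.size)

# Best fit to the observed spectrum.
popt, _ = curve_fit(model, x, y_obs, p0=[0.8, 1.8])

# Monte Carlo: refit replicas perturbed by the estimated noise level.
noise_sigma = np.std(y_obs - model(x, *popt))
replicas = [
    curve_fit(model, x,
              model(x, *popt) + rng.normal(0.0, noise_sigma, x.size),
              p0=popt)[0]
    for _ in range(200)
]
print(np.std(replicas, axis=0))  # Monte Carlo uncertainties on amp and freq
```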


Transients from initial conditions in cosmological simulations

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 1 2006
Martín Crocce
ABSTRACT We study the impact of setting initial conditions in numerical simulations using the standard procedure based on the Zel'dovich approximation (ZA). As is well known from perturbation theory, ZA initial conditions have incorrect second- and higher-order growth and therefore excite long-lived transients in the evolution of the statistical properties of density and velocity fields. We also study the improvement brought by using more accurate initial conditions based on second-order Lagrangian perturbation theory (2LPT). We show that 2LPT initial conditions reduce transients significantly and thus are much more appropriate for numerical simulations devoted to precision cosmology. Using controlled numerical experiments with ZA and 2LPT initial conditions, we show that simulations started at redshift z_i = 49 using the ZA underestimate the power spectrum in the non-linear regime by about 2, 4 and 8 per cent at z = 0, 1 and 3, respectively, whereas the mass function of dark matter haloes is underestimated by 5 per cent at m = 10^15 M⊙ h^-1 (z = 0) and 10 per cent at m = 2 × 10^14 M⊙ h^-1 (z = 1). The clustering of haloes is also affected at the few per cent level at z = 0. These systematic effects are typically larger than statistical uncertainties in recent mass function and power spectrum fitting formulae extracted from numerical simulations. At large scales, the measured transients in higher-order correlations can be understood from first-principles calculations based on perturbation theory. [source]


A Probabilistic Framework for Bayesian Adaptive Forecasting of Project Progress

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 3 2007
Paolo Gardoni
An adaptive Bayesian updating method is used to assess the unknown model parameters based on recorded data and pertinent prior information. Recorded data can include equality, upper bound, and lower bound data. The proposed approach properly accounts for all the prevailing uncertainties, including model errors arising from an inaccurate model form or missing variables, measurement errors, statistical uncertainty, and volitional uncertainty. As an illustration of the proposed approach, the project progress and final time-to-completion of an example project are forecasted. For this illustration, the construction of civilian nuclear power plants in the United States is considered. This application considers two cases: (1) no information is available prior to observing the actual progress data of a specified plant, and (2) the construction progress of eight other nuclear power plants is available. The example shows that an informative prior is important for making accurate predictions when only a few records are available; this is also the time when forecasts are most valuable to the project manager. Having or not having prior information has no practical effect on the forecast once progress on a significant portion of the project has been recorded. [source]
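
A minimal sketch of the kind of adaptive Bayesian updating described above, using a conjugate Normal-Normal update of a project's monthly progress rate; all numbers are invented and the model is far simpler than the one in the paper.

```python
import numpy as np

# Prior on the progress rate (fraction completed per month), e.g. informed by
# earlier comparable plants; measurement variance assumed known.
prior_mean, prior_var = 0.020, 4.0e-5
obs_rates = np.array([0.012, 0.015, 0.014])   # recorded monthly progress rates
obs_var = 2.0e-5

# Conjugate Normal-Normal posterior for the rate.
n = obs_rates.size
post_var = 1.0 / (1.0 / prior_var + n / obs_var)
post_mean = post_var * (prior_mean / prior_var + obs_rates.sum() / obs_var)

# Forecast the time to finish the remaining 70% of the work.
remaining = 0.70
print(remaining / post_mean, "months (posterior-mean forecast)")
print(remaining / (post_mean + 1.96 * np.sqrt(post_var)),
      remaining / (post_mean - 1.96 * np.sqrt(post_var)),
      "months (rough 95% range)")
```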


Mitotic counting in surgical pathology: sampling bias, heterogeneity and statistical uncertainty

HISTOPATHOLOGY, Issue 1 2001
F B J M Thunnissen
Although several articles on the methodological aspects of mitotic counting have been published, the effects of macroscopic sampling and tumour heterogeneity have not been discussed in any detail. In this review the essential elements for a standardized mitotic counting protocol are described, including microscopic calibration, specific morphological criteria, macroscopic selection, counting procedure, effect of biological variation, threshold, and the setting of an area of uncertainty ('grey area'). We propose that the use of a standard area for mitotic quantification and of a grey area in mitotic counting protocols will facilitate the application of mitotic counting in diagnostic and prognostic pathology. [source]
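
A small sketch of the statistical uncertainty attached to a raw mitotic count, assuming the count over a fixed area is approximately Poisson; the count and the threshold below are illustrative, not from the review.

```python
from scipy import stats

count = 9       # mitoses counted in a standard area (e.g. 10 high-power fields)
alpha = 0.05

# Exact (Garwood) 95% confidence interval for a Poisson count.
lower = stats.chi2.ppf(alpha / 2, 2 * count) / 2 if count > 0 else 0.0
upper = stats.chi2.ppf(1 - alpha / 2, 2 * (count + 1)) / 2
print(f"count = {count}, 95% CI = ({lower:.1f}, {upper:.1f})")
# A diagnostic threshold of, say, 10 falls well inside this interval, which is
# the kind of situation a 'grey area' around the cut-off is meant to flag.
```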


Variance-reduced Monte Carlo solutions of the Boltzmann equation for low-speed gas flows: A discontinuous Galerkin formulation

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 4 2008
Lowell L. Baker
Abstract We present and discuss an efficient, high-order numerical solution method for solving the Boltzmann equation for low-speed dilute gas flows. The method's major ingredient is a new Monte Carlo technique for evaluating the weak form of the collision integral necessary for the discontinuous Galerkin formulation used here. The Monte Carlo technique extends the variance reduction ideas first presented in Baker and Hadjiconstantinou (Phys. Fluids 2005; 17, art. no. 051703) and makes evaluation of the weak form of the collision integral not only tractable but also very efficient. The variance reduction, achieved by evaluating only the deviation from equilibrium, results in very low statistical uncertainty and the ability to capture arbitrarily small deviations from equilibrium (e.g. low-flow speed) at a computational cost that is independent of the magnitude of this deviation. As a result, for low-signal flows the proposed method holds a significant computational advantage compared with traditional particle methods such as direct simulation Monte Carlo (DSMC). Copyright © 2008 John Wiley & Sons, Ltd. [source]
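
A toy one-dimensional illustration of the variance-reduction idea (evaluate only the deviation from a known equilibrium); this is not the paper's collision-integral estimator, just the underlying principle.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 1e-3                       # size of the deviation from equilibrium

def f(v):                        # distribution: equilibrium plus a small perturbation
    return np.exp(-v**2) * (1.0 + eps * v**2)

def f_eq(v):                     # equilibrium part, whose integral is known (sqrt(pi))
    return np.exp(-v**2)

v = rng.uniform(-5.0, 5.0, 100_000)
width = 10.0

naive = width * np.mean(f(v))                                   # plain Monte Carlo
deviational = np.sqrt(np.pi) + width * np.mean(f(v) - f_eq(v))  # analytic part + MC on deviation

# The deviational estimate carries statistical noise proportional to the small
# deviation itself, so tiny departures from equilibrium remain resolvable.
print(naive, deviational)
```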


Glucosamine sulphate in the treatment of knee osteoarthritis: cost-effectiveness comparison with paracetamol

INTERNATIONAL JOURNAL OF CLINICAL PRACTICE, Issue 6 2010
S. Scholtissen
Summary Introduction: The aim of this study was to explore the cost-effectiveness of glucosamine sulphate (GS) compared with paracetamol and placebo (PBO) in the treatment of knee osteoarthritis. For this purpose, a 6-month time horizon and a health care perspective were used. Material and methods: The cost and effectiveness data were derived from Western Ontario and McMaster Universities Osteoarthritis Index data of the Glucosamine Unum In Die (once-a-day) Efficacy trial study by Herrero-Beaumont et al. Clinical effectiveness was converted into utility scores to allow for the computation of cost per quality-adjusted life year (QALY). For the three treatment arms, incremental cost-effectiveness ratios (ICERs) were calculated and statistical uncertainty was explored using a bootstrap simulation. Results: In terms of mean utility score at baseline, 3 and 6 months, no statistically significant difference was observed between the three groups. When considering the mean utility score changes from baseline to 3 and 6 months, no difference was observed in the first case, but there was a statistically significant difference from baseline to 6 months with a p-value of 0.047. When comparing GS with paracetamol, the mean baseline incremental cost-effectiveness ratio (ICER) was dominant and the mean ICER after bootstrapping was -1376 €/QALY, indicating dominance (with 79% probability). When comparing GS with PBO, the mean baseline and after-bootstrapping ICERs were 3617.47 and 4285 €/QALY, respectively. Conclusion: The results of the present cost-effectiveness analysis suggest that GS is a highly cost-effective therapy alternative compared with paracetamol and PBO to treat patients diagnosed with primary knee OA. [source]
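
A minimal bootstrap sketch of the kind used to explore ICER uncertainty; the costs and utility gains below are invented placeholders, not the trial data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
cost_gs,  qaly_gs  = rng.normal(220.0, 40.0, n), rng.normal(0.42, 0.05, n)
cost_par, qaly_par = rng.normal(180.0, 40.0, n), rng.normal(0.39, 0.05, n)

icers = []
for _ in range(2000):
    i = rng.integers(0, n, n)    # resample GS patients with replacement
    j = rng.integers(0, n, n)    # resample paracetamol patients with replacement
    d_cost = cost_gs[i].mean() - cost_par[j].mean()
    d_qaly = qaly_gs[i].mean() - qaly_par[j].mean()
    icers.append(d_cost / d_qaly)

icers = np.array(icers)
print(icers.mean(), "mean bootstrapped ICER (extra cost per QALY gained)")
print(np.percentile(icers, [2.5, 97.5]), "rough 95% interval")
```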


Modelling trends in central England temperatures

JOURNAL OF FORECASTING, Issue 1 2003
David I. Harvey
Abstract Trends are extracted from the central England temperature (CET) data available from 1723, using both annual and seasonal averages. Attention is focused on fitting non-parametric trends and it is found that, while there is no compelling evidence of a trend increase in the CET, there have been three periods of cooling, stability, and warming, roughly associated with the beginning and the end of the Industrial Revolution. There does appear to have been an upward shift in trend spring temperatures, but forecasting of current trends is hazardous because of the statistical uncertainty surrounding them. Copyright © 2003 John Wiley & Sons, Ltd. [source]
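
A minimal sketch of non-parametric trend extraction, here a simple Gaussian-kernel smoother applied to a synthetic annual series; the CET data themselves are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1723, 2001)
temps = 9.0 + 0.002 * (years - 1723) + rng.normal(0.0, 0.6, years.size)  # synthetic

def kernel_trend(t0, bandwidth=15.0):
    """Nadaraya-Watson estimate of the trend at year t0."""
    w = np.exp(-0.5 * ((years - t0) / bandwidth) ** 2)
    return np.sum(w * temps) / np.sum(w)

trend = np.array([kernel_trend(t) for t in years])
print(trend[:3], trend[-3:])   # smoothed values near the start and end of the record
```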


Counting statistics of X-ray detectors at high counting rates

JOURNAL OF SYNCHROTRON RADIATION, Issue 3 2003
David Laundy
Modern synchrotron radiation sources with insertion devices and focusing optics produce high fluxes of X-rays at the sample, which leads to a requirement for photon-counting detectors to operate at high counting rates. With high counting rates there can be significant non-linearity in the response of the detector to incident X-ray flux, where this non-linearity is caused by the overlap of the electronic pulses that are produced by each X-ray. A model that describes the overlap of detector pulses is developed in this paper. This model predicts that the correction to the counting rate for pulse overlap is the same as a conventional dead-time correction. The model is also used to calculate the statistical uncertainty of a measurement and predicts that the error associated with a measurement can be increased significantly over that predicted by Poisson statistics. The error differs from that predicted by a conventional dead-time treatment. [source]
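
A minimal sketch of a conventional (non-paralysable) dead-time style correction, which the abstract states also applies to pulse overlap; the pulse width and count rate are illustrative only.

```python
tau = 1.0e-6             # effective pulse-overlap (dead) time, seconds
observed_rate = 2.0e5    # counts per second actually recorded

true_rate = observed_rate / (1.0 - observed_rate * tau)
print(f"observed {observed_rate:.3g} cps -> corrected {true_rate:.3g} cps")
# At 2e5 cps with tau = 1 microsecond the correction is already 25%; the paper's
# point is that the statistical error of such a measurement also departs from the
# simple Poisson estimate.
```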


Determination of the profile of atmospheric optical turbulence strength from SLODAR data

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 2 2006
T. Butterley
ABSTRACT Slope Detection and Ranging (SLODAR) is a technique for the measurement of the vertical profile of atmospheric optical turbulence strength. Its main applications are astronomical site characterization and real-time optimization of imaging with adaptive optical correction. The turbulence profile is recovered from the cross-covariance of the slope of the optical phase aberration for a double star source, measured at the telescope with a wavefront sensor (WFS). Here, we determine the theoretical response of a SLODAR system based on a Shack–Hartmann WFS to a thin turbulent layer at a given altitude, and also as a function of the spatial power spectral index of the optical phase aberrations. Recovery of the turbulence profile via fitting of these theoretical response functions is explored. The limiting resolution in altitude of the instrument and the statistical uncertainty of the measured profiles are discussed. We examine the measurement of the total integrated turbulence strength (the seeing) from the WFS data and, by subtraction, the fractional contribution from all turbulence above the maximum altitude for direct sensing of the instrument. We take into account the effects of noise in the measurement of wavefront slopes from centroids and the form of the spatial structure function of the atmospheric optical aberrations. [source]
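
A small sketch of the standard SLODAR triangulation geometry (a textbook geometric relation, not this paper's response-function derivation); the telescope and double-star parameters are illustrative.

```python
import numpy as np

telescope_diameter = 0.4          # m
n_subapertures = 8                # Shack-Hartmann subapertures across the pupil
separation_arcsec = 12.0          # double-star separation

w = telescope_diameter / n_subapertures            # subaperture width, m
theta = separation_arcsec * np.pi / (180 * 3600)   # separation in radians

dh = w / theta                      # altitude resolution per cross-covariance lag
h_max = telescope_diameter / theta  # maximum altitude for direct sensing
print(f"resolution ~ {dh:.0f} m per bin, direct sensing up to ~ {h_max / 1000:.1f} km")
```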


Lagrangian simulation of wind transport in the urban environment

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 643 2009
Dr J. D. Wilson
Abstract Fluid element trajectories are computed in inhomogeneous urban-like flows, the needed wind statistics being furnished by a Reynolds-averaged Navier–Stokes (RANS) model that explicitly resolves obstacles. Performance is assessed against pre-existing measurements in flows ranging from the horizontally uniform atmospheric surface layer (no buildings), through regular obstacle arrays in a water-channel wall shear layer, to full-scale observations at street scale in an urban core (the Oklahoma City tracer dispersion experiment Joint Urban 2003). Agreement with observations is encouraging, e.g. for an Oklahoma City tracer trial in which sixteen detectors reported non-zero concentration, modelled concentration lies within a factor of two of the corresponding observation in nine cases (FAC2 = 56%). Although forward and backward simulations offer comparable fidelity relative to the data, interestingly they differ (by a margin far exceeding statistical uncertainty) wherever trajectories from source to receptor traverse regions of abrupt change in the Reynolds stress tensor. Copyright © 2009 Royal Meteorological Society [source]
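
A minimal sketch of the FAC2 metric quoted above (the fraction of paired model/observation values agreeing within a factor of two); the concentrations are invented.

```python
import numpy as np

observed = np.array([1.2, 0.4, 3.0, 0.8, 2.5, 0.1, 5.0, 0.9])   # arbitrary units
modelled = np.array([0.9, 1.1, 2.0, 0.5, 6.0, 0.2, 4.0, 0.3])

ratio = modelled / observed
fac2 = np.mean((ratio >= 0.5) & (ratio <= 2.0))
print(f"FAC2 = {fac2:.0%}")
```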


Sensitivity of one-dimensional radiative biases to vertical cloud-structure assumptions: Validation with aircraft data

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 608 2005
F. Di Giuseppe
Abstract Three representations of an observed stratocumulus system are generated by combining aircraft observations with a simple statistical model. The realizations differ in their representation of the vertical cloud structure while the horizontal variability is identical. In the control case (A) both the adiabatic liquid-water profile and the effect of wind-shear induced vertical decorrelation are represented. The second simulation (B) removes the wind-shear effect by assuming maximum overlap between adjacent layers. The third case (C) instead removes vertical variability by averaging the in-cloud liquid water for each column. For each of these scenes Monte Carlo simulated solar fluxes are compared against observed flux measurements. Cases A and B agree with observed (horizontal) flux variations within statistical uncertainty, while case C, which neglects vertical variability, is not able to reproduce the observed fluxes. The comparison between the radiative fields produced by these three representations of the stratocumulus system, calculated using a three-dimensional radiative-transfer solution, an independent pixel approximation (IPA) and a plane-parallel (PP) approach, shows substantial differences. Not accounting for the adiabatic liquid-water profile generates a systematic increase in the optical depth, τ, when the effective radius is computed from mean liquid-water content and droplet-number concentration, that can be responsible for a 5% increase in the reflection for shallow boundary-layer cloud systems (τ ~ 1). A much stronger effect in the radiative properties is produced by varying the cloud-overlap rule applied. While changing from maximum to random overlap does not introduce any variation in the optical depth of the cloud scene, it does introduce an increase in the reflection that is proportional to the relative change in total cloud fraction. The magnitude of these latter biases is comparable to that produced by unresolved horizontal variability. Moreover, it is shown that, when the vertical cloud structure is properly resolved, the effect of horizontal fluctuations is also reduced. Copyright © 2005 Royal Meteorological Society [source]
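
A minimal sketch of the two cloud-overlap rules compared above, applied to two model layers with fractional cloud covers c1 and c2 (values invented).

```python
c1, c2 = 0.4, 0.5

total_maximum = max(c1, c2)                     # maximum overlap
total_random = 1.0 - (1.0 - c1) * (1.0 - c2)    # random overlap

# Random overlap yields a larger total cloud fraction (0.7 vs 0.5 here), which is
# the sense of the reflection increase discussed in the abstract.
print(total_maximum, total_random)
```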


Modelling the impact of an influenza A/H1N1 pandemic on critical care demand from early pathogenicity data: the case for sentinel reporting

ANAESTHESIA, Issue 9 2009
A. Ercole
Summary Projected critical care demand for pandemic influenza H1N1 in England was estimated in this study. The effect of varying hospital admission rates under statistical uncertainty was examined. Early in a pandemic, uncertainty in epidemiological parameters leads to a wide range of credible scenarios, with projected demand ranging from insignificant to overwhelming. However, even small changes to input assumptions make the major incident scenario increasingly likely. Before any cases are admitted to hospital, the 95% confidence limits on admission rates lead to a range in predicted peak critical care bed occupancy of between 0% and 37% of total critical care bed capacity, with half of these cases requiring ventilatory support. For hospital admission rates above 0.25%, critical care bed availability would be exceeded. Further, only 10% of critical care beds in England are in specialist paediatric units, but best estimates suggest that 30% of patients requiring critical care will be children. Paediatric intensive care facilities are therefore likely to be quickly exhausted, suggesting that older children should be managed in adult critical care units to allow resource optimisation. Crucially, this study highlights the need for sentinel reporting and real-time modelling to guide rational decision-making. [source]
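
A minimal sketch of projecting peak critical care occupancy from an uncertain admission rate, in the spirit of the scenario ranges described above; every parameter below is an invented placeholder, not a figure from the study.

```python
import numpy as np

rng = np.random.default_rng(4)
population = 51_000_000            # England, approximate
clinical_attack_rate = 0.30        # fraction of the population infected
peak_week_fraction = 0.15          # fraction of all cases falling in the peak week
critical_care_fraction = 0.12      # of hospital admissions needing critical care
mean_stay_weeks = 1.5
beds_available = 3_500

# Hospital admission rate per case, sampled to reflect early statistical uncertainty.
admission_rate = rng.lognormal(mean=np.log(0.002), sigma=0.5, size=10_000)

peak_cases = population * clinical_attack_rate * peak_week_fraction
peak_beds = peak_cases * admission_rate * critical_care_fraction * mean_stay_weeks

occupancy = peak_beds / beds_available
print(np.percentile(occupancy, [2.5, 50, 97.5]))   # wide range of credible scenarios
```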