Measurement Uncertainty

Selected Abstracts


GROWTH AND MEASUREMENT UNCERTAINTY IN AN UNREGULATED FISHERY

NATURAL RESOURCE MODELING, Issue 3 2009
ANNE B. JOHANNESEN
Abstract Complete information is usually assumed in harvesting models of marine and terrestrial resources. In reality, however, complete information never exists. Fish and wildlife populations often fluctuate unpredictably in numbers, and measurement problems are frequent. In this paper, we analyze a discrete-time fishery model that distinguishes between uncertain natural growth and measurement error and in which exploitation takes place in an unregulated manner. Depending on the parameterization of the model and on when uncertainty is resolved, expected harvest under ecological uncertainty may be below or above that of the benchmark model with no uncertainty. When stock measurement is uncertain, on the other hand, expected harvest never exceeds the benchmark level. We also demonstrate that the harvesting profit, or rent, under uncertainty may be above that of the benchmark situation of complete information. In other words, less information may be beneficial for the fishermen. [source]
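
A minimal Monte Carlo sketch of the kind of comparison described above: expected cumulative harvest when the harvest rule acts on a noisy stock measurement versus the full-information benchmark. The logistic growth function, the proportional harvest rule, and all parameter values are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

r, K = 0.5, 1.0          # logistic growth parameters (illustrative)
effort = 0.3             # harvest rule: take a fixed fraction of the *measured* stock
sigma_meas = 0.3         # lognormal measurement error on the stock estimate (assumed)
T, n_sim = 50, 2000

def mean_cumulative_harvest(measurement_noise):
    harvests = np.zeros(n_sim)
    for i in range(n_sim):
        x, total = 0.5, 0.0
        for _ in range(T):
            noise = rng.lognormal(0.0, measurement_noise) if measurement_noise else 1.0
            x_measured = x * noise
            h = min(effort * x_measured, x)   # cannot harvest more than the true stock
            x = x - h
            x = x + r * x * (1.0 - x / K)     # deterministic logistic growth
            total += h
        harvests[i] = total
    return harvests.mean()

print("benchmark (perfect measurement):", mean_cumulative_harvest(0.0))
print("with measurement error        :", mean_cumulative_harvest(sigma_meas))
```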


Measuring finite-frequency body-wave amplitudes and traveltimes

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2006
Karin Sigloch
SUMMARY We have developed a method to measure finite-frequency amplitude and traveltime anomalies of teleseismic P waves. We use a matched filtering approach that models the first 25 s of a seismogram after the P arrival, which includes the depth phases pP and sP. Given a set of broad-band seismograms from a teleseismic event, we compute synthetic Green's functions using published moment tensor solutions. We jointly deconvolve global or regional sets of seismograms with their Green's functions to obtain the broad-band source time function. The matched filter of a seismogram is the convolution of the Green's function with the source time function. Traveltimes are computed by cross-correlating each seismogram with its matched filter. Amplitude anomalies are defined as the multiplicative factors that minimize the RMS misfit between matched filters and data. The procedure is implemented in an iterative fashion, which allows for joint inversion for the source time function, amplitudes, and a correction to the moment tensor. Cluster analysis is used to identify azimuthally distinct groups of seismograms when source effects with azimuthal dependence are prominent. We then invert for one source time function per group. We implement this inversion for a range of source depths to determine the most likely depth, as indicated by the overall RMS misfit, and by the non-negativity and compactness of the source time function. Finite-frequency measurements are obtained by filtering broad-band data and matched filters through a bank of passband filters. The method is validated on a set of 15 events of magnitude 5.8 to 6.9. Our focus is on the densely instrumented Western US. Quasi-duplet events ('quplets') are used to estimate measurement uncertainty on real data. Robust results are achieved for wave periods between 24 and 2 s. Traveltime dispersion is on the order of 0.5 s. Amplitude anomalies are on the order of 1 dB in the lowest bands and 3 dB in the highest bands, corresponding to amplification factors of 1.2 and 2.0, respectively. Measurement uncertainties for amplitudes and traveltimes depend mostly on station coverage, accuracy of the moment tensor estimate, and frequency band. We investigate the influence of those parameters in tests on synthetic data. Along the RISTRA array in the Western US, we observe amplitude and traveltime patterns that are coherent on scales of hundreds of kilometres. Below two sections of the array, we observe a combination of frequency-dependent amplitude and traveltime patterns that strongly suggest wavefront healing effects. [source]
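
The two core measurements described above lend themselves to a compact illustration: a traveltime anomaly read off the peak of the cross-correlation between a seismogram and its matched filter, and an amplitude anomaly as the least-squares scaling factor between them. The sketch below uses synthetic placeholder waveforms and is not the authors' code.

```python
import numpy as np

def traveltime_anomaly(data, matched_filter, dt):
    """Delay (s) of the data relative to its matched filter, from the cross-correlation peak."""
    cc = np.correlate(data, matched_filter, mode="full")
    lag = np.argmax(cc) - (len(matched_filter) - 1)
    return lag * dt

def amplitude_anomaly(data, matched_filter):
    """Multiplicative factor a minimizing ||data - a * matched_filter||^2 (least squares)."""
    return np.dot(data, matched_filter) / np.dot(matched_filter, matched_filter)

# toy example: data is a delayed, scaled copy of the matched filter plus noise
dt = 0.05
t = np.arange(0.0, 25.0, dt)
mf = np.exp(-(t - 5.0) ** 2 / 0.5)                         # placeholder matched filter
rng = np.random.default_rng(1)
d = 1.3 * np.roll(mf, 8) + 0.01 * rng.normal(size=t.size)  # delayed by 0.4 s, scaled by 1.3

dT = traveltime_anomaly(d, mf, dt)
print("traveltime anomaly:", dT)                 # ~ +0.4 s
mf_aligned = np.roll(mf, int(round(dT / dt)))    # align before the amplitude fit
print("amplitude anomaly :", amplitude_anomaly(d, mf_aligned))   # ~ 1.3
```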


Messunsicherheit in der Werkstoffprüfung (Measurement Uncertainty in Materials Testing)

MATERIALWISSENSCHAFT UND WERKSTOFFTECHNIK, Issue 5 2007
T. Polzin Dr.-Ing.
Keywords: uncertainty; tensile test; Charpy test; hardness test. Abstract (original in German): In materials testing, the measurement uncertainty must be stated for every measured value. Based on the GUM published in 1995 [1], the Uncert report for various measurement methods was developed in 2000 and published as Codes of Practice (COP) [2–4]. These are presented here in connection with the procedures recommended in the standards and their practical implementation. [source]
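
For readers unfamiliar with the GUM procedure referred to above, a minimal worked example of a combined standard uncertainty by first-order propagation (the core GUM recipe) is sketched below for a tensile-strength measurement; the input values and their standard uncertainties are invented for illustration.

```python
import math

# Illustrative tensile-test example: Rm = F / A with A = (pi/4) * d**2.
F, u_F = 50_000.0, 100.0     # maximum force in N and its standard uncertainty (assumed)
d, u_d = 10.0, 0.02          # specimen diameter in mm and its standard uncertainty (assumed)

A = math.pi / 4.0 * d**2     # cross-section in mm^2
Rm = F / A                   # tensile strength in MPa (N/mm^2)

# First-order (GUM) propagation: sensitivity coefficients dRm/dF and dRm/dd
c_F = 1.0 / A
c_d = -2.0 * F / (math.pi / 4.0 * d**3)
u_Rm = math.sqrt((c_F * u_F) ** 2 + (c_d * u_d) ** 2)   # combined standard uncertainty
U = 2.0 * u_Rm                                          # expanded uncertainty, coverage factor k = 2

print(f"Rm = {Rm:.1f} MPa, u = {u_Rm:.2f} MPa, U(k=2) = {U:.2f} MPa")
```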


A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2004
Colin G. Farquharson
SUMMARY Two automatic ways of estimating the regularization parameter in underdetermined, minimum-structure-type solutions to non-linear inverse problems are compared: the generalized cross-validation and L-curve criteria. Both criteria provide a means of estimating the regularization parameter when only the relative sizes of the measurement uncertainties in a set of observations are known. The criteria, which are established components of linear inverse theory, are applied to the linearized inverse problem at each iteration in a typical iterative, linearized solution to the non-linear problem. The particular inverse problem considered here is the simultaneous inversion of electromagnetic loop-loop data for 1-D models of both electrical conductivity and magnetic susceptibility. The performance of each criterion is illustrated with inversions of a variety of synthetic and field data sets. In the great majority of examples tested, both criteria successfully determined suitable values of the regularization parameter, and hence credible models of the subsurface. [source]
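
Both criteria can be illustrated on a small linear Tikhonov problem, where the SVD gives the regularized solution, the residual and model norms for the L-curve, and the GCV function in closed form. The sketch below uses standard textbook formulas on an invented test problem, not the authors' electromagnetic inversion.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small ill-conditioned linear test problem (illustrative only)
n = 40
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert-like matrix
m_true = np.sin(np.linspace(0.0, np.pi, n))
d = A @ m_true + 1e-3 * rng.normal(size=n)

U, s, Vt = np.linalg.svd(A)
beta = U.T @ d

lambdas = np.logspace(-8, 0, 200)
gcv, res_norm, mod_norm = [], [], []
for lam in lambdas:
    f = s**2 / (s**2 + lam**2)                    # Tikhonov filter factors
    m_lam = Vt.T @ (s * beta / (s**2 + lam**2))   # regularized solution
    r = np.linalg.norm(A @ m_lam - d)
    res_norm.append(r)
    mod_norm.append(np.linalg.norm(m_lam))
    gcv.append(r**2 / (n - f.sum()) ** 2)         # generalized cross-validation function

print("GCV choice of regularization parameter:", lambdas[int(np.argmin(gcv))])
# For the L-curve criterion, plot log(res_norm) against log(mod_norm) over the
# lambda grid and pick the point of maximum curvature (the "corner").
```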


High-resolution extracted ion chromatography, a new tool for metabolomics and lipidomics using a second-generation orbitrap mass spectrometer

RAPID COMMUNICATIONS IN MASS SPECTROMETRY, Issue 10 2009
Albert Koulman
Most analytical methods in metabolomics are based on one of two strategies. The first strategy is aimed at specifically analysing a limited number of known metabolites or compound classes. Alternatively, an unbiased approach can be used for profiling as many features as possible in a given metabolome without prior knowledge of the identity of these features. Using high-resolution mass spectrometry with instruments capable of measuring m/z ratios with sufficiently low mass measurement uncertainties and simultaneous high scan speeds, it is possible to combine these two strategies, allowing unbiased profiling of biological samples and targeted analysis of specific compounds at the same time without compromises. Such high mass accuracy and mass resolving power reduce the number of candidate metabolites occupying the same retention time and m/z ratio space to a minimum. In this study, we demonstrate how targeted analysis of phospholipids as well as unbiased profiling is achievable using a benchtop orbitrap instrument after high-speed reversed-phase chromatography. The ability to apply both strategies in one experiment is an important step forward in comprehensive analysis of the metabolome. Copyright © 2009 John Wiley & Sons, Ltd. [source]
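
The candidate-reduction argument can be made concrete: at a given mass measurement uncertainty in ppm, only candidates whose exact masses fall inside the corresponding window around the measured m/z remain. The sketch below uses an invented measured ion and invented candidate masses.

```python
def ppm_window(mz, ppm):
    """Half-width of the mass tolerance window at a given m/z and tolerance in ppm."""
    return mz * ppm * 1e-6

measured_mz = 760.5851          # illustrative [M+H]+ phospholipid-like ion (invented)
candidates = {                  # invented candidate exact masses
    "candidate A": 760.5851,
    "candidate B": 760.5900,
    "candidate C": 760.6010,
}

for tol_ppm in (50, 5, 2):
    w = ppm_window(measured_mz, tol_ppm)
    hits = [name for name, m in candidates.items() if abs(m - measured_mz) <= w]
    print(f"{tol_ppm:>3} ppm -> +/- {w * 1000:.1f} mDa, matches: {hits}")
```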


A multi-dating approach applied to proglacial sediments attributed to the Most Extensive Glaciation of the Swiss Alps

BOREAS, Issue 3 2010
ANDREAS DEHNERT
Dehnert, A., Preusser, F., Kramers, J. D., Akçar, N., Kubik, P. W., Reber, R. & Schlüchter, C. 2010: A multi-dating approach applied to proglacial sediments attributed to the Most Extensive Glaciation of the Swiss Alps. Boreas, Vol. 39, pp. 620–632. 10.1111/j.1502-3885.2010.00146.x. ISSN 0300-9483. The number and the timing of Quaternary glaciations of the Alps are poorly constrained and, in particular, the age of the Most Extensive Glaciation (MEG) in Switzerland remains controversial. This ice advance has previously been tentatively correlated with the Riss Glaciation of the classical alpine stratigraphy and with Marine Isotope Stage (MIS) 6 (186–127 ka). An alternative interpretation, based on pollen analysis and stratigraphic correlations, places the MEG further back in the Quaternary, with an age equivalent to MIS 12 (474–427 ka), or even older. To re-evaluate this issue in the Swiss glaciation history, a multi-dating approach was applied to proglacial deltaic 'Höhenschotter' deposits in locations outside the ice extent of the Last Glacial Maximum. Results of U/Th and luminescence dating suggest a correlation of the investigated deposits with MIS 6 and hence with the Riss Glaciation. Cosmogenic burial dating suffered from large measurement uncertainties and unusually high 26Al/10Be ratios and did not provide robust age estimates. [source]
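
For context on the cosmogenic burial dating mentioned above, the standard simple-burial age follows from the differential decay of 26Al and 10Be after shielding from cosmic rays. The sketch below uses commonly quoted half-lives and a surface production ratio together with an invented measured ratio; it is not a reanalysis of the paper's data.

```python
import math

# Commonly quoted values (approximate; check current literature before use)
T_HALF_BE10 = 1.387e6     # yr
T_HALF_AL26 = 0.705e6     # yr
R0 = 6.75                 # assumed surface 26Al/10Be production ratio

lam_be = math.log(2) / T_HALF_BE10
lam_al = math.log(2) / T_HALF_AL26

def simple_burial_age(ratio_measured, ratio_initial=R0):
    """Burial age (yr) assuming a single, complete burial and no post-burial production."""
    return math.log(ratio_initial / ratio_measured) / (lam_al - lam_be)

# Illustrative measured 26Al/10Be ratio (not from the paper)
print(f"burial age ~ {simple_burial_age(4.0) / 1e6:.2f} Myr")
```

The formula also shows why unusually high 26Al/10Be ratios are problematic: a measured ratio at or above the assumed production ratio yields a zero or negative burial age, so no robust age estimate can be extracted.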


Improvement and validation of a snow saltation model using wind tunnel measurements

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 14 2008
Andrew Clifton
Abstract A Lagrangian snow saltation model has been extended for application to a wide variety of snow surfaces. Important factors of the saltation process, namely number of entrained particles, ejection angle and speed, have been parameterized from data in the literature. The model can now be run using simple descriptors of weather and snow conditions, such as wind, ambient pressure and temperature, snow particle sizes and surface density. Sensitivity of the total mass flux to the new parameterizations is small. However, the model refinements also allow concentration and mass flux profiles to be calculated, for comparison with measurements. Sensitivity of the profiles to the new parameterizations is considerable. Model results have then been compared with a complete set of drifting snow data from our cold wind tunnel. Simulation mass flux results agree with wind tunnel data to within the bounds of measurement uncertainty. Simulated particle sizes at 50 mm above the surface are generally larger than seen in the tunnel, probably as the model only describes particles in saltation, while additional smaller particles may be present in the wind tunnel at this height because of suspension. However, the smaller particles carry little mass, and so the impact on the mass flux is low. The use of simple input data, and parameterization of the saltation process, allows the model to be used predictively. This could include applications from avalanche warning to glacier mass balance. Copyright © 2008 John Wiley & Sons, Ltd. [source]


Stochastic Cost Optimization of Multistrategy DNAPL Site Remediation

GROUND WATER MONITORING & REMEDIATION, Issue 3 2010
Jack Parker
This paper investigates numerical optimization of dense nonaqueous phase liquid (DNAPL) site remediation design considering effects of prediction and measurement uncertainty. Results are presented for a hypothetical problem involving remediation using thermal source reduction (TSR) and bioremediation with electron donor (ED) injection. Pump-and-treat is utilized as a backup measure if compliance criteria are not met. Remediation system design variables are optimized to minimize expected net present value (ENPV) cost. Adaptive criteria are assumed for real-time control of TSR and ED duration. Source zone dissolved concentration data enabled more reliable and lower cost operation of TSR than soil concentration data, but using both soil and dissolved data improved results sufficiently to more than offset the additional cost. Decisions to terminate remediation and monitoring or to initiate pump-and-treat are complicated by measurement noise. Simultaneous optimization of monitoring frequency, averaging period, and lookback periods to confirm decisions, in addition to remediation design variables, reduced ENPV cost. Results indicate that remediation design under conditions of uncertainty is affected by subtle interactions and tradeoffs between design variables, compliance rules, site characteristics, and uncertainty in model predictions and monitoring data. Optimized designs yielded cost savings of up to approximately 50% compared with a nonoptimized design based on common engineering practices. Significant improvements in accuracy and reductions in cost were achieved by recalibrating the model to data collected during remediation and re-optimizing design variables. Repeating this process periodically is advisable to minimize total costs and maximize reliability. [source]
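
The expected-net-present-value objective described above can be sketched in a few lines: simulate many realizations of the uncertain remediation duration and compliance outcome, discount each cost stream, and average. All cost figures, probabilities, durations, and the discount rate below are invented placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

DISCOUNT = 0.05          # annual discount rate (assumed)
N_REAL = 5000            # Monte Carlo realizations

def npv(costs_by_year):
    years = np.arange(len(costs_by_year))
    return float(np.sum(costs_by_year / (1.0 + DISCOUNT) ** years))

def expected_npv_cost(treatment_years_mean, p_fail_compliance):
    """ENPV cost of a design: uncertain source-treatment duration plus a possible backup system."""
    total = 0.0
    for _ in range(N_REAL):
        treat_years = max(1, int(rng.poisson(treatment_years_mean)))  # uncertain treatment duration
        costs = [1.0e6] + [2.0e5] * treat_years                        # capital + annual O&M (placeholders)
        if rng.random() < p_fail_compliance:                           # compliance not met -> pump-and-treat
            costs += [1.5e5] * 20                                      # 20 yr of backup operation (placeholder)
        total += npv(np.array(costs, dtype=float))
    return total / N_REAL

# compare two hypothetical designs
print(f"aggressive design: {expected_npv_cost(3, 0.10):,.0f} USD")
print(f"minimal design   : {expected_npv_cost(1, 0.40):,.0f} USD")
```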


Process Considerations for Trolling Borehole Flow Logs

GROUND WATER MONITORING & REMEDIATION, Issue 3 2006
Phil L. Oberlander
The variation of horizontal hydraulic conductivity with depth is often understood only as a depth-integrated property based on pumping tests or estimated from geophysical logs and the lithology. A more explicit method exists for determining hydraulic conductivity over small vertical intervals by collecting borehole flow measurements while the well is being pumped. Borehole flow rates were collected from 15 deep monitoring wells on the Nevada Test Site and the Nevada Test and Training Range while continuously raising and lowering a high-precision impeller borehole flowmeter. Repeated logging passes at different logging speeds and pumping rates typically provided nine unique flow logs for each well. Over 60 km of borehole flow logs were collected at a 6.1-cm vertical resolution. Processing these data necessitated developing a methodology to delete anomalous values, smooth small-scale flow variations, combine multiple borehole flow logs, characterize measurement uncertainty, and determine the interval-specific lower limit to flow rate quantification. There are decision points in the data processing where judgment and ancillary analyses are needed to extract subtle hydrogeologic information. The analysis methodology indicates that processed measurements from a high-precision trolling impeller flowmeter in a screened well can confidently detect changes in borehole flow rate of ~0.7% of the combined trolling and borehole flow rate. An advantage of trolling the flowmeter is that the impeller is nearly always spinning as it is raised and lowered in the well, and borehole flow rates can be measured at lower values than if measurements were taken while the flowmeter was held at a fixed depth. [source]
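
The log-processing steps described (combining repeated passes, suppressing anomalous values, smoothing small-scale variation, and characterizing per-depth measurement uncertainty) can be sketched as below; the median combination, window length, and toy data are placeholder choices, not the authors' procedure.

```python
import numpy as np

def combine_flow_logs(depth_grid, logs, smooth_window=11):
    """Combine repeated borehole flow logs onto a common depth grid.

    `logs` is a list of (depth, flow_rate) array pairs from individual logging passes.
    Returns the per-depth median flow, a between-pass uncertainty estimate, and a smoothed profile.
    """
    resampled = np.vstack([np.interp(depth_grid, d, q) for d, q in logs])
    median_q = np.median(resampled, axis=0)                 # robust to anomalous passes
    spread_q = np.std(resampled, axis=0, ddof=1)            # between-pass measurement uncertainty
    kernel = np.ones(smooth_window) / smooth_window
    smooth_q = np.convolve(median_q, kernel, mode="same")   # suppress small-scale flow variation
    return median_q, spread_q, smooth_q

# toy example: three passes over a 100 m screened interval at 6.1 cm resolution
rng = np.random.default_rng(2)
grid = np.arange(0.0, 100.0, 0.061)
truth = np.clip((grid - 20.0) / 60.0, 0.0, 1.0)            # flow accumulating along the screen (invented)
passes = [(grid, truth + 0.02 * rng.normal(size=grid.size)) for _ in range(3)]
q_med, q_unc, q_smooth = combine_flow_logs(grid, passes)
print("typical between-pass scatter:", q_unc.mean())
```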


Hierarchical spatial models for predicting pygmy rabbit distribution and relative abundance

JOURNAL OF APPLIED ECOLOGY, Issue 2 2010
Tammy L. Wilson
Summary 1. Conservationists routinely use species distribution models to plan conservation, restoration and development actions, while ecologists use them to infer process from pattern. These models tend to work well for common or easily observable species, but are of limited utility for rare and cryptic species. This may be because honest accounting of known observation bias and spatial autocorrelation are rarely included, thereby limiting statistical inference of resulting distribution maps. 2. We specified and implemented a spatially explicit Bayesian hierarchical model for a cryptic mammal species (pygmy rabbit Brachylagus idahoensis). Our approach used two levels of indirect sign that are naturally hierarchical (burrows and faecal pellets) to build a model that allows for inference on regression coefficients as well as spatially explicit model parameters. We also produced maps of rabbit distribution (occupied burrows) and relative abundance (number of burrows expected to be occupied by pygmy rabbits). The model demonstrated statistically rigorous spatial prediction by including spatial autocorrelation and measurement uncertainty. 3. We demonstrated flexibility of our modelling framework by depicting probabilistic distribution predictions using different assumptions of pygmy rabbit habitat requirements. 4. Spatial representations of the variance of posterior predictive distributions were obtained to evaluate heterogeneity in model fit across the spatial domain. Leave-one-out cross-validation was conducted to evaluate the overall model fit. 5. Synthesis and applications. Our method draws on the strengths of previous work, thereby bridging and extending two active areas of ecological research: species distribution models and multi-state occupancy modelling. Our framework can be extended to encompass both larger extents and other species for which direct estimation of abundance is difficult. [source]


Matrix effects on accurate mass measurements of low-molecular weight compounds using liquid chromatography-electrospray-quadrupole time-of-flight mass spectrometry

JOURNAL OF MASS SPECTROMETRY (INCORP BIOLOGICAL MASS SPECTROMETRY), Issue 3 2006
F. Calbiani
Abstract Liquid chromatography (LC) with high-resolution mass spectrometry (HRMS) represents a powerful technique for the identification and/or confirmation of small molecules, i.e. drugs, metabolites or contaminants, in different matrices. However, reliability of analyte identification by HRMS is being challenged by the uncertainty that affects the exact mass measurement. This parameter, characterized by accuracy and precision, is influenced by sample matrix and interferent compounds so that questions about how to develop and validate reliable LC-HRMS-based methods are being raised. Experimental approaches for studying the effects of various key factors influencing mass accuracy on low-molecular weight compounds (MW < 150 Da) when using a quadrupole-time-of-flight (QTOF) mass analyzer were described. Biogenic amines in human plasma were considered for the purpose and the effects of peak shape, ion abundance, resolution and data processing on accurate mass measurements of the analytes were evaluated. In addition, the influence of the matrix on the uncertainty associated with their identification and quantitation is discussed. A critical evaluation on the calculation of the limits of detection was carried out, considering the uncertainty associated with exact mass measurement of HRMS-based methods. The minimum concentration level of the analytes that was able to provide a statistical error lower than 5 ppm in terms of precision was 10 times higher than those calculated with S/N = 3, thus suggesting the importance of considering both components of exact mass measurement uncertainty in the evaluation of the limit of detection. Copyright © 2006 John Wiley & Sons, Ltd. [source]
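
Mass accuracy and precision in this context are usually expressed in ppm relative to the theoretical exact mass; a minimal sketch of how a systematic (accuracy) and random (precision) component are computed from replicate measurements follows, with invented numbers.

```python
import statistics

def ppm_error(measured_mz, theoretical_mz):
    """Relative mass measurement error in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# Replicate measurements of one analyte ion (invented values; theoretical m/z assumed)
theoretical = 114.0662
replicates = [114.0659, 114.0665, 114.0660, 114.0668, 114.0661]

errors = [ppm_error(m, theoretical) for m in replicates]
accuracy = statistics.mean(errors)     # systematic component, ppm
precision = statistics.stdev(errors)   # random component, ppm
print(f"accuracy = {accuracy:+.1f} ppm, precision = {precision:.1f} ppm")
```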


Microwave measurement uncertainty due to applied magnetic field

PHYSICA STATUS SOLIDI (A) APPLICATIONS AND MATERIALS SCIENCE, Issue 12 2007
S. Perero
Abstract In recent years there has been wide interest in the production and analysis of films and nanostructures of different types for their microwave properties up to the mm-wave range. In order to characterize the electromagnetic behavior of these devices, new experimental techniques need to be developed and assessed. Typically the measurements involve the use of a vector network analyzer and require several calibration steps. In this paper, we present a summary of the calibration techniques and evaluate the uncertainties obtained under different conditions, with a particular focus on the effect of the applied magnetic field upon uncertainty. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


A pilot evaluation of tibia lead concentrations in Taiwan

AMERICAN JOURNAL OF INDUSTRIAL MEDICINE, Issue 2 2001
Andrew C. Todd
Abstract Background The aims of this study were to examine some of the factors that influence tibia lead concentrations, tibia lead x-ray fluorescence measurement uncertainty and blood lead concentrations, and to compare tibia lead concentrations in Taiwanese lead workers to those observed in lead workers from other countries. Methods A pilot evaluation of 43 adult lead workers who underwent measurements of tibia lead and blood lead concentrations. Results Mean and maximum tibia lead concentrations were 54 µg of Pb per g of bone mineral (µg/g) and 193 µg/g, respectively. Mean and maximum blood lead concentrations were 44 µg/dl and 92 µg/dl, respectively. Conclusion Past occupational control of lead exposure in Taiwan, ROC, did not prevent these workers from accumulating tibia lead concentrations greater than those in similar workers elsewhere in the world. Am. J. Ind. Med. 40:127–132, 2001. © 2001 Wiley-Liss, Inc. [source]


Microbial contaminants in food: a big issue for a working group of the MoniQA NoE project

QUALITY ASSURANCE & SAFETY OF CROPS & FOOD, Issue 2 2009
A. Hoehl
Abstract Introduction The MoniQA Network of Excellence is an EC funded project working towards the harmonisation of analytical methods for monitoring food quality and safety along the food supply chain. This paper summarises both the structure and tasks of the working group on microbial contaminants within the MoniQA NoE and specifically focuses on harmonisation strategies important in the microbiological analysis of food. Objectives There is a need for rapid microbiological methods in order to quickly and efficiently identify harmful pathogens in food sources. However, one of the major problems encountered with many new methods is their market acceptance, as they have to pass extensive validation/standardisation studies before they can be declared as official standard methods. Methods The working group on microbiological contaminants aims to contribute towards speeding up these prerequisites by collecting information on food law, quality assurance, quality control, sampling, economic impact, measurement uncertainty, validation protocols, official standard methods and alternative methods. Results The present report provides an overview of currently existing methodologies and regulations and addresses issues concerning harmonisation needs. One of the deliverables of the working group is the development of extended fact sheets and reviews based on relevant 'hot' topics and methods. The food-borne analytes for these fact sheets were selected based on global, local and individual parameters. The working group has identified 5 groups of stakeholders (governmental bodies, standardisation/validation organisations, test kit/equipment manufacturers, food industry and consumers). Conclusion Current challenges of food microbiology are driven by new analytical methods, changes in the food market and altered consumer desires. The MoniQA NoE is contributing to overcoming these risks and challenges by providing a profound platform on microbiological rapid methods in food analysis to all stakeholders, and it is expected that strong interaction within the network and beyond will foster harmonisation. [source]


Reduction of bias in static closed chamber measurement of δ13C in soil CO2 efflux

RAPID COMMUNICATIONS IN MASS SPECTROMETRY, Issue 2 2010
K. E. Anders Ohlsson
The 13C/12C ratio of soil CO2 efflux (δe) is an important parameter in studies of ecosystem C dynamics, where the accuracy of estimated C flux rates depends on the measurement uncertainty of δe. The static closed chamber method is frequently used in the determination of δe, where the soil CO2 efflux is accumulated in the headspace of a chamber placed on top of the soil surface. However, it has recently been shown that the estimate of δe obtained by using this method can be significantly biased, which potentially diminishes the usefulness of δe for field applications. Here, analytical and numerical models were used to express the bias in δe as mathematical functions of three system parameters: chamber height (H), chamber radius (Rc), and soil air-filled porosity. These expressions allow optimization of chamber size to yield a bias at a level suitable for each particular application of the method. The numerical model was further used to quantify the effects on the δe bias from (i) various designs for sealing of the chamber to the ground, and (ii) inclusion of the commonly used purging step for reduction of the initial headspace CO2 concentration. The present modeling work provided insights into the effects on the δe bias from retardation and partial chamber bypass of the soil CO2 efflux. The results presented here supported the continued use of the static closed chamber method for the determination of δe, with improved control of the bias component of its measurement uncertainty. Copyright © 2009 John Wiley & Sons, Ltd. [source]
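
A common way to estimate δe from a static chamber time series is a Keeling-plot regression, taking the intercept of headspace δ13C against the inverse CO2 concentration as the signature of the added soil efflux. The sketch below illustrates that generic estimator with invented data; it is not the analytical or numerical chamber model used in the paper.

```python
import numpy as np

def keeling_intercept(co2_ppm, delta13c):
    """Estimate the delta13C of the added CO2 source as the intercept of
    delta13C versus 1/[CO2] (ordinary least squares)."""
    x = 1.0 / np.asarray(co2_ppm, dtype=float)
    slope, intercept = np.polyfit(x, np.asarray(delta13c, dtype=float), 1)
    return intercept

# Invented headspace time series: CO2 builds up from ambient as soil efflux accumulates
co2 = [400.0, 520.0, 650.0, 780.0, 900.0]       # ppm
delta = [-8.5, -13.1, -16.0, -17.9, -19.2]      # per mil vs VPDB (invented)
print(f"estimated delta13C of the efflux: {keeling_intercept(co2, delta):.1f} per mil")
```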


Comparison of ozone fluxes over grassland by gradient and eddy covariance technique

ATMOSPHERIC SCIENCE LETTERS, Issue 3 2009
Jennifer B. A. Muller
Abstract Ozone flux measurements over vegetation are important to estimate surface losses and ozone uptake by plants. The gradient and eddy covariance flux technique were used for measurements over grassland at a flux-monitoring site in southern Scotland during August 2007. The comparison of the two methods shows that the aerodynamic flux-gradient method provides very similar long-term average fluxes of ozone as the eddy covariance method. The eddy covariance technique is better at capturing diurnal cycles and short-term changes, but the comparison of two fast analysers illustrate that there can be considerable measurement uncertainty. Copyright © 2009 Royal Meteorological Society [source]