Uncertainty Analysis


Selected Abstracts


Identifying the Potential Loss of Monitoring Wells Using an Uncertainty Analysis

GROUND WATER, Issue 6 2005
Vicky L. Freedman
From the mid-1940s through the 1980s, large volumes of waste water were discharged at the Hanford Site in southeastern Washington State, causing a large-scale rise (>20 m) in the water table. When waste water discharges ceased in 1988, ground water mounds began to dissipate. This caused a large number of wells to go dry and has made it difficult to monitor contaminant plume migration. To identify monitoring wells that will need replacement, a methodology has been developed using a first-order uncertainty analysis with UCODE, a nonlinear parameter estimation code. Using a three-dimensional, finite-element ground water flow code, key parameters were identified by calibrating to historical hydraulic head data. Results from the calibration period were then used to check model predictions by comparing monitoring wells' wet/dry status with field data. This status was analyzed using a methodology that incorporated the 0.3 cumulative probability derived from the confidence and prediction intervals. For comparison, a nonphysically based trend model was also used as a predictor of wells' wet/dry status. Although the numerical model outperformed the trend model, for both models, the central value of the intervals was a better predictor of a wet well status. The prediction interval, however, was more successful at identifying dry wells. Predictions made through the year 2048 indicated that 46% of the wells in the monitoring well network are likely to go dry in areas near the river and where the ground water mound is dissipating. [source]
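
A minimal sketch of the wet/dry classification step described above, assuming normally distributed head-prediction errors; the function name, the use of the 0.3 cutoff, and all numbers are illustrative, not the authors' implementation:

```python
# Sketch: classify a monitoring well as wet or dry from a simulated head and
# its prediction standard error, using a cumulative-probability cutoff in the
# spirit of the abstract (0.3). All names and numbers are illustrative.
from scipy.stats import norm

def wet_dry_status(predicted_head, std_error, well_bottom, p_cut=0.3):
    """Return 'dry' if P(head <= well bottom) exceeds the cutoff."""
    p_dry = norm.cdf(well_bottom, loc=predicted_head, scale=std_error)
    return "dry" if p_dry > p_cut else "wet"

print(wet_dry_status(predicted_head=120.5, std_error=1.2, well_bottom=119.9))
```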


Estimating metabolic biotransformation rates in fish from laboratory data

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 2 2008
Jon A. Arnot
Abstract A method is proposed for estimating metabolic biotransformation rate constants for nonionic organic chemicals from measured laboratory bioconcentration and dietary bioaccumulation data in fish. Data have been selected based on a quality review to reduce uncertainty in the measured values. A kinetic mass balance model is used to estimate rates of chemical uptake and elimination. Biotransformation rate constants are essentially calculated as the difference between two quantities, a measured bioconcentration factor or elimination rate constant, and a model-derived bioconcentration factor or elimination rate constant estimated assuming no biotransformation. Model parameterization exploits key empirical data when they are available and assumes default values when study-specific data are unavailable. Uncertainty analyses provide screening-level assessments for confidence in the biotransformation rate constant estimates. The uncertainty analyses include the range for 95% of the predicted values and 95% confidence intervals for the calculated biotransformation values. Case studies are provided to illustrate the calculation and uncertainty methods. Biotransformation rate constants calculated by the proposed method are compared with other published estimates for 31 chemicals that range in octanol–water partition coefficients from approximately 10^1 to 10^8 and represent over four orders of magnitude in biotransformation potential. The comparison of previously published values with those calculated by the proposed method shows general agreement, with 82% of the estimated values falling within a factor of three. [source]
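
The core calculation, a biotransformation rate constant obtained as the difference between a measured and a model-derived elimination rate constant, can be sketched with a propagated 95% confidence interval as follows; independence and normal errors are assumed, and all values are invented:

```python
import numpy as np

# Sketch: k_M = k(measured total elimination) - k(model, no biotransformation),
# with a 95% CI propagated in quadrature assuming independent normal errors.
k_meas, se_meas = 0.35, 0.04      # measured total elimination rate (1/d)
k_model, se_model = 0.22, 0.03    # modeled elimination without biotransformation

k_m = k_meas - k_model
se_km = np.hypot(se_meas, se_model)          # quadrature sum of standard errors
ci = (k_m - 1.96 * se_km, k_m + 1.96 * se_km)
print(f"k_M = {k_m:.3f} 1/d, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```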


A Grid-enabled problem-solving environment for advanced reservoir uncertainty analysis

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 18 2008
Zhou Lei
Abstract Uncertainty analysis is critical for reservoir performance prediction. However, it is challenging because it (1) relies on massive, modeling-related, geographically distributed, terabyte- or even petabyte-scale data sets (geoscience and engineering data); (2) needs to rapidly perform hundreds or thousands of flow simulations, identical runs with different models that calculate the impacts of various uncertainty factors; and (3) requires an integrated, secure, and easy-to-use problem-solving toolkit to assist the analysis. We leverage Grid computing technologies to address these challenges. We design and implement an integrated problem-solving environment, ResGrid, to effectively improve reservoir uncertainty analysis. ResGrid consists of data management, execution management, and a Grid portal. Data Grid tools, such as metadata, replica, and transfer services, are used to handle the massive size and geographic distribution of the data sets. Workflow, task farming, and resource allocation are used to support large-scale computation. A Grid portal integrates the data management and computation solutions into a unified, easy-to-use interface, enabling reservoir engineers to specify uncertainty factors of interest and perform large-scale reservoir studies through a web browser. ResGrid has been used in petroleum engineering. Copyright © 2008 John Wiley & Sons, Ltd. [source]
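
The "task farming" pattern the abstract mentions, many identical simulations with different uncertainty-factor settings, might look like this in miniature; run_simulation is a hypothetical stand-in for a real flow simulator:

```python
# Sketch of task farming: run many identical flow simulations with different
# uncertainty-factor combinations in parallel. The solver is a placeholder.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_simulation(params):
    perm_mult, aquifer_strength = params
    return perm_mult * aquifer_strength   # placeholder for a real flow run

cases = list(product([0.5, 1.0, 2.0], [0.8, 1.0, 1.2]))  # uncertainty factors

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_simulation, cases))
    print(dict(zip(cases, results)))
```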


Uncertainty analysis of single-concentration exposure data for risk assessment – introducing the species effect distribution approach

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 11 2006
Janeck J. Scott-Fordsmand
Abstract In recent years, the inclusion of uncertainty analysis in risk assessment has been much debated. One pertinent issue is the translation of the effects observed with a limited number of test species to a general protection level for most or all species present in the environment. In a number of cases, toxicity data may consist of data from tests employing only a control and one treatment. Given that more species (or processes) have been tested with the same treatment, the treatment can be considered as fixed, and the effect level of the individual species (or processes) can be considered as variable. The distribution of effects can be viewed as a species effect distribution for that treatment. The distribution will represent all organisms and may be used to predict the maximum impact on any fraction of all organisms (e.g., 95% of all species). Hence, it is possible to predict the maximum effect level, with a selected certainty, for a given fraction of all species. [source]
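
A minimal sketch of the species effect distribution idea: fit a distribution to per-species effect levels at one fixed treatment, then read off the effect level not exceeded by a chosen fraction of species. The data and the normality assumption are illustrative:

```python
import numpy as np
from scipy.stats import norm

# Sketch: effect levels (% reduction) observed for several species at one
# fixed treatment; fit a normal distribution and predict the maximum effect
# covering 95% of all species. Data are invented.
effects = np.array([12.0, 25.0, 8.0, 30.0, 18.0, 22.0, 15.0])
mu, sd = effects.mean(), effects.std(ddof=1)
max_effect_95 = norm.ppf(0.95, loc=mu, scale=sd)
print(f"predicted effect not exceeded by 95% of species: {max_effect_95:.1f}%")
```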


Uncertainty analysis of heat release rate measurement from oxygen consumption calorimetry

FIRE AND MATERIALS, Issue 6 2005
Sylvain Brohez
Abstract Oxygen consumption calorimetry remains the most widespread method for measuring the heat release rate in experimental fire tests. In a first step, this paper examines by theoretical analysis the uncertainty associated with this measurement, especially when CO and soot corrections are applied. The theoretical equations are applied to chlorobenzene, which burns with high CO and soot yields. The uncertainties of the CO and soot corrections prove to be high when the fuel composition is unknown. In a second step, a theoretical analysis is provided for the simplest measurement procedure used in oxygen consumption calorimetry. Depending on the oxygen depletion, the overall uncertainty can be dominated by the uncertainty associated with the oxygen concentration, the assumed heat of combustion, the flue gas mass flow rate, or the assumed combustion expansion factor. Copyright © 2005 John Wiley & Sons, Ltd. [source]
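
A sketch of a first-order uncertainty budget for oxygen consumption calorimetry, combining relative input uncertainties in quadrature for a simplified heat release rate expression q = E * phi * m_e; the input list and magnitudes are illustrative, not the paper's values:

```python
import numpy as np

# Sketch: with q = E * phi * m_e (heat of combustion per unit O2, oxygen
# depletion factor, exhaust mass flow), the combined relative uncertainty is
# the quadrature sum of the relative input uncertainties. Values illustrative.
rel_u = {
    "E (heat of combustion per kg O2)": 0.05,
    "O2 depletion factor": 0.08,
    "exhaust mass flow": 0.04,
    "expansion factor": 0.02,
}
u_c = np.sqrt(sum(u**2 for u in rel_u.values()))
print(f"combined relative uncertainty in HRR: {u_c:.1%}")
```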


Energy, exergy and uncertainty analyses of the thermal response test for a ground heat exchanger

INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 6 2009
M. H. Sharqawy
Abstract This paper presents energy, exergy and uncertainty analyses for the thermal response test of a ground heat exchanger. In this study, a vertical U-shaped ground heat exchanger of 80 m depth and 20 cm borehole diameter was installed, for the first time in Saudi Arabia, at the university premises. A mobile thermal response apparatus was constructed and used to measure the performance of the ground heat exchanger. The thermal response test was carried out four times at different thermal loads from September 2007 to April 2008. The energy and exergy transports of these thermal response tests were analyzed using the experimental results obtained in this period. The analysis provides a better understanding of the overall performance of vertical ground heat exchangers, verifies the thermal response test results and improves the experimental setup. Copyright © 2008 John Wiley & Sons, Ltd. [source]
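
For context, the standard line-source evaluation of a thermal response test can be sketched as follows: the effective ground conductivity follows from the slope of the mean fluid temperature against ln(time). The paper's energy/exergy analysis goes beyond this; the data here are synthetic:

```python
import numpy as np

# Sketch of the line-source evaluation: k = Q / (4*pi*H*slope), where slope is
# the fitted slope of mean fluid temperature versus ln(t). Data are synthetic.
Q, H = 4000.0, 80.0                          # heat injection (W), depth (m)
t = np.linspace(10 * 3600, 72 * 3600, 200)   # 10 h to 72 h, in seconds
T_mean = 22.0 + 1.1 * np.log(t)              # synthetic mean fluid temperature
slope = np.polyfit(np.log(t), T_mean, 1)[0]
k = Q / (4 * np.pi * H * slope)
print(f"effective ground thermal conductivity ~ {k:.2f} W/m-K")
```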


Procedure for determining the uncertainty of photovoltaic module outdoor electrical performance

PROGRESS IN PHOTOVOLTAICS: RESEARCH & APPLICATIONS, Issue 2 2001
K. Whitfield
This paper sets forth an uncertainty estimation procedure for the measurement of photovoltaic (PV) electrical performance using natural sunlight and calibrated secondary reference cells. The actual test irradiance should be restricted to values between 800 and 1000 W/m2 in order to assume that maximum power varies linearly with irradiance. Only the uncertainty of maximum power at standard test conditions (STC), i.e., 1000 W/m2 plane-of-array irradiance and 25°C cell temperature, is developed in its entirety. The basic uncertainty analysis principles developed herein, however, can be applied to any electrical variable of interest (e.g., short-circuit current, open-circuit voltage and fill factor). Although the equations presented appear cumbersome, they are easily implemented in a computer spreadsheet. Examples of uncertainty analyses are also presented herein to make the concepts more concrete. Published in 2001 by John Wiley & Sons, Ltd. [source]
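
A hedged sketch of the STC translation and the quadrature combination of relative uncertainties; the temperature-correction form, its coefficient, and all numbers are assumptions for illustration, not the paper's procedure:

```python
import numpy as np

# Sketch: translate measured maximum power to STC assuming linearity with
# irradiance (valid for 800-1000 W/m2 per the abstract) plus a temperature
# correction, then combine relative uncertainties in quadrature.
P_meas, G, T_cell = 182.0, 920.0, 48.0   # W, W/m2, deg C
gamma = -0.0045                           # assumed power temp. coefficient (1/K)
P_stc = P_meas * (1000.0 / G) / (1.0 + gamma * (T_cell - 25.0))

rel_u = {"P_meas": 0.010, "irradiance (ref cell)": 0.020, "temperature term": 0.007}
u_c = np.sqrt(sum(u**2 for u in rel_u.values()))
print(f"P_STC = {P_stc:.1f} W +/- {u_c:.1%}")
```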


Projecting the burden of diabetes in Australia – what is the size of the matter?

AUSTRALIAN AND NEW ZEALAND JOURNAL OF PUBLIC HEALTH, Issue 6 2009
Dianna J. Magliano
Abstract Objective: To analyse the implications of using different methods to predict diabetes prevalence for the future. Approach: Different methods used to predict diabetes were compared and recommendations are made. Conclusion: We recommend that all projections take a conservative approach to diabetes prevalence prediction and present a 'base case' using the most robust, contemporary data available. We also recommend that uncertainty analyses be included in all analyses. Implications: Despite variation in the assumptions and methodology used, all the published predictions demonstrate that diabetes is an escalating problem for Australia. We can safely assume that unless trends in diabetes incidence are reversed there will be at least 2 million Australian adults with diabetes by 2025. If obesity and diabetes incidence trends continue upwards and mortality continues to decline, up to 3 million people will have diabetes by 2025, with the figure closer to 3.5 million by 2033. The impact of this for Australia has not been measured. [source]


Uncertainty and Sensitivity Analysis of Damage Identification Results Obtained Using Finite Element Model Updating

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2009
Babak Moaveni
The shake table tests were designed so as to damage the building progressively through several historical seismic motions reproduced on the shake table. A sensitivity-based finite element (FE) model updating method was used to identify damage in the building. The estimation uncertainty in the damage identification results was observed to be significant, which motivated the authors to perform, through numerical simulation, an uncertainty analysis on a set of damage identification results. This study systematically investigates the performance of FE model updating for damage identification. The damaged structure is simulated numerically through a change in stiffness in selected regions of a FE model of the shear wall test structure. The uncertainty of the identified damage (location and extent) due to variability of five input factors is quantified through analysis-of-variance (ANOVA) and meta-modeling. These five input factors are: (1–3) the level of uncertainty in the (identified) modal parameters of each of the first three longitudinal modes, (4) the spatial density of measurements (number of sensors), and (5) the mesh size in the FE model used in the FE model updating procedure (a type of modeling error). A full factorial design of experiments is considered for these five input factors. In addition to ANOVA and meta-modeling, this study investigates the one-at-a-time sensitivity of the identified damage to the level of uncertainty in the identified modal parameters of the first three longitudinal modes. The results of this investigation demonstrate that the level of confidence in the damage identification results obtained through FE model updating is a function not only of the level of uncertainty in the identified modal parameters, but also of choices made in the design of experiments (e.g., spatial density of measurements) and of modeling errors (e.g., mesh size). Therefore, the experiments can be designed so that the more influential input factors (those contributing most to the total uncertainty/variability of the damage identification results) are set at optimum levels so as to yield more accurate damage identification results. [source]
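
A toy version of the full factorial design and main-effect variance decomposition described above; the response function, factor names, and levels are hypothetical stand-ins for the damage-identification error:

```python
import numpy as np
from itertools import product

# Sketch: full factorial design over five factors with a crude main-effect
# ANOVA (share of output variance explained by each factor's group means).
levels = {f"factor{i}": [0, 1, 2] for i in range(1, 6)}
design = list(product(*levels.values()))          # 3^5 = 243 runs

rng = np.random.default_rng(0)
def response(x):                                  # placeholder for damage-ID error
    return 2.0 * x[0] + 0.5 * x[3] + 0.1 * rng.normal()

y = np.array([response(x) for x in design])
X = np.array(design)
grand_var = y.var()
for j, name in enumerate(levels):
    means = [y[X[:, j] == lv].mean() for lv in levels[name]]
    print(f"{name}: main-effect variance share ~ {np.var(means) / grand_var:.2f}")
```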


Forks in the Road: Choices in Procedures for Designing Wildland Linkages

CONSERVATION BIOLOGY, Issue 4 2008
PAUL BEIER
sensitivity analysis; connectivity; wildlife corridor; linkage; reserve design
Abstract: Models are commonly used to identify lands that will best maintain the ability of wildlife to move between wildland blocks through matrix lands after the remaining matrix has become incompatible with wildlife movement. We offer a roadmap of 16 choices and assumptions that arise in designing linkages to facilitate movement or gene flow of focal species between 2 or more predefined wildland blocks. We recommend designing linkages to serve multiple (rather than one) focal species likely to serve as a collective umbrella for all native species and ecological processes, explicitly acknowledging untested assumptions, and using uncertainty analysis to illustrate potential effects of model uncertainty. Such uncertainty is best displayed to stakeholders as maps of modeled linkages under different assumptions. We also recommend modeling corridor dwellers (species that require more than one generation to move their genes between wildland blocks) differently from passage species (for which an individual can move between wildland blocks within a few weeks). We identify a problem, which we call the subjective translation problem, that arises because the analyst must subjectively decide how to translate measurements of resource selection into resistance. This problem can be overcome by estimating resistance from observations of animal movement, genetic distances, or interpatch movements. There is room for substantial improvement in the procedures used to design linkages robust to climate change and in tools that allow stakeholders to compare an optimal linkage design to alternative designs that minimize costs or achieve other conservation goals. [source]


Accounting for uncertainty in DEMs from repeat topographic surveys: improved sediment budgets

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 2 2010
Joseph M. Wheaton
Abstract Repeat topographic surveys are increasingly becoming more affordable, and possible at higher spatial resolutions and over greater spatial extents. Digital elevation models (DEMs) built from such surveys can be used to produce DEM of Difference (DoD) maps and estimate the net change in storage terms for morphological sediment budgets. While these products are extremely useful for monitoring and geomorphic interpretation, data and model uncertainties render them prone to misinterpretation. Two new methods are presented, which allow for more robust and spatially variable estimation of DEM uncertainties and propagate these forward to evaluate the consequences for estimates of geomorphic change. The first relies on a fuzzy inference system to estimate the spatial variability of elevation uncertainty in individual DEMs, while the second approach modifies this estimate on the basis of the spatial coherence of erosion and deposition units. Both techniques allow for probabilistic representation of uncertainty on a cell-by-cell basis and thresholding of the sediment budget at a user-specified confidence interval. The application of these new techniques is illustrated with 5 years of high-resolution survey data from a 1 km long braided reach of the River Feshie in the Highlands of Scotland. The reach was found to be consistently degradational, with between 570 and 1970 m3 of net erosion per annum, despite the fact that spatially, deposition covered more surface area than erosion. In the two wetter periods with extensive braid-plain inundation, the uncertainty analysis thresholded at a 95% confidence interval excluded a larger percentage of volumetric change from the budget (57% for 2004–2005 and 59% for 2006–2007) than in the drier years (24% for 2003–2004 and 31% for 2005–2006). For these data, the new uncertainty analysis is generally more conservative volumetrically than a standard spatially uniform minimum level of detection analysis, but also produces more plausible and physically meaningful results. The tools are packaged in a wizard-driven Matlab software application available for download with this paper, and can be calibrated and extended for application to any topographic point cloud (x, y, z). Copyright © 2009 John Wiley & Sons, Ltd. [source]
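
A minimal sketch of probabilistic DoD thresholding in the spirit of the methods described, assuming independent, normally distributed DEM errors; the arrays and error values are synthetic:

```python
import numpy as np

# Sketch: the propagated DoD uncertainty is the quadrature sum of the two DEM
# errors; cells whose |dz| fails a two-tailed test at the chosen confidence
# are excluded from the sediment budget.
dz = np.array([[0.40, -0.05], [-0.62, 0.10]])   # elevation change (m)
s_new = np.full_like(dz, 0.10)                  # cell-wise DEM errors (m)
s_old = np.full_like(dz, 0.15)

sigma_dod = np.hypot(s_new, s_old)
t = np.abs(dz) / sigma_dod
significant = t >= 1.96                         # 95% confidence
net_change = np.where(significant, dz, 0.0).sum()
print(f"thresholded net change: {net_change:.2f} m (times cell area for volume)")
```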


Evaluation of the SWEEP model during high winds on the Columbia Plateau

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 11 2009
G. Feng
Abstract A standalone version of the Wind Erosion Prediction System (WEPS) erosion submodel, the Single-event Wind Erosion Evaluation Program (SWEEP), was released in 2007. Few studies have evaluated SWEEP's simulation of soil loss under different tillage systems during high winds. The objective of this study was to test SWEEP under contrasting tillage systems employed during the summer fallow phase of a winter wheat–summer fallow rotation within eastern Washington. Soil and PM10 (particulate matter ≤10 µm in diameter) loss and soil and crop residue characteristics were measured in adjacent fields managed using conventional and undercutter tillage during summer fallow in 2005 and 2006. While differences in soil surface conditions resulted in measured differences in soil and PM10 loss between the tillage treatments, SWEEP failed to simulate any difference in soil or PM10 loss between conventional and undercutter tillage. In fact, the model simulated zero erosion for all high wind events observed over the two years. The reason for the lack of simulated erosion is complex, owing to the number of parameters and their interacting effects on erosion processes. A possible reason is overestimation of the threshold friction velocity in SWEEP, since friction velocity must exceed the threshold to initiate erosion. Although many input parameters are involved in the estimation of the threshold velocity, internal empirical coefficients and equations may affect the simulation; calibration methods might be useful in adjusting them. Additionally, the lack of uncertainty analysis is an important gap in providing reliable output from this model. Published in 2009 by John Wiley & Sons, Ltd. [source]
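
The erosion trigger at issue, friction velocity exceeding a threshold friction velocity, can be sketched as follows, assuming a logarithmic wind profile; the roughness length and threshold value are illustrative defaults, not SWEEP internals:

```python
import numpy as np

# Sketch: friction velocity from the logarithmic wind profile must exceed a
# threshold friction velocity for any soil loss to be simulated.
KAPPA = 0.4  # von Karman constant

def friction_velocity(u_z, z=10.0, z0=0.01):
    """u* (m/s) from wind speed u_z at height z (m) over roughness z0 (m)."""
    return KAPPA * u_z / np.log(z / z0)

u_star = friction_velocity(u_z=14.0)
u_star_t = 0.55   # threshold friction velocity (m/s), soil-dependent, assumed
verdict = "occurs" if u_star > u_star_t else "does not occur"
print(f"u* = {u_star:.2f} m/s -> erosion {verdict}")
```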


Characterization and uncertainty analysis of VOC emissions from industrial wastewater treatment plants

ENVIRONMENTAL PROGRESS & SUSTAINABLE ENERGY, Issue 3 2010
Kaishan Zhang
Abstract Air toxics from industrial wastewater treatment plants (IWTPs) raise serious health concerns in surrounding residential neighborhoods. One of the key challenges in addressing these concerns is quantifying the air emissions from the IWTPs. The objective here is to characterize air emissions from IWTPs and quantify the associated uncertainty. An IWTP receiving wastewater from an airplane maintenance facility is used for illustration, with focus on quantifying the emissions of benzyl alcohol, phenol, methylene chloride, 2-butanone, and acetone. Two general fate models, WATER9 and TOXCHEM+V3.0, were used to model the IWTP and quantify the air emissions. Monte Carlo and bootstrap simulation were used for the uncertainty analysis. On average, air emissions from the IWTP were estimated to range from 0.003 lb/d to approximately 16 lb/d, with phenol the highest and benzyl alcohol the lowest. However, the emissions carry large uncertainty: the ratio of the 97.5th percentile to the 2.5th percentile air emissions ranged from 5 to 50, depending on the pollutant. This indicates that point estimates of air emissions might fail to capture the worst scenarios, leading to inaccurate conclusions when used for health risk assessment. © 2009 American Institute of Chemical Engineers Environ Prog, 2010 [source]
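
A sketch of the percentile-ratio uncertainty metric reported above, using Monte Carlo draws from an assumed lognormal emission distribution (purely illustrative parameters):

```python
import numpy as np

# Sketch: sample an emission estimate by Monte Carlo and report the ratio of
# its 97.5th to 2.5th percentile, the spread metric used in the abstract.
rng = np.random.default_rng(42)
emissions = rng.lognormal(mean=np.log(5.0), sigma=0.8, size=100_000)  # lb/d
p025, p975 = np.percentile(emissions, [2.5, 97.5])
print(f"2.5th = {p025:.2f}, 97.5th = {p975:.2f}, ratio = {p975 / p025:.1f}")
```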


Modeling the environmental fate of perfluorooctanoate and its precursors from global fluorotelomer acrylate polymer use

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 11 2008
Rosalie van Zelm
Abstract The environment contains various direct and indirect sources of perfluorooctanoic acid (PFOA). The present study uses a dynamic multispecies environmental fate model to analyze the potential formation of perfluorooctanoate (PFO), the anion of PFOA, in the environment from fluorotelomer acrylate polymer (FTacrylate) emitted to landfills and wastewater, residual fluorotelomer alcohol (8:2 FTOH) in FTacrylate, and residual PFOA in FTacrylate. A multispecies version of the SimpleBox model, capable of tracking the fate of a chemical and its degradation products, was developed for this purpose. An uncertainty analysis of the chemical-specific input parameters was performed to quantify the uncertainty in modeled concentrations. In 2005, residual 8:2 FTOH made up 80% of the total contribution of FTacrylate use to PFO concentrations in global oceans, and residual PFOA in FTacrylate contributed 15%. After hundreds of years, however, the main source of PFO from total historical FTacrylate production is predicted to be FTacrylate degrading in soil following land application of sludge from sewage treatment plants, followed by FTacrylate still present in landfills. Uncertainty in modeled PFO concentrations was up to a factor of 3.3. Current FTacrylate use contributes less than 1% of the PFO in seawater, but because direct PFOA emission sources are being reduced while PFOA continues to be formed from FTacrylate in soil and in landfills, this fraction grows over time. [source]
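
The parent-to-degradate bookkeeping such a multispecies model performs can be illustrated with a first-order chain, FTacrylate to 8:2 FTOH to PFO, ignoring transport between compartments; the rate constants are invented:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: first-order degradation chain with illustrative rate constants.
k1, k2 = 0.002, 0.05   # 1/day, invented

def chain(t, y):
    ftacr, ftoh, pfo = y
    return [-k1 * ftacr, k1 * ftacr - k2 * ftoh, k2 * ftoh]

sol = solve_ivp(chain, [0, 3650], [100.0, 0.0, 0.0])   # 10 years, 100 units
print("masses after 10 years (FTacrylate, FTOH, PFO):",
      np.round(sol.y[:, -1], 2))
```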


Evaluating and expressing the propagation of uncertainty in chemical fate and bioaccumulation models

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 4 2002
Matthew MacLeod
Abstract First-order analytical sensitivity and uncertainty analysis for environmental chemical fate models is described and applied to a regional contaminant fate model and a food web bioaccumulation model. By assuming linear relationships between inputs and outputs, independence, and log-normal distributions of input variables, a relationship between uncertainty in input parameters and uncertainty in output parameters can be derived, yielding results that are consistent with a Monte Carlo analysis with similar input assumptions. A graphical technique is devised for interpreting and communicating uncertainty propagation as a function of variance in input parameters and model sensitivity. The suggested approach is less computationally intensive than Monte Carlo analysis and is appropriate for preliminary assessment of uncertainty when models are applied to generic environments or to large geographic areas, or when detailed parameterization of input uncertainties is unwarranted or impossible. This approach is particularly useful as a starting point for identifying sensitive model inputs at the early stages of applying a generic contaminant fate model to a specific environmental scenario, as a tool to support refinements of the model and the uncertainty analysis for site-specific scenarios, or for examining defined end points. The analysis identifies those input parameters that contribute significantly to uncertainty in outputs, enabling attention to be focused on defining median values and more appropriate distributions for these variables. [source]
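
A sketch of the first-order log-normal propagation described, expressing each input's spread as a confidence factor Cf (97.5th percentile over median, so ln Cf = 1.96 sigma) and summing sensitivity-weighted variances; the sensitivities and Cf values are illustrative:

```python
import numpy as np

# Sketch: with independent log-normal inputs, Var(ln Y) is the sum of
# squared log-log sensitivities times input log-variances; contributions
# identify the inputs that dominate output uncertainty.
inputs = {               # name: (confidence factor Cf, sensitivity dlnY/dlnX)
    "half-life": (3.0, 0.9),
    "Kow":       (2.0, 0.4),
    "emission":  (1.5, 1.0),
}
var_lnY = sum((np.log(cf) / 1.96) ** 2 * s**2 for cf, s in inputs.values())
for name, (cf, s) in inputs.items():
    contrib = ((np.log(cf) / 1.96) ** 2 * s**2) / var_lnY
    print(f"{name}: {contrib:.0%} of output variance")
print(f"output confidence factor ~ {np.exp(1.96 * np.sqrt(var_lnY)):.2f}")
```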


Trivial reductions of dimensionality in the propagation of uncertainties: a physical example

ENVIRONMETRICS, Issue 1 2004
Ricardo Bolado
Abstract When performing uncertainty analysis on a mathematical model of a physical process, some coefficients of the differential equations arise from elementary operations on other coefficients. It is shown in this article that variance reduction techniques should be applied to the 'final' or 'reduced' coefficients and not to the original ones, thus reducing the variance of the estimators of the parameters of the output variable distribution. We illustrate the methodology with an application to a physical problem, a radioactive contaminant transport code. A substantial variance reduction is achieved for the estimators of the distribution function, the mean and the variance of the output. Copyright © 2003 John Wiley & Sons, Ltd. [source]
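
A small demonstration of the article's point under stated assumptions: if the output depends on a and b only through c = a*b, and a and b are log-normal, then c is log-normal with known parameters and can be stratified directly. All parameters are invented:

```python
import numpy as np
from scipy.stats import lognorm

# Sketch: variance reduction applied to the reduced coefficient c = a*b
# (stratified sampling of c) versus crude sampling of a and b separately.
rng = np.random.default_rng(1)
sd_c = np.hypot(0.4, 0.3)                 # ln c = ln a + ln b

def output(c):
    return np.exp(-2.0 * c)               # e.g., decay over a fixed time

def crude(n=200):
    a = rng.lognormal(0.0, 0.4, n)
    b = rng.lognormal(0.5, 0.3, n)
    return output(a * b).mean()

def stratified(n=200):
    u = (np.arange(n) + rng.uniform(size=n)) / n   # one draw per stratum
    c = lognorm.ppf(u, s=sd_c, scale=np.exp(0.5))
    return output(c).mean()

reps = np.array([(crude(), stratified()) for _ in range(500)])
print(f"estimator std, crude: {reps[:, 0].std():.5f}, "
      f"stratified on c: {reps[:, 1].std():.5f}")
```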


The Uncertainty in SCHF-DT Thermal Conductivity Measurements of Lotus-Type Porous Copper

ADVANCED ENGINEERING MATERIALS, Issue 10 2009
Hiroshi Chiba
Abstract Lotus-type porous metals with many straight pores are attractive for use as heat sinks because a large heat-transfer capacity can be obtained, due to the small diameter of the pores. In order to use lotus-type porous copper effectively as a heat sink, it is important to know the effective thermal conductivity, accounting for the effect of the pores on heat conduction in the material. Since these metals have anisotropic pores, a steady-state comparative longitudinal heat-flow method for measuring thermal conductivity, following an ASTM standard, is better suited than other methods. So far, the effective thermal conductivity of lotus-type porous copper has been measured using specimens of different thickness (the SCHF-DT method). In this paper, the uncertainty in the effective thermal conductivity of a specimen measured using this method was evaluated by comparing numerical analysis with current experimental data. The following conclusions were drawn: 1) the measured scatter showed good agreement with the uncertainty analysis; 2) the contribution of the thermal grease thickness was large, based on a combined standard uncertainty analysis; and 3) the effective thermal conductivity perpendicular to the pores of lotus copper can be measured within 10% uncertainty by this method. [source]
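
A sketch of the comparative longitudinal heat-flow evaluation with a quadrature uncertainty budget; the grease-layer term is included because the abstract flags it as a large contributor, and all values are illustrative:

```python
import numpy as np

# Sketch: matching heat flux through reference bars gives
# k_s = k_ref * (dT_ref / L_ref) * (L_s / dT_s); the combined standard
# uncertainty is the quadrature sum of the relative input terms.
k_ref = 390.0                   # reference-bar conductivity (W/m-K)
dT_ref, dT_s = 4.0, 6.5         # temperature drops (K)
L_ref, L_s = 0.030, 0.030       # gauge lengths (m)
k_s = k_ref * (dT_ref / L_ref) * (L_s / dT_s)

rel_u = {"k_ref": 0.02, "dT readings": 0.03, "lengths": 0.01, "grease layers": 0.08}
u_c = np.sqrt(sum(u**2 for u in rel_u.values()))
print(f"k_eff ~ {k_s:.0f} W/m-K +/- {u_c:.1%}")
```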


Use of VFSA for resolution, sensitivity and uncertainty analysis in 1D DC resistivity and IP inversion

GEOPHYSICAL PROSPECTING, Issue 5 2003
Bimalendu B. Bhattacharya
ABSTRACT We present results from the resolution and sensitivity analysis of 1D DC resistivity and IP sounding data using a non-linear inversion. The inversion scheme uses a theoretically correct Metropolis–Gibbs sampling technique and an approximate method using numerous models sampled by a global optimization algorithm called very fast simulated annealing (VFSA). VFSA has recently been found to be computationally efficient in several geophysical parameter estimation problems. Unlike conventional simulated annealing (SA), in VFSA the perturbations are generated from the model parameters according to a Cauchy-like distribution whose shape changes with each iteration. This results in an algorithm that converges much faster than a standard SA. In the course of finding the optimal solution, VFSA samples several models from the search space. All these models can be used to obtain estimates of uncertainty in the derived solution. This method makes no assumptions about the shape of an a posteriori probability density function in the model space. Here, we carry out a VFSA-based sensitivity analysis with several synthetic and field sounding data sets for resistivity and IP. The resolution capability of the VFSA algorithm as seen from the sensitivity analysis is satisfactory. The interpretation of VES and IP sounding data by VFSA, incorporating resolution, sensitivity and uncertainty of layer parameters, would generally be more useful than the conventional best-fit techniques. [source]
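
The Cauchy-like VFSA move generator (Ingber's generating function) can be sketched as follows; most draws concentrate near the current model as the temperature T falls, while occasional long jumps remain possible. The bounds and values are illustrative:

```python
import numpy as np

# Sketch of the VFSA move generator: perturbations drawn from a temperature-
# dependent, Cauchy-like distribution, mapped onto the parameter bounds.
rng = np.random.default_rng(0)

def vfsa_perturb(x, lo, hi, T):
    u = rng.uniform()
    y = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T) ** abs(2.0 * u - 1.0) - 1.0)
    return np.clip(x + y * (hi - lo), lo, hi)

rho = 100.0                                  # layer resistivity (ohm-m)
for T in (1.0, 0.1, 0.01):
    samples = [vfsa_perturb(rho, 1.0, 1000.0, T) for _ in range(5)]
    print(f"T={T}: {np.round(samples, 1)}")
```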


Soil organic carbon stock change due to land use activity along the agricultural frontier of the southwestern Amazon, Brazil, between 1970 and 2002

GLOBAL CHANGE BIOLOGY, Issue 10 2010
STOÉCIO M. F. MAIA
Abstract The southwestern portion of the Brazilian Amazon arguably represents the largest agricultural frontier in the world, and within this region the states of Rondônia and Mato Grosso have about 24% and 32% of their respective areas under agricultural management, which is almost half of the total area deforested in the Brazilian Amazon biome. Consequently, it is assumed that deforestation in this region has caused substantial loss of soil organic carbon (SOC). In this study, the changes in SOC stocks due to land use change and management in the southwestern Amazon were estimated for two time periods, 1970–1985 and 1985–2002. An uncertainty analysis was also conducted using a Monte Carlo approach. The results showed that mineral soils converted to agricultural management lost a total of 5.37 and 3.74 Tg C yr−1 during 1970–1985 and 1985–2002, respectively, along the Brazilian agricultural frontier in the states of Mato Grosso and Rondônia. Uncertainties in these estimates were ±37.3% and ±38.6% during the first and second time periods, respectively. The largest sources of uncertainty were associated with reference carbon (C) stocks, expert knowledge surveys about grassland condition, and the management factors for nominal and degraded grasslands. These results showed that land use change and management created a net loss of C from soils; however, the change in SOC stocks decreased substantially from the first to the second time period due to the increase in land under no-tillage. [source]
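
A sketch of an IPCC-style stock-change-factor calculation with Monte Carlo uncertainty, the general approach this kind of regional SOC assessment uses; the distributions and values are illustrative, not the study's inputs:

```python
import numpy as np

# Sketch: SOC change = reference stock x (land-use x management x input
# factors - 1), with each term sampled from an assumed distribution.
rng = np.random.default_rng(7)
n = 50_000
soc_ref = rng.normal(47.0, 47.0 * 0.10, n)   # t C/ha, reference stock
f_lu = rng.normal(0.58, 0.58 * 0.08, n)      # land-use factor (cropland)
f_mg = rng.normal(1.00, 0.05, n)             # management factor
f_i  = rng.normal(0.95, 0.05, n)             # input factor

delta = soc_ref * (f_lu * f_mg * f_i - 1.0)  # change from native stock
lo, mid, hi = np.percentile(delta, [2.5, 50, 97.5])
print(f"SOC change: {mid:.1f} t C/ha (95% CI {lo:.1f} to {hi:.1f})")
```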


Large annual net ecosystem CO2 uptake of a Mojave Desert ecosystem

GLOBAL CHANGE BIOLOGY, Issue 7 2008
GEORG WOHLFAHRT
Abstract The net ecosystem CO2 exchange (NEE) between a Mojave Desert ecosystem and the atmosphere was measured over the course of 2 years at the Mojave Global Change Facility (MGCF, Nevada, USA) using the eddy covariance method. The investigated desert ecosystem was a sink for CO2, taking up 102±67 and 110±70 g C m−2 during 2005 and 2006, respectively. A comprehensive uncertainty analysis showed that most of the uncertainty of the inferred sink strength was due to the need to account for the effects of air density fluctuations on CO2 densities measured with an open-path infrared gas analyser. In order to keep this uncertainty within acceptable bounds, highest standards with regard to maintenance of instrumentation and flux measurement postprocessing have to be met. Most of the variability in half-hourly NEE was explained by the amount of incident photosynthetically active radiation (PAR). On a seasonal scale, PAR and soil water content were the most important determinants of NEE. Precipitation events resulted in an initial pulse of CO2 to the atmosphere, temporarily reducing NEE or even causing it to switch sign. During summer, when soil moisture was low, a lag of 3–4 days was observed before the correlation between NEE and precipitation switched from positive to negative, as opposed to conditions of high soil water availability in spring, when this transition occurred within the same day the rain took place. Our results indicate that desert ecosystem CO2 exchange may be playing a much larger role in global carbon cycling and in modulating atmospheric CO2 levels than previously assumed, especially since arid and semiarid biomes make up >30% of Earth's land surface. [source]
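
The air density (WPL) correction the abstract identifies as the dominant uncertainty source can be sketched as follows, using the Webb et al. (1980) form; the half-hour covariances and scalar values are invented, not MGCF data:

```python
# Sketch of the WPL density correction for an open-path CO2 flux: the raw
# covariance is adjusted for latent- and sensible-heat-driven air density
# fluctuations. All inputs are illustrative.
mu = 28.97 / 18.02               # molar mass ratio, dry air / water vapour
rho_c, rho_v, rho_d = 0.75e-3, 8.0e-3, 1.10   # CO2, vapour, dry air (kg/m3)
T = 300.0                        # air temperature (K)
w_rhoc = -0.25e-6                # cov(w, rho_c), kg m-2 s-1
w_rhov = 0.06e-3                 # cov(w, rho_v), kg m-2 s-1
w_T = 0.08                       # cov(w, T), K m s-1

sigma = rho_v / rho_d
Fc = (w_rhoc
      + mu * (rho_c / rho_d) * w_rhov
      + (1 + mu * sigma) * (rho_c / T) * w_T)
print(f"WPL-corrected CO2 flux: {Fc * 1e6:.3f} mg m-2 s-1")
```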


A Stable and Efficient Numerical Algorithm for Unconfined Aquifer Analysis

GROUND WATER, Issue 4 2009
Elizabeth Keating
The nonlinearity of equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table, does not require "dry" cells to convert to inactive cells, and allows recharge to flow through relatively dry cells to the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problems as well. [source]


SWAT2000: current capabilities and research opportunities in applied watershed modelling

HYDROLOGICAL PROCESSES, Issue 3 2005
J. G. Arnold
Abstract SWAT (Soil and Water Assessment Tool) is a conceptual, continuous time model that was developed in the early 1990s to assist water resource managers in assessing the impact of management and climate on water supplies and non-point source pollution in watersheds and large river basins. SWAT is the continuation of over 30 years of model development within the US Department of Agriculture's Agricultural Research Service and was developed to 'scale up' past field-scale models to large river basins. Model components include weather, hydrology, erosion/sedimentation, plant growth, nutrients, pesticides, agricultural management, stream routing and pond/reservoir routing. The latest version, SWAT2000, has several significant enhancements that include: bacteria transport routines; urban routines; the Green and Ampt infiltration equation; an improved weather generator; the ability to read in daily solar radiation, relative humidity, wind speed and potential ET; Muskingum channel routing; and modified dormancy calculations for tropical areas. A complete set of model documentation for equations and algorithms, a user manual describing model inputs and outputs, and an ArcView interface manual are now complete for SWAT2000. The model has been recoded into Fortran 90 with a complete data dictionary, dynamic allocation of arrays and modular subroutines. Current research is focusing on bacteria, riparian zones, pothole topography, forest growth, channel downcutting and widening, and input uncertainty analysis. SWAT is now used in many countries worldwide. Recent developments in European Environmental Policy, such as the adoption of the European Water Framework Directive in December 2000, demand tools for integrative river basin management, and SWAT is applicable for this purpose. It is a flexible model that can be used under a wide range of different environmental conditions, as this special issue will show. The papers compiled here are the result of the first International SWAT Conference held in August 2001 in Rauischholzhausen, Germany. More than 50 participants from 14 countries discussed their modelling experiences with the model development team from the USA. Nineteen selected papers, with topics ranging from the newest developments, evaluation of river basin management, interdisciplinary approaches for river basin management and the impact of land use change to methodological aspects and models derived from SWAT, are published in this special issue. Copyright © 2005 John Wiley & Sons, Ltd. [source]
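
One of the listed SWAT2000 enhancements, Muskingum channel routing, is compact enough to sketch; K, X, the time step and the inflow hydrograph below are illustrative, not SWAT defaults:

```python
# Sketch of Muskingum routing: O2 = C0*I2 + C1*I1 + C2*O1, with coefficients
# derived from the storage constant K, weighting factor X, and time step dt.
K, X, dt = 12.0, 0.2, 6.0            # storage constant (h), weight, step (h)
D = 2 * K * (1 - X) + dt
C0 = (dt - 2 * K * X) / D
C1 = (dt + 2 * K * X) / D
C2 = (2 * K * (1 - X) - dt) / D      # note C0 + C1 + C2 = 1

inflow = [10, 30, 68, 50, 40, 31, 23, 17, 12, 10]   # m3/s
outflow = [inflow[0]]
for I1, I2 in zip(inflow, inflow[1:]):
    outflow.append(C0 * I2 + C1 * I1 + C2 * outflow[-1])
print([round(q, 1) for q in outflow])
```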


The need for adequate quality assurance/quality control measures for selenium larval deformity assessments: Implications for tissue residue guidelines

INTEGRATED ENVIRONMENTAL ASSESSMENT AND MANAGEMENT, Issue 3 2009
Blair G McDonald
Abstract Assessing the frequency and severity of larval fish deformities is a subjective exercise prone to considerable parameter uncertainty unless appropriate quality assurance/quality control (QA/QC) measures are incorporated. This issue has received limited attention in the literature: only one study was identified that contained adequate data to evaluate the reproducibility of larval deformity data. Parameter uncertainty was substantially larger than expected. Reproducibility between observers was poor for nearly all types and magnitudes of deformities, and there were particularly large differences in how mild deformities were assessed. Reproducibility of the edema endpoint was the poorest of the 4 types of deformity evaluated. Specific recommendations for improving the QA/QC aspects of larval deformity assessments include blind and nonsequential labeling; explicit effort on the development and application of an a priori framework; internal QC checks to quantify the influence of sample preservatives, observer drift, or multiple observers; and an external QC check of a minimum of 10% of all larval fish. Future selenium reproductive studies should include an explicit uncertainty analysis and disclose raw deformity data to facilitate recalculation of tissue residue guidelines as the science in this area advances. [source]


Reduced-order modeling of parameterized PDEs using time–space–parameter principal component analysis

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 8 2009
C. Audouze
Abstract This paper presents a methodology for constructing low-order surrogate models of finite element/finite volume discrete solutions of parameterized steady-state partial differential equations. The construction of proper orthogonal decomposition modes in both physical space and parameter space allows us to represent high-dimensional discrete solutions using only a few coefficients. An incremental greedy approach is developed for efficiently tackling problems with high-dimensional parameter spaces. For numerical experiments and validation, several non-linear steady-state convection–diffusion–reaction problems are considered: first in one spatial dimension with two parameters, and then in two spatial dimensions with two and five parameters. In the two-dimensional spatial case with two parameters, it is shown that a 7 × 7 coefficient matrix is sufficient to accurately reproduce the expected solution, while in the five-parameter problem, a 13 × 6 coefficient matrix is shown to reproduce the solution with sufficient accuracy. The proposed methodology is expected to find applications to parameter variation studies, uncertainty analysis, inverse problems and optimal design. Copyright © 2009 John Wiley & Sons, Ltd. [source]
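
A minimal sketch of the snapshot-POD construction underlying such surrogates: collect solution snapshots over the parameter range, extract modes by SVD, and keep a handful of coefficients per snapshot. The snapshot generator here is a hypothetical parameterized field:

```python
import numpy as np

# Sketch: POD modes from a snapshot matrix, truncated to r coefficients.
x = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.5, 2.0, 30)
snapshots = np.array([np.tanh(p * 10 * (x - 0.5)) for p in params]).T  # (n_x, n_p)

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 5                                    # keep a handful of modes
basis = U[:, :r]
coeffs = basis.T @ snapshots             # r coefficients per snapshot
recon = basis @ coeffs
err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
print(f"relative reconstruction error with {r} modes: {err:.2e}")
```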


Hybrid Framework for Managing Uncertainty in Life Cycle Inventories

JOURNAL OF INDUSTRIAL ECOLOGY, Issue 6 2009
Eric D. Williams
Summary Life cycle assessment (LCA) is increasingly being used to inform decisions related to environmental technologies and policies, such as carbon footprinting and labeling, national emission inventories, and appliance standards. However, LCA studies of the same product or service often yield very different results, affecting the perception of LCA as a reliable decision tool. This does not imply that LCA is intrinsically unreliable; we argue instead that future development of LCA requires that much more attention be paid to assessing and managing uncertainties. In this article we review past efforts to manage uncertainty and propose a hybrid approach combining process and economic input–output (I-O) approaches to uncertainty analysis of life cycle inventories (LCI). Different categories of uncertainty are sometimes not tractable to analysis within a given model framework but can be estimated from another perspective. For instance, cutoff or truncation error induced by some processes not being included in a bottom-up process model can be estimated via a top-down approach such as the economic I-O model. A categorization of uncertainty types is presented (data, cutoff, aggregation, temporal, geographic) with a quantitative discussion of methods for evaluation, particularly for assessing temporal uncertainty. A long-term vision for LCI is proposed in which hybrid methods are employed to quantitatively estimate different uncertainty types, which are then reduced through an iterative refinement of the hybrid LCI method. [source]
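
The cutoff (truncation) error argument can be made concrete with a toy economic input-output calculation: total requirements from the Leontief inverse versus a process-style sum truncated after two tiers. The 3-sector matrix is invented:

```python
import numpy as np

# Sketch: total requirements x = (I - A)^-1 y versus the truncated process
# sum y + A y + A^2 y; the shortfall is an estimate of the cutoff error.
A = np.array([[0.10, 0.04, 0.02],
              [0.05, 0.12, 0.06],
              [0.02, 0.08, 0.09]])      # inter-sector requirements, invented
y = np.array([1.0, 0.0, 0.0])           # final demand in sector 1

x_total = np.linalg.solve(np.eye(3) - A, y)
x_trunc = y + A @ y + A @ A @ y
cutoff = 1.0 - x_trunc.sum() / x_total.sum()
print(f"truncation after two tiers misses ~{cutoff:.1%} of total requirements")
```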


Characterizing, Propagating, and Analyzing Uncertainty in Life-Cycle Assessment: A Survey of Quantitative Approaches

JOURNAL OF INDUSTRIAL ECOLOGY, Issue 1 2007
Shannon M. Lloyd
Summary Life-cycle assessment (LCA) practitioners build models to quantify resource consumption, environmental releases, and potential environmental and human health impacts of product systems. Most often, practitioners define a model structure, assign a single value to each parameter, and build deterministic models to approximate environmental outcomes. This approach fails to capture the variability and uncertainty inherent in LCA. To make good decisions, decision makers need to understand the uncertainty in and divergence between LCA outcomes for different product systems. Several approaches for conducting LCA under uncertainty have been proposed and implemented. For example, Monte Carlo simulation and fuzzy set theory have been applied in a limited number of LCA studies. These approaches are well understood and are generally accepted in quantitative decision analysis. But they do not guarantee reliable outcomes. A survey of approaches used to incorporate quantitative uncertainty analysis into LCA is presented. The suitability of each approach for providing reliable outcomes and enabling better decisions is discussed. Approaches that may lead to overconfident or unreliable results are discussed and guidance for improving uncertainty analysis in LCA is provided. [source]
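
One way to understand the divergence between LCA outcomes for different product systems, as the survey urges, is a comparative Monte Carlo run that reports the probability that one system outperforms the other; the distributions below are illustrative:

```python
import numpy as np

# Sketch: sample both impact scores and report P(A < B) rather than comparing
# two deterministic point values.
rng = np.random.default_rng(3)
n = 100_000
impact_a = rng.lognormal(np.log(10.0), 0.30, n)   # kg CO2-eq per unit
impact_b = rng.lognormal(np.log(12.0), 0.35, n)
print(f"P(A has lower impact than B) = {(impact_a < impact_b).mean():.2f}")
```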


A C1 microkinetic model for methane conversion to syngas on Rh/Al2O3

AICHE JOURNAL, Issue 4 2009
Matteo Maestri
Abstract A microkinetic model capable of describing multiple processes related to the conversion of natural gas to syngas and hydrogen on Rh is derived. The parameters of microkinetic models are subject to (intrinsic) uncertainty arising from estimation. It is shown that intrinsic uncertainty could markedly affect even qualitative model predictions (e.g., the rate-determining step). In order to render kinetic models predictive, we propose a hierarchical, data-driven methodology, where microkinetic model analysis is combined with a comprehensive, kinetically relevant set of nearly isothermal experimental data. The new, thermodynamically consistent model is capable of predicting several processes, including methane steam and dry reforming, catalytic partial oxidation, H2- and CO-rich combustion, water-gas shift and its reverse at different temperatures, space velocities, compositions and reactant dilutions, using the measured Rh dispersion as an input. Comparison with other microkinetic models is undertaken. Finally, an uncertainty analysis assesses the effect of intrinsic uncertainty and catalyst heterogeneity on the overall model predictions. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]
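
The thermodynamic-consistency requirement such a model satisfies can be sketched in one line per reversible step: the reverse rate constant follows from the forward rate and the reaction free energy; the numbers are illustrative, not fitted Rh parameters:

```python
import numpy as np

# Sketch: for every reversible elementary step, k_rev = k_fwd / K_eq with
# K_eq = exp(-dG/RT), so equilibrium is respected at any temperature.
R = 8.314  # J/mol/K

def k_reverse(k_fwd, dH, dS, T):
    """Reverse rate constant from k_fwd and reaction dH (J/mol), dS (J/mol/K)."""
    K_eq = np.exp(-(dH - T * dS) / (R * T))
    return k_fwd / K_eq

print(f"k_rev at 800 K: {k_reverse(k_fwd=1e5, dH=-60e3, dS=-50.0, T=800.0):.3e}")
```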