Model Evaluation
Selected Abstracts

GEOLOGICAL MODEL EVALUATION THROUGH WELL TEST SIMULATION: A CASE STUDY FROM THE WYTCH FARM OILFIELD, SOUTHERN ENGLAND
JOURNAL OF PETROLEUM GEOLOGY, Issue 1 2007
S.Y. Zheng

This paper presents an approach to the evaluation of reservoir models using transient pressure data. Braided fluvial sandstones exposed in cliffs in SW England were studied as the surface equivalent of the Triassic Sherwood Sandstone, a reservoir unit at the nearby Wytch Farm oilfield. Three reservoir models were built; each used a different modelling approach, ranging in complexity from stochastic pixel-based modelling using commercially available software to a spreadsheet random number generator. In order to test these models, numerical well test simulations were conducted using sector models extracted from the geological models constructed. The simulation results were then evaluated against the actual well test data in order to find the model which best represented the field geology. Two wells at the Wytch Farm field were studied. The results suggested that for one of the sampled wells, the model built using the spreadsheet random number generator gave the best match to the well test data. In this well, the permeability from the test interpretation matched the geometric average permeability. This average is the "correct" upscaled permeability for a random system, and this was consistent with the random nature of the geological model. For the second well investigated, a more complex "channel object" model appeared to fit the dynamic data better. All the models were built with stationary properties. However, the well test data suggested that some parts of the field have different statistical properties and hence show non-stationarity. These differences would have to be built into the model representing the local geology.
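The claim that the geometric average is the "correct" upscaled permeability for a random system can be illustrated with a short sketch. This is a generic illustration, not the authors' code: the lognormal field parameters are invented, and a spatially uncorrelated field stands in for the geological model.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical spatially uncorrelated (random) permeability field, in mD.
# A lognormal distribution is a common assumption for permeability.
k = rng.lognormal(mean=np.log(100.0), sigma=1.0, size=100_000)

arithmetic = k.mean()                   # upper bound: flow parallel to layers
harmonic = 1.0 / np.mean(1.0 / k)       # lower bound: flow through layers in series
geometric = np.exp(np.mean(np.log(k)))  # effective value for an isotropic 2D random field

# The effective permeability of a heterogeneous field always lies between the
# harmonic (series) and arithmetic (parallel) bounds; for a statistically
# isotropic 2D random field it equals the geometric mean.
assert harmonic <= geometric <= arithmetic
print(f"harmonic={harmonic:.1f}  geometric={geometric:.1f}  arithmetic={arithmetic:.1f}")
```

A well test over such a field would therefore be expected to report a permeability near the geometric, not arithmetic, average.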
This study presents a workflow that is not yet considered standard in the oil industry, and the use of dynamic data to evaluate geological models requires further development. The study highlights the fact that the comparison or matching of results from reservoir models and well-test analyses is not always straightforward, in that different models may match different wells. The study emphasises the need for integrated analyses of geological and engineering data. The methods and procedures presented are intended to form a feedback loop which can be used to evaluate the representativeness of a geological model. [source]

Electrothermal Model Evaluation of Grain Size and Disorder Effects on Pulsed Voltage Response of Microstructured ZnO Varistors
JOURNAL OF THE AMERICAN CERAMIC SOCIETY, Issue 4 2008
Guogang Zhao

Time-dependent, two-dimensional, electrothermal simulations based on random Voronoi networks have been developed to study the internal heating, current distributions and breakdown effects in ZnO varistors in response to high-voltage pulsing. The simulations allow for dynamic prediction of internal failures and for tracking the progression of hot-spots and thermal stresses. The focus is on internal grain-size variations and relative disorder, including micropores. Our results predict that parameters such as the hold-off voltage, internal temperature, and average dissipated energy density would be higher with more uniform grains. This uniformity is also predicted to produce lower thermal stresses and to allow for the application of longer duration pulses. It is shown that the principal failure mechanism arises from internal localized melting, while thermal stresses remain well below the thresholds for cracking. Finally, detrimental effects of micropores have been quantified and shown to be in agreement with experimental trends.
[source]

Environmental determinants of vascular plant species richness in the Austrian Alps
JOURNAL OF BIOGEOGRAPHY, Issue 7 2005
Dietmar Moser

Abstract
Aim: To test predictions of different large-scale biodiversity hypotheses by analysing species richness patterns of vascular plants in the Austrian Alps.
Location: The Austrian part of the Alps (c. 53,500 km²).
Methods: Within the floristic inventory of Central Europe, the Austrian part of the Alps was systematically mapped for vascular plants. Data collection was based on a rectangular grid of 5 × 3 arc minutes (34–35 km²). Emerging species richness patterns were correlated with several environmental factors using generalized linear models. Primary environmental variables like temperature, precipitation and evapotranspiration were used to test climate-related hypotheses of species richness. Additionally, spatial and temporal variations in climatic conditions were considered. Bedrock geology, particularly the amount of calcareous substrates, the proximity to rivers and lakes, and secondary variables like topographic, edaphic and land-use heterogeneity were used as additional predictors. Model results were evaluated by correlating modelled and observed species numbers.
Results: Our final multiple regression model explains c. 50% of the variance in species richness patterns. Model evaluation results in a correlation coefficient of 0.64 between modelled and observed species numbers in an independent test data set. Climatic variables like temperature and potential evapotranspiration (PET) proved to be by far the most important predictors. In general, variables indicating climatic favourableness, like the maxima of temperature and PET, performed better than those indicating stress, like the respective minima. Bedrock mineralogy, especially the amount of calcareous substrate, had some additional explanatory power but was less influential than suggested by comparable studies.
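The modelling and evaluation steps described here, a generalized linear model for species counts evaluated by correlating modelled and observed numbers, can be sketched generically. The code below is a plain Poisson log-link GLM fitted by iteratively reweighted least squares on synthetic data; the predictor, sample size and coefficients are invented and are not the study's model or data.

```python
import numpy as np

def fit_poisson_glm(X, y, n_iter=25):
    """Poisson GLM with log link via IRLS: beta = solve(X'WX, X'Wz)."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean() + 1e-9)     # start at the intercept-only fit
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu           # working response for the log link
        XtW = X.T * mu                    # IRLS weights W = mu
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

rng = np.random.default_rng(7)
n = 5000
temp = rng.uniform(-1, 1, n)              # hypothetical standardized predictor
X = np.column_stack([np.ones(n), temp])
true_beta = np.array([2.0, 0.5])
y = rng.poisson(np.exp(X @ true_beta))    # synthetic species counts

beta_hat = fit_poisson_glm(X, y)
# Evaluate as in the study: correlate modelled and observed counts
r = np.corrcoef(np.exp(X @ beta_hat), y)[0, 1]
```

On held-out data the same correlation would serve as the independent evaluation statistic reported in the abstract.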
The amount of precipitation does not have any effect on species richness regionally. Among the descriptors of heterogeneity, edaphic and land-use heterogeneity are more closely correlated with species numbers than topographic heterogeneity.
Main conclusions: The results support energy-driven processes as primary determinants of vascular plant species richness in temperate mountains. Stressful conditions obviously decrease species numbers, but the presence of favourable habitats has higher predictive power in the context of species richness modelling. The importance of precipitation for driving global species diversity patterns is not necessarily reflected regionally. Annual range of temperature, an indicator of short-term climatic stability, proved to be of minor importance for the determination of regional species richness patterns. In general, our study suggests environmental heterogeneity to be of rather low predictive value for species richness patterns regionally. However, it may gain importance at more local scales. [source]

A ground-level ozone forecasting model for Santiago, Chile
JOURNAL OF FORECASTING, Issue 6 2002
Héctor Jorquera

Abstract A physically based model for ground-level ozone forecasting is evaluated for Santiago, Chile. The model predicts the daily peak ozone concentration, with the daily rise of air temperature as input variable; weekends and rainy days appear as interventions. This model was used to analyse historical data, using the Linear Transfer Function/Finite Impulse Response (LTF/FIR) formalism; the Simultaneous Transfer Function (STF) method was used to analyse several monitoring stations together. Model evaluation showed a good forecasting performance across stations, for low and high ozone impacts, with power of detection (POD) values between 70% and 100%, Heidke skill scores between 40% and 70%, and low false alarm rates (FAR). The model consistently outperforms a pure persistence forecast.
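The verification scores quoted above are all computed from a 2×2 contingency table of forecast versus observed exceedance events. A minimal sketch follows; the counts are invented for illustration, and FAR is computed here as the false alarm ratio, one common convention.

```python
def verification_scores(hits, false_alarms, misses, correct_negatives):
    """Categorical verification scores for event forecasts (e.g. ozone exceedances)."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    pod = a / (a + c)                     # power of detection: fraction of events caught
    far = b / (a + b)                     # false alarm ratio: fraction of alarms that were wrong
    # Heidke skill score: accuracy relative to random chance (1 = perfect, 0 = no skill)
    hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return pod, far, hss

pod, far, hss = verification_scores(hits=30, false_alarms=10,
                                    misses=20, correct_negatives=40)
print(f"POD={pod:.2f}  FAR={far:.2f}  HSS={hss:.2f}")  # POD=0.60  FAR=0.25  HSS=0.40
```

A pure persistence forecast scored with the same table gives the baseline the model is said to outperform.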
Model performance was not sensitive to different implementation options. The model performance degrades for two- and three-day-ahead forecasts, but is still acceptable for the purpose of developing an environmental warning system for Santiago. Copyright © 2002 John Wiley & Sons, Ltd. [source]

A general model for predicting brown tree snake capture rates
ENVIRONMETRICS, Issue 3 2003
Richard M. Engeman

Abstract The inadvertent introduction of the brown tree snake (Boiga irregularis) to Guam has resulted in the extirpation of most of the island's native terrestrial vertebrates, has presented a health hazard to small children, and has also produced economic problems. Trapping around ports and other cargo staging areas is central to a program designed to deter dispersal of the species. Sequential trapping of smaller plots is also being used to clear larger areas of snakes in preparation for endangered species reintroductions. Traps and trapping personnel are limited resources, which places a premium on the ability to plan the deployment of trapping efforts. In a series of previous trapping studies, data on brown tree snake removal from forested plots were found to be well modeled by exponential decay functions. For the present article, we considered a variety of model forms and estimation procedures, and used capture data from individual plots as random subjects to produce a general random coefficients model for making predictions of brown tree snake capture rates. The best model was an exponential decay with positive asymptote, produced using nonlinear mixed model estimation, where variability among plots was introduced through the scale and asymptote parameters. Practical predictive abilities were used in model evaluation so that a manager could project capture rates in a plot after a period of time, or project the amount of time required for trapping to reduce capture rates to a desired level.
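An exponential decay to a positive asymptote supports exactly the two planning calculations described: projecting the capture rate after a period of trapping, and inverting the curve for the time needed to reach a target rate. The sketch below uses invented coefficients, not the paper's mixed-model estimates.

```python
import math

def capture_rate(t, a, b, c):
    """Capture rate after t days of trapping: exponential decay to asymptote c."""
    return c + a * math.exp(-b * t)

def time_to_reach(target, a, b, c):
    """Invert the model: days of trapping until the rate falls to `target`."""
    if target <= c:
        raise ValueError("a target at or below the asymptote is never reached")
    return -math.log((target - c) / a) / b

# Hypothetical plot-level coefficients (snakes per trap-night), for illustration only
a, b, c = 0.20, 0.05, 0.02

rate_30 = capture_rate(30, a, b, c)   # projected rate after 30 days
days = time_to_reach(0.05, a, b, c)   # trapping effort needed to reach 0.05
```

In the random coefficients setting, `a` and `c` would vary by plot, giving plot-specific versions of both projections.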
The model should provide managers with a tool for optimizing the allocation of limited trapping resources. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Diagnostic evaluation of conceptual rainfall–runoff models using temporal clustering
HYDROLOGICAL PROCESSES, Issue 20 2010
N. J. de Vos

Abstract Given the structural shortcomings of conceptual rainfall–runoff models and the common use of time-invariant model parameters, these parameters can be expected to represent broader aspects of the rainfall–runoff relationship than merely the static catchment characteristics that they are commonly supposed to quantify. In this article, we relax the common assumption of time-invariance of parameters, and instead seek signature information about the dynamics of model behaviour and performance. We do this by using a temporal clustering approach to identify periods of hydrological similarity, allowing the model parameters to vary over the clusters found in this manner, and calibrating these parameters simultaneously. The diagnostic information inferred from these calibration results, based on the patterns in the parameter sets of the various clusters, is used to enhance the model structure. This approach shows how diagnostic model evaluation can be used to combine information from the data and the functioning of the hydrological model in a useful manner. Copyright © 2010 John Wiley & Sons, Ltd. [source]

Assessment of rainfall–runoff models based upon wavelet analysis
HYDROLOGICAL PROCESSES, Issue 5 2007
Stuart N. Lane

Abstract A basic hypothesis is proposed: given that wavelet-based analysis has been used to interpret runoff time-series, it may be extended to the evaluation of rainfall–runoff model results. Conventional objective functions make certain assumptions about the data series to which they are applied (e.g. uncorrelated error, homoscedasticity).
The difficulty that objective functions have in distinguishing between different realizations of the same model, or between different models of the same system, may have contributed in part to the occurrence of model equifinality. Of particular concern is the fact that the error present in a rainfall–runoff model may be time dependent, requiring some form of time localization in both the identification of error and the derivation of global objective functions. We explore the use of a complex Gaussian (order 2) wavelet to describe: (1) a measured hydrograph; (2) the same hydrograph with different simulated errors introduced; and (3) model predictions of the same hydrograph based upon a modified form of TOPMODEL. The analysis of results was based upon: (a) differences in wavelet power (the wavelet power error) between the measured hydrograph and both the simulated-error and modelled hydrographs; and (b) the wavelet phase. Power difference and wavelet phase were used to develop two objective functions, RMSE(power) and RMS(phase), which were shown to distinguish between simulated errors and model predictions with similar values of the commonly adopted Nash–Sutcliffe efficiency index. These objective functions suffer because they do not retain time, frequency or time-frequency localization. Consideration of wavelet power spectra and time- and frequency-integrated power spectra shows that the impacts of different types of simulated error can be seen through retention of some localization, especially in relation to when, and the scale over which, error was manifest. Theoretical objections to the use of wavelet analysis for this type of application are noted, especially in relation to the dependence of findings upon the wavelet chosen.
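A wavelet-power objective function of this kind can be sketched as follows. The wavelet is the complex Gaussian of order 2 named in the abstract, but the transform implementation, normalization, and the exact definition of RMSE(power) are assumptions made for illustration, not the paper's formulation.

```python
import numpy as np

def cgau2(t):
    """Complex Gaussian wavelet of order 2: d^2/dt^2 of exp(-1j*t - t**2)."""
    return (4 * t ** 2 + 4j * t - 3) * np.exp(-1j * t - t ** 2)

def cwt(x, scales):
    """Naive continuous wavelet transform by direct convolution at each scale."""
    rows = []
    for s in scales:
        k = np.arange(-int(4 * s), int(4 * s) + 1)   # wavelet support in samples
        psi = cgau2(k / s) / np.sqrt(s)
        rows.append(np.convolve(x, np.conj(psi[::-1]), mode="same"))
    return np.array(rows)

def rmse_power(obs, sim, scales):
    """Global objective on wavelet power; RMS(phase) would use np.angle instead."""
    p_obs = np.abs(cwt(obs, scales)) ** 2
    p_sim = np.abs(cwt(sim, scales)) ** 2
    return np.sqrt(np.mean((p_obs - p_sim) ** 2))

i = np.arange(400)
observed = np.exp(-((i - 160) ** 2) / 200.0)   # idealized hydrograph peak
on_time = observed.copy()                       # model with perfect timing
late = np.exp(-((i - 200) ** 2) / 200.0)       # same peak, 40 steps late

scales = [2.0, 4.0, 8.0]
err_a = rmse_power(observed, on_time, scales)   # identical power everywhere
err_b = rmse_power(observed, late, scales)      # timing error shifts the power map
```

Because the power maps are compared pointwise in time and scale before averaging, a pure timing error is penalized even though the final statistic, as the abstract notes, no longer retains that localization.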
However, it is argued that the benefits of localization and the qualitatively low sensitivity of wavelet power and phase to wavelet choice are sufficient to warrant further exploration of wavelet-based approaches to rainfall–runoff model evaluation. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Application of the distributed hydrology soil vegetation model to Redfish Creek, British Columbia: model evaluation using internal catchment data
HYDROLOGICAL PROCESSES, Issue 2 2003
Andrew Whitaker

Abstract The Distributed Hydrology Soil Vegetation Model is applied to the Redfish Creek catchment to investigate the suitability of this model for the simulation of forested mountainous watersheds in interior British Columbia and other high-latitude and high-altitude areas. On-site meteorological data and GIS information on terrain parameters, forest cover, and soil cover are used to specify model input. A stepwise approach is taken in calibrating the model, in which snow accumulation and melt parameters for clear-cut and forested areas were optimized independently of runoff production parameters. The calibrated model performs well in reproducing year-to-year variability in the outflow hydrograph, including peak flows. In the subsequent model performance evaluation for simulation of catchment processes, emphasis is put on elevation and temporal differences in snow accumulation and melt, spatial patterns of snowline retreat, water table depth, and internal runoff generation, using internal catchment data as much as possible. Although the overall model performance based on these criteria is found to be good, some issues regarding the simulation of internal catchment processes remain. These issues are related to the distribution of meteorological variables over the catchment and a lack of information on spatial variability in soil properties and soil saturation patterns. Present data limitations for testing internal model accuracy serve to guide future data collection at Redfish Creek.
This study also illustrates the challenges that need to be overcome before distributed physically based hydrologic models can be used for simulating catchments with fewer data resources. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Nutrient fluxes at the river basin scale. I: the PolFlow model
HYDROLOGICAL PROCESSES, Issue 5 2001

Abstract Human activity has resulted in increased nutrient levels in rivers and coastal seas all over Europe. Models that can describe nutrient fluxes from pollution sources to river outlets may help policy makers to select the most effective source control measures to achieve a reduction of nutrient levels in rivers and coastal seas. Part I of this paper describes the development of such a model: PolFlow. PolFlow was specially designed for operation at the river basin scale and is here applied to model 5-year average nitrogen and phosphorus fluxes in two European river basins (Rhine and Elbe) covering the period 1970–1995. Part II reports an error analysis and model evaluation, and compares PolFlow to simpler alternative models. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Downscaling simulations of future global climate with application to hydrologic modelling
INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 4 2005
Eric P. Salathé Jr

Abstract This study approaches the problem of downscaling global climate model simulations with an emphasis on validating and selecting global models. The downscaling method makes minimal, physically based corrections to the global simulation while preserving much of the statistics of interannual variability in the climate model. Differences among the downscaled results for simulations of present-day climate form a basis for model evaluation. The downscaled results are used to simulate streamflow in the Yakima River, a mountainous basin in Washington, USA, to illustrate how model differences affect streamflow simulations.
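One common form of minimal, physically based correction is local scaling: per-month factors derived from the ratio of observed to simulated climatology, which remove the mean bias while leaving the model's interannual variability largely intact. This is a generic sketch of that idea with invented synthetic data, not necessarily the exact method of the paper.

```python
import numpy as np

def monthly_scaling_factors(obs, sim, months):
    """Per-month ratio of observed to simulated mean (e.g. for precipitation)."""
    factors = np.ones(12)
    for m in range(12):
        sel = months == m
        factors[m] = obs[sel].mean() / sim[sel].mean()
    return factors

def downscale(sim, months, factors):
    """Apply the correction; anomalies scale along with the climatology."""
    return sim * factors[months]

rng = np.random.default_rng(0)
n_years = 30
months = np.tile(np.arange(12), n_years)
obs = rng.gamma(shape=2.0, scale=50.0, size=months.size)          # hypothetical station precip (mm)
sim = 0.7 * rng.gamma(shape=2.0, scale=50.0, size=months.size)    # biased GCM grid-cell precip

factors = monthly_scaling_factors(obs, sim, months)
corrected = downscale(sim, months, factors)
```

By construction the corrected series reproduces the observed monthly climatology exactly, so remaining differences between models show up in their variability, which is the basis for model evaluation described above.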
The downscaling is applied to the output of three models (ECHAM4, HADCM3, and NCAR-PCM) for simulations of historic conditions (1900–2000) and two future emissions scenarios (A2 and B2 for 2000–2100) from the IPCC assessment. The ECHAM4 simulation closely reproduces the observed statistics of temperature and precipitation for the 42-year period 1949–90. Streamflow computed from this climate simulation likewise produces similar statistics to streamflow computed from the observed data. Downscaled climate-change scenarios from these models are examined in light of the differences in the present-day simulations. Streamflows simulated from the ECHAM4 results show the greatest sensitivity to climate change, with the peak in summertime flow occurring 2 months earlier by the end of the 21st century. Copyright © 2005 Royal Meteorological Society. [source]

Shared environment representation for a human-robot team performing information fusion
JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 11-12 2007
Tobias Kaupp

This paper addresses the problem of building a shared environment representation by a human-robot team. Rich environment models are required in real applications both for autonomous operation of robots and to support human decision-making. Two probabilistic models are used to describe outdoor environment features such as trees: geometric (position in the world) and visual. The visual representation is used to improve data association and to classify features. Both models are able to incorporate observations from robotic platforms and human operators. Physically, humans and robots form a heterogeneous sensor network. In our experiments, the human-robot team consists of an unmanned air vehicle, a ground vehicle, and two human operators. They are deployed for an information gathering task and perform information fusion cooperatively. All aspects of the system, including the fusion algorithms, are fully decentralized.
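Decentralized fusion of Gaussian feature estimates is often written in information (inverse-covariance) form, where each platform's estimate contributes additively, so the result is independent of where and in what order fusion happens. The sketch below illustrates only that property with invented numbers; the paper's algorithms are richer than this.

```python
import numpy as np

def fuse(estimates):
    """Fuse independent Gaussian estimates (mean, covariance) of a 2D feature position.

    In information form, Y = sum(P_i^-1) and y = sum(P_i^-1 @ mu_i), so fusion
    is associative and commutative -- the property decentralized networks rely on.
    """
    Y = np.zeros((2, 2))
    y = np.zeros(2)
    for mu, P in estimates:
        P_inv = np.linalg.inv(P)
        Y += P_inv
        y += P_inv @ mu
    P_fused = np.linalg.inv(Y)
    return P_fused @ y, P_fused

# Hypothetical tree position observed by a UAV, a ground vehicle, and an operator
uav = (np.array([10.2, 5.1]), np.diag([0.8, 0.8]))
ugv = (np.array([10.0, 4.8]), np.diag([0.2, 0.2]))
operator = (np.array([9.5, 5.0]), np.diag([2.0, 2.0]))

mu_all, P_all = fuse([uav, ugv, operator])     # fuse everything at one node
mu_ab, P_ab = fuse([uav, ugv])                 # or fuse pairwise at two nodes...
mu_split, P_split = fuse([(mu_ab, P_ab), operator])
```

The two fusion orders give identical results, and the fused covariance is tighter than any single platform's estimate, which is what makes human observations worth incorporating alongside robotic ones.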
Experimental results are presented in the form of the acquired multi-attribute feature map, information exchange patterns demonstrating human-robot information fusion, and quantitative model evaluation. Lessons learned from deploying the system in the field are also presented. © 2007 Wiley Periodicals, Inc. [source]

Processing and classification of protein mass spectra
MASS SPECTROMETRY REVIEWS, Issue 3 2006
Melanie Hilario

Abstract Among the many applications of mass spectrometry, biomarker pattern discovery from protein mass spectra has aroused considerable interest in the past few years. While research efforts have raised hopes of early and less invasive diagnosis, they have also brought to light the many issues to be tackled before mass-spectra-based proteomic patterns become routine clinical tools. Known issues cover the entire pipeline leading from sample collection through mass spectrometry analytics to biomarker pattern extraction, validation, and interpretation. This study focuses on the data-analytical phase, which takes as input mass spectra of biological specimens and discovers patterns of peak masses and intensities that discriminate between different pathological states. We survey current work and investigate computational issues concerning the different stages of the knowledge discovery process: exploratory analysis, quality control, and diverse transforms of mass spectra, followed by further dimensionality reduction, classification, and model evaluation. We conclude after a brief discussion of the critical biomedical task of analyzing discovered discriminatory patterns to identify their component proteins as well as interpret and validate their biological implications. © 2006 Wiley Periodicals, Inc., Mass Spec Rev 25:409–449, 2006 [source]

Microcellular model evaluation for the deformation of dynamically vulcanized EPDM/iPP blends
POLYMER ENGINEERING & SCIENCE, Issue 3 2003
Kathryn J. Wright

The origins of elasticity in thermoplastic vulcanizates have been debated for the past decade. Previous modeling attempts provide numerical solutions that make assessment of constituent concentration and interaction unclear. A microcellular modeling approach is proposed and evaluated herein to describe the steady-state behavior of dynamically vulcanized blends of ethylene-propylene-diene monomer (EPDM) and isotactic polypropylene (iPP). This approach provides an analytic result including terms for composition and cure state. Three types of deformation are accounted for: elastic and plastic deformation of iPP, elastic deformation of EPDM, and localized elastic and plastic rotation about iPP junction points. The viability of the constitutive model is evaluated in terms of iPP concentration and EPDM cure state. [source]

Vancomycin dosing assessment in intensive care unit patients based on a population pharmacokinetic/pharmacodynamic simulation
BRITISH JOURNAL OF CLINICAL PHARMACOLOGY, Issue 2 2010
Natalia Revilla

WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT
- Despite the frequent use of vancomycin in intensive care unit (ICU) patients, few studies aimed at characterizing vancomycin population pharmacokinetics have been performed in this critical population.
- Population pharmacokinetics coupled with pharmacodynamic analysis, in order to optimize drug exposure and hence antibacterial effectiveness, has been little applied in these specific patients.

WHAT THIS STUDY ADDS
- Our population model characterized the pharmacokinetic profile of vancomycin in adult ICU patients, higher distribution volume values (V) being observed when the patient's serum creatinine (CrSe) was greater than 1 mg dl⁻¹.
- Age and creatinine clearance (CLcr) were identified as the main covariates explaining the pharmacokinetic variability in vancomycin CL.
- Our pharmacokinetic/pharmacodynamic (PK/PD) simulation should aid clinicians to select initial vancomycin doses that will maximize the rate of response in the ICU setting, taking into account the patient's age and renal function as well as the susceptibility of Staphylococcus aureus.

AIM
To estimate the vancomycin pharmacokinetic profile in adult ICU patients and to assess vancomycin dosages for increasing the likelihood of optimal exposure.

METHODS
Five hundred and sixty-nine concentration–time data points from 191 patients were analysed using a population pharmacokinetic approach (NONMEM®). External model evaluation was made in 46 additional patients. The 24 h area under the concentration–time curve (AUC(0–24 h)) was derived from the final model. Minimum inhibitory concentration (MIC) values for S. aureus were obtained from the EUCAST database. AUC(0–24 h):MIC ≥ 400 was considered as the PK/PD efficacy index. The probability of different dosages attaining the target, considering different strains of S. aureus and patient subgroups, was estimated with Monte Carlo simulation.

RESULTS
Vancomycin CL showed a significant dependence on patient age and renal function, whereas CrSe > 1 mg dl⁻¹ increased V more than twofold. For our representative ICU patient (61 years, 73 kg, CrSe = 1.4 mg dl⁻¹, measured CLcr = 74.7 ml min⁻¹), the estimated values were CL = 1.06 ml min⁻¹ kg⁻¹ and V = 2.04 l kg⁻¹. The cumulative fraction of response for a standard vancomycin dose (2 g day⁻¹) was less than 25% for VISA strains, and 33% to 95% for susceptible S. aureus, depending on patient characteristics.

CONCLUSIONS
Simulations provide useful information regarding the initial assessment of vancomycin dosing, the conventional dosing regimen probably being suboptimal in adult ICU patients. A graphic approach provides the recommended dose for any selected probability of attaining the PK/PD efficacy target, or can be used to evaluate the cumulative fraction of response for any dosing regimen in this population.
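The target-attainment calculation can be sketched with a simple Monte Carlo over between-patient variability in clearance, using the steady-state relation AUC(0–24 h) = daily dose / CL and the AUC(0–24 h):MIC ≥ 400 target. The clearance distribution below is invented for illustration and is not the study's population estimate.

```python
import numpy as np

def pta(daily_dose_mg, mic, cl_l_per_h):
    """Probability of target attainment: fraction of patients with AUC/MIC >= 400.

    AUC(0-24 h) at steady state = daily dose / CL, in mg*h/L.
    """
    auc_24 = daily_dose_mg / cl_l_per_h
    return float(np.mean(auc_24 / mic >= 400.0))

rng = np.random.default_rng(42)
# Hypothetical lognormal between-patient clearance for an ICU population (L/h):
# median 4 L/h, ~35% variability -- invented values for the sketch.
cl = rng.lognormal(mean=np.log(4.0), sigma=0.35, size=100_000)

p_low = pta(2000, 0.5, cl)    # highly susceptible strain
p_mid = pta(2000, 1.0, cl)    # MIC at a common breakpoint
p_high = pta(2000, 2.0, cl)   # less susceptible strain
```

Attainment falls steeply as MIC rises, which is the mechanism behind the abstract's conclusion that a standard 2 g day⁻¹ regimen can be suboptimal; weighting these probabilities by a MIC distribution (e.g. from EUCAST) gives the cumulative fraction of response.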
[source]

Comparison of models for genetic evaluation of survival traits in dairy cattle: a simulation study
JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 2 2008
J. Jamrozik

Summary Three models for the analysis of functional survival data in dairy cattle were compared using stochastic simulation. The simulated phenotype for survival was defined as the month after first calving (from 1 to 100) in which a cow was involuntarily removed from the herd. Parameters for simulation were based on survival data of the Canadian Jersey population. Three different levels of heritability of survival (0.100, 0.050 and 0.025) and two levels of numbers of females per generation (2000 or 4000) were considered in the simulation. Twenty generations of random mating and selection (on a second trait, uncorrelated with survival), with 20 replicates, were simulated for each scenario. Sires were evaluated for survival of their daughters by three models: proportional hazard (PH), linear multiple-trait (MT), and random regression (RR) animal models. Different models gave different rankings of sires with respect to survival of their daughters. Correlations between true and estimated breeding values for survival to five different points in a cow's lifetime after the first calving (120 and 240 days in milk after first, second, third and fourth calving) favoured the PH model, followed by the RR model evaluations. Rankings of models were independent of the heritability level, female population size and sire progeny group size (20 or 100). The RR model, however, showed a slight superiority over the MT and PH models in predicting the proportion of a sire's daughters that survived to the five different end-points after the first calving. [source]