Model Validation

Selected Abstracts


Application of Synchrotron Radiation Techniques for Model Validation of Advanced Structural Materials,

ADVANCED ENGINEERING MATERIALS, Issue 6 2009
Annick Froideval
Abstract Synchrotron radiation techniques represent powerful tools to characterize materials down to the nanometer level. This paper presents a survey of state-of-the-art synchrotron-based techniques which are particularly well suited for investigating materials properties. Complementary X-ray absorption techniques such as extended X-ray absorption fine structure (EXAFS), X-ray magnetic circular dichroism (XMCD) and photoemission electron microscopy (PEEM) are used to address the individual local atomic structure and magnetic moments in Fe–Cr model systems. The formation of atomic clusters/precipitates in such systems is also investigated by means of scanning transmission X-ray microscopy (STXM). Such advanced analytical techniques not only offer valuable structural and magnetic information on such systems, but can also serve to validate computational calculations performed at different time and length scales, which can help improve materials lifetime predictions. [source]


A dynamic simulation model for powdery mildew epidemics on winter wheat,

EPPO BULLETIN, Issue 3 2003
V. Rossi
A system dynamic model for epidemics of Blumeria graminis (powdery mildew) on wheat was elaborated, based on the interaction between stages of the disease cycle, weather conditions and host characteristics. The model simulates the progress of disease severity, expressed as a percentage of powdered leaf area, on individual leaves, with a time step of one day, as a result of two processes: the growth of fungal colonies already present on the leaves and the appearance of new colonies. By means of mathematical equations, air temperature, vapour pressure deficit, rainfall and wind are used to calculate incubation, latency and sporulation periods, the growth of pathogen colonies, infection and spore survival. Effects of host susceptibility to infection, and of leaf position within the plant canopy, are also included. Model validation was carried out by comparing model outputs with the dynamics of epidemics observed on winter wheat grown at several locations in northern Italy (1991–98). Simulations were performed using meteorological data measured in standard meteorological stations. As there was good agreement between model outputs and actual disease severity, the model can be considered a satisfactory simulator of the effect of environmental conditions on the progress of powdery mildew epidemics. [source]


Validation of Numerical Ground Water Models Used to Guide Decision Making

GROUND WATER, Issue 2 2004
Ahmed E. Hassan
Many sites of ground water contamination rely heavily on complex numerical models of flow and transport to develop closure plans. This complexity has created a need for tools and approaches that can build confidence in model predictions and provide evidence that these predictions are sufficient for decision making. Confidence building is a long-term, iterative process and the author believes that this process should be termed model validation. Model validation is a process, not an end result. That is, the process of model validation cannot ensure acceptable prediction or quality of the model. Rather, it provides an important safeguard against faulty models or inadequately developed and tested models. If model results become the basis for decision making, then the validation process provides evidence that the model is valid for making decisions (not necessarily a true representation of reality). Validation, verification, and confirmation, as applied to ground water numerical models, do not represent established and generally accepted practices; there is not even widespread agreement on the meaning of the terms as applied to models. This paper presents a review of model validation studies that pertain to ground water flow and transport modeling. Definitions, literature debates, previously proposed validation strategies, and conferences and symposia that focused on subsurface model validation are reviewed and discussed. The review is general and focuses on site-specific, predictive ground water models used for making decisions regarding remediation activities and site closure. The aim is to provide a reasonable starting point for hydrogeologists facing model validation for ground water systems, thus saving a significant amount of time, effort, and cost. This review is also aimed at reviving the issue of model validation in the hydrogeologic community and stimulating the thinking of researchers and practitioners to develop practical and efficient tools for evaluating and refining ground water predictive models. [source]


Docking Studies of Structurally Diverse Antimalarial Drugs Targeting PfATP6: No Correlation between in silico Binding Affinity and in vitro Antimalarial Activity.

CHEMMEDCHEM, Issue 9 2009
Fatima Bousejra-El Garah
Abstract PfATP6, a calcium-dependent ATPase of Plasmodium falciparum, is considered the putative target of the antimalarial drug artemisinin and its derivatives. Herein, the 3D structure of PfATP6 was modeled on the basis of the crystal structure of SERCA1a, the mammalian homologue. Model validation was achieved using protein structure checking tools. AutoDock4 was used to predict the binding affinities of artemisinin (and analogues) and various other antimalarial agents for PfATP6, for which in vitro activity is also reported. No correlation was found between the affinity of the compounds for PfATP6 predicted by AutoDock4 and their antimalarial activity. [source]


Broad Beam Ion Sources for Electrostatic Space Propulsion and Surface Modification Processes: From Roots to Present Applications

CONTRIBUTIONS TO PLASMA PHYSICS, Issue 7 2007
H. Neumann
Abstract Ion thrusters or broad beam ion sources are widely used in electrostatic space propulsion and in high-end surface modification processes. A short historical review of the roots of electric space propulsion is given. We then introduce electrostatic ion thrusters and broad beam ion sources based on different plasma excitation principles and briefly describe their similarities and differences. Furthermore, an overview of source plasma and ion beam characterisation methods is presented. In addition, a beam profile modelling strategy using numerical trajectory codes as the basis for a special grid system design is outlined. This modelling forms the basis for adapting a grid system to the required technological demands. Examples of model validation demonstrate the reliability of this approach. One of the main challenges in improving ion beam technologies is the customisation of the ion beam properties, e.g. the ion current density profile, for specific demands. Methods of ex-situ and in-situ beam profile control are demonstrated. Examples of the use of ion beam technologies in space and on earth – the RIT-10 rescue mission of ESA's satellite Artemis, the RIT-22 for the BepiColombo mission and the deposition of multilayer stacks for EUVL (Extreme Ultra Violet Lithography) mask blank application – are provided in order to illustrate the potential of plasma-based ion beam sources. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Measurement and data analysis methods for field-scale wind erosion studies and model validation,

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 11 2003
Ted M. Zobeck
Abstract Accurate and reliable methods of measuring windblown sediment are needed to confirm, validate, and improve erosion models, assess the intensity of aeolian processes and related damage, determine the source of pollutants, and for other applications. This paper outlines important principles to consider in conducting field-scale wind erosion studies and proposes strategies of field data collection for use in model validation and development. Detailed discussions include consideration of field characteristics, sediment sampling, and meteorological stations. The field shape used in field-scale wind erosion research is generally a matter of preference and in many studies may not have practical significance. Maintaining a clear non-erodible boundary is necessary to accurately determine erosion fetch distance. A field length of about 300 m may be needed in many situations to approach transport capacity for saltation flux in bare agricultural fields. Field surface conditions affect the wind profile and other processes such as sediment emission, transport, and deposition and soil erodibility. Knowledge of the temporal variation in surface conditions is necessary to understand aeolian processes. Temporal soil properties that impact aeolian processes include surface roughness, dry aggregate size distribution, dry aggregate stability, and crust characteristics. Use of a portable 2 m tall anemometer tower should be considered to quantify variability of friction velocity and aerodynamic roughness caused by surface conditions in field-scale studies. The types of samplers used for sampling aeolian sediment will vary depending upon the type of sediment to be measured. The Big Spring Number Eight (BSNE) and Modified Wilson and Cooke (MWAC) samplers appear to be the most popular for field studies of saltation. Suspension flux may be measured with commercially available instruments after modifications are made to ensure isokinetic conditions at high wind speeds. Meteorological measurements should include wind speed and direction, air temperature, solar radiation, relative humidity, rain amount, soil temperature and moisture. Careful consideration of the climatic, sediment, and soil surface characteristics observed in future field-scale wind erosion studies will ensure maximum use of the data collected. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Advancing Loss Given Default Prediction Models: How the Quiet Have Quickened

ECONOMIC NOTES, Issue 2 2005
Greg M. Gupton
We describe LossCalc™ version 2.0: the Moody's KMV model to predict loss given default (LGD), the equivalent of (1 − recovery rate). LossCalc is a statistical model that applies multiple predictive factors at different information levels: collateral, instrument, firm, industry, country and the macroeconomy to predict LGD. We find that distance-to-default measures (from the Moody's KMV structural model of default likelihood) compiled at both the industry and firm levels are predictive of LGD. We find that recovery rates worldwide are predictable within a common statistical framework, which suggests that the estimation of economic firm value (which is then available to allocate to claimants according to each country's bankruptcy laws) is a dominant step in LGD determination. LossCalc is built on a global dataset of 3,026 recovery observations for loans, bonds and preferred stock from 1981 to 2004. This dataset includes 1,424 defaults of both public and private firms – both rated and unrated instruments – in all industries. We demonstrate out-of-sample and out-of-time LGD model validation. The model significantly improves on the use of historical recovery averages to predict LGD. [source]
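As a minimal illustration of the quantities involved, the sketch below computes LGD as 1 − recovery rate, fits a simple regression on assumed predictors, and checks it out-of-time against a historical-average benchmark. The predictor names, coefficients, synthetic data and split year are illustrative assumptions, not the LossCalc specification.

```python
# Illustrative sketch only: LGD = 1 - recovery rate, with an out-of-time validation split.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
year = rng.integers(1981, 2005, n)
distance_to_default = rng.normal(0.0, 1.0, n)          # assumed firm-level factor
seniority = rng.integers(0, 3, n)                      # assumed instrument-level factor
recovery = np.clip(0.45 + 0.10 * distance_to_default - 0.08 * seniority
                   + rng.normal(0, 0.15, n), 0, 1)
lgd = 1.0 - recovery                                   # loss given default

train, test = year < 2000, year >= 2000                # out-of-time split (assumed cut-off)
X = np.column_stack([distance_to_default, seniority])
model = LinearRegression().fit(X[train], lgd[train])

pred = model.predict(X[test])
benchmark = np.full(test.sum(), lgd[train].mean())     # historical-average benchmark
mse_model = np.mean((lgd[test] - pred) ** 2)
mse_hist = np.mean((lgd[test] - benchmark) ** 2)
print(f"out-of-time MSE: model {mse_model:.4f} vs historical average {mse_hist:.4f}")
```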


Catabolite repression in Escherichia coli – a comparison of modelling approaches

FEBS JOURNAL, Issue 2 2009
Andreas Kremling
The phosphotransferase system in Escherichia coli is a transport and sensory system and, in this function, is one of the key players of catabolite repression. Mathematical modelling of signal transduction and gene expression of the enzymes involved in the transport of carbohydrates is a promising approach in biotechnology, as it offers the possibility to achieve higher production rates of desired components. In this article, the relevance of methods and approaches concerning mathematical modelling in systems biology is discussed by assessing and comparing two comprehensive mathematical models that describe catabolite repression. The focus is thereby on modular modelling with the relevant input in the central modules, the impact of quantitative model validation, the identification of control structures and the comparison of model predictions with respect to the available experimental data. [source]


Parameter estimation in semi-distributed hydrological catchment modelling using a multi-criteria objective function

HYDROLOGICAL PROCESSES, Issue 22 2007
Hamed Rouhani
Abstract Output generated by hydrologic simulation models is traditionally calibrated and validated using split-samples of observed time series of total water flow, measured at the drainage outlet of the river basin. Although this approach might yield an optimal set of model parameters, capable of reproducing the total flow, it has been observed that the flow components making up the total flow are often poorly reproduced. Previous research suggests that, although the underlying physical processes are often poorly mimicked through calibration of a set of parameters, hydrologic models most of the time acceptably estimate the total flow. The objective of this study was to calibrate and validate a computer-based hydrologic model with respect to the total and slow flow. The quick flow component used in this study was taken as the difference between the total and slow flow. Model calibrations were pursued on the basis of comparing the simulated output with the observed total and slow flow using qualitative (graphical) assessments and quantitative (statistical) indicators. The study was conducted using the Soil and Water Assessment Tool (SWAT) model and a 10-year historical record (1986–1995) of the daily flow components of the Grote Nete River basin (Belgium). The data of the period 1986–1989 were used for model calibration and data of the period 1990–1995 for model validation. The predicted daily average total flow matched the observed values with a Nash–Sutcliffe coefficient of 0·67 during calibration and 0·66 during validation. The Nash–Sutcliffe coefficient for slow flow was 0·72 during calibration and 0·61 during validation. Analysis of high and low flows indicated that the model is unbiased. A sensitivity analysis revealed that for the modelling of the daily total flow, accurate estimation of all 10 calibration parameters in the SWAT model is justified, while for the slow flow processes only 4 out of the set of 10 parameters were identified as most sensitive. Copyright © 2007 John Wiley & Sons, Ltd. [source]
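For readers unfamiliar with the Nash–Sutcliffe coefficient quoted above, a minimal sketch of its computation is given below; the short flow series are made-up placeholders, not SWAT output for the Grote Nete basin.

```python
# Nash-Sutcliffe efficiency: NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
# A value of 1 is a perfect fit; 0 means the model is no better than the observed mean.
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = np.array([1.2, 3.4, 2.8, 5.1, 4.0, 2.2])   # observed daily flow (placeholder values)
sim = np.array([1.0, 3.0, 3.1, 4.6, 4.4, 2.5])   # simulated daily flow (placeholder values)
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```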


A field-scale infiltration model accounting for spatial heterogeneity of rainfall and soil saturated hydraulic conductivity

HYDROLOGICAL PROCESSES, Issue 7 2006
Renato Morbidelli
Abstract This study first explores the role of spatial heterogeneity, in both the saturated hydraulic conductivity Ks and rainfall intensity r, on the integrated hydrological response of a natural slope. On this basis, a mathematical model for estimating the expected areal-average infiltration is then formulated. Both Ks and r are considered as random variables with assessed probability density functions. The model relies upon a semi-analytical component, which describes the directly infiltrated rainfall, and an empirical component, which accounts further for the infiltration of surface water running downslope into pervious soils (the run-on effect). Monte Carlo simulations over a clay loam soil and a sandy loam soil were performed for constructing the ensemble averages of field-scale infiltration used for model validation. The model produced very accurate estimates of the expected field-scale infiltration rate, as well as of the outflow generated by significant rainfall events. Furthermore, the two model components were found to interact appropriately for different weights of the two infiltration mechanisms involved. Copyright © 2005 John Wiley & Sons, Ltd. [source]
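A toy version of the Monte Carlo construction of an ensemble (areal) average is sketched below; a simple Philip-type infiltration capacity stands in for the paper's semi-analytical component, the run-on effect is ignored, and the lognormal Ks parameters and rainfall intensity are assumptions.

```python
# Sketch: expected field-scale infiltration rate as the ensemble average over cells whose
# saturated conductivity Ks is a lognormal random variable (parameters assumed).
import numpy as np

rng = np.random.default_rng(1)
n_cells = 10_000
ks = rng.lognormal(mean=np.log(5.0), sigma=0.8, size=n_cells)       # mm/h, assumed variability
rain = 12.0                                                          # mm/h, uniform rainfall
sorptivity = 20.0                                                    # mm/h^0.5, assumed

t = np.linspace(0.1, 4.0, 40)                                        # hours
capacity = ks[:, None] + sorptivity / (2.0 * np.sqrt(t)[None, :])    # Philip-type capacity
rate = np.minimum(rain, capacity)                                    # rainfall- or capacity-limited
expected_rate = rate.mean(axis=0)                                    # areal (ensemble) average

print(np.round(expected_rate[:5], 2))   # expected infiltration rate for the first time steps
```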


Predicting river water temperatures using the equilibrium temperature concept with application on Miramichi River catchments (New Brunswick, Canada)

HYDROLOGICAL PROCESSES, Issue 11 2005
Daniel Caissie
Abstract Water temperature influences most of the physical, chemical and biological properties of rivers. It plays an important role in the distribution of fish and the growth rates of many aquatic organisms. Therefore, a better understanding of the thermal regime of rivers is essential for the management of important fisheries resources. This study deals with the modelling of river water temperature using a new and simplified model based on the equilibrium temperature concept. The equilibrium temperature concept is an approach where the net heat flux at the water surface can be expressed by a simple equation with fewer meteorological parameters than required with traditional models. This new water temperature model was applied on two watercourses of different size and thermal characteristics, but within a similar meteorological region, i.e., the Little Southwest Miramichi River and Catamaran Brook (New Brunswick, Canada). A study of the long-term thermal characteristics of these two rivers revealed that the greatest differences in water temperatures occurred during mid-summer peak temperatures. Data from 1992 to 1994 were used for the model calibration, while data from 1995 to 1999 were used for the model validation. Results showed a slightly better agreement between observed and predicted water temperatures for Catamaran Brook during the calibration period, with a root-mean-square error (RMSE) of 1·10 °C (Nash coefficient, NTD = 0·95) compared to 1·45 °C for the Little Southwest Miramichi River (NTD = 0·94). During the validation period, RMSEs were calculated at 1·31 °C for Catamaran Brook and 1·55 °C for the Little Southwest Miramichi River. Poorer model performances were generally observed early in the season (e.g., spring) for both rivers due to the influence of snowmelt conditions, while late summer to autumn modelling performances showed better results. Copyright © 2005 John Wiley & Sons, Ltd. [source]
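The core idea of the equilibrium temperature concept can be illustrated in a few lines: the net surface heat flux is approximated as proportional to the difference between an equilibrium temperature and the water temperature, so the water temperature relaxes toward that value each day. The exchange coefficient, the use of air temperature as a stand-in for the equilibrium temperature, and the observation series below are illustrative assumptions, not values from the Miramichi study.

```python
# Sketch of an equilibrium-temperature-style water temperature model plus an RMSE check.
import numpy as np

equil_t = np.array([12.0, 14.0, 17.0, 19.0, 18.0, 15.0, 13.0])  # proxy equilibrium temp. (degC)
K = 0.35                                                          # 1/day exchange coefficient (assumed)

tw = np.empty_like(equil_t)
tw[0] = 11.0                                                      # initial water temperature
for i in range(1, len(equil_t)):
    tw[i] = tw[i - 1] + K * (equil_t[i] - tw[i - 1])              # explicit daily time step

observed = np.array([11.0, 11.9, 13.5, 15.2, 16.0, 15.4, 14.3])  # synthetic observations
rmse = np.sqrt(np.mean((observed - tw) ** 2))
print(f"RMSE = {rmse:.2f} degC")
```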


Factors governing the formation and persistence of layers in a subalpine snowpack

HYDROLOGICAL PROCESSES, Issue 7 2004
David Gustafsson
Abstract The layered structure of a snowpack has a great effect on several important physical processes, such as water movement, reflection of solar radiation or avalanche release. Our aim was to investigate what factors are most important with respect to the formation and persistence of distinct layers in a subalpine environment. We used a physically based numerical one-dimensional model to simulate the development of a snowpack on a subalpine meadow in central Switzerland during one winter season (1998–99). A thorough model validation was based on extensive measurement data including meteorological and snow physical parameters. The model simulated the snow water equivalent and the depth of the snowpack as well as the energy balance accurately. The observed strong layering of the snowpack, however, was not reproduced satisfactorily. In a sensitivity analysis, we tested different model options and parameter settings significant for the formation of snow layers. The neglect of the effects of snow microstructure on the compaction rate, and the current description of water redistribution inside the snowpack, which disregards capillary barrier effects, preferential flow and lateral water flow, were the major limitations to a more realistic simulation of the snowpack layering. Copyright © 2004 John Wiley & Sons, Ltd. [source]


Surrogate model-based strategy for cryogenic cavitation model validation and sensitivity evaluation

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 9 2008
Tushar Goel
Abstract The study of cavitation dynamics in a cryogenic environment has critical implications for the performance and safety of liquid rocket engines, but there is no established method to estimate cavitation-induced loads. To help develop such a computational capability, we employ a multiple-surrogate model-based approach to aid in the model validation and calibration process of a transport-based, homogeneous cryogenic cavitation model. We assess the role of empirical parameters in the cavitation model and uncertainties in material properties via global sensitivity analysis coupled with multiple surrogates including polynomial response surface, radial basis neural network, kriging, and a predicted residual sum of squares-based weighted average surrogate model. The global sensitivity analysis results indicate that the performance of the cavitation model is more sensitive to the changes in model parameters than to uncertainties in material properties. Although the impact of uncertainty in temperature-dependent vapor pressure on the predictions seems significant, uncertainty in latent heat influences only the temperature field. The influence of wall heat transfer on pressure load is insignificant. We find that slower onset of vapor condensation leads to deviation of the predictions from the experiments. The recalibrated model parameters rectify the importance of evaporation source terms, resulting in significant improvements in pressure predictions. The model parameters need to be adjusted for different fluids, but for a given fluid, they help capture the essential fluid physics with different geometry and operating conditions. Copyright © 2008 John Wiley & Sons, Ltd. [source]
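The multiple-surrogate idea can be sketched compactly: fit several cheap approximations to the same design data, estimate each one's leave-one-out PRESS, and blend their predictions with PRESS-based weights. The test function, design, surrogate choices and weighting rule below are illustrative assumptions rather than the paper's cavitation-model setup.

```python
# Sketch of a PRESS-weighted average of two surrogates (quadratic response surface + RBF).
import numpy as np
from scipy.interpolate import RBFInterpolator

def true_response(x):                      # stand-in for an expensive simulation response
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(25, 2))
y = true_response(X)

def quad_features(Z):
    z1, z2 = Z[:, 0], Z[:, 1]
    return np.column_stack([np.ones(len(Z)), z1, z2, z1 * z2, z1 ** 2, z2 ** 2])

def poly_fit_predict(Xtr, ytr, Xte):
    beta, *_ = np.linalg.lstsq(quad_features(Xtr), ytr, rcond=None)
    return quad_features(Xte) @ beta

def rbf_fit_predict(Xtr, ytr, Xte):
    return RBFInterpolator(Xtr, ytr, kernel="thin_plate_spline")(Xte)

def press(fit_predict):
    """Leave-one-out prediction error sum of squares for a generic fit/predict callable."""
    errs = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        pred = fit_predict(X[mask], y[mask], X[i:i + 1])
        errs.append((y[i] - pred[0]) ** 2)
    return float(np.sum(errs))

p_poly, p_rbf = press(poly_fit_predict), press(rbf_fit_predict)
w_poly, w_rbf = 1 / p_poly, 1 / p_rbf
w_poly, w_rbf = w_poly / (w_poly + w_rbf), w_rbf / (w_poly + w_rbf)   # normalise weights

X_new = rng.uniform(-1, 1, size=(5, 2))
blend = w_poly * poly_fit_predict(X, y, X_new) + w_rbf * rbf_fit_predict(X, y, X_new)
print("PRESS weights:", round(w_poly, 2), round(w_rbf, 2))
print("weighted-average surrogate predictions:", np.round(blend, 3))
```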


Modeling and experimental studies on combustion characteristics of porous coal char: Volume reaction model

INTERNATIONAL JOURNAL OF CHEMICAL KINETICS, Issue 5 2010
Anup Kumar Sadhukhan
A generalized single-particle model for the prediction of combustion dynamics of a porous coal char in a fluidized bed is analyzed in the present work using a volume reaction model (VRM). A fully transient nonisothermal model involving both heterogeneous and homogeneous chemical reactions, multicomponent mass transfer, heat transfer with intraparticle resistances, as well as char structure evolution is developed. The model takes into account convection and diffusion inside the particle pores, as well as in the boundary layer. By addressing the Stefan flow arising from nonequimolar mass transfer and chemical reactions, this work enables a more realistic analysis of the combustion process. The model, characterized by a set of partial differential equations coupled with nonlinear boundary conditions, is solved numerically using the implicit finite volume method (FVM) with a FORTRAN code developed in-house. The use of a FVM for solving such an elaborate char combustion model, based on the VRM, was not reported earlier. Experiments consisting of fluidized-bed combustion of a single char particle were carried out to determine the internal surface area of a partially burned char particle and to enable model validation. Predicted results are found to compare well with the reported experimental results for porous coal char combustion. The effects of various parameters (i.e., bulk temperature and initial particle radius) on the dynamics of coal char combustion are examined. The phenomena of ignition and extinction are also investigated. © 2010 Wiley Periodicals, Inc. Int J Chem Kinet 42: 299–315, 2010 [source]


Predicting spatio-temporal recolonization of large carnivore populations and livestock depredation risk: wolves in the Italian Alps

JOURNAL OF APPLIED ECOLOGY, Issue 4 2010
F. Marucco
Summary 1. Wolves Canis lupus recently recolonized the Western Alps through dispersal from the Italian Apennines, representing one of several worldwide examples of large carnivores increasing in highly human-dominated landscapes. Understanding and predicting expansion of this population is important for conservation because of its direct impact on livestock and its high level of societal opposition. 2. We built a predictive, spatially explicit, individual-based model to examine wolf population expansion in this fragmented landscape, and livestock depredation risk. We developed the model based on known demographic processes, social structure, behaviour and habitat selection of wolves collected during a 10-year intensive field study of this wolf population. 3. During model validation, our model accurately described the recolonization process within the Italian Alps, correctly predicting wolf pack locations, pack numbers and wolf population size between 1999 and 2008. 4. We then projected packs and dispersers over the entire Italian Alps for 2013, 2018 and 2023. We predicted 25 packs (95% CI: 19–32) in 2013, 36 (23–47) in 2018 and 49 (29–68) in 2023. The South-Western Alps were the main source for wolves repopulating the Alps from 1999 to 2008. The source area for further successful dispersers will probably shift to the North-Western Alps after 2008, but the large lakes in the Central Alps will probably act as a spatial barrier slowing the wolf expansion. 5. Using the pack presence forecasts, we estimated spatially explicit wolf depredation risk on livestock, allowing tailored local and regional management actions. 6. Synthesis and applications. Our predictive model is novel because we follow the spatio-temporal dynamics of packs, not just population size, which have substantially different requirements and impacts on wolf–human conflicts than wandering dispersers. Our approach enables prioritization of management efforts, including minimizing livestock depredations, identifying important corridors and barriers, and locating future source populations for successful wolf recolonization of the Alps. [source]


Forecasting migration of cereal aphids (Hemiptera: Aphididae) in autumn and spring

JOURNAL OF APPLIED ENTOMOLOGY, Issue 5 2009
A. M. Klueken
Abstract The migration of cereal aphids and the time of their arrival on winter cereal crops in autumn and spring are of particular importance for plant disease (e.g. barley yellow dwarf virus infection) and related yield losses. In order to identify days with migration potential in autumn and spring, suction trap data from 29 and 45 case studies (locations and years), respectively, were set off against meteorological parameters, focusing on the early immigration periods in autumn (22 September to 1 November) and spring (1 May to 9 June). The number of cereal aphids caught in a suction trap increased with increasing temperature, global radiation and duration of sunshine and decreased with increasing precipitation, relative humidity and wind speed. According to linear regression analyses, the temperature, global radiation and wind speed were most frequently and significantly associated with migration, suggesting that they have a major impact on flight activity. For subsequent model development, suction trap catches from different case studies were pooled and binarily classified as days with or without migration, as defined by a certain number of migrating cereal aphids. Linear discriminant analyses of several predictor variables (assessed during light hours of a given day) were then performed based on the binary response variables. Three models were used to predict days with suction trap catches of ≥1, ≥4 or ≥10 migrating cereal aphids in autumn. Due to the predominance of Rhopalosiphum padi individuals (99.3% of total cereal aphid catch), no distinction between species (R. padi and Sitobion avenae) was made in autumn. As the suction trap catches were lower and species dominance changed in spring, three further models were developed for analysis of all cereal aphid species, R. padi only, and Metopolophium dirhodum and S. avenae combined in spring. The empirical, cross-classification and receiver operating characteristic analyses performed for model validation showed different levels of prediction accuracy. Additional datasets selected at random before model construction and parameterization showed that predictions by the six migration models were 33–81% correct. The models are useful for determining when to start field evaluations. Furthermore, they provide information on the size of the migrating aphid population and, thus, on the importance of immigration for early aphid population development in cereal crops in a given season. [source]
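A minimal sketch of the classification step described above, a linear discriminant analysis separating days with and without migration from daily weather predictors, is given below; the synthetic weather data, labelling rule and threshold are assumptions, not the suction-trap records.

```python
# Sketch: LDA classifier for "migration day" vs "no migration day" from weather predictors.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 300
temperature = rng.normal(14, 4, n)        # daily mean air temperature (degC)
radiation = rng.normal(12, 5, n)          # global radiation (MJ m-2)
wind = rng.gamma(2.0, 1.5, n)             # wind speed (m s-1)

# Assumed rule for the synthetic labels: warm, bright, calm days favour flight.
score = 0.4 * temperature + 0.2 * radiation - 0.8 * wind + rng.normal(0, 2, n)
migration_day = (score > np.median(score)).astype(int)   # binary response (catch >= threshold)

X = np.column_stack([temperature, radiation, wind])
lda = LinearDiscriminantAnalysis().fit(X, migration_day)
prob = lda.predict_proba(X)[:, 1]
print(f"apparent ROC AUC = {roc_auc_score(migration_day, prob):.2f}")
print("discriminant coefficients:", np.round(lda.coef_.ravel(), 2))
```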


Deterministic fallacies and model validation

JOURNAL OF CHEMOMETRICS, Issue 3-4 2010
Douglas M. Hawkins
Abstract Stochastic settings differ from deterministic ones in many subtle ways, making it easy to slip into errors through applying deterministic thinking inappropriately. We suspect this is the cause of much of the disagreement about model validation. A further technical issue is a common misapplication of cross-validation, in which it is applied only partially, leading to incorrect results. Statistical theory and empirical investigation verify the efficacy of cross-validation when it is applied correctly. In settings where data are relatively scarce, cross-validation is attractive in that it makes the maximum possible use of all available information, at the cost of potentially substantial computation. The bootstrap is another method that makes full use of all available data for both model fitting and model validation, at a cost of substantially increased computation, and it shares much of the broad philosophical background of cross-validation. Increasingly, the computational cost of these methods is not a major concern, leading to the recommendation, in most circumstances, to use cross-validation or bootstrapping rather than the earlier standard method of splitting the available data into a learning and a testing portion. Copyright © 2010 John Wiley & Sons, Ltd. [source]
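The "partial" misapplication of cross-validation typically means performing a supervised step, such as feature selection, on all the data before splitting. The sketch below contrasts that with a full cross-validation in which the selection step is refitted inside every fold, using pure-noise data where the honest predictive R² is essentially zero; the dataset sizes and selection rule are illustrative assumptions.

```python
# Sketch: partial (selection before CV) vs full (selection inside CV) cross-validation.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 500))      # many candidate predictors, few samples
y = rng.normal(size=40)             # response unrelated to X: no real signal to find

# Partial CV: the feature selector sees all of y before the folds are made (information leak).
X_sel = SelectKBest(f_regression, k=10).fit_transform(X, y)
partial = cross_val_score(LinearRegression(), X_sel, y, cv=5, scoring="r2").mean()

# Full CV: the selector is refitted inside every training fold via a pipeline.
pipe = make_pipeline(SelectKBest(f_regression, k=10), LinearRegression())
full = cross_val_score(pipe, X, y, cv=5, scoring="r2").mean()

print(f"R^2, partial (selection before CV): {partial:.2f}")
print(f"R^2, full (selection inside CV):    {full:.2f}")
```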


Robust methods for partial least squares regression

JOURNAL OF CHEMOMETRICS, Issue 10 2003
M. Hubert
Abstract Partial least squares regression (PLSR) is a linear regression technique developed to deal with high-dimensional regressors and one or several response variables. In this paper we introduce robustified versions of the SIMPLS algorithm, this being the leading PLSR algorithm because of its speed and efficiency. Because SIMPLS is based on the empirical cross-covariance matrix between the response variables and the regressors and on linear least squares regression, the results are affected by abnormal observations in the data set. Two robust methods, RSIMCD and RSIMPLS, are constructed from a robust covariance matrix for high-dimensional data and robust linear regression. We introduce robust RMSECV and RMSEP values for model calibration and model validation. Diagnostic plots are constructed to visualize and classify the outliers. Several simulation results and the analysis of real data sets show the effectiveness and robustness of the new approaches. Because RSIMPLS is roughly twice as fast as RSIMCD, it stands out as the overall best method. Copyright © 2003 John Wiley & Sons, Ltd. [source]
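The RMSECV/RMSEP distinction used for model calibration and validation can be illustrated with an ordinary PLS regression; the robust SIMPLS variants themselves are not re-implemented here, and the synthetic spectra and five-fold scheme below are assumptions.

```python
# Sketch: RMSECV (cross-validated calibration error) and RMSEP (held-out prediction error).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 200))                     # high-dimensional "spectra" (synthetic)
beta = np.zeros(200)
beta[:10] = 1.0
y = X @ beta + rng.normal(0, 0.5, 120)

X_cal, X_test, y_cal, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

def rmsecv(n_components):
    errs = []
    for tr, va in KFold(n_splits=5, shuffle=True, random_state=0).split(X_cal):
        pls = PLSRegression(n_components=n_components).fit(X_cal[tr], y_cal[tr])
        errs.append((y_cal[va] - pls.predict(X_cal[va]).ravel()) ** 2)
    return float(np.sqrt(np.mean(np.concatenate(errs))))

best = min(range(1, 11), key=rmsecv)                # pick the rank that minimises RMSECV
pls = PLSRegression(n_components=best).fit(X_cal, y_cal)
rmsep = np.sqrt(np.mean((y_test - pls.predict(X_test).ravel()) ** 2))
print(f"best rank = {best}, RMSECV = {rmsecv(best):.3f}, RMSEP = {rmsep:.3f}")
```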


An improved independent component regression modeling and quantitative calibration procedure

AICHE JOURNAL, Issue 6 2010
Chunhui Zhao
Abstract An improved independent component regression (M-ICR) algorithm is proposed by constructing joint latent variable (LV) based regressors, and a quantitative statistical analysis procedure is designed using a bootstrap technique for model validation and performance evaluation. First, the drawbacks of the conventional regression modeling algorithms are analyzed. Then the proposed M-ICR algorithm is formulated for regressor design. It constructs a dual-objective optimization criterion function, simultaneously incorporating quality-relevance and independence into the feature extraction procedure. This ties together the ideas of partial least squares (PLS) and independent component regression (ICR) under the same mathematical umbrella. By adjusting the controllable suboptimization objective weights, it adds insight into the different roles of quality-relevant and independent characteristics in calibration modeling, and, thus, provides possibilities to combine the advantages of PLS and ICR. Furthermore, a quantitative statistical analysis procedure based on a bootstrapping technique is designed to identify the effects of LVs, determine a better model rank and overcome ill-conditioning caused by model over-parameterization. A confidence interval on quality prediction is also approximated. The performance of the proposed method is demonstrated using both numerical and real world data. © 2009 American Institute of Chemical Engineers AIChE J, 2010 [source]


Comprehensive process design study for layered-NOX-control in a tangentially coal-fired boiler

AICHE JOURNAL, Issue 3 2010
Wei Zhou
Abstract As emissions regulations for coal-fired power plants become stricter worldwide, layering combustion modification and post-combustion NOX control technologies can be an attractive option for efficient and cost-effective NOX control in comparison to selective catalytic reduction (SCR) technology. The layered control technology approach designed in this article consists of separate overfire air (SOFA), reburn, and selective noncatalytic reduction (SNCR). The combined system can achieve up to 75% NOX reduction. The work presented in this article successfully applied this technology to NRG Somerset Unit 6, a 120-MW tangential coal-fired utility boiler, to reduce NOX emissions to 0.11 lb/MMBtu (130 mg/Nm3), well under the US EPA SIP Call target of 0.15 lb/MMBtu. The article reviews an integrated design study for the layered system at Somerset and evaluates the performance of different layered-NOX -control scenarios including standalone SNCR (baseline), separated overfire air (SOFA) with SNCR, and gas reburn with SNCR. Isothermal physical flow modeling and computational fluid dynamics simulation (CFD) were applied to understand the boiler flow patterns, the combustible distributions and the impact of combustion modifications on boiler operation and SNCR performance. The modeling results were compared with field data for model validation and verification. The study demonstrates that a comprehensive process design using advanced engineering tools is beneficial to the success of a layered low NOX system. © 2009 American Institute of Chemical Engineers AIChE J, 2010 [source]


HYDROLOGIC SIMULATION OF THE LITTLE WASHITA RIVER EXPERIMENTAL WATERSHED USING SWAT,

JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 2 2003
Michael W. Van Liew
ABSTRACT: Precipitation and streamflow data from three nested subwatersheds within the Little Washita River Experimental Watershed (LWREW) in southwestern Oklahoma were used to evaluate the capabilities of the Soil and Water Assessment Tool (SWAT) to predict streamflow under varying climatic conditions. Eight years of precipitation and streamflow data were used to calibrate parameters in the model, and 15 years of data were used for model validation. SWAT was calibrated on the smallest and largest sub-watersheds for a wetter than average period of record. The model was then validated on a third subwatershed for a range in climatic conditions that included dry, average, and wet periods. Calibration of the model involved a multistep approach. A preliminary calibration was conducted to estimate model parameters so that measured versus simulated yearly and monthly runoff were in agreement for the respective calibration periods. Model parameters were then fine tuned based on a visual inspection of daily hydrographs and flow frequency curves. Calibration on a daily basis resulted in higher baseflows and lower peak runoff rates than were obtained in the preliminary calibration. Test results show that once the model was calibrated for wet climatic conditions, it did a good job in predicting streamflow responses over wet, average, and dry climatic conditions selected for model validation. Monthly coefficients of efficiencies were 0.65, 0.86, and 0.45 for the dry, average, and wet validation periods, respectively. Results of this investigation indicate that once calibrated, SWAT is capable of providing adequate simulations for hydrologic investigations related to the impact of climate variations on water resources of the LWREW. [source]


Residual analysis for spatial point processes (with discussion)

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 5 2005
A. Baddeley
Summary. We define residuals for point process models fitted to spatial point pattern data, and we propose diagnostic plots based on them. The residuals apply to any point process model that has a conditional intensity; the model may exhibit spatial heterogeneity, interpoint interaction and dependence on spatial covariates. Some existing ad hoc methods for model checking (quadrat counts, scan statistic, kernel smoothed intensity and Berman's diagnostic) are recovered as special cases. Diagnostic tools are developed systematically, by using an analogy between our spatial residuals and the usual residuals for (non-spatial) generalized linear models. The conditional intensity λ plays the role of the mean response. This makes it possible to adapt existing knowledge about model validation for generalized linear models to the spatial point process context, giving recommendations for diagnostic plots. A plot of smoothed residuals against spatial location, or against a spatial covariate, is effective in diagnosing spatial trend or covariate effects. Q–Q plots of the residuals are effective in diagnosing interpoint interaction. [source]


A novel prognostic model based on serum levels of total bilirubin and creatinine early after liver transplantation

LIVER INTERNATIONAL, Issue 6 2007
Xiao Xu
Abstract Background/aim: We aim to evaluate the impact of early renal dysfunction (ERD) and early allograft dysfunction (EAD) on post-transplant mortality, and further explore a simple and accurate model to predict prognosis. Patients: A total of 161 adult patients who underwent liver transplantation for benign end-stage liver diseases were enrolled in the retrospective study. Another 38 patients were used for model validation. Results: Poor patient survival was associated with ERD or EAD. A post-transplant model for predicting mortality (PMPM) based on serum levels of total bilirubin and creatinine at 24-h post-transplantation was then established according to multivariate logistic regression. At 3 months, 6 months and 1 year, the areas under the receiver operating characteristic curves (AUC) of the PMPM score at 24-h post-transplantation (0.876, 0.878 and 0.849, respectively) were significantly higher than those of the pre-transplant model for end-stage liver diseases (MELD) score (0.673, 0.674 and 0.618, respectively) or the post-transplant MELD score at 24-h post-transplantation (0.787, 0.787 and 0.781, respectively) (P<0.05). Patients with a PMPM score ≤ −1.4 (low-risk group, n=114) achieved better survival than those with a PMPM score > −1.4 (high-risk group, n=47) (P<0.001). The patients in the high-risk group showed a relatively good outcome if their PMPM scores decreased to ≤ −1.4 at post-transplant day 7. The subsequent validation study showed that PMPM functioned with a predictive accuracy of 100%. Conclusion: The PMPM score could effectively predict short- and medium-term mortality in liver transplant recipients. [source]
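A schematic of how such a score can be built and assessed is shown below: a logistic regression of mortality on 24-h total bilirubin and creatinine, summarised by the area under the ROC curve. The synthetic cohort, units and coefficients are assumptions; the published PMPM formula and its −1.4 cut-off are not reproduced here.

```python
# Sketch: a PMPM-style post-transplant risk score from two laboratory values, plus ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n = 200
bilirubin = rng.lognormal(np.log(80), 0.6, n)        # total bilirubin at 24 h (assumed units)
creatinine = rng.lognormal(np.log(90), 0.5, n)       # creatinine at 24 h (assumed units)
risk = 0.01 * bilirubin + 0.015 * creatinine - 2.5   # assumed latent risk for the synthetic cohort
mortality = (rng.uniform(size=n) < 1 / (1 + np.exp(-risk))).astype(int)

X = np.column_stack([np.log(bilirubin), np.log(creatinine)])
model = LogisticRegression().fit(X, mortality)
score = model.decision_function(X)                   # the linear predictor plays the role of a score
print(f"apparent AUC = {roc_auc_score(mortality, score):.2f}")
print("coefficients (log-bilirubin, log-creatinine):", np.round(model.coef_.ravel(), 2))
```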


Predictive 3D-Quantitative Structure-Activity Relationship for A1 and A2A Adenosine Receptor Ligands

MOLECULAR INFORMATICS, Issue 11-12 2009
Olga Yuzlenko
Abstract The use of QSAR applications to develop adenosine receptor (AR) antagonists is not very common. A library of all xanthine derivatives obtained at the Department of Technology and Biotechnology of Drugs was created. Sixty-three active adenosine A1 receptor ligands and one hundred and thirty-nine active adenosine A2A receptor ligands were used for the 3D-QSAR investigation. 3D-QSAR equations with a high predictive power in estimating the binding affinity values of potential A1 and A2A AR ligands were derived. For the first time, hybrid shape-property descriptors were used in 3D-QSAR for xanthine AR ligands. The obtained models were characterized by high regression and cross-validation coefficients. Two types of model validation were tested: dividing the library into a training set for model development and an external set for model validation, and increasing the number of library components and checking the model by the cross-validated regression coefficient. The analysis of the results shows that, for A1 AR binding activity, it is important for ligands to possess R1-propyl substituents along with phenyl or benzyl substituents bearing a halogen atom and a phenethyl moiety. For A2A AR affinity it could be favorable to introduce a phenethyl or phenyl substituent connected with the tricyclic ring by an alkoxy chain. The nature of the R1 group may not significantly affect the A2A AR affinity. The high predictive power of the equations suggests their use for further development of adenosine receptor antagonists within xanthine derivatives. [source]


Measurement of pesticide residues in peppers by near-infrared reflectance spectroscopy

PEST MANAGEMENT SCIENCE (FORMERLY: PESTICIDE SCIENCE), Issue 6 2010
María-Teresa Sánchez
Abstract BACKGROUND: Peppers are a frequent object of food safety alerts in various member states of the European Union owing to the presence in some batches of unauthorised pesticide residues. This study assessed the viability of near-infrared reflectance spectroscopy (NIRS) for the measurement of pesticide residues in peppers. Commercially available spectrophotometers using different sample-presentation methods were evaluated for this purpose: a diode-array spectrometer for intact raw peppers and two scanning monochromators fitted with different sample-presentation accessories (transport and spinning modules) for crushed peppers and for the dry extract system for infrared analysis (DESIR), respectively. RESULTS: Models developed using partial least squares–discriminant analysis (PLS2-DA) correctly classified between 62 and 68% of samples by presence/absence of pesticides, depending on the instrument used. At model validation, the highest percentages of correctly classified samples – 75 and 82% for pesticide-free and pesticide-containing samples, respectively – were obtained for intact peppers using the diode-array spectrometer. CONCLUSION: The results obtained confirmed that NIRS technology may be used to provide swift, non-destructive preliminary screening for pesticide residues; suspect samples may then be analysed by other confirmatory analytical methods. Copyright © 2010 Society of Chemical Industry [source]


Flow front measurements and model validation in the vacuum assisted resin transfer molding process

POLYMER COMPOSITES, Issue 4 2001
R. Mathuw
Through-thickness measurements were recorded to experimentally investigate the through-thickness flow and to validate a closed-form solution of the resin flow during the vacuum assisted resin transfer molding (VARTM) process. During the VARTM process, a highly permeable distribution medium is incorporated into the preform as a surface layer and resin is infused into the mold under vacuum. During infusion, the resin flows preferentially across the surface and simultaneously through the thickness of the preform, giving rise to a three-dimensional flow front. The time to fill the mold and the shape of the flow front, which plays a key role in dry spot formation, are critical for the optimal manufacture of large composite parts. An analytical model predicts the flow times and flow front shapes as a function of the properties of the preform, distribution medium and resin. It was found that the flow front profile reaches a parabolic steady-state shape and that the length of the region saturated by resin is proportional to the square root of the time elapsed. Experimental measurements of the flow front were carried out using embedded sensors to detect the flow of resin through the thickness of the preform layer and the progression of flow along the length of the part. The time to fill the part and the length and shape of the flow front show good agreement between experiments and the analytical model. The experimental study demonstrates the need for control and optimization of resin injection during the manufacture of large parts by VARTM. [source]
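The reported square-root-of-time behaviour of the saturated length is easy to check against sensor data, as in the short sketch below; the "sensor" times, positions and the fitted constant are synthetic assumptions.

```python
# Sketch: fit the scaling L = c * sqrt(t) to flow-front arrival data and inspect the residuals.
import numpy as np

t = np.array([10, 40, 90, 160, 250], dtype=float)     # s, times at which sensors trip (synthetic)
length = np.array([0.11, 0.21, 0.30, 0.41, 0.50])     # m, flow-front position (synthetic)

c, *_ = np.linalg.lstsq(np.sqrt(t)[:, None], length, rcond=None)   # least-squares fit of c
residual = length - c[0] * np.sqrt(t)
print(f"fitted constant c = {c[0]:.3f} m/s^0.5, max residual = {np.abs(residual).max():.3f} m")
```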


Analysis of split-plot designs: an overview and comparison of methods

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 7 2007
T. Næs
Abstract Split-plot designs are frequently needed in practice because of practical limitations and issues related to cost. This imposes extra challenges on the experimenter, both when designing the experiment and when analysing the data, in particular for non-replicated cases. This paper is an overview and discussion of some of the most important methods for analysing split-plot data. The focus is on estimation, testing and model validation. Two examples from an industrial context are given to illustrate the most important techniques. Copyright © 2006 John Wiley & Sons, Ltd. [source]
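One standard way to respect the split-plot error structure in the analysis is a mixed model with a random effect for whole plots, so that the hard-to-change factor is tested against whole-plot variation rather than the smaller sub-plot error. The sketch below illustrates this with simulated data; the factor names and effect sizes are assumptions, not the paper's industrial examples.

```python
# Sketch: mixed-model analysis of a split-plot design with a random whole-plot effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
rows = []
for wp in range(8):                       # 8 whole plots
    A = wp % 2                            # hard-to-change (whole-plot) factor
    wp_err = rng.normal(0, 1.0)           # whole-plot error, shared by both sub-plots
    for B in (0, 1):                      # easy-to-change (sub-plot) factor
        y = 5 + 2 * A + 1 * B + 0.5 * A * B + wp_err + rng.normal(0, 0.5)
        rows.append({"y": y, "A": A, "B": B, "wholeplot": wp})
data = pd.DataFrame(rows)

# Random intercept per whole plot; fixed effects for A, B and their interaction.
model = smf.mixedlm("y ~ C(A) * C(B)", data, groups=data["wholeplot"]).fit()
print(model.summary())                    # fixed effects plus the whole-plot variance component
```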


Sea-land breeze development during a summer bora event along the north-eastern Adriatic coast

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 651 2010
Maja Telišman Prtenjak
Abstract The interaction of a summer frontal bora and the sea-land breeze along the north-eastern Adriatic coast was investigated by means of numerical simulations and available observations. Available measurements (in situ, radiosonde, satellite images) provided model validation. The modelled wind field revealed several regions where the summer bora (weaker than 6 m s−1) allowed sea-breeze development: in the western parts of the Istrian peninsula and Rijeka Bay and along the north-western coast of the island of Rab. Along the western Istrian coast, the position of the narrow convergence zone that formed depended greatly on the balance between the bora jets northward and southward of Istria. In the case of a strong northern (Trieste) bora jet, the westerly Istrian onshore flow represented the superposition of the dominant swirled bora flow and a local weak thermal flow. It then collided with the easterly bora flow within the zone. With weakening of the Trieste bora jet, the convergence zone was the result of the pure westerly sea breeze and the easterly bora wind. In general, during a bora event, sea breezes started somewhat later and were shorter, with limited horizontal extent. The spatial position of the convergence zone caused by the bora and sea-breeze collision was strongly curved. The orientation of the head (of the thermally induced flow) was more in the vertical, causing larger horizontal pressure gradients and stronger daytime maximum wind speed than in undisturbed conditions. Except for the island of Rab, the other lee-side islands in the area investigated did not provide favourable conditions for sea-breeze formation. Within a bora wake near the island of Krk, onshore flow occurred as well, although not as a sea-breeze flow but as the bottom branch of the lee rotor associated with the hydraulic jump-like feature in the lee of the Velika Kapela Mountain. Copyright © 2010 Royal Meteorological Society [source]


Features and development of Coot

ACTA CRYSTALLOGRAPHICA SECTION D, Issue 4 2010
P. Emsley
Coot is a molecular-graphics application for model building and validation of biological macromolecules. The program displays electron-density maps and atomic models and allows model manipulations such as idealization, real-space refinement, manual rotation/translation, rigid-body fitting, ligand search, solvation, mutations, rotamers and Ramachandran idealization. Furthermore, tools are provided for model validation as well as interfaces to external programs for refinement, validation and graphics. The software is designed to be easy to learn for novice users, which is achieved by ensuring that tools for common tasks are `discoverable' through familiar user-interface elements (menus and toolbars) or by intuitive behaviour (mouse controls). Recent developments have focused on providing tools for expert users, with customisable key bindings, extensions and an extensive scripting interface. The software is under rapid development, but has already achieved very widespread use within the crystallographic community. The current state of the software is presented, with a description of the facilities available and of some of the underlying methods employed. [source]