Calibration Procedure (calibration + procedure)
Selected Abstracts

Evaluation of the PESERA model in two contrasting environments
EARTH SURFACE PROCESSES AND LANDFORMS, Issue 5 2009. F. Licciardello
Abstract: The performance of the Pan-European Soil Erosion Risk Assessment (PESERA) model was evaluated by comparison with existing soil erosion data collected in plots under different land uses and climate conditions in Europe. In order to identify the most important sources of error, the PESERA model was evaluated by comparing model output with measured values as well as by assessing the effect of the various model components on prediction accuracy through a multistep approach. First, the performance of the hydrological and erosion components of PESERA was evaluated separately by comparing both runoff and soil loss predictions with measured values. In order to assess the performance of the vegetation growth component of PESERA, predictions based on observed values of vegetation ground cover were also compared with predictions based on simulated vegetation cover values. Finally, in order to evaluate the sediment transport model, predicted monthly erosion rates were also calculated using observed values of runoff and vegetation cover instead of simulated values. Moreover, in order to investigate the capability of PESERA to reproduce seasonal trends, the observed and simulated monthly runoff and erosion values were aggregated at different temporal scales, and we investigated to what extent the model prediction error could be reduced by output aggregation. PESERA showed promise in predicting annual average spatial variability. In its present form, short-term temporal variations are not well captured, probably for several reasons. The multistep approach showed that this is not only due to unrealistic simulation of cover and runoff; erosion prediction itself is also an important source of error. Although variability between the investigated land uses and climate conditions is well captured, absolute rates are strongly underestimated. A calibration procedure, focused on a soil erodibility factor, is proposed to reduce the significant underestimation of soil erosion rates. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Towards a simple dynamic process conceptualization in rainfall–runoff models using multi-criteria calibration and tracers in temperate, upland catchments
HYDROLOGICAL PROCESSES, Issue 3 2010. C. Birkel
Abstract: Empirically based understanding of streamflow generation dynamics in a montane headwater catchment formed the basis for the development of simple, low-parameterized rainfall–runoff models. This study was based in the Girnock catchment in the Cairngorm Mountains of Scotland, where runoff generation is dominated by overland flow from peaty soils in valley-bottom areas that are characterized by dynamic expansion and contraction of saturation zones. A stepwise procedure was used to select the level of model complexity that could be supported by field data. This facilitated assessment of how the dynamic process representation improved model performance. Model performance was evaluated using a multi-criteria calibration procedure which applied a time series of hydrochemical tracers as an additional objective function. Flow simulations with the dynamic saturation area model (SAM) substantially improved several evaluation criteria relative to a static model. Multi-criteria evaluation using ensembles of performance measures provided a much more comprehensive assessment of model performance than single efficiency statistics, which alone could be misleading. Simulation of conservative source-area tracers (Gran alkalinity) as part of the calibration procedure showed that a simple two-storage model is the minimum complexity needed to capture the dominant processes governing catchment response. Additionally, calibration was improved by the integration of tracers into the flow model, which constrained model uncertainty and improved the hydrodynamics of the simulations in a way that plausibly captured the contribution of different source areas to streamflow. This approach contributes to the quest for low-parameter models that can achieve process-based simulation of hydrological response. Copyright © 2009 John Wiley & Sons, Ltd. [source]
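The Birkel et al. abstract does not reproduce its objective function, but the general shape of such a multi-criteria calibration is easy to sketch: one efficiency score per observed variable (streamflow plus a conservative tracer such as Gran alkalinity), combined into a single criterion to be maximized. A minimal Python illustration follows; the Nash–Sutcliffe efficiency and the weighting scheme are standard choices rather than necessarily those of the paper, and the weight `w_flow` is an assumed free setting.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values below 0 are
    worse than simply predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def multi_criteria_objective(q_obs, q_sim, alk_obs, alk_sim, w_flow=0.5):
    """Weighted multi-criteria score (to be maximized by the calibration):
    a flow efficiency plus a tracer efficiency, the tracer term playing the
    role of the additional objective function described in the abstract."""
    return w_flow * nse(q_obs, q_sim) + (1.0 - w_flow) * nse(alk_obs, alk_sim)
```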
Multi-variable parameter estimation to increase confidence in hydrological modelling
HYDROLOGICAL PROCESSES, Issue 2 2002. Sten Bergström
Abstract: The expanding use and increased complexity of hydrological runoff models have given rise to concern about overparameterization and the risk of compensating errors. One proposed way out is calibration and validation against additional observations, such as snow, soil moisture, groundwater or water quality. A general problem, however, when calibrating the model against more than one variable is the strategy for parameter estimation. The most straightforward method is to calibrate the model components sequentially. Recent results show that in this way the model may be locked into a parameter setting which is good enough for one variable but excludes proper simulation of other variables. This is particularly the case for water quality modelling, where a small compromise in terms of runoff simulation may lead to dramatically better simulations of water quality. This calls for an integrated model calibration procedure with a criterion that integrates more aspects of model performance than river runoff alone. The use of multi-variable parameter estimation and internal control of the HBV hydrological model is discussed and highlighted by two case studies. The first example is from a forested basin in northern Sweden and the second from an agricultural basin in the south of the country. A new calibration strategy, which is integrated rather than sequential, is proposed and tested. It is concluded that comparison of model results with more measurements than runoff alone can lead to increased confidence in the physical relevance of the model, and that the new calibration strategy can be useful for further model development. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Reconstructing floodplain sedimentation rates from heavy metal profiles by inverse modelling
HYDROLOGICAL PROCESSES, Issue 1 2002. Hans Middelkoop
Abstract: The embanked floodplains of the lower River Rhine in the Netherlands contain large amounts of heavy metals as a result of many years of deposition of contaminated overbank sediments. Depending on local sedimentation rates and changing pollution trends in the past, the metal pollution varies greatly between different floodplain sections as well as vertically within the floodplain soil profiles. Maximum metal concentrations in floodplain soils vary from 30 to 130 mg/kg for Cu, from 70 to 490 mg/kg for Pb and from 170 to 1450 mg/kg for Zn. In the present study these metals were used as a tracer to reconstruct sedimentation rates at 28 sites on the lower River Rhine floodplains. The temporal trend in pollution of the lower River Rhine over the past 150 years was reconstructed on the basis of metal concentrations in sediments from small ponds within the floodplain area. Using a one-dimensional sedimentation model, average sedimentation rates over the past century were determined using an inverse modelling calibration procedure. The advantages of this method are that it uses information over an entire profile, it requires only a limited number of samples, it accounts for post-depositional redistribution of the metals, and it provides quantitative estimates of the precision of the sedimentation rates obtained. Estimated sedimentation rates vary between about 0.2 mm/year and 15 mm/year. The lowest metal concentrations are found in the distal parts of floodplain sections with low flooding frequencies, where average sedimentation rates have been less than about 5 mm/year. The largest metal accumulations occur in low-lying floodplain sections where average sedimentation rates have been more than 10 mm/year. Copyright © 2002 John Wiley & Sons, Ltd. [source]
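Middelkoop's inverse-modelling step lends itself to a compact illustration: a forward model maps an assumed sedimentation rate and the reconstructed pollution history onto a concentration-depth profile, and the rate is then chosen to minimize the misfit with the measured profile. The sketch below (Python with SciPy) omits the post-depositional metal redistribution and the precision estimates that the actual method includes; the pollution trend, depths and rate bounds are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical pollution history: Zn concentration (mg/kg) of freshly
# deposited sediment per year, built here by simple interpolation.
years = np.arange(1850, 2001)
trend = np.interp(years, [1850, 1930, 1970, 2000], [100.0, 600.0, 1450.0, 300.0])

def forward_profile(rate_mm_yr, depths_cm):
    """Constant-rate forward model: a sample at depth z (cm) was deposited
    in the year 2000 - 10*z/rate (no mixing or compaction in this sketch)."""
    deposition_year = 2000.0 - depths_cm * 10.0 / rate_mm_yr
    return np.interp(deposition_year, years, trend)

def fit_rate(depths_cm, measured_zn):
    """Inverse step: find the sedimentation rate minimizing the squared
    misfit between modelled and measured concentration profiles."""
    misfit = lambda r: np.sum((forward_profile(r, depths_cm) - measured_zn) ** 2)
    return minimize_scalar(misfit, bounds=(0.2, 15.0), method="bounded").x
```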
A destructuration theory and its application to SANICLAY model
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 10 2010. Mahdi Taiebat
Abstract: Many natural clays have an undisturbed shear strength in excess of the remoulded strength. Destructuration modeling provides a means to account for such sensitivity in a constitutive model. This paper extends the SANICLAY model to include destructuration. Two distinct types of destructuration are considered: isotropic and frictional. The former is a concept already presented in relation to other models and in essence constitutes a mechanism of isotropic softening of the yield surface with destructuration. The latter refers to the reduction of the critical stress ratio reflecting the effect of destructuration on the friction angle, and is believed to be a novel proposition. Both types depend on a measure of destructuration rate expressed in terms of combined plastic volumetric and deviatoric strain rates. The SANICLAY model itself is generalized from its previous form by additional dependence of the yield surface on the third isotropic stress invariant. Such a generalization allows simplified model versions of lower complexity, including one with a single surface and an associative flow rule, to be obtained as particular cases simply by setting the parameters of the generalized version accordingly. A detailed calibration procedure for the relatively few model constants is presented, and the performance of three versions of the model, in descending order of complexity, is validated by comparison of simulations to various data for oedometric consolidation followed by triaxial undrained compression and extension tests on two structured clays. Copyright © 2009 John Wiley & Sons, Ltd. [source]

SANISAND: Simple anisotropic sand plasticity model
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 8 2008. Mahdi Taiebat
Abstract: SANISAND is the name used for a family of simple anisotropic sand constitutive models developed over the past few years within the framework of critical state soil mechanics and bounding surface plasticity. The existing SANISAND models use a narrow, open, cone-type yield surface with apex at the origin obeying rotational hardening, which implies that only changes of the stress ratio can cause plastic deformations, while constant stress-ratio loading induces only elastic response. In order to circumvent this limitation, the present member of the SANISAND family introduces a modified eight-curve equation as the analytical description of a narrow but closed cone-type yield surface that obeys rotational and isotropic hardening. This modification enables the prediction of plastic strains during any type of constant stress-ratio loading, a feature lacking from the previous SANISAND models, without losing their well-established predictive capability for all other loading conditions, including cyclic loading. In the process, the plausible assumption is made that the plastic strain rate decomposes into two parts, one due to the change of stress ratio and a second due to loading under constant stress ratio, with isotropic hardening depending on the volumetric component of the latter part only. The model formulation is presented firstly in the triaxial stress space, and subsequently its multiaxial generalization is developed following systematically the steps of the triaxial one. A detailed calibration procedure for the model constants is presented, and successful simulation of both drained and undrained behavior of sands under constant and variable stress-ratio loadings at various densities and confining pressures is obtained by the model. Copyright © 2007 John Wiley & Sons, Ltd. [source]
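Neither SANISAND abstract spells out its calibration steps, but one representative step common to models of this family is fitting the critical state line in the void ratio-mean stress plane. The sketch below assumes the power-law form e_c = e0 − λc (p/pat)^ξ that appears in several SANISAND-type formulations; the triaxial end-point data and starting values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

P_ATM = 101.325  # kPa, atmospheric reference pressure

def csl(p, e0, lam, xi):
    """Power-law critical state line e_c = e0 - lam*(p/p_at)**xi, as used
    in several SANISAND-type models."""
    return e0 - lam * (p / P_ATM) ** xi

# Hypothetical critical-state points (mean stress p in kPa, void ratio e)
# picked off the ends of drained/undrained triaxial tests.
p_cs = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
e_cs = np.array([0.92, 0.89, 0.85, 0.80, 0.73])

params, _ = curve_fit(csl, p_cs, e_cs, p0=(0.95, 0.05, 0.7))
e0, lam, xi = params
print(f"e0 = {e0:.3f}, lambda_c = {lam:.4f}, xi = {xi:.2f}")
```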
A rate-dependent cohesive crack model based on anisotropic damage coupled to plasticity
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 9 2006. Per-Ola Svahn
Abstract: In quasi-brittle material, the complex process of decohesion between particles in microcracks and localization of the displacement field into macrocracks is limited to a narrow fracture zone, and it is often modelled with cohesive crack models. Since the anisotropic nature of the decohesion process in separation and sliding is essential, it receives particular attention in this paper. Moreover, for cyclic and dynamic loading, unloading, load reversal (including crack closure) and rate dependency are essential features, and they are included in a new model. The modelling of degradation is based on a 'localized' version of anisotropic continuum damage coupled to inelasticity. The concept of strain energy equivalence between the states in the effective and nominal settings is adopted in order to define the free energy of the interface. The proposed fracture criterion is of the Mohr type, with a smooth transition of the failure and kinematics (slip and dilatation) characteristics between tension and shear. The chosen potential, of the Lemaitre type, for the evolution of the dissipative processes is additively decomposed into plastic and damage parts, and non-associative constitutive equations are obtained. The constitutive equations are integrated by applying the backward Euler rule and using Newton iteration. The proposed model is assessed analytically and numerically, and a typical calibration procedure for concrete is proposed. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Anthropological and physicochemical investigation of the burnt remains of Tomb IX in the 'Sa Figu' hypogeal necropolis (Sassari, Italy) – Early Bronze Age
INTERNATIONAL JOURNAL OF OSTEOARCHAEOLOGY, Issue 2 2008. G. Piga
Abstract: Excavations carried out in Tomb IX of the hypogeal necropolis of 'Sa Figu', near the village of Ittiri (Sassari, Italy), supplied burnt human bone remains and pottery unambiguously attributable to the Early Bronze Age (characterised by the local 'Bonnannaro' culture). Besides the anthropological study, we have investigated and evaluated the possibility of a funerary cremation practice in Sardinian prehistory, a subject that has not previously been considered from a scientific point of view. Making use of a calibration procedure based on X-ray diffraction (XRD) line-broadening analysis, which is related to the microstructural properties of the bone mineral, it was possible to estimate the combustion temperature to which the fragmented bones were subjected. It was found that the studied bones reached temperatures varying from 400°C up to a maximum of 850°C. This spread of values suggests inhomogeneous combustion of the bones, which seems compatible with funerary cremation practices. Copyright © 2007 John Wiley & Sons, Ltd. [source]
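The Piga et al. abstract describes its calibration only in outline. A plausible minimal reading: XRD line broadening gives an apparent crystallite size (here via the standard Scherrer equation), and a calibration curve built from reference bone heated to known temperatures maps that size to a combustion temperature. The sketch below follows that reading; the Scherrer constant, peak position and calibration data are invented placeholders, not values from the paper.

```python
import numpy as np

def crystallite_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Scherrer estimate of apparent crystallite size from XRD line width
    (Cu K-alpha wavelength by default; instrumental broadening assumed
    already subtracted from the measured FWHM)."""
    beta = np.radians(fwhm_deg)              # line breadth in radians
    theta = np.radians(two_theta_deg / 2.0)  # Bragg angle
    return k * wavelength_nm / (beta * np.cos(theta))

# Hypothetical calibration: apparent hydroxyapatite crystallite size of
# reference bone burnt at known temperatures (values invented for the sketch).
temps_C = np.array([200.0, 400.0, 600.0, 700.0, 800.0, 900.0])
sizes_nm = np.array([18.0, 25.0, 40.0, 70.0, 110.0, 160.0])

def estimate_temperature(fwhm_deg, two_theta_deg):
    """Map a measured line width to a combustion temperature via the curve."""
    size = crystallite_size_nm(fwhm_deg, two_theta_deg)
    return np.interp(size, sizes_nm, temps_C)
```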
An improved independent component regression modeling and quantitative calibration procedure
AICHE JOURNAL, Issue 6 2010. Chunhui Zhao
Abstract: An improved independent component regression (M-ICR) algorithm is proposed by constructing joint latent-variable (LV) based regressors, and a quantitative statistical analysis procedure is designed using a bootstrap technique for model validation and performance evaluation. First, the drawbacks of conventional regression modeling algorithms are analyzed. Then the proposed M-ICR algorithm is formulated for regressor design. It constructs a dual-objective optimization criterion function, simultaneously incorporating quality relevance and independence into the feature extraction procedure. This ties together the ideas of partial least squares (PLS) and independent component regression (ICR) under the same mathematical umbrella. By adjusting the controllable suboptimization objective weights, it adds insight into the different roles of quality-relevant and independent characteristics in calibration modeling and thus provides possibilities to combine the advantages of PLS and ICR. Furthermore, a quantitative statistical analysis procedure based on a bootstrapping technique is designed to identify the effects of the LVs, determine a better model rank and overcome ill-conditioning caused by model over-parameterization. A confidence interval on quality prediction is also approximated. The performance of the proposed method is demonstrated using both numerical and real-world data. © 2009 American Institute of Chemical Engineers AIChE J, 2010 [source]

Monitoring of mutarotation of monosaccharides by hydrophilic interaction chromatography
JOURNAL OF SEPARATION SCIENCE, Issue 6-7 2010. J. Pazourek
Abstract: Calibration based on the "single-point calibration method", a simple exponential transformation of the response function of an evaporative light scattering detector, was improved and applied to the analysis of selected saccharides under hydrophilic interaction chromatography conditions (a polar LiChrospher 100 DIOL stationary phase with an acetonitrile/water mobile phase). The improved approach to the calibration procedure yielded a calibration curve with excellent linearity (quality coefficient <5%). Quantitative evaluation of chromatograms of D-galactose suggested that not only the anomers but even the pyranose and furanose forms of the anomers could be resolved; the resulting calculations of the abundance of each anomeric form correlated strongly with literature data obtained mostly by NMR studies (analogous results were also obtained for D-arabinose, D-glucose and D-mannose). Because of the rapid separation (retention time less than 10 min), the observed correlation made it possible to monitor the anomeric conversion (mutarotation) of monosaccharides. [source]
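The "single-point calibration method" itself is not detailed in the abstract. The sketch below assumes the commonly used power-law ELSD response A = a·m^b, in which a single reference injection fixes the scale factor once the exponent b is known, so that peak areas can be linearized into masses and the resolved anomeric forms quantified. The function names, the value of b and all numbers are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def elsd_mass(area, area_ref, mass_ref, b):
    """Single-point calibration for an ELSD with power-law response A = a*m**b:
    the reference point fixes 'a', so m = m_ref * (A/A_ref)**(1/b)."""
    return mass_ref * (area / area_ref) ** (1.0 / b)

def anomer_fractions(areas, area_ref, mass_ref, b):
    """Relative abundance of resolved forms (e.g. alpha/beta pyranose and
    furanose) from their linearized peak responses."""
    masses = np.array([elsd_mass(a, area_ref, mass_ref, b) for a in areas])
    return masses / masses.sum()

# Illustrative use: three resolved peaks of D-galactose, exponent b assumed
# known from a prior multi-point calibration.
print(anomer_fractions([520.0, 310.0, 45.0], area_ref=400.0, mass_ref=1.0, b=1.6))
```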
Characterization of Low-Molecular-Weight PLA using HPLC
MACROMOLECULAR MATERIALS & ENGINEERING, Issue 1 2010. Fabio Codari
Abstract: HPLC is applied and assessed as an effective tool to investigate both the production of PLA by polycondensation and its corresponding degradation. A new HPLC calibration procedure is developed through which it is possible to fully characterize low-molecular-weight PLA samples by determining the concentration of each individual oligomer. A comparison between HPLC, 1H NMR spectroscopy and non-aqueous solution titration is also reported in order to confirm the reliability of the proposed method. Finally, the proposed analytical technique is applied to monitor the course of a polycondensation reaction performed at 150 °C and 133.3 mbar for 12 h. [source]

Improved method for isotopic and quantitative analysis of dissolved inorganic carbon in natural water samples
RAPID COMMUNICATIONS IN MASS SPECTROMETRY, Issue 15 2006. Nelly Assayag
Abstract: We present here an improved and reliable method for measuring the concentration of dissolved inorganic carbon (DIC) and its isotope composition (δ13CDIC) in natural water samples. Our apparatus, a gas chromatograph coupled to an isotope ratio mass spectrometer (GC-IRMS), runs in a quasi-automated mode and is able to analyze about 50 water samples per day. The whole procedure (sample preparation, CO2(g)–CO2(aq) equilibration time and GC-IRMS analysis) requires 2 days. It consists of injecting an aliquot of water into an H3PO4-loaded and He-flushed 12 mL glass tube. The H3PO4 reacts with the water and converts the DIC into aqueous and gaseous CO2. After a CO2(g)–CO2(aq) equilibration time of between 15 and 24 h, a portion of the headspace gas (mainly CO2 + He) is introduced into the GC-IRMS to measure the carbon isotope ratio of the released CO2(g), from which the δ13CDIC is determined via a calibration procedure. For standard solutions with DIC concentrations ranging from 1 to 25 mmol·L−1 and solution volumes of 1 mL (high-DIC samples) or 5 mL (low-DIC samples), δ13CDIC values are determined with a precision (1σ) better than 0.1‰. Compared with previously published headspace equilibration methods, the major improvement presented here is the development of a calibration procedure which takes into account the carbon isotope fractionation associated with the CO2(g)–CO2(aq) partition: the set of standard solutions and samples has to be prepared and analyzed with the same 'gas/liquid' and 'H3PO4/water' volume ratios. A set of natural water samples (lake, river and hydrothermal springs) was analyzed to demonstrate the utility of this new method. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Valuation of floating range notes in a LIBOR market model
THE JOURNAL OF FUTURES MARKETS, Issue 7 2008. Ting-Pin Wu
Abstract: This study derives an approximate pricing formula for floating range notes (FRNs) within the multifactor LIBOR market model (LMM) framework. The LMM offers an easy calibration procedure, and the resulting pricing formula is tractable. In addition, since the underlying rate of FRNs is usually the LIBOR rate, pricing FRNs under the LMM is direct and intuitive. © 2008 Wiley Periodicals, Inc. Jrl Fut Mark 28:697–710, 2008 [source]
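Wu's closed-form approximation is not reproduced in the abstract, but the payoff it approximates is straightforward to price by Monte Carlo, which makes a useful benchmark. The sketch below uses a deliberately reduced setting: one forward LIBOR, lognormal with constant volatility, simulated driftless as if under the payment-date forward measure, ignoring the drift corrections a full multifactor LMM would impose. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def range_coupon_pv(L0, sigma, T_start, T_end, low, high, coupon,
                    notional, df_pay, n_paths=20_000, obs_per_year=252):
    """Monte Carlo PV of one range-accrual coupon: the coupon accrues for
    each observation date on which the simulated LIBOR fixes in [low, high]."""
    n_obs = max(1, round((T_end - T_start) * obs_per_year))
    dt = (T_end - T_start) / n_obs
    z = rng.standard_normal((n_paths, n_obs))
    # Driftless lognormal forward-rate paths (single-factor simplification).
    log_L = np.log(L0) + np.cumsum(-0.5 * sigma**2 * dt
                                   + sigma * np.sqrt(dt) * z, axis=1)
    in_range = (np.exp(log_L) >= low) & (np.exp(log_L) <= high)
    accrual = in_range.mean(axis=1)          # fraction of dates in range
    tau = T_end - T_start                    # accrual-period year fraction
    return df_pay * notional * coupon * tau * accrual.mean()

pv = range_coupon_pv(L0=0.03, sigma=0.20, T_start=0.0, T_end=0.5,
                     low=0.02, high=0.04, coupon=0.05,
                     notional=100.0, df_pay=0.985)
print(f"range-accrual coupon PV ~ {pv:.4f}")
```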
2324: Comparison of algorithms for oximetry in vivo and ex vivo
ACTA OPHTHALMOLOGICA, Issue 2010. D. De Brouwere
Purpose: Several authors have proposed algorithms to extract the oxygen saturation in retinal blood vessels based on multispectral image analysis. We evaluated the outcomes of seven known algorithms based on hyperspectral retinal images. Methods: Hyperspectral images are acquired using a fundus camera in which a slit spectrograph is registered onto a retinal image. This combination provides both accurate spatial and spectral information along the selected slit. Hyperspectral image analysis was used as input for the oximetry calculations described in the literature. We used a model eye to evaluate the different techniques in a controlled setup. Defibrinated horse blood was perfused through microtubes placed in front of a white (Spectralon) background. Oxygen saturation was controlled by mixing different concentrations of sodium dithionite into the blood. Results: Oxygen saturation was varied in five equidistant steps between 0 and 1. We correlated the outcomes to the metric of Harvey et al. [Biomed Optics 6631, 2007]. Linear correlation with the other algorithms resulted in r2 values between 0.881 and 0.985; however, we observed a large discrepancy in the slope of each correlation line. The algorithms were also evaluated in images recorded in five healthy volunteers. With all techniques, veins could be separated from arteries by their reduced oxygen saturation, although values varied strongly between the different techniques. Conclusion: Our findings confirm the working of a number of noninvasive retinal oximetry algorithms. The different readings can be attributed to an offset caused by uncertainty in the pigmentation and scattering parameters used in the calibration procedure. [source]

Improved lateral force calibration based on the angle conversion factor in atomic force microscopy
JOURNAL OF MICROSCOPY, Issue 2 2007. Dukhyun Choi
Summary: A novel calibration method is proposed for determining lateral forces in atomic force microscopy (AFM) by introducing an angle conversion factor, defined as the ratio of the twist angle of a cantilever to the corresponding lateral signal. This factor greatly simplifies the calibration procedure. Once the angle conversion factor is determined in an AFM, the lateral force calibration factor of any rectangular cantilever can be obtained by simple computation without further experiments. To determine the angle conversion factor, this study focuses on the determination of the twist angle of a cantilever during lateral force calibration. Since the twist angle of a cantilever cannot be directly measured in AFM, the angles are obtained by means of the moment balance equations between a rectangular AFM cantilever and a simple, commercially available step grating. To eliminate the effect of the adhesive force, the gradients of the lateral signals and the twist angles as a function of normal force are used in calculating the angle conversion factor. To verify the reliability and reproducibility of the method, two step gratings with different heights and two different rectangular cantilevers were used in lateral force calibration in AFM. The results showed good agreement, to within 10%. The method was validated by comparing the coefficient of friction of mica so determined with values in the literature. [source]
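The abstract defines the angle conversion factor but not the final conversion to force. Under standard beam theory for a rectangular cantilever, the torsional stiffness and the tip moment arm turn a measured twist angle into a lateral force, so an angle conversion factor in radians per volt of lateral signal yields a force-per-volt calibration factor. The sketch below uses textbook formulas; the material and geometry values are illustrative, not from the paper.

```python
def torsional_stiffness(G, w, t, L):
    """Torsional spring constant of a rectangular cantilever from standard
    beam theory: k_phi = G*w*t**3 / (3*L), in N*m/rad."""
    return G * w * t**3 / (3.0 * L)

def lateral_force_factor(angle_conv_rad_per_V, G, w, t, L, tip_height):
    """Lateral force calibration factor alpha in N/V: with the angle
    conversion factor sigma = twist angle / lateral signal known,
    F = (k_phi * sigma / h) * V_lateral."""
    k_phi = torsional_stiffness(G, w, t, L)
    h = tip_height + t / 2.0   # moment arm: tip height plus half thickness
    return k_phi * angle_conv_rad_per_V / h

# Illustrative values: silicon cantilever (G ~ 50 GPa), 30 um wide, 2 um
# thick, 225 um long, 15 um tip; angle conversion factor assumed measured
# as 1e-4 rad/V.
alpha = lateral_force_factor(1e-4, G=50e9, w=30e-6, t=2e-6,
                             L=225e-6, tip_height=15e-6)
print(f"lateral force per volt ~ {alpha:.3e} N/V")
```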
Calibration of pesticide leaching models: critical review and guidance for reporting
PEST MANAGEMENT SCIENCE (FORMERLY: PESTICIDE SCIENCE), Issue 8 2002. Igor G. Dubus
Abstract: Calibration of pesticide leaching models may be undertaken to evaluate the ability of models to simulate experimental data, to assist in their parameterisation where values for input parameters are difficult to determine experimentally, to determine values for specific model inputs (eg sorption and degradation parameters) and to allow extrapolations to be carried out. Although calibration of leaching models is a critical phase in the assessment of pesticide exposure, the lack of guidance means that calibration procedures default to the modeller. This may result in different calibration and extrapolation results for different individuals depending on the procedures used, and may thus influence decisions regarding the placement of crop-protection products on the market. A number of issues are discussed in this paper, including data requirements and the assessment of data quality, the selection of a model and of parameters for calibration, the use of automated calibration techniques as opposed to more traditional trial-and-error approaches, difficulties in the comparison of simulated and measured data, differences in calibration procedures, and the assessment of parameter values derived by calibration. Guidelines for the reporting of calibration activities within the scope of pesticide registration are proposed. © 2002 Society of Chemical Industry [source]

In Vitro Sunscreen Transmittance Measurement with Concomitant Evaluation of Photostability: Evolution of a Method
PHOTOCHEMISTRY & PHOTOBIOLOGY, Issue 4 2009. Robert M. Sayre
The recent paper by Miura et al. (Photochem. Photobiol. 84[6], 1569–1575) offers a re-examination of extant in vitro methods for dynamically measuring sunscreen photodegradation under continuous irradiation in situ. We commend the authors' efforts toward developing an improved system for accurate in vitro sunscreen assessment. Their work describes an alternative, derivative apparatus incorporating an improved detector, which may prove an exceptionally valuable contribution toward that goal. Unfortunately, their report suffers from insufficient detail in the description of the instrumentation and lacks the requisite calibration procedures. Their use of a solar simulator filtered for conventional in vivo sun protection factor (SPF) testing poses transmittance measurement limitations at short wavelengths that are not adequately addressed; the source is also deficient, relative to sunlight, in the longer UVA wavelengths shown to contribute to sunscreen photoinstability. We concur that in vitro sunscreen testing should utilize continuous or multiple irradiation doses and should ideally use the same 2 mg cm−2 product application amount as the human SPF test. We encourage their proposal that methodology which simultaneously measures sunscreen spectral transmittance and photodegradation under continuous irradiation to an accumulated erythemal endpoint, as we previously described, be developed into a consensus test standard. [source]
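As background to this discussion, the in vitro SPF that such transmittance measurements feed into is conventionally computed as a ratio of erythemally weighted irradiances (a Diffey-type calculation); re-evaluating it as the transmittance T(λ) degrades with accumulated dose is what links the measurement to photostability. A minimal sketch, assuming the caller supplies the erythema action spectrum, a solar spectrum and the measured transmittance on a common wavelength grid:

```python
import numpy as np

def in_vitro_spf(wl_nm, erythema_action, solar_irradiance, transmittance):
    """Diffey-type in vitro SPF: erythemally weighted irradiance without the
    product divided by the erythemally weighted irradiance transmitted
    through it, integrated over wavelength."""
    weighted = erythema_action * solar_irradiance
    return np.trapz(weighted, wl_nm) / np.trapz(weighted * transmittance, wl_nm)
```

Tracking this quantity after each irradiation increment, with T(λ) re-measured in situ, gives the dynamic photostability readout the commentary argues for.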
A uniform phenomenological constitutive model for glassy and semicrystalline polymers
POLYMER ENGINEERING & SCIENCE, Issue 8 2001. Y. Duan
Abstract: A phenomenological constitutive model is proposed on the basis of four models: the Johnson-Cook model, the G'Sell-Jonas model, the Matsuoka model and the Brooks model. The proposed constitutive model has a concise expression of the dependence of stress on strain, strain rate and temperature. It is capable of uniformly describing the entire range of deformation behavior of glassy and semicrystalline polymers, especially the intrinsic strain softening and subsequent orientation hardening of glassy polymers. At least three experimental stress–strain curves, covering variation with strain rate and temperature, are needed to calibrate the eight material coefficients. Sequential calibration procedures for the eight material coefficients are given in detail. Predictions from the proposed constitutive model are compared with experimental data for two glassy polymers, poly(methyl methacrylate) and polycarbonate, under various deformation conditions, and with those of the G'Sell-Jonas model for polyamide 12, a semicrystalline polymer. [source]

Review Article: Ocular blood flow assessment using continuous laser Doppler flowmetry
ACTA OPHTHALMOLOGICA, Issue 6 2010. Charles E. Riva
Acta Ophthalmol. 2010: 88: 622–629. Abstract: This article describes the technique of continuous laser Doppler flowmetry (LDF) as applied to the measurement of the flux of red blood cells in the optic nerve head, iris and subfoveal choroid. Starting with an exposition of the physical principles underlying LDF, we first describe the various devices developed to perform LDF in these vascular beds. We then discuss the clinical protocols, blood flow parameters, calibration procedures, reproducibility and limitations of the LDF technique. Various problems still need to be solved in order to realize the full potential of LDF in the assessment of microcirculatory haemodynamics. [source]
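For readers unfamiliar with the LDF parameters the review refers to, they are conventionally derived as moments of the Doppler power spectrum: the zeroth moment scales with the number of moving red blood cells (volume), the first moment with their flux, and the ratio gives a mean velocity. A minimal sketch, with instrument-specific normalization and noise corrections omitted and the integration band chosen arbitrarily:

```python
import numpy as np

def ldf_parameters(freq_hz, power, f_min=30.0, f_max=30_000.0):
    """Moment-based LDF parameters from the Doppler power spectrum P(f):
    volume ~ zeroth moment, flux ~ first moment, velocity = flux/volume."""
    band = (freq_hz >= f_min) & (freq_hz <= f_max)
    f, p = freq_hz[band], power[band]
    volume = np.trapz(p, f)                  # ~ number of moving RBCs
    flux = np.trapz(f * p, f)                # ~ RBC flux (velocity x volume)
    velocity = flux / volume if volume > 0 else 0.0
    return velocity, volume, flux
```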