Model Performance



Selected Abstracts


Towards a simple dynamic process conceptualization in rainfall–runoff models using multi-criteria calibration and tracers in temperate, upland catchments

HYDROLOGICAL PROCESSES, Issue 3 2010
C. Birkel
Abstract Empirically based understanding of streamflow generation dynamics in a montane headwater catchment formed the basis for the development of simple, low-parameterized rainfall–runoff models. This study was based in the Girnock catchment in the Cairngorm Mountains of Scotland, where runoff generation is dominated by overland flow from peaty soils in valley bottom areas that are characterized by dynamic expansion and contraction of saturation zones. A stepwise procedure was used to select the level of model complexity that could be supported by field data. This facilitated the assessment of how dynamic process representation improved model performance. Model performance was evaluated using a multi-criteria calibration procedure which applied a time series of hydrochemical tracers as an additional objective function. Flow simulations comparing a static against the dynamic saturation area model (SAM) showed that the latter substantially improved several evaluation criteria. Multi-criteria evaluation using ensembles of performance measures provided a much more comprehensive assessment of model performance than single efficiency statistics, which alone could be misleading. Simulation of conservative source area tracers (Gran alkalinity) as part of the calibration procedure showed that a simple two-storage model is the minimum complexity needed to capture the dominant processes governing catchment response. Additionally, calibration was improved by the integration of tracers into the flow model, which constrained model uncertainty and improved the hydrodynamics of simulations in a way that plausibly captured the contribution of different source areas to streamflow. This approach contributes to the quest for low-parameter models that can achieve process-based simulation of hydrological response. Copyright © 2009 John Wiley & Sons, Ltd. [source]
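The two-storage structure that this abstract identifies as the minimum viable complexity can be sketched as a pair of linear reservoirs. This is a generic illustration only, not the authors' saturation area model; the split fraction and recession coefficients are hypothetical.

```python
def two_store_flow(rain, k_fast=0.5, k_slow=0.05, split=0.3):
    """Generic two-storage conceptual runoff model: rainfall is split
    between a fast store (dynamic saturated area) and a slow store
    (groundwater); each drains as a linear reservoir and streamflow is
    the sum of the two outflows. All parameter values are illustrative."""
    s_fast = s_slow = 0.0
    flow = []
    for p in rain:
        s_fast += split * p          # recharge the fast, near-stream store
        s_slow += (1.0 - split) * p  # recharge the slow groundwater store
        q = k_fast * s_fast + k_slow * s_slow
        s_fast *= (1.0 - k_fast)     # linear-reservoir drainage
        s_slow *= (1.0 - k_slow)
        flow.append(q)
    return flow
```

Because each store drains linearly, the simulated hydrograph shows a quick peak followed by a slow tracer-constrained recession, which is the behaviour the calibration in the study was designed to discriminate.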


Runoff and suspended sediment yields from an unpaved road segment, St John, US Virgin Islands

HYDROLOGICAL PROCESSES, Issue 1 2007
Carlos E. Ramos-Scharrón
Abstract Unpaved roads are believed to be the primary source of terrigenous sediments being delivered to marine ecosystems around the island of St John in the eastern Caribbean. The objectives of this study were to: (1) measure runoff and suspended sediment yields from a road segment; (2) develop and test two event-based runoff and sediment prediction models; and (3) compare the predicted sediment yields against measured values from an empirical road erosion model and from a sediment trap. The runoff models use the Green–Ampt infiltration equation to predict excess precipitation and then use either an empirically derived unit hydrograph or a kinematic wave to generate runoff hydrographs. Precipitation, runoff, and suspended sediment data were collected from a 230 m long, mostly unpaved road segment over an 8-month period. Only 3–5 mm of rainfall was sufficient to initiate runoff from the road surface. Both models simulated similar hydrographs. Model performance was poor for storms with less than 1 cm of rainfall, but improved for larger events. The largest source of error was the inability to predict initial infiltration rates. The two runoff models were coupled with empirical sediment rating curves, and the predicted sediment yields were approximately 0·11 kg per square metre of road surface per centimetre of precipitation. The sediment trap data indicated a road erosion rate of 0·27 kg m⁻² cm⁻¹. The difference in sediment production between these two methods can be attributed to the fact that the suspended sediment samples were predominantly sand and silt, whereas the sediment trap yielded mostly sand and gravel. The combination of these data sets yields a road surface erosion rate of 0·31 kg m⁻² cm⁻¹, or approximately 36 kg m⁻² year⁻¹. This is four orders of magnitude higher than the measured erosion rate from undisturbed hillslopes.
The results confirm the importance of unpaved roads in altering runoff and erosion rates in a tropical setting, provide insights into the controlling processes, and provide guidance for predicting runoff and sediment yields at the road-segment scale. Copyright © 2006 John Wiley & Sons, Ltd. [source]
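The Green–Ampt step of the runoff models described above can be illustrated with a minimal explicit time-stepping sketch. The soil parameters below (K, psi, dtheta) are hypothetical, and this is not the authors' implementation.

```python
def green_ampt_excess(rain, dt, K, psi, dtheta):
    """Partition rainfall into infiltration and excess (runoff) using the
    Green-Ampt capacity equation f = K * (1 + psi * dtheta / F), where F
    is cumulative infiltration, K saturated conductivity (cm/h), psi
    wetting-front suction (cm) and dtheta the moisture deficit (-)."""
    F = 1e-6                 # tiny initial F avoids division by zero
    excess = []
    for i in rain:           # i: rainfall intensity (cm/h) in this step
        f_cap = K * (1.0 + psi * dtheta / F)  # infiltration capacity
        f = min(f_cap, i)                     # cannot exceed the supply
        F += f * dt
        excess.append((i - f) * dt)           # runoff depth this step
    return excess
```

The declining capacity as F grows reproduces the observed behaviour that early rain infiltrates while later rain in the same storm runs off, and it is exactly the initial-infiltration term that the abstract identifies as the largest source of error.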


Modelling stream flow for use in ecological studies in a large, arid zone river, central Australia

HYDROLOGICAL PROCESSES, Issue 6 2005
Justin F. Costelloe
Abstract Australian arid zone ephemeral rivers are typically unregulated and maintain a high level of biodiversity and ecological health. Understanding the ecosystem functions of these rivers requires an understanding of their hydrology. These rivers are typified by highly variable hydrological regimes and a paucity, often a complete absence, of hydrological data to describe these flow regimes. A daily time-step, grid-based, conceptual rainfall–runoff model was developed for the previously uninstrumented Neales River in the arid zone of northern South Australia. Hourly, logged stage data provided a record of stream-flow events in the river system. In conjunction with opportunistic gaugings of stream-flow events, these data were used in the calibration of the model. The poorly constrained spatial variability of rainfall distribution and catchment characteristics (e.g. storage depths) limited the accuracy of the model in replicating the absolute magnitudes and volumes of stream-flow events. In particular, small but ecologically important flow events were poorly modelled. Model performance was improved by the application of catchment-wide processes replicating quick runoff from high intensity rainfall and improving the area inundated versus discharge relationship in the channel sections of the model. Representing areas of high and low soil moisture storage depths in the hillslope areas of the catchment also improved the model performance. The need for some explicit representation of the spatial variability of catchment characteristics (e.g. channel/floodplain, low storage hillslope and high storage hillslope) to effectively model the range of stream-flow events makes the development of relatively complex rainfall–runoff models necessary for multisite ecological studies in large, ungauged arid zone catchments.
Grid-based conceptual models provide a good balance between providing the capacity to easily define land types with differing rainfall–runoff responses, flexibility in defining data output points and a parsimonious water-balance–routing model. Copyright © 2004 John Wiley & Sons, Ltd. [source]
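The grid-of-buckets idea behind such conceptual models can be sketched in a few lines: each cell carries its own storage depth (low-storage versus high-storage hillslope), and rain beyond the remaining capacity becomes quick runoff. This is purely illustrative and not the Neales River model itself.

```python
def grid_runoff(rain, storage_depth, storage_state=None):
    """One time step of a toy grid-based bucket model. storage_depth[j]
    is cell j's capacity (e.g. low- vs. high-storage hillslope); rain in
    excess of the remaining capacity becomes saturation-excess runoff,
    summed over the grid. Returns (runoff, updated storage states)."""
    if storage_state is None:
        storage_state = [0.0] * len(storage_depth)
    runoff = 0.0
    for j, cap in enumerate(storage_depth):
        storage_state[j] += rain
        if storage_state[j] > cap:
            runoff += storage_state[j] - cap  # saturation excess
            storage_state[j] = cap
    return runoff, storage_state
```

With differing capacities per cell, small storms produce runoff only from the low-storage cells, which is the mechanism the study added to capture small but ecologically important flow events.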


An operational model predicting autumn bird migration intensities for flight safety

JOURNAL OF APPLIED ECOLOGY, Issue 4 2007
J. VAN BELLE
Summary 1. Forecasting migration intensity can improve flight safety and reduce the operational costs of collisions between aircraft and migrating birds. This is particularly true for military training flights, which can be rescheduled if necessary and often take place at low altitudes and during the night. Migration intensity depends strongly on weather conditions but reported effects of weather differ among studies. It is therefore unclear to what extent existing predictive models can be extrapolated to new situations. 2. We used radar measurements of bird densities in the Netherlands to analyse the relationship between weather and nocturnal migration. Using our data, we tested the performance of three regression models that have been developed for other locations in Europe. We developed and validated new models for different combinations of years to test whether regression models can be used to predict migration intensity in independent years. Model performance was assessed by comparing model predictions against benchmark predictions based on measured migration intensity of the previous night and predictions based on a 6-year average trend. We also investigated the effect of the size of the calibration data set on model robustness. 3. All models performed better than the benchmarks, but the mismatch between measurements and predictions was large for existing models. Model performance was best for newly developed regression models. The performance of all models was best at intermediate migration intensities. The performance of our models clearly increased with sample size, up to about 90 nocturnal migration measurements. Significant input variables included seasonal migration trend, wind profit, 24-h trend in barometric pressure and rain. 4. Synthesis and applications. Migration intensities can be forecast with a regression model based on meteorological data. This and other existing models are only valid locally and cannot be extrapolated to new locations.
Model development for new locations requires data sets with representative inter- and intraseasonal variability so that cross-validation can be applied effectively. The Royal Netherlands Air Force currently uses the regression model developed in this study to predict migration intensities 3 days ahead. This improves the reliability of migration intensity warnings and allows rescheduling of training flights if needed. [source]
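Benchmarking a forecast against persistence, as done above, is easy to make concrete with a mean-squared-error skill score. A minimal sketch (not the authors' radar-based evaluation):

```python
def skill_vs_persistence(observed, predicted):
    """MSE skill score of a forecast against the persistence benchmark
    (tonight's migration intensity = last night's): 1 is a perfect
    forecast, 0 matches persistence, negative is worse than it."""
    n = len(observed) - 1
    mse_model = sum((o - p) ** 2
                    for o, p in zip(observed[1:], predicted[1:])) / n
    mse_pers = sum((o - prev) ** 2
                   for prev, o in zip(observed, observed[1:])) / n
    return 1.0 - mse_model / mse_pers
```

A model only adds operational value for rescheduling flights when this score is positive, i.e. when it beats simply reusing the previous night's measurement.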


Modelling survival in acute severe illness: Cox versus accelerated failure time models

JOURNAL OF EVALUATION IN CLINICAL PRACTICE, Issue 1 2008
John L. Moran MBBS FRACP FJFICM MD
Abstract Background: The Cox model has been the mainstay of survival analysis in the critically ill, and time-dependent covariates have infrequently been incorporated into survival analysis. Objectives: To model 28-day survival of patients with acute lung injury (ALI) and acute respiratory distress syndrome (ARDS), and compare the utility of Cox and accelerated failure time (AFT) models. Methods: Prospective cohort study of 168 adult patients enrolled at diagnosis of ALI in 21 adult ICUs in three Australian States with measurement of survival time, censored at 28 days. Model performance was assessed as goodness-of-fit [GOF, cross-products of quantiles of risk and time intervals (P > 0.1), Cox model] and explained variation ('R2', Cox and AFT). Results: Over a 2-month study period (October–November 1999), 168 patients with ALI were identified, with a mean (SD) age of 61.5 (18) years and 30% female. Peak mortality hazard occurred at days 7–8 after onset of ALI/ARDS. In the Cox model, increasing age and female gender, plus interaction, were associated with an increased mortality hazard. Time-varying effects were established for patient severity-of-illness score (decreasing hazard over time) and multiple-organ-dysfunction score (increasing hazard over time). The Cox model was well specified (GOF, P > 0.34) and R2 = 0.546, 95% CI: 0.390, 0.781. Both log-normal (R2 = 0.451, 95% CI: 0.321, 0.695) and log-logistic (R2 = 0.470, 95% CI: 0.346, 0.714) AFT models identified the same predictors as the Cox model, but did not demonstrate convincingly superior overall fit. Conclusions: Time dependence of predictors of survival in ALI/ARDS exists and must be appropriately modelled. The Cox model with time-varying covariates remains a flexible model in survival analysis of patients with acute severe illness. [source]
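A worked illustration of why Cox and AFT are genuinely different modelling choices: only for a Weibull baseline does an acceleration factor a on survival time correspond exactly to a proportional-hazards effect, with hazard ratio a**(-shape). The log-normal and log-logistic AFT fits in the abstract therefore cannot be re-expressed as Cox models. A minimal numeric sketch (parameter values are hypothetical):

```python
import math

def weibull_survival(t, scale, shape):
    """Weibull survival function S(t) = exp(-(t/scale)**shape)."""
    return math.exp(-((t / scale) ** shape))

def weibull_hazard(t, scale, shape):
    """Weibull hazard h(t) = (shape/scale) * (t/scale)**(shape - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)
```

Stretching time by a (an AFT effect, scale -> a * scale) leaves the survival curve unchanged at the rescaled times and multiplies the hazard uniformly by a**(-shape), the proportional-hazards form.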


A ground-level ozone forecasting model for Santiago, Chile

JOURNAL OF FORECASTING, Issue 6 2002
Héctor Jorquera
Abstract A physically based model for ground-level ozone forecasting is evaluated for Santiago, Chile. The model predicts the daily peak ozone concentration, with the daily rise of air temperature as input variable; weekends and rainy days appear as interventions. This model was used to analyse historical data, using the Linear Transfer Function/Finite Impulse Response (LTF/FIR) formalism; the Simultaneous Transfer Function (STF) method was used to analyse several monitoring stations together. Model evaluation showed a good forecasting performance across stations, for low and high ozone impacts, with power of detection (POD) values between 70 and 100%, Heidke's Skill Scores between 40% and 70% and low false alarm rates (FAR). The model consistently outperforms a pure persistence forecast. Model performance was not sensitive to different implementation options. The model performance degrades for two- and three-day-ahead forecasts, but is still acceptable for the purpose of developing an environmental warning system for Santiago. Copyright © 2002 John Wiley & Sons, Ltd. [source]
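The categorical verification scores quoted above (POD, FAR, Heidke's Skill Score) all derive from a 2x2 contingency table of forecast versus observed ozone exceedances. A self-contained sketch of the standard definitions (the counts below are made up for illustration):

```python
def forecast_skill(hits, misses, false_alarms, correct_negatives):
    """Verification scores for an exceedance forecast from a 2x2
    contingency table: probability of detection (POD), false alarm
    ratio (FAR) and the Heidke Skill Score (HSS), which measures
    accuracy relative to random chance."""
    n = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    # expected number correct by chance, from the marginal totals
    expected = ((hits + misses) * (hits + false_alarms) +
                (misses + correct_negatives) *
                (false_alarms + correct_negatives)) / n
    hss = (hits + correct_negatives - expected) / (n - expected)
    return pod, far, hss
```

HSS = 0 means no better than chance and HSS = 1 a perfect forecast, so the 40–70% range reported for Santiago indicates genuine skill beyond persistence.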


The Impact of Injury Coding Schemes on Predicting Hospital Mortality After Pediatric Injury

ACADEMIC EMERGENCY MEDICINE, Issue 7 2009
Randall S. Burd MD
Abstract Objectives: Accurate adjustment for injury severity is needed to evaluate the effectiveness of trauma management. While the choice of injury coding scheme used for modeling affects performance, the impact of combining coding schemes on performance has not been evaluated. The purpose of this study was to use Bayesian logistic regression to develop models predicting hospital mortality in injured children and to compare the performance of models developed using different injury coding schemes. Methods: Records of children (age < 15 years) admitted after injury were obtained from the National Trauma Data Bank (NTDB) and the National Pediatric Trauma Registry (NPTR) and used to train Bayesian logistic regression models predicting mortality using three injury coding schemes (International Classification of Diseases, 9th revision [ICD-9] injury codes, the Abbreviated Injury Scale [AIS] severity scores, and the Barell matrix) and their combinations. Model performance was evaluated using independent data from the NTDB and the Kids' Inpatient Database 2003 (KID). Results: Discrimination was optimal when modeling both ICD-9 and AIS severity codes (area under the receiver operating curve [AUC] = 0.921 [NTDB] and 0.967 [KID], Hosmer-Lemeshow [HL] h-statistic = 115 [NTDB] and 147 [KID]), while calibration was optimal when modeling coding based on the Barell matrix (AUC = 0.882 [NTDB] and 0.936 [KID], HL h-statistic = 19 [NTDB] and 69 [KID]). When compared to models based on ICD-9 codes alone, models that also included AIS severity scores and coding from the Barell matrix showed improved discrimination and calibration. Conclusions: Mortality models that incorporate additional injury coding schemes perform better than those based on ICD-9 codes alone in the setting of pediatric trauma. Combining injury coding schemes may be an effective approach for improving the predictive performance of empirically derived estimates of injury mortality. [source]
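The discrimination statistic used above, the AUC, has a simple rank-based interpretation that can be computed directly, without tracing the ROC curve. A generic sketch (the scores and labels are illustrative, not study data):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity: the
    probability that a randomly chosen positive case (death) receives a
    higher predicted risk than a randomly chosen negative one, counting
    ties as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.921 therefore means that in 92.1% of death/survival pairs the model assigns the higher risk to the child who died; calibration (the HL statistic) must be checked separately, which is why the two criteria favour different coding schemes in the study.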


A genetic model for determining MSH2 and MLH1 carrier probabilities based on family history and tumor microsatellite instability

CLINICAL GENETICS, Issue 3 2006
F Marroni
Mutation-predicting models can be useful when deciding on the genetic testing of individuals at risk and in determining the cost effectiveness of screening strategies at the population level. The aim of this study was to evaluate the performance of a newly developed genetic model that incorporates tumor microsatellite instability (MSI) information, called the AIFEG model, in predicting the presence of mutations in MSH2 and MLH1 in probands with suspected hereditary non-polyposis colorectal cancer. The AIFEG model is based on published estimates of mutation frequencies and cancer penetrances in carriers and non-carriers and employs the program MLINK of the FASTLINK package to calculate the proband's carrier probability. Model performance is evaluated in a series of 219 families screened for mutations in both MSH2 and MLH1, in which 68 disease-causing mutations were identified. Predictions are first obtained using family history only and then converted into posterior probabilities using information on MSI. This improves predictions substantially. Using a probability threshold of 10% for mutation analysis, the AIFEG model applied to our series has 100% sensitivity and 71% specificity. [source]
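The step of converting a family-history prediction into a posterior probability using MSI is a Bayes update on the odds scale. The sketch below illustrates the mechanics only; the sensitivity and specificity values are hypothetical inputs, not the likelihoods used by the AIFEG model.

```python
def posterior_carrier_probability(prior, sensitivity, specificity, msi_high):
    """Bayes update of a pedigree-based carrier probability given a
    tumour MSI result. sensitivity/specificity describe how well MSI-high
    status indicates a mismatch-repair mutation (hypothetical values)."""
    lr = (sensitivity / (1.0 - specificity) if msi_high
          else (1.0 - sensitivity) / specificity)   # likelihood ratio
    odds = prior / (1.0 - prior) * lr               # prior odds x LR
    return odds / (1.0 + odds)                      # back to probability
```

An MSI-high tumour multiplies the odds by a factor greater than 1 and pushes a borderline proband over the 10% testing threshold, while a microsatellite-stable tumour pulls the probability down, which is how MSI information sharpens the family-history-only predictions.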


Effects of species and habitat positional errors on the performance and interpretation of species distribution models

DIVERSITY AND DISTRIBUTIONS, Issue 4 2009
Patrick E. Osborne
Abstract Aim: A key assumption in species distribution modelling is that both species and environmental data layers contain no positional errors, yet this will rarely be true. This study assesses the effect of introduced positional errors on the performance and interpretation of species distribution models. Location: Baixo Alentejo region of Portugal. Methods: Data on steppe bird occurrence were collected using a random stratified sampling design on a 1-km² pixel grid. Environmental data were sourced from satellite imagery and digital maps. Error was deliberately introduced into the species data as shifts in a random direction of 0–1, 2–3, 4–5 and 0–5 pixels. Whole habitat layers were shifted by 1 pixel to cause mis-registration, and the cumulative effect of one to three shifted layers was investigated. Distribution models were built for three species using three algorithms with three replicates. Test models were compared with controls without errors. Results: Positional errors in the species data led to a drop in model performance (larger errors having larger effects, typically up to a 10% drop in area under the curve on average), although not enough for models to be rejected. Model interpretation was more severely affected, with inconsistencies in the contributing variables. Errors in the habitat layers had similar although lesser effects. Main conclusions: Models with species positional errors are hard to detect, often statistically good, ecologically plausible and useful for prediction, but interpreting them is dangerous. Mis-registered habitat layers produce smaller effects, probably because shifting entire layers does not break down the correlation structure to the same extent as random shifts in individual species observations. Spatial autocorrelation in the habitat layers may protect against species positional errors to some extent, but the relationship is complex and requires further work.
The key recommendation must be that positional errors should be minimised through careful field design and data processing. [source]
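The error-introduction step described in the Methods, shifting each occurrence record by up to a fixed number of pixels in a random direction, can be sketched as follows. This is a generic jitter routine, not the authors' exact procedure.

```python
import math
import random

def jitter_points(points, max_shift, rng):
    """Displace each (x, y) observation by up to max_shift pixels in a
    uniformly random direction, mimicking deliberately introduced
    positional error in species occurrence data."""
    out = []
    for x, y in points:
        angle = rng.uniform(0.0, 2.0 * math.pi)
        dist = rng.uniform(0.0, max_shift)
        out.append((x + dist * math.cos(angle), y + dist * math.sin(angle)))
    return out
```

Fitting the same model to the original and jittered coordinates, as the study does with three algorithms and three replicates, then isolates how much performance and variable importance degrade purely from positional error.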


Assessing a numerical cellular braided-stream model with a physical model

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 5 2005
Andrea B. Doeschl-Wilson
Abstract A. B. Murray and C. Paola (1994, Nature, vol. 371, pp. 54–57; 1997, Earth Surface Processes and Landforms, vol. 22, pp. 1001–1025) proposed a cellular model for braided river dynamics as an exploratory device for investigating the conditions necessary for the occurrence of braiding. The model reproduces a number of the general morphological and dynamic features of braided rivers in a simplified form. Here we test the representation of braided channel morphodynamics in the Murray–Paola model against the known characteristics (mainly from a sequence of high resolution digital elevation models) of a physical model of a braided stream. The overall aim is to further the goals of the exploratory modelling approach by first investigating the capabilities and limitations of the existing model and then by proposing modifications and alternative approaches to modelling of the essential features of braiding. The model confirms the general inferences of Murray and Paola (1997) about model performance. However, the modelled evolution shows little resemblance to the real evolution of the small-scale laboratory river, although this depends to some extent on the coarseness of the grid used in the model relative to the scale of the topography. The model does not reproduce the bar-scale topography and dynamics even when the grid scale and amplitude of topography are adapted to be equivalent to the original Murray–Paola results. Strong dependence of the modelled processes on local bed slopes and the tendency for the model to adopt its own intrinsic scale, rather than adapt to the scale of the pre-existing topography, appear to be the main causes of the differences between numerical model results and the physical model morphology and dynamics.
The model performance can be improved by modification of the model equations to more closely represent the water surface but as an exploratory approach hierarchical modelling promises greater success in overcoming the identified shortcomings. Copyright © 2005 John Wiley & Sons, Ltd. [source]
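The "strong dependence on local bed slopes" criticized above comes from the cellular routing step, in which water entering a cell is divided among downstream neighbours in proportion to bed slope. The following is a sketch in the spirit of the Murray–Paola rule, not their exact formulation (which also includes sediment-transport and lateral-flow rules):

```python
def route_discharge(q_in, z_cell, z_downstream):
    """Split the discharge leaving a cell among its downstream
    neighbours in proportion to the positive elevation drop toward
    each; cells with no drop receive nothing. Purely illustrative."""
    drops = [max(z_cell - z, 0.0) for z in z_downstream]
    total = sum(drops)
    if total == 0.0:                         # ponded: share equally
        return [q_in / len(z_downstream)] * len(z_downstream)
    return [q_in * d / total for d in drops]
```

Because each cell's partitioning depends only on immediately adjacent elevations, small local slope perturbations can redirect most of the flow, which is consistent with the intrinsic-scale behaviour the comparison with the physical model revealed.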


Testing a model for predicting the timing and location of shallow landslide initiation in soil-mantled landscapes

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 9 2003
M. Casadei
Abstract The growing availability of digital topographic data and the increased reliability of precipitation forecasts invite modelling efforts to predict the timing and location of shallow landslides in hilly and mountainous areas in order to reduce risk to an ever-expanding human population. Here, we exploit a rare data set to develop and test such a model. In a 1·7 km² catchment, a near-annual aerial photographic coverage records just three single storm events over a 45 year period that produced multiple landslides. Such data enable us to test model performance by running the entire rainfall time series and determining whether just those three storms are correctly detected. To do this, we link a dynamic and spatially distributed shallow subsurface runoff model (similar to TOPMODEL) to an infinite slope model to predict the spatial distribution of shallow landsliding. The spatial distribution of soil depth, a strong control on local landsliding, is predicted from a process-based model. Because of its common availability, daily rainfall data were used to drive the model. Topographic data were derived from digitized 1 : 24 000 US Geological Survey contour maps. Analysis of the landslides shows that 97 occurred in 1955, 37 in 1982 and five in 1998, although the heaviest rainfall was in 1982. Furthermore, intensity–duration analysis of available daily and hourly rainfall from the closest raingauges does not discriminate those three storms from others that did not generate failures. We explore the question of whether a mechanistic modelling approach is better able to identify landslide-producing storms. Landslide and soil production parameters were fixed from studies elsewhere. Four hydrologic parameters characterizing the saturated hydraulic conductivity of the soil and underlying bedrock and its decline with depth were first calibrated on the 1955 landslide record.
Success was characterized as the greatest number of actual landslides predicted with the least total area predicted to be unstable. Because landslide area was consistently overpredicted, a threshold catchment area of predicted slope instability was used to define whether a rainstorm was a significant landslide producer. Many combinations of the four hydrological parameters performed equally well for the 1955 event, but only one combination successfully identified the 1982 storm as the only landslide-producing storm during the period 1980–86. Application of this parameter combination to the entire 45 year record successfully identified the three events, but also predicted that two other landslide-producing events should have occurred. This performance is significantly better than the empirical intensity–duration threshold approach, but requires considerable calibration effort. Overprediction of instability, both for storms that produced landslides and for non-producing storms, appears to arise from at least four causes: (1) coarse rainfall data time scale and inability to document short rainfall bursts and predict pressure wave response; (2) absence of local rainfall data; (3) legacy effect of previous landslides; and (4) inaccurate topographic and soil property data. Greater resolution of spatial and rainfall data, as well as topographic data, coupled with systematic documentation of landslides to create time series to test models, should lead to significant improvements in shallow landslide forecasting. Copyright © 2003 John Wiley & Sons, Ltd. [source]
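The infinite-slope criterion at the core of models of this kind reduces to a factor of safety driven by relative wetness, which is the quantity the TOPMODEL-like hydrologic component supplies. A minimal sketch with illustrative parameter values and default unit weights (not the study's calibrated values):

```python
import math

def infinite_slope_fs(cohesion, phi_deg, slope_deg, soil_depth, wetness,
                      gamma_soil=18.0, gamma_water=9.81):
    """Infinite-slope factor of safety with relative wetness m = h/z
    (saturated thickness over soil depth). FS < 1 flags predicted
    failure. Units: cohesion kPa, angles degrees, soil_depth m,
    unit weights kN/m^3 (defaults are illustrative)."""
    phi = math.radians(phi_deg)
    theta = math.radians(slope_deg)
    cohesion_term = cohesion / (gamma_soil * soil_depth *
                                math.sin(theta) * math.cos(theta))
    friction_term = ((1.0 - wetness * gamma_water / gamma_soil) *
                     math.tan(phi) / math.tan(theta))
    return cohesion_term + friction_term
```

Running this cell by cell with the modelled wetness field produces the map of predicted instability whose total area is thresholded above to decide whether a storm counts as a landslide producer.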


NeuralEnsembles: a neural network based ensemble forecasting program for habitat and bioclimatic suitability analysis

ECOGRAPHY, Issue 1 2009
Jesse R. O'Hanley
NeuralEnsembles is an integrated modeling and assessment tool for predicting areas of species habitat/bioclimatic suitability based on presence/absence data. This free, Windows-based program, which comes with a friendly graphical user interface, generates predictions using ensembles of artificial neural networks. Models can quickly and easily be produced for multiple species and subsequently be extrapolated either to new regions or under different future climate scenarios. An array of options is provided for optimizing the construction and training of ensemble models. Main outputs of the program include text files of suitability predictions, maps and various statistical measures of model performance and accuracy. [source]
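The core idea of an ensemble of neural networks, averaging the suitability scores of several independently initialized members, can be sketched in a few lines. This is a generic illustration (untrained toy networks), not NeuralEnsembles' implementation.

```python
import math
import random

def make_net(rng, n_inputs, n_hidden=3):
    """A tiny one-hidden-layer network with random weights; training is
    omitted because the point here is only the ensemble-averaging step."""
    w1 = [[rng.gauss(0, 1) for _ in range(n_inputs)] for _ in range(n_hidden)]
    w2 = [rng.gauss(0, 1) for _ in range(n_hidden)]
    def predict(x):
        hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
        return 1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(w2, hidden))))
    return predict

def ensemble_suitability(nets, x):
    """Ensemble prediction = mean of the member networks' outputs."""
    return sum(net(x) for net in nets) / len(nets)
```

Averaging over members initialized (and, in practice, trained) differently damps the variance of any single network's prediction, which is the usual motivation for ensembles in habitat-suitability work.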


Effects of species' ecology on the accuracy of distribution models

ECOGRAPHY, Issue 1 2007
Jana M. McPherson
In the face of accelerating biodiversity loss and limited data, species distribution models, which statistically capture and predict species' occurrences based on environmental correlates, are increasingly used to inform conservation strategies. Additionally, distribution models and their fit provide insights into the broad-scale environmental niche of species. To investigate whether the performance of such models varies with species' ecological characteristics, we examined distribution models for 1329 bird species in southern and eastern Africa. The models were constructed at two spatial resolutions with both logistic and autologistic regression. Satellite-derived environmental indices served as predictors, and model accuracy was assessed with three metrics: sensitivity, specificity and the area under the curve (AUC) of receiver operating characteristics plots. We then determined the relationship between each measure of accuracy and ten ecological species characteristics using generalised linear models. Among the ecological traits tested, species' range size, migratory status, affinity for wetlands and endemism proved most influential on the performance of distribution models. The number of habitat types frequented (habitat tolerance), trophic rank, body mass, preferred habitat structure and association with sub-resolution habitats also showed some effect. In contrast, conservation status made no significant impact. These findings did not differ from one spatial resolution to the next. Our analyses thus provide conservation scientists and resource managers with a rule of thumb that helps distinguish, on the basis of ecological traits, between species whose occurrence is reliably or less reliably predicted by distribution models. Reasonably accurate distribution models should, however, be attainable for most species, because the influence that ecological traits bore on model performance was limited.
These results suggest that none of the ecological traits tested provides an obvious correlate for environmental niche breadth or intra-specific niche differentiation. [source]


Wavelet analysis of the scale- and location-dependent correlation of modelled and measured nitrous oxide emissions from soil

EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 1 2005
A. E. Milne
Summary We used the wavelet transform to quantify the performance of models that predict the rate of emission of nitrous oxide (N2O) from soil. Emissions of N2O and other soil variables that influence emissions were measured on soil cores collected at 256 locations across arable land in Bedfordshire, England. Rate-limiting models of N2O emissions were constructed and fitted to the data by functional analysis. These models were then evaluated by wavelet variance and wavelet correlations, estimated from coefficients of the adapted maximal overlap discrete wavelet transform (AMODWT), of the fitted and measured emission rates. We estimated wavelet variances to assess whether the partition of the variance of modelled rates of N2O emission between scales reflected that of the data. Where the relative distribution of variance in the model is more skewed to coarser scales than is the case for the observation, for example, this indicates that the model predictions are too smooth spatially, and fail adequately to represent some of the variation at finer scales. Scale-dependent wavelet correlations between model and data were used to quantify the model performance at each scale, and in several cases to determine the scale at which the model description of the data broke down. We detected significant changes in correlation between modelled and measured emissions at each spatial scale, showing that, at some scales, model performance was not uniform in space. This suggested that the influence of a soil variable on N2O emissions, important in one region but not in another, had been omitted from the model or modelled poorly. Change points usually occurred at field boundaries or where soil textural class changed. We show that wavelet analysis can be used to quantify aspects of model performance that other methods cannot. By evaluating model behaviour at several scales and positions wavelet analysis helps us to determine whether a model is suitable for a particular purpose. [source]
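The idea of partitioning variance by scale can be illustrated with a Haar-style detail series: at each scale, half the difference between the means of adjacent windows. This circular, undecimated sketch conveys the idea behind the (A)MODWT but is not the transform used in the paper.

```python
def haar_detail(x, scale):
    """Haar-style detail series at a given scale: half the difference
    between the means of the `scale` points ahead of position t and the
    `scale` points behind it, with circular (wrap-around) boundaries."""
    n = len(x)
    detail = []
    for t in range(n):
        ahead = sum(x[(t + k) % n] for k in range(scale)) / scale
        behind = sum(x[(t - 1 - k) % n] for k in range(scale)) / scale
        detail.append(0.5 * (ahead - behind))
    return detail

def wavelet_variance(x, scale):
    """Mean squared detail coefficient at one scale: the share of the
    series' variance attributable to fluctuations of that scale."""
    d = haar_detail(x, scale)
    return sum(v * v for v in d) / len(d)
```

Comparing these scale-by-scale variances for fitted and measured emission rates shows whether a model is too smooth, i.e. whether its variance is skewed to coarser scales than the data's.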


Selecting discriminant function models for predicting the expected richness of aquatic macroinvertebrates

FRESHWATER BIOLOGY, Issue 2 2006
JOHN VAN SICKLE
Summary 1. The predictive modelling approach to bioassessment estimates the macroinvertebrate assemblage expected at a stream site if it were in a minimally disturbed reference condition. The difference between expected and observed assemblages then measures the departure of the site from reference condition. 2. Most predictive models employ site classification, followed by discriminant function (DF) modelling, to predict the expected assemblage from a suite of environmental variables. Stepwise DF analysis is normally used to choose a single subset of DF predictor variables with a high accuracy for classifying sites. An alternative is to screen all possible combinations of predictor variables, in order to identify several 'best' subsets that yield good overall performance of the predictive model. 3. We applied best-subsets DF analysis to assemblage and environmental data from 199 reference sites in Oregon, U.S.A. Two sets of 66 best DF models containing between one and 14 predictor variables (that is, having model orders from one to 14) were developed, for five-group and 11-group site classifications. 4. Resubstitution classification accuracy of the DF models increased consistently with model order, but cross-validated classification accuracy did not improve beyond seventh or eighth-order models, suggesting that the larger models were overfitted. 5. Overall predictive model performance at model training sites, measured by the root-mean-squared error of the observed/expected species richness ratio, also improved steadily with DF model order. But high-order DF models usually performed poorly at an independent set of validation sites, another sign of model overfitting. 6. Models selected by stepwise DF analysis showed evidence of overfitting and were outperformed by several of the best-subsets models. 7. 
The group separation strength of a DF model, as measured by Wilks' Λ, was more strongly correlated with overall predictive model performance at training sites than was DF classification accuracy. 8. Our results suggest improved strategies for developing reliable, parsimonious predictive models. We emphasise the value of independent validation data for obtaining a realistic picture of model performance. We also recommend assessing not just one or two, but several, candidate models based on their overall performance as well as the performance of their DF component. 9. We provide links to our free software for stepwise and best-subsets DF analysis. [source]
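The overall performance measure used above, the root-mean-squared error of the observed/expected (O/E) richness ratio about its ideal value of 1, is straightforward to compute. A sketch with illustrative numbers:

```python
def rmse_of_oe(observed_richness, expected_richness):
    """Root-mean-squared error of the observed/expected (O/E) species
    richness ratio about 1 across sites; smaller is better, and 0 means
    every site's observed richness matched the model's expectation."""
    ratios = [o / e for o, e in zip(observed_richness, expected_richness)]
    return (sum((r - 1.0) ** 2 for r in ratios) / len(ratios)) ** 0.5
```

Evaluating this at independent validation sites, rather than only at training sites, is what exposed the overfitting of the high-order and stepwise-selected DF models.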


Comparing and evaluating process-based ecosystem model predictions of carbon and water fluxes in major European forest biomes

GLOBAL CHANGE BIOLOGY, Issue 12 2005
Pablo Morales
Abstract Process-based models can be classified into: (a) terrestrial biogeochemical models (TBMs), which simulate fluxes of carbon, water and nitrogen coupled within terrestrial ecosystems, and (b) dynamic global vegetation models (DGVMs), which further couple these processes interactively with changes in slow ecosystem processes depending on resource competition, establishment, growth and mortality of different vegetation types. In this study, four models (RHESSys, GOTILWA+, LPJ-GUESS and ORCHIDEE), representing both modelling approaches, were compared and evaluated against benchmarks provided by eddy-covariance measurements of carbon and water fluxes at 15 forest sites within the EUROFLUX project. Overall, model-measurement agreement varied greatly among sites. Both modelling approaches have somewhat different strengths, but there was no model among those tested that universally performed well on the two variables evaluated. Small biases and errors suggest that ORCHIDEE and GOTILWA+ performed better in simulating carbon fluxes while LPJ-GUESS and RHESSys did a better job in simulating water fluxes. In general, the models can be considered as useful tools for studies of climate change impacts on carbon and water cycling in forests. However, the various sources of variation among model simulations and between model simulations and observed data described in this study place some constraints on the results and to some extent reduce their reliability. For example, at most sites in the Mediterranean region all models generally performed poorly, most likely because of problems in the representation of water stress effects on both carbon uptake by photosynthesis and carbon release by heterotrophic respiration (Rh). The use of flux data as a means of assessing key processes in models of this type is an important approach to improving model performance.
Our results show that the models have value but that further model development is necessary with regard to the representation of some of the key ecosystem processes. [source]
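The bias and error measures used for this kind of benchmarking reduce to two one-line statistics, mean error and root-mean-squared error. A minimal, self-contained sketch; the flux values below are invented for illustration, not EUROFLUX data.

```python
import math

def bias(sim, obs):
    """Mean error: positive values mean the model overestimates the flux."""
    return sum(s - o for s, o in zip(sim, obs)) / len(obs)

def rmse(sim, obs):
    """Root-mean-squared error between simulated and observed fluxes."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

# Toy daily NEE-like series (umol m-2 s-1): this model runs slightly low at peak uptake.
obs = [-2.0, -5.5, -8.0, -6.0, -1.5, 0.5]
sim = [-1.8, -5.0, -6.9, -5.6, -1.2, 0.4]

print(f"bias = {bias(sim, obs):+.2f}, rmse = {rmse(sim, obs):.2f}")
```

Reporting both matters: a model can have near-zero bias while still missing the diurnal or seasonal shape, which the RMSE picks up.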


Estimating diurnal to annual ecosystem parameters by synthesis of a carbon flux model with eddy covariance net ecosystem exchange observations

GLOBAL CHANGE BIOLOGY, Issue 2 2005
Bobby H. Braswell
Abstract We performed a synthetic analysis of Harvard Forest net ecosystem exchange of CO2 (NEE) time series and a simple ecosystem carbon flux model, the simplified Photosynthesis and Evapo-Transpiration model (SIPNET). SIPNET runs at a half-daily time step, and has two vegetation carbon pools, a single aggregated soil carbon pool, and a simple soil moisture sub-model. We used a stochastic Bayesian parameter estimation technique that provided posterior distributions of the model parameters, conditioned on the observed fluxes and the model equations. In this analysis, we estimated the values of all quantities that govern model behavior, including both rate constants and initial conditions for carbon pools. The purpose of this analysis was not to calibrate the model to make predictions about future fluxes but rather to understand how much information about process controls can be derived directly from the NEE observations. A wavelet decomposition enabled us to assess model performance at multiple time scales from diurnal to decadal. The model parameters are most highly constrained by eddy flux data at daily to seasonal time scales, suggesting that this approach is not useful for calculating annual integrals. However, the ability of the model to fit both the diurnal and seasonal variability patterns in the data simultaneously, using the same parameter set, indicates the effectiveness of this parameter estimation method. Our results quantify the extent to which the eddy covariance data contain information about the ecosystem process parameters represented in the model, and suggest several next steps in model development and observations for improved synthesis of models with flux observations. [source]
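A stochastic Bayesian parameter estimation of this kind can be illustrated with a one-parameter toy model and a Metropolis sampler. Everything below (the Q10-style respiration flux, the synthetic observations, the flat prior bounds) is invented for illustration and is far simpler than SIPNET; the structure, proposing a parameter, comparing log-likelihoods, and accumulating a posterior sample, is the same.

```python
import math
import random

def flux_model(temp, q10):
    """Toy half-daily respiration flux: a base rate scaled by a Q10 response."""
    return 1.0 * q10 ** ((temp - 10.0) / 10.0)

def log_likelihood(q10, data, sigma=0.2):
    """Gaussian log-likelihood (up to a constant) of observed fluxes."""
    return -sum((obs - flux_model(t, q10)) ** 2 for t, obs in data) / (2 * sigma ** 2)

random.seed(42)
true_q10 = 2.0
data = [(t, flux_model(t, true_q10) + random.gauss(0, 0.1)) for t in range(0, 30, 2)]

# Metropolis sampling of the posterior, with a flat prior on [1, 4].
q10, samples = 1.5, []
ll = log_likelihood(q10, data)
for _ in range(5000):
    prop = q10 + random.gauss(0, 0.1)
    if 1.0 <= prop <= 4.0:
        ll_prop = log_likelihood(prop, data)
        if math.log(random.random()) < ll_prop - ll:
            q10, ll = prop, ll_prop
    samples.append(q10)

posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

The width of the retained sample, not just its mean, is the payoff: it shows how strongly (or weakly) the observations constrain each parameter, which is exactly the question the study asks of the eddy flux data.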


Simulated and observed fluxes of sensible and latent heat and CO2 at the WLEF-TV tower using SiB2.5

GLOBAL CHANGE BIOLOGY, Issue 9 2003
Ian Baker
Abstract Three years of meteorological data collected at the WLEF-TV tower were used to drive a revised version of the Simple Biosphere (SiB 2.5) Model. Physiological properties and vegetation phenology were specified from satellite imagery. Simulated fluxes of heat, moisture, and carbon were compared to eddy covariance measurements taken onsite as a means of evaluating model performance on diurnal, synoptic, seasonal, and interannual time scales. The model was very successful in simulating variations of latent heat flux when compared to observations, slightly less so in the simulation of sensible heat flux. The model overestimated peak values of sensible heat flux on both monthly and diurnal scales. There was evidence that the differences between observed and simulated fluxes might be linked to wetlands near the WLEF tower, which were not present in the SiB simulation. The model overestimated the magnitude of the net ecosystem exchange of CO2 in both summer and winter. Mid-day maximum assimilation was well represented by the model, but late afternoon simulations showed excessive carbon uptake due to misrepresentation of within-canopy shading in the model. Interannual variability was not well simulated because only a single year of satellite imagery was used to parameterize the model. [source]


A Stable and Efficient Numerical Algorithm for Unconfined Aquifer Analysis

GROUND WATER, Issue 4 2009
Elizabeth Keating
The nonlinearity of equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of Richards' equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table, does not require "dry" cells to convert to inactive cells, and allows recharge to flow through relatively dry cells to the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problem as well. [source]


Coding Response to a Case-Mix Measurement System Based on Multiple Diagnoses

HEALTH SERVICES RESEARCH, Issue 4p1 2004
Colin Preyra
Objective. To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Data Sources. Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Study Design. Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Principal Findings. Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Conclusions. Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post. [source]


Can the choice of interpolation method explain the difference between swap prices and futures prices?

ACCOUNTING & FINANCE, Issue 2 2005
Rob Brown
JEL G13. Abstract The standard model linking the swap rate to the rates in a contemporaneous strip of futures interest rate contracts typically produces biased estimates of the swap rate. Institutional differences usually require some form of interpolation to be employed and may in principle explain this empirical result. Using Australian data, we find evidence consistent with this explanation and show that model performance is greatly improved if an alternative interpolation method is used. In doing so, we also provide the first published Australian evidence on the accuracy of the futures-based approach to pricing interest rate swaps. [source]
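The pricing link the paper tests can be sketched as follows: chain discount factors from a strip of futures-implied forward rates, fill any missing quarter by interpolation, and solve for the fixed rate that equates the two legs. The rates below are hypothetical, and the simple linear scheme stands in for whichever interpolation convention is actually used; day-count and convexity adjustments are deliberately ignored.

```python
def par_swap_rate(forwards, tau=0.25):
    """Par swap rate implied by a strip of quarterly simple forward rates.

    Discount factors are chained from the forwards; the fixed rate is the
    one that equates the floating leg (1 - final DF) with the fixed annuity.
    """
    dfs, df = [], 1.0
    for f in forwards:
        df /= (1.0 + f * tau)
        dfs.append(df)
    floating_leg = 1.0 - dfs[-1]
    annuity = tau * sum(dfs)
    return floating_leg / annuity

def linear_interp(x0, y0, x1, y1, x):
    """Linear interpolation, the simplest scheme for gaps in the strip."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Hypothetical one-year strip with one missing quarter filled by interpolation.
known = {0: 0.050, 1: 0.052, 3: 0.058}           # quarters with traded futures
f2 = linear_interp(1, known[1], 3, known[3], 2)  # fill quarter 2
strip = [known[0], known[1], f2, known[3]]
print(f"interpolated forward = {f2:.4f}, swap rate = {par_swap_rate(strip):.4%}")
```

Swapping `linear_interp` for an alternative scheme changes `f2` and hence the model swap rate, which is precisely the sensitivity the paper exploits to explain the swap-futures pricing gap.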


Uncertainty and multiple objective calibration in regional water balance modelling: case study in 320 Austrian catchments

HYDROLOGICAL PROCESSES, Issue 4 2007
J. Parajka
Abstract We examine the value of additional information in multiple objective calibration in terms of model performance and parameter uncertainty. We calibrate and validate a semi-distributed conceptual catchment model for two 11-year periods in 320 Austrian catchments and test three approaches of parameter calibration: (a) traditional single objective calibration (SINGLE) on daily runoff; (b) multiple objective calibration (MULTI) using daily runoff and snow cover data; (c) multiple objective calibration (APRIORI) that incorporates an a priori expert guess about the parameter distribution as additional information to runoff and snow cover data. Results indicate that the MULTI approach performs slightly poorer than the SINGLE approach in terms of runoff simulations, but significantly better in terms of snow cover simulations. The APRIORI approach is essentially as good as the SINGLE approach in terms of runoff simulations but is slightly poorer than the MULTI approach in terms of snow cover simulations. An analysis of the parameter uncertainty indicates that the MULTI approach significantly decreases the uncertainty of the model parameters related to snow processes but does not decrease the uncertainty of other model parameters as compared to the SINGLE case. The APRIORI approach tends to decrease the uncertainty of all model parameters as compared to the SINGLE case. Copyright © 2006 John Wiley & Sons, Ltd. [source]
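A multiple objective calibration of type (b) needs a single scalar criterion to optimize across both data types. One common construction, sketched here with invented numbers, is a weighted sum of Nash-Sutcliffe efficiencies for runoff and snow cover; the actual criterion and weighting used in the study may differ.

```python
def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((s - o) ** 2 for s, o in zip(sim, obs))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def multi_objective(sim_q, obs_q, sim_snow, obs_snow, w=0.5):
    """Weighted compromise between runoff fit and snow-cover fit."""
    return w * nse(sim_q, obs_q) + (1 - w) * nse(sim_snow, obs_snow)

# Invented daily runoff (m3/s) and fractional snow-covered area series.
obs_q = [1.0, 3.0, 7.0, 4.0, 2.0]
sim_q = [1.2, 2.7, 6.5, 4.3, 2.1]
obs_snow = [0.9, 0.7, 0.3, 0.1, 0.0]
sim_snow = [0.8, 0.6, 0.4, 0.2, 0.0]

score = multi_objective(sim_q, obs_q, sim_snow, obs_snow)
```

The weight w encodes the compromise the abstract describes: raising it recovers the SINGLE behaviour (best runoff fit), lowering it trades a little runoff skill for better-constrained snow parameters.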


Use of multi-platform, multi-temporal remote-sensing data for calibration of a distributed hydrological model: an application in the Arno basin, Italy

HYDROLOGICAL PROCESSES, Issue 13 2006
Lorenzo Campo
Abstract Images from satellite platforms are a valid aid for obtaining distributed information on hydrological surface states and parameters needed in calibration and validation of water balance and flood forecasting models. Remotely sensed data are easily available over large areas and with a frequency compatible with land cover changes. In this paper, remotely sensed images from different types of sensor have been utilized to support the calibration of the distributed hydrological model MOBIDIC, currently used in the experimental flood forecasting system of the Arno River Basin Authority. Six radar images from ERS-2 synthetic aperture radar (SAR) sensors (three for summer 2002 and three for spring-summer 2003) have been utilized and a relationship between soil saturation indexes and the backscatter coefficient from SAR images has been investigated. The analysis has been performed only on pixels with meagre or no vegetation cover, in order to justify the assumption that the water content of the soil is the main variable that influences the backscatter coefficient. Such pixels have been obtained by considering vegetation indexes (NDVI) and land cover maps produced by optical sensors (Landsat-ETM). In order to calibrate the soil moisture model based on information provided by SAR images, an optimization algorithm has been utilized to minimize the regression error between saturation indexes from the model and SAR data and the error between measured and modelled discharge flows. Utilizing this procedure, the model parameters that govern soil moisture fluxes have been calibrated, obtaining not only a good match with remotely sensed data, but also an enhancement of model performance in flow prediction with respect to a previous calibration with river discharge data only. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Error analysis for the evaluation of model performance: rainfall-runoff event time series data

HYDROLOGICAL PROCESSES, Issue 8 2005
Edzer J. Pebesma
Abstract This paper provides a procedure for evaluating model performance where model predictions and observations are given as time series data. The procedure focuses on the analysis of error time series by graphing them, summarizing them, and predicting their variability through available information (recalibration). We analysed two rainfall-runoff events from the R-5 data set, and evaluated 12 distinct model simulation scenarios for these events, of which 10 were conducted with the quasi-physically-based rainfall-runoff model (QPBRRM) and two with the integrated hydrology model (InHM). The QPBRRM simulation scenarios differ in their representation of saturated hydraulic conductivity. Two InHM simulation scenarios differ with respect to the inclusion of the roads at R-5. The two models, QPBRRM and InHM, differ strongly in the complexity and number of processes included. For all model simulations we found that errors could be predicted fairly well to very well, based on model output, or based on smooth functions of lagged rainfall data. The errors remaining after recalibration are much more alike in terms of variability than those without recalibration. In this paper, recalibration is not meant to fix models, but merely as a diagnostic tool that exhibits the magnitude and direction of model errors and indicates whether these model errors are related to model inputs such as rainfall. Copyright © 2004 John Wiley & Sons, Ltd. [source]
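The recalibration step, predicting the error series from lagged rainfall, is ordinary least squares on the error time series; the fraction of error variance it explains is the diagnostic. A toy version with invented numbers:

```python
def ols_fit(x, y):
    """Simple least-squares fit y ~ a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def variance(v):
    m = sum(v) / len(v)
    return sum((vi - m) ** 2 for vi in v) / len(v)

# Toy event: model errors that happen to track the previous step's rainfall.
rain = [0.0, 5.0, 12.0, 8.0, 2.0, 0.0, 0.0]
errors = [0.1, 0.2, 1.3, 2.6, 1.8, 0.5, 0.1]
lagged = rain[:-1]      # rainfall lagged by one time step
err = errors[1:]

a, b = ols_fit(lagged, err)
residuals = [e - (a + b * x) for x, e in zip(lagged, err)]
reduction = 1.0 - variance(residuals) / variance(err)   # explained error variance
```

A large `reduction` is the signal the paper describes: the model's errors are systematically related to an input (here rainfall), pointing at where the model, or its forcing, is deficient.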


Modelling stream flow for use in ecological studies in a large, arid zone river, central Australia

HYDROLOGICAL PROCESSES, Issue 6 2005
Justin F. Costelloe
Abstract Australian arid zone ephemeral rivers are typically unregulated and maintain a high level of biodiversity and ecological health. Understanding the ecosystem functions of these rivers requires an understanding of their hydrology. These rivers are typified by highly variable hydrological regimes and a paucity, often a complete absence, of hydrological data to describe these flow regimes. A daily time-step, grid-based, conceptual rainfall-runoff model was developed for the previously uninstrumented Neales River in the arid zone of northern South Australia. Hourly, logged stage data provided a record of stream-flow events in the river system. In conjunction with opportunistic gaugings of stream-flow events, these data were used in the calibration of the model. The poorly constrained spatial variability of rainfall distribution and catchment characteristics (e.g. storage depths) limited the accuracy of the model in replicating the absolute magnitudes and volumes of stream-flow events. In particular, small but ecologically important flow events were poorly modelled. Model performance was improved by the application of catchment-wide processes replicating quick runoff from high intensity rainfall and improving the area inundated versus discharge relationship in the channel sections of the model. Representing areas of high and low soil moisture storage depths in the hillslope areas of the catchment also improved the model performance. The need for some explicit representation of the spatial variability of catchment characteristics (e.g. channel/floodplain, low storage hillslope and high storage hillslope) to effectively model the range of stream-flow events makes the development of relatively complex rainfall-runoff models necessary for multisite ecological studies in large, ungauged arid zone catchments.
Grid-based conceptual models provide a good balance between providing the capacity to easily define land types with differing rainfall-runoff responses, flexibility in defining data output points and a parsimonious water-balance and routing model. Copyright © 2004 John Wiley & Sons, Ltd. [source]
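A grid-based conceptual model of this kind can be reduced, per cell, to a bucket water balance; land types then differ only in parameters such as storage capacity. This is a deliberately minimal sketch with invented forcing, not the Neales River model itself.

```python
def bucket_step(storage, rain, pet, capacity):
    """One daily step of a single-bucket water balance for a grid cell.

    Rain fills the store; spill above capacity becomes quick runoff, and
    evapotranspiration is scaled by relative storage.
    """
    storage += rain
    runoff = max(0.0, storage - capacity)
    storage = min(storage, capacity)
    et = pet * (storage / capacity)
    storage = max(0.0, storage - et)
    return storage, runoff

# Two land types: a low-storage hillslope sheds water, a high-storage one absorbs it.
rain = [0, 40, 10, 0, 0, 25, 0]   # mm/day, invented storm sequence
pet = [4] * len(rain)             # mm/day potential evapotranspiration
for capacity in (20.0, 80.0):     # mm of soil moisture storage
    storage, total_runoff = 5.0, 0.0
    for r, e in zip(rain, pet):
        storage, q = bucket_step(storage, r, e, capacity)
        total_runoff += q
    print(f"capacity {capacity:>4} mm -> event runoff {total_runoff:.1f} mm")
```

Running the same forcing through both capacities shows why the abstract insists on representing high- and low-storage hillslopes separately: the low-capacity cell produces runoff from events the high-capacity cell absorbs entirely.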


Application of the distributed hydrology soil vegetation model to Redfish Creek, British Columbia: model evaluation using internal catchment data

HYDROLOGICAL PROCESSES, Issue 2 2003
Andrew Whitaker
Abstract The Distributed Hydrology Soil Vegetation Model is applied to the Redfish Creek catchment to investigate the suitability of this model for simulation of forested mountainous watersheds in interior British Columbia and other high-latitude and high-altitude areas. On-site meteorological data and GIS information on terrain parameters, forest cover, and soil cover are used to specify model input. A stepwise approach is taken in calibrating the model, in which snow accumulation and melt parameters for clear-cut and forested areas were optimized independent of runoff production parameters. The calibrated model performs well in reproducing year-to-year variability in the outflow hydrograph, including peak flows. In the subsequent model performance evaluation for simulation of catchment processes, emphasis is put on elevation and temporal differences in snow accumulation and melt, spatial patterns of snowline retreat, water table depth, and internal runoff generation, using internal catchment data as much as possible. Although the overall model performance based on these criteria is found to be good, some issues regarding the simulation of internal catchment processes remain. These issues are related to the distribution of meteorological variables over the catchment and a lack of information on spatial variability in soil properties and soil saturation patterns. Present data limitations for testing internal model accuracy serve to guide future data collection at Redfish Creek. This study also illustrates the challenges that need to be overcome before distributed physically based hydrologic models can be used for simulating catchments with fewer data resources. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Multi-variable parameter estimation to increase confidence in hydrological modelling

HYDROLOGICAL PROCESSES, Issue 2 2002
Sten Bergström
Abstract The expanding use and increased complexity of hydrological runoff models have given rise to a concern about overparameterization and risks of compensating errors. One proposed way out is the calibration and validation against additional observations, such as snow, soil moisture, groundwater or water quality. A general problem, however, when calibrating the model against more than one variable is the strategy for parameter estimation. The most straightforward method is to calibrate the model components sequentially. Recent results show that in this way the model may be locked up in a parameter setting, which is good enough for one variable but excludes proper simulation of other variables. This is particularly the case for water quality modelling, where a small compromise in terms of runoff simulation may lead to dramatically better simulations of water quality. This calls for an integrated model calibration procedure with a criterion that integrates more aspects of model performance than just river runoff. The use of multi-variable parameter estimation and internal control of the HBV hydrological model is discussed and highlighted by two case studies. The first example is from a forested basin in northern Sweden and the second one is from an agricultural basin in the south of the country. A new calibration strategy, which is integrated rather than sequential, is proposed and tested. It is concluded that comparison of model results with more measurements than only runoff can lead to increased confidence in the physical relevance of the model, and that the new calibration strategy can be useful for further model development. Copyright © 2002 John Wiley & Sons, Ltd. [source]


Predicting direction shifts on Canadian-US exchange rates with artificial neural networks

INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 2 2001
Jefferson T. Davis
The paper presents a variety of neural network models applied to Canadian-US exchange rate data. Networks such as backpropagation, modular, radial basis functions, linear vector quantization, fuzzy ARTMAP, and genetic reinforcement learning are examined. The purpose is to compare the performance of these networks for predicting direction (sign change) shifts in daily returns. For this classification problem, the neural nets proved superior to the naïve model, and most of the neural nets were slightly superior to the logistic model. Using multiple previous days' returns as inputs to train and test the backpropagation and logistic models resulted in no increased classification accuracy. The models were not able to detect a systematic effect of previous days' returns up to fifteen days prior to the prediction day that would increase model performance. Copyright © 2001 John Wiley & Sons, Ltd. [source]
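The logistic benchmark for the sign-change problem fits in a few lines. The return series below is synthetic and strongly mean-reverting by construction (unlike real FX data), purely so that the single lagged input has a detectable effect; in-sample accuracy is reported the same way a naïve benchmark would be scored.

```python
import math
import random

def train_logistic(xs, ys, lr=0.1, epochs=200):
    """Gradient-descent logistic fit: the benchmark model for sign classification."""
    w, b = [0.0] * len(xs[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                       # gradient of the log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Synthetic daily returns (percent) with strong mean reversion, so the
# previous day's return carries real information about tomorrow's sign.
random.seed(0)
rets = [0.1]
for _ in range(300):
    rets.append(-0.8 * rets[-1] + random.gauss(0, 0.5))

xs = [[rets[i]] for i in range(len(rets) - 1)]            # previous day's return
ys = [1 if rets[i + 1] > 0 else 0 for i in range(len(rets) - 1)]

w, b = train_logistic(xs, ys)
accuracy = sum(predict(w, b, x) == y for x, y in zip(xs, ys)) / len(ys)
```

On a near-random-walk series the same code would hover around 50 percent accuracy, which is essentially the paper's finding for inputs beyond the most recent returns.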


Empirical slip and viscosity model performance for microscale gas flow

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2005
Matthew J. McNenly
Abstract For the simple geometries of Couette and Poiseuille flows, the velocity profile maintains a similar shape from continuum to free molecular flow. Therefore, modifications to the fluid viscosity and slip boundary conditions can improve the continuum-based Navier-Stokes solution in the non-continuum non-equilibrium regime. In this investigation, the optimal modifications are found by a linear least-squares fit of the Navier-Stokes solution to the non-equilibrium solution obtained using the direct simulation Monte Carlo (DSMC) method. Models are then constructed for the Knudsen number dependence of the viscosity correction and the slip model from a database of DSMC solutions for Couette and Poiseuille flows of argon and nitrogen gas, with Knudsen numbers ranging from 0.01 to 10. Finally, the accuracy of the models is measured for non-equilibrium cases both in and outside the DSMC database. Flows outside the database include: combined Couette and Poiseuille flow, partial wall accommodation, helium gas, and non-zero convective acceleration. The models reproduce the velocity profiles in the DSMC database within an L2 error norm of 3% for Couette flows and 7% for Poiseuille flows. However, the errors in the model predictions outside the database are up to five times larger. Copyright © 2005 John Wiley & Sons, Ltd. [source]
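The fitting step can be illustrated for the Couette case: the slip-modified Navier-Stokes profile is linear in the wall-normal coordinate, and the slip length is chosen by least squares against the kinetic-theory velocity samples. Everything below is a simplified, hypothetical stand-in (synthetic "DSMC" data, a grid search instead of the paper's linear least-squares machinery), but it shows the shape of the procedure.

```python
def couette_slip(y, H, U, Ls):
    """Navier-Stokes Couette profile with equal slip length Ls at both walls."""
    return U * (y + Ls) / (H + 2.0 * Ls)

def fit_slip_length(ys, us, H, U, candidates):
    """1-D least-squares search for the slip length best matching the samples."""
    def sse(Ls):
        return sum((couette_slip(y, H, U, Ls) - u) ** 2 for y, u in zip(ys, us))
    return min(candidates, key=sse)

# Synthetic "DSMC-like" velocity samples generated with a known slip length.
H, U, true_Ls = 1.0, 1.0, 0.15
ys = [i / 10 for i in range(11)]            # wall-normal sample points
us = [couette_slip(y, H, U, true_Ls) for y in ys]

candidates = [i / 1000 for i in range(0, 501)]
best_Ls = fit_slip_length(ys, us, H, U, candidates)
```

Repeating this fit across a database of Knudsen numbers yields the tabulated slip (and, analogously, viscosity) corrections that the empirical models then parameterize.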