Model Inputs (model + input)

Terms modified by Model Inputs

  • model input parameter

Selected Abstracts


    Effect of growing watershed imperviousness on hydrograph parameters and peak discharge

    HYDROLOGICAL PROCESSES, Issue 13 2008
    Huang-jia Huang
    Abstract An increasing impervious area is quickly extending over the Wu-Tu watershed due to relentless development pressure. Impervious paving is a major result of urbanization and has the potential to produce larger flood disasters than those of the past. In this study, 40 available rainfall–runoff events were chosen to calibrate the applicable parameters of the models and to determine the relationships between the impervious surfaces and the calibrated parameters. Model inputs came from the outcomes of the block kriging method and the non-linear programming method. In the optimization process, the shuffled complex evolution method and three criteria were applied to compare the observed and simulated hydrographs. The tendencies of the variations of the parameters with their corresponding imperviousness were established through regression analysis. Ten cases were used to examine the established equations relating the parameters and impervious covers. Finally, design flood hydrographs for various return periods were derived using approaches combining a design storm, block kriging, the SCS model, and a rainfall–runoff model with the established functional relationships. These simulated flood hydrographs were used to compare and understand the past, present, and future hydrological conditions of the watershed studied. The results show that the time to peak of flood hydrographs for various storms decreased from approximately 11 h to 6 h in different decrements, whereas peak flow increased from 127 m³ s⁻¹ to 629 m³ s⁻¹ for different storm intensities. In addition, this study provides a design diagram for the peak flow ratio to help engineers and designers construct hydraulic structures efficiently and prevent damage to human life and property. Copyright © 2007 John Wiley & Sons, Ltd. [source]
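
Since the design-flood workflow couples a design storm with the SCS model, a minimal sketch of SCS curve-number rainfall–runoff partitioning may help orient readers; the curve number and storm depth below are illustrative assumptions, not values from the study.

```python
# Minimal sketch of SCS curve-number rainfall-runoff partitioning, one
# component of design-flood workflows like the one above. CN and rainfall
# depth are illustrative assumptions, not values from the study.

def scs_runoff_depth(rainfall_mm: float, curve_number: float) -> float:
    """Direct runoff depth (mm) from storm rainfall via the SCS-CN method."""
    s = 25400.0 / curve_number - 254.0   # potential maximum retention (mm)
    ia = 0.2 * s                         # initial abstraction (standard 0.2*S)
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# Example: 120 mm design storm on a partly impervious catchment (CN = 85).
print(scs_runoff_depth(120.0, 85.0))     # ~79 mm of direct runoff
```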


    Economic evaluation of erythropoiesis-stimulating agents for anemia related to cancer

    CANCER, Issue 13 2010
    Scott Klarenbach MD
    Abstract BACKGROUND: Erythropoiesis-stimulating agents (ESA) administered to cancer patients with anemia reduce the need for blood transfusions and improve quality-of-life (QOL). Concerns about toxicity have led to more restrictive recommendations for ESA use; however, the incremental costs and benefits of such a strategy are unknown. METHODS: The authors created a decision model to examine the costs and consequences of ESA use in patients with anemia and cancer from the perspective of the Canadian public healthcare system. Model inputs were informed by a recent systematic review. Extensive sensitivity analyses and scenario analysis rigorously assessed QOL benefits and more conservative ESA administration practices (initial hemoglobin [Hb] <10 g/dL, target Hb ≤12 g/dL, and chemotherapy-induced anemia only). RESULTS: Compared with supportive transfusions only, conventional ESA treatment was associated with an incremental cost per quality-adjusted life year (QALY) gained of $267,000 during a 15-week time frame. During a 1.3-year time horizon, ESA was associated with higher costs and worse clinical outcomes. In scenarios where multiple assumptions regarding QOL all favored ESA, the lowest incremental cost per QALY gained was $126,000. Analyses simulating the use of ESA in accordance with recently issued guidelines resulted in incremental cost per QALY gained of >$100,000 or ESA being dominated (greater costs with lower benefit) in the majority of the scenarios, although greater variability in the cost-utility ratio was present. CONCLUSIONS: Use of ESA for anemia related to cancer is associated with incremental cost-effectiveness ratios that are not economically attractive, even when used in the conservative fashion recommended by current guidelines. Cancer 2010. © 2010 American Cancer Society. [source]
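
For orientation, the cost-utility figures quoted above come from incremental cost-effectiveness ratio (ICER) arithmetic of the following form; the sketch uses hypothetical costs and QALYs chosen only to reproduce the order of magnitude, not the study's model inputs.

```python
# ICER arithmetic behind "cost per QALY gained" results. All numbers are
# hypothetical placeholders, not the study's model inputs.

def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost per QALY gained for a new strategy vs. a comparator."""
    d_cost = cost_new - cost_old
    d_qaly = qaly_new - qaly_old
    if d_qaly <= 0:
        # Higher cost with no QALY gain: the new strategy is "dominated",
        # as in several of the study's scenarios.
        return float("inf") if d_cost > 0 else float("nan")
    return d_cost / d_qaly

# Hypothetical ESA vs. transfusion-only values over a 15-week horizon:
print(icer(cost_new=12_000.0, cost_old=4_000.0,
           qaly_new=0.23, qaly_old=0.20))   # 8000 / 0.03 ~ $267,000 per QALY
```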


    Pan-European regional-scale modelling of water and N efficiencies of rapeseed cultivation for biodiesel production

    GLOBAL CHANGE BIOLOGY, Issue 1 2009
    MARIJN VAN DER VELDE
    Abstract The energy produced from the investment in biofuel crops needs to be weighed against the environmental impacts on soil, water, climate change and ecosystem services. A regionalized approach is needed to evaluate the environmental costs of large-scale biofuel production. We present a regional pan-European simulation of rapeseed (Brassica napus) cultivation. Rapeseed is the European Union's dominant biofuel crop, with a share of about 80% of the feedstock. To improve the assessment of the environmental impact of this biodiesel production, we performed a pan-European simulation of rapeseed cultivation at a 10 × 10 km scale with the Environmental Policy Integrated Climate (EPIC) model. The model runs with a daily time step, and model input consists of spatialized meteorological measurements and topographic, soil, land use, and farm management practices data. Default EPIC model parameters were calibrated based on literature. Modelled rapeseed yields compared satisfactorily with regional yields reported for 151 regions in 27 European Union member countries over the period 1995 to 2003, and modelled yield responses to precipitation, radiation and vapour pressure deficit were consistent with those reported at regional level. The model is currently set up so that plant nutrient stress does not occur. Total fertilizer consumption at country level was compared with IFA/FAO data. This approach allows us to evaluate environmental pressures and efficiencies arising from rapeseed cultivation and to further complete the environmental balance of biofuel production and consumption. [source]


    Inverse Modeling Approach to Allogenic Karst System Characterization

    GROUND WATER, Issue 3 2009
    N. Dörfliger
    Allogenic karst systems function in a particular way that is influenced by the type of water infiltrating through river water losses, by karstification processes, and by water quality. Management of such a system requires a good knowledge of its structure and functioning, for which a new methodology based on an inverse modeling approach appears to be well suited. This approach requires both spring and river inflow discharge measurements and a continuous record of chemical parameters in the river and at the spring. The inverse model calculates unit hydrographs and the impulse responses of fluxes from rainfall–hydraulic head or rainfall–flux data at the spring, the purpose of which is hydrograph separation. Hydrograph reconstruction is done using rainfall and river inflow data as model input and enables the ratio of each component to be defined at each time step. Using chemical data representing event and pre-event water as input, it is possible to determine the origin of spring water (either fast flow through the epikarstic zone or slow flow through the saturated zone). This study made it possible to improve a conceptual model of allogenic karst system functioning. The methodology is used to study the Bas-Agly and the Cent Font karst systems, two allogenic karst systems in Southern France. [source]
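
The use of chemical data to separate event from pre-event water rests on a two-component mass balance; a minimal sketch (with illustrative tracer concentrations, not the Bas-Agly or Cent Font data) is given below.

```python
# Two-component hydrograph separation by chemical mass balance -- the basic
# relation behind using event vs. pre-event water signatures as model input.
# Tracer concentrations below are illustrative assumptions.

def pre_event_fraction(c_stream, c_event, c_pre_event):
    """Fraction of spring discharge from pre-event (stored) water.

    Solves Q*c_stream = Qe*c_event + Qp*c_pre_event with Q = Qe + Qp.
    """
    return (c_stream - c_event) / (c_pre_event - c_event)

# Example: spring conductivity 450 uS/cm, river (event) water 200 uS/cm,
# stored karst water 600 uS/cm -> 62.5% of spring flow is pre-event water.
print(pre_event_fraction(450.0, 200.0, 600.0))
```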


    Application of the distributed hydrology soil vegetation model to Redfish Creek, British Columbia: model evaluation using internal catchment data

    HYDROLOGICAL PROCESSES, Issue 2 2003
    Andrew Whitaker
    Abstract The Distributed Hydrology Soil Vegetation Model is applied to the Redfish Creek catchment to investigate the suitability of this model for simulation of forested mountainous watersheds in interior British Columbia and other high-latitude and high-altitude areas. On-site meteorological data and GIS information on terrain parameters, forest cover, and soil cover are used to specify model input. A stepwise approach is taken in calibrating the model, in which snow accumulation and melt parameters for clear-cut and forested areas were optimized independent of runoff production parameters. The calibrated model performs well in reproducing year-to-year variability in the outflow hydrograph, including peak flows. In the subsequent model performance evaluation for simulation of catchment processes, emphasis is put on elevation and temporal differences in snow accumulation and melt, spatial patterns of snowline retreat, water table depth, and internal runoff generation, using internal catchment data as much as possible. Although the overall model performance based on these criteria is found to be good, some issues regarding the simulation of internal catchment processes remain. These issues are related to the distribution of meteorological variables over the catchment and a lack of information on spatial variability in soil properties and soil saturation patterns. Present data limitations for testing internal model accuracy serve to guide future data collection at Redfish Creek. This study also illustrates the challenges that need to be overcome before distributed physically based hydrologic models can be used for simulating catchments with fewer data resources. Copyright © 2003 John Wiley & Sons, Ltd. [source]


    MODELING THE LONG TERM IMPACTS OF USING RIGID STRUCTURES IN STREAM CHANNEL RESTORATION

    JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 6 2006
    Sue L. Niezgoda
    Abstract: Natural channel designs often incorporate rigid instream structures to protect channel banks, provide grade control, promote flow deflection, or otherwise improve channel stability. The long term impact of rigid structures on natural stream processes is relatively unknown. The objective of this study was to use long term alluvial channel modeling to evaluate the effect of rigid structures on channel processes and assess current and future stream channel stability. The study was conducted on Oliver Run, a small stream in Pennsylvania relocated due to highway construction. Field data were collected for one year along the 107 m reach to characterize the stream and provide model input, calibration, and verification data. FLUVIAL-12 was used to evaluate the long term impacts of rigid structures on natural channel adjustment, overall channel stability, and changing form and processes. Based on a consideration of model limitations and results, it was concluded that the presence of rigid structures reduced channel width-to-depth ratios, minimized bed elevation changes due to long term aggradation and degradation, limited lateral channel migration, and increased the mean bed material particle size throughout the reach. Results also showed how alluvial channel modeling can be used to improve the stream restoration design effort. [source]


    Modeling the deglaciation of the Green Bay Lobe of the southern Laurentide Ice Sheet

    BOREAS, Issue 1 2004
    CORNELIA WINGUTH
    We use a time-dependent two-dimensional ice-flow model to explore the development of the Green Bay Lobe, an outlet glacier of the southern Laurentide Ice Sheet, leading up to the time of maximum ice extent and during subsequent deglaciation (c. 30 to 8 cal. ka BP). We focus on conditions at the ice-bed interface in order to evaluate their possible impact on glacial landscape evolution. Air temperatures for model input have been reconstructed using the GRIP δ18O record calibrated to speleothem records from Missouri that cover the time periods of c. 65 to 30 cal. ka BP and 13.25 to 12.4 cal. ka BP. Using that input, the known ice extents during maximum glaciation and early deglaciation can be reproduced reasonably well. The model fails, however, to reproduce short-term ice margin retreat and readvance events during later stages of deglaciation. Model results indicate that the area exposed after the retreat of the Green Bay Lobe was characterized by permafrost until at least 14 cal. ka BP. The extensive drumlin zones that formed behind the ice margins of the outermost Johnstown phase and the later Green Lake phase are associated with modeled ice margins that were stable for at least 1000 years, high basal shear stresses (c. 100 kPa) and permafrost depths of 80–200 m. During deglaciation, basal meltwater and sliding became more important. [source]


    Network models for capillary porous media: application to drying technology

    CHEMIE-INGENIEUR-TECHNIK (CIT), Issue 6 2010
    T. Metzger Jun.-Prof.
    Abstract Network models offer an efficient pore-scale approach to investigate transport in partially saturated porous materials and are particularly suited to study capillarity. Drying is a prime model application since it involves a range of physical effects: capillary pumping, viscous liquid flow, phase transition, vapor diffusion, heat transfer, but also cracks and shrinkage. This review article gives an introduction to this modern technique addressing required model input, sketching important elements of the computational algorithm and commenting on the nature of simulation results. For the case of drying, it is illustrated how network models can help analyze the influence of pore structure on process kinetics and gain a deeper understanding of the role of individual transport phenomena. Finally, a combination of pore network model and discrete element method is presented, extending the application range to mechanical effects caused by capillary forces. [source]


    Employee Stock Option Fair-Value Estimates: Do Managerial Discretion and Incentives Explain Accuracy?

    CONTEMPORARY ACCOUNTING RESEARCH, Issue 4 2006
    Leslie Hodder
    Abstract We examine the determinants of managers' use of discretion over employee stock option (ESO) valuation-model inputs that determine ESO fair values. We also explore the consequences of such discretion. Firms exercise considerable discretion over all model inputs, and this discretion results in material differences in ESO fair-value estimates. Contrary to conventional wisdom, we find that a large proportion of firms exercise value-increasing discretion. Importantly, we find that using discretion improves predictive accuracy for about half of our sample firms. Moreover, we find that both opportunistic and informational managerial incentives together explain the accuracy of firms' ESO fair-value estimates. Partitioning on the direction of discretion improves our understanding of managerial incentives. Our analysis confirms that financial statement readers can use mandated contextual disclosures to construct powerful ex ante predictions of ex post accuracy. [source]


    Evaluating and expressing the propagation of uncertainty in chemical fate and bioaccumulation models

    ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 4 2002
    Matthew MacLeod
    Abstract First-order analytical sensitivity and uncertainty analysis for environmental chemical fate models is described and applied to a regional contaminant fate model and a food web bioaccumulation model. By assuming linear relationships between inputs and outputs, independence, and log-normal distributions of input variables, a relationship between uncertainty in input parameters and uncertainty in output parameters can be derived, yielding results that are consistent with a Monte Carlo analysis with similar input assumptions. A graphical technique is devised for interpreting and communicating uncertainty propagation as a function of variance in input parameters and model sensitivity. The suggested approach is less computationally intensive than Monte Carlo analysis and is appropriate for preliminary assessment of uncertainty when models are applied to generic environments or to large geographic areas or when detailed parameterization of input uncertainties is unwarranted or impossible. This approach is particularly useful as a starting point for identification of sensitive model inputs at the early stages of applying a generic contaminant fate model to a specific environmental scenario, as a tool to support refinements of the model and the uncertainty analysis for site-specific scenarios, or for examining defined end points. The analysis identifies those input parameters that contribute significantly to uncertainty in outputs, enabling attention to be focused on defining median values and more appropriate distributions to describe these variables. [source]
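
The propagation rule being described can be sketched compactly: with independent log-normal inputs and a model that is locally linear in log space, Var(ln Y) ≈ Σ S_i² Var(ln X_i), where S_i = ∂ln Y/∂ln X_i. The sensitivities and 95% confidence factors below are illustrative assumptions, not values from the paper.

```python
import math

# First-order propagation sketch: with independent log-normal inputs and a
# locally log-linear model, output log-variance is the sensitivity-weighted
# sum of input log-variances. Sensitivities and confidence factors are
# illustrative assumptions, not the paper's values.

def output_log_variance(sensitivities, cf_inputs):
    """sensitivities: d(lnY)/d(lnXi); cf_inputs: 95% confidence factors of Xi."""
    var_ln = [(math.log(cf) / 1.96) ** 2 for cf in cf_inputs]  # Var(ln Xi)
    return sum(s ** 2 * v for s, v in zip(sensitivities, var_ln))

# Three inputs: emission rate (S=1.0, Cf=3), half-life (S=0.6, Cf=2),
# partition coefficient (S=0.3, Cf=5).
var_y = output_log_variance([1.0, 0.6, 0.3], [3.0, 2.0, 5.0])
print(math.exp(1.96 * math.sqrt(var_y)))  # 95% confidence factor of output
```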


    The behaviour of soil process models of ammonia volatilization at contrasting spatial scales

    EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 6 2008
    R. Corstanje
    Summary Process models are commonly used in soil science to obtain predictions at a spatial scale that is different from the scale at which the model was developed, or the scale at which information on model inputs is available. When this happens, the model and its inputs require aggregation or disaggregation to the application scale, and this is a complex problem. Furthermore, the validity of the aggregated model predictions depends on whether the model describes the key processes that determine the process outcome at the target scale. Different models may therefore be required at different spatial scales. In this paper we develop a diagnostic framework which allows us to judge whether a model is appropriate for use at one or more spatial scales, both with respect to the prediction of variations at those scales and in the requirement for disaggregation of the inputs. We show that spatially nested analysis of the covariance of predictions with measured process outcomes is an efficient way to do this. This is applied to models of the processes that lead to ammonia volatilization from soil after the application of urea. We identify the component correlations at different scales of a nested scheme as the diagnostic with which to evaluate model behaviour. These correlations show how well the model emulates components of spatial variation of the target process at the scales of the sampling scheme. Aggregate correlations were identified as the most pertinent for evaluating models for prediction at particular scales, since they measure how well aggregated predictions at some scale correlate with aggregated values of the measured outcome. There are two circumstances under which models are used to make predictions. In the first case only the model is used to predict, and the most useful diagnostic is the concordance aggregate correlation. In the second case model predictions are assimilated with observations, which should correct bias in the prediction and errors in the variance; here the aggregate correlations would be the most suitable diagnostic. [source]


    Identification of application-specific critical inputs for the 1991 Johnson and Ettinger vapor intrusion algorithm

    GROUND WATER MONITORING & REMEDIATION, Issue 1 2005
    Paul C. Johnson
    At sites where soil or ground water contains chemicals of concern, there is the potential for chemical vapors to migrate through the subsurface to nearby basements, buildings, and other enclosed spaces. The 1991 Johnson and Ettinger algorithm and subsequent refinements are often used to assess the significance of this pathway and to establish target cleanup levels. To facilitate its use, the U.S. EPA distributes spreadsheets programmed with the 1991 Johnson and Ettinger algorithm. These user-friendly spreadsheets make the equations more accessible; however, the U.S. EPA spreadsheets require a large number of inputs (>20), and as a result, relationships between model inputs and outputs are not well understood and users are not able to identify and focus on the critical inputs. The U.S. EPA spreadsheets also allow users to inadvertently enter inconsistent and unreasonable sets of input values, and these often lead to unreasonable outputs. The objective of this work, therefore, is to help users develop a better understanding of the relationships between inputs and outputs so that they can identify critical inputs and also to ensure reasonableness of inputs and outputs. The 1991 Johnson and Ettinger algorithm is introduced, and differences between it and its U.S. EPA spreadsheet implementation are identified. Next, results from a parametric analysis of the algorithm are used to create a flowchart-based approach for identifying the application-specific critical inputs. Use of the flowchart-based approach is then illustrated and validated through comparison with the results of a more traditional sensitivity analysis for four scenarios. Recommendations are also given for the reformulation of inputs to minimize misapplication of the algorithm and the spreadsheets, and reasonable ranges for reformulated input values are discussed. [source]


    Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA

    HEALTH ECONOMICS, Issue 10 2007
    Anthony O'Hagan
    Abstract Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially. Copyright © 2006 John Wiley & Sons, Ltd. [source]
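
The core of the ANOVA argument can be shown in a toy sketch: the variance of per-run mean outputs overstates the true parameter (PSA) variance by the within-run patient-level variance divided by the number of patients per run, and subtracting a one-way ANOVA estimate of that term recovers it. The toy patient-level model below is an assumption for illustration, not the paper's estimators or sample-size formulae.

```python
import random, statistics

# ANOVA idea for patient-level PSA: with N sampled parameter sets and n
# simulated patients per set, between-set variance estimated from run means
# is inflated by within-run Monte Carlo noise; one-way ANOVA separates them.
# The toy "model" below is an assumption.

def simulate_patient(theta):
    return theta + random.gauss(0.0, 2.0)      # patient-level noise, sd = 2

random.seed(1)
N, n = 200, 50
run_means, within_vars = [], []
for _ in range(N):
    theta = random.gauss(10.0, 1.0)            # PSA draw of the model input
    outs = [simulate_patient(theta) for _ in range(n)]
    run_means.append(statistics.mean(outs))
    within_vars.append(statistics.variance(outs))

msw = statistics.mean(within_vars)             # within-run mean square
var_of_means = statistics.variance(run_means)  # = Var(theta) + msw/n
var_theta = var_of_means - msw / n             # ANOVA estimate of PSA variance
print(var_theta)                               # close to the true value 1.0
```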


    Snow-distribution and melt modelling for glaciers in Zackenberg river drainage basin, north-eastern Greenland

    HYDROLOGICAL PROCESSES, Issue 24 2007
    Sebastian H. Mernild
    Abstract A physically based snow-evolution modelling system (SnowModel) that includes four sub-models: MicroMet, EnBal, SnowPack, and SnowTran-3D, was used to simulate eight full-year evolutions of snow accumulation, distribution, sublimation, and surface melt from glaciers in the Zackenberg river drainage basin, in north-east Greenland. Meteorological observations from two meteorological stations were used as model inputs, and spatial snow depth observations, snow melt depletion curves from photographic time lapse, and a satellite image were used for model testing of snow and melt simulations, an approach that differs from previous SnowModel test methods used on Greenland glaciers. Modelled test-period-average end-of-winter snow water equivalent (SWE) depth for the depletion area differs by a maximum of 14 mm w.eq., or about 6%, more than the observed, and modelled test-period-average snow cover extent differs by a maximum of 5%, or 0·8 km², less than the observed. Furthermore, comparison with a satellite image indicated a 7% discrepancy between observed and modelled snow cover extent for the entire drainage basin. About 18% (31 mm w.eq.) of the solid precipitation was returned to the atmosphere by sublimation. Modelled mean annual snow melt and glacier ice melt for the glaciers in the Zackenberg river drainage basin from 1997 through 2005 (September–August) averaged 207 mm w.eq. year⁻¹ and 1198 mm w.eq. year⁻¹, respectively, yielding a total averaging 1405 mm w.eq. year⁻¹. Total modelled mean annual surface melt varied from 960 mm w.eq. year⁻¹ to 1989 mm w.eq. year⁻¹. The surface-melt period started between mid-May and the beginning of June and lasted until mid-September. Annual calculated runoff averaged 1487 mm w.eq. year⁻¹ (~150 × 10⁶ m³) (1997–2005), with variations from 1031 mm w.eq. year⁻¹ to 2051 mm w.eq. year⁻¹. The model simulated a total glacier recession averaging −1347 mm w.eq. year⁻¹ (~136 × 10⁶ m³) (1997–2005), almost equal to the value from previous basin-average hydrological water balance storage studies, −1244 mm w.eq. year⁻¹ (~125 × 10⁶ m³) (1997–2003). Copyright © 2007 John Wiley & Sons, Ltd. [source]


    SWAT2000: current capabilities and research opportunities in applied watershed modelling

    HYDROLOGICAL PROCESSES, Issue 3 2005
    J. G. Arnold
    Abstract SWAT (Soil and Water Assessment Tool) is a conceptual, continuous time model that was developed in the early 1990s to assist water resource managers in assessing the impact of management and climate on water supplies and non-point source pollution in watersheds and large river basins. SWAT is the continuation of over 30 years of model development within the US Department of Agriculture's Agricultural Research Service and was developed to 'scale up' past field-scale models to large river basins. Model components include weather, hydrology, erosion/sedimentation, plant growth, nutrients, pesticides, agricultural management, stream routing and pond/reservoir routing. The latest version, SWAT2000, has several significant enhancements, including: bacteria transport routines; urban routines; the Green and Ampt infiltration equation; an improved weather generator; the ability to read in daily solar radiation, relative humidity, wind speed and potential ET; Muskingum channel routing; and modified dormancy calculations for tropical areas. A complete set of model documentation for equations and algorithms, a user manual describing model inputs and outputs, and an ArcView interface manual are now complete for SWAT2000. The model has been recoded into Fortran 90 with a complete data dictionary, dynamic allocation of arrays and modular subroutines. Current research is focusing on bacteria, riparian zones, pothole topography, forest growth, channel downcutting and widening, and input uncertainty analysis. SWAT is now used in many countries all over the world. Recent developments in European environmental policy, such as the adoption of the European Water Framework Directive in December 2000, demand tools for integrative river basin management, and SWAT is applicable for this purpose: it is a flexible model that can be used under a wide range of different environmental conditions, as this special issue shows. The papers compiled here are the result of the first International SWAT Conference, held in August 2001 in Rauischholzhausen, Germany. More than 50 participants from 14 countries discussed their modelling experiences with the model development team from the USA. Nineteen selected papers, covering topics ranging from the newest developments, the evaluation of river basin management, interdisciplinary approaches for river basin management, the impact of land use change, and methodological aspects to models derived from SWAT, are published in this special issue. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Simulating pan-Arctic runoff with a macro-scale terrestrial water balance model

    HYDROLOGICAL PROCESSES, Issue 13 2003
    Michael A. Rawlins
    Abstract A terrestrial hydrological model, developed to simulate the high-latitude water cycle, is described, along with comparisons with observed data across the pan-Arctic drainage basin. Gridded fields of plant rooting depth, soil characteristics (texture, organic content), vegetation, and daily time series of precipitation and air temperature provide the primary inputs used to derive simulated runoff at a grid resolution of 25 km across the pan-Arctic. The pan-Arctic water balance model (P/WBM) includes a simple scheme for simulating daily changes in soil frozen and liquid water amounts, with the thaw–freeze model (TFM) driven by air temperature, modelled soil moisture content, and physiographic data. Climate time series (precipitation and air temperature) are from the National Centers for Environmental Prediction (NCEP) reanalysis project for the period 1980–2001. P/WBM-generated maximum summer active-layer thickness estimates differ from a set of observed data by an average of 12 cm at 27 sites in Alaska, with many of the differences within the variability (1σ) seen in field samples. Simulated long-term annual runoffs are in the range 100 to 400 mm year⁻¹. The highest runoffs are found across northeastern Canada, southern Alaska, and Norway, and lower estimates are noted along the highest latitudes of the terrestrial Arctic in North America and Asia. Good agreement exists between simulated and observed long-term seasonal (winter, spring, summer–fall) runoff to the ten Arctic sea basins (r = 0·84). Model water budgets are most sensitive to changes in precipitation and air temperature, whereas less effect is noted when other model parameters are altered. Increasing daily precipitation by 25% amplifies annual runoff by 50 to 80% for the largest Arctic drainage basins. Ignoring soil ice by eliminating the TFM sub-model leads to runoffs that are 7 to 27% lower than the control run. The results of these model sensitivity experiments, along with other uncertainties in both observed validation data and model inputs, emphasize the need to develop improved spatial data sets of key geophysical quantities (particularly climate time series) to better estimate terrestrial Arctic hydrological budgets. Copyright © 2003 John Wiley & Sons, Ltd. [source]
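
The precipitation-sensitivity experiment described (raising daily precipitation by 25% and comparing annual runoff) can be mimicked with a toy daily bucket water balance; the storage capacity, evapotranspiration demand, and forcing below are illustrative assumptions, not P/WBM parameters.

```python
# Toy sensitivity experiment in the spirit of the P/WBM runs: perturb daily
# precipitation by +25% in a minimal bucket water balance and compare annual
# runoff. The bucket parameters and forcing are illustrative.

def annual_runoff(precip, et_demand=1.0, capacity=100.0):
    """Daily bucket model: soil stores water up to `capacity`; excess runs off."""
    store, runoff = 75.0, 0.0
    for p in precip:
        store = max(store + p - et_demand, 0.0)   # add rain, remove ET
        if store > capacity:                      # saturation excess -> runoff
            runoff += store - capacity
            store = capacity
    return runoff

base = [2.0] * 365                                # mm/day, uniform toy forcing
q0 = annual_runoff(base)
q1 = annual_runoff([1.25 * p for p in base])      # +25% precipitation
print(q0, q1, round(100.0 * (q1 - q0) / q0, 1))   # ~54% more annual runoff
```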


    Modelling plumes of overland flow from logging tracks

    HYDROLOGICAL PROCESSES, Issue 12 2002
    P. B. Hairsine
    Abstract Most land-based forestry systems use extensive networks of unsealed tracks to access the timber resource. These tracks are normally drained by constructing cross-banks, or water bars, across the tracks immediately following logging. Cross-banks serve three functions in controlling sediment movement within forestry compartments: (1) they define the specific catchment area of the snig track (also known as a skid trail) so that overland flow does not develop sufficient energy to cause gullies, and sheet and rill erosion is reduced; (2) they induce some sediment deposition as flow velocity reduces at the cross-bank; and (3) they redirect overland flow into the adjacent general harvesting area (GHA) so that further sediment deposition may take place. This paper describes a simple model that predicts the third of these functions, in which the rate of runoff from the track is combined with spatial attributes of the track and stream network. Predictions of the extent of the overland flow plumes and the volume of water delivered to streams are presented probabilistically for a range of rainfall-event scenarios, with rainfall intensity, time since logging and compartment layout as model inputs. Generic equations guiding the trade-off between inter-cross-bank length and the flow path distance from cross-bank outlet to the stream network needed for infiltration of track runoff are derived. Copyright © 2002 John Wiley & Sons, Ltd. [source]
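
The trade-off captured by those generic equations can be sketched under strongly simplifying assumptions (steady rainfall, a fully impervious track, uniform spare infiltration capacity in the GHA): the runoff generated along one inter-cross-bank segment must infiltrate along the plume's flow path before it reaches the stream. All geometries and rates below are hypothetical.

```python
# Sketch of the cross-bank trade-off: longer spacing between cross-banks
# generates more track runoff, which needs a longer overland flow path in
# the GHA to fully infiltrate. All rates and geometries are illustrative.

def required_plume_length(spacing_m, track_width_m, rain_mm_h,
                          gha_infiltration_mm_h, plume_width_m):
    """Flow-path length (m) needed to infiltrate all runoff from one segment."""
    runoff_m3_h = (rain_mm_h / 1000.0) * spacing_m * track_width_m
    excess_capacity_mm_h = gha_infiltration_mm_h - rain_mm_h  # spare capacity
    if excess_capacity_mm_h <= 0:
        return float("inf")   # GHA itself saturates; plume reaches the stream
    uptake_m3_h_per_m = (excess_capacity_mm_h / 1000.0) * plume_width_m
    return runoff_m3_h / uptake_m3_h_per_m

# 80 m cross-bank spacing, 4 m wide track, 20 mm/h storm,
# 60 mm/h GHA infiltration capacity, 5 m wide plume:
print(required_plume_length(80.0, 4.0, 20.0, 60.0, 5.0))  # 32 m buffer needed
```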


    A data mining approach to financial time series modelling and forecasting

    INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 4 2001
    Zoran Vojinovic
    This paper describes one of the relatively new data mining techniques that can be used to forecast the foreign exchange time series process. The research aims to contribute to the development and application of such techniques by exposing them to difficult real-world (non-toy) data sets. The results reveal that the prediction of a Radial Basis Function Neural Network model for forecasting the daily US$/NZ$ closing exchange rates is significantly better than the prediction of a traditional linear autoregressive model, both in directional change and in prediction of the exchange rate itself. We have also investigated the impact of the number of model inputs (model order), the number of hidden layer neurons and the size of the training data set on prediction accuracy. In addition, we have explored how three different methods for placement of Gaussian radial basis functions affect predictive quality and singled out the best one. Copyright © 2001 John Wiley & Sons, Ltd. [source]
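
A minimal sketch of an RBF network forecaster of the kind described: lagged values of the series as model inputs, Gaussian basis functions with randomly chosen centres, and output weights fitted by linear least squares. The synthetic series, lag order, centre count, and kernel width heuristic are all illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Minimal RBF network for one-step-ahead forecasting: lagged values as
# inputs, Gaussian bases, least-squares output weights. Data, lag order and
# centre placement are illustrative assumptions.

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0, 0.01, 600)) + 0.5          # synthetic "FX rate"

p = 3                                                  # model order (lags)
X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
y = x[p:]                                              # next-step targets

centres = X[rng.choice(len(X), 20, replace=False)]     # simple centre choice
width = np.median(np.abs(X - X.mean(0)))               # crude kernel width

def design(Z):
    d2 = ((Z[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

w, *_ = np.linalg.lstsq(design(X[:-50]), y[:-50], rcond=None)  # train
pred = design(X[-50:]) @ w                                     # test forecasts
print(np.mean((pred - y[-50:]) ** 2))                          # test MSE
```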


    Shear band evolution and accumulated microstructural development in Cosserat media

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 10 2004
    A. Tordesillas
    Abstract This paper prepares the ground for the continuum analysis of shear band evolution using a Cosserat/micropolar constitutive equation derived from micromechanical considerations. The nature of the constitutive response offers two key advantages over other existing models. Firstly, its non-local character obviates the mathematical difficulties of traditional analyses, and facilitates an investigation of the shear band evolution (i.e. the regime beyond the onset of localization). Secondly, the constitutive model parameters are physical properties of particles and their interactions (e.g. particle stiffness coefficients, coefficients of inter-particle rolling friction and sliding friction), as opposed to poorly understood fitting parameters. In this regard, the model is based on the same material properties used as model inputs to a discrete element (DEM) analysis, therefore, the micromechanics approach provides the vehicle for incorporating results not only from physical experiments but also from DEM simulations. Although the capabilities of such constitutive models are still limited, much can be discerned from their general rate form. In this paper, an attempt is made to distinguish between those aspects of the continuum theory of localization that are independent of the constitutive model, and those that require significant advances in the understanding of micromechanics. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Moment independent and variance-based sensitivity analysis with correlations: An application to the stability of a chemical reactor

    INTERNATIONAL JOURNAL OF CHEMICAL KINETICS, Issue 11 2008
    E. Borgonovo
    Recent works have attracted interest toward sensitivity measures that use the entire model output distribution, without dependence on any of its particular moments (e.g., variance). However, the computation of moment-independent importance measures in the presence of dependencies among model inputs has not been dealt with yet. This work has two purposes. On the one hand, to introduce moment independent techniques in the analysis of chemical reaction models. On the other hand, to allow their computation in the presence of correlations. To do so, a new approach based on Gibbs sampling is presented that allows the joint estimation of variance-based and moment independent sensitivity measures in the presence of correlations. The application to the stability of a chemical reactor is then discussed, allowing full consideration of historical data that included a correlation coefficient of 0.7 between two of the model parameters. © 2008 Wiley Periodicals, Inc. Int J Chem Kinet 40: 687–698, 2008 [source]
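
The moment-independent measure in question is delta_i = ½ E_Xi[∫ |f_Y(y) − f_{Y|Xi}(y)| dy]. The sketch below estimates it with simple histograms; where the paper uses Gibbs sampling to handle the correlated inputs, this illustration just draws correlated normals directly (with the 0.7 correlation mentioned in the abstract) and uses a toy model, both assumptions for demonstration only.

```python
import numpy as np

# Histogram sketch of Borgonovo's moment-independent delta:
#   delta_i = 0.5 * E_Xi [ integral |f_Y(y) - f_{Y|Xi}(y)| dy ].
# Correlated normal inputs (rho = 0.7) and the toy model are assumptions;
# the paper estimates conditional densities via Gibbs sampling instead.

rng = np.random.default_rng(42)
n = 200_000
cov = [[1.0, 0.7], [0.7, 1.0]]
x1, x2 = rng.multivariate_normal([0, 0], cov, n).T
y = x1 + 0.5 * x2**2                             # toy model output

edges = np.linspace(y.min(), y.max(), 60)        # common y-grid
f_y, _ = np.histogram(y, edges, density=True)    # unconditional density

bins = np.quantile(x1, np.linspace(0, 1, 21))    # condition on X1 by quantile
delta, width = 0.0, np.diff(edges)
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (x1 >= lo) & (x1 < hi)
    f_cond, _ = np.histogram(y[sel], edges, density=True)
    tv = 0.5 * np.sum(np.abs(f_cond - f_y) * width)  # total variation
    delta += tv * sel.mean()                         # weight by P(bin)
print(delta)                                         # delta for input X1
```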


    Measuring predictability: theory and macroeconomic applications

    JOURNAL OF APPLIED ECONOMETRICS, Issue 6 2001
    Francis X. Diebold
    We propose a measure of predictability based on the ratio of the expected loss of a short-run forecast to the expected loss of a long-run forecast. This predictability measure can be tailored to the forecast horizons of interest, and it allows for general loss functions, univariate or multivariate information sets, and covariance stationary or difference stationary processes. We propose a simple estimator, and we suggest resampling methods for inference. We then provide several macroeconomic applications. First, we illustrate the implementation of predictability measures based on fitted parametric models for several US macroeconomic time series. Second, we analyze the internal propagation mechanism of a standard dynamic macroeconomic model by comparing the predictability of model inputs and model outputs. Third, we use predictability as a metric for assessing the similarity of data simulated from the model and actual data. Finally, we outline several non-parametric extensions of our approach. Copyright © 2001 John Wiley & Sons, Ltd. [source]
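
The proposed measure can be made concrete with a worked example: for a covariance stationary AR(1) under quadratic loss, the expected forecast losses have a closed form, so the predictability ratio is easy to compute. The persistence values and horizons below are illustrative, not the paper's applications.

```python
# Sketch of the predictability measure: one minus the ratio of expected
# short-horizon to long-horizon forecast loss. Closed form for an AR(1)
# under quadratic loss; rho and the horizons are illustrative.

def ar1_mse(rho: float, sigma2: float, h: int) -> float:
    """h-step-ahead forecast MSE of an AR(1): sigma2 * sum of rho^(2i), i < h."""
    return sigma2 * sum(rho ** (2 * i) for i in range(h))

def predictability(rho: float, j: int, k: int, sigma2: float = 1.0) -> float:
    """P = 1 - E[loss at short horizon j] / E[loss at long horizon k]."""
    return 1.0 - ar1_mse(rho, sigma2, j) / ar1_mse(rho, sigma2, k)

# A persistent series (rho = 0.95) is highly predictable one step ahead
# relative to a 40-step horizon; a near-white-noise series is not.
print(predictability(0.95, j=1, k=40))   # ~0.90
print(predictability(0.10, j=1, k=40))   # ~0.01
```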


    STORMFLOW SIMULATION USING A GEOGRAPHICAL INFORMATION SYSTEM WITH A DISTRIBUTED APPROACH

    JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 4 2001
    Zhongbo Yu
    ABSTRACT: With the increasing availability of digital and remotely sensed data such as land use, soil texture, and digital elevation models (DEMs), geographic information systems (GIS) have become an indispensable tool in preprocessing data sets for watershed hydrologic modeling and post-processing simulation results. However, model inputs and outputs must be transferred between the model and the GIS. These transfers can be greatly simplified by incorporating the model itself into the GIS environment. To this end, a simple hydrologic model, which incorporates the curve number method of rainfall-runoff partitioning, the ground-water base-flow routine, and the Muskingum flow routing procedure, was implemented in the GIS. The model interfaces directly with stream network, flow direction, and watershed boundary data generated using standard GIS terrain analysis tools; and while the model is running, various data layers may be viewed at each time step using the full display capabilities. The terrain analysis tools were first used to delineate the drainage basins and stream networks for the Susquehanna River. Then the model was used to simulate the hydrologic response of the Upper West Branch of the Susquehanna to two different storms. The simulated streamflow hydrographs compare well with the observed hydrographs at the basin outlet. [source]
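
Of the three components named above, the Muskingum routing step is the most compact to illustrate. Below is a sketch of the standard Muskingum recursion; the storage constant K, weighting factor X, and inflow hydrograph are illustrative values, not those of the Susquehanna application.

```python
# Sketch of Muskingum channel routing: O[t+1] = C0*I[t+1] + C1*I[t] + C2*O[t],
# with coefficients derived from storage constant K and weighting factor X.
# K, X and the inflow hydrograph below are illustrative.

def muskingum_route(inflow, K=2.0, X=0.2, dt=1.0):
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom      # note: dt >= 2*K*X keeps c0 >= 0
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom
    out = [inflow[0]]                  # assume initial steady state
    for t in range(1, len(inflow)):
        out.append(c0 * inflow[t] + c1 * inflow[t - 1] + c2 * out[-1])
    return out

storm = [10, 30, 70, 100, 80, 55, 35, 20, 12, 10]     # m3/s inflow hydrograph
print([round(q, 1) for q in muskingum_route(storm)])  # attenuated, lagged peak
```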


    Calibration of pesticide leaching models: critical review and guidance for reporting

    PEST MANAGEMENT SCIENCE (FORMERLY: PESTICIDE SCIENCE), Issue 8 2002
    Igor G Dubus
    Abstract Calibration of pesticide leaching models may be undertaken to evaluate the ability of models to simulate experimental data, to assist in their parameterisation where values for input parameters are difficult to determine experimentally, to determine values for specific model inputs (eg sorption and degradation parameters) and to allow extrapolations to be carried out. Although calibration of leaching models is a critical phase in the assessment of pesticide exposure, lack of guidance means that calibration procedures default to the modeller. This may result in different calibration and extrapolation results for different individuals depending on the procedures used, and thus may influence decisions regarding the placement of crop-protection products on the market. A number of issues are discussed in this paper including data requirements and assessment of data quality, the selection of a model and parameters for performing calibration, the use of automated calibration techniques as opposed to more traditional trial-and-error approaches, difficulties in the comparison of simulated and measured data, differences in calibration procedures, and the assessment of parameter values derived by calibration. Guidelines for the reporting of calibration activities within the scope of pesticide registration are proposed. © 2002 Society of Chemical Industry [source]


    Structural acclimation and radiation regime of silver fir (Abies alba Mill.) shoots along a light gradient

    PLANT CELL & ENVIRONMENT, Issue 3 2003
    A. CESCATTI
    ABSTRACT Shoot architecture has been investigated using the ratio of mean shoot silhouette area to total needle area (STAR) as a structural index of needle clumping in shoot space, and as the effective extinction coefficient of needle area. Although STAR can be used effectively for the prediction of canopy gap fraction, it does not provide information about the within-shoot radiative regime. For this purpose, the estimation of three architectural properties of the shoots is required: needle area density, angular distribution and spatial aggregation. To estimate these features, we developed a method based on the inversion of a Markov three-dimensional interception model. This approach is based on the turbid medium approximation for needle area in the shoot volume, and assumes an ellipsoidal angular distribution of the normals to the needle area. Observed shoot dimensions and silhouette areas for different vertical and azimuth angles (AS) are used as model inputs. The shape coefficient of the ellipsoidal distribution (c) and the Markov clumping index (λ0) are estimated by a least squares procedure, in order to minimize the differences between model predictions and measurements of AS. This methodology was applied to silver fir (Abies alba Mill.) shoots collected in a mixed fir–beech–spruce forest in the Italian Alps. The model worked effectively over the entire range of shoot morphologies: c ranged from 1 to 8 and λ0 from 0·3 to 1, moving from the top to the base of the canopy. Finally, the shoot model was applied to reconstruct the within-shoot light regime, and the potential of this technique in upscaling photosynthesis to the canopy level is discussed. [source]


    Assessing the effectiveness of conservation management decisions: likely effects of new protection measures for Hector's dolphin (Cephalorhynchus hectori)

    AQUATIC CONSERVATION: MARINE AND FRESHWATER ECOSYSTEMS, Issue 3 2010
    Elisabeth Slooten
    Abstract 1.Fisheries bycatch affects many species of marine mammals, seabirds, turtles and other marine animals. 2.New Zealand's endemic Hector's dolphins overlap with gillnet and trawl fisheries throughout their geographic range. The species is listed as Endangered by the IUCN. In addition, the North Island subspecies has been listed as Critically Endangered. 3.Estimates of catch rates in commercial gillnets from an observer programme (there are no quantitative estimates of bycatch by amateur gillnetters or in trawl fisheries) were used in a simple population viability analysis to predict the impact of this fishery under three scenarios: Option (A) status-quo management, (B) new regulations announced by the Minister of Fisheries in 2008 and (C) total protection. 4.Uncertainty in estimates of population size and growth rate, number of dolphins caught and other model inputs are explicitly included in the analysis. Sensitivity analyses are carried out to examine the effect of variation in catch rate and the extent to which fishing effort is removed from protected areas but displaced to unprotected areas. 5.These methods are applicable to many other situations in which animals are removed from populations, whether deliberately (e.g. fishing) or not (e.g. bycatch). 6.The current Hector's dolphin population is clearly depleted, at an estimated 27% of the 1970 population. Population projections to 2050 under Options A and B predict that the total population is likely to continue declining. In the case of Option B this is driven mainly by continuing bycatch due to the much weaker protection measures on the South Island west coast. 7.Without fishing mortality (Option C) all populations are projected to increase, with the total population approximately doubling by 2050 and reaching half of its 1970 population size in just under 40 years. Copyright © 2009 John Wiley & Sons, Ltd. [source]