Models Used

  Selected Abstracts


    Evaluation of 6 Prognostic Models Used to Calculate Mortality Rates in Elderly Heart Failure Patients With a Fatal Heart Failure Admission

    CONGESTIVE HEART FAILURE, Issue 5 2010
    Andria L. Nutter
    The objective was to evaluate 6 commonly used heart failure (HF) prognostic models in an elderly, fatal HF population. Predictive models have been established to quantify risk among HF patients. The validation of these models has not been adequately studied, especially in an elderly cohort. Applying a single-center, retrospective study of serially admitted HF patients who died while in the hospital or within 30 days of discharge, the authors evaluated 6 prognostic models: the Seattle Heart Failure Model (SHFM), Heywood's model, Classification and Regression Tree (CART) Analysis, the Heart Failure Survival Score (HFSS), Heart Failure Risk Scoring System, and Pocock's score. Eighty patients were included (mean age, 82.7 ± 8.2 years). Twenty-three patients (28.75%) died in the hospital. The remainder died within 30 days of discharge. The models' predictions varied considerably from one another and underestimated the patients' actual mortality. This study demonstrates that these models underestimate the mortality risk in an elderly cohort at or approaching the end of life. Moreover, the predictions made by each model vary greatly from one another. Many of the models used were not intended for calculation during hospitalization. Development of improved models for the range of patients with HF syndromes is needed. Congest Heart Fail. 2010;16:196–201. © 2010 Wiley Periodicals, Inc. [source]


    Validation of Numerical Ground Water Models Used to Guide Decision Making

    GROUND WATER, Issue 2 2004
    Ahmed E. Hassan
    Many sites of ground water contamination rely heavily on complex numerical models of flow and transport to develop closure plans. This complexity has created a need for tools and approaches that can build confidence in model predictions and provide evidence that these predictions are sufficient for decision making. Confidence building is a long-term, iterative process and the author believes that this process should be termed model validation. Model validation is a process, not an end result. That is, the process of model validation cannot ensure acceptable prediction or quality of the model. Rather, it provides an important safeguard against faulty models or inadequately developed and tested models. If model results become the basis for decision making, then the validation process provides evidence that the model is valid for making decisions (not necessarily a true representation of reality). Validation, verification, and confirmation are concepts associated with ground water numerical models that not only do not represent established and generally accepted practices, but there is not even widespread agreement on the meaning of the terms as applied to models. This paper presents a review of model validation studies that pertain to ground water flow and transport modeling. Definitions, literature debates, previously proposed validation strategies, and conferences and symposia that focused on subsurface model validation are reviewed and discussed. The review is general and focuses on site-specific, predictive ground water models used for making decisions regarding remediation activities and site closure. The aim is to provide a reasonable starting point for hydrogeologists facing model validation for ground water systems, thus saving a significant amount of time, effort, and cost. This review is also aimed at reviving the issue of model validation in the hydrogeologic community and stimulating the thinking of researchers and practitioners to develop practical and efficient tools for evaluating and refining ground water predictive models. [source]


    Status of Observational Models Used in Design and Control of Products and Processes

    COMPREHENSIVE REVIEWS IN FOOD SCIENCE AND FOOD SAFETY, Issue 1 2008
    Shyam S. Sablani
    This article is part of a collection entitled "Models for Safety, Quality, and Competitiveness of the Food Processing Sector," published in Comprehensive Reviews in Food Science and Food Safety. It has been peer-reviewed and was written as a follow-up of a pre-IFT workshop, partially funded by the USDA NRI grant 2005-35503-16208. ABSTRACT: Modeling techniques can play a vital role in developing and characterizing food products and processes. Physical, chemical, and biological changes that take place during food and bioproduct processing are very complex and experimental investigation may not always be possible due to time, cost, effort, and skills needed. In some cases even experiments are not feasible to conduct. Often it is difficult to visualize the complex behavior of a data set. In addition, modeling is a must for process design, optimization, and control. With the rapid development of computer technology over the past few years, more and more food scientists have begun to use computer-aided modeling techniques. Observation-based modeling methods can be very useful where time and resources do not allow complete physics-based understanding of the process. This review discusses the state of selected observation-based modeling techniques in the context of industrial food processing. [source]


    A Risk-Cost Optimized Maintenance Strategy for Corrosion-Affected Concrete Structures

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2007
    Chun-Qing Li
    It is also observed that some severely deteriorated concrete structures survive for many years without maintenance. This raises the question of why and how to maintain corrosion-affected concrete structures, in particular in the climate of an increasing scarcity of resources. The present article attempts to formulate a maintenance strategy based on risk-cost optimization of a structure during its whole service life. A time-dependent reliability method is employed to determine the probability of exceeding a limit state at each phase of the service life. To facilitate practical application of the formulated maintenance strategy, an algorithm is developed and programmed in a user-friendly manner with a worked example. A merit of the proposed maintenance strategy is that models used in risk assessment for corrosion-affected concrete structures are related to some of the design criteria used by practitioners. It is found in the article that there exists an optimal number of maintenances for cracking and delamination that returns the minimum total cost for the structure in its whole life. The maintenance strategy presented in the article can help structural engineers, operators, and asset managers develop a cost-effective management scheme for corrosion-affected concrete structures. [source]
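
    As a rough illustration of the risk-cost trade-off described in this abstract, the sketch below balances the discounted cost of periodic maintenance against the discounted expected cost of exceeding a limit state and searches for the number of maintenance actions with the lowest expected total cost. The Weibull deterioration model, cost figures and discount rate are invented assumptions for illustration, not the paper's formulation.

```python
# Toy risk-cost optimisation for maintenance planning (illustrative only; the
# Weibull deterioration model and all parameter values below are assumptions).
import numpy as np

SERVICE_LIFE = 60.0          # years
C_MAINT = 20.0               # cost of one maintenance action (arbitrary units)
C_FAIL = 500.0               # consequence cost if the limit state is exceeded
DISCOUNT = 0.04              # annual discount rate

def annual_failure_prob(t_since_maint, eta=40.0, beta=2.5):
    """Probability of exceeding the limit state in the year after time t."""
    cdf = lambda t: 1.0 - np.exp(-(t / eta) ** beta)
    return cdf(t_since_maint + 1.0) - cdf(t_since_maint)

def expected_total_cost(n_maint):
    """Discounted maintenance cost plus discounted expected failure cost."""
    times = np.linspace(0, SERVICE_LIFE, n_maint + 2)[1:-1]  # evenly spaced repairs
    cost = sum(C_MAINT / (1 + DISCOUNT) ** t for t in times)
    last = 0.0
    for year in range(int(SERVICE_LIFE)):
        if any(np.isclose(year, times)):
            last = year                      # maintenance resets deterioration
        p = annual_failure_prob(year - last)
        cost += p * C_FAIL / (1 + DISCOUNT) ** year
    return cost

costs = {n: expected_total_cost(n) for n in range(0, 9)}
best = min(costs, key=costs.get)
print(f"optimal number of maintenances: {best} (expected cost {costs[best]:.1f})")
```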


    Positive selection for CD90 as a purging option in acute myeloid leukemia stem cell transplants

    CYTOMETRY, Issue 1 2008
    Nicole Feller
    Abstract Background: Several studies showed the benefit of purging of acute myeloid leukemia (AML) stem cell transplants. We reported previously that purging by positive selection of CD34+ and CD133+ cells resulted in a 3–4 log tumor cell reduction (TCR) in CD34− and/or CD133− AML, but that this approach is potentially applicable in only about 50% of cases. Similar to CD34 and CD133, CD90 marks the hematopoietic CD34 positive stem cells capable of full hematopoietic recovery after myeloablative chemotherapy, and therefore, in the present study, we explored whether a similar purging approach is possible using CD90. Methods: CD90 expression was established by flow cytometry on the clonogenic AML CD34+ blast population in diagnosis samples. Positivity was defined as >3% CD90 (CD34+) expression on blasts. For the calculation of the efficacy of TCR by positive selection, AML blasts were recognized by either prelabeling diagnosis blasts with CD45-FITC in spiking model experiments or using expression of leukemia associated marker combinations both in spiking experiments and in real transplants. Results: In 119 patients with AML and myelodysplastic syndrome, we found coexpression of CD34 and CD90 (>3%) in 42 cases (35%). In AML patients 60 years or younger, representing the patients who are eligible for transplantation, only 23% (16/69) of the patients showed CD90 expression. Positive selection for CD90 in transplants containing CD90 negative AML resulted in a 2.8–4 log TCR in the models used. Conclusions: Purging by positive selection using CD90 can potentially be applied effectively in the majority of AML patients 60 years or younger. © 2007 Clinical Cytometry Society [source]
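
    For readers unfamiliar with the log tumor cell reduction (TCR) metric quoted above, the snippet below shows the standard log10-ratio calculation; the cell counts in the example are hypothetical and chosen only to land inside the 2.8–4 log range reported here.

```python
import math

def log_tumor_cell_reduction(blasts_before: float, blasts_after: float) -> float:
    """Log10 tumour-cell reduction achieved by the purging step."""
    return math.log10(blasts_before / blasts_after)

# e.g. 1e6 leukaemic cells spiked into the graft, 250 detectable after selection
print(f"TCR = {log_tumor_cell_reduction(1e6, 250):.1f} log")  # -> 3.6 log
```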


    Hysteretic models that incorporate strength and stiffness deterioration

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 12 2005
    Luis F. Ibarra
    Abstract This paper presents the description, calibration and application of relatively simple hysteretic models that include strength and stiffness deterioration properties, features that are critical for demand predictions as a structural system approaches collapse. Three of the basic hysteretic models used in seismic demand evaluation are modified to include deterioration properties: bilinear, peak-oriented, and pinching. The modified models include most of the sources of deterioration: i.e. various modes of cyclic deterioration and softening of the post-yielding stiffness, and also account for a residual strength after deterioration. The models incorporate an energy-based deterioration parameter that controls four cyclic deterioration modes: basic strength, post-capping strength, unloading stiffness, and accelerated reloading stiffness deterioration. Calibration of the hysteretic models on steel, plywood, and reinforced-concrete components demonstrates that the proposed models are capable of simulating the main characteristics that influence deterioration. An application of a peak-oriented deterioration model in the seismic evaluation of single-degree-of-freedom (SDOF) systems is illustrated. The advantages of using deteriorating hysteretic models for obtaining the response of highly inelastic systems are discussed. Copyright © 2005 John Wiley & Sons, Ltd. [source]
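
    A minimal sketch of an energy-based cyclic strength-deterioration rule of the kind these models use is given below: each excursion consumes part of a reference hysteretic energy capacity, and the resulting deterioration parameter scales the strength down. Parameter names and values are illustrative assumptions, not the calibrated model from the paper.

```python
# Minimal sketch of an energy-based cyclic strength-deterioration rule
# (illustrative; parameters are not calibrated to any component).
def deteriorated_strengths(hysteretic_energies, f_yield, e_capacity, c=1.0):
    """Scale the yield strength after each inelastic excursion.

    hysteretic_energies : energy dissipated in each excursion i (E_i)
    e_capacity          : reference hysteretic energy dissipation capacity (E_t)
    c                   : exponent controlling the deterioration rate
    """
    strengths, f = [], f_yield
    dissipated = 0.0
    for e_i in hysteretic_energies:
        dissipated += e_i
        beta = (e_i / (e_capacity - dissipated)) ** c   # deterioration parameter
        f *= (1.0 - beta)                               # basic strength deterioration
        strengths.append(f)
    return strengths

print(deteriorated_strengths([5.0, 6.0, 7.0], f_yield=100.0, e_capacity=60.0))
```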


    The Economist as Engineer: Game Theory, Experimentation, and Computation as Tools for Design Economics

    ECONOMETRICA, Issue 4 2002
    Alvin E. Roth
    Economists have lately been called upon not only to analyze markets, but to design them. Market design involves a responsibility for detail, a need to deal with all of a market's complications, not just its principal features. Designers therefore cannot work only with the simple conceptual models used for theoretical insights into the general working of markets. Instead, market design calls for an engineering approach. Drawing primarily on the design of the entry level labor market for American doctors (the National Resident Matching Program), and of the auctions of radio spectrum conducted by the Federal Communications Commission, this paper makes the case that experimental and computational economics are natural complements to game theory in the work of design. The paper also argues that some of the challenges facing both markets involve dealing with related kinds of complementarities, and that this suggests an agenda for future theoretical research. [source]


    Dietary uptake models used for modeling the bioaccumulation of organic contaminants in fish

    ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 4 2008
    M. Craig Barber
    Abstract Numerous models have been developed to predict the bioaccumulation of organic chemicals in fish. Although chemical dietary uptake can be modeled using assimilation efficiencies, bioaccumulation models fall into two distinct groups. The first group implicitly assumes that assimilation efficiencies describe the net chemical exchanges between fish and their food. These models describe chemical elimination as a lumped process that is independent of the fish's egestion rate or as a process that does not require an explicit fecal excretion term. The second group, however, explicitly assumes that assimilation efficiencies describe only actual chemical uptake and formulates chemical fecal and gill excretion as distinct, thermodynamically driven processes. After reviewing the derivations and assumptions of the algorithms that have been used to describe chemical dietary uptake of fish, their application, as implemented in 16 published bioaccumulation models, is analyzed for largemouth bass (Micropterus salmoides), walleye (Sander vitreus = Stizostedion vitreum), and rainbow trout (Oncorhynchus mykiss) that bioaccumulate an unspecified, poorly metabolized, hydrophobic chemical possessing a log KOW of 6.5 (i.e., a chemical similar to a pentachlorobiphenyl). [source]
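
    To make the distinction between the two model groups concrete, here is a deliberately simplified one-compartment sketch: the first branch folds fecal losses into a lumped elimination constant applied to a net assimilation efficiency, while the second uses a gross assimilation efficiency with explicit gill and fecal (egestion) excretion terms. All rate constants are invented for illustration and do not correspond to any of the 16 reviewed models.

```python
def fish_concentration(c_diet, days, feeding_rate=0.02, lumped=True,
                       alpha_net=0.4, k_elim_lumped=0.010,
                       alpha_gross=0.6, k_gill=0.004, k_fecal=0.006):
    """Daily time-stepped whole-body chemical concentration (arbitrary units)."""
    c = 0.0
    history = []
    for _ in range(days):
        if lumped:
            # Group 1: net assimilation efficiency; elimination is a single
            # lumped constant with no explicit fecal-excretion term.
            c += alpha_net * feeding_rate * c_diet - k_elim_lumped * c
        else:
            # Group 2: gross dietary assimilation with distinct gill and
            # fecal (egestion) excretion terms.
            c += alpha_gross * feeding_rate * c_diet - (k_gill + k_fecal) * c
        history.append(c)
    return history

print(fish_concentration(100.0, 365, lumped=True)[-1],
      fish_concentration(100.0, 365, lumped=False)[-1])
```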


    Assessing trace-metal exposure to American dippers in mountain streams of southwestern British Columbia, Canada

    ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 4 2005
    Christy A. Morrissey
    Abstract To develop a suitable biomonitor of metal pollution in watersheds, we examined trends in exposure to nine trace elements in the diet (benthic invertebrates and fish), feathers (n = 104), and feces (n = 14) of an aquatic passerine, the American dipper (Cinclus mexicanus), from the Chilliwack watershed in British Columbia, Canada. We hypothesized that key differences may exist in exposure to metals for resident dippers that occupy the main river year-round and altitudinal migrants that breed on higher elevation tributaries because of differences in prey metal levels between locations or possible differences in diet composition. Metals most commonly detected in dipper feather samples in decreasing order were Zn > Cu > Hg > Se > Pb > Mn > Cd > Al > As. Resident dipper feathers contained significantly higher mean concentrations of mercury (0.64 µg/g dry wt), cadmium (0.19 µg/g dry wt), and copper (10.8 µg/g dry wt) relative to migrants. Mass balance models used to predict daily metal exposure for dippers with different diets and breeding locations within a watershed showed that variation in metal levels primarily was attributed to differences in the proportion of fish and invertebrates in the diet of residents and migrants. In comparing predicted metal exposure values to tolerable daily intakes (TDI), we found that most metals were below or within the range of TDI, except selenium, aluminum, and zinc. Other metals, such as cadmium, copper, and arsenic, were only of concern for dippers mainly feeding on insects; mercury was only of concern for dippers consuming high fish diets. The models were useful tools to demonstrate how shifts in diet and breeding location within a single watershed can result in changes in exposure that may be of toxicological significance. [source]
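
    A back-of-the-envelope version of such a mass-balance calculation is sketched below: daily metal intake is summed over diet fractions and prey concentrations, normalised by body mass, and compared with a tolerable daily intake. Every number here (body mass, intake rate, diet fractions, concentrations, TDI) is an invented placeholder, not a value from the study.

```python
# Hypothetical mass-balance sketch of daily dietary mercury exposure for a
# dipper, compared against an assumed tolerable daily intake (TDI).
BODY_MASS_KG = 0.055          # adult American dipper, approximate
INTAKE_G_PER_DAY = 15.0       # total dry-mass food intake (assumed)

def daily_exposure(diet_fractions, prey_conc_ug_per_g):
    """Return exposure in ug metal per kg body weight per day."""
    intake_ug = sum(diet_fractions[prey] * INTAKE_G_PER_DAY * prey_conc_ug_per_g[prey]
                    for prey in diet_fractions)
    return intake_ug / BODY_MASS_KG

resident = {"invertebrates": 0.6, "fish": 0.4}   # resident diet (assumed)
migrant = {"invertebrates": 0.9, "fish": 0.1}    # altitudinal migrant diet (assumed)
mercury = {"invertebrates": 0.05, "fish": 0.30}  # ug/g dry weight (assumed)

TDI_HG = 50.0  # ug/kg bw/day, placeholder threshold
for label, diet in [("resident", resident), ("migrant", migrant)]:
    exposure = daily_exposure(diet, mercury)
    print(f"{label}: {exposure:.0f} ug/kg/day "
          f"({'above' if exposure > TDI_HG else 'within'} the assumed TDI)")
```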


    Review of the validation of models used in Federal Insecticide, Fungicide, and Rodenticide Act environmental exposure assessments

    ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 8 2002
    Russell L. Jones
    Abstract The first activity of the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) Environmental Model Validation Task Force, established to increase confidence in the use of environmental models used in regulatory assessments, was to review the literature information on validation of the pesticide root zone model (PRZM) and the groundwater loading effects of agricultural management systems (GLEAMS). This literature information indicates that these models generally predict the same or greater leaching than observed in actual field measurements, suggesting that these models are suitable for use in regulatory assessments. However, additional validation research conducted using the newest versions of the models would help improve confidence in runoff and leaching predictions because significant revisions have been made in models over the years, few of the literature studies focused on runoff losses, the number of studies having quantitative validation results is minimal, and modelers were aware of the field results in most of the literature studies. Areas for special consideration in conducting model validation research include improving the process for selecting input parameters, developing recommendations for performing calibration simulations, devising appropriate procedures for keeping results of field studies from modelers performing simulations to validate model predictions while providing access for calibration simulations, and developing quantitative statistical procedures for comparing model predictions with experimental results. [source]


    Explanatory models in the interpretations of clinical features of dental patients within a university dental education setting

    EUROPEAN JOURNAL OF DENTAL EDUCATION, Issue 1 2002
    Gerardo Maupome
    Clinicians may acquire biased perceptions during their dental education that can affect decisions about treatment/management of dental decay. This study established explanatory models used by students to interpret clinical features of patients. It employed a stereotypical dental patient under standardised consultation conditions to identify the interpretation of oral health/disease features in the eyes of student clinicians. The study aimed to establish the perceptions of the patient as a client of the university dental clinic, as seen through the ideological lens of a formal Dental Education system. The discourse during simulated clinical consultations was qualitatively analysed to interpret values and concepts relevant to the assessment of restorative treatment needs and oral health status. Three constructs during the consultation were identified: the Dual Therapeutic Realms, the Choices Underlying Treatment Options, and the High-Risk Triad. Comparing these discourse components, the Patient Factors of the Bader and Shugars model for treatment decisions supported the existence of a core set of themes. It was concluded that certain consultation circumstances influenced the adequacy of diagnostic strategies, mainly by introducing loosely defined but highly specific socio-cultural biases ingrained in the Dental Education concepts and diagnostic/treatment needs systems. [source]


    Scaling analysis of water retention curves for unsaturated sandy loam soils by using fractal geometry

    EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 3 2010
    C. Fallico
    Fractal geometry was deployed to analyse water retention curves (WRC). The three models used to estimate the curves were the general pore-solid fractal (PSF) model and two specific cases of the PSF model: the Tyler & Wheatcraft (TW) and the Rieu & Sposito (RS) models. The study was conducted on 30 undisturbed, sandy loam soil samples taken from a field and subjected to laboratory analysis. The fractal dimension, a non-variable scale factor characterizing each water retention model proposed, was estimated by direct scaling. The method for determining the fractal dimension proposed here entails limiting the analysis to the interval between an upper and lower pressure head cut-off on a log-log plot, and defining the dimension itself as the straight regression line that interpolates the points in the interval with the largest coefficient of determination, R². The scale relative to the cut-off interval used to determine the fractal behaviour in each model used is presented. Furthermore, a second range of pressure head values was analysed to approximate the fractal dimension of the pore surface. The PSF model exhibited greater spatial variation than the TW or RS models for the parameter values typical of a sandy loam soil. An indication of the variability of the fractal dimension across the entire area studied is also provided. [source]
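
    The interval-search idea described here translates into a short script: fit straight lines to log(θ) versus log(h) over candidate cut-off intervals, keep the interval with the largest R², and read the fractal dimension from the slope (D = 3 + slope under the Tyler & Wheatcraft form). The synthetic data, noise level and minimum interval width below are assumptions for illustration.

```python
import numpy as np
from itertools import combinations

# Synthetic water retention data following the Tyler & Wheatcraft power-law form
# theta ~ h**(D - 3); D_true = 2.6 here, with mild multiplicative noise.
rng = np.random.default_rng(0)
h = np.logspace(0, 4, 30)                         # pressure head (cm)
theta = 0.45 * h ** (2.6 - 3.0) * (1 + rng.normal(0, 0.02, h.size))

log_h, log_t = np.log(h), np.log(theta)

best = None
for i, j in combinations(range(h.size + 1), 2):   # candidate cut-off intervals
    if j - i < 8:                                 # require a reasonably wide interval
        continue
    slope, intercept = np.polyfit(log_h[i:j], log_t[i:j], 1)
    resid = log_t[i:j] - (slope * log_h[i:j] + intercept)
    r2 = 1.0 - resid.var() / log_t[i:j].var()
    if best is None or r2 > best[0]:
        best = (r2, i, j, slope)

r2, i, j, slope = best
print(f"cut-offs h = {h[i]:.1f}..{h[j - 1]:.1f} cm, D = {3.0 + slope:.2f}, R^2 = {r2:.4f}")
```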


    The mammalian exercise pressor reflex in health and disease

    EXPERIMENTAL PHYSIOLOGY, Issue 1 2006
    Scott A. Smith
    The exercise pressor reflex (a peripheral neural reflex originating in skeletal muscle) contributes significantly to the regulation of the cardiovascular system during exercise. Exercise-induced signals that comprise the afferent arm of the reflex are generated by activation of mechanically (muscle mechanoreflex) and chemically sensitive (muscle metaboreflex) skeletal muscle receptors. Activation of these receptors and their associated afferent fibres reflexively adjusts sympathetic and parasympathetic nerve activity during exercise. In heart failure, the cardiovascular response to exercise is augmented. Owing to the peripheral skeletal myopathy that develops in heart failure (e.g. muscle atrophy, decreased peripheral blood flow, fibre-type transformation and reduced oxidative capacity), the exercise pressor reflex has been implicated as a possible mechanism by which the cardiovascular response to physical activity is exaggerated in this disease. Accumulating evidence supports this conclusion. This review therefore focuses on the role of the exercise pressor reflex in regulating the cardiovascular system during exercise in both health and disease. Updates on our current understanding of the exercise pressor reflex neural pathway as well as experimental models used to study this reflex are presented. In addition, special emphasis is placed on the changes in exercise pressor reflex activity that develop in heart failure, including the contributions of the muscle mechanoreflex and metaboreflex to this pressor reflex dysfunction. [source]


    Two-stage fatigue loading of woven carbon fibre reinforced laminates

    FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 1 2003
    M. S. FOUND
    ABSTRACT A brief review of the models used to predict the cumulative fatigue damage in FRP composites is presented. Two-stage fatigue loading of a [0/90, ±45₂, 0/90]s quasi-isotropic woven carbon fibre/epoxy resin laminate was evaluated at stress ratio R = 0.05 and the failure mechanisms investigated using x-radiography after each loading stage. The results are presented in terms of fatigue strength and damage growth and are compared with those in the literature. A low-to-high loading sequence is more damaging than a high-to-low one and the Palmgren-Miner linear damage rule may no longer be valid for this kind of material, as previously reported. [source]
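
    The Palmgren-Miner rule mentioned above is a one-line damage sum. The sketch below evaluates it for a low-to-high and a high-to-low block sequence using an invented S-N curve and cycle counts; being linear, it returns the same damage either way, which is precisely why the sequence effects reported here fall outside the rule.

```python
# Minimal Palmgren-Miner check for two-stage loading (S-N curve constants and
# cycle counts are invented for illustration).
def cycles_to_failure(stress_amplitude, A=1e12, m=3.0):
    """Toy S-N curve: N = A * S**(-m)."""
    return A * stress_amplitude ** (-m)

def miner_sum(blocks):
    """blocks = [(stress amplitude, applied cycles), ...]; failure predicted at 1."""
    return sum(n / cycles_to_failure(s) for s, n in blocks)

low_high = [(200.0, 5.0e4), (300.0, 2.0e4)]   # low-to-high sequence
high_low = [(300.0, 2.0e4), (200.0, 5.0e4)]   # high-to-low sequence

# The linear rule is order-independent, so both sequences give the same sum.
print(miner_sum(low_high), miner_sum(high_low))
```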


    Anaerobic culture conditions favor biofilm-like phenotypes in Pseudomonas aeruginosa isolates from patients with cystic fibrosis

    FEMS IMMUNOLOGY & MEDICAL MICROBIOLOGY, Issue 3 2006
    Che Y. O'May
    Abstract Pseudomonas aeruginosa causes chronic infections in the lungs of cystic fibrosis (CF) individuals and remains the leading cause of morbidity and mortality associated with the disease. Biofilm growth and phenotypic diversification are factors thought to contribute to this organism's persistence. Most studies have focused on laboratory isolates such as strain PAO1, and there are relatively few reports characterizing the properties of CF strains, especially under decreased oxygen conditions such as occur in the CF lung. This study compared the phenotypic and functional properties of P. aeruginosa from chronically infected CF adults with those of strain PAO1 and other clinical non-CF isolates under aerobic and anaerobic culture conditions. The CF isolates overall displayed a reduced ability to form biofilms in standard in vitro short-term models. They also grew more slowly in culture, and exhibited decreased adherence to glass and decreased motilities (swimming, swarming and twitching). All of these characteristics were markedly accentuated by anaerobic growth conditions. Moreover, the CF strain phenotypes were not readily reversed by culture manipulations designed to encourage planktonic growth. The CF strains were thus inherently different from strain PAO1 and most of the other non-CF clinical P. aeruginosa isolates tested. In vitro models used to research CF isolate biofilm growth need to take the above properties of these strains into account. [source]


    Artificial neural networks for parameter estimation in geophysics

    GEOPHYSICAL PROSPECTING, Issue 1 2000
    Carlos Calderón-Macías
    Artificial neural systems have been used in a variety of problems in the fields of science and engineering. Here we describe a study of the applicability of neural networks to solving some geophysical inverse problems. In particular, we study the problem of obtaining formation resistivities and layer thicknesses from vertical electrical sounding (VES) data and that of obtaining 1D velocity models from seismic waveform data. We use a two-layer feedforward neural network (FNN) that is trained to predict earth models from measured data. Part of the interest in using FNNs for geophysical inversion is that they are adaptive systems that perform a non-linear mapping between two sets of data from a given domain. In both of our applications, we train FNNs using synthetic data as input to the networks and a layer parametrization of the models as the network output. The earth models used for network training are drawn from an ensemble of random models within some prespecified parameter limits. For network training we use the back-propagation algorithm and a hybrid back-propagation–simulated-annealing method for the VES and seismic inverse problems, respectively. Other fundamental issues for obtaining accurate model parameter estimates using trained FNNs are the size of the training data, the network configuration, the description of the data and the model parametrization. Our simulations indicate that FNNs, if adequately trained, produce reasonably accurate earth models when observed data are input to the FNNs. [source]
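
    The train-on-synthetic-models workflow outlined here can be sketched in a few lines: draw random earth models within prespecified parameter limits, run a forward model to generate synthetic data, and fit a small feedforward network mapping data back to model parameters. The forward function below is a toy placeholder (not a real VES or waveform solver), and scikit-learn's MLPRegressor stands in for the back-propagation training described in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
spacings = np.logspace(0, 3, 20)             # electrode spacings (m)

def toy_forward(rho1, rho2, thickness):
    """Placeholder apparent-resistivity curve for a two-layer earth (not physical)."""
    weight = 1.0 - np.exp(-spacings / thickness)
    return rho1 * (1 - weight) + rho2 * weight

# Ensemble of random earth models within prespecified parameter limits
models = np.column_stack([rng.uniform(10, 100, 2000),    # rho1 (ohm-m)
                          rng.uniform(100, 1000, 2000),  # rho2 (ohm-m)
                          rng.uniform(5, 50, 2000)])     # thickness (m)
data = np.array([toy_forward(*m) for m in models])

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
net.fit(np.log10(data[:1800]), models[:1800])            # train on synthetic pairs
pred = net.predict(np.log10(data[1800:]))                # "invert" held-out data
print("mean absolute error per parameter:", np.abs(pred - models[1800:]).mean(axis=0))
```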


    New insights into global patterns of ocean temperature anomalies: implications for coral reef health and management

    GLOBAL ECOLOGY, Issue 3 2010
    Elizabeth R. Selig
    ABSTRACT Aim: Coral reefs are widely considered to be particularly vulnerable to changes in ocean temperatures, yet we understand little about the broad-scale spatio-temporal patterns that may cause coral mortality from bleaching and disease. Our study aimed to characterize these ocean temperature patterns at biologically relevant scales. Location: Global, with a focus on coral reefs. Methods: We created a 4-km resolution, 21-year global ocean temperature anomaly (deviations from long-term means) database to quantify the spatial and temporal characteristics of temperature anomalies related to both coral bleaching and disease. Then we tested how patterns varied in several key metrics of disturbance severity, including anomaly frequency, magnitude, duration and size. Results: Our analyses found both global variation in temperature anomalies and fine-grained spatial variability in the frequency, duration and magnitude of temperature anomalies. However, we discovered that even during major climatic events with strong spatial signatures, like the El Niño–Southern Oscillation, areas that had high numbers of anomalies varied between years. In addition, we found that 48% of bleaching-related anomalies and 44% of disease-related anomalies were less than 50 km², much smaller than the resolution of most models used to forecast climate changes. Main conclusions: The fine-scale variability in temperature anomalies has several key implications for understanding spatial patterns in coral bleaching- and disease-related anomalies as well as for designing protected areas to conserve coral reefs in a changing climate. Spatial heterogeneity in temperature anomalies suggests that certain reefs could be targeted for protection because they exhibit differences in thermal stress. However, temporal variability in anomalies could complicate efforts to protect reefs, because high anomalies in one year are not necessarily predictive of future patterns of stress. Together, our results suggest that temperature anomalies related to coral bleaching and disease are likely to be highly heterogeneous and could produce more localized impacts of climate change. [source]
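
    As a schematic of the anomaly metrics used in the study (frequency, magnitude, duration), the snippet below builds a weekly climatology from a synthetic 21-year temperature series, subtracts it to obtain anomalies, and counts warm events above a threshold. The series and the 1 °C threshold are illustrative assumptions, not the study's data or definitions.

```python
import numpy as np

rng = np.random.default_rng(7)
weeks_per_year, years = 52, 21
t = np.arange(weeks_per_year * years)
sst = 27 + 1.5 * np.sin(2 * np.pi * t / weeks_per_year)          # seasonal cycle
sst += rng.normal(0, 0.5, t.size) + np.where(rng.random(t.size) < 0.02, 1.5, 0.0)

climatology = sst.reshape(years, weeks_per_year).mean(axis=0)    # long-term weekly mean
anomaly = sst - np.tile(climatology, years)                      # deviation from mean

hot = anomaly > 1.0                                              # assumed threshold
events = np.diff(np.concatenate(([0], hot.astype(int)))) == 1    # starts of warm runs
print("anomaly frequency:", events.sum(),
      "| max magnitude: %.2f C" % anomaly.max(),
      "| mean duration: %.1f weeks" % (hot.sum() / max(events.sum(), 1)))
```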


    Dynamic versus static models in cost-effectiveness analyses of anti-viral drug therapy to mitigate an influenza pandemic

    HEALTH ECONOMICS, Issue 5 2010
    Anna K. Lugnér
    Abstract Conventional (static) models used in health economics implicitly assume that the probability of disease exposure is constant over time and unaffected by interventions. For transmissible infectious diseases this is not realistic and another class of models is required, so-called dynamic models. This study aims to examine the differences between one dynamic and one static model, estimating the effects of therapeutic treatment with antiviral (AV) drugs during an influenza pandemic in the Netherlands. Specifically, we focus on the sensitivity of the cost-effectiveness ratios to model choice, to the assumed drug coverage, and to the value of several epidemiological factors. Therapeutic use of AV-drugs is cost-effective compared with non-intervention, irrespective of which model approach is chosen. The findings further show that: (1) the cost-effectiveness ratio according to the static model is insensitive to the size of a pandemic, whereas the ratio according to the dynamic model increases with the size of a pandemic; (2) according to the dynamic model, the cost per infection and the life-years gained per treatment are not constant but depend on the proportion of cases that are treated; and (3) the age-specific clinical attack rates affect the sensitivity of cost-effectiveness ratio to model choice. Copyright © 2009 John Wiley & Sons, Ltd. [source]
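
    The static/dynamic distinction can be made concrete with a toy comparison: the static framing applies a fixed attack rate whatever the intervention, whereas a simple SIR-type dynamic model lets antiviral treatment (assumed here to reduce infectiousness) feed back on transmission and hence on the attack rate. All parameter values are invented and are not taken from the Dutch pandemic analysis.

```python
# Toy contrast between a static attack-rate calculation and a dynamic SIR model
# in which antiviral coverage reduces transmission (assumed 30% effect).
def dynamic_attack_rate(r0=1.6, coverage=0.0, av_effect=0.3, days=365, n=1.0e7):
    gamma = 1.0 / 3.0                              # recovery rate (1/days)
    beta = r0 * gamma * (1 - coverage * av_effect) / n
    s, i, r = n - 10.0, 10.0, 0.0
    for _ in range(days):                          # daily Euler steps
        new_inf = beta * s * i
        s, i, r = s - new_inf, i + new_inf - gamma * i, r + gamma * i
    return r / n

static_rate = 0.3                                  # fixed clinical attack rate
for cov in (0.0, 0.3, 0.6):
    print(f"coverage {cov:.0%}: static attack rate {static_rate:.2f}, "
          f"dynamic attack rate {dynamic_attack_rate(coverage=cov):.2f}")
```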


    Validation of hydrological models for climate scenario simulation: the case of Saguenay watershed in Quebec

    HYDROLOGICAL PROCESSES, Issue 23 2007
    Yonas B. Dibike
    Abstract This paper presents the results of an investigation into the problems associated with using downscaled meteorological data for hydrological simulations of climate scenarios. The influence of both the hydrological models and the meteorological inputs driving these models on climate scenario simulation studies are investigated. A regression-based statistical tool (SDSM) is used to downscale the daily precipitation and temperature data based on climate predictors derived from the Canadian global climate model (CGCM1), and two types of hydrological model, namely the physically based watershed model WatFlood and the lumped-conceptual modelling system HBV-96, are used to simulate the flow regimes in the major rivers of the Saguenay watershed in Quebec. The models are validated with meteorological inputs from both the historical records and the statistically downscaled outputs. Although the two hydrological models demonstrated satisfactory performances in simulating stream flows in most of the rivers when provided with historic precipitation and temperature records, both performed less well and responded differently when provided with downscaled precipitation and temperature data. By demonstrating the problems in accurately simulating river flows based on downscaled data for the current climate, we discuss the difficulties associated with downscaling and hydrological models used in estimating the possible hydrological impact of climate change scenarios. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    The evolution of mathematical immunology

    IMMUNOLOGICAL REVIEWS, Issue 1 2007
    Yoram Louzoun
    Summary: The types of mathematical models used in immunology and their scope have changed drastically in the past 10 years. Classical models were based on ordinary differential equations (ODEs), difference equations, and cellular automata. These models focused on the 'simple' dynamics obtained between a small number of reagent types (e.g. one type of receptor and one type of antigen or two T-cell populations). With the advent of high-throughput methods, genomic data, and unlimited computing power, immunological modeling shifted toward the informatics side. Many current applications of mathematical models in immunology are now focused around the concepts of high-throughput measurements and system immunology (immunomics), as well as the bioinformatics analysis of molecular immunology. The types of models have shifted from mainly ODEs of simple systems to the extensive use of Monte Carlo simulations. The transition to a more molecular and more computer-based attitude is similar to the one occurring over all the fields of complex systems analysis. An interesting additional aspect in theoretical immunology is the transition from an extreme focus on the adaptive immune system (that was considered more interesting from a theoretical point of view) to a more balanced focus taking into account the innate immune system also. We here review the origin and evolution of mathematical modeling in immunology and the contribution of such models to many important immunological concepts. [source]


    Non-locking tetrahedral finite element for surgical simulation

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 7 2009
    Grand Roman Joldes
    Abstract To obtain a very fast solution for finite element models used in surgical simulations, low-order elements, such as the linear tetrahedron or the linear under-integrated hexahedron, must be used. Automatic hexahedral mesh generation for complex geometries remains a challenging problem, and therefore tetrahedral or mixed meshes are often necessary. Unfortunately, the standard formulation of the linear tetrahedral element exhibits volumetric locking in the case of almost incompressible materials. In this paper, we extend the average nodal pressure (ANP) tetrahedral element proposed by Bonet and Burton for a better handling of multiple material interfaces. The new formulation can handle multiple materials in a uniform way with better accuracy, while requiring only a small additional computational effort. We discuss some implementation issues and show how easily an existing Total Lagrangian Explicit Dynamics algorithm can be modified in order to support the new element formulation. The performance evaluation of the new element shows the clear improvement in reaction force and displacement predictions compared with the ANP element in the case of models consisting of multiple materials. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Efficient modal analysis of systems with local stiffness uncertainties

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 6-7 2009
    S. F. Wojtkiewicz
    Abstract The characterization of the uncertainty in modal quantities of an uncertain linear structural system is essential to the rapid determination of its response to arbitrary loadings. Although the size of many computational structural models used is extremely large, i.e. thousands of equations, the uncertainty to be analyzed is oftentimes localized to very small regions of the model. This paper addresses the development of an efficient, computational methodology for the modal analysis of linear structural systems with local stiffness uncertainties. The newly developed methodology utilizes an enriched basis that consists of the sub-spectrum of a nominal structural system augmented with additional basis vectors generated from a knowledge of the structure of the stiffness uncertainty. In addition, methods for determining bounds on the approximate modal frequencies and mode shapes are discussed. Numerical results demonstrate that the algorithm produces highly accurate results with greatly reduced computational effort. Copyright © 2009 John Wiley & Sons, Ltd. [source]
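
    One plausible reading of the enriched-basis idea is sketched below (assumptions: a 1-D spring chain, a single uncertain spring, and static correction vectors as the augmenting vectors): the nominal sub-spectrum is augmented with K0⁻¹R, where R spans the degrees of freedom touched by the local uncertainty, and a small projected eigenproblem is then solved for each stiffness sample instead of the full-size one.

```python
import numpy as np
from scipy.linalg import eigh

n = 200
K0 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)      # nominal chain stiffness
M = np.eye(n)                                              # unit masses

# Local uncertainty: one spring between DOFs 50 and 51 with uncertain stiffness
R = np.zeros((n, 1))
R[50], R[51] = 1.0, -1.0
pattern = R @ R.T                                          # dK = delta * pattern

m = 10
nominal_vals, nominal_vecs = eigh(K0, M)
V = np.hstack([nominal_vecs[:, :m], np.linalg.solve(K0, R)])  # enriched basis
V, _ = np.linalg.qr(V)                                        # orthonormalise

for delta in (-0.5, 0.0, 0.8):                             # sampled local uncertainty
    K = K0 + delta * pattern
    red_vals = eigh(V.T @ K @ V, V.T @ M @ V, eigvals_only=True)
    full_vals = eigh(K, M, eigvals_only=True)[:3]
    print(delta, np.round(np.sqrt(red_vals[:3]), 5), np.round(np.sqrt(full_vals), 5))
```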


    Voxel-based meshing and unit-cell analysis of textile composites

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 7 2003
    Hyung Joo Kim
    Abstract Unit-cell homogenization techniques are frequently used together with the finite element method to compute effective mechanical properties for a wide range of different composites and heterogeneous materials systems. For systems with very complicated material arrangements, mesh generation can be a considerable obstacle to usage of these techniques. In this work, pixel-based (2D) and voxel-based (3D) meshing concepts borrowed from image processing are thus developed and employed to construct the finite element models used in computing the micro-scale stress and strain fields in the composite. The potential advantage of these techniques is that generation of unit-cell models can be automated, thus requiring far less human time than traditional finite element models. Essential ideas and algorithms for implementation of proposed techniques are presented. In addition, a new error estimator based on sensitivity of virtual strain energy to mesh refinement is presented and applied. The computational costs and rate of convergence for the proposed methods are presented for three different mesh-refinement algorithms: uniform refinement; selective refinement based on material boundary resolution; and adaptive refinement based on error estimation. Copyright © 2003 John Wiley & Sons, Ltd. [source]
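
    The voxel-meshing concept is simple enough to show directly: every voxel of a 3-D material-ID image becomes one 8-node hexahedral element on a regular lattice, so mesh generation is fully automatic. The image size and material IDs below are illustrative; a production implementation would add the refinement strategies and error estimation discussed in the paper.

```python
import numpy as np

def voxel_mesh(material_ids):
    """material_ids: (nx, ny, nz) integer array; returns nodes, elems, elem_mats."""
    nx, ny, nz = material_ids.shape
    # Node coordinates on the (nx+1, ny+1, nz+1) lattice
    xs, ys, zs = np.meshgrid(np.arange(nx + 1), np.arange(ny + 1),
                             np.arange(nz + 1), indexing="ij")
    nodes = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()]).astype(float)

    def nid(i, j, k):                         # lattice index -> node number
        return (i * (ny + 1) + j) * (nz + 1) + k

    elems, mats = [], []
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                elems.append([nid(i, j, k), nid(i+1, j, k), nid(i+1, j+1, k),
                              nid(i, j+1, k), nid(i, j, k+1), nid(i+1, j, k+1),
                              nid(i+1, j+1, k+1), nid(i, j+1, k+1)])
                mats.append(material_ids[i, j, k])
    return nodes, np.array(elems), np.array(mats)

# Unit cell of a toy two-phase composite: fibre (1) embedded in matrix (0)
image = np.zeros((8, 8, 8), dtype=int)
image[2:6, 2:6, :] = 1
nodes, elems, mats = voxel_mesh(image)
print(nodes.shape, elems.shape, np.bincount(mats))
```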


    Comparative study of the continuous phase flow in a cyclone separator using different turbulence models

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2005
    H. Shalaby
    Abstract Numerical calculations were carried out at the apex cone and various axial positions of a gas cyclone separator for industrial applications. Two different NS-solvers (a commercial one (CFX 4.4 ANSYS GmbH, Munich, Germany, CFX Solver Documentation, 1998), and a research code (Post-doctoral Thesis, Technical University of Chemnitz, Germany, September, 2002)) based on a pressure correction algorithm of the SIMPLE method have been applied to predict the flow behaviour. The flow was assumed as unsteady, incompressible and isothermal. A k–ε turbulence model has been applied first using the commercial code to investigate the gas flow. Due to the nature of cyclone flows, which exhibit highly curved streamlines and anisotropic turbulence, advanced turbulence models such as Reynolds stress model (RSM) and large eddy simulation (LES) have been used as well. The RSM simulation was performed using the commercial package activating Launder et al.'s (J. Fluid Mech. 1975; 68(3):537–566) approach, while for the LES calculations the research code has been applied utilizing the Smagorinsky model. It was found that the k–ε model cannot predict flow phenomena inside the cyclone properly due to the strong curvature of the streamlines. The RSM results are comparable with LES results in the area of the apex cone plane. However, the application of the LES reveals qualitative agreement with the experimental data, but requires higher computer capacity and longer running times than RSM. This paper is organized into five sections. The first section consists of an introduction and a summary of previous work. Section 2 deals with turbulence modelling including the governing equations and the three turbulence models used. In Section 3, computational parameters are discussed such as computational grids, boundary conditions and the solution algorithm with respect to the use of MISTRAL/PartFlow-3D. In Section 4, prediction profiles of the gas flow at axial and apex cone positions are presented and discussed. Section 5 summarizes and concludes the paper. [source]


    A review of climate risk information for adaptation and development planning

    INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 9 2009
    R. L. Wilby
    Abstract Although the use of climate scenarios for impact assessment has grown steadily since the 1990s, uptake of such information for adaptation is lagging by nearly a decade in terms of scientific output. Nonetheless, integration of climate risk information in development planning is now a priority for donor agencies because of the need to prepare for climate change impacts across different sectors and countries. This urgency stems from concerns that progress made against Millennium Development Goals (MDGs) could be threatened by anthropogenic climate change beyond 2015. Up to this time the human signal, though detectable and growing, will be a relatively small component of climate variability and change. This implies the need for a twin-track approach: on the one hand, vulnerability assessments of social and economic strategies for coping with present climate extremes and variability, and, on the other hand, development of climate forecast tools and scenarios to evaluate sector-specific, incremental changes in risk over the next few decades. This review starts by describing the climate outlook for the next couple of decades and the implications for adaptation assessments. We then review ways in which climate risk information is already being used in adaptation assessments and evaluate the strengths and weaknesses of three groups of techniques. Next we identify knowledge gaps and opportunities for improving the production and uptake of climate risk information for the 2020s. We assert that climate change scenarios can meet some, but not all, of the needs of adaptation planning. Even then, the choice of scenario technique must be matched to the intended application, taking into account local constraints of time, resources, human capacity and supporting infrastructure. We also show that much greater attention should be given to improving and critiquing models used for climate impact assessment, as standard practice. Finally, we highlight the over-arching need for the scientific community to provide more information and guidance on adapting to the risks of climate variability and change over nearer time horizons (i.e. the 2020s). Although the focus of the review is on information provision and uptake in developing regions, it is clear that many developed countries are facing the same challenges. Copyright © 2009 Royal Meteorological Society [source]


    On the effect of the local turbulence scales on the mixing rate of diffusion flames: assessment of two different combustion models

    INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 10 2002
    Jose Lopes
    Abstract A mathematical model for the prediction of the turbulent flow, diffusion combustion process, heat transfer including thermal radiation and pollutants formation inside combustion chambers is described. In order to validate the model the results are compared herein against experimental data available in the open literature. The model comprises differential transport equations governing the above-mentioned phenomena, resulting from the mathematical and physical modelling, which are solved by the control volume formulation technique. The results yielded by the two different turbulent-mixing physical models used for combustion, the simple chemical reacting system (SCRS) and the eddy break-up (EBU), are analysed so that the need to make recourse to local turbulent scales to evaluate the reactants' mixing rate is assessed. Predictions are performed for a gaseous-fuelled combustor fired with two different burners that induce different aerodynamic conditions inside the combustion chamber. One of the burners has a typical geometry of that used in gaseous-fired boilers – fuel firing in the centre surrounded by concentric oxidant firing – while the other burner introduces the air into the combustor through two different swirling concentric streams. Generally, the results exhibit a good agreement with the experimental values. Also, NO predictions are performed by a prompt-NO formation model used as a post-processor together with a thermal-NO formation model, the results being generally in good agreement with the experimental values. The predictions revealed that the mixture between the reactants occurred very close to the burner and almost instantaneously, that is, immediately after the fuel-containing eddies came into contact with the oxidant-containing eddies. As a result, away from the burner, the SCRS model, which assumes an infinitely fast mixing rate, appeared to be as accurate as the EBU model for the present predictions. Closer to the burner, the EBU model, which establishes the reactants' mixing rate as a function of the local turbulent scales, yielded slightly slower rates of mixture, the fuel and oxidant concentrations being slightly higher than those obtained with the SCRS model. As a consequence, the NO concentration predictions with the EBU combustion model are generally higher than those obtained with the SCRS model. This is due to the existence of higher concentrations of fuel and oxygen closer to the burner when predictions were performed taking into account the local turbulent scales in the mixing process of the reactants. The SCRS, being faster and as accurate as the EBU model in the predictions of combustion properties, appears to be more appropriate. However, should NO be a variable that is predicted, then the EBU model becomes more appropriate. This is due to the better results of oxygen concentration yielded by that model, since it solves a transport equation for the oxidant concentration, which plays a dominant role in the prompt-NO formation rate. Copyright © 2002 John Wiley & Sons, Ltd. [source]
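
    For reference, the eddy break-up closure referred to above limits the mean reaction rate by the local turbulent mixing time k/ε, whereas the SCRS treats mixing as instantaneous ("mixed is burnt"). The sketch below shows that rate expression in its common textbook form; the model constant and the local flow values are illustrative assumptions, and the exact formulation used in the paper's code may differ.

```python
def ebu_rate(rho, k, eps, y_fuel, y_ox, s, c_ebu=4.0):
    """Mean fuel consumption rate limited by the turbulent mixing time k/eps."""
    return c_ebu * rho * (eps / k) * min(y_fuel, y_ox / s)

# s ~ 4 is the stoichiometric oxygen-to-fuel mass ratio for methane; the SCRS
# limit simply consumes the deficient reactant as soon as it is mixed.
print(ebu_rate(rho=1.1, k=2.0, eps=50.0, y_fuel=0.05, y_ox=0.23, s=4.0))
```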


    Duchenne's muscular dystrophy: animal models used to investigate pathogenesis and develop therapeutic strategies

    INTERNATIONAL JOURNAL OF EXPERIMENTAL PATHOLOGY, Issue 4 2003
    C.A. Collins
    Summary: Duchenne's muscular dystrophy (DMD) is a lethal childhood disease caused by mutations of the dystrophin gene, the protein product of which, dystrophin, has a vital role in maintaining muscle structure and function. Homologues of DMD have been identified in several animals including dogs, cats, mice, fish and invertebrates. The most notable of these are the extensively studied mdx mouse, a genetic and biochemical model of the human disease, and the muscular dystrophic Golden Retriever dog, which is the nearest pathological counterpart of DMD. These models have been used to explore potential therapeutic approaches along a number of avenues including gene replacement and cell transplantation strategies. High-throughput screening of pharmacological and genetic therapies could potentially be carried out in recently available smaller models such as zebrafish and Caenorhabditis elegans. It is possible that a successful treatment will eventually be identified through the integration of studies in multiple species differentially suited to addressing particular questions. [source]


    The economic value of technical trading rules: a nonparametric utility-based approach

    INTERNATIONAL JOURNAL OF FINANCE & ECONOMICS, Issue 1 2005
    Hans Dewachter
    Abstract We adapt Brandt's (1999) nonparametric approach to determine the optimal portfolio choice of a risk averse foreign exchange investor who uses moving average trading signals as the information instrument for investment opportunities. Additionally, we assess the economic value of the estimated optimal trading rules based on the investor's preferences. The approach consists of a conditional generalized method of moments (GMM) applied to the conditional Euler optimality conditions. The method presents two main advantages: (i) it avoids ad hoc specifications of statistical models used to explain return predictability; and (ii) it implicitly incorporates all return moments in the investor's expected utility maximization problem. We apply the procedure to different moving average trading rules for the German mark,US dollar exchange rate for the period 1973,2001. We find that technical trading rules are partially recovered and that the estimated optimal trading rules represent a significant economic value for the investor. Copyright © 2005 John Wiley & Sons, Ltd. [source]