Model Accuracy (model + accuracy)



Selected Abstracts


A predictive high-throughput scale-down model of monoclonal antibody production in CHO cells

BIOTECHNOLOGY & BIOENGINEERING, Issue 6 2009
Rachel Legmann
Abstract Multi-factorial experimentation is essential in understanding the link between mammalian cell culture conditions and the glycoprotein product of any biomanufacturing process. This understanding is increasingly demanded as bioprocess development is influenced by the Quality by Design paradigm. We have developed a system that allows hundreds of micro-bioreactors to be run in parallel under controlled conditions, enabling factorial experiments of much larger scope than is possible with traditional systems. A high-throughput analytics workflow was also developed using commercially available instruments to obtain product quality information for each cell culture condition. The micro-bioreactor system was tested by executing a factorial experiment varying four process parameters: pH, dissolved oxygen, feed supplement rate, and reduced glutathione level. A total of 180 micro-bioreactors were run for 2 weeks during this DOE experiment to assess this scaled-down micro-bioreactor system as a high-throughput tool for process development. Online measurements of pH, dissolved oxygen, and optical density were complemented by offline measurements of glucose, viability, titer, and product quality. Model accuracy was assessed by regressing the micro-bioreactor results with those obtained in conventional 3-L bioreactors. Excellent agreement was observed between the micro-bioreactor and the bench-top bioreactor. The micro-bioreactor results were further analyzed to link parameter manipulations to process outcomes via leverage plots, and to examine the interactions between process parameters. The results show that feed supplement rate has a significant effect (P < 0.05) on all performance metrics, with higher feed rates resulting in greater cell mass and product titer. Culture pH impacted terminal integrated viable cell concentration, titer and intact immunoglobulin G titer, with better results obtained at the lower pH set point. The results demonstrate that a micro-scale system can be an excellent model of larger scale systems, while providing data sets broader and deeper than are available by traditional methods. Biotechnol. Bioeng. 2009; 104: 1107–1120. © 2009 Wiley Periodicals, Inc. [source]
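
The regression-based agreement check described above can be illustrated with a short sketch. The titer values below are hypothetical, not data from the study; they simply show how scale-down agreement (slope near 1, high R²) would be quantified.

```python
# Minimal sketch of the scale-down agreement check: regress micro-bioreactor
# titers against titers from matched 3-L bench-scale runs. Values are made up.
import numpy as np
from scipy import stats

bench_titer = np.array([0.85, 1.10, 1.32, 1.58, 1.75, 2.05])  # g/L, 3-L bioreactors
micro_titer = np.array([0.80, 1.15, 1.28, 1.62, 1.70, 2.10])  # g/L, matched micro-bioreactors

fit = stats.linregress(bench_titer, micro_titer)
print(f"slope={fit.slope:.2f}  intercept={fit.intercept:.2f}  R^2={fit.rvalue**2:.3f}")
# A slope near 1, an intercept near 0, and a high R^2 indicate that the
# scale-down system reproduces the bench-scale response.
```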


The Challenge of Predicting Demand for Emergency Department Services

ACADEMIC EMERGENCY MEDICINE, Issue 4 2008
Melissa L. McCarthy MS
Abstract Objectives: The objective was to develop methodology for predicting demand for emergency department (ED) services by characterizing ED arrivals. Methods: One year of ED arrival data from an academic ED were merged with local climate data. ED arrival patterns were described; Poisson regression was selected to represent the count of hourly ED arrivals as a function of temporal, climatic, and patient factors. The authors evaluated the appropriateness of prediction models by whether the data met key Poisson assumptions, including variance proportional to the mean, positive skewness, and absence of autocorrelation among hours. Model accuracy was assessed by comparing predicted and observed histograms of arrival counts and by how frequently the observed hourly count fell within the 50 and 90% prediction intervals. Results: Hourly ED arrivals were obtained for 8,760 study hours. Separate models were fit for high- versus low-acuity patients because of significant arrival pattern differences. The variance was approximately equal to the mean in the high- and low-acuity models. There was no residual autocorrelation (r = 0) present after controlling for temporal, climatic, and patient factors that influenced the arrival rate. The observed hourly count fell within the 50 and 90% prediction intervals 50 and 90% of the time, respectively. The observed histogram of arrival counts was nearly identical to the histogram predicted by a Poisson process. Conclusions: At this facility, demand for ED services was well approximated by a Poisson regression model. The expected arrival rate is characterized by a small number of factors and does not depend on recent numbers of arrivals. [source]
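
As a rough illustration of the accuracy assessment described above, the sketch below fits a Poisson regression to synthetic hourly counts and checks the 50% and 90% prediction-interval coverage together with the variance-to-mean (dispersion) assumption. The covariate names and values are placeholders, not the study's data.

```python
# Fit a Poisson regression to synthetic hourly arrival counts and check the
# prediction-interval coverage and dispersion assumptions described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import poisson

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "hour_of_day": np.tile(np.arange(24), 365),
    "temperature": rng.normal(15.0, 8.0, 24 * 365),   # placeholder climate covariate
})
rate = np.exp(0.8 + 0.04 * df["hour_of_day"] + 0.01 * df["temperature"])
df["arrivals"] = rng.poisson(rate)                    # synthetic "observed" counts

fit = smf.glm("arrivals ~ hour_of_day + temperature", data=df,
              family=sm.families.Poisson()).fit()
mu = fit.predict(df)

# How often does the observed count fall inside the central 50% and 90%
# Poisson prediction intervals implied by the fitted hourly means?
for level in (0.50, 0.90):
    lo, hi = poisson.interval(level, mu)
    coverage = np.mean((df["arrivals"] >= lo) & (df["arrivals"] <= hi))
    print(f"{int(level * 100)}% interval coverage: {coverage:.2f}")

# Variance proportional to the mean: a Pearson dispersion ratio near 1.
print("dispersion:", round(fit.pearson_chi2 / fit.df_resid, 2))
```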


Effects of species' ecology on the accuracy of distribution models

ECOGRAPHY, Issue 1 2007
Jana M. McPherson
In the face of accelerating biodiversity loss and limited data, species distribution models, which statistically capture and predict species' occurrences based on environmental correlates, are increasingly used to inform conservation strategies. Additionally, distribution models and their fit provide insights on the broad-scale environmental niche of species. To investigate whether the performance of such models varies with species' ecological characteristics, we examined distribution models for 1329 bird species in southern and eastern Africa. The models were constructed at two spatial resolutions with both logistic and autologistic regression. Satellite-derived environmental indices served as predictors, and model accuracy was assessed with three metrics: sensitivity, specificity and the area under the curve (AUC) of receiver operating characteristics plots. We then determined the relationship between each measure of accuracy and ten ecological species characteristics using generalised linear models. Among the ecological traits tested, species' range size, migratory status, affinity for wetlands and endemism proved most influential on the performance of distribution models. The number of habitat types frequented (habitat tolerance), trophic rank, body mass, preferred habitat structure and association with sub-resolution habitats also showed some effect. In contrast, conservation status made no significant impact. These findings did not differ from one spatial resolution to the next. Our analyses thus provide conservation scientists and resource managers with a rule of thumb that helps distinguish, on the basis of ecological traits, between species whose occurrence is reliably or less reliably predicted by distribution models. Reasonably accurate distribution models should, however, be attainable for most species, because ecological traits had only a limited influence on model performance. These results suggest that none of the ecological traits tested provides an obvious correlate for environmental niche breadth or intra-specific niche differentiation. [source]
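
The three accuracy metrics named above can be computed in a few lines. The observations and predicted probabilities below are hypothetical, and the 0.5 presence threshold is an assumption needed for sensitivity and specificity.

```python
# Sensitivity, specificity, and AUC for hypothetical presence/absence (1/0)
# observations and model-predicted occurrence probabilities.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

observed  = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 1])
predicted = np.array([0.9, 0.7, 0.4, 0.6, 0.2, 0.3, 0.8, 0.5, 0.1, 0.4])

auc = roc_auc_score(observed, predicted)              # threshold-independent

# Sensitivity and specificity need a presence threshold (0.5 assumed here).
tn, fp, fn, tp = confusion_matrix(observed, (predicted >= 0.5).astype(int)).ravel()
sensitivity = tp / (tp + fn)   # presences correctly predicted
specificity = tn / (tn + fp)   # absences correctly predicted
print(f"AUC={auc:.2f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```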


Application of the distributed hydrology soil vegetation model to Redfish Creek, British Columbia: model evaluation using internal catchment data

HYDROLOGICAL PROCESSES, Issue 2 2003
Andrew Whitaker
Abstract The Distributed Hydrology Soil Vegetation Model is applied to the Redfish Creek catchment to investigate the suitability of this model for simulation of forested mountainous watersheds in interior British Columbia and other high-latitude and high-altitude areas. On-site meteorological data and GIS information on terrain parameters, forest cover, and soil cover are used to specify model input. A stepwise approach is taken in calibrating the model, in which snow accumulation and melt parameters for clear-cut and forested areas were optimized independently of runoff production parameters. The calibrated model performs well in reproducing year-to-year variability in the outflow hydrograph, including peak flows. In the subsequent model performance evaluation for simulation of catchment processes, emphasis is put on elevation and temporal differences in snow accumulation and melt, spatial patterns of snowline retreat, water table depth, and internal runoff generation, using internal catchment data as much as possible. Although the overall model performance based on these criteria is found to be good, some issues regarding the simulation of internal catchment processes remain. These issues are related to the distribution of meteorological variables over the catchment and a lack of information on spatial variability in soil properties and soil saturation patterns. Present data limitations for testing internal model accuracy serve to guide future data collection at Redfish Creek. This study also illustrates the challenges that need to be overcome before distributed physically based hydrologic models can be used for simulating catchments with fewer data resources. Copyright © 2003 John Wiley & Sons, Ltd. [source]
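
The abstract does not name the goodness-of-fit statistic used to judge the simulated outflow hydrograph; the Nash-Sutcliffe efficiency is a common choice for this kind of evaluation and is sketched below with hypothetical daily flows.

```python
# Nash-Sutcliffe efficiency (NSE) for a simulated vs. observed hydrograph;
# the daily outflow values below are hypothetical.
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = np.array([1.2, 1.5, 2.8, 4.9, 6.3, 5.1, 3.4, 2.2])   # m^3/s
sim = np.array([1.1, 1.7, 2.5, 4.4, 6.6, 5.5, 3.0, 2.0])   # m^3/s
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```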


Erosion prediction on unpaved mountain roads in northern Thailand: validation of dynamic erodibility modelling using KINEROS2

HYDROLOGICAL PROCESSES, Issue 3 2001
Alan D. Ziegler
Abstract The event- and physics-based KINEROS2 runoff/erosion model for predicting overland flow generation and sediment production was applied to unpaved mountain roads. Field rainfall simulations conducted in northern Thailand provided independent data for model calibration and validation. Validation shows that KINEROS2 can be parameterized to simulate total discharge, sediment transport and sediment concentration on small-scale road plots, for a range of slopes, during simulated rainfall events. The KINEROS2 model, however, did not accurately predict time-dependent changes in sediment output and concentration. In particular, early flush peaks and the temporal decay in sediment output were not predicted, owing to the inability of KINEROS2 to model removal of a surface sediment layer of finite depth. After 15–20 min, sediment transport declines as the supply of loose superficial material becomes depleted. Modelled erosion response was improved by allowing road erodibility to vary during an event. Changing the model values of erosion detachment parameters in response to changes in surface sediment availability improved model accuracy of predicted sediment transport by 30–40%. A predictive relationship between road erodibility 'states' and road surface sediment depth is presented. This relationship allows implementation of the dynamic erodibility (DE) method to events where pre-storm sediment depth can be estimated (e.g., from traffic usage variables). Copyright © 2001 John Wiley & Sons, Ltd. [source]
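
A toy sketch of the dynamic-erodibility idea follows. It is not the KINEROS2 implementation and all parameter values are assumptions, but it shows how tying the detachment coefficient to the remaining depth of loose surface sediment produces an early flush followed by the supply-limited decline described above.

```python
# Toy dynamic-erodibility loop (not KINEROS2): the detachment coefficient is
# tied to the remaining depth of loose surface sediment, so sediment removal
# peaks early and then declines as the finite supply is depleted.
dt = 60.0               # time step (s)
detach_cap = 2.0e-6     # potential detachment rate at full erodibility (m/s), assumed
depth = 2.0e-3          # initial depth of loose surface sediment (m), assumed
k_max = 0.8             # maximum erodibility "state" (dimensionless), assumed
d_ref = 2.0e-3          # depth scale controlling the erodibility decay (m), assumed

for minute in range(31):
    k = k_max * min(1.0, depth / d_ref)        # erodibility tied to sediment depth
    removed = min(k * detach_cap * dt, depth)  # sediment depth removed this step (m)
    depth -= removed
    if minute % 5 == 0:
        print(f"t={minute:2d} min  erodibility={k:.2f}  removed={removed * 1e3:.3f} mm")
```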


Toward better scoring metrics for pseudo-independent models

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 8 2004
Y. Xiang
Learning belief networks from data is NP-hard in general. A common method used in heuristic learning is the single-link lookahead search. When the problem domain is pseudo-independent (PI), the method cannot discover the underlying probabilistic model. In learning these models, to explicitly trade off model accuracy against model complexity, parameterization of PI models is necessary. Understanding of PI models also provides a new dimension of trade-off in learning even when the underlying model may not be PI. In this work, we adopt a hypercube perspective to analyze PI models and derive an improved result for computing the maximum number of parameters needed to specify a full PI model. We also present results on parameterization of a subclass of partial PI models. © 2004 Wiley Periodicals, Inc. Int J Int Syst 19: 749–768, 2004. [source]


Spatially autocorrelated sampling falsely inflates measures of accuracy for presence-only niche models

JOURNAL OF BIOGEOGRAPHY, Issue 12 2009
Samuel D. Veloz
Abstract Aim: Environmental niche models that utilize presence-only data have been increasingly employed to model species distributions and test ecological and evolutionary predictions. The ideal method for evaluating the accuracy of a niche model is to train a model with one dataset and then test model predictions against an independent dataset. However, a truly independent dataset is often not available, and instead random subsets of the total data are used for 'training' and 'testing' purposes. The goal of this study was to determine how spatially autocorrelated sampling affects measures of niche model accuracy when using subsets of a larger dataset for accuracy evaluation. Location: The distribution of Centaurea maculosa (spotted knapweed; Asteraceae) was modelled in six states in the western United States: California, Oregon, Washington, Idaho, Wyoming and Montana. Methods: Two types of niche modelling algorithms, the genetic algorithm for rule-set prediction (GARP) and maximum entropy modelling (as implemented with Maxent), were used to model the potential distribution of C. maculosa across the region. The effect of spatially autocorrelated sampling was examined by applying a spatial filter to the presence-only data (to reduce autocorrelation) and then comparing predictions made using the spatial filter with those using a random subset of the data, equal in sample size to the filtered data. Results: The accuracy of predictions from both algorithms was sensitive to the spatial autocorrelation of sampling effort in the occurrence data. Spatial filtering led to lower values of the area under the receiver operating characteristic curve plot but higher similarity statistic (I) values when compared with predictions from models built with random subsets of the total data, meaning that spatial autocorrelation of sampling effort between training and test data led to inflated measures of accuracy. Main conclusions: The findings indicate that care should be taken when interpreting the results from presence-only niche models when training and test data have been randomly partitioned but occurrence data were non-randomly sampled (in a spatially autocorrelated manner). The higher accuracies obtained without the spatial filter are a result of spatial autocorrelation of sampling effort between training and test data inflating measures of prediction accuracy. If independently surveyed data for testing predictions are unavailable, then it may be necessary to explicitly account for the spatial autocorrelation of sampling effort between randomly partitioned training and test subsets when evaluating niche model predictions. [source]
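
The spatial-filtering step described above amounts to thinning presence records so that no two retained points lie closer than a minimum distance, then drawing a random subset of equal size for comparison. The sketch below uses hypothetical clustered coordinates and a made-up distance threshold; a real analysis would use projected coordinates or great-circle distances.

```python
# Greedy spatial thinning of presence records, plus an equal-sized random
# subset for comparison, as in the filtering experiment described above.
import numpy as np

def spatial_thin(points, min_dist):
    """Keep a point only if it is at least min_dist from every point already kept."""
    kept = []
    for p in points:
        if all(np.hypot(p[0] - q[0], p[1] - q[1]) >= min_dist for q in kept):
            kept.append(p)
    return np.array(kept)

rng = np.random.default_rng(1)
# Clustered (x, y) records mimicking spatially autocorrelated sampling effort.
centers = rng.uniform(0, 100, size=(5, 2))
presences = np.vstack([c + rng.normal(0, 2, size=(40, 2)) for c in centers])

filtered = spatial_thin(presences, min_dist=5.0)
random_subset = presences[rng.choice(len(presences), size=len(filtered), replace=False)]
print(f"all records: {len(presences)}  filtered: {len(filtered)}  random subset: {len(random_subset)}")
# Each set would then be partitioned into training/test data and the resulting
# niche models compared, e.g. by AUC, as in the study above.
```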


A computationally inexpensive modification of the point dipole electrostatic polarization model for molecular simulations

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 3 2003
George A. Kaminski
Abstract We present an approximation, which allows reduction of computational resources needed to explicitly incorporate electrostatic polarization into molecular simulations utilizing empirical force fields. The proposed method is employed to compute three-body energies of molecular complexes with dipolar electrostatic probes, gas-phase dimerization energies, and pure liquid properties for five systems that are important in biophysical and organic simulations: water, methanol, methylamine, methanethiol, and acetamide. In all the cases, the three-body energies agreed with high-level ab initio data within 0.07 kcal/mol, dimerization energies within 0.43 kcal/mol (except for the special case of CH3SH), and computed heats of vaporization and densities differed from the experimental results by less than 2%. Moreover, because the presented method allows a significant reduction in computational cost, we were able to carry out the liquid-state calculations with the Monte Carlo technique. Comparison with the full-scale point dipole method showed that the computational time was reduced by 3.5 to more than 20 times, depending on the system in hand and on the desired level of full-scale model accuracy, while the difference in energetic results between the full-scale and the presented approximate model was not great in most cases. Comparison with the nonpolarizable OPLS-AA force field for all the substances involved and with the polarizable POL3 and q90 models for water and methanol, respectively, demonstrates that the presented technique allows reduction of computational cost with no sacrifice of accuracy. We hope that the proposed method will be of benefit to research employing molecular modeling techniques in the biophysical and physical organic chemistry areas. © 2003 Wiley Periodicals, Inc. J Comput Chem 24: 267–276, 2003 [source]
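
For context, the sketch below implements the standard full-scale point-dipole polarization model that the paper's approximation is benchmarked against, not the authors' modified scheme: induced dipoles are iterated to self-consistency and the polarization energy is evaluated. Site positions, polarizabilities, and permanent fields are arbitrary illustrative values.

```python
# Standard point-dipole polarization model: induced dipoles mu_i = alpha_i * E_i
# are iterated to self-consistency, where E_i is the permanent field at site i
# plus the fields of all other induced dipoles.
import numpy as np

def dipole_tensor(r):
    """Dipole interaction tensor T = (3 r r^T - |r|^2 I) / |r|^5."""
    d = np.linalg.norm(r)
    return (3.0 * np.outer(r, r) - d ** 2 * np.eye(3)) / d ** 5

# Illustrative polarizable sites: positions (A), isotropic polarizabilities (A^3),
# and permanent fields E0 at each site (arbitrary consistent units).
pos   = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 3.0, 0.0]])
alpha = np.array([1.4, 1.4, 1.4])
E0    = np.array([[0.02, 0.0, 0.0], [0.0, 0.01, 0.0], [0.01, 0.01, 0.0]])

mu = alpha[:, None] * E0                      # zeroth-order induced dipoles
for _ in range(100):                          # self-consistent iteration
    E = E0.copy()
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i != j:
                E[i] += dipole_tensor(pos[i] - pos[j]) @ mu[j]
    mu_new = alpha[:, None] * E
    if np.max(np.abs(mu_new - mu)) < 1e-10:
        mu = mu_new
        break
    mu = mu_new

u_pol = -0.5 * np.sum(mu * E0)                # polarization energy
print("induced dipoles:\n", mu.round(4))
print("polarization energy:", round(u_pol, 6))
```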


Laboratory evaluation of two bioenergetics models applied to yellow perch: identification of a major source of systematic error

JOURNAL OF FISH BIOLOGY, Issue 2 2003
P. G. Bajer
Laboratory growth and food consumption data for two size classes of age-2 yellow perch Perca flavescens, each fed on two distinct feeding schedules at 21° C, were used to evaluate the abilities of the Wisconsin (WI) and Karas-Thoresson (KT) bioenergetics models to predict fish growth and cumulative consumption. Neither model exhibited consistently better performance for predicting fish body masses across all four fish size and feeding regime combinations. Results indicated deficiencies in estimates of resting routine metabolism by both models. Both the WI and KT models exhibited errors for predicting growth rates, which were strongly correlated with food consumption rate. Consumption-dependent prediction errors may be common in bioenergetics models and are probably the result of deficiencies in parameter values or assumptions within the models for calculating energy costs of specific dynamic action, feeding activity metabolism or egestion and excretion. Inter-model differences in growth and consumption predictions were primarily the result of differences in egestion and excretion costs calculated by the two models. The results highlighted the potential importance of parameters describing egestion and excretion costs to the accuracy of bioenergetics model predictions, even though bioenergetics models are generally regarded as being insensitive to these parameters. The findings strongly emphasize the utility and necessity of performing laboratory evaluations of all bioenergetics models for assurance of model accuracy and for facilitation of model refinement. [source]
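
The egestion and excretion terms highlighted above enter Wisconsin-type models through a simple energy balance, sketched below with illustrative parameter values (not the published yellow perch coefficients): growth is the consumption left over after respiration, specific dynamic action, egestion, and excretion.

```python
# Generic Wisconsin-type energy budget with illustrative coefficients (not the
# published yellow perch values): growth = consumption - (respiration + SDA +
# egestion + excretion), with egestion/excretion as fractions of consumption.
def daily_growth(consumption, respiration, a_f=0.15, a_u=0.10, a_sda=0.17):
    """All energy terms in J/day; a_f, a_u, a_sda are egestion, excretion, SDA fractions."""
    egestion = a_f * consumption
    excretion = a_u * (consumption - egestion)
    sda = a_sda * (consumption - egestion)
    return consumption - (respiration + sda + egestion + excretion)

c, r = 5000.0, 1800.0   # hypothetical daily consumption and respiration (J/day)
print("growth, baseline egestion fraction:", round(daily_growth(c, r), 1), "J/day")
print("growth, egestion fraction +0.05:   ", round(daily_growth(c, r, a_f=0.20), 1), "J/day")
# Even a modest change in the egestion coefficient shifts predicted growth,
# which is the sensitivity highlighted in the evaluation above.
```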


Behavioral modeling of GaN-based power amplifiers: Impact of electrothermal feedback on the model accuracy and identification

MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 11 2009
Roberto Quaglia
Abstract In this article, we discuss the accuracy of behavioral models in simulating the intermodulation distortion (IMD) of microwave GaN-based high-power amplifiers in the presence of strong electrothermal (ET) feedback. Exploiting an accurate self-consistent ET model derived from measurements and thermal finite-element method simulations, we show that behavioral models are able to yield accurate results, provided that the model identification is carried out with signals of wide bandwidth and large dynamic range. © 2009 Wiley Periodicals, Inc. Microwave Opt Technol Lett 51: 2789–2792, 2009; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.24732 [source]
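
The abstract does not specify the behavioral model structure; a memory polynomial is one common choice for power-amplifier behavioral modeling, and the sketch below shows how its coefficients would be identified by least squares from complex baseband input/output samples (synthetic here), which is where a wideband, large-dynamic-range identification signal matters.

```python
# Memory-polynomial behavioral model identified by least squares from complex
# baseband input/output samples (synthetic data; the model structure is an
# assumption, not necessarily the one used in the article).
import numpy as np

def memory_polynomial_matrix(x, K=5, Q=3):
    """Columns are x[n-q] * |x[n-q]|^(k-1) for k = 1..K and delays q = 0..Q-1."""
    cols = []
    for q in range(Q):
        xq = np.concatenate([np.zeros(q, dtype=complex), x[:len(x) - q]])
        for k in range(1, K + 1):
            cols.append(xq * np.abs(xq) ** (k - 1))
    return np.column_stack(cols)

rng = np.random.default_rng(2)
x = (rng.normal(size=4000) + 1j * rng.normal(size=4000)) / np.sqrt(2)  # wideband excitation
y = x - 0.1 * x * np.abs(x) ** 2 + 0.05 * np.roll(x, 1)                # synthetic PA output

Phi = memory_polynomial_matrix(x)
coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
nmse_db = 10 * np.log10(np.sum(np.abs(y - Phi @ coeffs) ** 2) / np.sum(np.abs(y) ** 2))
print(f"identified {len(coeffs)} coefficients, model NMSE = {nmse_db:.1f} dB")
```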


Prescribing Flood Regimes to Sustain Riparian Ecosystems along Meandering Rivers

CONSERVATION BIOLOGY, Issue 5 2000
Brian D. Richter
The composition and structure of riparian ecosystems are strongly tied to natural hydrologic variability. By managing river flows for water supplies and power generation, water management agencies have inadvertently caused considerable degradation of riverine ecosystems and associated biodiversity. New approaches for meeting human needs for water while conserving the ecological integrity of riverine ecosystems are greatly needed. We describe an approach for identifying the natural flooding characteristics that must be protected or restored to maintain riparian (floodplain) ecosystems along meandering rivers. We developed a computer model to simulate flood-driven changes in the relative abundance of riparian patch types along the Yampa River in Colorado (U.S.A.). The model is based on research suggesting that the duration of flooding at or above 209 m3 per second (125% of bankfull discharge) is particularly important in driving lateral channel migration, which is responsible for initiating ecological succession in the Yampa's riparian forest. Other hydrologic variables, such as the magnitude of annual peak flows, were not as strongly correlated with lateral channel migration rates. Model simulations enabled us to tentatively identify a threshold of alteration of flood duration that could lead to substantial changes in the abundance of forest patch types over time should river flows be regulated by future water projects. Based on this analysis, we suggest an ecologically compatible water management approach that avoids crossing flood alteration thresholds and provides opportunity to use a portion of flood waters for human purposes. Recommended improvements to the Yampa model include obtaining additional low-elevation aerial photographs of the river corridor to enable better estimation of channel migration rates and vegetation changes. These additional data should greatly improve the model's accuracy and predictive capabilities and therefore its management value. [source]
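
The flow statistic at the heart of the model, the duration of flows at or above 209 m3 per second, is easy to compute from a daily discharge series. The sketch below uses synthetic natural and hypothetically regulated flows to show how an alteration in flood duration would be quantified.

```python
# Annual duration of flows at or above 209 m^3/s (125% of bankfull), the flow
# statistic the Yampa model centres on. Daily discharges here are synthetic;
# the "regulated" series assumes a hypothetical 30% diversion.
import numpy as np

THRESHOLD = 209.0   # m^3/s, from the study above

def days_at_or_above(daily_flow, threshold=THRESHOLD):
    return int(np.sum(np.asarray(daily_flow) >= threshold))

rng = np.random.default_rng(3)
days = np.arange(365)
base = 40 + 260 * np.exp(-0.5 * ((days - 150) / 25.0) ** 2)   # snowmelt-peak shape
natural = base * rng.lognormal(0.0, 0.15, 365)
regulated = 0.7 * natural                                     # hypothetical diversion

print("natural flood duration:  ", days_at_or_above(natural), "days at/above 209 m^3/s")
print("regulated flood duration:", days_at_or_above(regulated), "days at/above 209 m^3/s")
# Comparing such durations against the alteration threshold identified by the
# model indicates whether a proposed operation risks changing the riparian
# patch mosaic.
```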


Off-site monitoring systems for predicting bank underperformance: a comparison of neural networks, discriminant analysis, and professional human judgment

INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 3 2001
Philip Swicegood
This study compares the ability of discriminant analysis, neural networks, and professional human judgment methodologies in predicting commercial bank underperformance. Experience from the banking crisis of the 1980s and early 1990s suggests that improved prediction models are needed for helping prevent bank failures and promoting economic stability. Our research seeks to address this issue by exploring new prediction model techniques and comparing them to existing approaches. When comparing the predictive ability of all three models, the neural network model shows slightly better predictive ability than that of the regulators. Both the neural network model and regulators significantly outperform the benchmark discriminant analysis model's accuracy. These findings suggest that neural networks show promise as an off-site surveillance methodology. Factoring in the relative costs of the different types of misclassifications from each model also indicates that neural network models are better predictors, particularly when weighting Type I errors more heavily. Further research with neural networks in this field should yield workable models that greatly enhance the ability of regulators and bankers to identify and address weaknesses in banks before they approach failure. Copyright © 2001 John Wiley & Sons, Ltd. [source]
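
A minimal sketch of this kind of comparison is given below using synthetic data: linear discriminant analysis versus a small neural network, scored both by accuracy and by a misclassification cost that weights Type I errors (underperforming banks labeled healthy) more heavily. The feature set, class balance, and cost weight are assumptions, not the study's data.

```python
# Linear discriminant analysis vs. a small neural network on synthetic data,
# scored by accuracy and by a misclassification cost that weights Type I
# errors (underperformers labeled healthy) more heavily. All values assumed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=12, n_informative=6,
                           weights=[0.8, 0.2], random_state=0)  # 1 = underperforming
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "discriminant analysis": LinearDiscriminantAnalysis(),
    "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}
TYPE_I_WEIGHT = 5.0   # assumed relative cost of missing an underperformer

for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    cost = TYPE_I_WEIGHT * fn + fp
    print(f"{name:22s} accuracy={(tp + tn) / len(y_te):.3f}  weighted cost={cost:.0f}")
```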


Predictors of weight loss during radiotherapy in patients with stage I or II head and neck cancer

CANCER, Issue 9 2010
Alice Nourissat MD
Abstract BACKGROUND: The purpose of the study was to identify predictors of weight loss during radiotherapy (RT) in patients with stage I or II head and neck (HN) cancer. METHODS: This study was conducted as part of a phase 3 chemoprevention trial. A total of 540 patients were randomized. The patients were weighed before and after RT. Their baseline characteristics, including lifestyle habits, diet, and quality of life, were assessed as potential predictors. Predictors were identified using multiple linear regressions. The reliability of the model was assessed by bootstrap resampling. A receiver operating characteristic curve was generated to estimate the model's accuracy in predicting critical weight loss (≥5%). RESULTS: The mean weight loss was 2.2 kg (standard deviation, 3.4). Five factors were associated with a greater weight loss: all HN cancer sites other than the glottic larynx (P<.001), higher pre-RT body weight (P<.001), stage II disease (P = .002), dysphagia and/or odynophagia before RT (P = .001), and a lower Karnofsky performance score (P = .028). There was no association with pre-RT lifestyle habits, diet, or quality of life. The bootstrapping method confirmed the reliability of this predictive model. The area under the curve was 71.3% (95% confidence interval, 65.8-76.9), which represents an acceptable ability of the model to predict critical weight loss. CONCLUSIONS: These results could be useful to clinicians for screening patients with early stage HN cancer treated by RT who require special nutritional attention. Cancer 2010. © 2010 American Cancer Society. [source]
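
The analysis pipeline described (multiple linear regression, bootstrap resampling of the fit, and an ROC curve for critical weight loss of at least 5%) can be sketched as follows; the predictor names, coefficients, and data below are synthetic placeholders rather than the trial's variables.

```python
# Multiple linear regression for weight loss, a bootstrap check of the fit,
# and ROC/AUC for predicting critical (>=5%) weight loss, on synthetic data
# with placeholder predictors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 540
df = pd.DataFrame({
    "pre_rt_weight": rng.normal(75, 12, n),        # kg
    "stage_II": rng.integers(0, 2, n),
    "dysphagia": rng.integers(0, 2, n),
    "karnofsky": rng.choice([70, 80, 90, 100], n),
})
df["weight_loss_kg"] = (0.03 * df["pre_rt_weight"] + 0.8 * df["stage_II"]
                        + 1.0 * df["dysphagia"] - 0.02 * df["karnofsky"]
                        + rng.normal(0, 2.5, n))

formula = "weight_loss_kg ~ pre_rt_weight + stage_II + dysphagia + karnofsky"
fit = smf.ols(formula, data=df).fit()

# Bootstrap resampling to gauge the stability of the regression coefficients.
boot = np.array([smf.ols(formula, data=df.sample(n, replace=True, random_state=i)).fit().params
                 for i in range(200)])
print(pd.DataFrame(boot, columns=fit.params.index).describe().loc[["mean", "std"]])

# ROC/AUC for predicting critical weight loss (>=5% of pre-RT body weight).
critical = (df["weight_loss_kg"] / df["pre_rt_weight"] >= 0.05).astype(int)
auc = roc_auc_score(critical, fit.fittedvalues / df["pre_rt_weight"])
print(f"AUC for predicting >=5% weight loss: {auc:.2f}")
```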