Model Structure (model + structure)
Selected Abstracts

Using Logistic Regression to Analyze the Sensitivity of PVA Models: a Comparison of Methods Based on African Wild Dog Models
CONSERVATION BIOLOGY, Issue 5 2001
Paul C. Cross
We used logistic regression as a method of sensitivity analysis for a population viability analysis (PVA) model of African wild dogs (Lycaon pictus) and compared the results with conventional sensitivity analyses of stochastic and deterministic models. Standardized coefficients from the logistic regression analyses indicated that pup survival explained the most variability in the probability of extinction, regardless of whether or not the model incorporated density dependence. Adult survival and the standard deviation of pup survival were the next most important parameters in density-dependent simulations, whereas the severity and probability of catastrophe were more important during density-independent simulations. The inclusion of density dependence decreased the probability of extinction, but neither the abruptness nor the inclusion of density dependence was an important model parameter. Results of both relative sensitivity analyses that altered each parameter by 10% of its range and life-stage-simulation analyses of deterministic matrix models supported the logistic regression results, indicating that pup survival and its variation were more important than other parameters. However, both conventional sensitivity analysis of the stochastic model, which changed each parameter by 10% of its mean value, and elasticity analyses indicated that adult survival was more important than pup survival. We evaluated the advantages and disadvantages of using logistic regression to analyze the sensitivity of stochastic population viability models and conclude that it is a powerful method because it can address interactions among input parameters and can incorporate the range of parameter variability, although the standardized regression coefficients are not comparable between studies. Model structure, method of analysis, and parameter uncertainty affect the conclusions of sensitivity analyses. Therefore, rigorous model exploration and analysis should be conducted to understand model behavior and management implications. [source]
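The kind of sensitivity analysis described above can be sketched in a few lines. The example below is not taken from the paper: the simulated replicates, parameter names and effect sizes are placeholders, and it simply shows how standardized logistic-regression coefficients rank PVA input parameters by their influence on extinction probability.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Illustrative inputs: one row per stochastic PVA replicate. Columns are the
# varied input parameters (e.g. pup survival, adult survival, catastrophe
# probability); `extinct` flags replicates that went extinct.
rng = np.random.default_rng(0)
params = rng.uniform(size=(5000, 4))            # stand-in for sampled inputs
extinct = (rng.uniform(size=5000) <
           1 / (1 + np.exp(5 * (params[:, 0] - 0.4)))).astype(int)

X = StandardScaler().fit_transform(params)       # standardize the inputs
model = LogisticRegression().fit(X, extinct)

# Standardized coefficients: larger |beta| means the parameter explains more
# of the variation in extinction probability.
for name, beta in zip(["pup_surv", "adult_surv", "sd_pup_surv", "cat_prob"],
                      model.coef_[0]):
    print(f"{name:12s} {beta:+.3f}")
```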
Detection of delayed density dependence in an orchid population
JOURNAL OF ECOLOGY, Issue 2 2000
M. P. Gillman
Summary
1. Annual censuses of Orchis morio (green-winged orchid) flowering spikes have been taken over a 27-year period in a replicated factorial experiment on the effects of fertilizer application. Census data, combined by block or treatment, were used in time-series analyses to test for density dependence.
2. Partial autocorrelation functions revealed the importance of positive correlations at lag 1 and negative correlations at lag 5. Stepwise multiple regressions provided evidence of delayed density dependence, again with a delay of about 5 years, with no evidence of direct (first-order) density dependence.
3. First-order autocorrelations and delayed density dependence were considered in the light of the known stage structure and generation time of the plant and the possibility of density dependence at different points in the life history.
4. Model structure affects the detection of density dependence, increasing the propensity for type I errors. [source]
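A minimal sketch of the time-series tests mentioned in the summary, assuming a single annual census series is available; the data are simulated stand-ins and the lag-5 regression is purely illustrative.

```python
import numpy as np
from statsmodels.tsa.stattools import pacf

# Illustrative check for delayed density dependence in a census series.
# `counts` stands in for 27 years of flowering-spike counts.
rng = np.random.default_rng(1)
counts = rng.poisson(50, size=27).astype(float)

logn = np.log(counts)
growth = np.diff(logn)                 # per-year log growth rate

# Partial autocorrelation of log abundance: a negative spike at lag k
# suggests density dependence delayed by roughly k years.
print(pacf(logn, nlags=6))

# Lagged regression: growth rate against abundance k years earlier.
k = 5
y = growth[k:]                         # growth in year t
x = logn[: len(growth) - k]            # abundance k years earlier
slope, intercept = np.polyfit(x, y, 1)
print(f"lag-{k} slope: {slope:+.3f}  (negative => delayed density dependence)")
```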
Sensitivity Analyses of Spatial Population Viability Analysis Models for Species at Risk and Habitat Conservation Planning
CONSERVATION BIOLOGY, Issue 1 2009
ILONA R. NAUJOKAITIS-LEWIS
Keywords: sensitivity analysis; population viability analysis; uncertainty; metapopulation; conservation planning
Abstract: Population viability analysis (PVA) is an effective framework for modeling species- and habitat-recovery efforts, but uncertainty in parameter estimates and model structure can lead to unreliable predictions. Integrating complex and often uncertain information into spatial PVA models requires that comprehensive sensitivity analyses be applied to explore the influence of spatial and nonspatial parameters on model predictions. We reviewed 87 analyses of spatial demographic PVA models of plants and animals to identify common approaches to sensitivity analysis in recent publications. In contrast to best practices recommended in the broader modeling community, sensitivity analyses of spatial PVAs were typically ad hoc, inconsistent, and difficult to compare. Most studies applied local approaches to sensitivity analyses, but few varied multiple parameters simultaneously. A lack of standards for sensitivity analysis and reporting in spatial PVAs has the potential to compromise the ability to learn collectively from PVA results, accurately interpret results in cases where model relationships include nonlinearities and interactions, prioritize monitoring and management actions, and ensure conservation-planning decisions are robust to uncertainties in spatial and nonspatial parameters. Our review underscores the need to develop tools for global sensitivity analysis and apply these to spatial PVA. [source]
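To make the contrast between local and global approaches concrete, the sketch below varies all parameters of a placeholder model simultaneously and ranks them with standardized regression coefficients; the model, parameter bounds and sample size are assumptions made for illustration, not taken from the review.

```python
import numpy as np

# Global sensitivity sketch: vary all parameters at once and rank them by
# standardized regression coefficients of the model output.
# `run_pva` is a stand-in for a spatial PVA returning, e.g., expected
# minimum abundance; the ranges below are illustrative.
rng = np.random.default_rng(2)

lo = np.array([0.3, 0.6, 0.00, 0.1])     # parameter lower bounds
hi = np.array([0.7, 0.9, 0.20, 0.9])     # parameter upper bounds

def run_pva(theta):                       # placeholder model
    return 100 * theta[0] + 40 * theta[1] - 60 * theta[2] + 5 * theta[3]

samples = rng.uniform(lo, hi, size=(2000, 4))     # all parameters vary together
output = np.apply_along_axis(run_pva, 1, samples)

Z = (samples - samples.mean(0)) / samples.std(0)
y = (output - output.mean()) / output.std()
src, *_ = np.linalg.lstsq(Z, y, rcond=None)       # standardized regression coefficients
print(np.round(src, 3))
```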
Conservation Biogeography: assessment and prospect
DIVERSITY AND DISTRIBUTIONS, Issue 1 2005
Robert J. Whittaker
ABSTRACT: There is general agreement among scientists that biodiversity is under assault on a global basis and that species are being lost at a greatly enhanced rate. This article examines the role played by biogeographical science in the emergence of conservation guidance and makes the case for the recognition of Conservation Biogeography as a key subfield of conservation biology, delimited as: the application of biogeographical principles, theories, and analyses, being those concerned with the distributional dynamics of taxa individually and collectively, to problems concerning the conservation of biodiversity. Conservation biogeography thus encompasses both a substantial body of theory and analysis, and some of the most prominent planning frameworks used in conservation. Considerable advances in conservation guidelines have been made over the last few decades by applying biogeographical methods and principles. Herein we provide a critical review focussed on the sensitivity to assumptions inherent in the applications we examine. In particular, we focus on four inter-related factors: (i) scale dependency (both spatial and temporal); (ii) inadequacies in taxonomic and distributional data (the so-called Linnean and Wallacean shortfalls); (iii) effects of model structure and parameterisation; and (iv) inadequacies of theory. These generic problems are illustrated by reference to studies ranging from the application of historical biogeography, through island biogeography, and complementarity analyses to bioclimatic envelope modelling. There is a great deal of uncertainty inherent in predictive analyses in conservation biogeography, and this area in particular presents considerable challenges. Protected area planning frameworks and their resulting map outputs are amongst the most powerful and influential applications within conservation biogeography, and at the global scale are characterised by the production, by a small number of prominent NGOs, of bespoke schemes, which serve both to mobilise funds and channel efforts in a highly targeted fashion. We provide a simple typology of protected area planning frameworks, with particular reference to the global scale, and provide a brief critique of some of their strengths and weaknesses. Finally, we discuss the importance, especially at regional scales, of developing more responsive analyses and models that integrate pattern (the compositionalist approach) and processes (the functionalist approach) such as range collapse and climate change, again noting the sensitivity of outcomes to starting assumptions. We make the case for the greater engagement of the biogeographical community in a programme of evaluation and refinement of all such schemes to test their robustness and their sensitivity to alternative conservation priorities and goals. [source]

Studies on seismic reduction of story-increased buildings with friction layer and energy-dissipated devices
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 14 2003
Hong-Nan Li
Abstract: A new type of energy-dissipated structural system for existing buildings with story-increased frames is presented and investigated in this paper. In this system the sliding-friction layer between the lowest increased floor of the outer frame structure and the roof of the original building is applied, and energy-dissipated dampers are used for the connections between the columns of the outer frame and each floor of the original building. A shaking table test is performed on the model of the system and the simplified structural model of this system is given. The theory of the non-classical damping approach is introduced to the calculation analyses and compared with test results. The results show that friction and energy-dissipated devices are very effective in reducing the seismic response and dissipating the input energy of the model structure. Finally, the design scheme and dynamic time-history analyses of an existing engineering project are investigated to illustrate the application and advantages of the given method. Copyright © 2003 John Wiley & Sons, Ltd. [source]

A-scab (Apple-scab), a simulation model for estimating risk of Venturia inaequalis primary infections
EPPO BULLETIN, Issue 2 2007
V. Rossi
A-scab (Apple-scab) is a dynamic simulation model for Venturia inaequalis primary infections on apple. It simulates development of pseudothecia, ascospore maturation, discharge, deposition and infection during the season based on hourly data of air temperature, rainfall, relative humidity and leaf wetness. A-scab produces a risk index for each infection period and forecasts the probable periods of symptom appearance. The model was validated under different epidemiological conditions: its outputs were successfully compared with daily spore counts and actual onset and severity of the disease under orchard conditions, and neither corrections nor calibrations have been necessary to adapt the model to different apple-growing areas. Compared to other existing models, A-scab: (i) combines information from the literature with data acquired from specific experiments; (ii) is completely 'open', because both model structure and algorithms have been published and are easily accessible; (iii) is not written in a specific computer language but works on simple-to-use electronic spreadsheets. For these reasons the model can be easily implemented in the computerized systems used by warning services. [source]
Molecular modeling of the dimeric structure of human lipoprotein lipase and functional studies of the carboxyl-terminal domain
FEBS JOURNAL, Issue 18 2002
Yoko Kobayashi
Lipoprotein lipase (LPL) plays a key role in lipid metabolism. Molecular modeling of dimeric LPL was carried out using Insight II based upon the crystal structures of human, porcine, and horse pancreatic lipase. The dimeric model reveals a saddle-shaped structure and the key heparin-binding residues in the amino-terminal domain located on the top of this saddle. The models of two dimeric conformations, a closed, inactive form and an open, active form, differ with respect to how surface-loop positions affect substrate access to the catalytic site. In the closed form, the surface loop covers the catalytic site, which becomes inaccessible to solvent. Large conformational changes in the open form, especially in the loop and carboxyl-terminal domain, allow substrate access to the active site. To dissect the structure-function relationships of the LPL carboxyl-terminal domain, several residues predicted by the model structure to be essential for the functions of heparin binding and substrate recognition were mutagenized. Arg405 plays an important role in heparin binding in the active dimer. Lys413/Lys414 or Lys414 regulates heparin affinity in both monomeric and dimeric forms. To evaluate the prediction that LPL forms a homodimer in a 'head-to-tail' orientation, two inactive LPL mutants, a catalytic site mutant (S132T) and a substrate-recognition mutant (W390A/W393A/W394A), were cotransfected into COS7 cells. Lipase activity could be recovered only when heterodimerization occurred in a head-to-tail orientation. After cotransfection, 50% of the wild-type lipase activity was recovered, indicating that lipase activity is determined by the interaction between the catalytic site on one subunit and the substrate-recognition site on the other. [source]

Model uncertainty in the ecosystem approach to fisheries
FISH AND FISHERIES, Issue 4 2007
Simeon L. Hill
Abstract: Fisheries scientists habitually consider uncertainty in parameter values, but often neglect uncertainty about model structure, an issue of increasing importance as ecosystem models are devised to support the move to an ecosystem approach to fisheries (EAF). This paper sets out pragmatic approaches with which to account for uncertainties in model structure, and we review current ways of dealing with this issue in fisheries and other disciplines. All involve considering a set of alternative models representing different structural assumptions, but differ in how those models are used. The models can be asked to identify bounds on possible outcomes, find management actions that will perform adequately irrespective of the true model, find management actions that best achieve one or more objectives given weights assigned to each model, or formalize hypotheses for evaluation through experimentation. Data availability is likely to limit the use of approaches that involve weighting alternative models in an ecosystem setting, and the cost of experimentation is likely to limit its use. Practical implementation of an EAF should therefore be based on management approaches that acknowledge the uncertainty inherent in model predictions and are robust to it. Model results must be presented in ways that represent the risks and trade-offs associated with alternative actions and the degree of uncertainty in predictions. This presentation should not disguise the fact that, in many cases, estimates of model uncertainty may be based on subjective criteria. The problem of model uncertainty is far from unique to fisheries, and a dialogue among fisheries modellers and modellers from other scientific communities will therefore be helpful. [source]
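Two of the uses listed above, weighting alternative models and seeking actions that perform adequately under any model, can be illustrated with a toy decision table; the utilities and weights below are invented for the example and carry no fisheries meaning.

```python
import numpy as np

# Illustrative comparison of two ways to use a set of alternative models:
# (a) weight the models and maximize weighted expected utility;
# (b) pick the action that performs adequately under the worst-case model.
utility = np.array([        # rows: candidate management actions
    [0.9, 0.2, 0.5],        # columns: alternative model structures
    [0.6, 0.6, 0.6],
    [0.4, 0.8, 0.3],
])
weights = np.array([0.5, 0.3, 0.2])   # e.g. derived from predictive skill

best_weighted = np.argmax(utility @ weights)      # model-averaged choice
best_robust = np.argmax(utility.min(axis=1))      # maximin (robust) choice
print("weighted-best action:", best_weighted, " robust action:", best_robust)
```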
Diagnostic evaluation of conceptual rainfall–runoff models using temporal clustering
HYDROLOGICAL PROCESSES, Issue 20 2010
N. J. de Vos
Abstract: Given the structural shortcomings of conceptual rainfall–runoff models and the common use of time-invariant model parameters, these parameters can be expected to represent broader aspects of the rainfall–runoff relationship than merely the static catchment characteristics that they are commonly supposed to quantify. In this article, we relax the common assumption of time-invariance of parameters, and instead seek signature information about the dynamics of model behaviour and performance. We do this by using a temporal clustering approach to identify periods of hydrological similarity, allowing the model parameters to vary over the clusters found in this manner, and calibrating these parameters simultaneously. The diagnostic information inferred from these calibration results, based on the patterns in the parameter sets of the various clusters, is used to enhance the model structure. This approach shows how diagnostic model evaluation can be used to combine information from the data and the functioning of the hydrological model in a useful manner. Copyright © 2010 John Wiley & Sons, Ltd. [source]
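A rough sketch of the clustering step, assuming daily forcing data are available as feature vectors; the features, cluster count and the choice of k-means are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch of the temporal-clustering idea: group time steps into
# hydrologically similar periods, then let model parameters vary by cluster.
rng = np.random.default_rng(3)
n = 730                                   # two years of daily data
features = np.column_stack([
    rng.gamma(2.0, 3.0, n),               # rainfall (placeholder)
    10 + 5 * np.sin(np.arange(n) * 2 * np.pi / 365),   # temperature proxy
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

# In calibration, each cluster k would receive its own parameter vector,
# with all vectors fitted simultaneously against observed flow.
for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} days")
```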
Towards integrating tracer studies in conceptual rainfall-runoff models: recent insights from a sub-arctic catchment in the Cairngorm Mountains, Scotland
HYDROLOGICAL PROCESSES, Issue 2 2003
Chris Soulsby
Abstract: Hydrochemical tracers (alkalinity and silica) were used in an end-member mixing analysis (EMMA) of runoff sources in the 10 km² Allt a' Mharcaidh catchment. A three-component mixing model was used to separate the hydrograph and estimate, to a first approximation, the range of likely contributions of overland flow, shallow subsurface storm flow, and groundwater to the annual hydrograph. A conceptual, catchment-scale rainfall-runoff model (DIY) was also used to separate the annual hydrograph into an equivalent set of flow paths. The two approaches produced independent representations of catchment hydrology that exhibited reasonable agreement. This showed the dominance of overland flow in generating storm runoff and the important role of groundwater inputs throughout the hydrological year. Moreover, DIY was successfully adapted to simulate stream chemistry (alkalinity) at daily time steps. Sensitivity analysis showed that whilst a distinct groundwater source at the catchment scale could be identified, there was considerable uncertainty in differentiating between overland flow and subsurface storm flow in both the EMMA and DIY applications. Nevertheless, the study indicated that the complementary use of tracer analysis in EMMA can increase the confidence in conceptual model structure. However, conclusions are restricted to the specific spatial and temporal scales examined. Copyright © 2003 John Wiley & Sons, Ltd. [source]
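The three-component separation reduces, for each stream sample, to a small linear system: two tracer balances plus the requirement that the fractions sum to one. The concentrations below are invented for illustration and are not the Allt a' Mharcaidh values.

```python
import numpy as np

# End-member mixing sketch: with two conservative tracers plus mass balance,
# the fractions of three runoff sources in streamflow follow from a 3x3 system.
end_members = np.array([
    #  overland  subsurface  groundwater
    [    20.0,      60.0,      180.0],   # alkalinity of each end-member
    [     1.0,       3.0,        6.0],   # silica of each end-member
    [     1.0,       1.0,        1.0],   # mass balance (fractions sum to 1)
])
stream = np.array([95.0, 3.8, 1.0])      # observed stream concentrations

fractions = np.linalg.solve(end_members, stream)
print(dict(zip(["overland", "subsurface", "groundwater"], fractions.round(2))))
```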
Welfare-improving adverse selection in credit markets
INTERNATIONAL ECONOMIC REVIEW, Issue 4 2002
James Vercammen
A model of simultaneous adverse selection and moral hazard in a competitive credit market is developed and used to show that aggregate borrower welfare may be higher in the combined case than in the moral-hazard-only case. Adverse selection can be welfare improving because in the pooling equilibrium of the combined model, high-quality borrowers cross-subsidize low-quality borrowers. The cross-subsidization reduces the overall moral hazard effort effects, and the resulting gain in welfare may more than offset the welfare loss stemming from distorted investment choices. The analysis focuses on pooling equilibria because the model structure precludes separating equilibria. [source]

A constitutive model for the dynamic and high-pressure behaviour of a propellant-like material: Part I: Experimental background and general structure of the model
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 6 2001
Hervé Trumel
Abstract: This paper is the first part of a work that aims at developing a mechanical model for the behaviour of propellant-like materials under high confining pressure and strain rate. The behaviour of a typical material is investigated experimentally. Several microstructural deformation processes are identified and correlated with loading conditions. The resulting behaviour is complex, non-linear, and characterized by multiple couplings. The general structure of a relevant model is sought using a thermodynamic framework. A viscoelastic-viscoplastic-compaction model structure is derived under suitable simplifying assumptions, in the framework of finite, though moderate, strains. Model development, identification and numerical applications are given in the companion paper. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Nonlinear adaptive tracking-control synthesis for functionally uncertain systems
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 2 2010
Zenon Zwierzewicz
The paper is concerned with the problem of adaptive tracking system control synthesis. It is assumed that a nonlinear, feedback linearizable object dynamics (model structure) is (partially) unknown and some of its nonlinear characteristics can be approximated by a sort of functional approximator. It has been proven that proportional state feedback plus parameter adaptation are able to assure its asymptotic stability. This form of controller permits online compensation of unknown model nonlinearities and exogenous disturbances, which results in satisfactory tracking performance. An interesting feature of the system is that the whole process control is performed without requisite asymptotic convergence of approximator parameters to the postulated 'true' values. It has been noticed that the parameters play rather the role of slack variables on which potential errors (that otherwise would affect the state variables) accumulate. The system's performance has been tested through Matlab/Simulink simulations of a ship path-following problem. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Hybrid kernel learning via genetic optimization for TS fuzzy system identification
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 1 2010
Wei Li
Abstract: This paper presents a new TS fuzzy system identification approach based on hybrid kernel learning and an improved genetic algorithm (GA). Structure identification is achieved by using support vector regression (SVR), in which a hybrid kernel function is adopted to improve regression performance. For multiple-parameter selection of SVR, the proposed GA is adopted to speed up the search process and guarantee the least number of support vectors. As a result, a concise model structure can be determined by these obtained support vectors. Then, the premise parameters of fuzzy rules can be extracted from the results of SVR, and the consequent parameters can be optimized by the least-square method. Simulation results show that the resulting fuzzy model not only achieves satisfactory accuracy, but also takes on good generalization capability. Copyright © 2008 John Wiley & Sons, Ltd. [source]
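A hybrid kernel of the kind described can be sketched as a weighted sum of an RBF and a polynomial kernel passed to an SVR as a callable; the weight, hyper-parameters and toy data below are fixed by hand purely for illustration, whereas the paper tunes them with a genetic algorithm.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

# Hybrid-kernel SVR sketch: a weighted combination of RBF and polynomial kernels.
w, gamma, degree = 0.7, 0.5, 2

def hybrid_kernel(X, Y):
    return w * rbf_kernel(X, Y, gamma=gamma) + (1 - w) * polynomial_kernel(X, Y, degree=degree)

rng = np.random.default_rng(4)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.normal(size=200)

model = SVR(kernel=hybrid_kernel, C=10.0, epsilon=0.01).fit(X, y)
# Few support vectors correspond to a concise rule base in the fuzzy model.
print("support vectors:", len(model.support_))
```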
Real-time identification of vehicle chassis dynamics using a novel reparameterization based on sensitivity invariance
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 2 2004
S. Brennan
Abstract: This work presents a novel methodology to identify model parameters using the concept of sensitivity invariance. Within many classical system representations, relationships between Bode parameter sensitivities may exist that are not explicitly accounted for by the formal system model. These relationships, called sensitivity invariances, explicitly limit the possible parameter variation of the system model to a small subspace of the possible parameter gradients. By constraining the parameter identification or adaptation to a model structure with uncoupled parameter sensitivities, a more efficient identification can be obtained at a reduced computational and modelling cost. As illustration, an identification method using sensitivity invariance is demonstrated on an experimental problem to identify, in real time, a time-varying tire parameter associated with the chassis dynamics of passenger vehicles at highway speeds. The results are validated with simulations as well as an experimental implementation on a research vehicle driven under changing road conditions. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Addressing the unanswered questions in global water policy: a methodology framework
IRRIGATION AND DRAINAGE, Issue 1 2003
Charlotte de Fraiture
Keywords: global water and food demand and supply 1995–2025; global modelling; global water policy; projections for 2025
Abstract: Are the available water resources sufficient to produce food for the growing world population while at the same time meeting increasing municipal, industrial and environmental requirements? Projections for the year 2025, presented by different research groups at the second World Water Forum in The Hague, show an increase in global agricultural water use ranging from 4 to 17%. Estimates for the growth of total withdrawals, including domestic and industrial sectors, vary from 22 to 32%. This range is the result of differences in model structure and assumptions. Although these analyses were instrumental in raising awareness concerning the extent of present and future water scarcity problems, they raise many questions, which remain largely unanswered. The questions relate to the impact of water- and food-related policies on global and regional water scarcity, food production, environment and livelihoods through the year 2025. The International Food Policy Research Institute (IFPRI) and the International Water Management Institute (IWMI) embarked on a joint modeling exercise to address these questions. This paper lays out the issues and discusses the methodology. During the 18th ICID Congress in July 2002 at Montreal, preliminary results will be presented. Copyright © 2003 John Wiley & Sons, Ltd. [source]
The equilibrium assumption in estimating the parameters of metapopulation models
JOURNAL OF ANIMAL ECOLOGY, Issue 1 2000
Atte Moilanen
1. The construction of a predictive metapopulation model includes three steps: the choice of factors affecting metapopulation dynamics, the choice of model structure, and finally parameter estimation and model testing.
2. Unless the assumption is made that the metapopulation is at stochastic quasi-equilibrium, and unless the method of parameter estimation uses that assumption, estimates from a limited amount of data will usually predict a trend in metapopulation size.
3. This implicit estimation of a trend occurs because extinction-colonization stochasticity, possibly amplified by regional stochasticity, leads to unequal numbers of observed extinction and colonization events during a short study period.
4. Metapopulation models, such as those based on the logistic regression model, that rely on observed population turnover events in parameter estimation are sensitive to the implicit estimation of a trend.
5. A new parameter estimation method, based on Monte Carlo inference for statistically implicit models, allows an explicit decision about whether metapopulation quasi-stability is assumed or not.
6. Our confidence in metapopulation model parameter estimates that have been produced from only a few years of data is decreased by the need to know, before parameter estimation, whether the metapopulation is in a quasi-stable state or not.
7. The choice of whether metapopulation stability is assumed or not in parameter estimation should be made consciously. Typical data sets cover only a few years and rarely allow a statistical test of a possible trend. While making the decision about stability one should consider any information about the landscape history and species and metapopulation characteristics. [source]

The cross-correlation function: main properties and first applications
JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 2 2010
Benedetta Carrozzini
When a model structure, and more generally a model electron density ρM(r), is available, its cross-correlation function C(u) with the unknown true structure ρ(r) cannot be exactly calculated. A useful approximation of C(u) is obtained by replacing exp[i(φh − φMh)] by its expected value. In this case C′(u), a potentially useful approximation of the function C(u), is obtained. In this paper the main crystallographic properties of the functions C(u) and C′(u) are established. It is also shown that such functions may be useful for the success of the phasing process. [source]

Evaluation of physiologically based pharmacokinetic models for use in risk assessment
JOURNAL OF APPLIED TOXICOLOGY, Issue 3 2007
Weihsueh A. Chiu
Abstract: Physiologically based pharmacokinetic (PBPK) models are sophisticated dosimetry models that offer great flexibility in modeling exposure scenarios for which there are limited data. This is particularly of relevance to assessing human exposure to environmental toxicants, which often requires a number of extrapolations across species, route, or dose levels. The continued development of PBPK models ensures that regulatory agencies will increasingly experience the need to evaluate available models for their application in risk assessment. To date, there are few published criteria or well-defined standards for evaluating these models. Herein, important considerations for evaluating such models are described. The evaluation of PBPK models intended for risk assessment applications should include a consideration of: model purpose, model structure, mathematical representation, parameter estimation, computer implementation, predictive capacity and statistical analyses. Model purpose and structure require qualitative checks on the biological plausibility of a model. Mathematical representation, parameter estimation and computer implementation involve an assessment of the coding of the model, as well as the selection and justification of the physical, physicochemical and biochemical parameters chosen to represent a biological organism. Finally, the predictive capacity and sensitivity, variability and uncertainty of the model are analysed so that the applicability of a model for risk assessment can be determined. Published in 2007 by John Wiley & Sons, Ltd. [source]

Data-driven identification of group dynamics for motion prediction and control
JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 6-7 2008
Mac Schwager
A distributed model structure for representing groups of coupled dynamic agents is proposed, and the least-squares method is used for fitting model parameters based on measured position data. The difference equation model embodies a minimalist approach, incorporating only factors essential to the movement and interaction of physical bodies. The model combines effects from an agent's inertia, interactions between agents, and interactions between each agent and its environment. Global Positioning System tracking data were collected in field experiments from a group of 3 cows and a group of 10 cows over the course of several days using custom-designed, head-mounted sensor boxes. These data are used with the least-squares method to fit the model to the cow groups. The modeling technique is shown to capture overall characteristics of the group as well as attributes of individual group members. Applications to livestock management are described, and the potential for surveillance, prediction, and control of various kinds of groups of dynamic agents is suggested. © 2008 Wiley Periodicals, Inc. [source]
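The least-squares fit of such a difference-equation model can be sketched as follows; the model form (inertia, attraction to the group centroid, constant drift), the one-dimensional tracks and all coefficients are assumptions made for the example, not the paper's exact formulation.

```python
import numpy as np

# Sketch of fitting a difference-equation group model by least squares.
# Assumed model (illustrative only):
#   x_i[t+1] - x_i[t] = a*(x_i[t] - x_i[t-1]) + b*(mean_j x_j[t] - x_i[t]) + c
# i.e. inertia, attraction to the group centroid, and a constant drift.
rng = np.random.default_rng(5)
T, N = 200, 3
x = np.cumsum(rng.normal(size=(T, N)), axis=0)      # stand-in 1-D GPS tracks

rows, targets = [], []
for t in range(1, T - 1):
    for i in range(N):
        inertia = x[t, i] - x[t - 1, i]
        attract = x[t].mean() - x[t, i]
        rows.append([inertia, attract, 1.0])
        targets.append(x[t + 1, i] - x[t, i])

(a, b, c), *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
print(f"inertia a={a:+.3f}  attraction b={b:+.3f}  drift c={c:+.3f}")
```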
Characterizing, Propagating, and Analyzing Uncertainty in Life-Cycle Assessment: A Survey of Quantitative Approaches
JOURNAL OF INDUSTRIAL ECOLOGY, Issue 1 2007
Shannon M. Lloyd
Summary: Life-cycle assessment (LCA) practitioners build models to quantify resource consumption, environmental releases, and potential environmental and human health impacts of product systems. Most often, practitioners define a model structure, assign a single value to each parameter, and build deterministic models to approximate environmental outcomes. This approach fails to capture the variability and uncertainty inherent in LCA. To make good decisions, decision makers need to understand the uncertainty in and divergence between LCA outcomes for different product systems. Several approaches for conducting LCA under uncertainty have been proposed and implemented. For example, Monte Carlo simulation and fuzzy set theory have been applied in a limited number of LCA studies. These approaches are well understood and are generally accepted in quantitative decision analysis. But they do not guarantee reliable outcomes. A survey of approaches used to incorporate quantitative uncertainty analysis into LCA is presented. The suitability of each approach for providing reliable outcomes and enabling better decisions is discussed. Approaches that may lead to overconfident or unreliable results are discussed and guidance for improving uncertainty analysis in LCA is provided. [source]
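Monte Carlo propagation, one of the approaches surveyed, amounts to sampling each inventory parameter from a distribution instead of assigning it a single value. The quantity, distributions and numbers below are invented for a toy example and do not come from the article.

```python
import numpy as np

# Monte Carlo uncertainty propagation for a toy LCA quantity:
# emissions = electricity_use * grid_intensity + transport_km * truck_intensity
rng = np.random.default_rng(6)
n = 100_000

electricity = rng.normal(120, 15, n)          # kWh per functional unit
grid = rng.lognormal(np.log(0.5), 0.2, n)     # kg CO2e per kWh
transport = rng.normal(300, 40, n)            # km per functional unit
truck = rng.lognormal(np.log(0.1), 0.15, n)   # kg CO2e per km

emissions = electricity * grid + transport * truck

lo, med, hi = np.percentile(emissions, [2.5, 50, 97.5])
print(f"median {med:.0f} kg CO2e, 95% interval [{lo:.0f}, {hi:.0f}]")
```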
Measurement based modeling and control of bimodal particle size distribution in batch emulsion polymerization
AICHE JOURNAL, Issue 8 2010
Mazen Alamir
Abstract: In this article, a novel modeling approach is proposed for bimodal particle size distribution (PSD) control in batch emulsion polymerization. The modeling approach is based on a behavioral model structure that captures the dynamics of PSD. The parameters of the resulting model can be easily identified using a limited number of experiments. The resulting model can then be incorporated in a simple learning scheme to produce a desired bimodal PSD while compensating for model mismatch and/or physical parameter variations using very simple updating rules. © 2010 American Institute of Chemical Engineers AIChE J, 2010 [source]

β-Hairpin folding and stability: molecular dynamics simulations of designed peptides in aqueous solution
JOURNAL OF PEPTIDE SCIENCE, Issue 9 2004
Clara M. Santiveri
Abstract: The structural properties of a 10-residue and a 15-residue peptide in aqueous solution were investigated by molecular dynamics simulation. The two designed peptides, SYINSDGTWT and SESYINSDGTWTVTE, had been studied previously by NMR at 278 K and the resulting model structures were classified as 3:5 β-hairpins with a type I + G1 β-bulge turn. In simulations at 278 K, starting from the NMR model structure, the 3:5 β-hairpin conformers proved to be stable over the time period evaluated (30 ns). Starting from an extended conformation, simulations of the decapeptide at 278 K, 323 K and 353 K were also performed to study folding. Over the relatively short time scales explored (30 ns at 278 K and 323 K, 56 ns at 353 K), folding to the 3:5 β-hairpin could only be observed at 353 K. At this temperature, the collapse to β-hairpin-like conformations is very fast. The conformational space accessible to the peptide is entirely dominated by loop structures with different degrees of β-hairpin character. The transitions between different types of ordered loops and β-hairpins occur through two unstructured loop conformations stabilized by a single side-chain interaction between Tyr2 and Trp9, which facilitates the changes of the hydrogen-bond register. In agreement with previous experimental results, β-hairpin formation is initially driven by the bending propensity of the turn segment. Nevertheless, the fine organization of the turn region appears to be a late event in the folding process. Copyright © 2004 European Peptide Society and John Wiley & Sons, Ltd. [source]

Substituent effects on ion complexation of para-tert-butylcalix[4]arene esters
JOURNAL OF PHYSICAL ORGANIC CHEMISTRY, Issue 11 2006
Márcio Lazzarotto
Phenoxy-carboxy-methoxy-p-tert-butylcalix[4]arene esters were synthesized in order to evaluate the role of electronic parameters in the complexation of alkaline metal cations. Extraction constants of metal picrates to the organic phase were determined. Plots of log(KR/KH) against Hammett σ and σ+ gave good linear correlations. The best correlations with σ were obtained for K+ and Rb+, while the best correlations with σ+ were obtained for Li+ and Na+. All Hammett plots gave a straight descending line, which is consistent with a dependence on the electronic density of the C=O group. Treatment of the data using the Yukawa–Tsuno equation revealed a variation in the contribution of resonance in the complexation of alkaline metal ions, which is maximum for Na+ and minimum for Rb+. Electronic parameters were calculated for a related acyclic model structure and only the HOMO energy showed a good correlation with log(KR/KH). Copyright © 2007 John Wiley & Sons, Ltd. [source]
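A Hammett-style correlation of the kind reported is a simple linear regression of log(KR/KH) on the substituent constants; the σ values and relative extraction constants below are illustrative numbers, not the measured data from the paper.

```python
import numpy as np

# Hammett-plot sketch: regress log(K_R / K_H) against substituent constants.
sigma = np.array([-0.27, -0.17, 0.00, 0.23, 0.45, 0.78])   # e.g. OMe ... NO2
logK_rel = np.array([0.31, 0.18, 0.00, -0.25, -0.49, -0.86])

rho, intercept = np.polyfit(sigma, logK_rel, 1)             # slope = reaction constant
pred = rho * sigma + intercept
r2 = 1 - np.sum((logK_rel - pred) ** 2) / np.sum((logK_rel - logK_rel.mean()) ** 2)
print(f"reaction constant rho = {rho:+.2f}, R^2 = {r2:.3f}")
# A descending line (negative rho) indicates that electron-withdrawing
# substituents weaken complexation, consistent with the abstract.
```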
A Streamflow Forecasting Framework using Multiple Climate and Hydrological Models
JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 4 2009
Paul J. Block
Abstract: Water resources planning and management efficacy is subject to capturing inherent uncertainties stemming from climatic and hydrological inputs and models. Streamflow forecasts, critical in reservoir operation and water allocation decision making, fundamentally contain uncertainties arising from assumed initial conditions, model structure, and modeled processes. Accounting for these propagating uncertainties remains a formidable challenge. Recent enhancements in climate forecasting skill and hydrological modeling serve as an impetus for further pursuing models and model combinations capable of delivering improved streamflow forecasts. However, little consideration has been given to methodologies that include coupling both multiple climate and multiple hydrological models, increasing the pool of streamflow forecast ensemble members and accounting for cumulative sources of uncertainty. The framework presented here proposes integration and offline coupling of global climate models (GCMs), multiple regional climate models, and numerous water balance models to improve streamflow forecasting through generation of ensemble forecasts. For demonstration purposes, the framework is imposed on the Jaguaribe basin in northeastern Brazil for a hindcast of 1974-1996 monthly streamflow. The ECHAM 4.5 and the NCEP/MRF9 GCMs and regional models, including dynamical and statistical models, are integrated with the ABCD and Soil Moisture Accounting Procedure water balance models. Precipitation hindcasts from the GCMs are downscaled via the regional models and fed into the water balance models, producing streamflow hindcasts. Multi-model ensemble combination techniques include pooling, linear regression weighting, and a kernel density estimator to evaluate streamflow hindcasts; the latter technique exhibits superior skill compared with any single coupled model ensemble hindcast. [source]
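The three combination techniques named above can be sketched on synthetic hindcasts; the ensembles, the skill-based weighting rule and the reference value below are invented for illustration and stand in for the regression weighting and verification data used in the study.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Sketch of combining streamflow hindcasts from several model chains.
rng = np.random.default_rng(7)
ensembles = [rng.normal(mu, sd, 50)
             for mu, sd in [(100, 20), (120, 30), (90, 15), (110, 25)]]

pooled = np.concatenate(ensembles)            # (1) simple pooling
reference = 105.0                             # placeholder verification value

# (2) weighting: favour models whose ensemble mean sits closer to the
# reference (a crude stand-in for regression-based weighting).
err = np.array([abs(e.mean() - reference) for e in ensembles])
weights = (1 / err) / (1 / err).sum()
weighted_mean = sum(w * e.mean() for w, e in zip(weights, ensembles))

# (3) kernel density estimate of the pooled ensemble -> full forecast pdf.
pdf = gaussian_kde(pooled)
grid = np.linspace(pooled.min(), pooled.max(), 200)
print(f"pooled mean {pooled.mean():.1f}, weighted mean {weighted_mean:.1f}, "
      f"pdf mode ~ {grid[np.argmax(pdf(grid))]:.1f}")
```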
Structural and parameter uncertainty in Bayesian cost-effectiveness models
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 2 2010
Christopher H. Jackson
Summary: Health economic decision models are subject to various forms of uncertainty, including uncertainty about the parameters of the model and about the model structure. These uncertainties can be handled within a Bayesian framework, which also allows evidence from previous studies to be combined with the data. As an example, we consider a Markov model for assessing the cost-effectiveness of implantable cardioverter defibrillators. Using Markov chain Monte Carlo posterior simulation, uncertainty about the parameters of the model is formally incorporated in the estimates of expected cost and effectiveness. We extend these methods to include uncertainty about the choice between plausible model structures. This is accounted for by averaging the posterior distributions from the competing models using weights that are derived from the pseudo-marginal-likelihood and the deviance information criterion, which are measures of expected predictive utility. We also show how these cost-effectiveness calculations can be performed efficiently in the widely used software WinBUGS. [source]

Simple means to improve the interpretability of regression coefficients
METHODS IN ECOLOGY AND EVOLUTION, Issue 2 2010
Holger Schielzeth
Summary
1. Linear regression models are an important statistical tool in evolutionary and ecological studies. Unfortunately, these models often yield some uninterpretable estimates and hypothesis tests, especially when models contain interactions or polynomial terms. Furthermore, the standard errors for treatment groups, although often of interest for including in a publication, are not directly available in a standard linear model.
2. Centring and standardization of input variables are simple means to improve the interpretability of regression coefficients. Further, refitting the model with a slightly modified model structure allows extracting the appropriate standard errors for treatment groups directly from the model.
3. Centring will make main effects biologically interpretable even when involved in interactions and thus avoids the potential misinterpretation of main effects. This also applies to the estimation of linear effects in the presence of polynomials. Categorical input variables can also be centred and this sometimes assists interpretation.
4. Standardization (z-transformation) of input variables results in the estimation of standardized slopes or standardized partial regression coefficients. Standardized slopes are comparable in magnitude within models as well as between studies. They have some advantages over partial correlation coefficients and are often the more interesting standardized effect size.
5. The thoughtful removal of intercepts or main effects allows extracting treatment means or treatment slopes and their appropriate standard errors directly from a linear model. This provides a simple alternative to the more complicated calculation of standard errors from contrasts and main effects.
6. The simple methods presented here put the focus on parameter estimation (point estimates as well as confidence intervals) rather than on significance thresholds. They allow fitting complex, but meaningful models that can be concisely presented and interpreted. The presented methods can also be applied to generalised linear models (GLM) and linear mixed models. [source]
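The centring and z-standardization steps can be shown on a small simulated example; the variable names, effect sizes and data below are assumptions made for illustration and are not taken from the article.

```python
import numpy as np

# Centring and z-standardizing predictors so that main effects stay
# interpretable in a model with an interaction term.
rng = np.random.default_rng(8)
n = 500
x1 = rng.normal(20, 5, n)                 # e.g. temperature
x2 = rng.normal(0.5, 0.1, n)              # e.g. resource availability
y = 2 + 0.3 * x1 + 4 * x2 + 0.8 * (x1 - 20) * (x2 - 0.5) + rng.normal(0, 1, n)

def fit(design):                          # ordinary least squares
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta

raw = np.column_stack([np.ones(n), x1, x2, x1 * x2])
z1, z2 = (x1 - x1.mean()) / x1.std(), (x2 - x2.mean()) / x2.std()
std = np.column_stack([np.ones(n), z1, z2, z1 * z2])

# After standardization the main-effect slopes refer to an "average"
# setting of the other predictor and are comparable in magnitude.
print("raw-scale coefficients:         ", fit(raw).round(2))
print("standardized (comparable) betas:", fit(std).round(2))
```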
The difference electron density: a probabilistic reformulation
ACTA CRYSTALLOGRAPHICA SECTION A, Issue 3 2010
Maria Cristina Burla
The joint probability distribution function P(E, Ep), where E and Ep are the normalized structure factors of the target and of a model structure, respectively, is a fundamental tool in crystallographic methods devoted to crystal structure solution. It plays a central role in any attempt at improving phase estimates from a given structure model. More recently the difference electron density ρq = ρ − ρp has been revisited, and methods based on its modifications have started to play an important role in combination with electron density modification approaches. In this paper new coefficients for the difference electron density have been obtained by using the joint probability distribution function P(E, Ep, Eq) and by taking into account both errors in the model and in measurements. The first applications show the correctness of our theoretical approach and the superiority of the new difference Fourier synthesis, particularly when the model is a rough approximation of the target structure. The new and the classic difference syntheses coincide when the model represents the target structure well. [source]

Molecular replacement: the probabilistic approach of the program REMO09 and its applications
ACTA CRYSTALLOGRAPHICA SECTION A, Issue 6 2009
Rocco Caliandro
The method of joint probability distribution functions has been applied to molecular replacement techniques. The rotational search is performed by rotating the reciprocal lattice of the protein with respect to the calculated transform of the model structure; the translation search is performed by fast Fourier transform. Several cases of prior information are studied, both for the rotation and for the translation step: e.g. the conditional probability density for the rotation or the translation of a monomer is found both in the ab initio case and when the rotation and/or the translation values of other monomers are given. The new approach has been implemented in the program REMO09, which is part of the package for global phasing IL MILIONE [Burla, Caliandro, Camalli, Cascarano, De Caro, Giacovazzo, Polidori, Siliqi & Spagna (2007). J. Appl. Cryst. 40, 609-613]. A large set of test structures has been used for checking the efficiency of the new algorithms, which proved to be significantly robust in finding the correct solutions and in discriminating them from noise. An important design concept is the high degree of automatism: REMO09 is often capable of providing a reliable model of the target structure without any user intervention. [source]

Identification of overlapping but distinct cAMP and cGMP interaction sites with cyclic nucleotide phosphodiesterase 3A by site-directed mutagenesis and molecular modeling based on crystalline PDE4B
PROTEIN SCIENCE, Issue 8 2001
Wei Zhang
Abstract: Cyclic nucleotide phosphodiesterase 3A (PDE3A) hydrolyzes cAMP to AMP, but is competitively inhibited by cGMP due to a low kcat despite a tight Km. Cyclic AMP elevation is known to inhibit all pathways of platelet activation, and thus regulation of PDE3 activity is significant. Although cGMP elevation will inhibit platelet function, the major action of cGMP in platelets is to elevate cAMP by inhibiting PDE3A. To investigate the molecular details of how cGMP, a similar but not identical molecule to cAMP, behaves as an inhibitor of PDE3A, we constructed a molecular model of the catalytic domain of PDE3A based on homology to the recently determined X-ray crystal structure of PDE4B. Based on the excellent fit of this model structure, we mutated nine amino acids in the putative catalytic cleft of PDE3A to alanine using site-directed mutagenesis. Six of the nine mutants (Y751A, H840A, D950A, F972A, Q975A, and F1004A) significantly decreased catalytic efficiency, and had kcat/Km less than 10% of the wild-type PDE3A using cAMP as substrate. Mutants N845A, F972A, and F1004A showed a 3- to 12-fold increase of Km for cAMP. Four mutants (Y751A, H840A, D950A, and F1004A) had a 9- to 200-fold increase of Ki for cGMP in comparison to the wild-type PDE3A. Studies of these mutants and our previous study identified two groups of amino acids: E866 and F1004 contribute commonly to both cAMP and cGMP interactions, while N845, E971, and F972 are unique for cAMP and the residues Y751, H836, H840, and D950 interact with cGMP. Therefore, our results provide biochemical evidence that cGMP interacts with the active site residues differently from cAMP. [source]