Dynamic Models


Kinds of Dynamic Models

  • nonlinear dynamic models

  • Selected Abstracts

    Solving, Estimating, and Selecting Nonlinear Dynamic Models Without the Curse of Dimensionality

    ECONOMETRICA, Issue 2 2010
    Viktor Winschel
We present a comprehensive framework for Bayesian estimation of structural nonlinear dynamic economic models on sparse grids to overcome the curse of dimensionality for approximations. We apply sparse grids to a global polynomial approximation of the model solution, to the quadrature of integrals arising as rational expectations, and to three new nonlinear state space filters which speed up the sequential importance resampling particle filter. The posterior of the structural parameters is estimated by a new Metropolis-Hastings algorithm with mixing parallel sequences. The parallel extension improves the global maximization property of the algorithm, simplifies the parameterization for an appropriate acceptance ratio, and allows a simple implementation of the estimation on parallel computers. Finally, we provide all algorithms in the open source software JBendge for the solution and estimation of a general class of models. [source]
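The parallel-sequence idea can be sketched with a toy random-walk sampler. The target density, step size, and chain count below are illustrative assumptions, not the JBendge implementation: running several chains from dispersed starting points is what improves the global maximization property mentioned in the abstract.

```python
import math
import random

def log_target(x):
    # Illustrative target: standard normal log-density (up to a constant).
    return -0.5 * x * x

def mh_chains(n_chains=4, n_steps=2000, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings run as several parallel sequences."""
    rng = random.Random(seed)
    # Dispersed starting points help the sampler find the global mode.
    chains = [[rng.uniform(-5, 5)] for _ in range(n_chains)]
    for _ in range(n_steps):
        for chain in chains:
            x = chain[-1]
            proposal = x + rng.gauss(0.0, step)
            # Accept with probability min(1, target(proposal)/target(x)).
            if math.log(rng.random()) < log_target(proposal) - log_target(x):
                chain.append(proposal)
            else:
                chain.append(x)
    return chains

chains = mh_chains()
pooled = [x for chain in chains for x in chain[500:]]  # discard burn-in
mean = sum(pooled) / len(pooled)
var = sum((x - mean) ** 2 for x in pooled) / len(pooled)
```

Pooling draws across chains, as above, also makes the acceptance-rate tuning less sensitive to any single unlucky starting point.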

    Dynamic Models in Space and Time

    J. Paul Elhorst
This paper presents a first-order autoregressive distributed lag model in both space and time. It is shown that this model encompasses a wide range of simpler models frequently used in the analysis of space-time data, as well as models that better fit the data and have never been used before. A framework is developed to determine which model is the most likely candidate to study space-time data. As an application, the relationship between the labor force participation rate and the unemployment rate is estimated using regional data for Germany, France, and the United Kingdom derived from Eurostat for 1983-1993. [source]
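A minimal simulation sketch of such a space-time model; the coefficient names (rho, tau, eta), their values, and the three-region weight matrix are illustrative assumptions, not Elhorst's notation or data:

```python
import random

def simulate_space_time_ar(W, rho=0.4, tau=0.5, eta=-0.2, T=50, seed=1):
    """Simulate y_t = rho*W*y_t + tau*y_{t-1} + eta*W*y_{t-1} + eps_t,
    a first-order autoregressive distributed lag model in space and time.

    W is a row-normalised spatial weight matrix; the simultaneous spatial
    term rho*W*y_t is resolved by fixed-point iteration, valid for |rho|<1.
    """
    rng = random.Random(seed)
    n = len(W)
    y = [0.0] * n
    series = [y]
    for _ in range(T):
        Wy = [sum(W[i][j] * y[j] for j in range(n)) for i in range(n)]
        b = [tau * y[i] + eta * Wy[i] + rng.gauss(0, 0.1) for i in range(n)]
        # Solve (I - rho*W) y_new = b by iterating y_new <- b + rho*W*y_new.
        y_new = b[:]
        for _ in range(100):
            y_new = [b[i] + rho * sum(W[i][j] * y_new[j] for j in range(n))
                     for i in range(n)]
        y = y_new
        series.append(y)
    return series

# Three regions on a line; each row of weights sums to one.
W = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 1.0, 0.0]]
series = simulate_space_time_ar(W)
```

Setting rho, tau, or eta to zero recovers the simpler nested models (pure time-series, pure spatial lag, and so on) that the paper's framework discriminates between.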

    Dynamic models allowing for flexibility in complex life histories accurately predict timing of metamorphosis and antipredator strategies of prey

    FUNCTIONAL ECOLOGY, Issue 6 2009
    Andrew D. Higginson
Summary: 1. The development of antipredator defences in the larval stage of animals with complex life cycles is likely to be affected by costs associated with creating and maintaining such defences, because of their impact on the timing of maturation or metamorphosis. 2. Various theoretical treatments have suggested that investment in defence should either increase or decrease with increasing resource availability, but a recent model predicts that investment in defences should be highest at intermediate resource levels and predator densities. 3. Previous models of investment in defence and timing of metamorphosis provide a poor match to empirical data. Here we develop a dynamic state-dependent model of investment in behavioural and morphological defences that enables us to allow flexibility in investment in defences over development, in the timing of metamorphosis, and in the size of the organism at metamorphosis, all of which were absent from previous theory. 4. We show that the inclusion of this flexibility results in different predictions from those of the fixed-investment approach used previously, especially when we allow metamorphosis to occur at the optimal time and state for the organism. 5. Under these more flexible conditions, we predict that morphological defences should be insensitive to resource level, whilst behavioural defences should either increase or decrease with increasing resources depending on the predation risk and the magnitude of the fitness benefits of large size at metamorphosis. 6. Our work provides a formal framework in which we might progress in the study of how the use of antipredator defences is affected by their costs. Most of the predictions of our model are in good accord with empirical results, and can be understood in terms of the underlying biological assumptions. The reasons why simpler models failed to match empirical observations can be explained, and our predictions that are a poor match help to target the circumstances which warrant future study. [source]

    Reduction and identification methods for Markovian control systems, with application to thin film deposition

    Martha A. Gallivan
Abstract Dynamic models of nanometer-scale phenomena often require an explicit consideration of interactions among a large number of atoms or molecules. The corresponding mathematical representation may thus be high dimensional, nonlinear, and stochastic, incompatible with tools in nonlinear control theory that are designed for low-dimensional deterministic equations. We consider here a general class of probabilistic systems that are linear in the state, but whose input enters as a function multiplying the state vector. Model reduction is accomplished by grouping probabilities that evolve together, and truncating states that are unlikely to be accessed. An error bound for this reduction is also derived. A system identification approach that exploits the inherent linearity is then developed, which generates all coefficients in either a full or reduced model. These concepts are then extended to extremely high-dimensional systems, in which kinetic Monte Carlo (KMC) simulations provide the input-output data. This work was motivated by our interest in thin film deposition. We demonstrate the approaches developed in the paper on a KMC simulation of surface evolution during film growth, and use the reduced model to compute optimal temperature profiles that minimize surface roughness. Copyright © 2004 John Wiley & Sons, Ltd. [source]

    Dynamic models for spatiotemporal data

    Jonathan R. Stroud
    We propose a model for non-stationary spatiotemporal data. To account for spatial variability, we model the mean function at each time period as a locally weighted mixture of linear regressions. To incorporate temporal variation, we allow the regression coefficients to change through time. The model is cast in a Gaussian state space framework, which allows us to include temporal components such as trends, seasonal effects and autoregressions, and permits a fast implementation and full probabilistic inference for the parameters, interpolations and forecasts. To illustrate the model, we apply it to two large environmental data sets: tropical rainfall levels and Atlantic Ocean temperatures. [source]
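The Gaussian state space machinery behind the fast implementation can be illustrated with a scalar Kalman filter for a local-level model; in the paper's setting the filtered states would be time-varying regression coefficients, and the noise variances and data below are invented for illustration:

```python
import random

def kalman_local_level(ys, q=0.1, r=1.0):
    """Scalar Kalman filter for the local-level state space model:
        state: x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
        obs:   y_t = x_t + v_t,      v_t ~ N(0, r)
    Returns the filtered state means.
    """
    m, p = 0.0, 1e6  # diffuse prior on the initial state
    filtered = []
    for y in ys:
        p = p + q                 # predict: variance grows by state noise
        k = p / (p + r)           # Kalman gain
        m = m + k * (y - m)       # update mean toward the observation
        p = (1 - k) * p           # update variance
        filtered.append(m)
    return filtered

# Simulate a slowly drifting level observed with noise.
rng = random.Random(0)
truth, x = [], 0.0
for _ in range(200):
    x += rng.gauss(0, 0.1 ** 0.5)
    truth.append(x)
obs = [t + rng.gauss(0, 1.0) for t in truth]
est = kalman_local_level(obs)
```

The same recursions extend to vector states carrying trends, seasonal effects and autoregressions, which is what makes full probabilistic inference for interpolations and forecasts cheap in this framework.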

    With the Future Behind Them: Convergent Evidence From Aymara Language and Gesture in the Crosslinguistic Comparison of Spatial Construals of Time

    Rafael E. Núñez
    Abstract Cognitive research on metaphoric concepts of time has focused on differences between moving Ego and moving time models, but even more basic is the contrast between Ego- and temporal-reference-point models. Dynamic models appear to be quasi-universal cross-culturally, as does the generalization that in Ego-reference-point models, FUTURE IS IN FRONT OF EGO and PAST IS IN BACK OF EGO. The Aymara language instead has a major static model of time wherein FUTURE IS BEHIND EGO and PAST IS IN FRONT OF EGO; linguistic and gestural data give strong confirmation of this unusual culture-specific cognitive pattern. Gestural data provide crucial information unavailable to purely linguistic analysis, suggesting that when investigating conceptual systems both forms of expression should be analyzed complementarily. Important issues in embodied cognition are raised: how fully shared are bodily grounded motivations for universal cognitive patterns, what makes a rare pattern emerge, and what are the cultural entailments of such patterns? [source]


    CRIMINOLOGY, Issue 1 2010
In recent years, criminologists, as well as journalists, have devoted considerable attention to the potential deterrent effect of what is sometimes referred to as "proactive" policing. This policing style entails the vigorous enforcement of laws against relatively minor offenses to prevent more serious crime. The current study examines the effect of proactive policing on robbery rates for a sample of large U.S. cities using an innovative measure developed by Sampson and Cohen (1988). We replicate their cross-sectional analyses using data from 2000 to 2003, a period in which proactive policing is likely to have been more common than in the period of the original study, the early 1980s. We also extend their analyses by estimating a more comprehensive regression model that incorporates additional theoretically relevant predictors. Finally, we advance previous research in this area by using panel data. The cross-sectional analyses replicate prior findings of a negative relationship between proactive policing and robbery rates. In addition, our dynamic models suggest that proactive policing is endogenous to changes in robbery rates. When this feedback between robbery and proactive policing is eliminated, we find even stronger evidence that proactive policing reduces robbery rates. [source]

    A cross-system synthesis of consumer and nutrient resource control on producer biomass

    ECOLOGY LETTERS, Issue 7 2008
    Daniel S. Gruner
Abstract Nutrient availability and herbivory control the biomass of primary producer communities to varying degrees across ecosystems. Ecological theory, individual experiments in many different systems, and system-specific quantitative reviews have suggested that (i) bottom-up control is pervasive but top-down control is more influential in aquatic habitats relative to terrestrial systems and (ii) bottom-up and top-down forces are interdependent, with statistical interactions that synergize or dampen relative influences on producer biomass. We used simple dynamic models to review ecological mechanisms that generate independent vs. interactive responses of community-level biomass. We calibrated these mechanistic predictions with the metrics of factorial meta-analysis and tested their prevalence across freshwater, marine and terrestrial ecosystems with a comprehensive meta-analysis of 191 factorial manipulations of herbivores and nutrients. Our analysis showed that producer community biomass increased with fertilization across all systems, although increases were greatest in freshwater habitats. Herbivore removal generally increased producer biomass in both freshwater and marine systems, but effects were inconsistent on land. With the exception of marine temperate rocky reef systems, which showed positive synergism of nutrient enrichment and herbivore removal, experimental studies showed limited support for statistical interactions between nutrient and herbivory treatments on producer biomass. Top-down control of herbivores, compensatory behaviour of multiple herbivore guilds, spatial and temporal heterogeneity of interactions, and herbivore-mediated nutrient recycling may lower the probability of consistent interactive effects on producer biomass. Continuing studies should expand the temporal and spatial scales of experiments, particularly in understudied terrestrial systems; broaden factorial designs to manipulate independently multiple producer resources (e.g. nitrogen, phosphorus, light), multiple herbivore taxa or guilds (e.g. vertebrates and invertebrates) and multiple trophic levels; and, in addition to measuring producer biomass, assess the responses of species diversity, community composition and nutrient status. [source]

    Directed Matching and Monetary Exchange

    ECONOMETRICA, Issue 3 2003
    Dean Corbae
    We develop a model of monetary exchange where, as in the random matching literature, agents trade bilaterally and not through centralized markets. Rather than assuming they match exogenously and at random, however, we determine who meets whom as part of the equilibrium. We show how to formalize this process of directed matching in dynamic models with double coincidence problems, and present several examples and applications that illustrate how the approach can be used in monetary theory. Some of our results are similar to those in the random matching literature; others differ significantly. [source]

    Parameter estimation accuracy analysis for induction motors

    E. Laroche
Abstract Various analytical dynamic models of induction machines, some of which take magnetic saturation and iron loss into account, are available in the literature. When parameter estimation is required, models must not only be theoretically identifiable but must also allow for accurate parameter estimation. This paper presents a comparison of the parameter estimation accuracies obtained using different models and sets of measurements in the case of steady-state sinusoidal measurements. An explicit expression for the estimation error is established and evaluated with respect to several measurement and modelling errors. The study shows that certain models are better suited to identification than others and that certain sensors yield more accurate estimates than others. Lastly, an optimal experimental design procedure is implemented in order to derive an improved measurement set that leads to reduced estimation errors. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Full-scale study on combustion characteristics of an upholstered chair under different boundary conditions - Part 1: Ignition at the seat center

    FIRE AND MATERIALS, Issue 6 2009
    Q. Y. Xie
Abstract The objective of this work is to investigate the effects of boundary conditions on the combustion characteristics of combustible items in a room. A series of full-scale experiments was carried out in the ISO 9705 fire test room with an upholstered chair at four typical locations: at the middle of a side wall; at the center of the room with the seat facing the door; at the center of the room with the seat facing the inside of the room; and at the room corner. Ignition was achieved with a BS No. 7 wooden crib at the geometric center of the seat surface for each test. Besides the heat release rate (HRR), four thermocouple trees were placed around the chair to monitor detailed temperature distributions during the combustion process. The results indicated that the boundary conditions had some effect on the combustion behavior of a chair in a room. There were clearly two main HRR peaks for the cases of a chair placed against the side wall or at the corner. However, there was only one main HRR peak when the chair was placed at the center of the room, facing either outwards or inwards. In addition, the results for the two center placements indicate that the maximum HRR for the chair seat facing the door (about 829 kW) was larger than that for the chair seat facing the inside of the room (about 641 kW). It was suggested that the complex structure of a chair is also a considerable factor in the effect of boundary conditions on its combustion behavior in an enclosure. Furthermore, the measured temperature distributions around the chair also illustrated the effects of boundary conditions. Although HRR is one of the most important fire parameters, it mainly represents the comprehensive fire behavior of a combustible item. In order to develop more suitable room fire dynamic models, more detailed information, such as the surrounding temperature distributions measured by the thermocouple trees, is useful. Copyright © 2009 John Wiley & Sons, Ltd. [source]

    Dynamic versus static models in cost-effectiveness analyses of anti-viral drug therapy to mitigate an influenza pandemic

    HEALTH ECONOMICS, Issue 5 2010
    Anna K. Lugnér
Abstract Conventional (static) models used in health economics implicitly assume that the probability of disease exposure is constant over time and unaffected by interventions. For transmissible infectious diseases this is not realistic and another class of models is required, so-called dynamic models. This study aims to examine the differences between one dynamic and one static model, estimating the effects of therapeutic treatment with antiviral (AV) drugs during an influenza pandemic in the Netherlands. Specifically, we focus on the sensitivity of the cost-effectiveness ratios to model choice, to the assumed drug coverage, and to the value of several epidemiological factors. Therapeutic use of AV drugs is cost-effective compared with non-intervention, irrespective of which model approach is chosen. The findings further show that: (1) the cost-effectiveness ratio according to the static model is insensitive to the size of a pandemic, whereas the ratio according to the dynamic model increases with the size of a pandemic; (2) according to the dynamic model, the cost per infection and the life-years gained per treatment are not constant but depend on the proportion of cases that are treated; and (3) the age-specific clinical attack rates affect the sensitivity of the cost-effectiveness ratio to model choice. Copyright © 2009 John Wiley & Sons, Ltd. [source]
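The feedback that static models miss can be illustrated with a toy SIR recursion in which treating a fraction of cases lowers onward transmission; all parameter values here are illustrative assumptions, not those of the Dutch pandemic model:

```python
def sir_attack_rate(beta=0.5, gamma=1.0 / 3.0, treat_frac=0.0,
                    treat_effect=0.3, days=500, i0=1e-4):
    """Discrete-day SIR recursion returning the cumulative attack rate.

    Treating a fraction `treat_frac` of cases is assumed (illustratively)
    to cut onward transmission by `treat_effect`. A static model would
    hold the per-person infection risk fixed regardless of this feedback.
    """
    s, i = 1.0 - i0, i0
    eff_beta = beta * (1.0 - treat_frac * treat_effect)
    for _ in range(days):
        new_inf = eff_beta * s * i
        new_rec = gamma * i
        s -= new_inf
        i += new_inf - new_rec
    return 1.0 - s  # fraction ever infected

untreated = sir_attack_rate()
treated = sir_attack_rate(treat_frac=0.5)
```

In the dynamic model the attack rate itself shrinks as coverage rises, so cost per infection averted is not constant, which is precisely finding (2) in the abstract.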

A dynamic analysis of GP visiting in Ireland: 1995-2001

    HEALTH ECONOMICS, Issue 2 2007
    Anne Nolan
Abstract This paper examines the determinants of GP visiting in Ireland, using panel data from the Living in Ireland Survey from 1995 to 2001. While cross-sectional studies provide important information on GP visiting patterns at a certain point in time, with panel data we can also control for unobserved individual heterogeneity, as well as identify whether it is the same individuals who consistently visit their GP year on year, or whether there is more mobility in visiting. We therefore estimate dynamic models of GP utilisation, and attempt to decompose the observed variation in GP visiting into components attributable to observed individual characteristics, unobserved individual heterogeneity and state dependence. Copyright © 2006 John Wiley & Sons, Ltd. [source]

    Large eddy simulation of turbulent channel flow using an algebraic model

    S. Bhushan
Abstract In this paper an algebraic model has been developed from the constitutive equations of the subgrid stresses. This model has an additional term in comparison with the mixed model, which represents the backscatter of energy explicitly. The proposed model thus provides independent modelling of the different energy transfer mechanisms, thereby capturing the effect of subgrid scales more accurately. The model is also found to depict the flow anisotropy better than the linear and mixed models. The energy transfer capability of the model is analysed for isotropic decay and forced isotropic turbulence. The turbulent plane channel flow simulation is performed at three Reynolds numbers, Reτ = 180, 395 and 590, and the results are compared with those of the dynamic model, the Smagorinsky model, and the DNS data. Both the algebraic and dynamic models are in good agreement with the DNS data for the mean flow quantities. However, the algebraic model is found to be more accurate for the turbulence intensities and the higher-order statistics. The capability of the algebraic model to represent backscatter is also demonstrated. Copyright © 2005 John Wiley & Sons, Ltd. [source]
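For reference, the Smagorinsky baseline against which such algebraic models are compared computes the subgrid eddy viscosity as nu_t = (Cs·Δ)² |S| with |S| = sqrt(2 S_ij S_ij). A sketch with an assumed constant Cs = 0.17 and an invented pure-shear velocity gradient:

```python
import math

def smagorinsky_nu_t(grad_u, delta, cs=0.17):
    """Eddy viscosity of the classic Smagorinsky subgrid model.

    grad_u is the 3x3 resolved velocity-gradient tensor du_i/dx_j and
    delta the filter width; cs = 0.17 is an assumed model constant.
    """
    # Symmetric strain-rate tensor S_ij = 0.5*(du_i/dx_j + du_j/dx_i).
    s = [[0.5 * (grad_u[i][j] + grad_u[j][i]) for j in range(3)]
         for i in range(3)]
    s_mag = math.sqrt(2.0 * sum(s[i][j] ** 2
                                for i in range(3) for j in range(3)))
    return (cs * delta) ** 2 * s_mag

# Pure shear du/dy = 100 1/s on a 1 mm filter width.
nu_t = smagorinsky_nu_t([[0.0, 100.0, 0.0],
                         [0.0, 0.0, 0.0],
                         [0.0, 0.0, 0.0]], delta=1e-3)
```

Because nu_t is non-negative by construction, this baseline cannot represent backscatter, which is the gap the abstract's explicit backscatter term addresses.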

Dynamic model of one-cycle control for converters operating in continuous and discontinuous conduction modes

    N. Femia
Abstract In this paper a new dynamic model of one-cycle-controlled converters operating either in continuous or in discontinuous conduction mode (DCM) is introduced. The static and dynamic behaviour is analysed by using sampled-data modelling combined with the small-signal linearization of the average model of the converter's power stage. The proposed model is valid for frequencies up to half the switching frequency and, while the other dynamic models presented in the literature cover continuous conduction mode only, it also gives an accurate prediction of the system's dynamic behaviour in the DCM. The model makes it possible to determine the closed-form expression of the reference-to-output transfer function G of the system, which is a fundamental prerequisite for the design of a conventional output feedback control circuit aimed at improving the dynamic behaviour of the system in response to load variations. In this paper it is also shown that one-cycle control does not work properly in switching converters operating in deep DCM if some specific design constraints are not fulfilled. The theoretical predictions are confirmed by the results of suitable numerical simulations and laboratory experiments on a one-cycle-controlled buck-switching converter. Copyright © 2008 John Wiley & Sons, Ltd. [source]

    Simulation and discrete event optimization for automated decisions for in-queue flights

    D. Dimitrakiev
The paper discusses simulation and optimization of in-queue flights, analyzed as discrete-event systems. Simulation is performed in a platform based on MATLAB program functions and SIMULINK dynamic models. Regime optimization aims to maximize the controllability of the queue and minimize the fuel consumption of each aircraft (AC). Because of mutual preferential independence, a hierarchical additive value function is constructed, consisting of fuzzily estimated parameter value functions and weight coefficients, and a multicriteria decision problem is solved under strict uncertainty. Two optimization algorithms are applied: one that finds the regime that leads to the maximally preferred consequence and another that finds the regime with minimum total fuel consumption among those whose control parameters are set at their most preferred levels. A comparison between the two algorithms is proposed. A scheme describes how the optimization procedures can be used multiple times during the execution of the flight with respect to the occurrence of discrete events. Simulation results are also proposed for the discussed algorithms and procedures. © 2010 Wiley Periodicals, Inc. [source]

Geophysical implications of Izu-Bonin mantle wedge hydration from chemical geodynamic modeling

    ISLAND ARC, Issue 1 2010
    Laura B. Hebert
Abstract Using two-dimensional dynamic models of the Northern Izu-Bonin (NIB) subduction zone, we show that a particular localized low-viscosity (η_LV = 3.3 × 10^19 to 4.0 × 10^20 Pa s), low-density (Δρ ≈ -10 kg/m^3 relative to ambient mantle) geometry within the wedge is required to match surface observations of topography, gravity, and geoid anomalies. The hydration structure resulting in this low-viscosity, low-density geometry develops due to fluid release into the wedge within a depth interval from 150 to 350 km and is consistent with results from coupled geochemical and geodynamic modeling of the NIB subduction system and from previous uncoupled models of the wedge beneath the Japan arcs. The source of the fluids can be either subducting lithospheric serpentinite or stable hydrous phases in the wedge such as serpentine or chlorite. On the basis of this modeling, predictions can be made as to the specific low-viscosity geometries associated with geophysical surface observables for other subduction zones based on regional subduction parameters such as subducting slab age. [source]

    H-methods in applied sciences

    Agnar Höskuldsson
Abstract The author has developed a framework for mathematical modelling within applied sciences. It is characteristic of data from 'nature and industry' that they have reduced rank for inference. This means that full-rank solutions normally do not give satisfactory results. The basic idea of H-methods is to build up the mathematical model in steps by using weighing schemes. Each weighing scheme produces a score and/or a loading vector that is expected to perform a certain task. Optimisation procedures are used to obtain 'the best' solution at each step. At each step, the optimisation is concerned with finding a balance between the estimation task and the prediction task. The name H-methods has been chosen because of the close analogy with the Heisenberg uncertainty inequality; a similar situation is present in modelling data. The mathematical modelling stops when the prediction aspect of the model cannot be improved. H-methods have been applied to a wide range of fields within applied sciences. In each case, the H-methods provide superior solutions compared to the traditional ones. A background for the H-methods is presented. The H-principle of mathematical modelling is explained. It is shown how the principle leads to well-defined optimisation procedures. This is illustrated in the case of linear regression. The H-methods have been applied in different areas: general linear models, nonlinear models, multi-block methods, path modelling, multi-way data analysis, growth models, dynamic models and pattern recognition. Copyright © 2008 John Wiley & Sons, Ltd. [source]

    Neural network-based state prediction for strategy planning of an air hockey robot

    Jung Il Park
    We analyze a neural network implementation for puck state prediction in robotic air hockey. Unlike previous prediction schemes which used simple dynamic models and continuously updated an intercept state estimate, the neural network predictor uses a complex function, computed with data acquired from various puck trajectories, and makes a single, timely estimate of the final intercept state. Theoretically, the network can account for the complete dynamics of the table if all important state parameters are included as inputs, an accurate data training set of trajectories is used, and the network has an adequate number of internal nodes. To develop our neural networks, we acquired data from 1500 no-bounce and 1500 one-bounce puck trajectories, noting only translational state information. Analysis showed that performance of neural networks designed to predict the results of no-bounce trajectories was better than the performance of neural networks designed for one-bounce trajectories. Since our neural network input parameters did not include rotational puck estimates and recent work shows the importance of spin in impact analysis, we infer that adding a spin input to the neural network will increase the effectiveness of state estimates for the one-bounce case. © 2001 John Wiley & Sons, Inc. [source]

    Effect of input excitation on the quality of empirical dynamic models for type 1 diabetes

    AICHE JOURNAL, Issue 5 2009
    Daniel A. Finan
Abstract Accurate prediction of future blood glucose trends has the potential to significantly improve glycemic regulation in type 1 diabetes patients. A model-based controller for an artificial β-cell, for example, would determine the most efficacious insulin dose for the current sampling interval given available input-output data and model predictions of the resultant glucose trajectory. The two inputs most influential to the glucose concentration are bolused insulin and meal carbohydrates, which in practice are often taken simultaneously and in a specified ratio. This linear dependence has adverse effects on the quality of linear dynamic models identified from such data. On the other hand, inputs with greater degrees of excitation may force the subject into extreme hypoglycemia or hyperglycemia, and thus may be clinically unacceptable. Inputs with good excitation that do not endanger the subject are shown to result in models that can predict glucose trends reasonably accurately, 1-2 h ahead. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]

A molecular dynamics study on binding recognition between several 4,5- and 4,6-linked aminoglycosides with A-site RNA

    Shih-Yuan Chen
Abstract A molecular dynamics (MD) simulation has been performed for two sets of aminoglycoside antibiotics bound to an RNA duplex corresponding to the aminoacyl-tRNA decoding site of the 16S rRNA, to characterize the energetics and dynamics of binding for several aminoglycosides. Binding free energy, essential dynamics and hydration analyses have been conducted to characterize the dynamic properties associated with the binding recognition between each set of antibiotics and the RNA duplex. We have built several dynamic models with reasonable binding free energies showing good correlation with the experimental data. We have also conducted a hydration analysis on some long-residency water molecules, detected as the W8 and W49 sites around the U1406·U1495 pair, which are found to be important in binding recognition and in causing some apparent stretch variations of this pair during the dynamic studies. In addition, we find that the hydration sites with long residence times identified between ring III of two 4,6-linked antibiotics (tobramycin and kanamycin) and the phosphate oxygen atoms of G1405/U1406 may be worthy of further exploration for rational drug design. Copyright © 2009 John Wiley & Sons, Ltd. [source]

    Experimental study of feasibility in kinetically-controlled reactive distillation

    AICHE JOURNAL, Issue 2 2005
    Madhura Chiplunkar
Abstract Bifurcation studies predict limited ranges of feasibility for products in certain reactive distillations. These are closely related to the bifurcations in the singular points of dynamic models for simple reactive distillation (isobaric open evaporation with liquid-phase reaction). A new dynamic model is described with constant vapor rate, together with an experimental study of the reactive distillation of acetic acid with isopropanol to produce isopropyl acetate, catalyzed by Amberlyst-15 ion-exchange resin. An experimental apparatus with real-time measurement of liquid compositions based on Fourier transform infrared (FTIR) spectroscopy is described, and used to follow the composition dynamics at several initial conditions and Damköhler numbers (Da). The experimental results match model predictions that show four regions of behavior. For Da ≤ 1, these show a stable node at acetic acid and several other fixed points as saddles. However, near Da ≈ 2, both isopropanol and acetic acid are stable nodes and a quaternary singular point appears. The presence of two stable nodes requires the presence of a distillation boundary and, therefore, a limited feasibility for the bottom product compositions from continuous reactive distillation. For the reaction rates studied, the model predictions are closely consistent with the experimental findings, and are robust to variations in the vapor rate. These experiments are among the first to analyze the dynamics and feasibility in a kinetically controlled reactive distillation and are consistent with previous studies for the reaction equilibrium limit, indicating the formation of a reactive azeotrope. © 2005 American Institute of Chemical Engineers AIChE J, 51: 464-479, 2005 [source]

    Personality Over Time: Methodological Approaches to the Study of Short-Term and Long-Term Development and Change

    Jeremy C. Biesanz
    We consider a variety of recent methods of longitudinal data analysis to examine both short-term and long-term development and change in personality, including mean-level analyses both across and within individuals across time, variance structures across time, and cycles and dynamic models across time. These different longitudinal analyses can address classic as well as new questions in the study of personality and its development. We discuss the linkages among different longitudinal analyses, measurement issues in temporal data, the spacing of assessments, and the levels of generalization and potential insights afforded by different longitudinal analyses. [source]


    Piyapong Jiwattanakulpaisarn
    ABSTRACT This paper uses recent advances in dynamic panel econometrics to examine the impact of highway infrastructure on aggregate county-level employment using data for all 100 North Carolina counties from 1985 through 1997. Results are compared to models that do not take endogeneity of highway investment and dynamics of employment adjustment into account. Fully specified dynamic models are found to give insignificant results compared to these other models. Thus, when these issues are properly modeled, the results show that improvements in highways have no discernible impact on employment. [source]

    Time varying and dynamic models for default risk in consumer loans

    Jonathan Crook
    Summary. We review the incorporation of time varying variables into models of the risk of consumer default. Lenders typically have data which are of a panel format. This allows the inclusion of time varying covariates in models of account level default by including them in survival models, panel models or 'correction factor' models. The choice depends on the aim of the model and the assumptions that can be plausibly made. At the level of the portfolio, Merton-type models have incorporated macroeconomic and latent variables in mixed (factor) models and Kalman filter models whereas reduced form approaches include Markov chains and stochastic intensity models. The latter models have mainly been applied to corporate defaults and considerable scope remains for application to consumer loans. [source]
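The panel-format route described above, time-varying covariates in an account-level default model, can be sketched as a discrete-time hazard fitted by logistic regression on account-month rows. The panel, the macro covariate, and the coefficient values are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated account-month panel: each row is one account observed in one
# month, with a time-varying covariate z (e.g. an unemployment rate) and
# a binary default indicator; defaulted accounts leave the risk set.
n_accounts, n_months = 500, 24
rows = []
for i in range(n_accounts):
    for t in range(n_months):
        z = 0.05 + 0.02 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.005)
        p = 1.0 / (1.0 + np.exp(-(-4.0 + 30.0 * z)))  # true hazard
        d = float(rng.random() < p)
        rows.append((1.0, z, d))
        if d:
            break
X = np.array([(c, z) for c, z, _ in rows])
y = np.array([d for _, _, d in rows])

# Fit the discrete-time hazard by logistic regression (Newton-Raphson).
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)
    H = (X * (p * (1.0 - p))[:, None]).T @ X
    beta += np.linalg.solve(H, grad)
```

Survival-model and panel-model formulations differ in detail, but this person-period logistic regression is the simplest member of the family the abstract surveys.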

    Chain graph models and their causal interpretations

    Steffen L. Lauritzen
    Chain graphs are a natural generalization of directed acyclic graphs and undirected graphs. However, the apparent simplicity of chain graphs belies the subtlety of the conditional independence hypotheses that they represent. There are many simple and apparently plausible, but ultimately fallacious, interpretations of chain graphs that are often invoked, implicitly or explicitly. These interpretations also lead to flawed methods for applying background knowledge to model selection. We present a valid interpretation by showing how the distribution corresponding to a chain graph may be generated from the equilibrium distributions of dynamic models with feed-back. These dynamic interpretations lead to a simple theory of intervention, extending the theory developed for directed acyclic graphs. Finally, we contrast chain graph models under this interpretation with simultaneous equation models which have traditionally been used to model feed-back in econometrics. [source]
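The equilibrium interpretation described above can be simulated directly: variables inside a chain component are updated in turn given their parents and neighbours, and the chain-graph distribution is the long-run distribution of that process. The linear-Gaussian system and its coefficients below are a made-up example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical chain graph X -> {Y, Z} with feed-back between Y and Z
# inside the chain component. Conditioning on the parent X = x, the
# chain-graph distribution of (Y, Z) is the equilibrium of the dynamics.
a, b, c, d = 1.0, 0.5, -1.0, 0.3
x = 2.0
y, z = 0.0, 0.0
ys, zs = [], []
for t in range(50000):
    y = a * x + b * z + rng.normal(0, 0.1)   # update Y given X and Z
    z = c * x + d * y + rng.normal(0, 0.1)   # update Z given X and Y
    if t > 1000:                              # discard burn-in
        ys.append(y)
        zs.append(z)

# Deterministic fixed point: y* = a x + b z*, z* = c x + d y*.
y_star = (a + b * c) * x / (1 - b * d)
z_star = c * x + d * y_star
```

The equilibrium means match the fixed point of the feed-back equations, which is the sense in which the chain-graph distribution is "generated from the equilibrium distributions of dynamic models with feed-back".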

    Optimal auditing in the banking industry

    T. Bosch
    Abstract As a result of the new regulatory prescripts for banks, known as the Basel II Capital Accord, there has been a heightened interest in the auditing process. Our paper considers this issue with a particular emphasis on the auditing of reserves, assets and capital in both a random and non-random framework. The analysis relies on the stochastic dynamic modeling of banking items such as loans, reserves, Treasuries, outstanding debts, bank capital and government subsidies. In this regard, one of the main novelties of our contribution is the establishment of optimal bank reserves and a rate of depository consumption that is of importance during a (random) audit of the reserve requirements. Here the specific choice of a power utility function is made in order to obtain an analytic solution in a Lévy process setting. Furthermore, we provide explicit formulas for the shareholder default and regulator closure rules, for the case of a Poisson-distributed random audit. A property of these rules is that they define the standard for minimum capital adequacy in an implicit way. In addition, we solve an optimal auditing time problem for the Basel II capital adequacy requirement by making use of Lévy process-based models. This result provides information about the optimal timing of an internal audit when the ambient value of the capital adequacy ratio is taken into account and the bank is able to choose the time at which the audit takes place. Finally, we discuss some of the economic issues arising from the analysis of the stochastic dynamic models of banking items and the optimization procedure related to the auditing process. Copyright © 2007 John Wiley & Sons, Ltd. [source]

    Application of support vector regression for developing soft sensors for nonlinear processes

    Saneej B. Chitralekha
    Abstract The field of soft sensor development has gained significant importance in the recent past with the development of efficient and easily employable computational tools for this purpose. The basic idea is to convert the information contained in the input–output data collected from the process into a mathematical model. Such a mathematical model can be used as a cost efficient substitute for hardware sensors. The Support Vector Regression (SVR) tool is one such computational tool that has recently received much attention in the system identification literature, especially because of its successes in building nonlinear blackbox models. The main feature of the algorithm is the use of a nonlinear kernel transformation to map the input variables into a feature space so that their relationship with the output variable becomes linear in the transformed space. This method has excellent generalisation capabilities to high-dimensional nonlinear problems due to the use of functions such as the radial basis functions which have good approximation capabilities as kernels. Another attractive feature of the method is its convex optimization formulation which eradicates the problem of local minima while identifying the nonlinear models. In this work, we demonstrate the application of SVR as an efficient and easy-to-use tool for developing soft sensors for nonlinear processes. In an industrial case study, we illustrate the development of a steady-state Melt Index soft sensor for an industrial scale ethylene vinyl acetate (EVA) polymer extrusion process using SVR. The SVR-based soft sensor, valid over a wide range of melt indices, outperformed the existing nonlinear least-square-based soft sensor in terms of lower prediction errors. 
In the two remaining case studies, we demonstrate the application of SVR for developing soft sensors in the form of dynamic models for two nonlinear processes: a simulated pH neutralisation process and a laboratory scale twin screw polymer extrusion process. A heuristic procedure is proposed for developing a dynamic nonlinear-ARX model-based soft sensor using SVR, in which the optimal delay and orders are automatically arrived at using the input–output data. [source]
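A minimal sketch of the NARX-style SVR soft sensor described above, using scikit-learn's `SVR` with an RBF kernel on a simulated first-order nonlinear process. The plant, model orders, and hyperparameters are illustrative, not the paper's extruder or pH models:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)

# Simulated nonlinear SISO plant: y[k] = 0.8*y[k-1] + 0.3*u[k-1]^3 + noise.
n = 600
u = rng.uniform(-1, 1, n)
y = np.zeros(n)
for k in range(1, n):
    y[k] = 0.8 * y[k - 1] + 0.3 * u[k - 1] ** 3 + rng.normal(0, 0.01)

# NARX regressor vector of order 1 and delay 1: predict y[k] from
# (y[k-1], u[k-1]). In the paper's heuristic the orders and delay are
# chosen from data; here they are fixed to the true values.
X = np.column_stack([y[1:-1], u[1:-1]])
t = y[2:]

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:400], t[:400])
pred = model.predict(X[400:])
rmse = np.sqrt(np.mean((pred - t[400:]) ** 2))
```

The RBF kernel makes the cubic input nonlinearity linear in feature space, so the one-step-ahead prediction error on held-out data approaches the noise floor.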

    Parameter and state estimation in nonlinear stochastic continuous-time dynamic models with unknown disturbance intensity

    M. S. Varziri
    Abstract Approximate Maximum Likelihood Estimation (AMLE) is an algorithm for estimating the states and parameters of models described by stochastic differential equations (SDEs). In previous work (Varziri et al., Ind. Eng. Chem. Res., 47(2), 380-393, (2008); Varziri et al., Comp. Chem. Eng., in press), AMLE was developed for SDE systems in which process-disturbance intensities and measurement-noise variances were assumed to be known. In the current article, a new formulation of the AMLE objective function is proposed for the case in which measurement-noise variance is available but the process-disturbance intensity is not known a priori. The revised formulation provides estimates of the model parameters and disturbance intensities, as demonstrated using a nonlinear CSTR simulation study. Parameter confidence intervals are computed using theoretical linearization-based expressions. The proposed method compares favourably with a Kalman-filter-based maximum likelihood method. The resulting parameter estimates and information about model mismatch will be useful to chemical engineers who use fundamental models for process monitoring and control. [source]
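The flavour of estimating SDE parameters together with the disturbance intensity can be sketched on an Ornstein–Uhlenbeck process, simulated by Euler–Maruyama and fitted by a discretised least-squares step. This is a simple stand-in for AMLE, not the algorithm itself, and all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical SDE dX = theta*(mu - X) dt + sigma dW, standing in for the
# paper's CSTR model. Simulate by Euler-Maruyama, then estimate the drift
# parameters and the disturbance intensity sigma from the increments.
theta, mu, sigma = 2.0, 1.0, 0.3
dt, n = 0.01, 20000
x = np.empty(n)
x[0] = 0.0
for k in range(n - 1):
    x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * np.sqrt(dt) * rng.normal()

# Discretised model: dx = a + b*x + noise, with a = theta*mu*dt, b = -theta*dt.
dx = np.diff(x)
A = np.column_stack([np.ones(n - 1), x[:-1]])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, dx, rcond=None)
theta_hat = -b_hat / dt
mu_hat = a_hat / (theta_hat * dt)
resid = dx - A @ np.array([a_hat, b_hat])
sigma_hat = np.sqrt(np.var(resid) / dt)      # disturbance-intensity estimate
```

Here the states are observed without noise, so the drift fit is ordinary least squares; AMLE addresses the harder case where measurement noise and process disturbance must be disentangled.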

    Enhancing Controller Performance via Dynamic Data Reconciliation

    Shuanghua Bai
    Abstract Measured values of process variables are subject to measurement noise. The presence of measurement noise can result in detuned controllers in order to prevent excessive adjustments of manipulated variables. Digital filters, such as exponentially weighted moving average (EWMA) and moving average (MA) filters, are commonly used to attenuate measurement noise before controllers. In this article, we present another approach, a dynamic data reconciliation (DDR) filter. This filter employs discrete dynamic models, which can be phenomenological or empirical, as constraints in reconciling noisy measurements. Simulation results for a storage tank and a distillation column under PI control demonstrate that the DDR filter can significantly reduce propagation of measurement noise inside control loops. It has better performance than the EWMA and MA filters, so that the overall performance of the control system is enhanced. [source]
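The DDR idea, using a discrete dynamic model as a hard constraint when reconciling noisy measurements, reduces to a small least-squares problem for a storage tank with known flows. The tank geometry, flow profiles, and noise level below are hypothetical, and the horizon treatment is deliberately simplified:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical tank-level model h[k+1] = h[k] + (dt/A)*(Fin[k] - Fout[k]).
dt, A = 1.0, 2.0
n = 50
Fin = 1.0 + 0.5 * np.sin(0.2 * np.arange(n))  # known inlet flow
Fout = np.full(n, 1.0)                         # known outlet flow

h_true = np.empty(n)
h_true[0] = 5.0
for k in range(n - 1):
    h_true[k + 1] = h_true[k] + dt / A * (Fin[k] - Fout[k])
y = h_true + rng.normal(0, 0.2, n)             # noisy level measurements

# With known flows the model fixes the level profile up to h[0]:
# h[k] = h[0] + c[k]. Least-squares reconciliation subject to that
# constraint averages the model-shifted measurements.
c = np.concatenate([[0.0], np.cumsum(dt / A * (Fin[:-1] - Fout[:-1]))])
h0_hat = np.mean(y - c)
h_rec = h0_hat + c

err_raw = np.sqrt(np.mean((y - h_true) ** 2))
err_rec = np.sqrt(np.mean((h_rec - h_true) ** 2))
```

Because the reconciled trajectory must satisfy the mass balance exactly, the measurement noise is averaged out across the horizon instead of being passed to the controller, which is the mechanism behind the DDR filter's advantage over EWMA/MA smoothing.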