Nonlinear Models


Selected Abstracts


Gordon K. Smyth
Summary For normal linear models, it is generally accepted that residual maximum likelihood estimation is appropriate when covariance components require estimation. This paper considers generalized linear models in which both the mean and the dispersion are allowed to depend on unknown parameters and on covariates. For these models there is no closed-form equivalent to residual maximum likelihood except in very special cases. Using a modified profile likelihood for the dispersion parameters, an adjusted score vector and adjusted information matrix are found under an asymptotic development that holds as the leverages in the mean model become small. Subsequently, the expectation of the fitted deviances is obtained directly to show that the adjusted score vector is unbiased at least to O(1/n). Exact results are obtained in the single-sample case. The results reduce to residual maximum likelihood estimation in the normal linear case. [source]

Estimation of Nonlinear Models with Measurement Error

ECONOMETRICA, Issue 1 2004
Susanne M. Schennach
This paper presents a solution to an important econometric problem, namely the root n consistent estimation of nonlinear models with measurement errors in the explanatory variables, when one repeated observation of each mismeasured regressor is available. While a root n consistent estimator has been derived for polynomial specifications (see Hausman, Ichimura, Newey, and Powell (1991)), such an estimator for general nonlinear specifications has so far not been available. Using the additional information provided by the repeated observation, the suggested estimator separates the measurement error from the "true" value of the regressors thanks to a useful property of the Fourier transform: The Fourier transform converts the integral equations that relate the distribution of the unobserved "true" variables to the observed variables measured with error into algebraic equations. The solution to these equations yields enough information to identify arbitrary moments of the "true," unobserved variables. The value of these moments can then be used to construct any estimator that can be written in terms of moments, including traditional linear and nonlinear least squares estimators, or general extremum estimators. The proposed estimator is shown to admit a representation in terms of an influence function, thus establishing its root n consistency and asymptotic normality. Monte Carlo evidence and an application to Engel curve estimation illustrate the usefulness of this new approach. [source]
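The key identification step, recovering moments of the unobserved "true" regressor from two noisy repeats, can be sketched numerically (a toy illustration of the moment logic, not Schennach's Fourier-based estimator; the gamma distribution and error scale are invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x_true = rng.gamma(2.0, 1.5, n)            # unobserved "true" regressor
x1 = x_true + rng.normal(0.0, 1.0, n)      # two repeated, error-contaminated observations
x2 = x_true + rng.normal(0.0, 1.0, n)

# With independent measurement errors, Cov(x1, x2) identifies Var(x_true),
# and E[x1] identifies E[x_true] -- the first two "true" moments.
var_hat = np.cov(x1, x2)[0, 1]
mean_hat = x1.mean()
```

The full estimator extends this idea to arbitrary moments via characteristic functions, which is what the Fourier-transform argument in the abstract delivers.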

Measurement Error in Nonlinear Models: a Modern Perspective

Andrew W. Roddam
No abstract is available for this article. [source]


End-of-Sample Instability Tests

ECONOMETRICA, Issue 6 2003
D. W. K. Andrews
This paper considers tests for structural instability of short duration, such as at the end of the sample. The key feature of the testing problem is that the number, m, of observations in the period of potential change is relatively small, possibly as small as one. The well-known F test of Chow (1960) for this problem only applies in a linear regression model with normally distributed iid errors and strictly exogenous regressors, even when the total number of observations, n+m, is large. We generalize the F test to cover regression models with much more general error processes, regressors that are not strictly exogenous, and estimation by instrumental variables as well as least squares. In addition, we extend the F test to nonlinear models estimated by generalized method of moments and maximum likelihood. Asymptotic critical values that are valid as n → ∞ with m fixed are provided using a subsampling-like method. The results apply quite generally to processes that are strictly stationary and ergodic under the null hypothesis of no structural instability. [source]
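The subsampling idea can be sketched in a stylized form (not Andrews's exact statistic, which corrects for estimation effects and general error processes; the data-generating process and sizes here are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 5                                   # long stable sample, short end period
x = rng.normal(size=n + m)
y = 1.0 + 2.0 * x + rng.normal(size=n + m)      # no break in this simulated sample
X = np.column_stack([np.ones(n + m), x])

beta = np.linalg.lstsq(X[:n], y[:n], rcond=None)[0]   # fit on the first n observations
resid = y - X @ beta
stat = np.sum(resid[n:] ** 2)                   # end-of-sample sum of squared residuals

# Subsampling-like critical value: the same statistic computed over every
# m-block inside the stable part of the sample.
blocks = np.array([np.sum(resid[j:j + m] ** 2) for j in range(n - m + 1)])
crit = np.quantile(blocks, 0.95)
reject = stat > crit
```

With m fixed and n large, the empirical distribution of the in-sample m-block statistics approximates the null distribution of the end-of-sample statistic.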


We analyze the fiscal adjustment process in the United States using a multivariate threshold vector error-correction model. The shift from a single-equation to a multivariate setting adds value both in terms of our economic understanding of the fiscal adjustment process and the forecasting performance of nonlinear models. We find evidence that fiscal authorities intervene to reduce the real per capita deficit only when it reaches a certain threshold, and that fiscal adjustment takes place primarily by cutting government expenditure. The results of out-of-sample density forecasts and probability forecasts suggest that a shift from a univariate autoregressive model to a multivariate model improves forecast performance. (JEL C32, C53, E62) [source]

Smooth Transition Models and Arbitrage Consistency

ECONOMICA, Issue 287 2005
David A. Peel
Slow adjustment of the real exchange rate towards equilibrium in linear models has long puzzled researchers, stimulating the adoption of nonlinear models. The exponential smooth transition autoregressive (ESTAR) model has been particularly successful, providing faster adjustment speeds. This paper discusses some of its theoretical limitations, for example that expectations are adaptive. We propose a new nonlinear model that is conceptually superior to the ESTAR model since it is consistent with rational expectations. One of its advantages is that it can be solved and estimated by nonlinear least squares. Using monthly post-1973 real exchange rate data, we show that the model implies even faster speeds of adjustment. [source]
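Estimating an ESTAR adjustment equation by nonlinear least squares can be sketched as follows (simulated data; the parameter values and noise level are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
phi_true, gamma_true = -0.5, 2.0
y = np.zeros(600)
for t in range(1, 600):
    # ESTAR: adjustment speed grows smoothly with the squared deviation
    # of the (log) real exchange rate from its equilibrium (here 0)
    adj = phi_true * y[t - 1] * (1.0 - np.exp(-gamma_true * y[t - 1] ** 2))
    y[t] = y[t - 1] + adj + 0.1 * rng.normal()

def estar_drift(ylag, phi, gamma):
    return phi * ylag * (1.0 - np.exp(-gamma * ylag ** 2))

(phi_hat, gamma_hat), _ = curve_fit(estar_drift, y[:-1], np.diff(y),
                                    p0=[-0.3, 1.0], maxfev=20000)
```

Near equilibrium the process behaves like a random walk, while larger deviations are pulled back, which is the nonlinear mean reversion the abstract refers to.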

Multi-step forecasting for nonlinear models of high frequency ground ozone data: a Monte Carlo approach

Alessandro Fassò
Abstract Multi-step prediction using high frequency environmental data is considered. The complex dynamics of ground ozone often requires models involving covariates, multiple frequency periodicities, long memory, nonlinearity and heteroscedasticity. For these reasons parametric models, which include seasonal fractionally integrated components, self-exciting threshold autoregressive components, covariates and autoregressive conditionally heteroscedastic errors with heavy tails, have been recently introduced. Here, to obtain an h-step-ahead forecast for these models we use a Monte Carlo approach. The performance of the forecast is evaluated on different nonlinear models by comparing some statistical indices with respect to the prediction horizon. As an application of this method, the forecast precision for a two-year hourly ozone data set from a traffic pollution monitoring station located in Bergamo, Italy, is analyzed. Copyright © 2002 John Wiley & Sons, Ltd. [source]
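For nonlinear models the conditional mean has no closed form beyond one step ahead, which is why a Monte Carlo forecast is needed. A minimal sketch with a toy two-regime SETAR(1) model (all parameters invented):

```python
import numpy as np

rng = np.random.default_rng(3)

def setar_step(y, eps):
    # toy two-regime SETAR(1): a different AR coefficient on each side of zero
    return (0.6 * y if y > 0 else -0.4 * y) + eps

y = 0.5
for _ in range(200):                    # simulate an observed history
    y = setar_step(y, 0.2 * rng.normal())

# Monte Carlo h-step-ahead forecast: simulate many future paths from the
# current state and average their endpoints.
h, n_paths = 5, 20_000
endpoints = np.empty(n_paths)
for i in range(n_paths):
    yy = y
    for _ in range(h):
        yy = setar_step(yy, 0.2 * rng.normal())
    endpoints[i] = yy
forecast = endpoints.mean()
```

The same simulated endpoints also yield density and interval forecasts, which is how forecast precision is assessed across horizons.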

Interpreting temporal variation in omnivore foraging ecology via stable isotope modelling

Carolyn M. Kurle
Summary 1. The use of stable carbon (C) and nitrogen (N) isotope ratios (δ13C and δ15N, respectively) to delineate trophic patterns in wild animals is common in ecology. Their utility as a tool for interpreting temporal change in diet due to seasonality, migration, climate change or species invasion depends upon an understanding of the rates at which stable isotopes incorporate from diet into animal tissues. To best determine the foraging habits of invasive rats on island ecosystems and to illuminate the interpretation of wild omnivore diets in general, I investigated isotope incorporation rates of C and N in fur, liver, kidney, muscle, serum and red blood cells (RBC) from captive rats raised on a diet with low δ15N and δ13C values and switched to a diet with higher δ15N and δ13C values. 2. I used the reaction progress variable method (RPVM), a linear fitting procedure, to estimate whether a single- or multiple-compartment model best described isotope turnover in each tissue. Small-sample Akaike Information Criterion (AICc) model comparison analysis indicated that one-compartment nonlinear models best described isotope incorporation rates for liver, RBC, muscle and fur, whereas two-compartment nonlinear models were best for serum and kidney. 3. I compared isotope incorporation rates using the RPVM versus nonlinear models. There were no differences in estimated isotope retention times between the model types for serum and kidney (except for N turnover in kidney from females). Isotope incorporation took longer when estimated using the nonlinear models for RBC, muscle and fur, but was shorter for liver tissue. 4. There were no statistical differences between sexes in the isotope incorporation rates. I also found that N and C isotope incorporation rates were decoupled for liver, with C incorporating into liver tissue faster than N.
5. The data demonstrate the utility of analysing isotope ratios of multiple tissues from a single animal when estimating temporal variation in mammalian foraging ecology. [source]
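Fitting a one-compartment incorporation model after a diet switch amounts to nonlinear regression of an exponential approach to the new equilibrium value. A minimal sketch (all numbers, sampling days and noise invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

def one_comp(t, d_eq, d_0, k):
    # one-compartment model: exponential approach from the initial isotope
    # value d_0 to the new dietary equilibrium d_eq at rate k (per day)
    return d_eq + (d_0 - d_eq) * np.exp(-k * t)

t = np.repeat(np.arange(0.0, 65.0, 8.0), 4)          # sampling days, 4 animals per day
delta = one_comp(t, 12.0, 5.0, 0.12) + rng.normal(0.0, 0.3, t.size)

(d_eq_hat, d0_hat, k_hat), _ = curve_fit(one_comp, t, delta, p0=[10.0, 4.0, 0.1])
half_life = np.log(2) / k_hat                        # isotopic half-life in days
```

Tissue-specific half-lives estimated this way are what allow different tissues from one animal to record diet over different time windows.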


Andres Aradillas-Lopez
This article extends the pairwise difference estimators for various semilinear limited dependent variable models proposed by Honoré and Powell (Identification and Inference in Econometric Models: Essays in Honor of Thomas Rothenberg. Cambridge: Cambridge University Press, 2005) to permit the regressor appearing in the nonparametric component to itself depend upon a conditional expectation that is nonparametrically estimated. This permits the estimation approach to be applied to nonlinear models with sample selectivity and/or endogeneity, in which a "control variable" for selectivity or endogeneity is nonparametrically estimated. We develop the relevant asymptotic theory for the proposed estimators and we illustrate the theory to derive the asymptotic distribution of the estimator for the partially linear logit model. [source]

The real exchange rate and real interest differentials: the role of nonlinearities

Nelson C. Mark
Abstract Recent empirical work has shown the importance of nonlinear adjustment in the dynamics of real exchange rates and real interest differentials. This work suggests that the tenuous empirical linkage between the real exchange rate and the real interest differential might be strengthened by explicitly accounting for these nonlinearities. We pursue this strategy by pricing the real exchange rate by real interest parity. The resulting first-order stochastic difference equation gives the real exchange rate as the expected present value of future real interest differentials which we compute numerically for three candidate nonlinear processes. Regressions of the log real US dollar prices of the Canadian dollar, deutschemark, yen and pound on the fundamental values implied by these nonlinear models are used to evaluate the linkage. The evidence for linkage is stronger when these present values are computed over shorter horizons than for longer horizons. Copyright © 2005 John Wiley & Sons, Ltd. [source]
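Computing the expected present value of future real interest differentials for a nonlinear process has no closed form, so it can be done by Monte Carlo, as the abstract describes. A minimal numpy sketch with a toy band-threshold AR(1) (the threshold, persistence and discount factor are invented):

```python
import numpy as np

rng = np.random.default_rng(12)

def present_value(r0, b=0.95, horizon=200, n_paths=20_000):
    # q_t = E_t sum_{j>=0} b^j r_{t+j}, computed by simulating future paths of
    # a threshold AR(1): mean reversion outside the band |r| > 1, random walk inside
    r = np.full(n_paths, r0)
    disc, pv = 1.0, np.zeros(n_paths)
    for _ in range(horizon):
        pv += disc * r
        disc *= b
        r = np.where(np.abs(r) > 1.0, 0.7 * r, r) + 0.2 * rng.normal(size=n_paths)
    return pv.mean()

q = present_value(1.5)   # fundamental value implied by a positive differential
```

Regressing the log real exchange rate on such computed fundamentals is then a linear step, which is how the linkage is evaluated in the paper.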

Asymmetric adjustment and nonlinear dynamics in real exchange rates

Hyginus Leon
Abstract This paper examines whether deviations from PPP are stationary in the presence of nonlinearity, and whether the adjustment towards PPP is symmetric from above and below. Using alternative nonlinear models, our results support mean reversion and asymmetric adjustment dynamics. We find differences in magnitudes, frequencies and durations of the deviations of exchange rates from fixed and time-varying thresholds, both between over-appreciations and over-depreciations and between developed and developing countries. In particular, the average cumulative sum of deviations during periods when exchange rates are below forecasts is twice that during periods of over-appreciation and larger for developing than advanced countries. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Improved process monitoring using nonlinear principal component models

David Antory
This paper presents two new approaches for use in complete process monitoring. The first concerns the identification of nonlinear principal component models. This involves the application of linear principal component analysis (PCA), prior to the identification of a modified autoassociative neural network (AAN) as the required nonlinear PCA (NLPCA) model. The benefits are that (i) the number of the reduced set of linear principal components (PCs) is smaller than the number of recorded process variables, and (ii) the set of PCs is better conditioned as redundant information is removed. The result is a new set of input data for a modified neural representation, referred to as a T2T network. The T2T NLPCA model is then used for complete process monitoring, involving fault detection, identification and isolation. The second approach introduces a new variable reconstruction algorithm, developed from the T2T NLPCA model. Variable reconstruction can enhance the findings of the contribution charts still widely used in industry by reconstructing the outputs from faulty sensors to produce more accurate fault isolation. These ideas are illustrated using recorded industrial data relating to developing cracks in an industrial glass melter process. A comparison of linear and nonlinear models, together with the combined use of contribution charts and variable reconstruction, is presented. © 2008 Wiley Periodicals, Inc. [source]
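The variable-reconstruction idea transfers directly to the linear PCA case, which may make the logic easier to see (a numpy-only sketch with invented data and a linear model rather than the paper's T2T neural NLPCA):

```python
import numpy as np

rng = np.random.default_rng(13)
# 200 training samples of 5 correlated process variables (illustrative data)
latent = rng.normal(size=(200, 2))
W = rng.normal(size=(2, 5))
X = latent @ W + 0.1 * rng.normal(size=(200, 5))
mu, sd = X.mean(0), X.std(0)
Xs = (X - mu) / sd

# linear PCA retaining 2 components
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
P = Vt[:2].T                                  # loadings (5 x 2)

def spe(x):                                   # squared prediction error of a sample
    return np.sum((x - P @ (P.T @ x)) ** 2)

def reconstruct(x, j):
    # iteratively replace sensor j by its model-consistent value
    xr = x.copy()
    for _ in range(50):
        xr[j] = (P @ (P.T @ xr))[j]
    return xr

x = Xs[0].copy()
x[3] += 4.0                                   # simulate a bias fault on sensor 3
# the sensor whose reconstruction removes the most SPE is isolated as faulty
drops = [spe(x) - spe(reconstruct(x, j)) for j in range(5)]
isolated = int(np.argmax(drops))
```

The paper's contribution is to carry this reconstruction step over to the nonlinear T2T model, giving sharper isolation than contribution charts alone.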

Expanding definitions of gain by taking harmonic content into account

Jeffrey Jargon
Abstract We expand the definitions of power gain, transducer gain, and available gain by taking harmonic content into account. Furthermore, we show that under special conditions, these expanded definitions of gain can be expressed in terms of nonlinear large-signal scattering parameters. Finally, we provide an example showing how these expanded forms of gain and nonlinear large-signal scattering parameters can provide us with valuable information regarding the behavior of nonlinear models. Published 2003 Wiley Periodicals, Inc. Int J RF and Microwave CAE 13, 357–369, 2003. [source]

Patterns of species richness on very small islands: the plants of the Aegean archipelago

Maria Panitsa
Abstract Aim: To investigate the species–area relationship (SAR) of plants on very small islands, to examine the effect of other factors on species richness, and to check for a possible Small Island Effect (SIE).
Location: The study used data on the floral composition of 86 very small islands (all < 0.050 km²) of the Aegean archipelago (Greece).
Methods: We used standard techniques for linear and nonlinear regression in order to check several models of the SAR, and stepwise multiple regression to check for the effects of factors other than area on species richness ('habitat diversity', elevation, and distance from the nearest large island), as well as the performance of the Choros model. We also checked for the SAR of certain taxonomic and ecological plant groups that are of special importance in eastern Mediterranean islands, such as halophytes, therophytes, Leguminosae and Gramineae. We used one-way ANOVA to check for differences in richness between grazed and non-grazed islands, and we explored possible effects of nesting seabirds on the islands' flora.
Results: Area explained a small percentage of total species richness variance in all cases. The linearized power model of the SAR provided the best fit for the total species list and several subgroups of species, while the semi-log model provided better fits for grazed islands, grasses and therophytes. None of the nonlinear models explained more variance. The slope of the SAR was very high, mainly due to the contribution of non-grazed islands. No significant SIE could be detected. The Choros model explained more variance than all SARs, although a large amount of variance of species richness still remained unexplained. Elevation was found to be the only important factor, other than area, to influence species richness. Habitat diversity did not seem important, although there were serious methodological problems in properly defining it, especially for plants. Grazing was an important factor influencing the flora of small islands. Grazed islands were richer than non-grazed ones, but the response of their species richness to area was particularly low, indicating decreased floral heterogeneity among islands. We did not detect any important effects of the presence of nesting seabird colonies.
Main conclusions: Species richness on small islands may behave idiosyncratically, but this does not always lead to a typical SIE. Plants of Aegean islets conform to the classical Arrhenius model of the SAR, a result mainly due to the contribution of non-grazed islands. At the same time, the factors examined explain a small portion of total variance in species richness, indicating the possible contribution of other, non-standard factors, or even of stochastic effects. The proper definition of habitat diversity as pertaining to the taxon examined in each case is a recurrent problem in such studies. Nevertheless, the combined effect of area and a proxy for environmental heterogeneity is once again superior to area alone in explaining species richness. [source]
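The linearized (Arrhenius) power model referred to above is simply a log-log regression of richness on area. A minimal sketch with simulated islands (the parameter values and lognormal scatter are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
n_islands = 86
area = 10 ** rng.uniform(-4.0, -1.3, n_islands)        # km^2, all below 0.05
c_true, z_true = 40.0, 0.35
S = np.maximum(1, np.round(c_true * area ** z_true *
                           np.exp(rng.normal(0.0, 0.3, n_islands))))

# linearized power model: log S = log c + z log A
z_hat, logc_hat = np.polyfit(np.log(area), np.log(S), 1)
```

The fitted slope z_hat is the SAR slope discussed in the abstract; a SIE would show up as a flat (area-independent) regime at the smallest areas, which this single-slope fit would miss.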

H-methods in applied sciences

Agnar Höskuldsson
Abstract The author has developed a framework for mathematical modelling within applied sciences. It is characteristic for data from 'nature and industry' that they have reduced rank for inference. This means that full-rank solutions normally do not give satisfactory solutions. The basic idea of H-methods is to build up the mathematical model in steps by using weighing schemes. Each weighing scheme produces a score and/or a loading vector that are expected to perform a certain task. Optimisation procedures are used to obtain 'the best' solution at each step. At each step, the optimisation is concerned with finding a balance between the estimation task and the prediction task. The name H-methods has been chosen because of the close analogy with the Heisenberg uncertainty inequality. A similar situation is present in modelling data. The mathematical modelling stops when the prediction aspect of the model cannot be improved. H-methods have been applied to a wide range of fields within applied sciences. In each case, the H-methods provide superior solutions compared to the traditional ones. A background for the H-methods is presented. The H-principle of mathematical modelling is explained. It is shown how the principle leads to well-defined optimisation procedures. This is illustrated in the case of linear regression. The H-methods have been applied in different areas: general linear models, nonlinear models, multi-block methods, path modelling, multi-way data analysis, growth models, dynamic models and pattern recognition. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Threshold changes in vegetation along a grazing gradient in Mongolian rangelands

Takehiro Sasaki
Summary 1. The concept of a threshold has become important in ecology, but the nature of potential threshold responses of vegetation to grazing in rangeland ecosystems remains poorly understood. We aimed to identify ecological thresholds in vegetation changes along a grazing gradient and to examine whether threshold changes were expressed similarly at a variety of ecological sites. 2. To accomplish this, we surveyed the vegetation along grazing gradients at 10 ecological sites, each located at different landscape positions in Mongolia's central and southern rangelands. Evidence for a threshold in changes in floristic composition along the grazing gradient was examined by comparing linear models of the data with nonlinear models fitted using an exponential curve, an inverse curve, a piecewise regression and a sigmoid logistic curve. 3. Three nonlinear models (piecewise, exponential and sigmoid) provided a much better fit to the data than the linear models, highlighting the presence of a discontinuity in vegetation changes along the grazing gradient. The shapes of the best-fit models and their fit to the data were generally similar across sites, indicating that the changes in floristic composition were relatively constant below a threshold level of grazing, after which the curve changed sharply. 4. Except for two sites, the best-fit models had relatively narrow bootstrap confidence intervals (95% CI), especially around threshold points or zones where the rate of change accelerated, emphasizing that our results were robust and conclusive. 5. Synthesis. Our study provided strong evidence for the existence of ecological thresholds in vegetation change along a grazing gradient across all ecological sites. This suggests that vegetation responses to grazing in the study areas are essentially nonlinear.
The recognition that real threshold changes exist in real grazing gradients will help land managers to prevent the occurrence of undesirable states and promote the occurrence of desirable states, and will therefore permit a major step forward in the sustainable management of rangeland ecosystems. [source]
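The piecewise-regression comparison above can be sketched as a hinge model with a grid search over the breakpoint (simulated gradient data; slopes, breakpoint and noise are invented):

```python
import numpy as np

rng = np.random.default_rng(6)
grazing = np.linspace(0.0, 10.0, 120)       # grazing gradient (e.g. distance-based proxy)
bp_true = 6.0
response = np.where(grazing < bp_true,
                    5.0 - 0.1 * grazing,
                    5.0 - 0.1 * bp_true - 1.2 * (grazing - bp_true))
response = response + rng.normal(0.0, 0.2, grazing.size)

def sse_for_breakpoint(bp):
    # piecewise (broken-stick) regression: intercept, slope, and a hinge term at bp
    X = np.column_stack([np.ones_like(grazing), grazing,
                         np.maximum(grazing - bp, 0.0)])
    beta = np.linalg.lstsq(X, response, rcond=None)[0]
    return np.sum((response - X @ beta) ** 2)

candidates = np.linspace(1.0, 9.0, 81)
best_bp = candidates[np.argmin([sse_for_breakpoint(b) for b in candidates])]
```

Comparing the SSE of this fit against a single straight line (and against exponential or sigmoid fits) is the model-comparison step the abstract describes; bootstrapping the data gives the confidence interval around best_bp.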

Building neural network models for time series: a statistical approach

Marcelo C. Medeiros
Abstract This paper is concerned with modelling time series by single hidden layer feedforward neural network models. A coherent modelling strategy based on statistical inference is presented. Variable selection is carried out using simple existing techniques. The problem of selecting the number of hidden units is solved by sequentially applying Lagrange multiplier type tests, with the aim of avoiding the estimation of unidentified models. Misspecification tests are derived for evaluating an estimated neural network model. All the tests are entirely based on auxiliary regressions and are easily implemented. A small-sample simulation experiment is carried out to show how the proposed modelling strategy works and how the misspecification tests behave in small samples. Two applications to real time series, one univariate and the other multivariate, are considered as well. Sets of one-step-ahead forecasts are constructed and forecast accuracy is compared with that of other nonlinear models applied to the same series. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Distributed model predictive control of nonlinear process systems

AICHE JOURNAL, Issue 5 2009
Jinfeng Liu
Abstract This work focuses on a class of nonlinear control problems that arise when new control systems which may use networked sensors and/or actuators are added to already operating control loops to improve closed-loop performance. In this case, it is desirable to design the pre-existing control system and the new control system in a way such that they coordinate their actions. To address this control problem, a distributed model predictive control method is introduced where both the pre-existing control system and the new control system are designed via Lyapunov-based model predictive control. Working with general nonlinear models of chemical processes and assuming that there exists a Lyapunov-based controller that stabilizes the nominal closed-loop system using only the pre-existing control loops, two separate Lyapunov-based model predictive controllers are designed that coordinate their actions in an efficient fashion. Specifically, the proposed distributed model predictive control design preserves the stability properties of the Lyapunov-based controller, improves the closed-loop performance, and allows handling input constraints. In addition, the proposed distributed control design requires reduced communication between the two distributed controllers since it requires that these controllers communicate only once at each sampling time and is computationally more efficient compared to the corresponding centralized model predictive control design. The theoretical results are illustrated using a chemical process example. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]

A General Misspecification Test for Spatial Regression Models: Dependence, Heterogeneity, and Nonlinearity

Thomas De Graaff
There is an increasing awareness of the potential of nonlinear modeling in regional science. This can be explained partly by the recognition of the limitations of conventional equilibrium models in complex situations, and also by the easy availability and accessibility of sophisticated computational techniques. Among the class of nonlinear models, dynamic variants based on, for example, chaos theory stand out as an interesting approach. However, the operational significance of such approaches is still rather limited and a rigorous statistical-econometric treatment of nonlinear dynamic modeling experiments is lacking. Against this background this paper is concerned with a methodological and empirical analysis of a general misspecification test for spatial regression models that is expected to have power against nonlinearity, spatial dependence, and heteroskedasticity. The paper seeks to break new research ground by linking the classical diagnostic tools developed in spatial econometrics to a misspecification test derived directly from chaos theory: the BDS test, developed by Brock, Dechert, and Scheinkman (1987). A spatial variant of the BDS test is introduced and applied in the context of two examples of spatial process models, one of which is concerned with the spatial distribution of regional investments in The Netherlands, the other with spatial crime patterns in Columbus, Ohio. [source]
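The BDS test is built on correlation integrals of embedded series; its core quantity can be sketched in a few lines (a toy computation of the numerator only, without the asymptotic variance that the full test statistic divides by; data and tuning constants are invented):

```python
import numpy as np

def corr_integral(x, m, eps):
    # fraction of pairs of m-histories lying within eps of each other (sup-norm)
    n = x.size - m + 1
    emb = np.column_stack([x[i:i + n] for i in range(m)])
    dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
    iu = np.triu_indices(n, 1)
    return np.mean(dist[iu] < eps)

rng = np.random.default_rng(7)
x = rng.normal(size=400)                  # iid series: no neglected structure
eps = 0.5 * x.std()
c1, c2 = corr_integral(x, 1, eps), corr_integral(x, 2, eps)
bds_core = c2 - c1 ** 2                   # near zero for iid data
```

Under iid, C_m(eps) factorizes as C_1(eps)^m, so significant departures of bds_core from zero indicate remaining dependence or nonlinearity in the regression residuals; the spatial variant applies the same idea along a spatial ordering.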

The impact of stochasticity on the behaviour of nonlinear population models: synchrony and the Moran effect

OIKOS, Issue 2 2001
J. V. Greenman
Environmental variation is ubiquitous, but its effects on nonlinear population dynamics are poorly understood. Using simple (unstructured) nonlinear models we investigate the effects of correlated noise on the dynamics of two otherwise independent populations (the Moran effect), i.e. we focus on noise rather than dispersion or trophic interaction as the cause of population synchrony. We find that below the bifurcation threshold for periodic behaviour: (1) synchrony between populations is strongly dependent on the shape of the noise distribution but largely insensitive to which model is studied, (2) there is, in general, a loss of synchrony as the noise is filtered by the model, (3) for specially structured noise distributions this loss can be effectively eliminated over a restricted range of distribution parameter values even though the model might be nonlinear, (4) for unstructured models there is no evidence of correlation enhancement, a mechanism suggested by Moran, but above the bifurcation threshold enhancement is possible for weak noise through phase-locking, (5) rapid desynchronisation occurs as the chaotic regime is approached. To carry out the investigation the stochastic models are (a) reformulated in terms of their joint asymptotic probability distributions and (b) simulated to analyse temporal patterns. [source]
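The simulation side of such a study can be sketched with two Ricker populations driven by correlated noise (a minimal sketch; the growth rate, noise scale and correlation are invented, and the parameters keep the model below the bifurcation threshold):

```python
import numpy as np

rng = np.random.default_rng(8)
T, rho = 2000, 0.8
# correlated environmental noise (the Moran effect) shared by two
# otherwise independent populations
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=T)

def ricker(eps, r=1.5, sigma=0.1):
    n = np.empty(T)
    n[0] = 0.5
    for t in range(1, T):
        n[t] = n[t - 1] * np.exp(r * (1.0 - n[t - 1]) + sigma * eps[t])
    return n

n1, n2 = ricker(z[:, 0]), ricker(z[:, 1])
# synchrony: correlation of log abundances after discarding a burn-in
sync = np.corrcoef(np.log(n1[100:]), np.log(n2[100:]))[0, 1]
```

Comparing sync against the noise correlation rho for different noise distributions and growth rates is exactly the kind of experiment the abstract summarizes.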

Designing experiments for nonlinear models: an introduction

Rachel T. Johnson
Abstract We illustrate the construction of Bayesian D-optimal designs for nonlinear models and compare the relative efficiency of standard designs with these designs for several models and prior distributions on the parameters. Through a relative efficiency analysis, we show that standard designs can perform well in situations where the nonlinear model is intrinsically linear. However, if the model is nonlinear and its expectation function cannot be linearized by simple transformations, the nonlinear optimal design is considerably more efficient than the standard design. Published in 2009 by John Wiley & Sons, Ltd. [source]
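For nonlinear models the information matrix depends on the unknown parameters, so the Bayesian D-criterion averages log det(information) over the prior. A one-parameter sketch for an exponential-decay model (the prior, candidate designs and model are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(9)
theta_draws = rng.gamma(4.0, 0.25, 2000)     # prior on the decay rate, mean 1.0

def log_info(design, theta):
    # Fisher information for y = exp(-theta * x) + noise (single parameter):
    # sum over design points of the squared sensitivity d(eta)/d(theta)
    sens = -design * np.exp(-theta * design)
    return np.log(np.sum(sens ** 2))

designs = {
    "equally spaced": np.array([0.5, 1.0, 1.5, 2.0]),
    "clustered near 1/E[theta]": np.array([0.8, 0.9, 1.0, 1.1]),
}
# Bayesian D-criterion: prior expectation of log det(information)
score = {name: np.mean([log_info(d, th) for th in theta_draws])
         for name, d in designs.items()}
```

The design with the larger score is preferred; with a tight prior the criterion favours points near the locally optimal x = 1/theta, whereas a diffuse prior pushes the design back towards spread-out, standard-looking layouts.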

Application of support vector regression for developing soft sensors for nonlinear processes

Saneej B. Chitralekha
Abstract The field of soft sensor development has gained significant importance in the recent past with the development of efficient and easily employable computational tools for this purpose. The basic idea is to convert the information contained in the input–output data collected from the process into a mathematical model. Such a mathematical model can be used as a cost efficient substitute for hardware sensors. The Support Vector Regression (SVR) tool is one such computational tool that has recently received much attention in the system identification literature, especially because of its successes in building nonlinear blackbox models. The main feature of the algorithm is the use of a nonlinear kernel transformation to map the input variables into a feature space so that their relationship with the output variable becomes linear in the transformed space. This method has excellent generalisation capabilities for high-dimensional nonlinear problems due to the use of functions such as radial basis functions, which have good approximation capabilities as kernels. Another attractive feature of the method is its convex optimization formulation, which eradicates the problem of local minima while identifying the nonlinear models. In this work, we demonstrate the application of SVR as an efficient and easy-to-use tool for developing soft sensors for nonlinear processes. In an industrial case study, we illustrate the development of a steady-state Melt Index soft sensor for an industrial scale ethylene vinyl acetate (EVA) polymer extrusion process using SVR. The SVR-based soft sensor, valid over a wide range of melt indices, outperformed the existing nonlinear least-squares-based soft sensor in terms of lower prediction errors.
In the two remaining case studies, we demonstrate the application of SVR for developing soft sensors in the form of dynamic models for two nonlinear processes: a simulated pH neutralisation process and a laboratory-scale twin-screw polymer extrusion process. A heuristic procedure is proposed for developing a dynamic nonlinear-ARX model-based soft sensor using SVR, in which the optimal delay and orders are arrived at automatically from the input–output data. [source]
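The SVR idea described in the abstract above can be sketched in a few lines. This is a minimal illustration with synthetic stand-in data, not the paper's industrial EVA data or its tuned hyperparameters; the RBF kernel and the `C`/`epsilon` values are assumptions chosen for the toy problem.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for process input–output data: two process
# variables mapped through a nonlinear steady-state relationship.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0.0, 0.05, 200)

# RBF-kernel SVR: inputs are mapped into a feature space where the
# input–output relationship is (approximately) linear; training solves
# a convex optimization problem, so there are no local minima.
soft_sensor = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
soft_sensor.fit(X, y)

# Use the fitted model as an inferential (soft) sensor for new inputs.
X_new = np.array([[0.5, -1.0]])
print(soft_sensor.predict(X_new))
```

In practice the kernel width, `C`, and `epsilon` would be selected by cross-validation on the plant data.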

Australian Economic Growth: Nonlinearities and International Influences

This paper considers the extent to which fluctuations in Australian economic growth are affected by domestic and overseas economic performance. We investigate the performance of a range of nonlinear models versus linear models, comparing the models using Bayes factors and posterior odds ratios. The posterior odds ratios favour nonlinear specifications in which fluctuations in economic activity in the US affect Australia's economic performance. Our results suggest that an exogenous negative shock will be more persistent, lead to greater output volatility, and have a greater impact on growth than a positive shock of equal magnitude. [source]
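The linear-versus-nonlinear comparison described above can be illustrated with a BIC-approximated Bayes factor. This is a hedged sketch on simulated data, not the paper's Australian/US series, its model set, or its (presumably exact) posterior odds computation; the `tanh` nonlinearity is an assumption for the toy example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated growth-like data with a mild nonlinearity (illustration only).
x = rng.normal(size=200)
y = 0.4 * x + 0.3 * np.tanh(2.0 * x) + rng.normal(0.0, 0.2, 200)

def gaussian_bic(y, design):
    """BIC of a linear-in-parameters Gaussian regression."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    n, k = design.shape
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + (k + 1) * np.log(n)

ones = np.ones_like(x)
bic_linear = gaussian_bic(y, np.column_stack([ones, x]))
bic_nonlin = gaussian_bic(y, np.column_stack([ones, x, np.tanh(2.0 * x)]))

# Schwarz approximation: 2 * log(Bayes factor) ~ BIC_linear - BIC_nonlinear;
# with equal prior odds, the posterior odds equal the Bayes factor.
log_bf = 0.5 * (bic_linear - bic_nonlin)
posterior_odds = np.exp(log_bf)  # favours the nonlinear model when > 1
print(posterior_odds)
```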

Dynamics and Rate-Dependence of the Spatial Angle between Ventricular Depolarization and Repolarization Wave Fronts during Exercise ECG

Tuomas Kenttä M.Sc.
Background: The QRS/T angle and the cosine of the angle between the QRS and T-wave vectors (TCRT), measured from the standard 12-lead electrocardiogram (ECG), have been used in risk stratification of patients. This study assessed the possible rate dependence of these variables during exercise ECG in healthy subjects. Methods: Forty healthy volunteers, 20 men and 20 women, aged 34.6 ± 3.4 years, underwent exercise ECG testing. A 12-lead ECG was recorded from each test subject, and the spatial QRS/T angle and TCRT were automatically analyzed in a beat-to-beat manner with custom-made software. The individual TCRT/RR and QRST/RR patterns were fitted with seven different regression models, including a linear model and six nonlinear models. Results: TCRT and the QRS/T angle showed significant rate dependence, with decreased values at higher heart rates (HR). In individual subjects, the second-degree polynomial model was the best regression model for the TCRT/RR and QRST/RR slopes, providing the best fit for both exercise and recovery. The overall TCRT/RR and QRST/RR slopes were similar between men and women during exercise and recovery. However, women had predominantly higher TCRT and QRS/T values. With respect to time, the dynamics of TCRT differed significantly between men and women, with a steeper exercise slope in women (−0.04/min in women vs −0.02/min in men, P < 0.0001). In addition, evident hysteresis was observed in the TCRT/RR slopes, with higher TCRT values during exercise. Conclusions: The individual patterns of TCRT and the QRS/T angle are affected by HR and gender. Delayed rate adaptation creates hysteresis in the TCRT/RR slopes. Ann Noninvasive Electrocardiol 2010;15(3):264–275 [source]
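The second-degree polynomial fit reported above as the best regression model for the TCRT/RR patterns can be sketched with `numpy.polyfit`. The data below are illustrative, not the study's measurements; the coefficients and RR range are assumptions chosen only to mimic a TCRT-like variable that decreases at higher heart rates (shorter RR).

```python
import numpy as np

# Illustrative beat-to-beat data: RR interval in seconds and a
# TCRT-like variable that decreases at higher heart rates (shorter RR).
rng = np.random.default_rng(2)
rr = rng.uniform(0.4, 1.0, 300)                                  # RR interval (s)
tcrt = -0.6 + 1.8 * rr - 0.8 * rr ** 2 + rng.normal(0.0, 0.05, 300)

# Second-degree polynomial regression of TCRT on RR, as in the
# best-fitting model reported for the individual TCRT/RR patterns.
coeffs = np.polyfit(rr, tcrt, deg=2)
model = np.poly1d(coeffs)

# Predicted TCRT at a high heart rate (RR = 0.45 s) vs a low one (RR = 0.9 s).
print(model(0.45), model(0.9))
```

Fitting exercise and recovery beats separately with the same model would expose the hysteresis the study describes.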

Using ARX and NARX approaches for modeling and prediction of the process behavior: application to a reactor-exchanger

Yahya Chetouani
Abstract Chemical industries are often characterized by nonlinear processes, and it is therefore difficult to obtain nonlinear models that accurately describe a plant in all regimes. The main contribution of this work is to establish a reliable model of the process behavior. This model should reflect the normal behavior of the process and allow it to be distinguished from abnormal behavior. Consequently, black-box identification based on the neural network (NN) approach, by means of a nonlinear autoregressive with exogenous input (NARX) model, has been chosen in this study. A comparison with an autoregressive with exogenous input (ARX) model based on the least-squares criterion is carried out. This study also shows the choice and the performance of the ARX and NARX models in the training and test phases. Statistical criteria are used to validate these approaches on the experimental data. The identified neural model is implemented by training a multilayer perceptron artificial neural network (MLP-ANN) with input–output experimental data. An analysis of the number of inputs and hidden neurons, and of their influence on the behavior of the neural predictor, is carried out. The proposed ideas are illustrated on a reactor-exchanger. Satisfactory agreement between identified and experimental data is found, and the results show that the neural model better predicts the evolution of the process dynamics. Copyright © 2008 Curtin University of Technology and John Wiley & Sons, Ltd. [source]
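The ARX-versus-NARX comparison above can be sketched as follows. This is a minimal illustration on a simulated nonlinear SISO process standing in for the reactor-exchanger; the dynamics, the lag orders (na = nb = 1), and the MLP architecture are all assumptions, not the authors' identified model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Simulated nonlinear SISO process: y[k] depends nonlinearly on
# y[k-1] and u[k-1] (stand-in for the reactor-exchanger data).
N = 600
u = rng.uniform(-1.0, 1.0, N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.6 * np.tanh(y[k - 1]) + 0.4 * u[k - 1] ** 3 + rng.normal(0.0, 0.01)

# Regressor matrix with one output lag and one input lag (orders na = nb = 1).
phi = np.column_stack([y[:-1], u[:-1]])
target = y[1:]

# ARX: linear-in-parameters model fitted by least squares.
theta, *_ = np.linalg.lstsq(phi, target, rcond=None)
arx_pred = phi @ theta

# NARX: the same lagged regressors fed to an MLP, giving a nonlinear map.
narx = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
narx.fit(phi, target)
narx_pred = narx.predict(phi)

def mse(err):
    return float(np.mean(err ** 2))

print(mse(target - arx_pred), mse(target - narx_pred))
```

On a genuinely nonlinear process the NARX residuals should be smaller; a proper study would, as in the paper, compare both models on a held-out test phase rather than on training data.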

Bayesian Nonparametric Estimation of Continuous Monotone Functions with Applications to Dose–Response Analysis

BIOMETRICS, Issue 1 2009
Björn Bornkamp
Summary In this article, we consider monotone nonparametric regression in a Bayesian framework. The monotone function is modeled as a mixture of shifted and scaled parametric probability distribution functions, and a general random probability measure is assumed as the prior for the mixing distribution. We investigate the choice of the underlying parametric distribution function and find that the two-sided power distribution function is well suited from both a computational and a mathematical point of view. The model is motivated by traditional nonlinear models for dose–response analysis, and it provides possibilities to elicit informative prior distributions on different aspects of the curve. The method is compared with other recent approaches to monotone nonparametric regression in a simulation study and is illustrated on a data set from dose–response analysis. [source]
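The monotone-mixture representation described above can be illustrated deterministically: a finite mixture of two-sided power (TSP) distribution functions with nonnegative weights is itself monotone. This sketch omits the Bayesian part entirely (the random probability measure prior and posterior sampling); the three components and their modes, powers, and weights are hypothetical.

```python
import numpy as np

def tsp_cdf(x, mode, power):
    """CDF of the two-sided power distribution on [0, 1], mode in (0, 1)."""
    x = np.clip(x, 0.0, 1.0)
    left = mode * (x / mode) ** power
    right = 1.0 - (1.0 - mode) * ((1.0 - x) / (1.0 - mode)) ** power
    return np.where(x <= mode, left, right)

def monotone_curve(d, weights, modes, powers, lo=0.0, hi=1.0):
    """Monotone dose–response curve: a mixture of TSP CDFs rescaled to [lo, hi]."""
    d = np.asarray(d, dtype=float)
    f = sum(w * tsp_cdf(d, m, p) for w, m, p in zip(weights, modes, powers))
    return lo + (hi - lo) * f   # weights sum to 1, so f runs from 0 to 1

# Hypothetical three-component mixture on a dose range scaled to [0, 1].
doses = np.linspace(0.0, 1.0, 11)
resp = monotone_curve(doses, weights=[0.5, 0.3, 0.2],
                      modes=[0.2, 0.5, 0.8], powers=[2.0, 4.0, 3.0],
                      lo=0.0, hi=10.0)
print(resp)
```

In the Bayesian model the weights, modes, and powers would be drawn from the mixing distribution rather than fixed, but every draw still yields a monotone curve.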

Conditional Generalized Estimating Equations for the Analysis of Clustered and Longitudinal Data

BIOMETRICS, Issue 3 2008
Sylvie Goetgeluk
Summary A common and important problem in clustered sampling designs is that the effect of within-cluster exposures (i.e., exposures that vary within clusters) on the outcome may be confounded by both measured and unmeasured cluster-level factors (i.e., measurements that do not vary within clusters). When some of these are poorly accounted for, or not accounted for at all, estimation of this effect through population-averaged models or random-effects models may introduce bias. We accommodate this by developing a general theory for the analysis of clustered data, which enables consistent and asymptotically normal estimation of the effects of within-cluster exposures in the presence of cluster-level confounders. Semiparametric efficient estimators are obtained by solving so-called conditional generalized estimating equations. We compare this approach with a popular proposal by Neuhaus and Kalbfleisch (1998, Biometrics 54, 638–645), who separate the exposure effect into a within-cluster and a between-cluster component within a random-intercept model. We find that the latter approach yields consistent and efficient estimators when the model is linear, but is less flexible in terms of model specification. Under nonlinear models, this approach may yield inconsistent and inefficient estimators, though with little bias in most practical settings. [source]
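The cluster-level confounding problem above can be made concrete with a small simulation in the spirit of the within-/between-cluster decomposition: a naive pooled estimate is biased by an unmeasured cluster-level factor, while within-cluster centering removes it. This is only an illustration for the linear case, not the conditional GEE estimator itself; all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# 200 clusters of size 5 with an unmeasured cluster-level confounder c
# that shifts both the within-cluster exposure x and the outcome y.
n_clusters, size = 200, 5
c = rng.normal(0.0, 1.0, n_clusters)                  # cluster-level confounder
cluster = np.repeat(np.arange(n_clusters), size)
x = c[cluster] + rng.normal(0.0, 1.0, n_clusters * size)
beta = 0.5                                            # true within-cluster effect
y = beta * x + 2.0 * c[cluster] + rng.normal(0.0, 1.0, n_clusters * size)

# Naive pooled estimate: confounded by c, so it is biased away from beta.
naive = np.sum(x * y) / np.sum(x * x)

# Centering x and y within clusters removes the cluster-level confounding.
xm = x - np.bincount(cluster, weights=x)[cluster] / size
ym = y - np.bincount(cluster, weights=y)[cluster] / size
within = np.sum(xm * ym) / np.sum(xm * xm)

print(naive, within)
```

In nonlinear models this simple centering no longer suffices, which is where the conditional generalized estimating equations of the paper come in.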