Recurrent Neural Networks

Selected Abstracts
Dynamic Process Modelling using a PCA-based Output Integrated Recurrent Neural Network
THE CANADIAN JOURNAL OF CHEMICAL ENGINEERING, Issue 4 2002, Yu Qian
Abstract: A new methodology for modelling dynamic process systems, the output integrated recurrent neural network (OIRNN), is presented in this paper. The OIRNN can be regarded as a modified Jordan recurrent neural network in which the past values of the output variables, over a certain number of steps, are integrated with the input variables, and the original input variables are pre-processed using principal component analysis (PCA) for dimension reduction. The main advantage of the PCA-based OIRNN is that the input dimension is reduced, so that the network can effectively model the dynamic behavior of multiple-input multiple-output (MIMO) systems. The new method is illustrated with reference to the Tennessee Eastman process and compared with principal component regression and feedforward neural networks. [source]

Simple Recurrent Neural Network-Based Adaptive Predictive Control for Nonlinear Systems
ASIAN JOURNAL OF CONTROL, Issue 2 2002, Xiang Li
Abstract: Making use of the universal approximation ability of neural networks, a nonlinear predictive control scheme is studied in this paper. On the basis of a uniform structure of simple recurrent neural networks, a one-step neural predictive controller (OSNPC) is designed. The asymptotic stability and passivity of the whole closed-loop system are discussed, and stability conditions for the learning rate are determined based on Lyapunov stability theory for the whole neural system. The effectiveness of the OSNPC is verified via exhaustive simulations. [source]

Recurrent Neural Networks for Uncertain Time-Dependent Structural Behavior
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2010, W. Graf
Abstract: The approach is based on recurrent neural networks trained by time-dependent measurement results. The uncertainty of the measurement results is modeled as fuzzy processes, which are considered within the recurrent neural network approach. An efficient solution for network training and prediction is developed utilizing α-cuts and interval arithmetic. The capability of the approach is demonstrated by predicting the long-term structural behavior of a reinforced concrete plate strengthened by a textile-reinforced concrete layer. [source]
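The abstract above combines fuzzy measurement data, α-cuts, and interval arithmetic with a recurrent network. Below is a minimal sketch of the interval-propagation idea only, assuming crisp (fixed) weights, a single tanh layer, and illustrative random numbers rather than anything from the paper; an α-cut of a fuzzy input would simply be propagated once per cut level.

```python
import numpy as np

def interval_step(x_lo, x_hi, h_lo, h_hi, W_in, W_rec, b):
    """Propagate interval-valued input and hidden state through one recurrent
    step with fixed weights; tanh is monotone, so elementwise bounds carry over."""
    def interval_matvec(W, lo, hi):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        return Wp @ lo + Wn @ hi, Wp @ hi + Wn @ lo
    a_lo, a_hi = interval_matvec(W_in, x_lo, x_hi)
    r_lo, r_hi = interval_matvec(W_rec, h_lo, h_hi)
    return np.tanh(a_lo + r_lo + b), np.tanh(a_hi + r_hi + b)

rng = np.random.default_rng(1)
W_in = rng.normal(size=(4, 2))          # illustrative weights, not from the paper
W_rec = 0.3 * rng.normal(size=(4, 4))
b = np.zeros(4)
x = np.array([0.5, -0.2])               # measured input with +/- 0.05 uncertainty
h_lo, h_hi = interval_step(x - 0.05, x + 0.05, np.zeros(4), np.zeros(4), W_in, W_rec, b)
print("hidden-state bounds after one step:", h_lo, h_hi)
```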
Recurrent neural networks with multi-branch structure
ELECTRONICS & COMMUNICATIONS IN JAPAN, Issue 9 2008, Takashi Yamashita
Abstract: Universal Learning Networks (ULNs) provide a generalized framework for many kinds of structures in neural networks with supervised learning. Multi-Branch Neural Networks (MBNNs), which use the framework of ULNs, have already been shown to have better representation ability in the feedforward setting (FNNs). The multi-branch structure of MBNNs can easily be extended to recurrent neural networks (RNNs), because the characteristics of ULNs include the connection of multiple branches with arbitrary time delays. In this paper, therefore, RNNs with multi-branch structure are proposed and shown to have better representation ability than conventional RNNs. RNNs can represent dynamical systems and are useful for time series prediction. The performance of RNNs with multi-branch structure was evaluated on a time series prediction benchmark. Simulation results showed that RNNs with multi-branch structure obtain better performance than conventional RNNs, and that they improve representation ability even with smaller-sized networks. © 2009 Wiley Periodicals, Inc. Electron Comm Jpn, 91(9): 37-44, 2008; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecj.10157 [source]

Reconstruction of chaotic signals with application to channel equalization in chaos-based communication systems
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 3 2004, Jiuchao Feng
Abstract: A number of schemes have been proposed for communication using chaos over the past years. Regardless of the exact modulation method used, the transmitted signal must go through a physical channel, which undesirably introduces distortion and adds noise to the signal. The problem is particularly serious when coherent demodulation is used, because the necessary process of chaos synchronization is difficult to implement in practice. This paper addresses the channel distortion problem and proposes a technique for channel equalization in chaos-based communication systems. The proposed equalization is realized by a modified recurrent neural network (RNN) incorporating a specific training (equalizing) algorithm. Computer simulations are used to demonstrate the performance of the proposed equalizer in chaos-based communication systems. The Hénon map and Chua's circuit are used to generate the chaotic signals. It is shown that the proposed RNN-based equalizer outperforms conventional equalizers, as well as those based on feedforward neural networks, for noisy, distorted linear and non-linear channels. Copyright © 2004 John Wiley & Sons, Ltd. [source]
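The channel-equalization abstract above relies on chaotic test signals (the Hénon map, Chua's circuit) passed through a noisy, distorting channel. The sketch below reproduces only that simulation setup, with an invented FIR-plus-cubic channel and an arbitrary SNR; the modified RNN equalizer and its training algorithm are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def henon(n, a=1.4, b=0.3):
    """Generate a chaotic signal from the Hénon map (standard parameters)."""
    x, y = 0.1, 0.0
    out = np.empty(n)
    for k in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        out[k] = x
    return out

def channel(s, snr_db=20.0):
    """Toy dispersive non-linear channel: a short FIR filter, a mild cubic
    non-linearity, and additive white Gaussian noise at the given SNR."""
    h = np.array([1.0, 0.5, -0.2])                 # illustrative channel taps
    r = np.convolve(s, h, mode="same")
    r = r + 0.1 * r ** 3                           # mild non-linear distortion
    noise_power = np.var(r) / (10 ** (snr_db / 10))
    return r + rng.normal(scale=np.sqrt(noise_power), size=len(r))

clean = henon(1000)          # transmitted chaotic signal
received = channel(clean)    # what an equalizer (RNN or otherwise) would see
print("signal var %.3f, received var %.3f" % (clean.var(), received.var()))
```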
A Levenberg-Marquardt learning applied for recurrent neural identification and control of a wastewater treatment bioprocess
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 11 2009, Ieroham S. Baruch
Abstract: The paper proposes a new recurrent neural network (RNN) model for system identification and state estimation of nonlinear plants. The proposed RNN identifier is implemented in direct and indirect adaptive control schemes, incorporating a noise-rejecting plant output filter and recurrent neural or linear sliding-mode controllers. For the sake of comparison, the RNN model is learned both by backpropagation and by the recursive Levenberg-Marquardt (L-M) learning algorithm. The estimated states and parameters of the RNN model are used for direct and indirect adaptive trajectory tracking control. The proposed direct and indirect schemes are applied to the real-time control of a wastewater treatment bioprocess, where good convergence, noise filtering, and a low mean squared error of reference tracking are achieved with both learning algorithms, with priority to the L-M one. © 2009 Wiley Periodicals, Inc. [source]

Direction-of-change forecasting using a volatility-based recurrent neural network
JOURNAL OF FORECASTING, Issue 5 2008, S. D. Bekiros
Abstract: This paper investigates the profitability of a trading strategy, based on recurrent neural networks, that attempts to predict the direction-of-change of the market in the case of the NASDAQ composite index. The sample extends over the period 8 February 1971 to 7 April 1998, while the sub-period 8 April 1998 to 5 February 2002 has been reserved for out-of-sample testing purposes. We demonstrate that the incorporation in the trading rule of estimates of the conditional volatility changes strongly enhances its profitability, after the inclusion of transaction costs, during bear market periods. This improvement is measured with respect to a nested model that does not include the volatility variable, as well as to a buy-and-hold strategy. We suggest that our findings can be justified by invoking either the 'volatility feedback' theory or the existence of portfolio insurance schemes in the equity markets. Our results are also consistent with the view that volatility dependence produces sign dependence. Copyright © 2008 John Wiley & Sons, Ltd. [source]
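The direction-of-change abstract above couples a sign predictor with a conditional-volatility input and scores profitability net of transaction costs. The sketch below assumes a simulated price path, an EWMA volatility estimate, a naive sign-of-last-return predictor standing in for the recurrent network, and an arbitrary cost per trade; it only shows how such a rule is evaluated, not the paper's model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder return series (in the paper this role is played by the NASDAQ
# composite index).
returns = 0.0002 + 0.01 * rng.standard_normal(2000)

# EWMA estimate of conditional volatility, a simple stand-in for the
# volatility input used by the recurrent network in the abstract.
lam, var = 0.94, returns.var()
vol = np.empty_like(returns)
for t in range(len(returns)):
    var = lam * var + (1 - lam) * returns[t] ** 2
    vol[t] = np.sqrt(var)

# Placeholder direction-of-change predictor: sign of the previous return,
# standing in for the trained RNN's output.
signal = np.sign(np.r_[0.0, returns[:-1]])

cost = 0.0005                                   # assumed per-trade transaction cost
trades = np.abs(np.diff(np.r_[0.0, signal]))    # position changes incur costs
strategy = signal * returns - cost * trades

print("buy-and-hold return:            %.3f" % returns.sum())
print("strategy return (net of costs): %.3f" % strategy.sum())
print("mean volatility estimate:       %.4f" % vol.mean())
```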
Memetic evolutionary training for recurrent neural networks: an application to time-series prediction
EXPERT SYSTEMS, Issue 2 2006, M. Delgado
Abstract: Artificial neural networks are bio-inspired mathematical models that have been widely used to solve complex problems. The training of a neural network is an important issue, since traditional gradient-based algorithms easily become trapped in locally optimal solutions, increasing the time taken by the experimental step. This problem is greater in recurrent neural networks, where gradient propagation across the recurrence makes training difficult for long-term dependencies. Evolutionary algorithms, on the other hand, are search and optimization techniques that have been shown to solve many problems effectively. In the case of recurrent neural networks, training with evolutionary algorithms has provided promising results. In this work, we propose two hybrid evolutionary algorithms as an alternative to improve the training of dynamic recurrent neural networks. The experimental section presents a comparative study of the proposed algorithms, used to train Elman recurrent neural networks on time-series prediction problems. [source]
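The memetic-training abstract above evolves the weights of Elman recurrent networks instead of relying on gradients. Below is a toy sketch with a small Elman cell on a noisy sine wave, truncation selection, Gaussian mutation, and a crude hill-climb as the local-refinement ("memetic") step; the paper's two hybrid algorithms are considerably more elaborate, and all sizes and rates here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-step-ahead prediction task: noisy sine wave.
t = np.arange(200)
series = np.sin(0.1 * t) + 0.05 * rng.standard_normal(len(t))

n_in, n_hid = 1, 5
n_params = n_hid * n_in + n_hid * n_hid + n_hid

def unpack(theta):
    """Split a flat parameter vector into Elman RNN weight matrices."""
    W_in = theta[:n_hid * n_in].reshape(n_hid, n_in)
    W_rec = theta[n_hid * n_in:n_hid * n_in + n_hid * n_hid].reshape(n_hid, n_hid)
    W_out = theta[-n_hid:].reshape(1, n_hid)
    return W_in, W_rec, W_out

def mse(theta):
    """Fitness: one-step-ahead prediction error of the Elman network."""
    W_in, W_rec, W_out = unpack(theta)
    h, err = np.zeros(n_hid), 0.0
    for k in range(len(series) - 1):
        h = np.tanh(W_in @ series[k:k + 1] + W_rec @ h)   # Elman context update
        err += ((W_out @ h)[0] - series[k + 1]) ** 2
    return err / (len(series) - 1)

# Memetic loop: evolutionary search plus local hill-climb refinement of the elite.
pop = 0.5 * rng.standard_normal((20, n_params))
for gen in range(30):
    fit = np.array([mse(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:5]]                     # truncation selection
    children = parents[rng.integers(0, 5, 15)] + 0.1 * rng.standard_normal((15, n_params))
    best = parents[0].copy()
    for _ in range(10):                                    # local refinement step
        cand = best + 0.02 * rng.standard_normal(n_params)
        if mse(cand) < mse(best):
            best = cand
    pop = np.vstack([best[None, :], parents, children[:14]])

print("best fitness (MSE):", mse(pop[0]))
```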
Inverse optimal noise-to-state stabilization of stochastic recurrent neural networks driven by noise of unknown covariance
OPTIMAL CONTROL APPLICATIONS AND METHODS, Issue 2 2009, Ziqian Liu
Abstract: In this paper, we extend our previous research results regarding the stabilization of recurrent neural networks from the concept of input-to-state stability to noise-to-state stability, and present a new approach to achieve noise-to-state stabilization in probability for stochastic recurrent neural networks driven by noise of unknown covariance. This approach is developed by using the Lyapunov technique, inverse optimality, differential game theory, and the Hamilton-Jacobi-Isaacs equation. Numerical examples demonstrate the effectiveness of the proposed approach. Copyright © 2008 John Wiley & Sons, Ltd. [source]
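The stabilization abstract above concerns recurrent dynamics driven by noise of unknown covariance. The sketch below only simulates a single noise-driven recurrent unit under a hand-picked linear state feedback using Euler-Maruyama integration, with invented parameter values; the inverse-optimal, Hamilton-Jacobi-Isaacs-based controller design itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama simulation of a single stochastic recurrent unit
#   dx = (-x + w*tanh(x) + u) dt + sigma dW,  with u = -k*x (hand-picked feedback).
w, sigma, k = 2.0, 0.5, 3.0      # illustrative values, not from the paper
dt, steps = 0.01, 5000
x = 1.0
path = np.empty(steps)
for i in range(steps):
    u = -k * x                                   # stabilizing state feedback
    drift = -x + w * np.tanh(x) + u
    x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    path[i] = x

print("mean |x| over the last half of the run: %.3f" % np.abs(path[steps // 2:]).mean())
```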
Dynamic On-Line Reoptimization Control of a Batch MMA Polymerization Reactor Using Hybrid Neural Network Models
CHEMICAL ENGINEERING & TECHNOLOGY (CET), Issue 9 2004, Y. Tian
Abstract: An on-line reoptimization control strategy based on a hybrid neural network model is developed for a batch polymerization reactor. To address the difficulties in batch polymerization reactor modeling, the hybrid neural network model contains a simplified mechanistic model covering the material balance under the assumption of perfect temperature control, and recurrent neural networks modeling the residuals of the simplified mechanistic model due to imperfect temperature control. This hybrid neural network model is used to calculate the optimal control policy. A difficulty in the optimal control of batch polymerization reactors is that the optimization effort can be seriously hampered by unknown disturbances such as reactive impurities and reactor fouling. In the presence of an unknown amount of reactive impurities, the off-line calculated optimal control profile will no longer be optimal. To address this issue, a strategy combining on-line reactive impurity estimation and on-line reoptimization is proposed in this paper. The amount of reactive impurities is estimated on-line during the early stage of a batch using a neural network-based inverse model. Based on the estimated amount of reactive impurities, on-line reoptimization is then applied to calculate the optimal reactor temperature profile for the remaining time of the batch operation. This approach is illustrated on the optimization control of a simulated batch methyl methacrylate polymerization process. [source]
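The hybrid-model abstract above layers a data-driven residual correction on top of a simplified mechanistic model. The sketch below uses a toy first-order conversion model and a least-squares autoregression on the residuals in place of the paper's recurrent networks and MMA kinetics, purely to show the mechanistic-plus-residual structure; the "plant" data, rate constants, and lag order are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

dt, steps = 0.1, 200
t = np.arange(steps) * dt

def mechanistic(k=0.25):
    """Simplified first-order conversion model dX/dt = k*(1 - X), integrated by
    explicit Euler; stands in for the material-balance part of the hybrid model."""
    X, x = np.empty(steps), 0.0
    for i in range(steps):
        x = x + dt * k * (1.0 - x)
        X[i] = x
    return X

# "Plant" data: same structure but a different rate constant plus a slow drift,
# mimicking the effect of imperfect temperature control.
plant = mechanistic(k=0.30) + 0.02 * np.sin(0.3 * t) + 0.002 * rng.standard_normal(steps)
model = mechanistic(k=0.25)

# Residual model: a linear autoregression on past residuals fitted by least
# squares (in-sample), standing in for the recurrent networks in the abstract.
res = plant - model
lag = 3
A = np.column_stack([res[i:steps - lag + i] for i in range(lag)])
y = res[lag:]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

hybrid = model[lag:] + A @ coef                  # mechanistic part + learned residual
print("mechanistic-only RMSE: %.4f" % np.sqrt(np.mean((plant[lag:] - model[lag:]) ** 2)))
print("hybrid-model RMSE:     %.4f" % np.sqrt(np.mean((plant[lag:] - hybrid) ** 2)))
```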