Learning Rate


Selected Abstracts


Sharing in teams of heterogeneous, collaborative learning agents

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 2 2009
Christopher M. Gifford
This paper focuses on the effects of knowledge sharing and collaboration among multiple heterogeneous, intelligent agents (hardware or software) that work together to learn a task. Because each agent employs a different machine learning technique, the system consists of multiple knowledge sources with their respective heterogeneous knowledge representations. Collaboration between agents involves sharing knowledge both to speed up team learning and to refine the team's overall performance and group behavior. Experiments were performed that vary the team composition in terms of machine learning algorithms, the learning strategies employed by the agents, and the sharing frequency, for a predator-prey cooperative pursuit task. For lifelong learning, heterogeneous learning teams were more successful than their homogeneous counterparts. Interestingly, sharing increased the learning rate, but sharing at higher frequency showed diminishing returns. Lastly, knowledge conflicts were reduced over time as more sharing took place. These results support further investigation of the merits of heterogeneous learning. © 2008 Wiley Periodicals, Inc. [source]
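
The abstract does not give implementation details, but the following minimal Python sketch illustrates the general idea of periodic knowledge sharing in a team of heterogeneous learners. The Agent class, the toy reward, the averaging merge rule in share_q_tables, and the SHARE_EVERY parameter are illustrative assumptions, not the authors' setup.

```python
# Hedged sketch: periodic knowledge sharing among heterogeneous learning agents.
# Agent, share_q_tables, and SHARE_EVERY are illustrative names, not from the paper.
import random
from collections import defaultdict

class Agent:
    """Tabular learner; differing alpha/epsilon stand in for heterogeneous techniques."""
    def __init__(self, alpha, epsilon):
        self.q = defaultdict(float)       # (state, action) -> value
        self.alpha = alpha                # per-agent learning rate
        self.epsilon = epsilon            # per-agent exploration strategy

    def act(self, state, actions):
        if random.random() < self.epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward):
        # One-step update; a real pursuit task would bootstrap from the next state.
        self.q[(state, action)] += self.alpha * (reward - self.q[(state, action)])

def share_q_tables(team):
    """Merge knowledge by averaging value estimates across the team."""
    keys = set().union(*(a.q.keys() for a in team))
    for k in keys:
        avg = sum(a.q[k] for a in team) / len(team)
        for a in team:
            a.q[k] = avg

# Heterogeneous team: different learning rates and exploration strategies.
team = [Agent(0.1, 0.3), Agent(0.5, 0.1), Agent(0.9, 0.05)]
SHARE_EVERY = 20                          # sharing frequency, in episodes
for episode in range(100):
    for agent in team:
        state, actions = "s0", ["chase", "wait"]
        a = agent.act(state, actions)
        reward = 1.0 if a == "chase" else 0.0   # toy stand-in for the pursuit task
        agent.update(state, a, reward)
    if (episode + 1) % SHARE_EVERY == 0:  # higher sharing frequency = smaller SHARE_EVERY
        share_q_tables(team)
```

In a sketch like this, sweeping SHARE_EVERY is how one would probe the diminishing returns from more frequent sharing that the authors report.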


PREDICTION OF MECHANICAL PROPERTIES OF CUMIN SEED USING ARTIFICIAL NEURAL NETWORKS

JOURNAL OF TEXTURE STUDIES, Issue 1 2010
M.H. SAIEDIRAD
ABSTRACT In this paper, two artificial neural networks (ANNs) are applied to acquire the relationship between the mechanical properties and moisture content of cumin seed, using data from quasi-static loading tests. In establishing these relationships, the moisture content, seed size, loading rate and seed orientation were taken as the inputs of both models. The force and energy required for fracturing cumin seed under quasi-static loading were taken as the outputs of the two models. The activation function in the output layer of the models was linear, whereas the activation function in the hidden layers was a sigmoid function. Adjusting ANN parameters such as the learning rate and the number of neurons and hidden layers affected the accuracy of the force and energy predictions. Comparison of the predicted and experimental data showed that the ANN models used to predict the relationships among the mechanical properties of cumin seed have good learning precision and good generalization, because the root mean square errors of the data predicted by the ANNs were rather low (4.6 and 7.7% for force and energy, respectively). PRACTICAL APPLICATIONS Cumin seed is generally used as a food additive in the form of powder for imparting flavor to different food preparations and for a variety of medicinal properties. Physical properties of cumin seeds are essential for the design of equipment for handling, harvesting, aeration, drying, storing, grinding and processing. For powder preparation in particular, the fracture behavior of the seeds is essential. These properties are affected by numerous factors such as the size, form and moisture content of the grain and the deformation speed. A neural network model was developed that can be used to predict the relationships among the mechanical properties. Artificial neural network models are a powerful empirical modeling approach that can be compared with mathematical models. [source]
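
As a rough illustration of the kind of network described (four inputs, sigmoid hidden layer, linear output, two outputs, RMSE evaluation), here is a hedged scikit-learn sketch. The synthetic data, hidden-layer width, learning rate, and train/test split are assumptions for illustration, not the authors' data or settings.

```python
# Hedged sketch of an ANN of the type described above; data and hyperparameters are made up.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Inputs: moisture content, seed size, loading rate, seed orientation (encoded 0/1).
X = rng.uniform([5.0, 3.0, 2.0, 0.0], [15.0, 5.0, 10.0, 1.0], size=(200, 4))
# Outputs: fracture force and energy; here an invented smooth function plus noise.
y = np.column_stack([
    10 + 0.8 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 200),
    2 + 0.3 * X[:, 0] * X[:, 1] + rng.normal(0, 0.2, 200),
])

# Sigmoid ("logistic") hidden layer, linear output, tunable learning rate and width,
# mirroring the architecture choices mentioned in the abstract.
model = MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                     learning_rate_init=0.01, max_iter=5000, random_state=0)
model.fit(X[:150], y[:150])

pred = model.predict(X[150:])
rmse_force, rmse_energy = np.sqrt(
    mean_squared_error(y[150:], pred, multioutput="raw_values"))
print(f"RMSE force: {rmse_force:.2f}, RMSE energy: {rmse_energy:.2f}")
```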


Allocation of quality improvement targets based on investments in learning

NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 8 2001
Herbert Moskowitz
Abstract Purchased materials often account for more than 50% of a manufacturer's product nonconformance cost. A common strategy for reducing such costs is to allocate periodic quality improvement targets to suppliers of such materials. Improvement target allocations are often accomplished via ad hoc methods such as prescribing a fixed, across-the-board percentage improvement for all suppliers, which, however, may not be the most effective or efficient approach for allocating improvement targets. We propose a formal modeling and optimization approach for assessing quality improvement targets for suppliers, based on process variance reduction. In our models, a manufacturer has multiple product performance measures that are linear functions of a common set of design variables (factors), each of which is an output from an independent supplier's process. We assume that a manufacturer's quality improvement is a result of reductions in supplier process variances, obtained through learning and experience, which require appropriate investments by both the manufacturer and suppliers. Three learning investment (cost) models for achieving a given learning rate are used to determine the allocations that minimize expected costs for both the supplier and manufacturer, and to assess the sensitivity of the allocation of quality improvement targets to investments in learning. Solutions for determining optimal learning rates and concomitant quality improvement targets are derived for each learning investment function. We also account for the risk that a supplier may not achieve a targeted learning rate for quality improvements. An extensive computational study is conducted to investigate the differences between optimal variance allocations and a fixed percentage allocation. These differences are examined with respect to (i) variance improvement targets and (ii) total expected cost. For certain types of learning investment models, the results suggest that orders-of-magnitude differences in variance allocations and expected total costs occur between optimal allocations and those arrived at via the commonly used rule of fixed percentage allocations. However, for learning investments characterized by a quadratic function, there is surprisingly close agreement with an "across-the-board" allocation of 20% quality improvement targets. © John Wiley & Sons, Inc. Naval Research Logistics 48: 684–709, 2001 [source]
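
As a hedged numeric sketch of the kind of comparison described (not the paper's actual models), the snippet below contrasts a flat 20% variance-reduction target with a cost-minimizing allocation under an assumed quadratic learning-investment cost k_i * x_i^2 for a fractional variance reduction x_i at supplier i. The variances, cost coefficients, and overall target are invented for illustration.

```python
# Hedged sketch: flat 20% improvement targets vs. a cost-minimizing allocation
# under an assumed quadratic investment cost; all numbers are illustrative.
import numpy as np
from scipy.optimize import minimize

sigma2 = np.array([4.0, 1.0, 0.25])      # supplier process variances (assumed)
k = np.array([1.0, 3.0, 10.0])           # learning-investment cost coefficients (assumed)
target = 0.8 * sigma2.sum()              # require a 20% cut in total variance

cost = lambda x: np.sum(k * x**2)        # quadratic investment cost in the reduction x
constraint = {"type": "ineq",
              "fun": lambda x: target - np.sum(sigma2 * (1 - x))}  # meet the variance target
res = minimize(cost, x0=np.full(3, 0.2), bounds=[(0, 1)] * 3, constraints=[constraint])

flat = np.full(3, 0.2)                   # across-the-board 20% improvement targets
print("optimal reductions:", np.round(res.x, 3), "cost:", round(cost(res.x), 3))
print("flat 20% reductions:", flat, "cost:", round(cost(flat), 3))
```

How close the optimized allocation comes to the flat 20% rule depends entirely on the assumed variances and cost coefficients; the paper's finding for quadratic investment functions rests on its own model details.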


Identification and control of a riser-type FCC unit

THE CANADIAN JOURNAL OF CHEMICAL ENGINEERING, Issue 6 2001
Abdul-Alghasim Alaradi
Abstract This paper addresses the use of feedforward neural networks for the steady-state and dynamic identification and control of a riser-type fluid catalytic cracking unit (FCCU). The results are compared with a conventional PI controller and a model predictive control (MPC) scheme using a state-space subspace identification algorithm. A back-propagation algorithm with a momentum term and an adaptive learning rate is used for training the identification networks. The back-propagation algorithm is also used for the neuro-control of the process. It is shown that, for a noise-free system, the adaptive neuro-controller and the MPC are capable of maintaining the riser temperature, the pressure difference between the reactor vessel and the regenerator, and the catalyst bed level in the reactor vessel in the presence of set-point and disturbance changes. The MPC performs better than the neuro-controller, which in turn is superior to the conventional multi-loop diagonal PI controller. [source]
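
The training scheme mentioned, back-propagation with a momentum term and an adaptive learning rate, can be sketched as follows. The one-hidden-layer network, the toy identification data, and the "bold driver" style adaptation rule (grow the rate on improvement, shrink it on an error increase) are illustrative assumptions rather than the authors' exact algorithm.

```python
# Hedged sketch of back-propagation with momentum and a simple adaptive learning rate.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(X[:, :1]) + 0.5 * X[:, 1:]            # toy plant response to identify

W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lr, momentum = 0.05, 0.9
vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]
prev_err = np.inf

for epoch in range(500):
    # Forward pass: sigmoid hidden layer, linear output.
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    out = h @ W2 + b2
    err = np.mean((out - y) ** 2)

    # Adaptive learning rate: grow on improvement, shrink sharply on an increase.
    lr = lr * 1.05 if err < prev_err else lr * 0.5
    prev_err = err

    # Backward pass (mean-squared-error gradients).
    d_out = 2 * (out - y) / len(X)
    d_h = (d_out @ W2.T) * h * (1 - h)
    grads = [X.T @ d_h, d_h.sum(0), h.T @ d_out, d_out.sum(0)]

    # Momentum update: velocity accumulates past gradients.
    for p, g, v in zip([W1, b1, W2, b2], grads, vel):
        v *= momentum
        v -= lr * g
        p += v

print(f"final training MSE: {prev_err:.4f}")
```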


Simple Recurrent Neural Network-Based Adaptive Predictive Control for Nonlinear Systems

ASIAN JOURNAL OF CONTROL, Issue 2 2002
Xiang Li
ABSTRACT Making use of the universal approximation ability of neural networks, a nonlinear predictive control scheme is studied in this paper. On the basis of a uniform structure of simple recurrent neural networks, a one-step neural predictive controller (OSNPC) is designed. The asymptotic stability and passivity of the whole closed-loop system are discussed, and stability conditions for the learning rate are determined based on Lyapunov stability theory for the whole neural system. The effectiveness of OSNPC is verified via exhaustive simulations. [source]
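
The paper's specific stability conditions are not reproduced in the abstract; the LaTeX fragment below only illustrates, under simplifying assumptions, how a Lyapunov argument typically yields a bound on the learning rate for a gradient-trained neural controller with scalar tracking error.

```latex
% Generic illustration (not the paper's exact condition): Lyapunov-based bound on the
% learning rate \eta for a gradient-trained neural controller with tracking error e(k)
% and weight vector W(k).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Take $V(k) = \tfrac{1}{2}e^{2}(k)$ and the gradient update
$\Delta W(k) = -\eta\, e(k)\, \frac{\partial e(k)}{\partial W}$. Then
\[
\Delta V(k) = \Delta e(k)\left[e(k) + \tfrac{1}{2}\Delta e(k)\right],\qquad
\Delta e(k) \approx \left(\frac{\partial e(k)}{\partial W}\right)^{\!\top}\!\Delta W(k)
  = -\eta\, e(k)\left\lVert\frac{\partial e(k)}{\partial W}\right\rVert^{2},
\]
so $\Delta V(k) < 0$, and the tracking error decays, whenever
\[
0 < \eta < \frac{2}{\left\lVert \partial e(k)/\partial W \right\rVert^{2}}.
\]
\end{document}
```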


LEARNING TO SOLVE PROBLEMS FROM EXERCISES

COMPUTATIONAL INTELLIGENCE, Issue 4 2008
Prasad Tadepalli
It is a common observation that learning easier skills makes it possible to learn more difficult skills. This fact is routinely exploited by parents, teachers, textbook writers, and coaches. From driving to music to science, there hardly exists a complex skill that is not learned by gradations. Natarajan's model of "learning from exercises" captures this kind of learning of efficient problem-solving skills using practice problems or exercises (Natarajan 1989). The exercises are intermediate subproblems that occur in solving the main problems and span all levels of difficulty. The learner iteratively bootstraps what is learned from simpler exercises to generalize techniques for solving more complex exercises. In this paper, we extend Natarajan's framework to the problem reduction setting, where problems are solved by reducing them to simpler problems. We theoretically characterize the conditions under which efficient learning from exercises is possible. We demonstrate the generality of our framework with successful implementations in the Eight Puzzle, symbolic integration, and simulated robot planning domains, illustrating three different representations of control knowledge, namely, macro-operators, control rules, and decision lists. The results show that the learning rates for the exercises framework are competitive with those for learning from problems solved by the teacher. [source]
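
As a loose illustration of bootstrapping from exercises in a problem-reduction setting (not the framework's actual algorithm), the sketch below uses Towers of Hanoi as a stand-in domain: each solved exercise contributes a macro that is reused, via problem reduction, to solve the next, harder exercise. The function and variable names are illustrative.

```python
# Hedged sketch: learn macros from easier exercises, reuse them on harder ones.
def relabel(macro, src, dst, aux):
    """Instantiate a stored macro (written over placeholder pegs S, D, A) on real pegs."""
    pegs = {"S": src, "D": dst, "A": aux}
    return [(pegs[a], pegs[b]) for a, b in macro]

def solve(n, src, dst, aux, library):
    """Reduce the n-disk problem to the (n-1)-disk macro learned from an earlier exercise."""
    if n == 0:
        return []
    smaller = library.get(n - 1)
    if smaller is None:                     # no prior exercise at this level: recurse
        return (solve(n - 1, src, aux, dst, library)
                + [(src, dst)]
                + solve(n - 1, aux, dst, src, library))
    return (relabel(smaller, src, aux, dst)  # reuse knowledge from a simpler exercise
            + [(src, dst)]
            + relabel(smaller, aux, dst, src))

# Exercises span all difficulty levels; solving each one adds a macro to the library.
library = {}
for level in range(1, 6):                   # exercises of increasing difficulty
    plan = solve(level, "S", "D", "A", library)
    library[level] = plan                   # macro stored over the placeholder pegs
    print(f"exercise with {level} disks solved in {len(plan)} moves")
```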