Optimization Approach


Selected Abstracts


Human motion reconstruction from monocular images using genetic algorithms

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2004
Jianhui Zhao
Abstract This paper proposes an optimization approach for human motion recovery from uncalibrated monocular images containing unrestricted human movements. A 3D skeletal human model based on anatomical knowledge is employed, with biomechanical constraints encoded for the joints. An energy function is defined to represent the deviations between projected model features and extracted image features. A reconstruction procedure is developed to adjust the joints and segments of the human body into their proper positions. Genetic algorithms are adopted to search the high-dimensional parameter space effectively for the optimal solution by considering all parameters of the human model simultaneously. The experimental results are analysed using a deviation penalty. Copyright © 2004 John Wiley & Sons, Ltd. [source]
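
A minimal sketch of the core idea, with an invented stand-in energy function (the paper's energy measures deviations between projected model features and extracted image features): a real-coded genetic algorithm searches the joint-parameter space for the lowest-energy configuration. All names and numbers here are illustrative, not the authors' implementation.

```python
# Toy GA minimizing an "energy function" over a pose-parameter vector.
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    # Placeholder energy: distance to an arbitrary "true pose" vector.
    target = np.linspace(-1.0, 1.0, x.shape[-1])
    return np.sum((x - target) ** 2, axis=-1)

def genetic_minimize(dim=30, pop=80, gens=200, bounds=(-np.pi, np.pi)):
    lo, hi = bounds
    pop_x = rng.uniform(lo, hi, size=(pop, dim))
    for _ in range(gens):
        fit = energy(pop_x)
        order = np.argsort(fit)
        parents = pop_x[order[: pop // 2]]               # truncation selection
        mates = parents[rng.permutation(len(parents))]
        alpha = rng.random((len(parents), dim))
        children = alpha * parents + (1 - alpha) * mates  # blend crossover
        children += rng.normal(0, 0.05 * (hi - lo), children.shape) \
                    * (rng.random(children.shape) < 0.1)  # sparse mutation
        pop_x = np.vstack([parents, np.clip(children, lo, hi)])
    best = pop_x[np.argmin(energy(pop_x))]
    return best, energy(best)

best, e = genetic_minimize()
print(f"best energy: {e:.4f}")
```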


A Multiobjective and Stochastic System for Building Maintenance Management

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2000
Z. Lounis
Building maintenance management involves decision making under multiple objectives and uncertainty, in addition to budgetary constraints. This article presents the development of a multiobjective and stochastic optimization system for maintenance management of roofing systems that integrates stochastic condition-assessment and performance-prediction models with a multiobjective optimization approach. The maintenance optimization includes determination of the optimal allocation of funds and prioritization of roofs for maintenance, repair, and replacement that simultaneously satisfy the following conflicting objectives: (1) minimization of maintenance and repair costs, (2) maximization of network performance, and (3) minimization of risk of failure. A product model of the roof system is used to provide the data framework for collecting and processing data. Compromise programming is used to solve this multiobjective optimization problem and provides building managers with an effective decision support system that identifies the optimal projects for repair and replacement while achieving a satisfactory tradeoff between the conflicting objectives. [source]
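
A toy illustration of the compromise-programming step with invented data: candidate maintenance portfolios are ranked by their weighted Lp distance to the ideal point in (cost, performance, risk) space, and varying p shows how the recommended trade-off shifts.

```python
# Compromise programming on invented (cost, performance, risk) data.
import numpy as np

# rows: candidate portfolios; columns: cost ($k), performance, failure risk
F = np.array([[120., 0.80, 0.10],
              [ 95., 0.70, 0.15],
              [150., 0.90, 0.05],
              [ 80., 0.60, 0.30]])
sense = np.array([-1., 1., -1.])       # -1 = minimize, +1 = maximize
w = np.array([0.4, 0.4, 0.2])          # decision-maker weights (assumed)

G = F * sense                           # turn every objective into "maximize"
ideal, nadir = G.max(0), G.min(0)
D = (ideal - G) / (ideal - nadir)       # normalized regret in [0, 1]

for p in (1.0, 2.0, np.inf):
    dist = np.max(w * D, axis=1) if np.isinf(p) else ((w * D) ** p).sum(1) ** (1 / p)
    print(f"p={p}: best portfolio {int(np.argmin(dist))}, distances {np.round(dist, 3)}")
```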


Determining arresters best positions in power system for lightning shielding failure protection using simulation optimization approach

EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 3 2010
B. Vahidi
Abstract Lightning strokes to power system structures, especially overhead lines, cause severe damage and result in a less reliable power supply. The invention of surge arresters was a revolution in protecting sensitive equipment from lightning-stroke overvoltages. Nowadays, with ever-decreasing prices, the use of arresters not only for protecting specific instruments but also for reducing the total risk of flashover across the whole network is being investigated by academic and industrial pioneers in this area. In this paper, our goal is to introduce a heuristic method for determining, with acceptable approximation, optimum positions for placing transmission line surge arresters (TLSAs), so as to achieve the lowest possible shielding-failure risk of flashover in a selected set of overhead lines. Simulation optimization based on a neural network (as a metamodel) and a genetic algorithm (as the optimization algorithm) is invoked to suggest the best positions for placing TLSAs. A case study on the Kerman 230 kV network shows good achievement of simulation optimization in finding optimum positions for TLSAs. A comparison is also made with the results of transient simulation to reveal the effectiveness of the method. Copyright © 2008 John Wiley & Sons, Ltd. [source]
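
A hedged sketch of the simulation-optimization loop: an inexpensive linear surrogate is fitted to a handful of synthetic "transient simulations", and a genetic search over placement bitstrings then queries the surrogate instead of the simulator. The risk model, exposure data, and budget penalty are all invented for illustration.

```python
# Surrogate-assisted GA for arrester placement (all data synthetic).
import numpy as np

rng = np.random.default_rng(1)
N_TOWERS, BUDGET = 20, 6
exposure = rng.uniform(0.5, 2.0, N_TOWERS)   # per-tower lightning exposure

def simulate_risk(mask):
    # Stand-in for an expensive transient study: risk drops on protected
    # towers and, slightly, on their neighbours.
    protected = mask.astype(float)
    neighbour = np.convolve(protected, [0.25, 0.0, 0.25], mode="same")
    return float(np.sum(exposure * (1 - 0.8 * protected) * (1 - neighbour)))

# 1) Fit a linear-in-features surrogate on a small design of experiments.
X = (rng.random((60, N_TOWERS)) < 0.3).astype(float)
y = np.array([simulate_risk(m) for m in X])
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), y, rcond=None)
surrogate = lambda m: coef[0] + m @ coef[1:]

# 2) Genetic search over placements, penalizing budget violations.
def fitness(m):
    return surrogate(m) + 5.0 * max(0.0, m.sum() - BUDGET)

pop = (rng.random((40, N_TOWERS)) < 0.3).astype(float)
for _ in range(150):
    f = np.array([fitness(m) for m in pop])
    keep = pop[np.argsort(f)[:20]]
    cross = np.where(rng.random((20, N_TOWERS)) < 0.5, keep,
                     keep[rng.permutation(20)])          # uniform crossover
    cross = np.abs(cross - (rng.random(cross.shape) < 0.02))  # bit-flip mutation
    pop = np.vstack([keep, cross])

best = pop[np.argmin([fitness(m) for m in pop])]
print("surrogate risk:", round(float(surrogate(best)), 3),
      "| checked by simulator:", round(simulate_risk(best), 3))
```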


Stacking velocities in the presence of overburden velocity anomalies

GEOPHYSICAL PROSPECTING, Issue 3 2009
Emil Blias
ABSTRACT Lateral velocity changes (velocity anomalies) in the overburden may cause significant oscillations in normal moveout velocities. Explicit analytical moveout formulas are presented that provide a direct explanation of these lateral fluctuations and other phenomena for a subsurface with gentle deep structures and shallow overburden anomalies. The analytical conditions for this have been derived for a depth-velocity model with gentle structures whose dips do not exceed 12°. The influence of lateral interval velocity changes and curvilinear overburden velocity boundaries can be estimated and analysed using these formulas. An analytical approach to normal moveout velocity analysis in a laterally inhomogeneous medium provides an understanding of the connection between lateral interval velocity changes and normal moveout velocities. In the presence of uncorrected shallow velocity anomalies, the difference between root-mean-square and stacking velocity can be arbitrarily large, to the extent of reversing the normal moveout function around normal incidence traveltimes. The main reason for anomalous stacking velocity behaviour is non-linear lateral variation in the shallow overburden interval velocities or the velocity boundaries. A special technique has been developed to determine and remove shallow velocity anomaly effects. This technique includes automatic continuous velocity picking, an inversion method for the determination of shallow velocity anomalies, improvement of the depth-velocity model by an optimization approach to traveltime inversion (layered reflection tomography), and shallow velocity anomaly replacement. Model and field data examples are used to illustrate this technique. [source]


Evaluation of coalbed methane reservoirs from geophysical log data using an improved fuzzy comprehensive decision method and a homologous neural network

GEOPHYSICAL PROSPECTING, Issue 5 2002
J. Hou
The evaluation of coalbed methane reservoirs using log data is an important approach in the exploration and development of coalbed methane reservoirs. Most commonly, regression techniques, fuzzy recognition and neural networks have been used to evaluate coalbed methane reservoirs. It is known that a coalbed is an unusual reservoir. There are many difficulties in using regression methods and empirical qualitative recognition to evaluate a coalbed, so fuzzy recognition, such as the fuzzy comprehensive decision method, and neural networks, such as the back-propagation (BP) network, are widely used. However, there are no effective methods for computing weights for the fuzzy comprehensive decision method, and the BP algorithm is a local optimization algorithm, easily trapped in local minima, which can significantly affect the results. In this paper, a recognition method for coal formations is developed: an improved fuzzy comprehensive decision method, which uses an optimization approach for computing the weighting coefficients, is proposed for the qualitative recognition of coalbed methane reservoirs, and a homologous neural network, using a homologous learning algorithm that performs a global search optimization, is presented for the quantitative analysis of coalbed methane reservoir parameters. The procedures for applying these methods and some problems related to their application are also discussed. The methods are verified using log data from the coalbed methane testing area in North China, and their effectiveness is demonstrated by the analysis of results for real log data. [source]


A numerical method for the determination of dextrous workspaces of Gough–Stewart platforms

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 4 2001
L. J. du Plessis
Abstract An optimization approach to the computation of the boundaries of different dextrous workspaces of parallel manipulators is presented. A specific dextrous workspace is the region in space in which, at each position of the working point, a manipulator can control the orientation of its upper working platform through a specified range of orientation angles. Here the dextrous workspace is determined from the intersection of suitably chosen fixed orientation workspaces, which are found by application of a constrained optimization algorithm. The procedure is simple and has the considerable advantage that it may easily be automated. The method is illustrated by its application to both a planar and spatial Gough–Stewart platform. Copyright © 2001 John Wiley & Sons, Ltd. [source]


A new approach to response surface development for detailed gas-phase and surface reaction kinetic model optimization

INTERNATIONAL JOURNAL OF CHEMICAL KINETICS, Issue 2 2004
Scott G. Davis
We propose a new method for constructing kinetic response surfaces used in the development and optimization of gas-phase and surface reaction kinetic models. The method, termed the sensitivity analysis based (SAB) method, is based on a multivariate Taylor expansion of the model response with respect to the model parameters, neglecting terms higher than second order. The expansion coefficients are obtained by a first-order local sensitivity analysis. Tests are made for gas-phase combustion reaction models. The results show that the response surface obtained with the SAB method is as accurate as that of the factorial design method traditionally used in reaction model optimization. The SAB method, however, presents significant computational savings compared to factorial design. The effect of including the partial and full third-order terms is also examined and discussed. The SAB method is applied to the optimization of a relatively complex surface reaction mechanism in which large uncertainty in the rate parameters exists. The example chosen is the laser-induced fluorescence signal of OH desorption from a platinum foil in the water/oxygen reaction at low pressures. We introduce an iterative solution mapping and optimization approach for improved accuracy. © 2003 Wiley Periodicals, Inc. Int J Chem Kinet 36: 94–106, 2004 [source]
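
A small worked example of the SAB idea, assuming a toy scalar "model response": a second-order Taylor surrogate whose gradient and Hessian are assembled from finite-difference sensitivity evaluations, then checked against the true model at random points.

```python
# Sensitivity-based quadratic response surface for a toy model response.
import numpy as np

rng = np.random.default_rng(2)

def model(x):                       # stand-in for an expensive kinetics run
    return np.exp(0.3 * x[0]) * (1 + 0.5 * x[1]) / (1.2 + 0.2 * x[2] ** 2)

n, h = 3, 1e-4
x0 = np.zeros(n)
f0 = model(x0)

def grad(x):                        # first-order sensitivities at x
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n); e[i] = h
        g[i] = (model(x + e) - model(x - e)) / (2 * h)
    return g

g0 = grad(x0)
H = np.zeros((n, n))                # Hessian from differenced sensitivities
for j in range(n):
    e = np.zeros(n); e[j] = h
    H[:, j] = (grad(x0 + e) - grad(x0 - e)) / (2 * h)
H = 0.5 * (H + H.T)                 # symmetrize

def sab(x):                         # the quadratic response surface
    d = x - x0
    return f0 + g0 @ d + 0.5 * d @ H @ d

for x in rng.uniform(-0.5, 0.5, size=(5, n)):
    print(f"model {model(x):.4f}  SAB {sab(x):.4f}")
```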


Resource allocation in satellite networks: certainty equivalent approaches versus sensitivity estimation algorithms

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 1 2005
Franco Davoli
Abstract In this paper, we consider a resource allocation problem for a satellite network, where variations in fading conditions are added to those of the traffic load. Since the capacity of the system is finite and divided into finite discrete portions, the resource allocation problem turns out to be a discrete stochastic programming one, which is typically NP-hard. We propose a new approach based on minimization over a discrete constraint set using an estimate of the gradient, obtained through a 'relaxed continuous extension' of the performance measure. The computation of the gradient estimate is based on the infinitesimal perturbation analysis technique, applied to a stochastic fluid model of the network. No closed form of the performance measure and no additional feedback concerning the state of the system are required, and only very mild assumptions are made on the probabilistic properties of the stochastic processes involved in the problem. This optimization approach is compared with a dynamic programming algorithm that maintains perfect knowledge of the state of the satellite network (traffic load statistics and fading levels). The comparison shows that the sensitivity estimation capability of the proposed algorithm makes it possible to maintain the optimal resource allocation under dynamic conditions, and that it can even provide better performance than that reached by employing the dynamic programming approach. Copyright © 2004 John Wiley & Sons, Ltd. [source]
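
A sketch in the spirit of the method, under strong simplifications (two stations, invented lognormal "fading" traffic): per-sample-path IPA-style gradients of a fluid loss drive a projected descent on a continuous relaxation of the capacity shares, which are rounded back to discrete portions at the end.

```python
# IPA-style gradient descent on a relaxed capacity allocation (toy model).
import numpy as np

rng = np.random.default_rng(3)
C, PORTIONS, K = 10.0, 20, 2          # total capacity, discrete units, stations

def demands(n):                        # synthetic fading-modulated traffic
    fading = rng.lognormal(0.0, 0.3, size=(n, K))
    return np.array([4.0, 5.5]) * fading

s = np.full(K, 0.5)                    # continuous shares, sum to 1
for _ in range(300):
    D = demands(64)
    over = D > C * s                   # sample paths where fluid loss is active
    p = over.mean(axis=0)              # estimated overflow probabilities
    # IPA: d/ds_k of max(0, D_k - C*s_k) is -C whenever the loss is positive
    gradL = -C * p
    step = gradL - gradL.mean()        # project onto the simplex tangent space
    s = np.clip(s - 0.02 * step, 1e-3, None)
    s /= s.sum()

units = np.round(s * PORTIONS).astype(int)
print("continuous shares:", np.round(s, 3), "-> discrete units:", units)
```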


Nonlinear wave function expansions: A progress report

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 15 2007
Ron Shepard
Abstract Some recent progress is reported for a novel nonlinear expansion form for electronic wave functions. This expansion form is based on spin eigenfunctions using the Graphical Unitary Group Approach, and the wave function is expanded in a basis of product functions, allowing application to closed- and open-shell systems and to ground and excited electronic states. Each product basis function is itself a multiconfigurational expansion that depends on a relatively small number of nonlinear parameters called arc factors. Efficient recursive procedures for the computation of reduced one- and two-particle density matrices, overlap matrix elements, and Hamiltonian matrix elements result in a very efficient computational procedure that is applicable to very large configuration state function (CSF) expansions. A new energy-based optimization approach is presented, based on product function splitting and variational recombination. Convergence of both the valence correlation energy and the dynamical correlation energy with respect to the product function basis dimension is examined. A wave function analysis approach suitable for very large CSF expansions is presented, based on Shavitt graph node density and arc density. Some new closed-form expressions for various Shavitt graph and auxiliary pair graph statistics are also presented. © 2007 Wiley Periodicals, Inc. Int J Quantum Chem, 2007 [source]


LMI optimization approach to robust H∞ observer design and static output feedback stabilization for discrete-time nonlinear uncertain systems

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 3 2009
Masoud Abbaszadeh
Abstract A new approach for the design of robust H∞ observers for a class of Lipschitz nonlinear systems with time-varying uncertainties is proposed based on linear matrix inequalities (LMIs). The admissible Lipschitz constant of the system and the disturbance attenuation level are maximized simultaneously through convex multiobjective optimization. The resulting H∞ observer guarantees asymptotic stability of the estimation error dynamics and is robust against nonlinear additive uncertainty and time-varying parametric uncertainties. Explicit norm-wise and element-wise bounds on the tolerable nonlinear uncertainty are derived. Also, a new method for the robust output feedback stabilization with H∞ performance for a class of uncertain nonlinear systems is proposed. Our solution is based on a noniterative LMI optimization and is less restrictive than the existing solutions. The bounds on the nonlinear uncertainty and multiobjective optimization obtained for the observer are also applicable to the proposed static output feedback stabilizing controller. Copyright © 2008 John Wiley & Sons, Ltd. [source]
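
A minimal LMI feasibility sketch using CVXPY, not the paper's H∞ observer synthesis: it merely certifies discrete-time stability via the Lyapunov inequality AᵀPA − P ≺ 0 with P ≻ 0, which is the kind of semidefinite constraint the paper's multiobjective LMI program builds on. The test matrix and tolerance are assumed.

```python
# Discrete-time Lyapunov LMI feasibility check via semidefinite programming.
import numpy as np
import cvxpy as cp

A = np.array([[0.9, 0.2],
              [0.0, 0.7]])            # an assumed (stable) test matrix
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                      # P positive definite
               A.T @ P @ A - P << -eps * np.eye(n)]       # Lyapunov decrease
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)  # trace normalizes P
prob.solve()

print("status:", prob.status)
print("P =\n", np.round(P.value, 4))
```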


Highway alignment optimization through feasible gates

JOURNAL OF ADVANCED TRANSPORTATION, Issue 2 2007
Min Wook Kang
Abstract An efficient optimization approach, called feasible gate (FG), is developed to enhance the computational efficiency and solution quality of the previously developed highway alignment optimization (HAO) model. This approach seeks to realistically represent various user preferences and environmentally sensitive areas and to consider them along with geometric design constraints in the optimization process. This is done by avoiding the generation of infeasible solutions that violate various constraints, thus focusing the search on feasible solutions. The proposed method is simple, but significantly improves the model's computation time and solution quality. Such improvements are demonstrated with two test examples from a real road project. [source]


Fully quantum mechanical energy optimization for protein,ligand structure

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 12 2004
Yun Xiang
Abstract We present a quantum mechanical approach to the study of protein–ligand binding structure, with application to an adipocyte lipid-binding protein complexed with propanoic acid. The present approach employs a recently developed molecular fractionation with conjugate caps (MFCC) method to compute the protein–ligand interaction energy and performs energy optimization using the quasi-Newton method. The MFCC method enables us to compute the fully quantum mechanical ab initio protein–ligand interaction energy and its gradients, which are used in the energy minimization. This quantum optimization approach is applied to study the adipocyte lipid-binding protein complexed with propanoic acid, a complex system consisting of a 2057-atom protein and a 10-atom ligand. The MFCC calculation is carried out at the Hartree–Fock level with a 3-21G basis set. The quantum-optimized structure of this complex is in good agreement with the experimental crystal structure. The quantum energy calculation is implemented in a parallel program that dramatically speeds up the MFCC calculation for the protein–ligand system. Similarly good agreement between the MFCC-optimized structure and the experimental structure is also obtained for the streptavidin–biotin complex. Due to the heavy computational cost, the quantum energy minimization is carried out in a six-dimensional space that corresponds to the rigid-body protein–ligand interaction. © 2004 Wiley Periodicals, Inc. J Comput Chem 25: 1431–1437, 2004 [source]


Nonlinear optimization of autonomous undersea vehicle sampling strategies for oceanographic data-assimilation

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 6 2007
Kevin D. Heaney
The problem of how to optimally deploy a suite of sensors to estimate the oceanographic environment is addressed. An optimal way to estimate (nowcast) and predict (forecast) the ocean environment is to assimilate measurements from dynamic and uncertain regions into a dynamical ocean model. In order to determine the sensor deployment strategy that optimally samples the regions of uncertainty, a Genetic Algorithm (GA) approach is presented. The scalar cost function is defined as a weighted combination of a sensor suite's sampling of the ocean variability, ocean dynamics, transmission loss sensitivity, modeled temperature uncertainty (and others). The benefit of the GA approach is that the user can determine "optimal" via a weighting of constituent cost functions, which can include ocean dynamics, acoustics, cost, time, etc. A numerical example with three gliders, two powered AUVs, and three moorings is presented to illustrate the optimization approach in the complex shelfbreak region south of New England. © 2007 Wiley Periodicals, Inc. [source]


A multi-objective optimization approach to polygeneration energy systems design

AICHE JOURNAL, Issue 5 2010
Pei Liu
Abstract Polygeneration, typically involving co-production of methanol and electricity, is a promising energy conversion technology which provides opportunities for high energy utilization efficiency and low/zero emissions. The optimal design of such a complex, large-scale and highly nonlinear process system poses significant challenges. In this article, we present a multiobjective optimization model for the optimal design of a methanol/electricity polygeneration plant. Economic and environmental criteria are simultaneously optimized over a superstructure capturing a number of possible combinations of technologies and types of equipment. Aggregated models are considered, including a detailed methanol synthesis step with chemical kinetics and phase equilibrium considerations. The resulting model is formulated as a non-convex mixed-integer nonlinear programming problem. Global optimization and parallel computation techniques are employed to generate an optimal Pareto frontier. © 2009 American Institute of Chemical Engineers AIChE J, 2010 [source]


A bi-criterion optimization approach for the design and planning of hydrogen supply chains for vehicle use

AICHE JOURNAL, Issue 3 2010
Gonzalo Guillén-Gosálbez
Abstract In this article, we address the design of hydrogen supply chains for vehicle use with economic and environmental concerns. Given a set of available technologies to produce, store, and deliver hydrogen, the problem consists of determining the optimal design of the production-distribution network capable of satisfying a predefined hydrogen demand. The design task is formulated as a bi-criterion mixed-integer linear programming (MILP) problem, which simultaneously accounts for the minimization of cost and environmental impact. The environmental impact is measured through the contribution to climate change made by the hydrogen network operation. The emissions considered in the analysis are those associated with the entire life cycle of the process, and are quantified according to the principles of Life Cycle Assessment (LCA). To expedite the search of the Pareto solutions of the problem, we introduce a bi-level algorithm that exploits its specific structure. A case study that addresses the optimal design of the hydrogen infrastructure needed to fulfill the expected hydrogen demand in Great Britain is introduced to illustrate the capabilities of the proposed approach. © 2009 American Institute of Chemical Engineers AIChE J, 2010 [source]
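
An illustrative epsilon-constraint sweep on a toy linear production-mix model (three invented technologies), not the paper's full network MILP: cost is minimized while a CO2 cap is tightened stepwise, tracing an approximate Pareto frontier between the two criteria.

```python
# Epsilon-constraint Pareto sweep for a toy cost-vs-emissions LP.
import numpy as np
from scipy.optimize import linprog

# Three hydrogen production technologies (invented data):
cost = np.array([2.0, 3.5, 5.0])     # $/kg for SMR, SMR+CCS, electrolysis
co2  = np.array([10.0, 3.0, 1.0])    # kg CO2 per kg H2
demand = 100.0                       # kg H2 to supply
cap = np.array([80.0, 60.0, 50.0])   # per-technology capacity limits

for emis_cap in np.linspace(200.0, 900.0, 6):
    res = linprog(c=cost,
                  A_ub=co2.reshape(1, -1), b_ub=[emis_cap],   # emissions cap
                  A_eq=np.ones((1, 3)), b_eq=[demand],        # meet demand
                  bounds=list(zip([0.0] * 3, cap)))
    if res.success:
        print(f"CO2 cap {emis_cap:6.0f}: cost {res.fun:6.1f}, mix {np.round(res.x, 1)}")
```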


Dynamic optimization of the methylmethacrylate cell-cast process for plastic sheet production

AICHE JOURNAL, Issue 6 2009
Martín Rivera-Toledo
Abstract Traditionally, the methylmethacrylate (MMA) polymerization reaction process for plastic sheet production has been carried out using warming baths. However, it has been observed that the manufactured polymer tends to feature poor homogeneity, measured in terms of properties such as the molecular weight distribution. Nonhomogeneous polymer properties should be avoided because they give rise to a product with undesirably wide quality characteristics. To improve homogeneity, force-circulated warm air reactors have been proposed; such reactors are normally operated under isothermal air temperature conditions. However, we demonstrate that dynamic optimal warming temperature profiles lead to a polymer sheet with better homogeneity characteristics, especially when compared against simple isothermal operating policies. In this work, the dynamic optimization of a heating and polymerization reaction process for plastic sheet production in a force-circulated warm air reactor is addressed. The optimization formulation is based on the dynamic representation of the two-directional heating and reaction process taking place within the system, and includes kinetic equations for the bulk free-radical polymerization reactions of MMA. The mathematical model is cast as a time-dependent partial differential equation (PDE) system, so the optimal heating profile calculation turns out to be a dynamic optimization problem embedded in a distributed parameter system. A simultaneous optimization approach is selected to solve the dynamic optimization problem. Through full discretization of all decision variables, a nonlinear programming (NLP) model is obtained and solved using the IPOPT optimization solver. Results of the dynamic optimization are presented for two plastic sheets of different thickness and compared against simple operating policies. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]
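
A toy "full discretization" sketch: states and controls of a lumped first-order heating model are both decision variables, with the discretized dynamics imposed as equality constraints and solved by SLSQP (the paper discretizes a two-directional PDE model with reaction kinetics and uses IPOPT; everything here is scaled down and invented).

```python
# Simultaneous (full-discretization) dynamic optimization of a toy heater.
import numpy as np
from scipy.optimize import minimize

N, dt, kappa = 20, 0.5, 0.3
T_target, T0 = 75.0, 25.0            # desired final and initial temperatures

def unpack(z):
    return z[:N + 1], z[N + 1:]      # temperatures T_0..T_N, controls u_0..u_{N-1}

def objective(z):
    T, u = unpack(z)
    # terminal tracking + penalty on aggressive warming profiles
    return (T[-1] - T_target) ** 2 + 1e-3 * np.sum(np.diff(u) ** 2)

def dynamics(z):                     # T_{k+1} - T_k - dt*kappa*(u_k - T_k) = 0
    T, u = unpack(z)
    return T[1:] - T[:-1] - dt * kappa * (u - T[:-1])

z0 = np.concatenate([np.linspace(T0, T_target, N + 1), np.full(N, 60.0)])
cons = [{"type": "eq", "fun": dynamics},
        {"type": "eq", "fun": lambda z: unpack(z)[0][0] - T0}]
bnds = [(None, None)] * (N + 1) + [(20.0, 120.0)] * N     # air temp limits
res = minimize(objective, z0, constraints=cons, bounds=bnds, method="SLSQP")

T_opt, u_opt = unpack(res.x)
print("final T:", round(T_opt[-1], 2), "| first controls:", np.round(u_opt[:5], 1))
```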


A global optimization approach for generating efficient points for multiobjective concave fractional programs

JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 1 2005
Harold P. Benson
Abstract In this article, we present a global optimization approach for generating efficient points for multiobjective concave fractional programming problems. The main work of the approach involves solving an instance of a concave multiplicative fractional program (W,). Problem (W,) is a global optimization problem for which no known algorithms are available. Therefore, to render the approach practical, we develop and validate a branch and bound algorithm for globally solving problem (W,). To illustrate the performance of the global optimization approach, we use it to generate efficient points for a sample multiobjective concave fractional program. Copyright © 2006 John Wiley & Sons, Ltd. [source]
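
A generic branch-and-bound sketch for global minimization on an interval using a Lipschitz lower bound. The paper's algorithm is specialized to the concave multiplicative fractional structure of problem (W,); this toy only illustrates the bound-branch-prune pattern common to such global solvers.

```python
# Interval branch and bound with a Lipschitz lower bound (toy 1-D problem).
import heapq
import numpy as np

f = lambda x: np.sin(3 * x) + 0.5 * x   # multimodal test function
L = 3.5                                 # Lipschitz constant: |f'| <= 3*1 + 0.5

def branch_and_bound(a, b, tol=1e-4):
    best_x = (a + b) / 2
    best_f = f(best_x)
    heap = [(best_f - L * (b - a) / 2, a, b)]   # (lower bound, interval)
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > best_f - tol:                   # prune: cannot improve
            continue
        mid = (a + b) / 2
        if f(mid) < best_f:                     # update incumbent
            best_f, best_x = f(mid), mid
        for lo, hi in ((a, mid), (mid, b)):     # branch into halves
            m = (lo + hi) / 2
            lb_child = f(m) - L * (hi - lo) / 2
            if lb_child < best_f - tol:
                heapq.heappush(heap, (lb_child, lo, hi))
    return best_x, best_f

x, fx = branch_and_bound(-3.0, 3.0)
print(f"global minimum ~ f({x:.4f}) = {fx:.4f}")
```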


Multiobjective optimization of semibatch reactive crystallization processes

AICHE JOURNAL, Issue 5 2007
Debasis Sarkar
Abstract The determination of optimal feed profiles for a reactive crystallizer is an important dynamic optimization problem, as the feed profiles offer significant control over the quality of the product crystals. Crystallization processes typically have multiple performance objectives, and optimization using different objective functions leads to significantly different optimal operating conditions. Therefore, a multiobjective approach is more appropriate for the optimization of these processes. The potential of the multiobjective optimization approach is demonstrated for semibatch reactive crystallization processes. The multiobjective approach usually gives rise to a set of optimal solutions, largely known as Pareto-optimal solutions. The Pareto-optimal solutions can help the designer visualize the trade-offs between different objectives and select an appropriate operating condition for the process. A well-known multiobjective evolutionary algorithm, the elitist nondominated sorting genetic algorithm, has been adapted to illustrate this potential. © 2007 American Institute of Chemical Engineers AIChE J, 2007 [source]
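
A sketch of the non-dominated sorting step at the heart of the elitist nondominated sorting genetic algorithm, applied to invented two-objective scores for candidate operating conditions (both objectives maximized).

```python
# Non-dominated (Pareto) front ranking of candidate solutions.
import numpy as np

rng = np.random.default_rng(4)
F = rng.random((12, 2))               # 12 candidates x 2 objectives (maximize)

def dominates(a, b):
    return np.all(a >= b) and np.any(a > b)

def non_dominated_fronts(F):
    remaining = set(range(len(F)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(F[j], F[i]) for j in remaining if j != i)]
        fronts.append(front)          # rank-0 front is the Pareto set
        remaining -= set(front)
    return fronts

for rank, front in enumerate(non_dominated_fronts(F)):
    print(f"front {rank}: candidates {sorted(front)}")
```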


Enantioseparation of nuarimol by affinity electrokinetic chromatography-partial filling technique using human serum albumin as chiral selector

JOURNAL OF SEPARATION SCIENCE, JSS, Issue 18 2008
Maria Amparo Martínez-Gómez
Abstract The present paper deals with the separation of nuarimol enantiomers by affinity EKC using the partial filling technique with HSA as the chiral selector. First, a study of nuarimol interactions with HSA by CE frontal analysis was performed. The binding parameters obtained for the first site of interaction were n1 = 0.84 and K1 = 9.7 ± 0.3 × 10³ M⁻¹, and the protein binding percentage of nuarimol at a physiological concentration of HSA was 75.2 ± 0.2%. Given the moderate affinity of nuarimol towards HSA, the possibility of using this protein as a chiral selector for the separation of nuarimol by the partial filling technique was evaluated. A multivariate optimization approach of the experimental variables most critical to enantioresolution (running pH, HSA concentration and plug length) was carried out. Separation of the nuarimol enantiomers was obtained under the following selected conditions: electrophoretic buffer composed of 50 mM Tris at pH 7.3; 160 μM HSA solution applied at 50 mbar for 156 s as the chiral selector; nuarimol solutions in the range of 2–8 × 10⁻⁴ M injected hydrodynamically at 30 mbar for 2 s; and electrophoretic runs performed at 30°C applying a 15 kV voltage. The resolution, accuracy, reproducibility, speed and cost of the proposed method make it suitable for quality control of the enantiomeric composition of nuarimol in formulations and for further toxicological studies. The results showed a different affinity of the two nuarimol enantiomers towards HSA. [source]


Statistical optimization of medium components for extracellular protease production by an extreme haloarchaeon, Halobacterium sp.

LETTERS IN APPLIED MICROBIOLOGY, Issue 1 2009
SP1(1)
Abstract Aims: Optimization of medium components for extracellular protease production by Halobacterium sp. SP1(1) using a statistical approach. Methods and Results: The significant factors influencing protease production, as screened by the Plackett–Burman method, were identified as soybean flour and FeCl3. Response surface methodology, in the form of a central composite design, was applied for further optimization studies. The concentrations of medium components optimized for higher protease production using this approach were (g l⁻¹): NaCl, 250; KCl, 2; MgSO4, 10; tri-Na-citrate, 1·5; soybean flour, 10; and FeCl3, 0·16. This statistical optimization approach led to the production of 69·44 ± 0·811 U ml⁻¹ of protease. Conclusions: Soybean flour and FeCl3 were identified as important factors controlling the production of extracellular protease by Halobacterium sp. SP1(1). The statistical approach was found to be very effective in optimizing the medium components in a manageable number of experimental runs, with an overall 3·9-fold increase in extracellular protease production. Significance and Impact of the Study: The present study is the first report of statistical optimization of medium components for the production of a haloarchaeal protease. The study also explored the possibility of using the extracellular protease produced by Halobacterium sp. SP1(1) for applications such as antifouling coatings and fish sauce preparation using cheaper raw materials. [source]
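
A hedged sketch of the response-surface step with invented data: a two-factor quadratic model is fitted by least squares to protease activities from a central-composite-style design in soybean flour and FeCl3, and its stationary point is located from the fitted gradient.

```python
# Quadratic response-surface fit and stationary point (synthetic CCD data).
import numpy as np

# columns: soybean flour (g/l), FeCl3 (g/l); response: protease (U/ml)
X = np.array([[5, 0.08], [15, 0.08], [5, 0.24], [15, 0.24],
              [3, 0.16], [17, 0.16], [10, 0.05], [10, 0.27],
              [10, 0.16], [10, 0.16]], dtype=float)
y = np.array([48, 55, 52, 57, 45, 54, 50, 53, 68, 67], dtype=float)

s, f = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(s), s, f, s * f, s ** 2, f ** 2])
b, *_ = np.linalg.lstsq(A, y, rcond=None)   # y ~ b0 + b1 s + b2 f + b3 sf + b4 s^2 + b5 f^2

# Stationary point: set the model gradient to zero and solve the 2x2 system.
H = np.array([[2 * b[4], b[3]],
              [b[3], 2 * b[5]]])
opt = np.linalg.solve(H, -b[1:3])
print("fitted optimum (soybean g/l, FeCl3 g/l):", np.round(opt, 2))
```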


Optimal boundary control of glass cooling processes

MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 11 2004
René Pinnau
Abstract In this paper, an optimal control problem for glass cooling processes is studied. We model glass cooling using the SP1 approximations to the radiative heat transfer equations. The control variable is the temperature at the boundary of the domain. This results in a boundary control problem for a parabolic/elliptic system which is treated by a constrained optimization approach. We consider several cost functionals of tracking-type and formally derive the first-order optimality system. Several numerical methods based on the adjoint variables are investigated. We present results of numerical simulations illustrating the feasibility and performance of the different approaches. Copyright © 2004 John Wiley & Sons, Ltd. [source]


An optimal critical level policy for inventory systems with two demand classes

NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 7 2008
Karin T. Möllering
Abstract Traditional inventory systems treat all demands of a given item equally. This approach is optimal if the penalty costs of all customers are the same, but it is not optimal if the penalty costs are different for different customer classes. Then, demands of customers with high penalty costs must be filled before demands of customers with low penalty costs. A commonly used inventory policy for dealing with demands with different penalty costs is the critical level inventory policy. Under this policy demands with low penalty costs are filled as long as inventory is above a certain critical level. If the inventory reaches the critical level, only demands with high penalty costs are filled and demands with low penalty costs are backordered. In this article, we consider a critical level policy for a periodic review inventory system with two demand classes. Because traditional approaches cannot be used to find the optimal parameters of the policy, we use a multidimensional Markov chain to model the inventory system. We use a sample path approach to prove several properties of this inventory system. Although the cost function is not convex, we can build on these properties to develop an optimization approach that finds the optimal solution. We also present some numerical results. © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008 [source]
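
A simple simulation sketch of a critical level policy with two demand classes (invented demand rates and costs; shortages penalized per unit rather than tracked as backorders): stock above the critical level serves both classes, while the reserve is held for high-penalty demand. The paper derives the optimal parameters from a Markov chain model; here they are simply grid-searched over the simulated cost.

```python
# Critical level policy simulation with order-up-to-S replenishment.
import numpy as np

rng = np.random.default_rng(5)

def sim_cost(S, c, periods=4000, lam_hi=2.0, lam_lo=3.0,
             p_hi=50.0, p_lo=5.0, h=1.0):
    total = 0.0
    for _ in range(periods):
        inv = S                                   # order up to S each period
        d_hi, d_lo = rng.poisson(lam_hi), rng.poisson(lam_lo)
        served_hi = min(d_hi, inv)                # high class uses all stock
        inv -= served_hi
        served_lo = min(d_lo, max(inv - c, 0))    # low class: stock above c only
        inv -= served_lo
        total += (h * inv                          # holding on leftover stock
                  + p_hi * (d_hi - served_hi)      # high-penalty shortages
                  + p_lo * (d_lo - served_lo))     # low-penalty shortages
    return total / periods

best = min(((S, c) for S in range(4, 14) for c in range(0, 5)),
           key=lambda sc: sim_cost(*sc))
print("best (order-up-to S, critical level c):", best)
```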


Allocation of quality improvement targets based on investments in learning

NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 8 2001
Herbert Moskowitz
Abstract Purchased materials often account for more than 50% of a manufacturer's product nonconformance cost. A common strategy for reducing such costs is to allocate periodic quality improvement targets to suppliers of such materials. Improvement target allocations are often accomplished via ad hoc methods such as prescribing a fixed, across-the-board percentage improvement for all suppliers, which, however, may not be the most effective or efficient approach for allocating improvement targets. We propose a formal modeling and optimization approach for assessing quality improvement targets for suppliers, based on process variance reduction. In our models, a manufacturer has multiple product performance measures that are linear functions of a common set of design variables (factors), each of which is an output from an independent supplier's process. We assume that a manufacturer's quality improvement is a result of reductions in supplier process variances, obtained through learning and experience, which require appropriate investments by both the manufacturer and suppliers. Three learning investment (cost) models for achieving a given learning rate are used to determine the allocations that minimize expected costs for both the supplier and manufacturer and to assess the sensitivity of investment in learning on the allocation of quality improvement targets. Solutions for determining optimal learning rates and concomitant quality improvement targets are derived for each learning investment function. We also account for the risk that a supplier may not achieve a targeted learning rate for quality improvements. An extensive computational study is conducted to investigate the differences between optimal variance allocations and a fixed percentage allocation. These differences are examined with respect to (i) variance improvement targets and (ii) total expected cost. For certain types of learning investment models, the results suggest that orders-of-magnitude differences in variance allocations and expected total costs occur between optimal allocations and those arrived at via the commonly used rule of fixed percentage allocations. However, for learning investments characterized by a quadratic function, there is surprisingly close agreement with an "across-the-board" allocation of 20% quality improvement targets. © John Wiley & Sons, Inc. Naval Research Logistics 48: 684–709, 2001 [source]


A Genetic Algorithm Based Approach to Coalescence Parameters Estimation in Liquid-Liquid Extraction Columns

CHEMICAL ENGINEERING & TECHNOLOGY (CET), Issue 12 2006
A. Hasseine
Abstract The population balance model is a useful tool for the design and prediction of a range of processes that involve dispersed phases and particulates. The inverse problem method for the droplet population balance model is applied to estimate coalescence parameters for two-phase liquid-liquid systems. This is undertaken for two systems, namely toluene/water and n-butyl acetate/water, in a rotating disc contactor (RDC). In the literature, the estimation procedure applied to this problem is often based on deterministic optimization approaches. Such methods can become unstable near a local minimum and inevitably require derivative information at each iteration. To overcome these limitations, a method for estimating the coalescence parameters is proposed, based on a simple genetic algorithm structure adapted to this particular problem. The agreement between the experimental observations and the simulations is encouraging and, in particular, the models used have proven suitable for the prediction of hold-up and Sauter diameter profiles for these systems. Finally, these results demonstrate that the proposed optimization procedure is very convenient for estimating the coalescence parameters of extraction column systems. [source]


Nichtlineare stochastische Optimierung unter Unsicherheiten

CHEMIE-INGENIEUR-TECHNIK (CIT), Issue 7 2003
H. Arellano-Garcia Dipl.-Ing.
Abstract (translated from the German) Nonlinear Stochastic Optimization under Uncertainty. Robust decision making under uncertainty is considered to be of fundamental importance in numerous disciplines and application areas. In dynamic chemical processes in particular, there are parameters which are usually uncertain but may have a large impact on equipment decisions, plant operability, and economic analysis. Thus the consideration of the stochastic properties of the uncertainties in the optimization approach is necessary for robust process design and operation. As part of this, efficient chance-constrained programming has become an important field of research in process systems engineering. A new solution approach for the stochastic optimization of dynamic systems is presented and applied to a batch distillation process with a rigorous dynamic model. [source]


Why are species' body size distributions usually skewed to the right?

FUNCTIONAL ECOLOGY, Issue 4 2002
Jan Kozłowski
Summary 1. Species' body size distributions are right-skewed, symmetric or left-skewed, but right-skewness strongly prevails. 2. Skewness changes with taxonomic level, with a tendency to high right-skewness in classes and diverse skewness in orders within a class. Where the number of lower taxa allows for analysis, skewness coefficients have normal distributions, with the majority of taxa being right-skewed. 3. Skewness changes with geographical scale. For a broad range, distributions in a class are usually right-skewed. For a narrower scale, distributions remain right-skewed or become symmetric or even close to uniform. 4. The prevailing right-skewness of species' body size distributions is explained with macroevolutionary models, the fractal character of the environment, or body size optimization. 5. Macroevolutionary models assume either size-biased speciation and extinction, or the existence of a constraint on small size. Macroevolutionary mechanisms seem insufficient to explain the pattern of species' body size distributions, but they may operate together with other mechanisms. 6. Optimization models assume that directional and then stabilizing selection works after speciation events. There are two kinds of optimization approaches to the study of species' body size distributions. Under the first approach, it is assumed that a single energetic optimum exists for an entire taxon, and that species are distributed around this optimum. Under the second approach, each species has a separate optimum, and the species' body size distribution reflects the distribution of optimal values. 7. Because not only energetic properties but also mortality are important in determining optimal sizes, only the second approach, that is, seeking the distribution of optimal values, seems appropriate in the context of life-history evolution. This approach predicts diverse shapes of body size distributions, with right-skewness prevailing. [source]


An algorithm for fast optimal Latin hypercube design of experiments

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 2 2010
Felipe A. C. Viana
Abstract This paper presents the translational propagation algorithm, a new method for obtaining optimal or near-optimal Latin hypercube designs (LHDs) without using formal optimization. The procedure requires minimal computational effort, with results virtually provided in real time. The algorithm exploits patterns of point locations for optimal LHDs based on the φp criterion (a variation of the maximum distance criterion). Small building blocks, consisting of one or more points each, are used to recreate these patterns by simple translation in the hyperspace. Monte Carlo simulations were used to evaluate the performance of the new algorithm for different design configurations, where both the dimensionality and the point density were studied. The proposed algorithm was also compared against three formal optimization approaches (namely random search, genetic algorithm, and enhanced stochastic evolutionary algorithm). It was found that (i) the distribution of the φp values tends to lower values as the dimensionality is increased and (ii) the proposed translational propagation algorithm represents a computationally attractive strategy to obtain near-optimum LHDs up to medium dimensions. Copyright © 2009 John Wiley & Sons, Ltd. [source]
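
A sketch of the φp criterion used to score Latin hypercube designs, together with a naive column-swap improver for contrast (the translational propagation algorithm constructs near-optimal designs directly, without such a search).

```python
# phi_p scoring of a Latin hypercube design, plus a random-swap improver.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)

def phi_p(D, p=50):
    d = np.array([np.linalg.norm(a - b) for a, b in combinations(D, 2)])
    return np.sum(d ** (-p)) ** (1.0 / p)        # smaller phi_p = better spread

def random_lhd(n, k):
    return np.column_stack([rng.permutation(n) for _ in range(k)]) / (n - 1)

D = random_lhd(9, 2)
score = phi_p(D)
for _ in range(500):                             # column-wise swap search
    E = D.copy()
    col = rng.integers(2)
    i, j = rng.choice(len(E), 2, replace=False)
    E[[i, j], col] = E[[j, i], col]              # swap preserves the LHD property
    if phi_p(E) < score:
        D, score = E, phi_p(E)
print("phi_p after swap search:", round(float(score), 4))
```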


A novel global optimization technique for high dimensional functions

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 4 2009
Crina Grosan
Several types of line search methods are documented in the literature and are well known for unconstrained optimization problems. This paper proposes a modified line search method, which makes use of partial derivatives and restarts the search process after a given number of iterations by modifying the boundaries based on the best solution obtained at the previous iteration (or set of iterations). Using several high-dimensional benchmark functions, we illustrate that the proposed line search restart (LSRS) approach is very suitable for high-dimensional global optimization problems. Performance of the proposed algorithm is compared with two popular global optimization approaches, namely, the genetic algorithm and the particle swarm optimization method. Empirical results for up to 2000 dimensions clearly illustrate that the proposed approach performs very well on the tested high-dimensional functions. © 2009 Wiley Periodicals, Inc. [source]


The state of the art of microwave CAD: EM-based optimization and modeling

INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 5 2010
Qingsha S. Cheng
Abstract We briefly review the current state of the art of microwave CAD technologies. We look into the history of design optimization and CAD-oriented modeling of microwave circuits, as well as electromagnetics-based optimization techniques. We emphasize certain direct approaches that utilize efficient sensitivity evaluations, as well as surrogate-based optimization approaches that greatly enhance electromagnetics-based optimization performance. On the one hand, we review recent adjoint methodologies; on the other, we focus on space mapping implementations, including the original, aggressive, implicit, output, tuning, and related developments. We illustrate our presentation with suitable examples and applications. © 2010 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2010. [source]


Detecting local convexity on the Pareto surface

JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 1 2002
Georges Fadel
Abstract The recent regain of interest in multi-criteria optimization approaches, which provide a designer with multiple solutions to select from and support decision making, has led to various methodologies for generating such solutions and possibly approximating the Pareto set. This paper introduces the notions of H- and w-convexities and develops a simple method to identify local convexity of Pareto hyper-surfaces, since their shape can dictate the choice of the method used to obtain Pareto solutions and possibly to build an approximation of that set. The method is based on comparing the results of the weighting method to those of the Tchebycheff method at any point on the Pareto hyper-surface. Depending on whether, under some conditions, the points obtained from the two methods are identical, local convexity or its absence can be inferred at that location and in its immediate neighbourhood. A numerical example is included. Copyright © 2002 John Wiley & Sons, Ltd. [source]
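
A numerical sketch of the test on a deliberately nonconvex bi-objective family (the front parameterization is invented): at matched weights, the weighted-sum and Tchebycheff scalarizations are solved and their minimizers compared; disagreement flags a locally nonconvex stretch of the Pareto curve.

```python
# Weighted-sum vs. Tchebycheff comparison on a nonconvex Pareto front.
import numpy as np
from scipy.optimize import minimize_scalar

def F(t):                       # Pareto front f2 = 1 - f1**2 (nonconvex)
    return np.array([t, 1.0 - t ** 2])

z_star = np.array([0.0, 0.0])   # ideal point (both objectives minimized)

def solve(scalarize):
    return minimize_scalar(scalarize, bounds=(0.0, 1.0), method="bounded").x

for w1 in (0.3, 0.5, 0.7):
    w = np.array([w1, 1.0 - w1])
    t_cheb = solve(lambda t: np.max(w * np.abs(F(t) - z_star)))  # Tchebycheff
    t_ws = solve(lambda t: w @ F(t))                             # weighted sum
    tag = "convex here" if abs(t_cheb - t_ws) < 1e-3 else "NONCONVEX here"
    print(f"w={w}: Tchebycheff t={t_cheb:.3f}, weighted-sum t={t_ws:.3f} -> {tag}")
```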