Optimization Problem
Kinds of Optimization Problem: Selected Abstracts

A COMPLETE THEORY OF COMPARATIVE STATICS FOR DIFFERENTIABLE OPTIMIZATION PROBLEMS
METROECONOMICA, Issue 1, 2006. M. Hossein Partovi.
A new comparative statics formalism using generalized compensated derivatives is presented that, in contrast to existing methodologies, directly yields constraint-free semidefiniteness results for any differentiable, constrained optimization problem. The formalism provides a natural and powerful method of constructing comparative statics results, free of constraints and unrestricted in scope. New results on envelope relations, invariance conditions, rank inequalities and non-uniqueness are derived that greatly extend their utility and reach. The methodology is illustrated by deriving the comparative statics of multiple linear constraint utility maximization models and the principal-agent problem with hidden actions, both highly nontrivial and hitherto unsolved problems. [source]

SOLVING DYNAMIC WILDLIFE RESOURCE OPTIMIZATION PROBLEMS USING REINFORCEMENT LEARNING
NATURAL RESOURCE MODELING, Issue 1, 2005. CHRISTOPHER J. FONNESBECK.
An important technical component of natural resource management, particularly in an adaptive management context, is optimization. This is used to select the most appropriate management strategy, given a model of the system and all relevant available information. For dynamic resource systems, dynamic programming has been the de facto standard for deriving optimal state-specific management strategies. Though effective for small-dimension problems, dynamic programming is incapable of providing solutions to larger problems, even with modern microcomputing technology. Reinforcement learning is an alternative, related procedure for deriving optimal management strategies, based on stochastic approximation. It is an iterative process that improves estimates of the value of state-specific actions based on interactions with a system, or a model thereof. Applications of reinforcement learning in the field of artificial intelligence have illustrated its ability to yield near-optimal strategies for very complex model systems, highlighting the potential utility of this method for ecological and natural resource management problems, which tend to be of high dimension. I describe the concept of reinforcement learning and its approach of estimating optimal strategies by temporal difference learning. I then illustrate the application of this method using a simple, well-known case study of Anderson [1975], and compare the reinforcement learning results with those of dynamic programming. Though a globally optimal strategy is not discovered, the reinforcement learning strategy performs very well relative to the dynamic programming strategy, based on simulated cumulative objective return. I suggest that reinforcement learning be applied to relatively complex problems where an approximate solution to a realistic model is preferable to an exact answer to an oversimplified model. [source]
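The temporal-difference update at the heart of the reinforcement learning approach described above can be illustrated with a few lines of tabular Q-learning. The sketch below runs on a made-up population/harvest toy model; the states, actions, dynamics and learning parameters are illustrative assumptions, not the Anderson [1975] case study or the paper's implementation.

```python
# A minimal tabular Q-learning sketch illustrating temporal-difference updates.
# The toy "population/harvest" dynamics are illustrative only.
import random

N_STATES = 10          # discretized population levels
ACTIONS = [0, 1, 2]    # hypothetical harvest intensities
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def step(state, action):
    """Toy stochastic dynamics: harvesting yields reward but lowers the state."""
    reward = min(state, action)
    growth = random.choice([0, 1, 2])
    next_state = max(0, min(N_STATES - 1, state - action + growth))
    return next_state, reward

Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]

for episode in range(5000):
    s = random.randrange(N_STATES)
    for _ in range(50):                      # finite management horizon
        a = (random.randrange(len(ACTIONS)) if random.random() < EPS
             else max(ACTIONS, key=lambda x: Q[s][x]))
        s2, r = step(s, a)
        # temporal-difference update of the state-action value estimate
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)]
print("greedy harvest policy by state:", policy)
```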
Quantitative Comparison of Approximate Solution Sets for Bi-criteria Optimization Problems
DECISION SCIENCES, Issue 1, 2003. W. Matthew Carlyle.
We present the Integrated Preference Functional (IPF) for comparing the quality of proposed sets of near-Pareto-optimal solutions to bi-criteria optimization problems. Evaluating the quality of such solution sets is one of the key issues in developing and comparing heuristics for multiple objective combinatorial optimization problems. The IPF is a set functional that, given a weight density function provided by a decision maker and a discrete set of solutions for a particular problem, assigns a numerical value to that solution set. This value can be used to compare the quality of different sets of solutions, and therefore provides a robust, quantitative approach for comparing different heuristic, a posteriori solution procedures for difficult multiple objective optimization problems. We provide specific examples of decision maker preference functions and illustrate the calculation of the resulting IPF for specific solution sets and a simple family of combined objectives. [source]

Multiple Objective Optimization Problems in Statistics
INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 4, 2002. Subhash C. Narula.
Many statistical problems can be viewed as optimization problems. A careful and detailed analysis of these procedures reveals that many of these problems are essentially multiple objective optimization problems. Furthermore, most of the standard statistical procedures aim at finding an efficient (non-dominated) solution. Our objective in this paper is to introduce some of the single sample statistical problems that can be formulated and solved as multiple objective optimization problems. [source]

Lagrange Multipliers as Marginal Rates of Substitution in Multi-Constraint Optimization Problems
METROECONOMICA, Issue 1, 2001. Christian E. Weber.
This paper shows that, when a function is optimized subject to several binding constraints, some of the Lagrange multipliers in the dual problems can be interpreted as marginal rates of substitution among certain arguments in the generalized indirect objective function for the primal problem. It also shows how to calculate these Lagrange multipliers from observable price-quantity data. Three particular examples are discussed: a firm that minimizes costs subject to both fixed output and rationing constraints, a household that maximizes utility subject to both income and time constraints, and portfolio choice under uncertainty treated as a multiple constraint optimization problem. [source]
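The multiplier-as-marginal-rate-of-substitution reading described above rests on the standard envelope-theorem relation. The following is a minimal sketch of that relation for a generic equality-constrained problem, written in generic notation rather than the paper's own derivation.

```latex
% Primal problem:  max_x f(x)  subject to  g_i(x) = c_i,  i = 1, ..., m,
% with value function V(c_1, ..., c_m) = f(x*(c)) and multipliers lambda_i.
% By the envelope theorem,
\[
  \frac{\partial V}{\partial c_i} = \lambda_i ,
\]
% so along a level set of the indirect objective V, the trade-off between two
% constraint levels is the ratio of their multipliers:
\[
  \left.\frac{dc_j}{dc_i}\right|_{V = \text{const}}
  = -\,\frac{\partial V / \partial c_i}{\partial V / \partial c_j}
  = -\,\frac{\lambda_i}{\lambda_j}.
\]
```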
Interactive low-dimensional human motion synthesis by combining motion models and PIK
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 4-5, 2007. Schubert R. Carvalho.
This paper explores the issue of interactive low-dimensional human motion synthesis. We compare the performances of two motion models, i.e. Principal Components Analysis (PCA) or Probabilistic PCA (PPCA), for solving a constrained optimization problem within a low-dimensional latent space. We use PCA or PPCA as a first step of preprocessing to reduce the dimensionality of the database to make it tractable, and to encapsulate only the essential aspects of a specific motion pattern. Interactive user control is provided by formulating a low-dimensional optimization framework that uses a Prioritized Inverse Kinematics (PIK) strategy. The key insight of PIK is that the user can adjust a motion by adding constraints with different priorities. We demonstrate the robustness of our approach by synthesizing various styles of golf swing. This movement is challenging in the sense that it is highly coordinated and requires great precision while moving at high speed. Hence, any artifact is clearly noticeable in the solution movement. We simultaneously show results comparing local and global motion models regarding synthesis realism and performance. Finally, the quality of the synthesized animations is assessed by comparing our results against a per-frame PIK technique. Copyright © 2007 John Wiley & Sons, Ltd. [source]
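As a rough illustration of optimizing in a PCA latent space as in the abstract above, the sketch below compresses synthetic "pose" vectors with PCA and then solves for latent coefficients that satisfy a handful of coordinate constraints by least squares. The data, the number of components, the constrained coordinates and the plain least-squares solve (in place of the paper's prioritized inverse kinematics) are all illustrative assumptions.

```python
# A minimal sketch of constrained synthesis in a PCA latent space (not the
# paper's PIK solver): poses are compressed with PCA, and a new pose is found
# by choosing low-dimensional coefficients so that selected coordinates
# ("constrained joints") match user-specified targets. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))               # 200 example poses, 30 DOFs
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
B = Vt[:5]                                   # top-5 principal components (5 x 30)

idx = np.array([0, 7, 12])                   # constrained DOFs
target = np.array([1.0, -0.5, 0.3])          # user-imposed values

# least squares in latent space: minimize ||B[:, idx].T z - (target - mean[idx])||
z, *_ = np.linalg.lstsq(B[:, idx].T, target - mean[idx], rcond=None)
pose = mean + z @ B                          # reconstructed full pose
print(np.round(pose[idx], 3))                # close to target if constraints are compatible
```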
MATLAB based GUIs for linear controller design via convex optimization
COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 1, 2003. Wathanyoo Khaisongkram.
Owing to the current evolution of computational tools, complicated parameter optimization problems can be solved effectively by a computer. In this paper, a CAD tool for multi-objective controller design based on a MATLAB program is developed. In addition, we construct simple GUIs (using GUIDE tools within MATLAB) to provide a visual approach to specifying the constraints. The linear controller design problem can be cast as a convex optimization subject to time-domain and frequency-domain constraints. This optimization problem is efficiently solved within a finite-dimensional subspace by a practical ellipsoid algorithm. In the design process, we include a model reduction of the resulting controller to improve computational efficiency. Finally, a numerical example shows the capability of the program to design a multi-objective controller for a one-link flexible robot arm. © 2003 Wiley Periodicals, Inc. Comput Appl Eng Educ 11: 13-24, 2003; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.10035 [source]

Optimizing Structure Preserving Embedded Deformation for Resizing Images and Vector Art
COMPUTER GRAPHICS FORUM, Issue 7, 2009. Qi-xing Huang.
Smart deformation and warping tools play an important part in modern day geometric modeling systems. They allow existing content to be stretched or scaled while preserving visually salient information. To date, these techniques have primarily focused on preserving local shape details, not taking into account important global structures such as symmetry and line features. In this work we present a novel framework that can be used to preserve the global structure in images and vector art. Such structures include symmetries and the spatial relations in shapes and line features in an image. Central to our method is a new formulation of preserving structure as an optimization problem. We use novel optimization strategies to achieve the interactive performance required by modern day modeling applications. We demonstrate the effectiveness of our framework by performing structure-preserving deformation of images and complex vector art at interactive rates. [source]

Range Scan Registration Using Reduced Deformable Models
COMPUTER GRAPHICS FORUM, Issue 2, 2009. W. Chang.
We present an unsupervised method for registering range scans of deforming, articulated shapes. The key idea is to model the motion of the underlying object using a reduced deformable model. We use a linear skinning model for its simplicity and represent the weight functions on a regular grid localized to the surface geometry. This decouples the deformation model from the surface representation and allows us to deal with the severe occlusion and missing data that is inherent in range scan data. We formulate the registration problem using an objective function that enforces close alignment of the 3D data and includes an intuitive notion of joints. This leads to an optimization problem that we solve using an efficient EM-type algorithm. With our algorithm we obtain smooth deformations that accurately register pairs of range scans with significant motion and occlusion. The main advantages of our approach are that it does not require user-specified markers, a template, or manual segmentation of the surface geometry into rigid parts. [source]

A survey on Mesh Segmentation Techniques
COMPUTER GRAPHICS FORUM, Issue 6, 2008. Ariel Shamir.
We present a review of the state of the art of segmentation and partitioning techniques of boundary meshes. Recently, these have become a part of many mesh and object manipulation algorithms in computer graphics, geometric modelling and computer aided design. We formulate the segmentation problem as an optimization problem and identify two primarily distinct types of mesh segmentation, namely part segmentation and surface-patch segmentation. We classify previous segmentation solutions according to the different segmentation goals, the optimization criteria and features used, and the various algorithmic techniques employed. We also present some generic algorithms for the major segmentation techniques. [source]

Three-Dimensional Optimization of Urban Drainage Systems
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 6, 2000. A. Freire Diogo.
A global mathematical model for simultaneously obtaining the optimal layout and design of urban drainage systems for foul sewage and stormwater is presented. The model can handle every kind of network, including parallel storm and foul sewers. It selects the optimal location for pumping systems and outfalls or wastewater treatment plants (defining the natural and artificial drainage basins), and it allows the presence of special structures and existing subsystems for optimal remodeling or expansion. It is possible to identify two basic optimization levels: in the first level, the generation and transformation of general layouts (consisting of forests of trees) until a convergence criterion is reached, and in the second level, the design and evaluation of each forest. The global strategy adopted combines and develops a sequence of optimal design and plan layout subproblems. Dynamic programming is used as a very powerful technique, alongside simulated annealing and genetic algorithms, in this discrete combinatorial optimization problem of huge dimension. [source]

A Multiobjective and Stochastic System for Building Maintenance Management
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5, 2000. Z. Lounis.
Building maintenance management involves decision making under multiple objectives and uncertainty, in addition to budgetary constraints. This article presents the development of a multiobjective and stochastic optimization system for maintenance management of roofing systems that integrates stochastic condition-assessment and performance-prediction models with a multiobjective optimization approach. The maintenance optimization includes determination of the optimal allocation of funds and prioritization of roofs for maintenance, repair, and replacement that simultaneously satisfy the following conflicting objectives: (1) minimization of maintenance and repair costs, (2) maximization of network performance, and (3) minimization of risk of failure. A product model of the roof system is used to provide the data framework for collecting and processing data. Compromise programming is used to solve this multiobjective optimization problem and provides building managers with an effective decision support system that identifies the optimal projects for repair and replacement while achieving a satisfactory tradeoff between the conflicting objectives. [source]
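The compromise-programming step mentioned in the last abstract can be pictured as ranking alternatives by their weighted distance to the ideal point in normalized criterion space. The sketch below is a minimal illustration on made-up (cost, performance, risk) values; the candidate projects, weights and exponent p are assumptions, not the paper's roofing data.

```python
# A minimal compromise-programming sketch: rank candidate repair projects by
# their weighted L_p distance to the ideal point of the normalized criteria.
# criteria: (cost, performance, risk); cost and risk are minimized,
# performance is maximized. All numbers are illustrative.
projects = {
    "roof A": (120.0, 0.60, 0.30),
    "roof B": ( 80.0, 0.45, 0.55),
    "roof C": (150.0, 0.80, 0.10),
}
SENSE = (-1, +1, -1)          # -1: minimize, +1: maximize
WEIGHTS = (0.4, 0.3, 0.3)
P = 2.0

cols = list(zip(*projects.values()))
best  = [max(v) if s > 0 else min(v) for v, s in zip(cols, SENSE)]   # ideal point
worst = [min(v) if s > 0 else max(v) for v, s in zip(cols, SENSE)]   # anti-ideal point

def distance(x):
    d = 0.0
    for xi, b, w0, wt in zip(x, best, worst, WEIGHTS):
        d += (wt * abs(b - xi) / abs(b - w0)) ** P    # weighted normalized regret
    return d ** (1.0 / P)

ranking = sorted(projects, key=lambda k: distance(projects[k]))
print("compromise ranking:", ranking)
```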
A performance-oriented adaptive scheduler for dependent tasks on grids
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9, 2008. Luiz F. Bittencourt.
A scheduler must consider the heterogeneity and communication delays when scheduling dependent tasks on a grid. The task-scheduling problem is NP-complete in general, which led us to the development of a heuristic for the associated optimization problem. In this work we present a dynamic adaptive approach to schedule dependent tasks onto a grid based on the Xavantes grid middleware. The developed dynamic approach is applied to the Path Clustering Heuristic, and introduces the concept of rounds, which take turns sending tasks to execution and evaluating the performance of the resources. The adaptive extension changes the size of rounds during process execution, taking task attributes and resource performance as parameters, and it can be adopted in other task schedulers. The experiments show that the dynamic round-based and adaptive schedule can minimize the effects of performance losses while executing processes on the grid. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Seismic design of RC structures: A critical assessment in the framework of multi-objective optimization
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 12, 2007. Nikos D. Lagaros.
The assessment of seismic design codes has been the subject of intensive research work in an effort to reveal weak points that originate from the limitations in predicting with acceptable precision the response of structures under moderate or severe earthquakes. The objective of this work is to evaluate the European seismic design code, i.e. Eurocode 8 (EC8), when used for the design of 3D reinforced concrete buildings, versus a performance-based design (PBD) procedure, in the framework of a multi-objective optimization concept. The initial construction cost and the maximum interstorey drift for the 10/50 hazard level are the two objectives considered for the formulation of the multi-objective optimization problem. The solution of such optimization problems is represented by the Pareto front curve, which is the geometric locus of all Pareto optimum solutions. Limit-state fragility curves for selected designs, taken from the Pareto front curves of the EC8 and PBD formulations, are developed for assessing the two seismic design procedures. Through this comparison it was found that a linear analysis in conjunction with the behaviour factor q of EC8 cannot capture the nonlinear behaviour of an RC structure. Consequently the corrected EC8 Pareto front curve, obtained using the nonlinear static procedure, differs significantly from the corresponding Pareto front obtained according to EC8. Furthermore, similar designs, with respect to the initial construction cost, obtained through the EC8 and PBD formulations were found to exhibit different maximum interstorey drifts and limit-state fragility curves. Copyright © 2007 John Wiley & Sons, Ltd. [source]
Multiobjective heuristic approaches to seismic design of steel frames with standard sections
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 11, 2007. M. Ohsaki.
The seismic design problem of a steel moment-resisting frame is formulated as a multiobjective programming problem. The total structural (material) volume and the plastic dissipated energy at the collapse state against severe seismic motions are considered as performance measures. Geometrically nonlinear inelastic time-history analysis is carried out against recorded ground motions that are incrementally scaled to reach the predefined collapse state. The frame members are chosen from the lists of the available standard sections. Simulated annealing (SA) and tabu search (TS), which are categorized as single-point-search heuristics, are applied to the multiobjective optimization problem. It is shown in the numerical examples that the frames that collapse with uniform interstorey drift ratios against various levels of ground motions can be obtained as a set of Pareto optimal solutions. Copyright © 2007 John Wiley & Sons, Ltd. [source]
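Whatever heuristic generates the candidate frames, the reported Pareto set is obtained by discarding dominated designs. The sketch below is a generic non-dominated filter on made-up objective pairs (structural volume to be minimized, and the negative of dissipated energy so that both columns are minimized); the candidate values are illustrative, not the paper's frames.

```python
# A minimal Pareto-filter sketch: keep only the non-dominated designs from a
# pool of candidates evaluated on two objectives, both to be minimized.
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

candidates = {                     # (volume, -dissipated energy), illustrative values
    "frame 1": (12.4, -310.0),
    "frame 2": (10.8, -295.0),
    "frame 3": (11.5, -320.0),
    "frame 4": (13.0, -290.0),
}

pareto = [k for k, f in candidates.items()
          if not any(dominates(g, f) for j, g in candidates.items() if j != k)]
print("non-dominated designs:", pareto)
```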
Optimal design of supplemental viscous dampers for irregular shear-frames in the presence of yielding
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 8, 2005. Oren Lavan.
A methodology for the optimal design of supplemental viscous dampers for regular as well as irregular yielding shear-frames is presented. It addresses the problem of minimizing the added damping subject to a constraint on an energy-based global damage index (GDI) for an ensemble of realistic ground motion records. The applicability of the methodology to irregular structures is achieved by choosing an appropriate GDI. For a particular choice of the parameters comprising the GDI, a design for the elastic behavior of the frame, or for equal damage in all stories, is achieved. The use of a gradient-based optimization algorithm for the solution of the optimization problem is enabled by first deriving an expression for the gradient of the constraint. The optimization process is started for one 'active' ground motion record, which is efficiently selected from the given ensemble. If the resulting optimal design fails to satisfy the constraints for other records from the original ensemble, additional ground motions (loading conditions) are added one by one to the 'active' set until the optimum is reached. Two examples of the optimal design of supplemental dampers are given: a 2-story shear frame with varying strength distribution and a 10-story shear frame. The 2-story shear frame is designed for one given ground motion, whereas the 10-story frame is designed for an ensemble of twenty ground motions. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Gain-scheduling control of a rotary inverted pendulum by weight optimization and H∞ loop shaping procedure
ELECTRICAL ENGINEERING IN JAPAN, Issue 2, 2008. Kazuhiro Yubai.
Gain-scheduling control is an effective method for use with plants whose dynamics change significantly according to the operating point. The frozen parameter method, a practical gain-scheduling controller synthesis method, interpolates the controllers designed at prespecified (frozen) operating points according to the current operating point. Hyde and Glover proposed a gain-scheduling control method in which the H∞ loop shaping procedure is adopted as the controller synthesis method at each operating point. The H∞ loop shaping procedure is based on loop shaping of an open-loop characteristic by frequency weights and is known to be effective for plants with bad condition numbers. However, weight selection satisfying the control specifications is a difficult job for a designer. This paper describes the design of suboptimal weights and a controller by means of an algorithm that maximizes the robust stability margin and shapes the open-loop characteristic into the desired shape at each operating point. In addition, we formulate the weight optimization problem as a generalized eigenvalue minimization problem, which reduces the burden on the designer in weight selection. Finally, we realize a robust, high-performance control system by scheduling both weights and controllers. The effectiveness of the proposed control system is verified in terms of the achieved robust stability margin and the experimental time responses of a rotary inverted pendulum, which involves strongly nonlinear dynamics. © 2008 Wiley Periodicals, Inc. Electr Eng Jpn, 163(2): 30-40, 2008; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/eej.20647 [source]

Robust optimum design of SAW filters by the penalty function method
ELECTRICAL ENGINEERING IN JAPAN, Issue 3, 2007. Kiyoharu Tagawa.
In order to increase the reliability of surface acoustic wave (SAW) filters, a robust optimum design technique is presented. The frequency response characteristics of SAW filters are governed primarily by their geometrical structures, that is, the configurations of the interdigital transducers (IDTs) and reflectors fabricated on piezoelectric substrates. To choose desirable structures of SAW filters through computer simulation, conventional design techniques utilize the equivalent circuit model of the IDT. However, they have rarely considered the accuracy of the underlying model, which may be degraded by dispersion of the circuit parameters. In this paper, considering the errors in these parameters, the robust optimum design of SAW filters is formulated as a constrained optimization problem. Then, a penalty function method combined with an improved variable neighborhood search is proposed and applied to the problem. Computational experiments conducted on a practical design problem of a resonator-type SAW filter demonstrate the usefulness of the proposed method. © 2006 Wiley Periodicals, Inc. Electr Eng Jpn, 158(3): 45-54, 2007; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/eej.20469 [source]
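The penalty-function idea used above can be sketched in a few lines: constraint violations are added to the objective with a large weight, and the resulting unconstrained function is handed to a search procedure. The objective, constraint, penalty weight and crude random neighborhood search below are illustrative stand-ins, not the SAW-filter model or the paper's improved variable neighborhood search.

```python
# A minimal penalty-function sketch: a constrained minimization is converted to
# an unconstrained one by penalizing constraint violations, then attacked with
# a simple random neighborhood search. All numbers are illustrative.
import random

def objective(x):                       # smooth objective to minimize
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def violation(x):                       # constraint g(x) = x0 + x1 - 2 <= 0
    return max(0.0, x[0] + x[1] - 2.0)

def penalized(x, rho=100.0):            # quadratic exterior penalty
    return objective(x) + rho * violation(x) ** 2

x = [0.0, 0.0]
best = penalized(x)
for _ in range(20000):                  # crude neighborhood search
    cand = [xi + random.uniform(-0.05, 0.05) for xi in x]
    val = penalized(cand)
    if val < best:
        x, best = cand, val

print("approx. solution:", [round(v, 3) for v in x],
      "constraint violation:", round(violation(x), 4))
```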
Optimization of Operating Temperature for Continuous Immobilized Glucose Isomerase Reactor with Pseudo Linear Kinetics
ENGINEERING IN LIFE SCIENCES (ELECTRONIC), Issue 5, 2004. N.M. Faqir.
In this work, the optimal operating temperature for the enzymatic isomerization of glucose to fructose using a continuous immobilized glucose isomerase packed bed reactor is studied. The optimization problem describing the performance of such a reactor is based on reversible pseudo-linear kinetics and is expressed in terms of a recycle ratio. The thermal deactivation of the enzyme as well as the substrate protection during reactor operation is considered. The formulation of the problem is expressed in terms of maximization of the productivity of fructose. This constrained nonlinear optimization problem is solved using the disjoint policy of the calculus of variations. Accordingly, this method of solution transforms the nonlinear optimization problem into a system of two coupled nonlinear ordinary differential equations (ODEs) of the initial value type: one equation for the operating temperature profile and one for the enzyme activity. The ODE for the operating temperature profile depends on the recycle ratio, the operating time period and the reactor residence time, as well as the kinetics of the reaction and of enzyme deactivation. The optimal initial operating temperature is selected by solving the ODE system and maximizing the fructose productivity. This results in an unconstrained one-dimensional optimization problem with simple bounds on the operating temperature. Depending on the limits of the recycle ratio, which represents either a plug flow or a mixed flow reactor, it is found that the optimal temperature of operation is characterized by an increasing temperature profile. For higher residence times and low operating periods the residual enzyme activity in the mixed flow reactor is higher than that in the plug flow reactor, which in turn allows the mixed flow reactor to operate at a lower temperature than the plug flow reactor. At long operating times and short residence times, the operating temperature profiles are almost the same for both reactors. This can be attributed to the effect of substrate protection on enzyme stability, which is almost the same for both reactors. Improvement in the fructose productivity for both types of reactors is achieved when compared with operation at the constant optimum temperature. The improvement in fructose productivity for the plug flow reactor is significant in comparison with the mixed flow reactor. [source]

An efficient hybrid evolutionary algorithm based on PSO and ACO for distribution feeder reconfiguration
EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 5, 2010. Taher Niknam.
A new formulation based on the norm2 method for multi-objective distribution feeder reconfiguration (DFR) is introduced in order to minimize the real power loss, the deviation of node voltages and the number of switching operations, and to balance the loads on the feeders. In the proposed method, since the objective functions are of different natures and not commensurable, they are considered as a vector, and the aim is to maximize the distance (norm2) between the objective function vector and the worst objective function vector while the constraints are met. The statuses of the tie and sectionalizing switches are considered as the control variables. The proposed DFR problem is a multi-objective, non-differentiable optimization problem, so a hybrid evolutionary algorithm based on Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), called HPSO, is proposed to solve it. The feasibility of the HPSO algorithm and of the proposed DFR formulation is demonstrated, and the results are compared with the solutions obtained by other approaches and evolutionary methods such as the genetic algorithm (GA), ACO and the original PSO on different distribution test systems. Copyright © 2009 John Wiley & Sons, Ltd. [source]
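A minimal sketch of the norm2 fitness described above: each candidate configuration's objective vector is normalized between the best and worst (anti-ideal) values, and its Euclidean distance from the worst vector is the quantity to maximize. The objective names and numbers below are illustrative placeholders, not real feeder data.

```python
# Norm-2 distance from the anti-ideal point in normalized objective space;
# larger values indicate better candidate switch configurations.
import math

def norm2_fitness(objs, worst, best):
    """All objectives are minimized; (w - o)/(w - b) maps each to [0, 1]."""
    return math.sqrt(sum(((w - o) / (w - b)) ** 2
                         for o, w, b in zip(objs, worst, best)))

worst = [250.0, 0.10, 40, 0.60]   # loss (kW), max voltage deviation, switchings, load imbalance
best  = [ 90.0, 0.02,  4, 0.10]
candidate = [130.0, 0.04, 10, 0.25]
print(round(norm2_fitness(candidate, worst, best), 3))
```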
Incorporating power system security into market-clearing of day-ahead joint energy and reserves auctions
EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 2, 2010. J. Aghaei.
This paper introduces a technique for incorporating system security into the clearing of day-ahead joint electricity markets, with particular emphasis on voltage stability. A Multiobjective Mathematical Programming (MMP) formulation is implemented for the provision of ancillary services (Automatic Generation Control or AGC, spinning, non-spinning, and operating reserves) as well as energy in simultaneous auctions by a pool-based aggregated market scheme. In the proposed market-clearing structure, the security problem, as an important responsibility of the ISO, is addressed, and a nonlinear model is formulated and used as the extra objective functions of the optimization problem. Thus, in the MMP formulation of the market-clearing process, the objective functions (including augmented generation offer cost, overload index, voltage drop index, and loading margin) are optimized while meeting AC power flow constraints, system reserve requirements, and lost opportunity cost (LOC) considerations. The IEEE 24-bus Reliability Test System (RTS 24-bus) is used to demonstrate the performance of the proposed method. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Algorithmic challenges and current problems in market coupling regimes
EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 4, 2009. Bernd Tersteegen.
Increasing cross-border trade at European borders has led to the necessity of an efficient allocation of scarce cross-border capacities. Explicit auctions used to be the commonly applied auction method at most borders, but because the trade of electrical energy and the allocation of cross-border capacity are separated, market inefficiencies arise. As a consequence, a trend toward market coupling, which combines the trade of electrical energy with the allocation of cross-border capacity, can be observed across Europe. The most convincing approach to solving the complex optimization task associated with market coupling maximizes the system-wide welfare in a closed-form optimization. Practical experience shows that problems remain with such an approach. This paper thoroughly analyzes problems that may occur in market coupling regimes with a closed-form optimization, and presents an extension of formerly presented formulations of the optimization problem that avoids the described problems. The extended formulation still assures practically feasible calculation times of far less than 10 minutes, even for systems with up to 12 market areas. Further, a fair and transparent approach to determining feasible market clearing prices that does not neglect the temporal and market-coupling relationships between prices is shown, and it is demonstrated that this approach does not lead to practically infeasible calculation times. Copyright © 2009 John Wiley & Sons, Ltd. [source]

A genetic algorithm multi-objective approach for efficient operational planning technique of distribution systems
EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 2, 2009. C. Lakshminarayana.
This paper presents a genetic algorithm multi-objective approach as an efficient operational planning technique for electrical power distribution systems (EPDS). Service restoration is a non-linear, combinatorial, non-differentiable and multi-objective optimization problem that often has a great number of candidate solutions to be evaluated by the operators. To tackle the problem of service restoration with multiple objectives, the weighted sum strategy is employed to convert these objectives into a single objective function by giving them equal weighting values. The transformer/feeder capacity in the post-fault distribution network creates a major problem for electrical power distribution engineers and distribution substation operators in satisfying the operational constraints at the consumer localities while restoring the power supply. The feasibility of the developed algorithm for service restoration is demonstrated on several distribution networks, and the fast and effective convergence of this technique helps to find an efficient operational plan for the EPDS. Copyright © 2007 John Wiley & Sons, Ltd. [source]
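The equal-weighting strategy mentioned above amounts to collapsing the normalized restoration objectives into one scalar fitness for the genetic algorithm. A minimal sketch, with illustrative objective names and values that are not taken from the paper:

```python
# Weighted-sum scalarization of several normalized objectives (all minimized)
# into a single fitness value; equal weights are used by default.
def scalar_fitness(objs, weights=None):
    weights = weights or [1.0 / len(objs)] * len(objs)   # equal weighting
    return sum(w * o for w, o in zip(weights, objs))

# normalized objectives: out-of-service load, switching operations, feeder overload
print(scalar_fitness([0.20, 0.35, 0.10]))
```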
Improved genetic algorithm for multi-objective reactive power dispatch problem
EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 6, 2007. D. Devaraj.
This paper presents an improved genetic algorithm (GA) approach for solving the multi-objective reactive power dispatch problem. Loss minimization and maximization of the voltage stability margin are taken as the objectives. The maximum L-index of the system is used to specify the voltage stability level. Generator terminal voltages, reactive power generation of capacitor banks and tap-changing transformer settings are taken as the optimization variables. In the proposed GA, voltage magnitudes are represented as floating-point numbers, while transformer tap settings and the reactive power generation of capacitor banks are represented as integers. This alleviates the problems conventional binary-coded GAs have in dealing with real variables and with integer variables whose total number of permissible choices is not a power of two. Crossover and mutation operators which can deal with mixed variables are proposed. The proposed method has been tested on the IEEE 30-bus system and is compared with conventional methods and a binary-coded GA. The proposed method produced a loss lower than the value reported earlier and is well suited to solving mixed-integer optimization problems. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Optimal allocation of distributed generation and reactive sources considering tap positions of voltage regulators as control variables
EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 3, 2007. Mohamad Esmail Hamedani Golshan.
In this paper, by defining and solving an optimization problem, the amounts of distributed generators (DGs) and reactive power sources (RSs) at selected buses of a distribution system are computed to make up a given total of distributed generation while minimizing losses, line loadings and the total required reactive power capacity. The formulated problem is combinatorial, so a Tabu search algorithm is applied to solve it. Results of solving the optimization problem for a radial 33-bus distribution system and a meshed 6-bus system are presented. When less reactive capacity is used, treating the tap positions of voltage regulators as control variables plays a considerable role in loss reduction and in improving the voltage profile. In the case of meshed systems, including line loadings in the cost function can significantly change the results of the optimization, such as the amount of required reactive capacity and how DGs and RSs are assigned to the selected buses. Copyright © 2006 John Wiley & Sons, Ltd. [source]

A genetic algorithm approach to solving the anti-covering location problem
EXPERT SYSTEMS, Issue 5, 2006. Sohail S. Chaudhry.
In this paper we address the problem of locating a maximum weighted number of facilities such that no two are within a specified distance of each other. An approach based on the natural process of evolution, more specifically a genetic algorithm, is proposed to solve this problem. It is shown that, through the use of a commercially available spreadsheet-based genetic algorithm software package, a decision-maker with a fundamental knowledge of spreadsheets can easily set up and solve this optimization problem. We also report on our extensive computational experience using three different data sets. [source]
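The fitness evaluation inside such a genetic algorithm has to reward the total weight of the selected sites while discouraging pairs that violate the minimum-separation requirement. The sketch below is one simple penalized-fitness variant with made-up coordinates, weights and separation distance; it is not the spreadsheet package's encoding.

```python
# A minimal anti-covering fitness sketch: maximize the total weight of selected
# sites, penalizing every pair of selected sites closer than the minimum
# separation distance R. All data are illustrative.
import math

sites = {            # site: (x, y, weight)
    "s1": (0.0, 0.0, 5.0),
    "s2": (1.0, 0.5, 3.0),
    "s3": (4.0, 0.0, 4.0),
    "s4": (4.5, 3.9, 2.0),
}
R = 2.0

def fitness(selected, penalty=10.0):
    total = sum(sites[s][2] for s in selected)
    chosen = sorted(selected)
    violations = 0
    for i in range(len(chosen)):
        for j in range(i + 1, len(chosen)):
            (x1, y1, _), (x2, y2, _) = sites[chosen[i]], sites[chosen[j]]
            if math.hypot(x1 - x2, y1 - y2) < R:
                violations += 1
    return total - penalty * violations    # penalized weight, to be maximized

print(fitness({"s1", "s3", "s4"}), fitness({"s1", "s2"}))
```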
Rapid risk assessment using probability of fracture nomographs
FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 11, 2009. R. PENMETSA.
The traditional risk-based design process involves designing the structure based on risk estimates obtained during several iterations of an optimization routine. This approach is computationally expensive for large-scale aircraft structural systems. Therefore, this paper introduces the concept of risk-based design plots that can be used for both structural sizing and risk assessment of fracture strength when the maximum allowable crack length is available. In situations where the crack length is defined as a probability distribution, the presented approach can only be applied to various percentiles of crack length. These plots are obtained using normalized probability density models of load and material properties and are applicable for any arbitrary load and strength values. Risk-based design plots serve as a tool for failure probability assessment given geometry and applied load, or they can determine geometric constraints to be used in sizing given an allowable failure probability. This approach transforms a reliability-based optimization problem into a deterministic optimization problem with geometric constraints that implicitly incorporate risk into the design. In this paper, a cracked flat plate and a stiffened plate are used to demonstrate the methodology and its applicability. [source]

Joint inversion of multiple data types with the use of multiobjective optimization: problem formulation and application to the seismic anisotropy investigations
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2, 2007. E. Kozlovskaya.
In geophysical studies the problem of joint inversion of multiple experimental data sets obtained by different methods is conventionally considered as a scalar one. Namely, a solution is found by minimization of a linear combination of functions describing the fit of the values predicted from the model to each set of data. In the present paper we demonstrate that this standard approach is not always justified and propose to consider a joint inversion problem as a multiobjective optimization problem (MOP), for which the misfit function is a vector. The method is based on analysis of two types of solutions to the MOP considered in the space of misfit functions (objective space). The first one is the set of complete optimal solutions that minimize all the components of the vector misfit function simultaneously. The second one is the set of Pareto optimal solutions, or trade-off solutions, for which it is not possible to decrease any component of the vector misfit function without increasing at least one other. We investigate the connection between the standard formulation of a joint inversion problem and the multiobjective formulation and demonstrate that the standard formulation is a particular case of scalarization of a multiobjective problem using a weighted sum of component misfit functions (objectives). We illustrate the multiobjective approach with a non-linear problem of the joint inversion of shear wave splitting parameters and longitudinal wave residuals. Using synthetic data and real data from three passive seismic experiments, we demonstrate that random noise in the data and inexact model parametrization destroy the complete optimal solution, which degenerates into a fairly large Pareto set. As a result, the non-uniqueness of the joint inversion problem increases. If random noise in the data is the only source of uncertainty, the Pareto set expands around the true solution in the objective space. In this case the 'ideal point' method of scalarization of multiobjective problems can be used. If the uncertainty is due to inexact model parametrization, the Pareto set in the objective space deviates strongly from the true solution. In this case all scalarization methods fail to find a solution close to the true one, and a change of model parametrization is necessary. [source]

A review of the adjoint-state method for computing the gradient of a functional with geophysical applications
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2, 2006. R.-E. Plessix.
Estimating the model parameters from measured data generally consists of minimizing an error functional. A classic technique to solve a minimization problem is to successively determine the minimum of a series of linearized problems. This formulation requires the Fréchet derivatives (the Jacobian matrix), which can be expensive to compute. If the minimization is viewed as a non-linear optimization problem, only the gradient of the error functional is needed, and this gradient can be computed without the Fréchet derivatives. In the 1970s, the adjoint-state method was developed to efficiently compute the gradient. It is now a well-known method in the numerical community for computing the gradient of a functional with respect to the model parameters when this functional depends on those model parameters through state variables, which are solutions of the forward problem. However, this method is less well understood in the geophysical community. The goal of this paper is to review the adjoint-state method. The idea is to define some adjoint-state variables that are solutions of a linear system. The adjoint-state variables are independent of the model parameter perturbations and, in a way, gather the perturbations with respect to the state variables. The adjoint-state method is efficient because only one extra linear system needs to be solved. Several applications are presented. When applied to the computation of the derivatives of the ray trajectories, the link with the propagator of the perturbed ray equation is established. [source]
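The adjoint-state recipe reviewed above can be shown end to end on a small linear forward problem: one forward solve, one adjoint solve, and the full gradient follows, here cross-checked against finite differences. The operator A(m) = L + diag(m), the data and the quadratic misfit are illustrative assumptions, not a seismic modelling code.

```python
# A minimal adjoint-state sketch for a linear forward problem A(m) u = q with
# misfit J(m) = 1/2 ||u - d||^2: a single extra (adjoint) solve yields the
# whole gradient, verified here against a finite-difference approximation.
import numpy as np

n = 6
rng = np.random.default_rng(0)
L = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
q = rng.normal(size=n)                 # source term
d = rng.normal(size=n)                 # observed data
m = 1.0 + rng.random(n)                # model parameters

def misfit(m):
    u = np.linalg.solve(L + np.diag(m), q)
    r = u - d
    return 0.5 * r @ r

# adjoint-state gradient: solve A u = q, then A^T lam = (u - d);
# since dA/dm_i = e_i e_i^T, dJ/dm_i = -lam_i * u_i
A = L + np.diag(m)
u = np.linalg.solve(A, q)
lam = np.linalg.solve(A.T, u - d)
grad_adjoint = -lam * u

# finite-difference check of the gradient
eps = 1e-6
grad_fd = np.array([(misfit(m + eps * np.eye(n)[i]) - misfit(m)) / eps for i in range(n)])
print(np.allclose(grad_adjoint, grad_fd, atol=1e-4))
```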
Frequency-domain finite-difference amplitude-preserving migration
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3, 2004. R.-E. Plessix.
A migration algorithm based on the least-squares formulation will find the correct reflector amplitudes if proper migration weights are applied. The migration weights can be viewed as a pre-conditioner for a gradient-based optimization problem. The pre-conditioner should approximate the pseudo-inverse of the Hessian of the least-squares functional. Usually, an infinite receiver coverage is assumed to derive this approximation, but this may lead to poor amplitude estimates for deep reflectors. To avoid the assumption of infinite coverage, new amplitude-preserving migration weights are proposed based on a Born approximation of the Hessian. The expressions are tested in the context of frequency-domain finite-difference two-way migration and show improved amplitudes for the deeper reflectors. [source]