Optimization

Distribution by Scientific Domains
Distribution within Chemistry

Kinds of Optimization

  • ant colony optimization
  • character optimization
  • colony optimization
  • combinatorial optimization
  • constrained optimization
  • continuous optimization
  • convex optimization
  • cost optimization
  • design optimization
  • direct optimization
  • dose optimization
  • dynamic optimization
  • energy optimization
  • experimental optimization
  • future optimization
  • genetic optimization
  • geometry optimization
  • global optimization
  • lead optimization
  • mesh optimization
  • model optimization
  • multi-objective optimization
  • multiobjective optimization
  • numerical optimization
  • parameter optimization
  • particle swarm optimization
  • performance optimization
  • process optimization
  • quantum chemical geometry optimization
  • rigorous optimization
  • shape optimization
  • simultaneous optimization
  • structural optimization
  • structure optimization
  • swarm optimization
  • systematic optimization
  • topology optimization

Terms modified by Optimization

  • optimization algorithm
  • optimization algorithms
  • optimization analysis
  • optimization approach
  • optimization criterion
  • optimization design
  • optimization formulation
  • optimization framework
  • optimization method
  • optimization methodology
  • optimization methods
  • optimization model
  • optimization models
  • optimization problem
  • optimization procedure
  • optimization process
  • optimization program
  • optimization routine
  • optimization scheme
  • optimization strategy
  • optimization studies
  • optimization study
  • optimization system
  • optimization technique
  • optimization techniques
  • optimization theory
  • optimization tool

Selected Abstracts


    OPTIMIZATION OF ELECTROHYDRODYNAMIC WRITING TECHNIQUE TO PRINT COLLAGEN

    EXPERIMENTAL TECHNIQUES, Issue 4 2007
    H.-S. Kim
    First page of article [source]


    OPTIMIZATION OF ENZYMATIC SYNTHESIS OF ISOMALTO-OLIGOSACCHARIDES PRODUCTION

    JOURNAL OF FOOD BIOCHEMISTRY, Issue 3 2009
    M.C. RABELO
    ABSTRACT Glucosyltransferases can be applied in the synthesis of prebiotic oligosaccharides. Enzymatic synthesis using acceptors can be used to obtain these carbohydrates. When maltose is the acceptor, oligosaccharides containing one maltose moiety and up to eight glucose units linked by α-1,6-glycosidic bonds are obtained as the product of the dextransucrase acceptor reaction. In this work, the enzymatic synthesis of isomalto-oligosaccharides using dextransucrase from Leuconostoc mesenteroides NRRL B-512F was optimized by response surface methodology. The effect of maltose and sucrose concentrations on the acceptor reaction was evaluated in a batch reactor system. Partially purified enzyme was used to reduce the enzyme purification cost. The results showed that high sucrose concentrations in conjunction with high maltose levels enhanced the isomalto-oligosaccharide synthesis. A productivity of 42.95 mmol/L·h of isomalto-oligosaccharides was obtained at the optimal operating condition (100 mmol/L of sucrose and 200 mmol/L of maltose). PRACTICAL APPLICATIONS Oligosaccharides as prebiotics have broad application in food formulations, and their beneficial role in human health has been extensively studied. Although the acceptor mechanism of dextransucrase has already been extensively studied, an industrial process has not yet been developed for the enzymatic synthesis of isomalto-oligosaccharides. The process studied in this work allows the large-scale preparation of isomalto-oligosaccharides using partially purified enzyme. [source]
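
    Many of the abstracts below lean on response surface methodology (RSM): fit a second-order polynomial to data from a designed experiment, then optimize the fitted surface. The sketch that follows shows that core step; the factor names echo this abstract (sucrose, maltose), but every data value is a placeholder, not the paper's.

```python
# A minimal RSM sketch on made-up data: fit the quadratic model
#   y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
# by least squares and solve grad(y) = 0 for the stationary point.
import numpy as np

# x1 = sucrose (mmol/L), x2 = maltose (mmol/L), y = productivity (illustrative)
x1 = np.array([50, 50, 100, 100, 150, 150, 100, 200, 200], dtype=float)
x2 = np.array([100, 200, 100, 200, 150, 250, 150, 100, 200], dtype=float)
y = np.array([20.1, 28.4, 30.2, 42.9, 35.0, 38.1, 36.5, 31.7, 40.2])

# design matrix for the full second-order model
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# stationary point: solve [2*b11, b12; b12, 2*b22] @ [x1, x2] = -[b1, b2]
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
x_stat = np.linalg.solve(H, -b[1:3])
print("stationary point (sucrose, maltose):", x_stat)
```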


    OPTIMIZATION OF PRE-FRY DRYING OF YAM SLICES USING RESPONSE SURFACE METHODOLOGY

    JOURNAL OF FOOD PROCESS ENGINEERING, Issue 4 2010
    OLAJIDE PHILIP SOBUKOLA
    ABSTRACT The effect of convective hot-air drying pretreatment and frying time at a frying temperature of 170 ± 1C on moisture and oil contents, breaking force (crispness) and color parameters of yam chips was investigated. Response surface methodology was used to develop models for the responses as a result of variation in levels of drying temperature (60-80C), drying time (1-5 min) and frying time (2-6 min). Drying pretreatment had a significant effect on oil and moisture contents, breaking force and color parameters of yam chips, with water removal exhibiting a typical drying profile. Response surface regression analysis shows that the responses were significantly (P < 0.05) correlated with drying temperature, drying time and frying time. The optimum pre-fry drying condition observed was a drying temperature of 70-75C for about 3-4 min, with frying for 4-5 min. PRACTICAL APPLICATIONS Deep-fat frying is a very important cooking method, and much effort has been devoted to manufacturing fried products with lower oil content and acceptable quality parameters. The information provided in this work will be very useful in manufacturing fried yam chips of acceptable quality through the combination of drying pretreatment conditions. The result is very useful for considering different processing variables and responses at the same time, as compared with the single-factor experiments common in the literature. [source]


    OPTIMIZATION OF NEW FLOUR IMPROVER MIXING FORMULA BY SURFACE RESPONSE METHODOLOGY

    JOURNAL OF FOOD PROCESS ENGINEERING, Issue 2 2010
    RAOUDHA ELLOUZE GHORBEL
    ABSTRACT In the present study, we sought to improve the viscoelastic properties of a wheat flour characterized by low bread-making quality. Six regulators were tested: broad bean flour, gluten, monodiglyceride (MDG), ascorbic acid, sodium alginate and a mixture of amylase and xylanase. A hybrid design was carried out in order to study the effect of these regulators on the alveographic properties of wheat flour dough. Two alveographic responses (W: deformation energy and P/L: elasticity-to-extensibility ratio) were studied and simultaneously optimized via desirability functions. An optimal mixture, containing 13.17 g/kg of broad bean flour, 15.13 g/kg of gluten, 0.155 g/kg of ascorbic acid, 3.875 g/kg of MDG, 2.75 g/kg of sodium alginate and 0.3 g/kg of enzyme mixture, was obtained and tested on a Tunisian flour. It led to a dough characterized by W = 274 × 10⁻⁴ J and P/L = 0.74, versus 191 × 10⁻⁴ J and 0.40, respectively, for the Tunisian flour without improvers. PRACTICAL APPLICATIONS In this work, we developed a new flour improver mixing formula intended to be used with wheat flour characterized by low bread-making quality. This improver mixture is in powder form and contains 13.17 g of broad bean flour, 15.13 g of gluten, 0.155 g of ascorbic acid, 3.875 g of monodiglyceride, 2.75 g of sodium alginate and 0.3 g of enzyme mixture per kilogram of wheat flour. The incorporation of this improver mixture in low bread-making-quality wheat flour increases its deformation energy (W) by about 43% and produces large-volume bread. [source]
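
    The "desirability functions" used above for simultaneous optimization are usually the Derringer-Suich construction: map each response onto a [0, 1] desirability and maximize the geometric mean. A minimal sketch follows; the response bounds, the P/L target window and the candidate mixes are assumptions for illustration, not values from the paper.

```python
# Derringer-Suich desirability sketch for two responses (bounds assumed):
# maximize W (larger-is-better), hit a target P/L (target-is-best),
# then rank candidates by the geometric mean of the two desirabilities.
import math

def d_larger(y, lo, hi):
    """Larger-is-better: 0 below lo, 1 above hi, linear in between."""
    return min(max((y - lo) / (hi - lo), 0.0), 1.0)

def d_target(y, lo, target, hi):
    """Target-is-best: peaks at target, falls to 0 at lo and hi."""
    if y <= lo or y >= hi:
        return 0.0
    return (y - lo) / (target - lo) if y <= target else (hi - y) / (hi - target)

def overall(W, PL):
    dW = d_larger(W, lo=180.0, hi=280.0)             # W in 1e-4 J (assumed bounds)
    dPL = d_target(PL, lo=0.3, target=0.7, hi=1.2)   # assumed P/L window
    return math.sqrt(dW * dPL)                       # geometric mean

# candidate formulations -> predicted (W, P/L); placeholder values
candidates = {"mix A": (274.0, 0.74), "mix B": (230.0, 1.05), "mix C": (191.0, 0.40)}
best = max(candidates, key=lambda k: overall(*candidates[k]))
print(best, round(overall(*candidates[best]), 3))
```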


    OPTIMIZATION OF PERMEABILIZATION PROCESS FOR LACTOSE HYDROLYSIS IN WHEY USING RESPONSE SURFACE METHODOLOGY

    JOURNAL OF FOOD PROCESS ENGINEERING, Issue 3 2009
    GURPREET KAUR
    ABSTRACT To overcome the permeability barrier and prepare whole-cell biocatalysts with high activities, permeabilization of Kluyveromyces marxianus var. lactis NCIM 3566 in relation to β-galactosidase activity was optimized using cetyltrimethylammonium bromide (CTAB) as permeabilizing agent. Permeabilized whole cells can be advantageous over pure enzyme preparations in terms of cost-effectiveness and the increased stability maintained by the intracellular environment. Response surface methodology (RSM) was applied to optimize the concentration of CTAB, the temperature and the treatment time for maximum permeabilization of yeast cells. The optimum operating conditions for the permeabilization process to achieve maximum enzyme activity obtained by RSM were 0.06% (w/v) CTAB, a temperature of 28C and a process duration of 14 min. At these conditions, the maximum value of enzyme activity was found to be 1,334 IU/g. The permeabilized yeast cells were highly effective and resulted in 90.5% lactose hydrolysis in whey. PRACTICAL APPLICATIONS β-Galactosidase is one of the most promising enzymes, with several applications in the food, fermentation and dairy industries. However, the industrial applications of β-galactosidase have been hampered by the costs involved in downstream processing. The present investigation focused on developing a low-cost technology for lactose hydrolysis based on the permeabilization process. Disposal of lactose in whey and whey permeate is one of the most significant problems, with regard to economics and environmental impact, faced by the dairy industries. Keeping this in view, lactose hydrolysis in whey has been successfully performed using permeabilized Kluyveromyces marxianus cells. Hydrolysis of lactose using β-galactosidase converts whey into a potentially very useful food ingredient with immense applications in food industries. Its use has increased significantly in recent years, mainly in dairy products and in digestive preparations. Lactose hydrolysis enables several potential changes in the manufacture and marketing of dairy products, including increased solubility, sweetness and broader fermentation possibilities. [source]


    OPTIMIZATION OF NATTOKINASE PRODUCTION CONDITION USING RESPONSE SURFACE METHODOLOGY

    JOURNAL OF FOOD PROCESS ENGINEERING, Issue 1 2006
    DJA-SHIN WANG
    ABSTRACT Natto has attracted worldwide attention because of its health benefits and long history in Japanese food. It has been found that a potent fibrinolytic enzyme named nattokinase, which is extracted from natto, is able to prevent atherosclerosis. The production of nattokinase may be influenced by various factors such as temperature, shaking speed, volume of medium, fermentation time and so forth. A three-step response surface methodology was applied to obtain the optimal operating conditions of the fermentation process in order to maximize the nattokinase yield. The three major steps are described as follows. First, the important factors for fermentation were identified by an L8 orthogonal array experiment. The chosen factors were temperature (37 or 45C), shaking speed (110 or 150 rpm), volume of medium (80 or 120 mL), Brix of wheat bran extract (1.5 or 3°), Brix of soy meal extract (1 or 2°), glucose concentration (0.6 or 1.2%) and fermentation time (24 or 36 h). Second, a regression equation was established between the response (i.e., the enzyme activity) and the two statistically significant factors (i.e., the volume of medium and the fermentation time). Third, the optimal values of the volume of medium and fermentation time were obtained from the response surface of the regression equation. According to the response surface analysis, the optimal operating conditions for the fermentation process should be 80 mL for the volume of medium and 37.0817 h for the fermentation time, which resulted in a predicted enzyme activity of 459.11 FU/mL. [source]
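
    The screening step above is the classic use of an L8(2⁷) orthogonal array: eight runs estimate the main effects of up to seven two-level factors. The sketch below computes those main effects; the array is the standard L8, the factor labels echo this abstract, and the response values are placeholders.

```python
# Main-effects screening from a standard L8(2^7) orthogonal array.
# Responses are illustrative, not the paper's enzyme activities.
L8 = [  # 8 runs x 7 two-level factors (0 = low, 1 = high)
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
]
factors = ["temp", "shake_rpm", "medium_vol", "bran_brix",
           "soy_brix", "glucose", "ferm_time"]
response = [310, 355, 330, 420, 295, 390, 340, 430]  # placeholder FU/mL

for j, name in enumerate(factors):
    hi = [r for row, r in zip(L8, response) if row[j] == 1]
    lo = [r for row, r in zip(L8, response) if row[j] == 0]
    print(f"{name:11s} main effect: {sum(hi)/len(hi) - sum(lo)/len(lo):+.1f}")
```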


    COMPLEX METHOD FOR NONLINEAR CONSTRAINED MULTI-CRITERIA (MULTI-OBJECTIVE FUNCTION) OPTIMIZATION OF THERMAL PROCESSING

    JOURNAL OF FOOD PROCESS ENGINEERING, Issue 4 2003
    FERRUH ERDO
    ABSTRACT The goal in a multi-objective function optimization problem is to optimize the several objective functions simultaneously. The complex method is a powerful algorithm for finding the optimum of a general nonlinear function within a constrained region. The objective of this study was to apply the complex method to two different shapes (a sphere and a finite cylinder) subjected to the same thermal processing boundary conditions to find a variable process temperature profile (decision variable) that maximizes the volume-average retention of thiamine. A process temperature range of 5 to 150C was used as an explicit constraint. Implicit constraints were the center temperature and accumulated center lethality of the sphere and the finite cylinder. The objective functions for both shapes were combined into a single one using a weighting method. Then, the previously developed complex algorithm was applied using lexicographic ordering to rank the objective functions by significance. The results were reported as optimum variable process temperature profiles for the given geometries and objective functions. The thiamine retentions were also compared with a constant-process-temperature process, and a 3.0% increase was obtained in the combined objective function. The results showed that the complex method can be successfully used to predict optimum variable process temperature profiles in multi-criteria thermal processing problems. [source]
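
    The "complex method" named in this and the following abstract is Box's constrained direct search: keep a complex of feasible points, reflect the worst point through the centroid of the others by a factor alpha, and retract toward the centroid while the trial point is infeasible or still worst. Below is a compact single-objective sketch on a toy problem (the paper first collapses its objectives into one by weighting); alpha = 1.3 and the other settings are conventional defaults, not the paper's.

```python
# Minimal sketch of Box's complex method for a constrained maximum.
import random

def box_complex(f, feasible, bounds, k=None, alpha=1.3, iters=200, seed=0):
    rng = random.Random(seed)
    n = len(bounds)
    k = k or 2 * n                            # complex size, at least n + 1
    pts = []                                  # seed with feasible random points
    while len(pts) < k:
        p = [rng.uniform(lo, hi) for lo, hi in bounds]
        if feasible(p):
            pts.append(p)
    for _ in range(iters):
        pts.sort(key=f)                       # ascending: pts[0] is the worst
        worst, rest = pts[0], pts[1:]
        centroid = [sum(p[i] for p in rest) / len(rest) for i in range(n)]
        trial = [c + alpha * (c - w) for c, w in zip(centroid, worst)]
        for _ in range(30):                   # retract while infeasible or worse
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            if feasible(trial) and f(trial) > f(worst):
                pts[0] = trial
                break
            trial = [(t + c) / 2 for t, c in zip(trial, centroid)]
    return max(pts, key=f)

# toy: maximize -(x-1)^2 - (y-2)^2 subject to x + y <= 4
best = box_complex(lambda p: -(p[0] - 1) ** 2 - (p[1] - 2) ** 2,
                   feasible=lambda p: p[0] + p[1] <= 4.0,
                   bounds=[(0.0, 5.0), (0.0, 5.0)])
print(best)
```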


    NONLINEAR CONSTRAINED OPTIMIZATION OF THERMAL PROCESSING II. VARIABLE PROCESS TEMPERATURE PROFILES TO REDUCE PROCESS TIME AND TO IMPROVE NUTRIENT RETENTION IN SPHERICAL AND FINITE CYLINDRICAL GEOMETRIES

    JOURNAL OF FOOD PROCESS ENGINEERING, Issue 3 2003
    ABSTRACT Conventional methods for thermal processing of foods use constant processing temperature profiles (CPTPs) for a prescribed processing time, which is based on achieving a required microbial lethality to comply with public health standards. This also results in degradation of nutrients and quality factors. The variable process temperature profiles (VPTPs) obtained by using optimization methods can reduce quality losses and/or processing time compared to CPTPs. The objective of this research was to evaluate VPTPs using the complex method to reduce the processing time and/or improve quality retention for a specified level of lethality in thermal processing of conduction-heated foods. The VPTPs were obtained for volume-average retention of thiamine considering different sizes of spheres (small and large) and finite cylinders (small and large), and the thiamine retention and processing time results were compared with a conventional method (processing at 121.1C) for a specified lethality level. The use of VPTPs resulted in 37% and 10% decreases in processing time for spherical shapes and 40% and 6% for finite cylindrical shapes, for the same objective function value and specified lethality compared to the CPTP process. For the same processing time, the improvements in thiamine destruction were 3.7 and 2% for spheres, and 3.9 and 2.2% for finite cylinders. [source]


    OPTIMIZATION OF WHEAT BLENDING TO PRODUCE BREADMAKING FLOUR

    JOURNAL OF FOOD PROCESS ENGINEERING, Issue 3 2001
    MEHMET HAYTA
    ABSTRACT Linear programming was utilized to optimize the blending of wheat lots that have different quality characteristics and costs. Using best-subsets regression, three quality tests (particle size index, dough volume and falling number value) were selected in relation to the loaf volume of the bread to be produced. The chosen criteria were set up in a linear programming format as a model for the computerized solution. The model's applicability was assessed in a commercial mill. As a result of applying the model, it was found possible to produce breadmaking flour of reasonable quality at a lower cost. [source]
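
    Blending problems of this kind have a standard LP shape: minimize blend cost subject to each predicted quality index meeting a minimum spec, with the lot fractions summing to one. A sketch with scipy.optimize.linprog follows; the lot costs, per-lot quality values and spec thresholds are all invented for illustration.

```python
# Generic wheat-blending LP (all numbers hypothetical).
from scipy.optimize import linprog

cost = [240.0, 265.0, 300.0]          # cost per tonne for three wheat lots
quality = [                           # rows: particle size index, dough volume,
    [55.0, 60.0, 72.0],               # falling number; columns: the three lots
    [420.0, 460.0, 500.0],
    [250.0, 310.0, 380.0],
]
specs = [60.0, 450.0, 300.0]          # minimum blended value per quality test

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so negate the >= rows
A_ub = [[-q for q in row] for row in quality]
b_ub = [-s for s in specs]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              A_eq=[[1.0, 1.0, 1.0]], b_eq=[1.0],
              bounds=[(0.0, 1.0)] * 3)
print(res.x, res.fun)                 # optimal lot fractions and blend cost
```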


    CONSUMER-BASED OPTIMIZATION OF PEANUT-CHOCOLATE BAR USING RESPONSE SURFACE METHODOLOGY

    JOURNAL OF FOOD PROCESSING AND PRESERVATION, Issue 3-4 2005
    EDITH M. SAN JUAN
    ABSTRACT The acceptability of the sensory properties of a peanut-chocolate bar was optimized for consumer acceptance using response surface methodology. The factors studied included sugar, peanuts, cocoa powder and a process variable, degree of roast. Twenty-seven peanut-chocolate bar formulations with two replications were evaluated for consumer acceptance (n = 168) for overall liking and acceptance of color, appearance, flavor, sweetness and texture using 9-point hedonic scales. In terms of overall liking, the use of dark-roasted peanuts yielded the largest number of acceptable formulations when compared to the medium- and light-roasted peanuts. Sensory evaluation indicated that sweetness acceptance was the limiting factor for acceptability. An acceptable peanut-chocolate bar can be obtained by using formulations containing 44-54% dark-, medium- or light-roasted peanuts, 1-4% cocoa powder and 41-55% sugar. [source]


    OPTIMIZATION OF VACUUM PULSE OSMOTIC DEHYDRATION OF CANTALOUPE USING RESPONSE SURFACE METHODOLOGY

    JOURNAL OF FOOD PROCESSING AND PRESERVATION, Issue 1 2005
    WILMER J. FERMIN
    ABSTRACT The optimum levels of vacuum pressure, concentration of osmotic solution and dehydration time for vacuum pulse osmotic dehydration of cantaloupe were determined by response surface methodology (RSM). The response surface equations (P < 0.05 and lack of fit > 0.1) explain 97.6, 88.0 and 97.1% of the variability in weight loss, water loss and °Brix increase, respectively, at the 95% confidence level. The canonical analysis for each response indicated that the stationary point is a saddle point for weight loss and °Brix increase, and a point of maximum response for water loss. The region that best satisfied all the constraints (low values of weight loss and °Brix increase, and a high value of water loss) is located within the intervals from 49.5 °Brix to 52.5 °Brix for concentration and from 75 min to 84 min for dehydration time at a vacuum pulse of 740 mbar. [source]
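
    The canonical analysis mentioned above classifies each fitted surface's stationary point by the eigenvalues of its Hessian: all negative means a maximum, all positive a minimum, mixed signs a saddle. A small sketch with placeholder coefficients:

```python
# Classify the stationary point of a fitted second-order surface
#   y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
# (coefficients below are placeholders, not the paper's fits).
import numpy as np

b1, b2, b11, b22, b12 = 1.4, -0.8, -0.05, 0.02, 0.01

H = np.array([[2 * b11, b12],
              [b12, 2 * b22]])            # Hessian of the fitted surface
eigvals = np.linalg.eigvalsh(H)
x_stat = np.linalg.solve(H, [-b1, -b2])   # where the gradient vanishes

if all(e < 0 for e in eigvals):
    kind = "maximum"
elif all(e > 0 for e in eigvals):
    kind = "minimum"
else:
    kind = "saddle point"
print(f"stationary point {x_stat} is a {kind} (eigenvalues {eigvals})")
```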


    OPTIMIZATION OF GUAVA JUICE AND POWDER PRODUCTION

    JOURNAL OF FOOD PROCESSING AND PRESERVATION, Issue 6 2001
    CHETAN A. CHOPDA
    Enzyme treatment of guava puree was optimized for yield and clarity by first determining the most effective concentration, then varying both incubation time and temperature. Application of Pectinex Ultra SP-L® was optimal using 700 ppm enzyme for 1.5 h at 50C. Clarified guava juice was clearer (89.6%) when prepared using ultrafiltration (MW cut-off 40-60 kDa) rather than plate-and-frame filtration (82.8%); however, the latter was higher in both soluble solids and ascorbic acid. Clarified guava juice powders were made using freeze-drying, spray drying and tunnel drying. The freeze-dried product had superior quality; however, the spray-dried product was stable and may be more economical. Sensory panelists ranked the cloudy juice prepared from aseptic guava puree highest, and there were no significant differences among the juices from pasteurized, clear nectar, freeze-dried puree powder or juice powder. [source]


    OPTIMIZATION OF SPRAY DRYING CONDITIONS FOR PRODUCTION OF BIFIDUS MILK POWDER FROM COW MILK

    JOURNAL OF FOOD QUALITY, Issue 4 2006
    M. SELVAMUTHUKUMARAN
    ABSTRACT Bifidus milk powder was prepared by supplementing cow's milk with a predetermined level of additives to obtain a slurry of the desired concentration. The slurry was sterilized, cooled, inoculated with a 24-h-old bulk culture of Bifidobacterium bifidum at 10%, incubated at 37C for 24 h, then cooled and dried in an SM Scientech lab-model spray dryer under predetermined spray drying conditions. The bifidus milk powder contained bifidobacteria counts from 1.88 × 10⁹ to 15.80 × 10⁹ cells/g dry weight, and their percent survival was 4.17-35.11%. Maximum survival was obtained by using the following spray drying conditions: inlet temperature of 164.02C, slurry concentration of 25.62% total soluble solids and air pressure of 2.5 kg/cm². The high temperature and air pressure of spray drying markedly influenced the color and appearance of the final product. The inlet temperature and air pressure showed a significant effect on survival of bifidobacteria in the final product. [source]


    OPTIMIZATION OF A CHOCOLATE PEANUT SPREAD USING RESPONSE SURFACE METHODOLOGY (RSM)

    JOURNAL OF SENSORY STUDIES, Issue 3 2004
    C.A. CHU
    ABSTRACT Response surface methodology was used to optimize formulations of chocolate peanut spread. Thirty-six formulations with varying levels of peanut (25-90%), chocolate (5-70%) and sugar (5-55%) were processed using a three-component constrained simplex lattice design. The processing variable, roast (light, medium, dark), was also included in the design. Response variables, measured with consumers (n = 60) participating in the test, were spreadability, overall acceptability, appearance, color, flavor, sweetness and texture/mouthfeel, using a 9-point hedonic scale. Regression analysis was performed and models were built for each significant (p < 0.01) response variable. Contour plots for each attribute, at each level of roast, were generated and superimposed to determine areas of overlap. Optimum formulations (consumer acceptance rating of ≥ 6.0 for all attributes) for chocolate peanut spread were all combinations of 29-65% peanut, 9-41% chocolate, and 17-36% sugar, adding up to 100%, at a medium roast. Verification of two formulations indicated no difference between predicted and observed values. [source]


    THE MEULLENET-OWENS RAZOR SHEAR (MORS) FOR PREDICTING POULTRY MEAT TENDERNESS: ITS APPLICATIONS AND OPTIMIZATION

    JOURNAL OF TEXTURE STUDIES, Issue 6 2008
    Y.S. LEE
    ABSTRACT The Meullenet-Owens Razor Shear (MORS), recently developed for the assessment of poultry meat tenderness, is a reliable instrumental method. Three different studies were conducted to (1) investigate the adaptation of the MORS to an Instron InSpec 2200 tester (InSpec); (2) optimize the number of replications necessary per fillet to obtain a reliable instrumental tenderness mean; and (3) test the efficacy of a blunt version of the MORS (BMORS). In study 1, the tenderness of 157 cooked broiler breast fillets was predicted by the MORS performed with both a texture analyzer (MORS standard) and the InSpec. A correlation coefficient of 0.95 was reported for the MORS energy obtained from both tests, indicating that the MORS performed with an InSpec is equivalent to that performed on the more expensive texture analyzer. In study 2, eight shears were taken on each cooked fillet (101 fillets) to determine a recommended number of shears per fillet for the MORS. A composite hypothesis test was conducted considering the average of 8 shears as Y (representative estimated tenderness of a fillet) and the average of 2, 3, 4, 5, 6 or 7 as X (potential recommended numbers of shears). The results showed the optimal number of replications of the MORS for a reliable estimate of tenderness to be four shears or greater per fillet. A blunt version of the MORS (BMORS) was introduced in study 3. A total of 288 broilers (576 fillets) were deboned at eight different postmortem deboning times. Tenderness of cooked fillets was assessed by both the MORS and BMORS on the same individual fillets. Both methods were equivalent in performance for predicting broiler breast meat tenderness, giving a correlation coefficient of 0.99 across all instrumental parameters obtained from both methods. Tenderness intensity perceived by consumers was slightly more highly correlated to BMORS energy (r = −0.90) than MORS energy (r = −0.87). The BMORS is recommended especially for tough meat because of its better ability to discriminate among tough samples. Overall, both the MORS and BMORS were proven to be reliable predictors of broiler breast meat tenderness. PRACTICAL APPLICATIONS The incidence of tough meat has been a major issue the poultry industry faces. The need to ensure consumer acceptance and the increased recognition of the importance of tenderness have therefore led to the development of instrumental methods for monitoring meat tenderness. To date, a great deal of effort has been devoted to the development of such instrumental methods. One promising method is the Meullenet-Owens Razor Shear (MORS). The method has gained popularity for predicting poultry meat tenderness because of its high reliability as well as its simplicity compared with other industry standards (Warner-Bratzler shear or Allo-Kramer shear). The MORS is not only as reliable as the industry standards, but also more rapid because of the elimination of sample-cutting steps. The application of the MORS will benefit the poultry industry, as it could significantly save labor and time when implemented for routine quality control. [source]


    RISK-REWARD OPTIMIZATION WITH DISCRETE-TIME COHERENT RISK

    MATHEMATICAL FINANCE, Issue 4 2010
    Alexander S. Cherny
    We solve the risk-reward optimization problem in the discrete-time setting, the reward being measured as the expected Profit and Loss and the risk being measured by a dynamic coherent risk measure. [source]
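
    The paper's risk measure is dynamic, but the flavor of a coherent risk-reward trade-off can be shown statically: expected shortfall (CVaR) is the textbook coherent example. The sketch below scores a position as expected P&L minus a penalty on the average of the worst outcomes; the simulated distribution, the 5% level and the penalty weight are illustrative assumptions, not the paper's setup.

```python
# Static risk-reward sketch with expected shortfall as the coherent risk measure.
import random

def expected_shortfall(pnl, level=0.05):
    """Average of the worst `level` fraction of P&L outcomes, sign-flipped."""
    worst = sorted(pnl)[: max(1, int(level * len(pnl)))]
    return -sum(worst) / len(worst)

def risk_reward(pnl, lam=0.5):
    reward = sum(pnl) / len(pnl)          # expected P&L
    return reward - lam * expected_shortfall(pnl)

rng = random.Random(1)
pnl = [rng.gauss(0.08, 1.0) for _ in range(10_000)]  # simulated P&L sample
print(round(risk_reward(pnl), 4))
```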


    PORTFOLIO OPTIMIZATION WITH JUMPS AND UNOBSERVABLE INTENSITY PROCESS

    MATHEMATICAL FINANCE, Issue 2 2007
    Nicole Bäuerle
    We consider a financial market with one bond and one stock. The dynamics of the stock price process allow jumps which occur according to a Markov-modulated Poisson process. We assume that there is an investor who is only able to observe the stock price process and not the driving Markov chain. The investor's aim is to maximize the expected utility of terminal wealth. Using a classical result from filter theory it is possible to reduce this problem with partial observation to one with complete observation. With the help of a generalized Hamilton-Jacobi-Bellman equation, where we replace the derivative by Clarke's generalized gradient, we identify an optimal portfolio strategy. Finally, we discuss some special cases of this model and prove several properties of the optimal portfolio strategy. In particular, we derive bounds and discuss the influence of uncertainty on the optimal portfolio strategy. [source]


    PORTFOLIO OPTIMIZATION WITH DOWNSIDE CONSTRAINTS

    MATHEMATICAL FINANCE, Issue 2 2006
    Peter Lakner
    We consider the portfolio optimization problem for an investor whose consumption rate process and terminal wealth are subject to downside constraints. In the standard financial market model that consists of d risky assets and one riskless asset, we assume that the riskless asset earns a constant instantaneous rate of interest, r > 0, and that the risky assets are geometric Brownian motions. The optimal portfolio policy for a wide scale of utility functions is derived explicitly. The gradient operator and the Clark-Ocone formula in Malliavin calculus are used in the derivation of this policy. We show how the Malliavin calculus approach can help us get around certain difficulties that arise in using the classical "delta hedging" approach. [source]
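
    For reference, the Clark-Ocone formula the abstract invokes reads, in its standard form for a Malliavin-differentiable functional F of a Brownian motion W on [0, T]:

```latex
% Clark--Ocone representation, F \in \mathbb{D}^{1,2}:
\[
  F \;=\; \mathbb{E}[F] \;+\; \int_0^T \mathbb{E}\!\left[\, D_t F \mid \mathcal{F}_t \,\right] dW_t ,
\]
% where D_t is the Malliavin derivative and (\mathcal{F}_t) the Brownian filtration.
```

    The hedging portfolio is read off the conditional-expectation integrand, which is how the approach sidesteps the difficulties of the classical delta-hedging computation.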


    CONSTRAINING HEAT INPUT BY TRAJECTORY OPTIMIZATION FOR MINIMUM-FUEL HYPERSONIC CRUISE

    ASIAN JOURNAL OF CONTROL, Issue 4 2006
    M. Wächter
    ABSTRACT Unsteady heat input effects are considered for the range cruise of a future hypersonic vehicle equipped with a combined turbo/ramjet engine. A realistic mathematical model describing the unsteady heat effects has been developed and coupled to the model for the dynamics of the vehicle. To compute the heat load in hypersonic flight, several points on the vehicle surface are treated simultaneously. A two-step technique consisting of an efficient optimization algorithm and an ordinary differential equation (ODE) solver is applied to generate a solution. The results show that the heat load can be significantly reduced with only a small increase in fuel consumption. [source]


    OPEN-LOOP AND CLOSED-LOOP OPTIMIZATION OF LINEAR CONTROL SYSTEMS

    ASIAN JOURNAL OF CONTROL, Issue 3 2000
    R. Gabasov
    ABSTRACT A canonical optimal control problem for linear systems with time-varying coefficients is considered in the class of discrete controls. On the basis of linear programming methods, two primal and two dual methods of constructing optimal open-loop controls are proposed. A method of synthesis of optimal feedback control is described. Results are illustrated by a fourth-order problem; estimates of the efficiency of the proposed methods are given. [source]


    ADAPTIVE MULTI-OBJECTIVE OPTIMIZATION BASED ON NONDOMINATED SOLUTIONS

    COMPUTATIONAL INTELLIGENCE, Issue 2 2009
    Dongdong Yang
    An adaptive hybrid model (AHM) based on nondominated solutions is presented in this study for multi-objective optimization problems (MOPs). In this model, three search phases are devised according to the number of nondominated solutions in the current population: 1) emphasizing the dominated solutions when the population contains very few nondominated solutions; 2) maintaining the balance between nondominated and dominated solutions as the nondominated ones become more numerous; and 3) when the population consists of adequate nondominated solutions, ignoring the dominated ones and allocating more computational budget to the isolated nondominated ones, according to their crowding distance values, for heuristic search. To exploit local information efficiently, a local incremental search algorithm, LISA, is proposed and merged into the model. The model adapts its search behavior throughout the optimization process according to the nondominated solutions discovered online. The proposed model is validated using five ZDT and five DTLZ problems. Compared with three other state-of-the-art multi-objective algorithms, namely NSGA-II, SPEA2, and PESA-II, AHM achieves comparable results in terms of convergence and diversity metrics. Finally, the sensitivity of the introduced parameters and the scalability to the number of objectives are investigated. [source]
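
    The crowding distance values used above are the NSGA-II-style density estimate: for each objective, sort the front, give the boundary solutions infinite distance, and add each interior solution's normalized gap between its two neighbors. A textbook sketch (not necessarily AHM's exact variant):

```python
# Standard NSGA-II-style crowding distance over a nondominated front.
def crowding_distance(front):
    """front: list of objective vectors; returns one distance per solution."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda i: front[i][j])
        lo, hi = front[order[0]][j], front[order[-1]][j]
        dist[order[0]] = dist[order[-1]] = float("inf")  # boundary solutions
        if hi == lo:
            continue
        for k in range(1, n - 1):
            gap = front[order[k + 1]][j] - front[order[k - 1]][j]
            dist[order[k]] += gap / (hi - lo)            # normalized neighbor gap
    return dist

front = [(0.1, 0.9), (0.3, 0.6), (0.5, 0.5), (0.9, 0.1)]
print(crowding_distance(front))
```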


    Preference-Based Constrained Optimization with CP-Nets

    COMPUTATIONAL INTELLIGENCE, Issue 2 2004
    Craig Boutilier
    Many artificial intelligence (AI) tasks, such as product configuration, decision support, and the construction of autonomous agents, involve a process of constrained optimization, that is, optimization of behavior or choices subject to given constraints. In this paper we present an approach for constrained optimization based on a set of hard constraints and a preference ordering represented using a CP-network, a graphical model for representing qualitative preference information. This approach offers both pragmatic and computational advantages. First, it provides a convenient and intuitive tool for specifying the problem, and in particular, the decision maker's preferences. Second, it admits an algorithm for finding the most preferred feasible (Pareto-optimal) outcomes that has the following anytime property: the set of preferred feasible outcomes is enumerated without backtracking. In particular, the first feasible solution generated by this algorithm is Pareto optimal. [source]
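
    The anytime behavior comes from letting the CP-net order the search: assign variables so that parents precede children, always try a variable's more-preferred value (given its parents) first, and the first complete assignment that satisfies the hard constraints is Pareto optimal. A toy sketch of that search follows; the variables, preference tables and constraint are invented, and the real algorithm adds pruning and full enumeration.

```python
# Toy CP-net-guided search: the first feasible outcome found is Pareto optimal.
# CPT: variable -> function(partial assignment) -> values, most preferred first.
cpnet = {
    "main": lambda a: ["meat", "fish"],
    "wine": lambda a: ["red", "white"] if a.get("main") == "meat" else ["white", "red"],
}
order = ["main", "wine"]                       # parents before children
constraints = [lambda a: not (a["main"] == "meat" and a["wine"] == "red")]

def first_feasible(i=0, assignment=None):
    a = dict(assignment or {})
    if i == len(order):
        return a if all(c(a) for c in constraints) else None
    for value in cpnet[order[i]](a):           # most-preferred value first
        a[order[i]] = value
        found = first_feasible(i + 1, a)
        if found:
            return found
    return None

print(first_feasible())                        # {'main': 'meat', 'wine': 'white'}
```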


    An elaborate education of basic genetic programming using C++

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 3 2010
    Nirod C. Sahoo
    Abstract Evolutionary search is a global search method based on natural selection. In the engineering curriculum, these techniques are taught in courses like Evolutionary Computation, Engineering Optimization, etc. The genetic algorithm (GA) is popular among these algorithms. Genetic programming (GP), developed by John Koza, is a powerful extension of GA in which a chromosome/computer program (CP) is coded as a rooted, point-labeled tree with ordered branches. The search space is the space of all possible CPs (trees) consisting of functions and terminals appropriate to the problem domain. Like GA, GP uses crossover and mutation for evolution. Due to the tree-structured coding of individuals, the initial population generation, the use of genetic operators, and tree decoding for fitness evaluations demand careful computer programming. This article describes the programming steps of a GP implementation (using the C++ language) for students' easy understanding, with pseudocode for each step. Two application examples are also illustrated. © 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ 18: 434-448, 2010; View this article online at wileyonlinelibrary.com; DOI 10.1002/cae.20165 [source]
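
    The article codes these steps in C++; the core representation is compact enough to sketch here. Below, a random function/terminal tree is grown and evaluated recursively. The function and terminal sets are toy choices, and crossover, mutation and the evolution loop are omitted for brevity.

```python
# GP tree sketch: grow a random rooted, point-labeled tree and evaluate it.
import random

FUNCTIONS = {"+": 2, "-": 2, "*": 2}      # function set with arities
TERMINALS = ["x", 1.0, 2.0]               # terminal set

def grow(depth, rng):
    """Nested lists [op, child, ...] for functions; bare values for terminals."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(TERMINALS)
    op = rng.choice(list(FUNCTIONS))
    return [op, *(grow(depth - 1, rng) for _ in range(FUNCTIONS[op]))]

def evaluate(tree, x):
    if not isinstance(tree, list):        # terminal: variable or constant
        return x if tree == "x" else tree
    args = [evaluate(child, x) for child in tree[1:]]
    return {"+": args[0] + args[1],
            "-": args[0] - args[1],
            "*": args[0] * args[1]}[tree[0]]

rng = random.Random(42)
tree = grow(depth=3, rng=rng)
print(tree, "->", evaluate(tree, x=2.0))
```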


    SIMD Optimization of Linear Expressions for Programmable Graphics Hardware

    COMPUTER GRAPHICS FORUM, Issue 4 2004
    Chandrajit Bajaj
    Abstract The increased programmability of graphics hardware allows efficient graphical processing unit (GPU) implementations of a wide range of general computations on commodity PCs. An important factor in such implementations is how to fully exploit the SIMD computing capacities offered by modern graphics processors. Linear expressions of the form y = Ax + b, where A is a matrix and x and b are vectors, constitute one of the most basic operations in many scientific computations. In this paper, we propose a SIMD code optimization technique that enables efficient shader code to be generated for evaluating linear expressions. It is shown that performance can be improved considerably by efficiently packing arithmetic operations into four-wide SIMD instructions through reordering of the operations in linear expressions. We demonstrate that the presented technique can be used effectively for programming both vertex and pixel shaders for a variety of mathematical applications, including integrating differential equations and solving a sparse linear system of equations using iterative methods. [source]


    Multiobjective Optimization of Concrete Frames by Simulated Annealing

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 8 2008
    Ignacio Paya
    The evaluation of solutions follows the Spanish Code for structural concrete. The methodology was applied to a symmetrical building frame with two bays and four floors. This example has 77 design variables. Pareto results of the MOSA (multi-objective simulated annealing) algorithm indicate that more practical, more constructable, more sustainable, and safer solutions than the lowest-cost solution are available at a cost increment acceptable in practice. Results Ns-SMOSA1 and Ns-SMOSA2 of the cost versus constructability Pareto front are finally recommended because they are especially good in terms of cost, constructability, and environmental impact. Further, the proposed methodology will help structural engineers to enhance their designs of building frames. [source]
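
    At the heart of any MOSA variant is the simulated-annealing acceptance rule: always accept improvements, accept worsenings with probability exp(-delta/T), and cool T geometrically. A scalarized single-objective sketch follows (the multi-objective version adds Pareto archiving on top); the cooling rate, step size and toy cost function are conventional placeholders.

```python
# Core simulated-annealing loop (scalarized; Pareto bookkeeping omitted).
import math
import random

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=5000, seed=0):
    rng = random.Random(seed)
    x, t, best = x0, t0, x0
    for _ in range(steps):
        cand = neighbor(x, rng)
        delta = cost(cand) - cost(x)
        # accept improvements always, worsenings with Boltzmann probability
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if cost(x) < cost(best):
            best = x
        t *= cooling                      # geometric cooling schedule
    return best

cost = lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x)   # toy multimodal cost
best = anneal(cost, neighbor=lambda x, r: x + r.gauss(0, 0.3), x0=0.0)
print(round(best, 3), round(cost(best), 3))
```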


    Comparison of Two Evolutionary Algorithms for Optimization of Bridge Deck Repairs

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 8 2006
    Hatem Elbehairy
    These decisions, however, represent complex optimization problems that traditional optimization techniques are often unable to solve. This article introduces an integrated model for bridge deck repairs with detailed life cycle costs of network-level and project-level decisions. Two evolutionary optimization techniques that are capable of handling large-size problems, namely Genetic Algorithms and Shuffled Frog Leaping, are then applied to the model to optimize maintenance and repair decisions. Results of both techniques are compared on case study problems with different numbers of bridges. Based on the results, the benefits of the bridge deck management system are illustrated along with various strategies to improve optimization performance. [source]


    Bi-level Programming Formulation and Heuristic Solution Approach for Dynamic Traffic Signal Optimization

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2006
    Dazhi Sun
    Conventional methods of signal timing optimization assume a given traffic flow pattern, whereas traffic assignment is performed with the assumption of fixed signal timing. This study develops a bi-level programming formulation and heuristic solution approach (HSA) for dynamic traffic signal optimization in networks with time-dependent demand and stochastic route choice. In the bi-level programming model, the upper-level problem represents the decision-making behavior (signal control) of the system manager, while user travel behavior is represented at the lower level. The HSA consists of a Genetic Algorithm (GA) and a Cell Transmission Simulation (CTS) based Incremental Logit Assignment (ILA) procedure. The GA is used to seek the upper-level signal control variables. ILA is developed to find the user-optimal flow pattern at the lower level, and CTS is implemented to propagate traffic and collect real-time traffic information. The performance of the HSA is investigated in numerical applications in a sample network. These applications compare the efficiency and quality of the global optima achieved by Elitist GA and Micro GA. Furthermore, the impact of different frequencies of updating information and different population sizes of GA on system performance is analyzed. [source]
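
    The lower-level ILA procedure loads demand in slices: at each increment, route costs are recomputed from current flows (here with the common BPR travel-time function) and the slice is split across routes by a logit choice model. A one-O-D-pair, two-route sketch with invented parameters:

```python
# Incremental logit assignment sketch (one O-D pair, two routes).
import math

def bpr(t0, flow, capacity):
    """BPR link travel time: t = t0 * (1 + 0.15 * (v/c)^4)."""
    return t0 * (1.0 + 0.15 * (flow / capacity) ** 4)

def incremental_logit(demand, routes, theta=0.5, increments=10):
    flows = [0.0] * len(routes)
    for _ in range(increments):
        costs = [bpr(t0, f, cap) for (t0, cap), f in zip(routes, flows)]
        denom = sum(math.exp(-theta * c) for c in costs)
        probs = [math.exp(-theta * c) / denom for c in costs]
        for k, p in enumerate(probs):      # load one demand slice by logit shares
            flows[k] += p * demand / increments
    return flows

routes = [(10.0, 600.0), (14.0, 900.0)]    # (free-flow time, capacity) per route
print(incremental_logit(demand=1200.0, routes=routes))
```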


    Multimode Project Scheduling Based on Particle Swarm Optimization

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 2 2006
    Hong Zhang
    This article introduces a methodology for solving the multimode resource-constrained project scheduling problem (MRCPSP) based on particle swarm optimization (PSO), which had not previously been utilized for this or other construction-related problems. The framework of the PSO-based methodology is developed. A particle representation formulation is proposed to represent the potential solution to the MRCPSP in terms of priority combination and mode combination for activities. Each particle-represented solution is checked for nonrenewable resource infeasibility, which is handled by adjusting the mode combination. The feasible particle-represented solution is transformed to a schedule through a serial generation scheme. Experimental analyses are presented to investigate the performance of the proposed methodology. [source]
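
    The canonical PSO update behind such methodologies moves each particle with a velocity that blends inertia, attraction to its own best position, and attraction to the swarm's best. The article maps particles onto discrete priority/mode combinations; the sketch below is the standard continuous form on a toy function, with conventional parameter values.

```python
# Canonical continuous PSO (minimization); parameters are common defaults.
import random

def pso(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                          # personal bests
    g = min(P, key=f)[:]                           # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            if f(X[i]) < f(P[i]):                  # update bests
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

print(pso(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, [(-5, 5), (-5, 5)]))
```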


    Hybrid Meta-Heuristic Algorithm for the Simultaneous Optimization of the O,D Trip Matrix Estimation

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 6 2004
    Antony Stathopoulos
    These include a genetic algorithm (GA), a simulated annealing (SA) algorithm, and a hybrid algorithm (GASA) based on the combination of GA and SA. The computational performance of the three algorithms is evaluated and compared by implementing them on a realistic urban road network. The results of the simulation tests demonstrate that SA and GASA produce a more accurate final solution than GA, whereas GASA shows a superior convergence rate, that is, faster improvement from the initial solution, in comparison to SA and GA. In addition, GASA produces a final solution that is more robust and less dependent on the initial demand pattern, in comparison to that obtained from a greedy search algorithm. [source]


    Dynamic Optimal Traffic Assignment and Signal Time Optimization Using Genetic Algorithms

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 4 2004
    H. R. Varia
    A simulation-based approach is employed for the case of multiple-origin, multiple-destination traffic flows. The artificial intelligence technique of genetic algorithms (GAs) is used to minimize the overall travel cost in the network, both with fixed signal timings and with optimized signal timings. The proposed method is applied to the example network and the results are discussed. It is concluded that GAs allow the relaxation of many of the assumptions that may be needed to solve the problem analytically by traditional methods. [source]