Search Space (search + space)
Selected Abstracts

HYBRID ACE: COMBINING SEARCH DIRECTIONS FOR HEURISTIC PLANNING
COMPUTATIONAL INTELLIGENCE, Issue 3 2005. Dimitris Vrakas
One of the most promising current trends in domain-independent AI planning is state-space heuristic planning. The planners in this category construct general but efficient heuristic functions, which are used as a guide to traverse the state space either in a forward or in a backward direction. Although specific problems may favor one direction or the other, there is no clear evidence why either should be generally preferred. This paper presents Hybrid-AcE, a domain-independent planning system that combines search in both directions, utilizing a complex criterion that monitors the progress of the search to switch between them. Hybrid-AcE embodies two powerful domain-independent heuristic functions, extending one of the AcE planning systems. Moreover, the system is equipped with a fact-ordering technique and two methods for problem simplification that limit the search space and guide the algorithm to the most promising states. The bi-directional system has been tested on a variety of problems adopted from the AIPS planning competitions with quite promising results. [source]
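To make the direction-switching idea concrete, here is a minimal Python sketch of a bidirectional greedy best-first search that swaps the active frontier when it stops making heuristic progress. The stall-counter rule, the `neighbors` callback, and the toy integer example are illustrative assumptions; the abstract does not disclose Hybrid-AcE's actual switching criterion.

```python
import heapq

def bidirectional_greedy(start, goal, neighbors, h, stall_limit=5):
    """Greedy best-first search run from both ends at once. A simple progress
    monitor picks which frontier to expand: when the active side stops
    improving its best heuristic value, the search switches direction."""
    frontier = {"fwd": [(h(start, goal), start)], "bwd": [(h(goal, start), goal)]}
    seen = {"fwd": {start}, "bwd": {goal}}
    best = {"fwd": h(start, goal), "bwd": h(goal, start)}
    stall = {"fwd": 0, "bwd": 0}
    side = "fwd"
    while frontier["fwd"] and frontier["bwd"]:
        if stall[side] > stall_limit:            # progress criterion: switch sides
            side = "bwd" if side == "fwd" else "fwd"
            stall[side] = 0
        val, state = heapq.heappop(frontier[side])
        other = "bwd" if side == "fwd" else "fwd"
        if state in seen[other]:                 # frontiers met: a plan exists
            return state
        if val < best[side]:
            best[side], stall[side] = val, 0     # progress was made
        else:
            stall[side] += 1
        target = goal if side == "fwd" else start
        for nxt in neighbors(state):
            if nxt not in seen[side]:
                seen[side].add(nxt)
                heapq.heappush(frontier[side], (h(nxt, target), nxt))
    return None

# Toy use: integer states, moves of -1/+1, |difference| as the heuristic.
meet = bidirectional_greedy(0, 12, lambda s: [s - 1, s + 1], lambda a, b: abs(a - b))
```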
Seamless Montage for Texturing Models
COMPUTER GRAPHICS FORUM, Issue 2 2010. Ran Gal
We present an automatic method to recover high-resolution texture over an object by mapping detailed photographs onto its surface. Such high-resolution detail often reveals inaccuracies in geometry and registration, as well as lighting variations and surface reflections. Simple image projection results in visible seams on the surface. We minimize such seams using a global optimization that assigns compatible texture to adjacent triangles. The key idea is to search not only combinatorially over the source images, but also over a set of local image transformations that compensate for geometric misalignment. This broad search space is traversed using a discrete labeling algorithm, aided by a coarse-to-fine strategy. Our approach significantly improves resilience to acquisition errors, thereby allowing simple and easy creation of textured models for use in computer graphics. [source]

Lazy Solid Texture Synthesis
COMPUTER GRAPHICS FORUM, Issue 4 2008. Yue Dong
Existing solid texture synthesis algorithms generate a full volume of color content from a set of 2D example images. We introduce a new algorithm with the unique ability to restrict synthesis to a subset of the voxels, while enforcing spatial determinism. This is especially useful when texturing objects, since only a thick layer around the surface needs to be synthesized. A major difficulty lies in reducing the dependency chain of neighborhood matching, so that each voxel only depends on a small number of other voxels. Our key idea is to synthesize a volume from a set of pre-computed 3D candidates, each being a triple of interleaved 2D neighborhoods. We present an efficient algorithm to carefully select, in a pre-process, only those candidates forming consistent triples. This significantly reduces the search space during subsequent synthesis. The result is a new parallel, spatially deterministic solid texture synthesis algorithm which runs efficiently on the GPU. Our approach generates high-resolution solid textures on surfaces within seconds. Memory usage and synthesis time only depend on the output textured surface area. The GPU implementation of our method rapidly synthesizes new textures for the surfaces that appear when objects are interactively broken or cut. [source]

Using GIS, Genetic Algorithms, and Visualization in Highway Development
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 6 2001. Manoj K. Jha
A model for highway development is presented which uses geographic information systems (GIS), genetic algorithms (GAs), and computer visualization (CV). GIS serves as a repository of geographic information and enables spatial manipulations and database management. GAs are used to optimize highway alignments in a complex search space. CV is a technique used to convey the characteristics of alternative solutions, which can be the basis of decisions. The proposed model implements GIS and GAs to find an optimized alignment based on the minimization of highway costs. CV is implemented to investigate the effects of intangible parameters, such as unusual land and environmental characteristics not considered in the optimization. Constrained optimization using GAs may be performed at subsequent stages, if necessary, using feedback received from CV. Implementation of the model in a real highway project from Maryland indicates that integration of GIS, GAs, and CV greatly enhances the highway development process. [source]

Augmentation of a nearest neighbour clustering algorithm with a partial supervision strategy for biomedical data classification
EXPERT SYSTEMS, Issue 1 2009. Sameh A. Salem
In this paper, a partial supervision strategy for a recently developed clustering algorithm, the nearest neighbour clustering algorithm (NNCA), is proposed. The proposed method (NNCA-PS) offers classification capability with a smaller amount of a priori knowledge, where a small number of data objects from the entire data set are used as labelled objects to guide the clustering process towards a better search space. Experimental results show that NNCA-PS gives promising results of 89% sensitivity at 95% specificity when used to segment retinal blood vessels, and a maximum classification accuracy of 99.5% with 97.2% average accuracy when applied to a breast cancer data set. Comparisons with other methods indicate the robustness of the proposed method in classification. Additionally, experiments in parallel environments indicate the suitability and scalability of NNCA-PS in handling larger data sets. [source]

A metagenetic algorithm for information filtering and collection from the World Wide Web
EXPERT SYSTEMS, Issue 2 2001. Z.N. Zacharis
This paper describes the implementation of evolutionary techniques for information filtering and collection from the World Wide Web. We consider the problem of building intelligent agents to facilitate a person's search for information on the Web. An intelligent agent has been developed that uses a metagenetic algorithm in order to collect and recommend Web pages that will be interesting to the user. The user's feedback on the agent's recommendations drives the learning process to adapt the user's profile to his/her interests. The software agent utilizes the metagenetic algorithm to explore the search space of user interests. Experimental results are presented to demonstrate the suitability of the metagenetic algorithm's approach on the Web. [source]
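The inner loop of such a profile-evolving agent can be sketched as follows. Everything here is invented for illustration (the keyword vocabulary, the liked-page feedback, the truncation-selection GA); in particular, the "meta" level of the metagenetic algorithm, a second GA that tunes the first one's parameters, is omitted.

```python
import random

# A toy profile is a weight vector over a fixed keyword vocabulary; fitness is
# (simulated) user feedback: how well the profile scores pages the user liked.
VOCAB = ["planning", "texture", "genetic", "fuzzy", "search"]
LIKED = [{"genetic", "search"}, {"fuzzy", "search"}]   # hypothetical feedback

def fitness(profile):
    # Score each liked page by the summed weights of the keywords it contains.
    return sum(sum(w for kw, w in zip(VOCAB, profile) if kw in page)
               for page in LIKED)

def evolve(pop_size=20, gens=30, mut=0.2):
    pop = [[random.random() for _ in VOCAB] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(VOCAB))    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:                # bounded random mutation
                i = random.randrange(len(VOCAB))
                child[i] = min(1.0, max(0.0, child[i] + random.uniform(-0.3, 0.3)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())   # weights should grow on "genetic", "fuzzy", "search"
```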
A sparse marker extension tree algorithm for selecting the best set of haplotype tagging single nucleotide polymorphisms
GENETIC EPIDEMIOLOGY, Issue 4 2005. Ke Hao
Single nucleotide polymorphisms (SNPs) play a central role in the identification of susceptibility genes for common diseases. Recent empirical studies on the human genome have revealed block-like structures, and each block contains a set of haplotype tagging SNPs (htSNPs) that capture a large fraction of the haplotype diversity. Herein, we present an innovative sparse marker extension tree (SMET) algorithm to select optimal htSNP set(s). SMET reduces the search space considerably (compared to a full enumeration strategy), and therefore improves computing efficiency. We tested this algorithm on several datasets at three different genomic scales: (1) gene-wide (NOS3, CRP, IL6, PPARA, and TNF), (2) region-wide (a Whitehead Institute inflammatory bowel disease dataset and a UK Graves' disease dataset), and (3) chromosome-wide (chromosome 22) levels. SMET offers geneticists greater flexibility in SNP tagging than lossless methods, with adjustable haplotype diversity coverage. In simulation studies, we found that (1) an initial sample size of 50 individuals (100 chromosomes) or more is needed for htSNP selection; (2) the SNP tagging strategy is considerably more efficient when the underlying block structure is taken into account; and (3) htSNP sets at 80–90% coverage are more cost-effective than the lossless sets in terms of relative power, relative risk ratio estimation, and genotyping effort. Our study suggests that the novel SMET algorithm is a valuable tool for association tests. Genet. Epidemiol. 29:336–352, 2005. © 2005 Wiley-Liss, Inc. [source]

Use of VFSA for resolution, sensitivity and uncertainty analysis in 1D DC resistivity and IP inversion
GEOPHYSICAL PROSPECTING, Issue 5 2003. Bimalendu B. Bhattacharya
We present results from the resolution and sensitivity analysis of 1D DC resistivity and IP sounding data using a non-linear inversion. The inversion scheme uses a theoretically correct Metropolis–Gibbs sampling technique and an approximate method using numerous models sampled by a global optimization algorithm called very fast simulated annealing (VFSA). VFSA has recently been found to be computationally efficient in several geophysical parameter estimation problems. Unlike conventional simulated annealing (SA), in VFSA the perturbations are generated from the model parameters according to a Cauchy-like distribution whose shape changes with each iteration. This results in an algorithm that converges much faster than standard SA. In the course of finding the optimal solution, VFSA samples several models from the search space. All these models can be used to obtain estimates of uncertainty in the derived solution. This method makes no assumptions about the shape of the a posteriori probability density function in the model space. Here, we carry out a VFSA-based sensitivity analysis with several synthetic and field sounding data sets for resistivity and IP. The resolution capability of the VFSA algorithm, as seen from the sensitivity analysis, is satisfactory. The interpretation of VES and IP sounding data by VFSA, incorporating resolution, sensitivity and uncertainty of layer parameters, would generally be more useful than conventional best-fit techniques. [source]
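The distinguishing ingredient of VFSA, temperature-dependent Cauchy-like perturbations, fits in a few lines. The generating function below is the widely used Ingber form; the geometric cooling schedule and the toy least-squares objective are simplifying assumptions rather than the paper's setup.

```python
import math, random

def vfsa_minimize(f, x0, lo, hi, t0=1.0, decay=0.99, iters=3000):
    """Very fast simulated annealing sketch: steps are drawn from a Cauchy-like
    distribution whose scale shrinks with temperature, so the search moves
    from wide exploration to fine local adjustment."""
    x, fx, t = list(x0), f(x0), t0
    for _ in range(iters):
        y = []
        for xi, l, h in zip(x, lo, hi):
            u = random.random()
            # Ingber-style generating function: heavy-tailed at high T.
            step = math.copysign(t * ((1 + 1 / t) ** abs(2 * u - 1) - 1), u - 0.5)
            y.append(min(h, max(l, xi + step * (h - l))))
        fy = f(y)
        if fy < fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy                  # Metropolis acceptance rule
        t *= decay                         # simplified geometric cooling
    return x, fx

# Toy use: recover two model parameters by least squares.
best, err = vfsa_minimize(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2,
                          [0.0, 0.0], [-10, -10], [10, 10])
```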
Proper Splitting of Interconnected Power Systems
IEEJ TRANSACTIONS ON ELECTRICAL AND ELECTRONIC ENGINEERING, Issue 2 2010. S. Najafi
Power system islanding is the last line of defense protecting power grids from wide-area blackouts. As a wide-area control action, power system splitting is a comprehensive decision-making problem that includes different subproblems. This paper introduces a novel approach for separating the entire power system into several stable islands at different loading levels. The proposed method combines both the dynamic and the static characteristics of the interconnected power network and determines proper splitting schemes. The presented algorithm searches for a proper islanding strategy on the boundary of the primarily determined coherent machines using the Krylov subspace method, and finds the proper splitting points by transferring some of the buses in one island to another island such that total load shedding is minimized. A spanning-tree-based depth-first search algorithm is used to find all possible combinations of transferred buses. The presented method reduces the huge initial search space of the islanding strategy by considering the dynamic characteristics of the integrated power system and restricting the search to the boundary network. The speed of the proposed algorithm is remarkably high, and it can be applied for islanding the power system in real time. The presented algorithm is applied to the IEEE 118-bus test system. Results show the robustness, effectiveness, and capability of the algorithm to determine a fast and accurate islanding strategy. Time-domain simulation of the islanding strategies confirms that all the islands specified by the proposed method are stable. Copyright © 2010 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc. [source]

Handling constraints using multiobjective optimization concepts
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 15 2004. Arturo Hernández Aguirre
In this paper, we propose a new constraint-handling technique for evolutionary algorithms which we call inverted-shrinkable PAES (IS-PAES). This approach combines the use of multiobjective optimization concepts with a mechanism that focuses the search effort onto specific areas of the feasible region by shrinking the constrained search space. IS-PAES also uses an adaptive grid to store the solutions found, but has a more efficient memory-management scheme than its ancestor (the Pareto archived evolution strategy for multiobjective optimization). The proposed approach is validated using several examples taken from the standard evolutionary and engineering optimization literature. Comparisons are provided with respect to the stochastic ranking method (one of the most competitive constraint-handling approaches used with evolutionary algorithms currently available) and with respect to four other multiobjective-based constraint-handling techniques. Copyright © 2004 John Wiley & Sons, Ltd. [source]
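A toy version of the space-shrinking mechanism might look like the following: periodically contract each variable's bounds toward the region spanned by the current elite solutions, keeping a safety margin. The contraction formula here is an illustrative stand-in, not the published IS-PAES update.

```python
def shrink_bounds(lo, hi, elite, rate=0.8):
    """Shrink each variable's interval toward the span covered by the elite
    solutions, keeping a margin (rate < 1 enlarges the elite span a little
    before clipping to the old bounds)."""
    new_lo, new_hi = [], []
    for i in range(len(lo)):
        vals = [e[i] for e in elite]
        center = (min(vals) + max(vals)) / 2
        half = max((max(vals) - min(vals)) / 2, 1e-9) / rate   # keep a margin
        new_lo.append(max(lo[i], center - half))
        new_hi.append(min(hi[i], center + half))
    return new_lo, new_hi

# Toy use: three elite points pull a wide box in around themselves.
elite = [[0.4, 2.1], [0.5, 1.9], [0.45, 2.0]]
print(shrink_bounds([0.0, 0.0], [10.0, 10.0], elite))
```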
Crack identification of a planar frame structure based on a synthetic artificial intelligence technique
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 1 2003. Mun-Bo Shim
It has been established that a crack has an important effect on the dynamic behaviour of a structure. This effect depends mainly on the location and depth of the crack. To identify the location and depth of a crack in a planar frame structure, this paper presents a method based on a synthetic artificial intelligence technique: an adaptive-network-based fuzzy inference system (ANFIS), solved via a hybrid learning algorithm (backpropagation gradient descent combined with the least-squares method), is unified with continuous evolutionary algorithms (CEAs), which efficiently solve single-objective optimization problems with a continuous function and continuous search space. With ANFIS and CEAs it is possible to formulate the inverse problem. ANFIS is used to obtain the input (crack location and depth) to output (structural eigenfrequencies) relation of the structural system. CEAs are used to identify the crack location and depth by minimizing the difference from the measured frequencies. We have tried this idea on 2D beam structures and the results are promising. Copyright © 2003 John Wiley & Sons, Ltd. [source]

A reduced-order simulated annealing approach for four-dimensional variational data assimilation in meteorology and oceanography
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2008. I. Hoteit
Four-dimensional variational data assimilation in meteorology and oceanography suffers from the presence of local minima in the cost function. These local minima arise when the system under study is strongly nonlinear. The number of local minima further increases dramatically with the length of the assimilation period and often renders the solution of the problem intractable. Global optimization methods are therefore needed to resolve this problem. However, the huge computational burden makes the application of these sophisticated techniques unfeasible for large variational data assimilation systems. In this study, a simulated annealing (SA) algorithm, complemented with an order reduction of the control vector, is used to tackle this problem. SA is a very powerful tool for combinatorial minimization in the presence of several local minima, at the cost of increased execution time. Order reduction is then used to reduce the dimension of the search space in order to speed up the convergence rate of the SA algorithm. This is achieved through a proper orthogonal decomposition. The new approach was implemented with a realistic eddy-permitting configuration of the Massachusetts Institute of Technology general circulation model (MITgcm) of the tropical Pacific Ocean. Numerical results indicate that the reduced-order SA approach was able to efficiently reduce the cost function with a reasonable number of function evaluations. Copyright © 2008 John Wiley & Sons, Ltd. [source]
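The order-reduction step pairs naturally with a short sketch: build a POD basis from snapshots of control vectors via an SVD, then anneal only the handful of basis coefficients. The random snapshots, the quadratic stand-in cost, and the plain Metropolis loop are assumptions for illustration; a real 4D-Var cost would involve model integrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Snapshots of plausible control vectors (columns); in 4D-Var these would come
# from model runs. The POD basis is the leading left singular vectors.
snapshots = rng.standard_normal((200, 30))        # state dim 200, 30 snapshots
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :5]                                  # keep 5 dominant modes

def cost(x):                                      # stand-in for the 4D-Var cost
    return float(np.sum((x - 1.0) ** 2))

# Anneal over 5 reduced coefficients instead of 200 raw components.
a = np.zeros(5)
best = cost(basis @ a)
t = 1.0
for _ in range(2000):
    cand = a + rng.standard_normal(5) * t         # perturb in reduced space
    c = cost(basis @ cand)                        # evaluate in full space
    if c < best or rng.random() < np.exp(-(c - best) / t):
        a, best = cand, c
    t *= 0.997                                    # cooling schedule
```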
Genetic fuzzy systems to evolve interaction strategies in multiagent systems
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 9 2007. Igor Walter
This article suggests an evolutionary approach to designing interaction strategies for multiagent systems, focusing on strategies modeled as fuzzy rule-based systems. The aim is to learn models, evolving databases and rule bases, to improve agent performance when playing in a competitive environment. In competitive situations, data for learning and tuning are rare, and rule bases must evolve jointly with the databases. We introduce an evolutionary algorithm whose operators use variable-length chromosomes, a hierarchical relationship among individuals through fitness, and a scheme that successively explores and exploits the search space along generations. Evolution of interaction strategies uncovers unknown and unexpected agent behaviors and allows a richer analysis of negotiation mechanisms and their role as a coordination protocol. An application concerning an electricity market illustrates the effectiveness of the approach. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 971–991, 2007. [source]

Learning cooperative linguistic fuzzy rules using the Best–Worst Ant System algorithm
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 4 2005. Jorge Casillas
Within the field of linguistic fuzzy modeling with fuzzy rule-based systems, the automatic derivation of linguistic fuzzy rules from numerical data is an important task. In the last few years, a large number of contributions based on techniques such as neural networks and genetic algorithms have been proposed to address this problem. In this article, we introduce a novel approach to the fuzzy rule learning problem with ant colony optimization (ACO) algorithms. To do so, this learning task is formulated as a combinatorial optimization problem. Our learning process is based on the COR methodology proposed in previous works, which provides a search space that allows us to obtain fuzzy models with a good interpretability–accuracy trade-off. A specific ACO-based algorithm, the Best–Worst Ant System, is used for this purpose due to the good performance it has shown when solving other optimization problems. We analyze the behavior of the proposed method and compare it to other learning methods and search techniques when solving two real-world applications. The obtained results lead us to remark on the good performance of our proposal in terms of interpretability, accuracy, and efficiency. © 2005 Wiley Periodicals, Inc. Int J Int Syst 20: 433–452, 2005. [source]

Wavelength selection with Tabu Search
JOURNAL OF CHEMOMETRICS, Issue 8-9 2003. J. A. Hageman
This paper introduces Tabu Search in analytical chemistry by applying it to wavelength selection. Tabu Search is a deterministic global optimization technique loosely based on concepts from artificial intelligence. Wavelength selection is a method which can be used for improving the quality of calibration models. Tabu Search uses basic, problem-specific operators to explore a search space, and memory to keep track of parts already visited. Several implementation aspects of wavelength selection with Tabu Search are discussed. Two ways of memorizing the search space are investigated: storing the actual solutions and storing the steps necessary to create them. Parameters associated with Tabu Search are configured with a Plackett–Burman design. In addition, two extension schemes for Tabu Search, intensification and diversification, have been implemented and are applied with good results. Finally, two implementations of wavelength selection with Tabu Search are tested: one which searches for a solution with a constant number of wavelengths and one with a variable number of wavelengths. Both implementations are compared with results obtained by wavelength selection methods based on simulated annealing (SA) and genetic algorithms (GAs). It is demonstrated with three real-world data sets that Tabu Search performs as well as SA and GAs and can be a valuable alternative to them. The improvements in predictive ability increased by a factor of 20 for data set 1 and by a factor of 2 for data sets 2 and 3. In addition, when the number of wavelengths in a solution is variable, measurements of the coverage of the search space show that the coverage is usually higher for Tabu Search than for SA and GAs. Copyright © 2003 John Wiley & Sons, Ltd. [source]
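A compact sketch of the constant-complexity variant (single-wavelength flips, a fixed-tenure tabu list, and an aspiration rule) is given below; the `error` callback stands in for a cross-validated calibration error, and the toy objective at the end is invented for demonstration.

```python
import random
from collections import deque

def tabu_wavelength_selection(n_wavelengths, error, iters=200, tenure=15):
    """Tabu search over binary inclusion masks. `error(mask)` should return a
    calibration error estimate (e.g. cross-validated RMSEP). Moves are
    single-wavelength flips; recently flipped indices are tabu unless they
    improve on the best solution found so far (aspiration criterion)."""
    mask = [random.random() < 0.5 for _ in range(n_wavelengths)]
    best_mask, best_err = mask[:], error(mask)
    tabu = deque(maxlen=tenure)                  # memory of recent moves
    for _ in range(iters):
        candidates = []
        for i in range(n_wavelengths):
            trial = mask[:]
            trial[i] = not trial[i]
            e = error(trial)
            if i not in tabu or e < best_err:    # aspiration overrides tabu
                candidates.append((e, i, trial))
        if not candidates:
            continue
        e, i, mask = min(candidates)             # best admissible neighbor
        tabu.append(i)
        if e < best_err:
            best_mask, best_err = mask[:], e
    return best_mask, best_err

# Toy error: pretend only the first 3 wavelengths are informative.
toy = lambda m: sum(1 for i, b in enumerate(m) if b != (i < 3))
print(tabu_wavelength_selection(10, toy))
```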
Stochastic mixed integer nonlinear programming using rank filter and ordinal optimization
AICHE JOURNAL, Issue 11 2009. Chengtao Wen
A rank filter algorithm is developed to cope with the computational difficulty of solving stochastic mixed integer nonlinear programming (SMINLP) problems. The proposed approximation method estimates the expected performance values, whose relative ranks form a subset of good solutions with high probability. Suboptimal solutions are obtained by searching the subset using the accurate performances. High computational efficiency is achieved because accurate performance evaluation is limited to a small subset of the search space. Three benchmark problems show that the rank filter algorithm can reduce computational expense by several orders of magnitude without significant loss of precision. The rank filter algorithm presents an efficient approach for solving large-scale SMINLP problems that are nonconvex, highly combinatorial, and strongly nonlinear. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]

Stochastic modeling of usage patterns in a web-based information system
JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 7 2002. Hui-Min Chen
Users move from one state (or task) to another in an information system's labyrinth as they try to accomplish their work, and the amount of time they spend in each state varies. This article uses continuous-time stochastic models, mainly based on semi-Markov chains, to derive user state transition patterns (both rates and probabilities) in a web-based information system. The methodology was demonstrated with 126,925 search sessions drawn from the transaction logs of the University of California's MELVYL® library catalog system (www.melvyl.ucop.edu). First, user sessions were categorized into six groups based on their similar use of the system. Second, using a three-layer hierarchical taxonomy of the system's web pages, user sessions in each usage group were transformed into sequences of states. All the usage groups but one have third-order sequential dependency in state transitions; the sole exception has fourth-order sequential dependency. The transition rates and transition probabilities of the semi-Markov model provide a background for interpreting user behavior probabilistically, at various levels of detail. Finally, the differences in derived usage patterns between usage groups were tested statistically. The test results showed that different groups have distinct patterns of system use. Knowledge of the extent of sequential dependency is beneficial because it allows one to predict a user's next move in a search space based on the past moves that have been made. It can also be used to help customize the design of the user interface to facilitate interaction. The groups CL6, labeled "knowledgeable and sophisticated usage", and CL7, labeled "unsophisticated usage", both had third-order sequential dependency and shared the same most frequently occurring search pattern: screen display, record display, screen display, record display. The group CL8, called "highly interactive use with good search results", had fourth-order sequential dependency, and its most frequently occurring pattern was the same as that of CL6 and CL7 with one more screen display action added. The group CL13, called "known-item searching", had third-order sequential dependency, and its most frequently occurring pattern was index access, search with retrievals, screen display, record display. The groups CL14, called "help-intensive searching", and CL18, called "relatively unsuccessful", both had third-order sequential dependency, and for both groups the most frequently occurring pattern was index access, search without retrievals, index access, and, again, search without retrievals. [source]
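Estimating the transition structure from session logs is straightforward to sketch. The snippet below fits a first-order chain from hypothetical state sequences; the study found third- and fourth-order dependencies, which would condition on tuples of previous states instead, and, being semi-Markov, also models the time spent in each state.

```python
from collections import Counter, defaultdict

# Each session is a sequence of interface states (hypothetical labels).
sessions = [
    ["index", "search_hits", "screen", "record"],
    ["index", "search_none", "index", "search_none"],
    ["screen", "record", "screen", "record"],
]

# Count observed transitions prev -> next across all sessions.
counts = defaultdict(Counter)
for s in sessions:
    for prev, nxt in zip(s, s[1:]):
        counts[prev][nxt] += 1

# Normalize counts into first-order transition probabilities.
probs = {state: {nxt: c / sum(ctr.values()) for nxt, c in ctr.items()}
         for state, ctr in counts.items()}
print(probs["screen"])   # {'record': 1.0} for these toy sessions
```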
An algorithm for term conflation based on tree structures
JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 3 2002. Irene Diaz
This work presents a new stemming algorithm. The algorithm stores the stemming information in tree structures. This storage allows us to enhance the performance of the algorithm by reducing the search space and the overall complexity. The final result of the stemming algorithm is a normalized concept, understanding this process as the automatic extraction of the generic form (or lexeme) of a selected term. [source]

Discrete search allocation game with false contacts
NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 1 2007. Ryusuke Hohzaki
This paper deals with a two-person zero-sum game called a search allocation game, in which a searcher and a target participate, taking account of false contacts. The searcher distributes his search effort in a search space in order to detect the target. The target, in turn, moves to avoid the searcher. As the payoff of the game, we take the cumulative amount of search effort weighted by the target distribution, which can be derived as an approximation of the detection probability of the target. The searcher's strategy is a plan for distributing search effort; the target's is a movement represented by a path or transition probabilities across the search space. In the search, there are false contacts caused by environmental noise, signal-processing noise, or real objects resembling true targets. When they occur, the searcher must take some time to investigate them, which interrupts the search for a while. There has been little research on search games with false contacts. In this paper, we formulate the game as a mathematical programming problem to obtain its equilibrium point. © 2006 Wiley Periodicals, Inc. Naval Research Logistics, 2007 [source]

Harmonic and refined Rayleigh–Ritz for the polynomial eigenvalue problem
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 1 2008. Michiel E. Hochstenbach
After reviewing the harmonic Rayleigh–Ritz approach for the standard and generalized eigenvalue problem, we discuss several extraction processes for subspace methods for the polynomial eigenvalue problem. We generalize the harmonic and refined Rayleigh–Ritz approaches, which lead to new ways to extract promising approximate eigenpairs from a search space. We give theoretical as well as numerical results for the methods. In addition, we study the convergence of the Jacobi–Davidson method for polynomial eigenvalue problems with exact and inexact linear solves and discuss several algorithmic details. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Verified global optimization with GloptLab
PROCEEDINGS IN APPLIED MATHEMATICS & MECHANICS, Issue 1 2007. Ferenc Domes
GloptLab is a testing and development platform for solving quadratic constraint satisfaction problems, written in MATLAB. All applied methods are rigorous, hence it is guaranteed that no feasible point is lost. Some emphasis is given to finding a bounded initial box containing all feasible points, in cases where other complete solvers rely on non-rigorous heuristics. The algorithms implemented in GloptLab are used to reduce the search space: scaling, constraint propagation, linear relaxations, strictly convex enclosures, conic methods, and branch and bound. From this method repertoire, custom-made strategies can be built using a user-friendly graphical interface. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
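Of the listed reduction methods, constraint propagation is the easiest to illustrate. The sketch below tightens a box for a single quadratic constraint x² + y ≤ c using ordinary floating point; GloptLab itself does this rigorously with outward rounding, which this toy deliberately omits.

```python
import math

def propagate(xlo, xhi, ylo, yhi, c):
    """Bound tightening for x**2 + y <= c: from y's lower bound we get
    x**2 <= c - ylo, shrinking x; from the smallest attainable x**2 we
    shrink y's upper bound. Iterate until the box stops changing."""
    changed = True
    while changed:
        changed = False
        r = c - ylo                        # x**2 must not exceed r
        if r < 0:
            return None                    # box is infeasible
        b = math.sqrt(r)
        nxlo, nxhi = max(xlo, -b), min(xhi, b)
        xmin_sq = 0.0 if nxlo <= 0 <= nxhi else min(nxlo**2, nxhi**2)
        nyhi = min(yhi, c - xmin_sq)
        if nyhi < ylo:
            return None
        if (nxlo, nxhi, nyhi) != (xlo, xhi, yhi):
            xlo, xhi, yhi, changed = nxlo, nxhi, nyhi, True
    return xlo, xhi, ylo, yhi

print(propagate(-10, 10, 3, 10, 4))   # x tightens to [-1, 1], y to [3, 4]
```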
Automatic design of conventional distillation column sequence by genetic algorithm
THE CANADIAN JOURNAL OF CHEMICAL ENGINEERING, Issue 3 2009. Ramin Bozorgmehry Boozarjomehry
Synthesis of the optimum conventional (with non-sharp separations) distillation column sequence (DCS) is a challenging problem in the field of chemical process design and optimization, due to its huge search space and combinatorial nature. In this paper, a novel procedure for the synthesis of the optimum conventional distillation column sequence is proposed. The proposed method is based on evolutionary algorithms. The main criterion used to screen alternative DCSs is the total annual cost (TAC). To estimate the TAC of each DCS alternative, all columns in the DCS are designed using short-cut methods. The performance of the proposed method and other alternatives is compared on the results obtained for four standard benchmark problems used by researchers working in this area. Based on this comparison, the proposed method outperforms existing methods and is also more flexible. [source]

Design and synthesis of separation process based on a hybrid method
ASIA-PACIFIC JOURNAL OF CHEMICAL ENGINEERING, Issue 6 2009. Chunshan Li
A new general hybrid methodology for separation process synthesis and design is proposed, which considers different separation technologies by integrating mathematical modeling and the Analytical Hierarchy Process (AHP) with heuristic approaches and thermodynamic insights. The methodology can provide suitable guidance for initial separation process design and energy saving. First, a general separation synthesis system based on thermodynamic insights is developed to select suitable separation techniques before sequencing, which reduces the complexity and size of the synthesis search space. Then, the pseudo-component concept is proposed and used to deal with azeotropes contained in the mixture, which widens the scope of application of the proposed methodology. The AHP method is used to derive a separation sequence via pairwise comparison matrices. Lastly, the separation of the pseudo-components is considered, with energy integration and a detailed process design. Application of the proposed methodology is highlighted through two industrial examples: one is the separation synthesis of a light-end refinery mixture; the other is an azeotropic system, a mixture of phenol, o-cresol, p-cresol, and m-cresol. Copyright © 2009 Curtin University of Technology and John Wiley & Sons, Ltd. [source]
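A stripped-down version of the evolutionary sequencing idea: encode one split decision per column, decode the chromosome into a binary tree of columns, and score it with a cost stub. The sketch assumes sharp splits and an invented `column_cost`, whereas the paper handles non-sharp separations and estimates TAC with short-cut column designs.

```python
import random

COMPONENTS = "ABCDE"   # hypothetical five-component feed, ordered by volatility

def column_cost(feed, split):
    # Crude stand-in for a short-cut TAC estimate of one column.
    return len(feed) * 1.0 + 0.5 * abs(len(feed) / 2 - split)

def sequence_cost(feed, genes, idx=0):
    """Decode a chromosome into a sequence of sharp splits (a binary tree of
    columns) in preorder; each gene picks one column's split point."""
    if len(feed) <= 1:
        return 0.0, idx
    split = genes[idx] % (len(feed) - 1) + 1
    c = column_cost(feed, split)
    left, idx = sequence_cost(feed[:split], genes, idx + 1)
    right, idx = sequence_cost(feed[split:], genes, idx)
    return c + left + right, idx

def ga(pop_size=30, gens=40):
    n = len(COMPONENTS) - 1                      # one split gene per column
    pop = [[random.randrange(10) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: sequence_cost(COMPONENTS, g)[0])
        keep = pop[: pop_size // 2]              # elitist truncation
        pop = keep + [[(g + random.choice([-1, 0, 1])) % 10
                       for g in random.choice(keep)]
                      for _ in range(pop_size - len(keep))]
    return pop[0], sequence_cost(COMPONENTS, pop[0])[0]

print(ga())   # best chromosome and its toy total annual cost
```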
Optimization of a Process Synthesis Superstructure Using an Ant Colony Algorithm
CHEMICAL ENGINEERING & TECHNOLOGY (CET), Issue 3 2008. B. Raeesi
The optimization of chemical syntheses based on superstructure modeling is a perfect way to achieve optimal plant design. However, the combinatorial optimization problem arising from this method is very difficult to solve, particularly for an entire plant. Relevant literature has focused on the use of mathematical programming approaches. Some research has also been conducted based on meta-heuristic algorithms. In this paper, two approaches are presented for optimizing a process synthesis superstructure. First, a mathematical formulation of the superstructure model is presented. Then, an ant colony algorithm is proposed for solving this nonlinear combinatorial problem. In order to ensure that all constraints are satisfied, an adaptive feasible bound is defined for each variable to limit the search space. Adaptation of these bounds is executed by the suggested bound-updating rule. Finally, the capability of the proposed algorithm is compared with the conventional branch and bound method in a case study. [source]

How neutral networks influence evolvability
COMPLEXITY, Issue 2 2001. Marc Ebner
Evolutionary algorithms apply the process of variation, reproduction, and selection to look for an individual capable of solving the task at hand. In order to improve the evolvability of a population, we propose to copy important characteristics of nature's search space. Desired characteristics for a genotype–phenotype mapping are described, and several highly redundant genotype–phenotype mappings are analyzed in the context of a population-based search. We show that evolvability, defined as the ability of random variations to sometimes produce improvement, is influenced by the existence of neutral networks in genotype space. Redundant mappings allow the population to spread along the network of neutral mutations, and the population is quickly able to recover after a change has occurred. The extent of the neutral networks affects the interconnectivity of the search space and thereby affects evolvability. © 2002 Wiley Periodicals, Inc. [source]

DYNAMIC SEARCH SPACE TRANSFORMATIONS FOR SOFTWARE TEST DATA GENERATION
COMPUTATIONAL INTELLIGENCE, Issue 1 2008. Ramón Sagarna
Among the tasks in software testing, test data generation is particularly difficult and costly. In recent years, several approaches that use metaheuristic search techniques to automatically obtain the test inputs have been proposed. Although work in this field is very active, little attention has been paid to the selection of an appropriate search space. The present work describes an alternative approach to this issue. More precisely, two approaches which employ an Estimation of Distribution Algorithm as the metaheuristic technique are explained. In both cases, different regions are considered in the search for the test inputs. Moreover, to depart from a region near the one containing the optimum, the definition of the initial search space incorporates static information extracted from the source code of the software under test. If this information is not enough to complete the definition, a grid search method is used. According to the results of the experiments conducted, this is a promising option that can be used to enhance the test data generation process. [source]
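The EDA side of this is easy to sketch with a univariate-marginal model: fit an independent normal per input variable to the elite samples, then resample the next generation from it. The branch-distance fitness and the wide initial distribution are illustrative assumptions; in the papers, static information from the source code would seed the initial region.

```python
import random, statistics

def branch_distance(x):
    """Toy fitness: we seek an input (a, b) taking the branch
    `if a == 2 * b and a > 10`; the distance is 0 only when it is taken."""
    a, b = x
    return abs(a - 2 * b) + max(0, 11 - a)

def umda(pop_size=60, elite=15, gens=50):
    # Univariate marginal EDA: independent normal per variable, refit to elites.
    mu, sd = [0.0, 0.0], [50.0, 50.0]       # wide initial search region
    for _ in range(gens):
        pop = [[random.gauss(m, s) for m, s in zip(mu, sd)]
               for _ in range(pop_size)]
        pop.sort(key=branch_distance)
        best = pop[:elite]
        mu = [statistics.mean(col) for col in zip(*best)]
        sd = [max(statistics.stdev(col), 0.5) for col in zip(*best)]  # keep spread
        if branch_distance(pop[0]) < 0.01:  # branch (almost) reached
            return pop[0]
    return pop[0]

print(umda())   # converges toward inputs with a ~ 2*b and a > 10
```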
MRMOGA: a new parallel multi-objective evolutionary algorithm based on the use of multiple resolutions
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2007. Antonio López Jaimes
In this paper, we introduce MRMOGA (Multiple Resolution Multi-Objective Genetic Algorithm), a new parallel multi-objective evolutionary algorithm which is based on an injection island approach. This approach is characterized by adopting an encoding of solutions which uses a different resolution for each island. This allows us to divide the decision variable space into well-defined overlapped regions to achieve an efficient use of multiple processors. This approach also guarantees that the processors only generate solutions within their assigned region. In order to assess the performance of our proposed approach, we compare it to a parallel version of an algorithm that is representative of the state of the art in the area, using standard test functions and performance measures reported in the specialized literature. Our results indicate that our proposed approach is a viable alternative for solving multi-objective optimization problems in parallel, particularly when dealing with large search spaces. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Large-scale topology optimization using preconditioned Krylov subspace methods with recycling
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 12 2007. Shun Wang
The computational bottleneck of topology optimization is the solution of a large number of linear systems arising in the finite element analysis. We propose fast iterative solvers for large three-dimensional topology optimization problems to address this problem. Since the linear systems in the sequence of optimization steps change slowly from one step to the next, we can significantly reduce the number of iterations and the runtime of the linear solver by recycling selected search spaces from previous linear systems. In addition, we introduce a MINRES (minimum residual method) version with recycling (and a short-term recurrence) to make recycling more efficient for symmetric problems. Furthermore, we discuss preconditioning to ensure fast convergence. We show that a proper rescaling of the linear systems reduces the huge condition numbers that typically occur in topology optimization to roughly those arising for a problem with constant density. We demonstrate the effectiveness of our solvers by solving a topology optimization problem with more than a million unknowns on a fast PC. Copyright © 2006 John Wiley & Sons, Ltd. [source]
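True subspace recycling modifies the Krylov method itself, but the cheapest form of information reuse across a slowly changing sequence, warm-starting each solve from the previous solution, already shows the effect and fits in a short sketch. The plain CG solver and the randomly perturbed SPD matrices below are illustrative assumptions, not the paper's recycling MINRES.

```python
import numpy as np

def cg(A, b, x0, tol=1e-8, maxit=500):
    """Plain conjugate gradients; returns the solution and iteration count."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(maxit):
        if np.sqrt(rs) < tol * np.linalg.norm(b):
            return x, k
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxit

rng = np.random.default_rng(1)
M = rng.standard_normal((80, 80))
A = M @ M.T + 80.0 * np.eye(80)              # SPD stiffness-like matrix
b = rng.standard_normal(80)
x, _ = cg(A, b, np.zeros(80))                # solve the first design's system
for step in range(3):
    A = A + 0.05 * np.diag(rng.random(80))   # densities drift slightly per step
    _, cold = cg(A, b, np.zeros(80))         # solve from scratch
    x, warm = cg(A, b, x)                    # reuse previous step's solution
    print(f"step {step}: cold {cold} iters, warm {warm} iters")
```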
An artificial beehive algorithm for continuous optimization
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 11 2009. Mario A. Muñoz
This paper presents an artificial beehive algorithm for optimization in continuous search spaces, based on a model of individual bee behavior. The algorithm defines a set of behavioral rules for each agent to determine which actions must be carried out. The algorithm also includes some adaptations not considered in the biological model, to improve the search for better solutions. To compare the performance of the algorithm with other swarm-based techniques, we conducted statistical analyses using the t-test. The comparison is done on several common benchmark functions. © 2009 Wiley Periodicals, Inc. [source]
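In the spirit of the above (though not reproducing its specific behavioral rules), a bee-style continuous optimizer can be sketched as follows: each food source is refined by one-dimensional perturbations relative to a random other source, and abandoned for a fresh scout position after repeated failures.

```python
import random

def bee_optimize(f, dim, lo, hi, n_bees=20, iters=200):
    """Generic bee-inspired search sketch: local refinement of food sources
    plus scouting after repeated failures (an abandonment rule)."""
    sources = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bees)]
    vals = [f(s) for s in sources]
    fails = [0] * n_bees
    for _ in range(iters):
        for i in range(n_bees):
            j = random.randrange(dim)
            k = random.choice([p for p in range(n_bees) if p != i])
            trial = sources[i][:]
            # perturb one dimension relative to a randomly chosen other source
            trial[j] += random.uniform(-1, 1) * (sources[i][j] - sources[k][j])
            trial[j] = min(hi, max(lo, trial[j]))
            v = f(trial)
            if v < vals[i]:
                sources[i], vals[i], fails[i] = trial, v, 0
            else:
                fails[i] += 1
                if fails[i] > 20:          # abandon the source and scout anew
                    sources[i] = [random.uniform(lo, hi) for _ in range(dim)]
                    vals[i], fails[i] = f(sources[i]), 0
    i = min(range(n_bees), key=vals.__getitem__)
    return sources[i], vals[i]

print(bee_optimize(lambda x: sum(v * v for v in x), dim=3, lo=-5, hi=5))
```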