Better Solutions
Selected Abstracts

An agent-based scheduling method enabling rescheduling with a trial-and-error approach
ELECTRICAL ENGINEERING IN JAPAN, Issue 1 2007
Hiroyasu Mitsui
Abstract: Scheduling optimization is an extremely difficult problem; therefore, many scheduling methods, such as linear programming or stochastic search, have been investigated in order to obtain better solutions close to the optimum. After obtaining a solution, scheduling managers may need to produce another schedule that reflects changes in requirements or resources. However, rescheduling problems become more difficult as they grow in scale. In this paper, we propose an agent-based rescheduling system using the linear programming approach. In our system, agents can autonomously reschedule on behalf of managers by repeated trial and error, balancing loads or changing the priority of resource allocation until a better solution for the requirement is obtained. In addition, managers can engage in trial and error with the help of agents, seeking a better solution by changing constraint conditions. © 2007 Wiley Periodicals, Inc. Electr Eng Jpn, 159(1): 26–38, 2007; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/eej.20434 [source]

Performance analysis of data scheduling algorithms for multi-item requests in multi-channel broadcast environments
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 4 2010
Kai Liu
Abstract: Nowadays, querying multiple dependent data items in a single request is common in many advanced mobile applications, such as traffic information enquiry services. In addition, multi-channel architectures are widely deployed in many data dissemination systems. In this paper, we extend a number of data-productivity-based scheduling algorithms and evaluate their performance in scheduling multi-item requests in multi-channel broadcast environments. We observe from the experimental results two performance problems that render these algorithms ineffective. Lastly, we discuss possible causes of these problems to give insights into the design of a better solution. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Pose Optimization of Serial Manipulators Using Knowledge of Their Velocity-Degenerate (Singular) Configurations
JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 5 2003
Scott B. Nokleby
This work investigates the exploitation of velocity-degenerate configurations to optimize the pose of either nonredundant or redundant serial manipulators so that they can sustain desired wrenches. An algorithm is developed that determines a desirable start point for the optimization of a serial manipulator's pose. The start-point algorithm (SPA) uses analytical expressions for the velocity-degenerate (singular) configurations of a serial manipulator to determine a pose best suited to sustain a desired wrench. Results for an example redundant serial manipulator are presented. The example results show that by using the SPA with the optimization routine, the resulting poses require less effort from the actuators than the poses obtained without the SPA. It is shown that when no constraint is imposed on the position of the end-effector, the SPA excels at providing a better solution in fewer iterations than running the optimization without it. © 2003 Wiley Periodicals, Inc. [source]
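
The effort criterion in this abstract is the standard static relation between joint torques and a sustained end-effector wrench, tau = J(q)^T w. A minimal sketch of least-effort pose selection for a toy planar 2R arm follows; the link lengths, force, and grid search are assumptions for illustration, not the paper's SPA:

```python
import numpy as np

# A toy planar 2R arm with assumed link lengths; only the static relation
# tau = J(q)^T f that the pose optimization works with is reproduced here.
L1, L2 = 1.0, 0.8               # link lengths (m), arbitrary example values
f_ext = np.array([0.0, -50.0])  # end-effector force to sustain (N)

def jacobian(q1, q2):
    """Positional Jacobian of a planar 2R arm."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

# Grid search for the least-effort pose; poses approaching the singular
# configuration q2 -> 0 can carry loads along the arm with little torque,
# which is why seeding the optimizer near velocity-degenerate poses helps.
best = min((np.linalg.norm(jacobian(q1, q2).T @ f_ext), q1, q2)
           for q1 in np.linspace(0.0, np.pi, 90)
           for q2 in np.linspace(0.05, np.pi - 0.05, 90))
print("min effort ||tau|| = %.3f at q = (%.2f, %.2f) rad" % best)
```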

Efficiency measure, modelling and estimation in combined array designs
APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 4 2003
Tak Mak
Abstract: In off-line quality control, the settings that minimize the variance of a quality characteristic are unknown and must be determined from an estimated dual response model of mean and variance. The present paper proposes a direct measure of the efficiency of any given design-estimation procedure for variance minimization. This not only facilitates the comparison of different design-estimation procedures, but may also provide a guideline for choosing a better solution when the estimated dual response model suggests multiple solutions. Motivated by the analysis of an industrial experiment on spray painting, the present paper also applies a class of link functions to model process variances in off-line quality control. For model fitting, a parametric distribution is employed in updating the variance estimates used in an iteratively weighted least squares procedure for mean estimation. In analysing combined array experiments, Engel and Huele (Technometrics, 1996; 39:365) used a log link to model process variances and considered an iteratively weighted least squares procedure leading to the pseudo-likelihood estimates of variances discussed in Carroll and Ruppert (Transformation and Weighting in Regression, Chapman & Hall: New York). Their method is a special case of the approach considered in this paper. It is seen for the spray paint data that the log link may not be satisfactory, and that the class of link functions considered here substantially improves the fit to process variances. This conclusion is reached with a suggested method of comparing 'empirical variances' with the 'theoretical variances' based on the assumed model. Copyright © 2003 John Wiley & Sons, Ltd. [source]
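
As a rough illustration of the iteratively weighted least squares scheme described above, the sketch below fits a mean model and a log-linked variance model on simulated data. The log-residual regression is a crude stand-in for the parametric pseudo-likelihood update, and all names and data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # mean model
Z = X                                                       # variance model (log link)
beta_true = np.array([1.0, 2.0, -1.0])
gamma_true = np.array([-1.0, 0.5, 0.3])
y = X @ beta_true + rng.normal(size=n) * np.exp(Z @ gamma_true / 2)

gamma = np.zeros(Z.shape[1])
for _ in range(20):
    w = np.exp(-Z @ gamma)                                  # weights 1 / sigma_i^2
    beta = np.linalg.lstsq(X * np.sqrt(w)[:, None], y * np.sqrt(w), rcond=None)[0]
    r2 = (y - X @ beta) ** 2
    # Crude variance update: regress log squared residuals on Z; the constant
    # E[log chi^2_1] bias is absorbed by the intercept in this sketch.
    gamma = np.linalg.lstsq(Z, np.log(r2 + 1e-12), rcond=None)[0]
print("beta  =", np.round(beta, 2))
print("gamma =", np.round(gamma, 2))
```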

An artificial beehive algorithm for continuous optimization
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 11 2009
Mario A. Muñoz
Abstract: This paper presents an artificial beehive algorithm for optimization in continuous search spaces, based on a model of individual bee behavior. The algorithm defines a set of behavioral rules for each agent to determine what kind of actions must be carried out. The proposed algorithm also includes some adaptations not considered in the biological model, to increase performance in the search for better solutions. To compare the performance of the algorithm with other swarm-based techniques, we conducted statistical analyses using the t-test. The comparison is done on several common benchmark functions. © 2009 Wiley Periodicals, Inc. [source]

Direct shipping logistic planning for a hub-and-spoke network with given discrete intershipment times
INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 1 2006
Libby Chong
Abstract: This paper proposes a heuristic procedure for scheduling and routing shipments in a hybrid hub-and-spoke network when a set of feasible discrete intershipment times is given. The heuristic may be used to assist in the cooperative operational planning of a physical goods network between shippers and a logistics service provider, or to assist shippers in making logistics outsourcing decisions. The objective is to minimise the transportation and inventory holding costs. It is shown through a set of problem instances that this heuristic provides better solutions than existing economic order quantity-based approaches. Computational results are presented and discussed. [source]
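
The economic-order-quantity baseline the heuristic is compared against reduces, for a single link, to picking the feasible intershipment time with the lowest per-period cost. A toy sketch with assumed numbers:

```python
import math

K, h, d = 400.0, 0.2, 50.0     # shipment cost, holding cost/unit/day, demand/day
feasible_T = [1, 2, 4, 7, 14]  # allowed intershipment times (days), assumed

def cost(T):
    return K / T + h * d * T / 2.0   # transport + average inventory holding

best_T = min(feasible_T, key=cost)
T_star = math.sqrt(2 * K / (h * d))  # unconstrained EOQ interval, for reference
print(best_T, round(cost(best_T), 2), round(T_star, 2))
```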

Addressing the scheduling of chemical supply chains under demand uncertainty
AICHE JOURNAL, Issue 11 2006
Gonzalo Guillén
Abstract: A multistage stochastic optimization model is presented to address the scheduling of supply chains with embedded multipurpose batch chemical plants under demand uncertainty. In order to overcome the numerical difficulties associated with the resulting large-scale stochastic mixed-integer linear programming (MILP) problem, an approximation strategy comprising two steps, based on the resolution of a set of deterministic and two-stage stochastic models, is presented. The performance of the proposed strategy regarding computation time and optimality gap is studied through comparison with other traditional approaches to optimization under uncertainty. Results indicate that the proposed strategy provides better solutions than stand-alone two-stage stochastic programming and two-stage shrinking-horizon algorithms for similar computational effort, and incurs much lower computation times than the rigorous multistage stochastic model. © 2006 American Institute of Chemical Engineers AIChE J, 2006 [source]

A branch-and-price-based large neighborhood search algorithm for the vehicle routing problem with time windows
NETWORKS: AN INTERNATIONAL JOURNAL, Issue 4 2009
Eric Prescott-Gagnon
Abstract: Given a fleet of vehicles assigned to a single depot, the vehicle routing problem with time windows (VRPTW) consists of determining a set of feasible vehicle routes to deliver goods to a set of customers while minimizing, first, the number of vehicles used and, second, the total distance traveled. A large number of heuristic approaches for the VRPTW have been proposed in the literature. In this article, we present a large neighborhood search algorithm that takes advantage of the power of branch-and-price, the leading methodology for the exact solution of the VRPTW. To ensure diversification during the search, this approach uses different procedures for defining the neighborhood explored at each iteration. Computational results on the Solomon and the Gehring and Homberger benchmark instances are reported. Compared to the best known methods, the proposed algorithm produces better solutions, especially on the largest instances, where the number of vehicles used is significantly reduced. © 2009 Wiley Periodicals, Inc. NETWORKS, 2009 [source]

RANDOM APPROXIMATED GREEDY SEARCH FOR FEATURE SUBSET SELECTION
ASIAN JOURNAL OF CONTROL, Issue 3 2004
Feng Gao
ABSTRACT: We propose a sequential approach called Random Approximated Greedy Search (RAGS) in this paper and apply it to feature subset selection for regression. It is an extension of the GRASP/Super-heuristics approach to complex stochastic combinatorial optimization problems in which performance estimation is very expensive. The key ideas of RAGS come from the methodology of Ordinal Optimization (OO): we soften the goal and define success as good enough rather than necessarily optimal. In this way, we can use a cruder estimation model and treat the performance estimation error as randomness, so it directly provides the random perturbations mandated by the GRASP/Super-heuristics approach while saving a great deal of computational effort. Through multiple independent runs of RAGS, we show that we obtain better solutions than standard greedy search for comparable computational effort. [source]
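
A condensed sketch of the randomized-greedy idea behind RAGS (the scoring function is a noisy placeholder, not the paper's estimator): each step ranks candidates under a crude estimate, picks uniformly among the best few, and a winner is taken over multiple independent runs:

```python
import random

def crude_score(subset):
    """Cheap, noisy stand-in for regression quality of a feature subset."""
    return -abs(len(subset) - 4) + random.gauss(0, 0.3)

def rags_run(features, k, rcl_size=3):
    chosen = []
    for _ in range(k):
        cands = sorted((f for f in features if f not in chosen),
                       key=lambda f: crude_score(chosen + [f]), reverse=True)
        chosen.append(random.choice(cands[:rcl_size]))  # randomized greedy pick
    return chosen

# Multiple independent runs; in practice the final winner would be re-ranked
# with the accurate (expensive) performance model rather than the crude one.
runs = [rags_run(list(range(10)), k=5) for _ in range(20)]
print(max(runs, key=crude_score))
```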

Intravascular catheter-related infections: a growing problem, the search for better solutions
CLINICAL MICROBIOLOGY AND INFECTION, Issue 5 2002
E. Bouza
No abstract is available for this article. [source]

Shaking table model test on Shanghai World Financial Center Tower
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 4 2007
Xilin Lu
Abstract: The 101-storey Shanghai World Financial Center Tower rises 492 m above ground, possibly making it the tallest building in the world when completed. Three parallel structural systems, a mega-frame structure, a reinforced concrete and braced steel services core, and outrigger trusses, are combined to resist vertical and lateral loads. The building can be classified as a vertically irregular structure owing to a number of stiffened and transfer stories. Complexities related to the structural system layout arise mainly in the design of the services core, mega-diagonals and outrigger trusses. According to the Chinese Code, the height of the building clearly exceeds the stipulated maximum of 190 m for a composite frame/reinforced concrete core building, and the height-to-width aspect ratio also exceeds the stipulated limit of 7 for seismic design intensity 7. A 1/50-scale model was made and tested on a shaking table under a series of one- and two-dimensional base excitations with gradually increasing acceleration amplitudes. This paper presents the dynamic characteristics, the seismic responses and the failure mechanism of the structure. The test results demonstrate that the structural system is a good solution for withstanding earthquakes: the inter-storey drift and the overall behaviour meet the requirements of the Chinese Design Code. Furthermore, weak positions under the seldom-occurred earthquake of seismic design intensity 8 are identified from the visible damage to the test model, and corresponding suggestions are proposed for the engineering design of the structure under extremely strong earthquakes. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Architecture design, performance analysis and VLSI implementation of a reconfigurable shared buffer for high-speed switch/router
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 2 2009
Ling Wu
Abstract: Modern switches and routers require massive storage space to buffer packets. This becomes more significant as link speed increases and switch size grows. From the memory technology perspective, while DRAM is a good choice to meet the capacity requirement, its access time causes problems for high-speed applications. On the other hand, though SRAM is faster, it is more costly and does not have high storage density. The SRAM/DRAM hybrid architecture provides a good solution to meet both capacity and speed requirements. From the switch design and network traffic perspective, to minimize packet loss, the buffering space allocated for each switch port is normally sized for the worst-case scenario, which is usually huge; under normal traffic load, the buffer utilization of such a configuration is very low. Therefore, we propose a reconfigurable buffer-sharing scheme that can dynamically adjust the buffering space for each port according to traffic patterns and buffer saturation status. The target is to achieve high performance and improve buffer utilization, while not placing excessive constraints on buffer speed. In this paper, we study the performance of the proposed buffer-sharing scheme through both a numerical model and extensive simulations under uniform and non-uniform traffic conditions. We also present the architecture design and VLSI implementation of the proposed reconfigurable shared buffer in a 0.18 µm CMOS technology. Our results show that the proposed architecture achieves high performance and provides much flexibility for high-speed packet switches to adapt to various traffic patterns. Furthermore, it can easily be integrated into the port controllers of modern switches and routers. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Call admission control in cellular networks: A reinforcement learning solution
INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 2 2004
Sidi-Mohammed Senouci
In this paper, we address the call admission control (CAC) problem in a cellular network that handles several classes of traffic with different resource requirements. The problem is formulated as a semi-Markov decision process (SMDP). We use a real-time reinforcement learning (RL) [neuro-dynamic programming (NDP)] algorithm to construct a dynamic call admission control policy. We show that the policies obtained using our TQ-CAC and NQ-CAC algorithms, which are two different implementations of the RL algorithm, provide a good solution and are able to earn significantly higher revenues than classical solutions such as the guard channel. A large number of experiments illustrates the robustness of our policies and shows how they improve quality of service (QoS) and reduce call-blocking probabilities of handoff calls even under variable traffic conditions. Copyright © 2004 John Wiley & Sons, Ltd. [source]
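
The classical guard-channel baseline mentioned above fits in a few lines (capacity and reserve values here are arbitrary): g of the C channels are held back for handoffs, so new calls are refused earlier than handoff calls. The RL policies replace this fixed threshold with a learned, class- and state-dependent admission rule.

```python
def admit(call_type, busy, C=30, g=3):
    """Guard-channel CAC: g of the C channels are reserved for handoffs."""
    if call_type == "handoff":
        return busy < C          # handoffs may use any free channel
    return busy < C - g          # new calls are blocked once C - g are busy

print(admit("new", 27), admit("handoff", 27))  # -> False True
```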

Approximation algorithms for general one-warehouse multi-retailer systems
NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 7 2009
Zuo-Jun Max Shen
Abstract: Logistical planning problems are complicated in practice because planners have to deal with the challenges of demand planning and supply replenishment, while taking into account (i) inventory perishability and storage charges, (ii) management of backlog and/or lost sales, and (iii) cost-saving opportunities due to economies of scale in order replenishment and transportation. It is therefore not surprising that many logistical planning problems are computationally difficult, and finding a good solution to these problems necessitates the development of many ad hoc algorithmic procedures to address various features of the planning problems. In this article, we identify simple conditions and structural properties associated with these logistical planning problems in which the warehouse is managed as a cross-docking facility. Despite the nonlinear cost structures in the problems, we show that a solution within ε of optimality can be obtained by solving a related piecewise-linear concave-cost multi-commodity network flow problem. An immediate consequence of this result is that certain classes of logistical planning problems can be approximated to within a factor of (1 + ε) in polynomial time. This significantly improves upon the results found in the literature for these classes of problems. We also show that the piecewise-linear concave-cost network flow problem can be approximated to within a logarithmic factor via a large-scale linear programming relaxation. We use polymatroidal constraints to capture the piecewise concavity of the cost functions. This gives rise to a unified and generic LP-based approach for a large class of complicated logistical planning problems. © 2009 Wiley Periodicals, Inc. Naval Research Logistics, 2009 [source]

A practical method for computing the largest M-eigenvalue of a fourth-order partially symmetric tensor
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 7 2009
Yiju Wang
Abstract: In this paper, we consider a bi-quadratic homogeneous polynomial optimization problem over two unit spheres arising in nonlinear elastic material analysis and in entanglement studies in quantum physics. The problem is equivalent to computing the largest M-eigenvalue of a fourth-order tensor. To solve the problem, we propose a practical method whose validity is guaranteed theoretically. To make the sequence generated by the method converge to a good solution of the problem, we also develop an initialization scheme. Numerical experiments show the effectiveness of the proposed method. Copyright © 2009 John Wiley & Sons, Ltd. [source]
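
To make the object concrete: for fixed y, the bi-quadratic form f(x, y) = sum_ijkl a_ijkl x_i y_j x_k y_l is an ordinary quadratic form in x (and vice versa), so alternating top-eigenvector updates climb monotonically to a local maximum. The sketch below is this generic alternating ascent on a random partially symmetric tensor, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.normal(size=(n, n, n, n))
# Impose the partial symmetry a_ijkl = a_kjil = a_ilkj by group averaging.
A = (A + A.transpose(2, 1, 0, 3) + A.transpose(0, 3, 2, 1)
       + A.transpose(2, 3, 0, 1)) / 4.0

def top_eigvec(S):
    w, V = np.linalg.eigh(S)
    return V[:, -1], w[-1]

x = y = np.ones(n) / np.sqrt(n)
for _ in range(100):
    My = np.einsum("ijkl,j,l->ik", A, y, y)   # f(x, y) = x^T M(y) x
    x, _ = top_eigvec((My + My.T) / 2)
    Nx = np.einsum("ijkl,i,k->jl", A, x, x)   # f(x, y) = y^T N(x) y
    y, lam = top_eigvec((Nx + Nx.T) / 2)
print("largest M-eigenvalue found (local):", round(float(lam), 4))
```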

On the Application of Inductive Machine Learning Tools to Geographical Analysis
GEOGRAPHICAL ANALYSIS, Issue 2 2000
Mark Gahegan
Inductive machine learning tools, such as neural networks and decision trees, offer alternative methods for classification, clustering, and pattern recognition that can, in theory, extend to the complex or "deep" data sets that pervade geography. By contrast, traditional statistical approaches may fail, due to issues of scalability and flexibility. This paper discusses the role of inductive machine learning as it relates to geographical analysis. The discussion is not based on comparative results or on mathematical description, but instead focuses on the often subtle ways in which the various inductive learning approaches differ operationally, describing (1) the manner in which the feature space is partitioned or clustered, (2) the search mechanisms employed to identify good solutions, and (3) the different biases that each technique imposes. The consequences arising from these issues, when considering complex geographic feature spaces, are then described in detail. The overall aim is to provide a foundation upon which reliable inductive analysis methods can be constructed, instead of depending on piecemeal or haphazard experimentation with the various operational criteria that inductive learning tools call for. Often, it would appear that these criteria are not well understood by practitioners in the geographic sphere, which can lead to difficulties in configuration and operation, and ultimately to poor performance. [source]

Portfolio management using value at risk: A comparison between genetic algorithms and particle swarm optimization
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 7 2009
V. A. F. Dallagnol
In this paper, we compare the application of particle swarm optimization (PSO) and genetic algorithms (GA) to portfolio management, in a constrained portfolio optimization problem where no short sales are allowed. The objective function to be minimized is the value at risk calculated using historical simulation, and several strategies for handling the constraints of the problem were implemented. The results of the experiments show that, generally speaking, both methods are capable of consistently finding good solutions quite close to the best solution found, in a reasonable amount of time. In addition, it is demonstrated statistically that the algorithms do not, on average, all consistently achieve the same best solution. PSO turned out to be faster than GA, both in terms of number of iterations and in terms of total running time. However, PSO appears to be much more sensitive to the initial position of the particles than GA. Tests were also made regarding the number of particles needed to solve the problem, and 50 particles/chromosomes seem to be enough for problems up to 20 assets. © 2009 Wiley Periodicals, Inc. [source]
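
The objective minimized above is simple to state on its own: value at risk under historical simulation is an empirical quantile of portfolio losses over past return scenarios. A sketch with stand-in data and equal weights; inside the PSO/GA, this quantity would be evaluated for each candidate weight vector:

```python
import numpy as np

rng = np.random.default_rng(2)
returns = rng.normal(0.0005, 0.01, size=(250, 5))  # 250 days x 5 assets, stand-in data
w = np.full(5, 0.2)                                # long-only weights summing to 1

losses = -(returns @ w)                 # per-scenario portfolio loss per unit value
var95 = np.quantile(losses, 0.95)       # 95% one-day VaR by historical simulation
print(f"95% historical-simulation VaR: {var95:.4f}")
```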

Hierarchical multiobjective routing in Multiprotocol Label Switching networks with two service classes: a heuristic solution
INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 3 2009
Rita Girão-Silva
Abstract: Modern multiservice network routing functionalities have to deal with multiple, heterogeneous and multifaceted Quality of Service (QoS) requirements. A heuristic approach devised to find "good" solutions to a hierarchical multiobjective alternative routing optimization problem in Multiprotocol Label Switching networks with two service classes (and different types of traffic flows in each class), namely QoS and Best Effort services, formulated within a hierarchical network-wide optimization framework, is presented. This heuristic solution is based on a bi-objective constrained shortest path model and is applied to a test network used in a benchmarking case study. An experimental study based on analytic and discrete-event simulation results is presented, allowing for an assessment of the quality of the results obtained with this new heuristic for various traffic matrices. A dynamic version of the routing method is also formulated and its performance on the same case-study network is analysed. [source]

A genetic algorithm and the Monte Carlo method for stochastic job-shop scheduling
INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 6 2003
Y. Yoshitomi
Abstract: This paper proposes a method for solving stochastic job-shop scheduling problems using a hybrid of a genetic algorithm for uncertain environments and the Monte Carlo method. First, the genetic algorithm for uncertain environments is applied to stochastic job-shop scheduling problems in which the processing times are treated as stochastic variables. The roulette strategy is adopted for selecting the optimum solution with the minimum expected makespan. Applying crossover based on Giffler and Thompson's algorithm produces two offspring that inherit the ancestors' characteristics, with the operation completion times averaged up to the parents' generation. Individuals appearing with very high frequency across all generations are selected as the good solutions. Second, the Monte Carlo method is used to find the approximately optimum solution among these good solutions. [source]

Stochastic mixed integer nonlinear programming using rank filter and ordinal optimization
AICHE JOURNAL, Issue 11 2009
Chengtao Wen
Abstract: A rank filter algorithm is developed to cope with the computational difficulty of solving stochastic mixed integer nonlinear programming (SMINLP) problems. The proposed approximation method estimates the expected performance values; the solutions ranked highest under these estimates form, with high probability, a subset containing good solutions. Suboptimal solutions are then obtained by searching this subset using accurate performance evaluations. High computational efficiency is achieved because the accurate evaluation is limited to a small subset of the search space. Three benchmark problems show that the rank filter algorithm can reduce computational expense by several orders of magnitude without significant loss of precision. The rank filter algorithm presents an efficient approach for solving large-scale SMINLP problems that are nonconvex, highly combinatorial, and strongly nonlinear. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]

A Genetic Algorithm Hybrid for Constructing Optimal Response Surface Designs
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 7 2004
David Drain
Abstract: Hybrid heuristic optimization methods can discover efficient experiment designs in situations where traditional designs cannot be applied, exchange methods are ineffective, and simple heuristics like simulated annealing fail to find good solutions. One such hybrid is GASA (genetic algorithm–simulated annealing), developed to take advantage of the exploratory power of the genetic algorithm while utilizing the local-optimum-exploiting properties of simulated annealing. The successful application of this method is demonstrated on a difficult design problem with multiple optimization criteria in an irregularly shaped design region. Copyright © 2004 John Wiley & Sons, Ltd. [source]
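
A compact sketch of the general GA/SA hybrid pattern (the specifics of the paper's GASA are not reproduced, and the objective is a placeholder): offspring produced by crossover and mutation replace a parent when better, or otherwise with a Metropolis probability that decays as the temperature cools:

```python
import math
import random

def fitness(x):                      # placeholder objective to minimize
    return sum(xi * xi for xi in x)

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(x, step=0.3):
    return [xi + random.gauss(0, step) for xi in x]

pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(20)]
T = 1.0
for _ in range(500):
    i, j = random.sample(range(len(pop)), 2)
    child = mutate(crossover(pop[i], pop[j]))
    delta = fitness(child) - fitness(pop[i])
    if delta < 0 or random.random() < math.exp(-delta / T):  # SA acceptance
        pop[i] = child
    T *= 0.99                                                # cooling schedule
print(round(min(fitness(x) for x in pop), 4))
```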

SNP Selection for Association Studies: Maximizing Power across SNP Choice and Study Size
ANNALS OF HUMAN GENETICS, Issue 6 2005
F. Pardi
Summary: Selection of single nucleotide polymorphisms (SNPs) is a problem of primary importance in association studies, and several approaches have been proposed. However, none provides a satisfying answer to the question of how many SNPs should be selected, and how this should depend on the pattern of linkage disequilibrium (LD) in the region under consideration. Moreover, SNP selection is usually treated as independent of deciding the sample size of the study. When resources are limited, however, there is a trade-off between the study size and the number of SNPs to genotype. We show that tuning the SNP density to the LD pattern can be achieved by looking for the best solution to this trade-off. Our approach consists of formulating SNP selection as an optimization problem: the objective is to maximize the power of the final association study whilst keeping the total cost below a given budget. We also propose two alternative algorithms for solving this optimization problem: a genetic algorithm and a hill-climbing search. These standard techniques efficiently find good solutions, even when the number of possible SNPs to choose from is large. We compare the performance of the two algorithms on different chromosomal regions and show that, as expected, the selected SNPs reflect the LD pattern: the optimal SNP density varies dramatically between chromosomal regions. [source]
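
A toy version of the hill-climbing search described above, with placeholder power and cost models (the authors' power calculation is not reproduced): neighbours toggle one SNP or perturb the sample size, and a move is accepted only if it raises power while staying within budget:

```python
import random

COST_PER_GENOTYPE, BUDGET, N_SNPS = 1.0, 50_000, 40

def power(snps, n):          # placeholder for the study's statistical power
    return (len(snps) * n) ** 0.5 / (1 + 0.1 * len(snps))

def cost(snps, n):
    return COST_PER_GENOTYPE * len(snps) * n

snps, n = set(random.sample(range(N_SNPS), 10)), 1000
for _ in range(5000):
    cand, m = set(snps), n
    if random.random() < 0.5:                 # toggle one SNP in or out
        cand.symmetric_difference_update({random.randrange(N_SNPS)})
    else:                                     # perturb the sample size
        m = max(100, n + random.choice([-50, 50]))
    if cand and cost(cand, m) <= BUDGET and power(cand, m) > power(snps, n):
        snps, n = cand, m
print(len(snps), n, round(power(snps, n), 2))
```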