Linear Program (linear + program)
Selected Abstracts

COALITIONS AMONG INTELLIGENT AGENTS: A TRACTABLE CASE
COMPUTATIONAL INTELLIGENCE, Issue 1 2006
M. V. Belmonte
Coalition formation is an important mechanism for cooperation in multiagent systems. In this paper we address the problem of coalition formation among self-interested agents in superadditive task-oriented domains. We assume that each agent has some "structure," i.e., that it can be described by the values taken by a set of m nonnegative attributes that represent the resources w each agent is endowed with. By defining the coalitional value as a function V of w, we prove a sufficient condition for the existence of a stable payment configuration, in the sense of the core, in terms of certain properties of V. We apply these ideas to a simple case that can be described by a linear program and show that it is possible to compute for it, in polynomial time, an optimal task allocation and a stable payment configuration. [source]

Two Parallel Computing Methods for Coupled Thermohydromechanical Problems
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 3 2000
B. A. Schrefler
Two different approaches are presented for the parallel implementation of computer programs for the solution of coupled thermohydromechanical problems. One is an asynchronous method in connection with staggered and monolithic solution procedures. The second one is a domain decomposition method making use of substructuring techniques and a Newton-Raphson procedure. The advantages of the proposed methods are illustrated by examples. Both methods are promising, but we actually have no comparison between the two because one works on a linear program with only two interacting fields and the other on a full nonlinear set of (multifield) equations. [source]

Inf-sup control of discontinuous piecewise affine systems
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 13 2009
J. Spjøtvold
Abstract This paper considers the worst-case optimal control of discontinuous piecewise affine (PWA) systems, which are subjected to constraints and disturbances. We seek to pre-compute, via dynamic programming, an explicit control law for these systems when a PWA cost function is utilized. One difficulty with this problem class is that, even for initial states for which the value function of the optimal control problem is finite, there might not exist a control law that attains the infimum. Hence, we propose a method that is guaranteed to obtain a sub-optimal solution, and where the degree of sub-optimality can be specified a priori. This is achieved by approximating the underlying sub-problems with a parametric piecewise linear program. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Integer programming solution approach for inventory-production-distribution problems with direct shipments
INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 3 2008
Miguel A. Lejeune
Abstract We construct an integrated multi-period inventory-production-distribution replenishment plan for three-stage supply chains. The supply chain maintains close relationships with a small group of suppliers, and the nature of the products (bulk, chemical, etc.) makes it more economical to rely upon a direct shipment, full-truckload distribution policy between supply chain nodes. In this paper, we formulate the problem as an integer linear program that proves challenging to solve due to the general integer variables associated with the distribution requirements. We propose new families of valid cover inequalities, and we derive a practical closed-form expression for generating them upon the determination of a single parameter. We study their performance by benchmarking several branch-and-bound and branch-and-cut approaches. Computational testing is performed using a large-scale planning problem faced by a North American company. [source]
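The closed-form family of cover inequalities is not spelled out in the abstract above. As a generic illustration of what a cover cut is, the sketch below (Python, with made-up weights and capacity, not data from the paper) greedily builds a cover C for a single knapsack-type constraint; the resulting valid inequality is the sum of x_j over C being at most |C| - 1.

```python
# Illustrative only: a classical knapsack cover inequality, not the
# closed-form family derived in the paper.
def cover_inequality(weights, capacity):
    """Greedily build an index set C with sum(weights[j] for j in C) > capacity.

    For binary variables x_j, the valid cut is  sum_{j in C} x_j <= len(C) - 1.
    """
    order = sorted(range(len(weights)), key=lambda j: weights[j], reverse=True)
    cover, total = [], 0
    for j in order:
        cover.append(j)
        total += weights[j]
        if total > capacity:              # C is now a cover
            return cover
    return None                           # the constraint admits no cover

# Hypothetical shipment sizes and a truck capacity.
C = cover_inequality([6, 5, 4, 3], capacity=10)
print("cover:", C, "=> cut: sum of x_j over the cover <=", len(C) - 1)
```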
Branch Network and Modular Service Optimization for Community Banking
INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 5 2002
G. Ioannou
In the information society, what is clearly changing is the role and image of bank branches, which must satisfy customers' needs more efficiently. This paper develops an integrated approach to assist the bank's management in reconfiguring a branch network according to the dictates of the market. We are seeking the optimum number of branches and the optimum mix of services that each branch should offer in order to maximize the revenue-generating measures of the branches within a community. The problem is modeled using a linear program that accounts for community performance as a function of performance variables that are explained by a set of external and internal factors, which reflect community characteristics and modular branch banking parameters, respectively. The relationships between factor and performance variables are identified using regression analysis. An iterative algorithm allows convergence to a solution that provides the best configuration of branches after all possible branch mergers and modular branch adjustments are accomplished. [source]

Using assignment examples to infer category limits for the ELECTRE TRI method
JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 1 2002
An Ngo The
Abstract Given a finite set of alternatives, the sorting (or assignment) problem consists in the assignment of each alternative to one of the predefined categories. In this paper, we are interested in multiple criteria sorting problems and, more precisely, in the existing method ELECTRE TRI. This method requires the elicitation of preferential parameters (importance coefficients, thresholds, profiles, etc.) in order to construct the decision-maker's (DM) preference model. A direct elicitation of these parameters being sometimes difficult, Mousseau and Slowinski proposed an interactive aggregation-disaggregation approach that infers ELECTRE TRI parameters indirectly from holistic information, i.e., assignment examples. In this approach, the determination of the ELECTRE TRI parameters that best restore the assignment examples is formulated through a non-linear optimization program. Also in this direction, Mousseau et al. considered the subproblem of determining the importance coefficients only (the thresholds and category limits being fixed). This subproblem leads to solving a linear program (rather than the non-linear one of the global inference model). We pursue the idea of a partial inference model by considering the complementary subproblem, which determines the category limits (the importance coefficients being fixed). With some simplification, it also leads to solving a linear program. Together with the result of Mousseau et al., we have a pair of complementary models which can be combined in an interactive approach inferring the parameters of an ELECTRE TRI model from assignment examples. In each interaction, the DM can revise his/her assignment examples, give additional information, and choose which parameters to fix before the optimization phase restarts. Copyright © 2002 John Wiley & Sons, Ltd. [source]

A refined deterministic linear program for the network revenue management problem with customer choice behavior
NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 6 2008
Sumit Kunnumkal
Abstract We present a new deterministic linear program for the network revenue management problem with customer choice behavior. The novel aspect of our linear program is that it naturally generates bid prices that depend on how much time is left until the time of departure. Similar to the earlier linear program used by van Ryzin and Liu (2004), the optimal objective value of our linear program provides an upper bound on the optimal total expected revenue over the planning horizon. In addition, the percent gap between the optimal objective value of our linear program and the optimal total expected revenue diminishes in an asymptotic regime where the leg capacities and the number of time periods in the planning horizon increase linearly with the same rate. Computational experiments indicate that when compared with the linear program that appears in the existing literature, our linear program can provide tighter upper bounds, and the control policies that are based on our linear program can obtain higher total expected revenues. © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008 [source]
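The refined formulation itself is not reproduced in the abstract. For orientation, the sketch below solves the textbook deterministic LP for network revenue management under independent demands (a simpler relative of the choice-based linear programs discussed above) with SciPy, and reads static leg bid prices off the duals of the capacity constraints. The itineraries, fares, capacities, and demand forecasts are invented for the example.

```python
# A minimal textbook DLP for network revenue management (not the refined,
# time-dependent formulation of the paper): maximize fare revenue subject to
# leg capacities and expected demands; bid prices are the capacity duals.
import numpy as np
from scipy.optimize import linprog

fares  = np.array([100.0, 150.0, 220.0])   # one fare per itinerary
legs   = np.array([[1, 0, 1],              # leg-itinerary incidence matrix
                   [0, 1, 1]])
cap    = np.array([80.0, 60.0])            # seats available on each leg
demand = np.array([70.0, 50.0, 40.0])      # expected demand per itinerary

res = linprog(c=-fares,                    # linprog minimizes, so negate
              A_ub=legs, b_ub=cap,
              bounds=list(zip(np.zeros(3), demand)),
              method="highs")

print("upper bound on expected revenue:", -res.fun)
print("leg bid prices (capacity duals): ", -res.ineqlin.marginals)
```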
Risk-sensitive sizing of responsive facilities
NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 3 2008
Sergio Chayet
Abstract We develop a risk-sensitive strategic facility sizing model that makes use of readily obtainable data and addresses both capacity and responsiveness considerations. We focus on facilities whose original size cannot be adjusted over time and limits the total production equipment they can hold, which is added sequentially during a finite planning horizon. The model is parsimonious by design for compatibility with the nature of available data during early planning stages. We model demand via a univariate random variable with arbitrary forecast profiles for equipment expansion, and assume the supporting equipment additions are continuous and decided ex-post. Under constant absolute risk aversion, operating profits are the closed-form solution to a nontrivial linear program, thus characterizing the sizing decision via a single first-order condition. This solution has several desired features, including the optimal facility size being eventually decreasing in forecast uncertainty and decreasing in risk aversion, as well as being generally robust to demand forecast uncertainty and cost errors. We provide structural results and show that ignoring risk considerations can lead to poor facility sizing decisions that deteriorate with increased forecast uncertainty. Existing models ignore risk considerations and assume the facility size can be adjusted over time, effectively shortening the planning horizon. Our main contribution is in addressing the problem that arises when that assumption is relaxed and, as a result, risk sensitivity and the challenges introduced by longer planning horizons and higher uncertainty must be considered. Finally, we derive accurate spreadsheet-implementable approximations to the optimal solution, which make this model a practical capacity planning tool. © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008 [source]

A simple algorithm that proves half-integrality of bidirected network programming
NETWORKS: AN INTERNATIONAL JOURNAL, Issue 1 2006
Ethan D. Bolker
Abstract In a bidirected graph, each end of each edge is independently oriented. We show how to express any column of the incidence matrix as a half-integral linear combination of any column basis, through a simplification, based on an idea of Bolker, of a combinatorial algorithm of Appa and Kotnyek. Corollaries are that the inverse of each nonsingular square submatrix has entries 0, ±1/2, and ±1, and that a bidirected integral linear program has half-integral solutions. © 2006 Wiley Periodicals, Inc. NETWORKS, Vol. 48(1), 36-38, 2006 [source]
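The half-integrality corollary is easy to check numerically on a toy instance. The sketch below builds the incidence matrix of a hypothetical two-vertex bidirected graph (one ordinary directed edge and one edge with both ends oriented inward, an example invented here rather than taken from the paper), inverts it, and confirms that every entry is 0, ±1/2, or ±1.

```python
# Quick numerical check of the half-integrality corollary on a tiny
# bidirected incidence matrix. Rows are the two vertices; column 1 is an
# ordinary edge directed v1 -> v2, column 2 an edge with both ends entering.
import numpy as np

B = np.array([[ 1.0, 1.0],
              [-1.0, 1.0]])

Binv = np.linalg.inv(B)
print(Binv)   # every entry should be 0, +-1/2 or +-1
assert np.allclose(2 * Binv, np.round(2 * Binv)) and np.all(np.abs(Binv) <= 1.0)
```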
Minimizing beam-on time in cancer radiation treatment using multileaf collimators
NETWORKS: AN INTERNATIONAL JOURNAL, Issue 4 2004
Natashia Boland
Abstract In this article the modulation of intensity matrices arising in cancer radiation therapy using multileaf collimators (MLC) is investigated. It is shown that the problem is equivalent to decomposing a given integer matrix into a positive linear combination of (0, 1) matrices. These matrices, called shape matrices, must have the strict consecutive-1-property, together with another property derived from the technological restrictions of the MLC equipment. Various decompositions can be evaluated by their beam-on time (time during which radiation is applied to the patient) or the treatment time (beam-on time plus time for setups). We focus on the former, and develop a nonlinear mixed-integer programming formulation of the problem. This formulation can be decomposed to yield a column generation formulation: a linear program with a large number of variables that can be priced by solving a subproblem. We then develop a network model in which paths in the network correspond to feasible shape matrices. As a consequence, we deduce that the column generation subproblem can be solved as a shortest path problem. Furthermore, we are able to develop two alternative models of the problem as side-constrained network flow formulations, and so obtain our main theoretical result that the problem is solvable in polynomial time. Finally, a numerical comparison of our exact solutions with those of well-known heuristic methods shows that the beam-on time can be reduced by a considerable margin. © 2004 Wiley Periodicals, Inc. [source]
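As a minimal illustration of the decomposition view taken above, and not of the paper's column-generation or network-flow models, the sketch below splits a single intensity row into unit-weight (0, 1) segments that satisfy the consecutive-ones property; the row is arbitrary and the additional MLC-specific restrictions are ignored.

```python
# Decompose one intensity row into unit-weight segments with the
# consecutive-ones property (the classical single-row "sweep" idea).
# This ignores the extra MLC restrictions treated in the paper.
def sweep_decompose(row):
    """Return (left, right) index pairs of unit-weight open segments whose sum
    reproduces `row`; their number equals the row's minimum beam-on time."""
    segments, open_stack, prev = [], [], 0
    for j, v in enumerate(list(row) + [0]):
        rise = v - prev
        while rise > 0:                      # intensity goes up: open a segment
            open_stack.append(j)
            rise -= 1
        while rise < 0:                      # intensity drops: close a segment
            segments.append((open_stack.pop(), j - 1))
            rise += 1
        prev = v
    return segments

row = [0, 2, 3, 1, 2, 0]
segs = sweep_decompose(row)
print(segs, "-> beam-on time", len(segs))
```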
The Ring Star Problem: Polyhedral analysis and exact algorithm
NETWORKS: AN INTERNATIONAL JOURNAL, Issue 3 2004
Martine Labbé
Abstract In the Ring Star Problem, the aim is to locate a simple cycle through a subset of vertices of a graph with the objective of minimizing the sum of two costs: a ring cost proportional to the length of the cycle and an assignment cost from the vertices not in the cycle to their closest vertex on the cycle. The problem has several applications in telecommunications network design and in rapid transit systems planning. It is an extension of the classical location-allocation problem introduced in the early 1960s, and closely related versions have been recently studied by several authors. This article formulates the problem as a mixed-integer linear program and strengthens it with the introduction of several families of valid inequalities. These inequalities are shown to be facet-defining and are used to develop a branch-and-cut algorithm. Computational results show that instances involving up to 300 vertices can be solved optimally using the proposed methodology. © 2004 Wiley Periodicals, Inc. [source]

An efficient linear programming solver for optimal filter synthesis
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 9 2007
Jihong Ren
Abstract We consider the problem of l∞-optimal deconvolution arising in high data-rate communication between integrated circuits. The optimal deconvolver can be found by solving a linear program, for which we use Mehrotra's interior-point approach. The critical step is solving the linear system for the normal equations in each iteration. We show that this linear system has a special block structure that can be exploited to obtain a fast solution technique whose overall computational cost depends mostly on the number of design variables, and only linearly on the number of constraints. Numerical experiments validate our findings and illustrate the merits of our approach. Copyright © 2007 John Wiley & Sons, Ltd. [source]
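A generic l∞ deconvolution (equalizer design) problem of this kind can be written as a small linear program. The sketch below sets one up and solves it with SciPy's general-purpose HiGHS solver rather than the specialized interior-point method of the paper; the channel response, equalizer length, and target response are assumptions made for the example.

```python
# Minimax (l-infinity) deconvolution as a linear program: find filter taps w
# minimizing the worst-case deviation of (h * w) from a target response d.
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import linprog

h = np.array([1.0, 0.6, 0.3])              # assumed channel impulse response
n = 5                                       # equalizer length
m = len(h) + n - 1                          # length of the convolution h * w

# Convolution matrix C with C @ w == np.convolve(h, w).
C = toeplitz(np.r_[h, np.zeros(n - 1)], np.r_[h[:1], np.zeros(n - 1)])
d = np.zeros(m)
d[0] = 1.0                                  # target: undo the channel

# Variables [w, t]: minimize t subject to -t <= C @ w - d <= t.
A_ub = np.block([[ C, -np.ones((m, 1))],
                 [-C, -np.ones((m, 1))]])
b_ub = np.r_[d, -d]
c = np.r_[np.zeros(n), 1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)], method="highs")
print("taps:", np.round(res.x[:n], 4), " worst-case error:", round(res.x[-1], 4))
```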
On-off minimum-time control with limited fuel usage: near global optima via linear programming
OPTIMAL CONTROL APPLICATIONS AND METHODS, Issue 3 2006
Brian J. Driessen
Abstract A method for finding a global optimum to the on-off minimum-time control problem with limited fuel usage is presented. Each control can take on only three possible values: maximum, zero, or minimum. The simplex method for linear systems naturally yields nearly such a solution for the re-formulation presented herein, because the simplex method always produces an extreme point solution to the linear program. Numerical examples for the benchmark linear flexible system are presented. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Sampling subproblems of heterogeneous Max-Cut problems and approximation algorithms
RANDOM STRUCTURES AND ALGORITHMS, Issue 3 2008
Petros Drineas
Abstract Recent work in the analysis of randomized approximation algorithms for NP-hard optimization problems has involved approximating the solution to a problem by the solution of a related subproblem of constant size, where the subproblem is constructed by sampling elements of the original problem uniformly at random. In light of interest in problems with a heterogeneous structure, for which uniform sampling might be expected to yield suboptimal results, we investigate the use of nonuniform sampling probabilities. We develop and analyze an algorithm which uses a novel sampling method to obtain improved bounds for approximating the Max-Cut of a graph. In particular, we show that by judicious choice of sampling probabilities one can obtain error bounds that are superior to the ones obtained by uniform sampling, both for unweighted and weighted versions of Max-Cut. Of at least as much interest as the results we derive are the techniques we use. The first technique is a method to compute a compressed approximate decomposition of a matrix as the product of three smaller matrices, each of which has several appealing properties. The second technique is a method to approximate the feasibility or infeasibility of a large linear program by checking the feasibility or infeasibility of a nonuniformly randomly chosen subprogram of the original linear program. We expect that these and related techniques will prove fruitful for the future development of randomized approximation algorithms for problems whose input instances contain heterogeneities. © 2007 Wiley Periodicals, Inc. Random Struct. Alg., 2008 [source]

The Random-Facet simplex algorithm on combinatorial cubes
RANDOM STRUCTURES AND ALGORITHMS, Issue 3 2002
Bernd Gärtner
The RANDOM-FACET algorithm is a randomized variant of the simplex method which is known to solve any linear program with n variables and m constraints using an expected number of pivot steps which is subexponential in both n and m. This is the theoretically fastest simplex algorithm known to date if m ≈ n; it provably beats most of the classical deterministic variants, which require exp(Ω(n)) pivot steps in the worst case. RANDOM-FACET was independently discovered and analyzed ten years ago by Kalai as a variant of the primal simplex method, and by Matoušek, Sharir, and Welzl in a dual form. The essential ideas and results connected to RANDOM-FACET can be presented in a particularly simple and instructive way for the case of linear programs over combinatorial n-cubes. I derive an explicit upper bound of (1) on the expected number of pivot steps in this case, using a new technique of "fingerprinting" pivot steps. This bound also holds for generalized linear programs, similar flavors of which have been introduced and studied by several researchers. I then review an interesting class of generalized linear programs, due to Matoušek, showing that RANDOM-FACET may indeed require an expected number of pivot steps in the worst case. The main new result of the paper is a proof that all actual linear programs in Matoušek's class are solved by RANDOM-FACET with an expected polynomial number of pivot steps. This proof exploits a combinatorial property of linear programming which has only recently been discovered by Holt and Klee. The result establishes the first scenario in which an algorithm that works for generalized linear programs "recognizes" proper linear programs. Thus, despite Matoušek's worst-case result, the question remains open whether RANDOM-FACET (or any other simplex variant) is a polynomial-time algorithm for linear programming. Finally, I briefly discuss extensions of the combinatorial cube results to the general case. © 2002 Wiley Periodicals, Inc. Random Struct. Alg., 20:353-381, 2002 [source]

Optimal No-Arbitrage Bounds on S&P 500 Index Options and the Volatility Smile
THE JOURNAL OF FUTURES MARKETS, Issue 12 2001
Patrick J. Dennis
This article shows that the volatility smile is not necessarily inconsistent with the Black-Scholes analysis. Specifically, when transaction costs are present, the absence of arbitrage opportunities does not dictate that there exists a unique price for an option. Rather, there exists a range of prices within which the option's price may fall and still be consistent with the Black-Scholes arbitrage pricing argument. This article uses a linear program (LP) cast in a binomial framework to determine the smallest possible range of prices for Standard & Poor's 500 Index options that are consistent with no arbitrage in the presence of transaction costs. The LP method employs dynamic trading in the underlying and risk-free assets as well as fixed positions in other options that trade on the same underlying security. One-way transaction-cost levels on the index, inclusive of the bid-ask spread, would have to be below six basis points for deviations from Black-Scholes pricing to present an arbitrage opportunity. Monte Carlo simulations are employed to assess the hedging error induced by using a 12-period binomial model to approximate a continuous-time geometric Brownian motion. Once the risk caused by the hedging error is accounted for, transaction costs have to be well below three basis points for the arbitrage opportunity to be profitable two times out of five. This analysis indicates that market prices that deviate from those given by a constant-volatility option model, such as the Black-Scholes model, can be consistent with the absence of arbitrage in the presence of transaction costs. © 2001 John Wiley & Sons, Inc. Jrl Fut Mark 21:1151-1179, 2001 [source]
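As a toy version of the pricing idea, and under much stronger simplifications than the paper's multi-period model with traded options, the sketch below computes an upper no-arbitrage bound on a call price by super-replication in a one-period binomial tree, charging a proportional transaction cost only on the initial stock trade. All market parameters are hypothetical.

```python
# One-period binomial super-replication LP: an upper no-arbitrage bound on a
# call when a proportional cost k is paid on the initial stock purchase/sale.
import numpy as np
from scipy.optimize import linprog

S0, u, d, R, K, k = 100.0, 1.1, 0.9, 1.02, 100.0, 0.005
payoff = np.array([max(u * S0 - K, 0.0), max(d * S0 - K, 0.0)])

# Variables x = [shares bought, shares sold short, bond holding].
cost = np.array([S0 * (1 + k), -S0 * (1 - k), 1.0])       # initial outlay
# Terminal portfolio value must dominate the payoff in both states.
A_ub = np.array([[-u * S0,  u * S0, -R],
                 [-d * S0,  d * S0, -R]])
b_ub = -payoff
res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (None, None)], method="highs")
print("upper price bound with transaction costs:", round(res.fun, 4))
```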
ON LABOUR DEMAND AND EQUILIBRIA OF THE FIRM
THE MANCHESTER SCHOOL, Issue 5 2005
ROBERT L. VIENNEAU
This note considers a linear programming formulation of the problem of the firm. A neoclassical non-increasing labour demand function is derived from the solution of the linear program. Only a set of measure zero on this function, one or two points in the examples examined, provides equilibria of the representative firm. Equilibria of the representative firm are characterized by decisions of its managers that allow the same decisions to be made in successive periods. Hence, one can explain the quantity of labour that firms desire to hire either by a traditional neoclassical labour demand function or by equilibria of the firm, but generally not both. [source]

Some remarks on the LSOWA approach for obtaining OWA operator weights
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 12 2009
Byeong Seok Ahn
One of the key issues in the theory of ordered-weighted averaging (OWA) operators is the determination of their associated weights. To this end, numerous weighting methods have appeared in the literature, with their main difference occurring in the objective function used to determine the weights. A minimax disparity approach for obtaining OWA operator weights is one particular case, which involves the formulation and solution of a linear programming model subject to a given value of orness and the adjacent weight constraints. It obtains the OWA operator weights more easily than previously reported OWA weighting methods. However, this approach still requires solving linear programs with a conventional linear programming package. Here, we revisit the least-squared OWA method, which aims to produce weights that are as spread out as possible while strictly satisfying a predefined value of orness, and we show that it is equivalent to the minimax disparity approach. The proposed solution takes a closed form and thus can be easily used for simple calculations. © 2009 Wiley Periodicals, Inc. [source]
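One standard way to write the minimax disparity linear program described above is to minimize the largest gap between adjacent weights subject to the orness and normalization constraints. The sketch below does this with SciPy for an assumed n = 5 and orness 0.7; this is the LP-based route to which the closed-form LSOWA solution is shown to be equivalent.

```python
# Minimax disparity OWA weights: minimize delta = max_i |w_i - w_{i+1}|
# subject to sum(w) = 1 and a prescribed orness level.
import numpy as np
from scipy.optimize import linprog

n, orness = 5, 0.7
c = np.r_[np.zeros(n), 1.0]                 # variables: [w_1..w_n, delta]

# Equalities: weights sum to 1 and achieve the requested orness.
A_eq = np.array([np.r_[np.ones(n), 0.0],
                 np.r_[(n - 1 - np.arange(n)) / (n - 1), 0.0]])
b_eq = np.array([1.0, orness])

# Adjacent-weight constraints: +-(w_i - w_{i+1}) <= delta.
rows = []
for i in range(n - 1):
    for sign in (1.0, -1.0):
        r = np.zeros(n + 1)
        r[i], r[i + 1], r[-1] = sign, -sign, -1.0
        rows.append(r)
A_ub, b_ub = np.array(rows), np.zeros(2 * (n - 1))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1), method="highs")
print("OWA weights:", np.round(res.x[:n], 4))
```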
Solution of fuzzy matrix games: An application of the extension principle
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 8 2007
Shiang-Tai Liu
Conventional game theory is concerned with how rational individuals make decisions when they are faced with known payoffs. This article develops a solution method for the two-person zero-sum game where the payoffs are only approximately known and can be represented by fuzzy numbers. Because the payoffs are fuzzy, the value of the game is fuzzy as well. Based on the extension principle, a pair of two-level mathematical programs is formulated to obtain the upper bound and lower bound of the value of the game at possibility level α. By applying a dual formulation and a variable substitution technique, the pair of two-level mathematical programs is transformed into a pair of ordinary one-level linear programs so they can be manipulated. From different values of α, the membership function of the fuzzy value of the game is constructed. It is shown that the two players have the same fuzzy value of the game. An example illustrates the whole idea of a fuzzy matrix game. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22:891-903, 2007. [source]
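The ordinary one-level linear programs that the method arrives at are close relatives of the classical LP for a crisp matrix game. The sketch below solves such a crisp two-person zero-sum game with SciPy; the payoff matrix is an arbitrary example, and in the fuzzy setting of the paper this building block would, roughly speaking, be applied to the endpoints of the α-cuts of the fuzzy payoffs.

```python
# The classical LP for a crisp zero-sum matrix game: maximize the guaranteed
# value v of the row player's mixed strategy x.
import numpy as np
from scipy.optimize import linprog

A = np.array([[4.0, 1.0, 3.0],              # payoffs to the row player
              [2.0, 3.0, 1.0]])
m, n = A.shape

c = np.r_[np.zeros(m), -1.0]                # variables [x_1..x_m, v]; maximize v
A_ub = np.c_[-A.T, np.ones(n)]              # v - (x @ A)_j <= 0 for each column j
b_ub = np.zeros(n)
A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)
b_eq = [1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * m + [(None, None)], method="highs")
print("row strategy:", np.round(res.x[:m], 4), " value:", round(-res.fun, 4))
```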
A hybrid search combining interior point methods and metaheuristics for 0-1 programming
INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 6 2002
Agnès Plateau
Our search deals with methods hybridizing interior point processes and metaheuristics for solving 0-1 linear programs. This paper shows how metaheuristics can take advantage of a sequence of interior points generated by an interior point method. After introducing our field of work, we present our hybrid search, which generates a diversified population. Next, we explain the whole method combining the solutions encountered in the previous phase through a path relinking template. Computational experiments are reported on 0-1 multiconstraint knapsack problems. [source]

Approximation algorithms for finding low-degree subgraphs
NETWORKS: AN INTERNATIONAL JOURNAL, Issue 3 2004
Philip N. Klein
Abstract We give quasipolynomial-time approximation algorithms for designing networks with a minimum degree. Using our methods, one can design networks whose connectivity is specified by "proper" functions, a class of 0-1 functions indicating the number of edges crossing each cut. We also provide quasipolynomial-time approximation algorithms for finding two-edge-connected spanning subgraphs of approximately minimum degree of a given two-edge-connected graph, and a spanning tree (branching) of approximately minimum degree of a directed graph. The degree of the output network in all cases is guaranteed to be at most (1 + ε) times the optimal degree, plus an additive O(log_{1+ε} n) for any ε > 0. Our analysis indicates that the degree of an optimal subgraph for each of the problems above is well estimated by certain polynomially solvable linear programs. This suggests that the linear programs we describe could be useful in obtaining optimal solutions via branch and bound. © 2004 Wiley Periodicals, Inc. NETWORKS, Vol. 44(3), 203-215, 2004 [source]