Programming Approach (programming + approach)

[Distribution charts omitted: distribution by scientific domain, and distribution within Business, Economics, Finance and Accounting]

Kinds of Programming Approach

  • mathematical programming approach


Selected Abstracts


    A Mathematical Programming Approach for Procurement Using Activity Based Costing

    JOURNAL OF BUSINESS FINANCE & ACCOUNTING, Issue 1-2 2000
    Zeger Degraeve
    Activity Based Costing and Management are important topics in today's management accounting literature. While much attention has been paid in the Activity Based Costing literature to customer profitability analysis, process improvement and product design, far less attention has been paid to purchasing. In this paper we develop an Activity Based Costing approach for the determination of procurement strategies. Vendor selection using an Activity Based Costing approach means choosing the combination of suppliers for a given product group that minimizes the total costs associated with the purchasing strategy. To this end we develop a mathematical programming model in which the decisions involve the selection of vendors and the determination of order quantities. The system computes the total cost of ownership, thereby increasing the objectivity of the selection process and allowing various kinds of sensitivity analysis. [source]
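
    The following is a minimal sketch of the kind of vendor-selection model the abstract describes, assuming a single product group: binary variables choose suppliers (each carrying a fixed activity-based cost) and continuous variables set order quantities. The prices, activity costs, capacities and demand are invented for illustration, and scipy's generic MILP solver stands in for the authors' formulation.

```python
# Vendor selection with activity-based fixed costs (invented data):
# choose suppliers y_j (binary) and order quantities x_j to meet demand
# at minimum total cost of ownership = unit prices + fixed activity costs.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

price = np.array([10.0, 9.5, 11.0])          # unit purchase price per supplier
activity = np.array([400.0, 650.0, 300.0])   # fixed ordering/receiving cost if a supplier is used
capacity = np.array([60.0, 80.0, 50.0])      # maximum order quantity per supplier
demand = 100.0

# Decision vector z = [x1, x2, x3, y1, y2, y3].
c = np.concatenate([price, activity])
A_demand = np.concatenate([np.ones(3), np.zeros(3)]).reshape(1, -1)   # x1 + x2 + x3 >= demand
A_link = np.hstack([np.eye(3), -np.diag(capacity)])                   # x_j - capacity_j * y_j <= 0

constraints = [
    LinearConstraint(A_demand, lb=demand, ub=np.inf),
    LinearConstraint(A_link, lb=-np.inf, ub=0.0),
]
integrality = np.concatenate([np.zeros(3), np.ones(3)])               # y_j integer (binary via bounds)
bounds = Bounds(lb=np.zeros(6), ub=np.concatenate([capacity, np.ones(3)]))

res = milp(c=c, constraints=constraints, integrality=integrality, bounds=bounds)
print("order quantities:", res.x[:3].round(1))
print("suppliers used:  ", res.x[3:].round())
print("total cost of ownership:", round(res.fun, 1))
```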


    Performance comparison of MPI and OpenMP on shared memory multiprocessors

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 1 2006
    Géraud Krawezik
    Abstract When using a shared memory multiprocessor, the programmer faces the issue of selecting the portable programming model which will provide the best performance. Even if they restrict their choice to the standard programming environments (MPI and OpenMP), they have to select a programming approach from among MPI and the variety of OpenMP programming styles. To help the programmer in this decision, we compare MPI with three OpenMP programming styles (loop level, loop level with large parallel sections, SPMD) using a subset of the NAS benchmark (CG, MG, FT, LU), two dataset sizes (A and B), and two shared memory multiprocessors (IBM SP3 NightHawk II, SGI Origin 3800). We have developed the first SPMD OpenMP version of the NAS benchmark and gathered other OpenMP versions from independent sources (PBN, SDSC and RWCP). Experimental results demonstrate that OpenMP provides competitive performance compared with MPI for a large set of experimental conditions. Not surprisingly, the two best OpenMP versions are those requiring the strongest programming effort. MPI still provides the best performance under some conditions. We present breakdowns of the execution times and measurements of hardware performance counters to explain the performance differences. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    OpenMP-oriented applications for distributed shared memory architectures

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2004
    Ami Marowka
    Abstract The rapid rise of OpenMP as the preferred parallel programming paradigm for small-to-medium scale parallelism could slow unless OpenMP can show capabilities for becoming the model-of-choice for large scale high-performance parallel computing in the coming decade. The main stumbling block for the adaptation of OpenMP to distributed shared memory (DSM) machines, which are based on architectures like cc-NUMA, stems from the lack of capabilities for data placement among processors and threads for achieving data locality. The absence of such a mechanism causes remote memory accesses and inefficient cache memory use, both of which lead to poor performance. This paper presents a simple software programming approach called copy-inside-copy-back (CC) that exploits the data privatization mechanism of OpenMP for data placement and replacement. This technique enables one to distribute data manually without taking away control and flexibility from the programmer and is thus an alternative to the automatic and implicit approaches. Moreover, the CC approach improves on the OpenMP-SPMD style of programming, which makes the development process of an OpenMP application more structured and simpler. The CC technique was tested and analyzed using the NAS Parallel Benchmarks on SGI Origin 2000 multiprocessor machines. This study shows that OpenMP improves performance of coarse-grained parallelism, although a fast copy mechanism is essential. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    An agent-based scheduling method enabling rescheduling with trial-and-error approach

    ELECTRICAL ENGINEERING IN JAPAN, Issue 1 2007
    Hiroyasu Mitsui
    Abstract Scheduling optimization is an extremely difficult problem; therefore, many scheduling methods such as linear programming or stochastic searching have been investigated in order to obtain better solutions close to the optimum. After obtaining a certain solution, scheduling managers may need to reschedule another solution that corresponds to changes in requirements or resources. However, rescheduling problems become more difficult as they become larger in scale. In this paper, we propose an agent-based rescheduling system using the linear programming approach. In our system, agents can autonomously conduct rescheduling on behalf of managers by repeated trial and error in balancing loads or changing the priority of resource allocation until a better solution for the requirements is obtained. In addition, managers can engage in trial and error with the help of agents to seek a better solution by changing constraint conditions. © 2007 Wiley Periodicals, Inc. Electr Eng Jpn, 159(1): 26–38, 2007; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/eej.20434 [source]


    Linear models for minimizing misclassification costs in bankruptcy prediction

    INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 3 2001
    Sudhir Nanda
    This paper illustrates how a misclassification cost matrix can be incorporated into an evolutionary classification system for bankruptcy prediction. Most classification systems for predicting bankruptcy have attempted to minimize misclassifications. The minimizing misclassification approach assumes that Type I and Type II error costs for misclassifications are equal. There is evidence that these costs are not equal and incorporating costs into the classification systems can lead to better and more desirable results. In this paper, we use the principles of evolution to develop and test a genetic algorithm (GA) based approach that incorporates the asymmetric Type I and Type II error costs. Using simulated and real-life bankruptcy data, we compare the results of our proposed approach with three linear approaches: statistical linear discriminant analysis (LDA), a goal programming approach, and a GA-based classification approach that does not incorporate the asymmetric misclassification costs. Our results indicate that the proposed approach, incorporating Type I and Type II error costs, results in lower misclassification costs when compared to LDA and GA approaches that do not incorporate misclassification costs. Copyright © 2001 John Wiley & Sons, Ltd. [source]
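
    As a concrete illustration of the cost-sensitive idea (not the authors' system), the sketch below evolves the weights of a linear classifier with a very small genetic algorithm so as to minimize an asymmetric misclassification cost; the data, cost ratio, and GA settings are all invented.

```python
# Toy GA minimizing an asymmetric misclassification cost: Type I errors
# (a bankrupt firm classified as healthy) are penalized more heavily than
# Type II errors. Synthetic data stand in for real bankruptcy records.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
y = (X @ np.array([1.0, -2.0, 0.5]) + 0.3 * rng.normal(size=n) > 0).astype(int)  # 1 = bankrupt

COST_TYPE_I, COST_TYPE_II = 20.0, 1.0   # assumed asymmetric error costs

def cost(w):
    pred = (X @ w[:-1] + w[-1] > 0).astype(int)
    type_i = np.sum((y == 1) & (pred == 0))   # bankrupt classified as healthy
    type_ii = np.sum((y == 0) & (pred == 1))  # healthy classified as bankrupt
    return COST_TYPE_I * type_i + COST_TYPE_II * type_ii

pop = rng.normal(size=(50, d + 1))            # initial population of weight vectors
for generation in range(100):
    fitness = np.array([cost(w) for w in pop])
    parents = pop[np.argsort(fitness)[:25]]   # truncation selection
    children = parents[rng.integers(0, 25, 25)] + 0.2 * rng.normal(size=(25, d + 1))
    pop = np.vstack([parents, children])      # keep parents (elitism) plus mutated offspring

best = pop[np.argmin([cost(w) for w in pop])]
print("minimum asymmetric misclassification cost:", cost(best))
```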


    Shape optimization of piezoelectric devices using an enriched meshfree method

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 2 2009
    C. W. Liu
    Abstract We present an enriched reproducing kernel particle method for shape sensitivity analysis and shape optimization of two-dimensional electromechanical domains. This meshfree method incorporates enrichment functions for better representation of discontinuous electromechanical fields across internal boundaries. We use cubic splines for delineating the geometry of internal/external domain boundaries; and the nodal coordinates and slopes of these splines at their control points become the design parameters. This approach enables smooth manipulations of bi-material interfaces and external boundaries during the optimization process. It also enables the calculation of displacement and electric-potential field sensitivities with respect to the design parameters through direct differentiation, for which we adopt the classical material derivative approach. We verify this implementation of the sensitivity calculations against an exact solution to a variant of Lamé's problem and against finite-difference approximations. We follow a sequential quadratic programming approach to minimize the cost function; and demonstrate the utility of the overall technique through a model problem that involves the shape optimization of a piezoelectric fan. Copyright © 2008 John Wiley & Sons, Ltd. [source]
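
    The final optimization stage, sequential quadratic programming over the spline control parameters, can be shown in miniature. The sketch below uses scipy's SLSQP routine (a sequential least-squares QP method) on two fictitious shape parameters, with analytic gradients standing in for the sensitivities obtained by direct differentiation; the objective, constraint, and bounds are invented.

```python
# Toy SQP-style solve over two fictitious shape parameters.
import numpy as np
from scipy.optimize import minimize

def cost(p):            # invented stand-in for the device cost function
    return (p[0] - 1.0) ** 2 + 2.0 * (p[1] - 0.5) ** 2

def cost_grad(p):       # analytic sensitivities supplied to the optimizer
    return np.array([2.0 * (p[0] - 1.0), 4.0 * (p[1] - 0.5)])

constraints = [{"type": "ineq", "fun": lambda p: 2.0 - p[0] - p[1]}]  # p0 + p1 <= 2

res = minimize(cost, x0=[0.0, 0.0], jac=cost_grad, method="SLSQP",
               bounds=[(-1.0, 2.0), (-1.0, 2.0)], constraints=constraints)
print("optimal shape parameters:", res.x, "cost:", round(res.fun, 4))
```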


    Resource allocation in satellite networks: certainty equivalent approaches versus sensitivity estimation algorithms

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 1 2005
    Franco Davoli
    Abstract In this paper, we consider a resource allocation problem for a satellite network, where variations of fading conditions are added to those of traffic load. Since the capacity of the system is finite and divided into finite discrete portions, the resource allocation problem turns out to be a discrete stochastic programming one, which is typically NP-hard. We propose a new approach based on minimization over a discrete constraint set using an estimation of the gradient, obtained through a 'relaxed continuous extension' of the performance measure. The computation of the gradient estimation is based on the infinitesimal perturbation analysis technique, applied to a stochastic fluid model of the network. No closed form of the performance measure and no additional feedback concerning the state of the system are required, and only very mild assumptions are made on the probabilistic properties of the stochastic processes involved in the problem. This optimization approach is compared with a dynamic programming algorithm that maintains perfect knowledge about the state of the satellite network (traffic load statistics and fading levels). The comparison shows that the sensitivity estimation capability of the proposed algorithm makes it possible to maintain the optimal resource allocation under dynamic conditions, and that it can provide even better performance than that reached by employing the dynamic programming approach. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Portfolio selection on the Madrid Exchange: a compromise programming model

    INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 1 2003
    E. Ballestero
    As a contribution to portfolio selection analysis, we develop a compromise programming approach to the investor's utility optimum on the Madrid Online Market. This approach derives from linkages between utility functions under incomplete information, Yu's compromise set, and certain biased sets of portfolios on the efficient frontier. These linkages rely on recent theorems in multi-criteria literature, which allow us to approximate the investor's utility optimum between bounds which are determined either by linear programming models or graphic techniques. Returns on 104 stocks are computed from capital gains and cash flows, including dividends and rights offerings, over the period 1992–1997. The first step consists in normalizing the mean–variance efficient frontier, which is defined in terms of two indexes, profitability and safety. In the second step, interactive dialogues to elicit the investor's preferences for profitability and safety are described. In the third step, the utility optimum for each particular investor who pursues a buy-and-hold policy is bounded on the efficient frontier. From this step, a number of portfolios close to the investor's utility optimum are obtained. In the fourth step, compromise programming is used again to select one 'satisficing' portfolio from the set already bounded for each investor. This step is new with respect to previous papers in which compromise/utility models are employed. Computing processes are detailed in tables and figures which also display the numerical results. Extensions to active management policies are suggested. [source]
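
    The compromise-programming step itself is easy to illustrate. The sketch below assumes a handful of already-normalized (profitability, safety) points on the efficient frontier and invented preference weights, and picks the portfolios closest to the ideal point under the weighted L1 and L-infinity metrics that bound Yu's compromise set.

```python
# Compromise programming on a toy normalized efficient frontier (invented data).
import numpy as np

frontier = np.array([  # columns: profitability index, safety index, both in [0, 1]
    [0.95, 0.20],
    [0.80, 0.45],
    [0.65, 0.65],
    [0.45, 0.85],
    [0.25, 0.95],
])
w = np.array([0.6, 0.4])        # elicited preference weights for profitability and safety
ideal = np.array([1.0, 1.0])    # best achievable value of each index

dev = w * (ideal - frontier)    # weighted shortfalls from the ideal point
d1 = dev.sum(axis=1)            # L1 distance (one bound of the compromise set)
dinf = dev.max(axis=1)          # L-infinity distance (the other bound)

print("L1 compromise portfolio:        ", frontier[np.argmin(d1)])
print("L-infinity compromise portfolio:", frontier[np.argmin(dinf)])
```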


    Food consumption impacts of adherence to dietary norms in the United States: a quantitative assessment

    AGRICULTURAL ECONOMICS, Issue 2-3 2007
    C. S. Srinivasan
    Keywords: dietary norms; dietary adjustment; food consumption impacts; quadratic programming

    Abstract Promotion of adherence to healthy-eating norms has become an important element of nutrition policy in the United States and other developed countries. We assess the potential consumption impacts of adherence to a set of recommended dietary norms in the United States using a mathematical programming approach. We find that adherence to recommended dietary norms would involve significant changes in diets, with large reductions in the consumption of fats and oils along with large increases in the consumption of fruits, vegetables, and cereals. Compliance with the norms recommended by the World Health Organization for energy derived from sugar would involve sharp reductions in sugar intake. We also analyze how the required dietary adjustments vary across demographic groups. Most socio-demographic characteristics appear to have relatively little influence on the pattern of adjustment required to comply with the norms. Income levels have little effect on required dietary adjustments. Education is the only characteristic to have a significant influence on the magnitude of adjustments required. The least educated rather than the poorest have to bear the highest burden of adjustment. Our analysis suggests that fiscal measures like nutrient-based taxes may not be as regressive as commonly believed. Dissemination of healthy-eating norms to the less educated will be a key challenge for nutrition policy. [source]
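
    A toy version of the underlying quadratic-programming idea, with invented food groups, nutrient coefficients, and norms: find the smallest squared departure from an observed diet that satisfies the norms. scipy's SLSQP routine is used here simply as a convenient QP-capable solver.

```python
# Smallest quadratic adjustment of an observed diet that meets invented norms.
import numpy as np
from scipy.optimize import minimize

x0 = np.array([300.0, 150.0, 90.0, 60.0])   # observed intake: cereals, fruit/veg, fats, sugar (g/day)
# nutrient coefficients per gram of each group (rows: fat proxy, fruit/veg content, free-sugar proxy)
A = np.array([
    [0.02, 0.01, 0.90, 0.10],
    [0.00, 1.00, 0.00, 0.00],
    [0.05, 0.05, 0.00, 0.95],
])

def objective(x):                            # squared adjustment "burden"
    return np.sum((x - x0) ** 2)

cons = [
    {"type": "ineq", "fun": lambda x: (A @ x)[1] - 400.0},  # fruit and vegetables >= 400 g
    {"type": "ineq", "fun": lambda x: 70.0 - (A @ x)[0]},   # fat proxy <= 70 g
    {"type": "ineq", "fun": lambda x: 50.0 - (A @ x)[2]},   # free-sugar proxy <= 50 g
]

res = minimize(objective, x0=x0, method="SLSQP",
               bounds=[(0.0, None)] * 4, constraints=cons)
print("adjusted diet (g/day):", np.round(res.x, 1))
```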


    A property-based optimization of direct recycle networks and wastewater treatment processes

    AICHE JOURNAL, Issue 9 2009
    José María Ponce-Ortega
    Abstract This article presents a mathematical programming approach to optimize direct recycle-reuse networks together with wastewater treatment processes in order to satisfy a given set of environmental regulations. A disjunctive programming formulation is developed to optimize the recycle/reuse of process streams to units and the performance of wastewater treatment units. In addition to composition-based constraints, the formulation also incorporates in-plant property constraints as well as properties that impact the environment: toxicity, ThOD, pH, color, and odor. The MINLP model is used to minimize the total annual cost of the system, which includes the cost for the fresh sources, the piping cost for the process integration and the waste stream treatment cost. An example problem is used to show the application of the proposed model. The results show that the simultaneous optimization of a recycle network and waste treatment process yields significant savings with respect to a commonly-used sequential optimization strategy. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]
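
    A heavily simplified sketch of the direct-recycle part only (composition constraints, no property or treatment modelling, invented data): allocate process sources and fresh water to sinks so that fresh-water use is minimized while each sink's flow demand and maximum inlet contaminant load are respected.

```python
# Tiny source-sink recycle allocation LP (invented data, composition only).
import numpy as np
from scipy.optimize import linprog

src_flow = np.array([40.0, 60.0])        # available flow of process sources (t/h)
src_conc = np.array([80.0, 150.0])       # contaminant concentration (ppm); fresh water = 0 ppm
sink_flow = np.array([50.0, 70.0])       # flow required by each sink (t/h)
sink_max = np.array([30.0, 100.0])       # maximum inlet concentration at each sink (ppm)

# Variables: x[i, j] = flow from source i to sink j, f[j] = fresh water to sink j,
# ordered as [x00, x01, x10, x11, f0, f1]; minimize total fresh water f0 + f1.
c = np.array([0, 0, 0, 0, 1, 1], dtype=float)

A_eq, b_eq = [], []
for j in range(2):                        # flow balance at each sink
    row = np.zeros(6)
    row[[j, 2 + j, 4 + j]] = 1.0
    A_eq.append(row); b_eq.append(sink_flow[j])

A_ub, b_ub = [], []
for j in range(2):                        # contaminant load limit at each sink
    row = np.zeros(6)
    row[j], row[2 + j] = src_conc[0], src_conc[1]
    A_ub.append(row); b_ub.append(sink_max[j] * sink_flow[j])
for i in range(2):                        # availability of each source
    row = np.zeros(6)
    row[[2 * i, 2 * i + 1]] = 1.0
    A_ub.append(row); b_ub.append(src_flow[i])

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=np.array(A_eq), b_eq=b_eq)
print("minimum fresh water (t/h):", round(res.fun, 2))
```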


    Risk management for a global supply chain planning under uncertainty: Models and algorithms

    AICHE JOURNAL, Issue 4 2009
    Fengqi You
    Abstract In this article, we consider risk management for the mid-term planning of a global multi-product chemical supply chain under demand and freight rate uncertainty. A two-stage stochastic linear programming approach is proposed within a multi-period planning model that takes into account the production and inventory levels, transportation modes, times of shipments, and customer service levels. To investigate the potential improvement by using stochastic programming, we describe a simulation framework that relies on a rolling horizon approach. The studies suggest that at least 5% savings in the total real cost can be achieved compared with the deterministic case. In addition, an algorithm based on the multi-cut L-shaped method is proposed to effectively solve the resulting large scale industrial size problems. We also introduce risk management models by incorporating risk measures into the stochastic programming model, and multi-objective optimization schemes are implemented to establish the tradeoffs between cost and risk. To demonstrate the effectiveness of the proposed stochastic models and decomposition algorithms, a case study of a realistic global chemical supply chain problem is presented. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]
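
    A minimal sketch of a two-stage stochastic LP written out as its deterministic equivalent, the small analogue of what the multi-cut L-shaped method decomposes: a here-and-now production level is chosen before demand is known, and per-scenario recourse variables absorb shortfalls and surpluses. All costs, scenarios, and probabilities are invented.

```python
# Two-stage stochastic LP, deterministic-equivalent form (invented data).
import numpy as np
from scipy.optimize import linprog

demand = np.array([80.0, 100.0, 130.0])     # demand scenarios
prob = np.array([0.3, 0.4, 0.3])            # scenario probabilities
c_prod, c_buy, c_hold = 5.0, 9.0, 1.0       # production, spot-purchase, holding costs

# Variables: [x, buy_1..buy_3, hold_1..hold_3]; expected second-stage costs are
# weighted by the scenario probabilities.
c = np.concatenate([[c_prod], prob * c_buy, prob * c_hold])

# For each scenario s: x + buy_s - hold_s = demand_s
A_eq = np.zeros((3, 7))
A_eq[:, 0] = 1.0
A_eq[np.arange(3), 1 + np.arange(3)] = 1.0
A_eq[np.arange(3), 4 + np.arange(3)] = -1.0

res = linprog(c, A_eq=A_eq, b_eq=demand)
print("here-and-now production:", round(res.x[0], 1))
print("expected total cost:    ", round(res.fun, 1))
```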


    A dynamic interval goal programming approach to the regulation of a lake–river system

    JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 2 2001
    Raimo P. Hämäläinen
    Abstract This paper describes a model and a decision support tool for the regulation of a lake–river system using both goal sets and goal points for the water level over the year. The inflow forecast is updated periodically, which results in a series of dynamic rolling horizon goal programming problems. This involves heavy computation, and yet it can be successfully done with a spreadsheet program. The related decision support tool with a graphical user interface is called Interactive analysis of dynamic water regulation Strategies by Multi-criteria Optimization (ISMO). The Finnish Environment Institute (FEI) actively uses it to generate regulation policy alternatives considered from the different perspectives of the stakeholders. Copyright © 2001 John Wiley & Sons, Ltd. [source]
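
    One rolling-horizon step of the goal-programming idea can be sketched as a small LP with deviation variables, assuming invented inflow forecasts, goal interval, and release bounds; the actual tool (ISMO) would re-solve such a problem each time the inflow forecast is updated.

```python
# One rolling-horizon goal-programming step for a water-level target interval.
import numpy as np
from scipy.optimize import linprog

T = 6
inflow = np.array([3.0, 5.0, 8.0, 6.0, 2.0, 1.0])   # forecast inflows per period
goal_lo, goal_hi = 10.0, 12.0                        # goal interval for the water level
level0, r_max = 11.0, 7.0                            # initial level, maximum release

# Variables: releases r_t, levels l_t, over-deviations d+_t, under-deviations d-_t,
# ordered as [r_0.., l_0.., d+_0.., d-_0..]; minimize the total deviation from the goal interval.
n = 4 * T
c = np.concatenate([np.zeros(2 * T), np.ones(2 * T)])

A_eq = np.zeros((T, n)); b_eq = np.zeros(T)
for t in range(T):              # water balance: l_t = l_{t-1} + inflow_t - r_t
    A_eq[t, T + t] = 1.0
    A_eq[t, t] = 1.0
    if t > 0:
        A_eq[t, T + t - 1] = -1.0
    b_eq[t] = inflow[t] + (level0 if t == 0 else 0.0)

A_ub = np.zeros((2 * T, n)); b_ub = np.zeros(2 * T)
for t in range(T):              # l_t - d+_t <= goal_hi  and  -l_t - d-_t <= -goal_lo
    A_ub[t, T + t], A_ub[t, 2 * T + t], b_ub[t] = 1.0, -1.0, goal_hi
    A_ub[T + t, T + t], A_ub[T + t, 3 * T + t], b_ub[T + t] = -1.0, -1.0, -goal_lo

bounds = [(0.0, r_max)] * T + [(0.0, None)] * (3 * T)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("releases:", np.round(res.x[:T], 2))
print("levels:  ", np.round(res.x[T:2 * T], 2))
```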


    New product introduction against a predator: A bilevel mixed-integer programming approach

    NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 8 2009
    J. Cole Smith
    Abstract We consider a scenario with two firms determining which products to develop and introduce to the market. In this problem, there exists a finite set of potential products and market segments. Each market segment has a preference list of products and will buy its most preferred product among those available. The firms play a Stackelberg game in which the leader firm first introduces a set of products, and the follower responds with its own set of products. The leader's goal is to maximize its profit subject to a product introduction budget, assuming that the follower will attempt to minimize the leader's profit using a budget of its own. We formulate this problem as a multistage integer program amenable to decomposition techniques. Using this formulation, we develop three variations of an exact mathematical programming method for solving the multistage problem, along with a family of heuristic procedures for estimating the follower solution. The efficacy of our approaches is demonstrated on randomly generated test instances. This article contributes to the operations research literature a multistage algorithm that directly addresses difficulties posed by degeneracy, and contributes to the product variety literature an exact optimization algorithm for a novel competitive product introduction problem. © 2009 Wiley Periodicals, Inc. Naval Research Logistics, 2009 [source]
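
    For intuition about the bilevel structure (not the authors' decomposition method), the sketch below solves a tiny instance by brute force: the leader enumerates product sets within its budget, the follower responds with the set that minimizes the leader's profit, and each market segment buys its most preferred available product. Products, costs, margins, and preference lists are invented.

```python
# Brute-force Stackelberg product-introduction game on a tiny invented instance.
from itertools import combinations

products = ["A", "B", "C", "D"]
cost = {"A": 3, "B": 2, "C": 2, "D": 1}          # introduction cost per product
margin = {"A": 5, "B": 4, "C": 3, "D": 2}        # leader's margin when a segment buys its product
segments = [["A", "B", "C"], ["C", "B"], ["D", "A"], ["B", "D"]]  # preference lists
LEADER_BUDGET, FOLLOWER_BUDGET = 5, 3

def subsets(budget):
    """All product sets affordable within the given introduction budget."""
    for r in range(len(products) + 1):
        for s in combinations(products, r):
            if sum(cost[p] for p in s) <= budget:
                yield set(s)

def leader_profit(leader_set, follower_set):
    profit = 0
    for prefs in segments:                       # each segment buys its top available product
        for p in prefs:
            if p in leader_set:
                profit += margin[p]
                break
            if p in follower_set:
                break                            # sale captured by the follower
    return profit

best_value, best_set = max(
    ((min(leader_profit(L, F) for F in subsets(FOLLOWER_BUDGET)), L)
     for L in subsets(LEADER_BUDGET)),
    key=lambda t: t[0],
)
print("worst-case leader profit:", best_value, "with products", sorted(best_set))
```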


    A mathematical programming approach for improving the robustness of least sum of absolute deviations regression

    NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 4 2006
    Avi Giloni
    Abstract This paper discusses a novel application of mathematical programming techniques to a regression problem. While least squares regression techniques have been used for a long time, it is known that their robustness properties are not desirable. Specifically, the estimators are known to be too sensitive to data contamination. In this paper we examine regressions based on Least-sum of Absolute Deviations (LAD) and show that the robustness of the estimator can be improved significantly through a judicious choice of weights. The problem of finding optimum weights is formulated as a nonlinear mixed integer program, which is too difficult to solve exactly in general. We demonstrate that our problem is equivalent to a mathematical program with a single functional constraint resembling the knapsack problem and then solve it for a special case. We then generalize this solution to general regression designs. Furthermore, we provide an efficient algorithm to solve the general nonlinear, mixed integer programming problem when the number of predictors is small. We show the efficacy of the weighted LAD estimator using numerical examples. © 2006 Wiley Periodicals, Inc. Naval Research Logistics, 2006 [source]
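
    The unweighted building block of the paper's formulation, LAD regression itself, is a standard linear program. The sketch below poses it with positive and negative residual parts and solves it with scipy's linprog on invented, heavy-tailed data; the weighted, mixed-integer extension studied in the paper is not reproduced.

```python
# LAD regression as a linear program: minimize the sum of absolute residuals.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, p = 40, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])   # intercept + predictors
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + rng.standard_t(df=2, size=n)              # heavy-tailed noise

# Variables: [beta (free), e_plus (n), e_minus (n)], with
# X beta + e_plus - e_minus = y and objective sum(e_plus + e_minus).
k = p + 1
c = np.concatenate([np.zeros(k), np.ones(2 * n)])
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
bounds = [(None, None)] * k + [(0, None)] * (2 * n)

res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds)
print("LAD coefficient estimate:", np.round(res.x[:k], 3))
```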


    Economies of Scale and Scope, Contestability, Windfall Profits and Regulatory Risk

    THE MANCHESTER SCHOOL, Issue 6 2000
    Michael J. Ryan
    In this paper I introduce new results on economies of scale and scope and develop implications of these results for contestability and regulation. This is done using a goal programming approach which endogenizes regulatory frameworks in a multiperiod and multiregion monopolistic and oligopolistic analysis. This explicitly spatial approach leads to useful distinctions between industrial contestability and market contestability and a multiperiod contestability-based regulatory model. That model is then extended to a state preference framework with regulatory risk and windfall gains and losses. [source]


    Investment planning under uncertainty and flexibility: the case of a purchasable sales contract*

    AUSTRALIAN JOURNAL OF AGRICULTURAL & RESOURCE ECONOMICS, Issue 1 2008
    Oliver Musshoff
    Investment decisions are not only characterised by irreversibility and uncertainty but also by flexibility with regard to the timing of the investment. This paper describes how stochastic simulation can be successfully integrated into a backward recursive programming approach in the context of flexible investment planning. We apply this hybrid approach to a marketing question from primary production which can be viewed as an investment problem: should grain farmers purchase sales contracts which guarantee fixed product prices over the next 10 years? The model results support the conclusion from dynamic investment theory that it is essential to take simultaneous account of uncertainty and flexibility. [source]
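
    The backward-recursive part can be illustrated on its own with a stylized binomial price lattice: at each node the farmer either locks in the fixed contract price for the remaining years or keeps waiting. The price process, contract terms, and discount rate below are invented, and the stochastic-simulation layer of the hybrid approach is omitted.

```python
# Backward recursion for the option to purchase a fixed-price sales contract
# (stylized binomial lattice, invented parameters).
import numpy as np

T, p0 = 10, 100.0                     # decision horizon (years), initial market price
u, d, q, r = 1.15, 0.87, 0.5, 0.05    # up/down factors, up-probability, discount rate
contract_price = 105.0                # fixed price offered by the contract

def contract_value(price, years_left):
    # value of locking in the fixed price: annuity of the price margin over the remaining years
    annuity = sum(1.0 / (1 + r) ** t for t in range(1, years_left + 1))
    return (contract_price - price) * annuity

# V[t][i] = value at time t after i upward price moves; the option lapses at T.
V = [np.zeros(t + 1) for t in range(T + 1)]
for t in range(T - 1, -1, -1):
    for i in range(t + 1):
        price = p0 * u ** i * d ** (t - i)
        wait = (q * V[t + 1][i + 1] + (1 - q) * V[t + 1][i]) / (1 + r)
        V[t][i] = max(contract_value(price, T - t), wait)   # buy the contract now or keep waiting

print("value of the purchasable contract today:", round(V[0][0], 2))
print("buy immediately?", contract_value(p0, T) >= V[0][0])
```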


    369 Tflop/s molecular dynamics simulations on the petaflop hybrid supercomputer 'Roadrunner'

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 17 2009
    Timothy C. Germann
    Abstract We describe the implementation of a short-range parallel molecular dynamics (MD) code, SPaSM, on the heterogeneous general-purpose Roadrunner supercomputer. Each Roadrunner 'TriBlade' compute node consists of two AMD Opteron dual-core microprocessors and four IBM PowerXCell 8i enhanced Cell microprocessors (each consisting of one PPU and eight SPU cores), so that there are four MPI ranks per node, each with one Opteron and one Cell. We will briefly describe the Roadrunner architecture and some of the initial hybrid programming approaches that have been taken, focusing on the SPaSM application as a case study. An initial 'evolutionary' port, in which the existing legacy code runs with minor modifications on the Opterons and the Cells are only used to compute interatomic forces, achieves roughly a 2× speedup over the unaccelerated code. On the other hand, our 'revolutionary' implementation adopts a Cell-centric view, with data structures optimized for, and living on, the Cells. The Opterons are mainly used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), nearly 10× faster than the unaccelerated (Opteron-only) version. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Algorithms for the Weight Constrained Shortest Path Problem

    INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 1 2001
    Irina Dumitrescu
    Given a directed graph whose arcs each have an associated cost and an associated weight, the weight constrained shortest path problem (WCSPP) consists of finding a least-cost path between two specified nodes, such that the total weight along the path is less than a specified value. We will consider the case of the WCSPP defined on a graph without cycles. Even in this case the problem is NP-hard, unless all weights are equal or all costs are equal; however, pseudopolynomial time algorithms are known. The WCSPP applies to a number of real-world problems. Traditionally, dynamic programming approaches were most commonly used, but in recent times other methods have been developed, including exact approaches based on Lagrangean relaxation, and fully polynomial approximation schemes. We will review the area and present a new exact algorithm, based on scaling and rounding of weights. [source]
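
    A sketch of the classical dynamic-programming (label-setting) approach on an acyclic graph: propagate (cost, weight) labels in topological order, discard weight-infeasible extensions, and prune dominated labels. The small graph and weight limit are invented.

```python
# Label-based dynamic programming for the WCSPP on a small acyclic graph.
from collections import defaultdict

arcs = {                       # node -> list of (successor, cost, weight)
    "s": [("a", 1, 4), ("b", 3, 1)],
    "a": [("b", 1, 1), ("t", 5, 1)],
    "b": [("t", 2, 3)],
    "t": [],
}
topo_order = ["s", "a", "b", "t"]
WEIGHT_LIMIT = 5

def prune(labs):
    """Keep only non-dominated (cost, weight) labels."""
    nd = []
    for lab in sorted(labs):                  # increasing cost, then weight
        if not nd or lab[1] < nd[-1][1]:      # keep only if strictly lighter than all cheaper labels
            nd.append(lab)
    return nd

labels = defaultdict(list, {"s": [(0, 0)]})   # node -> list of (cost, weight) labels
for u in topo_order:
    labels[u] = prune(labels[u])
    for (v, c, w) in arcs[u]:
        for (cu, wu) in labels[u]:
            if wu + w <= WEIGHT_LIMIT:        # extend only weight-feasible labels
                labels[v].append((cu + c, wu + w))

print("best feasible cost s -> t:", min(c for (c, w) in labels["t"]))
```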


    An interactive fuzzy satisficing method for multiobjective stochastic linear programming problems using chance constrained conditions

    JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 3 2002
    Masatoshi Sakawa
    Abstract Two major approaches have been developed to deal with the randomness or ambiguity involved in mathematical programming problems: stochastic programming approaches and fuzzy programming approaches. In this paper, we focus on multiobjective linear programming problems with random variable coefficients in the objective functions and/or constraints. Using chance constrained programming techniques, the stochastic programming problems are transformed into deterministic ones. As a fusion of the stochastic and fuzzy approaches, after determining the fuzzy goals of the decision maker, an interactive fuzzy satisficing method is presented that derives a satisficing solution for the decision maker by updating the reference membership levels. Copyright © 2003 John Wiley & Sons, Ltd. [source]
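
    The chance-constrained transformation is easy to show for a single constraint with a normally distributed right-hand side: P(a'x ≤ b) ≥ α becomes a'x ≤ μ_b + Φ⁻¹(1−α)·σ_b, which is then an ordinary LP constraint. The sketch below uses invented data and does not reproduce the interactive fuzzy satisficing step of the paper.

```python
# Chance-constrained LP with a normally distributed capacity (invented data).
import numpy as np
from scipy.stats import norm
from scipy.optimize import linprog

c = np.array([-3.0, -2.0])              # maximize 3*x1 + 2*x2  ->  minimize -c'x
a = np.array([2.0, 1.0])                # resource usage coefficients
mu_b, sigma_b = 10.0, 1.5               # random capacity b ~ N(mu_b, sigma_b^2)
alpha = 0.95                            # required probability of constraint satisfaction

b_det = mu_b + norm.ppf(1.0 - alpha) * sigma_b   # deterministic-equivalent right-hand side
res = linprog(c, A_ub=a.reshape(1, -1), b_ub=[b_det], bounds=[(0, None)] * 2)
print("deterministic-equivalent capacity:", round(b_det, 3))
print("solution:", np.round(res.x, 3), "objective:", round(-res.fun, 3))
```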


    Optimization of a Process Synthesis Superstructure Using an Ant Colony Algorithm

    CHEMICAL ENGINEERING & TECHNOLOGY (CET), Issue 3 2008
    B. Raeesi
    Abstract The optimization of chemical syntheses based on superstructure modeling is a perfect way for achieving the optimal plant design. However, the combinatorial optimization problem arising from this method is very difficult to solve, particularly for the entire plant. Relevant literature has focused on the use of mathematical programming approaches. Some research has also been conducted based on meta-heuristic algorithms. In this paper, two approaches are presented to optimize process synthesis superstructure. Firstly, mathematical formulation of a superstructure model is presented. Then, an ant colony algorithm is proposed for solving this nonlinear combinatorial problem. In order to ensure that all the constraints are satisfied, an adaptive, feasible bound for each variable is defined to limit the search space. Adaptation of these bounds is executed by the suggested bound updating rule. Finally, the capability of the proposed algorithm is compared with the conventional Branch and Bound method by a case study. [source]