Heuristic Methods
Selected Abstracts

Heuristic Methods for Computer Ethics
METAPHILOSOPHY, Issue 3 2002, Walter Maner
The domain of "procedural ethics" is the set of reflective and deliberative methods that maximize the reliability of moral judgment. While no general algorithmic method exists that will guarantee the validity of ethical deliberation, non-algorithmic "heuristic" methods can guide and inform the process, making it significantly more robust and dependable. This essay examines various representative heuristic procedures commonly recommended for use in applied ethics, maps them into a uniform set of twelve stages, identifies common faults, and then shows how the resulting stage-by-stage decision-making model could be adapted for general use and for use in computer ethics. [source]

Comparison of LiDAR waveform processing methods for very shallow water bathymetry using Raman, near-infrared and green signals
EARTH SURFACE PROCESSES AND LANDFORMS, Issue 6 2010, Tristan Allouis
Airborne light detection and ranging (LiDAR) bathymetry appears to be a useful technology for bed topography mapping of non-navigable areas, offering high data density and a high acquisition rate. However, few studies have focused on continental waters, in particular on very shallow waters (<2 m), where it is difficult to extract the surface and bottom positions that are typically mixed in the green LiDAR signal. This paper proposes two new processing methods for depth extraction based on the use of different LiDAR signals [green, near-infrared (NIR), Raman] of the SHOALS-1000T sensor. They have been tested on a very shallow coastal area (Golfe du Morbihan, France) as an analogy to very shallow rivers. The first method is based on a combination of mathematical and heuristic methods using the green and the NIR LiDAR signals to cross-validate the information delivered by each signal. The second method extracts water depths from the Raman signal using statistical methods such as principal components analysis (PCA) and classification and regression tree (CART) analysis. The obtained results are then compared to the reference depths, and the performances of the different methods, as well as their advantages and disadvantages, are evaluated. The green/NIR method supplies 42% more points compared to the operator process, with an equivalent mean error (−4·2 cm versus −4·5 cm) and a smaller standard deviation (25·3 cm versus 33·5 cm). The Raman processing method provides very scattered results (standard deviation of 40·3 cm) with the lowest mean error (−3·1 cm) and 40% more points. The minimum detectable depth is also improved by the two presented methods, being around 1 m for the green/NIR approach and 0·5 m for the statistical approach, compared to 1·5 m for the data processed by the operator. Despite its ability to measure other parameters such as water temperature, the Raman method needed a large amount of reference data to provide reliable depth measurements, as opposed to the green/NIR method. Copyright © 2010 John Wiley & Sons, Ltd. [source]
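The Raman-based approach described above chains PCA feature reduction with a CART regressor. A minimal sketch of that generic pipeline with scikit-learn, trained on synthetic stand-in waveforms (the waveform model, sample sizes, and hyperparameters are invented for illustration and are not the SHOALS-1000T processing chain):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Stand-in data: 500 waveforms of 64 samples each, with a known depth per
# waveform. Real inputs would be digitized LiDAR return waveforms.
depths = rng.uniform(0.2, 2.0, size=500)                 # metres (assumed range)
t = np.linspace(0, 1, 64)
surface = np.exp(-((t - 0.1) ** 2) / 0.002)              # surface return peak
bottom = 0.5 * np.exp(-((t[None, :] - 0.1 - depths[:, None] / 3) ** 2) / 0.004)
waveforms = surface + bottom + rng.normal(0, 0.02, (500, 64))

X_train, X_test, y_train, y_test = train_test_split(
    waveforms, depths, random_state=0)

# PCA compresses each waveform into a few scores; CART regresses depth on them.
model = make_pipeline(PCA(n_components=5),
                      DecisionTreeRegressor(max_depth=6, random_state=0))
model.fit(X_train, y_train)

err = model.predict(X_test) - y_test
print(f"mean error {err.mean() * 100:+.1f} cm, std {err.std() * 100:.1f} cm")
```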
Index tracking with constrained portfolios
INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 1-2 2007, Dietmar Maringer
Passive portfolio management strategies, such as index tracking, are popular in the industry, but so far little research has been done on the cardinality of such a portfolio, i.e. on how many different assets ought to be included in it. One reason for this is the computational complexity of the associated optimization problems. Traditional optimization techniques cannot deal appropriately with the discontinuities and the many local optima emerging from the introduction of explicit cardinality constraints. More recent approaches such as heuristic methods, on the other hand, can overcome these hurdles. This paper demonstrates how one of these methods, differential evolution, can be used to solve the constrained index-tracking problem. We analyse the financial implications of cardinality constraints for a tracking portfolio using an empirical study of the Dow Jones Industrial Average. We find that the index can be tracked satisfactorily with a subset of its components and, more importantly, that the deviation between the computed actual tracking error and the theoretically achievable tracking error out of sample is only negligibly affected by the portfolio's cardinality. Copyright © 2007 John Wiley & Sons, Ltd. [source]
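A common way to let a population-based optimizer respect a cardinality cap is to decode each candidate vector by keeping only its k largest weights. A toy sketch in that spirit, using SciPy's differential_evolution on simulated returns (the return data, the cardinality k, and the tracking-error definition are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
T, n, k = 250, 10, 4   # trading days, candidate assets, cardinality cap

# Stand-in daily returns; the "index" is their equal-weighted average.
asset_ret = rng.normal(0.0003, 0.01, (T, n))
index_ret = asset_ret.mean(axis=1)

def decode(x):
    """Keep the k largest entries, zero the rest, renormalize to sum to 1."""
    w = np.clip(x, 0, None)
    keep = np.argsort(w)[-k:]
    sparse = np.zeros_like(w)
    sparse[keep] = w[keep]
    s = sparse.sum()
    return sparse / s if s > 0 else np.full(n, 1.0 / n)

def tracking_error(x):
    """In-sample tracking error of the decoded portfolio (no annualization)."""
    return np.std(asset_ret @ decode(x) - index_ret)

res = differential_evolution(tracking_error, bounds=[(0, 1)] * n,
                             seed=1, maxiter=200, tol=1e-8)
w_best = decode(res.x)
print("active assets:", np.flatnonzero(w_best), "TE:", tracking_error(res.x))
```

The decoding step keeps the objective well defined for every real-valued candidate, so the evolutionary search never has to handle the combinatorial constraint explicitly.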
Network service scheduling and routing
INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 6 2004, G. Groves
Real-life vehicle routing problems generally have both routing and scheduling aspects to consider. Although this fact is well acknowledged, few heuristic methods exist that address both of these complicated aspects simultaneously. We present a graph-theoretic heuristic to determine an efficient service route for a single service vehicle through a transportation network that requires a subset of its edges to be serviced, each a specified (potentially different) number of times. The times at which each of these edges is serviced should additionally be spaced as evenly as possible over the scheduling time window, thus introducing a scheduling consideration to the problem. Our heuristic is based on the tabu search method, used in conjunction with various well-known graph-theoretic algorithms, such as those of Floyd (for determining shortest routes) and Frederickson (for solving the rural postman problem). This heuristic forms the backbone of a decision support system that prompts the user for certain parameters of the physical situation (such as the service frequencies and travel times for each network link, as well as bounds on the acceptability of results), after which a service routing schedule is suggested as output. The decision support system is applied to a special case study in which a service routing schedule is sought for the South African national railway system by Spoornet (the semi-privatised South African national railways authority and service provider) as part of its rationalisation effort to remain a lucrative company. [source]
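Floyd's algorithm, cited in the abstract above as a building block, computes all-pairs shortest travel times that a tabu search can then look up when splicing service routes together. A self-contained sketch (the four-node network is invented for illustration):

```python
import math

def floyd_warshall(n, edges):
    """All-pairs shortest-path distances via Floyd's algorithm, O(n^3).

    edges: dict mapping (i, j) to travel time; the network is undirected here.
    """
    dist = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0.0
    for (i, j), t in edges.items():
        dist[i][j] = min(dist[i][j], t)
        dist[j][i] = min(dist[j][i], t)
    for k in range(n):                      # allow node k as an intermediate stop
        for i in range(n):
            for j in range(n):
                via = dist[i][k] + dist[k][j]
                if via < dist[i][j]:
                    dist[i][j] = via
    return dist

# Toy network: 4 junctions with travel times on the links between them.
links = {(0, 1): 3.0, (1, 2): 4.0, (0, 3): 10.0, (2, 3): 1.0}
d = floyd_warshall(4, links)
print(d[0][3])   # 8.0: the 0-1-2-3 route beats the direct 0-3 link
```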
Realizing the Potential of Real Options: Does Theory Meet Practice?
JOURNAL OF APPLIED CORPORATE FINANCE, Issue 2 2005, Alexander Triantis
The idea of viewing corporate investment opportunities as "real options" has been around for over 25 years. Real options concepts and techniques now routinely appear in academic research in finance and economics, and have begun to influence scholarly work in virtually every business discipline, including strategy, organizations, management science, operations management, information systems, accounting, and marketing. Real options concepts have also made considerable headway in practice. Corporate managers are more likely to recognize options in their strategic planning process, and have become more proactive in designing flexibility into projects and contracts, frequently using real options vocabulary in their discussions. Thanks in part to the spread of real options thinking, today's strategic planners are more likely than their predecessors to recognize the "option" value of actions like the following:
- dividing up large projects into a number of stages;
- investing in the acquisition or production of information;
- introducing "modularity" in manufacturing and design;
- developing competing prototypes for new products; and
- investing in overseas markets.
But if real options has clearly succeeded as a way of thinking, the application of real options valuation methods has been limited to companies in relatively few industries and has thus failed to live up to the expectations created in the mid- to late 1990s. Increased corporate acceptance and implementation of real options valuation techniques will require several changes coming together. On the theory side, we need more realistic models that better reflect differences between financial and real options, simple heuristic methods that can be more easily implemented (but that have been carefully benchmarked against more precise models), and better guidance on implementation issues such as the estimation of discount rates for the "optionless" underlying projects. On the practitioner side, we need user-friendly real options software, more senior-level buy-in, more deliberate diffusion of real options knowledge throughout organizations, better alignment of managerial incentives with long-term shareholder value, and better-designed contracts to correct the misalignment of incentives across the value chain. If these challenges can be met, there will continue to be a steady if gradual diffusion of real options analysis throughout organizations over the next few decades, with real options eventually becoming not only a standard part of corporate strategic planning, but also the primary valuation tool for assessing the expected shareholder effect of large capital investment projects. [source]

Neighborhood search heuristics for selecting hierarchically well-formulated subsets in polynomial regression
NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 1 2010, Michael J. Brusco
The importance of subset selection in multiple regression has been recognized for more than 40 years and, not surprisingly, a variety of exact and heuristic procedures have been proposed for choosing subsets of variables. In the case of polynomial regression, the subset selection problem is complicated by two issues: (1) the substantial growth in the number of candidate predictors, and (2) the desire to obtain hierarchically well-formulated subsets that facilitate proper interpretation of the regression parameter estimates. The first of these issues creates the need for heuristic methods that can provide solutions in reasonable computation time, whereas the second requires innovative neighborhood search approaches that accommodate the hierarchical constraints. We developed tabu search and variable neighborhood search heuristics for subset selection in polynomial regression. These heuristics are applied to a classic data set from the literature and subsequently evaluated in a simulation study using synthetic data sets. © 2009 Wiley Periodicals, Inc. Naval Research Logistics, 2010 [source]
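A subset is hierarchically well-formulated when every high-order term (say x1*x2 or x1^2) is accompanied by all of the lower-order terms it contains. A small sketch of that feasibility test plus one constraint-respecting "add" move; the exponent-tuple encoding is an illustrative assumption, not the authors' implementation:

```python
# A polynomial term over (x1, x2) is an exponent tuple: (1, 0) = x1,
# (1, 1) = x1*x2, (2, 0) = x1^2, and so on.

def parents(term):
    """Immediate lower-order terms, e.g. (1, 1) -> {(0, 1), (1, 0)}."""
    out = set()
    for i, e in enumerate(term):
        if e > 0:
            out.add(term[:i] + (e - 1,) + term[i + 1:])
    out.discard((0,) * len(term))       # the constant term is always included
    return out

def well_formulated(subset):
    """True if every term's lower-order terms are also in the subset."""
    return all(parents(t) <= subset for t in subset)

def feasible_additions(subset, candidates):
    """Add-moves that keep the subset hierarchically well-formulated."""
    return [t for t in candidates - subset if parents(t) <= subset]

S = {(1, 0), (0, 1)}                    # current subset: {x1, x2}
pool = {(1, 0), (0, 1), (2, 0), (1, 1), (2, 1)}
print(well_formulated(S))               # True
print(feasible_additions(S, pool))      # x1^2 and x1*x2 allowed; x1^2*x2 is not
```

A tabu search or variable neighborhood search can then restrict its moves to such feasible additions (and the mirror-image feasible deletions) so that every solution visited respects the hierarchy.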
New heuristic methods for the capacitated multi-facility Weber problem
NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 1 2007, Necati Aras
In this paper we consider the capacitated multi-facility Weber problem with the Euclidean, squared Euclidean, and ℓp distances. This problem is concerned with locating m capacitated facilities in the Euclidean plane to satisfy the demand of n customers with the minimum total transportation cost. The demand and location of each customer are known a priori, and the transportation cost between customers and facilities is proportional to the distance between them. We first present a mixed integer linear programming approximation of the problem. We then propose new heuristic solution methods based on this approximation. Computational results on benchmark instances indicate that the new methods are both accurate and efficient. © 2006 Wiley Periodicals, Inc. Naval Research Logistics 2006 [source]

Quay crane scheduling at container terminals to minimize the maximum relative tardiness of vessel departures
NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 1 2006, Jiyin Liu
In this paper, we study the problem of scheduling quay cranes (QCs) at container terminals where incoming vessels have different ready times. The objective is to minimize the maximum relative tardiness of vessel departures. The problem can be formulated as a mixed integer linear programming (MILP) model whose large size makes it difficult to solve directly. We propose a heuristic decomposition approach to break the problem down into two smaller, linked models: a vessel-level and a berth-level model. With the same berth-level model, two heuristic methods are developed using different vessel-level models. Computational experiments show that the proposed approach is effective and efficient. © 2005 Wiley Periodicals, Inc. Naval Research Logistics, 2006 [source]

Focusing on Customer Time in Field Service: A Normative Approach
PRODUCTION AND OPERATIONS MANAGEMENT, Issue 2 2007, Aruna Apte
Although customer convenience should rightfully be considered a central element in field services, the customer experience suggests that service enterprises rarely take the customer's preferred time into account in making operational and scheduling decisions. In this paper we present the results of our exploratory research into two interrelated topics: the explicit inclusion of customer time in nonemergency field service delivery decisions, and the analysis of the trade-off between the customer's convenience and the field service provider's cost. Based on prior research in service quality, we identify and illustrate two time-based performance metrics that are particularly appropriate for assessing service quality in nonemergency field services: performance quality and conformance quality. To determine vehicle routes, we develop a hybrid heuristic derived from existing and proven heuristic methods. A numerical example closely patterned after real-life data is generated and used within a computational experiment to investigate alternative policies for promise time windows. Our experiment shows that, over a reasonable range of customer cost parameters, the policy of shorter promise time windows reduces the combined total cost incurred by the provider and the customers, and should be considered a preferred policy by the field service provider. Managerial implications of this result are discussed. [source]

Construction and Optimality of a Special Class of Balanced Designs
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 5 2006, Stefano Barone
The use of balanced designs is generally advisable in experimental practice. In technological experiments, balanced designs optimize the exploitation of experimental resources, whereas in marketing research experiments they avoid erroneous conclusions caused by the misinterpretation of interviewed customers. In general, the balancing property assures the minimum variance of first-order effect estimates. In this work the authors consider situations in which all factors are categorical and a minimum run size is required. In the symmetrical case, it is often possible to find an economical balanced design by algebraic methods. In the asymmetrical case, by contrast, algebraic methods lead to expensive designs, and it is therefore necessary to adopt heuristic methods. The existing methods implemented in widespread statistical packages do not guarantee the balancing property, as they are designed to pursue other optimality criteria. To deal with this problem, the authors recently proposed a new method to generate balanced asymmetrical designs aimed at estimating first- and second-order effects. To reduce the run size as much as possible, orthogonality cannot be guaranteed; however, the method yields designs that approach orthogonality as closely as possible (near orthogonality). A collection of designs with two- and three-level factors and run sizes below 100 was prepared. In this work, an empirical study was conducted to understand how much is lost in terms of other optimality criteria when balance is pursued. To show the potential applications of these designs, an illustrative example is provided. Copyright © 2006 John Wiley & Sons, Ltd. [source]

An Automated System for Argument Invention in Law Using Argumentation and Heuristic Search Procedures
RATIO JURIS, Issue 4 2005, Douglas Walton
Argumentation schemes are forms of argument representing the premise-conclusion and inference structures of common types of arguments. The schemes especially useful in law represent defeasible arguments, such as argument from expert opinion. Argument diagramming is a visualization tool used to display a chain of connected arguments linked together. One such tool, Araucaria, available free at http://araucaria.computing.dundee.ac.uk/, helps a user display an argument on the computer screen as an inverted tree structure with the ultimate conclusion as the root of the tree. These argumentation tools are applicable to analyzing a mass of evidence in a case at trial, in a manner already known in law using heuristic methods (Schum 1994) and Wigmore diagrams (Wigmore 1931). In this paper it is shown how they can be automated and applied to the task of inventing legal arguments. One important application is proof construction in trial preparation (Palmer 2003). [source]
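The inverted-tree layout described above, with the ultimate conclusion at the root and premises branching beneath it, maps naturally onto a recursive data structure. A minimal sketch (the node fields and the sample argument from expert opinion are invented for illustration; this is not Araucaria's internal format):

```python
from dataclasses import dataclass, field

@dataclass
class ArgumentNode:
    """One statement in an argument diagram; children are its premises."""
    statement: str
    premises: list["ArgumentNode"] = field(default_factory=list)

def show(node, depth=0):
    """Print the tree with the ultimate conclusion at the root."""
    label = "claim: " if depth == 0 else "premise: "
    print("  " * depth + label + node.statement)
    for p in node.premises:
        show(p, depth + 1)

# A toy instance of argument from expert opinion.
root = ArgumentNode(
    "The signature is probably genuine.",
    [ArgumentNode("Expert E asserts the signature is genuine.",
                  [ArgumentNode("E is a credible handwriting expert."),
                   ArgumentNode("E examined the original document.")])])
show(root)
```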