Decision Problem (decision + problem)
Selected Abstracts

Optimal Experimentation in Signal-Dependent Decision Problems
INTERNATIONAL ECONOMIC REVIEW, Issue 2, 2002. Manjira Datta.
First page of article. [source]

Distribution of Aggregate Utility Using Stochastic Elements of Additive Multiattribute Utility Models
DECISION SCIENCES, Issue 2, 2000. Herbert Moskowitz.
ABSTRACT: Conventionally, elements of a multiattribute utility model characterizing a decision maker's preferences, such as attribute weights and attribute utilities, are treated as deterministic. This may be unrealistic because assessment of such elements can be imprecise and erroneous, or differ among a group of individuals. Moreover, attempting to make precise assessments can be time consuming and cognitively demanding. We propose to treat such elements as stochastic variables to account for inconsistency and imprecision in such assessments. Under these assumptions, we develop procedures for computing the probability distribution of aggregate utility for an additive multiattribute utility function (MAUF), based on the Edgeworth expansion. When the distributions of aggregate utility for all alternatives in a decision problem are known, stochastic dominance can then be invoked to filter out inferior alternatives. We show that, under certain mild conditions, the aggregate utility distribution approaches normality as the number of attributes increases. Thus, only a few terms from the Edgeworth expansion with a standard normal density as the base function will be sufficient for approximating an aggregate utility distribution in practice. Moreover, the more symmetric the attribute utility distributions, the fewer the attributes needed to achieve normality. The Edgeworth expansion thus can provide the basis for a computationally viable approach to representing an aggregate utility distribution with imprecisely specified attribute weight and utility assessments (or weights and utilities that differ across individuals). Practical guidelines for using the Edgeworth approximation are given. The proposed methodology is illustrated using a vendor selection problem. [source]
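
The stochastic-weights idea in the abstract above can be illustrated with a small simulation. The sketch below is not the paper's Edgeworth procedure; it simply draws random attribute weights and utilities for one hypothetical three-attribute alternative, forms the additive aggregate utility, and compares the simulated distribution with a crude normal summary. All distributions and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 100_000

# Hypothetical imprecise assessments for one alternative (three attributes):
# attribute utilities are Beta-distributed, weights are Dirichlet-distributed
# so they stay non-negative and sum to one.
utilities = np.column_stack([
    rng.beta(8, 3, n_draws),   # attribute 1
    rng.beta(5, 5, n_draws),   # attribute 2
    rng.beta(2, 6, n_draws),   # attribute 3
])
weights = rng.dirichlet([4.0, 3.0, 2.0], n_draws)

# Additive multiattribute utility: U = sum_i w_i * u_i
aggregate = (weights * utilities).sum(axis=1)

# Compare the simulated distribution with a simple normal approximation.
mu, sigma = aggregate.mean(), aggregate.std()
print(f"simulated mean={mu:.3f}, sd={sigma:.3f}")
print("P(U > 0.6):", (aggregate > 0.6).mean())
```

With distributions like these in hand for every alternative, the stochastic-dominance screening described in the abstract becomes a straightforward comparison of the simulated (or Edgeworth-approximated) cumulative distributions.
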
Modelling Audit Risk Assessments: Exploration of an Alternative to the Use of Knowledge-based Systems
INTERNATIONAL JOURNAL OF AUDITING, Issue 1, 2003. Paul Lloyd.
This paper compares decision-modelling approaches. The decision modelled is the assessment of inherent and control risk in the purchases, accounts payable and inventory cycle. It is modelled using two different approaches. First, knowledge-based models are constructed using established development shells. Second, a model is constructed using a conventional procedural programming language. Both modelling approaches are tested against the output of human practitioners and compared with each other to determine whether the more restrictive, assumption-laden approach offered by the procedural model is adequate for the decision problem under examination, or whether the greater flexibility offered by the knowledge-based approach is required. This comparison yields positive results: the procedural model is able to reproduce satisfactorily the output of the human decision makers and the knowledge-based models for the chosen decision problem. This result emphasises the importance of devoting time to the selection of the most appropriate modelling approach for a given decision problem. [source]

Simulation and discrete event optimization for automated decisions for in-queue flights
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 5, 2010. D. Dimitrakiev.
The paper discusses simulation and optimization of in-queue flights, analyzed as discrete-event systems. Simulation is performed on a platform based on MATLAB program functions and SIMULINK dynamic models. Regime optimization aims to maximize the controllability of the queue and minimize the fuel consumption of each aircraft (AC). Because of mutual preferential independence, a hierarchical additive value function is constructed, consisting of fuzzily estimated parameter value functions and weight coefficients, and a multicriteria decision problem is solved under strict certainty. Two optimization algorithms are applied: one that finds the regime leading to the maximally preferred consequence, and another that finds the regime with minimum total fuel consumption among those whose control parameters are set at their most preferred levels. A comparison between the two algorithms is proposed. A scheme describes how the optimization procedures can be used multiple times during the execution of the flight with respect to the occurrence of discrete events. Simulation results are also presented for the discussed algorithms and procedures. © 2010 Wiley Periodicals, Inc. [source]

Towards a general and unified characterization of individual and collective choice functions under fuzzy and nonfuzzy preferences and majority via the ordered weighted average operators
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 1, 2009. Janusz Kacprzyk.
A fuzzy preference relation is a powerful and popular model for representing both individual and group preferences, and it can be a basis for decision-making models that in general provide as a result a subset of alternatives constituting an ultimate solution of a decision problem. To arrive at such a final solution, individual and/or group choice rules may be employed. There is a wealth of such rules devised in the context of classical, crisp preference relations. Originally, most of the popular group decision-making rules were conceived for classical (crisp) preference relations (orderings) and then extended to traditional fuzzy preference relations. In this paper we pursue the path towards a universal representation of such choice rules that can provide an effective generalization, for the case of fuzzy preference relations, of the classical choice rules. © 2008 Wiley Periodicals, Inc. [source]
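
As a small illustration of the ordered weighted averaging (OWA) idea underlying the abstract above, the following sketch applies an OWA operator to a vector of per-criterion satisfaction degrees. The weights and scores are invented for illustration; the operator itself (sort the arguments, then take a weighted sum) follows the standard definition.

```python
from typing import Sequence

def owa(scores: Sequence[float], weights: Sequence[float]) -> float:
    """Ordered weighted average: weights are applied to the scores
    after sorting them in non-increasing order."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("OWA weights must sum to 1")
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

# Degrees to which one alternative satisfies four criteria (illustrative).
scores = [0.9, 0.4, 0.7, 0.6]

print(owa(scores, [1.0, 0.0, 0.0, 0.0]))      # "max" behaviour (optimistic)
print(owa(scores, [0.0, 0.0, 0.0, 1.0]))      # "min" behaviour (pessimistic)
print(owa(scores, [0.25, 0.25, 0.25, 0.25]))  # plain average
print(owa(scores, [0.4, 0.3, 0.2, 0.1]))      # a soft-majority-like weighting
```

Varying the weight vector moves the aggregation continuously between "all criteria matter" and "at least one criterion matters", which is the flexibility exploited when generalizing choice rules and fuzzy majorities.
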
Bayesian Hypothesis Testing: a Reference Approach
INTERNATIONAL STATISTICAL REVIEW, Issue 3, 2002. José M. Bernardo.
Summary: For any probability model M = {p(x | θ, ω), θ ∈ Θ, ω ∈ Ω} assumed to describe the probabilistic behaviour of data x ∈ X, it is argued that testing whether or not the available data are compatible with the hypothesis H0 = {θ = θ0} is best considered as a formal decision problem on whether to use (a0), or not to use (a1), the simpler probability model (or null model) M0 = {p(x | θ0, ω), ω ∈ Ω}, where the loss difference L(a0, θ, ω) − L(a1, θ, ω) is proportional to the amount of information δ(θ0, θ, ω) which would be lost if the simplified model M0 were used as a proxy for the assumed model M. For any prior distribution π(θ, ω), the appropriate normative solution is obtained by rejecting the null model M0 whenever the corresponding posterior expectation ∫∫ δ(θ0, θ, ω) π(θ, ω | x) dθ dω is sufficiently large. Specification of a subjective prior is always difficult, and often polemical, in scientific communication. Information theory may be used to specify a prior, the reference prior, which only depends on the assumed model M and mathematically describes a situation where no prior information is available about the quantity of interest. The reference posterior expectation, d(θ0, x) = ∫ δ π(δ | x) dδ, of the amount of information δ(θ0, θ, ω) which could be lost if the null model were used, provides an attractive nonnegative test function, the intrinsic statistic, which is invariant under reparametrization. The intrinsic statistic d(θ0, x) is measured in units of information, and it is easily calibrated (for any sample size and any dimensionality) in terms of some average log-likelihood ratios. The corresponding Bayes decision rule, the Bayesian reference criterion (BRC), indicates that the null model M0 should only be rejected if the posterior expected loss of information from using the simplified model M0 is too large or, equivalently, if the associated expected average log-likelihood ratio is large enough. The BRC criterion provides a general reference Bayesian solution to hypothesis testing which does not assume a probability mass concentrated on M0 and, hence, it is immune to Lindley's paradox. The theory is illustrated within the context of multivariate normal data, where it is shown to avoid Rao's paradox on the inconsistency between univariate and multivariate frequentist hypothesis testing.
Résumé: For a probability model M = {p(x | θ, ω), θ ∈ Θ, ω ∈ Ω} intended to describe the probabilistic behaviour of data x ∈ X, we argue that testing whether the data are compatible with a hypothesis H0 = {θ = θ0} should be treated as a decision problem concerning the use of the model M0 = {p(x | θ0, ω), ω ∈ Ω}, with a loss function that measures the amount of information that may be lost if the simplified model M0 is used as an approximation of the true model M. The expected loss, computed with respect to a suitable reference prior, provides a relevant test statistic, the intrinsic statistic d(θ0, x), which is invariant under reparametrization. The intrinsic statistic d(θ0, x) is measured in units of information, and its calibration, which is independent of the sample size and of the dimension of the parameter, does not depend on its sampling distribution. The corresponding Bayes rule, the Bayesian reference criterion (BRC), indicates that H0 should be rejected only if the posterior expected loss of information from using the simplified model M0 is too large. The BRC criterion provides a general and objective Bayesian solution for testing precise hypotheses which does not require a probability mass concentrated on M0 and consequently escapes Lindley's paradox. The theory is illustrated in the context of multivariate normal variables, and it is shown to avoid Rao's paradox on the inconsistency between univariate and multivariate tests. [source]
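
To make the intrinsic statistic concrete, here is a hedged sketch of the simplest textbook case: testing H0: θ = θ0 for the mean of a normal model with known variance under the flat reference prior. In that case the information loss from using θ0 instead of θ for n observations is δ = n(θ − θ0)²/(2σ²), and its reference posterior expectation reduces to (1 + z²)/2 with z = √n(x̄ − θ0)/σ. The code checks this closed form by Monte Carlo; it illustrates the logic only, not the general BRC machinery or calibration thresholds of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data: n observations from a normal model with known sigma (assumed values).
sigma, theta0 = 1.0, 0.0
x = rng.normal(0.4, sigma, size=25)
n, xbar = len(x), x.mean()
z = np.sqrt(n) * (xbar - theta0) / sigma

# Closed form of the reference posterior expected information loss.
d_closed = 0.5 * (1.0 + z**2)

# Monte Carlo check: theta | x ~ N(xbar, sigma^2 / n) under the flat reference prior,
# and delta(theta) = n * (theta - theta0)^2 / (2 * sigma^2).
theta_post = rng.normal(xbar, sigma / np.sqrt(n), size=200_000)
delta = n * (theta_post - theta0) ** 2 / (2 * sigma**2)
d_mc = delta.mean()

print(f"intrinsic statistic: closed form {d_closed:.3f}, Monte Carlo {d_mc:.3f}")
# Larger values of d indicate stronger evidence against H0; the paper calibrates them
# as expected average log-likelihood ratios.
```
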
On undesirable consequences of thinking: framing effects as a function of substantive processing
JOURNAL OF BEHAVIORAL DECISION MAKING, Issue 2, 2007. Eric R. Igou.
Abstract: Three studies investigate the impact of effortful constructive processing on framing effects. The results replicated previous findings: participants avoided the risky option when the scenario was framed in terms of gains, but preferred this option when the scenario was framed in terms of losses. Importantly, framing effects were most pronounced when conditions allowed for an effortful constructive processing style (i.e., substantive processing). This impact of decision frames varied when decision time served as an indicator for the elaboration extent (Study 1), and also when processing motivation (accountability; Study 2) and processing ability (decision time; Study 3) were manipulated. Moreover, effortful processing did not increase framing effects when contextual cues reduced the necessity for constructive thinking (Study 1). We suggest that decision frames may take on very different roles as a function of the ambiguity of the decision problem, and the degree and style of processing. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Life Cycle Cost Disclosure, Consumer Behavior, and Business Implications: Evidence From an Online Field Experiment
JOURNAL OF INDUSTRIAL ECOLOGY, Issue 1, 2010.
Summary: Comprehensive assessments of final consumption have identified "housing" as a major contributor to total environmental impacts. Within this category, electrical-energy-using products are important. Do consumers opt for more energy-efficient household appliances if they are provided with life cycle cost (LCC), that is, the sum of purchase price and operating cost estimated over the life span of the appliance? And what consequences does LCC disclosure have for business? Physical energy figures shown on appliance labels may be cognitively demanding for consumers, whereas monetary information promises to simplify the decision problem. Despite the rising interest in monetary cost disclosure, its effectiveness relative to physical cost disclosure has not been rigorously evaluated. This research approached the question of effectiveness with an online field experiment for washing machines. Customers of a commercially operating online shop were randomly assigned to two groups. The control group was provided with regular product price information; the treatment group received additional LCC information. A total of 2,065 clicks were recorded and analyzed with multiple regression that controlled for several product characteristics. The evidence suggests that LCC disclosure decreases the mean specific energy use of chosen washing machines by 0.8% (p < 0.01) and their mean specific water use by 0.7% (p < 0.05). As to business implications, LCC disclosure had no effect on the indicator of retail volume, which makes it unattractive for retailers to provide LCC on their own initiative. [source]
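
The life cycle cost notion used in the study above is simple enough to compute directly: purchase price plus operating cost accumulated over the expected life span, optionally discounted. The sketch below uses invented prices, tariffs, and usage figures purely for illustration.

```python
def life_cycle_cost(price: float, annual_kwh: float, annual_water_m3: float,
                    electricity_price: float, water_price: float,
                    lifetime_years: int, discount_rate: float = 0.0) -> float:
    """Purchase price plus (optionally discounted) operating cost over the appliance's life."""
    lcc = price
    for year in range(1, lifetime_years + 1):
        operating = annual_kwh * electricity_price + annual_water_m3 * water_price
        lcc += operating / (1.0 + discount_rate) ** year
    return lcc

# Two hypothetical washing machines: cheaper but less efficient vs. dearer but efficient.
basic = life_cycle_cost(price=399, annual_kwh=220, annual_water_m3=12,
                        electricity_price=0.25, water_price=4.0, lifetime_years=12)
efficient = life_cycle_cost(price=549, annual_kwh=150, annual_water_m3=9,
                            electricity_price=0.25, water_price=4.0, lifetime_years=12)
print(f"basic: {basic:.0f}, efficient: {efficient:.0f}")  # LCC can reverse the price ranking
```

In this invented example the machine with the higher sticker price has the lower life cycle cost, which is exactly the comparison the disclosure experiment makes visible to shoppers.
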
Utility transversality: a value-based approach
JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 5-6, 2005. James E. Matheson.
Abstract: We examine multiattribute decision problems where a value function is specified over the attributes of a decision problem, as is typically done in the deterministic phase of a decision analysis. When uncertainty is present, a utility function is assigned over the value function to represent the decision maker's risk attitude towards value, which we refer to as a value-based approach. A fundamental result of using the value-based approach is a closed-form expression that relates the risk aversion functions of the individual attributes to the trade-off functions between them. We call this relation utility transversality. The utility transversality relation asserts that once the value function is specified, there is only one dimension of risk attitude in multiattribute decision problems. The construction of multiattribute utility functions using the value-based approach provides the flexibility to model more general functional forms that do not require assumptions of utility independence. For example, we derive a new family of multiattribute utility functions that describes richer preference structures than the usual multilinear family. We also show that many classical results of utility theory, such as risk sharing and the notion of a corporate risk tolerance, can be derived simply from the utility transversality relations by appropriate choice of the value function. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Two new cases of rank reversals when the AHP and some of its additive variants are used that do not occur with the multiplicative AHP
JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 1, 2001. Evangelos Triantaphyllou.
Abstract: Many researchers have long observed cases in which certain ranking irregularities can occur when the original analytic hierarchy process (AHP), or some of its variants, is used. This paper presents two new categories of ranking irregularities which defy common intuition. These ranking irregularities occur when one decomposes a decision problem into a set of smaller problems, each defined on two alternatives and the same criteria as the original problem. They are possible when the original AHP, or some of its additive variants, is used. Computational experiments on random test problems and an examination of some real-life case studies suggest that these ranking irregularities are dramatically likely to occur. This paper also proves that these ranking irregularities are not possible when a multiplicative variant of the AHP is used. Copyright © 2001 John Wiley & Sons, Ltd. [source]
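
The distinction the paper above draws between additive and multiplicative AHP aggregation is easy to state in code: given per-criterion priority scores for each alternative and criterion weights, the additive rule takes a weighted arithmetic sum while the multiplicative rule takes a weighted geometric product. The scores and weights below are invented; the sketch only shows the two aggregation rules, not the paper's rank-reversal experiments.

```python
import numpy as np

# Rows = alternatives, columns = criteria (illustrative local priorities; each column sums to 1).
scores = np.array([
    [0.45, 0.20, 0.35],
    [0.35, 0.50, 0.25],
    [0.20, 0.30, 0.40],
])
weights = np.array([0.5, 0.3, 0.2])  # criterion weights, summing to 1

additive = scores @ weights                           # weighted arithmetic mean (original AHP)
multiplicative = np.prod(scores ** weights, axis=1)   # weighted geometric mean (multiplicative AHP)

for name, agg in [("additive", additive), ("multiplicative", multiplicative)]:
    ranking = np.argsort(-agg)
    print(name, np.round(agg, 3), "ranking:", ranking)
```

The geometric form is the one for which the paper proves the decomposition-into-pairs irregularities cannot arise.
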
Minimum boundary touching tilings of polyominoes
MLQ - MATHEMATICAL LOGIC QUARTERLY, Issue 1, 2006. Andreas Spillner.
Abstract: We study the problem of tiling a polyomino P with as few squares as possible such that every square in the tiling has a non-empty intersection with the boundary of P. Our main result is an algorithm which, given a simply connected polyomino P, computes such a tiling of P. We indicate how one can improve the running time of this algorithm for the more restricted row-column-convex polyominoes. Finally, we show that a related decision problem is in NP for rectangular polyominoes. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Optimal service rates of a service facility with perishable inventory items
NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 5, 2002. O. Berman.
In this paper we optimally control service rates for an inventory system of service facilities with perishable products. We consider a finite-capacity system where arrivals are Poisson-distributed, item lifetimes are exponentially distributed, and replenishment is instantaneous. We determine the service rates to be employed at each instant of time so that the long-run expected cost rate is minimized for a fixed maximum inventory level and capacity. The problem is modelled as a semi-Markov decision problem. We establish the existence of a stationary optimal policy, and we solve the problem by employing linear programming. Several numerical examples which provide insight into the behavior of the system are presented. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 464-482, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10021 [source]

Optimal control of a revenue management system with dynamic pricing facing linear demand
OPTIMAL CONTROL APPLICATIONS AND METHODS, Issue 6, 2006. Fee-Seng Chou.
Abstract: This paper considers a dynamic pricing problem over a finite horizon where demand for a product is a time-varying linear function of price. It is assumed that at the start of the horizon there is a fixed amount of the product available. The decision problem is to determine the optimal price at each time period in order to maximize the total revenue generated from the sale of the product. In order to obtain structural results we formulate the decision problem as an optimal control problem and solve it using Pontryagin's principle. For those problems which are not easily solvable when formulated as an optimal control problem, we present a simple convergent algorithm based on Pontryagin's principle that involves solving a sequence of very small quadratic programming (QP) problems. We also consider the case where the initial inventory of the product is a decision variable. We then analyse the two-product version of the problem where the linear demand functions are defined in the sense of Bertrand, and we again solve the problem using Pontryagin's principle. A special case of the optimal control problem is solved by transforming it into a linear complementarity problem. For the two-product problem we again present a simple algorithm that involves solving a sequence of small QP problems, and we also consider the case where the initial inventory levels are decision variables. Copyright © 2006 John Wiley & Sons, Ltd. [source]
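
A discretized toy version of the single-product problem described above can be solved directly as a small constrained optimization: choose a price for each period to maximize total revenue when per-period demand is a_t − b_t·p_t and cumulative demand cannot exceed the initial stock. This is a sketch using scipy's general-purpose SLSQP solver, not the Pontryagin-based algorithm of the paper; the demand coefficients and capacity are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

a = np.array([100.0, 90.0, 80.0, 70.0])   # time-varying demand intercepts (assumed)
b = np.array([2.0, 2.0, 2.5, 2.5])        # price sensitivities (assumed)
capacity = 120.0                          # units available at the start of the horizon

def neg_revenue(p):
    demand = np.maximum(a - b * p, 0.0)
    return -np.sum(p * demand)

constraints = [{"type": "ineq",
                "fun": lambda p: capacity - np.sum(np.maximum(a - b * p, 0.0))}]
bounds = [(0.0, ai / bi) for ai, bi in zip(a, b)]   # keep demand non-negative

res = minimize(neg_revenue, x0=a / (2 * b), bounds=bounds,
               constraints=constraints, method="SLSQP")
prices = res.x
print("prices:", np.round(prices, 2))
print("demand:", np.round(a - b * prices, 2), "revenue:", round(-res.fun, 2))
```

Without the capacity constraint each period would price at a_t/(2·b_t); with the constraint binding, the solver raises prices so that total demand just meets the available stock, which mirrors the structural result the optimal control formulation delivers analytically.
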
Economic Evaluation of Scale Dependent Technology Investments
PRODUCTION AND OPERATIONS MANAGEMENT, Issue 1, 2005. Phillip J. Lederer.
We study the effect of financial risk on the economic evaluation of a project with capacity decisions. Capacity decisions have an important effect on the project's value through the up-front investment, the associated operating cost, and constraints on output. However, increased scale also affects the financial risk of the project through its effect on the operating leverage of the investment. Although it has long been recognized in the finance literature that operating leverage affects project risk, this result has not been incorporated in the operations management literature when evaluating projects. We study the decision problem of a firm that must choose project scale. Future cash flow uncertainty is introduced by uncertain future market prices. The firm's capacity decision affects its potential sales, its expected price for output, and its costs. We study the firm's profit-maximizing scale decision using the CAPM model for risk adjustment. Our results include that project risk, as measured by the required rate of return, is related to the inverse of the expected profit per unit sold. We also show that project risk is related to the scale choice. In contrast, in traditional discounted cash flow (DCF) analysis, a fixed prescribed rate is used to evaluate the project and choose its scale. When a fixed rate is used with DCF, a manager will ignore the effect of scale on risk and choose suboptimal capacity that reduces project value; the manager will also misestimate project value. Use of DCF for choosing scale is studied for two special cases. It is shown that if the manager is directed to use a prescribed discount rate that induces the optimal scale decision, then the manager will greatly undervalue the project. In contrast, if the discount rate is set to the risk of the optimally scaled project, the manager will undersize the project by a small amount and slightly undervalue it, with the economic impact of the error being small. These results underline the importance of understanding the source of financial risk in projects where risk is endogenous to the project design. [source]

Optimal model for warehouse location and retailer allocation
APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 3, 2007. Avninder Gill.
Abstract: Warehouse location and retailer allocation is a high-level strategic decision problem that is commonly encountered by logisticians and supply chain managers, especially during the supply chain design phase. Considering the trade-off between product distribution cost and warehouse capital cost, this paper models the warehouse location and retailer allocation problem as a 0-1 integer programming problem and provides an efficient two-stage set covering heuristic algorithm to solve large-sized problems. Finally, concluding remarks and some recommendations for further research are presented. Copyright © 2007 John Wiley & Sons, Ltd. [source]
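
Set covering, the core of the heuristic mentioned above, has a classic greedy approximation that is easy to sketch: repeatedly open the candidate warehouse that covers the most not-yet-covered retailers per unit of cost. The candidate sites, coverage sets, and costs below are invented, and the paper's own two-stage heuristic is more elaborate than this.

```python
def greedy_set_cover(retailers: set, candidates: dict, costs: dict) -> list:
    """Pick warehouses until all retailers are covered, choosing at each step
    the site with the lowest cost per newly covered retailer."""
    uncovered, chosen = set(retailers), []
    while uncovered:
        best = min(
            (site for site in candidates if candidates[site] & uncovered),
            key=lambda s: costs[s] / len(candidates[s] & uncovered),
        )
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

retailers = {1, 2, 3, 4, 5, 6}
candidates = {                      # retailers each candidate warehouse can serve (assumed)
    "W1": {1, 2, 3},
    "W2": {3, 4, 5},
    "W3": {5, 6},
    "W4": {1, 4, 6},
}
costs = {"W1": 90.0, "W2": 80.0, "W3": 40.0, "W4": 100.0}

print(greedy_set_cover(retailers, candidates, costs))
```

An exact alternative is to pass the same data to a 0-1 integer programming solver, which is what the abstract's formulation does before resorting to the heuristic for large instances.
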
Geographical information systems-based models for offshore floating marine fish cage aquaculture site selection in Tenerife, Canary Islands
AQUACULTURE RESEARCH, Issue 10, 2005. Oscar M Pérez.
Abstract: The present study focuses on the development of a standard methodology for selection of suitable sites for offshore (exposed) marine fish-cage farming (floating cages) of seabream (Sparus aurata) and seabass (Dicentrarchus labrax) in an island environment, using Tenerife as an example. Site selection is a key factor in any aquaculture operation, affecting both success and sustainability, and can solve conflicts between different activities, making a rational use of the coastal space. Site selection was achieved by using geographical information systems (GIS)-based models and related technology to support the decision-making process. The framework for spatial multicriteria decision analysis used in this study began with a recognition and definition of the decision problem. Subsequently, 31 production functions (factors and constraints) were identified, defined and subdivided into eight submodels. These were then integrated into a GIS database in the form of thematic layers and later scored for standardization. At this stage, the database was verified by field sampling to establish the quality of data used. The decision maker's preferences were incorporated into the decision model by assigning weights of relative importance to the evaluation under consideration. These, together with the thematic layers, were incorporated using multicriteria evaluation techniques and simple overlays to provide an overall assessment of possible alternatives. The integration, manipulation and presentation of the results by means of GIS-based models in this sequential and logical flow of steps proved to be very effective for helping the decision-making process of site selection. Tenerife has very favourable environmental conditions for culture of marine fish and there are no totally unsuitable sites for cage farming identified in this study. On the other hand, there are few very suitable sites (high scores) either, principally due to the heavy use of the coastline and the conflicts between different users. From the 228 km2 of available area for siting cages in the coastal regions with depth less than 50 m, the total area suitable for siting cages (scores 6-8) was 37 km2. There are only 0.51 km2 of very suitable areas (score 8) and approximately 5.37 km2 of suitable areas (score 7), most of these being located in the southeast of the island. These relatively small areas of suitability should be put into the context of the wider use of the coastal environment around Tenerife. [source]
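
The weighted-overlay step at the heart of such GIS models reduces, numerically, to a weighted sum of standardized factor layers masked by the constraint layers. The toy rasters and weights below are invented; real models such as the one above use dozens of layers and field-verified data.

```python
import numpy as np

# Three standardized factor layers on a tiny 4x4 grid, scored 0 (worst) to 8 (best); assumed values.
depth_suitability = np.array([[8, 7, 5, 2], [8, 6, 4, 1], [7, 6, 3, 1], [5, 4, 2, 0]], float)
current_suitability = np.array([[6, 6, 7, 8], [5, 6, 7, 8], [4, 5, 6, 7], [3, 4, 5, 6]], float)
distance_to_port = np.array([[8, 7, 6, 5], [7, 6, 5, 4], [6, 5, 4, 3], [5, 4, 3, 2]], float)

weights = {"depth": 0.5, "current": 0.3, "port": 0.2}   # decision maker's importance weights

# Constraint layer: 1 = allowed, 0 = excluded (e.g. shipping lane or marine reserve).
constraints = np.array([[1, 1, 1, 0], [1, 1, 1, 0], [1, 1, 1, 1], [0, 0, 1, 1]])

suitability = (weights["depth"] * depth_suitability
               + weights["current"] * current_suitability
               + weights["port"] * distance_to_port) * constraints

print(np.round(suitability, 1))
print("best cell:", np.unravel_index(np.argmax(suitability), suitability.shape))
```
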
Probabilistic Neural Network for Reliability Assessment of Oil and Gas Pipelines
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5, 2002. Sunil K. Sinha.
A fuzzy artificial neural network (ANN)-based approach is proposed for the reliability assessment of oil and gas pipelines. The proposed ANN model is trained with field observation data collected using magnetic flux leakage (MFL) tools to characterize the actual condition of aging pipelines vulnerable to metal-loss corrosion. The objective of this paper is to develop a simulation-based probabilistic neural network model to estimate the probability of failure of aging pipelines vulnerable to corrosion. The approach is to transform a simulation-based probabilistic analysis framework for estimating pipeline reliability into an adaptable connectionist representation, using supervised training to initialize the weights so that the adaptable neural network predicts the probability of failure for oil and gas pipelines. This ANN model uses eight pipe parameters as input variables; the output variable is the probability of failure. The proposed method is generic, and it can be applied to several decision problems related to the maintenance of aging engineering systems. [source]

The impact of problem size on decision processes: an experimental investigation on very large choice problems with support of decision support systems
EXPERT SYSTEMS, Issue 2, 2004. H. Wang.
Abstract: Choice problems, as a class of decision problems, have attracted great attention over the last couple of decades. Among the frameworks and supporting theories used in their study, two have had the greatest impact: bounded rationality and cost-benefit. Both theories have found support in past empirical studies under different conditions or problem environments. In past studies, problem size has been shown to play an important role in decision-making: as problem size increases, a decision process may be detoured and the decision outcome may differ. In this paper we investigate the impact of problem size on three important aspects of the computer-aided decision process - strategy selection, decision time/effort, and decision quality - through very large choice problems. [source]

A choice prediction competition: Choices from experience and from description
JOURNAL OF BEHAVIORAL DECISION MAKING, Issue 1, 2010. Ido Erev.
Abstract: Erev, Ert, and Roth organized three choice prediction competitions focused on three related choice tasks: one-shot decisions from description (decisions under risk), one-shot decisions from experience, and repeated decisions from experience. Each competition was based on two experimental datasets: an estimation dataset and a competition dataset. The studies that generated the two datasets used the same methods and subject pool, and examined decision problems randomly selected from the same distribution. After collecting the experimental data to be used for estimation, the organizers posted them on the Web, together with their fit with several baseline models, and challenged other researchers to compete to predict the results of the second (competition) set of experimental sessions. Fourteen teams responded to the challenge; the last seven authors of this paper are members of the winning teams. The results highlight the robustness of the difference between decisions from description and decisions from experience. The best predictions of decisions from description were obtained with a stochastic variant of prospect theory assuming that the sensitivity to the weighted values decreases with the distance between the cumulative payoff functions. The best predictions of decisions from experience were obtained with models that assume reliance on small samples. Merits and limitations of the competition method are discussed. Copyright © 2009 John Wiley & Sons, Ltd. [source]
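
The "reliance on small samples" idea that won the experience-based competitions can be sketched in a few lines: on each trial the agent recalls a small random sample of past outcomes from each option and picks the option with the better sample mean. The payoff distributions, sample size, and number of trials below are assumptions made for illustration, not the competitions' actual problems or the winning model's exact specification.

```python
import numpy as np

rng = np.random.default_rng(7)

def risky():   # e.g. pays 4 with probability 0.8, otherwise 0 (assumed gamble)
    return 4.0 if rng.random() < 0.8 else 0.0

def safe():    # sure payoff of 3 (assumed)
    return 3.0

options = [risky, safe]
history = [[], []]        # remembered outcomes per option
choices = []

for t in range(200):
    if min(len(h) for h in history) < 1:          # sample each option at least once
        choice = t % 2
    else:
        # Rely on a small sample (size 4) of remembered outcomes per option.
        means = [np.mean(rng.choice(h, size=min(4, len(h)))) for h in history]
        choice = int(np.argmax(means))
    payoff = options[choice]()
    history[choice].append(payoff)
    choices.append(choice)

print("proportion of risky choices:", 1 - np.mean(choices))
```

Because small samples often miss rare outcomes, a simulated agent like this behaves as if it underweights them, which is the qualitative signature that separates decisions from experience from decisions from description.
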
Reflections of the self: how self-esteem determines decision framing and increases risk taking
JOURNAL OF BEHAVIORAL DECISION MAKING, Issue 3, 2007. Todd McElroy.
Abstract: Historically, research examining the influence of individual personality factors on decision processing has been sparse. In this paper we investigate how one important individual aspect, self-esteem, influences the imposition and subsequent processing of ambiguously, negatively or positively framed decision tasks. We hypothesized that low self-esteem (LSE) individuals would impose a negative frame onto ambiguous decision problems and would be especially sensitive to negatively framed decision tasks. In Study 1 we utilized a self-framing procedure and demonstrated that high self-esteem (HSE) participants were evenly divided in the hedonic valence they self-imposed, whereas LSE participants were more likely to self-impose a negative frame. When these differences were accounted for, HSE and LSE participants were equivalent in risk-seeking/avoiding choices. Study 2 used a risky-choice framing task and found that LSE individuals were especially sensitive to the negative frame. Study 3 provided converging evidence and generalization of these findings to a reflection task involving money. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Random error reduction in analytic hierarchies: a comparison of holistic and decompositional decision strategies
JOURNAL OF BEHAVIORAL DECISION MAKING, Issue 3, 2001. Osvaldo F. Morera.
Abstract: The principle of 'divide and conquer' (DAC) suggests that complex decision problems should be decomposed into smaller, more manageable parts, and that these parts should be logically aggregated to derive an overall value for each alternative. Decompositional procedures have been contrasted with holistic evaluations that require decision makers to simultaneously consider all the relevant attributes of the alternatives under consideration (Fischer, 1977). One area where decompositional procedures have a clear advantage over holistic procedures is in the reduction of random error (Ravinder, 1992; Ravinder and Kleinmuntz, 1991; Kleinmuntz, 1990). Adopting the framework originally developed by Ravinder and colleagues, this paper details the results of a study of the random error variances associated with another popular multi-criteria decision-making technique, the Analytic Hierarchy Process (AHP) (Saaty, 1977, 1980), as well as the random error variances of a holistic version of the Analytic Hierarchy Process (Jensen, 1983). In addition, data concerning various psychometric properties (e.g. the convergent validity and temporal stability) and values of AHP inconsistency are reported for both the decompositional and holistic evaluations. The results of the study show that the Ravinder and Kleinmuntz (1991) error-propagation framework extends to the AHP and that decompositional AHP judgments are more consistent than their holistic counterparts. Copyright © 2001 John Wiley & Sons, Ltd. [source]
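
For readers unfamiliar with the AHP machinery referred to in the last two abstracts, the following sketch derives priority weights from a pairwise comparison matrix via its principal eigenvector and reports Saaty's consistency index and ratio; the comparison judgments are invented.

```python
import numpy as np

# Pairwise comparison matrix for three criteria (Saaty 1-9 scale, reciprocal; illustrative judgments).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalized priority vector

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)             # consistency index
ri = 0.58                                   # Saaty's random index for n = 3
print("priorities:", np.round(weights, 3))
print("lambda_max:", round(float(lambda_max), 3), "CI:", round(ci, 3), "CR:", round(ci / ri, 3))
```

The inconsistency values reported in the Morera study are essentially this consistency ratio computed for each respondent's matrices, under both the decompositional and the holistic elicitation modes.
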
The relevance of MCDM for financial decisions
JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 4-5, 2002. Winfried G. Hallerbach.
Abstract: For people working in finance, whether in academia, in practice, or in both, the combination of 'finance' and 'multiple criteria' is not obvious. However, we believe that many of the tools developed in the field of MCDM can contribute both to the quality of the financial economic decision-making process and to the quality of the resulting decisions. In this paper we answer the question of why financial decision problems should be considered as multiple criteria decision problems and should be treated accordingly. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Simulation and multi-attribute utility modelling of life cycle profit
JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 4, 2001. Tony Rosqvist. Article first published online: 16 NOV 200
Abstract: Investments in capital goods are assessed with respect to the life cycle profit as well as the economic lifetime of the investment. The outcome of an investment with respect to these economic criteria is generally non-deterministic, so an assessment of different investment options requires probabilistic modelling to explicitly account for the uncertainties. A process for the assessment of life cycle profit and for the evaluation of the adequacy of the assessment is developed. The primary goal of the assessment process is to aid the decision-maker in structuring and quantifying investment decision problems characterized by multiple criteria and uncertainty. The adequacy of the assessment process can be evaluated by probabilistic criteria indicating the degree of uncertainty in the assessment. Bayesian inference is used to re-evaluate the initial assessment as evidence of the system performance becomes available; authentication of contracts of guarantee is thereby supported. Numerical examples are given to demonstrate features of the described life cycle profit assessment process. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Bayesian clustering and product partition models
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 2, 2003. Fernando A. Quintana.
Summary: We present a decision-theoretic formulation of product partition models (PPMs) that allows a formal treatment of different decision problems, such as estimation or hypothesis testing, and clustering methods simultaneously. A key observation in our construction is the fact that PPMs can be formulated in the context of model selection. The underlying partition structure in these models is closely related to that arising in connection with Dirichlet processes. This allows a straightforward adaptation of some computational strategies, originally devised for nonparametric Bayesian problems, to our framework. The resulting algorithms are more flexible than other competing alternatives that are used for problems involving PPMs. We propose an algorithm that yields Bayes estimates of the quantities of interest and the groups of experimental units. We explore the application of our methods to the detection of outliers in normal and Student t regression models, with clustering structure equivalent to that induced by a Dirichlet process prior. We also discuss the sensitivity of the results considering different prior distributions for the partitions. [source]
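
The partition structure mentioned above, the one induced by a Dirichlet process prior, can be simulated with the Chinese restaurant construction: each unit joins an existing cluster with probability proportional to its size, or starts a new cluster with probability proportional to a concentration parameter. This sketch only draws random partitions from that prior (with an assumed concentration value); it is not the paper's estimation algorithm.

```python
import numpy as np

def sample_crp_partition(n_units: int, alpha: float, rng) -> list:
    """Draw one random partition of n_units items from the Chinese restaurant process prior."""
    clusters = []                                   # current cluster sizes
    labels = []
    for _ in range(n_units):
        probs = np.array(clusters + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(clusters):
            clusters.append(1)                      # open a new cluster
        else:
            clusters[k] += 1
        labels.append(int(k))
    return labels

rng = np.random.default_rng(3)
for _ in range(3):
    labels = sample_crp_partition(n_units=10, alpha=1.0, rng=rng)
    print(labels, "->", len(set(labels)), "clusters")
```
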
Dynamic adjustment cost models with forward-looking behaviour
THE ECONOMETRICS JOURNAL, Issue 1, 2006. Luca Fanelli.
Summary: In this paper we propose a new approach for dynamic decision problems where forward-looking agents choose a set of non-stationary variables subject to quadratic adjustment costs. It is assumed that expectations are computed with a cointegrated Vector Equilibrium Correction Model (VEqCM). The role of feedback from the decision variables to the explanatory variables, both for the properties of the solution and for the modelling approach, is discussed. We show that once the system of interrelated Euler equations stemming from the agent's optimization problem is embedded within the VEqCM, a switching algorithm based on Generalized Least Squares can be used to estimate and test the model. A labour demand model for two Danish manufacturing industries is investigated empirically. [source]

Generalising the Hit Rates Test for Racial Bias in Law Enforcement, With an Application to Vehicle Searches in Wichita
THE ECONOMIC JOURNAL, Issue 515, 2006. Nicola Persico.
This article considers the use of outcomes-based tests for detecting racial bias in the context of police searches of motor vehicles. We characterise the police and motorist decision problems in a game-theoretic framework, where police encounter motorists and decide whether to search them, and motorists decide whether to carry contraband. Our modelling framework generalises that of Knowles et al. (2001). We apply the tests to data on police searches of motor vehicles gathered by the Wichita police department. The empirical findings are consistent with the notion that police in Wichita choose their search strategies to maximise successful searches. [source]
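
The basic outcomes-based (hit rates) test compares the success rates of searches across driver groups: under the unbiased-policing null the rates should be equal, so a simple two-proportion z-test is a natural first pass. The counts below are invented, and the sketch is only the elementary version of the test, not the generalised procedure developed in the article.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical search outcomes for two driver groups: (searches, searches that found contraband).
group_a = (400, 120)   # hit rate 0.30
group_b = (250, 60)    # hit rate 0.24

def hit_rate_z_test(a, b):
    """Two-proportion z-test for equality of search 'hit rates'."""
    n1, x1 = a
    n2, x2 = b
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p1, p2, z, p_value

p1, p2, z, p = hit_rate_z_test(group_a, group_b)
print(f"hit rates: {p1:.2f} vs {p2:.2f}, z = {z:.2f}, two-sided p = {p:.3f}")
```

Equal hit rates are consistent with search strategies chosen to maximise successful searches; systematically lower hit rates for one group are the signature of bias that the generalised test is designed to detect under weaker assumptions.
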