Utility Function

Kinds of Utility Function

  • indirect utility function

  • Selected Abstracts


    E. Jouini
We consider the problem of optimal risk sharing of some given total risk between two economic agents characterized by law-invariant monetary utility functions or equivalently, law-invariant risk measures. We first prove existence of an optimal risk sharing allocation which is in addition increasing in terms of the total risk. We next provide an explicit characterization in the case where both agents' utility functions are comonotone. The general form of the optimal contracts turns out to be given by a sum of options (stop-loss contracts, in the language of insurance) on the total risk. In order to show the robustness of this type of contract to more general utility functions, we introduce a new notion of strict risk aversion conditionally on lower tail events, which is typically satisfied by the semi-deviation and the entropic risk measures. Then, in the context of an AV@R-agent facing an agent with strictly monotone preferences and exhibiting strict risk aversion conditional on lower tail events, we prove that optimal contracts are again European options on the total risk. [source]


Article first published online: 21 DEC 200
KRIS DE JAEGHER
    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are familiar. In this paper we derive utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity and zero own-price elasticity. It is shown how each of these utility functions arises from a simple graphical construction based on a single given indifference curve. Also, it is shown that possessors of such utility functions may be seen as thinking in a particular sense of their utility, and may be seen as using simple rules of thumb to determine their demand. [source]

    Utility Functions for Ceteris Paribus Preferences

    Michael McGeachie
    Ceteris paribus (all-else equal) preference statements concisely represent preferences over outcomes or goals in a way natural to human thinking. Although deduction in a logic of such statements can compare the desirability of specific conditions or goals, many decision-making methods require numerical measures of degrees of desirability. To permit ceteris paribus specifications of preferences while providing quantitative comparisons, we present an algorithm that compiles a set of qualitative ceteris paribus preferences into an ordinal utility function. Our algorithm is complete for a finite universe of binary features. Constructing the utility function can, in the worst case, take time exponential in the number of features, but common independence conditions reduce the computational burden. We present heuristics using utility independence and constraint-based search to obtain efficient utility functions. [source]

    Representing Utility Functions via Weighted Goals

    Joel Uckelman
Abstract We analyze the expressivity, succinctness, and complexity of a family of languages based on weighted propositional formulas for the representation of utility functions. The central idea underlying this form of preference modeling is to associate numerical weights with goals specified in terms of propositional formulas, and to compute the utility value of an alternative as the sum of the weights of the goals it satisfies. We define a large number of representation languages based on this idea, each characterized by a set of restrictions on the syntax of formulas and the range of weights. Our aims are threefold. First, for each language we try to identify the class of utility functions it can express. Second, when different languages can express the same class of utility functions, one may allow for a more succinct representation than another. Therefore, we analyze the relative succinctness of languages. Third, for each language we study the computational complexity of the problem of finding the most preferred alternative given a utility function expressed in that language. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
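The weighted-goals scheme described above is easy to make concrete. A minimal sketch (goal formulas and weights are illustrative, not taken from the paper): the utility of an alternative is the sum of the weights of the propositional goals it satisfies.

```python
def utility(alternative, weighted_goals):
    """alternative: dict mapping propositional variables to bools.
    weighted_goals: list of (formula, weight) pairs, where each formula
    is a predicate over the alternative."""
    return sum(w for formula, w in weighted_goals if formula(alternative))

# Example goals over propositional variables p and q.
goals = [
    (lambda a: a["p"], 5),              # goal "p", weight 5
    (lambda a: a["p"] and a["q"], 3),   # goal "p and q", weight 3
    (lambda a: not a["q"], 2),          # goal "not q", weight 2
]

print(utility({"p": True, "q": True}, goals))   # 5 + 3 = 8
print(utility({"p": True, "q": False}, goals))  # 5 + 2 = 7
```

Restricting the syntax of the formulas (e.g., clauses only, or positive weights only) yields the different representation languages whose expressivity and succinctness the paper compares.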

    Development of an Estimation Procedure for an Activity-Based Travel Demand Model

    W. Recker
    The method uses a genetic algorithm to estimate coefficient values of the utility function, based on a particular multidimensional sequence alignment method to deal with the nominal, discrete attributes of the activity/travel pattern (e.g., which household member performs which activity, which vehicle is used, sequencing of activities), and a time sequence alignment method to handle temporal attributes of the activity pattern (e.g., starting and ending time of each activity and/or travel). The estimation procedure is tested on data drawn from a well-known activity/travel survey. [source]

    Distribution of Aggregate Utility Using Stochastic Elements of Additive Multiattribute Utility Models

    DECISION SCIENCES, Issue 2 2000
    Herbert Moskowitz
ABSTRACT Conventionally, elements of a multiattribute utility model characterizing a decision maker's preferences, such as attribute weights and attribute utilities, are treated as deterministic, which may be unrealistic because assessment of such elements can be imprecise and erroneous, or differ among a group of individuals. Moreover, attempting to make precise assessments can be time consuming and cognitively demanding. We propose to treat such elements as stochastic variables to account for inconsistency and imprecision in such assessments. Under these assumptions, we develop procedures for computing the probability distribution of aggregate utility for an additive multiattribute utility function (MAUF), based on the Edgeworth expansion. When the distributions of aggregate utility for all alternatives in a decision problem are known, stochastic dominance can then be invoked to filter inferior alternatives. We show that, under certain mild conditions, the aggregate utility distribution approaches normality as the number of attributes increases. Thus, only a few terms from the Edgeworth expansion with a standard normal density as the base function will be sufficient for approximating an aggregate utility distribution in practice. Moreover, the more symmetric the attribute utility distributions, the fewer the attributes needed to achieve normality. The Edgeworth expansion thus can provide a basis for a computationally viable approach for representing an aggregate utility distribution with imprecisely specified attribute weight and utility assessments (or differing weights and utilities across individuals). Practical guidelines for using the Edgeworth approximation are given. The proposed methodology is illustrated using a vendor selection problem. [source]
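The stochastic treatment of weights and utilities lends itself to a quick Monte Carlo illustration. This is a sketch under assumed uniform distributions, not the paper's Edgeworth machinery: as the number of attributes grows, the additive aggregate utility concentrates and its distribution symmetrizes toward normality.

```python
import random
import statistics

def aggregate_utility_sample(n_attrs, n_draws=20000, seed=0):
    """Draw stochastic attribute utilities and normalized stochastic weights,
    returning samples of the additive aggregate U = sum_i w_i * u_i."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_draws):
        ws = [rng.random() for _ in range(n_attrs)]
        total = sum(ws)
        ws = [w / total for w in ws]                 # weights sum to 1
        us = [rng.random() for _ in range(n_attrs)]  # imprecise utilities in [0, 1]
        samples.append(sum(w * u for w, u in zip(ws, us)))
    return samples

few = aggregate_utility_sample(2)
many = aggregate_utility_sample(20)
# More attributes: the aggregate utility distribution tightens around its mean.
print(statistics.stdev(few) > statistics.stdev(many))  # True
```

With many attributes the simulated aggregate is close to normal, which is why only a few Edgeworth terms around a standard normal base suffice in practice.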

    Combining Economic and Conjoint Analysis to Determine Optimal Academic Services

    Mona Whitley Howard
    ABSTRACT In today's era of global competition, organizations must manage their functions and activities in a manner such that they are responsive to customers' needs and can provide excellence in service to the customer while also being efficient and cost conscious. These issues are extremely common in corporate organizations, but such concerns are equally relevant in service industries, including institutions of higher education. This study is conducted at a private, undergraduate institution of higher education. We utilize focus group evaluation and conjoint analysis combined with economic analysis in the form of a newly designed preferred utility economic cost diagram to pick the ideal services that should be provided to enrolled students at the institution. The package of ideal services accounts for preferred utility expressed by students and a new methodology (preferred utility function) to balance these against financial considerations to optimize services and financial gains for a college adult education program. This combination of focus groups and mathematical techniques can be easily employed by educational institutes. [source]

    The Malleability of Undiscounted Utilitarianism as a Criterion of Intergenerational Justice

    ECONOMICA, Issue 279 2003
    Geir B. Asheim
    Discounting future utilities is often justified by the ethically motivated objective of protecting earlier generations from the excessive saving that seems to be implied by undiscounted utilitarianism in productive economies. We question this justification of discounting by showing that undiscounted utilitarianism has sufficient malleability within important classes of technologies: any efficient and non-decreasing allocation can be the unique optimum according to an undiscounted utilitarian criterion for some choice of utility function. [source]

    Non-Monotonicity of the Tversky-Kahneman Probability-Weighting Function: A Cautionary Note

    Jonathan Ingersoll
JEL classification: C91; D10; D81; G19
    Abstract Cumulative Prospect Theory has gained a great deal of support as an alternative to Expected Utility Theory as it accounts for a number of anomalies in the observed behavior of economic agents. Expected Utility Theory uses a utility function and subjective or objective probabilities to compare risky prospects. Cumulative Prospect Theory alters both of these aspects. The concave utility function is replaced by a loss-averse utility function and probabilities are replaced by decision weights. The latter are determined with a weighting function applied to the cumulative probability of the outcomes. Several different probability weighting functions have been suggested. The two most popular are the original proposal of Tversky and Kahneman and the compound-invariant form proposed by Prelec. This note shows that the Tversky-Kahneman probability weighting function is not increasing for all parameter values and therefore can assign negative decision weights to some outcomes. This in turn implies that Cumulative Prospect Theory could make choices not consistent with first-order stochastic dominance. [source]
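The Tversky-Kahneman weighting function has the closed form w(p) = p^g / (p^g + (1-p)^g)^(1/g). A quick numerical check (a sketch; the monotonicity cutoff near g ≈ 0.28 is taken from the literature, not derived here) exhibits the non-monotonicity the note warns about:

```python
def tk_weight(p, gamma):
    """Tversky-Kahneman probability weighting:
    w(p) = p^g / (p^g + (1 - p)^g)^(1/g)."""
    num = p ** gamma
    den = (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)
    return num / den

def is_increasing(gamma, steps=10000):
    """Scan a grid on (0, 1) and report whether w is nondecreasing."""
    ps = [i / steps for i in range(1, steps)]
    ws = [tk_weight(p, gamma) for p in ps]
    return all(b >= a for a, b in zip(ws, ws[1:]))

print(is_increasing(0.61))  # True: monotone at the commonly estimated gamma
print(is_increasing(0.2))   # False: w(p) decreases on part of (0, 1)
```

A decreasing stretch of w means adjacent cumulative probabilities can receive a negative decision weight, which is the source of the stochastic-dominance violation.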

    Competitive flow control in general multi-node multi-link communication networks

    Ismet Sahin
    Abstract In this paper, we consider the flow control in a general multi-node multi-link communication network with competing users. Each user has a source node, a destination node, and an existing route for its data flow over any set of links in the network from its source to its destination node. The flow rate for each user is a control variable that is determined by optimizing a user-specific utility function which combines maximizing the flow rate and minimizing the network congestion for that user. A preference parameter in the utility function allows each user to adjust the trade-off between these two objectives. Since all users share the same network resources and are only interested in optimizing their own utility functions, the Nash equilibrium of game theory represents a reasonable solution concept for this multi-user general network. The existence and uniqueness of such an equilibrium is therefore very important for the network to admit an enforceable flow configuration. In this paper, we derive an expression for the Nash equilibrium and prove its uniqueness. We illustrate the results with an example and discuss some properties and observations related to the network performance when in the Nash equilibrium. Copyright © 2007 John Wiley & Sons, Ltd. [source]

    The Phelps-Koopmans theorem and potential optimality

    Debraj Ray
JEL classification: D90; O41
    The Phelps-Koopmans theorem states that if every limit point of a path of capital stocks exceeds the "golden rule," then that path is inefficient: there is another feasible path from the same initial stock that provides at least as much consumption at every date and strictly more consumption at some date. I show that in a model with nonconvex technologies and preferences, the theorem is false in a strong sense. Not only can there be efficient paths with capital stocks forever above and bounded away from a unique golden rule, such paths can also be optimal under the infinite discounted sum of a one-period utility function. The paper makes clear, moreover, that this latter criterion is strictly more demanding than the efficiency of a path. [source]

    On optimal income taxation with heterogeneous work preferences

    Ritva Tarkiainen
JEL classification: C63; H21; H24
    This paper considers the problem of optimal income taxation when individuals are assumed to differ with respect to their earnings potential and work preferences. A numerical method for solving this two-dimensional problem has been developed. We assume an additive utility function, and utilitarian social objectives. Rather than solve the first order conditions associated with the problem, we directly compute the best tax function, which can be written in terms of a second order B-spline function. Our findings show that marginal tax rates are higher than might be anticipated, and that very little bunching occurs at the optimum. Our simulation results show that the correlation between taste for work and productivity has a crucial role in determining the extent of redistribution in our model. [source]

    A game-theoretic model for capacity-constrained fair bandwidth allocation

    Yonghe Yan
Data stream providers face a hard decision to satisfy the requirements of their subscribers. Each user has a minimum and a maximum required bandwidth. The server should be able to decide which requests can be satisfied and how much bandwidth will be allocated to each. We present a theoretical framework and a distributed mechanism for fair bandwidth allocation on a network with various bottleneck links. In our model, a user is guaranteed a minimum bandwidth and charged a price for the bandwidth allocated. A utility function is defined over the allocated bandwidth for a specific maximum requested bandwidth. We then present a non-cooperative game with a social welfare function to resolve users' conflicting bandwidth capacity requests at bottleneck links. We also show that our proposed game-theoretic solution guarantees fair bandwidth allocation as defined in our residual capacity fairness. In order to guarantee the minimum bandwidth requirement, we integrate an admission control mechanism in our solution. However, global optimal admission conditions are not easy to implement for large networks. Therefore, we propose a distributed admission scheme. As a result, the paper presents fair and practical distributed algorithms for bandwidth allocation and admission control in enterprise networks. Our simulation and evaluation study shows that the distributed approach is sufficiently close to the global optimal solution. Copyright © 2008 John Wiley & Sons, Ltd. [source]

    Estimating risk aversion from ascending and sealed-bid auctions: the case of timber auction data

    Jingfeng Lu
    Estimating bidders' risk aversion in auctions is a challenging problem because of identification issues. This paper takes advantage of bidding data from two auction designs to identify nonparametrically the bidders' utility function within a private value framework. In particular, ascending auction data allow one to recover the latent distribution of private values, while first-price sealed-bid auction data allow one to recover the bidders' utility function. This leads to a nonparametric estimator. An application to the US Forest Service timber auctions is proposed. Estimated utility functions display concavity, which can be partly captured by constant relative risk aversion. Copyright © 2008 John Wiley & Sons, Ltd. [source]

    Evaluating animal welfare with choice experiments: an application to Swedish pig production

    Carolina Liljenstolpe
    In this study, the demand for animal welfare attributes when buying pork fillet is investigated among Swedish respondents. The issue is of importance in order to ensure an economically viable pig industry while applying an increasing number of animal friendly practices. In order to obtain information about consumer demand, an indirect utility function and willingness to pay (WTP) for animal welfare attributes are estimated. The attributes are solely associated with animal friendly practices. An investigation of numerous housing and managerial practices of pig production has not yet been performed. The indirect utility function is estimated using a random parameter logit model. A realistic approach when modeling consumer choice is to allow for heterogeneity in preferences. The relevance of assuming randomness of some of the parameters is evaluated by using a specification test developed by McFadden and Train (2000). The WTP is also estimated at the individual level. The results indicate that WTP for animal welfare attributes may be negative or positive. The preferences are also heterogeneous among respondents, which may be explained by a segmentation of preferences. Finally, the WTP estimates for animal welfare practices are compared with cost estimates for such production systems. [Econlit subject codes: C010, C500, Q100] © 2008 Wiley Periodicals, Inc. [source]

    Using a heterogeneous multinomial probit model with a neural net extension to model brand choice

    Harald Hruschka
Abstract The multinomial probit model introduced here combines heterogeneity across households with flexibility of the (deterministic) utility function. To achieve flexibility, deterministic utility is approximated by a neural net of the multilayer perceptron type. A Markov Chain Monte Carlo method serves to estimate heterogeneous multinomial probit models which fulfill economic restrictions on signs of (marginal) effects of predictors (e.g., negative for price). For empirical choice data the heterogeneous multinomial probit model extended by a multilayer perceptron clearly outperforms all the other models studied. Moreover, replacing homogeneous by heterogeneous reference price mechanisms and thus allowing price expectations to be formed differently across households also leads to better model performance. Mean utility differences and mean elasticities w.r.t. price and price deviation from reference price demonstrate that models with linear utility and nonlinear utility approximated by a multilayer perceptron lead to very different implications for managerial decision making. Copyright © 2007 John Wiley & Sons, Ltd. [source]

    Utility transversality: a value-based approach

    James E. Matheson
    Abstract We examine multiattribute decision problems where a value function is specified over the attributes of a decision problem, as is typically done in the deterministic phase of a decision analysis. When uncertainty is present, a utility function is assigned over the value function to represent the decision maker's risk attitude towards value, which we refer to as a value-based approach. A fundamental result of using the value-based approach is a closed form expression that relates the risk aversion functions of the individual attributes to the trade-off functions between them. We call this relation utility transversality. The utility transversality relation asserts that once the value function is specified there is only one dimension of risk attitude in multiattribute decision problems. The construction of multiattribute utility functions using the value-based approach provides the flexibility to model more general functional forms that do not require assumptions of utility independence. For example, we derive a new family of multiattribute utility functions that describes richer preference structures than the usual multilinear family. We also show that many classical results of utility theory, such as risk sharing and the notion of a corporate risk tolerance, can be derived simply from the utility transversality relations by appropriate choice of the value function. Copyright © 2007 John Wiley & Sons, Ltd. [source]

    Credit cards scoring with quadratic utility functions

    Vladimir Bugera
Abstract The paper considers a general approach for classifying objects using mathematical programming algorithms. The approach is based on optimizing a utility function, which is quadratic in indicator parameters and is linear in control parameters (which need to be identified). Qualitative characteristics of the utility function, such as monotonicity in some variables, are included using additional constraints. The methodology was tested with a 'credit cards scoring' problem. Credit scoring is a way of separating specific subgroups in a population of objects (such as applications for credit), which have significantly different credit risk characteristics. A new feature of our approach is incorporating expert judgments in the model. For instance, the following preference was included with an additional constraint: 'give more preference to customers with higher incomes.' Numerical experiments showed that including constraints based on expert judgments improves the performance of the algorithm. Copyright © 2003 John Wiley & Sons, Ltd. [source]

    Government and the Reverse-Holdup Problem

    When the government bargains with a private firm, the firm cares about only its own profits, but the firm's profits may also enter into the government's utility function. As a result, the government will not bargain as aggressively for a low price. This can lead the government to "over pay" for quality. In contrast to the standard holdup problem, this reverse-holdup problem can give the firm an incentive to overinvest in non-contractible quality. The paper also discusses some examples where the reverse-holdup problem may explain excessive quality in government procurement. [source]

    Doing Wonders with an Egg: Optimal Re-distribution When Households Differ in Market and Non-Market Abilities

    Alessandro Balestrino
    The paper studies non-linear income taxation and linear commodity taxation in a household production context with households differentiated by market and non-market ability. In such a setting, there is an efficiency motive for re-distribution which is independent from the usual equity motive, and operates also when the social planner is indifferent to utility inequality. As a consequence, some of the policy prescriptions applicable to the case in which households differ in market ability only do not hold when households differ also in non-market ability. For instance, re-distribution is not necessarily from high- to low-wage households, and it is not necessarily true that the marginal rate of income tax should be zero for high incomes and positive for low incomes. In some cases, re-distribution may accentuate rather than lessen utility inequality, and can reverse the direction of income inequality relative to the laissez-faire equilibrium. Furthermore, contrary to Atkinson-Stiglitz, it may be optimal to use indirect and direct taxation simultaneously even when the utility function is separable in commodities and labour. [source]

    The Role of Family Ties in the Labour Market.

    LABOUR, Issue 4 2001
    An Interpretation Based on Efficiency Wage Theory
By casual empiricism, it seems that many firms take explicit account of the family ties connecting workers, often hiring individuals belonging to the same family or passing jobs on from parents to their children. This paper makes an attempt to explain this behaviour by introducing the assumption of altruism within the family and supposing that agents maximize a family utility function rather than an individual one. This hypothesis has been almost ignored in the analysis of the relationship between employers and employees. The implications of this assumption in the efficiency wage models are explored: by employing members of the same family, firms can use a (credible) harsher threat, involving a sanction for all the family's members in case of one member's shirking, that allows them to pay a lower efficiency wage. On the other hand, workers who accept this agreement exchange a reduction in wage for an increase in their probability of being employed: this can be optimal in a situation of high unemployment. Moreover, the link between parents and children allows the firm to follow a strategy that solves the problem of an individual's finite time horizon by making use of the family's reputation. [source]

    Trade Union Preferences in Double Dividend Models

    LABOUR, Issue 3 2001
    Torsten Sløk
This paper analyses wage formation in a unionized economy where consumers' utility functions include the level of local pollution as an externality. If modelled in a microeconomically consistent way, this externality should also be present in the preferences of the trade union. The key result is that when this trade-off between pollution and employment is included in the trade unions' preferences, they are willing to lower wages to generate substitution towards higher employment and lower pollution. As a consequence, an increase in the pollution tax will lead to lower wages. At a more general level, the results show that in models analyzing pollution issues, such as the double dividend literature, how trade unions are introduced is very important for the policy conclusions. [source]

    The informational content of the shape of utility functions: financial strategic behavior

    Joost M.E. Pennings
Recently, Pennings and Smidts (2003) showed a relationship between organizational behavior and the global shape of the utility function. Their results suggest that the shape of the utility function may be related to 'higher-order' decisions. This research examines the relationship between financial strategic decisions and the global shape of the utility function of real decision makers. We assess the shape of utility functions of portfolio managers and show that the global shape is related to their strategic asset allocation. The findings demonstrate the informational content of the shape of utility functions in the context of financial strategic behavior. Copyright © 2008 John Wiley & Sons, Ltd. [source]

    Pricing training and development programs using stochastic CVP analysis

    James A. Yunker
This paper sets forth, analyzes and applies a stochastic cost-volume-profit (CVP) model specifically geared toward the determination of enrollment fees for training and development (T+D) programs. It is a simpler model than many of those developed in the research literature, but it does incorporate one advanced component: an 'economic' demand function relating the expected sales level to price. Price is neither a constant nor a random variable in this model but rather the decision-maker's basic control variable. The simplicity of the model permits analytical solutions for five 'special prices': (1) the highest price which sets breakeven probability equal to a minimum acceptable level; (2) the price which maximizes expected profits; (3) the price which maximizes a Cobb-Douglas utility function based on expected profits and breakeven probability; (4) the price which maximizes breakeven probability; and (5) the lowest price which sets breakeven probability equal to a minimum acceptable level. The model is applied to data provided by the Center for Management and Professional Development at the authors' university. The results suggest that there could be a significant payoff to fine-tuning a T+D provider's pricing strategy using formal analysis. Copyright © 2005 John Wiley & Sons, Ltd. [source]
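The kind of closed form the abstract mentions is easy to illustrate. In this hedged sketch (the demand specification, parameter names, and numbers are assumptions for illustration, not the paper's data), expected demand is linear in price with additive normal noise, so the expected-profit-maximizing price has a closed form and breakeven probability follows from the normal CDF.

```python
import math

# Assumed model: demand q(p) = a - b*p + eps, eps ~ N(0, sigma^2);
# profit = (p - c)*q - F with unit cost c and fixed cost F.
a, b, c, F, sigma = 400.0, 2.0, 20.0, 5000.0, 30.0

def expected_profit(p):
    return (p - c) * (a - b * p) - F

def breakeven_probability(p):
    # P(profit >= 0) = P(q >= F / (p - c)) for p > c
    threshold = F / (p - c)
    z = (threshold - (a - b * p)) / sigma
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Closed form for the expected-profit-maximizing price: p* = (a + b*c) / (2b).
p_star = (a + b * c) / (2.0 * b)
print(round(p_star, 2))  # 110.0

# A grid search over admissible prices recovers the same optimum.
grid = [c + 1 + i * 0.5 for i in range(400)]
best = max(grid, key=expected_profit)
print(abs(best - p_star) < 0.5)  # True
```

The other 'special prices' (e.g., the highest and lowest prices meeting a minimum breakeven probability) come from solving breakeven_probability(p) = target, which in this setup reduces to a quadratic in p.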


    Virginia R. Young
    We apply the principle of equivalent utility to calculate the indifference price of the writer of a contingent claim in an incomplete market. To recognize the long-term nature of many such claims, we allow the short rate to be random in such a way that the term structure is affine. We also consider a general diffusion process for the risky stock (index) in our market. In a complete market setting, the resulting indifference price is the same as the one obtained by no-arbitrage arguments. We also show how to compute indifference prices for two types of contingent claims in an incomplete market, in the case for which the utility function is exponential. The first is a catastrophe risk bond that pays a fixed amount at a given time if a catastrophe does not occur before that time. The second is equity-indexed term life insurance which pays a death benefit that is a function of the short rate and stock price at the random time of the death of the insured. Because we assume that the occurrence of the catastrophe or the death of the insured is independent of the financial market, the markets for the catastrophe risk bond and the equity-indexed life insurance are incomplete. [source]
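For the exponential-utility case the abstract singles out, and for a claim independent of the financial market, the static, zero-interest reduction of the indifference price is the classical entropic premium h = (1/alpha) ln E[exp(alpha C)]. A toy sketch (the claim distribution and risk aversion alpha are illustrative assumptions; the paper's full model also handles a stochastic short rate and a diffusion for the stock):

```python
import math

def entropic_price(payouts_probs, alpha):
    """Exponential-utility (entropic) indifference price of a claim C.
    payouts_probs: list of (payout, probability) pairs for C."""
    mgf = sum(p * math.exp(alpha * c) for c, p in payouts_probs)
    return math.log(mgf) / alpha

# Toy catastrophe-linked claim: pays 0 with prob 0.95, 100 with prob 0.05.
claim = [(0.0, 0.95), (100.0, 0.05)]
print(sum(c * p for c, p in claim))           # expected payout: 5.0
print(entropic_price(claim, 0.01) > 5.0)      # True: the writer adds a risk loading
```

As alpha tends to zero the entropic price converges to the expected payout, recovering risk-neutral pricing of the independent risk.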

    A Dynamic Investment Model with Control on the Portfolio's Worst Case Outcome

    Yonggan Zhao
    This paper considers a portfolio problem with control on downside losses. Incorporating the worst-case portfolio outcome in the objective function, the optimal policy is equivalent to the hedging portfolio of a European option on a dynamic mutual fund that can be replicated by market primary assets. Applying the Black-Scholes formula, a closed-form solution is obtained when the utility function is HARA and asset prices follow a multivariate geometric Brownian motion. The analysis provides a useful method of converting an investment problem to an option pricing model. [source]
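The closed-form solution referred to above rests on the Black-Scholes call price. A standard implementation (parameter values purely illustrative; the paper applies the formula to an option on a dynamic mutual fund):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """European call on an asset following geometric Brownian motion:
    S spot, K strike, T maturity in years, r risk-free rate, sigma volatility."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

price = black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)
print(round(price, 2))  # 10.45
```

In the paper's setting the underlying is a replicable dynamic fund and the strike is set by the worst-case floor, so the optimal policy is the hedge of exactly such an option.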

    Optimal Dynamic Portfolio Selection: Multiperiod Mean-Variance Formulation

    Duan Li
    The mean-variance formulation by Markowitz in the 1950s paved a foundation for modern portfolio selection analysis in a single period. This paper considers an analytical optimal solution to the mean-variance formulation in multiperiod portfolio selection. Specifically, analytical optimal portfolio policy and analytical expression of the mean-variance efficient frontier are derived in this paper for the multiperiod mean-variance formulation. An efficient algorithm is also proposed for finding an optimal portfolio policy to maximize a utility function of the expected value and the variance of the terminal wealth. [source]
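The single-period Markowitz building block that the multiperiod formulation extends can be sketched directly (all numbers illustrative): maximizing mu'w - (lam/2) w'Sigma w over portfolio weights w gives the closed form w* = (1/lam) Sigma^(-1) mu.

```python
mu = [0.08, 0.12]                      # expected excess returns (assumed)
cov = [[0.04, 0.01], [0.01, 0.09]]     # return covariance matrix (assumed)
lam = 4.0                              # risk-aversion trade-off parameter

# 2x2 matrix inverse by hand: inv = (1/det) * [[d, -b], [-c, a]]
det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
inv = [[cov[1][1] / det, -cov[0][1] / det],
       [-cov[1][0] / det, cov[0][0] / det]]

# Optimal weights w* = (1/lam) * Sigma^(-1) * mu
w = [(inv[0][0] * mu[0] + inv[0][1] * mu[1]) / lam,
     (inv[1][0] * mu[0] + inv[1][1] * mu[1]) / lam]
print([round(x, 4) for x in w])  # [0.4286, 0.2857]

# First-order condition check: mu - lam * Sigma w should vanish.
residual = [mu[i] - lam * (cov[i][0] * w[0] + cov[i][1] * w[1]) for i in range(2)]
print(max(abs(r) for r in residual) < 1e-12)  # True
```

The multiperiod result in the paper shows that an analytical policy of the same flavor survives when the variance of terminal wealth, rather than per-period variance, is the objective.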


    METROECONOMICA, Issue 2 2005
    Thomas Paulsson
    ABSTRACT We show that, if an individual's utility function exhibits a degree of relative temperance smaller than one, the individual will react, in a plausible way, to each of three common shifts in the stochastic distribution of his wealth, namely to FSD shifts, mean-preserving spreads and increases in downside risk. First, we derive, in a unified setting, necessary and sufficient conditions for signing the comparative-static effects of each of these shifts separately, and, second, we invoke implications of the property of mixed risk aversion to merge these separate conditions into a single sufficient condition for jointly signing all comparative-static effects. [source]
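For reference, the relative coefficients involved can be written down explicitly. These are textbook definitions, not reproduced from the paper:

```latex
% For a utility function u of wealth w:
\[
R(w) = -\frac{w\,u''(w)}{u'(w)}, \qquad
P(w) = -\frac{w\,u'''(w)}{u''(w)}, \qquad
T(w) = -\frac{w\,u''''(w)}{u'''(w)}
\]
% R is relative risk aversion, P relative prudence, and T relative temperance;
% the abstract's hypothesis is the condition T(w) < 1.
```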

    Consumption Externalities, Production Externalities and Indeterminacy

    METROECONOMICA, Issue 4 2000
    Mark Weder
In this paper we show that consumption externalities reduce the degree of increasing returns needed to generate indeterminacy in a two-sector optimal growth model. In equilibrium, consumption externalities operate as if the utility function is (close to) linear. If these externalities are strong, the minimum necessary increasing returns approach zero. Therefore, this paper, in a stylized fashion, provides an example of how microbehavior, i.e. interactions at the household level, can generate aggregate instability. Consumption externalities also help to eliminate the counterfactual cyclical behavior of consumption in the two-sector model. [source]