Time Horizon
Kinds of Time Horizon: Selected Abstracts

Pricing Loans Using Default Probabilities
ECONOMIC NOTES, Issue 2 2003
Stuart M. Turnbull

This paper examines the pricing of loans using the term structure of the probability of default over the life of the loan. We describe two methodologies for pricing loans. The first methodology uses the term structure of credit spreads to price a loan, after adjusting for the difference in recovery rates between bonds and loans. In loan origination, it is common practice to estimate the probability of default for a loan over a specified time horizon and the loss given default. The second methodology shows how to incorporate this information into the arbitrage-free pricing of a loan. We also show how to derive an estimate of the credit spread due to liquidity risk. For both methodologies, we show how to calculate a break-even credit spread, taking into account the fee structure of a loan and the costs associated with the term structure of marginal economic capital. The break-even spread is the minimum spread for the loan to be EVA neutral in a multi-period setting. (J.E.L.: G12, G33). [source]
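A stylized sketch of the kind of calculation the second methodology builds on: given a term structure of default probabilities and a loss given default, the break-even spread equates the present value of spread income on the surviving notional with the present value of expected credit losses. All figures are hypothetical, and the single-curve discrete model below is far simpler than the paper's arbitrage-free setting.

```python
import numpy as np

# Hypothetical inputs: annual conditional default probabilities on a 5-year loan.
hazard = np.array([0.010, 0.012, 0.015, 0.018, 0.020])
lgd = 0.35          # loss given default for loans (lower than for bonds)
r = 0.04            # flat risk-free rate used for discounting

years = np.arange(1, len(hazard) + 1)
surv = np.cumprod(1.0 - hazard)               # S(t): probability of surviving to t
surv_prev = np.concatenate(([1.0], surv[:-1]))
default_prob = surv_prev * hazard             # unconditional default prob. in year t
disc = (1.0 + r) ** -years                    # discount factors

# Break-even spread: PV of spread income on the surviving notional
# equals PV of expected credit losses.
loss_leg = np.sum(disc * default_prob * lgd)
premium_leg = np.sum(disc * surv)             # PV of a unit spread annuity
print(f"break-even credit spread: {loss_leg / premium_leg:.4%}")
```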
The Dependence of Growth-Model Results on Proficiency Cut Scores
EDUCATIONAL MEASUREMENT: ISSUES AND PRACTICE, Issue 4 2009
Andrew D. Ho

States participating in the Growth Model Pilot Program reference individual student growth against "proficiency" cut scores that conform with the original No Child Left Behind Act (NCLB). Although achievement results from conventional NCLB models are also cut-score dependent, the functional relationships between cut-score location and growth results are more complex and are not currently well described. We apply cut-score scenarios to longitudinal data to demonstrate the dependence of state- and school-level growth results on cut-score choice. This dependence is examined along three dimensions: (1) rigor, as states set cut scores largely at their discretion; (2) across-grade articulation, as the rigor of proficiency standards may vary across grades; and (3) the time horizon chosen for growth to proficiency. Results show that the selection of plausible alternative cut scores within a growth model can change the percentage of students "on track to proficiency" by more than 20 percentage points and reverse accountability decisions for more than 40% of schools. We contribute a framework for predicting these dependencies, and we argue that the cut-score dependence of large-scale growth statistics must be made transparent, particularly for comparisons of growth results across states. [source]

Whither trial-based economic evaluation for health care decision making?
HEALTH ECONOMICS, Issue 7 2006
Mark J. Sculpher

The randomised controlled trial (RCT) has developed a central role in applied cost-effectiveness studies in health care as the vehicle for analysis. This paper considers the role of trial-based economic evaluation in this era of explicit decision making. It is argued that any framework for economic analysis can only be judged insofar as it can inform two key decisions and be consistent with the objectives of a health care system subject to its resource constraints. The two decisions are, firstly, whether to adopt a health technology given existing evidence and, secondly, an assessment of whether more evidence is required to support this decision in the future. It is argued that a framework of economic analysis is needed which can estimate costs and effects, based on all the available evidence, relating to the full range of possible alternative interventions and clinical strategies, over an appropriate time horizon and for specific patient groups. It must also enable the accumulated evidence to be synthesised in an explicit and transparent way in order to fully represent the decision uncertainty. These requirements suggest that, in most circumstances, the use of a single RCT as a vehicle for economic analysis will be an inadequate and partial basis for decision making. It is argued that RCT evidence, with or without economic content, should be viewed as simply one of the sources of evidence, which must be placed in a broader framework of evidence synthesis and decision analysis. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Downscaling of global climate models for flood frequency analysis: where are we now?
HYDROLOGICAL PROCESSES, Issue 6 2002
Christel Prudhomme

The issues of downscaling the results from global climate models (GCMs) to a scale relevant for hydrological impact studies are examined. GCM outputs, typically at a spatial resolution of around 3° latitude and 4° longitude, are currently not considered reliable at time scales shorter than 1 month. Continuous rainfall-runoff modelling for flood regime assessment requires input at the daily or even hourly time-step. A review of the different methodologies suggested in the literature to downscale GCM results at smaller spatial and temporal resolutions is presented. The methods, from simple interpolation to more sophisticated dynamical modelling, through multiple regression and weather generators, are, however, mostly based directly on GCM outputs, sometimes at daily time-step. The approach adopted here is a simple, empirical methodology based on modelled monthly changes from the HadCM2 greenhouse gases experiment for the time horizon of the 2050s. Three daily rainfall scenarios are derived from the same set of monthly changes, representing different possible changes in the rainfall regime. The first scenario represents an increase in the occurrence of frontal systems, corresponding to a decrease in rainfall intensity; the second corresponds to an increase in convective storm-type rainfall, characterized by extreme events with higher intensity; the third assumes an increase in monthly rainfall without any change in rainfall variability. A continuous daily rainfall-runoff model, calibrated for the Severn catchment, was used to generate daily flow series for the 1961–90 baseline period and the 2050s, and a peaks-over-threshold analysis was undertaken to produce flood frequency distributions for the two time horizons. Though the three scenarios lead to an increase in the magnitude and the frequency of extreme flood events, the impact is strongly influenced by the type of daily rainfall scenario applied. We conclude that if the next generation of GCMs produce more reliable rainfall variance estimates, then more appropriate ways of deriving rainfall scenarios could be developed using weather generators rather than empirical methods. Copyright © 2002 John Wiley & Sons, Ltd. [source]
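The peaks-over-threshold step lends itself to a compact illustration. The sketch below extracts independent flood peaks from two synthetic daily flow series and compares event rates between a baseline and a perturbed period; the gamma-distributed flows and the simple declustering rule are assumptions of this sketch, not the calibrated Severn model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 30-year daily flow series standing in for the 1961-90 baseline
# and a wetter 2050s-type scenario.
baseline = rng.gamma(shape=2.0, scale=10.0, size=30 * 365)
scenario = rng.gamma(shape=2.0, scale=11.5, size=30 * 365)

def pot_events(flows, threshold, min_gap=7):
    """Peaks-over-threshold with simple declustering: exceedances closer
    than `min_gap` days form one event; keep each cluster's maximum."""
    idx = np.flatnonzero(flows > threshold)
    peaks, cluster = [], [idx[0]]
    for i in idx[1:]:
        if i - cluster[-1] < min_gap:
            cluster.append(i)
        else:
            peaks.append(flows[cluster].max())
            cluster = [i]
    peaks.append(flows[cluster].max())
    return np.array(peaks)

thr = np.quantile(baseline, 0.99)   # common threshold for both periods
for name, series in [("baseline", baseline), ("2050s", scenario)]:
    peaks = pot_events(series, thr)
    print(f"{name}: {len(peaks) / 30:.2f} events/yr, mean peak {peaks.mean():.1f}")
```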
Directional leakage and parameter drift
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 1 2006
M. Hovd

A new method for eliminating parameter drift in parameter estimation problems is proposed. Existing methods for eliminating parameter drift either work only on a limited time horizon, restrict the parameter estimates to a range that has to be determined a priori, or introduce bias in the parameter estimates, which will degrade steady-state performance. The idea of the new method is to apply leakage only in the directions in parameter space in which the exciting signal is not informative. This avoids the problem of parameter bias associated with conventional leakage. Copyright © 2005 John Wiley & Sons, Ltd. [source]
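A minimal sketch of the idea, under assumptions of my own (a two-parameter LMS-type estimator and an eigenvalue test on a running excitation matrix; the paper's actual algorithm and thresholds may differ): leakage is applied only along eigendirections of the regressor covariance that carry almost no excitation, so the informative direction converges without bias while the unexcited one cannot drift.

```python
import numpy as np

rng = np.random.default_rng(1)

# System: y = phi @ theta + noise, with phi persistently exciting in only
# one direction (the second regressor is nearly zero).
theta_true = np.array([2.0, -1.0])
n, mu, sigma = 5000, 0.05, 0.02          # steps, LMS gain, leakage gain
theta, R = np.zeros(2), np.zeros((2, 2))

for k in range(n):
    phi = np.array([np.sin(0.1 * k), 1e-3 * rng.standard_normal()])
    y = phi @ theta_true + 0.01 * rng.standard_normal()
    e = y - phi @ theta
    theta += mu * e * phi                       # gradient update
    R = 0.99 * R + 0.01 * np.outer(phi, phi)    # running excitation matrix
    # Directional leakage: pull the estimate toward zero only along
    # eigendirections of R with (near) zero excitation.
    w, V = np.linalg.eigh(R)
    weak = V[:, w < 1e-4]                       # basis of the unexcited subspace
    theta -= sigma * weak @ (weak.T @ theta)

print("estimate:", theta)  # first component near 2, second held near 0
```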
Glucosamine sulphate in the treatment of knee osteoarthritis: cost-effectiveness comparison with paracetamol
INTERNATIONAL JOURNAL OF CLINICAL PRACTICE, Issue 6 2010
S. Scholtissen

Introduction: The aim of this study was to explore the cost-effectiveness of glucosamine sulphate (GS) compared with paracetamol and placebo (PBO) in the treatment of knee osteoarthritis. For this purpose, a 6-month time horizon and a health care perspective were used. Material and methods: The cost and effectiveness data were derived from Western Ontario and McMaster Universities Osteoarthritis Index data of the Glucosamine Unum In Die (once-a-day) Efficacy trial by Herrero-Beaumont et al. Clinical effectiveness was converted into utility scores to allow for the computation of cost per quality-adjusted life year (QALY). For the three treatment arms, incremental cost-effectiveness ratios (ICERs) were calculated, and statistical uncertainty was explored using a bootstrap simulation. Results: In terms of mean utility score at baseline, 3 and 6 months, no statistically significant difference was observed between the three groups. When considering the mean utility score changes from baseline to 3 and 6 months, no difference was observed in the first case, but there was a statistically significant difference from baseline to 6 months with a p-value of 0.047. When comparing GS with paracetamol, the mean baseline ICER was dominant, and the mean ICER after bootstrapping was −1376 €/QALY, indicating dominance (with 79% probability). When comparing GS with PBO, the mean baseline and after-bootstrapping ICERs were 3617.47 and 4285 €/QALY, respectively. Conclusion: The results of the present cost-effectiveness analysis suggest that GS is a highly cost-effective therapy alternative compared with paracetamol and PBO to treat patients diagnosed with primary knee OA. [source]
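The ICER-plus-bootstrap procedure is compact enough to sketch. The following uses invented per-patient costs and utility gains (not the trial data; the numbers are chosen only so the output behaves roughly like the reported GS-versus-paracetamol comparison) to show how the incremental ratio and the probability of dominance are typically computed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented per-patient 6-month costs (EUR) and utility gains for two arms.
n = 120
cost_gs, qaly_gs = rng.normal(235, 40, n), rng.normal(0.040, 0.015, n)
cost_pct, qaly_pct = rng.normal(240, 40, n), rng.normal(0.035, 0.015, n)

# Non-parametric bootstrap of incremental cost and incremental effect.
dc, dq = [], []
for _ in range(5000):
    i, j = rng.integers(0, n, n), rng.integers(0, n, n)
    dc.append(cost_gs[i].mean() - cost_pct[j].mean())
    dq.append(qaly_gs[i].mean() - qaly_pct[j].mean())
dc, dq = np.array(dc), np.array(dq)

print(f"mean ICER: {dc.mean() / dq.mean():,.0f} EUR/QALY")
# GS 'dominates' in a replicate when it is cheaper *and* more effective,
# the event whose frequency the abstract reports as 79%.
print(f"P(dominance): {np.mean((dc < 0) & (dq > 0)):.0%}")
```

A negative mean ICER here signals dominance (lower cost, higher effect), which is why the abstract reports it alongside a probability rather than as a conventional cost-per-QALY figure.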
Chartism and exchange rate volatility
INTERNATIONAL JOURNAL OF FINANCE & ECONOMICS, Issue 3 2007
Mikael Bask

The purpose of this paper is to implement theoretically the observation that the relative importance of fundamental versus technical analysis in the foreign exchange market depends on the time horizon in currency trade. For shorter time horizons, more weight is placed on technical analysis, while more weight is placed on fundamental analysis for longer horizons. The theoretical framework is the Dornbusch overshooting model, where the moving average is the technical trading technique used by the chartists. The perfect foresight path near long-run equilibrium is derived, and it is shown that the magnitude of exchange rate overshooting is larger than in the Dornbusch model. Specifically, the extent of overshooting depends inversely on the time horizon in currency trade. How changes in the model's structural parameters endogenously affect this time horizon and the magnitude of overshooting along the perfect foresight path are also derived. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Managing interdisciplinary health research: theoretical and practical aspects
INTERNATIONAL JOURNAL OF HEALTH PLANNING AND MANAGEMENT, Issue 3 2002
Jens Aagaard-Hansen

Interdisciplinary health research can offer valuable evidence for health care managers. However, there are specific challenges regarding the management of such projects. Based on 7 years of experience from a project in western Kenya, the authors point to the need for a sufficient time horizon, a high level of communication, equity between the disciplines and the identification of appropriate evaluation criteria as issues to be considered. The theoretical framework of Rosenfield was modified to comply with the complexities of field management. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Signal reconstruction in the presence of finite-rate measurements: finite-horizon control applications
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 1 2010
Sridevi V. Sarma

In this paper, we study finite-length signal reconstruction over a finite-rate noiseless channel. We allow the class of signals to belong to a bounded ellipsoid and derive a universal lower bound on the worst-case reconstruction error. We then compute upper bounds on the error that arise from different coding schemes and under different causality assumptions. When the encoder and decoder are noncausal, we derive an upper bound that either achieves the universal lower bound or is comparable to it. When the decoder and encoder are both causal operators, we show that within a very broad class of causal coding schemes, memoryless coding prevails as optimal, imposing a hard limitation on reconstruction. Finally, we map our general reconstruction problem into two important control problems in which the plant and controller are local to each other, but are together driven by a remote reference signal that is transmitted through a finite-rate noiseless channel. The first problem is to minimize a finite-horizon weighted tracking error between the remote system output and a reference command. The second problem is to navigate the state of the remote system from a nonzero initial condition to as close to the origin as possible in finite time. Our analysis enables us to quantify the tradeoff between time horizon and performance accuracy, which is not well studied in the area of control with limited information, as most works address infinite-horizon control objectives (e.g. stability, disturbance rejection). Copyright © 2009 John Wiley & Sons, Ltd. [source]
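As a toy illustration of memoryless coding at a finite rate (my own example, not the paper's ellipsoidal setting): a uniform quantizer on a bounded scalar signal has a worst-case per-sample reconstruction error fixed by the rate alone, halving with each additional bit.

```python
import numpy as np

# Memoryless uniform quantization of a signal bounded in [-1, 1]:
# R bits per sample -> 2**R cells of width 2 / 2**R, mid-point decoding.
def quantize(x, bits):
    levels = 2 ** bits
    step = 2.0 / levels
    k = np.clip(np.floor((x + 1.0) / step), 0, levels - 1)
    return -1.0 + (k + 0.5) * step      # reconstruction at cell mid-points

rng = np.random.default_rng(3)
x = np.clip(rng.standard_normal(10_000) / 3, -1, 1)  # bounded test signal
for bits in (2, 4, 6, 8):
    err = np.abs(x - quantize(x, bits)).max()
    # Worst-case error is step/2 = 1/2**bits, regardless of the signal.
    print(f"{bits} bits: worst-case error {err:.4f} (bound {2.0**-bits:.4f})")
```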
Regulated Quality Diffusion Revisited
INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 4 2001
Andreas J. Novak

In many manufacturing applications, the regulation of a quality process to attain a given target is of central interest. In a recent paper, Liu and Nam (1999) present a model of optimal quality regulation. The underlying quality process evolves due to regulation actions and superimposed random disturbances. Optimal regulation is sought so as to minimize the regulation costs and the mean squared deviation from the desired target over a finite time horizon. Unfortunately, the model is incorrectly analyzed in Liu and Nam (1999), and we therefore present the correct results in the following paper. [source]

Two Job Cyclic Scheduling With Incompatibility Constraints
INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 2 2001
Piero Persi

The present paper deals with the problem of scheduling several repeated occurrences of two jobs over a finite or infinite time horizon in order to maximize the yielded profit. The constraints of the problem are the incompatibilities between some pairs of tasks which require the same resource. [source]

Reducing risk of shortages due to drought in water supply systems using genetic algorithms
IRRIGATION AND DRAINAGE, Issue 2 2009
V. Nicolosi

Keywords: risk assessment; water management; drought; triggers for drought plans

The evaluation of risk of shortages due to drought in water supply systems is a necessary step both in the planning and in the operation stage. A methodology for unconditional (planning) and conditional (operation) risk evaluation is presented in this study. The risk evaluation is carried out by means of an optimisation model based on genetic algorithms, aimed to define thresholds for the implementation of mitigation measures, tested through Monte Carlo simulation that makes use of a stochastic generation of streamflows. The GA enables the optimisation of reservoir storages which identify monthly thresholds for shifting between three states of the system (normal, alert and alarm), to which correspond different mitigation measures such as water demand rationing, additional supplies from alternative sources or reduction of release for ecological use. For unconditional risk evaluation a long time horizon has been considered (40 years), while the conditional risk evaluation is performed on a short time horizon (2–3 months). Results of simulations have been studied by means of consolidated indices of performance and frequency analysis of shortages of a given entity corresponding to different planning/management policies. A multi-use water system has been used as a case study, including competing irrigation and industrial demands. Copyright © 2008 John Wiley & Sons, Ltd. [source]
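The Monte Carlo step that sits inside such a GA is easy to sketch. Below, fixed alert/alarm storage thresholds trigger demand rationing, and shortage frequency is estimated over stochastic inflow realizations; the simple mass balance, the inflow statistics and every number are assumptions for illustration (the GA would then search over the thresholds themselves).

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo shortage-risk evaluation for one candidate threshold pair.
capacity, demand = 120.0, 10.0            # storage and monthly demand
alert, alarm = 50.0, 25.0                 # candidate thresholds
ration = {"normal": 1.0, "alert": 0.8, "alarm": 0.6}   # demand multipliers

def shortage_months(months=24, runs=2000):
    total = 0
    for _ in range(runs):
        s = 80.0                          # initial storage
        for _ in range(months):
            inflow = max(rng.normal(9.0, 4.0), 0.0)    # stochastic inflow
            state = ("normal" if s > alert
                     else "alert" if s > alarm else "alarm")
            s += inflow - demand * ration[state]       # mitigation applied
            if s < 0:                     # demand not met this month
                total += 1
                s = 0.0
            s = min(s, capacity)
    return total / runs

print(f"expected shortage months per 24-month horizon: {shortage_months():.2f}")
```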
Identifying and Attracting the "right" Investors: Evidence on the Behavior of Institutional Investors
JOURNAL OF APPLIED CORPORATE FINANCE, Issue 4 2004
Brian Bushee

This article summarizes the findings of research the author has conducted over the past seven years that aims to answer a number of questions about institutional investors: Are there significant differences among institutional investors in time horizon and other trading practices that would enable such investors to be classified into types on the basis of their observable behavior? Assuming the answer to the first is yes, do corporate managers respond differently to the pressures created by different types of investors and, by implication, are certain kinds of investors more desirable from corporate management's point of view? What kinds of companies tend to attract each type of investor, and how does a company's disclosure policy affect that process? The author's approach identifies three categories of institutional investors: (1) "transient" institutions, which exhibit high portfolio turnover and own small stakes in portfolio companies; (2) "dedicated" holders, which provide stable ownership and take large positions in individual firms; and (3) "quasi-indexers," which also trade infrequently but own small stakes (similar to an index strategy). As might be expected, the disproportionate presence of transient institutions in a company's investor base appears to intensify pressure for short-term performance while also resulting in excess volatility in the stock price. Also not surprising, transient investors are attracted to companies with investor relations activities geared toward forward-looking information and "news events," like management earnings forecasts, that constitute trading opportunities for such investors. By contrast, quasi-indexers and dedicated institutions are largely insensitive to short-term performance, and their presence is associated with lower stock price volatility. The research also suggests that companies that focus their disclosure activities on historical information as opposed to earnings forecasts tend to attract quasi-indexers instead of transient investors. In sum, the author's research suggests that changes in disclosure practices have the potential to shift the composition of a firm's investor base away from transient investors and toward more patient capital. By removing some of the external pressures for short-term performance, such a shift could encourage managers to establish a culture based on long-run value maximization. [source]

An assessment of the EU growth forecasts under asymmetric preferences
JOURNAL OF FORECASTING, Issue 6 2008
George A. Christodoulakis

EU Commission forecasts are used as a benchmark within the framework of the Stability and Growth Pact, aimed at providing a prudential view of the economic outlook, especially for member states in an Excessive Deficit Procedure. Following Elliott et al. (2005), we assess whether there exist asymmetries in the loss preference of the Commission's GDP growth forecasts from 1969 to 2004. Our empirical evidence is robust across information sets and reveals that the loss preferences tend to show some variation in terms of asymmetry across member states. Given certain conditions concerning the time horizon of forecasts and the functional form of the loss preferences, the evidence further reveals that the Commission forecasting exercise could be subject to caveats. Copyright © 2008 John Wiley & Sons, Ltd. [source]
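The intuition behind the Elliott et al. (2005) approach can be sketched in a few lines: under a lin-lin loss with asymmetry parameter p, a rational forecaster issues the p-quantile of the outcome, so the share of negative forecast errors estimates p. The code below is a deliberately stripped-down, instrument-free version of that moment condition, run on simulated errors rather than the Commission's data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Lin-lin loss: L(e) = p*max(e,0) + (1-p)*max(-e,0), with e = actual - forecast.
# The optimal forecast is the p-quantile of the outcome, so P(e < 0) = p.
p_true = 0.35
y = rng.normal(2.0, 1.0, 400)                 # 'actual' growth outcomes
f = np.quantile(y, p_true)                    # stylized quantile forecast
e = y - f                                     # forecast errors
p_hat = np.mean(e < 0)                        # simple moment estimator of p
se = np.sqrt(p_hat * (1 - p_hat) / e.size)
print(f"estimated asymmetry p = {p_hat:.2f} +/- {se:.2f} "
      f"(p = 0.5 would mean symmetric loss)")
```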
Such a "fleet-centered" approach supplies a richer perspective on the comparative emissions burdens generated by alternative products, and it eliminates certain simplifying assumptions imposed upon the analysis by a product-centered approach. A sample numerical case, examining the comparative emissions of steel-intensive and aluminum-intensive automobiles, is presented to contrast the results of the two approaches. The fleet-centered analysis shows that the "crossover time" (i.e., the time required before the fuel economy benefits of the lighter aluminum vehicle offset the energy intensity of the processes used to manufacture the aluminum in the first place) can be dramatically longer than that predicted by a product-centered life-cycle assessment. The fleet-centered perspective explicitly introduces the notion of time as a critical element of comparative life-cycle assessments and raises important questions about the role of the analyst in selecting the appropriate time horizon for analysis. Moreover, with the introduction of time as an appropriate dimension to life-cycle assessment, the influences of effects distributed over time can be more naturally and consistently treated. [source] Planning models for parallel batch reactors with sequence-dependent changeoversAICHE JOURNAL, Issue 9 2007Muge Erdirik-Dogan Abstract In this article we address the production planning of parallel multiproduct batch reactors with sequence-dependent changeovers, a challenging problem that has been motivated by a real-world application of a specialty chemicals business. We propose two production planning models that anticipate the impact of the changeovers in this batch processing problem. The first model is based on underestimating the effects of the changeovers that leads to an MILP problem of moderate size. The second model incorporates sequencing constraints that yield very accurate predictions, but at the expense of a larger MILP problem. To solve large scale problems in terms of number of products and reactors, or length of the time horizon, we propose a decomposition technique based on rolling horizon scheme and also a relaxation of the detailed planning model. Several examples are presented to illustrate the performance of the proposed models. © 2007 American Institute of Chemical Engineers AIChE J, 2007 [source] The Role of Family Ties in the Labour Market.LABOUR, Issue 4 2001An Interpretation Based on Efficiency Wage Theory By casual empiricism, it seems that many firms take explicit account of the family ties connecting workers, often hiring individuals belonging to the same family or passing jobs on from parents to their children. This paper makes an attempt to explain this behaviour by introducing the assumption of altruism within the family and supposing that agents maximize a family utility function rather than an individual one. This hypothesis has been almost ignored in the analysis of the relationship between employers and employees. The implications of this assumption in the efficiency wage models are explored: by employing members of the same family, firms can use a (credible) harsher threat , involving a sanction for all the family's members in case of one member's shirking , that allows them to pay a lower efficiency wage. On the other hand, workers who accept this agreement exchange a reduction in wage with an increase in their probability of being employed: this can be optimal in a situation of high unemployment. 
Life-Cycle Assessment and Temporal Distributions of Emissions: Developing a Fleet-Based Analysis
JOURNAL OF INDUSTRIAL ECOLOGY, Issue 2 2000
Frank Field

Although the product-centered focus of life-cycle assessment has been one of its strengths, this analytical perspective embeds assumptions that may conflict with the realities of environmental problems. This article demonstrates, through a series of mathematical derivations, that all the products in use, rather than a single product, frequently should be the appropriate unit of analysis. Such a "fleet-centered" approach supplies a richer perspective on the comparative emissions burdens generated by alternative products, and it eliminates certain simplifying assumptions imposed upon the analysis by a product-centered approach. A sample numerical case, examining the comparative emissions of steel-intensive and aluminum-intensive automobiles, is presented to contrast the results of the two approaches. The fleet-centered analysis shows that the "crossover time" (i.e., the time required before the fuel economy benefits of the lighter aluminum vehicle offset the energy intensity of the processes used to manufacture the aluminum in the first place) can be dramatically longer than that predicted by a product-centered life-cycle assessment. The fleet-centered perspective explicitly introduces the notion of time as a critical element of comparative life-cycle assessments and raises important questions about the role of the analyst in selecting the appropriate time horizon for analysis. Moreover, with the introduction of time as an appropriate dimension to life-cycle assessment, the influences of effects distributed over time can be more naturally and consistently treated. [source]
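The crossover-time contrast can be reproduced with a stylized calculation (all parameter values below are hypothetical, not the article's data): a single vehicle pays back its production-energy premium quickly, but a gradually turning-over fleet keeps adding premiums with every year's sales while only vehicles already in use generate savings.

```python
# Product- vs fleet-level energy crossover for aluminum- vs steel-intensive cars.
E_extra = 20.0        # extra production energy per Al vehicle (GJ), hypothetical
save = 2.0            # annual use-phase saving per Al vehicle (GJ/yr), hypothetical

# Product-centered view: one vehicle repays its premium after E_extra / save years.
print(f"product crossover: {E_extra / save:.0f} years")

# Fleet-centered view: constant sales, finite vehicle life, fleet builds up.
sales, life, horizon = 1.0, 15, 60            # vehicles/yr, yrs, analysis yrs
extra = saved = fleet = 0.0
net = []
for t in range(1, horizon + 1):
    fleet = min(fleet + sales, sales * life)  # Al vehicles currently in use
    extra += sales * E_extra                  # premium of this year's sales
    saved += fleet * save                     # savings of the whole Al fleet
    net.append(saved - extra)
crossover = next(t for t, v in enumerate(net, start=1) if v > 0)
print(f"fleet crossover: {crossover} years")
```

With these numbers the product view says 10 years while the fleet view says 22, illustrating why the article calls the fleet-level crossover "dramatically longer."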
Planning models for parallel batch reactors with sequence-dependent changeovers
AICHE JOURNAL, Issue 9 2007
Muge Erdirik-Dogan

In this article we address the production planning of parallel multiproduct batch reactors with sequence-dependent changeovers, a challenging problem that has been motivated by a real-world application of a specialty chemicals business. We propose two production planning models that anticipate the impact of the changeovers in this batch processing problem. The first model is based on underestimating the effects of the changeovers, which leads to an MILP problem of moderate size. The second model incorporates sequencing constraints that yield very accurate predictions, but at the expense of a larger MILP problem. To solve large-scale problems in terms of number of products and reactors, or length of the time horizon, we propose a decomposition technique based on a rolling horizon scheme and also a relaxation of the detailed planning model. Several examples are presented to illustrate the performance of the proposed models. © 2007 American Institute of Chemical Engineers AIChE J, 2007 [source]
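To make the modeling idea concrete, here is a toy MILP, far smaller than the paper's formulations, in which sequence-dependent changeover costs are charged across adjacent weeks of a short planning horizon for a single reactor. The data are invented, and the open-source PuLP library is used as a stand-in solver interface.

```python
import pulp

products, weeks = ["A", "B"], [0, 1, 2]
profit = {"A": 50.0, "B": 42.0}                 # profit per week of campaign
chg = {("A", "B"): 30.0, ("B", "A"): 12.0}      # sequence-dependent costs
demand_cap = {"A": 2, "B": 2}                   # max sellable weeks per product

m = pulp.LpProblem("campaign_plan", pulp.LpMaximize)
y = pulp.LpVariable.dicts("y", (products, weeks), cat="Binary")
z = pulp.LpVariable.dicts("z", (products, products, weeks[:-1]), cat="Binary")

for t in weeks:                                  # exactly one campaign per week
    m += pulp.lpSum(y[i][t] for i in products) == 1
for i in products:                               # demand limits
    m += pulp.lpSum(y[i][t] for t in weeks) <= demand_cap[i]
for i, j in chg:                                 # detect changeover i -> j
    for t in weeks[:-1]:
        m += z[i][j][t] >= y[i][t] + y[j][t + 1] - 1

m += (pulp.lpSum(profit[i] * y[i][t] for i in products for t in weeks)
      - pulp.lpSum(chg[i, j] * z[i][j][t] for i, j in chg for t in weeks[:-1]))

m.solve(pulp.PULP_CBC_CMD(msg=0))
plan = [max(products, key=lambda i: y[i][t].value()) for t in weeks]
print("weekly campaigns:", plan, "| profit:", pulp.value(m.objective))
```

With these numbers the cheap changeover direction wins: the optimal plan is B, A, A, avoiding the expensive A-to-B switch.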
The Role of Family Ties in the Labour Market. An Interpretation Based on Efficiency Wage Theory
LABOUR, Issue 4 2001

By casual empiricism, it seems that many firms take explicit account of the family ties connecting workers, often hiring individuals belonging to the same family or passing jobs on from parents to their children. This paper makes an attempt to explain this behaviour by introducing the assumption of altruism within the family and supposing that agents maximize a family utility function rather than an individual one. This hypothesis has been almost ignored in the analysis of the relationship between employers and employees. The implications of this assumption in efficiency wage models are explored: by employing members of the same family, firms can use a (credible) harsher threat, involving a sanction for all the family's members in case of one member's shirking, that allows them to pay a lower efficiency wage. On the other hand, workers who accept this agreement exchange a reduction in wage for an increase in their probability of being employed: this can be optimal in a situation of high unemployment. Moreover, the link between parents and children allows the firm to follow a strategy that solves the problem of an individual's finite time horizon by making use of the family's reputation. [source]

Economic evaluation and 1-year survival analysis of MARS in patients with alcoholic liver disease
LIVER INTERNATIONAL, Issue 2003
Franz P. Hessel

The objective of this study was to determine 1-year survival, costs and cost-effectiveness of the artificial liver support system Molecular Adsorbent Recirculating System (MARS) in patients with acute-on-chronic liver failure (ACLF) and an underlying alcoholic liver disease. In a case–control study, 13 patients treated with MARS were compared to 23 controls of similar age, sex and severity of disease. Inpatient hospital cost data were extracted from patients' files and the hospital's internal costing. Patients and treating GPs were contacted, thus determining resource use and survival 1 year after treatment. Mean 1-year survival time in the MARS group was 261 days and 148 days in controls. Kaplan–Meier analysis shows advantages for MARS patients (logrank: P = 0.057). Direct medical costs per patient for initial hospital stay and 1-year follow-up from a payer's perspective were €18 792 for MARS patients and €9638 for controls. The costs per life-year gained are €29 719 (time horizon 1 year). From a societal perspective, the numbers are higher (costs per life-year gained: €79 075), mainly because there is no regular reimbursement of MARS and therefore intervention costs were not calculated from the payer's perspective. A trade-off between medical benefit and higher costs has to be made, but 1-year results suggest an acceptable cost-effectiveness of MARS. Prolonging the time horizon and including indirect costs, which will be done in future research, would probably improve cost-effectiveness. [source]

Cost-effectiveness of screening for hepatopulmonary syndrome in liver transplant candidates
LIVER TRANSPLANTATION, Issue 2 2007
D. Neil Roberts

The hepatopulmonary syndrome (HPS) is present in 15–20% of patients with cirrhosis undergoing orthotopic liver transplantation (OLT) evaluation. Both preoperative and post-OLT mortality is increased in HPS patients, particularly when hypoxemia is severe. Screening for HPS could enhance detection of OLT candidates with sufficient hypoxemia to merit higher priority for transplant and thereby decrease mortality. However, the cost-effectiveness of such an approach has not been assessed. Our objective was to perform a cost-effectiveness analysis, from a third-party payer's perspective, of screening for HPS in OLT candidates. The costs and outcomes of three different strategies were compared: (1) no screening, (2) screening patients with a validated dyspnea questionnaire, and (3) screening all patients with pulse oximetry. Arterial blood gas analyses and contrast echocardiography were performed in patients with dyspnea or a pulse oximetry (SpO2) reading of ≤97% to define the presence of HPS. A Markov model was constructed simulating the natural history of cirrhosis in a cohort of patients 50 years old over a time horizon of their remaining life expectancy. Transition probabilities were obtained from published data available through Medline and U.S. vital statistics. Costs represented Medicare reimbursement data at our institution. Costs and health effects were discounted at a 3% annual rate. No screening was associated with a total cost of $291,898 and a life expectancy of 11.131 years. Screening with pulse oximetry was associated with a cost of $299,719 and a life expectancy of 12.27 years. Screening patients with the dyspnea-fatigue index was associated with a cost and life expectancy of $300,278 and 12.28 years, respectively. The incremental cost-effectiveness ratio of screening with pulse oximetry (compared to no screening) was $6,867 per life-year gained, whereas that of the dyspnea-fatigue index (compared to pulse oximetry) was $55,900 per life-year gained. The cost-effectiveness of screening depended on the prevalence and severity of HPS, and the choice of screening strategy was dependent on the sensitivity of the screening modality. In conclusion, screening for HPS, especially with pulse oximetry, is a cost-effective strategy that improves survival in transplant candidates, predominantly by targeting the transplant to the subgroup of patients most likely to benefit. The utility of screening depends on the prevalence and severity of HPS in the target population. Liver Transpl, 2006. © 2006 AASLD. [source]
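The last two abstracts rest on the same machinery: a discounted Markov cohort model accruing costs and life-years cycle by cycle. A compact version, with invented states, transition probabilities and costs rather than the published Medicare inputs, looks like this.

```python
import numpy as np

states = ["compensated", "decompensated", "dead"]
P = np.array([[0.85, 0.10, 0.05],     # rows: from-state, cols: to-state
              [0.00, 0.75, 0.25],
              [0.00, 0.00, 1.00]])
cost = np.array([8_000.0, 25_000.0, 0.0])     # annual cost per state
alive = np.array([1.0, 1.0, 0.0])             # life-years accrued per state

def run(horizon=30, disc=0.03):
    dist = np.array([1.0, 0.0, 0.0])          # cohort starts compensated
    total_cost = total_ly = 0.0
    for t in range(horizon):
        df = (1 + disc) ** -t                 # 3%/yr discounting, as above
        total_cost += df * dist @ cost
        total_ly += df * dist @ alive
        dist = dist @ P                       # advance one annual cycle
    return total_cost, total_ly

c, ly = run()
print(f"discounted cost: ${c:,.0f}; discounted life-years: {ly:.2f}")
```

Running such a model once per strategy and differencing the outputs gives exactly the incremental cost per life-year figures quoted in the screening abstract.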
Marshall's Ceteris Paribus in a Dynamic Framework
METROECONOMICA, Issue 1 2009
Fabio Cerina

The paper aims to propose a formalization of the concept of ceteris paribus (CP) by means of a dynamic model. The basic result of the analysis is that the CP clause may assume essentially different meanings according to (1) the kind of variables assumed to be 'frozen' and (2) the length of the time horizon. It is then possible to distinguish, respectively, between an exogenous and an endogenous CP and, within the latter, between a short-run and a long-run CP. This double analytical distinction helps in understanding the role the CP clause plays in Marshall's thought and in economics in general. [source]

The value of information sharing in a two-stage supply chain with production capacity constraints
NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 8 2003
David Simchi-Levi

We consider a simple two-stage supply chain with a single retailer facing i.i.d. demand and a single manufacturer with finite production capacity. We analyze the value of information sharing between the retailer and the manufacturer over a finite time horizon. In our model, the manufacturer receives demand information from the retailer even during time periods in which the retailer does not order. To analyze the impact of information sharing, we consider the following three strategies: (1) the retailer does not share demand information with the manufacturer; (2) the retailer shares demand information with the manufacturer, and the manufacturer uses the optimal policy to schedule production; (3) the retailer shares demand information with the manufacturer, and the manufacturer uses a greedy policy to schedule production. These strategies allow us to study the impact of information sharing on the manufacturer as a function of the production capacity, and the frequency and timing in which demand information is shared. © 2003 Wiley Periodicals, Inc. Naval Research Logistics, 2003 [source]

Solving the asymmetric traveling salesman problem with periodic constraints
NETWORKS: AN INTERNATIONAL JOURNAL, Issue 1 2004
Giuseppe Paletta

In this article we describe a heuristic algorithm to solve the asymmetric traveling salesman problem with periodic constraints over a given m-day planning horizon. Each city i must be visited r_i times within this time horizon, and these visit days are assigned to i by selecting one of the feasible combinations of r_i visit days, with the objective of minimizing the total distance traveled by the salesman. The proposed algorithm is a heuristic that starts by designing feasible tours, one for each day of the m-day planning horizon, and then employs an improvement procedure that modifies the combination assigned to each of the cities to improve the objective function. Our heuristic has been tested on a set of test problems purposely generated by slightly modifying known test problems taken from the literature. Computational comparisons on special instances indicate encouraging results. © 2004 Wiley Periodicals, Inc. NETWORKS, Vol. 44(1), 31–37, 2004 [source]

Optimal control of a water reservoir with expected value–variance criteria
OPTIMAL CONTROL APPLICATIONS AND METHODS, Issue 1 2007
Andrzej Karbowski

The article presents how to solve a reservoir management problem which has been formulated as a two-criteria stochastic optimal control problem. Apart from the expected value of a performance index, its variance is also considered. Three approaches are described: a method based on the Lagrange function; a method based on the ordinary moment of the second order (finite time horizon); and a method based on linear programming (infinite time horizon). In the second part of the article, they are assessed in a case study concerning a reservoir in the southern part of Poland. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Optimal sequence of landfills in solid waste management
OPTIMAL CONTROL APPLICATIONS AND METHODS, Issue 5-6 2001
Francisco J. André

Given that landfills are depletable and replaceable resources, the right approach when dealing with landfill management is that of designing an optimal sequence of landfills rather than designing every single landfill separately. In this paper, we use optimal control models, with mixed elements of both continuous- and discrete-time problems, to determine an optimal sequence of landfills as regards their capacity and lifetime. The resulting optimization problems involve splitting a planning time horizon into several subintervals, the length of which has to be decided. In each of the subintervals some costs, the amount of which depends on the value of the decision variables, have to be borne. The obtained results may be applied to other economic problems such as private and public investments, consumption decisions on durable goods, etc. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Dynamic optimization and Skiba sets in economic examples
OPTIMAL CONTROL APPLICATIONS AND METHODS, Issue 5-6 2001
Wolf-Jürgen Beyn

We discuss two optimization problems from economics. The first is a model of optimal investment and the second is a model of resource management. In both cases the time horizon is infinite and the optimal control variables are continuous. Typically, in these optimal control problems multiple steady states and periodic orbits occur. This leads to multiple solutions of the state–costate system, each of which relates to a locally optimal strategy but has its own limiting behaviour (stationary or periodic). Initial states that allow different optimal solutions with the same value of the objective function are called Skiba points. The set of Skiba points is of interest because it provides thresholds for a global change of optimal strategies. We provide a systematic numerical method for calculating locally optimal solutions and Skiba points via boundary value problems. In parametric or higher-dimensional systems Skiba curves (or manifolds) appear, and we show how to follow them by a continuation process. We apply our method to the models above, where Skiba sets consist of points or curves and where optimal solutions have different stationary or periodic asymptotic behaviour. Copyright © 2001 John Wiley & Sons, Ltd. [source]

A space-time network for telecommuting versus commuting decision-making
PAPERS IN REGIONAL SCIENCE, Issue 4 2003
Anna Nagurney

Keywords: transportation and telecommunication networks; telecommuting and commuting; space-time networks; variational inequalities

In this article, we develop a theoretical framework for the study of telecommuting versus commuting decision-making over a fixed time horizon, such as a work week, through the use of a space-time network to conceptualize the decision-makers' choices over space and time. The decision-makers are multiclass and multicriteria ones and perceive the criteria of travel cost, travel time, and opportunity cost in an individual fashion. The model is of the network equilibrium type and allows for the prediction of the equilibrium flows and, hence, the number of periods that members of each class of decision-makers will telecommute or commute. Qualitative properties of the equilibrium are obtained and an algorithm is given, along with convergence results, and applied to numerical examples. [source]

Public-Value Failure: When Efficient Markets May Not Do
PUBLIC ADMINISTRATION REVIEW, Issue 2 2002
Barry Bozeman

The familiar market-failure model remains quite useful for issues of price efficiency and traditional utilitarianism, but it has many shortcomings as a standard for public-value aspects of public policy and management. In a public-value-failure model, I present criteria for diagnosing values problems that are not easily addressed by market-failure models. Public-value failure occurs when: (1) mechanisms for values articulation and aggregation have broken down; (2) "imperfect monopolies" occur; (3) benefit hoarding occurs; (4) there is a scarcity of providers of public value; (5) a short time horizon threatens public value; (6) a focus on substitutability of assets threatens conservation of public resources; and (7) market transactions threaten fundamental human subsistence. After providing examples for diagnosis of public-value failure, including an extended example concerning the market for human organs, I introduce a "public-failure grid" to facilitate values choices in policy and public management. [source]
Maintenance contract assessment for aging systems
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 5 2008
Anatoly Lisnianski

This paper considers an aging system, where the system failure rate is known to be an increasing function. After any failure, maintenance is performed by an external repair team. The repair rate and cost of repair are determined by a corresponding maintenance contract with a repair team. There are many different maintenance contracts offered by the service market to the system owner. In order to choose the best maintenance contract, the total expected cost during a specified time horizon should be evaluated for an aging system. In this paper, a method is suggested based on a piecewise-constant approximation of the increasing failure rate function. Two different approximations are used. For both types of approximation, a general approach for building the Markov reward model is suggested in order to assess lower and upper bounds on the total expected cost. Failure and repair rates define the transition matrix of the corresponding Markov process. Operation cost, repair cost and penalty cost for system failures are taken into account by the corresponding reward matrix definition. A numerical example is presented in order to illustrate the approach. Copyright © 2008 John Wiley & Sons, Ltd. [source]
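A sketch of the bounding idea with invented numbers: replace the increasing failure rate by piecewise-constant under- and over-approximations (left and right endpoints of each sub-interval) and integrate the corresponding two-state Markov reward model backward in time. The paper's actual model and cost structure are richer than this two-state caricature.

```python
import numpy as np

mu = 30.0                                   # contract repair rate (1/yr)
c_op, c_rep, c_pen = 10.0, 200.0, 500.0     # operation, repair, failure penalty

def cost(step_rates, horizon=1.0):
    """Expected cost from the 'up' state: Euler-integrate dv/dt = r + Q v
    backward in time, with lam held constant on each sub-interval."""
    sub = horizon / len(step_rates)
    m_steps = 2000
    dt = sub / m_steps
    v = np.zeros(2)                         # expected cost-to-go per state
    for lam in reversed(step_rates):        # last sub-interval first
        Q = np.array([[-lam, lam], [mu, -mu]])
        r = np.array([c_op + lam * c_pen, c_rep])   # cost accrual rates
        for _ in range(m_steps):
            v = v + dt * (r + Q @ v)
    return v[0]

t = np.linspace(0.0, 1.0, 11)               # 10 equal sub-intervals
lam = 1.0 + 2.0 * t**2                      # increasing failure rate lam(t)
lo = cost(lam[:-1])                         # left endpoints: under-approximation
hi = cost(lam[1:])                          # right endpoints: over-approximation
print(f"expected one-year cost bounds: [{lo:.1f}, {hi:.1f}]")
```

Because the failure rate is increasing, the left-endpoint approximation never overstates it and the right-endpoint one never understates it, which is what delivers the lower and upper cost bounds.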