Variance Reduction


Selected Abstracts


Trivial reductions of dimensionality in the propagation of uncertainties: a physical example

ENVIRONMETRICS, Issue 1 2004
Ricardo Bolado
Abstract When performing uncertainty analysis on a mathematical model of a physical process, some coefficients of the differential equations arise from elementary operations on other coefficients. It is shown in this article that variance reduction techniques should be applied to the 'final' or 'reduced' coefficients rather than the original ones, thus reducing the variance of the estimators of the parameters of the output variable distribution. We illustrate the methodology with an application to a physical problem, a radioactive contaminant transport code. A substantial variance reduction is achieved for the estimators of the distribution function, the mean and the variance of the output. Copyright © 2003 John Wiley & Sons, Ltd.
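
The abstract's prescription can be seen in a toy setting: if a model depends on coefficients a and b only through the reduced coefficient c = a·b, then variance reduction (here, stratified sampling) can be applied directly in the one-dimensional reduced space, which is impossible before the reduction. The model y = exp(-c), the uniform inputs, and the sample sizes below are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch (not from the paper): variance reduction applied to the
# "reduced" coefficient c = a*b instead of its two ingredients.
# Assumptions: a, b ~ U(0,1) independent, model output y = exp(-c).
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)

def crude_mc(n):
    # Sample the original coefficients and combine them inside the model.
    a, b = rng.random(n), rng.random(n)
    return np.exp(-a * b).mean()

# For a, b ~ U(0,1), the product c = a*b has CDF F(c) = c*(1 - ln c) on (0, 1].
def inv_cdf(u):
    return brentq(lambda c: c * (1.0 - np.log(c)) - u, 1e-15, 1.0)

def stratified_reduced_mc(n):
    # One equal-probability stratum per sample in the 1-D reduced space --
    # only possible after the reduction to the single coefficient c.
    u = (np.arange(n) + rng.random(n)) / n
    u = np.clip(u, 1e-12, 1.0 - 1e-12)
    c = np.array([inv_cdf(ui) for ui in u])
    return np.exp(-c).mean()

reps = 200
crude = [crude_mc(100) for _ in range(reps)]
strat = [stratified_reduced_mc(100) for _ in range(reps)]
print("crude MC    mean %.5f  std %.2e" % (np.mean(crude), np.std(crude)))
print("stratified  mean %.5f  std %.2e" % (np.mean(strat), np.std(strat)))
```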


Forced Detection Monte Carlo Algorithms for Accelerated Blood Vessel Image Simulations

JOURNAL OF BIOPHOTONICS, Issue 3 2009
Ingemar Fredriksson
Abstract Two forced detection (FD) variance reduction Monte Carlo algorithms for image simulations of tissue-embedded objects with matched refractive index are presented. The principle of the algorithms is to force a fraction of the photon weight to the detector at each and every scattering event. The fractional weight is given by the probability for the photon to reach the detector without further interactions. Two imaging setups are applied to a tissue model including blood vessels, where the FD algorithms produce results identical to those of traditional brute-force simulations while being accelerated by two orders of magnitude. Extending the methods to include refractive index mismatches is discussed. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
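
The forcing step described above amounts to next-event estimation: at every scattering site, a fraction of the photon weight, equal to the probability of reaching the detector without further interaction, is tallied. Below is a minimal sketch under assumptions not taken from the paper: a semi-infinite homogeneous medium, isotropic scattering, a small detector at the origin, and illustrative optical parameters.

```python
# Minimal forced-detection (next-event estimation) sketch. Assumes a
# semi-infinite homogeneous medium (z > 0), isotropic scattering, matched
# refractive index, and a small detector of area A at the origin facing +z.
# Geometry, parameter values and the termination rule are illustrative.
import numpy as np

rng = np.random.default_rng(1)
mu_s, mu_a = 10.0, 0.1          # scattering / absorption coefficients [1/mm]
mu_t = mu_s + mu_a
albedo = mu_s / mu_t
A = 0.01                        # detector area [mm^2]

def isotropic_direction():
    cos_t = 2.0 * rng.random() - 1.0
    phi = 2.0 * np.pi * rng.random()
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    return np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

def simulate_photon():
    pos = np.zeros(3)
    direc = np.array([0.0, 0.0, 1.0])   # launched into the medium
    w, signal = 1.0, 0.0
    while w > 1e-4:
        pos = pos + direc * (-np.log(1.0 - rng.random()) / mu_t)  # free path
        if pos[2] < 0.0:                 # photon left the medium
            break
        w *= albedo                      # survive the interaction
        d = np.linalg.norm(pos)
        if d > 1e-3:
            # Forced detection: tally the fraction of the photon weight that
            # would reach the detector without further interactions.
            u = -pos / d                 # direction towards the detector
            signal += w * (1.0 / (4.0 * np.pi)) * np.exp(-mu_t * d) \
                        * A * abs(u[2]) / d ** 2
        direc = isotropic_direction()    # isotropic scattering
    return signal

detected = np.mean([simulate_photon() for _ in range(20000)])
print("forced-detection estimate of detected fraction: %.3e" % detected)
```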


Correlation method for variance reduction of Monte Carlo integration in RS-HDMR

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 3 2003
Genyuan Li
Abstract The High Dimensional Model Representation (HDMR) technique is a procedure for efficiently representing high-dimensional functions. A practical form of the technique, RS-HDMR, is based on randomly sampling the overall function and utilizing orthonormal polynomial expansions. The determination of the expansion coefficients employs Monte Carlo integration, which controls the accuracy of the RS-HDMR expansions. In this article, a correlation method is used to reduce the Monte Carlo integration error. The determination of the expansion coefficients becomes an iterative procedure, and the resultant RS-HDMR expansion has much better accuracy than that achieved by direct Monte Carlo integration. For an illustration in four dimensions, a few hundred random samples suffice to construct an RS-HDMR expansion by the correlation method with an accuracy comparable to that obtained by direct Monte Carlo integration with thousands of samples. © 2003 Wiley Periodicals, Inc. J Comput Chem 24: 277–283, 2003
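
The correlation method is, in essence, a control-variate scheme: the current expansion h(x) is subtracted from f(x) before Monte Carlo integration, and its exactly known coefficients are added back, so only the (smaller) residual is integrated by sampling. The one-dimensional sketch below, with an invented target function and a shifted Legendre basis, illustrates the iteration; it is not the RS-HDMR implementation itself.

```python
# Illustrative 1-D sketch of the correlation / control-variate idea behind
# the iterative coefficient determination. Target function and sample sizes
# are invented for the example.
import numpy as np

rng = np.random.default_rng(2)

# Orthonormal shifted Legendre polynomials on [0, 1].
phis = [
    lambda x: np.ones_like(x),
    lambda x: np.sqrt(3.0) * (2 * x - 1),
    lambda x: np.sqrt(5.0) * (6 * x**2 - 6 * x + 1),
    lambda x: np.sqrt(7.0) * (20 * x**3 - 30 * x**2 + 12 * x - 1),
]

f = lambda x: np.exp(x)          # toy target function
x = rng.random(300)              # one fixed random sample, reused each sweep
fx = f(x)

alpha = np.zeros(len(phis))      # expansion coefficients, start from zero
for sweep in range(10):
    hx = sum(a * p(x) for a, p in zip(alpha, phis))   # current expansion
    # E[(f - h) * phi_k] is estimated by MC; the integral of h * phi_k equals
    # alpha_k exactly by orthonormality, so adding it back keeps the
    # estimator consistent while only the small residual is sampled.
    alpha = alpha + np.array([np.mean((fx - hx) * p(x)) for p in phis])

grid = np.linspace(0.0, 1.0, 20001)
exact = [np.trapz(f(grid) * p(grid), grid) for p in phis]
direct = [np.mean(fx * p(x)) for p in phis]   # direct MC, same sample
print("exact      :", np.round(exact, 4))
print("direct MC  :", np.round(direct, 4))
print("correlation:", np.round(alpha, 4))
```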


Allocation of quality improvement targets based on investments in learning

NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 8 2001
Herbert Moskowitz
Abstract Purchased materials often account for more than 50% of a manufacturer's product nonconformance cost. A common strategy for reducing such costs is to allocate periodic quality improvement targets to suppliers of such materials. Improvement target allocations are often accomplished via ad hoc methods such as prescribing a fixed, across-the-board percentage improvement for all suppliers, which, however, may not be the most effective or efficient approach for allocating improvement targets. We propose a formal modeling and optimization approach for assessing quality improvement targets for suppliers, based on process variance reduction. In our models, a manufacturer has multiple product performance measures that are linear functions of a common set of design variables (factors), each of which is an output from an independent supplier's process. We assume that a manufacturer's quality improvement is a result of reductions in supplier process variances, obtained through learning and experience, which require appropriate investments by both the manufacturer and suppliers. Three learning investment (cost) models for achieving a given learning rate are used to determine the allocations that minimize expected costs for both the supplier and manufacturer, and to assess the sensitivity of the allocation of quality improvement targets to the investment in learning. Solutions for determining optimal learning rates and concomitant quality improvement targets are derived for each learning investment function. We also account for the risk that a supplier may not achieve a targeted learning rate for quality improvements. An extensive computational study is conducted to investigate the differences between optimal variance allocations and a fixed percentage allocation. These differences are examined with respect to (i) variance improvement targets and (ii) total expected cost. For certain types of learning investment models, the results suggest that orders-of-magnitude differences in variance allocations and expected total costs occur between optimal allocations and those arrived at via the commonly used rule of fixed percentage allocations. However, for learning investments characterized by a quadratic function, there is surprisingly close agreement with an "across-the-board" allocation of 20% quality improvement targets. © John Wiley & Sons, Inc. Naval Research Logistics 48: 684–709, 2001
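
A stylized version of such an allocation problem can be written down directly: choose the fraction x_i of each supplier's process variance to retain so that total output variance meets a target at minimum learning investment, and compare against an across-the-board allocation. The logarithmic cost form, the parameter values, and the 40% target below are assumptions for illustration, not the paper's three learning models.

```python
# Stylized supplier variance-allocation sketch. The cost model
# k_i * (-ln x_i) for retaining a variance fraction x_i is an assumed
# learning-investment form, not one of the paper's three models.
import numpy as np
from scipy.optimize import minimize

a = np.array([1.0, 2.0, 0.5])        # sensitivities of the performance measure
s2 = np.array([4.0, 1.0, 9.0])       # current supplier process variances
k = np.array([1.0, 3.0, 0.5])        # learning-investment cost coefficients
w = a**2 * s2                        # contribution of each supplier to Var(Y)
V_target = 0.6 * w.sum()             # ask for a 40% total variance reduction

cost = lambda x: np.sum(k * -np.log(x))
res = minimize(cost, x0=np.full(3, 0.6), method="SLSQP",
               bounds=[(1e-6, 1.0)] * 3,
               constraints={"type": "ineq",
                            "fun": lambda x: V_target - w @ x})
x_opt = res.x
x_flat = np.full(3, V_target / w.sum())   # across-the-board allocation

print("optimal variance fractions :", np.round(x_opt, 3))
print("optimal cost               : %.3f" % cost(x_opt))
print("fixed-percentage cost      : %.3f" % cost(x_flat))
```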


Estimation and hedging effectiveness of time-varying hedge ratio: Flexible bivariate GARCH approaches

THE JOURNAL OF FUTURES MARKETS, Issue 1 2010
Sung Yong Park
Bollerslev's (1990, Review of Economics and Statistics, 72, 498–505) constant conditional correlation and Engle's (2002, Journal of Business & Economic Statistics, 20, 339–350) dynamic conditional correlation (DCC) bivariate generalized autoregressive conditional heteroskedasticity (BGARCH) models are widely used to estimate time-varying hedge ratios. In this study, we extend the above models to more flexible ones in order to analyze the behavior of the optimal conditional hedge ratio based on the two BGARCH models: (i) adopting more flexible bivariate density functions, such as a bivariate skewed-t density function; (ii) considering asymmetric individual conditional variance equations; and (iii) incorporating asymmetry in the conditional correlation equation for the DCC-based model. Hedging performance is evaluated in terms of variance reduction, as well as the value at risk and expected shortfall of the hedged portfolio. Using daily data on the spot and futures returns of corn and soybeans, we find that asymmetric and flexible density specifications help increase the goodness-of-fit of the estimated models but do not guarantee higher hedging performance. We also find an inverse relationship between the variance of hedge ratios and hedging effectiveness. © 2009 Wiley Periodicals, Inc. Jrl Fut Mark 30:71–99, 2010
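
For the mechanics, the sketch below computes a time-varying hedge ratio h_t = Cov_t(s, f) / Var_t(f) and the standard variance-reduction effectiveness measure on simulated returns. An EWMA covariance recursion stands in for the BGARCH estimators, and all data are simulated, so this is a simplified illustration rather than the paper's method.

```python
# Sketch of a time-varying hedge ratio and its variance-reduction
# effectiveness. An EWMA covariance stands in for the BGARCH models here --
# a deliberate simplification, and the data are simulated.
import numpy as np

rng = np.random.default_rng(3)
T = 2000
f = 0.01 * rng.standard_normal(T)            # futures returns (simulated)
s = 0.8 * f + 0.004 * rng.standard_normal(T) # spot returns, correlated

lam = 0.94                                    # RiskMetrics-style decay factor
# Initialise with sample moments (a small look-ahead, fine for a sketch).
cov_sf, var_f = np.cov(s, f)[0, 1], f.var()
h = np.empty(T)
for t in range(T):
    h[t] = cov_sf / var_f                     # hedge ratio from info up to t-1
    cov_sf = lam * cov_sf + (1 - lam) * s[t] * f[t]
    var_f = lam * var_f + (1 - lam) * f[t] ** 2

hedged = s - h * f
effectiveness = 1.0 - hedged.var() / s.var()  # variance-reduction measure
print("mean hedge ratio      : %.3f" % h.mean())
print("hedging effectiveness : %.3f" % effectiveness)
```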


Optimal hedging with a regime-switching time-varying correlation GARCH model

THE JOURNAL OF FUTURES MARKETS, Issue 5 2007
Hsiang-Tai Lee
The authors develop a Markov regime-switching time-varying correlation generalized autoregressive conditional heteroscedasticity (RS-TVC GARCH) model for estimating optimal hedge ratios. The RS-TVC nests within it both the time-varying correlation GARCH (TVC) and the constant correlation GARCH (CC). Point estimates based on the Nikkei 225 and Hang Seng index futures data show that the RS-TVC outperforms the CC and the TVC both in- and out-of-sample in terms of variance reduction. Based on H. White's (2000) reality check, the null hypothesis of no improvement of the RS-TVC over the TVC is rejected for the Nikkei 225 index contract but not for the Hang Seng index contract. © 2007 Wiley Periodicals, Inc. Jrl Fut Mark 27:495–516, 2007
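
White's reality check is a bootstrap test on the mean loss differential between a candidate and a benchmark strategy. The sketch below applies a stationary bootstrap to toy loss series (in the hedging context, squared hedged returns would play this role); the block length, sample sizes, and data are invented, and this single-comparison case omits the max-over-models statistic of the full procedure.

```python
# Hedged sketch of a White (2000) reality-check style bootstrap test for
# hedging improvement. Toy data and a single comparison only.
import numpy as np

rng = np.random.default_rng(4)

def stationary_bootstrap_idx(T, mean_block=20):
    # Politis-Romano stationary bootstrap: geometric block lengths.
    p, idx, t = 1.0 / mean_block, np.empty(T, dtype=int), rng.integers(T)
    for i in range(T):
        idx[i] = t
        t = rng.integers(T) if rng.random() < p else (t + 1) % T
    return idx

# Toy loss series for a benchmark hedge and a candidate hedge.
T = 1500
loss_bench = rng.standard_normal(T) ** 2
loss_cand = 0.95 * rng.standard_normal(T) ** 2   # slightly better on average
d = loss_bench - loss_cand                        # positive = improvement
stat = d.mean()

# Centered bootstrap distribution of the mean differential under the null.
boot = np.array([(d[stationary_bootstrap_idx(T)] - stat).mean()
                 for _ in range(999)])
p_value = np.mean(boot >= stat)
print("mean loss differential: %.4f, bootstrap p-value: %.3f" % (stat, p_value))
```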


A Markov regime switching approach for hedging stock indices

THE JOURNAL OF FUTURES MARKETS, Issue 7 2004
Amir Alizadeh
In this paper we describe a new approach for determining time-varying minimum variance hedge ratios in stock index futures markets by using Markov Regime Switching (MRS) models. The rationale behind the use of these models stems from the fact that the dynamic relationship between spot and futures returns may be characterized by regime shifts, which, in turn, suggests that by allowing the hedge ratio to be dependent upon the "state of the market," one may obtain more efficient hedge ratios and, hence, superior hedging performance compared to other methods in the literature. The performance of the MRS hedge ratios is compared to that of alternative models such as GARCH, Error Correction and OLS in the FTSE 100 and S&P 500 markets. In- and out-of-sample tests indicate that MRS hedge ratios outperform the other models in reducing portfolio risk in the FTSE 100 market. In the S&P 500 market the MRS model outperforms the other hedging strategies only in-sample. Overall, the results indicate that by using MRS models market agents may be able to improve the performance of their hedges, measured in terms of variance reduction and increase in utility. © 2004 Wiley Periodicals, Inc. Jrl Fut Mark 24:649–674, 2004
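
Once an MRS model has been estimated, a state-dependent hedge ratio can be formed by weighting regime-specific ratios with filtered regime probabilities from the Hamilton filter. The sketch below assumes (rather than estimates) the transition matrix, regime hedge ratios, and residual volatilities, and uses simulated returns; it illustrates only the filtering and weighting step.

```python
# Sketch of a regime-dependent hedge ratio: a two-state Hamilton filter with
# assumed (not estimated) regime parameters weights state-specific hedge
# ratios by filtered probabilities. MRS estimation itself is omitted.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
T = 500
f = 0.01 * rng.standard_normal(T)                  # futures returns (toy data)
s = 0.8 * f + 0.004 * rng.standard_normal(T)       # spot returns

P = np.array([[0.97, 0.03],                        # regime transition matrix
              [0.05, 0.95]])
h_state = np.array([0.6, 1.0])                     # state-specific hedge ratios
sigma = np.array([0.004, 0.008])                   # state residual vol of s - h*f

prob = np.array([0.5, 0.5])                        # filtered P(S_t = j)
h = np.empty(T)
for t in range(T):
    prob = P.T @ prob                              # predict next state
    h[t] = prob @ h_state                          # probability-weighted ratio
    lik = norm.pdf(s[t] - h_state * f[t], scale=sigma)  # regime likelihoods
    prob = prob * lik
    prob /= prob.sum()                             # Bayes update (filtering)

hedged = s - h * f
print("variance reduction: %.3f" % (1 - hedged.var() / s.var()))
```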


Ex ante analysis of the benefits of transgenic drought tolerance research on cereal crops in low-income countries

AGRICULTURAL ECONOMICS, Issue 4 2009
Genti Kostandini
Keywords: Drought tolerance; Transgenic; Research benefits; Intellectual property rights

Abstract This article develops a framework to examine the ex ante benefits of transgenic research on drought in eight low-income countries, including the benefits to producers and consumers from farm income stabilization and the potential magnitude of private sector profits from intellectual property rights (IPRs). The framework employs country-specific agroecological drought-risk zones and considers both yield increases and yield variance reductions when estimating producer and consumer benefits from research. Benefits from yield variance reductions are shown to be an important component of aggregate drought research benefits, representing 40% of total benefits across the eight countries. Further, estimated annual benefits of US$178 million to the private sector suggest that significant incentives exist for participation in transgenic drought tolerance research.
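
The decomposition of benefits into a mean-yield effect and a yield-variance effect can be illustrated with a mean-variance certainty-equivalent calculation. All parameter values below are invented, and the criterion is a simplification of the paper's full surplus framework.

```python
# Stylized sketch (all parameter values invented) of splitting ex ante
# research benefits into a mean-yield effect and a yield-variance effect,
# using a mean-variance certainty-equivalent criterion.
price = 150.0          # output price, $/t
rho = 0.02             # absolute risk aversion of the representative producer
mu0, var0 = 2.0, 0.50  # baseline yield mean (t/ha) and variance
mu1, var1 = 2.1, 0.35  # with transgenic drought tolerance

mean_gain = price * (mu1 - mu0)                    # higher expected revenue
variance_gain = 0.5 * rho * price**2 * (var0 - var1)  # income stabilization
total = mean_gain + variance_gain
print("mean-yield benefit      : %6.2f $/ha" % mean_gain)
print("variance-reduction part : %6.2f $/ha (%.0f%% of total)"
      % (variance_gain, 100 * variance_gain / total))
```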


Portfolio Value-at-Risk with Heavy-Tailed Risk Factors

MATHEMATICAL FINANCE, Issue 3 2002
Paul Glasserman
This paper develops efficient methods for computing portfolio value-at-risk (VAR) when the underlying risk factors have a heavy-tailed distribution. In modeling heavy tails, we focus on multivariate t distributions and some extensions thereof. We develop two methods for VAR calculation that exploit a quadratic approximation to the portfolio loss, such as the delta-gamma approximation. In the first method, we derive the characteristic function of the quadratic approximation and then use numerical transform inversion to approximate the portfolio loss distribution. Because the quadratic approximation may not always yield accurate VAR estimates, we also develop a low-variance Monte Carlo method. This method uses the quadratic approximation to guide the selection of an effective importance sampling distribution that samples risk factors so that large losses occur more often. Variance is further reduced by combining the importance sampling with stratified sampling. Numerical results on a variety of test portfolios indicate that large variance reductions are typically obtained. Both methods developed in this paper overcome difficulties associated with VAR calculation when risk factors are heavy-tailed. The Monte Carlo method also extends to the problem of estimating the conditional excess, sometimes known as the conditional VAR.
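
A bare-bones version of the importance sampling idea: under multivariate t risk factors and a delta-gamma quadratic loss, sampling from a proposal with an inflated scale matrix makes large losses more frequent, and likelihood-ratio weights restore unbiasedness. The portfolio parameters, the inflation factor, and the plain scale-tilted proposal below are assumptions; the paper's scheme ties the tilt to the quadratic approximation and adds stratification on top.

```python
# Sketch of importance sampling for tail-loss probabilities with multivariate
# t risk factors and a delta-gamma quadratic loss. The proposal simply
# inflates the scale matrix -- a simplified stand-in for the paper's scheme.
import numpy as np
from scipy.stats import multivariate_t

rng = np.random.default_rng(6)
d, nu = 5, 5                              # risk factors, degrees of freedom
Sigma = 0.01 * np.eye(d)
delta = np.full(d, 1.0)                   # portfolio deltas (assumed)
Gamma = -2.0 * np.eye(d)                  # portfolio gammas (short gamma)

target = multivariate_t(loc=np.zeros(d), shape=Sigma, df=nu)
proposal = multivariate_t(loc=np.zeros(d), shape=9.0 * Sigma, df=nu)

def loss(x):                              # delta-gamma approximate loss
    return -(x @ delta) - 0.5 * np.einsum("ij,jk,ik->i", x, Gamma, x)

threshold, n = 1.5, 100_000

# Crude Monte Carlo.
x = target.rvs(size=n, random_state=rng)
crude = loss(x) > threshold
# Importance sampling with likelihood-ratio weights.
y = proposal.rvs(size=n, random_state=rng)
wts = np.exp(target.logpdf(y) - proposal.logpdf(y)) * (loss(y) > threshold)

print("crude: %.2e (std err %.1e)" % (crude.mean(), crude.std() / np.sqrt(n)))
print("IS   : %.2e (std err %.1e)" % (wts.mean(), wts.std() / np.sqrt(n)))
```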