Process Variance

Selected Abstracts


Demographic Issues in Longevity Risk Analysis

JOURNAL OF RISK AND INSURANCE, Issue 4 2006
Eric Stallard
Fundamental to the modeling of longevity risk is the specification of the assumptions used in demographic forecasting models that are designed to project past experience into future years, with or without modifications based on expert opinion about influential factors not represented in the historical data. Stochastic forecasts are required to explicitly quantify the uncertainty of forecasted cohort survival functions, including uncertainty due to process variance, parameter errors, and model misspecification errors. Current applications typically ignore the latter two sources, although the potential impact of model misspecification errors is substantial. Such errors arise from a lack of understanding of the nature and causes of historical changes in longevity and the implications of these factors for the future. This article reviews the literature on the nature and causes of historical changes in longevity and recent efforts at deterministic and stochastic forecasting based on these data. The review reveals that plausible alternative sets of forecasting assumptions have been derived from the same sets of historical data, implying that further methodological development will be needed to integrate the various assumptions into a single coherent forecasting model. Illustrative calculations based on existing forecasts indicate that the ranges of uncertainty for older cohorts' survival functions will be at a manageable level. Uncertainty ranges for younger cohorts will be larger, and the need for greater precision will likely motivate further model development.
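To make the distinction between process variance and parameter error concrete, the following sketch simulates cohort survival under a deliberately simple random-walk-with-drift model of the log hazard, once with process noise alone and once with drift (parameter) uncertainty added. All names and numbers are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative settings (not calibrated to any real population):
mu0 = 0.01          # baseline hazard at age 65
gompertz = 0.09     # age slope of the log hazard
drift_hat = -0.015  # estimated annual mortality-improvement drift
drift_se = 0.005    # standard error of the drift estimate (parameter error)
sigma_proc = 0.02   # sd of annual shocks to the log hazard (process variance)
horizon, n_sims = 35, 10_000

def simulate_survival(include_param_error):
    """Simulate cohort survival curves S(t) from age 65 to 65 + horizon."""
    if include_param_error:
        drift = drift_hat + drift_se * rng.standard_normal((n_sims, 1))
    else:
        drift = np.full((n_sims, 1), drift_hat)
    shocks = sigma_proc * rng.standard_normal((n_sims, horizon))
    t = np.arange(1, horizon + 1)
    log_mu = np.log(mu0) + gompertz * t + np.cumsum(drift + shocks, axis=1)
    return np.exp(-np.cumsum(np.exp(log_mu), axis=1))  # S(t) = exp(-cum hazard)

for label, flag in [("process variance only", False),
                    ("process + parameter error", True)]:
    S = simulate_survival(flag)
    lo, hi = np.percentile(S[:, -1], [2.5, 97.5])
    print(f"{label}: 95% interval for S(age 100) = [{lo:.3f}, {hi:.3f}]")
```

The second interval should come out wider, illustrating the abstract's point that quantifying process variance alone understates forecast uncertainty; model misspecification error would widen it further and is not captured by either run.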


CUSUM charts for detecting special causes in integrated process control

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 3 2010
Marion R. Reynolds Jr
Abstract This paper investigates control charts for detecting special causes in an ARIMA(0, 1, 1) process that is being adjusted automatically after each observation using a minimum mean-squared error adjustment policy. It is assumed that the adjustment mechanism is designed to compensate for the inherent variation due to the ARIMA(0, 1, 1) process, but it is desirable to detect and eliminate special causes that occur occasionally and produce additional process variation. It is assumed that these special causes can change the process mean, the process variance, the moving average parameter, or the effect of the adjustment mechanism. Expressions are derived for the process deviation from target for all of these process parameter changes. Numerical results are presented for sustained shifts, transient shifts, and sustained drifts in the process parameters. The objective is to find control charts or combinations of control charts that will be effective for detecting special causes that result in any of these types of parameter changes in any or all of the parameters. CUSUM charts designed for detecting specific parameter changes are considered. It is shown that combinations of CUSUM charts that include a CUSUM chart designed to detect mean shifts and a CUSUM chart of squared deviations from target give good overall performance in detecting a wide range of process changes. Copyright © 2009 John Wiley & Sons, Ltd.
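A minimal sketch of the recommended combination, not the paper's exact designs: a two-sided tabular CUSUM for mean shifts paired with a one-sided CUSUM of squared deviations for variance increases. It exploits the fact that under MMSE adjustment of an ARIMA(0, 1, 1) process the in-control deviations from target behave like white noise; the reference values k and decision limits h are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def cusum_mean(x, k=0.5, h=8.0):
    """Two-sided tabular CUSUM on standardized deviations.

    Returns the index of the first signal, or None."""
    cp = cm = 0.0
    for i, xi in enumerate(x):
        cp = max(0.0, cp + xi - k)   # upper-side statistic
        cm = max(0.0, cm - xi - k)   # lower-side statistic
        if cp > h or cm > h:
            return i
    return None

def cusum_squared(x, k=1.5, h=10.0):
    """One-sided CUSUM of squared deviations, aimed at variance increases."""
    c = 0.0
    for i, xi in enumerate(x):
        c = max(0.0, c + xi**2 - k)
        if c > h:
            return i
    return None

n = 500
x = rng.standard_normal(n)   # in-control deviations from target (white noise)
x[200:] += 1.0               # sustained mean shift: a special cause at t = 200
print("mean-shift CUSUM signals at:", cusum_mean(x))
print("squared-deviation CUSUM signals at:", cusum_squared(x))
```

Here the mean-shift chart should flag the change soon after observation 200; a special cause that inflated the process variance instead would be caught first by the squared-deviation chart, which is why the paper favors running both.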


Distributional properties of estimated capability indices based on subsamples

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 2 2003
Kerstin Vännman
Abstract Under the assumption of normality, the distribution of estimators of a class of capability indices, containing the indices Cp, Cpk, and Cpm, is derived when the process parameters are estimated from subsamples. The process mean is estimated using the grand average and the process variance is estimated using the pooled variance from subsamples collected over time for an in-control process. The derived theory is then applied to study the use of hypothesis testing to assess process capability. Numerical investigations are made to explore the effect of the size and number of subsamples on the efficiency of the hypothesis test for some indices in the studied class. The results for Cpk and Cpm indicate that, even when the total number of sampled observations remains constant, the power of the test decreases as the subsample size decreases. It is shown how the power of the test is dependent not only on the subsample size and the number of subsamples, but also on the relative location of the process mean from the target value. As part of this investigation, a simple form of the cumulative distribution function for the non-central t-distribution is also provided. Copyright © 2003 John Wiley & Sons, Ltd.
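The plug-in estimators under study are easy to reproduce. A minimal sketch, assuming equal subsample sizes and illustrative specification limits:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative setup: m subsamples of size n from an in-control process.
USL, LSL = 10.6, 9.4
m, n = 25, 5
data = rng.normal(10.05, 0.15, size=(m, n))

xbar = data.mean()                           # process mean: grand average
s2_pooled = data.var(axis=1, ddof=1).mean()  # pooled variance (equal n case)
s = np.sqrt(s2_pooled)

Cp_hat = (USL - LSL) / (6 * s)
Cpk_hat = min(USL - xbar, xbar - LSL) / (3 * s)
print(f"estimated Cp = {Cp_hat:.3f}, estimated Cpk = {Cpk_hat:.3f}")
```

The abstract's concern is the sampling distribution of exactly these plug-in quantities: holding the total count m*n fixed, a capability test based on them loses power as the subsample size n shrinks, so the subsampling layout, not just the total sample size, matters.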


WARRANT PRICING USING OBSERVABLE VARIABLES

THE JOURNAL OF FINANCIAL RESEARCH, Issue 3 2004
Andrey D. Ukhov
Abstract The classical warrant pricing formula requires knowledge of the firm value and of the firm-value process variance. When warrants are outstanding, the firm value itself is a function of the warrant price. Firm value and firm-value variance are then unobservable variables. I develop an algorithm for pricing warrants using stock prices, an observable variable, and stock return variance. The method also enables estimation of firm-value variance. A proof of existence of the solution is provided.
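A sketch of the underlying fixed-point idea: treat firm value V and firm-value volatility as unknowns, link them to the observed stock price and stock-return volatility, and solve the resulting two-equation system. This assumes one share delivered per warrant and the classical dilution-adjusted Black-Scholes payoff; it illustrates the approach rather than reproducing the article's algorithm, and all inputs are hypothetical.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import fsolve

# Hypothetical inputs (not from the article):
N, M = 1_000_000, 100_000     # shares and warrants outstanding
X, T, r = 50.0, 3.0, 0.04     # warrant strike, maturity (years), risk-free rate
S_obs, sigS_obs = 48.0, 0.35  # observed stock price and stock-return volatility

def warrant_value_and_delta(V, sv):
    """Dilution-adjusted warrant value W and its sensitivity dW/dV.

    With one share per warrant, the payoff per warrant at expiry is
    max(V_T - N*X, 0) / (N + M): a scaled call on firm value with strike N*X.
    """
    d1 = (np.log(V / (N * X)) + (r + 0.5 * sv**2) * T) / (sv * np.sqrt(T))
    d2 = d1 - sv * np.sqrt(T)
    call = V * norm.cdf(d1) - N * X * np.exp(-r * T) * norm.cdf(d2)
    return call / (N + M), norm.cdf(d1) / (N + M)

def equations(z):
    V, sv = z
    W, dWdV = warrant_value_and_delta(V, sv)
    dSdV = (1.0 - M * dWdV) / N                  # since S = (V - M*W) / N
    eq1 = (V - N * S_obs - M * W) / (N * S_obs)  # firm-value identity, scaled
    eq2 = sv * (V / S_obs) * dSdV - sigS_obs     # Ito link between volatilities
    return [eq1, eq2]

V_star, sv_star = fsolve(equations, x0=[1.05 * N * S_obs, sigS_obs])
W_star, _ = warrant_value_and_delta(V_star, sv_star)
print(f"firm value = {V_star:,.0f}, firm volatility = {sv_star:.3f}, "
      f"warrant price = {W_star:.2f}")
```

Solving the pair jointly is what makes the method work with observables only: the stock price pins down the level of V through the accounting identity, while the stock-return volatility pins down the firm-value volatility through the Ito relation, and the article supplies the existence proof for such a solution.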


Allocation of quality improvement targets based on investments in learning

NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 8 2001
Herbert Moskowitz
Abstract Purchased materials often account for more than 50% of a manufacturer's product nonconformance cost. A common strategy for reducing such costs is to allocate periodic quality improvement targets to suppliers of such materials. Improvement target allocations are often accomplished via ad hoc methods such as prescribing a fixed, across-the-board percentage improvement for all suppliers, which, however, may not be the most effective or efficient approach for allocating improvement targets. We propose a formal modeling and optimization approach for assessing quality improvement targets for suppliers, based on process variance reduction. In our models, a manufacturer has multiple product performance measures that are linear functions of a common set of design variables (factors), each of which is an output from an independent supplier's process. We assume that a manufacturer's quality improvement is a result of reductions in supplier process variances, obtained through learning and experience, which require appropriate investments by both the manufacturer and suppliers. Three learning investment (cost) models for achieving a given learning rate are used to determine the allocations that minimize expected costs for both the supplier and manufacturer and to assess the sensitivity of the allocation of quality improvement targets to investment in learning. Solutions for determining optimal learning rates and concomitant quality improvement targets are derived for each learning investment function. We also account for the risk that a supplier may not achieve a targeted learning rate for quality improvements. An extensive computational study is conducted to investigate the differences between optimal variance allocations and a fixed percentage allocation. These differences are examined with respect to (i) variance improvement targets and (ii) total expected cost. For certain types of learning investment models, the results suggest that orders of magnitude differences in variance allocations and expected total costs occur between optimal allocations and those arrived at via the commonly used rule of fixed percentage allocations. However, for learning investments characterized by a quadratic function, there is surprisingly close agreement with an "across-the-board" allocation of 20% quality improvement targets. © John Wiley & Sons, Inc. Naval Research Logistics 48: 684–709, 2001
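The flavor of the comparison can be reproduced with a toy version of the optimization: choose each supplier's variance-reduction fraction to meet a target output variance at minimum total investment, under a quadratic investment cost (one of the shapes the abstract mentions), and compare against a fixed 20% across-the-board cut. All coefficients are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: one performance measure Y = sum(a_i * x_i), so
# Var(Y) = sum(a_i**2 * sigma_i**2) with independent supplier processes.
a      = np.array([1.0, 0.5, 2.0])    # linear sensitivities to each factor
sigma2 = np.array([4.0, 9.0, 1.0])    # current supplier process variances
c      = np.array([10.0, 6.0, 20.0])  # quadratic investment cost coefficients
V0 = float(a**2 @ sigma2)             # current output variance
V_target = 0.8 * V0                   # require a 20% overall variance reduction

def total_cost(rho):
    """Total investment when supplier i cuts its variance by fraction rho_i."""
    return float(c @ rho**2)

cons = {"type": "ineq",
        "fun": lambda rho: V_target - a**2 @ (sigma2 * (1 - rho))}
res = minimize(total_cost, x0=np.full(3, 0.2), constraints=[cons],
               bounds=[(0.0, 0.95)] * 3)

rho_flat = np.full(3, 0.2)            # the across-the-board rule
print("optimal reductions:", np.round(res.x, 3),
      " cost:", round(total_cost(res.x), 2))
print("flat 20% reductions:", rho_flat, " cost:", total_cost(rho_flat))
```

Even in this tiny example the optimizer shifts effort toward suppliers whose variance is influential and cheap to reduce, at lower total cost than the flat rule, though with quadratic costs the two allocations stay fairly close, consistent with the abstract's observation; the large gaps it reports arise under the other learning-investment shapes.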


Efficiency measure, modelling and estimation in combined array designs

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 4 2003
Tak Mak
Abstract In off-line quality control, the settings that minimize the variance of a quality characteristic are unknown and must be determined based on an estimated dual response model of mean and variance. The present paper proposes a direct measure of the efficiency of any given design-estimation procedure for variance minimization. This not only facilitates the comparison of different design-estimation procedures, but may also provide a guideline for choosing a better solution when the estimated dual response model suggests multiple solutions. Motivated by the analysis of an industrial experiment on spray painting, the present paper also applies a class of link functions to model process variances in off-line quality control. For model fitting, a parametric distribution is employed in updating the variance estimates used in an iteratively weighted least squares procedure for mean estimation. In analysing combined array experiments, Engel and Huele (Technometrics, 1996; 39:365) used log-link to model process variances and considered an iteratively weighted least squares leading to the pseudo-likelihood estimates of variances as discussed in Carroll and Ruppert (Transformation and Weighting in Regression, Chapman & Hall: New York). Their method is a special case of the approach considered in this paper. It is seen for the spray paint data that the log-link may not be satisfactory and the class of link functions considered here improves substantially the fit to process variances. This conclusion is reached with a suggested method of comparing ,empirical variances' with the ,theoretical variances' based on the assumed model. Copyright © 2003 John Wiley & Sons, Ltd. [source]