Random Variables (random + variable)


Selected Abstracts


    Miguel López-Díaz
    Summary A new univariate stochastic ordering is introduced. Some characterization results for such an ordering are stated. It is proved that the ordering is an integral stochastic ordering, obtaining a maximal generator. By means of this generator, the main properties of the ordering are deduced. A method for introducing univariate stochastic orderings, suggested by the new ordering, is analysed. Relationships with other stochastic orderings are also developed. To conclude, an example of an application of the new ordering to the field of medicine is proposed. [source]


    Han-Ying Liang
    Summary In this paper, we establish strong laws for weighted sums of negatively associated (NA) random variables which have a higher-order moment condition. Some results of Bai Z.D. & Cheng P.E. (2000) [Marcinkiewicz strong laws for linear statistics. Statist. and Probab. Lett. 43, 105–112] and Sung S.K. (2001) [Strong laws for weighted sums of i.i.d. random variables. Statist. and Probab. Lett. 52, 413–419] are sharpened and extended from the independent identically distributed case to the NA setting. Also, one of the results of Li D.L. et al. (1995) [Complete convergence and almost sure convergence of weighted sums of random variables. J. Theoret. Probab. 8, 49–76] is complemented and extended. [source]
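
The flavour of such strong laws can be checked numerically. Below is a minimal sketch of a weighted strong law in the i.i.d. baseline case only (simulating genuinely negatively associated sequences is more involved); the weights and seed are arbitrary choices, not from the paper:

```python
import numpy as np

# Running weighted average (1/n) * sum_i a_i * X_i of centred random
# variables with bounded deterministic weights; it converges to zero
# almost surely as n grows.
rng = np.random.default_rng(42)
n = 200_000
x = rng.standard_normal(n)                    # centred, finite-moment sample
a = 1.0 + 0.5 * np.sin(np.arange(n))          # bounded deterministic weights
running = np.cumsum(a * x) / np.arange(1, n + 1)
```

For this sample size the terminal value of `running` is already very close to zero.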

    Probability and Random Variables by G. P. Beaumont

    Sreenivasan Ravi
    No abstract is available for this article. [source]

    Comparative Statics for a Decision Model with Two Choice Variables and Two Random Variables: A Mean–Standard Deviation Approach

    Gyemyung Choi
    The decision model with two choice variables and two random variables, an extension of Feder, or Just and Pope, is examined here. Under the mean–standard deviation framework we derive the comparative statics results concerning the effects of a change in the mean, variance and covariance of the random parameters. Some restrictions on the random variables and on the decision model are considered for unambiguous comparative statics predictions. [source]

    Tests for an Epidemic Change in a Sequence of Exponentially Distributed Random Variables

    Asoka Ramanayake
    Abstract Consider a sequence of independent exponential random variables that is susceptible to a change in the means. We would like to test whether the means have been subjected to an epidemic change after an unknown point, for an unknown duration in the sequence. The likelihood ratio statistic and a likelihood ratio type statistic are derived. The distribution theories and related properties of the test statistics are discussed. Percentage points and powers of the tests are tabulated for selected values of the parameters. The powers of these two tests are then compared to the two statistics proposed by Aly and Bouzar. The tests are applied to find epidemic changes in the set of Stanford heart transplant data and air traffic arrival data. [source]
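
The likelihood-ratio scan behind such tests can be sketched as follows. This is a simplified illustration, not the paper's exact statistic: `epidemic_lr` is a hypothetical helper that scans every window of length at least two for a shift in the exponential mean, and the data are synthetic:

```python
import numpy as np

def epidemic_lr(x):
    """Maximum log-likelihood-ratio statistic over all epidemic windows
    [i, j) of length >= 2 for a shift in the mean of independent
    exponential observations (a simplified sketch)."""
    n = len(x)
    total = x.sum()
    grand_mean = total / n
    best = 0.0
    for i in range(n - 1):
        for j in range(i + 2, n + 1):
            m = j - i
            if m == n:
                continue                      # full-sequence window is the null model
            inside = x[i:j].mean()
            outside = (total - x[i:j].sum()) / (n - m)
            stat = 2.0 * (n * np.log(grand_mean)
                          - m * np.log(inside)
                          - (n - m) * np.log(outside))
            best = max(best, stat)
    return best

rng = np.random.default_rng(3)
null = rng.exponential(1.0, size=60)          # constant mean throughout
epidemic = null.copy()
epidemic[20:45] *= 5.0                        # elevated mean inside the window
stat_null = epidemic_lr(null)
stat_epi = epidemic_lr(epidemic)
```

The statistic is nonnegative by construction and responds strongly when an epidemic window is actually present.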

    Marginal Mark Regression Analysis of Recurrent Marked Point Process Data

    BIOMETRICS, Issue 2 2009
    Benjamin French
    Summary Longitudinal studies typically collect information on the timing of key clinical events and on specific characteristics that describe those events. Random variables that measure qualitative or quantitative aspects associated with the occurrence of an event are known as marks. Recurrent marked point process data consist of possibly recurrent events, with the mark (and possibly exposure) measured if and only if an event occurs. Analysis choices depend on which aspect of the data is of primary scientific interest. First, factors that influence the occurrence or timing of the event may be characterized using recurrent event analysis methods. Second, if there is more than one event per subject, then the association between exposure and the mark may be quantified using repeated measures regression methods. We detail assumptions required of any time-dependent exposure process and the event time process to ensure that linear or generalized linear mixed models and generalized estimating equations provide valid estimates. We provide theoretical and empirical evidence that if these conditions are not satisfied, then an independence estimating equation should be used for consistent estimation of association. We conclude with the recommendation that analysts carefully explore both the exposure and event time processes prior to implementing a repeated measures analysis of recurrent marked point process data. [source]

    Identification and Estimation of Regression Models with Misclassification

    ECONOMETRICA, Issue 3 2006
    Aprajit Mahajan
    This paper studies the problem of identification and estimation in nonparametric regression models with a misclassified binary regressor where the measurement error may be correlated with the regressors. We show that the regression function is nonparametrically identified in the presence of an additional random variable that is correlated with the unobserved true underlying variable but unrelated to the measurement error. Identification for semiparametric and parametric regression functions follows straightforwardly from the basic identification result. We propose a kernel estimator based on the identification strategy, derive its large sample properties, and discuss alternative estimation procedures. We also propose a test for misclassification in the model based on an exclusion restriction that is straightforward to implement. [source]

    Performance analysis of optically preamplified DC-coupled burst mode receivers

    T. J. Zuo
    Bit error rate and threshold acquisition penalty evaluation is performed for an optically preamplified DC-coupled burst mode receiver using a moment generating function (MGF) description of the signal plus noise. The threshold itself is a random variable and is also described using an appropriate MGF. Chernoff bound (CB), modified Chernoff bound (MCB) and the saddle-point approximation (SPA) techniques make use of the MGF to provide the performance analyses. This represents the first time that these widely used approaches to receiver performance evaluation have been applied to an optically preamplified burst mode receiver and it is shown that they give threshold acquisition penalty results in good agreement with a prior existing approach, whilst having the facility to incorporate arbitrary receiver filtering, receiver thermal noise and non-ideal extinction ratio. A traditional Gaussian approximation (GA) is also calculated and comparison shows that it is clearly less accurate (it exceeds the upper bounds provided by CB and MCB) in the realistic cases examined. It is deduced, in common with the equivalent continuous mode analysis, that the MCB is the most sensible approach. Copyright © 2009 John Wiley & Sons, Ltd. [source]
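
The bound-from-MGF idea is easy to illustrate in isolation. Below is a generic Chernoff-bound sketch for a standard normal decision variable, not the preamplified-receiver model of the paper (whose MGF is more complicated); the grid is an arbitrary choice:

```python
import numpy as np

def chernoff_tail_bound(a, mgf, s_grid):
    """Chernoff bound P(X > a) <= min_s exp(-s*a) * M(s), minimized on a grid."""
    return np.min(np.exp(-s_grid * a) * mgf(s_grid))

s = np.linspace(1e-3, 10.0, 4001)
gaussian_mgf = lambda t: np.exp(t ** 2 / 2.0)    # MGF of N(0, 1)
bound = chernoff_tail_bound(3.0, gaussian_mgf, s)
# the optimum is at s = a, giving exp(-a^2/2) = exp(-4.5), a valid upper
# bound on the true tail probability P(X > 3) ~ 0.00135
```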

    An analysis of P times reported in the Reviewed Event Bulletin for Chinese underground explosions

    A. Douglas
    SUMMARY Analysis of variance is used to estimate the measurement error and path effects in the P times reported in the Reviewed Event Bulletins (REBs, produced by the provisional International Data Center, Arlington, USA) and in times we have read, for explosions at the Chinese Test Site. Path effects are those differences between traveltimes calculated from tables and the true times that result in epicentre error. The main conclusions of the study are: (1) the estimated variance of the measurement error for P times reported in the REB at large signal-to-noise ratio (SNR) is 0.04 s², the bulk of the readings being analyst-adjusted automatic-detections, whereas for our times the variance is 0.01 s², and (2) the standard deviation of the path effects for both sets of observations is about 0.6 s. The study shows that measurement error is about twice (~0.2 s rather than ~0.1 s) and path effects about half the values assumed for the REB times. However, uncertainties in the estimated epicentres are poorly described by treating path effects as a random variable with a normal distribution. Only by estimating path effects and using these to correct onset times can reliable estimates of epicentre uncertainty be obtained. There is currently an international programme to do just this. The results imply that with P times from explosions at three or four stations with good SNR (so that the measurement error is around 0.1 s) and well distributed in azimuth, then with correction for path effects the area of the 90 per cent coverage ellipse should be much less than 1000 km² (the area allowed for an on-site inspection under the Comprehensive Test Ban Treaty) and should cover the true epicentre with the given probability. [source]

    Adaptive preconditioning of linear stochastic algebraic systems of equations

    Y. T. Feng
    Abstract This paper proposes an adaptively preconditioned iterative method for the solution of large-scale linear stochastic algebraic systems of equations with one random variable that arise from the stochastic finite element modelling of linear elastic problems. Firstly, a rank-one a posteriori preconditioner is introduced for a general linear system of equations. This concept is then developed into an effective adaptive preconditioning scheme for the iterative solution of the stochastic equations in the context of a modified Monte Carlo simulation approach. To limit the maximum number of base vectors used in the scheme, a simple selection criterion is proposed to update the base vectors. Finally, numerical experiments are conducted to assess the performance of the proposed adaptive preconditioning strategy, which indicate that the scheme with very few base vectors can improve the convergence of the standard incomplete Cholesky preconditioning by up to 50%. Copyright © 2006 John Wiley & Sons, Ltd. [source]
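
Cheap application of a rank-one-corrected preconditioner rests on the Sherman-Morrison identity. The following is a generic sketch with a diagonal base matrix and hypothetical values, not the paper's scheme:

```python
import numpy as np

def apply_rank_one_preconditioner(Minv_diag, u, v, r):
    """Apply (M + u v^T)^{-1} to r via Sherman-Morrison, where M is
    diagonal with elementwise inverse Minv_diag. O(n) per application."""
    Minv_r = Minv_diag * r
    Minv_u = Minv_diag * u
    return Minv_r - Minv_u * (v @ Minv_r) / (1.0 + v @ Minv_u)

d = np.array([2.0, 3.0, 4.0, 5.0, 6.0])        # diagonal of M
u = np.array([1.0, 0.0, 2.0, 0.0, 1.0])        # rank-one correction vectors
v = np.array([0.5, 1.0, 0.0, 1.0, 0.5])
r = np.arange(1.0, 6.0)                         # residual to precondition

fast = apply_rank_one_preconditioner(1.0 / d, u, v, r)
dense = np.linalg.solve(np.diag(d) + np.outer(u, v), r)   # reference answer
```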

    On generalized stochastic perturbation-based finite element method

    Marcin Kamiński
    Abstract A generalized nth-order stochastic perturbation technique that can be applied to solve boundary-value or initial-value problems in computational physics and/or engineering with random parameters is proposed here. This technique is demonstrated in conjunction with the finite element method (FEM) to model a 1D linear elastostatics problem with a single random variable. A symbolic computer program is employed to perform computational studies on the convergence of the first two probabilistic moments for simple unidirectional tension of a bar. These numerical studies verify the influence of the coefficient of variation of the random input and, at the same time, of the perturbation parameter on the first two probabilistic moments of the final solution vector. Copyright © 2005 John Wiley & Sons, Ltd. [source]
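
The second-order perturbation idea can be sketched for the unidirectional bar tension example. All parameter values below are hypothetical; the second-order mean estimate E[u] ≈ u(E0) + u''(E0)σ²/2 is checked against plain Monte Carlo:

```python
import numpy as np

# Bar elongation u(E) = P*L/(A*E) with random Young's modulus E.
P, L, A = 1_000.0, 2.0, 1e-4                  # load [N], length [m], area [m^2]
E0, cov = 2.1e11, 0.10                        # mean modulus [Pa], coeff. of variation
sigma = cov * E0

u = lambda E: P * L / (A * E)
d2u = 2.0 * P * L / (A * E0 ** 3)             # u''(E) evaluated at E0
mean_second_order = u(E0) + 0.5 * d2u * sigma ** 2

rng = np.random.default_rng(7)
samples = E0 * (1.0 + cov * rng.standard_normal(400_000))
mean_monte_carlo = u(samples).mean()
```

At this coefficient of variation the two mean estimates agree to a fraction of a percent.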

    Reliability-based design optimization with equality constraints

    Xiaoping Du
    Abstract Equality constraints have been well studied and widely used in deterministic optimization, but they have rarely been addressed in reliability-based design optimization (RBDO). The inclusion of an equality constraint in RBDO results in dependency among random variables. Theoretically, one random variable can be substituted in terms of the remaining random variables given an equality constraint; the equality constraint can then be eliminated. However, in practice, eliminating an equality constraint may be difficult or impossible because of complexities such as coupling, recursion, high dimensionality, non-linearity, implicit formats, and high computational costs. The objective of this work is to develop a methodology to model equality constraints and a numerical procedure to solve an RBDO problem with equality constraints. Equality constraints are classified into demand-based type and physics-based type. A sequential optimization and reliability analysis strategy is used to solve RBDO with physics-based equality constraints. The first-order reliability method is employed for reliability analysis. The proposed method is illustrated by a mathematical example and a two-member frame design problem. Copyright © 2007 John Wiley & Sons, Ltd. [source]

    A robust design method using variable transformation and Gauss–Hermite integration

    Beiqing Huang
    Abstract Robust design seeks an optimal solution where the design objective is insensitive to the variations of input variables while the design feasibility under the variations is maintained. Accurate robustness assessment for both design objective and feasibility usually requires an intensive computational effort. In this paper, an accurate robustness assessment method with a moderate computational effort is proposed. The numerical Gauss–Hermite integration technique is employed to calculate the mean and standard deviation of the objective and constraint functions. To effectively use the Gauss–Hermite integration technique, a transformation from a general random variable into a normal variable is performed. The Gauss–Hermite integration and the transformation result in concise formulas and produce an accurate approximation to the mean and standard deviation. This approach is then incorporated into the framework of robust design optimization. The design of a two-bar truss and an automobile torque arm is used to demonstrate the effectiveness of the proposed method. The results are compared with the commonly used Taylor expansion method and Monte Carlo simulation in terms of accuracy and efficiency. Copyright © 2005 John Wiley & Sons, Ltd. [source]
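
The quadrature step can be sketched as follows, assuming the input is already a normal variable (for a general random variable one would first transform it to a normal one, as the paper describes). The helper name and node count are arbitrary:

```python
import numpy as np

def gauss_hermite_mean_std(g, mu, sigma, n=12):
    """Mean and standard deviation of g(X), X ~ N(mu, sigma^2), via
    Gauss-Hermite quadrature with the change of variable x = mu + sqrt(2)*sigma*t."""
    t, w = np.polynomial.hermite.hermgauss(n)   # nodes/weights for weight exp(-t^2)
    x = mu + np.sqrt(2.0) * sigma * t
    gx = g(x)
    m1 = (w * gx).sum() / np.sqrt(np.pi)
    m2 = (w * gx ** 2).sum() / np.sqrt(np.pi)
    return m1, np.sqrt(m2 - m1 ** 2)

m, s = gauss_hermite_mean_std(lambda x: x ** 2, 0.0, 1.0)
# exact values for X ~ N(0,1): E[X^2] = 1 and sd(X^2) = sqrt(2)
```

For polynomial responses the quadrature is exact up to floating-point error.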

    Accurate and time efficient estimation of the probability of error in bursty channels,

    M. Stevan Berber
    Abstract A method and a technique for the probability of error estimation in digital channels with memory are developed and demonstrated. The expressions for the mean and variance of a random variable, representing a block of bits transmission in a bursty channel (channel with memory), are derived. The influence of the memory is expressed by a parameter called the memory factor. It is shown that the traditional Monte Carlo method can be applied for the probability of error estimation. In order to control the accuracy and increase the time efficiency of estimation this method is modified and a new method, called the modified Monte Carlo method, is proposed. Based on this modified method a technique of estimation with controlled accuracy is developed and demonstrated using data obtained by simulation. According to this technique the sample size is adjusted in the course of the estimation procedure to give an accurate estimate of the probability of error for a minimum required time of estimation. Copyright © 2003 John Wiley & Sons, Ltd. [source]
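
The sample-size adjustment idea can be sketched for a memoryless channel; the paper's modified method additionally models burstiness through the memory factor. Parameter values and the stopping rule below are illustrative assumptions:

```python
import numpy as np

def estimate_error_probability(p_true, rel_half_width=0.1, z=1.96,
                               batch=20_000, seed=11):
    """Monte Carlo estimate of a bit-error probability, growing the sample
    until the 95% confidence half-width falls below rel_half_width * p_hat.
    A memoryless-channel sketch only."""
    rng = np.random.default_rng(seed)
    errors, trials = 0, 0
    while True:
        errors += int(rng.binomial(batch, p_true))
        trials += batch
        if errors < 50:
            continue                      # need enough errors for a stable estimate
        p_hat = errors / trials
        half = z * np.sqrt(p_hat * (1.0 - p_hat) / trials)
        if half <= rel_half_width * p_hat:
            return p_hat, trials

p_hat, n_used = estimate_error_probability(1e-3)
```

The required sample size scales like 1/p, which is exactly why controlling it adaptively matters for small error probabilities.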

    Asymmetric power distribution: Theory and applications to risk measurement

    Ivana Komunjer
    Theoretical literature in finance has shown that the risk of financial time series can be well quantified by their expected shortfall, also known as the tail value-at-risk. In this paper, I construct a parametric estimator for the expected shortfall based on a flexible family of densities, called the asymmetric power distribution (APD). The APD family extends the generalized power distribution to cases where the data exhibits asymmetry. The first contribution of the paper is to provide a detailed description of the properties of an APD random variable, such as its quantiles and expected shortfall. The second contribution of the paper is to derive the asymptotic distribution of the APD maximum likelihood estimator (MLE) and construct a consistent estimator for its asymptotic covariance matrix. The latter is based on the APD score whose analytic expression is also provided. A small Monte Carlo experiment examines the small sample properties of the MLE and the empirical coverage of its confidence intervals. An empirical application to four daily financial market series reveals that returns tend to be asymmetric, with innovations which cannot be modeled by either Laplace (double-exponential) or Gaussian distribution, even if we allow the latter to be asymmetric. In an out-of-sample exercise, I compare the performances of the expected shortfall forecasts based on the APD-GARCH, Skew-t-GARCH and GPD-EGARCH models. While the GPD-EGARCH 1% expected shortfall forecasts seem to outperform the competitors, all three models perform equally well at forecasting the 5% and 10% expected shortfall. Copyright © 2007 John Wiley & Sons, Ltd. [source]
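
For orientation, the quantity being estimated can be computed nonparametrically. Below is a minimal empirical expected-shortfall sketch (a benchmark only, not the paper's APD-based estimator); the Gaussian sample is synthetic:

```python
import numpy as np

def expected_shortfall(returns, alpha=0.05):
    """Empirical expected shortfall: mean return beyond the alpha-quantile."""
    var_alpha = np.quantile(returns, alpha)
    return returns[returns <= var_alpha].mean()

rng = np.random.default_rng(5)
sample = rng.standard_normal(1_000_000)
es = expected_shortfall(sample, alpha=0.05)
# for N(0,1) the exact 5% expected shortfall is -phi(z_0.05)/0.05, about -2.063
```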

    Statistical thermodynamics of internal rotation in a hindering potential of mean force obtained from computer simulations

    Vladimir Hnizdo
    Abstract A method of statistical estimation is applied to the problem of one-dimensional internal rotation in a hindering potential of mean force. The hindering potential, which may have a completely general shape, is expanded in a Fourier series, the coefficients of which are estimated by fitting an appropriate statistical-mechanical distribution to the random variable of internal rotation angle. The function of reduced moment of inertia of an internal rotation is averaged over the thermodynamic ensemble of atomic configurations of the molecule obtained in stochastic simulations. When quantum effects are not important, an accurate estimate of the absolute internal rotation entropy of a molecule with a single rotatable bond is obtained. When there is more than one rotatable bond, the "marginal" statistical-mechanical properties corresponding to a given internal rotational degree of freedom are educed. The method is illustrated using Monte Carlo simulations of two public health relevant halocarbon molecules, each having a single internal-rotation degree of freedom, and a molecular dynamics simulation of an immunologically relevant polypeptide, in which several dihedral angles are analyzed. © 2003 Wiley Periodicals, Inc. J Comput Chem 24: 1172–1183, 2003 [source]

    Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model

    Wen-Chung Wang
    This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random-effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random coefficients multinomial logit model (MRCMLM) (Adams, Wilson, & Wang, 1997) so that the estimation procedures for the MRCMLM can be directly applied. The results of the simulation indicated that when the data were generated from the RSM, using the RSM and the RE-RSM to fit the data made little difference: both resulting in accurate parameter recovery. When the data were generated from the RE-RSM, using the RE-RSM to fit the data resulted in unbiased estimates, whereas using the RSM resulted in biased estimates, large fit statistics for the thresholds, and inflated test reliability. An empirical example of 10 items with four-point rating scales was illustrated in which four models were compared: the RSM, the RE-RSM, the partial credit model (Masters, 1982), and the constrained random-effects partial credit model. In this real data set, the need for a random-effects formulation becomes clear. [source]

    Consistent poverty comparisons and inference

    Channing Arndt
    Keywords: poverty measurement; entropy estimation; revealed preferences; bootstrap
    Abstract Building upon the cost of basic needs (CBN) approach, an integrated approach to making consumption-based poverty comparisons is presented. This approach contains two principal modifications to the standard CBN approach. The first permits the development of multiple poverty bundles that are utility consistent. The second recognizes that the poverty line itself is a random variable whose variation influences the degree of confidence in poverty measures. We illustrate the empirical importance of these two methodological changes for the case of Mozambique. With utility consistency imposed, estimated poverty rates tend to be systematically higher in rural areas and lower in urban areas. We also find that the true confidence intervals on the poverty estimates (those incorporating poverty line variance) tend to be considerably larger than those that ignore poverty line variance. Finally, we show that these two methodological changes interact. Specifically, we find that imposing utility consistency on poverty bundles tends to tighten confidence intervals, sometimes dramatically, on provincial poverty estimates. We conclude that this revised approach represents an important advance in poverty analysis. The revised approach is straightforward and directly applicable in empirical work. [source]


    ABSTRACT Six topical formulations were evaluated by a trained panel according to a descriptive analysis methodology and by a group of consumers who rated the products on a hedonic scale. We present a new approach that describes the categorical appreciation of appearance, texture and skinfeel of the formulations by the consumers as a function of related sensory attributes assessed by the trained panel. For each hedonic attribute, a latent random variable depending on the sensory attributes is constructed and made discrete (in a nonlinear fashion) according to the distribution of consumer-hedonic scores in such a way as to minimize a corresponding chi-square criterion. Standard partial least squares (PLS) regression, bootstrapping and cross-validation techniques describing the overall liking of the hedonic attributes as a function of associated sensory attributes were also applied. Results from both methods were compared, and it was concluded that chi-square minimization can work as a complementary method to the PLS regression. [source]

    A latent Gaussian model for compositional data with zeros

    Adam Butler
    Summary. Compositional data record the relative proportions of different components within a mixture and arise frequently in many fields. Standard statistical techniques for the analysis of such data assume the absence of proportions which are genuinely zero. However, real data can contain a substantial number of zero values. We present a latent Gaussian model for the analysis of compositional data which contain zero values, which is based on assuming that the data arise from a (deterministic) Euclidean projection of a multivariate Gaussian random variable onto the unit simplex. We propose an iterative algorithm to simulate values from this model and apply the model to data on the proportions of fat, protein and carbohydrate in different groups of food products. Finally, evaluation of the likelihood involves the calculation of difficult integrals if the number of components is more than 3, so we present a hybrid Gibbs rejection sampling scheme that can be used to draw inferences about the parameters of the model when the number of components is arbitrarily large. [source]
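
The deterministic projection step of such a model is a standard algorithm. Below is a sketch of the Euclidean projection onto the unit simplex via the usual sorting method (illustrative input values); note how the projection generates exact zeros:

```python
import numpy as np

def project_onto_simplex(y):
    """Euclidean projection of y onto {x : x_i >= 0, sum_i x_i = 1}."""
    u = np.sort(y)[::-1]                    # sorted in decreasing order
    css = np.cumsum(u)
    k = np.arange(1, len(y) + 1)
    rho = np.nonzero(u + (1.0 - css) / k > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(y + theta, 0.0)

# a latent Gaussian-style draw mapped to a composition (illustrative values)
comp = project_onto_simplex(np.array([0.6, -0.2, 0.9]))
```

Components pushed below zero by the shift are clipped to exactly zero, which is precisely the mechanism the model uses to produce zero proportions.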

    Explosive Random-Coefficient AR(1) Processes and Related Asymptotics for Least-Squares Estimation

    S. Y. Hwang
    Abstract. Large sample properties of the least-squares and weighted least-squares estimates of the autoregressive parameter of the explosive random-coefficient AR(1) process are discussed. It is shown that, contrary to the standard AR(1) case, the least-squares estimator is inconsistent whereas the weighted least-squares estimator is consistent and asymptotically normal even when the error process is not necessarily Gaussian. Conditional asymptotics on the event that a certain limiting random variable is non-zero is also discussed. [source]

    Duality, income and substitution effects for the competitive firm under price uncertainty

    Carmen F. Menezes
    This paper uses duality theory to decompose the total effect on the competitive firm's output of an increase in the riskiness of output price into income and substitution effects. Properties of preferences that control the sign of each effect are identified. The analysis extends to the general class of quasi-linear decision models in which the payoff is linear in the random variable. Copyright © 2005 John Wiley & Sons, Ltd. [source]

    Pricing training and development programs using stochastic CVP analysis

    James A. Yunker
    This paper sets forth, analyzes and applies a stochastic cost-volume-profit (CVP) model specifically geared toward the determination of enrollment fees for training and development (T+D) programs. It is a simpler model than many of those developed in the research literature, but it does incorporate one advanced component: an 'economic' demand function relating the expected sales level to price. Price is neither a constant nor a random variable in this model but rather the decision-maker's basic control variable. The simplicity of the model permits analytical solutions for five 'special prices': (1) the highest price which sets breakeven probability equal to a minimum acceptable level; (2) the price which maximizes expected profits; (3) the price which maximizes a Cobb–Douglas utility function based on expected profits and breakeven probability; (4) the price which maximizes breakeven probability; and (5) the lowest price which sets breakeven probability equal to a minimum acceptable level. The model is applied to data provided by the Center for Management and Professional Development at the authors' university. The results suggest that there could be a significant payoff to fine-tuning a T+D provider's pricing strategy using formal analysis. Copyright © 2005 John Wiley & Sons, Ltd. [source]
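
Two of the 'special prices' can be recovered numerically under assumed functional forms. The sketch below uses a linear expected-demand function and normal demand noise; every parameter value is hypothetical, and the grid search simply stands in for the paper's analytical solutions:

```python
import math

# Hypothetical parameters: expected enrolment q(p) = a - b*p, unit cost c,
# fixed cost F, and normally distributed demand noise with std. dev. sd.
a, b, c, F, sd = 100.0, 2.0, 10.0, 400.0, 8.0

def expected_profit(p):
    return (p - c) * (a - b * p) - F

def breakeven_probability(p):
    """P((p - c) * Q >= F) with Q ~ N(a - b*p, sd^2), for p > c."""
    needed = F / (p - c)
    z = (needed - (a - b * p)) / sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))

prices = [c + 0.01 * k for k in range(1, 4000)]
p_profit = max(prices, key=expected_profit)        # price maximizing expected profit
p_safe = max(prices, key=breakeven_probability)    # price maximizing breakeven prob.
# analytically, expected profit peaks at p* = (a + b*c) / (2b) = 30 and the
# breakeven probability peaks at p = c + sqrt(F/b)
```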

    Time-to-market, window of opportunity, and salvageability of a new product development

    A. Messica
    The time-to-market in the presence of a window of opportunity is analyzed using a probabilistic model, i.e. a model where the completion time of new product development is a random variable characterized by a gamma distribution. Two cases are considered: the first, a case where the discounted return-on-investment exceeds the return expected from a conservative investment (e.g. investment in bonds), termed 'the profitable case'; and the second, a case where the discounted return-on-investment just balances the cost of new product development, termed 'the salvageable case'. The model constructed is focused on the financial aspects of new product development. It allows a decision-maker to monitor, as well as terminate, a project based on its expected value (at any time prior to completion) by computing the mean time-to-market that provides profit, investment salvage, or loss. The mean time-to-market computed by the model may be compared with that estimated by the technology development team for decision-making purposes. Finally, in the presence of a window of opportunity and for the specific cases analyzed, we recommend always keeping the expenditure rate lower than the expected return rate. This will provide the decision-maker a salvageable exit opportunity if project termination is decided. Copyright © 2002 John Wiley & Sons, Ltd. [source]

    Risk-sensitive sizing of responsive facilities

    Sergio Chayet
    Abstract We develop a risk-sensitive strategic facility sizing model that makes use of readily obtainable data and addresses both capacity and responsiveness considerations. We focus on facilities whose original size cannot be adjusted over time and limits the total production equipment they can hold, which is added sequentially during a finite planning horizon. The model is parsimonious by design for compatibility with the nature of available data during early planning stages. We model demand via a univariate random variable with arbitrary forecast profiles for equipment expansion, and assume the supporting equipment additions are continuous and decided ex-post. Under constant absolute risk aversion, operating profits are the closed-form solution to a nontrivial linear program, thus characterizing the sizing decision via a single first-order condition. This solution has several desired features, including the optimal facility size being eventually decreasing in forecast uncertainty and decreasing in risk aversion, as well as being generally robust to demand forecast uncertainty and cost errors. We provide structural results and show that ignoring risk considerations can lead to poor facility sizing decisions that deteriorate with increased forecast uncertainty. Existing models ignore risk considerations and assume the facility size can be adjusted over time, effectively shortening the planning horizon. Our main contribution is in addressing the problem that arises when that assumption is relaxed and, as a result, risk sensitivity and the challenges introduced by longer planning horizons and higher uncertainty must be considered. Finally, we derive accurate spreadsheet-implementable approximations to the optimal solution, which make this model a practical capacity planning tool. © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008 [source]

    The statistics of refractive error maps: managing wavefront aberration analysis without Zernike polynomials

    D. Robert Iskander
    Abstract The refractive error of a human eye varies across the pupil and therefore may be treated as a random variable. The probability distribution of this random variable provides a means for assessing the main refractive properties of the eye without the necessity of traditional functional representation of wavefront aberrations. To demonstrate this approach, the statistical properties of refractive error maps are investigated. Closed-form expressions are derived for the probability density function (PDF) and its statistical moments for the general case of rotationally-symmetric aberrations. A closed-form expression for a PDF for a general non-rotationally symmetric wavefront aberration is difficult to derive. However, for specific cases, such as astigmatism, a closed-form expression of the PDF can be obtained. Further, interpretation of the distribution of the refractive error map as well as its moments is provided for a range of wavefront aberrations measured in real eyes. These are evaluated using a kernel density and sample moments estimators. It is concluded that the refractive error domain allows non-functional analysis of wavefront aberrations based on simple statistics in the form of its sample moments. Clinicians may find this approach to wavefront analysis easier to interpret due to the clinical familiarity and intuitive appeal of refractive error maps. [source]

    A statistical model of the aberration structure of normal, well-corrected eyes

    Larry N. Thibos
    Abstract A statistical model of the wavefront aberration function of the normal, well-corrected eye was constructed based on normative data from 200 eyes which show that, apart from spherical aberration, the higher-order aberrations of the human eye tend to be randomly distributed about a mean value of zero. The vector of Zernike aberration coefficients describing the aberration function for any individual eye was modelled as a multivariate, Gaussian, random variable with known mean, variance and covariance. The model was verified by analysing the statistical properties of 1000 virtual eyes generated by the model. Potential applications of the model include computer simulation of individual variation in aberration structure, retinal image quality, visual performance, benefit of novel designs of ophthalmic lenses, or outcome of refractive surgery. [source]

    Variation mode and effect analysis: an application to fatigue life prediction

    Pär Johannesson
    Abstract We present an application of the probabilistic branch of variation mode and effect analysis (VMEA) implemented as a first-order, second-moment reliability method. First order means that the failure function is approximated to be linear around the nominal values with respect to the main influencing variables, while second moment means that only means and variances are taken into account in the statistical procedure. We study the fatigue life of a jet engine component and aim at a safety margin that takes all sources of prediction uncertainties into account. Scatter is defined as random variation due to natural causes, such as non-homogeneous material, geometry variation within tolerances, load variation in usage, and other uncontrolled variations. Other uncertainties are unknown systematic errors, such as model errors in the numerical calculation of fatigue life, statistical errors in estimates of parameters, and unknown usage profile. By treating also systematic errors as random variables, the whole safety margin problem is put into a common framework of second-order statistics. The final estimated prediction variance of the logarithmic life is obtained by summing the variance contributions of all sources of scatter and other uncertainties, and it represents the total uncertainty in the life prediction. Motivated by the central limit theorem, this logarithmic life random variable may be regarded as normally distributed, which gives possibilities to calculate relevant safety margins. Copyright © 2008 John Wiley & Sons, Ltd. [source]
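
The variance bookkeeping at the heart of VMEA is simple to sketch. The contribution values below are hypothetical standard deviations of logarithmic life, not the jet-engine figures from the paper:

```python
import math

# VMEA-style uncertainty budget: standard deviations of log-life from each
# scatter / systematic-error source (hypothetical values).
sources = {
    "material scatter": 0.05,
    "geometry within tolerance": 0.03,
    "load variation in usage": 0.08,
    "fatigue-model error": 0.10,
}
total_var = sum(s ** 2 for s in sources.values())   # variances add across sources
total_sd = math.sqrt(total_var)                     # total prediction uncertainty
safety_margin = math.exp(2.0 * total_sd)            # e.g. a 2-sigma factor on life
```

Treating systematic errors as random variables is what lets the scatter and model-error terms be summed in one second-moment budget like this.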

    A note on stochastically acceptable quality

    Max S. Finkelstein
    Abstract A random variable of quality, characterized by the quality distribution function, is considered both for continuous and discrete cases. The acceptable quality distribution function is introduced and different types of stochastic comparison are discussed: comparison in the mean, ordering in distribution, variability ordering and hazard-rate ordering. Acceptable, unacceptable and intermediate regions for levels of quality are determined by the discretization procedure. It is assumed that the quality of a product is revealed through usage. A simple reliability setting is considered when the quality of a product is directly dependent on its reliability characteristics in usage. Possible generalizations are discussed. Copyright © 2001 John Wiley & Sons, Ltd. [source]
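One of the comparisons the abstract lists, ordering in distribution (the usual stochastic order), is easy to check for discrete quality distributions: one variable is stochastically smaller than another exactly when its CDF lies everywhere at or above the other's. The probabilities below are invented for illustration.

```python
import numpy as np

# Invented discrete quality distributions on levels 0..4:
# p_x is a hypothetical acceptable quality distribution,
# p_y a hypothetical observed product quality distribution.
p_x = np.array([0.05, 0.10, 0.20, 0.30, 0.35])
p_y = np.array([0.10, 0.20, 0.25, 0.25, 0.20])

cdf_x = np.cumsum(p_x)
cdf_y = np.cumsum(p_y)

# Y is stochastically smaller than X iff CDF_Y >= CDF_X at every level;
# this implies, in particular, comparison in the mean: E[Y] <= E[X].
stochastically_smaller = bool(np.all(cdf_y >= cdf_x))
print(stochastically_smaller)
```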

    An affine-invariant multivariate sign test for cluster correlated data

    Denis Larocque
    Abstract The author presents a multivariate location model for cluster correlated observations. He proposes an affine-invariant multivariate sign statistic for testing the value of the location parameter. His statistic is an adaptation of that proposed by Randles (2000). The author shows, under very mild conditions, that his test statistic is asymptotically distributed as a chi-squared random variable under the null hypothesis. In particular, the test can be used for skewed populations. In the context of a general multivariate normal model, the author obtains values of his test's Pitman asymptotic efficiency relative to another test based on the overall average. He shows that there is an improvement in the relative performance of the new test as soon as intra-cluster correlation is present. Even in the univariate case, the new test can be very competitive for Gaussian data. Furthermore, the statistic is easy to compute, even for large dimensional data. The author shows through simulations that his test performs well compared to the average-based test. He illustrates its use with real data. [source]
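The flavour of such a sign test can be sketched in a simplified form: without the affine-invariant transformation of Randles (2000) and ignoring the cluster structure, the spatial signs of the observations are averaged and the resulting statistic is referred to a chi-squared distribution. This is only a stripped-down illustration, not the author's statistic; the data are simulated.

```python
import numpy as np

# Simplified spatial sign test of H0: location = 0 (no affine-invariant
# transformation, no cluster correction). Simulated null data.
rng = np.random.default_rng(1)
n, d = 500, 3
x = rng.standard_normal((n, d))

norms = np.linalg.norm(x, axis=1, keepdims=True)
u = x / norms                      # spatial signs on the unit sphere
stat = n * d * np.sum(u.mean(axis=0) ** 2)

# Under H0 and spherical symmetry, stat is approximately chi-squared
# with d degrees of freedom; 7.815 is the 95% critical value for d = 3.
print(round(stat, 3), stat < 7.815)
```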