Distributional Properties

Selected Abstracts


Distributional properties of estimated capability indices based on subsamples

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 2 2003
Kerstin Vännman
Abstract Under the assumption of normality, the distribution of estimators of a class of capability indices, containing the indices , , and , is derived when the process parameters are estimated from subsamples. The process mean is estimated using the grand average and the process variance is estimated using the pooled variance from subsamples collected over time for an in-control process. The derived theory is then applied to study the use of hypothesis testing to assess process capability. Numerical investigations are made to explore the effect of the size and number of subsamples on the efficiency of the hypothesis test for some indices in the studied class. The results for and indicate that, even when the total number of sampled observations remains constant, the power of the test decreases as the subsample size decreases. It is shown how the power of the test is dependent not only on the subsample size and the number of subsamples, but also on the relative location of the process mean from the target value. As part of this investigation, a simple form of the cumulative distribution function for the non-central -distribution is also provided. Copyright © 2003 John Wiley & Sons, Ltd. [source]
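The estimation scheme described above, a grand average for the process mean and a pooled within-subsample variance for the process spread, can be sketched in a few lines. The specific indices in the studied class are not legible in this copy of the abstract, so the choice of Cpk below, and the specification limits, are illustrative assumptions only:

```python
import numpy as np

def cpk_from_subsamples(subsamples, lsl, usl):
    """Estimate a capability index (Cpk is used here as an illustrative
    member of the class) from m subsamples of size n.

    Process mean: grand average of all observations.
    Process sigma: square root of the pooled within-subsample variance.
    """
    data = np.asarray(subsamples)                 # shape (m, n)
    grand_mean = data.mean()
    # pooled variance with equal subsample sizes: mean of sample variances
    pooled_var = data.var(axis=1, ddof=1).mean()
    sigma = np.sqrt(pooled_var)
    return min(usl - grand_mean, grand_mean - lsl) / (3.0 * sigma)

# hypothetical in-control process: 25 subsamples of size 5
rng = np.random.default_rng(0)
samples = rng.normal(loc=10.0, scale=0.5, size=(25, 5))
print(round(cpk_from_subsamples(samples, lsl=8.0, usl=12.0), 2))
```

Splitting the same total number of observations into smaller subsamples leaves this point estimate roughly unchanged but, as the abstract notes, reduces the power of tests based on it.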


German-learning infants' ability to detect unstressed closed-class elements in continuous speech

DEVELOPMENTAL SCIENCE, Issue 2 2003
Barbara Höhle
The paper reports on two experiments with the head-turn preference method which provide evidence that German-learning infants recognize unstressed closed-class lexical elements in continuous speech at 7 to 9 months, but not yet at 6 months. These findings support the view that even preverbal children are able to compute at least phonological representations for closed-class functional elements. They also suggest that these elements must be available to the child's language-learning mechanisms from very early on, allowing the child to make use of the distributional properties of closed-class lexical elements for further top-down analysis of the linguistic input, e.g. segmentation and syntactic categorization. [source]


A score for Bayesian genome screening

GENETIC EPIDEMIOLOGY, Issue 3 2003
E. Warwick Daw
Abstract Bayesian Markov chain Monte Carlo (MCMC) techniques have shown promise in dissecting complex genetic traits. The methods introduced by Heath ([1997], Am. J. Hum. Genet. 61:748-760), and implemented in the program Loki, have been able to localize genes for complex traits in both real and simulated data sets. Loki estimates the posterior probability of quantitative trait loci (QTL) at locations on a chromosome in an iterative MCMC process. Unfortunately, interpretation of the results and assessment of their significance have been difficult. Here, we introduce a score, the log of the posterior placement probability ratio (LOP), for assessing oligogenic QTL detection and localization. The LOP is the log of the posterior probability of linkage to the real chromosome divided by the posterior probability of linkage to an unlinked pseudochromosome, with marker informativeness similar to the marker data on the real chromosome. Since the LOP cannot be calculated exactly, we estimate it in simultaneous MCMC on both real and pseudochromosomes. We investigate empirically the distributional properties of the LOP in the presence and absence of trait genes. The LOP is not subject to trait model misspecification in the way a lod score may be, and we show that the LOP can detect linkage for loci of small effect when the lod score cannot. We show how, in the absence of linkage, an empirical distribution of the LOP may be estimated by simulation and used to provide an assessment of linkage detection significance. Genet Epidemiol 24:181-190, 2003. © 2003 Wiley-Liss, Inc. [source]
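The LOP reduces to a simple ratio once the MCMC output is summarized. The sketch below assumes each iteration is scored for whether a QTL is placed on the real chromosome and on the pseudochromosome, and assumes a base-10 logarithm (the abstract does not state the base); both are our assumptions, not Loki's internals:

```python
import math

def lop_score(real_hits, pseudo_hits, n_iter):
    """Monte Carlo estimate of the LOP: log of the posterior probability
    of linkage to the real chromosome over the posterior probability of
    linkage to an unlinked pseudochromosome of similar informativeness.
    """
    p_real = real_hits / n_iter      # estimated P(linkage | real chromosome)
    p_pseudo = pseudo_hits / n_iter  # estimated P(linkage | pseudochromosome)
    return math.log10(p_real / p_pseudo)

# hypothetical iteration counts from a joint run over both chromosomes
print(round(lop_score(real_hits=420, pseudo_hits=35, n_iter=1000), 2))
```

As in the abstract, a null distribution for this score would come from simulating unlinked data, not from an analytic formula.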


Continuous-time models, realized volatilities, and testable distributional implications for daily stock returns

JOURNAL OF APPLIED ECONOMETRICS, Issue 2 2010
Torben G. Andersen
We provide an empirical framework for assessing the distributional properties of daily speculative returns within the context of the continuous-time jump diffusion models traditionally used in asset pricing finance. Our approach builds directly on recently developed realized variation measures and non-parametric jump detection statistics constructed from high-frequency intra-day data. A sequence of simple-to-implement moment-based tests involving various transformations of the daily returns speaks directly to the importance of different distributional features, and may serve as a useful diagnostic tool in the specification of empirically more realistic continuous-time asset pricing models. On applying the tests to the 30 individual stocks in the Dow Jones Industrial Average index, we find that it is important to allow for time-varying diffusive volatility, jumps, and leverage effects to satisfactorily describe the daily stock price dynamics. Copyright © 2009 John Wiley & Sons, Ltd. [source]
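The realized variation measures such tests build on can be illustrated with two standard quantities computed from intra-day returns: realized variance, which picks up both diffusive and jump variation, and bipower variation, which is robust to jumps. The relative jump measure below is one common way to contrast them; the function names and parameter values are ours, not the paper's:

```python
import numpy as np

def realized_variance(returns):
    """Realized variance: sum of squared intra-day returns."""
    r = np.asarray(returns)
    return np.sum(r ** 2)

def bipower_variation(returns):
    """Bipower variation: scaled sum of products of adjacent absolute
    returns; mu1 = E|Z| = sqrt(2/pi) for standard normal Z."""
    r = np.abs(np.asarray(returns))
    mu1 = np.sqrt(2.0 / np.pi)
    return mu1 ** -2 * np.sum(r[1:] * r[:-1])

def relative_jump(returns):
    """Relative jump measure (RV - BV) / RV: near zero for a pure
    diffusion, markedly positive on days containing jumps."""
    rv = realized_variance(returns)
    return (rv - bipower_variation(returns)) / rv

# hypothetical day of 390 one-minute returns, then the same day with a jump
rng = np.random.default_rng(1)
smooth_day = rng.normal(0.0, 0.001, 390)
jump_day = smooth_day.copy()
jump_day[200] += 0.05
print(relative_jump(smooth_day), relative_jump(jump_day))
```

Moment-based daily-return tests of the kind described above then ask whether returns, suitably standardized by such measures, behave as the candidate continuous-time model predicts.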


Econometric modelling of non-ferrous metal prices

JOURNAL OF ECONOMIC SURVEYS, Issue 5 2004
Clinton Watkins
Abstract This article evaluates the significance of the empirical models and the distributional properties of prices in non-ferrous metal spot and futures markets published in leading refereed economics and finance journals between 1980 and 2002. The survey focuses on econometric analyses of pricing and return models applied to exchange-based spot and futures markets for the main industrially used non-ferrous metals, namely aluminium, copper, lead, nickel, tin and zinc. Published empirical research is evaluated in the light of the type of contract examined, frequency of data used, choice of both dependent and explanatory variables, use of proxy variables, type of model chosen, economic hypotheses tested, methods of estimation and calculation of standard errors for inference, reported descriptive statistics, use of diagnostic tests of auxiliary assumptions, use of nested and non-nested tests, use of information criteria and empirical implications for non-ferrous metals. [source]


Mathematical Model for Vinyl-Divinyl Polymerization

MACROMOLECULAR REACTION ENGINEERING, Issue 6 2007
Seda Kizilel
Abstract A mathematical model for the crosslinking copolymerization of a vinyl and a divinyl monomer was developed and applied to the batch polymerization of methyl methacrylate with ethylene glycol dimethacrylate. Model results compare favorably with the experimental findings of Li and Hamielec [23] for the system investigated. The model utilizes the numerical fractionation technique [15] and is capable of predicting a broad range of distributional properties for both pre- and post-gel operating conditions, as well as polymer properties that Li and Hamielec did not determine experimentally, such as crosslink density and branching frequency. The effects of divinyl monomer fraction and chain transfer agent level on the polymer properties and the dynamics of gelation were also investigated. [source]


EXPONENTIAL SMOOTHING AND NON-NEGATIVE DATA

AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 4 2009
Muhammad Akram
Summary The most common forecasting methods in business are based on exponential smoothing, and the most common time series in business are inherently non-negative. Therefore it is of interest to consider the properties of the potential stochastic models underlying exponential smoothing when applied to non-negative data. We explore exponential smoothing state space models for non-negative data under various assumptions about the innovations, or error, process. We first demonstrate that prediction distributions from some commonly used state space models may have an infinite variance beyond a certain forecasting horizon. For multiplicative error models that do not have this flaw, we show that sample paths will converge almost surely to zero even when the error distribution is non-Gaussian. We propose a new model with similar properties to exponential smoothing, but which does not have these problems, and we develop some distributional properties for our new model. We then explore the implications of our results for inference, and compare the short-term forecasting performance of the various models using data on the weekly sales of over 300 items of costume jewelry. The main findings of the research are that the Gaussian approximation is adequate for estimation and one-step-ahead forecasting. However, as the forecasting horizon increases, the approximate prediction intervals become increasingly problematic. When the model is to be used for simulation purposes, a suitably specified scheme must be employed. [source]
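The almost-sure convergence of multiplicative-error sample paths toward zero can be seen in a toy simulation of a local-level model, ETS(M,N,N). The Gaussian errors truncated below at -1 (to keep the series positive) and the parameter values are our illustrative assumptions, not the paper's specification:

```python
import numpy as np

def simulate_mnn(level0, alpha, sigma, horizon, rng):
    """Simulate a local-level exponential smoothing model with
    multiplicative error:
        y_t = l_{t-1} * (1 + e_t)
        l_t = l_{t-1} * (1 + alpha * e_t),   e_t ~ N(0, sigma^2),
    with e_t truncated below at -1 so the path stays positive.
    """
    l = level0
    path = np.empty(horizon)
    for t in range(horizon):
        e = max(rng.normal(0.0, sigma), -0.999)  # keep 1 + e > 0
        path[t] = l * (1.0 + e)
        l *= 1.0 + alpha * e
    return path

rng = np.random.default_rng(2)
path = simulate_mnn(level0=100.0, alpha=0.5, sigma=0.3, horizon=5000, rng=rng)
print(path[-1])
```

Because E[log(1 + alpha * e)] < 0, the log-level drifts downward without bound, so over a long enough horizon the simulated series collapses toward zero even though every value stays strictly positive, which is the behavior the summary describes for multiplicative error models.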


Parsimony overcomes statistical inconsistency with the addition of more data from the same gene

CLADISTICS, Issue 5 2005
Kurt M. Pickett
Many authors have demonstrated that the parsimony method of phylogenetic analysis can fail to estimate phylogeny accurately under certain conditions when data follow a model that stipulates homogeneity of the evolutionary process. These demonstrations further show that no matter how much data are added, parsimony will forever exhibit this statistical inconsistency if the additional data have the same distributional properties as the original data. This final component, that the additional data must follow the same distribution as the original data, is crucial to the demonstration. Recent simulations show, however, that if data evolve heterogeneously, parsimony can perform consistently. Here we show, using natural data, that parsimony can overcome inconsistency if new data from the same gene are added to an analysis already exhibiting a condition indistinguishable from inconsistency. © The Willi Hennig Society 2005. [source]
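The kind of inconsistency at issue, long-branch attraction in the "Felsenstein zone", can be reproduced in a toy simulation. For four taxa, parsimony prefers whichever 2-2 split is supported by the most informative sites, so counting site patterns suffices. The binary substitution model, the branch change probabilities, and that counting shortcut are all our illustrative assumptions, not the authors' analysis:

```python
import random

def simulate_site(p_long, p_short, p_internal, rng):
    """One binary character on the unrooted 4-taxon tree AB|CD, with the
    long branches leading to A and C (the classic Felsenstein zone)."""
    flip = lambda s, p: 1 - s if rng.random() < p else s
    n1 = 0                          # state at internal node 1
    n2 = flip(n1, p_internal)       # state across the internal branch
    return (flip(n1, p_long),       # A: long branch
            flip(n1, p_short),      # B: short branch
            flip(n2, p_long),       # C: long branch
            flip(n2, p_short))      # D: short branch

def parsimony_poll(n_sites, rng, p_long=0.4, p_short=0.05, p_internal=0.05):
    """Count parsimony-informative sites supporting each 2-2 split;
    for four taxa, parsimony picks the split with the most support."""
    support = {"AB|CD": 0, "AC|BD": 0, "AD|BC": 0}
    for _ in range(n_sites):
        a, b, c, d = simulate_site(p_long, p_short, p_internal, rng)
        if a == b != c == d:
            support["AB|CD"] += 1   # matches the true tree
        elif a == c != b == d:
            support["AC|BD"] += 1   # long-branch attraction
        elif a == d != b == c:
            support["AD|BC"] += 1
    return support

rng = random.Random(42)
print(parsimony_poll(5000, rng))
```

With these parameters, parallel changes on the two long branches outnumber genuine signal for the true tree, so "AC|BD" dominates and adding more sites with the same distributional properties only entrenches the wrong tree, which is the homogeneous-data premise the abstract says its natural-data result escapes.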