Random Vectors (random + vector)

Selected Abstracts


Initialization Strategies in Simulation-Based SFE Eigenvalue Analysis

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2005
Song Du
Poor initializations often result in slow convergence, and in certain instances may lead to an incorrect or irrelevant answer. The problem of selecting an appropriate starting vector becomes even more complicated when the structure involved is characterized by properties that are random in nature. Here, a good initialization for one sample could be poor for another sample. Thus, proper eigenvector initialization for uncertainty analysis involving Monte Carlo simulations is essential for efficient random eigenvalue analysis. Most simulation procedures to date have been sequential in nature; that is, a random vector describing the structural system is simulated, an FE analysis is conducted, the response quantities are identified by post-processing, and the process is repeated until the standard error in the response of interest is within desired limits. A different approach is to generate all the sample (random) structures prior to performing any FE analysis, sequentially rank order them according to some appropriate measure of distance between the realizations, and perform the FE analyses in that rank order, using the results from the previous analysis as the initialization for the current analysis. The sample structures may also be organized into a tree-type data structure in which each node represents a random sample; the tree is then traversed from the root until every node has been visited exactly once. This approach differs from the sequential ordering approach in that it uses the solution of the "closest" node to initialize the iterative solver. The computational efficiencies that result from such orderings (at a modest expense of additional data storage) are demonstrated through a stability analysis of a system with closely spaced buckling loads and the modal analysis of a simply supported beam. [source]
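
As a rough illustration of the reordering idea described above, the sketch below greedily orders simulated parameter vectors by Euclidean distance and warm-starts an inverse-iteration eigensolver with the previous sample's eigenvector. The stiffness model, distance measure, and solver are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: toy stiffness model, greedy nearest-neighbour ordering,
# and inverse iteration warm-started from the previous sample's eigenvector.
import numpy as np

def greedy_order(samples):
    """Greedily order parameter vectors so consecutive samples are close."""
    remaining = list(range(len(samples)))
    order = [remaining.pop(0)]
    while remaining:
        last = samples[order[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(samples[i] - last))
        remaining.remove(nxt)
        order.append(nxt)
    return order

def inverse_iteration(K, v0, tol=1e-10, max_iter=500):
    """Smallest eigenpair of a symmetric positive definite K, warm-started from v0."""
    v = v0 / np.linalg.norm(v0)
    lam = v @ K @ v
    for _ in range(max_iter):
        w = np.linalg.solve(K, v)            # one inverse-iteration step
        v_new = w / np.linalg.norm(w)
        lam_new = v_new @ K @ v_new
        if abs(lam_new - lam) < tol * abs(lam_new):
            return lam_new, v_new
        v, lam = v_new, lam_new
    return lam, v

rng = np.random.default_rng(0)
n_dof, n_samples = 20, 200
base = np.diag(2.0 * np.ones(n_dof)) - np.diag(np.ones(n_dof - 1), 1) - np.diag(np.ones(n_dof - 1), -1)
thetas = rng.uniform(0.9, 1.1, size=(n_samples, n_dof))   # random stiffness factors, kept positive

order = greedy_order(thetas)
v_prev = rng.standard_normal(n_dof)          # arbitrary start for the first sample only
for i in order:
    s = np.sqrt(thetas[i])
    K = base * np.outer(s, s)                # sample stiffness matrix (diagonal scaling of base)
    lam, v_prev = inverse_iteration(K, v_prev)   # warm start from the "previous" solution
```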


An empirical method for inferring species richness from samples

ENVIRONMETRICS, Issue 2 2006
Paul A. Murtaugh
Abstract We introduce an empirical method of estimating the number of species in a community based on a random sample. The numbers of sampled individuals of different species are modeled as a multinomial random vector, with cell probabilities estimated by the relative abundances of species in the sample and, for hypothetical species missing from the sample, by linear extrapolation from the abundance of the rarest observed species. Inference is then based on likelihoods derived from the multinomial distribution, conditioning on a range of possible values of the true richness in the community. The method is shown to work well in simulations based on a variety of real data sets. Copyright © 2005 John Wiley & Sons, Ltd. [source]
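
The following sketch shows one plausible reading of the conditional-likelihood calculation; the linear-extrapolation rule for unseen species and the renormalisation step are assumptions made for illustration, not the paper's exact construction.

```python
# Rough sketch of the idea in the abstract; the extrapolation and renormalisation
# choices below are assumptions, not the published method.
import numpy as np
from scipy.stats import multinomial

def profile_log_likelihood(counts, S):
    """Multinomial log-likelihood of the sample given a candidate true richness S."""
    counts = np.sort(np.asarray(counts))[::-1]          # observed species, most to least abundant
    k, n = len(counts), counts.sum()
    if S < k:
        return -np.inf
    p_obs = counts / n                                   # relative abundances in the sample
    n_miss = S - k
    # abundances of unseen species: assumed linear decline below the rarest observed species
    p_miss = p_obs[-1] * (np.arange(n_miss, 0, -1) / (n_miss + 1)) if n_miss else np.array([])
    p = np.concatenate([p_obs, p_miss])
    p /= p.sum()                                         # renormalise to a probability vector
    x = np.concatenate([counts, np.zeros(n_miss, dtype=int)])
    return multinomial.logpmf(x, n=n, p=p)

counts = [120, 45, 30, 12, 5, 3, 1, 1]                   # toy sample with 8 observed species
loglik = {S: profile_log_likelihood(counts, S) for S in range(8, 30)}
best = max(loglik, key=loglik.get)                       # richness value best supported by the sample
```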


Construction of Exact Simultaneous Confidence Bands for a Simple Linear Regression Model

INTERNATIONAL STATISTICAL REVIEW, Issue 1 2008
Wei Liu
Summary. A simultaneous confidence band provides a variety of inferences on the unknown components of a regression model. There are several recent papers using confidence bands for various inferential purposes; see, for example, Sun et al. (1999), Spurrier (1999), Al-Saidy et al. (2003), Liu et al. (2004), Bhargava & Spurrier (2004), Piegorsch et al. (2005) and Liu et al. (2007). Construction of simultaneous confidence bands for a simple linear regression model has a rich history, going back to the work of Working & Hotelling (1929). The purpose of this article is to consolidate the disparate modern literature on simultaneous confidence bands in linear regression, and to provide expressions for the construction of exact 1 − α level simultaneous confidence bands for a simple linear regression model of either one-sided or two-sided form. We center attention on the three most recognized shapes: hyperbolic, two-segment, and three-segment (which is also referred to as a trapezoidal shape and includes a constant-width band as a special case). Some of these expressions have already appeared in the statistics literature, and some are newly derived in this article. The derivations typically involve a standard bivariate t random vector and its polar coordinate transformation. [source]
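
For context, the sketch below computes the classical two-sided hyperbolic (Working & Hotelling, 1929) band over the whole covariate axis; the exact one-sided and restricted-interval bands derived in the article use different critical constants obtained from the bivariate t representation.

```python
# Classical Working-Hotelling two-sided hyperbolic simultaneous band, shown as
# background only; it is not the article's restricted-interval construction.
import numpy as np
from scipy import stats

def working_hotelling_band(x, y, grid, alpha=0.05):
    """Lower/upper limits of the 1 - alpha hyperbolic simultaneous band on grid."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    xbar = x.mean()
    Sxx = np.sum((x - xbar) ** 2)
    b1 = np.sum((x - xbar) * (y - y.mean())) / Sxx        # least squares slope
    b0 = y.mean() - b1 * xbar                              # least squares intercept
    s2 = np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2)        # error variance estimate
    crit = np.sqrt(2 * stats.f.ppf(1 - alpha, 2, n - 2))   # simultaneous critical constant
    fit = b0 + b1 * grid
    half = crit * np.sqrt(s2 * (1.0 / n + (grid - xbar) ** 2 / Sxx))
    return fit - half, fit + half

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 30)
y = 1.0 + 0.5 * x + rng.normal(scale=0.8, size=x.size)
grid = np.linspace(0, 10, 200)
lower, upper = working_hotelling_band(x, y, grid)
```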


Structural learning with time-varying components: tracking the cross-section of financial time series

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 3 2005
Makram Talih
Summary. When modelling multivariate financial data, the problem of structural learning is compounded by the fact that the covariance structure changes with time. Previous work has focused on modelling those changes by using multivariate stochastic volatility models. We present an alternative to these models that focuses instead on the latent graphical structure that is related to the precision matrix. We develop a graphical model for sequences of Gaussian random vectors in which changes in the underlying graph occur at random times, each change creating a new block of data through the addition or deletion of an edge. We show how a Bayesian hierarchical model incorporates both the uncertainty about that graph and the time variation thereof. [source]
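
A toy simulation of the data-generating setting described above is sketched below; the graphs, edge weights, and change time are made-up illustrations, not the authors' model or priors.

```python
# Toy data generator: Gaussian vectors whose precision-matrix graph changes once,
# by adding an edge, at a random time. Illustrative assumptions throughout.
import numpy as np

rng = np.random.default_rng(2)
p, T = 4, 300

def precision_from_edges(edges, diag=2.0, weight=0.6):
    """Build a diagonally dominant (hence positive definite) precision matrix from an edge list."""
    Q = np.eye(p) * diag
    for i, j in edges:
        Q[i, j] = Q[j, i] = weight
    return Q

edges_before = [(0, 1), (1, 2)]
edges_after = edges_before + [(2, 3)]            # one edge added at the change point
tau = rng.integers(low=50, high=T - 50)          # random change time

samples = np.empty((T, p))
for t in range(T):
    Q = precision_from_edges(edges_before if t < tau else edges_after)
    samples[t] = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Q))
```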


Random vectors satisfying Khinchine–Kahane type inequalities for linear and quadratic forms

MATHEMATISCHE NACHRICHTEN, Issue 9 2005
Jesús Bastero
Abstract We study the behaviour of moments of order p (1 < p < ∞) of affine and quadratic forms with respect to non-log-concave measures and we obtain an extension of the Khinchine–Kahane inequality for new families of random vectors by using Pisier's inequalities for martingales. As a consequence, we get some estimates for the moments of affine and quadratic forms with respect to a tail volume of the unit ball of ℓ_q^n (0 < q < 1). (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
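
As background only, the following sketch numerically illustrates the classical Khinchine inequality for Rademacher linear forms (a log-concave special case); it does not reproduce the non-log-concave extension obtained in the paper.

```python
# Numerical illustration of the classical Khinchine inequality for Rademacher sums;
# background only, not the paper's result.
import numpy as np

rng = np.random.default_rng(3)
n, trials, p = 50, 100_000, 4
a = rng.standard_normal(n)                        # fixed coefficient vector
eps = rng.choice([-1.0, 1.0], size=(trials, n))   # Rademacher signs
S = eps @ a                                       # linear form sum_i a_i * eps_i

lhs = np.mean(np.abs(S) ** p) ** (1 / p)          # empirical L^p norm of the linear form
rhs = np.sqrt(np.sum(a ** 2))                     # exact L^2 norm: the Euclidean norm of a
print(f"||S||_{p} / ||S||_2 = {lhs / rhs:.3f}")   # bounded by a constant B_p not depending on a or n
```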


Nonparametric smoothing using state space techniques

THE CANADIAN JOURNAL OF STATISTICS, Issue 1 2001
Patrick E. Brown
Abstract The authors examine the equivalence between penalized least squares and state space smoothing using random vectors with infinite variance. They show that despite infinite variance, many time series techniques for estimation, significance testing, and diagnostics can be used. The Kalman filter can be used to fit penalized least squares models, computing the smoothed quantities and related values. Infinite variance is equivalent to differencing to stationarity, and to adding explanatory variables. The authors examine constructs called "smoothations" which they show to be fundamental in smoothing. Applications illustrate concepts and methods. [source]
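
A small numerical check of the stated equivalence is sketched below: a first-difference penalized least squares fit is reproduced by the Kalman smoother of a local level model with a numerically diffuse (near-infinite-variance) initial state. The penalty, data, and diffuse approximation are illustrative assumptions, not the authors' examples.

```python
# Penalized least squares vs. Kalman smoothing of a local level model with a
# numerically diffuse initial state; toy data and penalty chosen for illustration.
import numpy as np

rng = np.random.default_rng(4)
T, lam = 100, 25.0                                   # series length and smoothing penalty
y = np.sin(np.linspace(0, 3 * np.pi, T)) + 0.3 * rng.standard_normal(T)

# Penalized least squares: minimise ||y - mu||^2 + lam * ||D mu||^2, D = first difference
D = np.diff(np.eye(T), axis=0)
mu_pls = np.linalg.solve(np.eye(T) + lam * D.T @ D, y)

# Equivalent local level model: observation variance 1, state variance 1/lam, diffuse start
r, q = 1.0, 1.0 / lam
a_pred, P_pred = np.zeros(T), np.zeros(T)            # one-step-ahead moments
a_filt, P_filt = np.zeros(T), np.zeros(T)            # filtered moments
a, P = 0.0, 1e8                                      # numerically diffuse initial state
for t in range(T):
    a_pred[t], P_pred[t] = a, P
    K = P / (P + r)                                  # Kalman gain
    a, P = a + K * (y[t] - a), (1 - K) * P           # filtering update
    a_filt[t], P_filt[t] = a, P
    P = P + q                                        # prediction step for t + 1 (random walk state)

mu_kf = np.zeros(T)                                  # fixed-interval (RTS) smoother
mu_kf[-1] = a_filt[-1]
for t in range(T - 2, -1, -1):
    C = P_filt[t] / (P_filt[t] + q)                  # smoother gain
    mu_kf[t] = a_filt[t] + C * (mu_kf[t + 1] - a_pred[t + 1])

print(np.max(np.abs(mu_kf - mu_pls)))                # agree up to the diffuse approximation
```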


Legendre polynomial kernel estimation of a density function with censored observations and an application to clinical trials

COMMUNICATIONS ON PURE & APPLIED MATHEMATICS, Issue 8 2007
Simeon M. Berman
Let f(x), x ∈ ℝ^M, M ≥ 1, be a density function on ℝ^M, and X1, …, Xn a sample of independent random vectors with this common density. For a rectangle B in ℝ^M, suppose that the X's are censored outside B, that is, the value Xk is observed only if Xk ∈ B. The restriction of f(x) to x ∈ B is clearly estimable by established methods on the basis of the censored observations. The purpose of this paper is to show how to extrapolate a particular estimator, based on the censored sample, from the rectangle B to a specified rectangle C containing B. The results are stated explicitly for M = 1, 2, and are directly extendible to M ≥ 3. For M = 2, the extrapolation from the rectangle B to the rectangle C is extended to the case where B and C are triangles. This is done by means of an elementary mapping of the positive quarter-plane onto the strip {(u, v): 0 ≤ u ≤ 1, v > 0}. This particular extrapolation is applied to the estimation of the survival distribution based on censored observations in clinical trials. It represents a generalization of a method proposed in 2001 by the author [2]. The extrapolator has the following form: For m ≥ 1 and n ≥ 1, let Km,n(x) be the classical kernel estimator of f(x), x ∈ B, based on the orthonormal Legendre polynomial kernel of degree m and a sample of n observed vectors censored outside B. The main result, stated in the cases M = 1, 2, is an explicit bound for E|Km,n(x) − f(x)| for x ∈ C, which represents the expected absolute error of extrapolation to C. It is shown that the extrapolator is a consistent estimator of f(x), x ∈ C, if f is sufficiently smooth and if m and n both tend to ∞ in a way that n increases sufficiently rapidly relative to m. © 2006 Wiley Periodicals, Inc. [source]
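
The sketch below implements only the classical orthonormal Legendre polynomial kernel estimator on a censoring interval B = [-1, 1] for M = 1; the extrapolation to a larger rectangle C, which is the paper's contribution, is not reproduced.

```python
# Orthogonal-series (Legendre kernel) density estimate on B = [-1, 1]; with a censored
# sample this targets the conditional density f(x | X in B). Toy data for illustration.
import numpy as np
from numpy.polynomial import legendre

def legendre_kernel_density(x, sample, m):
    """Degree-m Legendre polynomial kernel estimate evaluated at points x in [-1, 1]."""
    x = np.atleast_1d(np.asarray(x, float))
    est = np.zeros_like(x)
    for j in range(m + 1):
        coeff = np.zeros(j + 1)
        coeff[j] = 1.0                                 # selects the degree-j Legendre polynomial
        norm = (2 * j + 1) / 2.0                       # orthonormalising constant on [-1, 1]
        pj_x = legendre.legval(x, coeff)
        pj_sample = legendre.legval(sample, coeff)
        est += norm * pj_x * pj_sample.mean()          # term j of (1/n) * sum_k K_m(x, X_k)
    return est

rng = np.random.default_rng(5)
full = rng.beta(2.0, 3.0, size=2000) * 4.0 - 2.0       # underlying sample on [-2, 2]
censored = full[np.abs(full) <= 1.0]                   # only observations inside B are retained
grid = np.linspace(-1, 1, 101)
fhat = legendre_kernel_density(grid, censored, m=6)
```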