Covariance Matrix (covariance + matrix)

Selected Abstracts


EFFECTS OF MIGRATION ON THE GENETIC COVARIANCE MATRIX

EVOLUTION, Issue 10 2007
Frédéric Guillaume
In 1996, Schluter showed that the direction of morphological divergence of closely related species is biased toward the line of least genetic resistance, represented by gmax, the leading eigenvector of the matrix of genetic variance-covariance (the G-matrix). G is used to predict the direction of evolutionary change in natural populations. However, this usage requires that G is sufficiently constant over time to have enough predictive significance. Here, we explore the alternative explanation that G can evolve due to gene flow to conform to the direction of divergence between incipient species. We use computer simulations in a mainland-island migration model with stabilizing selection on two quantitative traits. We show that a high level of gene flow from a mainland population is required to significantly affect the orientation of the G-matrix in an island population. The changes caused by the introgression of the mainland alleles into the island population affect all aspects of the shape of G (size, eccentricity, and orientation) and lead to the alignment of gmax with the line of divergence between the two populations' phenotypic optima. Those changes decrease with increased correlation in mutational effects and with correlated selection. Our results suggest that high migration rates, such as those often seen at the intraspecific level, will substantially affect the shape and orientation of G, whereas low migration (e.g., at the interspecific level) is unlikely to substantially affect the evolution of G. [source]
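As a concrete illustration of the quantities discussed above, the following minimal numpy sketch extracts gmax as the leading eigenvector of a G-matrix and measures its alignment with the line of divergence between two populations' phenotypic optima. The matrices and the helper names are hypothetical illustrations, not the simulation code used in the study.

```python
import numpy as np

def gmax(G):
    """Leading eigenvector of a genetic variance-covariance matrix G."""
    eigvals, eigvecs = np.linalg.eigh(G)      # eigh: G is symmetric
    return eigvecs[:, np.argmax(eigvals)]     # eigenvector of the largest eigenvalue

def alignment_angle(G, divergence):
    """Angle (degrees) between gmax and the line of divergence between optima."""
    v = gmax(G)
    d = divergence / np.linalg.norm(divergence)
    cos_theta = abs(v @ d)                    # sign of an eigenvector is arbitrary
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Illustrative two-trait G-matrix and a divergence vector between phenotypic optima
G_island = np.array([[1.0, 0.6],
                     [0.6, 0.8]])
delta_optima = np.array([1.0, 0.2])
print(alignment_angle(G_island, delta_optima))
```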


Separable approximations of space-time covariance matrices

ENVIRONMETRICS, Issue 7 2007
Marc G. Genton
Abstract Statistical modeling of space-time data has often been based on separable covariance functions, that is, covariances that can be written as a product of a purely spatial covariance and a purely temporal covariance. The main reason is that the structure of separable covariances dramatically reduces the number of parameters in the covariance matrix and thus facilitates computational procedures for large space-time data sets. In this paper, we discuss separable approximations of nonseparable space-time covariance matrices. Specifically, we describe the nearest Kronecker product approximation, in the Frobenius norm, of a space-time covariance matrix. The algorithm is simple to implement and the solution preserves properties of the space-time covariance matrix, such as symmetry, positive definiteness, and other structures. The separable approximation allows for fast kriging of large space-time data sets. We present several illustrative examples based on an application to data of Irish wind speeds, showing that only small differences in prediction error arise while computational savings for large data sets can be obtained. Copyright © 2007 John Wiley & Sons, Ltd. [source]
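The nearest Kronecker product approximation described here can be computed with the rearrangement trick of Van Loan and Pitsianis, in which the best separable factor pair follows from a rank-1 SVD. The sketch below is a minimal, dense-matrix illustration of that idea with function names of our choosing; it does not reproduce the authors' implementation and enforces no structure beyond what the rank-1 step yields.

```python
import numpy as np

def nearest_kronecker(C, p, q):
    """Nearest (Frobenius norm) Kronecker product A (p x p) kron B (q x q) to C (pq x pq),
    via the Van Loan-Pitsianis rearrangement and a rank-1 SVD."""
    # Rearrange C so that each q x q block becomes one row of R.
    R = np.empty((p * p, q * q))
    for i in range(p):
        for j in range(p):
            block = C[i*q:(i+1)*q, j*q:(j+1)*q]
            R[i * p + j, :] = block.reshape(-1)          # row-major vec of the block
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(p, p)            # purely temporal factor
    B = np.sqrt(s[0]) * Vt[0, :].reshape(q, q)           # purely spatial factor
    if np.trace(B) < 0:                                   # fix the SVD sign ambiguity
        A, B = -A, -B
    return A, B
```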


THE EVOLVABILITY OF GROWTH FORM IN A CLONAL SEAWEED

EVOLUTION, Issue 12 2009
Keyne Monro
Although modular construction is considered the key to adaptive growth or growth-form plasticity in sessile taxa (e.g., plants, seaweeds and colonial invertebrates), the serial expression of genes in morphogenesis may compromise its evolutionary potential if growth forms emerge as integrated wholes from module iteration. To explore the evolvability of growth form in the red seaweed Asparagopsis armata, we estimated genetic variances, covariances, and cross-environment correlations for principal components of growth-form variation in contrasting light environments. We compared variance-covariance matrices across environments to test environmental effects on heritable variation and examined the potential for evolutionary change in the direction of plastic responses to light. Our results suggest that growth form in Asparagopsis may constitute only a single genetic entity whose plasticity affords only limited evolutionary potential. We argue that morphological integration arising from modular construction may constrain the evolvability of growth form in Asparagopsis, emphasizing the critical distinction between genetic and morphological modularity in this and other modular taxa. [source]


Geodetic imaging: reservoir monitoring using satellite interferometry

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2002
D. W. Vasco
Summary Fluid fluxes within subsurface reservoirs give rise to surface displacements, particularly over periods of a year or more. Observations of such deformation provide a powerful tool for mapping fluid migration within the Earth, providing new insights into reservoir dynamics. In this paper we use Interferometric Synthetic Aperture Radar (InSAR) range changes to infer subsurface fluid volume strain at the Coso geothermal field. Furthermore, we conduct a complete model assessment, using an iterative approach to compute model parameter resolution and covariance matrices. The method is a generalization of a Lanczos-based technique which allows us to include fairly general regularization, such as roughness penalties. We find that we can resolve quite detailed lateral variations in volume strain both within the reservoir depth range (0.4-2.5 km) and below the geothermal production zone (2.5-5.0 km). The fractional volume change in all three layers of the model exceeds the estimated model parameter uncertainty by a factor of two or more. In the reservoir depth interval (0.4-2.5 km), the predominant volume change is associated with northerly and westerly oriented faults and their intersections. However, below the geothermal production zone proper (the depth range 2.5-5.0 km), there is the suggestion that both north- and northeast-trending faults may act as conduits for fluid flow. [source]
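The model resolution and covariance matrices mentioned above have, for a roughness-regularized least-squares inversion, standard closed-form expressions. The dense sketch below only shows what is being computed; the paper itself uses an iterative Lanczos-based scheme suited to large problems, which this illustration does not reproduce, and all names are ours.

```python
import numpy as np

def regularized_inversion(G, d, L, lam, sigma=1.0):
    """Roughness-regularized least squares with model resolution and covariance.
    G: data kernel, d: observed range changes, L: roughness (e.g. Laplacian) operator,
    lam: regularization weight, sigma: data noise standard deviation (assumed iid)."""
    A = G.T @ G + lam * (L.T @ L)
    A_inv = np.linalg.inv(A)
    m = A_inv @ G.T @ d                       # estimated volume-strain model
    R = A_inv @ G.T @ G                       # model resolution matrix (m_est ~ R m_true)
    C = sigma**2 * A_inv @ G.T @ G @ A_inv    # model parameter covariance
    return m, R, C
```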


Exponential convergence of the Kalman filter based parameter estimation algorithm

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 10 2003
Liyu Cao
Abstract In this paper we present a new method to analyse the convergence properties of Kalman filter based parameter estimation algorithms. The method is based mainly on some matrix inequalities and is simpler than some of the existing approaches in the literature. It can simultaneously provide both lower and upper bounds on the exponential convergence rate as functions of the bounds of the related matrices, such as the covariance matrices. A simulation example is provided to illustrate the convergence property of the Kalman filter based algorithms. Copyright © 2003 John Wiley & Sons, Ltd. [source]
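For readers unfamiliar with Kalman filter based parameter estimation, the following sketch shows the standard recursion for a linear regression y_t = phi_t' theta + e_t, with the parameter vector treated as the state. The tuning values and simulated data are illustrative only; the paper's contribution is the convergence analysis, not this recursion.

```python
import numpy as np

def kalman_parameter_estimation(Phi, y, r=1.0, q=0.0, P0=None):
    """Kalman-filter-based estimation of theta in y_t = phi_t' theta + e_t.
    Phi: (T, n) regressor matrix, y: (T,) observations,
    r: measurement-noise variance, q: optional process-noise variance."""
    T, n = Phi.shape
    theta = np.zeros(n)
    P = np.eye(n) * 10.0 if P0 is None else P0.copy()   # prior parameter covariance
    for t in range(T):
        phi = Phi[t]
        S = phi @ P @ phi + r                  # innovation variance
        K = P @ phi / S                        # Kalman gain
        theta = theta + K * (y[t] - phi @ theta)
        P = P - np.outer(K, phi @ P) + q * np.eye(n)
    return theta, P

# Illustrative use on simulated data
rng = np.random.default_rng(0)
Phi = rng.normal(size=(500, 3))
true_theta = np.array([1.0, -2.0, 0.5])
y = Phi @ true_theta + 0.1 * rng.normal(size=500)
theta_hat, P_hat = kalman_parameter_estimation(Phi, y, r=0.01)
print(theta_hat)
```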


Large-scale site diversity for satellite communication networks

INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 4 2002
M. Luglio
Abstract The use of high frequencies such as Ka-band and beyond, necessary to avoid the highly congested lower satellite bands and to obtain larger bandwidth, is being considered for many developing satellite systems. The new low-margin satellite systems in Ka-band will need to be designed with fade countermeasures to counteract rain attenuation. One such technique foresees the possibility of switching the communication link among different Earth stations spread over a very large territory, reducing the system outage time to the joint outage time of all the stations. The design of such systems depends on the probability that the Earth stations simultaneously exceed their margins. In this paper, a well-assessed model is utilized for the prediction of joint statistics of rain attenuation in multiple locations, using Monte Carlo simulation. The model is based on a pair of multivariate normal processes whose parameters are related to those characterizing the single-location statistics and whose covariance matrices are assumed to depend only on the distances between locations. The main results concerning both the probability and the margin improvement are presented and discussed. Copyright © 2002 John Wiley & Sons, Ltd. [source]
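A Monte Carlo evaluation of joint outage along the lines sketched in the abstract might look as follows. The lognormal attenuation model and the exponential distance-decay of the correlation are stated assumptions for illustration; the paper uses a specific, well-assessed prediction model whose parameters are tied to the single-location statistics.

```python
import numpy as np

def joint_outage_probability(distances_km, margins_db, m_ln=0.0, s_ln=1.0,
                             corr_scale_km=300.0, n_samples=200_000, seed=1):
    """Monte Carlo estimate of the probability that every station in a large-scale
    site-diversity scheme simultaneously exceeds its rain-attenuation margin.
    Illustrative assumptions: log-attenuation at each site is Gaussian (mean m_ln,
    std s_ln) and inter-site correlation decays as rho(d) = exp(-d / corr_scale_km)."""
    n = len(margins_db)
    rho = np.exp(-np.asarray(distances_km) / corr_scale_km)   # distance matrix -> correlations
    cov = (s_ln ** 2) * rho
    rng = np.random.default_rng(seed)
    g = rng.multivariate_normal(np.full(n, m_ln), cov, size=n_samples)
    attenuation_db = np.exp(g)                                 # lognormal attenuation at each site
    joint_exceed = np.all(attenuation_db > np.asarray(margins_db), axis=1)
    single = np.mean(attenuation_db[:, 0] > margins_db[0])
    return joint_exceed.mean(), single

# Three stations ~150-400 km apart, each with a 6 dB margin
D = np.array([[0, 150, 400],
              [150, 0, 300],
              [400, 300, 0]])
p_joint, p_single = joint_outage_probability(D, margins_db=[6.0, 6.0, 6.0])
print(p_single, p_joint)   # diversity gain: joint outage << single-site outage
```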


Reliable computing in estimation of variance components

JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 6 2008
I. Misztal
Summary The purpose of this study is to present guidelines for selecting statistical and computing algorithms for variance components estimation when the computing involves software packages. Two major methods are considered: residual maximum likelihood (REML) and Bayesian estimation via Gibbs sampling. Expectation-Maximization (EM) REML is regarded as a very stable algorithm that is able to converge when covariance matrices are close to singular, but it is slow. Convergence problems can nevertheless occur with random regression models, especially if the starting values are much lower than those at convergence. Average Information (AI) REML is much faster for common problems, but it relies on heuristics for convergence and may be very slow or even diverge for complex models. REML algorithms for general models become unstable with a larger number of traits. REML by canonical transformation is stable in such cases but supports only a limited class of models. In general, REML algorithms are difficult to program. Bayesian methods via Gibbs sampling are much easier to program than REML, especially for complex models, and they can support much larger datasets; however, the termination criterion can be hard to determine, and the quality of estimates depends on a number of details. Computing speed varies with computing optimizations, with which some large data sets and complex models can be supported in a reasonable time; however, optimizations increase the complexity of programming and restrict the types of models that can be applied. Several examples from past research are discussed to illustrate the fact that different problems required different methods. [source]


Computing simplifications for non-additive genetic models

JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 6 2003
L. R. Schaeffer
Summary A limiting factor in the analysis of non-additive genetic models has been the ability to compute the inverses of non-additive genetic covariance matrices for large populations. In addition, the order of the equations was equal to the number of animals times the number of non-additive genetic effects included in the model. This paper describes a computing algorithm that avoids the inverses of the non-additive genetic covariance matrices and keeps the size of the equations the same as for any animal model with only additive genetic effects. Quadratic forms for the non-additive genetic variances can also be computed without the inverses of the non-additive genetic covariance matrices. [source]


Changes in gingival crevicular fluid inflammatory mediator levels during the induction and resolution of experimental gingivitis in humans

JOURNAL OF CLINICAL PERIODONTOLOGY, Issue 4 2010
Steven Offenbacher
Offenbacher S, Barros S, Mendoza L, Mauriello S, Preisser J, Moss K, de Jager M, Aspiras M. Changes in gingival crevicular fluid inflammatory mediator levels during the induction and resolution of experimental gingivitis in humans. J Clin Periodontol 2010; 37: 324-333. doi: 10.1111/j.1600-051X.2010.01543.x Abstract Aim: The goal of this study is to characterize the changes in 33 biomarkers within the gingival crevicular fluid during the 3-week induction and 4-week resolution of stent-induced, biofilm overgrowth mediated, experimental gingivitis in humans. Methods: Experimental gingivitis was induced in 25 subjects for 21 days, followed by treatment with a sonic powered toothbrush for 28 days. Clinical indices and gingival crevicular fluids were collected weekly during induction and biweekly during resolution. Samples were analysed using a bead-based multiplexing analysis for the simultaneous measurement of 33 biomarkers within each sample, including cytokines, matrix metalloproteinases (MMPs) and adipokines. Prostaglandin E2 was measured by enzyme-linked immunosorbent assay. Statistical testing using general linear models with structured covariance matrices was performed to compare stent to contralateral (non-stent) changes in clinical signs and in biomarker levels over time. Results: Gingivitis induction was associated with a significant 2.6-fold increase in interleukin-1α (IL-1α), a 3.1-fold increase in IL-1β, and a significant decrease in multiple chemokines as well as MMPs-1, -3 and -13. All changes in clinical signs and mediators rebounded to baseline in response to treatment in the resolution phase. Conclusions: Stent-induced gingivitis is associated with marked but reversible increases in IL-1α and IL-1β, with suppression of multiple chemokines as well as selected MMPs. [source]


Wavelet-based functional mixed models

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 2 2006
Jeffrey S. Morris
Summary. Increasingly, scientific studies yield functional data, in which the ideal units of observation are curves and the observed data consist of sets of curves that are sampled on a fine grid. We present new methodology that generalizes the linear mixed model to the functional mixed model framework, with model fitting done by using a Bayesian wavelet-based approach. This method is flexible, allowing functions of arbitrary form and the full range of fixed effects structures and between-curve covariance structures that are available in the mixed model framework. It yields nonparametric estimates of the fixed and random-effects functions as well as the various between-curve and within-curve covariance matrices. The functional fixed effects are adaptively regularized as a result of the non-linear shrinkage prior that is imposed on the fixed effects' wavelet coefficients, and the random-effect functions experience a form of adaptive regularization because of the separately estimated variance components for each wavelet coefficient. Because we have posterior samples for all model quantities, we can perform pointwise or joint Bayesian inference or prediction on the quantities of the model. The adaptiveness of the method makes it especially appropriate for modelling irregular functional data that are characterized by numerous local features like peaks. [source]


Hierarchical Bayesian modeling of random and residual variance-covariance matrices in bivariate mixed effects models

BIOMETRICAL JOURNAL, Issue 3 2010
Nora M. Bello
Abstract Bivariate mixed effects models are often used to jointly infer upon covariance matrices for both random effects (u) and residuals (e) between two different phenotypes in order to investigate the architecture of their relationship. However, these (co)variances themselves may additionally depend upon covariates as well as additional sets of exchangeable random effects that facilitate borrowing of strength across a large number of clusters. We propose a hierarchical Bayesian extension of the classical bivariate mixed effects model by embedding additional levels of mixed effects modeling of reparameterizations of u-level and e-level (co)variances between two traits. These parameters are based upon a recently popularized square-root-free Cholesky decomposition and are readily interpretable, each conveniently facilitating a generalized linear model characterization. Using Markov Chain Monte Carlo methods, we validate our model based on a simulation study and apply it to a joint analysis of milk yield and calving interval phenotypes in Michigan dairy cows. This analysis indicates that the e-level relationship between the two traits is highly heterogeneous across herds and depends upon systematic herd management factors. [source]
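The square-root-free Cholesky reparameterization that the model builds on is easy to state for the 2 x 2 (bivariate) case: the covariance matrix is mapped to a variance, a regression coefficient of trait 2 on trait 1, and a conditional variance, each of which can then be modelled with a generalized linear model. The helper names below are ours; this is only a sketch of the parameterization, not the paper's hierarchical model.

```python
import numpy as np

def ldl_2x2(sigma):
    """Square-root-free (LDL') Cholesky reparameterization of a 2x2 covariance matrix.
    Returns (d1, phi, d2): d1 = var(trait 1), phi = regression of trait 2 on trait 1,
    d2 = conditional variance of trait 2 given trait 1. Each piece is unconstrained
    (d1, d2 > 0 via a log link; phi on the real line), which is what makes a
    GLM-style model on these parameters convenient."""
    d1 = sigma[0, 0]
    phi = sigma[1, 0] / sigma[0, 0]
    d2 = sigma[1, 1] - phi**2 * sigma[0, 0]
    return d1, phi, d2

def rebuild_2x2(d1, phi, d2):
    """Inverse map: any (d1 > 0, phi, d2 > 0) yields a valid covariance matrix."""
    return np.array([[d1,        phi * d1],
                     [phi * d1,  d2 + phi**2 * d1]])

S = np.array([[4.0, 1.2],
              [1.2, 2.0]])
print(ldl_2x2(S))
print(rebuild_2x2(*ldl_2x2(S)))
```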


Modelling Multivariate Outcomes in Hierarchical Data, with Application to Cluster Randomised Trials

BIOMETRICAL JOURNAL, Issue 3 2006
Rebecca M. Turner
Abstract In the cluster randomised study design, the data collected have a hierarchical structure and often include multivariate outcomes. We present a flexible modelling strategy that permits several normally distributed outcomes to be analysed simultaneously, in which intervention effects as well as individual-level and cluster-level between-outcome correlations are estimated. This is implemented in a Bayesian framework which has several advantages over a classical approach, for example in providing credible intervals for functions of model parameters and in allowing informative priors for the intracluster correlation coefficients. In order to declare such informative prior distributions, and fit models in which the between-outcome covariance matrices are constrained, priors on parameters within the covariance matrices are required. Careful specification is necessary however, in order to maintain non-negative definiteness and symmetry between the different outcomes. We propose a novel solution in the case of three multivariate outcomes, and present a modified existing approach and novel alternative for four or more outcomes. The methods are applied to an example of a cluster randomised trial in the prevention of coronary heart disease. The modelling strategy presented would also be useful in other situations involving hierarchical multivariate outcomes. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Model identification in presence of incomplete information by generalized principal component analysis: Application to the common and differential responses of Escherichia coli to multiple pulse perturbations in continuous, high-biomass density culture

BIOTECHNOLOGY & BIOENGINEERING, Issue 4 2009
Daniel V. Guebel
Abstract In a previous report we described a multivariate approach to discriminate between the different response mechanisms operating in Escherichia coli when a steady, continuous culture of these bacteria was perturbed by a glycerol pulse (Guebel et al., 2009, Biotechnol Bioeng 102: 910-922). Herein, we present a procedure to extend this analysis when multiple, spaced pulse perturbations (glycerol, fumarate, acetate, crotonobetaine, hypersaline plus high-glycerol basal medium and crotonobetaine plus hypersaline basal medium) are being assessed. The proposed method allows us to identify not only the common responses among different perturbation conditions, but also to recognize the specific response to a given stimulus even when the dynamics of the perturbation is unknown. Components common to all conditions are determined first by Generalized Principal Components Analysis (GPCA) upon a set of covariance matrices. A metric is then built to quantify the similitude distance. This is based on the degree of variance extraction achieved for each variable by the common factors along the GPCA deflation process. This permits a cluster analysis, which recognizes several compact sub-sets containing only the most closely related responsive groups. The GPCA is then run again but restricted to the groups in each sub-set. Finally, after the data have been exhaustively deflated by the common sub-set factors, the resulting residual matrices are used to determine the specific response factors by classical principal component analysis (PCA). The proposed method was validated by comparing its predictions with those obtained when the dynamics of the perturbation was determined. In addition, it showed better performance than that obtained with other multivariate alternatives (e.g., orthogonal contrasts based on direct GPCA, the Tucker-3 model, PARAFAC, etc.). Biotechnol. Bioeng. 2009; 104: 785-795 © 2009 Wiley Periodicals, Inc. [source]


Efficiency of base isolation systems in structural seismic protection and energetic assessment

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 10 2003
Giuseppe Carlo Marano
Abstract This paper concerns the seismic response of structures isolated at the base by means of High Damping Rubber Bearings (HDRB). The analysis is performed by using a stochastic approach, and a Gaussian zero mean filtered non-stationary stochastic process is used in order to model the seismic acceleration acting at the base of the structure. More precisely, the generalized Kanai-Tajimi model is adopted to describe the non-stationary amplitude and frequency characteristics of the seismic motion. The hysteretic differential Bouc-Wen model (BWM) is adopted in order to take into account the non-linear constitutive behaviour both of the base isolation device and of the structure. Moreover, the stochastic linearization method in the time domain is adopted to estimate the statistical moments of the non-linear system response in the state space. The non-linear differential equation of the response covariance matrix is then solved by using an iterative procedure which updates the coefficients of the equivalent linear system at each step and searches for the solution of the response covariance matrix equation. After the system response variance is estimated, a sensitivity analysis is carried out. The final aim of the research is to assess the real capacity of base isolation devices in order to protect the structures from seismic actions, by avoiding a non-linear response, with associated large plastic displacements and, therefore, by limiting related damage phenomena in structural and non-structural elements. In order to attain this objective the stochastic response of a non-linear n-dof shear-type base-isolated building is analysed; the constitutive law both of the structure and of the base devices is described, as previously reported, by adopting the BWM and by using appropriate parameters for this model, able to suitably characterize an ordinary building and the base isolators considered in the study. The protection level offered to the structure by the base isolators is then assessed by evaluating the reduction both of the displacement response and the hysteretic dissipated energy. Copyright © 2003 John Wiley & Sons, Ltd. [source]
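The covariance equation solved at each step of the stochastic linearization loop is, for the equivalent linear system under white-noise excitation, a Lyapunov equation. The sketch below solves it once for a single-degree-of-freedom oscillator with given (equivalent) frequency and damping; in the paper's procedure those coefficients would be updated from the Bouc-Wen linearization at every iteration, which is not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def stationary_response_covariance(omega, zeta, w_intensity):
    """Stationary covariance of an (equivalent-)linear SDOF oscillator
        x'' + 2*zeta*omega*x' + omega**2*x = w(t),
    where w(t) is white noise with E[w(t)w(s)] = w_intensity * delta(t - s)."""
    A = np.array([[0.0, 1.0],
                  [-omega**2, -2.0 * zeta * omega]])
    Q = np.array([[0.0, 0.0],
                  [0.0, w_intensity]])        # diffusion matrix B B'
    # Stationary covariance P solves the Lyapunov equation A P + P A' + Q = 0
    P = solve_continuous_lyapunov(A, -Q)
    return P

P = stationary_response_covariance(omega=2*np.pi, zeta=0.05, w_intensity=1.0)
print(P)    # P[0,0]: displacement variance, P[1,1]: velocity variance
```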


Decision Theory Applied to an Instrumental Variables Model

ECONOMETRICA, Issue 3 2007
Gary Chamberlain
This paper applies some general concepts in decision theory to a simple instrumental variables model. There are two endogenous variables linked by a single structural equation; k of the exogenous variables are excluded from this structural equation and provide the instrumental variables (IV). The reduced-form distribution of the endogenous variables conditional on the exogenous variables corresponds to independent draws from a bivariate normal distribution with linear regression functions and a known covariance matrix. A canonical form of the model has parameter vector (β, λ, ω), where β is the parameter of interest and is normalized to be a point on the unit circle. The reduced-form coefficients on the instrumental variables are split into a scalar parameter λ and a parameter vector ω, which is normalized to be a point on the (k-1)-dimensional unit sphere; λ measures the strength of the association between the endogenous variables and the instrumental variables, and ω is a measure of direction. A prior distribution is introduced for the IV model. The parameters β, λ, and ω are treated as independent random variables. The distribution for β is uniform on the unit circle; the distribution for ω is uniform on the unit sphere with dimension k-1. These choices arise from the solution of a minimax problem. The prior for λ is left general. It turns out that given any positive value for λ, the Bayes estimator of β does not depend on λ; it equals the maximum-likelihood estimator. This Bayes estimator has constant risk; because it minimizes average risk with respect to a proper prior, it is minimax. The same general concepts are applied to obtain confidence intervals. The prior distribution is used in two ways. The first way is to integrate out the nuisance parameter ω in the IV model. This gives an integrated likelihood function with two scalar parameters, β and λ. Inverting a likelihood ratio test, based on the integrated likelihood function, provides a confidence interval for β. This lacks finite sample optimality, but invariance arguments show that the risk function depends only on λ and not on β or ω. The second approach to confidence sets aims for finite sample optimality by setting up a loss function that trades off coverage against the length of the interval. The automatic uniform priors are used for β and ω, but a prior is also needed for the scalar λ, and no guidance is offered on this choice. The Bayes rule is a highest posterior density set. Invariance arguments show that the risk function depends only on λ and not on β or ω. The optimality result combines average risk and maximum risk. The confidence set minimizes the average (with respect to the prior distribution for λ) of the maximum risk, where the maximization is with respect to β and ω. [source]


A Conditional Likelihood Ratio Test for Structural Models

ECONOMETRICA, Issue 4 2003
Marcelo J. Moreira
This paper develops a general method for constructing exactly similar tests based on the conditional distribution of nonpivotal statistics in a simultaneous equations model with normal errors and known reduced-form covariance matrix. These tests are shown to be similar under weak-instrument asymptotics when the reduced-form covariance matrix is estimated and the errors are non-normal. The conditional test based on the likelihood ratio statistic is particularly simple and has good power properties. Like the score test, it is optimal under the usual local-to-null asymptotics, but it has better power when identification is weak. [source]


Modeling and Forecasting Realized Volatility

ECONOMETRICA, Issue 2 2003
Torben G. Andersen
We provide a framework for integration of high-frequency intraday data into the measurement, modeling, and forecasting of daily and lower frequency return volatilities and return distributions. Building on the theory of continuous-time arbitrage-free price processes and the theory of quadratic variation, we develop formal links between realized volatility and the conditional covariance matrix. Next, using continuously recorded observations for the Deutschemark/Dollar and Yen/Dollar spot exchange rates, we find that forecasts from a simple long-memory Gaussian vector autoregression for the logarithmic daily realized volatilities perform admirably. Moreover, the vector autoregressive volatility forecast, coupled with a parametric lognormal-normal mixture distribution, produces well-calibrated density forecasts of future returns, and correspondingly accurate quantile predictions. Our results hold promise for practical modeling and forecasting of the large covariance matrices relevant in asset pricing, asset allocation, and financial risk management applications. [source]
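Two of the building blocks described above, forming a realized (co)variance from intraday returns and forecasting its logarithm with an autoregression, can be sketched as follows. The least-squares AR forecaster is a deliberately small stand-in for the paper's long-memory Gaussian vector autoregression, and all names are ours.

```python
import numpy as np

def realized_covariance(intraday_returns):
    """Realized covariance matrix for one day from an (n_intervals, n_assets)
    array of high-frequency returns: the sum of outer products of the returns."""
    r = np.asarray(intraday_returns)
    return r.T @ r

def forecast_log_rv(daily_rv, lags=5):
    """Fit an AR(lags) by least squares to the log realized variances of one asset
    and return a one-step-ahead forecast of the realized variance."""
    y = np.log(np.asarray(daily_rv))
    X = np.column_stack([y[lags - k - 1 : len(y) - k - 1] for k in range(lags)])
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y[lags:], rcond=None)
    last = np.concatenate(([1.0], y[-1:-lags-1:-1]))      # most recent lags, newest first
    return float(np.exp(last @ beta))
```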


Clustering composition vectors using uncertainty information

ENVIRONMETRICS, Issue 8 2007
William F. Christensen
Abstract In the biological and environmental sciences, interest often lies in using multivariate observations to discover natural clusters of objects. In this manuscript, the incorporation of measurement uncertainty information into a cluster analysis is discussed. This study is motivated by a problem involving the clustering of composition vectors associated with each of several chemical species. The observed abundance of each component is available along with its estimated uncertainty (measurement error standard deviation). An approach is proposed for converting the abundance vectors into composition (relative abundance) vectors, obtaining the covariance matrix associated with each composition vector, and defining a Mahalanobis distance between composition vectors that is suitable for cluster analysis. The approach is illustrated using particle size distributions obtained near Houston, Texas in 2000. Computer simulation is used to compare the performance of Mahalanobis-distance-based and Euclidean-distance-based clustering approaches. The use of a modified Mahalanobis distance along with Ward's method is recommended. Copyright © 2007 John Wiley & Sons, Ltd. [source]
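The proposed pipeline (abundances to compositions, delta-method propagation of the measurement uncertainties, Mahalanobis distances, Ward clustering) might be sketched as follows. The function names are ours, and the pseudoinverse is one simple way of handling the singular covariance of a composition vector; the paper's modified distance may differ in detail.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

def composition_and_covariance(abundance, abundance_sd):
    """Convert an abundance vector into a composition (relative abundance) vector and
    propagate the measurement-error variances to it by the delta method."""
    x = np.asarray(abundance, dtype=float)
    total = x.sum()
    p = x / total
    J = (np.eye(len(x)) * total - np.outer(x, np.ones(len(x)))) / total**2   # d p_i / d x_j
    V = J @ np.diag(np.asarray(abundance_sd, dtype=float) ** 2) @ J.T
    return p, V

def mahalanobis_between(p1, V1, p2, V2):
    """Mahalanobis distance between two composition vectors; the pseudoinverse handles
    the singularity that comes from compositions summing to one."""
    diff = p1 - p2
    return float(np.sqrt(diff @ np.linalg.pinv(V1 + V2) @ diff))

def cluster_compositions(abundances, sds, n_clusters=2):
    comps = [composition_and_covariance(a, s) for a, s in zip(abundances, sds)]
    n = len(comps)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = mahalanobis_between(*comps[i], *comps[j])
    Z = linkage(squareform(D), method="ward")      # Ward's method on the distances
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```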


Risk management lessons from Long-Term Capital Management

EUROPEAN FINANCIAL MANAGEMENT, Issue 3 2000
Philippe Jorion
The 1998 failure of Long-Term Capital Management (LTCM) is said to have nearly blown up the world's financial system. For such a near-catastrophic event, the finance profession has precious little information to draw from. By piecing together publicly available information, this paper draws risk management lessons from LTCM. LTCM's strategies are analysed in terms of the fund's Value at Risk (VAR) and the amount of capital necessary to support its risk profile. The paper shows that LTCM had severely underestimated its risk due to its reliance on short-term history and risk concentration. LTCM also provides a good example of risk management taken to the extreme. Using the same covariance matrix to measure risk and to optimize positions inevitably leads to biases in the measurement of risk. This approach also induces the strategy to take positions that appear to generate 'arbitrage' profits based on recent history but also represent bets on extreme events, like selling options. Overall, LTCM's strategy exploited the intrinsic weaknesses of its risk management system. [source]
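For context, the basic parametric VAR calculation referred to throughout the article is the normal quantile of portfolio P&L implied by a covariance matrix of returns. The sketch below shows only this generic construction, not LTCM's system; the article's point is that feeding the same covariance matrix into both risk measurement and position optimization biases this number downward.

```python
import numpy as np
from scipy.stats import norm

def parametric_var(weights, cov, horizon_days=10, confidence=0.99, value=1.0):
    """Parametric (delta-normal) Value at Risk: the loss quantile implied by a
    covariance matrix of daily returns, scaled to the chosen horizon."""
    w = np.asarray(weights)
    daily_vol = np.sqrt(w @ cov @ w)
    return value * norm.ppf(confidence) * daily_vol * np.sqrt(horizon_days)

# Two-position example: a highly negatively correlated spread trade looks low-risk
cov = np.array([[0.0001, -0.000095],
                [-0.000095, 0.0001]])
print(parametric_var(weights=[50.0, 50.0], cov=cov))
```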


EMPIRICAL COMPARISON OF G MATRIX TEST STATISTICS: FINDING BIOLOGICALLY RELEVANT CHANGE

EVOLUTION, Issue 10 2009
Brittny Calsbeek
A central assumption of quantitative genetic theory is that the breeder's equation (R = GP^-1S) accurately predicts the evolutionary response to selection. Recent studies highlight the fact that the additive genetic variance-covariance matrix (G) may change over time, rendering the breeder's equation incapable of predicting evolutionary change over more than a few generations. Although some consensus on whether G changes over time has been reached, multiple, often-incompatible methods for comparing G matrices are currently used. A major challenge of G matrix comparison is determining the biological relevance of observed change. Here, we develop a "selection skewers" G matrix comparison statistic that uses the breeder's equation to compare the response to selection given two G matrices while holding selection intensity constant. We present a bootstrap algorithm that determines the significance of G matrix differences using the selection skewers method, random skewers, Mantel's and Bartlett's tests, and eigenanalysis. We then compare these methods by applying the bootstrap to a dataset of laboratory populations of Tribolium castaneum. We find that the results of matrix comparison statistics are inconsistent based on differing a priori goals of each test, and that the selection skewers method is useful for identifying biologically relevant G matrix differences. [source]
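The comparison rests on the breeder's equation: for a selection gradient beta = P^-1 S, the predicted response is R = G beta. A minimal sketch of applying unit-length gradients (so selection intensity is held constant) to two G matrices and correlating the predicted responses is given below; it illustrates the skewers idea rather than reproducing the paper's bootstrap machinery, and the random gradients make it closer to the random-skewers flavour than to gradients estimated from observed selection.

```python
import numpy as np

def response(G, beta):
    """Predicted response to selection from the breeder's equation, R = G @ beta,
    where beta = P^-1 S is the selection gradient."""
    return G @ beta

def skewers_similarity(G1, G2, n_skewers=1000, seed=0):
    """Apply the same unit-length selection gradients to both G matrices and
    return the mean vector correlation between the predicted responses."""
    rng = np.random.default_rng(seed)
    k = G1.shape[0]
    corrs = np.empty(n_skewers)
    for i in range(n_skewers):
        beta = rng.normal(size=k)
        beta /= np.linalg.norm(beta)           # constant selection intensity
        r1, r2 = response(G1, beta), response(G2, beta)
        corrs[i] = (r1 @ r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return corrs.mean()
```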


FROM MICRO- TO MACROEVOLUTION THROUGH QUANTITATIVE GENETIC VARIATION: POSITIVE EVIDENCE FROM FIELD CRICKETS

EVOLUTION, Issue 10 2004
Mattieu Bégin
Abstract. Quantitative genetics has been introduced to evolutionary biologists with the suggestion that microevolution could be directly linked to macroevolutionary patterns using, among other parameters, the additive genetic variance/covariance matrix (G), which is a statistical representation of genetic constraints to evolution. However, little is known concerning the rate and pattern of evolution of G in nature, and it is uncertain whether the constraining effect of G is important over evolutionary time scales. To address these issues, seven species of field crickets from the genera Gryllus and Teleogryllus were reared in the laboratory, and quantitative genetic parameters for morphological traits were estimated from each of them using a nested full-sibling family design. We used three statistical approaches (T method, Flury hierarchy, and Mantel test) to compare G matrices or genetic correlation matrices in a phylogenetic framework. Results showed that G matrices were generally similar across species, with occasional differences between some species. We suggest that G has evolved at a low rate, a conclusion strengthened by the consideration that part of the observed across-species variation in G can be explained by the effect of a genotype by environment interaction. The observed pattern of G matrix variation between species could not be predicted by either morphological trait values or phylogeny. The constraint hypothesis was tested by comparing the multivariate orientation of the reconstructed ancestral G matrix to the orientation of the across-species divergence matrix (D matrix, based on mean trait values). The D matrix mainly revealed divergence in size and, to a much smaller extent, in a shape component related to the ovipositor length. This pattern of species divergence was found to be predictable from the ancestral G matrix, in agreement with the expectation of the constraint hypothesis. Overall, these results suggest that the G matrix seems to have an influence on species divergence, and that macroevolution can be predicted, at least qualitatively, from quantitative genetic theory. Alternative explanations are discussed. [source]


Reliability Analysis of Technical Systems/Structures by means of Polyhedral Approximation of the Safe/Unsafe Domain

GAMM - MITTEILUNGEN, Issue 2 2007
K. Marti
Abstract Reliability analysis of technical structures and systems is based on an appropriate (limit) state function separating the safe and unsafe states in the space of random parameters. Starting with the survival conditions, hence the state equation and the condition for the admissibility of states, an optimization-based representation of the state function can be given in terms of the minimum function of a certain minimization problem. Selecting a certain number of boundary points of the safe/unsafe domain, hence points on the limit state surface, the safe/unsafe domain is approximated by a convex polyhedron defined by the intersection of the half-spaces in the parameter space generated by the tangent hyperplanes to the safe/unsafe domain at the selected boundary points on the limit state surface. The resulting approximative probability functions are then defined by means of probabilistic linear constraints in the parameter space, where, after an appropriate transformation, the probability distribution of the parameter vector can be assumed to be normal with zero mean vector and unit covariance matrix. Working with separate linear constraints, approximation formulas for the probability of survival of the structure are obtained immediately. More exact approximations are obtained by considering joint probability constraints, which, in a second approximation step, can be evaluated by using probability inequalities and/or discretization of the underlying probability distribution. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
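Once the safe domain is approximated by an intersection of half-spaces in standard normal space, each separate linear constraint yields a survival probability Phi(b_i/||a_i||) immediately, and the joint constraint can be bounded or simulated. The sketch below shows these steps under the stated standard-normal assumption; the function name and the Bonferroni-type bound are illustrative choices, not the paper's specific approximation formulas.

```python
import numpy as np
from scipy.stats import norm

def survival_probability_bounds(A, b, n_samples=200_000, seed=0):
    """Probability that a standard normal parameter vector x lies in the polyhedral
    approximation {x : A x <= b} of the safe domain (rows of A are tangent-plane
    normals at selected limit-state points, b the corresponding offsets).
    Returns (single-constraint probabilities, Bonferroni lower bound, Monte Carlo estimate)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    beta = b / np.linalg.norm(A, axis=1)             # reliability index of each half-space
    p_single = norm.cdf(beta)                         # P(a_i' x <= b_i) per constraint
    p_lower = max(0.0, 1.0 - np.sum(1.0 - p_single))  # Bonferroni bound for the intersection
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, A.shape[1]))
    p_joint = np.mean(np.all(x @ A.T <= b, axis=1))   # joint probability by simulation
    return p_single, p_lower, p_joint
```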


Asymptotic tests of association with multiple SNPs in linkage disequilibrium

GENETIC EPIDEMIOLOGY, Issue 6 2009
Wei Pan
Abstract We consider detecting associations between a trait and multiple single nucleotide polymorphisms (SNPs) in linkage disequilibrium (LD). To maximize the use of information contained in multiple SNPs while minimizing the cost of large degrees of freedom (DF) in testing multiple parameters, we first theoretically explore the sum test derived under a working assumption of a common association strength between the trait and each SNP, testing the corresponding parameter with only one DF. Under scenarios where the association strengths between the trait and the SNPs are close to each other (and in the same direction), as considered by Wang and Elston [Am. J. Hum. Genet. [2007] 80:353-360], we show with simulated data that the sum test was powerful as compared to several existing tests; otherwise, the sum test might have much reduced power. To overcome the limitation of the sum test, based on our theoretical analysis of the sum test, we propose five new tests that are closely related to each other and are shown to consistently perform similarly well across a wide range of scenarios. We point out the close connection of the proposed tests to the Goeman test. Furthermore, we derive the asymptotic distributions of the proposed tests so that P-values can be easily calculated, in contrast to the use of computationally demanding permutations or simulations for the Goeman test. A distinguishing feature of the five new tests is their use of a diagonal working covariance matrix, rather than a full covariance matrix as used in the usual Wald or score test. We recommend the routine use of two of the new tests, along with several other tests, to detect disease associations with multiple linked SNPs. Genet. Epidemiol. 33:497-507, 2009. © 2009 Wiley-Liss, Inc. [source]
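The sum test analysed above is simple to state: code the genotypes, sum them across the SNPs in LD, and test the single common slope. A minimal sketch using an ordinary regression for a quantitative trait follows; the five new tests and the diagonal working covariance matrix proposed in the paper are not reproduced here, and the simulated data are illustrative.

```python
import numpy as np
from scipy import stats

def sum_test(genotypes, trait):
    """One-degree-of-freedom 'sum test': regress the trait on the sum of the coded
    genotypes across the SNPs in LD (a common association strength is imposed)
    and test that single slope. genotypes: (n_subjects, n_snps) 0/1/2 codes."""
    x = np.asarray(genotypes).sum(axis=1)
    y = np.asarray(trait, dtype=float)
    slope, intercept, r, p_value, se = stats.linregress(x, y)
    return slope, p_value

# Simulated example: 3 SNPs in LD with similar effects in the same direction
rng = np.random.default_rng(2)
n = 1000
g = rng.binomial(2, 0.3, size=(n, 3))
y = 0.2 * g.sum(axis=1) + rng.normal(size=n)
print(sum_test(g, y))
```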


Bivariate combined linkage and association mapping of quantitative trait loci

GENETIC EPIDEMIOLOGY, Issue 5 2008
Jeesun Jung
Abstract In this paper, bivariate/multivariate variance component models are proposed for high-resolution combined linkage and association mapping of quantitative trait loci (QTL), based on combinations of pedigree and population data. Suppose that a quantitative trait locus is located in a chromosome region that exerts pleiotropic effects on multiple quantitative traits. In the region, multiple markers such as single nucleotide polymorphisms are typed. Two regression models, a "genotype effect model" and an "additive effect model", are proposed to model the association between the markers and the trait locus. The linkage information, i.e., the recombination fractions between the QTL and the markers, is modeled in the variance-covariance matrix. By analytical formulae, we show that the "genotype effect model" can be used to model the additive and dominant effects simultaneously; the "additive effect model" accounts only for the additive effect. Based on the two models, F-test statistics are proposed to test association between the QTL and markers. By analytical power analysis, we show that bivariate models can be more powerful than univariate models. For moderate-sized samples, the proposed models lead to correct type I error rates, and so the models are reasonably robust. As a practical example, the method is applied to analyze the genetic inheritance of rheumatoid arthritis for the data of The North American Rheumatoid Arthritis Consortium, Problem 2, Genetic Analysis Workshop 15, which confirms the advantage of the proposed bivariate models. Genet. Epidemiol. 2008. © 2008 Wiley-Liss, Inc. [source]


Quantitative trait linkage analysis by generalized estimating equations: Unification of variance components and Haseman-Elston regression

GENETIC EPIDEMIOLOGY, Issue 4 2004
Wei-Min Chen
Two of the major approaches for linkage analysis with quantitative traits in humans include variance components and Haseman-Elston regression. Previously, these were viewed as quite separate methods. We describe a general model, fit by use of generalized estimating equations (GEE), for which the variance components and Haseman-Elston methods (including many of the extensions to the original Haseman-Elston method) are special cases, corresponding to different choices for a working covariance matrix. We also show that the regression-based test of Sham et al. ([2002] Am. J. Hum. Genet. 71:238-253) is equivalent to a robust score statistic derived from our GEE approach. These results have several important implications. First, this work provides new insight regarding the connection between these methods. Second, asymptotic approximations for power and sample size allow clear comparisons regarding the relative efficiency of the different methods. Third, our general framework suggests important extensions to the Haseman-Elston approach which make more complete use of the data in extended pedigrees and allow a natural incorporation of environmental and other covariates. © 2004 Wiley-Liss, Inc. [source]


Iterative generalized cross-validation for fusing heteroscedastic data of inverse ill-posed problems

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2009
Peiliang Xu
SUMMARY The method of generalized cross-validation (GCV) has been widely used to determine the regularization parameter, because the criterion minimizes the average predicted residuals of measured data and depends solely on the data. The data-driven advantage is valid only if the variance-covariance matrix of the data can be represented as the product of a given positive definite matrix and a scalar unknown noise variance. In practice, important geophysical inverse ill-posed problems have often been solved by combining different types of data. The stochastic model of measurements in this case contains a number of different unknown variance components. Although the weighting factors, or equivalently the variance components, have been shown to significantly affect joint inversion results of geophysical ill-posed problems, they have been either assumed to be known or empirically chosen. No solid statistical foundation is available yet to correctly determine the weighting factors of different types of data in joint geophysical inversion. We extend the GCV method to accommodate both the regularization parameter and the variance components. The extended version of GCV essentially consists of two steps, one to estimate the variance components by fixing the regularization parameter and the other to determine the regularization parameter by using the GCV method and by fixing the variance components. We simulate two examples: a purely mathematical integral equation of the first kind modified from the first example of Phillips (1962) and a typical geophysical example of downward continuation to recover the gravity anomalies on the surface of the Earth from satellite measurements. Based on the two simulated examples, we extensively compare the iterative GCV method with existing methods; the comparisons show that the method works well to correctly recover the unknown variance components and determine the regularization parameter. In other words, our method lets the data speak for themselves, decide the correct weighting factors of different types of geophysical data, and determine the regularization parameter. In addition, we derive an unbiased estimator of the noise variance by correcting the biases of the regularized residuals. A simplified formula to save computation time is also given. The two new estimators of the noise variance are compared with six existing methods through numerical simulations. The simulation results show that the two new estimators perform as well as Wahba's estimator for highly ill-posed problems and outperform any existing methods for moderately ill-posed problems. [source]
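For a single noise variance, the GCV criterion that the paper extends has a familiar closed form (Tikhonov regularization, influence matrix H). The sketch below shows only that basic criterion and a grid search for the regularization parameter; the iterative extension alternating this step with re-estimation of the variance components of the different data types is not reproduced, and the helper names are ours.

```python
import numpy as np

def gcv_score(A, y, lam):
    """Generalized cross-validation score for Tikhonov regularization
    x_lam = argmin ||A x - y||^2 + lam ||x||^2 (single-noise-variance case).
    GCV(lam) = n * ||(I - H) y||^2 / tr(I - H)^2, with H the influence matrix."""
    n = len(y)
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)
    residual = y - H @ y
    return n * float(residual @ residual) / float(np.trace(np.eye(n) - H)) ** 2

def choose_lambda(A, y, grid=np.logspace(-8, 2, 60)):
    """Pick the regularization parameter minimizing GCV over a grid."""
    scores = [gcv_score(A, y, lam) for lam in grid]
    return grid[int(np.argmin(scores))]
```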


The design of an optimal filter for monthly GRACE gravity models

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2008
R. Klees
SUMMARY Most applications of the publicly released Gravity Recovery and Climate Experiment monthly gravity field models require the application of a spatial filter to help suppress noise and other systematic errors present in the data. The most common approach makes use of a simple Gaussian averaging process, which is often combined with a 'destriping' technique in which coefficient correlations within a given degree are removed. As brute force methods, neither of these techniques takes into consideration the statistical information from the gravity solution itself and, while they perform well overall, they can often end up removing more signal than necessary. Other optimal filters have been proposed in the literature; however, none have attempted to make full use of all information available from the monthly solutions. By examining the underlying principles of filter design, a filter has been developed that incorporates the noise and full signal variance-covariance matrices to tailor the filter to the error characteristics of a particular monthly solution. The filter is both anisotropic and non-symmetric, meaning it can accommodate noise of an arbitrary shape, such as the characteristic stripes. The filter minimizes the mean-square error and, in this sense, can be considered optimal. Through both simulated and real data scenarios, this improved filter is shown to preserve the highest amount of gravity signal when compared to other standard techniques, while simultaneously minimizing leakage effects and producing smooth solutions in areas of low signal. [source]
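The principle behind such a filter, minimizing the mean-square error using both the signal and the noise covariance, corresponds to a Wiener-type matrix filter W = S (S + N)^-1 applied to the vector of spherical-harmonic coefficients. The sketch below illustrates only this principle under the assumption that both covariance matrices are available; it is not the authors' implementation, and S and N would come from a signal model and the monthly solution's error estimates.

```python
import numpy as np

def mmse_filter(signal_cov, noise_cov, coeffs):
    """Minimum mean-square error (Wiener-type) filtering of a coefficient vector:
    with observed coeffs = signal + noise, signal covariance S and noise covariance N,
    the MSE-minimizing linear filter is W = S (S + N)^-1."""
    S = np.asarray(signal_cov)
    N = np.asarray(noise_cov)
    W = S @ np.linalg.inv(S + N)      # anisotropic, generally non-symmetric filter matrix
    return W @ np.asarray(coeffs)
```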


Sequential integrated inversion of refraction and wide-angle reflection traveltimes and gravity data for two-dimensional velocity structures

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2000
Rosaria Tondi
A new algorithm is presented for the integrated 2-D inversion of seismic traveltime and gravity data. The algorithm adopts the 'maximum likelihood' regularization scheme. We construct a 'probability density function' which includes three kinds of information: information derived from gravity measurements; information derived from the seismic traveltime inversion procedure applied to the model; and information on the physical correlation among the density and the velocity parameters. We assume a linear relation between density and velocity, which can be node-dependent; that is, we can choose different relationships for different parts of the velocity-density grid. In addition, our procedure allows us to consider a covariance matrix related to the error propagation in linking density to velocity. We use seismic data to estimate starting velocity values and the position of boundary nodes. Subsequently, the sequential integrated inversion (SII) optimizes the layer velocities and densities for our models. The procedure is applicable, as an additional step, to any type of seismic tomographic inversion. We illustrate the method by comparing the velocity models recovered from a standard seismic traveltime inversion with those retrieved using our algorithm. The inversion of synthetic data calculated for a 2-D isotropic, laterally inhomogeneous model shows the stability and accuracy of this procedure, demonstrates the improvements to the recovery of true velocity anomalies, and proves that this technique can efficiently overcome some of the limitations of both gravity and seismic traveltime inversions, when they are used independently. An interpretation of field data from the 1994 Vesuvius test experiment is also presented. At depths down to 4.5 km, the model retrieved after a SII shows a more detailed structure than the model obtained from an interpretation of seismic traveltimes only, and yields additional information for a further study of the area. [source]


Measuring Monetary Policy in Germany: A Structural Vector Error Correction Approach

GERMAN ECONOMIC REVIEW, Issue 3 2003
Imke Brüggemann
Monetary policy; cointegration; structural VAR analysis. Abstract. A structural vector error correction (SVEC) model is used to investigate several monetary policy issues. While being data-oriented, the SVEC framework allows structural modeling of the short-run and long-run properties of the data. The statistical model is estimated with monthly German data for 1975-98, where a structural break is detected in 1984. After splitting the sample, three stable long-run relations are found in each subsample, which can be interpreted in terms of a money-demand equation, a policy rule and a relation for real output, respectively. Since the cointegration restrictions imply a particular shape of the long-run covariance matrix, this information can be used to distinguish between permanent and transitory innovations in the estimated system. Additional restrictions are introduced to identify a monetary policy shock. [source]