Square Matrices (square + matrix)
Selected Abstracts

A tale of two matrices: multivariate approaches in evolutionary biology
JOURNAL OF EVOLUTIONARY BIOLOGY, Issue 1 2007, M. W. Blows

Abstract: Two symmetric matrices underlie our understanding of microevolutionary change. The first is the matrix of nonlinear selection gradients (γ), which describes the individual fitness surface. The second is the genetic variance-covariance matrix (G), which influences the multivariate response to selection. A common approach to the empirical analysis of these matrices is element-by-element significance testing, followed by biological interpretation of pattern based on these univariate and bivariate parameters. Here, I show why this approach is likely to misrepresent the genetic basis of quantitative traits, and the selection acting on them, in many cases. Diagonalization of square matrices is a fundamental aspect of many of the multivariate statistical techniques used by biologists. Applying this, and other related approaches, to the analysis of the structure of the γ and G matrices gives greater insight into the form and strength of nonlinear selection and into the availability of genetic variance for multiple traits. [source]

A new investigation of the extended Krylov subspace method for matrix function evaluations
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 4 2010, L. Knizhnerman

Abstract: For large square matrices A and functions f, the numerical approximation of the action of f(A) on a vector v has received considerable attention in the last two decades. In this paper we investigate the extended Krylov subspace method, a technique that was recently proposed to approximate f(A)v for symmetric A. We provide a new theoretical analysis of the method, which improves the original result for A symmetric and gives a new estimate for A nonsymmetric. Numerical experiments confirm that the new error estimates correctly capture the linear asymptotic convergence rate of the approximation. Using recent algorithmic improvements, we also show that the method is computationally competitive with respect to other enhancement techniques. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Some remarks on the perturbation of polar decompositions for rectangular matrices
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 4 2006, Wen Li

Abstract: In this article we focus on perturbation bounds for unitary polar factors in polar decompositions of rectangular matrices. First we present two absolute perturbation bounds, in unitarily invariant norms and in the spectral norm respectively, for arbitrary rectangular complex matrices, which improve recent results of Li and Sun (SIAM J. Matrix Anal. Appl. 2003; 25:362–372). Secondly, a new absolute bound for complex matrices of full rank is given. When ‖A − Ã‖₂ ≪ ‖A − Ã‖_F, our bound for complex matrices is the same as in the real case. Finally, some asymptotic bounds given by Mathias (SIAM J. Matrix Anal. Appl. 1993; 14:588–593) for both real and complex square matrices are generalized. Copyright © 2005 John Wiley & Sons, Ltd. [source]

The square lattice shuffle
RANDOM STRUCTURES AND ALGORITHMS, Issue 4 2006, Johan Håstad

Abstract: We show that the operations of permuting columns and rows separately and independently mix a square matrix in constant time. © 2006 Wiley Periodicals, Inc. Random Struct. Alg., 2006 [source]
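For a concrete picture of the operation Håstad analyses, the sketch below arranges n² items in an n × n grid and alternately applies an independent uniform random permutation within every row and within every column. The function name, the use of NumPy, and the fixed round count are illustrative assumptions, not taken from the paper; the constant-time mixing claim is the paper's theoretical result about this kind of shuffle, not something this code verifies.

```python
import numpy as np

def square_lattice_shuffle(items, rounds=3, rng=None):
    """Shuffle n*n items by alternately permuting the entries of each row
    and of each column independently (the square lattice shuffle).
    Per Hastad's abstract, a constant number of rounds suffices to mix."""
    rng = np.random.default_rng(rng)
    n = int(len(items) ** 0.5)
    grid = np.array(items).reshape(n, -1)          # copy, laid out as an n x n grid
    assert grid.shape == (n, n), "needs a square number of items"
    for _ in range(rounds):
        for i in range(n):                         # permute within each row
            grid[i] = grid[i, rng.permutation(n)]
        for j in range(n):                         # permute within each column
            grid[:, j] = grid[rng.permutation(n), j]
    return grid.ravel()

print(square_lattice_shuffle(np.arange(16), rounds=2, rng=0))
```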
A robust formulation of the ensemble Kalman filter
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 639 2009, S. J. Thomas

Abstract: The ensemble Kalman filter (EnKF) can be interpreted in the more general context of linear regression theory. The recursive filter equations are equivalent to the normal equations for a weighted least-squares estimate that minimizes a quadratic functional. Solving the normal equations is numerically unreliable and subject to large errors when the problem is ill-conditioned. A numerically reliable and efficient algorithm is presented, based on the minimization of an alternative functional. The method relies on orthogonal rotations, is highly parallel and does not 'square' matrices in order to compute the analysis update. Computation of eigenvalue and singular-value decompositions is not required. The algorithm is formulated to process observations serially or in batches and therefore easily handles spatially correlated observation errors. Numerical results are presented for existing algorithms with a hierarchy of models characterized by chaotic dynamics. Under a range of conditions, which may include model error and sampling error, the new algorithm achieves the same or lower mean square errors as the serial Potter and ensemble adjustment Kalman filter (EAKF) algorithms. Published in 2009 by John Wiley and Sons, Ltd. [source]
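The numerical point the abstract draws on, that solving the normal equations can lose accuracy while an orthogonal-rotation approach works on the un-squared matrix, can be illustrated on a generic least-squares problem. The sketch below is only that general principle shown with a QR factorization; it is not the paper's EnKF algorithm, and the matrix sizes, column scaling, and random data are made-up illustrative choices.

```python
import numpy as np

# Toy least-squares problem: minimize ||A x - b||^2 with poorly scaled columns.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ np.diag([1.0, 1e-4, 1.0, 1e-4, 1.0])
x_true = rng.standard_normal(5)
b = A @ x_true

# Normal equations: forming A^T A squares the condition number of the problem.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Orthogonal (QR) approach: rotations act on A directly, no 'squaring'.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print("cond(A)        =", np.linalg.cond(A))
print("cond(A^T A)    =", np.linalg.cond(A.T @ A))
print("normal-eq error:", np.linalg.norm(x_normal - x_true))
print("QR error:       ", np.linalg.norm(x_qr - x_true))
```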