Approximate Inverse
Selected Abstracts

Java multithreading-based parallel approximate arrow-type inverses
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2008
George A. Gravvanis

Abstract: A new parallel shared-memory Java multithreaded design and implementation of explicit approximate inverse preconditioning, for efficiently solving arrow-type linear systems on symmetric multiprocessor systems (SMPs), is presented. A new parallel algorithm for computing a class of optimized approximate arrow-type inverse matrices is introduced. The performance on an SMP, using Java multithreading, is investigated by solving arrow-type linear systems, and numerical results are given. The parallel performance of the construction of the optimized approximate inverse and of the explicit preconditioned generalized conjugate gradient square scheme, using dynamic workload scheduling, is also presented. Copyright © 2007 John Wiley & Sons, Ltd. [source]

An efficient diagonal preconditioner for finite element solution of Biot's consolidation equations
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 4 2002
K. K. Phoon

Abstract: Finite element simulations of very large-scale soil-structure interaction problems (e.g. excavations, tunnelling, pile-rafts) typically involve the solution of a very large, ill-conditioned, and indefinite Biot system of equations. The traditional preconditioned conjugate gradient solver coupled with the standard Jacobi (SJ) preconditioner can be very inefficient for this class of problems. This paper presents a robust generalized Jacobi (GJ) preconditioner that is extremely effective for solving very large-scale Biot finite element equations using the symmetric quasi-minimal residual method. The GJ preconditioner can be formed, inverted, and implemented within an 'element-by-element' framework as readily as the SJ preconditioner.
It was derived as a diagonal approximation to a theoretical form that can be proven mathematically to possess an attractive eigenvalue-clustering property. The effectiveness of the GJ preconditioner over a wide range of soil stiffness and permeability was demonstrated numerically using a simple three-dimensional footing problem. This paper casts a new perspective on the potential of the simple diagonal preconditioner, which has commonly been perceived as useful only when it can serve as an approximate inverse to a diagonally dominant coefficient matrix. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Sparse approximate inverse preconditioning of a deflated block-GMRES algorithm for fast monostatic RCS calculation
INTERNATIONAL JOURNAL OF NUMERICAL MODELLING: ELECTRONIC NETWORKS, DEVICES AND FIELDS, Issue 5 2008
P. L. Rui

Abstract: A sparse approximate inverse (SAI) preconditioning of the deflated block generalized minimal residual (GMRES) algorithm is proposed to solve large dense linear systems with multiple right-hand sides arising from monostatic radar cross section (RCS) calculations. The multilevel fast multipole method (MLFMM) is used to accelerate the matrix-vector product operations, and the SAI preconditioning technique is employed to speed up the convergence of block-GMRES (BGMRES) iterations. The main purpose of this study is to show that the convergence rate of the SAI-preconditioned BGMRES method can be significantly improved by deflating a few of the smallest eigenvalues. Numerical experiments indicate that the combined effect of the SAI preconditioning technique, which clusters most of the eigenvalues at one, coupled with the deflation technique, which shifts the remaining smallest eigenvalues in the spectrum, can be very beneficial in the MLFMM, thus reducing the overall simulation time substantially. Copyright © 2008 John Wiley & Sons, Ltd.
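The diagonal (Jacobi) preconditioning that these abstracts build on amounts to taking M = diag(A). A minimal sketch of its effect, using SciPy's conjugate gradient on a synthetic badly scaled SPD system (the matrix, sizes, and iteration limits below are illustrative assumptions, not the Biot system or the GJ preconditioner of the paper):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# Synthetic SPD system with a wide spread of diagonal entries, loosely
# mimicking the ill-scaling of finite element matrices. (Illustrative
# stand-in only -- not the Biot system from the paper.)
n = 400
T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
s = np.logspace(0.0, 4.0, n)             # widely varying "stiffnesses"
S = diags(np.sqrt(s))
A = (S @ T @ S).tocsr()                  # SPD but badly scaled
b = np.ones(n)

# Standard Jacobi preconditioner: apply M^{-1} v = v / diag(A),
# i.e. the diagonal is trivially "inverted" entry by entry.
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: inv_diag * v)

# Count iterations with and without the preconditioner.
plain, jacobi = [], []
x0, info0 = cg(A, b, maxiter=4000, callback=lambda xk: plain.append(1))
x1, info1 = cg(A, b, M=M, maxiter=4000, callback=lambda xk: jacobi.append(1))
print("plain CG iterations: ", len(plain), " converged:", info0 == 0)
print("Jacobi CG iterations:", len(jacobi), " converged:", info1 == 0)
```

Here diag(A) absorbs the bad row/column scaling, so the preconditioned operator behaves like the well-conditioned tridiagonal T and CG converges in far fewer iterations. This is precisely the regime where a diagonal preconditioner serves as a good approximate inverse; the GJ preconditioner of Phoon et al. is a generalization aimed at indefinite Biot systems, where this simple recipe breaks down.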
[source]

Application of the preconditioned GMRES to the Crank-Nicolson finite-difference time-domain algorithm for 3D full-wave analysis of planar circuits
MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 6 2008
Y. Yang

Abstract: Increasing the time step size significantly deteriorates the properties of the coefficient matrix generated by the Crank-Nicolson finite-difference time-domain (CN-FDTD) method. As a result, the convergence of classical iterative methods, such as the generalized minimal residual (GMRES) method, is substantially slowed. To address this issue, this article concerns the efficient solution of these large sparse linear equations using the preconditioned GMRES (PGMRES) method. Several typical preconditioning techniques, such as the Jacobi preconditioner, the sparse approximate inverse (SAI) preconditioner, and the symmetric successive over-relaxation (SSOR) preconditioner, are introduced to accelerate the convergence of the GMRES iterations. Numerical simulation shows that the SSOR-preconditioned GMRES method can reach convergence five times faster than plain GMRES for some typical structures. © 2008 Wiley Periodicals, Inc. Microwave Opt Technol Lett 50: 1458-1463, 2008; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.23396 [source]

A Kronecker product approximate preconditioner for SANs
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 8-9 2004
Amy N. Langville

Abstract: Many very large Markov chains can be modelled efficiently as stochastic automata networks (SANs). A SAN is composed of individual automata which, for the most part, act independently, requiring only infrequent interaction. SANs represent the generator matrix Q of the underlying Markov chain compactly as a sum of Kronecker products of smaller matrices. Thus, storage savings are immediate.
The benefit of a SAN's compact representation, known as the descriptor, is often outweighed by its tendency to make analysis of the underlying Markov chain difficult. While iterative and projection methods have been used to solve the system πQ = 0, the time until these methods converge to the stationary solution π is still unsatisfactory. The SAN's compact representation has made the next logical research step, preconditioning, difficult. Several preconditioners for SANs have been proposed and tested, yet each has enjoyed little or no success. Encouraged by the recent success of approximate inverses as preconditioners, we have explored their potential as SAN preconditioners. One particularly relevant finding on approximate inverse preconditioning is the nearest Kronecker product (NKP) approximation discovered by Pitsianis and Van Loan. In this paper, we extend the nearest Kronecker product technique to approximate the Q matrix for a SAN with a single Kronecker product A1 ⊗ A2 ⊗ ... ⊗ AN. Then, we take M = A1 ⊗ A2 ⊗ ... ⊗ AN as our SAN NKP preconditioner. Copyright © 2004 John Wiley & Sons, Ltd. [source]
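The reason a single Kronecker product makes an attractive preconditioner is that its inverse factors term by term, (A1 ⊗ A2)^-1 = A1^-1 ⊗ A2^-1, so M^-1 v can be applied through the small factors and a reshape, without ever forming M. A minimal sketch of that mechanism (the NKP fitting step of Pitsianis and Van Loan is not reproduced; the matrices below are random illustrative stand-ins):

```python
import numpy as np

def kron_solve(A1: np.ndarray, A2: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Solve (A1 kron A2) x = v using only the small factors.

    With row-major (NumPy) vectorization, (A1 kron A2) vec(X) = vec(A1 X A2^T),
    so we solve A1 X A2^T = V with two small dense solves.
    """
    n1, n2 = A1.shape[0], A2.shape[0]
    V = v.reshape(n1, n2)              # row-major "unvec"
    W = np.linalg.solve(A1, V)         # A1^{-1} V
    X = np.linalg.solve(A2, W.T).T     # W A2^{-T}
    return X.reshape(n1 * n2)

# Sanity check against the explicitly formed Kronecker product.
rng = np.random.default_rng(1)
A1 = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)   # shifted toward the
A2 = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)   # identity: invertible
v = rng.standard_normal(12)
x = kron_solve(A1, A2, v)
print(np.allclose(np.kron(A1, A2) @ x, v))
```

The cost is two dense solves of size n1 and n2 rather than one of size n1*n2, which is what makes an NKP preconditioner affordable inside each iteration of a Krylov method applied to the SAN descriptor; extending to N factors chains the same reshape-and-solve step across all factors.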