Sparse Linear Systems (sparse + linear_system)

Kinds of Sparse Linear Systems

  • large sparse linear system


  • Selected Abstracts


    Fast simulation of skin sliding

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2-3 2009
    Xiaosong Yang
    Abstract Skin sliding is the phenomenon of the skin moving over underlying layers of fat, muscle and bone. Due to the complex interconnections between these separate layers and their differing elasticity properties, it is difficult to model and expensive to compute. We present a novel method to simulate this phenomenon in real time by remeshing the surface based on a parameter-space resampling. In order to evaluate the surface parametrization, we borrow a technique from structural engineering known as the force density method (FDM), which solves for an energy-minimizing form with a sparse linear system. Our method creates a realistic approximation of skin sliding in real time, reducing texture distortions in the region of the deformation. In addition, it is flexible, simple to use, and can be incorporated into any animation pipeline. Copyright © 2009 John Wiley & Sons, Ltd. [source]
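
    The single sparse solve at the heart of the force density method can be sketched directly: with a branch-node incidence matrix C and a diagonal matrix Q of force densities, the force-density matrix is D = C^T Q C, and the free node positions satisfy D_ff X_f = -D_fc X_c. A minimal Python/SciPy sketch under assumed inputs (edge list, force densities, fixed boundary nodes); the names and setup are illustrative, not the paper's implementation:

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        def fdm_positions(n_nodes, edges, q, fixed_idx, fixed_xyz):
            """Force density method: one sparse solve gives the free node positions."""
            m = len(edges)
            rows = np.repeat(np.arange(m), 2)
            cols = np.asarray(edges).ravel()
            vals = np.tile([1.0, -1.0], m)
            C = sp.csr_matrix((vals, (rows, cols)), shape=(m, n_nodes))  # incidence matrix
            D = (C.T @ sp.diags(q) @ C).tocsc()                          # force-density matrix
            free = np.setdiff1d(np.arange(n_nodes), fixed_idx)
            D_ff = D[free, :][:, free].tocsc()
            D_fc = D[free, :][:, fixed_idx]
            X = np.zeros((n_nodes, 3))
            X[fixed_idx] = fixed_xyz
            X[free] = spla.splu(D_ff).solve(-(D_fc @ X[fixed_idx]))      # 3 RHS at once
            return X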


    A Local/Global Approach to Mesh Parameterization

    COMPUTER GRAPHICS FORUM, Issue 5 2008
    Ligang Liu
    Abstract We present a novel approach to parameterize a mesh with disk topology to the plane in a shape-preserving manner. Our key contribution is a local/global algorithm, which combines a local mapping of each 3D triangle to the plane, using transformations taken from a restricted set, with a global "stitch" operation of all triangles, involving a sparse linear system. The local transformations can be taken from a variety of families, e.g. similarities or rotations, generating different types of parameterizations. In the first case, the parameterization tries to force each 2D triangle to be an as-similar-as-possible version of its 3D counterpart. This is shown to yield results identical to those of the LSCM algorithm. In the second case, the parameterization tries to force each 2D triangle to be an as-rigid-as-possible version of its 3D counterpart. This approach preserves shape as much as possible. It is simple, effective, and fast, due to pre-factoring of the linear system involved in the global phase. Experimental results show that our approach provides almost isometric parameterizations and obtains more shape-preserving results than other state-of-the-art approaches. We also present a more general "hybrid" parameterization model which provides a continuous spectrum of possibilities, controlled by a single parameter. The two cases described above lie at the two ends of the spectrum. We generalize our local/global algorithm to compute these parameterizations. The local phase may also be accelerated by parallelizing the independent computations per triangle. [source]
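
    The local phase described above reduces, per triangle, to projecting a 2x2 Jacobian onto the chosen family of transformations; the global stitch then solves the pre-factored sparse system. A rough sketch of only the per-triangle projection, assuming orientation-preserving maps (a hypothetical helper, not the authors' code; the global phase is omitted):

        import numpy as np

        def project_local(J, family="rotation"):
            """Nearest rotation (as-rigid-as-possible) or scaled rotation
            (as-similar-as-possible) to a 2x2 triangle Jacobian, via the SVD."""
            U, s, Vt = np.linalg.svd(J)
            if np.linalg.det(U @ Vt) < 0:   # keep the projected map orientation-preserving
                U[:, -1] *= -1
                s[-1] *= -1
            R = U @ Vt
            if family == "rotation":
                return R
            if family == "similarity":      # Procrustes scale: mean (signed) singular value
                return s.mean() * R
            raise ValueError(family)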


    SIMD Optimization of Linear Expressions for Programmable Graphics Hardware

    COMPUTER GRAPHICS FORUM, Issue 4 2004
    Chandrajit Bajaj
    Abstract The increased programmability of graphics hardware allows efficient graphics processing unit (GPU) implementations of a wide range of general computations on commodity PCs. An important factor in such implementations is how to fully exploit the SIMD computing capacities offered by modern graphics processors. Linear expressions of the form y = Ax + b, where A is a matrix and x and b are vectors, constitute one of the most basic operations in many scientific computations. In this paper, we propose a SIMD code optimization technique that enables efficient shader codes to be generated for evaluating linear expressions. It is shown that performance can be improved considerably by efficiently packing arithmetic operations into four-wide SIMD instructions through reordering of the operations in linear expressions. We demonstrate that the presented technique can be used effectively for programming both vertex and pixel shaders for a variety of mathematical applications, including integrating differential equations and solving a sparse linear system of equations using iterative methods. [source]
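
    One way to picture the packing idea for y = Ax + b is an ELL-style layout in which every row's nonzeros are padded to a multiple of four and consumed in four-wide chunks, loosely mirroring how four-wide SIMD slots get filled. A hedged NumPy illustration of that layout only (made-up names; not the paper's shader-code generation technique):

        import numpy as np
        import scipy.sparse as sp

        def pack4(A_csr):
            """Pad each row's nonzeros to a multiple of four (ELL-like packed layout)."""
            n = A_csr.shape[0]
            width = int(np.ceil(np.diff(A_csr.indptr).max() / 4) * 4)
            cols = np.zeros((n, width), dtype=np.int64)
            vals = np.zeros((n, width))
            for i in range(n):
                lo, hi = A_csr.indptr[i], A_csr.indptr[i + 1]
                cols[i, : hi - lo] = A_csr.indices[lo:hi]
                vals[i, : hi - lo] = A_csr.data[lo:hi]
            return cols, vals

        def linear_expression_packed(cols, vals, x, b):
            """y = A x + b, accumulating four packed products per row at a time."""
            y = b.astype(float).copy()
            for k in range(0, vals.shape[1], 4):
                y += np.einsum("ij,ij->i", vals[:, k:k + 4], x[cols[:, k:k + 4]])
            return y

        A = sp.random(64, 64, density=0.1, format="csr") + sp.eye(64, format="csr")
        x, b = np.random.rand(64), np.random.rand(64)
        cols, vals = pack4(A.tocsr())
        assert np.allclose(linear_expression_packed(cols, vals, x, b), A @ x + b)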


    A domain decomposition approach to finite volume solutions of the Euler equations on unstructured triangular meshes

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 6 2001
    Victoria Dolean
    Abstract We report on our recent efforts on the formulation and the evaluation of a domain decomposition algorithm for the parallel solution of two-dimensional compressible inviscid flows. The starting point is a flow solver for the Euler equations, which is based on a mixed finite element/finite volume formulation on unstructured triangular meshes. Time integration of the resulting semi-discrete equations is obtained using a linearized backward Euler implicit scheme. As a result, each pseudo-time step requires the solution of a sparse linear system for the flow variables. In this study, a non-overlapping domain decomposition algorithm is used for advancing the solution at each implicit time step. First, we formulate an additive Schwarz algorithm using appropriate matching conditions at the subdomain interfaces. In accordance with the hyperbolic nature of the Euler equations, these transmission conditions are Dirichlet conditions for the characteristic variables corresponding to incoming waves. Then, we introduce interface operators that allow us to express the domain decomposition algorithm as a Richardson-type iteration on the interface unknowns. Algebraically speaking, the Schwarz algorithm is equivalent to a Jacobi iteration applied to a linear system whose matrix has a block structure. A substructuring technique can be applied to this matrix in order to obtain a fully implicit scheme in terms of interface unknowns. In our approach, the interface unknowns are numerical (normal) fluxes. Copyright © 2001 John Wiley & Sons, Ltd. [source]
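
    The additive Schwarz step can be written algebraically as a sum of subdomain corrections, M^{-1} r = sum_k R_k^T (R_k A R_k^T)^{-1} R_k r, used to precondition a Krylov method. A generic non-overlapping sketch on a made-up 1D test problem (it does not reproduce the authors' characteristic-variable transmission conditions or the interface substructuring):

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        def additive_schwarz(A, subdomains):
            """LinearOperator applying sum_k R_k^T (R_k A R_k^T)^{-1} R_k."""
            A = A.tocsc()
            local_lus = [(idx, spla.splu(A[idx, :][:, idx].tocsc())) for idx in subdomains]
            def apply(r):
                r = np.ravel(r)
                z = np.zeros_like(r)
                for idx, lu in local_lus:
                    z[idx] += lu.solve(r[idx])   # local subdomain solve
                return z
            return spla.LinearOperator(A.shape, matvec=apply)

        # Toy usage: 1D Laplacian split into two non-overlapping subdomains.
        n = 200
        A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
        M = additive_schwarz(A, [np.arange(0, n // 2), np.arange(n // 2, n)])
        x, info = spla.gmres(A, np.ones(n), M=M)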


    A study into the feasibility of using two parallel sparse direct solvers for the Helmholtz equation on Linux clusters

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 7 2006
    G. Z. M. Berglund
    Abstract Two state-of-the-art parallel software packages for the direct solution of sparse linear systems based on LU decomposition, MUMPS and SuperLU_DIST, have been tested as black-box solvers on problems derived from finite difference discretizations of the Helmholtz equation. The target architecture has been Linux clusters, for which no consistent set of tests of the algorithms implemented in these packages has been published. The investigation consists of a series of memory and time scalability checks and has focused on examining the applicability of the algorithms when processing very large sparse matrices on Linux cluster platforms. Special emphasis has been put on monitoring the behaviour of the packages when the equation systems need to be solved for multiple right-hand sides, which is the case, for instance, when modelling a seismic survey. The outcome of the tests points to poor efficiency of the tested algorithms during application of the LU factors in the solution phase on this type of architecture, where communication becomes the bottleneck. Copyright © 2005 John Wiley & Sons, Ltd. [source]
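
    The multiple right-hand-side setting the study emphasizes separates a one-off factorization phase from repeated, much cheaper solve phases. A small serial sketch of that split using SciPy's SuperLU interface, with a 2D Laplacian standing in for the Helmholtz matrices (MUMPS and SuperLU_DIST themselves are parallel packages and are not called here):

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        nx = 64
        T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(nx, nx))
        A = sp.kronsum(T, T).tocsc()              # 2D 5-point stencil, 4096 unknowns

        lu = spla.splu(A)                         # factorization phase: done once
        B = np.random.rand(A.shape[0], 50)        # e.g. 50 right-hand sides ("shots")
        X = lu.solve(B)                           # solution phase: reuses the LU factors
        print(np.linalg.norm(A @ X - B) / np.linalg.norm(B))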


    A frontal solver for the 21st century

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 10 2006
    Jennifer A. Scott
    Abstract In recent years there have been a number of important developments in frontal algorithms for solving the large sparse linear systems of equations that arise from finite-element problems. We report on the design of a new fully portable and efficient frontal solver for large-scale real and complex unsymmetric linear systems from finite-element problems that incorporates these developments. The new package offers both a flexible reverse communication interface and a simple-to-use all-in-one interface, which is designed to make the package more accessible to new users. Other key features include automatic element ordering using a state-of-the-art hybrid multilevel spectral algorithm, minimal main memory requirements, the use of high-level BLAS, and facilities to allow the solver to be used as part of a parallel multiple front solver. The performance of the new solver, which is written in Fortran 95, is illustrated using a range of problems from practical applications. The solver is available as package HSL_MA42_ELEMENT within the HSL mathematical software library and, for element problems, supersedes the well-known MA42 package. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    An efficient out-of-core multifrontal solver for large-scale unsymmetric element problems

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 7 2009
    J. K. Reid
    Abstract In many applications where the efficient solution of large sparse linear systems of equations is required, a direct method is frequently the method of choice. Unfortunately, direct methods have a potentially severe limitation: as the problem size grows, the memory needed generally increases rapidly. However, the in-core memory requirements can be limited by storing the matrix and its factors externally, allowing the solver to be used for very large problems. We have designed a new out-of-core package for the large sparse unsymmetric systems that arise from finite-element problems. The code, which is called HSL_MA78, implements a multifrontal algorithm and achieves efficiency through the use of specially designed code for handling the input/output operations and efficient dense linear algebra kernels. These kernels, which are available as a separate package called HSL_MA74, use high-level BLAS to perform the partial factorization of the frontal matrices and offer both threshold partial and rook pivoting. In this paper, we describe the design of HSL_MA78 and explain its user interface and the options it offers. We also describe the algorithms used by HSL_MA74 and illustrate the performance of our new codes using problems from a range of practical applications. Copyright © 2008 John Wiley & Sons, Ltd. [source]
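
    The two pivoting options mentioned for the dense frontal kernels differ in how hard they search: threshold partial pivoting accepts any candidate within a fixed factor of the largest entry in its column, while rook pivoting insists the pivot dominate both its row and its column. A small sketch of the rook search on a dense front, assuming the candidate rows and columns are fully summed (illustrative only; not the HSL_MA74 kernel):

        import numpy as np

        def rook_pivot(F):
            """Return (i, j) of an entry largest in magnitude in both its row and column."""
            i, j = int(np.argmax(np.abs(F[:, 0]))), 0
            while True:
                j_new = int(np.argmax(np.abs(F[i, :])))
                if abs(F[i, j_new]) <= abs(F[i, j]):
                    return i, j                    # already a row maximum
                j = j_new
                i_new = int(np.argmax(np.abs(F[:, j])))
                if abs(F[i_new, j]) <= abs(F[i, j]):
                    return i, j                    # already a column maximum
                i = i_new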


    Performance of algebraic multigrid methods for non-symmetric matrices arising in particle methods

    NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 2-3 2010
    B. Seibold
    Abstract Large linear systems with sparse, non-symmetric matrices are known to arise in the modeling of Markov chains or in the discretization of convection-diffusion problems. Due to their potential for solving sparse linear systems with an effort that is linear in the number of unknowns, algebraic multigrid (AMG) methods are of fundamental interest for such systems. For symmetric positive definite matrices, fundamental theoretical convergence results are established, and efficient AMG solvers have been developed. In contrast, for non-symmetric matrices, theoretical convergence results have been provided only recently. A property that is sufficient for convergence is that the matrix be an M-matrix. In this paper, we present how the simulation of incompressible fluid flows with particle methods leads to large linear systems with sparse, non-symmetric matrices. In each time step, the Poisson equation is approximated by meshfree finite differences. While traditional least squares approaches do not guarantee an M-matrix structure, an approach based on linear optimization yields optimally sparse M-matrices. For both types of discretization approaches, we investigate the performance of a classical AMG method, as well as an algebraic multilevel iteration (AMLI) type method. While the M-matrix structure turns out not to be necessary for the convergence of AMG in the considered test problems, problems can occur when it is violated. In addition, the matrices obtained by the linear optimization approach result in fast solution times due to their optimal sparsity. Copyright © 2010 John Wiley & Sons, Ltd. [source]
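
    A convenient sufficient certificate for the M-matrix property referred to above is: positive diagonal, non-positive off-diagonal entries, and weak row diagonal dominance that is strict in at least one row (for an irreducible matrix). A small checker along those lines (a generic sketch; the paper's optimization-based discretization is not reproduced):

        import numpy as np
        import scipy.sparse as sp

        def looks_like_m_matrix(A, tol=1e-12):
            """Sufficient (not necessary) M-matrix check for an irreducible matrix."""
            A = sp.csr_matrix(A)
            d = A.diagonal()
            off = A - sp.diags(d)
            off_row_sums = np.asarray(abs(off).sum(axis=1)).ravel()
            return ((d > 0).all()
                    and (off.data <= tol).all()              # off-diagonals non-positive
                    and (d >= off_row_sums - tol).all()      # weak diagonal dominance
                    and (d > off_row_sums + tol).any())      # strict in at least one row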


    Distance-two interpolation for parallel algebraic multigrid

    NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 2-3 2008
    Hans De Sterck
    Abstract Algebraic multigrid (AMG) is one of the most efficient and scalable parallel algorithms for solving sparse linear systems on unstructured grids. However, for large 3D problems, the coarse grids that are normally used in AMG often lead to growing complexity in terms of memory use and execution time per AMG V-cycle. Sparser coarse grids, such as those obtained by the parallel modified independent set (PMIS) coarsening algorithm, remedy this complexity growth but lead to nonscalable AMG convergence factors when traditional distance-one interpolation methods are used. In this paper, we study the scalability of AMG methods that combine PMIS coarse grids with long-distance interpolation methods. AMG performance and scalability are compared for previously introduced interpolation methods as well as new variants of them for a variety of relevant test problems on parallel computers. It is shown that the increased interpolation accuracy largely restores the scalability of AMG convergence factors for PMIS-coarsened grids, and in combination with complexity reducing methods, such as interpolation truncation, one obtains a class of parallel AMG methods that enjoy excellent scalability properties on large parallel computers. Copyright © 2007 John Wiley & Sons, Ltd. [source]
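
    In classical AMG terms, point j strongly influences point i when -a_ij is at least a fraction theta of the largest off-diagonal magnitude in row i; distance-two interpolation lets a fine point draw on coarse points that are up to two strong connections away. A bare-bones sketch of the strength matrix and the distance-two coarse set (generic, not the specific interpolation variants tested in the paper):

        import numpy as np
        import scipy.sparse as sp

        def strength_matrix(A, theta=0.25):
            """S[i, j] = 1 where j strongly influences i (classical AMG criterion)."""
            A = sp.csr_matrix(A)
            S = sp.lil_matrix(A.shape, dtype=np.int8)
            for i in range(A.shape[0]):
                row = A.getrow(i).toarray().ravel()
                row[i] = 0.0
                threshold = theta * (-row).max()
                if threshold <= 0:
                    continue
                for j in np.where(-row >= threshold)[0]:
                    S[i, j] = 1
            return S.tocsr()

        def distance_two_coarse_set(S, i, coarse_points):
            """Coarse points reachable from i by one or two strong connections."""
            coarse = set(int(c) for c in coarse_points)
            d1 = set(int(j) for j in S.getrow(i).indices)
            d2 = set(int(k) for j in d1 for k in S.getrow(j).indices)
            return sorted((d1 | d2) & coarse)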


    A multilevel Crout ILU preconditioner with pivoting and row permutation

    NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 10 2007
    Jan Mayer
    Abstract In this paper, we present a new incomplete LU factorization using pivoting by columns and row permutation. Pivoting by columns helps to avoid small pivots and row permutation is used to promote sparsity. This factorization is used in a multilevel framework as a preconditioner for iterative methods for solving sparse linear systems. In most multilevel incomplete LU factorization preconditioners, preprocessing (scaling and permutation of rows and columns of the coefficient matrix) results in further improvements. Numerical results illustrate that these preconditioners are suitable for a wide variety of applications. Copyright © 2007 John Wiley & Sons, Ltd. [source]
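
    For a single-level point of comparison, SciPy exposes SuperLU's incomplete LU (with threshold dropping and partial pivoting), and it can be wrapped as a preconditioner for GMRES in the same role the multilevel Crout ILU above plays; this is only a stand-in, not the paper's factorization:

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 400
        A = sp.diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
        b = np.random.rand(n)

        ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)   # incomplete LU factors
        M = spla.LinearOperator(A.shape, matvec=ilu.solve)   # preconditioner M ~ A^{-1}
        x, info = spla.gmres(A, b, M=M)
        print(info, np.linalg.norm(A @ x - b))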


    Linear system solution by null-space approximation and projection (SNAP)

    NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 1 2007
    M. Ilić
    Abstract Solutions of large sparse linear systems of equations are usually obtained iteratively by constructing a smaller dimensional subspace such as a Krylov subspace. The convergence of these methods is sometimes hampered by the presence of small eigenvalues, in which case, some form of deflation can help improve convergence. The method presented in this paper enables the solution to be approximated by focusing the attention directly on the 'small' eigenspace ('singular vector' space). It is based on embedding the solution of the linear system within the eigenvalue problem (singular value problem) in order to facilitate the direct use of methods such as implicitly restarted Arnoldi or Jacobi-Davidson for the linear system solution. The proposed method, called 'solution by null-space approximation and projection' (SNAP), differs from other similar approaches in that it converts the non-homogeneous system into a homogeneous one by constructing an annihilator of the right-hand side. The solution then lies in the null space of the resulting matrix. We examine the construction of a sequence of approximate null spaces using a Jacobi-Davidson style singular value decomposition method, called restarted SNAP-JD, from which an approximate solution can be obtained. Relevant theory is discussed and the method is illustrated by numerical examples where SNAP is compared with both GMRES and GMRES-IR. Copyright © 2006 John Wiley & Sons, Ltd. [source]
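
    The annihilator construction can be checked on a dense toy problem: with P = I - b b^T / (b^T b) one has P b = 0, the null space of P A is spanned by A^{-1} b, and a null vector only needs rescaling to solve Ax = b. A small dense sketch of that observation (SNAP itself builds approximate null spaces iteratively with a Jacobi-Davidson style SVD, which is not reproduced here):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 50
        A = rng.standard_normal((n, n)) + n * np.eye(n)     # well-conditioned test matrix
        b = rng.standard_normal(n)

        P = np.eye(n) - np.outer(b, b) / (b @ b)            # annihilator: P @ b == 0
        _, _, Vt = np.linalg.svd(P @ A)
        v = Vt[-1]                                          # null vector of P A
        x = v * (b @ (A @ v)) / ((A @ v) @ (A @ v))         # rescale so that A x = b
        print(np.linalg.norm(A @ x - b))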


    Modifying CLJP to select grid hierarchies with lower operator complexities and better performance

    NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 2-3 2006
    David M. Alber
    Abstract Algebraic multigrid (AMG) is an efficient algorithm for solving certain types of large, sparse linear systems. For solving very large problems with AMG it becomes necessary to use parallel algorithms. Coarse grid selection algorithms such as CLJP were created to parallelize the setup phase of AMG. For some problems, such as those discretized on structured meshes, CLJP tends to select coarse grids with more nodes than alternative coarsening algorithms. In this paper, the cause for the selection of too many coarse nodes by CLJP is examined, and a new technique which lowers the operator complexities generated by CLJP is introduced. To validate the new method, the modified CLJP is compared to other coarsening algorithms for large-scale problems. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Algebraic multigrid and algebraic multilevel methods: a theoretical comparison

    NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 5-6 2005
    Y. Notay
    Abstract We consider algebraic methods of the two-level type for the iterative solution of large sparse linear systems. We assume that a fine/coarse partitioning and an algebraic interpolation have been defined in one way or another, and review different schemes that may be built with these ingredients. This includes algebraic multigrid (AMG) schemes, two-level approximate block factorizations, and several methods that exploit generalized hierarchical bases. We develop their theoretical analysis in a unified way, gathering some known results, rewriting some others and stating some new ones. This includes lower bounds; that is, we not only investigate sufficient conditions for convergence, but also look at necessary conditions. Copyright © 2005 John Wiley & Sons, Ltd. [source]
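
    All of the schemes compared share the same ingredients: a fine/coarse splitting, an interpolation P, the Galerkin coarse matrix A_c = P^T A P, and a smoother. A generic sketch of one such two-level preconditioner (Jacobi smoothing around an exact coarse-grid correction), with made-up ingredients rather than any particular scheme from the paper:

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        def two_level_preconditioner(A, P, omega=0.7):
            """Pre-smooth, coarse-grid correction with A_c = P^T A P, post-smooth."""
            A = sp.csr_matrix(A)
            d_inv = 1.0 / A.diagonal()
            Ac_lu = spla.splu(sp.csc_matrix(P.T @ A @ P))
            def apply(r):
                r = np.ravel(r)
                x = omega * d_inv * r                        # pre-smoothing (zero initial guess)
                x = x + P @ Ac_lu.solve(P.T @ (r - A @ x))   # coarse-grid correction
                x = x + omega * d_inv * (r - A @ x)          # post-smoothing
                return x
            return spla.LinearOperator(A.shape, matvec=apply)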


    BCCB preconditioners for systems of BVM-based numerical integrators

    NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 1 2004
    Siu-Long Lei
    Abstract Boundary value methods (BVMs) for ordinary differential equations require the solution of non-symmetric, large and sparse linear systems. In this paper, these systems are solved by using the generalized minimal residual (GMRES) method. A block-circulant preconditioner with circulant blocks (BCCB preconditioner) is proposed to speed up the convergence rate of the GMRES method. The BCCB preconditioner is shown to be invertible when the BVM is A_{k1,k2} -stable. The spectrum of the preconditioned matrix is clustered and therefore, the preconditioned GMRES method converges fast. Moreover, the operation cost in each iteration of the preconditioned GMRES method by using our BCCB preconditioner is less than that required by using block-circulant preconditioners proposed earlier. In numerical experiments, we compare the number of iterations of various preconditioners. Copyright © 2003 John Wiley & Sons, Ltd. [source]
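
    Circulant-type preconditioners are attractive because a circulant is diagonalized by the FFT, so applying its inverse costs O(n log n) per iteration. A simplified scalar sketch: a Strang-style circulant built from the diagonals of a Toeplitz matrix and applied via FFT inside GMRES (the paper's BCCB preconditioner is the block-circulant-with-circulant-blocks analogue for the block BVM system, which is not reproduced here):

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 256
        # Non-symmetric tridiagonal Toeplitz system, as from a simple BVM/ODE stencil.
        A = sp.diags([-1.2, 3.0, -0.8], [-1, 0, 1], shape=(n, n), format="csr")

        # Strang-style circulant: copy the central diagonals into the first column c.
        c = np.zeros(n)
        c[0], c[1], c[-1] = 3.0, -1.2, -0.8
        lam = np.fft.fft(c)                         # eigenvalues of the circulant

        def circulant_inverse(r):
            return np.real(np.fft.ifft(np.fft.fft(np.ravel(r)) / lam))

        M = spla.LinearOperator((n, n), matvec=circulant_inverse)
        x, info = spla.gmres(A, np.ones(n), M=M)
        print(info)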


    Robust parameter-free algebraic multilevel preconditioning

    NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 6-7 2002
    Y. Notay
    Abstract To precondition large sparse linear systems resulting from the discretization of second-order elliptic partial differential equations, many recent works focus on the so-called algebraic multilevel methods. These are based on a block incomplete factorization process applied to the system matrix partitioned in hierarchical form. They have been shown to be both robust and efficient in several circumstances, leading to iterative solution schemes of optimal order of computational complexity. However, although the procedure is essentially algebraic, previous works generally focus on a specific context and consider schemes that use classical grid hierarchies with characteristic mesh sizes h, 2h, 4h, etc. Therefore, these methods require some extra information besides the matrix of the linear system and lack robustness in some situations where semi-coarsening would be desirable. In this paper, we develop a general method that can be applied in a black-box fashion to a wide class of problems, ranging from 2D model Poisson problems to 3D singularly perturbed convection-diffusion equations. It is based on an automatic coarsening process similar to the one used in the AMG method, and on coarse grid matrices computed according to a simple and cheap aggregation principle. Numerical experiments illustrate the efficiency and the robustness of the proposed approach. Copyright © 2002 John Wiley & Sons, Ltd. [source]
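
    The "simple and cheap aggregation principle" amounts to grouping unknowns into small aggregates, taking the piecewise-constant prolongation P (one column per aggregate), and forming the Galerkin coarse matrix A_c = P^T A P. A bare-bones greedy pairwise-aggregation sketch (generic; not necessarily the paper's coarsening heuristic):

        import numpy as np
        import scipy.sparse as sp

        def pairwise_aggregation(A):
            """Pair each node with its strongest (most negative) unaggregated neighbour."""
            A = sp.csr_matrix(A)
            n = A.shape[0]
            agg = -np.ones(n, dtype=int)
            n_agg = 0
            for i in range(n):
                if agg[i] >= 0:
                    continue
                row = A.getrow(i)
                candidates = [(row.data[k], int(j)) for k, j in enumerate(row.indices)
                              if j != i and agg[j] < 0]
                agg[i] = n_agg
                if candidates:
                    agg[min(candidates)[1]] = n_agg          # most negative coupling
                n_agg += 1
            P = sp.csr_matrix((np.ones(n), (np.arange(n), agg)), shape=(n, n_agg))
            return P, (P.T @ A @ P).tocsr()                  # prolongation, coarse matrix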


    ARMS: an algebraic recursive multilevel solver for general sparse linear systems

    NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 5 2002
    Y. Saad
    Abstract This paper presents a general preconditioning method based on a multilevel partial elimination approach. The basic step in constructing the preconditioner is to separate the initial points into two parts. The first part consists of 'block' independent sets, or 'aggregates'. Unknowns of two different aggregates have no coupling between them, but those in the same aggregate may be coupled. The nodes not in the first part constitute what might be called the 'coarse' set. It is natural to call the nodes in the first part 'fine' nodes. The idea of the method is to form the Schur complement related to the coarse set. This leads to a natural block LU factorization which can be used as a preconditioner for the system. This system is then solved recursively using as preconditioner the factorization that could be obtained from the next level. Iterations between levels are allowed. One interesting aspect of the method is that it provides a common framework for many other techniques. Numerical experiments are reported which indicate that the method can be fairly robust. Copyright © 2002 John Wiley & Sons, Ltd. [source]
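
    The first level of the scheme can be written out directly: order the 'fine' independent-set block B first, so that A = [[B, F], [E, C]]; the block LU uses the Schur complement S = C - E B^{-1} F, to which ARMS applies the same construction recursively. A single-level sketch with an exact (dense) Schur complement, under an assumed fine/coarse index split (illustrative only; ARMS keeps everything approximate and sparse):

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        def one_level_block_lu_solver(A, fine, coarse):
            """Solve via the block LU built on the Schur complement S = C - E B^{-1} F."""
            A = sp.csc_matrix(A)
            B = A[fine, :][:, fine].tocsc()
            F = A[fine, :][:, coarse]
            E = A[coarse, :][:, fine]
            C = A[coarse, :][:, coarse]
            B_lu = spla.splu(B)
            S = C.toarray() - E @ B_lu.solve(F.toarray())    # small, kept dense here
            def solve(b):
                bf, bc = b[fine], b[coarse]
                xc = np.linalg.solve(S, bc - E @ B_lu.solve(bf))   # coarse-set solve
                xf = B_lu.solve(bf - F @ xc)                       # back-substitute fine set
                x = np.empty_like(b)
                x[fine], x[coarse] = xf, xc
                return x
            return solve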


    Some observations on the l2 convergence of the additive Schwarz preconditioned GMRES method

    NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 5 2002
    Xiao-Chuan Cai
    Abstract Additive Schwarz preconditioned GMRES is a powerful method for solving large sparse linear systems of equations on parallel computers. The algorithm is often implemented in the Euclidean norm, or the discrete l2 norm; however, the optimal convergence result is available only in the energy norm (or the equivalent Sobolev H1 norm). Very little progress has been made in the theoretical understanding of the l2 behaviour of this very successful algorithm. To add to the difficulty in developing a full l2 theory, in this note, we construct explicit examples and show that the optimal convergence of additive Schwarz preconditioned GMRES in l2 cannot be obtained using the existing GMRES theory. More precisely speaking, we show that the symmetric part of the preconditioned matrix, which plays a role in the Eisenstat-Elman-Schultz theory, has at least one negative eigenvalue, and we show that the condition number of the best possible eigenmatrix that diagonalizes the preconditioned matrix, key to the Saad-Schultz theory, is bounded from both above and below by constants multiplied by h^{-1/2}. Here h is the finite element mesh size. The results presented in this paper are mostly negative, but we believe that the techniques used in our proofs may have wide applications in the further development of the l2 convergence theory and in other areas of domain decomposition methods. Copyright © 2002 John Wiley & Sons, Ltd. [source]
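
    For any concrete preconditioned matrix the quantity in question is easy to inspect numerically: form M^{-1} A, take its symmetric part, and look at the smallest eigenvalue; the Eisenstat-Elman-Schultz GMRES bound only applies when it is positive. A small dense sketch of that check on a toy problem (generic; not the paper's finite element construction):

        import numpy as np

        def smallest_symmetric_part_eigenvalue(A, M_inv):
            """Min eigenvalue of the symmetric part of M^{-1} A (dense, for small tests)."""
            T = M_inv @ A
            return np.linalg.eigvalsh(0.5 * (T + T.T)).min()

        rng = np.random.default_rng(1)
        A = np.diag(np.arange(1.0, 11.0)) + 0.3 * rng.standard_normal((10, 10))
        M_inv = np.diag(1.0 / np.diag(A))            # toy Jacobi "preconditioner"
        print(smallest_symmetric_part_eigenvalue(A, M_inv))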