Matrix Computations
Selected Abstracts

Data structures in Java for matrix computations
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 8, 2004
Geir Gundersen

Abstract: In this paper we show how to utilize Java's native arrays for matrix computations. The disadvantages of Java arrays used as 2D arrays for dense matrix computations are discussed, and ways to improve their performance are examined. We show how to create efficient dynamic data structures for sparse matrix computations using Java's native arrays. This data structure is unique to Java and is shown to be more dynamic and efficient than the traditional storage schemes for large sparse matrices. Numerical testing indicates that this new data structure, called Java Sparse Array, is competitive with the traditional Compressed Row Storage scheme on matrix computation routines. Java gives increased flexibility without losing efficiency, and compared with other object-oriented data structures, Java Sparse Array is shown to have the same flexibility. Copyright © 2004 John Wiley & Sons, Ltd.

Usability levels for sparse linear algebra components
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 12, 2008
M. Sosonkina

Abstract: Sparse matrix computations are ubiquitous in high-performance computing applications and are often their most computationally intensive part. In particular, efficient solution of large-scale linear systems may drastically improve the overall application performance. Thus, the choice and implementation of the linear system solver are of paramount importance. It is difficult, however, to navigate through a multitude of available solver packages and to tune their performance to the problem at hand, mainly because of the plethora of interfaces, each requiring application adaptations to match the specifics of the solver package. For example, different ways of setting parameters and a variety of sparse matrix formats hinder smooth interaction of sparse matrix computations with user applications.
In this paper, interfaces designed for components that encapsulate sparse matrix computations are discussed in light of how well they match application usability requirements. We distinguish three levels of interfaces (high, medium, and low) corresponding to the degree of user involvement in the linear system solution process and in sparse matrix manipulations. We demonstrate when each interface design choice is applicable and how it may be used to further users' scientific goals. Component computational overheads caused by the various design choices are also examined, ranging from the low level, for matrix manipulation components, to the high level, in which a single component contains the entire linear system solver. Published in 2007 by John Wiley & Sons, Ltd.
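A minimal sketch of the row-wise layout behind the Java Sparse Array idea from the Gundersen abstract above: each row keeps its own value array and column-index array (Java's native "array of arrays"), so individual rows can grow or be replaced without shifting a global storage array as Compressed Row Storage requires. The class and field names here are hypothetical, chosen only for illustration.

```java
// Illustrative sketch of a Java Sparse Array (JSA)-style layout: per-row
// value and column-index arrays instead of CRS's single flat arrays.
// All names are hypothetical; this is not the paper's actual code.
public class JsaSketch {
    // values[i][k] is the k-th nonzero of row i; cols[i][k] is its column.
    // Example 3x3 matrix:
    //   [ 4 0 1 ]
    //   [ 0 2 0 ]
    //   [ 1 0 3 ]
    static double[][] values = { {4.0, 1.0}, {2.0}, {1.0, 3.0} };
    static int[][]    cols   = { {0, 2},     {1},   {0, 2}     };

    // Sparse matrix-vector product y = A*x over the per-row arrays.
    static double[] multiply(double[] x) {
        double[] y = new double[values.length];
        for (int i = 0; i < values.length; i++) {
            double sum = 0.0;
            for (int k = 0; k < values[i].length; k++) {
                sum += values[i][k] * x[cols[i][k]];
            }
            y[i] = sum;
        }
        return y;
    }

    public static void main(String[] args) {
        double[] y = multiply(new double[] {1.0, 1.0, 1.0});
        System.out.println(java.util.Arrays.toString(y)); // prints [5.0, 2.0, 4.0]
    }
}
```

Because each row is an independent pair of arrays, inserting a nonzero touches only that row, which is the dynamic behavior the abstract contrasts with static CRS storage.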
A subspace approach to balanced truncation for model reduction of nonlinear control systems
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 6, 2002
Sanjay Lall

Abstract: In this paper, we introduce a new method of model reduction for nonlinear control systems. Our approach is to construct an approximately balanced realization. The method requires only standard matrix computations, and we show that when it is applied to linear systems it results in the usual balanced truncation. For nonlinear systems, the method makes use of data from either simulation or experiment to identify the dynamics relevant to the input-output map of the system. An important feature of this approach is that the resulting reduced-order model is nonlinear and has inputs and outputs suitable for control. We perform an example reduction for a nonlinear mechanical system. Copyright © 2002 John Wiley & Sons, Ltd.

A low complexity partially adaptive CDMA receiver for downlink mobile satellite communications
INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 1, 2003
Gau-Joe Lin

Abstract: A novel CDMA receiver with enhanced interference suppression is proposed for pilot-symbol-assisted mobile satellite systems in the presence of frequency offset. The design of the receiver involves the following procedure. First, adaptive correlators are constructed at different fingers, based on the generalized sidelobe canceller (GSC) scheme, to collect the multipath signals and suppress multi-access interference (MAI). In particular, a partially adaptive (PA) realization of the GSC correlators is proposed based on the Krylov subspace technique, leading to an efficient algorithm that avoids complicated matrix computations. Second, pilot-symbol-assisted frequency offset estimation, channel estimation, and RAKE combining give the estimate of the signal symbols.
Finally, further performance enhancement is achieved by an iterative scheme in which the signal is reconstructed and subtracted from the GSC correlator input, leading to faster convergence of the receiver. The proposed low-complexity PA receiver is suitable for the downlink of mobile satellite CDMA systems and is shown to outperform the conventional fully adaptive MMSE receiver using only a small number of pilot symbols. Copyright © 2003 John Wiley & Sons, Ltd.

Additive preconditioning in matrix computations
PROCEEDINGS IN APPLIED MATHEMATICS & MECHANICS, Issue 1, 2007
V. Y. Pan

Abstract: We combine our novel SVD-free additive preconditioning with aggregation and other relevant techniques to facilitate the solution of a linear system of equations and other fundamental matrix computations. Our analysis and experiments show the power of our algorithms, guide us in selecting the most effective policies of preconditioning and aggregation, and provide some new insights into these and related subjects. Compared to the popular SVD-based multiplicative preconditioners, our additive preconditioners are generated more readily and for a much larger class of matrices. Furthermore, they better preserve matrix structure and sparseness and have a wider range of applications (e.g., they facilitate the solution of a consistent singular linear system of equations and of the eigenproblem). (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
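To make the additive-preconditioning idea in the Pan abstract concrete: where a multiplicative preconditioner solves MAx = Mb, an additive one works with A + P for some easily generated term P. The toy sketch below, which is only an illustration and not the paper's algorithm, takes a consistent singular 2x2 system and solves it through the shifted matrix A + cI (a deliberately simple choice of P); the shift makes the matrix nonsingular while staying close to A.

```java
// Hand-rolled 2x2 illustration of additive preconditioning: a singular
// matrix A is replaced by A + P (here P = shift * I, chosen purely for
// illustration), making a consistent singular system solvable by
// standard means. Names and the choice of P are hypothetical.
public class AdditivePreconditioningSketch {
    static double det(double[][] m) {
        return m[0][0] * m[1][1] - m[0][1] * m[1][0];
    }

    // Solve (A + shift*I) x = b by Cramer's rule for the 2x2 case.
    static double[] solvePreconditioned(double[][] a, double shift, double[] b) {
        double[][] m = {
            {a[0][0] + shift, a[0][1]},
            {a[1][0],         a[1][1] + shift}
        };
        double d = det(m); // nonzero once the shift is applied
        return new double[] {
            (b[0] * m[1][1] - b[1] * m[0][1]) / d,
            (b[1] * m[0][0] - b[0] * m[1][0]) / d
        };
    }

    public static void main(String[] args) {
        double[][] a = {{1.0, 2.0}, {2.0, 4.0}};   // rank 1, so singular
        double[] b = {3.0, 6.0};                    // consistent: b is in the range of A
        System.out.println("det(A) = " + det(a));   // prints det(A) = 0.0
        double[] x = solvePreconditioned(a, 1e-3, b);
        // x approximates a solution of A x = b (close to [0.6, 1.2])
        System.out.println(x[0] + ", " + x[1]);
    }
}
```

As the shift shrinks, the computed x tends to the minimum-norm solution of the singular system, which is one way an additive term can "facilitate the solution of a consistent singular linear system" as the abstract states.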