n × n Matrix
Selected Abstracts

Dirichlet duality and the nonlinear Dirichlet problem
COMMUNICATIONS ON PURE & APPLIED MATHEMATICS, Issue 3 2009
F. Reese Harvey

We study the Dirichlet problem for fully nonlinear, degenerate elliptic equations of the form F(Hess u) = 0 on a smoothly bounded domain Ω ⊂⊂ ℝⁿ. In our approach the equation is replaced by a subset F ⊂ Sym²(ℝⁿ) of the symmetric n × n matrices with ∂F ⊂ {F = 0}. We establish the existence and uniqueness of continuous solutions under an explicit geometric "F-convexity" assumption on the boundary ∂Ω. We also study the topological structure of F-convex domains and prove a theorem of Andreotti–Frankel type. Two key ingredients in the analysis are the use of "subaffine functions" and "Dirichlet duality." Associated to F is a Dirichlet dual set F̃ that gives a dual Dirichlet problem. This pairing is a true duality in that the dual of F̃ is F, and in the analysis the roles of F and F̃ are interchangeable. The duality also clarifies many features of the problem, including the appropriate conditions on the boundary. Many interesting examples are covered by these results, including: all branches of the homogeneous Monge–Ampère equation over ℝ, ℂ, and ℍ; equations appearing naturally in calibrated geometry, Lagrangian geometry, and p-convex Riemannian geometry; and all branches of the special Lagrangian potential equation. © 2008 Wiley Periodicals, Inc. [source]

An implicit QR algorithm for symmetric semiseparable matrices
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 7 2005
Raf Vandebril

Abstract The QR algorithm is one of the classical methods to compute the eigendecomposition of a matrix. If it is applied to a dense n × n matrix, this algorithm requires O(n³) operations per iteration step. To reduce this complexity for a symmetric matrix to O(n), the original matrix is first reduced to tridiagonal form using orthogonal similarity transformations.
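The tridiagonal reduction mentioned in this abstract can be sketched in a few lines (an illustrative NumPy implementation of Householder tridiagonalization, not the paper's semiseparable reduction; the matrix size is an arbitrary choice):

```python
import numpy as np

def tridiagonalize(A):
    """Reduce a symmetric matrix to tridiagonal form by Householder
    similarity transformations, the classical pre-processing step
    for the symmetric QR eigenvalue algorithm."""
    T = A.astype(float).copy()
    n = T.shape[0]
    for k in range(n - 2):
        x = T[k + 1:, k].copy()
        v = x.copy()
        # Choose the sign that avoids cancellation.
        v[0] += np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
        nv = np.linalg.norm(v)
        if nv == 0:
            continue
        v /= nv
        # Apply the reflector H = I - 2 v v^T from the left and the right,
        # so T stays similar (and symmetric) at every step.
        T[k + 1:, k:] -= 2.0 * np.outer(v, v @ T[k + 1:, k:])
        T[k:, k + 1:] -= 2.0 * np.outer(T[k:, k + 1:] @ v, v)
    return T

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
S = (M + M.T) / 2                      # symmetric test matrix
T = tridiagonalize(S)
# Similarity transformations preserve the spectrum:
print(np.allclose(np.sort(np.linalg.eigvalsh(T)),
                  np.sort(np.linalg.eigvalsh(S))))
```

Each reflector costs O(n²) work, so the full reduction is O(n³) once; the payoff is that every subsequent QR iteration on the tridiagonal (or, in this paper, semiseparable) form is cheap.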
In an earlier report (Report TW360, May 2003), a reduction of a symmetric matrix to a similar semiseparable one is described. In this paper a QR algorithm to compute the eigenvalues of semiseparable matrices is designed in which each iteration step requires O(n) operations. Hence, combined with the reduction to semiseparable form, the eigenvalues of symmetric matrices can be computed via intermediate semiseparable matrices, instead of tridiagonal ones. The eigenvectors of the intermediate semiseparable matrix are computed by applying inverse iteration to this matrix, using an O(n) system solver for semiseparable matrices. A combination of the previous steps leads to an algorithm for computing the eigenvalue decompositions of semiseparable matrices. Combined with the reduction of a symmetric matrix towards semiseparable form, this algorithm can also be used to calculate the eigenvalue decomposition of symmetric matrices. The presented algorithm has the same order of complexity as the tridiagonal approach, but has larger lower-order terms. Numerical experiments illustrate the complexity and the numerical accuracy of the proposed method. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Computing projections via Householder transformations and Gram–Schmidt orthogonalizations
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 7 2004
Achiya Dax

Abstract Let x* denote the solution of a linear least-squares problem of the form min_x ∥Ax − b∥₂, where A is a full-rank m × n matrix, m > n. Let r* = b − Ax* denote the corresponding residual vector. In most problems one is satisfied with accurate computation of x*. Yet in some applications, such as affine scaling methods, one is also interested in accurate computation of the unit residual vector r*/∥r*∥₂. The difficulties arise when ∥r*∥₂ is much smaller than ∥b∥₂. Let x̂ and r̂ denote the computed values of x* and r*, respectively.
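The loss of accuracy in small residuals can be demonstrated with a short experiment (a sketch using NumPy; the matrix size and residual scales are arbitrary choices for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 20
A = rng.standard_normal((m, n))

# Build b with a controllably small true residual: project noise
# onto the orthogonal complement of range(A).
Q, _ = np.linalg.qr(A, mode='complete')
x_true = rng.standard_normal(n)

norms = []
for scale in (1e-2, 1e-6, 1e-10):
    r_true = scale * (Q[:, n:] @ rng.standard_normal(m - n))  # ⟂ range(A)
    b = A @ x_true + r_true
    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    r_hat = b - A @ x_hat
    u_hat = r_hat / np.linalg.norm(r_hat)
    # In exact arithmetic A^T u_hat = 0; in floating point its norm
    # grows roughly like eps * ||A|| * ||b|| / ||r*|| as r* shrinks.
    norms.append(np.linalg.norm(A.T @ u_hat))
    print(f"||r*|| ~ {scale:.0e}:  ||A^T u_hat|| = {norms[-1]:.2e}")
```

The printed norms grow as the true residual shrinks, which is exactly the effect the abstract describes: the computed unit residual loses orthogonality to range(A) at a rate inversely proportional to ∥r*∥₂.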
Let ε denote the machine precision in our computations, and assume that r̂ is computed from the equality r̂ = b − Ax̂. Then, no matter how accurate x̂ is, the unit residual vector û = r̂/∥r̂∥₂ contains an error vector whose size is likely to exceed ε∥b∥₂/∥r*∥₂. That is, the smaller ∥r*∥₂, the larger the error. Thus although the computed unit residual should satisfy Aᵀû = 0, in practice the size of ∥Aᵀû∥₂ is about ε∥A∥₂∥b∥₂/∥r*∥₂. The methods discussed in this paper compute a residual vector r̂ for which ∥Aᵀr̂∥₂ is not much larger than ε∥A∥₂∥r̂∥₂. Numerical experiments illustrate the difficulties in computing small residuals and the usefulness of the proposed safeguards. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Smallest singular value of a random rectangular matrix
COMMUNICATIONS ON PURE & APPLIED MATHEMATICS, Issue 12 2009
Mark Rudelson

We prove an optimal estimate of the smallest singular value of a random sub-Gaussian matrix, valid for all dimensions. For an N × n matrix A with independent and identically distributed sub-Gaussian entries, the smallest singular value of A is at least of the order √N − √(n − 1) with high probability. A sharp estimate on the probability is also obtained. © 2009 Wiley Periodicals, Inc. [source]
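The √N − √(n − 1) estimate can be checked empirically in the Gaussian special case (a hypothetical NumPy experiment; the dimensions and trial count are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 400, 100
trials = 50

ratios = []
for _ in range(trials):
    A = rng.standard_normal((N, n))
    # Smallest singular value of an N x n Gaussian matrix.
    smin = np.linalg.svd(A, compute_uv=False)[-1]
    ratios.append(smin / (np.sqrt(N) - np.sqrt(n - 1)))

print(f"mean s_min / (sqrt(N) - sqrt(n-1)) over {trials} trials: "
      f"{np.mean(ratios):.3f}")
```

For Gaussian entries the expected smallest singular value is roughly √N − √n, so the ratio concentrates near 1, consistent with the theorem's claim that σ_min(A) is at least of that order with high probability.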