IMSL Mathematics Reference Guide > Eigensystem Analysis > Introduction
  

Introduction
An ordinary linear eigensystem problem is represented by the equation Ax = λx, where A denotes an n × n matrix. The value λ is an eigenvalue, and x ≠ 0 is the corresponding eigenvector. The eigenvector is determined only up to a scalar factor. In all functions, this factor is chosen so that x has Euclidean length 1 and the component of x of largest magnitude is positive. The eigenvalues and corresponding eigenvectors are sorted and then returned in order of largest to smallest complex magnitude. If x is a complex vector, the component of largest magnitude is scaled to be real and positive. The entry where this component occurs can be arbitrary for eigenvectors having nonunique maximum magnitude values.
A generalized linear eigensystem problem is represented by Ax = λBx, where A and B are n × n matrices. The value λ is a generalized eigenvalue, and x is the corresponding generalized eigenvector. The generalized eigenvectors are normalized in the same manner as for ordinary eigensystem problems.
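As a minimal illustration of this normalization convention, the sketch below uses NumPy as a stand-in for the library's own eigensystem functions: it scales an eigenvector to Euclidean length 1 with its largest-magnitude component real and positive, and sorts the pairs by descending eigenvalue magnitude.

```python
import numpy as np

def normalize_eigvec(x):
    # Scale x to Euclidean length 1 with its largest-magnitude
    # component real and positive, as described above.
    x = np.asarray(x, dtype=complex)
    k = np.argmax(np.abs(x))                 # index of largest-magnitude entry
    x = x * (np.conj(x[k]) / np.abs(x[k]))   # rotate that entry to be real and positive
    return x / np.linalg.norm(x)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eig(A)
# Sort pairs from largest to smallest eigenvalue magnitude.
order = np.argsort(-np.abs(vals))
vals, vecs = vals[order], vecs[:, order]
x = normalize_eigvec(vecs[:, 0])
print(np.allclose(A @ x, vals[0] * x))   # True
```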
Error Analysis and Accuracy
This section discusses ordinary eigenvalue problems. Except in special cases, functions do not return the exact eigenvalue-eigenvector pair for the ordinary eigenvalue problem Ax = λx. Typically, the computed pair λ̃, x̃ is an exact eigenvalue-eigenvector pair for a “nearby” matrix A + E. Information about E is known only in terms of bounds of the form:

||E||2 ≤ f(n) ||A||2 ε

The value of f(n) depends on the algorithm but is typically a small fractional power of n. The parameter ε is the machine precision. By a theorem due to Bauer and Fike (see Golub and Van Loan 1989, p. 342):

min {|λ − μ| : μ ∈ σ(A)} ≤ κ(X) ||E||2 for all λ in σ(A + E)

where σ(A) is the set of all eigenvalues of A (called the spectrum of A), X is the matrix of eigenvectors, || · ||2 is Euclidean length, and κ(X) is the condition number of X, defined as:

κ(X) = ||X||2 ||X–1||2
If A is a real symmetric or complex Hermitian matrix, then its eigenvector matrix X is respectively orthogonal or unitary. For these matrices, κ (X) = 1.
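The identity κ(X) = 1 for symmetric (or Hermitian) matrices can be checked numerically. The sketch below, again using NumPy as a stand-in, compares the eigenvector-matrix condition number of a symmetric matrix with that of a nonnormal one.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
A = S + S.T                       # real symmetric matrix
vals, X = np.linalg.eigh(A)       # X is orthogonal for symmetric A
print(np.linalg.cond(X, 2))       # kappa(X) = ||X||2 ||X^-1||2, here ~1

N = np.array([[1.0, 1e3],
              [0.0, 2.0]])        # nonnormal matrix
_, Xn = np.linalg.eig(N)
print(np.linalg.cond(Xn, 2))      # much larger than 1
```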
The accuracy of the computed eigenvalues λ̃j and eigenvectors x̃j can be checked by computing their performance index τ. The performance index is defined to be:

τ = max (1 ≤ j ≤ n) ||Ax̃j − λ̃jx̃j||1 / (10 n ε ||A||1 ||x̃j||1)

where ε is again the machine precision.
The performance index τ is related to the error analysis because:

||Ex̃j||1 = ||Ax̃j − λ̃jx̃j||1

where E is the “nearby” matrix discussed above.
While the exact value of τ is precision and data dependent, the performance of an eigensystem analysis function is defined as excellent if τ < 1, good if 1 ≤ τ ≤ 100, and poor if τ > 100. This is an arbitrary definition, but large values of τ can serve as a warning that there is a significant error in the calculation.
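The check above can be sketched in NumPy. This is an illustrative implementation that assumes the normalization τ = max(1≤j≤n) ||Ax̃j − λ̃jx̃j||1 / (10 n ε ||A||1 ||x̃j||1); the exact scaling used by a given library routine may differ.

```python
import numpy as np

def performance_index(A, vals, X):
    # tau = max_j ||A x_j - lam_j x_j||_1 / (10 n eps ||A||_1 ||x_j||_1)
    # (assumed normalization; the library's exact scaling may differ)
    n = A.shape[0]
    eps = np.finfo(float).eps
    norm_A = np.linalg.norm(A, 1)
    ratios = [np.linalg.norm(A @ X[:, j] - vals[j] * X[:, j], 1)
              / (10 * n * eps * norm_A * np.linalg.norm(X[:, j], 1))
              for j in range(n)]
    return max(ratios)

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
vals, X = np.linalg.eig(A)
tau = performance_index(A, vals, X)
print(tau)   # excellent if tau < 1, good if 1 <= tau <= 100
```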
If the condition number κ(X) of the eigenvector matrix X is large, there can be large errors in the eigenvalues even if τ is small. In particular, it is often difficult to recognize near multiple eigenvalues or unstable mathematical problems from numerical results. This facet of the eigenvalue problem is often difficult for users to understand. Suppose the accuracy of an individual eigenvalue is desired. It can be estimated approximately by computing the condition number of that eigenvalue (see Golub and Van Loan 1989, pp. 344–345). For matrices A such that the computed array of normalized eigenvectors X is invertible, the condition number of λj is:

κj = ||ejT X–1||2

the Euclidean length of the jth row of X–1. Users can choose to compute this matrix using the EIG Function (PV-WAVE Advantage). An approximate bound for the accuracy of a computed eigenvalue is then given by:

κj ε ||A||2

To compute an approximate bound for the relative accuracy of an eigenvalue, divide this bound by |λj|.
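The eigenvalue condition numbers κj can be computed directly as the row lengths of X–1. The NumPy sketch below (in PV-WAVE, the EIG function mentioned above would supply X) applies this to a nearly defective matrix, where the κj are large and the accuracy bounds correspondingly loose.

```python
import numpy as np

A = np.array([[1.0, 100.0],
              [0.0, 1.001]])              # nearly defective matrix
vals, X = np.linalg.eig(A)
X = X / np.linalg.norm(X, axis=0)         # columns scaled to Euclidean length 1
Xinv = np.linalg.inv(X)
kappas = np.linalg.norm(Xinv, axis=1)     # kappa_j = length of jth row of X^-1
eps = np.finfo(float).eps
bounds = kappas * eps * np.linalg.norm(A, 2)   # approximate accuracy bounds
for lam, k, b in zip(vals, kappas, bounds):
    print(lam, k, b)
```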
Reformulating Generalized Eigenvalue Problems
The generalized eigenvalue problem Ax = λBx is often difficult for users to analyze because it is frequently ill-conditioned. Occasionally, there are changes of variables that can be performed on the given problem to ease this ill-conditioning. Suppose that B is singular, but A is nonsingular. Define the reciprocal μ = λ–1. Then the roles of A and B are interchanged, and the reformulated problem Bx = μAx is solved instead. Generalized eigenvalues μj = 0 correspond to eigenvalues λj = ∞, and the remaining λj = μj–1. The generalized eigenvectors for λj correspond to those for μj.
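A small NumPy sketch of this change of variables (illustrative only; matrices and tolerance are hypothetical): with B singular and A nonsingular, the reformulated problem Bx = μAx is solved as an ordinary problem, and the μj are mapped back to λj.

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])       # nonsingular
B = np.array([[1.0, 0.0],
              [0.0, 0.0]])       # singular
# Reformulated problem B x = mu A x, solved as the ordinary problem (A^-1 B) x = mu x.
C = np.linalg.solve(A, B)
mus, X = np.linalg.eig(C)
# mu_j = 0 corresponds to lambda_j = infinity; otherwise lambda_j = 1/mu_j.
lams = np.array([1.0 / m if abs(m) > 1e-12 else np.inf for m in mus])
print(lams)
```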
Now, suppose that B is nonsingular. The user can solve the ordinary eigenvalue problem Cx = λx , where C = B–1A. Matrix C is subject to perturbations due to ill-conditioning and rounding errors when computing B–1A. Computing condition numbers of the eigenvalues for C may, however, be helpful for analyzing the accuracy of results for the generalized problem.
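This reduction can be sketched as follows (NumPy, illustrative matrices): C = B–1A is formed by solving linear systems rather than inverting B explicitly, and each computed pair is checked against the generalized problem.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[4.0, 0.0],
              [0.0, 1.0]])       # nonsingular
C = np.linalg.solve(B, A)        # C = B^-1 A, formed without an explicit inverse
vals, X = np.linalg.eig(C)
# Each computed pair also satisfies the generalized problem A x = lam B x.
resid = max(np.linalg.norm(A @ X[:, j] - vals[j] * (B @ X[:, j]))
            for j in range(A.shape[0]))
print(resid)                     # near machine precision
```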
Another method to reduce the generalized problem to an alternate ordinary problem is based on first computing a matrix decomposition B = PQ, where both P and Q are matrices that are “simple” to invert. Then, the given generalized problem is equivalent to the ordinary eigenvalue problem Fy = λy, where F = P–1AQ–1 and the unnormalized eigenvectors of the generalized problem are given by x = Q–1y. An example of this reformulation is used in the case where A and B are real and symmetric, with B positive definite. The EIGSYMGEN Function (PV-WAVE Advantage) uses P = RT and Q = R, where R is an upper-triangular matrix obtained from a Cholesky decomposition, B = RTR. The matrix F = R–TAR–1 is symmetric and real. Computation of the eigenvalue-eigenvector expansion for F is based on function EIG.
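The Cholesky-based reduction can be sketched in NumPy (as an illustration of what EIGSYMGEN does internally, not its actual implementation): factor B = RTR, form the symmetric matrix F = R–TAR–1, solve the ordinary symmetric problem, and map the eigenvectors back.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])       # symmetric
B = np.array([[4.0, 1.0],
              [1.0, 2.0]])       # symmetric positive definite
L = np.linalg.cholesky(B)        # B = L L^T, so R = L^T in the notation above
R = L.T
Rinv = np.linalg.inv(R)
F = Rinv.T @ A @ Rinv            # F = R^-T A R^-1, symmetric and real
vals, Y = np.linalg.eigh(F)      # ordinary symmetric eigenproblem F y = lam y
X = Rinv @ Y                     # unnormalized generalized eigenvectors x = R^-1 y
resid = max(np.linalg.norm(A @ X[:, j] - vals[j] * (B @ X[:, j]))
            for j in range(A.shape[0]))
print(resid)                     # near machine precision
```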

Version 2017.0
Copyright © 2017, Rogue Wave Software, Inc. All Rights Reserved.