Solves a real symmetric definite linear system using the conjugate gradient method with optional preconditioning.

Namespace: Imsl.Math
Assembly: ImslCS (in ImslCS.dll) Version: 6.5.0.0

Syntax

C#
[SerializableAttribute]
public class ConjugateGradient
Visual Basic (Declaration)
<SerializableAttribute> _
Public Class ConjugateGradient
Visual C++
[SerializableAttribute]
public ref class ConjugateGradient

Remarks

Class ConjugateGradient solves the symmetric positive or negative definite linear system Ax = b using the conjugate gradient method with optional preconditioning. This method is described in detail by Golub and Van Loan (1983, Chapter 10), and in Hageman and Young (1981, Chapter 7).

The preconditioning matrix M is a matrix that approximates A, and for which the linear system Mz=r is easy to solve. These two properties are in conflict; balancing them is a topic of current research. If no preconditioning matrix is specified, M is set to the identity, i.e. M=I.
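A common choice that keeps Mz = r cheap to solve is the Jacobi preconditioner M = diag(A). The Python sketch below illustrates this choice; it is not part of the Imsl.Math API, and all names in it are hypothetical.

```python
import numpy as np

def jacobi_preconditioner(A):
    """Return a function applying M^{-1} for the Jacobi choice M = diag(A).

    M approximates A while keeping Mz = r trivially solvable (elementwise
    division), the trade-off described above.  Illustrative sketch only,
    not the IMSL API.
    """
    d = np.diag(A).copy()
    if np.any(d == 0.0):
        raise ValueError("zero diagonal entry; Jacobi preconditioner undefined")
    return lambda r: r / d

# Example: a small symmetric positive definite matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
solve_M = jacobi_preconditioner(A)
r = np.array([1.0, 2.0, 3.0])
z = solve_M(r)   # z_i = r_i / a_ii
```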

The number of iterations needed depends on the matrix and the error tolerance. As a rough guide,

             \mathrm{itmax}=\sqrt{n}\quad\mbox{for}\quad n\gg 1,
where n is the order of the matrix A; for example, a system of order n=10\,000 can be expected to need on the order of 100 iterations. See the references for details.

Let M be the preconditioning matrix, let b, p, r, x, and z be vectors, and let \tau be the desired relative error. Then the algorithm used is as follows:

  • \lambda = -1
  • p_0 = x_0
  • r_1 = b-Ap_0
  • for k=1,\ldots,\mbox{{\tt{itmax}}}
    • z_k = M^{-1}r_k
    • if k=1 then
      • \beta_k = 1
      • p_k = z_k
    • else
      • \beta_k=(z_k^Tr_k)/(z_{k-1}^Tr_{k-1})
      • p_k=z_k+\beta_kp_{k-1}
    • endif
    • \alpha_k=(r_k^Tz_k)/(p_k^TAp_k)
    • x_k=x_{k-1}+\alpha_kp_k
    • r_{k+1}=r_k-\alpha_kAp_k
    • if (\|z_k\|_2 \leq \tau (1-\lambda)\|x_k\|_2) then
      • recompute \lambda
      • if (\|z_k\|_2 \leq \tau(1-\lambda)\|x_k\|_2) exit
    • endif
  • endfor
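Under the stated assumptions, the steps above can be sketched in Python as follows. This is an illustrative implementation, not the IMSL routine; in particular, the adaptive recomputation of \lambda is omitted for brevity, so \lambda keeps its initial value -1 and the stopping test is only a rough version of the adaptive criterion.

```python
import numpy as np

def pcg(A, b, solve_M=None, tol=1e-8, itmax=None):
    """Preconditioned conjugate gradient following the listed steps.

    solve_M(r) applies M^{-1}; identity preconditioning if omitted.
    The eigenvalue-based update of lambda is omitted (lambda stays -1).
    Illustrative sketch, not the IMSL routine.
    """
    n = len(b)
    if solve_M is None:
        solve_M = lambda r: r
    if itmax is None:
        itmax = max(int(np.sqrt(n)), 100)   # rough guide from the text
    lam = -1.0
    x = np.zeros(n)            # x_0 = p_0 = 0
    r = b - A @ x              # r_1 = b - A p_0
    p = np.zeros(n)
    zr_old = 0.0               # z_{k-1}^T r_{k-1}
    for k in range(1, itmax + 1):
        z = solve_M(r)         # z_k = M^{-1} r_k
        zr = z @ r
        if zr == 0.0:          # exact convergence; avoid 0/0 below
            break
        if k == 1:
            p = z.copy()
        else:
            beta = zr / zr_old             # beta_k
            p = z + beta * p               # p_k = z_k + beta_k p_{k-1}
        Ap = A @ p
        alpha = zr / (p @ Ap)              # alpha_k
        x = x + alpha * p                  # x_k = x_{k-1} + alpha_k p_k
        r = r - alpha * Ap                 # r_{k+1} = r_k - alpha_k A p_k
        zr_old = zr
        # Stopping test with the fixed initial lambda = -1.
        if np.linalg.norm(z) <= tol * (1.0 - lam) * np.linalg.norm(x):
            break
    return x

# Example: Jacobi-preconditioned solve of a small SPD system.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = pcg(A, b, solve_M=lambda r: r / np.diag(A))
residual = np.linalg.norm(A @ x - b)
```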

Here, \lambda is an estimate of \lambda_{\max}(\Gamma), the largest eigenvalue of the iteration matrix \Gamma=I-M^{-1}A. The stopping criterion is based on the result (Hageman and Young 1981, pp. 148-151)


             \frac{\|x_k-x\|_M}{\|x\|_M} \leq \left(\frac{1}{1-\lambda_{\max}(\Gamma)}\right)\left(\frac{\|z_k\|_M}{\|x_k\|_M}\right),
where

             \|x\|_M^2=x^TMx\,.
It is also known that
\lambda_{\max}(T_1) \leq \lambda_{\max}(T_2) \leq \ldots \leq \lambda_{\max}(\Gamma) < 1,
where the T_l are the symmetric, tridiagonal matrices

             T_l = \left[ \begin{array}{ccccc}
                    \mu_1 & \omega_2 & & & \\
                    \omega_2 & \mu_2 & \omega_3 & & \\
                    & \omega_3 & \mu_3 & \ddots & \\
                    & & \ddots & \ddots & \omega_l \\
                    & & & \omega_l & \mu_l
                   \end{array} \right]
with \mu_1=1-1/\alpha_1 and, for k=2,\ldots,l,

             \mu_k=1-\beta_k/\alpha_{k-1}-1/\alpha_k \quad \mbox{and} \quad \omega_k=\sqrt{\beta_k}/\alpha_{k-1}.

Usually, the eigenvalue computation is needed for only a few of the iterations.
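For illustration, the estimate \lambda_{\max}(T_l) can be computed directly from the coefficients \alpha_k and \beta_k by assembling T_l and taking its largest eigenvalue. The Python sketch below does exactly that; it is hypothetical, not the IMSL implementation.

```python
import numpy as np

def lambda_max_estimate(alphas, betas):
    """Largest eigenvalue of the tridiagonal matrix T_l built from the
    CG coefficients, with mu_1 = 1 - 1/alpha_1,
    mu_k = 1 - beta_k/alpha_{k-1} - 1/alpha_k and
    omega_k = sqrt(beta_k)/alpha_{k-1} for k = 2, ..., l.

    alphas = [alpha_1, ..., alpha_l]; betas = [beta_2, ..., beta_l].
    Illustrative sketch, not the IMSL implementation.
    """
    l = len(alphas)
    T = np.zeros((l, l))
    T[0, 0] = 1.0 - 1.0 / alphas[0]
    for k in range(2, l + 1):              # 1-based k = 2, ..., l
        beta_k = betas[k - 2]
        mu_k = 1.0 - beta_k / alphas[k - 2] - 1.0 / alphas[k - 1]
        omega_k = np.sqrt(beta_k) / alphas[k - 2]
        T[k - 1, k - 1] = mu_k
        T[k - 1, k - 2] = T[k - 2, k - 1] = omega_k
    return np.linalg.eigvalsh(T).max()
```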

Inheritance Hierarchy

System.Object
  Imsl.Math.ConjugateGradient

See Also