2-Norm Error Estimates for Least-Squares and SPD Problems

  • Author: Eric Hallman
  • Description: A method for estimating the 2-norm error \(\|x-x_*\|_2\) when using the conjugate gradient method (CG) to solve \(Ax=b\) or \((A+\lambda I)x = b\), where \(A\) is a symmetric positive definite (SPD) matrix. If no regularization parameter \(\lambda\) is used, then a lower bound on the smallest eigenvalue of \(A\) is required.

    An algorithm closely related to CG produces iterates with slightly smaller 2-norm error bounds, although the true error may or may not be smaller than that of the CG iterates.
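    For comparison only (this is not the estimator described here, which is much sharper), a minimal MATLAB sketch of the classical bound \(\|x-x_*\|_2 = \|A^{-1}r\|_2 \le \|r\|_2/\lambda_{\min}(A)\) appears below; the test matrix, the value of lam_min, and the iteration count are illustrative assumptions.

```matlab
% Crude 2-norm error bound for CG on an SPD system: for SPD A,
%   ||x - x*||_2 = ||A \ r||_2 <= ||r||_2 / lambda_min(A),
% so any lower bound lam_min <= lambda_min(A) gives a computable bound.
% (Illustrative sketch only; not the sharper estimator described above.)
n = 100;
A = gallery('tridiag', n, -1, 2, -1);   % SPD test matrix (assumption)
b = randn(n, 1);
lam_min = 2 - 2*cos(pi/(n+1));          % exact smallest eigenvalue here
xstar = A \ b;                          % reference solution

x = zeros(n, 1); r = b; p = r;          % plain CG iteration
rho = r' * r;
for k = 1:50
    Ap = A * p;
    alpha = rho / (p' * Ap);
    x = x + alpha * p;
    r = r - alpha * Ap;
    rho_new = r' * r;
    p = r + (rho_new / rho) * p;
    rho = rho_new;
    bound = sqrt(rho_new) / lam_min;    % ||r||_2 / lam_min >= ||x - x*||_2
    fprintf('k=%2d  bound=%.2e  true=%.2e\n', k, bound, norm(x - xstar));
end
```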

    A similar method is given for estimating the 2-norm error \(\|x-x_*\|_2\) when using LSQR to solve \begin{align*} \min_x\ & \|Ax-b\|^2 \\ \text{or}\quad \min_x\ & \|Ax-b\|^2 + \lambda^2 \|x\|^2, \end{align*} where the matrix \(A\) may have any dimensions and any rank. If \(A\) is rank-deficient, then the solution of the first problem is not unique, and LSQR returns the minimum-norm solution. If no regularization parameter \(\lambda\) is used, then a lower bound on the smallest singular value of \(A\) is required.

    An algorithm closely related to LSQR produces iterates with slightly smaller 2-norm error bounds, although the true error may or may not be smaller than that of the LSQR iterates.
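    Similarly for least squares: if \(A\) has full column rank, then \(A^T(Ax_*-b)=0\) gives \(\|x-x_*\|_2 \le \|A^T(Ax-b)\|_2/\sigma_{\min}(A)^2\). The MATLAB sketch below illustrates this crude bound (again, not the sharper estimator described above); the problem sizes and iteration counts are illustrative assumptions.

```matlab
% Crude 2-norm error bound for min ||Ax - b||_2 with full column rank A:
%   x - x* = (A'*A) \ (A'*(A*x - b)),  so
%   ||x - x*||_2 <= ||A'*(A*x - b)||_2 / sigma_min(A)^2.
% (Illustrative sketch only; not the sharper estimator described above.)
m = 200; n = 50;
A = randn(m, n);                        % full column rank w.p. 1 (assumption)
b = randn(m, 1);
sig_min = min(svd(A));                  % in practice, a known lower bound
xstar = A \ b;                          % reference least-squares solution

for k = [5 10 20 40]
    [x, ~] = lsqr(A, b, 1e-12, k);      % k iterations of MATLAB's lsqr
    r = A * x - b;
    bound = norm(A' * r) / sig_min^2;   % computable error bound
    fprintf('k=%2d  bound=%.2e  true=%.2e\n', k, bound, norm(x - xstar));
end
```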

    Downloads:
  • MATLAB file for CG and the related algorithm.
  • MATLAB file for LSQR and the related algorithm.