LSMB: An Iterative Least-Squares Solver
Authors: Eric Hallman and Ming Gu
Description: A conjugate-gradient-type algorithm for solving
sparse least-squares problems:
\begin{align*}
\min_x & \|Ax-b\|^2
\\ \text{or } \min_x & \|Ax-b\|^2 + \lambda^2 \|x\|^2,
\end{align*}
where the matrix \(A\) may have any dimension or rank. If \(A\) is rank-deficient, the solution is not unique, and LSMB returns the minimum-norm solution.
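To make the two formulations concrete, here is a minimal Matlab sketch using Matlab's built-in lsqr as a stand-in solver (the test matrix, right-hand side, damping parameter, and tolerances below are all illustrative, not defaults from LSMB). The damped problem is rewritten as an ordinary least-squares problem on the stacked matrix \([A;\ \lambda I]\):

% Illustrative sparse test problem (not from the paper).
m = 500; n = 200;
A = sprandn(m, n, 0.01);          % sparse rectangular matrix
b = randn(m, 1);
lambda = 1e-2;                    % damping parameter

% Undamped problem: min ||Ax - b||.
x_plain = lsqr(A, b, 1e-8, 200);

% Damped problem: min ||Ax - b||^2 + lambda^2 ||x||^2,
% rewritten as min ||[A; lambda*I] x - [b; 0]||.
Aug = [A; lambda * speye(n)];
rhs = [b; zeros(n, 1)];
x_damped = lsqr(Aug, rhs, 1e-8, 200);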
LSMB is based on the Golub-Kahan bidiagonalization process, and its iterates are convex combinations of the iterates produced by algorithms LSQR and LSMR. LSMB is designed to minimize, at every iteration, an objective function closely related to the backward error, allowing it to terminate sooner than either LSQR or LSMR.
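In symbols (our notation, not necessarily the paper's), the \(k\)th LSMB iterate can be written as
\begin{align*}
x_k^{\mathrm{LSMB}} = (1-\theta_k)\, x_k^{\mathrm{LSQR}} + \theta_k\, x_k^{\mathrm{LSMR}}, \qquad \theta_k \in [0,1],
\end{align*}
where \(\theta_k\) is a mixing weight chosen at each iteration to reduce the backward-error-related objective.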
If the smallest singular value of \(A\) is known, it can be used to produce tighter error estimates and allow LSMB (or LSQR) to terminate sooner, particularly on nearly compatible systems.
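If no such value is known in advance, an estimate can be computed with Matlab's built-in svds; how the estimate is then supplied to LSMB depends on the implementation, so the commented call below is hypothetical:

sigma_min = svds(A, 1, 'smallest');   % iterative estimate of the smallest singular value
% x = lsmb(A, b, ..., sigma_min);     % hypothetical: pass the estimate to the solver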
Downloads:
Matlab file for LSMB.
Matlab file for a method that runs LSQR and LSMR simultaneously. In practice, LSMB is nearly equivalent to running LSQR and LSMR in tandem and stopping as soon as either method satisfies the stopping rule (see the sketch below). Although LSMB may terminate marginally sooner, the LSQR+LSMR implementation is somewhat simpler.
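As a sketch of the tandem idea: a standard convergence test for least-squares problems (the one LSQR uses) declares success when \(\|A^T r\| \le \mathrm{tol} \cdot \|A\| \, \|r\|\), and in tandem one stops as soon as either method's iterate passes the test. The function below is our illustration of that test, not code from the download:

function done = stop_rule(A, b, x, tol, normA)
% STOP_RULE  True if x passes an LSQR-style backward-error test.
% normA should be an estimate of ||A||, e.g. norm(A, 'fro').
r = b - A*x;                                   % current residual
done = norm(A' * r) <= tol * normA * norm(r);  % ||A'r|| <= tol*||A||*||r||
end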
Errata:
In Section 4.5, Step 4, the first line of the pseudocode for LSMB should read \(\rho_k = (\underline{\rho}_k^2 + \beta_{k+1}^2)^{1/2}\). Thanks to Anqi Fu for catching the error.