lsmr(A, b, damp=0.0, atol=1e-06, btol=1e-06, conlim=100000000.0, maxiter=None, show=False, x0=None)
Iterative solver for least-squares problems.

lsmr solves the system of linear equations Ax = b. If the system is inconsistent, it solves the least-squares problem min ||b - Ax||_2. A is a rectangular matrix of dimension m-by-n, where all cases are allowed: m = n, m > n, or m < n. b is a vector of length m. The matrix A may be dense or sparse (usually sparse).
A : Matrix A in the linear system. Alternatively, A can be a linear operator which can produce Ax and A^H x using, e.g., scipy.sparse.linalg.LinearOperator.
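For illustration, here is a hedged sketch (not part of the original docstring) of passing A as a LinearOperator built from matvec/rmatvec callbacks; the matrix M, the callback names, and the expected solution are assumptions chosen for this example.

>>> import numpy as np
>>> from scipy.sparse.linalg import LinearOperator, lsmr
>>> M = np.array([[1., 0.], [1., 1.], [0., 1.]])
>>> def matvec(v):
...     return M @ v                       # computes A x
...
>>> def rmatvec(v):
...     return M.T @ v                     # computes A^H x (A is real, so the transpose suffices)
...
>>> A_op = LinearOperator((3, 2), matvec=matvec, rmatvec=rmatvec)
>>> b = np.array([1., 0., -1.])
>>> x = lsmr(A_op, b)[0]                   # expected to be close to [1., -1.] for this data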
b : Vector b in the linear system.
damp : Damping factor for regularized least-squares. lsmr solves the regularized least-squares problem

    min ||[b; 0] - [A; damp*I] x||_2

where damp is a scalar. If damp is None or 0, the system is solved without regularization. Default is 0.
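As a hedged illustration (added here, not in the original docstring), the damped solve can be cross-checked against an ordinary least-squares solve of the stacked system [A; damp*I] x ~= [b; 0]; the matrix, right-hand side, and damp value below are arbitrary choices.

>>> import numpy as np
>>> from scipy.sparse.linalg import lsmr
>>> A = np.array([[1., 0.], [1., 1.], [0., 1.]])
>>> b = np.array([1., 0., -1.])
>>> d = 0.5
>>> x_damped = lsmr(A, b, damp=d)[0]
>>> A_aug = np.vstack([A, d * np.eye(2)])             # stack damp*I below A
>>> b_aug = np.concatenate([b, np.zeros(2)])          # pad b with zeros
>>> x_ref = np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]
>>> agree = np.allclose(x_damped, x_ref, atol=1e-6)   # expected to be True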
atol, btol : Stopping tolerances. lsmr continues iterations until a certain backward error estimate is smaller than some quantity depending on atol and btol. Let r = b - Ax be the residual vector for the current approximate solution x. If Ax = b seems to be consistent, lsmr terminates when norm(r) <= atol * norm(A) * norm(x) + btol * norm(b). Otherwise, lsmr terminates when norm(A^H r) <= atol * norm(A) * norm(r). If both tolerances are 1.0e-6 (default), the final norm(r) should be accurate to about 6 digits. (The final x will usually have fewer correct digits, depending on cond(A) and the size of the damping parameter.) If atol or btol is None, a default value of 1.0e-6 will be used. Ideally, they should be estimates of the relative error in the entries of A and b respectively. For example, if the entries of A have 7 correct digits, set atol = 1e-7. This prevents the algorithm from doing unnecessary work beyond the uncertainty of the input data.
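A hedged sketch (added for illustration) of checking the consistent-system criterion above after a solve; the exact norms computed with NumPy stand in for the estimates lsmr maintains internally, and the test problem is an arbitrary choice.

>>> import numpy as np
>>> from scipy.sparse.linalg import lsmr
>>> A = np.array([[1., 0.], [1., 1.], [0., 1.]])
>>> b = A @ np.array([2., -3.])                    # consistent right-hand side
>>> atol = btol = 1e-10
>>> x, istop, itn, normr = lsmr(A, b, atol=atol, btol=btol)[:4]
>>> r = b - A @ x
>>> bound = atol * np.linalg.norm(A) * np.linalg.norm(x) + btol * np.linalg.norm(b)
>>> within = np.linalg.norm(r) <= bound            # expected to be True for this tiny system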
conlim : lsmr terminates if an estimate of cond(A) exceeds conlim. For compatible systems Ax = b, conlim could be as large as 1.0e+12 (say). For least-squares problems, conlim should be less than 1.0e+8. If conlim is None, the default value is 1e+8. Maximum precision can be obtained by setting atol = btol = conlim = 0, but the number of iterations may then be excessive. Default is 1e8.
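A hedged sketch of the maximum-precision setting mentioned above; the matrix and right-hand side are arbitrary, and the precise istop code and iteration count depend on the problem.

>>> import numpy as np
>>> from scipy.sparse.linalg import lsmr
>>> A = np.array([[1., 0.], [1., 1.], [0., 1.]])
>>> b = np.array([1., 0.01, -1.])
>>> x, istop, itn, normr = lsmr(A, b, atol=0, btol=0, conlim=0)[:4]

With the tolerance tests disabled, a successful run typically reports istop = 4 or 5, the machine-precision variants of codes 1 and 2 listed below, usually at the cost of more iterations.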
maxiter : lsmr terminates if the number of iterations reaches maxiter. The default is maxiter = min(m, n). For ill-conditioned systems, a larger value of maxiter may be needed. Default is None, which is interpreted as min(m, n).
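As a hedged sketch (not part of the docstring), capping the iteration count trades accuracy for time; if the cap is reached before the tolerance tests are satisfied, istop is 7 and x holds the current iterate. The random problem below is an arbitrary choice.

>>> import numpy as np
>>> from scipy.sparse.linalg import lsmr
>>> rng = np.random.default_rng(0)
>>> A = rng.standard_normal((100, 50))
>>> b = rng.standard_normal(100)
>>> x, istop, itn = lsmr(A, b, maxiter=3)[:3]      # istop == 7 if 3 iterations were not enough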
show : Print iteration logs if show = True. Default is False.
x0 : Initial guess of x; if None, zeros are used. Default is None.
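A hedged usage sketch of warm-starting from an earlier solution when the right-hand side changes only slightly; the perturbation size is arbitrary and the iteration savings are not guaranteed.

>>> import numpy as np
>>> from scipy.sparse.linalg import lsmr
>>> A = np.array([[1., 0.], [1., 1.], [0., 1.]])
>>> b1 = np.array([1., 0., -1.])
>>> b2 = b1 + 1e-3 * np.ones(3)                    # slightly perturbed right-hand side
>>> x1 = lsmr(A, b1)[0]
>>> x2, istop, itn = lsmr(A, b2, x0=x1)[:3]        # warm start from the previous solution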
x : Least-squares solution returned.
istop : istop gives the reason for stopping:

istop = 0 means x = 0 is a solution. If x0 was given, then x = x0 is a solution.
      = 1 means x is an approximate solution to A @ x = b, according to atol and btol.
      = 2 means x approximately solves the least-squares problem according to atol.
      = 3 means COND(A) seems to be greater than CONLIM.
      = 4 is the same as 1 with atol = btol = eps (machine precision).
      = 5 is the same as 2 with atol = eps.
      = 6 is the same as 3 with CONLIM = 1/eps.
      = 7 means ITN reached maxiter before the other stopping conditions were satisfied.
itn : Number of iterations used.
normr : norm(b - Ax)
normar : norm(A^H (b - Ax))
norma : norm(A)
conda : Condition number of A.
normx : norm(x)
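For reference, a hedged sketch unpacking the full return tuple in the order listed above; the test problem is an arbitrary choice.

>>> import numpy as np
>>> from scipy.sparse.linalg import lsmr
>>> A = np.array([[1., 0.], [1., 1.], [0., 1.]])
>>> b = np.array([1., 0., -1.])
>>> x, istop, itn, normr, normar, norma, conda, normx = lsmr(A, b)

Here normr and normar are the residual norms norm(b - Ax) and norm(A^H (b - Ax)), while norma, conda, and normx are the solver's estimates of norm(A), cond(A), and norm(x).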
>>> import numpy as np
>>> from scipy.sparse import csc_matrix
>>> from scipy.sparse.linalg import lsmr
>>> A = csc_matrix([[1., 0.], [1., 1.], [0., 1.]], dtype=float)

The first example has the trivial solution [0, 0]:

>>> b = np.array([0., 0., 0.], dtype=float)
>>> x, istop, itn, normr = lsmr(A, b)[:4]
>>> istop
0
>>> x
array([0., 0.])
The stopping code istop=0 returned indicates that a vector of zeros was found as a solution. The returned solution x indeed contains [0., 0.]. The next example has a non-trivial solution:
>>> b = np.array([1., 0., -1.], dtype=float)
>>> x, istop, itn, normr = lsmr(A, b)[:4]
>>> istop
1
>>> x
array([ 1., -1.])
>>> itn
1
>>> normr
4.440892098500627e-16
As indicated by istop=1, lsmr found a solution obeying the tolerance limits. The given solution [1., -1.] obviously solves the equation. The remaining return values include information about the number of iterations (itn=1) and the remaining difference between the left- and right-hand sides of the solved equation. The final example demonstrates the behavior in the case where there is no solution for the equation:
>>> b = np.array([1., 0.01, -1.], dtype=float)
>>> x, istop, itn, normr = lsmr(A, b)[:4]
>>> istop
2
>>> x
array([ 1.00333333, -0.99666667])
>>> A.dot(x)-b
array([ 0.00333333, -0.00333333,  0.00333333])
>>> normr
0.005773502691896255
istop=2 indicates that the system is inconsistent and thus x is rather an approximate solution to the corresponding least-squares problem. normr contains the minimal distance that was found.
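As a hedged follow-up to this last example (not part of the original docstring), the least-squares solution found by lsmr can be cross-checked against NumPy's dense solver, reusing A, b, and x from above:

>>> x_ref = np.linalg.lstsq(A.toarray(), b, rcond=None)[0]
>>> agree = np.allclose(x, x_ref, atol=1e-6)       # expected to be True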