lobpcg(A, X, B=None, M=None, Y=None, tol=None, maxiter=None, largest=True, verbosityLevel=0, retLambdaHistory=False, retResidualNormsHistory=False)
LOBPCG is a preconditioned eigensolver for large symmetric positive definite (SPD) generalized eigenproblems.
If both retLambdaHistory and retResidualNormsHistory are True, the return tuple has the format (lambda, V, lambda history, residual norms history).
In the following, n denotes the matrix size and m the number of required eigenvalues (smallest or largest).

The LOBPCG code internally solves eigenproblems of size 3m on every iteration by calling the "standard" dense eigensolver, so if m is not small enough compared to n, it does not make sense to call the LOBPCG code; one should instead use a "standard" eigensolver, e.g. a numpy or scipy function. If the LOBPCG algorithm is called with 5m > n, it will most likely break internally, so the code tries to call the standard function instead.

It is not that n should be large for LOBPCG to work, but rather that the ratio n / m should be large. If you call LOBPCG with m=1 and n=10, it works even though n is small. The method is intended for extremely large n / m.
The convergence speed depends basically on two factors:

1. How well the sought eigenvalues are separated from the rest of the eigenvalues. One can try to vary m to improve this separation.

2. How well conditioned the problem is. This can be changed by using proper preconditioning. For example, a rod vibration test problem (under the tests directory) is ill-conditioned for large n, so convergence will be slow unless efficient preconditioning is used. For this specific problem, a good simple preconditioner function would be a linear solve for A, which is easy to code since A is tridiagonal.
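A sketch of that preconditioning idea, assuming a 1-D Laplacian as a stand-in for the tridiagonal rod matrix and using a sparse LU factorization (scipy.sparse.linalg.splu) to provide the linear solve:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg, LinearOperator, splu

# Illustrative tridiagonal matrix: 1-D Laplacian, ill-conditioned for large n
n = 500
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Preconditioner: a linear solve with A itself; cheap because A is tridiagonal
lu = splu(A)
M = LinearOperator((n, n), matvec=lu.solve, matmat=lu.solve, dtype=np.float64)

rng = np.random.default_rng(1)
X = rng.random((n, 3))

w, v = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=100)
```

With this preconditioner the iteration converges in a handful of steps, whereas the unpreconditioned problem stalls as n grows.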
A : The symmetric linear operator of the problem, usually a sparse matrix. Often called the "stiffness matrix".

X : Initial approximation to the k eigenvectors (non-sparse). If A has shape=(n,n), then X should have shape=(n,k).
B : The right hand side operator in a generalized eigenproblem. By default, B = Identity. Often called the "mass matrix".

M : Preconditioner to A; by default M = Identity. M should approximate the inverse of A.
Y : n-by-sizeY matrix of constraints (non-sparse), with sizeY < n. The iterations will be performed in the B-orthogonal complement of the column space of Y. Y must be full rank.
tol : Solver tolerance (stopping criterion). The default is tol=n*sqrt(eps).

maxiter : Maximum number of iterations. The default is maxiter=20.

largest : When True, solve for the largest eigenvalues, otherwise the smallest.

verbosityLevel : Controls solver output. The default is verbosityLevel=0.

retLambdaHistory : Whether to return eigenvalue history. Default is False.

retResidualNormsHistory : Whether to return history of residual norms. Default is False.
Array of k eigenvalues.

The eigenvalue history, if retLambdaHistory is True.

The history of residual norms, if retResidualNormsHistory is True.
Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG)
Solve A x = lambda x
with constraints and preconditioning.
>>> import numpy as np
>>> from scipy.sparse import spdiags, issparse
>>> from scipy.sparse.linalg import lobpcg, LinearOperator
>>> n = 100
>>> vals = np.arange(1, n + 1)
>>> A = spdiags(vals, 0, n, n)
>>> A.toarray()
array([[  1.,   0.,   0., ...,   0.,   0.,   0.],
       [  0.,   2.,   0., ...,   0.,   0.,   0.],
       [  0.,   0.,   3., ...,   0.,   0.,   0.],
       ...,
       [  0.,   0.,   0., ...,  98.,   0.,   0.],
       [  0.,   0.,   0., ...,   0.,  99.,   0.],
       [  0.,   0.,   0., ...,   0.,   0., 100.]])
Constraints:
>>> Y = np.eye(n, 3)
Initial guess for eigenvectors, should have linearly independent columns. Column dimension = number of requested eigenvalues.
>>> rng = np.random.default_rng()
>>> X = rng.random((n, 3))
The preconditioner is the inverse of A in this example:
>>> invA = spdiags([1./vals], 0, n, n)
The preconditioner must be defined by a function:

>>> def precond(x):
...     return invA @ x
The argument x of the preconditioner function is a matrix inside lobpcg
, thus the use of matrix-matrix product @
.
The preconditioner function is passed to lobpcg as a LinearOperator
:
>>> M = LinearOperator(matvec=precond, matmat=precond,
... shape=(n, n), dtype=np.float64)
Let us now solve the eigenvalue problem for the matrix A:

>>> eigenvalues, _ = lobpcg(A, X, Y=Y, M=M, largest=False)
>>> eigenvalues
array([4., 5., 6.])
Note that the vectors passed in Y are the eigenvectors of the 3 smallest eigenvalues. The results returned are orthogonal to those.
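The example above uses the default B = Identity. A minimal sketch of a genuinely generalized problem A x = lambda B x, with a hypothetical diagonal mass matrix (so the eigenvalues are known to be k/2):

```python
import numpy as np
from scipy.sparse import spdiags
from scipy.sparse.linalg import lobpcg

n = 100
# A has eigenvalues 1..n; B = 2*I, so A x = lambda B x has eigenvalues k/2
A = spdiags(np.arange(1, n + 1).astype(np.float64), 0, n, n)
B = spdiags(np.full(n, 2.0), 0, n, n)

rng = np.random.default_rng(2)
X = rng.random((n, 3))

w, v = lobpcg(A, X, B=B, largest=False, tol=1e-6, maxiter=100)
```

The three smallest generalized eigenvalues are 0.5, 1.0, and 1.5.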