The constraint has the general inequality form:
lb <= fun(x) <= ub
Here the vector of independent variables x is passed as an ndarray of shape (n,) and fun returns a vector with m components.
It is possible to use equal bounds to represent an equality constraint or infinite bounds to represent a one-sided constraint.
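For example, a minimal sketch of the three forms; the constraint functions below are chosen only for illustration:

>>> import numpy as np
>>> from scipy.optimize import NonlinearConstraint
>>> circle = NonlinearConstraint(lambda x: x[0]**2 + x[1]**2, 0, 1)      # interval: 0 <= f(x) <= 1
>>> budget = NonlinearConstraint(lambda x: x[0] + x[1], 1, 1)            # equality: f(x) == 1
>>> one_sided = NonlinearConstraint(lambda x: x[0] * x[1], 0.5, np.inf)  # one-sided: f(x) >= 0.5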
Finite difference schemes {'2-point', '3-point', 'cs'} may be used for approximating either the Jacobian or the Hessian. However, they cannot be used to approximate both simultaneously: whenever the Jacobian is estimated via finite differences, the Hessian must be estimated using one of the quasi-Newton strategies.
The scheme 'cs' is potentially the most accurate, but it requires the function to correctly handle complex inputs and to be analytically continuable to the complex plane. The scheme '3-point' is more accurate than '2-point' but requires twice as many operations.
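For instance, a sketch (with an illustrative constraint function) that pairs a finite-difference Jacobian with the quasi-Newton BFGS strategy for the Hessian, as required above:

>>> import numpy as np
>>> from scipy.optimize import NonlinearConstraint, BFGS
>>> nlc = NonlinearConstraint(lambda x: x[0]**2 - x[1], -np.inf, 0,
...                           jac='3-point', hess=BFGS())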
The function defining the constraint. The signature is fun(x) -> array_like, shape (m,).
Lower and upper bounds on the constraint. Each array must have the shape (m,) or be a scalar; in the latter case the bound will be the same for all components of the constraint. Use np.inf with an appropriate sign to specify a one-sided constraint. Set components of lb and ub equal to represent an equality constraint. Note that you can mix constraints of different types (interval, one-sided or equality) by setting different components of lb and ub as necessary.
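As a sketch, a single three-component constraint mixing the three types; the constraint function itself is illustrative:

>>> import numpy as np
>>> from scipy.optimize import NonlinearConstraint
>>> def con(x):
...     return [x[0] + x[1], x[0] - x[1], x[0] * x[1]]
>>> lb = [0, -np.inf, 0.5]    # interval, one-sided, equality
>>> ub = [1, 2, 0.5]
>>> nlc = NonlinearConstraint(con, lb, ub)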
Method of computing the Jacobian matrix (an m-by-n matrix, where element (i, j) is the partial derivative of f[i] with respect to x[j]). The keywords {'2-point', '3-point', 'cs'} select a finite difference scheme for the numerical estimation. A callable must have the following signature: jac(x) -> {ndarray, sparse matrix}, shape (m, n). Default is '2-point'.
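For instance, a sketch of an analytical Jacobian callable for an illustrative two-component constraint in two variables:

>>> import numpy as np
>>> from scipy.optimize import NonlinearConstraint
>>> def fun(x):                                  # m = 2 components, n = 2 variables
...     return [x[0]**2 + x[1], np.sin(x[0]) * x[1]]
>>> def jac(x):                                  # element (i, j) = d fun[i] / d x[j]
...     return [[2 * x[0], 1],
...             [np.cos(x[0]) * x[1], np.sin(x[0])]]
>>> nlc = NonlinearConstraint(fun, -np.inf, 1, jac=jac)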
Method for computing the Hessian matrix. The keywords {'2-point', '3-point', 'cs'} select a finite difference scheme for numerical estimation. Alternatively, objects implementing the HessianUpdateStrategy interface can be used to approximate the Hessian; the currently available implementations are BFGS and SR1. A callable must return the Hessian matrix of dot(fun, v) and must have the following signature: hess(x, v) -> {LinearOperator, sparse matrix, array_like}, shape (n, n). Here v is an ndarray with shape (m,) containing the Lagrange multipliers.
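Continuing the illustrative fun and jac from the sketch above, a possible Hessian callable could look like this:

>>> def hess(x, v):                              # Hessian of v[0]*fun[0] + v[1]*fun[1]
...     h0 = np.array([[2.0, 0.0], [0.0, 0.0]])
...     h1 = np.array([[-np.sin(x[0]) * x[1], np.cos(x[0])],
...                    [np.cos(x[0]), 0.0]])
...     return v[0] * h0 + v[1] * h1
>>> nlc = NonlinearConstraint(fun, -np.inf, 1, jac=jac, hess=hess)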
Whether to keep the constraint components feasible throughout iterations. A single value sets this property for all components. Default is False. Has no effect for equality constraints.
Relative step size for the finite difference approximation. Default is None, which selects a reasonable value automatically depending on the finite difference scheme.
Defines the sparsity structure of the Jacobian matrix for finite difference estimation; its shape must be (m, n). If the Jacobian has only a few non-zero elements in each row, providing the sparsity structure will greatly speed up the computations. A zero entry means that the corresponding element in the Jacobian is identically zero. If provided, forces the use of the 'lsmr' trust-region solver. If None (default), dense differencing will be used.
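As a sketch, assuming an element-wise constraint so that its Jacobian is diagonal, the identity pattern can serve as the sparsity structure:

>>> import numpy as np
>>> from scipy.sparse import eye
>>> from scipy.optimize import NonlinearConstraint
>>> n = 100
>>> nlc = NonlinearConstraint(lambda x: x**2, -np.inf, 1,
...                           finite_diff_jac_sparsity=eye(n))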
Nonlinear constraint on the variables.
Constrain x[0] < sin(x[1]) + 1.9
>>> import numpy as np
>>> from scipy.optimize import NonlinearConstraint
>>> con = lambda x: x[0] - np.sin(x[1])
>>> nlc = NonlinearConstraint(con, -np.inf, 1.9)
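The resulting constraint object is then passed to minimize through its constraints argument; the objective, starting point and method below are illustrative only:

>>> from scipy.optimize import minimize
>>> res = minimize(lambda x: x[0]**2 + x[1]**2, x0=[2, 0],
...                method='trust-constr', constraints=[nlc])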
The following pages refer to this document explicitly or contain code examples using it.
scipy.optimize._differentialevolution.differential_evolution
scipy.optimize._minimize.minimize
scipy.optimize._constraints.NonlinearConstraint