scipy 1.8.0
approx_derivative(fun, x0, method='3-point', rel_step=None, abs_step=None, f0=None, bounds=(-inf, inf), sparsity=None, as_linear_operator=False, args=(), kwargs={})

Compute finite difference approximation of the derivatives of a vector-valued function.

If a function maps from R^n to R^m, its derivatives form an m-by-n matrix called the Jacobian, where element (i, j) is the partial derivative of f[i] with respect to x[j].
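As an illustrative sketch (not part of the original docstring), the approximation can be checked against a Jacobian known in closed form; for f(x) = (x0*x1, x0 + x1) the exact Jacobian is [[x1, x0], [1, 1]]:

>>> import numpy as np
>>> from scipy.optimize import approx_derivative
>>> f = lambda x: np.array([x[0] * x[1], x[0] + x[1]])
>>> J = approx_derivative(f, np.array([2.0, 3.0]))
>>> np.allclose(J, [[3.0, 2.0], [1.0, 1.0]])
True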

Notes

If rel_step is not provided, it is assigned as EPS**(1/s), where EPS is the machine epsilon of the smallest floating point dtype among x0 and fun(x0) (np.finfo(x0.dtype).eps), with s=2 for the '2-point' method and s=3 for the '3-point' method. Such a relative step approximately minimizes the sum of truncation and round-off errors. Relative steps are used by default; absolute steps are used only when abs_step is not None. If any absolute or relative step produces a difference indistinguishable from the original x0, i.e. (x0 + dx) - x0 == 0, then an automatic step size is substituted for that particular entry.
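For example (a sketch assuming float64 inputs), the automatically selected relative steps are:

>>> import numpy as np
>>> eps = np.finfo(np.float64).eps
>>> eps ** (1 / 2)  # default rel_step for '2-point', about 1.5e-8
1.4901161193847656e-08
>>> eps ** (1 / 3)  # default rel_step for '3-point', about 6.1e-6
6.055454452393343e-06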

A finite difference scheme for the '3-point' method is selected automatically. The well-known central difference scheme is used for points sufficiently far from the boundary, and a 3-point forward or backward scheme is used for points near the boundary. Both schemes have second-order accuracy in terms of the Taylor expansion. Refer to the references in the SciPy documentation for the formulas of the 3-point forward and backward difference schemes.
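A small sketch (an assumed example, not from the original docstring) showing that the one-sided 3-point scheme near a bound remains accurate:

>>> import numpy as np
>>> from scipy.optimize import approx_derivative
>>> def f(x):
...     return np.exp(x[0])
...
>>> # x0 lies on the upper bound, so a one-sided 3-point scheme is used.
>>> J = approx_derivative(f, np.array([1.0]), bounds=(-np.inf, 1.0))
>>> np.allclose(J, np.exp(1.0))
True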

For dense differencing, when m=1 the Jacobian is returned with shape (n,); on the other hand, when n=1 the Jacobian is returned with shape (m, 1). The motivation is the following: a) it handles the case of gradient computation (m=1) in a conventional way; b) it clearly separates these two different cases; c) in all cases np.atleast_2d can be called to get a 2-D Jacobian with correct dimensions.
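The shape conventions can be sketched as follows (illustrative, not from the original docstring):

>>> import numpy as np
>>> from scipy.optimize import approx_derivative
>>> # m=1: scalar-valued function of two variables -> gradient with shape (2,)
>>> approx_derivative(lambda x: x[0] * x[1], np.array([1.0, 2.0])).shape
(2,)
>>> # n=1: vector-valued function of one variable -> Jacobian with shape (2, 1)
>>> approx_derivative(lambda x: np.array([x[0], x[0] ** 2]), np.array([3.0])).shape
(2, 1)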

Parameters

fun : callable

Function of which to estimate the derivatives. The argument x passed to this function is an ndarray of shape (n,) (never a scalar even if n=1). It must return a 1-D array_like of shape (m,) or a scalar.

x0 : array_like of shape (n,) or float

Point at which to estimate the derivatives. A float will be converted to a 1-D array.

method : {'3-point', '2-point', 'cs'}, optional

Finite difference method to use:

  • '2-point' - use the first order accuracy forward or backward difference.
  • '3-point' - use central difference in interior points and the second order accuracy forward or backward difference near the boundary.
  • 'cs' - use a complex-step finite difference scheme. This assumes that the user function is real-valued and can be analytically continued to the complex plane. Otherwise, produces bogus results.

The three methods are compared in the sketch below.
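The following sketch (an illustration added here, with np.sin chosen because it extends analytically to the complex plane as 'cs' requires) compares the three methods against the exact derivative:

>>> import numpy as np
>>> from scipy.optimize import approx_derivative
>>> def f(x):
...     return np.sin(x[0])
...
>>> x0 = np.array([1.0])
>>> for method in ['2-point', '3-point', 'cs']:
...     err = abs(approx_derivative(f, x0, method=method)[0] - np.cos(1.0))
...     print(method, err < 1e-6)
2-point True
3-point True
cs True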

rel_step : None or array_like, optional

Relative step size to use. If None (default) the absolute step size is computed as h = rel_step * sign(x0) * max(1, abs(x0)), with rel_step being selected automatically, see Notes. Otherwise h = rel_step * sign(x0) * abs(x0). For method='3-point' the sign of h is ignored. The calculated step size is possibly adjusted to fit into the bounds.

abs_step : array_like, optional

Absolute step size to use, possibly adjusted to fit into the bounds. For method='3-point' the sign of abs_step is ignored. By default relative steps are used; absolute steps are used only if abs_step is not None.
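As a sketch (illustrative values, not from the original docstring) of how the two step specifications behave for x0 = 3.0:

>>> import numpy as np
>>> from scipy.optimize import approx_derivative
>>> f = lambda x: np.array([x[0] ** 2])   # exact derivative: 2 * x0 = 6
>>> x0 = np.array([3.0])
>>> # relative step: h = 1e-6 * sign(3.0) * abs(3.0) = 3e-6
>>> np.allclose(approx_derivative(f, x0, rel_step=1e-6), 6.0)
True
>>> # absolute step: h = 1e-6 independently of x0
>>> np.allclose(approx_derivative(f, x0, abs_step=1e-6), 6.0)
True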

f0 : None or array_like, optional

If not None, it is assumed to be equal to fun(x0); in this case fun(x0) is not called again. Default is None.

bounds : tuple of array_like, optional

Lower and upper bounds on independent variables. Defaults to no bounds. Each bound must match the size of x0 or be a scalar; in the latter case the bound will be the same for all variables. Use it to limit the range of function evaluation. Bounds checking is not implemented when as_linear_operator is True.

sparsity : {None, array_like, sparse matrix, 2-tuple}, optional

Defines a sparsity structure of the Jacobian matrix. If the Jacobian matrix is known to have only a few non-zero elements in each row, then it is possible to estimate several of its columns by a single function evaluation. To perform such economic computations two ingredients are required:

  • structure - array_like or sparse matrix of shape (m, n). A zero element means that the corresponding element of the Jacobian is identically zero.
  • groups - array_like of shape (n,). A column grouping for the given sparsity structure; use group_columns to obtain it.

A single array or a sparse matrix is interpreted as a sparsity structure, in which case a coloring algorithm is applied to determine suitable groups.
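A minimal sketch of sparse differencing for a diagonal Jacobian, assuming group_columns is imported from the same private module scipy.optimize._numdiff documented here:

>>> import numpy as np
>>> from scipy.optimize._numdiff import approx_derivative, group_columns
>>> n = 5
>>> structure = np.eye(n)               # f[i] depends only on x[i]
>>> groups = group_columns(structure)   # all columns fit into a single group
>>> x0 = np.arange(1.0, n + 1)
>>> J = approx_derivative(lambda x: x ** 2, x0, sparsity=(structure, groups))
>>> np.allclose(J.toarray(), np.diag(2 * x0))   # J is returned as a csr_matrix
True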

as_linear_operator : bool, optional

When True the function returns a scipy.sparse.linalg.LinearOperator. Otherwise it returns a dense array or a sparse matrix depending on sparsity. The linear operator provides an efficient way of computing J.dot(p) for any vector p of shape (n,), but does not allow direct access to individual elements of the matrix. By default as_linear_operator is False.
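A short sketch (illustrative, with a linear function so the products are exact up to rounding) of the linear operator usage:

>>> import numpy as np
>>> from scipy.optimize import approx_derivative
>>> f = lambda x: np.array([x[0] + 2 * x[1], 3 * x[0]])
>>> J = approx_derivative(f, np.zeros(2), as_linear_operator=True)
>>> J.shape
(2, 2)
>>> np.allclose(J.dot(np.array([1.0, 1.0])), [3.0, 3.0])  # J @ p without forming J
True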

args, kwargs : tuple and dict, optional

Additional arguments passed to fun. Both empty by default. The calling signature is fun(x, *args, **kwargs).

Returns

J : {ndarray, sparse matrix, LinearOperator}

Finite difference approximation of the Jacobian matrix. If as_linear_operator is True, a LinearOperator with shape (m, n) is returned. Otherwise a dense array or a sparse matrix is returned depending on how sparsity is defined: if sparsity is None, an ndarray with shape (m, n) is returned; if sparsity is not None, a csr_matrix with shape (m, n) is returned. Sparse matrices and linear operators are always returned as 2-D structures; for ndarrays, if m=1 the result is returned as a 1-D gradient array with shape (n,).


See Also

check_derivative

Check correctness of a function computing derivatives.

Examples

>>> import numpy as np
>>> from scipy.optimize import approx_derivative
>>>
>>> def f(x, c1, c2):
...     return np.array([x[0] * np.sin(c1 * x[1]),
...                      x[0] * np.cos(c2 * x[1])])
...
>>> x0 = np.array([1.0, 0.5 * np.pi])
>>> approx_derivative(f, x0, args=(1, 2))
array([[ 1.,  0.],
       [-1.,  0.]])

Bounds can be used to limit the region of function evaluation. In the example below we compute the left and right derivatives at the point 1.0.

>>> def g(x):
...     return x**2 if x >= 1 else x
...
>>> x0 = 1.0
>>> approx_derivative(g, x0, bounds=(-np.inf, 1.0))
array([ 1.])
>>> approx_derivative(g, x0, bounds=(1.0, np.inf))
array([ 2.])

Back References

The following pages refer to this document either explicitly or contain code examples using it.

scipy.optimize._numdiff.approx_derivative
scipy.optimize._numdiff.check_derivative



GitHub : /scipy/optimize/_numdiff.py#275