gradient(f, *varargs, **kwargs)
This docstring was copied from numpy.gradient.
Some inconsistencies with the Dask version may exist.
The gradient is computed using second order accurate central differences in the interior points and either first or second order accurate one-sided (forward or backward) differences at the boundaries. The returned gradient hence has the same shape as the input array.
Assuming that $f\in C^{3}$ (i.e., $f$ has at least 3 continuous derivatives), and letting $h_{*}$ be a non-homogeneous stepsize, we minimize the "consistency error" $\eta_{i}$ between the true gradient and its estimate from a linear combination of the neighboring grid-points:
$$\eta_{i} = f_{i}^{\left(1\right)} - \left[ \alpha f\left(x_{i}\right) + \beta f\left(x_{i} + h_{d}\right) + \gamma f\left(x_{i}-h_{s}\right) \right]$$

By substituting $f(x_{i} + h_{d})$ and $f(x_{i} - h_{s})$ with their Taylor series expansion, this translates into solving the following linear system:
$$\left\{ \begin{array}{r} \alpha+\beta+\gamma=0 \\ \beta h_{d}-\gamma h_{s}=1 \\ \beta h_{d}^{2}+\gamma h_{s}^{2}=0 \end{array} \right.$$

The resulting approximation of $f_{i}^{(1)}$ is the following:
$$\hat f_{i}^{(1)} = \frac{ h_{s}^{2}f\left(x_{i} + h_{d}\right) + \left(h_{d}^{2} - h_{s}^{2}\right)f\left(x_{i}\right) - h_{d}^{2}f\left(x_{i}-h_{s}\right)} { h_{s}h_{d}\left(h_{d} + h_{s}\right)} + \mathcal{O}\left(\frac{h_{d}h_{s}^{2} + h_{s}h_{d}^{2}}{h_{d} + h_{s}}\right)$$

It is worth noting that if $h_{s}=h_{d}$ (i.e., data are evenly spaced) we find the standard second order approximation:
$$\hat f_{i}^{(1)}= \frac{f\left(x_{i+1}\right) - f\left(x_{i-1}\right)}{2h} + \mathcal{O}\left(h^{2}\right)$$

With a similar procedure the forward/backward approximations used for boundaries can be derived.
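The three-point formula above can be checked numerically against `np.gradient`, which uses it at interior points. A small sketch (the grid and the choice $f(x)=x^{2}$ are arbitrary; the formula is exact for quadratics):

```python
import numpy as np

# Non-uniform grid and samples of f(x) = x**2 (arbitrary illustration)
x = np.array([0.0, 1.0, 1.5, 3.5, 4.0, 6.0])
f = x**2

# Apply the interior formula at point i:
#   f_i' ≈ [h_s² f(x_i + h_d) + (h_d² − h_s²) f(x_i) − h_d² f(x_i − h_s)]
#          / (h_s · h_d · (h_d + h_s))
i = 2
h_s = x[i] - x[i - 1]   # backward step
h_d = x[i + 1] - x[i]   # forward step
manual = (h_s**2 * f[i + 1] + (h_d**2 - h_s**2) * f[i] - h_d**2 * f[i - 1]) / (
    h_s * h_d * (h_d + h_s)
)

# np.gradient applies the same formula in the interior; for f(x) = x**2
# the result equals the exact derivative 2 * x[i] = 3.0
assert np.isclose(manual, np.gradient(f, x)[i])
```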
Parameters
----------
f : array_like
    An N-dimensional array containing samples of a scalar function.
varargs : list of scalar or array, optional
    Spacing between f values. Default unitary spacing for all dimensions. Spacing can be specified using:

    1. A single scalar to specify a sample distance for all dimensions.
    2. N scalars to specify a constant sample distance for each dimension, i.e. `dx`, `dy`, `dz`, ...
    3. N arrays to specify the coordinates of the values along each dimension of f. The length of each array must match the size of the corresponding dimension.
    4. Any combination of N scalars/arrays with the meaning of 2. and 3.

    If `axis` is given, the number of varargs must equal the number of axes. Default: 1.
edge_order : {1, 2}, optional
    The gradient is calculated using N-th order accurate differences at the boundaries. Default: 1.
axis : None or int or tuple of ints, optional
    The gradient is calculated only along the given axis or axes. The default (axis = None) is to calculate the gradient for all the axes of the input array. axis may be negative, in which case it counts from the last to the first axis.

Returns
-------
gradient : ndarray or list of ndarray
    A list of ndarrays (or a single ndarray if there is only one dimension) corresponding to the derivatives of f with respect to each dimension. Each derivative has the same shape as f.
Return the gradient of an N-dimensional array.
>>> f = np.array([1, 2, 4, 7, 11, 16], dtype=float)  # doctest: +SKIP
>>> np.gradient(f)  # doctest: +SKIP
array([1. , 1.5, 2.5, 3.5, 4.5, 5. ])
>>> np.gradient(f, 2)  # doctest: +SKIP
array([0.5 , 0.75, 1.25, 1.75, 2.25, 2.5 ])
Spacing can also be specified with an array that represents the coordinates of the values of f along the dimensions. For instance a uniform spacing:
>>> x = np.arange(f.size)  # doctest: +SKIP
>>> np.gradient(f, x)  # doctest: +SKIP
array([1. , 1.5, 2.5, 3.5, 4.5, 5. ])
Or a non-uniform one:
>>> x = np.array([0., 1., 1.5, 3.5, 4., 6.], dtype=float)  # doctest: +SKIP
>>> np.gradient(f, x)  # doctest: +SKIP
array([1. , 3. , 3.5, 6.7, 6.9, 2.5])
For two-dimensional arrays, the return value is two arrays ordered by axis. In this example the first array corresponds to the gradient along the rows and the second to the gradient along the columns:
>>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=float))  # doctest: +SKIP
[array([[ 2.,  2., -1.],
       [ 2.,  2., -1.]]), array([[1. , 2.5, 4. ],
       [1. , 1. , 1. ]])]
In this example the spacing is also specified: uniform for axis=0 and non-uniform for axis=1:
>>> dx = 2.  # doctest: +SKIP
>>> y = [1., 1.5, 3.5]  # doctest: +SKIP
>>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=float), dx, y)  # doctest: +SKIP
[array([[ 1. ,  1. , -0.5],
       [ 1. ,  1. , -0.5]]), array([[2. , 2. , 2. ],
       [2. , 1.7, 0.5]])]
It is possible to specify how boundaries are treated using `edge_order`:
>>> x = np.array([0, 1, 2, 3, 4])  # doctest: +SKIP
>>> f = x**2  # doctest: +SKIP
>>> np.gradient(f, edge_order=1)  # doctest: +SKIP
array([1., 2., 4., 6., 7.])
>>> np.gradient(f, edge_order=2)  # doctest: +SKIP
array([0., 2., 4., 6., 8.])
The `axis` keyword can be used to specify a subset of axes along which the gradient is calculated:

>>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=float), axis=0)  # doctest: +SKIP
array([[ 2.,  2., -1.],
       [ 2.,  2., -1.]])
The following pages refer to this document either explicitly or contain code examples using it:
dask.array.routines.ediff1d
dask.array.routines.diff