_generic_edge_filter(image, *, smooth_weights, edge_weights=[1, 0, -1], axis=None, mode='reflect', cval=0.0, mask=None)
The filter is computed by applying the edge weights along one dimension and the smoothing weights along all other dimensions. If no axis is given, or if a tuple of axes is given, the filter is computed along each of these axes in turn, and the magnitude is computed as the square root of the average squared magnitude across those axes.
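A minimal sketch of the single-axis computation, assuming scipy.ndimage.convolve1d semantics; it is an illustration only, not the actual scikit-image implementation (which additionally handles masking, output dtype, and the magnitude combination):

import numpy as np
from scipy import ndimage as ndi

def edge_filter_sketch(image, smooth_weights, edge_weights=(1, 0, -1),
                       axis=0, mode='reflect', cval=0.0):
    # Apply the edge weights along `axis` and the smoothing weights
    # along every other dimension, one 1-D convolution per dimension.
    output = np.asarray(image, dtype=float)
    for dim in range(output.ndim):
        weights = edge_weights if dim == axis else smooth_weights
        output = ndi.convolve1d(output, np.asarray(weights, dtype=float),
                                axis=dim, mode=mode, cval=cval)
    return output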
The input image.
The smoothing weights for the filter. These are applied to dimensions orthogonal to the edge axis.
The weights to compute the edge along the chosen axes.
Compute the edge filter along this axis. If not provided, the edge magnitude is computed. This is defined as:
edge_mag = np.sqrt(sum([_generic_edge_filter(image, ..., axis=i)**2 for i in range(image.ndim)]) / image.ndim)
The magnitude is also computed if axis is a sequence; a usage sketch follows the summary at the end of this section.
The boundary mode for the convolution. See scipy.ndimage.convolve
for a description of the modes. This can be either a single boundary mode or one boundary mode per axis.
When mode is 'constant', this is the constant value used outside the boundary of the image data.
Apply a generic, n-dimensional edge filter.
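The following usage sketch illustrates the axis and magnitude behavior described above and the role of smooth_weights. It is hedged: _generic_edge_filter is a private helper, its import path (skimage.filters.edges) is an assumption that may change between releases, and the kernel values shown are illustrative rather than the ones scikit-image uses internally; user code should prefer the public skimage.filters.sobel and related functions.

import numpy as np
from skimage.filters.edges import _generic_edge_filter  # private helper; import path is an assumption

image = np.random.default_rng(0).random((16, 16))
smooth = np.array([1, 2, 1]) / 4  # illustrative Sobel-like smoothing weights

# Directional responses along each axis, then the combined magnitude.
per_axis = [_generic_edge_filter(image, smooth_weights=smooth, axis=i)
            for i in range(image.ndim)]
magnitude = _generic_edge_filter(image, smooth_weights=smooth, axis=None)

# Per the formula above, axis=None should equal the root-mean-square
# of the per-axis responses.
np.testing.assert_allclose(magnitude,
                           np.sqrt(sum(a**2 for a in per_axis) / image.ndim))

# Swapping in Scharr-like smoothing weights gives a Scharr-style filter.
scharr_like = _generic_edge_filter(image, smooth_weights=np.array([3, 10, 3]) / 16)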