numpy.shares_memory (numpy 1.22.4)
shares_memory(a, b, /, max_work=None)
Warning

This function can be exponentially slow for some inputs, unless max_work is set to a finite number or MAY_SHARE_BOUNDS. If in doubt, use numpy.may_share_memory instead.

Parameters

a, b : ndarray

Input arrays

max_work : int, optional

Effort to spend on solving the overlap problem (maximum number of candidate solutions to consider). The following special values are recognized:

max_work=MAY_SHARE_EXACT (default)

The problem is solved exactly. In this case, the function returns True only if there is an element shared between the arrays. Finding the exact solution may take an extremely long time in some cases.

max_work=MAY_SHARE_BOUNDS

Only the memory bounds of a and b are checked.
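For illustration, a minimal sketch (assuming the module-level constants numpy.MAY_SHARE_EXACT and numpy.MAY_SHARE_BOUNDS; the outputs shown are expected but not verified here): two interleaved views of one buffer share no element, so the exact check answers False while the bounds-only check conservatively answers True.

>>> import numpy as np
>>> x = np.arange(8)
>>> a, b = x[::2], x[1::2]  # interleaved views, no element in common
>>> np.shares_memory(a, b, max_work=np.MAY_SHARE_EXACT)
False
>>> np.shares_memory(a, b, max_work=np.MAY_SHARE_BOUNDS)
True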

Raises

numpy.TooHardError

Exceeded max_work.
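A common handling pattern, shown as a sketch only (shares_memory_capped is a hypothetical helper, not part of NumPy): cap the exact solver with max_work and fall back to the conservative bounds-only answer when the budget is exceeded.

>>> import numpy as np
>>> def shares_memory_capped(a, b, budget=1000):
...     # Hypothetical helper: try the exact check within a work budget,
...     # then fall back to the conservative bounds-only check.
...     try:
...         return np.shares_memory(a, b, max_work=budget)
...     except np.TooHardError:
...         return np.may_share_memory(a, b)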

Returns

out : bool

True if the arrays share memory, False otherwise.

See Also

may_share_memory

Examples

>>> import numpy as np
>>> x = np.array([1, 2, 3, 4])
>>> np.shares_memory(x, np.array([5, 6, 7]))
False
>>> np.shares_memory(x[::2], x)
True
>>> np.shares_memory(x[::2], x[1::2])
False

Checking whether two arrays share memory is NP-complete, and runtime may increase exponentially in the number of dimensions. Hence, max_work should generally be set to a finite number, as it is possible to construct examples that take an extremely long time to run:

>>> from numpy.lib.stride_tricks import as_strided
>>> x = np.zeros([192163377], dtype=np.int8)
>>> x1 = as_strided(x, strides=(36674, 61119, 85569), shape=(1049, 1049, 1049))
>>> x2 = as_strided(x[64023025:], strides=(12223, 12224, 1), shape=(1049, 1049, 1))
>>> np.shares_memory(x1, x2, max_work=1000)
Traceback (most recent call last):
...
numpy.TooHardError: Exceeded max_work

Running np.shares_memory(x1, x2) without max_work set takes around 1 minute for this case. It is possible to find problems that take significantly longer still.
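For the same x1 and x2, a bounds-only query returns immediately. This is a sketch: the True results below are expected because the byte ranges of the two views overlap, even though an exact solve is needed to decide whether any element is actually shared.

>>> np.shares_memory(x1, x2, max_work=np.MAY_SHARE_BOUNDS)
True
>>> np.may_share_memory(x1, x2)
True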
